
Vulnerability Disclosure: Stealing Emails via Prompt Injections

With AI assistance features appearing in an increasing number of products, we have begun to focus part of our research on refining our internal detection and testing guidelines for LLMs by taking a brief look at the new AI integrations we discover.

Alongside the rise of applications with LLM integrations, an increasing number of customers come to ERNW specifically to have AI applications assessed. Our colleagues Florian Grunow and Hannes Mohr analyzed the novel attack vectors that emerged and already presented the results at TROOPERS24.

In this blog post, written by my colleague Malte Heinzelmann and me, Florian Port, we will examine several interesting exploit chains that we identified in an example application, highlighting the risks resulting from the combination of sensitive data exposure and excessive agency. The target application is an AI email client that adds a ChatGPT-like assistant to your Google Mail account.

Ultimately, we discovered a prompt injection payload that can be concealed within HTML emails, which is still interpreted by the model even if the user does not directly interact with the malicious email.

Continue reading “Vulnerability Disclosure: Stealing Emails via Prompt Injections”


I know what you ordered last summer @ Winterkongress 2024

Dennis and I already published blog posts about our research project dealing with vulnerabilities in parcel tracking implementations at DHL and DPD. At the Winterkongress (winter congress) in Winterthur, Switzerland, we had the great opportunity to give a talk about the matter. The talk was recorded and can be watched here.

The Winterkongress, organized by DigiGes, took place in Winterthur from March 1 to March 2, 2024. Its main topics are the ethics, threats, and opportunities of IT. This year, many talks touched on AI in some way. Continue reading “I know what you ordered last summer @ Winterkongress 2024”
