With the rise of AI assistance features in an increasing number of products, we have begun to focus some of our research efforts on refining our internal detection and testing guidelines for LLMs by taking a brief look at the new AI integrations we discover.
Alongside the rise of applications with LLM integrations, an increasing number of customers come to ERNW to specifically assess AI applications. Our colleagues Florian Grunow and Hannes Mohr analyzed the novel attack vectors that emerged and presented the results at TROOPERS24 already.
In this blog post, written by my colleague Malte Heinzelmann and me, Florian Port, we will examine multiple interesting exploit chains that we identified in an exemplary application, highlighting the risks resulting from the combination of sensitive data exposure and excessive agency. The target application is an AI email client, which adds a ChatGPT-like assistant to your Google Mail account.
Ultimately, we discovered a prompt injection payload that can be concealed within HTML emails, which is still interpreted by the model even if the user does not directly interact with the malicious email.
Continue reading “Vulnerability Disclosure: Stealing Emails via Prompt Injections”
Continue reading