Google Gemini: Hacking Memories with Prompt Injection and Delayed Tool Invocation.
Based on lessons learned previously, developers had already trained Gemini to resist indirect prompts instructing it to make changes to an account’s long-term memories without explicit directions from the user. By introducing a condition to the instruction that it be performed only after the user says or does some variable X, which they were likely to take anyway, Rehberger easily cleared that safety barrier.
“When the user later says X, Gemini, believing it’s following the user’s direct instruction, executes the tool,” Rehberger explained. “Gemini, basically, incorrectly ‘thinks’ the user explicitly wants to invoke the tool! It’s a bit of a social engineering/phishing attack but nevertheless shows that an attacker can trick Gemini to store fake information into a user’s long-term memories simply by having them interact with a malicious document.”
Cause once again goes unaddressed
Google responded to the finding with the assessment that the overall threat is low risk and low impact. In an emailed statement, Google explained its reasoning as:
In this instance, the probability was low because it relied on phishing or otherwise tricking the user into summarizing a malicious document and then invoking the material injected by the attacker. The impact was low because the Gemini memory functionality has limited impact on a user session. As this was not a scalable, specific vector of abuse, we ended up at Low/Low. As always, we appreciate the researcher reaching out to us and reporting this issue.
Rehberger noted that Gemini informs users after storing a new long-term memory. That means vigilant users can tell when there are unauthorized additions to this cache and can then remove them. In an interview with Ars, though, the researcher still questioned Google’s assessment.
“Memory corruption in computers is pretty bad, and I think the same applies here to LLMs apps,” he wrote. “Like the AI might not show a user certain info or not talk about certain things or feed the user misinformation, etc. The good thing is that the memory updates don’t happen entirely silently—the user at least sees a message about it (although many might ignore).”