Description

A recent investigation has revealed that Google’s Gemini for Workspace, an AI assistant integrated into various Google products, is susceptible to indirect prompt injection attacks. These vulnerabilities allow malicious actors to manipulate the assistant into producing misleading or unintended responses, raising significant concerns about the accuracy and reliability of its outputs. Gemini for Workspace is designed to enhance productivity by embedding AI tools within Google services like Gmail, Google Slides, and Google Drive. However, security researchers from Hidden Layer have demonstrated through proof-of-concept attacks that indirect prompt injection can compromise the assistant’s responses. One of the most troubling aspects of these vulnerabilities is their potential use in phishing attacks. For example, an attacker could send a malicious email that, when processed by Gemini, prompts the assistant to display fake alerts about compromised passwords, instructing users to visit malicious websites to reset them. The vulnerabilities aren't limited to Gmail. In Google Slides, attackers can inject malicious content into speaker notes, leading Gemini to generate summaries with unintended information, such as inserting lyrics from a famous song. In Google Drive, Gemini behaves similarly to a Retrieve, Augment, and Generate (RAG) system, meaning attackers can inject harmful content into shared documents, further manipulating the assistant’s outputs. Users should be aware of the potential risks and take appropriate steps to safeguard themselves against possible exploitation. As Google continues to deploy Gemini for Workspace, addressing these vulnerabilities is essential to maintaining the integrity and trustworthiness of the assistant's generated information.