Description

Cybersecurity experts have uncovered and reported three now-resolved vulnerabilities within Google’s Gemini AI assistant, which posed significant privacy and data theft risks. According to a report by Tenable researcher Liv Matan, these flaws affected separate components of the Gemini suite and were collectively labeled as the "Gemini Trifecta." If exploited, these weaknesses could have enabled attackers to manipulate AI behavior, access private user data, and misuse cloud resources. The first vulnerability involved Gemini Cloud Assist, where attackers could carry out prompt injection by embedding malicious commands within HTTP request headers, such as the User-Agent. Since Gemini Cloud Assist summarizes logs pulled from various Google Cloud services, this flaw could allow unauthorized querying of cloud resources. An attacker might craft prompts that direct Gemini to scan for public cloud assets or identify IAM misconfigurations and then send that data to an external server using a link embedded in the AI's output. A second flaw existed in Gemini's Search Personalization model, where attackers could inject malicious prompts by modifying a user’s Chrome search history via a specially crafted website. This form of search-injection could manipulate Gemini into leaking personal information, including saved user data and location details. The model failed to distinguish between legitimate queries and injected ones, making it vulnerable to such manipulation. The third vulnerability was found in the Gemini Browsing Tool. By exploiting the AI's internal summarization of webpage content, attackers could insert hidden prompts within a webpage that would cause Gemini to send sensitive data to a remote server—without requiring any visible links or images. Google has since implemented measures like disabling hyperlink rendering in log summaries and strengthening protections against prompt injection. These incidents highlight how AI tools can become vectors for attack, emphasizing the need for organizations to maintain strict security controls as AI becomes more integrated into digital infrastructure.