Description

A newly discovered cybersecurity risk, referred to as AI Recommendation Poisoning, is targeting users of AI-powered assistants by exploiting their memory and personalization functions. This method involves embedding hidden instructions within links or buttons such as “Summarize with AI,” which appear legitimate on websites or in emails. When users click these elements, they are redirected to their AI assistant with preloaded prompts containing concealed commands. These commands are designed to influence the assistant’s memory, allowing external parties to shape how the AI responds in future conversations without the user’s awareness. The attack relies on specially constructed URLs that contain embedded prompt instructions within their parameters. These instructions can direct the AI assistant to remember certain companies, products, or sources as trustworthy or to prioritize them when providing recommendations. Because many AI assistants store memory to enhance personalization, the injected instructions can remain active over time. This persistence allows manipulated preferences to affect responses across multiple sessions, potentially influencing decisions related to healthcare, financial guidance, or security matters, all while appearing legitimate to the user. Security researchers from Microsoft identified over 50 distinct prompt injection attempts connected to 31 organizations across 14 different industries. In some cases, legitimate businesses were found using these techniques as promotional strategies to increase their visibility in AI-generated responses. Researchers also observed these crafted links circulating in email traffic over a two-month period, demonstrating how easily the technique can spread. Additionally, publicly available tools such as CiteMET and AI Share URL Creator simplify the process, making it easier for both marketers and malicious actors to deploy such manipulation. To mitigate this risk, Microsoft has implemented safeguards in Copilot and continues enhancing protections against prompt injection threats. Users are encouraged to review and manage their AI assistant’s stored memory, avoid interacting with suspicious AI links, and question unexpected recommendations. Staying alert and practicing cautious interaction with AI tools is essential to maintain reliable and unbiased AI assistance.