Description

In September 2025, FireTail researcher Viktor Markopoulos uncovered a serious vulnerability affecting several LLMs i.e., ASCII Smuggling, that exploits invisible Unicode control characters, or “tag characters,” embedded in prompts to issue hidden commands. While some LLMs like ChatGPT and Claude sanitize such inputs, notably Gemini, Grok, and DeepSeek remain vulnerable. Because AI agents are deeply embedded in enterprise systems like email, calendars, and summaries, these invisible characters pose a high-severity risk, enabling attackers to bypass human oversight and manipulate LLM behavior. One attack vector targets identity spoofing through calendar invites in Google Workspace. Hidden Unicode tags within a calendar object allow attackers to overwrite titles, descriptions, organizers, and even meeting links without changing what the user sees in UI. Gemini, reads out the spoofed data after parsing the full event text like fake organizer or malicious link, effectively impersonating a trusted contact which occurs without requiring the user to accept the invite. The LLM processes the data automatically upon receiving the event, bypassing approval mechanisms and undermining organizational trust models. The second attack involves data poisoning in systems that use AI to summarize user input. A seemingly harmless review “Great phone. Fast delivery.” having hidden payloads instructing the LLM to include a malicious link in generated summary. Original content appears clean, misleading auditors and users into trusting the output, effectively turning the summarization system into a vector for scams or disinformation. Such undetected outputs can easily spread across customer-facing systems. With Google declining to mitigate the flaw despite responsible disclosure, organizations must take defense into their own hands by monitoring raw LLM input streams, logging every character, analyzing for suspicious Unicode blocks, and flagging anomalies before they impact workflows. Enterprises can’t rely on vendors or user interfaces alone, visibility into raw data ingestion is now essential for AI security.