Description

Salesforce's Slack Technologies recently patched a critical vulnerability in its Slack AI that could have allowed attackers to steal data from private channels or launch phishing attacks. The flaw, discovered by security firm PromptArmor, involved a prompt injection issue where Slack AI's large language model (LLM) mistakenly processed malicious commands as valid queries. The vulnerability became even more concerning after Slack's August 14 update, which expanded the AI's capabilities to ingest documents and files from Google Drive. This increase in functionality inadvertently widened the platform’s attack surface, creating more opportunities for hackers. By embedding malicious instructions in files or public channels, attackers could potentially gain unauthorized access to sensitive data or trick users into revealing credentials. PromptArmor identified two key attack methods: data exfiltration from private channels and phishing campaigns aimed at stealing user login information. Although Slack initially downplayed the vulnerability, calling it "intended behavior," further conversations led to the release of a patch addressing a scenario where an insider within the same workspace could use phishing tactics. This incident has sparked broader concerns about the security of AI-driven tools, highlighting the need for these technologies to manage data with greater caution and ethics. Security experts urge organizations using Slack to adjust their AI settings to limit document ingestion, which could help reduce exposure to potential attacks from malicious actors.