NVIDIA has released a security patch after discovering a high-severity flaw in its NeMo Curator tool, which is used for filtering and managing datasets in AI and LLM training pipelines. Tracked as CVE-2025-23307, the vulnerability affects all versions prior to 25.07 on Windows, Linux, and macOS. It can be exploited through a maliciously crafted file, leading to remote code execution, privilege escalation, and even tampering with sensitive training data.

The risk is heightened for organizations that use NeMo Curator in their AI development workflows, because the tool works directly with the datasets that determine how machine learning models behave. A successful exploit could allow threat actors to inject malicious code, exfiltrate or tamper with data, or poison AI training pipelines, resulting in faulty, biased, or insecure model behavior. The exploit can be conducted remotely, and once executed it gives the attacker deep access to the AI infrastructure. Data confidentiality, integrity, and system availability are all at risk, as reflected in the vulnerability's CVSS score of 7.8.

Organizations should update to version 25.07 or later, perform an internal audit to check for indicators of compromise, and validate input files before processing. Hardening AI training pipelines, monitoring system activity, and applying updates promptly are crucial steps to prevent similar incidents in the future.
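One practical way to act on the "validate input files before processing" recommendation is a pre-processing gate that rejects files a text-curation pipeline has no reason to accept. The sketch below is illustrative only, not part of NVIDIA's patch or NeMo Curator's API: the extension allow-list, block-list, and size cap are assumptions you would tune to your own pipeline. It rejects serialized-object formats (such as pickle), which are a common code-execution vector when loaded.

```python
import os
import tempfile

# Hypothetical policy for a text-dataset curation pipeline (illustrative
# assumptions, not NeMo Curator configuration).
ALLOWED_EXTENSIONS = {".jsonl", ".txt", ".csv", ".parquet"}
# Serialized-object formats can execute code when deserialized.
BLOCKED_EXTENSIONS = {".pkl", ".pickle", ".pt", ".bin"}
MAX_FILE_SIZE = 512 * 1024 * 1024  # 512 MiB cap, adjust as needed


def is_safe_input(path: str) -> bool:
    """Return True only if the file passes basic pre-processing checks."""
    ext = os.path.splitext(path)[1].lower()
    if ext in BLOCKED_EXTENSIONS:
        return False  # deserialization of these formats can run attacker code
    if ext not in ALLOWED_EXTENSIONS:
        return False  # unknown formats should be quarantined, not processed
    if os.path.getsize(path) > MAX_FILE_SIZE:
        return False  # oversized files are suspicious for a text dataset
    return True


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        good = os.path.join(d, "corpus.jsonl")
        bad = os.path.join(d, "payload.pkl")
        for p in (good, bad):
            with open(p, "w") as f:
                f.write('{"text": "sample"}\n')
        print(is_safe_input(good))  # True
        print(is_safe_input(bad))   # False
```

A check like this is a defense-in-depth measure alongside patching, not a substitute for it: it narrows the attack surface for malicious-file exploits but cannot rule out a crafted file in an allowed format.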