Researchers are warning organizations using NVIDIA GPUs for AI workloads to immediately patch systems against critical security flaws in the NVIDIA Container Toolkit. Left unpatched, the vulnerabilities could allow attackers to steal sensitive data, exfiltrate proprietary AI models, or cause severe service disruptions.
In September, NVIDIA released a patch for CVE-2024-0132, a time-of-check time-of-use (TOCTOU) vulnerability rated 9.0 out of 10 on the CVSS scale. The flaw affected how the NVIDIA Container Toolkit manages GPU-accelerated containers. However, new findings from Trend Micro and Wiz have revealed a secondary vulnerability that was not mitigated by the original patch, exposing systems—even those already patched—to continued risk.
Incomplete Fix Exposes Systems to Continued Risk
Trend Micro flagged the incomplete fix for CVE-2024-0132 in a recent report, noting that the residual flaw still allows denial-of-service (DoS) attacks. Because the original patch appeared to close the issue, researchers say, many enterprises may have believed they were fully protected when they were not.
NVIDIA originally disclosed that CVE-2024-0132 could allow code execution, privilege escalation, DoS, and data tampering. The unpatched secondary flaw, now tracked as ZDI-25-087 / CVE-2025-23359, was later acknowledged and patched by NVIDIA in February 2025, with Wiz publishing its own findings shortly afterward.
NVIDIA confirmed the vulnerability on April 14, but offered no explanation for the discrepancy between Trend Micro’s and Wiz’s disclosure timelines. Trend Micro did not respond to repeated inquiries.
AI and Cloud Workloads Face Elevated Threats
Security experts warn that confusion over patch completeness increases pressure on overworked IT and cybersecurity teams, especially those managing AI and container-based infrastructures.
“This research challenges defenders to question patch completeness and adopt a proactive stance toward driver integrity,” said Jason Soroko, Senior Fellow at Sectigo. “It adds more weight to already-burdened security teams.”
Organizations running AI, cloud, or Docker container workloads—particularly those with default configurations—are most at risk. According to Trend Micro, attackers could exploit CVE-2025-23359 to gain access to host-level data, steal AI models, or cause service outages through resource exhaustion.
Thomas Richards, Director at Black Duck Security, emphasized: “With NVIDIA GPUs being the standard in AI processing, the scope of risk is enormous. These vulnerabilities should prompt urgent action.”
Inside the NVIDIA Vulnerabilities and Exploitation Path
The NVIDIA Container Toolkit helps users deploy GPU-accelerated containers. Attackers could exploit CVE-2024-0132 on systems using default configurations—though not when Container Device Interface (CDI) is enabled, per NVIDIA’s advisory.
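Since NVIDIA's advisory points to CDI mode as the configuration that avoids the vulnerable code path, here is a minimal sketch of what switching to CDI involves. It assumes the nvidia-ctk CLI from the NVIDIA Container Toolkit is installed and that the container runtime supports CDI device requests (Podman does natively; Docker requires its CDI feature flag); the output path and image name are illustrative only:

```python
import subprocess

# Assumed location for the generated CDI spec; adjust for your distribution.
CDI_SPEC_PATH = "/etc/cdi/nvidia.yaml"


def generate_cdi_spec() -> None:
    # Generate a CDI specification describing the GPUs on this host.
    # Requires privileges sufficient to write to /etc/cdi.
    subprocess.run(
        ["nvidia-ctk", "cdi", "generate", f"--output={CDI_SPEC_PATH}"],
        check=True,
    )


def list_cdi_devices() -> str:
    # List the CDI device names (e.g. nvidia.com/gpu=0) exposed by the spec.
    result = subprocess.run(
        ["nvidia-ctk", "cdi", "list"],
        check=True,
        capture_output=True,
        text=True,
    )
    return result.stdout


if __name__ == "__main__":
    generate_cdi_spec()
    print(list_cdi_devices())
    # Containers can then request GPUs via CDI rather than the legacy hook, e.g.:
    #   podman run --device nvidia.com/gpu=all <cuda-image> nvidia-smi
```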
The secondary flaw (CVE-2025-23359) affects Linux systems running Docker. It lets attackers bypass access controls via the Docker API, which functions as a privileged interface on these systems: anyone with API access effectively holds root-level control of the host, creating a serious threat vector.
Attackers can exploit the flaw by creating two malicious containers linked through a volume symlink and getting them deployed on a target host, whether through direct access, a supply-chain compromise, or social engineering. Winning the resulting race condition gives them access to the host file system, and from there they can take control of the system by abusing its Unix sockets, Trend Micro researchers explained.
Mitigation Steps for NVIDIA CVE-2024-0132 and CVE-2025-23359
NVIDIA has now patched both flaws, and experts strongly recommend immediate updates to the latest toolkit versions.
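As a quick way to confirm which toolkit build a host is actually running, the sketch below shells out to nvidia-ctk and compares the reported version against a minimum fixed version. The 1.17.4 threshold is an assumption made here for illustration and should be verified against NVIDIA's advisories:

```python
import re
import subprocess

# Assumed minimum fixed version covering CVE-2025-23359 (and the earlier
# CVE-2024-0132 fix); confirm against NVIDIA's advisories before relying on it.
FIXED_VERSION = (1, 17, 4)


def installed_toolkit_version() -> tuple[int, ...]:
    # `nvidia-ctk --version` prints something like
    # "NVIDIA Container Toolkit CLI version 1.17.4"; parse the first x.y.z found.
    out = subprocess.run(
        ["nvidia-ctk", "--version"], capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", out)
    if not match:
        raise RuntimeError(f"Could not parse version from: {out!r}")
    return tuple(int(part) for part in match.groups())


if __name__ == "__main__":
    version = installed_toolkit_version()
    status = "patched" if version >= FIXED_VERSION else "VULNERABLE - update required"
    print(f"nvidia-container-toolkit {'.'.join(map(str, version))}: {status}")
```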
Trend Micro further advises organizations to:
- Restrict Docker API access to authorized personnel.
- Avoid granting unnecessary root or escalated privileges.
- Disable unused NVIDIA Container Toolkit features.
- Enforce strong image admission policies within CI/CD pipelines.
- Audit container-to-host interactions and deploy anomaly detection for container runtime behavior (a minimal audit sketch appears at the end of this section).
These additional measures help reduce the attack surface and detect signs of exploitation, improving resilience against future threats targeting AI infrastructure.
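As one concrete starting point for the auditing recommendation above, the following minimal sketch assumes the Python Docker SDK is installed and the script can reach the local Docker socket. It flags running containers that are privileged or bind-mount sensitive host paths; the list of "sensitive" paths is an illustrative assumption, not Trend Micro's guidance:

```python
import docker  # Python Docker SDK: pip install docker

# Host paths whose exposure inside a container is a red flag for this class of attack.
SENSITIVE_HOST_PATHS = ("/", "/var/run/docker.sock", "/etc", "/proc", "/sys")


def is_sensitive(source: str) -> bool:
    """Return True if a bind-mount source sits on (or under) a sensitive host path."""
    if source == "/":
        return True
    return any(
        source == path or source.startswith(path + "/")
        for path in SENSITIVE_HOST_PATHS
        if path != "/"
    )


def audit_running_containers() -> None:
    client = docker.from_env()  # connects to the local Docker daemon
    for container in client.containers.list():
        findings = []
        if container.attrs.get("HostConfig", {}).get("Privileged"):
            findings.append("runs with --privileged")
        for mount in container.attrs.get("Mounts", []):
            if mount.get("Type") == "bind" and is_sensitive(mount.get("Source", "")):
                findings.append(f"bind-mounts sensitive host path {mount['Source']}")
        if findings:
            print(f"[!] {container.name}: " + "; ".join(findings))


if __name__ == "__main__":
    audit_running_containers()
```

A script like this can run on a schedule or feed a SIEM, giving teams an early signal that a container has gained the kind of host-level access these vulnerabilities enable.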