A growing number of organizations deploying artificial intelligence (AI) services in the cloud are unknowingly exposing themselves to serious security risks due to misconfigurations — particularly granting excessive permissions like root access by default. This troubling trend, highlighted in Tenable’s newly released Cloud AI Risk Report 2025, mirrors the early pitfalls enterprises faced during their initial cloud migrations.
According to the report, cloud-based AI services are often set up without adequate security controls, making them prime targets for cybercriminals. Common issues like misconfigurations, public exposure, and overly permissive access levels are putting sensitive assets — including proprietary algorithms, AI models, and intellectual property — at risk of exploitation.
“AI services create massive data volumes, making the cloud an obvious platform for growth,” the report states. “However, AI’s dynamic nature and sensitive components make it vulnerable if not properly secured.” The findings are based on two years of telemetry from cloud environments, collected between December 2022 and November 2024.
Root Access: The Silent Risk Lurking in Cloud AI Deployments
One of the most alarming discoveries was how frequently enterprises grant unnecessary privileges to AI users. For instance, 91% of companies using Amazon SageMaker, AWS's widely adopted machine-learning platform, left the service's root access enabled, often simply by keeping the default configuration.
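In SageMaker, root access is a per-notebook-instance setting that defaults to enabled. A minimal boto3 sketch, using a hypothetical instance name and execution role ARN, shows how the default can be switched off when an instance is created:

```python
import boto3

sagemaker = boto3.client("sagemaker")

# The instance name, instance type, and role ARN below are placeholders,
# not values from Tenable's report.
sagemaker.create_notebook_instance(
    NotebookInstanceName="analyst-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/AnalystNotebookRole",
    # RootAccess defaults to "Enabled"; set it explicitly to "Disabled"
    # unless users genuinely need root on the instance.
    RootAccess="Disabled",
)
```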
Leaving root access enabled could have dangerous consequences. “Users with root access can modify critical system files, install malware, or tamper with AI models,” explained Shelly Raban, Senior Cloud Security Research Engineer at Tenable. If hackers compromise a privileged account, they gain unrestricted control over the cloud environment, opening the door to widespread damage.
Raban noted that these risks aren’t limited to SageMaker. They reflect a broader trend of organizations neglecting security fundamentals when deploying AI in the cloud. “Many default settings are easy to miss, especially in cloud consoles or infrastructure-as-code templates. But these overlooked gaps expose environments to severe threats,” she added.
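Overlooked defaults like this one can be surfaced with a quick audit of what is already running. A short boto3 sketch, assuming credentials with read access to SageMaker, that flags notebook instances still on the default root-access setting:

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Page through every notebook instance in the account/region and flag
# any whose RootAccess setting is still the "Enabled" default.
paginator = sagemaker.get_paginator("list_notebook_instances")
for page in paginator.paginate():
    for instance in page["NotebookInstances"]:
        name = instance["NotebookInstanceName"]
        detail = sagemaker.describe_notebook_instance(NotebookInstanceName=name)
        if detail.get("RootAccess") == "Enabled":
            print(f"Root access enabled: {name}")
```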
Cloud AI Service Risks Stack Up Like a Dangerous Game of Jenga
Tenable’s report compares these missteps to playing a high-stakes game of Jenga — where each poorly secured service stacks another risky block on top of the last. AI tools and services inherit the vulnerabilities of the layers below them. If one is compromised, the attacker gains a pathway into others.
“AI services often depend on hidden infrastructure created by the cloud provider. Unfortunately, these hidden layers carry their own risks, sometimes unknown to the user,” Raban warned. Attackers can exploit these weak points, escalating privileges and moving laterally across services.
To break this risky cycle, Tenable strongly advises enterprises to maintain a detailed inventory of all cloud resources, especially AI components. Continuous monitoring for misconfigurations and immediate remediation are critical, particularly when sensitive or public-facing resources are involved.
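Such an inventory can be bootstrapped with the cloud provider's own APIs. A minimal boto3 sketch covering a single service, SageMaker, might look like the following; other AI services can be enumerated the same way:

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Build a rough inventory of SageMaker resources so misconfigured or
# forgotten AI components don't escape review.
inventory = {"models": [], "endpoints": [], "notebook_instances": []}

for page in sagemaker.get_paginator("list_models").paginate():
    inventory["models"] += [m["ModelName"] for m in page["Models"]]

for page in sagemaker.get_paginator("list_endpoints").paginate():
    inventory["endpoints"] += [e["EndpointName"] for e in page["Endpoints"]]

for page in sagemaker.get_paginator("list_notebook_instances").paginate():
    inventory["notebook_instances"] += [
        n["NotebookInstanceName"] for n in page["NotebookInstances"]
    ]

for kind, names in inventory.items():
    print(f"{kind}: {len(names)}")
```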
Tenable’s report also predicts a surge in AI and generative AI (GenAI)-focused cyberattacks in 2025. Potential threats include the hijacking of AI infrastructure to control large language models (LLMs) — a tactic dubbed “LLMjacking” — and the leaking of sensitive access keys that unlock AI services.
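Model-invocation activity in audit logs is one practical starting point for spotting this kind of hijacking. A hedged boto3 sketch, assuming the account's Bedrock InvokeModel calls are recorded in CloudTrail, that surfaces recent invocations for review:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

# List Bedrock InvokeModel events from the last 24 hours. An unexpected
# spike, or calls from an unfamiliar principal, can be an early sign of
# LLMjacking via stolen access keys.
start = datetime.now(timezone.utc) - timedelta(days=1)
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "InvokeModel"}
    ],
    StartTime=start,
)
for event in response["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"))
```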
Raban stressed that companies should shift toward a holistic exposure management strategy. This approach means enhancing visibility across every digital asset, assessing the potential risk, and taking prioritized action as threats evolve. “Teams need unified visibility to manage risk efficiently as their environment — and AI threat landscape — rapidly changes,” she added.
Following the principle of least privilege is equally critical. Organizations must strictly control who can access their AI models, datasets, and services to prevent unauthorized or overprivileged access. Effective identity management and permission controls are no longer optional — they are a necessity.
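In AWS terms, that means scoping IAM policies to specific actions on specific resources rather than granting broad wildcards. A minimal sketch, with hypothetical account, endpoint, and policy names, that permits invoking one SageMaker endpoint and nothing else:

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical least-privilege policy: allow invoking a single named
# SageMaker endpoint instead of granting sagemaker:* on all resources.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["sagemaker:InvokeEndpoint"],
            "Resource": (
                "arn:aws:sagemaker:us-east-1:123456789012:endpoint/churn-model"
            ),
        }
    ],
}

iam.create_policy(
    PolicyName="InvokeChurnEndpointOnly",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
```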
Ultimately, securing AI in the cloud requires a proactive mindset. By addressing these early-stage misconfigurations now, enterprises can safely unlock AI’s full potential without putting their sensitive assets — or their reputations — at risk.