There's a comfortable assumption in enterprise security: the cloud is the thing you protect. You lock down the perimeter, monitor for anomalies, patch the vulnerabilities, and trust that the infrastructure itself is on your side.
SentinelOne's 2025 Cloud Security Risk Report dismantles that assumption across 38 pages. The most consequential finding isn't about a new vulnerability or a novel exploit chain. It's about a pattern: attackers are increasingly weaponizing native cloud features to achieve objectives that traditional security controls were never designed to detect.
AWS SSE-C encryption for ransomware. Lambda functions for on-demand backdoor generation. S3 Transfer Acceleration for data exfiltration. Azure Blob Storage for staging stolen data. These aren't exploits. They're features, used exactly as designed, just not by the people you intended.
The Cloud as Weapon: Native Feature Abuse
The report documents a threat actor called Codefinger that represents this shift perfectly. Codefinger targets AWS S3 buckets, but not through misconfiguration or stolen credentials in the traditional sense. Instead, it leverages AWS SSE-C (Server-Side Encryption with Customer-Provided Keys) to encrypt victim data with attacker-controlled keys.
The critical detail: AWS does not store the customer encryption key. That's a feature, not a bug. It's designed to give customers complete control over their encryption. But it also means that once Codefinger encrypts your data with a key only the attacker holds, recovery without that key is impossible. AWS cannot help you. There is no backdoor, no master key, no recovery mechanism. The security feature IS the ransomware mechanism.
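The mechanics are worth seeing, because they're so small. SSE-C is driven entirely by three request headers; S3 uses the supplied key for encryption, keeps only a salted HMAC of it for validation, and discards the key itself. A minimal sketch of constructing those headers (header names are from the S3 API; the random key stands in for an attacker-generated one):

```python
import base64
import hashlib
import os

def ssec_headers(key: bytes) -> dict:
    """Build the three S3 SSE-C request headers for a customer-provided key.

    S3 encrypts the object with this key and then discards it, keeping only
    a salted HMAC for validation -- which is exactly why attacker-held keys
    make recovery impossible.
    """
    if len(key) != 32:
        raise ValueError("SSE-C requires a 256-bit (32-byte) AES key")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

# A key the victim never sees:
headers = ssec_headers(os.urandom(32))
```

A CopyObject request carrying these headers re-encrypts an existing object in place under the new key; from then on, every GET must present that same key.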
To create urgency, Codefinger modifies S3 lifecycle policies to auto-delete encrypted files after seven days. Another native feature, repurposed as a countdown timer. This fits the broader ransomware evolution from encryption to exfiltration, but Codefinger takes it further: it doesn't just steal data or lock files. It turns the cloud provider's own encryption guarantees into the ransom mechanism.
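Because the countdown is just a lifecycle rule, it is also auditable. A defensive sketch, assuming the rule shape returned by GetBucketLifecycleConfiguration (the seven-day threshold mirrors the Codefinger behavior; tune it to your own baseline):

```python
def short_expiry_rules(rules: list, max_days: int = 7) -> list:
    """Flag enabled lifecycle rules that expire objects unusually fast.

    `rules` is the "Rules" list from an S3 lifecycle configuration.
    A rule like this appearing on a bucket that never had one is exactly
    the countdown-timer pattern described above.
    """
    flagged = []
    for rule in rules:
        days = rule.get("Expiration", {}).get("Days")
        if rule.get("Status") == "Enabled" and days is not None and days <= max_days:
            flagged.append(rule)
    return flagged

# Example: a Codefinger-style rule trips the check
rules = [{"ID": "countdown", "Status": "Enabled", "Filter": {}, "Expiration": {"Days": 7}}]
```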
This pattern extends beyond storage. The report describes a technique likely associated with the JavaGhost threat actor that amounts to "persistence-as-a-service" in cloud environments. The attacker creates externally accessible Lambda functions that dynamically generate IAM users when triggered via an HTTP API Gateway endpoint. It's an on-demand backdoor factory, built entirely from native AWS services.
Think about what that means for detection. There's no malware to scan for. No suspicious binary. No command-and-control traffic to a known bad IP. Every component is a legitimate AWS service operating within normal parameters. The only anomaly is the intent.
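Detection therefore has to key on who is doing what, not on what is running. One hedged sketch: in CloudTrail, IAM CreateUser calls made under an assumed role are rare enough in most environments to alert on (field names follow CloudTrail's event schema; treat this as a coarse filter, since many legitimate services also assume roles):

```python
def role_created_iam_user(event: dict) -> bool:
    """True if a CloudTrail event shows CreateUser invoked under an assumed role.

    Legitimate IAM user creation usually comes from humans or IaC pipelines;
    CreateUser issued under a function's execution role matches the on-demand
    backdoor-factory pattern described above.
    """
    if event.get("eventSource") != "iam.amazonaws.com":
        return False
    if event.get("eventName") != "CreateUser":
        return False
    identity = event.get("userIdentity", {})
    return identity.get("type") == "AssumedRole" and ":assumed-role/" in identity.get("arn", "")

# A synthetic event shaped like CloudTrail output (account ID and names are invented)
sample = {
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "userIdentity": {
        "type": "AssumedRole",
        "arn": "arn:aws:sts::111122223333:assumed-role/backdoor-fn-role/backdoor-fn",
    },
}
```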
As Nick Davis, SentinelOne's Sr. Director of Cloud Security & Exposure Management, puts it: "Attackers do not share our siloed approach to security. They don't care which group within the Security Organization owns threats vs. risks, or Enterprise vs. Cloud."
AI Tools Are Accelerating the Problem
While attackers weaponize cloud features, the tools enterprises deploy to increase developer productivity are quietly expanding the attack surface.
The report cites research across 20,000 GitHub repositories showing that repositories using GitHub Copilot leak secrets at a rate of 6.4%, compared to 4.6% across all public repositories. That's a 39% increase in credential leakage from AI-assisted development. The tools designed to make developers faster are also making them less careful with secrets.
This isn't a marginal finding. Over 1.1 million secrets were leaked in environment files across 58,000 websites, with AWS keys ranking as the number one leaked critical credential at 57,689 critical-severity instances. When AI coding assistants increase the rate at which those secrets enter public repositories, the compound effect is significant.
The problem extends beyond credential leaks. Research shows that 20% of AI-recommended packages do not exist: 205,000 unique hallucinated package names across tested models. This creates a predictable attack vector called "slopsquatting," where attackers register packages with names that AI models consistently hallucinate. One researcher created a dummy package with a ChatGPT-hallucinated name and it received over 30,000 downloads in three months. Even more concerning, 43% of hallucinated package names reappear every time the same prompt is re-run, making them reliable targets.
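A cheap countermeasure is to verify that every AI-suggested dependency actually resolves before it reaches an install command. A sketch against PyPI's JSON API (the injectable `opener` parameter exists only so the function can be exercised offline):

```python
import urllib.error
import urllib.request

def package_exists(name: str, opener=urllib.request.urlopen) -> bool:
    """Check whether `name` resolves on PyPI's JSON endpoint (HTTP 200).

    A 404 on a freshly AI-suggested name is the slopsquatting warning sign:
    either the package is a hallucination, or nobody has registered it yet.
    """
    try:
        with opener(f"https://pypi.org/pypi/{name}/json") as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise
```

Existence alone is not sufficient, since an attacker may already have squatted the hallucinated name, so pair this with package age, download counts, and maintainer reputation checks.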
I've written before about how AI-generated code creates security debt at scale. The SentinelOne data makes the mechanism concrete: it's not just that AI writes insecure code. It's that AI-assisted development systematically increases the rate of credential exposure and introduces phantom dependencies that attackers can weaponize.
Supply Chain Attacks at Industrial Scale
The report's supply chain findings reinforce something I've been tracking across multiple posts: the attack surface isn't in your application code. It's in the tooling and infrastructure you depend on.
The TJ-actions/changed-files incident (CVE-2025-30066) is the clearest illustration. A single malicious commit retroactively tainted previous version tags, potentially impacting over 23,000 software development projects. The injected payload used a Python script to probe Runner Worker process memory for AWS Access Keys, GitHub PATs, and private RSA keys, then double-encoded the extracted credentials in base64 and deposited them in build logs.
The attack was actually a multi-hop supply chain compromise: it originated from the compromise of reviewdog/action-setup, which was a dependency of tj-actions/eslint-changed-files, which tj-actions/changed-files relied on. Three degrees of separation between the initial compromise and the 23,000 affected projects.
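The exfiltration channel itself, double-base64 in public build logs, is crude but easy to miss by eye and easy to scan for. A hedged sketch (the credential prefixes are well-known formats; the token-length threshold is my assumption):

```python
import base64
import binascii
import re

# Markers for common credential formats: AWS access key IDs, GitHub PATs, PEM keys
MARKERS = ("AKIA", "ghp_", "BEGIN RSA PRIVATE KEY")

def double_b64_secrets(log_text: str) -> list:
    """Find tokens in a build log that, decoded twice, look like credentials."""
    hits = []
    for token in re.findall(r"[A-Za-z0-9+/=]{24,}", log_text):
        try:
            once = base64.b64decode(token, validate=True)
            twice = base64.b64decode(once, validate=True)
        except (binascii.Error, ValueError):
            continue  # not valid base64 twice over; ignore
        text = twice.decode("ascii", errors="ignore")
        if any(marker in text for marker in MARKERS):
            hits.append(text)
    return hits
```

Running a filter like this over CI logs is the mirror image of the attack: if the credentials are sitting in your build output, you want to be the first to notice.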
But the finding that should keep cloud architects up at night involves deprovisioned infrastructure. Researchers identified 150 deprovisioned S3 buckets previously owned by government organizations, Fortune 500 companies, and tech firms. After co-opting these buckets, they received over 8 million requests for build artifacts, software updates, and configuration files.
Eight million requests to infrastructure that no longer existed, from production systems that still trusted it. This isn't a security vulnerability. It's an operational management failure that reveals how poorly enterprises track their own cloud dependencies. SentinelOne reports alerting clients to over 1,250 instances of subdomain takeover risk from deprovisioned cloud resources in the past year alone.
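Checking whether a referenced bucket name is still registered is inexpensive: S3 answers 404 for a bucket that does not exist and 403 for one that exists but denies you. A sketch of that probe (again, the `opener` parameter is only there to allow offline testing):

```python
import urllib.error
import urllib.request

def bucket_is_claimable(bucket: str, opener=urllib.request.urlopen) -> bool:
    """True if an S3 bucket name your systems reference no longer exists.

    A 404 (NoSuchBucket) means anyone can register the name and serve whatever
    your pipelines still fetch from it; a 403 means the bucket exists but is
    private, which is not a takeover risk.
    """
    try:
        with opener(f"https://{bucket}.s3.amazonaws.com/"):
            return False  # 2xx: bucket exists and is readable
    except urllib.error.HTTPError as err:
        return err.code == 404
```

Running this over every bucket name found in your build configs, DNS records, and IaC templates is a one-afternoon job that closes an eight-million-request hole.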
The Infostealer Economy Goes Cloud-Native
The report documents a parallel development that connects credential theft to cloud compromise at scale. Tools like AlienFox, FBot, Predator AI, and Xeon Sender target SaaS and cloud providers specifically, sharing code and techniques across an ecosystem distributed through Telegram channels with user-friendly GUIs.
As SentinelOne Senior Threat Researcher Alex Delamotte notes: "Tracking and detecting these tools can be quite challenging, as these cloud hacktools are modular by design and evolve constantly."
The significance isn't the tools themselves. It's the democratization of cloud compromise. When cloud-targeting infostealers come with graphical interfaces and Telegram support channels, the attacker population expands well beyond sophisticated APT groups. Combined with the infostealer epidemic already documented across consumer and enterprise environments, this creates a volume problem that signature-based detection cannot solve.
The first and sixth most common malware targeting Windows in 2024 were infostealer variants (LummaStealer and SolarMarker), according to SentinelOne's WatchTower data. SentinelOne's research on cloud-targeting hacktools was significant enough to become a founding citation of MITRE ATT&CK v16's new technique: T1496.004, Cloud Service Hijacking.
What This Means for Defense
The SentinelOne report makes one thing clear: the traditional security model of protecting the perimeter and scanning for known threats is structurally inadequate for cloud environments where the infrastructure itself is the attack vector.
Three shifts matter:
Monitor feature usage, not just vulnerabilities. When SSE-C encryption, Lambda function creation, and S3 lifecycle policy modifications are all legitimate operations, detection has to move from "is this a known exploit?" to "is this a legitimate use of this feature by this identity at this time?" That requires behavioral baselines, not signature databases.
Treat AI development tools as an attack surface expansion. The 39% increase in credential leaks from Copilot-using repositories isn't a reason to ban AI coding assistants. It's a reason to make secret scanning a mandatory, automated gate before any code reaches a repository, regardless of whether a human or an AI wrote it. The same applies to dependency validation: if 20% of AI-recommended packages are phantoms, automated package verification needs to be part of the CI/CD pipeline.
Close the cloud asset lifecycle gap. Eight million requests to 150 deprovisioned S3 buckets means organizations are decommissioning infrastructure without understanding what still depends on it. Cloud asset inventory isn't a security tool problem; it's a process problem. Every deprovisioned resource needs a dependency audit before deletion, and dangling DNS records need automated detection and cleanup.
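The first shift, feature-usage monitoring, reduces to questions a CloudTrail pipeline can already answer. A minimal sketch flagging the two Codefinger primitives (the `SSEApplied` annotation is how I read CloudTrail's S3 data-event schema; verify the field names against your own logs before relying on this):

```python
# Management events that rewrite a bucket's deletion countdown
LIFECYCLE_EVENTS = {"PutBucketLifecycle", "PutBucketLifecycleConfiguration"}

def flag_feature_abuse(event: dict):
    """Return a reason string if a CloudTrail event matches a Codefinger primitive, else None."""
    name = event.get("eventName", "")
    if name in LIFECYCLE_EVENTS:
        return "lifecycle policy rewritten -- check for short expirations"
    sse = (event.get("additionalEventData") or {}).get("SSEApplied")
    if name in {"PutObject", "CopyObject"} and sse == "SSE_C":
        return "object written under a customer-provided key AWS cannot recover"
    return None
```

On its own this is a tripwire, not a baseline; the point is that both primitives are already visible in logs most organizations collect but never query this way.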
The attackers aren't waiting for the next zero-day. They're reading the same cloud documentation you are, and they're finding ways to use your infrastructure against you. The question isn't whether your cloud is secure. It's whether you're monitoring for the possibility that its own features are being used as weapons.