On February 20, 2026, Amazon's CISO CJ Moses published research that should have been humbling for the cybersecurity industry. A Russian-speaking threat actor, possibly a single individual, used commercial AI tools to breach more than 600 FortiGate firewalls across 55 countries in five weeks.
The headlines wrote themselves: "AI-powered hacking campaign compromises firewalls globally." The implication was clear. We've entered a new era where AI enables sophisticated attacks at unprecedented scale.
Except that's not what happened.
The attacker wasn't sophisticated. The AI tools didn't enable some novel exploit chain. What actually happened is far less dramatic and far more damning: 600 organizations left their firewall management interfaces exposed to the internet with reused passwords and no multi-factor authentication. The attacker just needed AI to check the doors faster.
A Five-Week Campaign Built on Fundamental Failures
The Amazon threat intelligence report details a campaign that ran from January 11 to February 18, 2026. The threat actor scanned for FortiGate management interfaces on ports 443, 8443, 10443, and 4443, looking for devices exposed to the internet. Then they used brute-force attacks with commonly reused credentials to gain access.
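The initial-access step is something defenders can and should reproduce against their own address space before an attacker does. A minimal sketch of that sweep in Python (the port list comes from the campaign details above; the function name and timeout are my choices, not anything from Amazon's report):

```python
import socket

# Management ports the campaign scanned for, per the Amazon report.
MGMT_PORTS = [443, 8443, 10443, 4443]

def check_ports(host, ports=MGMT_PORTS, timeout=2.0):
    """Return the subset of ports on host that accept a TCP connection.

    A hit means the interface is reachable -- exactly the condition the
    attacker was filtering for before trying credentials.
    """
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or timed out
    return open_ports
```

Run against your own public ranges, any hit on a management port is a finding. The attacker's version differed only in scale.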
No zero-day exploits. No novel vulnerability chains. No advanced persistent threat tradecraft. Just credential stuffing against exposed admin panels.
What made this campaign different from the thousands of brute-force attacks that happen every day was what came after initial access. The attacker used at least two commercial LLM providers to build custom tools for parsing stolen configuration files, automating lateral movement, and scaling the operation across hundreds of targets simultaneously. AI-generated Python and Go scripts handled VPN routing table ingestion, network classification, service discovery, and vulnerability scanning integration.
The post-exploitation playbook was textbook ransomware staging: DCSync attacks using Meterpreter with the mimikatz module to extract NTLM password hashes from Active Directory, targeted attacks against Veeam Backup & Replication servers using custom PowerShell scripts, and compiled credential-extraction tools. Amazon's researchers noted that the observed activity matched pre-ransomware staging, preparing infrastructure for extortion campaigns rather than immediate disruption.
I wrote about exactly this pattern two days ago in Backups Were the Answer to Ransomware. Then Attackers Stopped Encrypting: attackers are increasingly targeting backup infrastructure first, because they've learned that organizations who can restore from backups won't pay. Destroying the backups is now step one.
The AI Was Mediocre. That's the Point.
Here's what the alarming headlines leave out: Amazon's research explicitly describes the attacker's AI-generated tools as exhibiting "redundant comments, simplistic architecture, and naive JSON parsing via string matching." The actor "largely failed when attempting to exploit anything beyond the most straightforward, automated attack paths" and abandoned hardened targets rather than persisting.
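To make "naive JSON parsing via string matching" concrete, here is a hypothetical reconstruction of the pattern Amazon describes next to the obvious fix. The sample data and function names are mine, not from the report:

```python
import json

def naive_extract(blob, key):
    """The brittle pattern: slice the raw string around the key.

    Works on one exact formatting, silently breaks on values that
    contain commas, nested objects, or escaped quotes.
    """
    start = blob.index('"%s":' % key) + len(key) + 3
    end = blob.index(",", start) if "," in blob[start:] else blob.index("}", start)
    return blob[start:end].strip().strip('"')

def proper_extract(blob, key):
    """The one-line alternative: parse once, index normally."""
    return json.loads(blob)[key]
```

On a config line like `{"hostname": "fw,edge-01", ...}` the naive version returns a truncated `fw` without raising any error, which is exactly the kind of tooling that "largely failed" outside the happy path.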
This wasn't a genius leveraging AI to crack previously impenetrable defenses. This was a low-skilled operator using ChatGPT-era tools to automate the cybersecurity equivalent of jiggling door handles at scale.
Two weeks ago, I wrote about VoidLink's 88,000 lines of AI-generated malware and the gap between AI-assisted capability and AI-assisted discipline. The FortiGate campaign is the same pattern, inverted. VoidLink showed a capable framework undermined by amateur OPSEC. The FortiGate campaign shows amateur-grade tooling succeeding because it didn't need to be sophisticated. When 600 doors are unlocked, you don't need a master key.
That's the part the industry needs to sit with. AI didn't elevate this attacker to a new capability tier. AI just let them exploit existing failures faster than those failures could be remediated. The threat actor's skill ceiling didn't change. The speed at which they could reach it did.
Three Fortinet Incidents in Three Months
What makes this particularly painful is the context. This is the third major Fortinet security event in roughly 90 days.
In December 2025, Arctic Wolf began observing active exploitation of CVE-2025-59718 and CVE-2025-59719, critical SAML SSO bypass vulnerabilities that allowed unauthenticated attackers to gain admin access via crafted SAML messages. I covered this in detail: The Fortinet SSO Breach Proves Perimeter Security Is Dead. The attacks continued even after patching, with Fortinet acknowledging a "new attack path" that bypassed the fix.
In January 2026, CISA issued guidance on CVE-2026-24858, another critical authentication bypass via FortiCloud SSO that was being exploited as a zero-day.
Now, in February 2026, Amazon reveals that an AI-assisted attacker compromised 600+ devices using nothing more sophisticated than exposed interfaces and weak passwords.
Each incident targets the same architecture: FortiGate devices with internet-facing management interfaces. Each succeeds for the same fundamental reason: organizations haven't implemented basic access controls. The specific vulnerability changes. The underlying failure doesn't.
The Bomb Disposal Lesson
In Navy EOD, there's no concept of "getting around to" following procedure. When you're working with explosives, every shortcut, every deferred maintenance check, every "we'll fix it next quarter" conversation has an immediate, non-negotiable consequence. The threat environment doesn't wait for your remediation timeline.
Cybersecurity has always preached this urgency without actually experiencing it. Patching cycles stretch to 60 days. MFA rollouts take quarters. Exposed management interfaces sit on the to-do list behind feature releases and cost optimization initiatives. The risk was always theoretical until it wasn't.
AI has collapsed that gap between theoretical and actual. When a single individual with mediocre skills and a ChatGPT subscription can compromise 600 firewalls in 35 days, the window for "eventually" has closed. The threat environment now moves at AI speed, and every exposed management interface, every reused password, every device without MFA is a live vulnerability with a countdown that just got dramatically shorter.
This isn't hypothetical. AI-powered cyberattacks surged 72% year-over-year in 2025, and the fastest intrusions are now reaching exfiltration in just 1.2 hours, down from 4.8 hours the prior year. A separate Sysdig analysis found attackers moving from initial access to full admin privileges in an AWS environment in 8 minutes. The acceleration isn't coming. It's here.
The Credential Problem Nobody Wants to Admit
The 600 FortiGate devices in this campaign weren't breached through some exotic technique. They were breached through credential-based access to management interfaces using commonly reused passwords. That means these organizations had, at minimum, two failures: they exposed their admin interfaces to the internet, and they didn't require MFA.
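Auditing for that reuse is not exotic. Assuming you can export credential fingerprints from each system in a comparable, unsalted form (NTLM hashes from AD are unsalted; FortiGate stores credentials differently, so in practice a password-audit tool does the normalization), the check is a set intersection. A minimal sketch with names of my own invention:

```python
from collections import defaultdict

def reuse_report(*exports):
    """Flag credential hashes shared across systems or accounts.

    exports: iterables of (system, account, credential_hash) tuples,
    where equal passwords produce equal hashes (unsalted, same algorithm).
    Returns {hash: set of (system, account)} for every hash guarding
    more than one door.
    """
    by_hash = defaultdict(set)
    for export in exports:
        for system, account, h in export:
            by_hash[h].add((system, account))
    return {h: owners for h, owners in by_hash.items() if len(owners) > 1}
```

Any non-empty result is the exact condition this campaign exploited: one password opening both the firewall and the directory behind it.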
This connects to a systemic issue I've written about before. When 149 million stolen credentials were found sitting in an unprotected database, growing in real time, it demonstrated the industrial-scale credential harvesting pipeline feeding operations exactly like this one. Stolen and reused credentials are the fuel, and AI is the engine that burns through them faster than humans can respond.
The Mora_001 threat actor, operating during the same timeframe, exploited the same Fortinet vulnerability class with a different approach: deploying SuperBlack ransomware tied to the LockBit ecosystem. Different actors, same attack surface, same fundamental failures enabling access.
For small and mid-size organizations, this is especially dangerous. As I noted in my analysis of why SMBs are now primary targets, these organizations often lack dedicated security teams and rely on edge devices as their primary defensive layer. When that layer is compromised through default credentials, there's nothing behind it.
What This Actually Demands
Amazon's report includes two IOCs (212.11.64.250 and 185.196.11.225) and a set of defensive recommendations that read like a basic security hygiene checklist. That's not a criticism of Amazon. It's an indictment of the state of the industry. The defensive recommendations for an AI-assisted campaign that breached 600 devices in 55 countries are:
- Don't expose management interfaces to the internet. This is not new advice. It's been best practice for decades.
- Implement MFA on all administrative and VPN access. Also not new. Also apparently still optional for 600+ organizations running critical network infrastructure.
- Rotate credentials and audit password reuse between FortiGate and Active Directory. The campaign succeeded because the same passwords were used across systems. That's a policy failure, not a technology gap.
- Monitor for behavioral anomalies rather than signatures. Because the attacker used legitimate open-source tools (Meterpreter, mimikatz, PowerShell), signature-based detection was ineffective. Watch for unusual VPN authentication patterns, unexpected AD replication events, and lateral movement from VPN address pools.
- Isolate backup servers and patch Veeam against credential extraction vulnerabilities. The explicit targeting of backup infrastructure signals ransomware intent. If your backups are accessible from the same network segment as your compromised devices, they're already at risk.
None of this is advanced. None of it requires new technology. It requires organizations to actually do the things they've been told to do for years, before the threat environment makes the decision for them.
The Real Lesson
The cybersecurity industry has spent the last year debating whether AI will create superhuman hackers. Amazon just provided the answer: it won't. What it will do is far more dangerous. It will make the existing pool of unsophisticated attackers dramatically more effective against the existing pool of organizations that haven't mastered the basics.
That's not a technology problem. It's a discipline problem. And unlike technology problems, you can't solve it by buying another product.
The 600 organizations in this campaign didn't need better firewalls. They needed to configure the ones they had. The threat actor didn't need better AI. They just needed the AI to check more doors. When the gap between attacker capability and defender hygiene is this wide, AI doesn't change the game. It just speeds up the clock on a loss that was already inevitable.
The window for "eventually" is closed. AI made sure of that.