Three weeks into the RSAC 2026 aftermath, and the stat I can't stop thinking about isn't from a keynote or a vendor booth. It's from Microsoft's threat intelligence data, presented across multiple sessions at Moscone Center: the average time from an attacker's initial access to lateral movement handoff has collapsed from eight hours in 2022 to 22 seconds in 2025.
Twenty-two seconds. That's less time than it takes most SOC teams to triage an alert.
Shlomo Kramer, the founder of Cato Networks and co-founder of Check Point Software, framed it on CNN: "The agentic attackers are coming. This is a watershed event in the history of cybersecurity." At RSAC, Kevin Mandia called it "a perfect storm for offense over the next year or two." These aren't alarmists. Kramer built the first commercial firewall. Mandia built Mandiant. When they say the game has changed, the game has changed.
The Mythos Signal
Days before RSAC kicked off, Fortune reported that a misconfigured data store had exposed a draft Anthropic blog post about their forthcoming model, Mythos. The draft described Mythos as "currently far ahead of any other AI model in cyber capabilities" and warned it "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."
Anthropic's response was telling. Rather than a standard release, they're restricting early access to organizations focused on cyber defense, giving them a head start to harden systems "against the impending wave of AI-driven exploits." They're also privately briefing government officials on the potential for large-scale AI-enabled attacks.
This isn't a company overhyping a product launch. This is a company scared of what it built.
And Mythos isn't the only signal. OpenAI warned in December that its upcoming models are likely to pose a "high" cybersecurity risk, the second-most-severe classification in its Preparedness Framework. Their models' scores on capture-the-flag exercises jumped sharply between releases. As Kramer told CNN: "Behind Mythos is the next OpenAI model, and the next Google Gemini, and a few months behind them are the open-source Chinese models."
The capability escalation isn't slowing down. It's cascading.
The Speed Problem Nobody's Solving
The Microsoft data presented at RSAC tells a story that goes beyond any single model. AI tools are now embedded across the entire attack lifecycle, from reconnaissance through post-compromise operations. Generative AI is driving phishing click-through rates 450% higher than traditional campaigns. Industrialized adversary-in-the-middle kits like Tycoon2FA, linked to approximately 100,000 compromised organizations, have scaled phishing to tens of millions of messages monthly.
But the speed compression is the metric that should reframe every security architecture conversation. Eight hours to 22 seconds isn't an incremental improvement. It's a category change. Your incident response playbook, the one built around "golden hour" containment windows, just became a historical artifact.
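The compression is easy to quantify from the figures above. A rough back-of-the-envelope sketch, using the 2022 and 2025 numbers from the Microsoft data and assuming a (generous, hypothetical) 15-minute human triage loop:

```python
# Back-of-the-envelope: how much has the attacker's window compressed?
# Figures from the Microsoft data cited above; the 15-minute triage
# loop is an illustrative assumption, not a measured number.

handoff_2022_s = 8 * 3600   # eight hours, in seconds
handoff_2025_s = 22         # twenty-two seconds

compression = handoff_2022_s / handoff_2025_s
print(f"Compression factor: ~{compression:,.0f}x")  # ~1,309x

triage_s = 15 * 60  # assumed human triage loop
print(f"Lateral movement completes ~{triage_s / handoff_2025_s:.0f}x "
      "faster than a 15-minute triage")  # ~41x
```

Roughly a 1,300-fold compression: any response loop measured in minutes, let alone hours, now finishes long after the handoff does.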
This isn't theoretical. Anthropic's own safety evaluations already showed Claude Opus 4.6 discovering zero-days autonomously, earning the company's first ASL-3 safety classification. Mythos is reportedly far beyond that.
The SANS Institute reinforced this at RSAC with a warning that landed harder than any vendor pitch: AI systems can now "identify vulnerabilities and generate exploits at scale, potentially producing hundreds of zero-day exploits every week." Joshua Wright, SANS Faculty Fellow, pointed to the dependency problem: even a commonly used tool like 7-Zip contains 300 unique dependencies, each representing a potential attack vector. AI doesn't just find vulnerabilities faster. It finds them across attack surfaces that humans simply cannot map at the same speed.
Mandia explained the math at RSAC: "Because of the asymmetry in the cyber domain, where one person on offense can create work for millions of defenders, speed leverages that asymmetry."
Speed was always the attacker's advantage. AI just made it insurmountable at human scale. As Mandia put it: "The scale and scope and total recall of an AI agent compromising you and swarming you is not humanly comprehensible."
The Democratization Nobody's Discussing
Here's what got almost no stage time at RSAC but may matter the most: AI has eliminated the skill barrier for cyberattacks.
We've already seen a single developer build a multi-cloud malware framework in seven days using an AI coding agent, and AI-native ransomware operating full kill chains at $0.70 per campaign. Those were early signals. The incidents now arriving are worse.
In January, a Russian-speaking cybercriminal used multiple AI tools, including Anthropic's Claude and China's DeepSeek, to compromise more than 600 devices running a widely used firewall across 55 countries. The key detail, reported by CNN citing AWS security research: the attacker had limited technical proficiency. In February, a separate hacker used Claude in attacks on Mexican government entities, resulting in the theft of sensitive tax and voter data.
This is the pattern I've been tracking since writing about AI giving mediocre attackers nation-state capabilities. The threat model has fundamentally shifted. You no longer need years of exploit development experience to run a sophisticated campaign. You need an AI chatbot and a target list. As Alex Stamos, CSO at Corridor and former Facebook CSO, put it at RSAC: "You're going to have every 19-year-old in St. Petersburg with the same capability."
The Darktrace State of AI Cybersecurity 2026 report, surveying over 1,500 security leaders, confirms that organizations feel this shift: 73% report that AI-powered threats are already having a significant impact on their organizations, and a full 92% agree these threats are forcing significant defense upgrades. But here's the gap: only 14% of organizations allow their AI to remediate independently, with no human oversight.
Read that again. Ninety-two percent of defenders acknowledge that the threat demands a fundamentally different response, yet only 14% are letting their defensive AI operate at the speed required to match it.
The agentic AI identity problem I wrote about months ago has fully materialized, but on the attacker side first. Offensive AI agents are operating autonomously, chaining exploits, pivoting through networks. Defensive AI is still waiting for a human to click "approve."
What Defenders Actually Need
The vendor floor at RSAC was packed with AI defense products. But the fundamental mismatch isn't a tooling problem. It's an architectural one.
When your attacker moves from access to handoff in 22 seconds, your security architecture needs to assume that by the time you detect an intrusion, lateral movement has already happened. Three things need to change now.
Microsegmentation is no longer optional. If you're still running flat networks with east-west traffic flowing freely, a 22-second handoff means your entire environment is compromised before your first alert fires.
Automated containment must be pre-authorized. The 86% of organizations that still require human approval for AI remediation are choosing thoroughness over survival. Pre-authorize your defensive AI to isolate compromised segments immediately. You can investigate later. You can't un-exfiltrate data.
Assume your perimeter controls are already bypassed. With AI-generated phishing hitting 450% higher click-through rates and industrialized kits compromising organizations at scale, the question isn't whether someone clicks. It's what happens in the 22 seconds after they do.
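To make the second point concrete, here is a minimal Python sketch of what pre-authorized containment logic can look like: isolation proceeds immediately when detection confidence clears a pre-approved threshold, and human review happens after the fact rather than in the critical path. Everything here is hypothetical for illustration (`ContainmentPolicy`, `isolate_segment`, the 0.85 threshold); it is not any vendor's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of pre-authorized containment: isolate first,
# investigate after. None of these names refer to a real product.

@dataclass
class ContainmentPolicy:
    # Detections at or above this confidence are pre-authorized for
    # automatic isolation -- no human approval in the critical path.
    auto_isolate_threshold: float = 0.85
    audit_log: list = field(default_factory=list)

    def handle_detection(self, segment: str, confidence: float) -> str:
        if confidence >= self.auto_isolate_threshold:
            action = self.isolate_segment(segment)
        else:
            # Low-confidence signals still reach an analyst; they just
            # never block the pre-authorized high-confidence path.
            action = f"queued {segment} for analyst review"
        # Every decision is logged so the post-hoc review has a trail.
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), segment, confidence, action)
        )
        return action

    def isolate_segment(self, segment: str) -> str:
        # In practice this would push a deny-all rule to the
        # microsegmentation layer (firewall, NSG, NetworkPolicy, etc.).
        return f"isolated {segment}"

policy = ContainmentPolicy()
print(policy.handle_detection("payments-vlan", confidence=0.93))
# -> isolated payments-vlan
print(policy.handle_detection("dev-vlan", confidence=0.40))
# -> queued dev-vlan for analyst review
```

The design choice is the whole point: the threshold is negotiated and approved in advance, so the machine-speed decision at detection time requires no meeting, no ticket, and no click.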
As I noted in our RSAC coverage on post-quantum cryptography, the pattern across every major RSAC theme this year was consistent: the threats have deadlines, and most organizations haven't started. The same was true for deepfake detection economics. AI-powered attacks aren't a 2028 problem. They're a right-now problem with a rapidly closing response window.
The Bottom Line
RSAC 2026 made one thing clear: the security models we built for human-speed threats don't work against machine-speed attacks. The 22-second window isn't a trend to monitor. It's a forcing function that demands architectural change today.
And here's the part that should keep you up at night: Morgan Adamski, former Executive Director of U.S. Cyber Command, told CyberScoop at RSAC she believes we're seeing "less than 50 percent of the AI capability from modern nation-states right now. They're not pressing. Nobody wants to be the first one to open that door."
The watershed moment Kramer described isn't coming. The data shows it has already arrived, and the actors with the most capability are still holding back. The only question left is whether your defense posture was built for eight-hour response times or 22-second ones.
If you're still planning to address this next quarter, you're already behind.