I just got back from RSAC 2026 in San Francisco, and one thing was impossible to miss: deepfake detection was everywhere. Booth after booth across Moscone Center featured live demos of AI catching AI. Vendors showed faces morphing in real time, liveness checks rejecting synthetic video, and detection engines flagging injected camera feeds with confident red overlays.
The demos were impressive. The problem is that demos are not production.
The Numbers That Should Alarm You
The identity fraud landscape has shifted dramatically, and the statistics presented across multiple RSAC sessions paint a grim picture. US consumers lost $47 billion to identity fraud in 2024, according to Javelin Strategy & Research. Globally, losses exceeded $50 billion in 2025. The Deloitte Center for Financial Services predicts generative AI fraud losses alone will hit $40 billion by 2027.
But the trajectory data is what stood out. iProov reported a 783% increase in injection attacks in 2024. Jumio followed that with an 88% year-on-year rise in 2025. Deepfake usage in biometric fraud surged 58%. A 2025 Gartner survey found 62% of organizations experienced a deepfake attack in the past 12 months.
These are not projections. They already happened.
The $15 Problem
Here is the cost asymmetry that nobody on the expo floor was eager to discuss. According to Biometric Update's January 2026 report, creating an AI-generated fake ID costs as little as $15 and takes about 30 minutes. A ready-to-use synthetic identity sells for up to $15 on underground markets. Deepfake image creation services range from $10 to $50 per image.
On the other side of that equation, enterprise-grade liveness detection and deepfake defense platforms cost hundreds of thousands to deploy, integrate, and maintain. This is one of the most extreme attacker-defender cost asymmetries in cybersecurity, and I did not see a single vendor addressing it directly.
The Deepfake-as-a-Service economy is real. Attackers are not building custom models; they are buying turnkey services. And they are stacking techniques: a high-quality deepfake gets replayed, a replay gets injected into a camera feed, and that injected stream gets automated at scale. As Ricardo Amper, CEO of Incode, put it at the conference: "The right question is not only 'Does this face look real?' It is 'Can we trust this entire session end-to-end?'"
The Booth Demo Problem
This is what bothered me most walking the floor. Every detection vendor demonstrated their product under controlled conditions: well-lit faces, high-resolution video, clean camera feeds. The demos worked beautifully.
Then there is the Purdue data. Purdue University's Political Deepfakes Incident Database (PDID) benchmark evaluated commercial deepfake detectors on real-world media scraped from social platforms: heavily compressed, sub-720p, short, mobile-first clips, the kind of media that actually circulates in the wild. The result: detector performance varies "dramatically" once inputs look like production conditions rather than lab conditions.
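The lab-versus-production gap is easy to reason about as a harness that degrades inputs before scoring them. The sketch below is purely illustrative: `detect` is a toy stand-in, and the quality multipliers are made-up numbers, not measurements from any benchmark. The point is structural, not quantitative: compression and downscaling destroy exactly the high-frequency artifacts many detectors key on.

```python
# Illustrative harness: score the same fake under booth conditions and
# production-like conditions. `detect` is a hypothetical stand-in, and
# the multipliers are invented for illustration only.

def detect(frame_quality: float, resolution: int) -> float:
    """Toy detector: confidence that a frame is synthetic.
    Compression artifacts and downscaling erode the high-frequency
    cues the score depends on."""
    signal = 0.9                           # strong signal under ideal capture
    signal *= frame_quality                # 1.0 = pristine, 0.3 = social-media re-encode
    signal *= min(resolution / 1080, 1.0)  # sub-HD inputs lose detectable detail
    return signal

LAB        = {"frame_quality": 1.0, "resolution": 1080}
PRODUCTION = {"frame_quality": 0.3, "resolution": 480}  # sub-720p, re-encoded

lab_score  = detect(**LAB)         # flags confidently at the booth
prod_score = detect(**PRODUCTION)  # the same fake sails through in the wild
```

A real evaluation would re-encode actual media (as the PDID benchmark does) rather than multiply scalars, but the shape of the failure is the same: the detector did not get worse, the inputs did.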
X-PHY won the Global InfoSec Award for "Innovative AI Safety and Security" at the conference. 1Kosmos won Most Innovative Workforce Identity Verification Solution. Reality Defender won the RSAC Innovation Sandbox the year before for real-time deepfake detection. Awards are not the problem. The problem is the gap between what works at booth 5256 and what works when an attacker sends a compressed, re-encoded video through a virtual camera on a rooted Android emulator.
The Identity Supply Chain Nobody Mentions
There is a second-order problem that I did not hear discussed in any session or see addressed at any booth. When a deepfake passes KYC onboarding at a financial institution, that verified identity becomes a trust anchor. Other services downstream accept it through Open Banking integrations, digital identity wallets, and cross-platform verification. A single upstream verification failure cascades through the entire identity ecosystem.
This is the identity supply chain problem. The IDMerit breach that exposed a billion identity records already showed how verification systems become honeypots when they concentrate trust. We talk about software supply chain attacks all the time. We have SBOMs, dependency scanning, and signed builds. But there is no equivalent framework for the chain of trust that starts with "this person passed our KYC check." One broken verification at the top poisons every system that trusts it downstream.
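The cascade is easy to see in miniature. The sketch below uses hypothetical names and a deliberately naive trust model, because that naivety is the point: downstream services inherit the upstream anchor's claim and never re-verify, and there is no standard protocol for propagating a revocation when the anchor turns out to have been fooled.

```python
# Sketch of the identity trust chain problem. All names are hypothetical;
# no real cross-service framework like this exists today.

from dataclasses import dataclass, field

@dataclass
class Identity:
    subject: str
    verified_by: str      # the upstream KYC provider that anchored trust
    revoked: bool = False

@dataclass
class Service:
    name: str
    trusted_anchors: set = field(default_factory=set)

    def accepts(self, ident: Identity) -> bool:
        # Downstream services never re-verify; they inherit the anchor's claim.
        return ident.verified_by in self.trusted_anchors and not ident.revoked

anchor = "bank-kyc"
synthetic = Identity("jane.doe", verified_by=anchor)  # a deepfake passed onboarding

downstream = [Service(n, {anchor}) for n in ("wallet", "lender", "exchange")]
accepted_everywhere = all(s.accepts(synthetic) for s in downstream)  # one failure, three victims

synthetic.revoked = True  # in practice there is no channel to propagate this
accepted_after_revocation = any(s.accepts(synthetic) for s in downstream)
```

Software supply chains solve the analogous problem with signed artifacts and revocable certificates; the identity chain has no equivalent of a CRL for "this KYC pass was synthetic."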
The Arup deepfake incident should have made this obvious. Deepfakes mimicking senior management facilitated a $25 million transfer. That was not a technology failure; it was a trust chain failure. The system trusted the video call because it trusted the identity, and the identity was synthetic. We have already seen voice cloning reduce CEO identity fraud to a three-second problem. Deepfakes are the visual equivalent, and the identity supply chain is not built to handle either.
Two Parallel Tracks That Need to Converge
Something else struck me at the conference. RSAC 2026 had two parallel conversations happening in complete isolation. On one track, vendors were presenting deepfake defenses for human identity verification. On the other, companies like Yubico and Delinea were announcing solutions for AI agent identity governance, including Role Delegation Tokens that require a physical YubiKey tap for high-consequence AI agent actions.
These two problems are converging fast. As I wrote when Okta confirmed that AI agents are an identity crisis, the line between human and machine identity is already blurring. When AI agents can impersonate humans and humans can use AI to impersonate other humans, that boundary collapses entirely. As Albert Biketi, Yubico's Chief Product & Technology Officer, put it: "The hard problem in agentic AI security is accountability: can you prove a specific human approved a high-consequence action?"
The hardware attestation approach is one of the few solutions I saw that bridges both worlds. But it was presented in an enterprise authentication context, completely disconnected from the deepfake identity verification conversation happening 200 feet away on the expo floor.
What Needs to Change
Walking out of Moscone, I was clear on three things.
First, the industry needs to distinguish verification from authentication. Most RSAC coverage treats "identity verification" as a single problem. It is not. Onboarding verification, where you prove your identity once against a government document, is far more vulnerable than continuous biometric authentication. Hardening your onboarding flow is more urgent than hardening your daily login, but the vendor market is not reflecting that priority.
Second, detection alone is a losing strategy. When the attacker's cost is $15 and the defender's cost is six figures, detection is an arms race with unfavorable economics. The WEF Cybercrime Atlas reviewed 17 deepfake programs and found they could undermine facial-recognition algorithms used in KYC verification. The answer is not better detectors; it is end-to-end session integrity that makes detection one layer of many, not the entire defense.
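"One layer of many" has a concrete shape: the session fails if any independent signal fails, so fooling the detector alone is not enough. The sketch below is a hedged illustration with invented signal names and thresholds, not any vendor's scoring model.

```python
# Illustrative session-integrity gate: the deepfake detector is one check
# among several independent ones. Signal names and the 0.5 threshold are
# assumptions made up for this sketch.

def session_trust(signals: dict) -> bool:
    checks = {
        "deepfake_score":    lambda v: v < 0.5,    # detector is just one layer
        "device_attested":   lambda v: v is True,  # camera feed not virtual/injected
        "doc_cryptographic": lambda v: v is True,  # e.g. NFC chip read, not a photo
        "velocity_ok":       lambda v: v is True,  # not one of 500 automated sessions
    }
    return all(check(signals[name]) for name, check in checks.items())

# A convincing deepfake beats the detector but fails device attestation:
injected_session_passes = session_trust({
    "deepfake_score": 0.1,      # detector fooled
    "device_attested": False,   # injected virtual camera caught here
    "doc_cryptographic": True,
    "velocity_ok": True,
})
```

The economics change too: an attacker now has to defeat every layer for $15, not just the one the booth demo showcased.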
Third, regulators are fighting the last war. The EU AI Act addresses deepfakes through disclosure requirements. KYC regulations under AML directives and eIDAS still fundamentally assume a human is present during verification. There is no regulatory framework for what happens when the "person" passing KYC never existed. Until that changes, the compliance checkbox and the actual security posture will keep diverging.
The Takeaway
RSAC 2026 proved that the industry recognizes deepfakes as a first-tier threat. The demos were polished. The awards were deserved. The investment is real.
But recognition is not the same as a solution. The cost math does not work. The detectors do not perform in production the way they perform in demos. The identity trust chain has no supply chain security framework. And the human identity and machine identity problems are being solved in parallel by teams that are not talking to each other.
The deepfake problem is not a detection problem. It is an architecture problem. And until the industry treats it that way, the $15 fake ID will keep winning.