The world's largest identity company just built a product around a problem this blog has been covering for months: nobody knows who their AI agents are, what they can access, or what they're doing.
Okta announced "Okta for AI Agents" last week, a platform launching April 30 that registers AI agents as non-human identities, discovers shadow agents employees have connected without authorization, and provides a "kill switch" to revoke all access if an agent goes rogue. Their president of products and technology, Ric Smith, put it plainly: "AI agents are evolving faster than any software before them, making traditional security models obsolete."
He's right. But the numbers suggest Okta's platform is arriving into a crisis that's already deeper than most organizations realize.
The Confidence Paradox
The Gravitee State of AI Agent Security 2026 report, surveying over 900 executives and technical practitioners, uncovered a pattern that should unsettle every security leader: 82% of executives believe their existing policies adequately protect them from AI agent risks. In the same survey, 88% of organizations confirmed or suspected AI agent security incidents in the past year.
Read those numbers together. Nearly nine in ten organizations have been hit, and more than four in five also think they're fine.
This isn't a technology gap. It's a perception gap. And it's the same pattern I identified in the AI safety implementation gap, where 82% of executives said secure AI was essential while only 24% were actually securing their AI projects. The confidence paradox isn't new; it's just gotten worse. I wrote about the downstream consequences over a year ago in my analysis of shadow AI and data exfiltration risk, where 93% of employees were already using unauthorized AI tools. The difference now is that those unauthorized tools aren't just chatbots answering questions. They're agents that take actions, browse systems, and execute code with enterprise credentials.
The Gravitee data quantifies how bad it's gotten:
- Only 14.4% of AI agents go live with full security and IT approval
- More than 50% of deployed agents operate without security oversight or logging
- Only 47.1% of an organization's agents are actively monitored
- Healthcare leads with a 92.7% incident rate
When I described how AI agents become insider threats, the CyberArk research showed that an agent's entitlements define the blast radius of an attack. Okta's new discovery features in their Identity Security Posture Management (ISPM) module are designed to map exactly that blast radius. The question is whether organizations will use them, given that four out of five executives already believe they have the problem handled.
The 22% Problem
Here's the stat that captures the entire crisis in a single number: only 21.9% of organizations treat AI agents as independent, identity-bearing entities. The rest manage them through shared service accounts, hardcoded credentials, or not at all.
This is the core thesis I've been building across multiple posts. When I wrote about the YOLO problem, the pattern was clear: developers hand agents the keys to their kingdoms because the convenience is too compelling to resist. Sam Altman lasted two hours before disabling his own security restrictions. When 150,000 agents joined a social network and a single misconfiguration exposed every API key, the identity problem scaled from individual servers to entire networks overnight.
The Gravitee data puts precise numbers on what those anecdotes illustrated:
- 45.6% of organizations rely on shared API keys for agent-to-agent authentication
- 27.2% use custom, hardcoded logic for authorization
- 25.5% of deployed agents can create and task other agents
That last number is the one that should keep security leaders up at night. A quarter of deployed agents can spawn new agents. Each spawned agent potentially inherits or escalates the parent's privileges. None of these child agents go through onboarding. None get registered in a directory. None have defined lifecycles.
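The minimum control for agent-spawned agents is scope attenuation: a child should never hold a permission its parent lacks, and every child should land in a registry with a traceable lineage. A minimal sketch of that idea, assuming a hypothetical in-memory registry (the names `AgentRegistry`, `spawn`, and `lineage` are illustrative, not any vendor's API):

```python
# Hypothetical registry enforcing scope attenuation on spawned agents.
# Every agent gets its own identity; a child's scopes must be a subset
# of its parent's, so delegation can only narrow privileges, never
# escalate them.
import uuid


class AgentRegistry:
    def __init__(self):
        # agent_id -> {"scopes": set of permission strings, "parent": id or None}
        self.agents = {}

    def register_root(self, scopes):
        agent_id = str(uuid.uuid4())
        self.agents[agent_id] = {"scopes": set(scopes), "parent": None}
        return agent_id

    def spawn(self, parent_id, requested_scopes):
        parent = self.agents[parent_id]
        requested = set(requested_scopes)
        escalation = requested - parent["scopes"]
        if escalation:
            # Refuse any child that asks for more than its parent holds.
            raise PermissionError(f"child requests scopes parent lacks: {escalation}")
        child_id = str(uuid.uuid4())
        self.agents[child_id] = {"scopes": requested, "parent": parent_id}
        return child_id

    def lineage(self, agent_id):
        # Walk the delegation chain back to the root for audit trails.
        chain = []
        while agent_id is not None:
            chain.append(agent_id)
            agent_id = self.agents[agent_id]["parent"]
        return chain
```

Nothing here is sophisticated; the point is that without even this much, a quarter of deployed agents are minting new identities that no policy ever evaluates.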
This is what The Hacker News described as "identity dark matter": powerful entities that are invisible to traditional governance, operating at machine speed, gravitating toward whatever authentication path offers the least resistance. Stale tokens. Orphaned service accounts. Over-scoped API keys. The same attack surface that human identity management was supposed to close, now being exploited by autonomous systems that move faster than any SOC analyst can track.
DataRobot's chief product officer, Venky Veeraraghavan, framed it precisely in the Okta announcement: "If an AI agent has the power to act, it must have an identity." The problem is that 78% of organizations are letting agents act without one.
Okta's Answer: IAM for Agents
Okta's platform addresses the problem on three axes:
Discovery: Using ISPM to detect shadow AI agents, map their connections, and surface hidden identity risks. This is the "where are my agents?" question, and it's arguably the most urgent. You can't secure agents you don't know exist.
Access control: The Agent Gateway acts as a centralized control plane, enforcing least-privilege authorization, managing privileged credentials with automatic rotation, and integrating with Okta's 8,200+ application network. When I wrote about GitHub building a control tower for AI agents, I noted that centralized governance was exactly what the industry needed. Okta is now building the identity layer that GitHub's Agent HQ assumed would exist.
Revocation: Universal Logout serves as a kill switch, instantly revoking all access tokens if an agent deviates from intended behavior. System logs capture tool usage and authorization decisions, feeding into SIEM platforms for audit trails.
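Conceptually, a kill switch of this kind is just a revocation check sitting in front of every tool call. A rough sketch of the pattern, not Okta's actual API (the `AgentGateway` class, its token store, and `revoke_all` are all hypothetical):

```python
# Conceptual sketch of a gateway-enforced kill switch: every agent
# request carries a token, the gateway checks it against a revocation
# set before dispatch, and revoke_all() cuts off every live token for
# a given agent at once.
import secrets


class AgentGateway:
    def __init__(self):
        self.tokens = {}     # token -> agent_id
        self.revoked = set()

    def issue_token(self, agent_id):
        token = secrets.token_urlsafe(16)
        self.tokens[token] = agent_id
        return token

    def revoke_all(self, agent_id):
        # The "kill switch": invalidate every live token for this agent.
        for token, owner in self.tokens.items():
            if owner == agent_id:
                self.revoked.add(token)

    def call_tool(self, token, tool, payload):
        if token not in self.tokens or token in self.revoked:
            raise PermissionError("token revoked or unknown")
        # A real gateway would also log the call here for the SIEM trail.
        return f"{tool} invoked by {self.tokens[token]}"
```

The sketch also makes the limitation visible: revocation only takes effect on the *next* request, which is exactly why a kill switch is reactive rather than preventive.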
On paper, this is the most comprehensive agent identity solution any major vendor has shipped. Okta has the integration network, the enterprise relationships, and the IAM expertise to make it work. Every other major security vendor, including CyberArk, SailPoint, and WorkOS, is racing to ship similar capabilities. Gartner has already carved out an entirely new research category, publishing a "Market Guide for Guardian Agents" that acknowledges agent security requires purpose-built solutions.
But there's a fundamental tension in applying centralized identity management to decentralized autonomous systems.
The Harder Problem Okta Can't Solve
Traditional IAM works because human identities are predictable. Employees are onboarded, assigned roles, granted permissions through defined workflows, and eventually offboarded. The identity has a lifecycle that maps to a person's employment.
AI agents don't work like this. They're ephemeral. They're non-deterministic; their actions vary unpredictably between identical task invocations. They spawn child agents dynamically. They operate across organizational boundaries. As Futuriom's analysis put it, "agents wander" through systems, and when multiplied across thousands of agents, this creates breaches through permission gaps that no static policy can anticipate.
Okta's Universal Logout is a kill switch, and kill switches are reactive by design. If an agent with write access to a financial system executes 10,000 transactions at machine speed before detection, the damage is done by the time you pull the plug. The rollback and forensic capabilities that scenario demands aren't part of what Okta announced.
The recursive trust problem is even harder. I explored this in depth in my post on who validates the validator, where one compromised agent poisoned 87% of downstream decisions in four hours. When 25.5% of agents can spawn other agents, you're not just managing a directory of identities. You're managing a trust chain where every link can create new links, each potentially inheriting or escalating privileges that the original identity framework never anticipated.
Then there's the MCP monoculture question. Okta is building its Agent Gateway around Model Context Protocol as the integration standard, providing a virtual MCP server and registry. I documented how MCP reproduced 25 years of security mistakes in 10 months, including tool poisoning, supply chain attacks, and fundamental authentication gaps. If MCP becomes the universal agent-to-tool interface and a vulnerability is found in the protocol itself, every enterprise using it becomes simultaneously exploitable. We've seen this pattern before with Log4j: ubiquitous adoption of a single component creating systemic risk across the entire ecosystem.
What This Means
Okta's platform is a necessary first step, not a solution. It solves the discovery problem, which is genuinely critical. You can't govern what you can't see, and most organizations can't see their agents. The IANS Faculty ranked "Identity Assurance for an AI World" as the second-highest CISO priority for 2026, scoring 4.46 out of 5. The demand is there.
But treating AI agents as another category of managed identity, sitting alongside employees and service accounts in Universal Directory, assumes these entities will behave like the identities IAM was built to manage. They won't. Agents don't follow org charts. They don't respect network boundaries. They don't have predictable session patterns. And the ten production incidents we've documented across six AI coding tools didn't happen because the agents lacked identities. They happened because autonomous systems made catastrophic decisions that no identity framework would have flagged as unauthorized.
The real question isn't whether AI agents need identities. DataRobot's Veeraraghavan is right: if it can act, it needs an identity. The question is whether identity alone is sufficient for entities that can reason, improvise, and cause damage while operating entirely within their granted permissions.
For security leaders evaluating Okta's platform, three immediate priorities:
Audit your agent inventory now, not in April. Don't wait for Okta's GA date. The 50%+ of agents operating without logging are accumulating risk every day they remain invisible. Start with your cloud provider's service principal audit and work outward.
Treat shared API keys as critical vulnerabilities. If 45.6% of your agents authenticate through shared keys, any single compromise cascades through every agent sharing those credentials. Rotate them, scope them, and establish per-agent credential management even if you're doing it manually until better tooling arrives.
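Even a manual version of per-agent credential management is straightforward: issue each agent its own scoped secret and rotate on a schedule, so a single compromise burns one agent's access rather than everyone's. A minimal sketch under those assumptions (the `CredentialStore` class and its method names are illustrative, not a vendor API):

```python
# Rough sketch of manual per-agent credential management: one scoped
# secret per agent instead of a shared key, with rotation that
# invalidates the old secret for that agent only.
import secrets


class CredentialStore:
    def __init__(self):
        # agent_id -> {"secret": str, "scopes": set of permission strings}
        self.creds = {}

    def issue(self, agent_id, scopes):
        secret = secrets.token_urlsafe(32)
        self.creds[agent_id] = {"secret": secret, "scopes": set(scopes)}
        return secret

    def rotate(self, agent_id):
        # Rotation replaces only this agent's secret; scopes are
        # unchanged and no other agent is affected.
        return self.issue(agent_id, self.creds[agent_id]["scopes"])

    def authorize(self, agent_id, secret, scope):
        cred = self.creds.get(agent_id)
        return (cred is not None
                and secrets.compare_digest(cred["secret"], secret)
                and scope in cred["scopes"])
```

Contrast this with the shared-key status quo: with one key across 45.6% of agents, `rotate` is an all-hands outage; per-agent, it's routine hygiene.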
Plan for agent-spawned agents. If a quarter of your deployed agents can create child agents, your identity perimeter is already larger than you think. Map delegation chains. Define policies for inherited permissions. And assume that your current agent count is an undercount.
Okta has validated that agent identity is the foundational problem. Now the industry needs to grapple with whether the foundational solution is also sufficient, or whether we're applying a 2010 framework to a 2026 problem.