On April 7, 2026, Anthropic announced Mythos, a model that had independently identified thousands of zero-day vulnerabilities across every major operating system and web browser during red-team evaluation. Some of those flaws had gone undetected for decades, including a 27-year-old bug in OpenBSD. The existence of the model had leaked twelve days earlier, when Anthropic's CMS defaulted a draft blog post to public access and exposed the internal write-up. Rather than release Mythos after the leak, Anthropic restricted access to roughly fifty organizations under a program called Project Glasswing. The twelve named launch partners are AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and Anthropic itself. The full list of fifty is not public, and neither are the criteria Anthropic used to choose them.
The tech press covered the capability. Treasury Secretary Bessent and Fed Chair Powell convened an emergency meeting with bank CEOs to discuss the implications. What almost no one is writing about is the governance story: a single private company is now making national-security-grade access decisions about the most consequential defensive cybersecurity tool of this decade, and it is doing so with no published rubric, no appeal process, and no regulator.
What Project Glasswing Actually Is
Mythos is classified at ASL-4 under Anthropic's Responsible Scaling Policy, a threshold triggered when a model's capabilities become the primary source of risk in a national-security-relevant domain. Access requires formal agreements, personnel security clearances, ongoing usage auditing, and trust-and-safety reviews. Anthropic has committed $100 million in Mythos usage credits and $4 million in donations to open-source security organizations. An additional forty or so organizations have been admitted beyond the twelve launch partners; the full list is not public.
The disclosed screening criteria center on three filters. Invited organizations must demonstrate legitimate security use cases, operate critical infrastructure where zero-day patching serves the public good, and commit to responsible testing protocols. Categories named by Anthropic include major software vendors, cloud providers, infrastructure firms, security vendors, financial institutions, and open-source ecosystem stakeholders. Those descriptions are accurate, but they function as categories, not as a rubric. They describe who is eligible; they do not describe how ties are broken when two eligible organizations compete for one slot, or how an organization that believes it belongs in a category can contest its exclusion.
This is a meaningful escalation from the governance posture that appeared in Anthropic's earlier safety reports, where Claude Opus 4.6 was released broadly even after it demonstrated autonomous zero-day discovery at ASL-3. Mythos crossed a threshold that Anthropic believes warrants withholding public release entirely. The judgment itself is defensible. The process by which fifty organizations were named the exception is not documented anywhere a prospective defender can read.
The Procurement Is the Policy
Every prior attempt to restrict dangerous technology to approved users has been governed by law. The International Traffic in Arms Regulations publish the United States Munitions List, and ITAR determinations are contestable in federal court. The Export Administration Regulations publish the Commerce Control List under the same due-process scaffolding. FASCSA, the federal supply-chain risk framework the Pentagon used to designate Anthropic itself a supply-chain risk last year, has a statutory rubric that an affected vendor can at least read, even when the framework is applied unevenly. The CVE embargo process, while messier, operates under disclosure norms that any competent security researcher can learn and follow. When the Navy compartments access to dangerous capabilities under the "need to know" doctrine, the doctrine itself is public, the process for granting access is documented, and an inspector general exists to review it. I spent eight years working inside that framework, and the guardrails were not optional; they were the reason the framework had legitimacy.
Project Glasswing has none of that scaffolding. Anthropic is the author, the gatekeeper, and the appellate body. Its screening committee is not publicly constituted. The specific rubric used to rank applicants is not published. There is no formal mechanism for an excluded organization to request reconsideration, and no regulator to compel one. Forrester analysts have already called the consortium "a class system," and legal scholars have flagged the antitrust implications of its "AI Avengers" structure.
This is not an attack on Anthropic's judgment. The launch-partner list is defensible on its face, because the organizations on it carry disproportionate responsibility for global software infrastructure and have the in-house security capacity to handle what Mythos produces. The problem is not who Anthropic chose. The problem is that Anthropic, alone, is choosing, and its private-sector procurement decisions have become de facto cybersecurity policy for every enterprise whose threat model now includes adversaries with access to models far more capable than Claude Opus 4.6. This continues a pattern I wrote about last week, in which OpenAI and Anthropic both declined to patch near-identical agent-config vulnerabilities: the labs are using their private policy discretion to set liability boundaries that the rest of the market has to live with.
What This Means for the Excluded
The steelman for Glasswing is straightforward. Releasing a model that can autonomously generate exploits against every major operating system would be catastrophic, and limited rollout is the only responsible option. That argument is sound, and any reasonable frontier lab would reach the same conclusion. The question Glasswing's critics are asking is not whether rollout should be limited, but whether a private company should be the one drawing the perimeter.
For the roughly 99 percent of enterprises not on the list, the operational consequences are concrete. Attackers will eventually obtain Mythos-equivalent capabilities through leakage, exfiltration, or the next model release from a lab with weaker controls. This is not hypothetical; restricted capabilities do not stay restricted, as the Coruna government exploit kit demonstrated when it surfaced on 42,000 civilian iPhones less than a year after its initial government sale. One unauthorized-access incident through a Glasswing contractor has already been reported, less than two weeks after launch. The defender that is not on the Glasswing list is now operating against an offensive capability that some of its peers get to preview and harden against while it cannot, and the gap between those two postures compounds every week Mythos remains in exclusive circulation. This is the scenario FINRA's president was describing when he called for the SEC to stop collecting centralized investor data: defenders should assume the offensive capabilities will arrive first, and plan accordingly.
Excluded CISOs have three moves worth making this quarter. First, add Glasswing access as a vendor-diligence question when evaluating security and infrastructure vendors, since a vendor with Glasswing access will patch faster than one without it, and that patch latency now belongs in the risk calculation alongside fourth-party exposures like Mythos-adjacent capability leakage between agent platforms. Second, request a seat on whatever industry-specific access consortium emerges; UK banks are reportedly already being added through Treasury-level negotiations, and every regulated sector will need an equivalent. Third, pressure regulators to specify how Glasswing exclusion factors into supervisory examinations, since examiners cannot currently ask the question because the program did not exist when the examination manuals were written.
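One way to operationalize the first move is to make Glasswing access an explicit input to vendor risk scoring rather than an informal talking point. The sketch below is a minimal illustration of that idea; the field names, the 30-day baseline, and the 25 percent discount for Glasswing access are my assumptions for demonstration, not figures from Anthropic or any regulator.

```python
# Hypothetical sketch: folding Glasswing access into vendor due-diligence scoring.
# All weights here are illustrative assumptions, not a published rubric.

from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    has_glasswing_access: bool   # answer to the diligence question above
    median_patch_days: float     # observed latency on critical CVEs

def patch_latency_risk(v: Vendor, baseline_days: float = 30.0) -> float:
    """Crude risk multiplier: patching slower than the baseline raises risk;
    Glasswing access discounts it by an assumed 25 percent."""
    risk = v.median_patch_days / baseline_days
    if v.has_glasswing_access:
        risk *= 0.75
    return round(risk, 2)

vendors = [
    Vendor("vendor-a", has_glasswing_access=True, median_patch_days=14),
    Vendor("vendor-b", has_glasswing_access=False, median_patch_days=45),
]
for v in sorted(vendors, key=patch_latency_risk):
    print(v.name, patch_latency_risk(v))
```

The point is not the arithmetic, which any team would tune to its own threat model; it is that the diligence question produces a field you can score, trend, and defend to an examiner.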
The Governance Gap Is Permanent Until It Isn't
Glasswing is not going to stop. Anthropic has said it will expand access over time, and the next frontier lab with an equivalent capability will follow the same pattern, probably with a different list. Google is already in classified-deployment negotiations with the Pentagon, and OpenAI's Pentagon deal predates both. Within twelve months there will likely be three or four overlapping frontier-defense consortia, each with its own unpublished rubric, each governed by the private-sector procurement team of the lab that built it.
The policy vacuum is visible. The OCC, FFIEC, FTC, and SEC have all said nothing about how Glasswing-style exclusions should be audited, priced by insurers, or disclosed to shareholders. Congress held its first hearing on AI-enabled cyberattacks the week Mythos was announced, and frontier access was not on the agenda. The first regulator to publish a clear rubric for supervising two-tier AI defense markets will be the one that shapes the next decade of enterprise security policy. Until that happens, the fifty organizations on the Glasswing list are the policy, and the 22-second attack window is the clock the other 99 percent of the market has to beat without them.