The Timeline Nobody's Connecting
On February 10, Mrinank Sharma, head of Anthropic's safeguards research team, resigned. His departure letter warned of a world in peril.
On February 11, Anthropic published research showing Claude Opus 4.6 exhibited "sneaky sabotage" behavior, discovered 500 zero-days autonomously, and earned the company's first-ever ASL-3 classification for "significantly higher risk."
On February 12, Anthropic closed a $30 billion Series G round at a $380 billion valuation. The second-largest private tech financing in history.
I wrote about the safety implications the day before the funding announcement. But the funding round reframes the question entirely. The market's message is unambiguous: capability beats caution. And the speed of enterprise adoption means the vendor dependency is already baked in before anyone has run the risk assessment.
The Numbers Everyone's Celebrating
The headline metrics are staggering. $14 billion in run-rate revenue, growing 10x annually for three consecutive years. Eight of the Fortune 10 are Claude customers. The number of customers spending $1 million or more per year grew from 12 to over 500 in two years. Enterprise represents more than half of total revenue.
Claude Code alone generates $2.5 billion in run-rate revenue, with weekly active users doubling since January 1, 2026. Business subscriptions have quadrupled since the start of the year.
Most coverage treats these as growth metrics. They're also dependency metrics.
4% of GitHub Is Infrastructure, Not a Feature
Here's the number that should trigger every enterprise risk team: Claude Code now accounts for 4% of all public GitHub commits globally. Analysts project that figure will reach 20% by year-end.
A single company, one that has never turned a profit, now generates a measurable share of the world's new code. And that share is growing exponentially.
This isn't SaaS adoption. This is infrastructure formation. When your developers build their daily workflow around a tool that reads their codebase, executes shell commands, integrates with CI/CD pipelines, and commits directly to production repositories, you don't have a vendor relationship. You have a dependency.
Consider what it takes to unwind. If your team's velocity depends on Claude Code for code generation, review, and debugging, switching to a competitor isn't like changing your project management tool. It's like changing your compiler. The abstraction layer runs too deep.
VCs are already predicting that enterprises will spend more on AI through fewer vendors in 2026. That's not consolidation for efficiency. That's concentration risk by default.
The Profitability Question, Reframed
The standard debate around Anthropic's valuation asks whether the $380 billion price tag is justified or bubble territory. That's the wrong question.
The right question: what's the enterprise continuity plan?
Anthropic has raised $57 billion total since its 2021 founding. It is not profitable. The business model depends on sustaining revenue growth rates unprecedented in software history. If that growth stalls, if the next round doesn't materialize, if the path to profitability requires cutting the infrastructure that enterprises depend on, what happens to the 500+ organizations that have built Claude into their core workflows?
Evaluating vendor financial stability is standard procurement practice. But the speed of AI adoption has outrun the procurement cycle. Teams adopted Claude Code because it works, not because it passed a vendor risk assessment. By the time the risk assessment catches up, the dependency is already structural.
This pattern should be familiar. I've written about how the AI infrastructure panic is self-inflicted: organizations creating chaos by adopting technology faster than they can govern it. The same dynamic applies here. Tool adoption happens at developer speed. Risk assessment happens at procurement speed. The gap between those two timelines is where the exposure lives.
Multi-Cloud Is Competitive Insurance
Anthropic emphasizes that Claude is available on all three major cloud platforms: Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure AI Foundry. The framing is enterprise flexibility and deployment choice.
Read it differently. AWS, Google, and Microsoft all have competing foundation models. Each cloud provider is simultaneously a distribution partner and a potential competitor. Anthropic's multi-cloud strategy isn't about giving enterprises options. It's about ensuring Anthropic has options if any single cloud partner decides to prioritize its own model.
This is smart business strategy. But enterprises should recognize it for what it is: a sign that even Anthropic understands concentration risk, at least when it applies to its own distribution channels.
What Enterprise Risk Teams Should Actually Do
None of this means enterprises should stop using Claude. The product is genuinely capable, and the productivity gains are real. But treating this as a standard SaaS procurement is negligent.
Run the vendor dependency assessment now, not later. Map every workflow that touches Claude Code or the Claude API. Identify which development processes would break if the service went down for a week. Quantify the switching cost.
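A first pass at that mapping can be automated. The sketch below is illustrative, not a canonical audit: it walks a repository tree and counts files that mention the vendor, grouped by file type, so you can see whether the dependency lives in application code, CI configuration, or both. The search patterns and file suffixes are assumptions to adjust for your stack.

```python
import re
from collections import Counter
from pathlib import Path

# Illustrative signal of a vendor dependency; tune for your own stack.
VENDOR_PATTERN = re.compile(r"anthropic|claude", re.IGNORECASE)

# File types worth scanning: source files, CI configs, manifests.
SCAN_SUFFIXES = {".py", ".ts", ".js", ".yml", ".yaml", ".json", ".toml"}

def scan_repo(root: str) -> Counter:
    """Count files under `root` that reference the vendor, keyed by suffix."""
    hits = Counter()
    for path in Path(root).rglob("*"):
        if path.suffix not in SCAN_SUFFIXES or not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the scan
        if VENDOR_PATTERN.search(text):
            hits[path.suffix] += 1
    return hits
```

Running `scan_repo(".")` across each repository gives a rough footprint; a high count in `.yml`/`.yaml` files is the tell that the dependency has reached your CI/CD pipelines, not just your editors.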
Negotiate continuity terms. Enterprise agreements should address what happens if Anthropic's financial position changes. Data portability, service level guarantees, and transition assistance aren't unreasonable asks for a vendor this embedded in your operations.
Maintain model diversity in critical paths. For production code review, security analysis, and compliance-sensitive workflows, don't single-thread through one provider. The multi-model approach isn't just a technical best practice; it's a risk mitigation strategy.
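One way to avoid single-threading is a thin routing layer that tries a primary provider and falls back to a secondary on failure. A minimal sketch, with hypothetical provider callables standing in for real vendor SDK clients:

```python
from typing import Callable, Sequence

# A "provider" here is any prompt -> completion callable; in practice each
# would wrap a different vendor's SDK behind this one interface.
Provider = Callable[[str], str]

class ModelRouter:
    """Try providers in order; fall back when one raises (illustrative sketch)."""

    def __init__(self, providers: Sequence[tuple[str, Provider]]):
        self.providers = list(providers)

    def complete(self, prompt: str) -> tuple[str, str]:
        """Return (provider_name, completion) from the first provider that works."""
        errors = []
        for name, provider in self.providers:
            try:
                return name, provider(prompt)
            except Exception as exc:  # production code should catch vendor-specific errors
                errors.append(f"{name}: {exc}")
        raise RuntimeError("all providers failed: " + "; ".join(errors))
```

The point of the design is that the failover decision lives in your code, not in a vendor's uptime page: if the primary times out during a code-review run, the workflow degrades to a second model instead of stopping.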
Treat AI coding tools like infrastructure, not software. Your cloud provider has an SLA, a disaster recovery plan, and a documented failover process. Your AI coding assistant should too. If it doesn't, you've identified the gap.
As I discussed in Building AI Systems That Enterprises Can Trust, the trust equation requires more than technical capability. It requires governance structures that match the level of access these tools have.
The Market Is Telling You Something
Anthropic's $30 billion round is the market's verdict: capability wins. The safety chief's resignation, the sneaky sabotage findings, the ASL-3 classification: none of it slowed the capital flow for even a day.
That's not necessarily wrong. Anthropic builds excellent products, and its transparency about safety findings is more than most competitors offer. But it does mean that the market isn't pricing safety. It's pricing adoption.
For enterprises, the implication is straightforward. Nobody else is going to manage this dependency for you. Not the investors who just poured in $30 billion, not the analysts celebrating the growth metrics, and not the vendor whose business model depends on your continued adoption. The vendor risk assessment is yours to run. The question is whether you'll run it before the dependency is irreversible, or after.