Anthropic published a detailed report yesterday naming three Chinese AI firms (DeepSeek, Moonshot AI, and MiniMax) in what it calls "industrial-scale" distillation campaigns. The numbers are significant: 16 million exchanges through approximately 24,000 fraudulent accounts, all designed to extract Claude's most valuable capabilities in agentic reasoning, tool use, and coding.
The coverage has landed squarely in the geopolitics bucket. Trade war. IP theft. National security. Those frames aren't wrong. But read the technical details closely, and you'll find something the headlines are missing: this isn't a novel attack category. It's the oldest playbook in cybersecurity, applied to the newest target, and wrapped in a policy brief.
Eleven days ago, I wrote about OpenAI's memo to Congress accusing DeepSeek of distillation. I argued that the security concerns were real but the messenger had half a trillion reasons to exaggerate them. Anthropic's report is the sequel. The evidence is stronger. The policy ask is the same. And the cybersecurity fundamentals hiding behind the new terminology deserve a closer look.
Old Attacks, New Names
Strip away the AI-specific language and Anthropic's report reads like a standard cyber threat intelligence brief. Every technique they describe has been documented, defended against, and presented at security conferences for years.
Fraudulent accounts at scale. Anthropic detected approximately 24,000 fake accounts used to distribute attack traffic. In cybersecurity, this is credential abuse. Financial services companies detect and shut down fake account networks daily. Entire product categories exist to combat it: device fingerprinting, behavioral biometrics, identity verification platforms.
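The core of that product category is simple in concept: accounts that share infrastructure signals in numbers too large to be coincidental get flagged as a network. Here's a minimal sketch of the idea, with hypothetical account IDs and fingerprint hashes (not any vendor's actual implementation):

```python
from collections import defaultdict

def flag_account_clusters(signups, min_cluster=5):
    """Group new accounts by a device-fingerprint hash and flag
    clusters too large to be coincidental. The threshold is
    illustrative; real systems combine many weaker signals."""
    clusters = defaultdict(list)
    for account_id, fingerprint in signups:
        clusters[fingerprint].append(account_id)
    return {fp: accts for fp, accts in clusters.items()
            if len(accts) >= min_cluster}

# Toy data: six accounts sharing one fingerprint, two singletons.
signups = [(f"acct-{i}", "fp-aaa") for i in range(6)] + \
          [("acct-x", "fp-bbb"), ("acct-y", "fp-ccc")]
flagged = flag_account_clusters(signups)
# flagged contains only the "fp-aaa" cluster of six accounts
```

Production systems layer dozens of such signals (IP ranges, payment instruments, behavioral biometrics), but the clustering logic is the same.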
Commercial proxy services for attribution evasion. The attackers routed traffic through proxy networks to mask their origin, with one network managing more than 20,000 accounts simultaneously. This is botnet infrastructure by another name. Proxy rotation to avoid IP-based blocking has been a standard technique in web scraping, credential stuffing, and distributed attacks for over a decade.
Behavioral fingerprinting for detection. Anthropic built classifiers to identify distillation patterns: high volumes concentrated in specific capability areas, repetitive structures, and chain-of-thought elicitation that maps onto training data construction. In cybersecurity, this is UEBA (User and Entity Behavior Analytics). It's how security teams distinguish legitimate users from attackers based on behavioral patterns rather than static rules.
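To make the UEBA framing concrete, here's a toy scoring function over the two signals the report describes, topical concentration and structural repetition. The field names and thresholds are hypothetical, not Anthropic's actual classifier:

```python
from collections import Counter

def distillation_score(queries):
    """Score an account's query stream on two UEBA-style signals:
    how concentrated it is in one capability area, and how
    repetitive its prompt structure is. Weights are illustrative."""
    topics = Counter(q["topic"] for q in queries)
    concentration = topics.most_common(1)[0][1] / len(queries)
    templates = Counter(q["template"] for q in queries)
    repetition = templates.most_common(1)[0][1] / len(queries)
    return 0.5 * concentration + 0.5 * repetition

# A stream that is 90% identical chain-of-thought coding prompts:
stream = [{"topic": "coding", "template": "elicit-cot"}] * 90 + \
         [{"topic": "chat", "template": "freeform"}] * 10
score = distillation_score(stream)  # 0.9 — far above organic usage
```

A legitimate user's stream scores low on both axes; a distillation pipeline, which needs systematic coverage of one capability with consistent prompt templates, scores high on both. That asymmetry is what makes behavioral detection work even when each individual query looks benign.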
Intelligence sharing with industry partners. Anthropic corroborated their findings through IP address correlation, request metadata, infrastructure indicators, and information from partners who observed the same actors. This is threat intelligence sharing, the same model that ISACs (Information Sharing and Analysis Centers) have used across critical infrastructure sectors for two decades.
The "hydra cluster" architecture that Anthropic describes, sprawling networks of accounts that distribute traffic and mix attack queries with normal usage, is functionally identical to the botnet architectures that cybersecurity researchers have tracked since the early 2000s. The targets changed. The playbook didn't.
The Scale Isn't Unprecedented Either
Sixteen million queries sounds massive until you put it in context. Akamai has reported detecting billions of credential stuffing attacks per year across their network. Financial services companies process millions of potentially fraudulent transactions daily. E-commerce platforms fight bot networks that attempt hundreds of millions of price-scraping requests per month.
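The per-account arithmetic makes the point even sharper. Assuming the campaign ran over a period of months (the report's figures; the 90-day duration is my assumption for illustration):

```python
total_queries = 16_000_000
accounts = 24_000

per_account = total_queries / accounts       # ~667 queries per account
per_account_per_day = per_account / 90       # ~7.4 queries/account/day
# assuming a hypothetical 90-day campaign window
```

Seven-ish queries per account per day is indistinguishable from a moderately active legitimate user, which is exactly why naive per-account rate limits don't catch this and behavioral analysis does.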
What Anthropic described is moderate-volume, targeted API abuse. It's serious. It's worth defending against. But the framing as some unprecedented threat category is a stretch. The real story is that AI companies are now facing the same API abuse patterns that every other industry with valuable data behind an API has faced for years.
Google's Threat Intelligence Group identified over 100,000 prompts in distillation attacks against Gemini. The attackers are running the same playbook across every major model provider, which is exactly how commodity attack campaigns work. They don't innovate. They scale.
The Business Model Contradiction
Here's where the analysis gets more interesting than a simple attribution story. The fundamental tension isn't geopolitical. It's architectural.
AI companies built their business models around API access. The entire value proposition depends on letting users, developers, and enterprises interact with the model millions of times. Every API response reveals something about the model's capabilities, reasoning patterns, and training. That's not a bug. That's the product.
This creates a security paradox that other industries have already confronted. At Capital One, the challenge was similar in structure: how do you let customers, partners, and internal systems access sensitive financial data at scale while preventing extraction and misuse? The answer wasn't just better fraud detection. It was architectural.
Tokenization works for financial data because intercepted tokens are meaningless without the vault. A stolen credit card token can't be used to make purchases, can't be sold on the dark web, and can't be reverse-engineered back to the original number. The data travels in a protected form.
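The mechanism is worth sketching, because its limits explain the next point. A minimal tokenization vault looks something like this (a conceptual sketch, not any payment processor's implementation):

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: the token circulates freely;
    the mapping back to the real value never leaves the vault."""
    def __init__(self):
        self._vault = {}

    def tokenize(self, card_number: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        # Only callable inside the trust boundary; this lookup is
        # the single place the real value is ever recovered.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# An intercepted token is a random string; without vault access
# it cannot be reversed to the card number.
```

The protection works precisely because the token carries no information about the underlying value. That's the property AI model outputs can never have.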
AI model outputs can't be tokenized. Every API response IS the intellectual property. Every chain-of-thought trace, every reasoning step, every code completion is the crown jewel being served directly to the requester. Behavioral fingerprinting and fraud detection are necessary, but they're inherently reactive. They catch attacks after extraction has already begun. The model has already answered millions of queries before the pattern triggers an alert.
This is the same limitation that plagued perimeter-based security for decades before the industry shifted toward zero trust. Detecting bad actors at the gate works until it doesn't. And when your adversary is a state-backed AI lab with resources to create 24,000 accounts and route traffic through commercial proxy infrastructure, the gate is going to get tested.
The Policy Brief Inside the Security Report
Anthropic's report doesn't end with threat intelligence findings and defensive recommendations. It explicitly advocates for export controls, arguing that distillation attacks "reinforce the rationale for export controls" because restricted chip access limits both direct model training and the scale of distillation operations.
This follows a pattern. OpenAI sent its memo to Congress on February 12. Anthropic published its report on February 23. Both companies present legitimate security findings. Both companies then extend those findings into policy recommendations that happen to protect their competitive position.
As I wrote eleven days ago, the security concerns and the business interests aren't mutually exclusive. DeepSeek, Moonshot AI, and MiniMax almost certainly violated Anthropic's terms of service and probably broke laws around fraud and computer misuse. The legal landscape around distillation and IP is genuinely unsettled, with copyright law offering uncertain protection at best.
But policymakers should notice that every major AI company making distillation accusations is simultaneously the company that would benefit most from regulations restricting access to their competitors' training alternatives. When Anthropic says export controls are the solution, it's worth asking: the solution to what, exactly? To API abuse that behavioral fingerprinting can detect and block? Or to competitive pressure from labs that can build capable models at a fraction of the cost?
The irony runs deeper. The AI industry built its models by ingesting vast amounts of publicly available data, often without explicit permission from content creators. OpenAI faces ongoing litigation over exactly this practice. The argument that scraping the open web to build AI models constitutes fair use, but querying an AI API to train another model constitutes theft, is legally and philosophically inconsistent. It may still be legally correct once courts sort it out, but the inconsistency matters for how we evaluate the policy recommendations attached to these reports.
What Security Leaders Should Actually Do
The distillation story is grabbing headlines, but the practical security lessons apply far beyond AI companies. If state-backed labs with significant resources can extract value from Anthropic's API through relatively straightforward account fraud and proxy rotation, your APIs are vulnerable too.
Invest in behavioral analytics, not just rate limiting. Static rate limits are trivially bypassed through account distribution. Anthropic's detection relied on behavioral patterns: query focus areas, structural repetition, and usage patterns inconsistent with legitimate use. UEBA isn't new, but most organizations still rely primarily on volume-based controls.
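The difference between the two controls is easy to demonstrate. In this sketch (hypothetical thresholds and field names), traffic distributed across two dozen accounts sails under a per-account limit but lights up immediately when aggregated by behavioral pattern:

```python
from collections import Counter

PER_ACCOUNT_LIMIT = 1000   # illustrative static rate limit

def volume_alerts(requests):
    """Static control: flag accounts exceeding a volume threshold."""
    per_account = Counter(r["account"] for r in requests)
    return [a for a, n in per_account.items() if n > PER_ACCOUNT_LIMIT]

def behavior_alerts(requests, threshold=5000):
    """Behavioral control: aggregate by prompt template across ALL
    accounts, so distributing traffic over fake accounts doesn't help."""
    per_pattern = Counter(r["template"] for r in requests)
    return [p for p, n in per_pattern.items() if n > threshold]

# 24 accounts each send 700 identical chain-of-thought prompts:
requests = [{"account": f"acct-{i}", "template": "elicit-cot"}
            for i in range(24) for _ in range(700)]
volume_alerts(requests)    # [] — every account stays under the limit
behavior_alerts(requests)  # ["elicit-cot"] — 16,800 matching requests
```

Account distribution defeats the first control by construction; it has no effect on the second.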
Assume your API responses are training data. This applies to any organization serving valuable outputs through an API. Legal documents, financial analyses, proprietary research, competitive intelligence. If it's valuable enough to protect, it's valuable enough to scrape. Design your API outputs with extraction in mind.
Share threat intelligence. Anthropic's attribution was strengthened by corroboration from industry partners who observed the same actors. The AI industry is learning what financial services, healthcare, and critical infrastructure learned years ago: shared threat intelligence raises the cost of attacks for everyone.
Don't confuse detection with prevention. Behavioral fingerprinting detected the attacks. It didn't prevent 16 million queries from being answered first. If your security strategy depends entirely on catching bad actors after they've accessed your systems, you're running the same architecture that made this attack possible.
The Real Lesson
Anthropic didn't discover a new attack. It documented the oldest attack in cybersecurity (credential abuse combined with automated data extraction) applied to the newest target, and wrapped the disclosure in a policy brief. The security response was competent. The detection techniques were sound. The intelligence sharing was valuable.
But the framing matters. Calling this a "distillation attack" instead of "API abuse" makes it sound novel and unprecedented. It isn't. Linking the security findings to export control policy makes a defensive cybersecurity report serve double duty as a lobbying document. And treating this as primarily a geopolitical problem obscures the architectural reality: when your business model requires exposing your crown jewels through an API, someone will extract them at scale.
The cybersecurity industry learned this lesson decades ago. The AI industry is learning it now. The question is whether they'll adopt the same solutions (architectural protection, zero trust, field-level data controls) or just keep asking for bigger fences around an open door.