Yesterday, Bloomberg reported that OpenAI sent a memo to the House Select Committee on China accusing DeepSeek of using "new, obfuscated methods" to distill capabilities from American AI models. The memo describes DeepSeek's "ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier labs" and claims that DeepSeek employees circumvented guardrails using obfuscated third-party routers to extract reasoning outputs.
The security community should take note. Not because everything in the memo is wrong, but because not everything in it is disinterested.
OpenAI is simultaneously the company best positioned to identify DeepSeek's technical methods and the company with the most to gain from a government ban on its most cost-effective competitor. That dual role should shape how every CISO, security leader, and policymaker reads this memo.
The Intelligence That's Real
Let's start with what holds up under scrutiny, because the national security case against DeepSeek isn't manufactured from nothing.
The House Select Committee on the CCP's "DeepSeek Unmasked" report found that DeepSeek's backend infrastructure routes data through entities connected to Chinese military companies. The report documented that DeepSeek covertly manipulates outputs to align with CCP propaganda, as required under Chinese law. These aren't OpenAI's claims; they're findings from a bipartisan congressional investigation.
The data sovereignty problem is straightforward. Under China's cybersecurity and national security laws, the Chinese government can compel any domestic company to hand over user data on request. Every prompt, every document, every piece of proprietary information entered into DeepSeek is stored on servers subject to that legal framework. Italy's data protection authority banned DeepSeek outright. Australia, South Korea, and Taiwan followed. The U.S. Navy, NASA, and the Pentagon have all prohibited DeepSeek on government devices. Bipartisan legislation, the "No DeepSeek on Government Devices Act," is moving through Congress.
These aren't theoretical concerns. If you're running an enterprise that handles sensitive data, DeepSeek's legal jurisdiction is a real operational risk, the same kind of shadow AI data exfiltration risk I've written about before. The difference is that with DeepSeek, the exfiltration path leads to a government with strategic intelligence objectives.
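To make that concrete, here is a minimal sketch of how a security team might surface shadow DeepSeek usage from egress proxy logs. The CSV schema and the domain list are assumptions for illustration; substitute whatever your own proxy actually records.

```python
# Minimal sketch: flag outbound requests to DeepSeek-hosted endpoints
# in an egress proxy log. The CSV schema ('user', 'dest_host') and the
# domain list are assumptions; adapt both to what your proxy emits.
import csv
from collections import Counter

SUSPECT_DOMAINS = {
    "deepseek.com",
    "api.deepseek.com",
    # add any third-party "router" services your policy covers
}

def is_suspect(host: str) -> bool:
    """True if host matches or is a subdomain of a suspect domain."""
    host = host.lower()
    return any(host == d or host.endswith("." + d) for d in SUSPECT_DOMAINS)

def scan(log_path: str) -> Counter:
    """Count flagged requests per internal user."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if is_suspect(row["dest_host"]):
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in scan("proxy.csv").most_common(10):
        print(f"{user}: {count} requests to DeepSeek-related endpoints")
```

A naive denylist won't catch browser extensions or mobile apps, but as a first pass it tells you whether you have a shadow usage problem at all.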
The distillation accusations also have technical substance. Stanford's FSI analysis noted that DeepSeek's R1 model performs on par with OpenAI's o1 reasoning model at a fraction of the cost. When DeepSeek's release hit the market, Nvidia lost $589 billion in market value in a single day. That kind of capability leap from a small subsidiary of High-Flyer, a Chinese quantitative hedge fund, raises legitimate questions about how those capabilities were developed.
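For readers who haven't worked with distillation directly, it helps to see how simple the core technique is. Here's a minimal sketch of classic soft-label distillation; this is the textbook method (Hinton et al., 2015), not a claim about DeepSeek's actual pipeline, which, if the memo is right, worked through API outputs rather than teacher logits.

```python
# Minimal sketch of classic knowledge distillation: train a small
# "student" to match a large "teacher's" softened output distribution.
# Illustrates the general technique only, not DeepSeek's method.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions. Higher temperature exposes more of the teacher's
    'dark knowledge' about relative token similarities."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitude stays comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Toy usage: random logits stand in for real model outputs (batch x vocab).
student = torch.randn(8, 32000, requires_grad=True)
teacher = torch.randn(8, 32000)
loss = distillation_loss(student, teacher)
loss.backward()
```

API-based distillation is cruder: without access to the teacher's logits, you fine-tune on its generated text instead. The economics are the same either way: the expensive capability transfers to the cheap model.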
The Part That's Lobbying
Now look at the messenger.
OpenAI is not a disinterested national security analyst delivering intelligence to Congress. It is a commercial entity with the most aggressive government entanglement in the AI industry, and DeepSeek is its most disruptive market threat.
Start with the money. OpenAI's Stargate project represents $500 billion in AI infrastructure investment, announced alongside the Trump administration and backed by SoftBank, Oracle, and Nvidia. The project claims 100,000 new American jobs and nearly 7 gigawatts of planned data center capacity. When a company is building the government's AI infrastructure at that scale, its competitive interests and national security interests become functionally inseparable.
Then there's the political spending. OpenAI President Greg Brockman co-founded "Leading the Future," a super PAC that raised $125 million with Andreessen Horowitz to elect candidates who support unfettered AI development. OpenAI's own lobbying spend increased nearly sevenfold in recent years, reaching $1.76 million in 2024 alone, with spending continuing to accelerate.
And here's the contradiction that should bother every security professional: OpenAI is simultaneously lobbying for relaxed privacy regulations on its own data collection practices while arguing that DeepSeek should be banned for its data practices. The company lobbied the EU to avoid having its tools classified as "high-risk" under the AI Act, and has pushed the Trump administration to focus on speed and light regulation for the domestic AI industry.
Fewer rules for me, more rules for my competitor. That's not threat intelligence; it's regulatory capture.
The Pattern Enterprise Leaders Should Recognize
I spent years in Navy EOD learning to evaluate threat intelligence with a simple framework: what does the source know, and what does the source want? Good intelligence can come from compromised sources. But you never take the assessment at face value without understanding the incentive structure behind it. A line I heard in both the Navy and my MBA leadership coursework captures it well: "Show me the incentive and I'll show you the behavior."
OpenAI knows more about DeepSeek's distillation methods than almost anyone. They operate the models being targeted. They can observe the access patterns, the obfuscation techniques, the third-party routing. That technical visibility is real and valuable.
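To make "technical visibility" concrete: a provider can flag accounts whose usage looks like bulk output harvesting rather than normal product use. The sketch below is purely illustrative; the thresholds, signals, and record shape are my invention, not a description of OpenAI's actual detection systems.

```python
# Purely illustrative heuristic for distillation-style harvesting on the
# provider side: bulk extraction tends to mean huge request volume,
# heavily templated prompts, and consistently long outputs. Every
# threshold and field here is invented for illustration.
from dataclasses import dataclass

@dataclass
class AccountStats:
    requests: int
    unique_prompt_prefixes: int  # distinct first-100-char prompt prefixes seen
    avg_output_tokens: float

def looks_like_harvesting(s: AccountStats) -> bool:
    """Flag accounts that pair massive volume with near-identical
    prompts and maximal output lengths, a crude distillation signature."""
    templated = s.unique_prompt_prefixes / max(s.requests, 1) < 0.05
    bulk = s.requests > 100_000
    verbose = s.avg_output_tokens > 1_000
    return bulk and templated and verbose

# Toy example: a quarter-million requests built from ~40 prompt templates.
print(looks_like_harvesting(AccountStats(250_000, 40, 1_800)))  # True
```

Real detection is an adversarial game; the memo's "obfuscated third-party routers" exist precisely to defeat signals like these. But the asymmetry stands: only the provider sees this data.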
Now look at the incentive. OpenAI has $500 billion in government infrastructure deals on the line, a $125 million super PAC shaping elections, and a competitor offering comparable capabilities at a fraction of the price. Show me that incentive, and I'll show you a company lobbying Congress to ban its biggest threat under the banner of national security. The behavior is predictable because the incentive is obvious.
This is the same dynamic I wrote about in The AI Infrastructure Panic Is Self-Inflicted: vendors and incumbents framing urgent action around problems where the proposed solution happens to benefit them most. The infrastructure panic benefits cloud providers and chip manufacturers. The DeepSeek panic benefits OpenAI's market position.
What CISOs Should Actually Do
The worst response to this situation is to pick a side: either dismissing the DeepSeek risk because OpenAI has a conflict of interest, or accepting OpenAI's framing wholesale because the national security language sounds authoritative.
Here's a more disciplined approach:
Evaluate DeepSeek's data risk on its own merits. The Chinese legal jurisdiction problem is real regardless of who raises it. If your organization handles regulated data, customer PII, intellectual property, or anything subject to ITAR, HIPAA, or financial compliance, DeepSeek's data residency is a non-starter. You don't need OpenAI's memo to reach that conclusion. Your legal team can get there by reading DeepSeek's own privacy policy and China's cybersecurity laws.
Separate the model capability question from the data sovereignty question. DeepSeek's R1 is a genuinely capable model. Its open-weight releases and their derivatives can be self-hosted, air-gapped, and run on infrastructure you control (see the sketch after this list). The model itself isn't the risk; the risk is the hosted service, its data flows, and its legal jurisdiction. Enterprises already navigating AI governance challenges should apply the same data residency and vendor risk frameworks they use for any other third-party tool.
Be skeptical of any vendor who wants you afraid of their competitor. This applies beyond the OpenAI-DeepSeek dynamic. Every time a vendor frames a competitor as a security risk, ask what they stand to gain from that framing. The intelligence might be accurate and the motive might still be self-serving. Both things can be true.
Watch what Congress actually does, not what companies ask Congress to do. The bipartisan "No DeepSeek on Government Devices Act" focuses narrowly on government device usage, a proportionate response to a real data sovereignty risk. OpenAI's broader proposal, banning "PRC-produced models" from allied nations, would functionally eliminate its biggest competitive threat under the banner of national security. Those are very different policy outcomes.
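On the second recommendation above, here's a minimal self-hosting sketch using Hugging Face transformers and one of DeepSeek's openly released R1 distillations. The model ID matches DeepSeek's published release, but verify availability, licensing, and hardware fit yourself before treating this as a pattern.

```python
# Minimal local-inference sketch: run an open-weight R1 distillation
# entirely on infrastructure you control, so no prompt leaves your
# network. Model ID is one of DeepSeek's published distillations;
# confirm it and its license before adopting this in production.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)

prompt = "Summarize the data-residency risks of hosted AI services."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Self-hosting trades the jurisdiction risk for an infrastructure cost. For many regulated environments, that's exactly the trade you want made explicit.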
The Bigger Picture
The AI industry is entering a phase where geopolitical competition and commercial competition are becoming indistinguishable. OpenAI's memo to Congress isn't just about DeepSeek; it's a template for how AI companies will use national security framing to shape markets for the next decade.
The companies with the deepest government relationships will have the most influence over which competitors get labeled as threats. The companies with the largest lobbying budgets will shape which regulations apply to whom. And security leaders will be left sorting through intelligence reports that are simultaneously accurate and self-serving.
The skill that matters now isn't knowing whether DeepSeek is dangerous. It's knowing how to evaluate claims about danger when the people making them have billions riding on your answer.
That's not a cybersecurity skill. It's an intelligence analysis skill. And most enterprise security teams aren't trained for it.