The Deal That Never Was
Defense Undersecretary Emil Michael was on the phone Friday offering Anthropic a deal to save its $200 million Pentagon contract when Defense Secretary Pete Hegseth posted on X that the AI company would be designated a "supply chain risk" — a label historically reserved for adversaries like Huawei. The proposed deal would have required Anthropic to allow the Pentagon to collect and analyze data on Americans, from geolocation to web browsing history to personal financial information purchased from data brokers, according to a source familiar with the negotiations.
Anthropic refused. By Friday evening, the company said it would sue the Pentagon and rejected the government's claim that military contractors would be barred from using Claude. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," the company said in a statement.
OpenAI Gets the Same Deal Anthropic Was Punished For Refusing
Hours later, Sam Altman announced OpenAI had reached an agreement with the Pentagon — with safety provisions nearly identical to Anthropic's rejected terms. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman posted on X. "The DoD agrees with these principles, reflects them in law and policy, and we put them into our agreement."
The Pentagon quickly pivoted from Anthropic to OpenAI despite OpenAI having "similar red lines and a robust safety stack to enforce them," according to a source familiar with the matter. The difference: Altman's agreement explicitly ties restrictions to existing U.S. law and Pentagon policy, rather than proposing new legal standards. Anthropic argues the law hasn't caught up with AI's capability to supercharge the legal collection of publicly available data.
Why Prediction Markets Should Care
The dispute reveals fault lines in how AI companies will negotiate government contracts worth billions. Anthropic is citing 10 U.S.C. § 3252, arguing that a supply chain risk designation can only affect Pentagon contracts — not commercial customers or contractors serving non-Defense clients. If Anthropic wins in court, the ruling would establish precedent for AI companies to maintain safety standards while still accessing government revenue.
Sen. Elizabeth Warren called Trump and Hegseth's move an attempt to "extort" Anthropic into removing "common sense guardrails." The six-month wind-down period for Anthropic's Pentagon contract creates a natural timeline for legal challenges and potential market moves. Watch for whether other AI companies face similar pressure — and whether they follow Anthropic's legal strategy or Altman's diplomatic approach.
What to Watch Next
Altman said he wants OpenAI's safety terms "extended to all the AI labs" — a move that would validate Anthropic's position while undercutting the Pentagon's rationale for blacklisting the company. The legal battle will test whether "supply chain risk" designations can be weaponized against domestic companies that refuse specific contract terms, or if that power is truly limited to foreign adversaries. Meanwhile, the Pentagon must certify that its contractors don't use Claude in their workflows — a requirement that could ripple through the defense industrial base and pressure other AI labs to preemptively negotiate their own red lines.