Anthropic, the AI company behind Claude, rejected the Pentagon’s final contract offer, refusing to allow unrestricted military use of its technology, while Sam Altman’s OpenAI reached a deal. The company has two red lines: Claude will not be used for mass surveillance of American citizens, and it will not power fully autonomous weapons. The Pentagon wants both restrictions removed. CEO Dario Amodei’s response, published in a blog post Thursday night: “These threats do not change our position: we cannot in good conscience accede to their request.”
The deadline passed. Trump posted on Truth Social calling Anthropic “Leftwing nut jobs” and directed every federal agency to immediately stop using its technology, with a six-month phase-out for agencies where Claude is already embedded. Hegseth designated Anthropic a supply chain risk: effective immediately, no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. He called the company “sanctimonious” and accused it of trying to “seize veto power over the operational decisions of the United States military.”
That’s When OpenAI Walked In
Hours after Anthropic was blacklisted — on the same Friday night the U.S. and Israel began bombing Iran — OpenAI CEO Sam Altman announced his company had reached a deal to deploy models on classified networks. The timing was impossible to ignore. Altman claimed the deal includes the same protections Anthropic fought for: no mass surveillance, no autonomous weapons, plus no high-stakes automated decisions like social credit systems. OpenAI retains full control over its safety stack and deploys cloud-only, so its models can’t be embedded into weapons systems.
But the contract says the Pentagon may use OpenAI for “all lawful purposes”: the exact standard Anthropic rejected. The protections aren’t explicit prohibitions. They reference existing legal frameworks, including Executive Order 12333, the framework the NSA uses to collect communications by tapping lines outside the U.S., even when those communications involve Americans. Critics immediately argued this creates a loophole for technically legal mass data collection. The contract also doesn’t explicitly bar collecting Americans’ publicly available information, reportedly a key sticking point in Anthropic’s talks.
This debate over “lawful purposes” is not theoretical. A recent WIRED investigation found that ICE and Customs and Border Protection have collectively spent at least $515 million since 2023 on technology from Palantir, Microsoft, Amazon and Google to power immigration enforcement. The reporting details how Palantir’s Investigative Case Management system serves as ICE’s core law enforcement case management tool, integrating data across federal databases, and how new AI-driven tools such as ELITE generate on-the-spot dossiers and “confidence scores” about potential deportation targets.
Those systems run on major cloud providers including AWS, Microsoft Azure and Google Cloud. Amazon infrastructure hosts ICE’s Digital Records Manager, Data Warehouse and the Law Enforcement Information Sharing Service, which functions as a backend data-sharing highway between ICE and other agencies. Google cloud services support border surveillance systems including the Integrated Fixed Towers used to monitor remote areas of Arizona.
In other words, the government already relies on AI-enabled data aggregation, cross-database search and predictive analytics for domestic enforcement. The question Anthropic raised was not whether surveillance exists, but how explicitly guardrails should be written into contracts before even more advanced foundation models are integrated into those workflows.
Altman admitted the deal was “definitely rushed” and “the optics don’t look good.” When asked if he worried about future disputes with the Pentagon over what’s legal, he replied that he was, but that OpenAI will take on that fight if it comes.
Before the deadline, more than 300 Google employees and 60 OpenAI employees had signed an open letter backing Anthropic’s position. Altman himself said that morning he didn’t think the Pentagon should threaten the Defense Production Act against AI companies. Then he signed the deal that evening.
What’s Happened Since
The backlash has escalated on every front — legal, legislative, and commercial.
On the legal side, Lawfare published an analysis arguing the designation “won’t survive first contact with the legal system.” The statute Hegseth invoked requires either a 30-day notice period with an opportunity for Anthropic to respond and congressional notification, or — on the faster track — a formal risk assessment that still hasn’t been completed. Hegseth announced the designation on X. As of today, no formal supply chain risk paperwork has been filed. Lawfare argues the required findings don’t hold up, and that Hegseth’s own public statements — calling Anthropic “sanctimonious,” calling Amodei a “liar” with a “God complex” — may have doomed the government’s litigation posture by making the retaliatory motive obvious.
On Capitol Hill, Senator Ron Wyden, the top Democrat on the Senate Finance Committee, vowed to “pull out all the stops” and said he’s seeking bipartisan legislation to fight the designation. “This is so breathtakingly wrong that I think we’ll have support across the political spectrum,” Wyden said. Rep. Sam Liccardo is introducing an amendment to the Defense Production Act that would bar federal agencies from retaliating against companies for seeking reasonable limits on how their technology is used.
In the tech industry, hundreds of workers signed a new open letter urging the DOD to withdraw the designation and calling on Congress to examine whether using these authorities against an American company is appropriate. Signatories include employees from OpenAI itself, plus Slack, IBM, Cursor, and Salesforce Ventures. OpenAI researcher Boaz Barak wrote that blocking governments from using AI for mass domestic surveillance is his “personal red line” and “it should be all of ours.”
Meanwhile, Claude hit number one in U.S. App Store downloads Saturday, overtaking ChatGPT. A Reddit post urging people to cancel ChatGPT got 30,000 upvotes. Katy Perry posted Claude’s pricing page with a red heart. Chalk art appeared outside Anthropic’s offices reading “you give us courage.” Anthropic said daily signups broke all-time records every day that week, free users grew over 60% since January, and paid subscribers more than doubled this year. European leaders and tech executives are publicly inviting Anthropic to relocate overseas, arguing its values better align with EU regulatory frameworks. A Cisco principal engineer noted the designation poisons the well for every company downstream: “Every Fortune 500 company with any Pentagon exposure now has to ask its general counsel whether using Claude creates contract risk.”
OpenAI’s own blog said Anthropic should not be designated a supply chain risk and asked the Pentagon to offer the same terms to all AI companies.
Why This Should Matter To You
The Pentagon says it has no intention of using AI for autonomous killing or mass surveillance. Anthropic says: write that down. The Pentagon says: we already have laws against that. Anthropic says: the contract you sent us would let you override those protections at will.
Look at what happened to the company that held its ground. Within hours: blacklisted with a designation reserved for Huawei, banned from federal systems, publicly attacked by the president, and replaced by a competitor whose deal critics argue offers weaker protections in stronger language. The message to every other AI company is not subtle.
The difference between the two deals isn’t whether either CEO wants AI used for surveillance. It’s how the line gets drawn. Anthropic wanted explicit prohibitions — Claude will not do X, full stop. OpenAI accepted references to existing laws that surveillance critics have spent years arguing contain exactly the kind of loopholes that make mass data collection possible under the right interpretation. The question is what happens five years from now, under a different administration, when the contract says “all lawful purposes” and the definition of lawful has shifted.
WIRED’s analysis describes this posture as a “collect it all” mentality, where agencies aggregate as much data as possible and determine later how it can be used. Privacy advocates cited in the report warn that the Trump administration is increasingly merging datasets originally collected for non-immigration purposes into enforcement systems. That dynamic makes the wording of AI contracts consequential, because once foundation models are embedded into these data environments, they can accelerate classification, pattern recognition and targeting at scale.
The infrastructure is already in place. What changes with generative AI is speed, synthesis and automation. That is the structural tension underlying the Pentagon deal: not whether surveillance exists, but whether the next generation of AI systems should operate inside it without explicit contractual limits.
Now the fight has moved to Congress and the courts. A legal consensus is forming that Hegseth skipped the required process — and that his own public statements made the retaliation so obvious the designation may not survive a legal challenge. But even if Anthropic wins in court, that could take years. In the meantime, every general counsel at every company with Pentagon exposure is asking one question: is using Claude worth the risk?
That’s what this fight was always about. Not one contract. Whether the precedent for AI and government power gets set by explicit restriction — or by trust in institutions that are already arguing they shouldn’t have to put their promises in writing.