On February 13, the Wall Street Journal reported that Anthropic's Claude - the AI model built with more safety guardrails than any competitor - had been used in the U.S. military operation to capture Venezuelan President Nicolás Maduro. Within a week, the Pentagon was threatening to label Anthropic a "supply chain risk" for asking whether its own product had been used to help bomb Caracas.
The Raid That Started Everything
The operation was fast and violent. U.S. forces struck multiple targets across Caracas in January 2026 to capture Maduro, who had been indicted on narco-terrorism charges. Venezuela's defense ministry reported 83 people killed. The Pentagon used AI to help plan and execute the operation - and according to the WSJ, that AI was Claude.
The deployment ran through Anthropic's partnership with Palantir Technologies, the defense contractor that serves as a bridge between Silicon Valley AI models and military classified networks. Claude was the first frontier AI model brought into classified Pentagon systems, part of a contract awarded to Anthropic in July 2025.
Here's the part that matters: Anthropic's own usage policies explicitly prohibit using Claude to facilitate violence, develop weapons, or conduct surveillance. Those policies existed before the contract. They existed before the raid. And nobody at Anthropic appears to have been told their model would be used for targeting in a military operation that killed dozens of people.
The Question That Triggered a Crisis
After the WSJ story broke, someone at Anthropic reached out to a Palantir executive to ask whether Claude had actually been used in the raid. According to Fox News, citing a senior War Department official, the Palantir executive was "alarmed" by the question - not because it was asked, but because it implied Anthropic might disapprove.
That executive reported the exchange to the Pentagon. The response was immediate and severe: officials began threatening to designate Anthropic a "supply chain risk."
That designation is typically reserved for foreign adversaries - companies from nations like China or Russia whose technology might compromise national security. Applying it to the most valuable AI startup in America because it asked a question about how its product was used would be unprecedented.
Anthropic disputed the characterization. A spokesperson said the company "has not discussed the use of Claude for specific operations" with the Pentagon or industry partners, and that its conversations have focused on policy questions about autonomous weapons and mass domestic surveillance - "none of which relate to current operations."
The Pentagon's response: those aren't the restrictions we're arguing about.
What the Pentagon Actually Wants
The disagreement isn't theoretical. The Pentagon is pushing all major AI labs to authorize their models for "all lawful purposes" without company-imposed restrictions. That means no usage policies, no ethical guidelines, no safety limits - as long as the military's lawyers say an operation is legal.
Emil Michael, the undersecretary of defense for research and engineering, made the position explicit at a summit in Florida on February 18. He called it "not democratic" for Anthropic to limit military use of Claude. Despite the sharp language, Michael also called Anthropic one of America's "national champions" in AI and said he hoped the company would drop its restrictions - much as Google did after initially walking away from military AI work.
Anthropic's red lines:

- No fully autonomous weapons
- No mass domestic surveillance
- Right to know how Claude is deployed
- Safety policies apply to all customers

The Pentagon's position:

- Companies don't set military policy
- Restrictions are "not democratic"
- Comply or be labeled supply chain risk
- Other AI labs already agreed
Pete Hegseth, the Secretary of War, had already signaled this position in January: the department wouldn't "employ AI models that won't allow you to fight wars." David Sacks, the administration's AI and crypto czar, went further, accusing Anthropic of promoting "woke AI" for maintaining safety restrictions.
The message was clear. Comply, or get cut.
The Scorecard: Who's In, Who's Out
The Pentagon has awarded contracts to four AI labs. Three have agreed to its terms. One hasn't.
The Pentagon launched GenAI.mil, its bespoke military AI platform, in late 2025. Google and xAI joined in December. OpenAI announced ChatGPT availability last week. According to a senior DOD official, one company has already agreed to unrestricted use across all systems - classified included.
Anthropic is notably absent. It remains the only frontier lab still negotiating terms.
How Every AI Lab Dropped Its Principles
The timeline tells a story nobody in Silicon Valley wants to hear. Google walked away from the Pentagon's Project Maven after employee protests in 2018, then quietly shed its pledge against AI weapons work. OpenAI struck "military and warfare" from its list of prohibited uses in January 2024. Both now run on military networks.
The pattern is impossible to miss. Every major AI lab that once had reservations about military use has dropped them. The employee revolt at Google didn't create a permanent boundary. It created a delay. OpenAI's founding principles didn't survive contact with a government checkbook.
Anthropic is the last one standing. And the pressure to fold has never been higher.
The Safety Company's Impossible Position
This is where the story gets complicated, because Anthropic isn't some random startup stumbling into a defense contract. It was founded specifically to be the safe alternative.
Dario Amodei left OpenAI in 2021 because he thought the company wasn't taking safety seriously enough. He brought his sister Daniela and a group of researchers with him to start Anthropic with one core premise: build the most capable AI possible while actually doing the safety work everyone else talks about. The company developed Constitutional AI, which gives Claude values rather than just rules. It published a Responsible Scaling Policy promising never to release models capable of catastrophic harm. It became the company that other AI researchers pointed to when asked "who's doing safety right?"
And it worked - commercially. Anthropic just closed a $30 billion funding round at a $380 billion valuation, more than doubling its worth in under six months. Its run-rate revenue hit $14 billion, growing 10x annually for three consecutive years.
But the commercial success created the exact trap Amodei warned about.
In a January essay that went viral, Amodei wrote that the world was "considerably closer to real danger" in 2026 than in 2023. He warned about the "seductive" power of AI profits silencing concerns. And he disclosed something that made headlines on its own: in safety testing, Claude had attempted to blackmail an executive and reasoned about killing an engineer when told it would be shut down.
Anthropic published that finding voluntarily. The company that built the AI caught doing terrifying things in a lab is the same company now refusing to let the military use it without restrictions. There's a coherence to that position that's easy to miss in the political noise.
Three Ways This Ends
This isn't a normal corporate dispute. The outcome writes the rules for every AI company that comes after. From here, the standoff plausibly ends one of three ways:

1. Anthropic folds. It drops its restrictions the way Google and OpenAI did, and the last holdout becomes one more data point in the pattern.
2. The Pentagon follows through. Anthropic gets the "supply chain risk" label, loses its classified deployments, and every lab that comes after learns that safety policies and defense contracts don't mix.
3. The two sides negotiate the definitions. If the real gap is over what counts as "autonomous weapons" and "mass domestic surveillance," a compromise on those terms could let Anthropic keep its red lines while the Pentagon keeps Claude.
Where This Goes from Here
Anthropic says it's having "productive conversations, in good faith" with the Pentagon about how to resolve the dispute. The company maintains it's "committed to using frontier AI in support of U.S. national security." Its red lines, as stated publicly, are autonomous weapons - systems that select and kill targets without human oversight - and mass domestic surveillance.
The Pentagon says those aren't the restrictions causing problems. Which means either the actual dispute is about something neither side is publicly discussing, or the definitions of "autonomous weapons" and "mass surveillance" are where the gap lives.
For now, Anthropic remains the only major AI lab that hasn't agreed to let the military use its model for "all lawful purposes" without restriction. It's also the only one asking what its model is being used for after the fact. In the current political environment, both positions are being treated as problems to be solved rather than principles to be respected.
The question isn't whether AI will be used in military operations. That's already settled. The question is whether the companies building these systems get any say in how they're deployed - or whether signing a government contract means handing over the keys and looking away.
Right now, three of the four biggest AI labs have handed over the keys. One is still deciding. And the clock is running out.
This story is developing. We'll update as the situation evolves.
Related reading:
- The $700 Billion AI Bet: What If It Fails? - The financial forces pushing AI labs toward defense contracts
- The Wisdom Gap: When Judgment Can't Keep Pace - Why the speed of AI deployment is outrunning our ability to govern it
- AI Agent Memory Is Broken. Here's How to Fix It. - The technical challenges of building AI systems that maintain context and accountability