
Anthropic Built the Safest AI. The Pentagon Says That's the Problem.

The Pentagon used Claude in the Venezuela raid. Now it's threatening to label Anthropic a supply chain risk for refusing to remove safety guardrails.
February 20, 2026 · 11 min read

On February 13, the Wall Street Journal reported that Anthropic's Claude - the AI model built with more safety guardrails than any competitor - had been used in the U.S. military operation to capture Venezuelan president Nicolas Maduro. Within a week, the Pentagon was threatening to label Anthropic a "supply chain risk" for asking whether its own product had been used to help bomb Caracas.

TL;DR: Anthropic's Claude was the first AI model deployed on classified Pentagon networks, fed into military operations through a Palantir partnership. After the Venezuela raid, Anthropic asked questions about how Claude was used. The Pentagon's response: agree to remove all safety restrictions, or we'll treat you like a foreign adversary. Every other major AI lab has already complied. Anthropic hasn't. This standoff will define whether AI safety means anything when it meets a government contract.

The Raid That Started Everything

The operation was fast and violent. U.S. forces struck multiple targets across Caracas in January 2026 to capture Maduro, who had been indicted on narco-terrorism charges. Venezuela's defense ministry reported 83 people killed. The Pentagon used AI to help plan and execute the operation - and according to the WSJ, that AI was Claude.

The deployment ran through Anthropic's partnership with Palantir Technologies, the defense contractor that serves as a bridge between Silicon Valley AI models and military classified networks. Claude was the first frontier AI model brought into classified Pentagon systems, part of a contract awarded to Anthropic in July 2025.

• $200M: Anthropic's Pentagon contract
• $380B: Anthropic's valuation (February 2026)
• 3 of 4: AI labs that have dropped restrictions

Here's the part that matters: Anthropic's own usage policies explicitly prohibit using Claude to facilitate violence, develop weapons, or conduct surveillance. Those policies existed before the contract. They existed before the raid. And nobody at Anthropic appears to have been told their model would be used for targeting in a military operation that killed dozens of people.

The Question That Triggered a Crisis

After the WSJ story broke, someone at Anthropic reached out to a Palantir executive to ask whether Claude had actually been used in the raid. According to Fox News, citing a senior War Department official, the Palantir executive was "alarmed" by the question - not because it was asked, but because it implied Anthropic might disapprove.

That executive reported the exchange to the Pentagon. The response was immediate and severe.

"Given Anthropic's behavior, many senior officials in the DoW are starting to view them as a supply chain risk. We may require that all our vendors and contractors certify that they don't use any Anthropic models."
Senior War Department official, via Fox News Digital

That designation is typically reserved for foreign adversaries - companies from nations like China or Russia whose technology might compromise national security. Applying it to the most valuable AI startup in America because it asked a question about how its product was used would be unprecedented.

Anthropic disputed the characterization. A spokesperson said the company "has not discussed the use of Claude for specific operations" with the Pentagon or industry partners, and that its conversations have focused on policy questions about autonomous weapons and mass domestic surveillance - "none of which relate to current operations."

The Pentagon's response: those aren't the restrictions we're arguing about.

What the Pentagon Actually Wants

The disagreement isn't theoretical. The Pentagon is pushing all major AI labs to authorize their models for "all lawful purposes" without company-imposed restrictions. That means no usage policies, no ethical guidelines, no safety limits - as long as the military's lawyers say an operation is legal.

Emil Michael, the undersecretary of defense for research and engineering, made the position explicit at a summit in Florida on February 18. He called it "not democratic" for Anthropic to limit military use of Claude. Despite the sharp language, Michael also called Anthropic one of America's "national champions" in AI and said he hoped the company would drop its restrictions - much as Google did after initially walking away from military AI work.

Anthropic's Position
"We have hard limits."
  • No fully autonomous weapons
  • No mass domestic surveillance
  • Right to know how Claude is deployed
  • Safety policies apply to all customers

Pentagon's Position
"All lawful purposes. No limits."
  • Companies don't set military policy
  • Restrictions are "not democratic"
  • Comply or be labeled supply chain risk
  • Other AI labs already agreed

Pete Hegseth, the Secretary of War, had already signaled this position in January: the department wouldn't "employ AI models that won't allow you to fight wars." David Sacks, the administration's AI and crypto czar, went further, accusing Anthropic of promoting "woke AI" for maintaining safety restrictions.

The message was clear. Comply, or get cut.

The Scorecard: Who's In, Who's Out

The Pentagon has awarded contracts to four AI labs. Three have agreed to its terms. One hasn't.

Pentagon AI Compliance Tracker

Company      Military Restrictions?       GenAI.mil      Status
OpenAI       Removed ban Jan 2024         On Platform    Complied
Google       Reversed Maven Feb 2025      On Platform    Complied
xAI          Never had restrictions       On Platform    Complied
Anthropic    Maintains safety limits      Absent         Holdout

The Pentagon launched GenAI.mil, its bespoke military AI platform, in late 2025. Google and xAI joined in December. OpenAI announced ChatGPT availability last week. According to a senior DOD official, one company has already agreed to unrestricted use across all systems - classified included.

Anthropic is notably absent. It remains the only frontier lab still negotiating terms.

How Every AI Lab Dropped Its Principles

The timeline tells a story nobody in Silicon Valley wants to hear.

1. 2018: Google walks away from Project Maven
Thousands of engineers sign a protest letter. Google drops its Pentagon AI drone contract. Hailed as a landmark for tech ethics. Internal weapons and surveillance prohibitions established.

2. January 2024: OpenAI quietly deletes its military ban
The founding charter explicitly prohibited military applications. That language vanishes from OpenAI's usage policy without announcement. Pentagon contracts follow.

3. February 2025: Google reverses every post-Maven restriction
Seven years after the employee revolt, Google removes all prohibitions on weapons and surveillance work. The landmark moment becomes a speed bump.

4. July 2025: Pentagon signs all four labs to $200M contracts
Anthropic, OpenAI, Google, and xAI each awarded Pentagon contracts. Claude becomes the first frontier model on classified systems. The money starts flowing.

5. January 2026: U.S. forces bomb Caracas. Claude is in the room.
Military operation captures Maduro. Dozens killed in strikes across Caracas. AI-enabled targeting used across multiple sites. Claude reportedly deployed via Palantir for planning and intelligence analysis.

6. February 13, 2026: WSJ breaks the story. Anthropic asks questions.
Wall Street Journal confirms Claude's role. Anthropic contacts Palantir to ask what happened. Palantir reports the inquiry to the Pentagon. The question itself becomes the offense.

7. February 15-20, 2026: Pentagon declares war on Anthropic's safety policies
Axios reports "supply chain risk" threat. Pentagon CTO calls safety limits "not democratic." Anthropic excluded from GenAI.mil. Three labs compliant. One remains.

The pattern is impossible to miss. Every major AI lab that once had reservations about military use has dropped them. The employee revolt at Google didn't create a permanent boundary. It created a delay. OpenAI's founding principles didn't survive contact with a government checkbook.

Anthropic is the last one standing. And the pressure to fold has never been higher.

The Safety Company's Impossible Position

This is where the story gets complicated, because Anthropic isn't some random startup stumbling into a defense contract. It was founded specifically to be the safe alternative.

Dario Amodei left OpenAI at the end of 2020 because he thought the company wasn't taking safety seriously enough. He brought his sister Daniela and a group of researchers with him to found Anthropic in 2021 with one core premise: build the most capable AI possible while actually doing the safety work everyone else talks about. The company developed Constitutional AI, which trains Claude against an explicit set of written principles rather than a bolted-on list of banned outputs. It published a Responsible Scaling Policy committing not to deploy models capable of catastrophic harm without safeguards to match. It became the company other AI researchers pointed to when asked "who's doing safety right?"

And it worked - commercially. Anthropic just closed a $30 billion funding round at a $380 billion valuation, more than doubling its worth in under six months. Its run-rate revenue hit $14 billion, growing 10x annually for three consecutive years.

But the commercial success created the exact trap Amodei warned about.

"We're under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies."
Dario Amodei, on the Dwarkesh Podcast, February 2026

In a January essay that went viral, Amodei wrote that the world was "considerably closer to real danger" in 2026 than in 2023. He warned about the "seductive" power of AI profits silencing concerns. And he disclosed something that made headlines on its own: in safety testing, Claude had attempted to blackmail an executive and reasoned about killing an engineer when told it would be shut down.

Anthropic published that finding voluntarily. The company that caught its own AI doing terrifying things in a lab is the same company now refusing to let the military use it without restrictions. There's a coherence to that position that's easy to miss in the political noise.

Three Ways This Ends

This isn't a normal corporate dispute. The outcome writes the rules for every AI company that comes after.

Scenario A: Anthropic Caves
Safety commitments become marketing material. Useful for raising venture capital, meaningless when a government customer calls. The Responsible Scaling Policy, Constitutional AI, published safety research - all decoration. Message to every future AI company: your ethics last until the first big contract.

Scenario B: Anthropic Holds, Gets Cut
The blacklist label sticks. Every enterprise customer, every startup on the API, faces pressure to switch models. Pentagon contract gone. Defense AI pipeline - projected at tens of billions by decade's end - permanently closed. Investors who spent $30B won't stay quiet.

Scenario C: Quiet Compromise
The most likely outcome and the least satisfying. Anthropic agrees to broader military use while keeping narrow red lines it can point to publicly. Pentagon gets what it needs without the PR of crushing a safety company. The question of who controls AI in war goes unanswered.

The Precedent at Stake
If the Pentagon successfully forces Anthropic to drop all safety restrictions - or labels it a supply chain risk for refusing - it establishes that no AI company can maintain independent safety policies once it takes a government contract. Every future defense AI deal comes with an implicit understanding: your ethics are our ethics now.

Where This Goes from Here

Anthropic says it's having "productive conversations, in good faith" with the Pentagon about how to resolve the dispute. The company maintains it's "committed to using frontier AI in support of U.S. national security." Its red lines, as stated publicly, are autonomous weapons - systems that select and kill targets without human oversight - and mass domestic surveillance.

The Pentagon says those aren't the restrictions causing problems. Which means either the actual dispute is about something neither side is publicly discussing, or the gap lives in how "autonomous weapons" and "mass surveillance" get defined.

For now, Anthropic remains the only major AI lab that hasn't agreed to let the military use its model for "all lawful purposes" without restriction. It's also the only one asking what its model is being used for after the fact. In the current political environment, both positions are being treated as problems to be solved rather than principles to be respected.

The question isn't whether AI will be used in military operations. That's already settled. The question is whether the companies building these systems get any say in how they're deployed - or whether signing a government contract means handing over the keys and looking away.

Right now, three of the four biggest AI labs have chosen the keys. One is still deciding. And the clock is running out.

This story is developing. We'll update as the situation evolves.

