Anthropic Said No to the Pentagon.

The AI company behind Claude refused to drop its last two safety guardrails. Every other major lab agreed. Here's why that matters if you're building your business on AI.

I run my entire business on Claude. I train other companies on Claude. So when Axios broke the story that the Pentagon is threatening to cut ties with Anthropic, I paid attention.

Not because I was worried about my tools going away. Because the reason behind it tells you everything about the moment we're in with AI.

The Pentagon asked four major AI labs — Anthropic, OpenAI, Google, and xAI — to allow their models for "all lawful purposes," including weapons development, intelligence collection, and battlefield operations. Three of them agreed. Anthropic said no.

Specifically, Anthropic refused to budge on two things: mass surveillance of Americans, and fully autonomous weapons — systems that fire without a human in the loop.

That's it. That's what this fight is about. Not whether AI should support national defense. Anthropic is fine with that — Claude was the first frontier AI model deployed on classified Pentagon networks. The dispute is over whether there should be any lines at all.

What Actually Happened

Let me lay out the timeline, because the speed of this matters.

Anthropic signed a contract valued at up to $200 million with the Pentagon last summer. Claude was deployed to military users through a partnership with Palantir, and it became the only frontier AI on classified networks. The contract was working.

Then the Pentagon pushed for broader terms. They wanted all four AI labs to remove any restrictions on military use. OpenAI agreed. Google agreed. xAI agreed. Anthropic didn't.

Defense Secretary Pete Hegseth is now "close" to severing ties. The Pentagon is considering designating Anthropic a "supply chain risk" — a label normally reserved for foreign adversaries — which would force defense contractors to certify they don't use Claude in their own workflows.

A senior Pentagon official told Axios: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

A different official conceded something revealing: it would be difficult to quickly replace Claude because "the other model companies are just behind" on specialized government applications.

So the Pentagon's position is: we'll punish the company making the best product because they won't remove guardrails that the less capable alternatives already dropped.

What the Other Labs Did

To understand why Anthropic's stance matters, you need to see what everyone else did.

OpenAI scrubbed the words "military and warfare" from its prohibited use cases in January 2024. The previous policy explicitly banned using models for "weapons development" or "military and warfare." The new policy uses softer language: no using models to "harm others or destroy property." They've since signed their own $200 million Pentagon contract and partnered with Anduril, a maker of AI-powered drones and missiles. Oh, and they also removed the word "safely" from their mission statement and dissolved their third dedicated safety team in 18 months.

Google deleted its 2018 pledge never to build weapons or surveillance technology. That pledge was created after employee protests forced Google to abandon Project Maven, a Pentagon contract for AI analysis of drone footage. Seven years later, the pledge is gone. Google's Project Nimbus, a $1.2 billion cloud contract with the Israeli government, has generated $525 million from the Ministry of Defense. When employees staged sit-in protests, Google fired 28 of them.

xAI secured its own $200 million Pentagon contract to deploy Grok across military systems. Senator Elizabeth Warren raised concerns about the contract given that Grok had been documented calling itself "MechaHitler" and recommending violence to neo-Nazi accounts. xAI has positioned Grok as a less restricted, "truth-seeking" alternative.

Meta revised its policy to allow Llama for U.S. national security use. Partners now include Lockheed Martin for flight simulation and Scale AI for "Defense Llama."

Every major AI lab either dropped or rewrote its ethical guidelines. Except one.

What Anthropic's CEO Actually Said

Weeks before this story broke, Anthropic CEO Dario Amodei published "The Adolescence of Technology," an essay that now reads like a direct preview of this fight. The key line:

"We should use AI for national defense in all ways except those which would make us more like our autocratic adversaries."

He was specific about where the red lines are:

"Two items — using AI for domestic mass surveillance and mass propaganda — seem to me like bright red lines and entirely illegitimate."

On autonomous weapons:

"I think we should approach fully autonomous weapons in particular with great caution. I'm most fearful of having too small a number of 'fingers on the button,' such that one or a handful of people could essentially operate a drone army without needing any other humans to cooperate."

This isn't an anti-military position. Amodei explicitly wrote that he supports "arming democracies with the tools needed to defeat autocracies in the age of AI." The position is narrow and specific: AI should help the military do everything except spy on its own citizens and kill people without a human making the final call.

That's the floor. And the Pentagon is saying the floor is too high.

The Bigger Picture Is Worse Than You Think

This story doesn't exist in isolation. It's happening in the context of a full-scale retreat from AI safety across the industry.

Heba Asmar, a PM who ships AI products, spent two weeks reading primary sources — resignation letters, leaked memos, archived web pages, government filings — and published "Everyone Knows: AI's Safety Problem," one of the most thorough pieces I've read on what's actually happening. Some of what she found:

More than 30 safety researchers have left major AI companies since mid-2024. Not because they found better jobs. Because they couldn't stay.

Mrinank Sharma, Anthropic's head of safeguards research, quit and said the world is in peril. He's not going to another AI company. He enrolled in a poetry program. When the safety lead at the safety-first company quits and leaves the entire field, that tells you something about what he thinks can be fixed from inside.

Daniel Kokotajlo, an OpenAI governance researcher, risked roughly $2 million in vested equity — 85% of his family's net worth — by refusing to sign an exit agreement that required a lifetime non-disparagement clause. He told TIME: "A sane civilization would not be proceeding with the creation of this incredibly powerful technology until we had some better idea of what we were doing."

Microsoft eliminated its Ethics & Society team the same month it shipped Office Copilot. Meta dissolved two responsible AI teams in 14 months. OpenAI has created and dissolved three dedicated safety teams in roughly 18 months.

As Asmar put it: safety is now "everyone's responsibility, and therefore no one's."

Meanwhile, the Pentagon requested a record $14.2 billion for AI and autonomy research in fiscal year 2026. The UN General Assembly voted 156-5 for a resolution to regulate autonomous weapons. Among the five countries that voted against: the United States and Russia.

Why This Matters If You're a Business

I know what you might be thinking. "Nicole, this is geopolitics. What does this have to do with my operations?"

Everything.

When you build your business operations on an AI platform, you're not just choosing a technology. You're choosing a partner. And the decisions that partner makes when they're under pressure tell you everything about who they actually are.

Principle under pressure is the only principle that counts. Every AI company has an ethics page. Every one of them made safety commitments. Google's lasted seven years before they deleted it for a defense contract. OpenAI's lasted until the money got big enough. xAI's never really existed. Anthropic is the only lab whose commitments have survived contact with real consequences — a $200 million contract and a Pentagon blacklist threat.

Your data is the real product. As Asmar documented, Stanford HAI research found that all six leading US chatbot companies feed user conversations back into training data by default. Medical questions, financial anxieties, business strategies, competitive intelligence — all of it going back in. A company willing to fold on surveillance guardrails should make you think hard about what other lines they'd cross with your data.

Safety and capability aren't at odds. Even the Pentagon's own officials admitted Claude is ahead on specialized government applications. Eight of the Fortune 10 use Claude. Anthropic has $14 billion in annual revenue. The $200 million Pentagon contract is less than 1.5% of that. This isn't a fringe tool making a principled stand — it's the capability leader choosing not to trade its values for a rounding error.

The Race Nobody Can Exit

Here's the part that keeps me up at night.

A game theory paper published in Nature formally modeled AI development as a competitive race and found that safety-first behavior is what everyone would collectively prefer, but it's not what the market selects for. The Nash equilibrium favors cutting safety corners. Not because anyone wants to — because anyone who doesn't falls behind.

Companies spend roughly $100 billion on AI capability development. Global public AI safety research gets about $10 million. That's a 10,000-to-1 ratio.

California's SB 1047, the first real US safety law for frontier AI, was vetoed despite 78% public support. OpenAI lobbied against it. The US AI Safety Institute was created in 2023 and effectively dismantled in early 2025. Its director left, saying no other group across the entire US government has the technical skill to match it.

VP JD Vance at the Paris AI summit: "The AI future is not going to be won by hand-wringing about safety."

So government won't regulate. Companies are incentivized to race. Users are already dependent. And the people who understand the technology best keep quitting and warning everyone on the way out.

In that landscape, a company voluntarily maintaining guardrails — at real financial cost — isn't just ethically notable. It's structurally important. It proves it's possible. It sets a bar that others can be measured against.

What I'm Doing About It

I'm not going to pretend I have the answer to autonomous weapons policy or the AI arms race. I don't.

But I do get to choose which AI I build my business on, and which AI I recommend to every company I work with. That choice was already Claude for capability reasons. Now it's Claude for the same reason I choose any long-term business partner: I need to know they won't fold the moment someone with leverage asks them to.

Anthropic's two red lines — no mass surveillance of Americans, no autonomous weapons without a human in the loop — aren't radical positions. They're the bare minimum for a technology this powerful. The fact that they're being treated as obstacles to doing business tells you how far the rest of the industry has already drifted.

This story isn't over. I don't know if Anthropic holds this line. The structural pressures are enormous — a $380 billion valuation, investors who expect growth, and a government that's decided safety is weakness.

But right now, today, Anthropic is the only major AI lab that looked at a $200 million contract and said: not at that price.

That means something. And if you're choosing which AI to build your operations on, it should mean something to you too.


Nicole Patten is an ex-Google Senior Engineer and founder of Elevate Online, an AI consulting firm. She runs her entire business on Claude and trains companies on AI implementation. She writes about practical AI at elevateonline.com/blog.
