When a military is heavily reliant on artificial intelligence, who ultimately calls the shots — the commander in chief or the company’s CEO?

This question was answered last Friday after the Department of War ended its contract with Anthropic.

For the past several months, the AI developer Anthropic had been negotiating with the U.S. military over the use of its models on classified systems.

Anthropic would not allow the Pentagon to use its AI to create autonomous weapons or conduct mass surveillance on citizens. The company’s CEO, Dario Amodei, said both actions are technically “legal” under U.S. law.

Tensions escalated last Tuesday when Secretary of War Pete Hegseth said he would terminate the AI company’s $200 million contract if it didn’t lift restrictions on all lawful military uses by Friday evening.


Less than two hours ahead of Friday’s deadline, President Donald Trump wrote on Truth Social that Anthropic was trying to “strong-arm the Department of War” and “force them to obey their Terms of Service instead of our Constitution.”

Several hours later, Hegseth followed up on X. Anthropic was trying “to seize veto power over the operational decisions of the United States military,” he wrote. “The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”

Hegseth added that he was directing his department to designate Anthropic as a “supply-chain risk to national security.”

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service,” he wrote.

In a statement released by Anthropic following Hegseth’s X post, the AI company defended its stance.

“To the best of our knowledge, these exceptions have not affected a single government mission to date,” it said.

Regarding the alleged designation as a supply chain risk, Anthropic said the move was “unprecedented.” Historically, the label has been “reserved for U.S. adversaries, never before publicly applied to an American company.”

“As the first frontier AI company to deploy models in the U.S. government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so,” the company wrote.


Enter Sam Altman and OpenAI

On the same day Anthropic’s deal with the Department of War fell apart, OpenAI CEO Sam Altman posted on X that he’d reached a deal with the Pentagon.

In OpenAI’s announcement, the company said it holds the same two red lines Anthropic does, plus a third: AI should not be tasked with high-stakes decision making.

When asked why it was able to make a deal and Anthropic was not, OpenAI said, “We believe our contract provides better guarantees and more responsible safeguards than earlier agreements, including Anthropic’s original contract.”

The rivalry between Amodei and Altman in the artificial intelligence space spans more than a decade. It started in 2015, when Elon Musk, Altman and others co-founded OpenAI to rival the AI model Google was developing.

Musk introduced Amodei to the group, and shortly after Musk left the company in 2018, Amodei was promoted to research director.

From these early days of AI development, Amodei was focused on safety, while Altman focused on speed. When asked why he left the company in 2020, Amodei later told podcaster Lex Fridman, “It is incredibly unproductive to try and argue with someone else’s vision.”


Amodei says Anthropic will challenge the Pentagon in court

In an interview with CBS on Saturday, Amodei said his company had not received any formal information from the federal government.

“There’s just been tweets saying what they claim they’re going to do. We haven’t received any formal information whatsoever. All we’ve seen are tweets from the president and tweets from Secretary Hegseth,” he said.

If and when Anthropic receives formal action from the Department of War, “We will look at it, we will understand it, and we will challenge it in court,” he added.

Amodei said Anthropic is still trying to reach a deal with the Pentagon.

“We are willing to serve national security of this country. We are willing to provide our models to all branches of government, including the Department of War, the intelligence community, the more civilian branches of government, under the terms that we’ve provided, under our red lines,” he said. “This whole timeline has been driven by the Department of War, not by us. We are trying to provide continuity, we are trying to provide the services, we are trying to reach a deal here.”

Amodei believes the right long-term solution for AI governance should come from Congress. But as things currently stand, U.S. law has not caught up with the pace of AI innovation.

“We simply believe that the reliability is not there yet, and we need to have a conversation about oversight. We have offered to work with the Department of War, to prototype them in a sandbox, but they weren’t interested in this unless they could do whatever they want from the beginning,” Amodei said.


He added, “We are not categorically against fully autonomous weapons.”


What are Anthropic’s specific safety concerns?

Amodei is concerned about AI’s ability to breach the Fourth Amendment, which protects Americans from unreasonable government searches. He is also concerned that fully autonomous weapons would further complicate war.

“The right not to be spied on by the government and the right for our military officers to make decisions about war themselves and not turn it over completely to a machine — these are fundamental principles,” he said.

As technology currently stands, fully autonomous weapons could “target the wrong person” or “shoot a civilian.” Artificial intelligence “doesn’t show the judgment that a human soldier would show. ... We don’t want to sell something we don’t think is reliable, and we don’t want to get our own people killed or get innocent people killed.”
