Last week there was an important showdown. It wasn’t between Washington D.C. and Tehran, and it wasn’t between Congress and the Clintons over the Epstein files. It was between Anthropic and the Pentagon.

Anthropic is one of the biggest artificial intelligence firms and the developer of Claude, an AI that is more capable than rival products such as ChatGPT, at least along some dimensions. In fact, it is so capable that while the Pentagon has hedged its bets by using more than one AI product, Claude is the one that has apparently already proven its worth to the Department of War: it was used within Palantir's systems to assist with the Nicolás Maduro exfiltration.

This might seem like a win-win: big bucks for Anthropic and big capabilities for the Department of War. But the head of Anthropic, Dario Amodei, has drawn red lines. He told the Department of War that it could not continue to use Claude unless the Pentagon was willing to offer assurances that it would not be used in domestic surveillance or in autonomous AI weapons systems where there was no human in the loop.

Revealingly, and chillingly, the Department of War said it was unwilling to meet those conditions. Friday was the deadline the Pentagon set for Anthropic to submit or possibly face the intellectual-property equivalent of eminent domain: invocation of the Defense Production Act, which would compel Anthropic to supply the Department of War with what it wanted.

Alternatively, the Pentagon could choose to hit Anthropic in the pocketbook by declaring it a “supply chain risk,” which would prohibit it from conducting most business. It’s a bit contradictory to call Claude both essential to the work of the Department of War and too risky to be used, but whatever.

Anthropic told the Department of War to pound sand.

On social media right after this exchange, President Donald Trump told Anthropic the government would no longer use Claude. The Pentagon decided on the “supply chain risk” determination, and an hour later Anthropic’s lawyers were preparing to sue the federal government.

OpenAI then swooped in and grabbed the Pentagon account Anthropic lost, but says its arrangement with the Pentagon will ensure that OpenAI products can’t be used in domestic surveillance operations. OpenAI has had difficulty explaining how this is possible, given the Pentagon’s ultimatum to Anthropic. The former head of policy research at OpenAI put it this way: “OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving.”

The narrative paints Anthropic as the hero of the story. My immediate reaction, like many people’s, was to applaud Anthropic for its principled stance. How wonderful that there is a Big AI firm with some morality, some larger vision beyond its own bank accounts!

And then I realized how deeply ironic this whole situation is, and I stopped celebrating.

Anthropic is, almost by definition, engaged in the worst forms of rapacity, as are all the big AI companies. Let us count the ways.

The business model of Big AI is to sell companies products that will allow them to shed most of their employees. Exactly how such a society, one in which only jobs demanding physical labor remain, is supposed to survive economically is never made plain.


How you govern a society in which labor income is zero is unfathomable. Elon Musk apparently thinks we’ll all lounge around in leisure while unemployed. But just ask Jack Dorsey’s employees how they feel about his recent mass firing of 40% of his company’s staff in anticipation of AI replacing them.

Big AI also wants your power, your water and your land — in very large, even obscene quantities. And if our country won’t stand for that, they’ll take it from poorer countries. Sensing the rising alarm, Trump promised in his State of the Union address that Big AI would have to ensure electricity prices did not rise for other consumers.

Big AI also doesn’t want governments to regulate it and has co-opted the federal government to do its dirty work on this score. Recently, a Utah lawmaker was told by the federal government that his perfectly reasonable state bill offering a smidgen of regulation was a non-starter and that he should cease and desist his efforts. All the Big AI firms, Anthropic included, now have PACs actively contributing to the campaigns of the candidates most in line with their position on regulation.


There are larger issues here, too, besides candidates and power grids. Big AI wants to take every creative work humans produce without payment. Big AI wants to nudify you without your consent. Big AI wants to dox you on sight. Big AI is even hoping to one day read your very thoughts. The last small shreds of privacy left in your mind and heart must be eliminated.


Big AI apparently wants you, and especially your children, so cognitively and emotionally dependent on it that you will no longer be able to navigate your life without its knowledge and care. Big AI is sanguine about creating a world in which we cannot even believe our own eyes or ears, and the concept of truth no longer has sensory referents.

In short, Big AI is indifferent to a world in which humans are superfluous. Of course, those who run Big AI will never make themselves superfluous; they will be a rich and powerful new species of human. The rest of us will be Morlocks to their Eloi.

All of this comes despite the fact that Anthropic’s Dario Amodei has made abundantly clear, through a series of high-brow essays, that he has actually thought about these risks. Thought about them, and then decided, on our behalf apparently, that he was OK with them.

So spare me the heroic framing of Anthropic’s thumbing its nose at the Pentagon. Don’t for one moment think that Anthropic or any of the Big AI companies are thinking about any of the rest of us, except how their products can make us completely redundant to our own lives. If you think the Pentagon’s ask is chilling, think again about Big AI’s ask — of us all.
