It has been most unfortunate to see how indebted the Trump administration seems to be to Big Tech. I’ve previously recounted how a single sentence was snuck into the 1,118-page “big, beautiful budget bill” in May, which would have banned the regulation of artificial intelligence by states for a full decade. Fortunately, there were vigilant senators who caught this sneaky move and stripped the measure from the reconciliation version of the bill by a vote of 99 to 1.

And yet here we are again. Apparently, those with money and clout will continue to push this sociopathic agenda one way or the other. This time, two alternative routes to achieving this ban are being pursued: the first is to once more try to insert a sentence into a large, important bill, to wit, the National Defense Authorization Act, which funds the Department of Defense. Frankly, given the debacle in May, this strategy appears to be dead on arrival.

Knowing those chances are low, the Trump administration is reportedly considering a Plan B: issuing an executive order banning state regulation of AI. The upshot would be that the Department of Justice would then be in a position to punish states that promulgated AI regulation, by suing them and by withholding funds. The draft version of the EO would establish several lines of action for the federal government.

First, the Department of Justice would establish an AI Litigation Task Force to challenge state laws in court on the grounds that such regulation would “unconstitutionally regulate interstate commerce.” The power to regulate interstate commerce is granted to the federal government as one of its core powers vis-à-vis the states. (Unfortunately for the DOJ, there’s a pesky 10th Amendment standing in its way.)

In addition, the secretary of commerce would be tasked with creating a list of state laws that would be “onerous” for AI companies to implement, and states with such “onerous” laws would also lose funding under various Commerce programs.

A smidgen more promisingly, within 90 days of the issuance of the order, the Federal Communications Commission would “initiate a proceeding to determine whether to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.” Then at some future undefined point, the administration would prepare “a legislative recommendation establishing a uniform Federal regulatory framework for AI that preempts State AI laws that conflict with the policy set forth in this order.”

This, of course, has been the issue all along. Congress has taken almost no action on federal-level AI regulation, forcing the states to act in its stead to protect their citizens. Now the administration is proposing to scrap state laws and, at some unspecified point in the future, provide some basic protections for citizens. It’s almost as if citizens were an afterthought to the much more important goal of protecting AI companies. This amounts to nothing less than a sellout of the American people.

But surely AI companies wouldn’t be in the business of harming humans, would they? Unfortunately, the utter casualness with which AI companies treat American lives is on full display for all to see. Consider the lawsuit seeking damages from OpenAI in the suicide of Adam Raine.

Raine, a 16-year-old in California, discussed his suicidal ideas with OpenAI’s ChatGPT extensively over the course of months. In April, he killed himself, leaving his parents to read the lengthy record of these “chats.” The Guardian notes, “The lawsuit alleges the teenager discussed a method of suicide with ChatGPT on several occasions, that it guided him on whether a suggested method would work, [and] offered to help him write a suicide note to his parents.”

OpenAI’s defense was finally revealed in a California courtroom last month. The company asserted that “to the extent that any ‘cause’ can be attributed to this tragic event,” Raine’s “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”

OpenAI says its terms of use prohibit asking ChatGPT for advice about self-harm and contain a limitation-of-liability provision stating that “you will not rely on output as a sole source of truth or factual information.”

In other words, if AI nudges you toward suicide, that’s your fault. If AI lies to you and you believe it, that’s your fault. The right of AI companies to make money off you, even while harming you, currently trumps all.

If there were ever a case in which government intervention in the marketplace was justified to prevent harm to the American people, this is it. It’s the harm caused by AI that’s “onerous,” not the regulation of AI.

What makes the situation even sadder is that the states have come up with a variety of innovative regulations that Congress could adopt as part of federal minimum standards on AI. The states have led out, and we should be learning best practices from their efforts.

Utah, for example, has been at the forefront of policy efforts to shield children from the worst effects of our internet-centered culture. It was one of the very first states to insist that online porn sites implement age verification measures, and this year it required app stores to verify users’ ages as well. Utah has also legislated protections for its citizens when they interact with so-called “mental health chatbots.”

I’ve written previously about how Utah Sen. John Curtis’ newly proposed bill, the Algorithm Accountability Act, is another praiseworthy attempt to prevent and disincentivize the harm done to our citizens through the use of AI algorithms. Utah’s creative and proactive stance can only be applauded. Yet under the executive order under consideration, the federal government would prohibit Utah from enforcing any of these admirable AI-related laws.

Fortunately, there is a broad, bipartisan consensus that prohibiting state regulation of AI is unconscionable. From Gov. Gavin Newsom to Sen. Josh Hawley, political figures on both the left and the right have come out in opposition to this shameful agenda. More than 300 state lawmakers from all 50 states, along with 36 state attorneys general, have signed letters opposing the regulation ban, and a new survey finds Americans opposed to such a ban by a 3-to-1 margin.

The continued pushing of this agenda in the face of bipartisan resistance simply “shows what money can do,” according to Hawley. He’s not wrong. On one side are the pecuniary interests of our tech billionaires and even trillionaires; on the other, the lives and wellbeing of the American people. It should be a no-brainer for the president of the United States of America. Why isn’t it?
