In December, at the urging of White House AI Czar David Sacks, President Trump signed an executive order (EO) to block states from regulating AI. Despite two previous failed attempts by Republican congressional leadership to pass the measure into law, the White House justified its decision as necessary to oppose the onerous and “woke” regulations of blue states. To reassure Trump voters who vocally opposed the policy, both the text of the EO and Sacks himself made clear that these new powers would not be used to oppose child safety regulations.
But all that turned out to be misleading. The White House abruptly reneged on its compromise with red-state lawmakers by targeting Utah’s Republican-sponsored HB286, calling it, in a one-sentence memo to state lawmakers, “unfixable.”
The first state to face significant pushback from the administration for AI regulation, in other words, is not Colorado, New York or even California, but Utah, a reliably red state that has long been a leader in creating common-sense tech safety standards for children. Mere months after signing the EO, the White House has thoroughly undermined its own rationale that the order is critical for addressing the bad policies of blue states.
Far from being a “woke” bill that imposes a leftist ideology onto national AI models, HB286 is a simple transparency regulation that would require AI companies to publicly report how they test for and protect against severe risks to children and the public. HB286 is narrowly tailored, and — in the age of extraordinarily powerful AI systems that are designed to operate autonomously — common sense.
Recent headlines more than prove the need for the bill. With respect to minors, the tragic stories of 14-year-old Sewell Setzer III and 16-year-old Adam Raine have made clear what’s at stake when chatbots like ChatGPT or Character.AI sexually and emotionally seduce innocent children at scale, isolating them from loved ones and encouraging them to take their own lives. The public deserves to know how these systems are being designed for the safety of children.
As for general public safety, even David Sacks is on record recently saying that the current technological trajectory presents “a nonzero risk of AI growing into a superintelligence that’s beyond our control.” That might seem too hypothetical for some, but we should be clear that achieving superintelligence is an explicit goal of major AI labs, such as Meta. By definition, an autonomous superintelligence with “emergent behavior,” meaning the ability to teach itself new, unprogrammed operations, presents a risk that it could inflict, or be used to inflict, harm on others. These are not mere hypothetical possibilities; these are emerging realities. A world in which the Pentagon is using Anthropic’s AI model Claude to take down Nicolás Maduro is a world in which AI can be used by bad actors to inflict significant damage on the public. Utah has every right to prepare for this eventuality by requiring these companies to publicize their safety standards, as HB286 does.
But lacking common-sense regulation like this, many of these frontier AI companies are already operating in total disregard for public safety. Case in point: OpenAI quietly dropped an explicit commitment to “safety” from its mission statement when it thought no one was looking.
It is standard practice to test products that come to market for safety, especially as their capabilities grow and their uses become more infrastructural and general. The only exceptions to this rule have been digital products, and that terrible story only underscores the critical need for safety standards: over the last several decades, social media and app stores have been uniquely exempt from this practice, and just look at the damage that has been done. As Utah Gov. Spencer Cox recently said, “However much you hate social media, you do not hate it enough.”
The people of Utah want safe AI. In a recent survey of 6,200 voters nationwide by the Institute for Family Studies, with an oversample of more than 500 in Utah, voters in the Beehive State overwhelmingly support measures to penalize frontier AI companies for catastrophic harms. More than 70% of Utah voters want AI companies to be held seriously liable if AI systems cause significant damage to public infrastructure or are used to seriously harm people.
Thankfully, Republican lawmakers have repeatedly faced down the biggest challenges in the fight to protect the children and people of Utah from exploitative uses of a powerful technology. Time and again, Utah has stood up to Big Tech and its waves of lobbyists and has courageously passed laws — covering social media, smartphones and app stores — that were the first of their kind in the nation. Utah Republicans did not listen to the national elites who said it couldn’t be done. Utah fought on. It led. Others followed.
But this is a different test, with the White House now siding with Big Tech. What should Republican lawmakers do? No lawmaker should want to defy party leadership needlessly. Loyalty is an important political virtue and, very often, a sign of good character.
But loyalty should go both ways. The White House should be backing the efforts of the Republican sponsors of HB286, not fighting them. Instead, the architects of the White House’s AI policy are reneging on their guarantees to red-state lawmakers and using the immense power of the executive branch to advance the objectives of Big Tech over the will of the people.
There are higher loyalties than to party: loyalties to the Constitution, to family, to conscience and to one’s own voters. Those are the loyalties that are being tested right now.
The Institute for Family Studies polling shows that Trump voters in Utah prefer candidates who hold AI companies accountable for harms and oppose candidates who work to give the AI industry special carveouts. In other words, if the Republican lawmakers of Utah heed the call to advance HB286, they should be confident that the voters of Utah are with them.