The open internet is a failure at this point. Clickbait and pop-ups desperately try to distract you from the article you intend to read, while a barrage of cookie consent banners and "accept all" privacy buttons wears out your mouse button. If you are like me, you tend to throw up your hands and click blindly just to get through the mess.
It may (or may not!) surprise you to learn that much of what is broken is the result of regulation gone wrong, in the form of data privacy laws originally intended to protect consumers. Instead, what these rules mostly did was push us behind paywalled accounts where we trade our data and anonymity for convenience. Adding to the irony, the laws most to blame were adopted in other states and in foreign countries and don't technically apply to us here in Utah. But because we all browse the same internet, we are stuck with them anyway.
I lament what happened to the internet. But I know why it happened.
It was still early days, and lawmakers were trying to hit a moving target. They were attempting to regulate a world that was evolving fast. So like hunters shooting at a bird on the wing, they took a shotgun approach. They blasted away at the internet horizon with a wide spread of shot, and this is what we got.
The next technology of promise is, of course, artificial intelligence. This time, the stakes are higher and the regulatory task even harder. If the internet was moving like a bird on the wing, then AI is an interdimensional shape-shifter. Good luck creating laws today that will fully cover the risks of tomorrow with anything short of some dystopian ban on thinking machines.
Which is why much of the debate right now is over whether AI should be regulated at all.
In 2025, lawmakers across the 50 states introduced more than 1,000 AI-related bills. Congress, justifiably worried that local lawmakers would go overboard, came very close to passing an AI regulatory moratorium that would preclude states from weighing in. Many states, including Utah, pushed back firmly. But on Dec. 11, the White House took matters into its own hands with an executive order laying out a plan to discourage state laws that regulate AI in imprudent ways.
That’s where we are today, with some big questions that still need to be answered: Should AI be regulated? Who should be doing it? How would it even work?
Enter Utah Gov. Spencer J. Cox and the Utah Legislature.
At the recent AI Summit hosted by Utah’s Office of Economic Opportunity, Gov. Cox proposed what could be a third way. He began with some tough talk, calling out and then calling on technology companies to build in a way that promotes “human flourishing,” and he proposed some guardrails. He was followed by a panel of state legislators who described their priorities.
Here is what I like about what they described. The Utah approach appears to be more about process than ivory-tower rulemaking. For example, the state will run regulatory sandboxes that give innovators relief from existing rules in a controlled environment. It will also promote learning centers and draw a hard line on protecting children. The overall policy could be summed up as: We want you to succeed, and we will give you some space and even support, but we are watching you.
Historically, our most successful regulations have been ones that created frameworks rather than ones that tried to control outcomes. They empowered consumers and promoted competition. They set the rules of the game but didn’t pick winners. Such an approach is perhaps the only way to balance the enormous risks of AI with its enormous opportunities.