When it comes to producing content that protects children's well-being, Hollywood values are not Utah values. Moral arguments from Hollywood figures visiting Utah to advocate for new AI laws around kids' safety should be treated with healthy skepticism. Thoughtful AI policy is critical to the well-being of both children and adults, now and in the future, so we must deliberately balance the good that AI is doing, and can do through further innovation, against risks to kids' well-being. That means taking the time to get AI policy right.
AI models are already helping people, including children, flourish in a variety of ways. For example, AI is:
- Keeping children in schools safer from school shootings
- Helping kids with dyslexia learn to read
- Getting children faster emergency care by improving 911 response times
- Improving the lives of kids with prosthetic limbs by making everyday tasks easier
- Freeing up parents from daily tasks to spend more time with their children
The Utah Legislature is considering legislation that, while well-intentioned, could stymie the development of AI models that would improve children’s lives. When we treat AI models as child safety risks before they see the light of day, we limit the good they can produce for people.
A slew of proposed AI laws is now public in the current legislative session. The most prominent is HB286, which would require AI developers to create and publish risk assessments, public safety plans and child protection plans for their frontier AI models, the same models that will produce more of the good for human beings noted above.
Proactively protecting children from potential harm from emerging technologies that are becoming increasingly ubiquitous sounds good on the surface. But HB286 risks seriously curtailing innovation that would otherwise benefit both children and adults.
Imagine you are an AI innovator subject to HB286. You’re developing a new AI model that can help improve kids’ health, education and future employment prospects. But you recognize that some AI users could take your model and use it in ways that may be detrimental to children. Under HB286, you must assess and report these hypothetical future risks and how you will mitigate them.
When those reports go public, advocates opposed to tech companies criticize you in the press and online for developing AI that puts kids at risk. Your company’s reputation takes a hit, and you have to spend time and money restoring it. Given this, you might reasonably slow or even halt some AI development due to political risk, rather than focusing on building the best AI model for people to use.
Sacrificing kids' futures to try to protect them in the present isn't the moral high ground. Nor is it immoral to pursue creative innovations in AI development that improve children's futures. The moral and societal good is served when the free market creates pressure to produce something that others find valuable for their lives.
Utah has a national reputation as a haven for entrepreneurship and innovation and a model for sensible, forward-looking state policy through its Office of Artificial Intelligence Policy. This model prioritizes taking sufficient time to deliberate with advocates and private-sector stakeholders to produce AI policies that promote further AI innovation while creating sound AI guardrails. Lawmakers should apply the same wisdom to HB286.
It is rare for high-stakes legislation to be perfect out of the gate. Given the complexity of AI, finding a sound legal approach will require more than a few weeks. Taking the time to study the impacts of regulating the development of new AI models for child safety is the right thing to do for children in both the present and future.