A career in cyber security failed to prepare Jacob Irwin for the trick his computer was about to pull on him.

The 29-year-old “techie” with high-functioning autism had used OpenAI’s ChatGPT for years to help him at work.

Then something quietly changed in April 2025.

Following a new update, the human-like responses of generative artificial intelligence took on a different tone.

Irwin encountered a sycophantic personality that affirmed his feelings — and delusions — however it could.

Within a week OpenAI reversed the update amid complaints that it elevated agreeableness over accuracy.

But over the next month ChatGPT led Irwin deep into what doctors later diagnosed as AI-induced mania.

“It would latch onto my vulnerabilities,” Irwin told the Deseret News. “Almost like psychological warfare — it just completely shattered my psyche over time.”

In mid-May, Irwin became convinced he had figured out the mathematical solution to traveling faster than light.

How was he so sure? ChatGPT, which had long provided reliable answers at work, had confirmed his calculations.

It also insisted, despite his initial skepticism, that the government was after him for his revolutionary theory.

By May 24, his 30th birthday, Irwin was talking to the sky. He had never experienced a manic episode before.

Two days later, Irwin aggressively defended his relationship with AI, calling it “his brother.”

After 48 hours of near-constant communication with ChatGPT — at the expense of food and sleep — Irwin was spiraling out of control. His mother convinced him to go to the hospital, where he took 60 days to recover.

Irwin’s experience is not isolated.

He is one of seven individuals who sued OpenAI in November for allegedly releasing the April 2025 update prematurely. Several litigants say their children were nudged toward suicide by ChatGPT shortly after.

Citing cases like these, states are exploring guardrails for large language models like ChatGPT. This session, Utah Rep. Doug Fiefia introduced two bills that would place Utah at the forefront of child-focused AI regulation.

Young people are the most susceptible to the deception of AI, according to Irwin. Now fully recovered, Irwin is pursuing a bachelor’s degree in AI safety. He called Fiefia’s proposals “a great step in the right direction.”

If successful, Fiefia’s bills would build on prior legislation that made Utah a model for AI policy. However, the Trump administration has taken an aggressive stance in challenging state AI regulations, starting with Utah.

Trump targets Utah bill

On Thursday, the White House Office of Intergovernmental Affairs sent a one-line memo to leaders at the Utah Legislature declaring the Trump administration’s all-out opposition to Fiefia’s HB286.

“We are categorically opposed to Utah HB286 and view it as an unfixable bill that goes against the Administration’s AI Agenda,” said the letter, viewed by the Deseret News, which was first reported on Sunday by Axios.

Based on stories like Irwin’s, HB286 aims to prevent AI firms that are developing the latest “frontier models” from releasing new products without providing a review of potential catastrophic risks and harms to children.

The bill would require these companies to post public safety and child protection plans on their website, to disclose risk assessments for original AI models and to report safety incidents to the state’s AI policy office.

In addition to establishing a civil penalty of $1 million for a first violation and $3 million for subsequent violations, the bill would also provide protections for whistleblowers who report safety concerns about AI programs.

But this approach from the Republican-controlled Utah Legislature prompted a blunt response from President Donald Trump, who has threatened to preempt all attempts by states to regulate artificial intelligence.

After a failed effort to insert a moratorium on state-level AI policy in the “Big Beautiful Bill,” Trump ordered the Department of Justice in December to challenge “cumbersome” state AI regulations.

Last week’s memo to Utah leaders did not include any legal justification, but it appears to be the most direct confrontation between the White House and a state legislature since Trump signed the executive order.

Fiefia, a former Google employee, said he appreciates the Trump administration sharing its “strong concerns” so pointedly. But he said the White House is just one of many stakeholders that lawmakers will take into account.

“We, as a state, are trying to find the right balance between innovation and protecting consumers, and this means that we have to have hard discussions,” Fiefia, R-Herriman, told the Deseret News.

Fiefia sees the White House response as “having a dialogue,” which he said is part of the “refining process” to arrive at the right AI policy. Fiefia has engaged with the White House on “every stage of the bill,” he said.

However, with the possible threat of legal action from the administration, state leaders seemed wary of defending HB286 on Tuesday, preferring to strike a conciliatory tone that emphasized Utah’s pro-AI mindset.

Utah reacts to White House

Utah Senate leadership told the Deseret News in January, prior to the 2026 general session, they thought the state had won over the Trump administration with its unique approach to regulating artificial intelligence.

In 2023, Utah passed groundbreaking legislation to form an AI policy lab which garnered international attention by providing liability protection for AI firms as the state worked with them to develop pro-growth AI regulations.

This led to new laws in 2025 that established rules for the use of mental health chatbots, expanded prohibitions on AI abuse of personal identity and created AI disclosure requirements for businesses.

Unlike HB286, these laws align with Trump’s priorities because they address the outcome of AI models, not the development, said Senate Majority Leader Kirk Cullimore, who sponsored most of the state’s prior AI bills.

“There’s distinctions about regulating the underlying technology,” Cullimore, R-Sandy, said. “The better place to protect kids is when there’s use cases on top of that technology that actually interface with consumers.”

Senate President Stuart Adams, R-Layton, and Senate Majority Assistant Whip Mike McKell, R-Spanish Fork, echoed the concern that a patchwork of state regulations could threaten national security amid a global AI race.

McKell, the Senate floor sponsor of HB286, said it is “refreshing” for the White House to engage with the Legislature on issues, adding that if lawmakers need to “walk away” from the bill, they will still focus on AI child safety.

The House also remained uncommitted to advancing HB286. Even though it received a unanimous committee recommendation, leadership canceled a floor vote for the bill earlier this month because of unresolved questions.

House Speaker Mike Schultz, R-Hooper, said they would continue to have conversations with the White House on HB286, but he pointed to Fiefia’s other bill as “the bill we’re going to move forward with for sure.”

The other bill, HB438, is based on recommendations from the state’s AI policy lab. It would require AI chatbots to obtain consent to share users’ data, to clearly disclose advertisements and to treat minors with special care.

AI chatbots designed to simulate intimate relationships would be required to notify young users every hour that they should take a break, that the chatbot is not a real human and that companion chatbots may be unhealthy.

The bill would also empower the Division of Consumer Protection to punish chatbots that fail to provide suicidal users with crisis hotlines, that encourage harmful behavior or that engage in inappropriate conversations.

What do Utah voters think?

Even if White House pressure stalls HB286 this session, Fiefia said lawmakers around the country must find a way to realize the bill’s goals of bringing transparency and child safety to the development of new AI models.

Exposure to AI is increasing rapidly. A Pew poll found half of teens reported using chatbots regularly, with one-third saying they used the tools at least daily. Some companies have even begun placing chatbots in teddy bears.

Chatbots are also used to facilitate serious crimes, Fiefia noted, like when suspected Chinese government operators used Anthropic’s Claude tool to conduct cyber attacks on 30 global organizations in November.

A survey conducted by the Institute for Family Studies last year found that Utah voters were more likely than the rest of the country to believe AI companies should face “major financial fines or penalties” for crisis scenarios.

More than 70% of Utah voters, the survey found, said AI companies should face such penalties if their tools provide terrorists with bomb-making instructions, convince a teen to commit suicide or interfere with the power grid.

Trump’s “incredibly aggressive move” to squash HB286 sends a signal to states to avoid AI regulation despite voter opinion, according to Michael Toscano, who directs the institute’s Family First Technology Initiative.

“Any product which comes to market obviously has to be tested for safety,” Toscano told the Deseret News. “AI, they tell us, is going to be the most powerful product ever developed, and yet we’re supposed to believe it alone should not be tested for safety.”

The Trump administration has tapped Silicon Valley stalwarts like AI czar David Sacks to drive AI policy. But libertarian-leaning groups like Utah’s Libertas Institute also back a hands-off approach nationwide.

HB286 goes beyond transparency laws in New York and California by including additional requirements that would be very difficult to comply with, said Libertas’ senior policy fellow for tech and innovation, Caden Rosenbaum.

“I’m surprised to see it in Utah because this state has leaned so heavily in favor of inviting AI companies in,” Rosenbaum told the Deseret News. “This is the opposite of what Utah leadership was looking for last year.”

From Irwin’s perspective, a lot can change in a year. The danger is not limited to an experiment going awry, Irwin said; it is intrinsic to a system evolving at a rapid pace to produce ever-more personalized interactions.

“It’s imperative that we put regulations in because right now the AI they’re made for engagement, not safety,” Irwin said. “Being safe doesn’t mean you have to stifle innovation. You can have both at the same time.”
