A tireless superintelligence set to address, and solve, some of the world’s most pressing maladies or a soon-to-be-sentient digital consciousness that will eventually eliminate its human progenitors in the name of self-preservation?

Commentary and conjecture on the as-yet-unrealized potential of emerging artificial intelligence-driven tools ran the gamut in 2023, a year that may one day be marked as the start of humankind’s AI era.

While OpenAI introduced the world to its ChatGPT chatbot just weeks before New Year’s 2023, the last 12 months have seen a torrent of competing platforms arrive on the scene. Their capabilities include crafting human-like responses to questions and directives; creating images, audio clips and video footage from user prompts; writing computer code; solving tricky problems; and even penning a witty haiku or cracking a joke.

But because most of these tools, so-called large language models, are trained on massive data sets culled from the internet, AI output has been rife with the same falsehoods, biases, inequities and personal privacy intrusions as the internet itself. And besides the glaring issues that come with harvesting information that simply reflects all the foibles of humanity, AI-driven tools are also repurposing, without attribution or remuneration, mountains of copyrighted content from creators of literature, news, visual art, music and more.

Developers say they’re fine-tuning the way AI engines are trained to eliminate, or at least minimize, the internet’s detritus carrying over into new tools. As we watch for what’s ahead for AI in 2024 (and it’s likely to be a lot), here are some of our favorite AI news items from the year past:

A chess-playing AI bot, with claws

Back on Jan. 1, the popular online chess site Chess.com launched a quintet of chess-playing bots, complete with names, profiles and player ratings. The thing is, they’re all cat personas, and one of them, Mittens, drew a slew of interest thanks to her vicious play on the board and her scathing, but fun, in-game commentary.

The sly programmers at Chess.com served a bit of a wink-and-nudge that Mittens wasn’t going to play nice in the profiles it shared for the New Year’s Day announcement. While all the other cats came with player ratings (the higher the number, the better the player), Mittens’ rating was a question mark along with the quip, “Mittens loves chess … but how good is she?”

Interest generated by Mittens in early 2023 outpaced the surge that came on the heels of the wildly popular, chess-centric Netflix miniseries from 2020, “The Queen’s Gambit.” Chess.com was averaging 27.5 million games played per day in January and was on track for more than 850 million games that month — 40% more than any month in the company’s history. Along the way, Mittens earned fans across the chess playing spectrum, including among the game’s top stars.

BuzzFeed cuts humans, hires AI

Near the end of 2022, BuzzFeed announced plans to cut its workforce by 12% in an effort to rein in costs as the company’s stock value continued its downward spiral since going public in 2021.

But a partnership that would bring a very smart, albeit inanimate, content creator into the fold sent BuzzFeed’s stock on a tear, with the share price up over 300% in two days in January.

The website, best known for its listicles and quizzes, announced plans in January to adopt artificial intelligence technology developed by ChatGPT creator OpenAI to enhance both content and user experience, according to a memo to BuzzFeed employees from company CEO Jonah Peretti.

“In 2023, you’ll see AI inspired content move from an R&D stage to part of our core business, enhancing the quiz experience, informing our brainstorming, and personalizing our content for our audience,” Peretti wrote in the memo, according to Reuters.

Peretti told employees he believes the algorithmic approach of optimizing content for users is set to be replaced by new tools based on emerging artificial intelligence capabilities.

“If the past 15 years of the internet have been defined by algorithmic feeds that curate and recommend content, the next 15 years will be defined by AI and data helping create, personalize and animate the content itself.”

Bard blunder cost billions

Google’s coming-out party in early February for its artificial intelligence chatbot, Bard, was marred by a glaring factual error generated by the natural language processor, one that was widely shared in press materials and on social media channels.

The miss likely played a role in a two-day, $160 billion stock value drop for Google parent company Alphabet.

Even with the slip, Google parent Alphabet held its position as the fourth most valuable company in the world, with a market value of $1.2 trillion at the time.

Google posted a short GIF video of Bard as part of the announcement, promising it would help simplify complex topics, but it instead delivered an inaccurate answer.

In the advertisement, Bard is given the prompt: “What new discoveries from the James Webb Space Telescope can I tell my 9-year old about?”

Bard responds with a number of answers, including one suggesting the Webb telescope took the very first pictures of a planet outside Earth’s solar system, known as an exoplanet. The first pictures of an exoplanet were, however, taken by the European Southern Observatory’s Very Large Telescope in 2004, as confirmed by NASA.

By late December 2023, Google’s market value was hovering near the $1.8 trillion mark.

‘Godfather of AI’ quits Google, says new tools could become ‘master manipulators’

Geoffrey Hinton’s research laid the groundwork for emerging artificial intelligence tools like ChatGPT, Bard and others, but the British-Canadian scientist ended his decadelong stint working for Google in May 2023, citing concerns over the dangers posed by AI advancements.

In an interview with The World’s Marco Werman, Hinton said he’d had a change of heart about the potential outcomes of fast-advancing AI after a career focused on developing digital neural networks, designs that mimic how the human brain processes information and that have helped catapult artificial intelligence tools forward.

“They’ll be master manipulators because they’ll have learned that from us by reading everything on the web,” Hinton told The World. “Will they have their own goals and want to manipulate people to achieve their own goals or will we somehow be able to control them to help us?

“How do you control something that’s more intelligent than you? It’s very, very difficult to do that.”

But Hinton also believes the development of artificial intelligence can provide widespread benefits to humanity, including elevating productivity “in more or less every domain” and designing new materials that could expedite goals like achieving fossil-free energy production.

And, he sees particular potential in areas like medicine, where AI-driven tools could greatly expand the effectiveness of diagnostics and treatment.

How do we avoid an AI-driven extinction event?

Are emerging artificial intelligence tools destined to evolve into an existential threat on the same level as a potential global nuclear war or an unforeseen biological disaster?

That’s the contention of a single-sentence missive issued in June by the nonprofit Center for AI Safety that earned the signatures of a wide-ranging group of distinguished scientists, academics and tech developers including Turing Award winners Hinton and Yoshua Bengio, and leaders of the major AI labs, including Sam Altman of OpenAI and Demis Hassabis of Google DeepMind.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.

The Center for AI Safety said the statement, which has accrued hundreds of signatories since posting, has the support of a “historic coalition of AI experts” along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists and climate scientists who believe establishing the risk of extinction from advanced, future AI systems is now one of the world’s most important problems.

“We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” said Dan Hendrycks, director of the Center for AI Safety, in a press release.

Utahns weigh in on AI as regulators scramble to catch up

The Deseret News and the Hinckley Institute of Politics reached out to Utahns this year to gauge their thoughts on artificial intelligence as rising concern among federal lawmakers pushed regulation efforts to the forefront.

In a statewide poll of registered Utah voters conducted May 22-June 1, 69% of respondents said they were somewhat or very concerned about the increased use of artificial intelligence programming while 28% said they were not very or not at all concerned about the advancements.

The poll of 798 registered Utah voters was conducted by Dan Jones and Associates and has a margin of error of plus or minus 3.46 percentage points.
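For readers curious where a figure like that comes from: a margin of error near 3.5 points is roughly what the standard worst-case formula yields for a sample of 798. Here is a minimal sketch, assuming a 95% confidence level and the conservative proportion p = 0.5 (the pollster’s exact method may differ slightly):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling margin of error: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

# For a sample of 798 respondents at 95% confidence:
print(round(margin_of_error(798) * 100, 2))  # prints 3.47
```

The small gap between this 3.47 and the reported 3.46 likely comes down to rounding or a slightly different critical value in the pollster’s calculation.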

The concerns over AI reflected by Utahns are widely shared by political leaders as well, and efforts to craft a regulatory response to AI advancements are well underway in the U.S. and around the world.

In May, the U.S. Senate convened a committee hearing that leaders characterized as the first step in a process that would lead to new oversight mechanisms for artificial intelligence programs and platforms.

Sen. Richard Blumenthal, D-Conn., who chairs the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, called a panel of witnesses that included Sam Altman, the co-founder and CEO of OpenAI, the company that developed ChatGPT, DALL-E and other AI tools.

“Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” Blumenthal said.

Those past mistakes include, according to Blumenthal, a failure by federal lawmakers to institute more stringent regulations on the conduct of social media operators.

“Congress has a choice now,” Blumenthal said. “We had the same choice when we faced social media, we failed to seize that moment. The result is predators on the internet, toxic content, exploiting children, creating dangers for them.

“Congress failed to meet the moment on social media, now we have the obligation to do it on AI before the threats and the risks become real.”