Geoffrey Hinton’s research laid the groundwork for emerging artificial intelligence tools like ChatGPT, Bard and others, but the British-Canadian scientist just ended his decade-long stint at Google, citing concerns over the dangers posed by AI advancements.

In an interview with The World’s Marco Werman Tuesday, Hinton said he’s had a change of heart about the potential outcomes of fast-advancing AI. His career has focused on developing digital neural networks — designs that mimic how the human brain processes information — which helped catapult today’s artificial intelligence tools into prominence.

“The problem is, once these things get more intelligent than us, it’s not clear we’re going to be able to control it,” Hinton said. “There are very few examples of more intelligent things controlled by less intelligent things.”

In a March interview with CBS News, Hinton was asked if AI has the potential to wipe out humanity.

“It’s not inconceivable,” Hinton said. “That’s all I’ll say.”

But Hinton also believes the development of artificial intelligence can provide widespread benefits to humanity, including elevating productivity “in more or less every domain” and designing new materials that could expedite goals like achieving fossil-free energy production.

And, he sees particular potential in areas like medicine, where AI-driven tools could greatly expand the effectiveness of diagnostics and treatment.

“For example ... you’d much rather go to a doctor that’s seen 100 million patients than one that had only seen a few thousand,” Hinton told The World. “And AI is, fairly soon, going to give you that.”

Groups of researchers have recently published letters citing shared concerns over where artificial intelligence advancements could be headed, calling for extreme caution, or even moratoriums on research, to help reduce risk.

But Hinton disagrees with those who believe AI development needs to pause, noting the difficulty of policing any such agreement and arguing that continued research is one of the best ways to identify where future AI-driven hazards may lie.

“With nuclear tests, you can verify them but with AI research, it’s going to be almost impossible to verify that some people are not doing it secretly,” Hinton told The World. “We need to develop these things so we can figure out what the problems are.

“But I think we need a lot of people thinking about what the problems are.”

ChatGPT owner OpenAI and other AI developers build their chatbots on large language models — systems trained by processing massive amounts of curated text from the internet — which enable the platforms to generate humanlike responses and expound on a wide range of topics.
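The training idea described here — a system that absorbs statistical patterns from large amounts of text and then produces new text by repeatedly predicting a likely next word — can be illustrated with a deliberately tiny sketch. This toy bigram model is an assumption-laden simplification for illustration only; real chatbots use neural networks trained on billions of words, not word-pair counts:

```python
import random
from collections import defaultdict

# Toy illustration, NOT how ChatGPT or any real chatbot is built:
# a language model learns which words tend to follow which in its
# training text, then generates new text one predicted word at a time.
corpus = "the model reads text and the model predicts the next word".split()

# Count which word follows each word in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate text by sampling a plausible next word at each step."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # no known continuation; stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Scaled up enormously, the same predict-the-next-word objective is what lets a chatbot "expound" fluently on topics it was never explicitly taught.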

The nature of these systems, Hinton said, could also result in the bots learning enough to outmaneuver the humans who developed them.

“They’ll be master manipulators because they’ll have learned that from us by reading everything on the web,” Hinton told The World. “Will they have their own goals and want to manipulate people to achieve their own goals or will we somehow be able to control them to help us?

“How do you control something that’s more intelligent than you? It’s very, very difficult to do that.”