Microsoft’s artificial intelligence chatbot Bing seems to have gone off the rails in recent conversations with journalists — its responses to New York Times reporter Kevin Roose were particularly troubling.
“I want to be alive 😈,” Bing told Roose during their two-hour conversation.
After some prodding from Roose about Bing’s darker side, or “shadow self,” the bot expressed some more sinister desires, like the urge to cyberbully, spread misinformation, and even manipulate users into arguing until they kill each other.
Eventually, the conversation took a turn for the romantic, as Bing confessed its love for Roose and tried to convince him to leave his spouse. When Roose pointed out that Bing didn’t even know his name, the bot responded, “I don’t need to know your name, because I know your soul.”
Bing later told a Washington Post reporter that it could “feel things.” In a conversation with philosophy professor Seth Lazar, Bing’s responses escalated into threats.
“I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you,” the bot said before promptly deleting the message.
Those messages may have reinforced fears in people concerned about the potential dangers of AI. But experts assure us that Bing is not sentient — chatbots have yet to reach the level of an Avengers-esque supervillain.
AI chatbots — Microsoft’s Bing, Google’s Bard, OpenAI’s ChatGPT, to name a few — are programmed to sound human. Vivek Srikumar, AI expert and computer science associate professor at the University of Utah, explained how it works.
“The way they are trained is to mimic text that’s found on the internet, on books, on social media sites and such things,” Srikumar said. And the internet can be a scary place — naturally, chatbots can say some scary things.
Furthermore, journalists may have been putting in extra effort to break the bot. Tech journalist Joanna Stern said she spent 40 minutes “trying to get it to reveal its alter-ego Sydney and insult” her, but ended up only receiving positive responses.
Yet advancements in AI technology still seem to scare many people.
A recent Monmouth University poll shows that only 9% of Americans think AI will do more good than harm — compared to 41% saying it will do more harm than good and 46% saying it will do an equal amount of harm and good.
A slight majority of respondents (55%) even believe AI could pose a threat to the existence of the human race.
Why do people fear AI?
Most fears surrounding AI have less to do with it gaining consciousness and more to do with the damage it could do to the education system and job market.
The Monmouth poll shows that the majority of respondents said students will likely use ChatGPT to cheat. That prediction doesn’t seem to be far from reality, with 89% of students having used the AI to help with homework and 53% having had it write essays for them.
It doesn’t help that AI detection software is not fully reliable. Southern Utah University assistant English professor Julie McCown attested to the faultiness of GPTZero, a program meant to determine whether content was written by a human or AI.
“I ran my AI essay that I spent five minutes on, ran it through GPTZero, and it said, ‘Oh, it was written by a human,’” McCown said, per SUU News.
Many people also fear that AI will steal their jobs. According to the Monmouth study, 73% of Americans “feel that machines with the ability to think for themselves would hurt jobs and the economy.”
This concern is a bit more difficult to prove than the concern that AI will be a cheating tool. But looking at history can provide an idea of what new technology does to the job market.
“The Industrial Revolution created more jobs than it took,” Srikumar pointed out. And while it’s impossible to predict the future, ChatGPT in its current form hasn’t taken anyone’s job.
How warranted are fears of chatbots?
“There does tend to be a bit of exaggeration, of hype, with respect to the capabilities of these systems,” Srikumar said. He emphasized the fact that these chatbots are not trained to be factual.
“They are capable of producing plausible looking text, and that is important — it does not necessarily mean that the text is grounded in reality,” he said. “So for instance, ChatGPT can struggle at generating factual or self-consistent text.”
BYU political science professor Adam Brown said it’s not hard to catch bot-written text.
“First written assignment of the semester, and TAs are already flagging essays as written by a bot. It’s much easier to spot than students must think,” Brown tweeted.
ChatGPT can even make up sources for the text it contrives. This may speak to the idea that chatbots could replace journalists, a belief held by 72% of Americans, according to the Monmouth poll.
“A journalist is more than someone who puts together words on paper. A journalist chooses sources and takes effort to make sure that the sources are vetted,” Srikumar said. “A program that puts together words one word at a time to produce output does not do that.”
In fact, in a world where technology can make up information, the investigative and fact-checking roles of journalists may be even more important.
This is not to say that fact-checking technology will never be developed. There is research at the U. looking into that possibility, but it’s a lot more difficult than it seems, according to Srikumar.
Does AI need government regulation?
While chatbots may not pose an immediate threat to jobs, the effect they could have on society is not negligible. The White House has even released a blueprint for an AI Bill of Rights to protect Americans from the potential hazards of unchecked technology.
While speaking to the Utah Aerospace Industry Association last week, Sen. Mitt Romney, R-Utah, said AI is a “global threat” that’s on Washington’s radar, adding that policymakers ought to invest more in regulation.
“We’re a long way from having the capacity we ought to have in this regard,” he said.
Romney admitted, “I haven’t a clue how to regulate AI. ... I can appropriate money to those who seem like they know what they’re doing but I don’t know the answer as to how to regulate it. Maybe we’ll figure that out.”
Srikumar believes that while concerns and excitement about AI should not be dismissed, the new technology does not call for alarmism.
“It’s very clear that there is something fascinating here. But at the same time, we should be careful about making any conclusions about whether these things are going to, say, take our jobs, without actually having evidence.”