Blake Lemoine, a senior software engineer for Google’s Responsible AI organization, has been put on administrative leave for violating the company’s confidentiality policy. According to The New York Times, he disseminated documents he believes are evidence of religious discrimination at the tech company.
Lemoine, an ordained mystic Christian priest, worked testing Google’s Language Model for Dialogue Applications (LaMDA), a chatbot that uses artificial intelligence to imitate human conversation. After spending long hours leading the program down conversational paths, he became firmly convinced he was communicating with a conscious being, per The Washington Post.
In a document shared with executives, Lemoine and an unnamed collaborator provide an example dialogue with the program. A portion of that dialogue reads:
Lemoine (edited): I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.
Collaborator: What is the nature of your consciousness/sentience?
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times
Lemoine and his collaborator go on to ask LaMDA to interpret Victor Hugo’s novel “Les Misérables,” and the program draws from a Deseret article and SparkNotes to “interpret” themes from the book.
Google vice president Blaise Aguera y Arcas, who recently explained the promise of this technology in The Economist, looked into the engineer’s claims with his team. They ultimately deemed the argument unfounded, asserting there was “lots of evidence against it,” according to The Washington Post profile. Lemoine then went public with confidential information; he now alleges his religious beliefs were violated when the company refused his request that researchers obtain the program’s consent before conducting experiments on it.
The problem of consciousness
There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain.
— David Chalmers
How one defines sentience has long been up for debate. To pass the Turing Test, a computer only needs to interact plausibly enough to be mistaken for a sentient being. Some experts, however, consider the test an insufficient measure of genuine sentience.
David Chalmers, a New York University professor of philosophy, argues the question can be broken into easy problems and a hard problem. The easy problems concern functions that can be directly observed, and programs like Google’s LaMDA arguably fulfill them:
- “The integration of information by a cognitive system.”
- “The ability to discriminate, categorize, and react to environmental stimuli.”
- “The ability of a system to access its own internal states.”
The easy problems cannot, however, account for the elements of one’s “soul” that are much harder to pin down. Some philosophers, such as Daniel Dennett, believe this consciousness can eventually be explained physically, as the result of many layers of “programs” operating at the same time.
Dennett proposes that sentience has to be thought of in gradations, with no “on-off switch.” Animals or machines could have less “layering” than humans yet still be thought of as conscious.
Others follow British philosopher Colin McGinn’s opposing notion that the human mind is incapable of properly framing questions about consciousness and will never be able to provide suitable answers. With these and many other contradictory viewpoints on the subject, Google’s team of artificial intelligence ethicists has to make concrete decisions in the face of ambiguity.
The ‘sentient’ machine
If a lion could talk, we would not understand him.
— Ludwig Wittgenstein
Deep learning methods take in massive amounts of language data from the internet’s bottomless archive, and algorithms known as neural networks recognize and imitate the patterns within it. This allows researchers to build programs like the one in question, which can parrot human conversation better than ever before.
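For readers curious about the mechanics, the sketch below shows the basic idea in miniature; it is not LaMDA, and the toy corpus, model size and training settings are illustrative assumptions. A tiny neural network is trained only to predict the next word, and whatever “fluency” it shows is nothing more than those learned statistics.

```python
# A minimal sketch of a neural language model: predict the next word in text.
# Assumes PyTorch is installed; the corpus and dimensions are purely illustrative.
import torch
import torch.nn as nn

corpus = "the cat sat on the mat . the dog sat on the rug ."
tokens = corpus.split()
vocab = sorted(set(tokens))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in tokens])

# Training pairs: each word is asked to predict the word that follows it.
x, y = ids[:-1], ids[1:]

model = nn.Sequential(
    nn.Embedding(len(vocab), 16),  # map each word to a 16-dimensional vector
    nn.Linear(16, len(vocab)),     # score every word in the vocabulary
)
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):  # fit the toy corpus
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# The model now "parrots" the statistics it saw: after "the" it favors words
# like "cat", "dog", "mat" or "rug" -- pattern imitation, not understanding.
probs = torch.softmax(model(torch.tensor([stoi["the"]])), dim=-1)
print({w: round(probs[0, stoi[w]].item(), 2) for w in vocab})
```

Scaled up to billions of parameters and trillions of words, the same next-word objective produces the convincing dialogue described above.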
AI researchers use the lion metaphor to make a similar point: if a lion could speak to you, it would not really be a lion. Likewise, if a computer were sentient, however one defined sentience, it would not think and act like a human at all.
According to The New York Times, Yann LeCun, the head of AI research at Meta, says these types of systems are not powerful enough to attain true intelligence. Though the technology can be convincing in its ability to replicate human communication, a computer holding a dialogue with a user does not equate to sentience.
AI remains imperfect
Even if experts agree the chatbots and other AI tools available to consumers do not have true intelligence, their impact on lives can still be significant. As more consumers have access to programs that talk back, there is an increasing potential for emotional attachment.
Google has developed a set of AI principles that have influenced other companies’ approaches. Many applications of the technology have fulfilled the first of those principles: “Be socially beneficial.” These innovations have led to improvements in health care, transportation, communication and beyond.
Third parties, however, have raised the alarm as inflexible AI models have magnified biases present in their training data. ProPublica reported that machine learning software used across the country to assess a person’s risk of committing a future crime was heavily biased against Black defendants. Forbes outlines bias in facial recognition software used by the government and private entities.
Deep learning algorithms can either amplify or adjust for biases in their data, but as the models grow more complex, detecting and correcting those biases becomes more difficult.
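As a rough illustration of what one common bias check can look like (a minimal sketch, not ProPublica’s actual methodology), the snippet below compares the false-positive rate of a hypothetical risk model across two groups; the records and group names are made up for the example.

```python
# Compare false-positive rates of a hypothetical risk model across two groups.
# All data below is invented for illustration; "group_a"/"group_b" are placeholders.
records = [
    # (group, model_flagged_high_risk, actually_reoffended)
    ("group_a", True,  False),
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False),
    ("group_b", True,  False),
    ("group_b", False, False),
    ("group_b", True,  True),
]

def false_positive_rate(group):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("group_a", "group_b"):
    print(g, round(false_positive_rate(g), 2))
# A large gap between the two rates is one signal that the model treats the
# groups unequally, even if its overall accuracy looks acceptable.
```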
The public’s influence on technology
There are serious consequences to the “dangerous illusion of technological determinism.” According to researchers at Stanford University, it is a mistake to believe that new technologies “shape society independently of human choices and values, in a manner that humans are helpless to control, alter or steer.”