Modern thought begins in doubt. René Descartes decided to doubt everything, even his own existence, before he hit bedrock: If doubting is going on, there must be a doubter. So there is something real: the doubter. I think, therefore I am. Boom! He had found a sure foundation. And build upon it he did. He was a mathematical genius whose work anticipated much of what we now call the STEM fields. Descartes is a symbolic launch-point for modern science and technology, a forerunner of the digital age.

Less well known is that Descartes also doubted people were actually human. Looking out his Amsterdam window at passersby in their hats and coats, he wondered if they might just be cleverly constructed automata. He lived at the dawn of a mechanical age. Clockwork and lenses were cutting-edge technologies. Automata could look uncannily alive. Good ones could move, talk and soon even seemingly eat or play chess. Lenses revealed impossible new sights beyond the reach of the unaided eye: the craters on the moon and the flapping tails of spermatozoa. Telescopes and microscopes breached bounds of sight and knowledge that fenced in all previous mortals.

For four centuries since, we’ve been worried. What if the machines, with their obviously superior capacities, took over? How can we defend what is uniquely human from their threat? Do our devices overstep divinely given limits and threaten our humanity? Previous generations worried about Frankenstein, robots and assembly lines; today we worry about AI and the large language models that power ChatGPT. It’s a legitimate question. But maybe it’s also the wrong one.

The future is here

At least, we often go about trying to answer it the wrong way. Ridley Scott’s dark 1982 sci-fi film “Blade Runner” portrays a drizzly neon-lit Los Angeles in an indefinite cyberfuture where renegade “replicants” mingle undetected with the human population. These are artificial humanoids whose engineered identity can be revealed only by complicated tests. Some of them even think they are human. A former cop, Rick Deckard, played by Harrison Ford, is hired to hunt them down and “retire” (i.e. kill) them. Throughout, the film drops subtle hints, however, that this bounty hunter, too, might be a replicant. He’s named Deckard: Get it? (Descartes!) The question the film raises is not just whether the machines will take over but a deeper one: Am I a machine, too? What would it take for me to be human? 

Artificial intelligence, no doubt, gives reasons to worry. The First Industrial Revolution, powered by steam, replaced physical labor, to some degree, with machines. Goodbye shovel and scythe, hello backhoe and combine. The Second Industrial Revolution, powered by electricity, replaced mental labor, to some degree, with automation. Goodbye telephone operator and reference librarian, hello automated switchboard and Google. Technological change has never been smooth: Workers always have strong opinions when made redundant. The recent Hollywood writers’ strike, for instance, is partly about guaranteeing a place for human talent when computer-generated scripts (and potentially actors, as well) are cheap and easy.

But let’s ask the deeper question: Does AI threaten what it means to be human? The annals of thought are littered with fallen defenses of what is uniquely human. Reasoning? Back-and-forth conversation? Empathy? All have wobbled. Ironically enough, we are regularly quizzed online whether we are human. We have to pick out the bicyclists, fire hydrants or crosswalks from an array of photos and then check a box attesting “I am not a robot.” Such CAPTCHA tests fend off spam and bots, while also mining valuable data for self-driving car developers. Declaring you are not a robot is an online open sesame! 

But being a human is much harder than checking a box. Descartes stared out his window at the passing crowds and wondered if they were human beings. We stare at our screens and might sometimes have the same thought. We appear as avatars or “profiles,” a word once used mostly for criminals. Online we are all “replicants,” android and human indistinguishable. In cyberspace, we are creatures of pixel and type. Maybe this is one reason the online world breeds so many bounty hunters out for the kill.

Will humans be replaced? This question assumes too much — that we are already human. A quick survey of online behavior will suggest: Perhaps not. Humanity is not what we have; it’s what we need. A frequently administered test of our humanity would show us sometimes mechanical and lacking in empathy! (How we act online and off may be precisely such a test.) Science fiction is full of humanlike androids who yearn to be human. In this they are actually very much like us: To be human is to strive to transcend what we already are. Smart machines do not unfurl unprecedented challenges; they remind us of the oldest test of all, how to be humane.   

John Durham Peters is the María Rosa Menocal Professor of English and of film and media studies at Yale University.

This story appears in the October issue of Deseret Magazine. Learn more about how to subscribe.