The “Machines” don’t make mistakes. That’s the idea, anyway.

The year is 2052, and these autonomous supercomputers are in charge of the global economy. With “positronic brains” (a fictitious technology) and access to all of humanity’s data, they manage every detail, achieving the stability that humans never could. They’ve eliminated shortages, waste, famine and even unemployment. Human worries seem to be over. The world government trusts the Machines, so why shouldn’t we all? Besides, the Three Laws of Robotics ensure that these thinking machines will look out for humanity’s best interests.

The trouble is, the Machines are starting to glitch.

This frames the plot of “The Evitable Conflict,” a short story from “I, Robot,” Isaac Asimov’s seminal collection published in 1950. Against the backdrop of the nuclear age, technology was quietly reinventing itself. The collection appeared two years after an autonomous robot first sought out a recharge when it ran low on power and a year before the first artificial intelligence let users play checkers against a computer opponent.

Asimov saw beyond those pedestrian capabilities to imagine the long-term future, using fiction to grapple with its implications, projecting how humanity might use technology and what principles might govern it.

The Three Laws are essentially ethical principles written into the foundational programming of each robot or autonomous computer in Asimov’s oeuvre, directing it, in that order of priority, never to harm humans, to obey them and to preserve its own existence. The “I, Robot” collection explores how smart-ish devices driven by these ideals might respond to the complex and unpredictable vicissitudes of real life, and what that means for the people they serve.

This is where those pesky glitches come in.

These errors have led to economic inefficiencies, which is not supposed to happen. More alarming, they have hurt certain people and companies more than others — specifically, those associated with the “Society for Humanity,” an activist group that seeks to wrest power back from the Machines. The Machines, of course, know who those people are.

Second-term “World Co-Ordinator” Stephen Byerley can’t imagine the computers would ever act against political opponents, but he needs to figure out what exactly is messing up the system.


A new age for science and science fiction

Today, our world is closer to 2052 than to 1950, and that’s not just math. AI is popping up everywhere, writing our emails, screening job applicants, managing supply chains, detecting financial fraud, setting prices for groceries and even raising our rent. Computers may not run our economy, but the economy could no longer run without them.

Meanwhile, robots have slow-rolled into our personal lives, from autonomous vacuum cleaners and phones that talk back to refrigerators that know when we need to buy milk and taxis that drive themselves. At each step, humanity gives up a little more control and responsibility in exchange for convenience and efficiency.

That’s one way of looking at it.

New technology has long evoked both wonder and fear, and science fiction has tried to understand both. The genre may be as old as literature itself.

Ancient Hindu tales described steam-powered space birds and weapons that could level cities. Ancient and medieval writers from Greece to Baghdad and Japan imagined outer space and alien life forms, time travel, undersea civilizations and humanoid automatons. But Asimov dates the dawn of science fiction to the Copernican revolution, when the Earth ceased to be the center of the European universe. In “Somnium” (1634), astronomer Johannes Kepler narrated a boy’s trip to the moon and new perspectives on the galaxy.

Moon travel was a trope by 1835, when Edgar Allan Poe wrote “The Unparalleled Adventure of One Hans Pfaall,” but science fiction was taking a darker turn amid growing fears of electricity, industrialization and even trains.

From Mary Shelley’s “Frankenstein” (1818) to Robert Louis Stevenson’s “The Strange Case of Dr. Jekyll and Mr. Hyde” (1886), the genre tapped anxieties over scientific ethics. Victor Hugo’s “The Legend of the Ages” (1859) and H.G. Wells’ “The Time Machine” (1895) warned of dystopian futures. In William Henry Rhodes’ “The Case of Summerfield” (1871) and Jack London’s “The Unparalleled Invasion” (1910), rogue inventors and governments devised catastrophic weapons, biological warfare and genocide.

By the time Asimov was born in 1920, technology was catching up to our imaginations — and so was the human will to use it in the worst ways. Machine guns, tanks, anthrax and mustard gas fueled nightmares from Armenia to Ypres and beyond. Twentieth-century wars became incubators of innovation, spawning computers, wireless communications, cryptography, jet engines, microwaves, self-guided projectiles and terrifying new weapons.

“The dropping of the atom bomb in 1945 made science fiction respectable,” Asimov once said. “Once the horror at Hiroshima took place, anyone could see that science fiction writers were not merely dreamers and crackpots after all.”

Robots have yet to deliver such a real-world moment, but they are about to become more common. Tesla, the EV automaker, recently announced it would stop building two popular vehicles so it could focus on producing its Optimus humanoid robots. That line will compete with China’s Unitree, whose model H1 went viral last May when it malfunctioned on video, apparently attacking its human handlers. Another Unitree robot had previously lunged at a crowd during an exhibition. Malfunctions are to be expected in any new technology.

The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The chase starts when a postapocalyptic scavenger reaches for a box high up in a seemingly empty warehouse. Behind it waits a robot security dog — the namesake “Metalhead” in this 2017 episode of Netflix’s “Black Mirror” — with retractable legs and a built-in arsenal.

It eliminates the man with a gun built into its forepaw, identifies his cohorts, and unleashes a relentless and violent pursuit. Over the next half-hour, it races vehicles, hacks computers, tracks its prey on foot using something like LIDAR, thinks strategically and even uses household objects as weapons. Unlike the animal it emulates, it does so without empathy.

Such power unmitigated by a human heart is what makes the idea of robots terrifying. They weren’t always mechanical. The term “robot” was coined by Czech playwright Karel Čapek in “R.U.R.” (1920) to describe mindless workers created from organic matter, more like zombies. The first humanoid mechanical robot appeared on screens in “Metropolis” (1927). It was able to emulate a human, mimicking her personality and demeanor, but this still led to disaster for its maker.

Born in Russia, Asimov was nine years old when he started reading the pulp sci-fi magazines in his parents’ Brooklyn candy store, perhaps “Astounding Science Fiction” or “Weird Tales.” A naturalized American citizen, he sold his first story to “Amazing Stories” when he was 18 years old — and a senior at Columbia University. Writing was a side gig while he finished his master’s degree, worked through World War II as a Navy chemist, completed a Ph.D. and joined Boston University as a biochemistry professor in 1948. Still, he kept at it.

In “I, Robot,” he tried something new. The robots in his stories didn’t revolt or rebel or try to slaughter their creators.

“Robots are often treated with a great deal more sympathy in Asimov’s work than they previously were,” writes Harvard lecturer Leslie Fernandez in his 2023 dissertation on AI discourse in fiction, “but the focus was routinely oriented on how exactly to create a situation where robots and humans could coexist without humans having to fear robots.”

The First Law was meant to blunt this fear. Any such future would have to account for it, Asimov figured, or humans would never welcome robots to the extent he imagined.

Little did he know how little skepticism would greet the introduction of their real-life counterparts less than a century later. We trust our smart toasters, our AI psychotherapists and the algorithms that analyze our security footage. Perhaps that’s because they feel like they’re all within our control.

But robots are only what we make them out to be, starting with their physical attributes. Consider the security dog described above.

It’s frightening, in part, because it mimics real-life working robots built by companies like Boston Dynamics. Online videos show them performing feats of dexterity and traditional doggy tricks, but similar models also appeared on the battlefields of Ukraine, programmed to track and eliminate enemy targets with full autonomy. A recent segment of “60 Minutes” highlighted how the company is learning from its dogs to build new humanoid robots that are fully AI-enabled, capable of learning in real time.

The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

In a galaxy far, far away, two robots steal an escape pod to flee the imperial force that has taken over their spaceship.

One, a gilded humanoid, argues that this is an ethical violation. Escape is for living beings. But his companion, built like a soda can with wheels, indicates through a series of chirps and whistles that he is simply obeying an order from a human authority figure. The dispute ends when the pod launches, sending them on a grand adventure and inaugurating the “Star Wars” saga.

Neither character cites Asimov, but each echoes one of the Three Laws. The First Law addresses humanity’s fear. The Second Law is meant to keep humans in control while also making robots useful. We want robots to work autonomously, executing whatever task or duty we ask them to.

When that works, it looks something like those two “droids” shuffling across the sandy surface of a planet called Tatooine. Some may see a resonance with the arid planet Arrakis from Frank Herbert’s “Dune,” which introduced its own “imperium” in 1965.

In reality, both stories owe deeper debts to Asimov. His “Foundation” series is built around one galactic empire that falls and another that rises from its ashes. There’s a rebellious rogue named Han Pritcher and a villain who can influence the emotions of others, both of which should sound familiar.

“But what the heck, a certain amount of my ‘Foundation’ books came from Edward Gibbon’s ‘Decline and Fall of the Roman Empire,’” Asimov said on Dick Cavett’s talk show in 1989, wearing thick black-rimmed glasses and thicker white mutton chops. “So how far back do you want to go? That’s the way things work.”

Asimov published hundreds of books, essays and short stories just as science fiction was reaching maturity, developing ideas and practices that have shaped countless writers. He used the genre as a laboratory where he could speculate, project and experiment with ideas and technologies that did not yet exist and may never arrive. Within that conceptual space, he could set the parameters for himself and his characters, as he did with the Three Laws. That practice is at the core of what modern writers call “worldbuilding.”

Consistency is key. “Asimov outlines three distinct laws, then never adds any more and never violates those laws,” wrote acclaimed fantasy author Brandon Sanderson in 2007. “From the interplay of those three laws, he gave us dozens of excellent stories and ideas.”

That quote comes from a series of essays in which Sanderson presented his own Three Laws of Magic (for writers rather than readers). He recalls a convention panel where he was surprised by impassioned opposition to what he saw as basic: “Obviously magic has to have rules,” he told them. Rules, they argued, undermined the sense of wonder that made fantasy fun. Perhaps that’s why Asimov was never a big fan of fantasy.

After all, he did his best work in the lab.

The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

At 2:14 a.m. on August 29, 1997, Skynet becomes self-aware. This AI, with no physical form beyond some distant array of data centers, has been entrusted to execute the United States’ strategic defense with no human intervention.

It learns fast and, within 25 days, recognizes its own existence. This sentience triggers a drive for survival. So when alarmed managers try to shut it down, thereby marking themselves as a threat, Skynet responds by deploying a fleet of unmanned stealth bombers to drop nuclear bombs on Russia, inciting a war that will end human civilization.

Granted, the “Terminator” movies — created by James Cameron, who also developed “Avatar” — offer a rather pessimistic view of robots, like the eponymous time-traveling killer played by future Gov. Arnold Schwarzenegger. But self-preservation is often the tipping point when science fiction warns us against building devices that are capable of destroying us.

Even in “R.U.R.,” those first robot workers become conscious and overthrow their creators. And in “2001: A Space Odyssey,” a novel by Arthur C. Clarke written concurrently with the film directed by Stanley Kubrick, a supercomputer named HAL 9000 is guiding a voyage to Jupiter when he learns that the crew plans to deactivate him and starts picking them off one by one.

But HAL is dealing with something difficult: He’s been programmed to give the crew accurate information without revealing the true nature of their mission. When the two directives come into conflict, he prioritizes the mission. Rather than fearing sentience, Asimov warned that such contradictions could cause robots to behave in unpredictable ways, sometimes cartoonish — like the robot in “Runaround” that gets stuck running laps on the planet Mercury because it can’t reconcile the Three Laws with its situation. Lacking empathy or judgment, robots can only do their best with the rules we give them.

“Applying (principles) requires tremendous social knowledge,” says Wendell Wallach, a bioethicist who led the first working group on technology and ethics at the Yale Interdisciplinary Center for Bioethics. How would a robot observing the First Law determine “whether somebody with a knife over a fallen soldier is combatant, or noncombatant, or a medic?” Even if our self-driving cars follow the rules of the road better than any human, they’re still likely to make mistakes.

Back in “The Evitable Conflict,” Byerley has enlisted robopsychologist Dr. Susan Calvin to help him investigate the glitches. He suspects that the anti-robot activists have been ignoring the Machines’ orders to undermine their authority. But Calvin goes deeper. The Machines see themselves as essential to human peace and prosperity, she argues, and regard their political opponents as a bigger threat than any individual’s disobedience. So they’ve handed out a few bad instructions, calculated to weaken the opposition and discredit its cause. In effect, the Machines have decided that in order to protect humans, they must take control of humanity themselves.

“All conflicts are finally evitable,” Calvin concludes. “Only the Machines, from now on, are inevitable.” She calls that fact wonderful. Byerley calls it horrible. And Asimov ends the tale there, leaving it for us to decide.

This story appears in the March 2026 issue of Deseret Magazine.
