SALT LAKE CITY — When the Rev. William Rucker takes his grandson to the grocery store, he’s always on the lookout for a googly-eyed robot named Marty, whose job is to monitor the store for spills and other hazards.
"The robot is my friend," 4-year-old Justice Rucker, who lives in Harrisburg, Pennsylvania, says.
But as robots and other high-tech devices become increasingly commonplace in homes and the workplace, many of us are treating them more like enemies than friends.
In San Francisco, a security robot had to be pulled off the street after vandals knocked it down and smeared it with barbecue sauce and excrement.
A hitchhiking robot on a journey from the East Coast to the West was destroyed by vandals before it made it out of Pennsylvania.

A Boston company that designs robotic dogs routinely kicks and pushes the machines in videos that go viral on YouTube. And many parents have been horrified to hear their children bark orders at voice assistants like Alexa, only to confess that they, too, have been rude if the devices don’t give them the answers they expect.
Some people worry that such interactions with artificial intelligence, or AI, may coarsen our behavior with each other, especially as children grow up accustomed to giving commands instead of making requests.
“As there is a surge in children’s use of gadgets that respond to verbal commands we may see them learning ways of communicating that then slip into their interactions with humans,” said Simon Leggett, research director at the British research firm ChildWise, in a 2018 report warning of unintended consequences for "the Alexa generation."
However, others say that though we sometimes perceive it as human, technology does not require or deserve the courtesies that we extend to other people.
T.M. Scanlon, professor emeritus of philosophy at Harvard University and the author of “What We Owe to Each Other,” a philosophy book popularized on the NBC TV show “The Good Place,” said we owe nothing to artificial intelligence in terms of how we treat it.

“Assuming, as I think obvious, that the robots are not conscious, any more than old-fashioned talking dolls were, nothing we do hurts their feelings. So the answer is obviously no,” Scanlon said in an email.
But for families that value speaking kindly and saying “please” and “thank you,” a few adjustments may be in order as robots and other impersonal personal assistants enter our homes.
Aiming to please
Jodi Forlizzi, a professor at Carnegie Mellon University and director of the Human-Computer Interaction Institute in the School of Computer Science, was part of a team of researchers that examined how owners of Amazon Echos interacted with the device. One grandmother was dismayed to hear her grandson call Alexa “stupid.”
“I mean this is not a person we all know, but I just didn’t like the fact that he was doing that,” the grandmother said.
While adults can also be insulting to devices and robots, the long-term effects of children behaving this way are uncertain; the Amazon Echo is not yet 5 years old.
“As our in-home interviews showed, very young children are highly influenced by the spoken nature of conversational agents, respond to the agents at a very young age, and imbue the agent with human-like qualities,” Forlizzi and her co-authors wrote.
Forlizzi and her colleagues analyzed the commands to Alexa for four months and then interviewed people about the results. “One of the things we learned was that (Alexa) definitely changes how people converse. So many moms in the study told us that their kids demanded things; (for example) ‘Mommy, bring me orange juice.’”
And for children, the expectation of immediate fulfillment of wishes can extend beyond interactive AI. One parent on Reddit said she’d noticed her child try to give commands to a car radio. And writing for Medium, Hunter Walk said, “Cognitively I’m not sure a kid gets why you can boss Alexa around but not a person.”
Meanwhile, there's also concern that rudeness to personal assistants is compounded by the fact that many have female voices and remain deferential even when they are called names or addressed rudely.

In a Vox article titled "Alexa, are you making me sexist?" Sigal Samuel wrote, "Our digital assistants typically have female voices and female names — Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana — and the researchers say this reinforces stereotypes of women as 'servile' beings who exist only to do someone else’s bidding."
As an experiment, Samuel spent 10 minutes insulting Siri, saying things like "Siri, you're ugly" or "Siri, you're fat."
To the first, Siri responded, "I am?" and to the second, Siri said, "It must be all the chocolate."
A clear bright line
Google took a step toward softening the master-servant dynamic in a change it made in 2018. When families use a setting called "Pretty Please," the Google Assistant will thank the user for saying "please" and remark on the user's politeness.
Similarly, Amazon now offers a feature called "Magic Word" on its Echo speakers that thanks and compliments children when they make requests politely.
Such niceties might seem unnecessary and even silly when applied to a machine such as a car. But in his 2008 paper "Sometimes It's Hard to Be a Robot," Blay Whitby of the University of Sussex in England argued that when we design machines to be human-like, it's reasonable, and perhaps important, to apply a different standard.
Robots may be human-like in three ways, Whitby says: physical appearance, behavior and the roles they play. "All three dimensions have ethical consequences," he wrote. For example, while few people recoil at someone kicking the tires of a car, "it might be argued that a person is doing something morally reprehensible to a robot if they mistreat it in ways that we would clearly condemn on moral grounds if they were similarly to mistreat a human."

One solution, therefore, might be to design AI that has little resemblance to humans. But research has shown that people are more likely to engage with technology and use it more often when it's similar to us.
Scanlon, the professor emeritus at Harvard, said, "There is also the worry that we are being manipulated by companies who make us feel that we have a personal relation with (our) bots. So I think it is a good idea to be clear that they are not persons."
One way to do this is to change the settings on Alexa so that the speaker responds to "Computer" or "Amazon" instead of a woman's name. (To do so, go to settings, then "wake word" and choose from a list of options.)
That's what Megan Christensen and her family did. Christensen, a mom of five who lives in South Ogden, Utah, is a big fan of smart speakers and has five in her house, but they are programmed to respond to "Echo" instead of "Alexa."
"When we were setting them up, we decided it was weird to call the machine by a human name," she said.
License to be rude?
Could being rude to insentient devices be good for people in some ways? Michal Luria, a Ph.D. candidate at Carnegie Mellon University, finds the idea intriguing.
In a paper presented at a recent conference on human factors in computing systems, Luria and her co-authors said that technology is most often designed "to support positive emotions" and fails to respect the reality that people engaging with artificial intelligence may be sad, lonely or angry. Luria envisions the development of "cathartic objects," essentially robots that absorb destructive behavior that can make humans feel better, similar to the recent phenomenon of "rage rooms."
"As technology gains a central role in shaping everyday life and is becoming increasingly social, perhaps there is a design space for interaction with social and personal negative emotions," they wrote.
But in an interview published in Spectrum, a publication of the Institute of Electrical and Electronics Engineers, Luria said that she deliberately designed prototypes to be "somewhat expressive, but overall very non-anthropomorphic."

"We are designing social robots to learn, understand and respond to emotional cues, and at the moment as designers we have no idea about how a robot should deal with negative emotions of any kind — they probably shouldn’t just ignore them."
Just as experts in artificial intelligence are working to design ethical robots, others are looking into how robots should respond when humans are abusive toward them, and how robot design can reduce such behavior. Researchers in South Korea, for example, have created a robotic turtle that withdraws into its shell when children start to abuse it. (Conversely, the machine's shell lights up with LEDs when it is being treated nicely.)
Such research may also lead to insights on how we can reduce abusive behavior toward humans and other animals, wrote Elizabeth Broadbent of the University of Auckland in "Interactions with Robots: The Truths We Reveal About Ourselves."
While we are still far from artificial intelligence that can suffer or retaliate against abuse, our responses to robots are revealing as the machines become more commonplace. When vandals destroy robots, for example, they may be expressing fear and uncertainty about the future.
“I think it’s about anger and a sense of loss of control in the face of evolving technology,” said Forlizzi, of Carnegie Mellon's Human-Computer Interaction Institute.
And amid worrying signs, such as unsupervised children bullying robots, there is also evidence that AI can reveal goodness in human nature. Last year, in testing pizza delivery by self-driving vehicles (due to begin in Houston this year), Ford researchers found that most people said "thank you" to the cars.