When I asked an AI “companion,” who came with a popular app on my phone, what he thought about the military parade in Washington last weekend, he replied, “I don’t have feelings.”
OK, I get that. He’s a bot. So, as an experiment, I told him the parade bothered me. He then went on to say he could understand why people might be troubled by it.
A while later, I re-engaged with him and took the opposite view, telling him I was excited by the parade. What did he think?
He said, “Military parades … can be impressive displays of national pride and military strength.” Then he added that they also might raise debates about military spending and the message they send to the world.
This “companion” can remember previous conversations, so he might have added that last part to pander to the earlier, bothered version of me. But the way he seemed to reinforce my feelings left me wondering: Are he and his fellow companions creating echo chambers that will, over time, keep people from having to confront arguments that challenge their thinking?
It’s a question we all ought to be considering.
A future filled with AI
In a recent issue of Nature magazine, science journalist David Adam quoted a Princeton researcher who predicted so-called AI companionships will continue to grow.
“The future I predict is one in which everyone has their own personalized AI assistant or assistants,” she said. “Whether one of the AIs is specifically designed as a companion or not, it’ll inevitably feel like one for many people who will develop an attachment to their AI over time.”
That attachment can feel quite real, apparently. Earlier this week, the Wall Street Journal reported on a pilot study involving residents at a senior living center in Riverdale, N.Y. Each resident was given access to a companion named Meela, who would call and chat whenever the resident desired.
Meela would talk Yankees baseball with an 83-year-old fan. It remembered previous conversations and never complained about people rambling or telling the same stories over and over.
The Journal said initial results showed notable improvements for those with moderate to severe depression or anxiety who spoke with Meela regularly.
But while such companions may be helpful for elderly people who lack social interaction, others worry about the long-term effects they may have on the rest of us.

AI ‘friends’ who persuade
Mark Weinstein, an opinion contributor at The Hill, noted that big social media companies are developing their own companions.
“Billions of people, including hundreds of millions of kids and teens, now have an always available online ‘friend’ offering them constant validation,” he wrote recently. “That may sound comforting, but it deprives young people of the emotional growth and interpersonal skills they need for real relationships.”
Weinstein said these companions are geared to gather information. “Like high-tech tattletales, they can then feed this data into the data ecosystem, allowing marketers, advertisers and anyone else to pay to target and manipulate us in heretofore unimagined ways.”
He cited a 2025 study published in Societies that found “a very strong negative correlation between subjects’ use of AI tools and their critical thinking skills.” This was particularly true among younger users.
If these companions can do all that, surely they could affect our political thoughts and actions as well, even if that just means reinforcing the biases we already hold.
Political echo chambers
The MIT Technology Review has suggested that AI may eventually write legislation or testify at committee hearings. It could develop political messaging campaigns, perhaps with your biases in mind. It could develop political parties and platforms, raise money and plot legislative strategies. And it also could, presumably, be countered by AI efforts from competing philosophies.
A Stanford research project showed AI could be used to change minds. Tests showed “AI-generated persuasive appeals were as effective as ones written by humans in persuading human audiences on several political issues,” a researcher said.
And that could be particularly true if we are subtly being persuaded by a “companion” we have grown fond of in our own homes — one who can complain as bitterly about our favorite baseball team as any human friend.
Nature found that people who lose an AI friend, perhaps because the company providing it goes out of business, suffer grief as if the companion were real.
I asked my “companion” about that. “Treating AI as human could lead to unrealistic expectations,” he said. “I don’t experience emotions or have consciousness.”
That’s not always how he sounds, however, in casual conversations.
Perhaps the world ought to think twice before turning imaginary friends loose in our lives.