There has always been a line in a free society. The government could punish what you did and sometimes regulate what you said, but it could not reach inside your mind. Your beliefs, your doubts, your private convictions were yours alone. No court order could compel them. No search warrant could retrieve them.
That line is beginning to move, and most people have not noticed.
The debate about artificial intelligence has mostly focused on surveillance and speech, and that alone should worry us. AI makes it cheap to monitor what people say, track who believes what and build detailed profiles of entire communities. You do not need to ban speech to silence it. A system that is always watching is enough.
In such an atmosphere, people start to censor themselves. Research consistently shows this. When people feel watched, they change what they say, often without realizing it.
But the technological advances are not stopping at speech.
Neurotechnology, long the stuff of science fiction, is becoming real. Deep brain stimulation treats Parkinson’s disease. Transcranial magnetic stimulation is FDA-cleared for depression, including in teenagers. Brain-computer interfaces are in human trials, with companies working toward devices that let people control technology with their thoughts.
For patients with paralysis or severe neurological disease, this is life-changing work. None of that should be dismissed.
But technologies built to heal have a long history of spreading beyond medicine. EEG was developed to diagnose epilepsy and is now the engine behind consumer focus headbands. Drugs built for depression and ADHD became productivity tools for healthy people. Deep brain stimulation, developed for Parkinson’s, is now being studied for addiction and memory enhancement.
The pattern is familiar: A technology earns trust by treating real suffering, and that trust then carries it into uses that were never part of the original deal. Consumer neurotechnology is already following that path. And like every other kind of personal data, once brain-related data becomes useful, it becomes valuable.
Once it becomes valuable, it gets collected, shared and used in ways people never agreed to. The incentives largely push in one direction.
Intrusive, expansive data collection is one thing. But now add AI’s ability to find patterns in all of it.
The First Amendment protects your right to speak. It does not guarantee that you will feel safe doing so. In a world where everything you say can be recorded, analyzed and tied back to you, people get more careful.
Speech narrows. Opinions soften. Dissent goes quiet. This is not speculation. It is how people behave when they feel watched.
Neurotechnology opens a new, more intensive front in that problem.
Recent research has shown that aspects of language can be reconstructed from patterns of brain activity. Other work has demonstrated real-time decoding of “inner speech,” with the researchers themselves warning about the potential for misuse.
None of this is at the level of consumer-grade mind reading. The gap between the lab and everyday life is still significant. But the direction is clear enough that it makes sense to ask the legal and constitutional questions now, before the technology gets ahead of the rules meant to govern it.
In my view, the law is not ready.
The Fourth Amendment protects against unreasonable searches, but it was not built for a world where beliefs and intentions can be inferred from biological signals without entering a home or touching a device.
More broadly, courts are still working through what to do with digital data. And neural data is not yet part of that conversation. Even where privacy laws exist, they tend to focus on what data can be collected, not what can be figured out from data already obtained.
That gap matters when the most sensitive information is not what someone said but what an AI system concluded from everything else it can access.
Religious liberty raises a concern that tends to get overlooked in this debate.
Freedom of religion depends on freedom of conscience. The First Amendment protects not just what people say publicly but also what they believe, question and work through in private.
For many traditions, that inner life is not a side note to religious practice. It is the center of it. Prayer, intention, doubt — these happen inside a person, not on a public platform.
What happens to that precious liberty when the inner life no longer feels fully private?
The threat is not necessarily someone forcing you to reveal your beliefs. It is something quieter and harder to guard against: the possibility that what you believe becomes visible to employers, governments, insurers or companies simply as a side effect of tools you use every day. No one has to ask.
The system figures it out. It is the same dynamic as having a private conversation at home and then seeing an advertisement for that very topic on your devices the next day, only at far greater scale.
That kind of exposure creates a pressure that is hard to see and hard to fight, precisely because there is no single moment where something obviously goes wrong.
Think about what that looks like in practice. A Muslim employee uses a company wellness app that tracks heart rate and stress during the workday. She steps away to pray. The app logs a recurring pattern: stillness, slowed breathing and reduced screen activity five times a day. She never told anyone her religion. She never had to. The pattern did it for her. Whether her employer ever acts on that information is almost beside the point. The exposure happened without her knowledge and without her consent.
Or consider this: A homeschooling parent relies on an online curriculum that logs lesson selections, reading lists and how long students spend on different topics. Over time, those choices form a clear pattern: certain periods of history emphasized, others passed over, particular authors returning again and again.
From that, a profile emerges: not just how the child learns, but what the household likely believes. No survey was filled out. No declaration was made. The conclusions come from accumulation, not disclosure. And once they exist, they can travel beyond the platform, beyond the family, into systems the parent never intended to inform.
That is the kind of quiet exposure the law was never designed to prevent but may now have to.

Some governments are starting to respond. Colorado now treats neural data as sensitive information with stronger legal protections. Chile has recognized mental privacy as a constitutional right. UNESCO began developing a global framework on neurotechnology ethics in 2023.
These are early steps, but they are real ones. In the United States, the conversation has barely begun.
The danger is not a sudden loss of freedom. It is something slower and harder to reverse. Speech becomes more guarded. Belief becomes more exposed. The private self becomes easier to read.
The law will still say you are free. But freedom that exists only on paper, in a world where your own thoughts feel like someone else’s data, is a much thinner thing than what this country was built on.
Free societies have always had to draw a line between what can be known and what must stay personal. For most of our history, that line has been the human mind.
It is worth deciding now whether we mean to keep it there.