Should autonomous weapons kill people without human judgment? Should a government conduct mass surveillance on its own citizens? The U.S. government has taken action against a San Francisco-based AI firm for refusing to enable either.

On the surface, the dispute is over whether President Donald Trump’s Pentagon or Anthropic should set the terms on how AI is used. In February, the Pentagon designated Anthropic a “supply chain risk,” and federal agencies were directed to stop using its product, with potential penalties for continued use.

But the questions at the heart of this case — when may a life be taken, and who decides; when does snooping become an intrusion on human dignity — existed long before the idea of country or corporation. They have haunted every civilization and every faith.

As leaders from the Jewish, Christian and Muslim communities, we feel called to bring ancient wisdom to the debates of this moment. Though our doctrines may disagree, we are united in our unease with how AI, unwisely used, erodes values that the world’s faith traditions strive to uphold.


Consider autonomous weapons. Judaism and Christianity teach that every human being is created in the image of God — “b’tselem Elohim” in Hebrew; “imago Dei” in Latin. The Quran states: “Whoever kills a soul, it is as if he has killed all of humanity; and whoever saves a soul, it is as if he has saved all of humanity” (5:32).

If we delegate the decision to kill to machines, we do not eliminate the moral weight of that act; we ensure that no one actually bears it. We enter a world in which lives are ended by systems incapable of remorse, mercy or moral accountability.

The human cost of deciding to take a life — the burden of confronting that choice and living with it — acts as a brake on violence, however imperfect. Remove it, and the threshold for killing drops.
Mass surveillance raises equally grave concerns. All three of our traditions recognize that defending the intimacies of community and family from centralized surveillance is essential to human dignity.

What is at stake here is not pinpoint intelligence-gathering against a specific suspect or foreign adversary but the blanket surveillance of a country’s own people. AI can now process and analyze human behavior at a scale no previous apparatus could approach, making restraint more urgent than ever.

The argument that such surveillance is “legal” offers thin comfort. Every one of our traditions teaches that legality is not the same as morality — and history confirms this across nations and eras. Governments routinely redefine “lawful” to suit their aims; after 9/11, classified U.S. memos reinterpreted anti-torture law to permit harsh interrogation methods, a position later repudiated.

Given the pace of AI development, we scarcely dare imagine what might befall the world if we were to wait seven years for immoral applications of it to be curtailed.

That a government would take actions that could economically cripple a company for honoring the ethical commitments on which it was founded strikes at the human right on which our practice as religious leaders depends: the freedom of conscience.


Rabbis throughout history have recognized moral choice as a core human value. The Catholic tradition holds that conscience is a guide no earthly authority may compel one to violate. The Prophet Muhammad, peace be upon him, taught: “There is no obedience to any created being if it involves disobedience to the Creator.”

These principles of free expression are also foundational to the American experiment. As Justice Robert H. Jackson said in 1943, “If there is any fixed star in our constitutional constellation, it is that no official … can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion.”

The American legal tradition has long given this principle teeth through the protection of conscientious objection — the right to decline participation in what one believes to be wrong. This protection extends beyond the individual to communities: Quakers, for example, were among the earliest protected conscientious objectors, and the same moral seriousness that grounded their pacifism built reputations for fair dealing that helped shape early American commerce.

If a principled refusal to enable autonomous killing and mass surveillance can now be met with economic annihilation and threats of prosecution, few organizations, including religious ones, will dare exercise that right again.
Our position is motivated neither by antipathy to AI, with its great promise to promote human flourishing, nor by unbounded support for the right of private companies, often motivated by profit, to develop it as they wish.

One of us writes from a shelter where AI-enabled defense systems are helping preserve his life. At the same time, many practices of technology companies — including their treatment of the rights of content creators — deserve public scrutiny.

While we look forward to the day when the most fundamental principles of human dignity in relation to AI are codified in the laws of the land, until then, we are called as witnesses to them and, in that witness, to make common cause with their defenders, however flawed.


When the actions of private companies fundamentally contradict the laws of God, we will always stand for the efforts of the state, however imperfect, to restrain them. But when the state does the same, we will stand by the rights of conscience, however imperfect those who exercise them may be.

So far, AI policy debates have been dominated by techies, corporate lawyers and a handful of civil society groups. Religions have mostly steered clear. But this needs to change.

Tens of millions of people in America alone are affiliated with a spiritual community. If their voices and values were better heard, they could shape the trajectory of AI. We call on people of all faiths (and none) to play an active role in AI policy matters. In this case, we encourage others to use their institutional power as voters or funders to insist that the American government end its vindictive retribution against Anthropic.

Our traditions give us a language for human dignity that predates both countries and corporations — one that will, by the grace of God, see us past any number of technological revolutions. It thus falls to religious communities, in the words of Dietrich Bonhoeffer — a German pastor executed by the Nazi regime for resisting it — to “drive a spoke in the wheel” of injustice.


Father Paolo Benanti is a Franciscan friar, professor of moral theology at the Pontifical Gregorian University in Rome and a member of the U.N. AI Advisory Body.

Dr. Yasir Qadhi is the chairman of the Fiqh Council of North America and the dean of the Islamic Seminary of America.

Rabbi Amichai Lau-Lavie is the founding rabbi of Lab/Shul, an experimental Jewish community in New York City.

All three are part of the Faith Family Technology Network, which released an open statement on this issue, from which this op-ed has been adapted.
