We frequently hear in the 21st century that data is the new oil. Those who controlled oil flows in the 1970s had a near stranglehold on the global economy. Today, those who hold data might well control the new economy. Data, however, is diffuse, hard to track and nearly impossible to regulate, which could have unparalleled implications for human rights and religious freedom.
Big data companies have poured billions into research to bring technology and data into direct contact with us every day through artificial intelligence.
The idea of using AI for good has been the call to action for big data companies and democracies around the world. AI wouldn't just beat humans at chess or "Jeopardy!"; it would eventually design smarter cities and driverless cars while improving businesses and agricultural yields with less water use. And, at its best, artificial intelligence holds the promise of allowing doctors to make better decisions about patient care or teachers to take a holistic view of students' capabilities.
And during the pandemic, AI even helped faith to go online. Pastors and rabbis could connect communities and families for lifecycle events like bar mitzvahs, funerals or weddings. Saudi Arabia deployed AI robot guides at the Grand Mosque in Mecca that can answer questions from visitors in 11 languages, particularly useful during the hajj each year.
In early 2022, an AI-enabled livestream literally helped save lives when congregants, tuning in virtually from home, notified law enforcement officers of a hostage situation unfolding at Congregation Beth Israel, a synagogue in Colleyville, Texas. What started as a joyful Sabbath service ended in a shootout, with all of the hostages freed and the terrorist killed. Without the livestream, and the view it gave law enforcement officers into what was happening inside the congregation, the outcome might have been different.
Despite all the applications of AI for good, particularly when it comes to advancing faith and religious freedom, critics of this technology are now widespread, with even the Vatican vowing to fight back against AI’s threats to human rights and religious freedom.
Here is the paradox. All the gathered information that can help facilitate faith outreach, particularly information about vulnerable individuals or countries’ national security infrastructures, can also be used by malign actors to stamp out minority beliefs. What happens, suddenly, when an authoritarian regime determines that a specific type of faith or religious adherence is a “threat to society”? Data and artificial intelligence can be deployed to target individuals and faith groups. Facial recognition technology, enabled by AI, can identify people entering or exiting houses of worship, which might later be used against them to bolster unjust charges of sedition.
Welcome to the roaring ’20s of the 21st century where this occurs every day in countries around the world. The ferocious behaviors of dictators are enabled by technology that was originally designed with an eye toward human progress and creating a better world.
Chinese authorities have already used a vast system of advanced facial recognition technology to profile religious adherents. This data then serves as a hub for authorities to track movements, search people's homes and detain citizens. Officials in Beijing cynically refer to the labor camps they have set up as vocational training centers; in reality, these are "reeducation camps" designed specifically to target Uyghur Muslims in China.
The Uyghurs are not alone in experiencing the abuse of AI technology. In Venezuela, where Catholics make up 70% of the population, believers are becoming the archetype of what life looks like for religious adherents living under hostile governments when authoritarian regimes partner to export these technologies abroad. As the Catholic Church in Venezuela has worked to position itself as a bulwark defending human rights and dignity against the country's regime, government actors have been known to crash Sunday Masses and keep tabs on prominent priests.
A 2018 Reuters investigative report showed startling ties between the Venezuelan government and a Chinese state-owned technology giant. Those ties have grown stronger over time with the Chinese Communist Party’s export of some of its most powerful AI tools to fellow authoritarians.
Meanwhile, in Iran, the Islamic Revolutionary Guard Corps has used artificial intelligence to surveil the country's citizens. As early as 2009, The Wall Street Journal profiled Iran's spying web, which, at the time, was aided by European technology. But it's no secret that many governments, including democracies like the United States, use big data and artificial intelligence under the banner of national security.
AI, in other words, is here to stay.
It is incumbent on the free world, including business, government and religious leaders, to harness its capabilities to advance human rights and religious freedom. The response from the U.S. and like-minded nations should be twofold: calling out the malign actors who use the technology to tyrannize religious and other minority groups, while also maintaining boundaries on how AI is deployed and focusing its use for good.
Bonnie Glick is the director of the Krach Institute for Tech Diplomacy at Purdue. She is the former deputy administrator and chief operating officer of the U.S. Agency for International Development.
Kennedy Lee is special assistant within the Center for Tech Diplomacy at Purdue.
This story appears in the May issue of Deseret Magazine.