Editor’s note: Artificial intelligence promises to reshape human existence. In this essay, Lyric Kaplan, counsel at AI League for Good, argues that without regulation, AI presents an existential risk that could result in a totalitarian cyberdystopia. But Eric Schmidt, the former CEO of Google, contends in another essay that AI has the potential to unleash unprecedented human potential and the most efficient and transparent government in history. Is there a middle ground?

Artificial intelligence is rapidly becoming the most transformative technology of the 21st century. From smart assistants and self-driving cars to medical diagnostics, AI is making its presence felt in almost every aspect of our lives. But with great potential comes significant risk, and one of the areas most affected by this new technological revolution is our democratic institutions. Recent reports have shown declining public trust in government institutions, and AI’s power to influence and shape information may be a major contributing factor. The events surrounding Cambridge Analytica’s psychographic profiling serve as stark reminders of AI’s potential impact.

The Democracy Index, published annually by The Economist, reported that half of the world’s countries saw their scores fall in recent years, including the United States, which was demoted from “full democracy” to “flawed democracy.” This decline was largely due to an erosion of confidence in public institutions.

AI is increasingly being used to influence information dissemination, voter behavior and public discourse — actions that are reshaping the democratic landscape. These impacts highlight the need for careful scrutiny and regulation of AI’s role in society.

Beyond elections, a new generation of AI-driven technologies deployed for surveillance is compromising individual privacy, a key pillar of democratic society.

The relationship between AI, privacy and democracy is complex. It will be up to tech companies and government to strike a balance between innovation and the core constitutional values that preserve democracy.



AI technology is often portrayed as a force for good — an innovation that enhances productivity, reduces inefficiencies and makes life more convenient. AI-powered devices can recommend personalized products, predict our preferences and assist us with everything from writing emails to navigating busy roads. AI has also been transformative in fields like health care, where machine learning models are being used to detect diseases earlier and with greater accuracy, and in education, where personalized learning tools can adapt to each student’s unique needs.

But as with any powerful tool, AI can also be used to harm. The vast amounts of personal data required for AI to function pose significant risks to privacy and individual autonomy. The technology that powers your smart assistant and provides personalized recommendations can also be used to track your behavior, build detailed profiles of you and influence your decisions without your awareness.

AI is at the core of “big data” analytics and the Internet of Things, both of which contribute to the vast surveillance ecosystem we find ourselves in today and have made it possible to gather information on an unprecedented scale. These “things” include everything from wearable fitness trackers to smart thermostats. They are all collecting data on our activities, preferences and habits. When combined with AI, these devices create a powerful surveillance network capable of building detailed profiles of our behaviors.

This surveillance ecosystem isn’t limited to just the private sector. Governments are also using AI-driven technologies to monitor citizens. The third-party doctrine, which allows the government to collect information from third-party providers (such as telecom companies or social media platforms) without a warrant, raises significant privacy concerns. This concept highlights the challenges in balancing government access with individual privacy rights in the digital age. In such a setting, our ability to control who has access to our personal information is severely compromised.

The proliferation of surveillance cameras, facial recognition technology and other AI-powered monitoring tools has further expanded the reach of the surveillance ecosystem. In many cities around the world, cameras equipped with facial recognition software are used to track individuals in real time, often without their consent. This level of surveillance raises significant ethical questions about the balance between public safety and individual privacy. While proponents argue that these technologies can help prevent crime and enhance security, critics warn that they can also be used to stifle dissent and target marginalized communities.



AI’s impact extends beyond surveillance; it can also directly shape our decisions. Online behavioral advertising, for instance, uses AI to target advertisements based on our online activities. These ads are not just trying to sell us products — they are influencing our choices, often in ways that are difficult for us to detect. By using data about our preferences, habits and even emotional states, AI can create highly personalized ads that subtly nudge us toward particular decisions.

We are entering a new era in recommender systems technology. Previously, the goal was to match users with the most relevant ads out of a finite pool of existing assets. Now, we are moving into a phase of hyper-personalization in which new ad assets are generated in real time, uniquely tailored to each individual. These assets will be crafted by foundation models based on a person’s values, goals, fears, hopes and dreams. This new paradigm goes beyond mere demographics, creating ads that are unique in how they look and feel for each user and have a greater propensity to influence behavior.
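To make that shift concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical — the profile fields, the ad pool and the generate_text stand-in for a foundation-model call — and it is meant only to contrast pool-based matching with real-time generative targeting, not to depict any actual ad system.

```python
# Illustrative sketch only: the profile fields, ad pool and generate_text callable
# are hypothetical stand-ins, not any real ad platform's API.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    interests: list = field(default_factory=list)  # inferred from browsing, purchases, posts
    goals: list = field(default_factory=list)
    fears: list = field(default_factory=list)

AD_POOL = [
    "Lightweight running shoes, 20% off this week",
    "A credit card with no annual fee",
    "Meal kits delivered to your door",
]

def pool_based_recommender(user: UserProfile) -> str:
    """Old paradigm: score a finite pool of pre-written ads and serve the best match."""
    def overlap(ad: str) -> int:
        return sum(term.lower() in ad.lower() for term in user.interests)
    return max(AD_POOL, key=overlap)

def generative_recommender(user: UserProfile, generate_text) -> str:
    """New paradigm: a foundation model writes a one-off ad for this specific person."""
    prompt = (
        "Write a two-sentence ad for running shoes aimed at someone whose goals are "
        f"{user.goals} and whose worries include {user.fears}."
    )
    return generate_text(prompt)  # generate_text: any text-generation model call

if __name__ == "__main__":
    user = UserProfile(interests=["running"], goals=["finish a marathon"], fears=["injury"])
    print(pool_based_recommender(user))
    print(generative_recommender(user, generate_text=lambda p: f"[model would write: {p}]"))
```

The difference matters for oversight: in the first function there is a fixed inventory of ads that a regulator or researcher could in principle audit, while in the second, each ad exists only for the person it was generated for.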

While targeted advertising can help us find relevant products and services, it also blurs the line between persuasion and manipulation. When AI knows more about our behaviors and desires than we do, it becomes easy for companies to exploit our vulnerabilities. This manipulation erodes our ability to make autonomous decisions and undermines the democratic value of free will. As these types of manipulations become more complex and harder to identify, the potential for undue influence increases, challenging our capacity to make decisions freely.


Moreover, AI’s ability to influence decisions extends to political contexts. Political campaigns are increasingly using AI to micro-target voters with tailored messages designed to appeal to their specific fears, desires or biases. This kind of targeted political advertising can deepen societal divisions and create echo chambers where individuals are only exposed to information that reinforces their existing beliefs. The result is a fragmented society where meaningful dialogue becomes difficult, and the ability to reach consensus is weakened.

This has the potential to threaten the foundations of democracy itself. Elections, the bedrock of democratic governance, are increasingly vulnerable to manipulation through AI-driven technologies. Psychographic profiling, as used by Cambridge Analytica, shows how AI can be utilized to influence voter behavior. This example demonstrates the power of AI to target individuals based on their data, shaping political messaging in a way that can significantly impact electoral outcomes. By analyzing data from social media and other sources, AI can craft targeted messages designed to sway voters, often by exploiting their fears and biases.

In addition to direct electoral manipulation, AI has contributed to the proliferation of “fake news” and misinformation. When users see content that aligns with their beliefs, they’re more likely to trust and share it — regardless of its accuracy. This plays into personal biases and reinforces existing belief systems, making people less likely to question the validity of what they’re reading. By creating echo chambers and promoting sensational content, AI algorithms can distort the information landscape, making it difficult for citizens to make informed decisions.

The opacity of AI also poses a significant problem. Many AI systems operate as “black boxes,” making decisions without offering any explanation for how those decisions were reached. This lack of transparency is the antithesis of democratic accountability. When decisions that affect our lives are made by algorithms that we cannot scrutinize or understand, it becomes impossible to hold those in power accountable.

The role of social media platforms in spreading misinformation is also a major concern; a recent Pew Research Center study found that more than half of U.S. adults now get at least some of their news from social media. AI algorithms prioritize content that generates engagement, which often means promoting sensational or divisive material. This has led to the rapid spread of misinformation, conspiracy theories and polarizing content, all of which contribute to a decline in public trust in democratic institutions. Addressing these issues will require greater transparency and stronger measures to curb the spread of harmful content.



The challenges posed by AI are not insurmountable, but they do require thoughtful regulation. Europe has chosen to harmonize privacy and AI laws regionally rather than country by country, which has streamlined implementation across its member states. In 2018, the General Data Protection Regulation came into effect, applying a unified privacy standard across all countries in the European Economic Area. Similarly, the EU Artificial Intelligence Act, which entered into force in August 2024, establishes consistent AI regulations across Europe, fostering a balanced approach to innovation and ethical standards.

In contrast, the United States lacks a federal AI law, instead relying on state-level legislation that addresses specific AI harms like deepfakes in elections, transparency of data used to train AI systems and disclosures of health care communications made with generative AI. This case-specific patchwork approach has created a complex matrix of AI laws that tech companies need to navigate. For example, in California alone, around 47 AI-related bills were introduced in 2024, with the governor signing some 17 into law.

In the United States, there have been recent attempts to introduce comprehensive data privacy legislation at the federal level. The American Data Privacy and Protection Act, introduced in 2022, aimed to provide a federal baseline for privacy protections across the United States, focusing on data minimization, transparency and user control. Despite initial bipartisan support, the bill stalled before gaining full legislative approval, and the lack of a unified federal framework continues to leave gaps in data protection. These efforts nonetheless reflect a growing recognition of the need to protect individuals’ privacy rights in an increasingly data-driven world.

States like California have taken the lead in enacting stronger privacy laws. The California Consumer Privacy Act and its successor, the California Privacy Rights Act, provide some of the most robust privacy protections in the country. These laws give consumers the right to know what personal data is being collected, the ability to opt out of the sale of their data and the right to request the deletion of their data. The establishment of the California Privacy Protection Agency also provides an independent body to enforce these regulations and ensure compliance.

Moving forward, the U.S. could benefit from adopting similar regulations at the federal level to create a consistent and comprehensive approach to data privacy. AI-specific regulation that addresses the unique challenges posed by this technology is crucial. This includes requirements for transparency in AI decision-making processes, such as mandating explainability in algorithmic decisions that significantly impact individuals. Ensuring that high-risk AI systems are subject to regular audits and assessments can also help mitigate risks related to bias and discrimination.

International cooperation is also essential in addressing the global challenges posed by AI. Given the borderless nature of the internet and the global reach of tech companies, no single country can effectively regulate AI on its own. Collaborative efforts, including international agreements on data privacy and ethical AI standards, are crucial to create a cohesive framework that ensures individual rights are protected while also encouraging technological innovation.

Additionally, there is a need for greater public awareness and education about AI. Most people are unaware of how their data is being collected and used, or of the ways in which AI influences their decisions. Public education campaigns can help individuals understand the risks and take steps to protect their privacy. Furthermore, fostering a culture of ethical AI development within the tech industry is crucial. Companies should be encouraged to adopt best practices for transparency, fairness and accountability, and to prioritize the well-being of users over profits.



The challenge before us is to harness AI’s potential while putting in place safeguards to protect our privacy, autonomy and democratic institutions.

To do this, we need a collective effort. Governments must establish regulations that hold tech companies accountable. Tech companies must adopt ethical practices and prioritize the rights of users over short-term profits. And we, as individuals, must be vigilant in understanding how our data is being used and advocate for our rights to privacy and autonomy.


The tech industry also has a responsibility to develop AI in a way that aligns with democratic values. This means designing AI systems that are transparent, accountable and fair. Companies should invest in research to reduce algorithmic bias, increase explainability and ensure that AI systems are inclusive and equitable. Moreover, independent oversight bodies should be established to monitor the impact of AI technologies and ensure that they are used responsibly.

The future of AI is still being written. By addressing the risks it poses to privacy and democracy today, we can ensure that it becomes a tool for progress rather than a mechanism for control. We must strive to create an environment where technological innovation goes hand in hand with the protection of human rights and democratic values. Only by doing so can we build a future where AI serves humanity, rather than subjugates it.

This article is adapted from “Artificial Intelligence: Risks to Privacy and Democracy,” published in the Yale Journal of Law & Technology. Lyric Kaplan is AI product and privacy counsel and a board member at AI League for Good. The opinions expressed in this article do not reflect the views of her employer.

This story appears in the January/February 2025 issue of Deseret Magazine.
