Op-ed: Public policy needs to confront AI’s benefits and dangers

Image: Artificial intelligence concept with a wire mesh grid. (DepositPhotos)

Artificial intelligence can be difficult to explain, yet it's one of today's hottest buzzwords. Tractica described AI as “an information system that is inspired by a biological system designed to give computers the human-like abilities of hearing, seeing, reasoning, and learning.”

With Stephen Hawking's recent passing, I feel it’s worth pondering AI's current state. Media and pop culture tout its advancements, but is our enthusiasm causing us to overlook AI’s societal impact, be it good or bad?

It's true that AI has many practical, beneficial applications, making this an exciting time for industries ranging from transportation to retail to health care.

Harvard Business Review found that between 34 and 44 percent of companies have employed AI to complete tasks in information technology, marketing, finance and accounting, and customer service. That statistic has likely shifted since the article was written, but it's clear that AI is already automating jobs like monitoring for security incursions and predicting customer purchasing behavior.

In medicine, algorithms from a branch of AI called “deep learning” are executing technical functions originally handled by human doctors, such as accurately diagnosing tuberculosis from chest X-ray images, according to Forbes.

The key is not to automate the job market completely, but to balance human strengths, which are best applied to specialized work, with AI that can manage mundane tasks and guard against human error.

My concern has grown considerably over the past several years as I've read statements from prominent scientists and leaders around the world.

Tesla and SpaceX CEO Elon Musk cautioned that “AI could cause a third world war.” Hawking warned that “AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.”

Musk, Google DeepMind co-founder Mustafa Suleyman and 114 other leading AI and robotics experts drafted an open letter asking the United Nations to safeguard the future use and development of AI and robotics so they are not repurposed into a global arms race of “killer robots.” To me, the most important passage of the letter is this: “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.”

I recently saw “AI Nightmare,” a short VR film by Lindero Edutainment that seemed to reflect these very concerns. The filmmakers said Hawking’s speech on the dangers of unchecked AI inspired the movie's development. Granted, the open letter to the U.N. focused on “weapons of terror,” but this virtual reality indie film asked how even unassuming smart technology, such as virtual assistants like Siri or Alexa, could prove dangerous if we do not take the time to fully understand its capabilities and make AI more human-centric.

Do we need to do more to have our voices heard? We must encourage greater discussion about AI and its future applications in communities beyond the scientific and academic fields, rather than mindlessly absorbing the endless stream of media noise surrounding it. Speak up before it's too late!