Photo illustration by Michelle Budge

Can artificial intelligence stop mass shootings?

AI is transforming life as we know it, changing industries like security. But there could be unintended consequences.

A young man parks outside a sprawling warehouse around 11 p.m. on April 15. Weary employees shuffle in and out through circles of yellow lamplight, changing shifts or taking breaks. He used to work here, too, among hundreds of people who sort parcels at the FedEx facility in Indianapolis, until he got fired. Now, he pulls up his hoodie and reaches for two assault rifles.

For eight victims — four shot dead in the parking lot, four inside — it’s already too late. But imagine there was a security system that could instantly detect those guns and lock the shooter out of the building. Imagine it could automatically alert the police and send them live footage, so they’d know where to find the shooter and the people who’d been hit. Could a system like that save lives? For some proponents of artificial intelligence, that is the hope.

AI is a branch of computer science in which machines are built to think and learn for themselves. The technology is already deployed in health care, education, transportation and other sectors, quietly restitching the fabric of our lives. AI can transcribe audio recordings more accurately than humans and detect some cancers better than doctors. It could soon deliver goods in trucks more safely and efficiently than practiced drivers. “It is difficult to think of a major industry that AI will not transform,” said Andrew Ng, a computer scientist and founder of Landing AI, an artificial intelligence consulting company.

Tech magnates like Elon Musk have hailed the technology’s potential to solve some of humanity’s most intractable problems. None is more confounding — or more pointedly American — than mass tragedies like the recent FedEx shooting. There have been more than 230 mass shootings so far this year, according to the Gun Violence Archive, including one in San Jose, California, just this week. A handful of startup companies are trying to marshal AI to prevent them or at least reduce their impact, with products that can detect firearms, lock doors and warn security guards or police, all without human intervention.

But as society enters the AI age, we will have to reckon with its power, the ways that power can be abused, and its limitations. Can technology fix humanity’s own flaws?

That’s the idea. AI is meant to enhance or even replace human decision-making with computers that can perceive their environment and learn to solve problems. Researchers have already developed systems that can see, hear, smell, read, write, drive and more. The AI response to gun violence centers on video analytics: scanning security footage in real time for firearms or other signs of a threat, like someone wearing a ski mask.
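To make that concrete, here is a minimal sketch of what such a real-time scanning loop might look like. The camera capture uses the real OpenCV library; `detect_firearm` and `alert_security` are hypothetical stand-ins for the proprietary detection models and alarm hooks these vendors actually build.

```python
# Illustrative sketch of a real-time video-analytics loop.
# OpenCV (cv2) is a real library; detect_firearm() and alert_security()
# are hypothetical placeholders for a vendor's trained model and alarm hook.
import cv2

def detect_firearm(frame) -> float:
    """Hypothetical detector: returns a 0-1 confidence that a firearm
    is visible in the frame. A real product would run a neural network
    trained on labeled gun imagery here."""
    return 0.0  # placeholder

def alert_security(frame, confidence: float) -> None:
    """Hypothetical hook: lock doors, notify guards, stream footage."""
    print(f"ALERT: possible firearm (confidence {confidence:.2f})")

camera = cv2.VideoCapture(0)   # 0 = default camera; a site would use its CCTV feeds
THRESHOLD = 0.90               # only alert on high-confidence detections

while camera.isOpened():
    ok, frame = camera.read()
    if not ok:
        break
    confidence = detect_firearm(frame)
    if confidence >= THRESHOLD:
        alert_security(frame, confidence)

camera.release()
```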

Gun detection forms a modest subset of the $5 billion video analytics market focused on predicting and preventing crime. But companies like Actuate in New York, ZeroEyes in Pennsylvania, and Defendry in Arizona believe there will be demand if they can provide a reliable solution. A step beyond gunshot detection systems, which analyze ambient sound through strategically placed microphones, video analytics products can tell when someone is hopping a fence, entering an unauthorized area, shoplifting, loitering, starting a fight or even slipping and falling. Some companies are also working on detecting weapons hidden in bags or concealed under jackets, using microwave and infrared cameras.

“AI can be trained to do almost anything,” said Defendry president Pat Sullivan. Like humans, these computers learn best from repeated exposure to the subject matter. For that purpose, some companies have filmed hundreds of staged shooting attempts with employees, paid actors and police. The clips are fed to the computer, which analyzes them and learns to recognize warning signs.
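In software terms, that process is a standard supervised-learning loop. The sketch below (written in PyTorch, not drawn from any particular vendor's code) assumes a hypothetical `staged_clips/` folder of frames pulled from those staged videos, sorted into `gun/` and `no_gun/` subfolders, and retrains a generic pretrained vision model to tell the two apart.

```python
# Illustrative training sketch (PyTorch). Assumes a hypothetical directory
# of frames extracted from staged videos, sorted into gun/ and no_gun/ folders.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each subfolder name (gun/, no_gun/) becomes a class label.
dataset = datasets.ImageFolder("staged_clips/", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a generic pretrained vision model and retrain its final
# layer to distinguish the two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(frames), labels)
        loss.backward()   # the "learning": adjust weights to reduce errors
        optimizer.step()
```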

While lighting, obstruction and camera angles can hinder accuracy, the more training a system gets, the better it becomes, said Ali Farhadi, a professor in the Department of Computer Science and Engineering at the University of Washington. Good data can make the difference between an AI system spotting a gun and mistakenly sounding the alarm on a maintenance worker with a drill in hand.
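One plausible way to temper such false alarms, offered here as an illustration rather than any company's disclosed method, is to require a detection to persist across many consecutive frames before an alert fires, so that a single glitchy frame does not summon the police:

```python
# Illustrative false-alarm filter: require every frame in a sliding
# window to clear the confidence threshold before raising an alert,
# so a one-frame misread (a drill, a glare) passes without incident.
from collections import deque

class PersistenceFilter:
    def __init__(self, threshold: float = 0.9, window: int = 15):
        self.threshold = threshold
        self.recent = deque(maxlen=window)  # last `window` frame results

    def update(self, confidence: float) -> bool:
        """Feed one frame's detection confidence; return True only when
        the entire window has been above the threshold."""
        self.recent.append(confidence >= self.threshold)
        return len(self.recent) == self.recent.maxlen and all(self.recent)

# Usage inside the frame loop:
#   if persistence.update(confidence):
#       alert_security(frame, confidence)
```

The trade-off is latency: a 15-frame window on a 30-frames-per-second feed adds about half a second before any alarm can sound, a delay designers would have to weigh against the cost of a false dispatch.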

That can be a problem. Some facial recognition systems have reportedly shown a bias against women and people of color. In 2020, Vice reported that a Canadian company misled a New York school district about the accuracy of its AI technology, which allegedly misidentified Black faces at a disproportionate rate and mistook objects like broom handles for guns. Imagine the implications if police were deployed to what they believed to be an active-shooter situation in a school.

That scenario echoes the dystopian downsides of technology that science fiction writers have been warning us about for a century. Despite the allure of a safer future, where criminals are spotted before they can cause harm, the risks and costs should also be weighed, say experts like Jay Stanley, a senior policy analyst at the American Civil Liberties Union and author of a paper called “The Dawn of Robot Surveillance.”

Even AI products that are initially sold for a narrow purpose can later be retooled for broader objectives. During the pandemic, for example, Actuate taught its systems to alert staffers if a person in their building wasn’t wearing a face mask. The same adaptability means the technology could be put to malign uses. “We need to think very carefully as a society about how we want to constrain the use of these technologies,” Stanley said. “There will be a lot of good uses for it, and there will be some bad uses.”

Increasingly, we will be watched by cameras that track our movements and may even discern our emotions, Stanley said. He envisions a chilling effect where people avoid doing something as simple as skipping down a sidewalk for fear of setting off anomaly detectors and being questioned by the police. It’s a hypothetical development that would have the greatest impact on the most vulnerable among us, who fear standing out too much. “Gun recognition is the more sympathetic end,” he said. “But we have to ask where it’s going.” 

Equally vexing is the question of whether AI can be effective enough in preventing mass shootings to merit the inherent risks. At the FedEx facility in Indianapolis, for example, the attacker gave no warning. Without argument or confrontation, he started shooting at people in the parking lot before pushing inside, where workers ducked under conveyor belts to hide. The whole event lasted a matter of minutes. When the authorities arrived, it was essentially over. How much time can AI really buy? And what will that change?

Aside from locking the doors, AI is essentially being proposed as a tool for accelerating the police response. But in Dayton, Ohio, in 2019, officers reached the scene of a shooting within 32 seconds, and in that unthinkably short span, nine people had already lost their lives. Even on-site police and security guards have often proven ineffective, as was the case at Marjory Stoneman Douglas High School in Parkland, Florida. So how fast is fast enough?

Meanwhile, some have focused their efforts on early warning signs, trying to identify and stop potential attackers before they act. Consider Indianapolis. Months before the massacre, the FedEx shooter’s mother contacted police to report that he might try to commit suicide by provoking a confrontation with officers. FBI officials interviewed him, inspected his home, seized a shotgun and left. He later legally purchased at least one of the two rifles he used. “We tried to get him the help he needed,” his family said after the tragedy.

If AI is a promising tool, its weaknesses reflect the limits of relying on technology alone. No computer can give a hopeless person a sense of belonging and purpose, or get a struggling individual access to therapy, or rebuild a healthier, more loving society. At the end of the day, only caring humans and a community of support can solve the most human of problems.