A self-driving car reportedly struck and killed a pedestrian in suburban Phoenix this week, demonstrating what a lot of companies involved in the production of autonomous vehicles apparently would not like you to know — that the technology is not as close to perfection as some have maintained.
A common refrain among those involved in the production of these vehicles is that self-driving cars will be ready for mass production in 2020, or some time in the early part of that decade. But the gulf between today's driver-assist technology and a safe, reliable, fully machine-navigated car remains wide.
Governments should respect the need to encourage research and innovation, but they can't afford to give full sanction to driverless vehicles at the expense of safety. The nature of a competitive marketplace incentivizes companies to claim they are winning the race to perfect this technology. But governments, and particularly states, need to set a high bar for proving such claims.
That will involve some tricky political maneuvering, especially as states like Arizona vie to encourage testing to position themselves as a center for this emerging technology.
Writing for alphr.com last year, Curtis Moldrich observed, "Before our roads are flooded with driverless vehicles, manufacturers must tackle a range of technical and ethical challenges, and combat the biggest threat to autonomous technology: humans."
Humans can be unpredictable and unreliable. An onboard computer must consider a wide array of possible human behavior when encountering people on the road, whether in cars or on foot, and it must be able to detect them regardless of how darkly they are clothed.
This isn’t to diminish the promise driverless technology holds for the future. Once perfected, it almost certainly would reduce highway deaths, which topped 40,000 in 2017 and are on the rise after many years of declines. It would improve the lives of senior citizens and people with disabilities. It would make fatigue, the distractions of cellphones and alcohol use nonfactors in highway safety.
But the world isn’t there yet.
Recent research at the University of Utah has shown that the current mainstream offerings of driver-assisted products may be more dangerous than having people manually operate all aspects of driving. A study there funded by the AAA Foundation for Traffic Safety found that adaptive cruise control and lane-assist functions often lull drivers into a relaxed state that makes it difficult for them to quickly take control of the car when needed.
The vehicle that reportedly struck the woman in Tempe, Ariz., had a human driver onboard to ensure the vehicle behaved properly. That may not be a reliable safeguard.
For now, Uber has temporarily removed its self-driving cars from Arizona and the other states where it has been testing them. Google and others may want to put a hold on their live-traffic testing as well.
This accident should not shelve the idea of self-driving vehicles, but it must put things on pause until engineers understand precisely what happened and why.
We hesitate to ask the federal government to increase its involvement. The Trump administration appears to have continued an Obama-era policy of encouraging companies to test and innovate, and that is the only way the technology will improve.
But the tragedy in Arizona should cause the people involved to slow down a bit in their eagerness to push beyond technology’s current limits. The future of self-driving automobiles is close, but it isn’t here yet.