For self-driving cars, the road to commercialization could contain a few potholes.
Last week Japanese automaker Subaru announced that it would recall 50,000 “zombie cars” because several models (Imprezas, Outbacks, and Crosstreks) were found to start themselves without human intervention (perhaps before driving off in search of fresh human brains—the report doesn’t say).
The story caught my eye because I’ve been working on an article about autonomous driving. These cars aren’t self-driving, and the problem isn’t particularly serious (unless the affected car is left running in a confined space), but the incident illustrates how novel forms of automation can bring with them surprising, and potentially alarming, new ways for cars to go wrong. The problem with these Subarus reportedly lies in their remote starting system: dropping the car’s key fob can apparently send a signal to the remote starter that causes the engine to switch on and off for up to 15 minutes.
The case matters because far more advanced autonomous technology is being developed at a remarkable rate, with sophisticated features already showing up in many commercial vehicles: cruise control that tracks the speed of other cars, automated parallel parking, and so on. Carmakers are introducing these features carefully and responsibly, but each one could still malfunction in ways no one has anticipated.
Even if the technology works perfectly, public uncertainty over greater automation could be a problem. You may recall the incidents of “sudden unintended acceleration” that led to a massive recall of several Toyota models between 2009 and 2011. Although many incidents turned out to be due to floor mats interfering with accelerator pedals, the fact that the cars’ acceleration was controlled electronically triggered lingering speculation that their electronics were somehow going bananas.
In other words, as cars increasingly become capable of driving themselves, carmakers will need to find ways to reassure drivers that they don’t need to worry about a zombie invasion.