Elon Musk Admits Humans Can’t Be Trusted with Tesla’s Autopilot Feature
Tesla CEO Elon Musk (indirectly) made a striking admission yesterday: the autonomous driving features his company recently launched are too dangerous. On an earnings call with investors, Musk said that “additional constraints” will be added in response to evidence that people have pushed the feature too far. “There’s been some fairly crazy videos on YouTube,” he said. “This is not good.”

It has been well documented that people have both intentionally and accidentally tested the limits of Tesla’s new feature (see “Drivers Push Tesla’s Autopilot Beyond Its Abilities”). But while some individuals have clearly been reckless, Tesla bears some responsibility as well due to the way it has designed and deployed its system, as Musk seems to realize.
Musk didn’t mention any specific “constraints” that will be added to make the autonomous driving feature safer. One obvious upgrade would be to require that someone actually be sitting in the driver’s seat. As a video of a Tesla driving itself with no one at the wheel on a private road shows, the system requires only that the driver’s-side seatbelt be clicked in, even though the driver’s seat has an “occupancy sensor.”
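As a rough sketch of the logic involved, the stricter check could consult the occupancy sensor as well as the seatbelt latch before allowing the system to stay engaged. The sensor names and function below are hypothetical illustrations, since Tesla’s actual software interfaces aren’t public:

```python
# Hypothetical illustration, not Tesla's actual code: a stricter
# engagement check that consults the seat's occupancy sensor in
# addition to the seatbelt latch the system reportedly relies on.
from dataclasses import dataclass


@dataclass
class DriverSeatState:
    seatbelt_latched: bool  # what the current system reportedly checks
    seat_occupied: bool     # reading from the seat's occupancy sensor


def autopilot_may_stay_engaged(state: DriverSeatState) -> bool:
    """Allow autonomous driving only if a driver appears to be in the seat."""
    return state.seatbelt_latched and state.seat_occupied


# Clicking the belt into an empty seat would no longer be enough:
print(autopilot_may_stay_engaged(
    DriverSeatState(seatbelt_latched=True, seat_occupied=False)))  # False
```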
Restrictions like that may not be enough, though, if Google is right about the relationship that forms between humans and autonomous cars. One reason Google invented a new car design that lacks a steering wheel was that long-term tests of conventional SUVs modified to drive themselves showed that people quickly became dangerously detached from what was going on around them. When the car needed them to take over because it couldn’t handle a particular situation, they weren’t ready (see “Lazy Humans Shaped Google’s New Autonomous Car”).
Musk has said he believes fully autonomous cars are inevitable. But Tesla sells conventional cars, and he appears committed to gradually adding more and more autonomy while relying on humans to stay alert and sensible, a path Google says is too dangerous.