Elon Musk Admits Humans Can’t Be Trusted with Tesla’s Autopilot Feature

Tesla will add restrictions to its new autonomous driving feature in response to evidence of people using it dangerously.
November 6, 2015

Tesla CEO Elon Musk (indirectly) made a striking admission yesterday: the autonomous driving features his company recently launched are too dangerous. On an earnings call with investors, Musk said that “additional constraints” will be added in response to evidence that people have pushed the feature too far. “There’s been some fairly crazy videos on YouTube,” he said. “This is not good.”

[Photo caption: A Tesla Model S and X side by side at the supercharger in Gilroy, California.]

It has been well documented that people have both intentionally and accidentally tested the limits of Tesla’s new feature (see “Drivers Push Tesla’s Autopilot Beyond Its Abilities”). But while some individuals have clearly been reckless, Tesla bears some responsibility as well due to the way it has designed and deployed its system, as Musk seems to realize.

Musk didn’t mention any specific “constraints” that will be added to make the autonomous driving feature safer. One obvious upgrade would be to require that someone be sitting in the driver’s seat. As this video of a Tesla driving itself with no one at the wheel on a private road shows, the system requires only that the driver’s side seatbelt be clicked in, even though the driver’s seat has an “occupancy sensor.”

Restrictions like that may not be enough if Google is right about the relationships that form between humans and autonomous cars, though. One reason Google invented a new car design that lacks a steering wheel was that long-term tests of conventional SUVs modified to drive themselves showed that people quickly became dangerously detached from what was going on around them. When the car needed them to take over because it couldn’t handle a particular situation, they weren’t ready (see “Lazy Humans Shaped Google’s New Autonomous Car”).

Musk has said he believes fully autonomous cars are inevitable. But Tesla sells conventional cars, and Musk appears committed to gradually adding more autonomy while relying on humans to stay alert and sensible—a path Google considers too dangerous.
