
Tesla Might Replace Autopilot’s Eyes with Something Far More Advanced

This may accelerate the arrival of self-driving cars.
August 4, 2016

What will Tesla’s new brain be capable of?

The car company announced last week that it would no longer use a vision system provided by Mobileye, an Israeli company that supplies technology to many automakers. This comes a few weeks after the National Highway Traffic Safety Administration announced that it was investigating a fatal accident that occurred while one of Tesla's cars was operating in Autopilot mode, a system designed to enable automated driving under a driver's supervision. It is unclear why Tesla is dropping Mobileye, but one reason may be the emergence of newer approaches to automated driving.

Mobileye provides what amounts to an advanced image-recognition system, capable of identifying road signs and obstacles, such as other cars or pedestrians, on the road ahead. The company has said that it uses deep learning, a popular machine-learning technique in which a many-layered network of simulated neurons learns to recognize patterns from a large number of training examples.
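To make the technique concrete, here is a minimal sketch in Python (using PyTorch) of the kind of many-layered network described above: a tiny convolutional classifier that maps a camera frame to a handful of road-object labels. The architecture and labels are invented for illustration and have no connection to Mobileye's actual system.

```python
# A minimal sketch, NOT Mobileye's system: a small convolutional network that
# maps a camera frame to one of a few illustrative road-object classes.
import torch
import torch.nn as nn

CLASSES = ["car", "pedestrian", "road_sign", "clear_road"]  # invented labels

class TinyRoadNet(nn.Module):
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(            # stacked layers of simulated neurons
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                      # learn visual features from pixels
        return self.classifier(x.flatten(1))      # score each road-object class

model = TinyRoadNet()
frame = torch.rand(1, 3, 64, 64)                  # stand-in for one 64x64 camera frame
probs = model(frame).softmax(dim=1)
print(dict(zip(CLASSES, probs[0].tolist())))
```

In practice such a network would be trained on millions of labeled road images; the untrained toy above only shows the shape of the computation.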

Tesla has not publicly disclosed how its semi-automated driving technology works, but it most likely combines information from the Mobileye system with data from radar and ultrasound sensors, and uses that to make driving decisions.
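Because Tesla has not disclosed how the pieces fit together, the following is a deliberately simplified, hypothetical sketch of that kind of sensor fusion: a vision flag, a radar range, and an ultrasound range combined into a single driving decision. Every name, threshold, and rule below is invented for illustration.

```python
# Hypothetical sensor-fusion sketch; not Tesla's actual logic.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    vision_sees_obstacle: bool   # flag from the vision system
    radar_range_m: float         # distance to nearest radar return, meters
    ultrasound_range_m: float    # near-field distance, meters

def driving_decision(frame: SensorFrame) -> str:
    # Cross-check modalities: always brake for very close ultrasound returns,
    # and slow down only when vision and radar agree an obstacle is near.
    if frame.ultrasound_range_m < 0.5:
        return "emergency_brake"
    if frame.vision_sees_obstacle and frame.radar_range_m < 30.0:
        return "slow_down"
    return "maintain_speed"

print(driving_decision(SensorFrame(True, 22.0, 4.0)))  # -> slow_down
```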

Tesla may simply develop its own vision system, tailored purely for automated driving. The company declined to comment, but it has been building up expertise in machine vision and recruiting experts in the field.

Historically, automated driving systems have used rules hand-coded by engineers to recognize obstacles and make critical on-the-road decisions. Increasingly, however, those rules are being replaced by machine learning, a way of teaching a system how to behave using large amounts of data. Deep learning in particular will be used to train cars not just how to see but how to drive correctly. Forthcoming systems will use machine learning to do more than just recognize objects on the road; they might, for example, estimate the distance to an obstacle or even its trajectory. Machine learning could also help with a car's motion planning and even the control of its driving systems.
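One common way to get several of those outputs from a single model is a shared backbone feeding multiple heads. The hedged sketch below shows a network that both classifies the object ahead and regresses its distance; it illustrates the general technique, not any shipping automotive system.

```python
# Multi-task perception sketch: classification plus distance regression.
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(            # shared visual features
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.class_head = nn.Linear(16 * 4 * 4, num_classes)  # what is it?
        self.distance_head = nn.Linear(16 * 4 * 4, 1)         # how far away?

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)
        return self.class_head(feats), self.distance_head(feats)

net = PerceptionNet()
logits, distance = net(torch.rand(1, 3, 64, 64))  # one toy camera frame
print(logits.shape, distance.shape)               # -> [1, 4] and [1, 1]
```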

Nvidia, which supplies hardware to many carmakers including Tesla, has demonstrated a system that uses deep learning to control every aspect of a self-driving prototype. The system was purely experimental, however, and does not necessarily reflect a future Nvidia offering. The hardware maker declined to comment for this article.
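In the spirit of that demonstration, the toy sketch below shows the end-to-end shape: camera pixels go in, a single steering command comes out, and the network is trained by regression to imitate recorded human steering. It is illustrative only and does not reflect Nvidia's actual network.

```python
# End-to-end steering sketch: pixels in, steering angle out. Illustrative only.
import torch
import torch.nn as nn

steer_net = nn.Sequential(                     # the whole driving policy is one network
    nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),  # 64x64 frame -> 30x30 feature map
    nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(), # 30x30 -> 13x13
    nn.Flatten(),
    nn.Linear(36 * 13 * 13, 50), nn.ReLU(),
    nn.Linear(50, 1),                          # single output: a steering angle
)

optimizer = torch.optim.Adam(steer_net.parameters(), lr=1e-3)
frames = torch.rand(8, 3, 64, 64)              # toy batch of dashcam frames
human_angles = torch.rand(8, 1)                # recorded human steering (toy data)

optimizer.zero_grad()
loss = nn.functional.mse_loss(steer_net(frames), human_angles)
loss.backward()
optimizer.step()                               # one step of imitating the human driver
print(float(loss))
```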

Ingmar Posner, a lecturer at the University of Oxford and an expert on applying machine learning to robotic systems, including self-driving vehicles, says deep learning will likely take on more complex scene interpretation in forthcoming driving systems.

"I think the applications in autonomous driving will widen as more sensing modalities are introduced, like radar and lidar, and as different outputs are required," Posner says. "Imagine, for example, a system that learns to anticipate a driver’s actions ahead of time and checks whether these are safe."

Some startups are already working on more advanced deep-learning-based driving systems that may become commercially available before long.

Drive.ai, a company started by a group of AI researchers from Stanford University, is developing a sophisticated automated driving system that it will eventually offer to carmakers. Like Nvidia's prototype, Drive.ai's system uses deep learning for more elements of automated driving, including image recognition and parts of motion planning and control. In April this year, Drive.ai became the 13th company to receive a license to test autonomous vehicles on public roads in California.

"We realized that driving is this amazing application of deep learning, and done right, it's a way to change the world," says Carol Reiley, a roboticist and cofounder of Drive.ai. "It's a very data-driven, deep-learning approach to driving.”

After years of slow and steady progress, the automotive industry is now changing at an extraordinary pace, with combustion engines and crankshafts becoming less important than computers, sensors, and code (see "Rebooting the Automobile"). That a company like Drive.ai, staffed by computer scientists and AI experts, could be poised to introduce a key new technology for automakers says much about this transformation. But it's also critical for this sort of expertise to infuse the car world, because machine-learning techniques like deep learning are fundamentally different from conventional hand-coded software, and their decisions can be much harder to inspect and verify (see "If A Driverless Car Goes Bad We May Never Know Why").

Reiley says this verification challenge is a big area of focus for Drive.ai. "With autonomous driving, safety is so critical," she says. "One of the things we're thinking heavily about is how to test deep-learning systems in a way that is semi-transparent. That people can at least understand the inputs and have expected outputs."
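Reiley's "understand the inputs and have expected outputs" maps naturally onto regression testing: record an approved model's outputs on a curated suite of inputs, then fail loudly if a retrained model's answers drift. The sketch below shows that pattern with a toy model; it is not Drive.ai's actual methodology.

```python
# Known-inputs/expected-outputs testing sketch; names are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)                        # make the toy model deterministic
model = nn.Linear(4, 2)                     # stand-in for a trained perception net
model.eval()

# Curated suite of known inputs, e.g. canonical sensor readings.
test_inputs = torch.tensor([[1.0, 0.0, 0.0, 0.0],
                            [0.0, 1.0, 0.0, 0.0]])
with torch.no_grad():
    expected = model(test_inputs).clone()   # recorded once from the approved model

def check_model(m: nn.Module) -> None:
    # Fail loudly if the model's answers on the suite have drifted.
    with torch.no_grad():
        actual = m(test_inputs)
    assert torch.allclose(actual, expected, atol=1e-5), "behavior drifted"

check_model(model)                          # passes for the approved model
print("expected-output checks passed")
```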

Drive.ai is entering a competitive market. Google has been testing self-driving cars for some time, with the goal of eventually offering the technology to automakers. Apple is also rumored to be developing an automated driving system, either for its own vehicle or for a product that would be offered to existing carmakers.

Posner says that the improved sensing capabilities being developed for automated vehicles should also lead to better mobile robots in many industrial settings, such as mining and warehouse logistics. "This point often gets missed," Posner says. "Autonomous cars really present only a small subset of the application domains this tech will touch."
