Deep Driving
When the Google self-driving-car project began about a decade ago, the company made a strategic decision to build its technology on expensive lidar and detailed mapping. Even today, Google’s self-driving technology still relies on those two pillars. While that approach is great up to a point—we have good algorithms for using lidar and camera data to localize a car on the map—it’s still not good enough. Driving on complicated, ever-changing streets involves perception and decision-making skills that are inherently uncertain (see “Your Driverless Ride Is Arriving”).
Now an artificial-intelligence technology called deep learning is being used to address the problem. Rather than relying on hand-coded algorithms, we can build systems that in effect program themselves, learning from examples of how they ought to behave in response to a given input. Deep learning is now the best approach to most perception tasks, as well as to many low-level control tasks.

A self-driving car needs a perception system to sense things that are moving (cars, people) as well as things that aren’t (lampposts, curbs). Self-driving vehicles detect dynamic objects using sensors such as cameras, laser scanners, and radar. Of these three, cameras are the cheapest, but they’re also used the least because it’s hard to translate images into detected objects. Using deep learning, we’re seeing dramatic improvements in the car’s ability to understand and make use of such images.
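To make that concrete, here is a minimal sketch, not any production pipeline, of how a deep network turns a single camera frame into detected objects. It assumes recent versions of the open-source PyTorch and torchvision libraries and their COCO-pretrained Faster R-CNN detector; the frame here is random stand-in data.

```python
# Minimal sketch: one camera frame -> labeled boxes via a pretrained deep network.
# Assumes PyTorch and torchvision are installed; weights download on first run.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # inference mode: no gradient updates

# A stand-in for one RGB camera frame (3 channels, 480x640), values in [0, 1].
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]  # the model takes a list of images

# Each detection is a bounding box, a COCO class index, and a confidence score.
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:  # keep only confident detections
        print(f"class {label.item()} at {box.tolist()} (score {score:.2f})")
```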
We’re also seeing significant gains from something called “multitask deep learning,” in which a system trained simultaneously to detect lane markings, cars, and pedestrians does better than three separate systems trained in isolation—since the single network can share information among the separate tasks.
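A rough sketch of the idea, with hypothetical layer sizes and toy labels (PyTorch assumed): a single shared convolutional trunk feeds three task-specific heads, and the per-task losses are summed so each task's gradient shapes the shared features.

```python
# Minimal multitask sketch: one shared trunk, three task heads.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extractor: all three tasks train these weights.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One lightweight head per task (hypothetical output sizes).
        self.lane_head = nn.Linear(32, 2)        # lane marking present / absent
        self.car_head = nn.Linear(32, 2)         # car present / absent
        self.pedestrian_head = nn.Linear(32, 2)  # pedestrian present / absent

    def forward(self, x):
        features = self.trunk(x)  # shared representation
        return (self.lane_head(features),
                self.car_head(features),
                self.pedestrian_head(features))

net = MultiTaskNet()
images = torch.rand(4, 3, 96, 96)                       # toy batch of frames
labels = [torch.randint(0, 2, (4,)) for _ in range(3)]  # toy per-task labels

loss_fn = nn.CrossEntropyLoss()
outputs = net(images)
# Summing the three losses lets every task update the shared trunk.
loss = sum(loss_fn(out, lab) for out, lab in zip(outputs, labels))
loss.backward()
```

Because the trunk is updated by all three losses, features learned for one task, say the edge of a curb, are also available to the others.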
Instead of relying entirely on a pre-computed map, the car can use the map as one of many data streams, combining it with sensor inputs to help it make decisions. (A neural network that knows from map data where crosswalks are, for example, can more accurately detect pedestrians trying to cross than one that relies solely on images.)
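One simple way to realize that fusion, sketched here with made-up dimensions and a hypothetical crosswalk rasterization (PyTorch assumed), is to render the map prior as an extra image channel and concatenate it with the camera frame before the detection layers.

```python
# Minimal fusion sketch: treat the map as one more input channel.
import torch
import torch.nn as nn

# Camera frame: 3 RGB channels. Map prior: 1 channel where a pixel is 1.0
# if the pre-computed map says a crosswalk covers that location, else 0.0.
camera = torch.rand(1, 3, 96, 96)
crosswalk_prior = torch.zeros(1, 1, 96, 96)
crosswalk_prior[:, :, 60:96, 30:70] = 1.0  # a crosswalk in the lower center

# Concatenate the sensor and map streams so the network sees both at once;
# a pedestrian head can then weigh image evidence more heavily near crosswalks.
fused = torch.cat([camera, crosswalk_prior], dim=1)  # 4 input channels

pedestrian_scorer = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),  # per-pixel pedestrian score map
)
scores = pedestrian_scorer(fused)
print(scores.shape)  # torch.Size([1, 1, 96, 96])
```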
Deep learning can also alleviate one of the biggest issues reported by people who have ridden in a self-driving car: a “jerky” feel to the driving style, which sometimes leads to motion sickness. A car trained on examples of human driving, by contrast, can offer a ride that feels more natural.
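This kind of training is often called behavioral cloning. The sketch below, with stand-in data and a deliberately tiny network (PyTorch assumed), shows the core idea: regress the steering signal a human driver actually produced from the frames the camera saw at the same moments.

```python
# Minimal behavioral-cloning sketch: learn steering from human examples.
import torch
import torch.nn as nn

# Hypothetical logged data: camera frames paired with the human driver's
# steering angles (radians) recorded at the same moments.
frames = torch.rand(32, 3, 64, 64)
human_steering = torch.randn(32, 1) * 0.1

policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # predicted steering angle
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
for _ in range(10):  # a few toy gradient steps
    predicted = policy(frames)
    # Matching the human's smooth control signal, rather than hand-coded
    # rules, is what yields the more natural-feeling ride.
    loss = nn.functional.mse_loss(predicted, human_steering)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```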
It’s still early. But just as deep learning transformed image search and voice recognition, it is likely to change the course of self-driving cars forever.
Carol Reiley is the cofounder of Drive.ai.