The Latest Driverless Cars Don’t Need a Programmer Either

The technique that helped a computer master the game of Go is about to be tested in real vehicles as a way to cope with complex driving situations.
January 8, 2017

An unusual fleet of self-driving cars will take to the road in coming months. Unlike most automated vehicles, which are programmed to deal with the situations they may encounter, these cars will have taught themselves, in simulation, how to handle tricky scenarios safely.

The cars will learn to navigate busy intersections, crowded highways, and packed rotaries using reinforcement learning, an approach inspired by the way animals learn to associate a reward with the behavior that led to it.

Mobileye, an Israeli company that provides vehicle safety systems to many carmakers, announced at CES in Las Vegas last week that it will test the approach on the road, in collaboration with the German automaker BMW and the chip company Intel, in the second half of this year.

In reinforcement learning, a computer is neither hand-coded nor given specific examples to learn from; instead, it experiments, adjusting its own behavior in favor of the actions that most reliably lead to a certain result. In the case of automated driving, the goal might be entering a rotary or merging into traffic safely and smoothly. The technique has proved an effective way of training computers to do things that are hard to capture in code, such as achieving superhuman skill at Atari video games and the board game Go.
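The loop described above can be sketched as tabular Q-learning on a toy merging problem. Everything here, the states, rewards, and parameters, is a hypothetical illustration of the general technique, not Mobileye's system:

```python
import random

ACTIONS = ["wait", "merge"]

def step(gap, action):
    """Return (reward, done). 'gap' is the size of the opening in traffic."""
    if action == "merge":
        # Merging into a large gap succeeds; a small gap means a collision.
        return (1.0, True) if gap >= 2 else (-10.0, True)
    # Waiting is safe but incurs a small cost for lack of progress.
    return (-0.1, False)

def train(episodes=5000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q-table: expected long-run reward for each (gap, action) pair.
    q = {(g, a): 0.0 for g in range(4) for a in ACTIONS}
    for _ in range(episodes):
        gap = rng.randrange(4)
        done = False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            action = (rng.choice(ACTIONS) if rng.random() < epsilon
                      else max(ACTIONS, key=lambda a: q[(gap, a)]))
            reward, done = step(gap, action)
            next_gap = rng.randrange(4)
            target = reward if done else reward + gamma * max(
                q[(next_gap, a)] for a in ACTIONS)
            # Nudge the estimate toward the observed outcome.
            q[(gap, action)] += alpha * (target - q[(gap, action)])
            gap = next_gap
    return q

q = train()
# The learned policy: merge only when the gap is safe, otherwise wait.
policy = {g: max(ACTIONS, key=lambda a: q[(g, a)]) for g in range(4)}
```

No behavior is hand-coded here: the agent discovers "merge only into large gaps" purely from the rewards its trials produce.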

James Maddox, director of the American Center for Mobility, a nonprofit that works with companies to develop and establish standards for connected and automated technologies, says interacting with human drivers will be a key challenge for self-driving cars. Such systems “need to learn not just from a vehicle’s experience but from other drivers as well,” he says. Mobileye is also developing a platform that would let different carmakers share the data collected by their automated cars. Ready access to that information may prove important for the technology’s progress, Maddox says.

Automated driving technology was the focus of a flurry of announcements and demonstrations at this year’s CES. Toyota showed off a self-driving concept car featuring a virtual assistant. The chipmaker Nvidia presented a powerful new system on a chip that it created for automated driving. The automotive parts maker Delphi demonstrated a self-driving Audi that it developed in collaboration with Mobileye.

Mobileye has been working on its learning system for some time. Speaking in December at an AI conference held in Barcelona, Spain, Shai Shalev-Shwartz, vice president of technology at the company, explained that reinforcement learning offers a way to equip self-driving vehicles with a range of subtler driving skills. He showed a demonstration of one situation that his company is tackling using the technique. In a simulation, at the point where two virtual highways intersect, a handful of cars simultaneously merged in opposite directions.

“We need to balance between defensive and aggressive behavior,” Shalev-Shwartz said. “If we are too defensive, we will not make progress; if we are too aggressive, we might hit other cars. We need to negotiate with other drivers. We cannot [just] follow the rules—we need to know the rules of breaking the rules.”
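The trade-off Shalev-Shwartz describes is typically expressed through the reward function itself. A hypothetical sketch, in which being too aggressive (causing a collision) and too defensive (waiting indefinitely) are both penalized:

```python
def reward(collided, merged, steps_waited,
           collision_penalty=10.0, wait_cost=0.1):
    """Illustrative reward balancing safety against progress.

    A collision is heavily punished; a successful merge earns a bonus
    that shrinks the longer the car hesitated before acting.
    """
    if collided:
        return -collision_penalty
    if merged:
        return 1.0 - wait_cost * steps_waited
    return 0.0
```

Tuning `collision_penalty` against `wait_cost` moves the learned behavior along the defensive-aggressive spectrum: a larger penalty produces a more cautious driver, a larger wait cost a pushier one.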

