
Waymo’s cars drive 10 million miles a day in a perilous virtual world

A simulation lets autonomous cars experience situations that are too dangerous to try in reality.
October 10, 2018

You could argue that Waymo, the self-driving subsidiary of Alphabet, has the safest autonomous cars around. It’s certainly covered the most miles. But in recent years, serious accidents involving early systems from Uber and Tesla have eroded public trust in the nascent technology. To win it back, putting in the miles on real roads just isn’t enough.

So today Waymo announced not only that its vehicles have clocked more than 10 million real-world miles since 2009, but also that its software now drives the same distance every 24 hours inside a sprawling simulated version of the real world, the equivalent of 25,000 cars driving 24/7. In total, Waymo has covered more than 6 billion virtual miles.
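That equivalence is easy to sanity-check. The quick calculation below is a back-of-the-envelope check of ours, not Waymo’s published math, and the average simulated speed of roughly 17 mph is our assumption:

```python
# Sanity check of the "25,000 cars driving 24/7" equivalence.
# The ~17 mph average speed is an assumption, not a Waymo figure.
miles_per_day = 10_000_000
avg_speed_mph = 17
cars = miles_per_day / (24 * avg_speed_mph)
print(f"{cars:,.0f} cars")  # ~24,510 cars driving around the clock
```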

This virtual test track is incredibly important to Waymo’s efforts to demonstrate that its cars are safe, says Dmitri Dolgov, the firm’s CTO. It lets engineers test the latest software updates on a wide variety of new scenarios, including situations that haven’t been seen on real roads. It also makes it possible to test scenarios that would be too risky to set up for real, like other vehicles driving recklessly at high speed. 

“Let’s say you’re testing a scenario where there’s a jaywalker jumping out from a vehicle,” Dolgov says. “At some point it becomes dangerous to test it in the real world. This is where the simulator is incredibly powerful.” 

Unlike human drivers, autonomous cars rely on training data rather than a general understanding of the world, so unfamiliar scenarios can easily confuse them.

But it is not easy to test and verify machine-learning systems, which are complex and can behave in ways that are hard to predict (see “The dark secret at the heart of AI”). A virtual world lets the cars gather the vast amounts of usable training data these systems need.

“The question is whether simulation-based testing truly contains all the difficult corner cases that make driving challenging,” says Ramanarayan Vasudevan, an assistant professor at the University of Michigan who specializes in autonomous-vehicle simulation. 

To explore as many of these rare cases as possible, the Waymo team uses an approach known as “fuzzing,” a term borrowed from computer security. Fuzzing involves running the same simulation over and over while adding random variations each time, to see whether any of these perturbations causes an accident or otherwise breaks the system. Waymo has also developed software that ensures the vehicles don’t stray too far from comfortable behavior in the simulation, by braking too violently, for example.
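Waymo hasn’t published details of its fuzzing tooling, but the basic idea can be sketched in a few lines. Everything in the sketch below is hypothetical: the toy kinematic “simulator,” the scenario fields, and the 4 m/s² comfort threshold are illustrative assumptions, not Waymo’s code or numbers.

```python
import random
from dataclasses import dataclass, replace

@dataclass
class Scenario:
    car_speed_mps: float          # car's speed when the jaywalker appears
    pedestrian_distance_m: float  # how far ahead the jaywalker steps out

MAX_BRAKE_DECEL = 8.0  # m/s^2: rough physical braking limit (assumed)
COMFORT_DECEL = 4.0    # m/s^2: assumed comfort threshold, not Waymo's figure

def simulate(s: Scenario) -> dict:
    """Toy stand-in for the simulator: the constant deceleration needed
    to stop before the pedestrian is v^2 / (2d)."""
    required_decel = s.car_speed_mps ** 2 / (2 * s.pedestrian_distance_m)
    return {
        "collision": required_decel > MAX_BRAKE_DECEL,
        "hard_braking": required_decel > COMFORT_DECEL,
    }

def fuzz(base: Scenario, n: int, jitter: float = 0.25) -> list[Scenario]:
    """Replay the same scenario n times, randomly perturbing it each time."""
    return [
        replace(
            base,
            car_speed_mps=base.car_speed_mps
                * random.uniform(1 - jitter, 1 + jitter),
            pedestrian_distance_m=base.pedestrian_distance_m
                * random.uniform(1 - jitter, 1 + jitter),
        )
        for _ in range(n)
    ]

base = Scenario(car_speed_mps=13.4, pedestrian_distance_m=20.0)  # ~30 mph
outcomes = [simulate(s) for s in fuzz(base, 10_000)]
print(sum(o["collision"] for o in outcomes), "collisions,",
      sum(o["hard_braking"] for o in outcomes), "hard-braking events")
```

Running thousands of jittered variants of a single scenario like this surfaces the rare combinations, here, of speed and distance, where the car would have to brake harder than a passenger would tolerate, or could not stop at all.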

Besides analyzing real and simulated driving data, Waymo tries to trip its cars up by engineering odd driving scenarios. At a test track at Castle Air Force Base, in central California, testers throw all sorts of stuff at the cars to confuse them: everything from people crossing the road in wild Halloween costumes to objects falling from the backs of passing trucks. Engineers have also tried cutting power to the main control system to make sure the fallback system steps in correctly.

Waymo is making progress. In October last year, it became the first company to remove safety drivers from some of its vehicles. Around 400 people in Phoenix, Arizona, have been using these truly autonomous robo-taxis for their daily drives.

However, Phoenix is a fairly straightforward environment for autonomous vehicles. Moving to less temperate and more chaotic places, like downtown Boston in a snowstorm, will be a huge step up for the technology.

“I’d say the Waymo deployment in Phoenix is more like Sputnik rather than full self-driving in Michigan or San Francisco, which I’d argue would be closer to an Apollo mission,” says Vasudevan.

The situation facing Waymo and other self-driving-car companies is, in fact, a neat reminder of the big gap that still exists between real and artificial intelligence. Without many billions more miles of real and virtual testing, or some deeper level of intelligence, self-driving cars will remain liable to trip up when they come across something unexpected. And firms like Waymo cannot afford that kind of uncertainty.
