
Toyota Tests Backseat-Driver Software That Could Take Control in Dangerous Moments

Cameras that watch where people are looking allow cars to judge when the driver is likely to miss a dangerous situation.
March 7, 2017
This modified Lexus is used by Toyota to test autonomous driving software.

Making a turn across oncoming traffic is one of the most dangerous maneuvers drivers undertake every day. Researchers at Toyota think it’s one of the situations in which a software guardian angel built into your car could save lives.

In trials at private testing grounds in the U.S., left turns are among the first scenarios Toyota has used to test a system it has dubbed “Guardian,” which judges whether a human is about to make a dangerous mistake.

Radar and other sensors on the outside of the car monitor what’s happening around the vehicle, while cameras inside track the driver’s head movements and gaze. Software uses the sensor data to estimate when a person needs help spotting or avoiding a hazardous situation.
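The article does not describe how Toyota's software makes that judgment. As a rough illustration only, the sketch below shows one plausible shape for the decision logic described: external sensors flag a hazard, an interior camera estimates where the driver is looking, and the software escalates from monitoring to warning to intervening. All names, thresholds, and structure here are assumptions, not Toyota's implementation.

```python
# Hypothetical sketch of the decision flow described in the article.
# Not Toyota's code; thresholds and field names are illustrative.

from dataclasses import dataclass

@dataclass
class Hazard:
    bearing_deg: float          # direction of the hazard relative to the car's heading
    time_to_collision_s: float  # estimated time until impact

@dataclass
class DriverState:
    gaze_bearing_deg: float     # where the driver is looking, from the interior camera

def driver_has_seen(hazard: Hazard, driver: DriverState, fov_deg: float = 30.0) -> bool:
    """Assume the driver noticed the hazard if their gaze falls within a cone around it."""
    return abs(hazard.bearing_deg - driver.gaze_bearing_deg) <= fov_deg / 2

def guardian_action(hazard: Hazard, driver: DriverState) -> str:
    """Escalate from monitoring to warning to intervening as the hazard becomes urgent."""
    if driver_has_seen(hazard, driver):
        return "monitor"        # driver appears aware; do nothing
    if hazard.time_to_collision_s > 2.0:
        return "warn"           # time remains; alert the driver
    return "intervene"          # imminent and unnoticed; brake or steer

# Example: a car approaching from the left that the driver hasn't glanced at.
print(guardian_action(Hazard(bearing_deg=-45, time_to_collision_s=1.2),
                      DriverState(gaze_bearing_deg=10)))   # -> "intervene"
```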

So far Toyota is only testing the software’s ability to understand the hazards around a car and judge whether the driver has spotted them, but the company eventually plans to make Guardian capable of taking action when the driver doesn’t appear ready to respond.

“Imagine going through an intersection and you’re going to get T-boned—the right thing for the car to do is accelerate you out of it,” says Ryan Eustice, VP of autonomous driving at Toyota Research Institute, which was established in 2015 to work on robotics and automated driving (see “Toyota’s Billion Dollar Bet”). The group first said it would start developing Guardian last year (see “Toyota Joins the Race for Self-Driving Cars with an Invisible Co-Pilot”).

Eustice argues that the Guardian effort could have a widespread impact on public safety earlier than cars that fully remove driving duties from humans. Toyota is working on such technology, along with competitors like Alphabet, Ford, and Uber. But despite high-profile testing programs on public roads, Eustice and his counterparts at other companies say that truly driverless vehicles are still some years from serving the public and will initially be limited to certain routes or locales.

“We see an opportunity to deploy it sooner and more widely,” says Eustice of the backseat-driver approach. That’s because unlike full autonomy, it won’t be reliant on hyper-detailed maps and could be easily packaged into a conventional vehicle sold to consumers, he says. However, he declines to predict how soon Guardian might be ready for commercialization.

Steven Shladover, a researcher at the University of California, Berkeley, says the claim that Guardian could save lives sooner than fully autonomous vehicles makes sense. “If the driver has a 99 percent chance of detecting hazards and the automation system also has a 99 percent chance of detecting hazards, that gives the combination of the driver and system a 99.99 percent chance,” he says. “But this is much simpler and easier than designing a fully automated system that could reach that 99.99 percent level by itself.”
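Shladover’s figure rests on the simplifying assumption that the driver and the automation miss hazards independently, so the chance that both miss the same hazard is the product of their individual miss rates:

\[
P(\text{both miss}) = (1 - 0.99) \times (1 - 0.99) = 0.0001,
\qquad
P(\text{detected}) = 1 - 0.0001 = 99.99\%.
\]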

Getting the relationship between Guardian and humans right will be critical, though. Any mistakes it makes, such as intervening or issuing a warning when the driver has correctly read a situation, would undermine trust in the system and could even lead to new kinds of accidents, says Shladover.

Eustice says Toyota is well aware of those challenges. “There will have to be a lot of studies to understand human acceptance,” he says. One idea he’s considering is enabling the system to talk with a driver about incidents on the road.

“If the car does intervene, it will be important for the car to explain to you why it did that, or to later say, ‘Hey, I didn’t intervene back there, but that was actually a close call,’” he says.
