This Car Knows Your Next Misstep Before You Make It

Researchers trained a computer to recognize the behavior that precedes a particular maneuver.
October 1, 2015

An experimental new dashboard computer can not only keep track of your behavior behind the wheel, but even predict what you’re about to do next.

With the vast majority of road accidents resulting from driver error, and distraction a growing problem thanks to the ubiquity of smartphones, carmakers are increasingly exploring ways to track driver behavior behind the wheel. Volvo, GM, and others are already testing systems that will monitor head and eye positions to pick up on signs of distraction.

A study by researchers at Cornell University and Stanford shows that a more advanced system could be trained to recognize the body language and behavior that precedes a particular maneuver. This could help trigger an early warning system, such as a blind spot alert, much earlier—perhaps thereby helping to prevent serious accidents, according to the academics involved.

“Imagine you are driving on a highway,” says Ashutosh Saxena, director of a project called Robo Brain at Cornell University and Stanford, who oversaw the driving project. “You look to the right for a second, because you are going to make a right turn, and as you are starting to make a right turn, some other driver has pulled into the space that you thought was empty.” A car could then either issue an alert or even prevent you from pulling into the lane.

The system was trained using cutting-edge machine-learning algorithms, and it could predict, with just over 90 percent accuracy, when a driver was about to change lanes in the next few seconds. A lane change was usually signaled by a glance over the shoulder along with telltale head movements and changes in steering, braking, and acceleration. Saxena says the accuracy achieved is almost good enough to be used in a production system.

This video shows the features used by the system to track a driver’s head movement.

The researchers behind the work are exploring different ways for a vehicle to monitor and anticipate driver behavior through a project called Brain4Cars.

The work involved using a machine-learning approach called deep learning to recognize the actions that preceded the lane-change maneuver. The algorithms were trained using data collected as 10 different people drove a total of 1,180 miles around different areas of California. The researchers intend to make the resulting data set freely available so that other academics and auto researchers can make use of it.

Deep learning has proven especially useful in recent years for recognizing complex or subtle patterns in data such as video and audio (see “10 Breakthrough Technologies 2013: Deep Learning”). It is already used to enable vehicle computers to recognize different types of obstacles on the road. In the latest work, the team combined data from a video camera with GPS data and information from a car’s computer systems.
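The fusion step described above can be illustrated with a minimal sketch. The feature names, normalization constants, and signal choices below are invented for illustration; the actual Brain4Cars system feeds much richer inputs (video of the driver's head, GPS, and vehicle-bus data) into deep-learning models rather than a hand-built vector like this.

```python
# Hypothetical sketch: combining per-frame readings from a driver-facing
# camera, GPS, and the car's own computer systems into a single feature
# vector that a maneuver classifier could consume. All names and scaling
# constants here are illustrative assumptions, not the actual pipeline.

def fuse_features(head_yaw_deg, speed_mps, steering_deg, road_type):
    """Fuse camera, GPS, and vehicle-bus signals into one feature vector."""
    return [
        head_yaw_deg / 90.0,   # camera: head rotation, normalized to [0, 1]
        speed_mps / 40.0,      # GPS: speed, normalized by an assumed max
        steering_deg / 180.0,  # vehicle bus: steering-wheel angle
        1.0 if road_type == "highway" else 0.0,  # GPS map context flag
    ]

# A driver glancing 45 degrees to the side at highway speed:
sample = fuse_features(head_yaw_deg=45, speed_mps=30,
                       steering_deg=5, road_type="highway")
print(sample)
```

In a real system, a sequence of such vectors, one per video frame, would be fed to a temporal model so the classifier can pick up on the ordering of cues (glance first, steering change after).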

Many luxury cars now come with sensors that enable safety warnings, as well as automatic braking and steering. Ashesh Jain, a student of Saxena’s and project lead on Brain4Cars, says monitoring activity inside a car, as well as outside of it, could make such safety systems more intelligent. “Suppose the driver is distracted for a second,” he says. “If there’s nothing in front, the car should be smart enough, and not alert the driver. It’s about how you use information from all these sensors.”
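Jain's point reduces to a simple decision rule: suppress the warning unless inattention coincides with an actual external hazard. The sketch below is a hypothetical illustration of that logic; the function name, inputs, and the three-second threshold are assumptions, not part of the Brain4Cars system.

```python
# Hypothetical decision rule illustrating context-aware alerting:
# only warn a distracted driver when an outside-facing sensor actually
# reports an imminent hazard. All names and thresholds are illustrative.

def should_alert(driver_distracted, obstacle_ahead, time_to_collision_s):
    """Return True only when distraction coincides with a real, imminent hazard."""
    if not driver_distracted:
        return False  # attentive driver: no need to nag
    if not obstacle_ahead:
        return False  # nothing in front: stay quiet, as Jain suggests
    return time_to_collision_s < 3.0  # assumed urgency threshold

print(should_alert(True, True, 2.0))   # distracted + imminent hazard
print(should_alert(True, False, 2.0))  # distracted but road is clear
```

The design choice is that the interior monitor acts as a gate on the exterior sensors rather than triggering alerts on its own, which avoids the nuisance warnings that lead drivers to disable safety systems.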

More than 90 percent of U.S. road accidents are the result of some sort of driver error, according to research conducted by the National Highway Traffic Safety Administration.

Paradoxically, monitoring driver behavior could become more important even as cars become more automated. That’s because even if cars drive themselves in some situations, such as on highways or in parking lots, drivers will still need to retake the wheel occasionally, and research has shown that this can take many seconds depending on a driver’s level of distraction (see “Proceed with Caution toward the Self-Driving Car”). Google has gone so far as to sidestep the problem by removing the pedals and steering wheel from some of its prototypes altogether.

Don Norman, an expert on product design who has served as a consultant for numerous carmakers and computer companies, says the Brain4Car work is promising, but adds that it will need to be improved further and tested in the real world. “These are simulation data, run in the laboratory,” Norman says. “The real world is never as nice as the laboratory. Many factors may change the results when applied to real people driving real cars in real traffic.”
