
Someday Your Brain Could Brake for You

For the first time, researchers have used brain signals to predict when a driver is about to slam on the brakes.

Many high-end cars today come equipped with brake assist systems, which help a driver use the brakes correctly depending on particular conditions in an emergency. But what if the car could apply the brakes before the driver even moved?

The braking brain: During this driving simulation, electrodes are measuring what happens to the driver’s brain when the virtual car ahead of him slams on the brakes.

This is what German researchers have successfully simulated, as reported in the Journal of Neural Engineering. With electrodes attached to the scalps and right legs of drivers in a driving simulator, they used electroencephalography (EEG) and electromyography (EMG), respectively, to detect the intent to brake. These electrical signals appeared 130 milliseconds before drivers actually hit the brakes—enough time to reduce the braking distance by nearly four meters.
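The arithmetic behind that figure is simple kinematics: at the simulation's speed of 100 kilometers per hour, a car covers roughly 3.6 meters during a 130-millisecond head start. A minimal sketch, assuming constant speed over that interval:

```python
# Distance covered during the 130 ms head start, assuming constant speed.
speed_kmh = 100          # speed of the simulated car, km/h
lead_time_s = 0.130      # how early the EEG/EMG signals appeared, seconds

speed_ms = speed_kmh * 1000 / 3600      # convert to meters per second (~27.8 m/s)
distance_saved = speed_ms * lead_time_s

print(f"{distance_saved:.2f} m")        # ~3.61 m, i.e. "nearly four meters"
```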

Seated facing three monitors in a driving simulator, each subject was told to drive about 18 meters behind a computer-driven virtual car traveling at about 100 kilometers per hour (about 60 mph). The simulation also included oncoming traffic and winding roads. When the car ahead suddenly flashed brake lights, the human drivers also braked. With the resulting EEG and EMG data, the researchers were able to identify signals that occurred consistently during emergency brake response situations.

“None of these [signals] are specific to braking,” says Stefan Haufe, a researcher in the Machine Learning Group at the Technical University of Berlin and lead author of the study. “However, we show that the co-occurrence of these brain potentials is specific to sudden emergency situations, such as pre-crash situations.” So while false positives from the signal are possible, the combination of EEG and EMG data makes a false positive much less likely.
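The intuition for why fusing the two measurements helps can be illustrated with a toy simulation (the specific detection rates below are hypothetical, not from the study): if the EEG and EMG detectors each false-alarm independently on some fraction of non-braking moments, requiring both to fire at once multiplies those rates together.

```python
import random

random.seed(0)

# Hypothetical per-detector false-positive rates on non-braking time windows.
p_eeg, p_emg = 0.10, 0.10
trials = 100_000

# Count windows where each detector alone fires, and where both fire together.
eeg_fp = emg_fp = both_fp = 0
for _ in range(trials):
    eeg = random.random() < p_eeg
    emg = random.random() < p_emg
    eeg_fp += eeg
    emg_fp += emg
    both_fp += eeg and emg

print(eeg_fp / trials, emg_fp / trials, both_fp / trials)
# Requiring co-occurrence drives the false-alarm rate toward p_eeg * p_emg (~0.01),
# an order of magnitude below either detector alone.
```

This independence assumption is the idealized case; correlated noise between the two signals would shrink the benefit, but the co-occurrence requirement still filters out alarms that only one modality produces.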

While this kind of brain and muscle measurement works in lab conditions, the next step—real-world application—will likely be technically much more difficult. The first thing Haufe and his team will investigate is whether it's possible to accurately gather EEG and EMG measurements under real-world conditions. In the lab, participants were asked not to move while attached to the wires, but real-world drivers move around however they please.

“The current challenge is to determine how to make use of the important, but still small and unreliable, information that we can gather from the brain on the intent to brake,” says Gerwin Schalk, a brain-computer interface researcher at the New York Department of Health’s Wadsworth Center.

Although research into mind-reading-assisted braking systems will continue, tests involving real vehicles are likely many years away. The research may never lead to a fully automated braking system, but it could ultimately result in a system that takes brain data into account when implementing other assisted-braking measures.

Whether drivers would feel comfortable handing over any braking responsibility to a computer hooked up to their head is another question. “In a potential commercial application, it of course would have to be assessed whether customers really want that,” adds Haufe.
