Someday Your Brain Could Brake for You
Many high-end cars today come equipped with brake assist systems, which help a driver use the brakes correctly depending on particular conditions in an emergency. But what if the car could apply the brakes before the driver even moved?
This is what German researchers have successfully simulated, as reported in the Journal of Neural Engineering. With electrodes attached to the scalps and right legs of drivers in a driving simulator, they used electroencephalography (EEG) and electromyography (EMG), respectively, to detect the intent to brake. These electrical signals appeared 130 milliseconds before drivers actually hit the brakes—enough time to reduce the braking distance by nearly four meters.
Seated facing three monitors in a driving simulator, each subject was told to drive about 18 meters behind a computer-driven virtual car traveling at about 100 kilometers per hour (about 60 mph). The simulation also included oncoming traffic and winding roads. When the car ahead suddenly flashed brake lights, the human drivers also braked. With the resulting EEG and EMG data, the researchers were able to identify signals that occurred consistently during emergency brake response situations.
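The "nearly four meters" figure follows directly from the numbers above. A quick back-of-the-envelope check, assuming the simple constant-speed approximation (distance saved equals speed times the 130-millisecond head start):

```python
# Distance covered during the 130 ms between the detected intent to brake
# and the actual brake press, at the study's simulated speed of 100 km/h.
speed_kmh = 100                       # speed of the virtual car, from the study
speed_ms = speed_kmh * 1000 / 3600    # convert to meters per second (~27.8 m/s)
lead_time_s = 0.130                   # EEG/EMG signals precede braking by 130 ms

distance_saved_m = speed_ms * lead_time_s
print(f"{distance_saved_m:.1f} m")    # ~3.6 m, i.e. "nearly four meters"
```

This ignores deceleration during the reaction window, but at these timescales the car is still traveling at essentially full speed, so the approximation holds.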
“None of these [signals] are specific to braking,” says Stefan Haufe, a researcher in the Machine Learning Group at the Technical University of Berlin and lead author of the study. “However, we show that the co-occurrence of these brain potentials is specific to sudden emergency situations, such as pre-crash situations.” So while false positives from the signal are possible, the combination of EEG and EMG data makes a false positive much less likely.
While this kind of brain and muscle measurement works under lab conditions, the next step, real-world application, will likely be technically much more difficult. The first thing Haufe and his team will investigate is whether it's possible to gather accurate EEG and EMG measurements under real-world conditions. In the lab, participants were asked not to move while wired up, but real-world drivers move around however they please.
“The current challenge is to determine how to make use of the important, but still small and unreliable, information that we can gather from the brain on the intent to brake,” says Gerwin Schalk, a brain-computer interface researcher at the New York Department of Health’s Wadsworth Center.
Although research into mind-reading-assisted braking systems will continue, tests involving real vehicles are likely many years away. The research may never lead to a fully automated braking system, but it could ultimately result in a system that takes brain data into account when implementing other assisted-braking measures.
Whether drivers would feel comfortable handing over any braking responsibility to a computer hooked up to their head is another question. “In a potential commercial application, it of course would have to be assessed whether customers really want that,” adds Haufe.