
Give Your Dashboard the Finger

An in-car interface helps drivers keep their hands on the wheel.

An experimental gestural interface for cars lets drivers control any part of their dashboard without taking their hands off the wheel.

Crank it up: A new gesture-recognition system would let drivers control a car’s stereo and other systems with an index finger, as shown here (in a car without the system).

Christian Müller, a researcher at the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken, who co-developed the new system, says the idea is to enable drivers to adjust everything from the volume of the car stereo to the climate-control settings while keeping their hands on the wheel and eyes on the road.

The prototype interface uses several sensors to detect the movement of a driver’s right index finger as it disrupts an electric field. It is based on the same principle as the theremin, a musical instrument that is played without being touched. Electromagnetic sensors in the dashboard detect finger movements, provided the driver is holding the wheel in the recommended ten-to-two position and driving straight. By recognizing the different shapes the driver’s finger traces in the air, the system can interpret a wide range of commands, says Müller.
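The shape-matching step can be illustrated with a simple template-based classifier. The sketch below is not the Geremin implementation; it assumes the sensors yield a 2-D trace of finger positions and compares it against stored gesture templates after resampling and scale normalization, in the spirit of classic stroke recognizers such as the $1 recognizer. The template names and sample traces are hypothetical.

```python
import math

def resample(points, n=32):
    """Resample a trace to n evenly spaced points along its path length."""
    total = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    step = total / (n - 1)
    pts = list(points)
    out = [pts[0]]
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step:
            # Interpolate a new point exactly one step along the path.
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def normalize(points):
    """Translate to the centroid and scale to a unit bounding box."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    shifted = [(x - cx, y - cy) for x, y in points]
    span = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
    return [(x / span, y / span) for x, y in shifted]

def classify(trace, templates):
    """Return the template whose resampled shape is closest to the trace."""
    probe = normalize(resample(trace))
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        ref = normalize(resample(tmpl))
        d = sum(math.dist(a, b) for a, b in zip(probe, ref)) / len(probe)
        if d < best_d:
            best, best_d = name, d
    return best

# Hypothetical templates: a rightward swipe and an upward swipe.
templates = {
    "right": [(0, 0), (1, 0)],
    "up":    [(0, 0), (0, 1)],
}
print(classify([(0, 0), (0.5, 0.05), (1.0, -0.02)], templates))  # prints "right"
```

A production system would add more templates (circle, triangle, square), filter sensor noise, and reject traces whose best match is still too far away, but the core idea of matching a drawn shape against stored exemplars is the same.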

Müller tested the system in a car simulator with DFKI colleague Christoph Endres and Tim Schwartz of Saarland University. “In our prototype we glued these antennae on the dashboard right behind the steering wheel,” he says.

Six people were asked to try the system, dubbed Geremin. It distinguished 10 different gestures with 86 percent accuracy, including moving the finger up or down, left or right, and tracing out circles, triangles, and squares. The work will be presented next week at the International Conference on Intelligent User Interfaces in Palo Alto, California.

Some cars already have buttons and controls built into the steering wheel or attached to the steering column, says Müller, but buttons are more limiting than gestures. “While you can certainly have designated buttons for a few selected functions or applications on the steering wheel, the number of gestures can be extended,” he says.

The system should be much cheaper than installing cameras to monitor drivers’ movements, as some car manufacturers are now doing. The cost of each sensor is about 50 cents.

“It’s an interesting idea. It could be useful,” says Paul Green, a research professor in the Driver Interface Group at the University of Michigan Transportation Research Institute. But while it may help keep a driver’s hands on the wheel, recognition errors could prove to be a different kind of distraction, he says. “Also, if each manufacturer has a different set of gestures, then you have a real problem,” he says.

The German researchers hope to extend the gesture set significantly. “We will combine this with speech recognition in order to allow people to dictate text messages in the car,” says Müller.

Nonetheless, there are skeptics. Andrew Howard, head of road safety at the U.K.’s Automobile Association, says the prospect of enabling drivers to text is a “frightening” one. “We shouldn’t be encouraging people to do anything other than driving while driving,” he says.

Dictating texts is “not something we would want to encourage,” says Green, adding that “I think we’re going to find the throughput would be faster using speech-to-text.”
