AT&T Reinvents the Steering Wheel

Vibrating wheel tells you when to turn—and is less distracting than visual and auditory cues, researchers say.
March 22, 2012

Distracted driving kills an estimated 3,000 people yearly in the United States, triggering calls for bans on one of the causes, mobile phone use in vehicles. In response, the wireless industry is ramping up its anti-distraction efforts. Now, AT&T Labs is contributing with a vibrating steering wheel that promises to deliver navigation information to drivers more safely than on-screen instructions or turn-by-turn GPS commands.

In the prototype, a clockwise pattern of vibrations on the steering wheel means “turn right”; counterclockwise means “turn left.” The wheel’s 20 actuators can fire off in any pattern. And while the initial focus has been on improving delivery of GPS navigation instructions, other applications are under development, such as notifying drivers if cars are in their blind spots. The technology underlying these tactile cues is known as haptics.
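The directional cue described above can be sketched in a few lines. This is a hypothetical illustration, not AT&T's implementation: the actuator indexing, the function name `firing_sequence`, and the idea of representing a cue as an ordered list of pulses are all assumptions layered on the article's description of 20 actuators swept clockwise for "right" and counterclockwise for "left".

```python
# Hypothetical sketch of the haptic cue described in the article:
# the prototype wheel has 20 rim actuators, and a clockwise firing
# sweep signals "turn right" while a counterclockwise sweep signals
# "turn left". Indices and ordering here are illustrative assumptions.

NUM_ACTUATORS = 20


def firing_sequence(direction: str) -> list[int]:
    """Return the order in which actuator indices should pulse.

    Indices run 0..19 around the wheel rim; "right" sweeps clockwise
    (ascending indices), "left" sweeps counterclockwise (descending).
    """
    if direction == "right":
        return list(range(NUM_ACTUATORS))               # 0, 1, ..., 19
    if direction == "left":
        return list(range(NUM_ACTUATORS - 1, -1, -1))   # 19, 18, ..., 0
    raise ValueError(f"unknown direction: {direction!r}")
```

A real controller would pulse each actuator in this order with a short delay between them; because the 20 actuators "can fire off in any pattern," other sequences (a blind-spot alert, for instance) would just be different orderings fed to the same hardware.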

A study of the gadget in driving simulators, by AT&T Labs researchers and collaborators at Carnegie Mellon University, found that it provided clear benefits: participants’ eyes stayed on the road longer. When younger drivers—with an average age of 25—used the haptic steering wheel along with the usual visual and auditory methods of receiving navigation instructions, their inattentiveness (defined as the proportion of time their eyes were off the road) dropped 3.1 percent.

That study did not find any benefit for older drivers, but a different one did. When haptics were added to auditory-only instructions, the inattentiveness of older drivers (above age 65) dropped 4 percent.

Overall, “by adding the haptic feedback we can lead to more attentive driving,” says SeungJun Kim, a computer scientist at Carnegie Mellon who participated in the study. The paper has not yet been published, but it will be presented at a conference this June.

An earlier study on car haptics that examined whether drivers accurately followed instructions—rather than whether they were distracted—also showed a benefit: haptics-equipped drivers made fewer turn errors. The work also builds on other research showing that listening to voices—whether from a dashboard GPS or a backseat driver—exacts a cognitive burden that detracts from driving attentiveness. (And while human talkers intuitively know it’s best to shut up if the listener is in a tough driving situation, machines don’t.)

As cars become more computerized, and smart phone use more pervasive, there has been a big push to develop technologies that address distracted driving. The National Transportation Safety Board last year called for the first-ever nationwide ban on driver use of portable electronic devices while driving. Research has shown that talking on a phone while driving increases the risk of a crash by a factor of four, and that text-messaging multiplies that risk by 23. And partial or complete bans—on text-messaging in particular—are already in place in many countries and U.S. states.

Many other groups are working on technological approaches to reducing distractions. Some tools can block calls or text messages when the phone senses that it’s in a moving car. Others can sense if they are being used by a driver, as opposed to a passenger, within a vehicle. These approaches leverage the increasing connectedness of automobiles.

Kevin Li, a researcher with AT&T’s user interface group in Florham Park, New Jersey, cautions that while the lab is working with automakers on the technology, it will be years before the gadget makes it into real cars. Solutions need to be usable, intuitive, and accommodating of different hand placements. “An underlying thread of this research is, can we develop great haptic and tactile cues that users ‘get’ right out of the box?” 
