
Using Ultrasound to Feel Virtual Objects

A startup uses sound waves to create touch sensations out of thin air.
April 25, 2014

A startup called Ultrahaptics aims to make gesture control and virtual reality more engaging by using ultrasound waves to let you feel like you’re touching virtual objects and surfaces with your bare hands.

Tom Carter, cofounder of Ultrahaptics and a computer science graduate student at the University of Bristol, says the startup’s technology could improve upon touch-free interfaces, such as those enabled by Microsoft’s Kinect or Leap Motion’s device, by reflecting air pressure waves off the hand in a way that can create different sensations for each fingertip. “You actually feel like you’re interacting with a thing and getting immediate tactile feedback,” he says.

The Ultrahaptics technology is based on research conducted by Carter and other researchers at the University of Bristol.

If the resolution can be improved, applications could include interacting with moving objects in virtual reality games or improving navigation for the visually impaired by projecting the sensation of Braille letters onto fingers in midair. Carter hopes the first products that include the technology will be available in the next two years.

For now, it’s still in the experimentation stage. A paper about the Ultrahaptics technology is being presented in Toronto at the ACM CHI Conference on Human Factors in Computing Systems, which begins on Saturday. Carter worked with Ultrahaptics cofounder and University of Bristol human-computer interaction professor Sriram Subramanian, and with researchers at the University of Glasgow. The paper explores how well people can sense ultrasonic haptic feedback on different parts of their hands, and as a continuous motion across the hand.

For the study, participants placed a hand, palm facing up, on a table below an array of 64 ultrasound transducers set in an eight-by-eight grid. Each transducer was connected to a circuit board, which was itself connected to a computer. The transducers emitted air pressure waves that bounced off the participants’ hands, giving the sensation of a breeze.
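The article doesn’t spell out how the array focuses those waves on a point, but the underlying idea, standard in phased-array acoustics, is to drive each transducer with a phase offset proportional to its distance from the desired focal point, so that all 64 waves arrive there in phase and reinforce one another. The Python sketch below illustrates that calculation; the 40 kHz carrier frequency and 10 mm transducer spacing are illustrative assumptions, not figures from the paper.

    import numpy as np

    SPEED_OF_SOUND = 343.0        # m/s in air at room temperature
    FREQUENCY = 40_000.0          # Hz; assumed carrier, common in ultrasonic haptics
    WAVELENGTH = SPEED_OF_SOUND / FREQUENCY   # roughly 8.6 mm

    # Eight-by-eight grid of transducers in the z = 0 plane; 10 mm pitch is assumed.
    PITCH = 0.010
    coords = (np.arange(8) - 3.5) * PITCH
    gx, gy = np.meshgrid(coords, coords)
    transducers = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(64)])

    def focus_phases(focal_point):
        """Return the phase offset (radians) for each transducer so that
        every wave arrives at focal_point in phase, creating a localized
        pressure maximum there."""
        distances = np.linalg.norm(transducers - np.asarray(focal_point), axis=1)
        return (2.0 * np.pi * distances / WAVELENGTH) % (2.0 * np.pi)

    # Focus 20 cm above the center of the array, e.g. on an upturned palm.
    phases = focus_phases([0.0, 0.0, 0.20])

One detail from the broader mid-air haptics literature, rather than from this article: skin cannot sense a 40 kHz carrier directly, so published systems additionally modulate its amplitude at a much lower, tactile frequency, which is what produces the breeze-like sensation participants describe.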

In one experiment, the ultrasound array focused feedback on 25 different parts of the hand to test whether participants could pinpoint precisely where they felt the waves. In the second, the array emitted waves in a way meant to feel like a line of continuous motion in a specific direction across the hand. Eventually, Carter says, this method will be used to form shapes rather than just lines.
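Producing that line of motion then amounts to re-solving the focusing problem for a sequence of points. Here is a minimal sketch of stepping a focal point across the palm; the path, height, and 100 Hz update rate are assumptions for illustration, not parameters from the study.

    import numpy as np

    def line_sweep(start, end, duration_s, update_hz=100.0):
        """Yield focal-point positions stepping in a straight line from
        start to end; the array re-focuses on each one in turn."""
        steps = max(2, int(duration_s * update_hz))
        for t in np.linspace(0.0, 1.0, steps):
            yield (1.0 - t) * np.asarray(start) + t * np.asarray(end)

    # Sweep 6 cm across the palm, 20 cm above the array, in half a second.
    for point in line_sweep([-0.03, 0.0, 0.20], [0.03, 0.0, 0.20], 0.5):
        # In a real system: compute per-transducer phases for `point`
        # (as in the earlier sketch) and push them to the array hardware.
        pass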

The researchers found that some parts of the hand detect ultrasonic feedback better than others: sensations were felt more readily moving across the palm, from thumb to pinky, than vertically up the center of the palm. They also determined that a sense of motion came through best when several waves were emitted for longer stretches of time at different points. Additionally, the research indicated that the smallest virtual shape people could reliably feel was about two centimeters square.

Carter says there are many more perception studies to conduct. Ultrahaptics is also concentrating on refining and miniaturizing the technology, and building it into prototypes that potential customers will be able to try. 

Chris Harrison, an assistant professor of human-computer interaction at Carnegie Mellon University who has tried a demonstration of the technology, says using it would make sense in games, especially for adding simple sensations like wind on your face or virtual bullets hitting your chest. He suspects getting the technology to work across a room will be tricky, though, given that it’s essentially vibrating air. 

“It’s definitely weird, but it has a lot of potential,” he says.  
