
A Techno-Sensory Revolution is Coming, According to IBM

The five human senses? We’ll have technologies that stimulate each of them in new ways.
December 18, 2012

In case you hadn’t noticed, it is list season. Gift lists and card lists, New Year’s resolution lists and, of course, Best Of 2012 lists.

IBM has its own twist on this tradition. It has published a list of tech advances that its researchers think will change our lives in the next five years. In a new “5 in 5” report published this week, they describe ways in which technology will be able to enhance, augment, or mimic (to varying degrees) our senses of sight, sound, touch, smell, and taste. IBM is building a lot of that tech in-house, but others are developing their own technology that could contribute to that change.

Take sight. The promise of Google Glass and lookalikes such as the competing device from Vuzix, or even Google Goggles, indicates how computers are learning to “see” better. IBM’s vision report says image processing will get faster and better at learning to recognize scenes of human activity, like a beach with a volleyball game or a surfing contest.

When it comes to hearing, Apple’s Siri and her legion of competitors are the best examples of everyday tech trying to sound out our needs, with varying degrees of success. Perhaps the thing folks remember most about Google Now is how its voice search just magically works. But forget adults: IBM researchers are working on a way to tell what a baby is feeling from the sounds it makes, even patenting a way to track that data.

But perhaps most interesting of the lot is the section on touch-based technology. Screens in the next five years will take on a whole new range of abilities, IBM predicts:

We at IBM Research think that in the next five years that our mobile devices will bring together virtual and real world experiences to not just shop, but feel the surface of produce, and get feedback on data such as freshness or quality.

By matching variable-frequency patterns of vibration to physical objects, so that when a shopper touches what the webpage says is a silk shirt, the screen will emit vibrations that match what our skin mentally translates to the feel of silk.

They have a point—Disney’s been working on a project called TeslaTouch for some time now. They’ve built a screen that tickles the nerves in your fingers as you drag a digit across its cold surface: varying electric field patterns on the touch panel produce the sensation of texture. Finger painting on a screen could have all the sensation of finger painting on a canvas—with none of the mess. When buying a dress or shirt online, you could paw at the virtual fabric before committing to the purchase.
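For a rough sense of why a varying voltage changes what a fingertip feels, here is a back-of-the-envelope sketch in Python. It uses the textbook parallel-plate approximation for the electrostatic pull between finger and insulated electrode, and treats the perceived texture as modulated sliding friction. The numbers and helper names are illustrative assumptions, not Disney’s actual TeslaTouch parameters.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_pull(voltage, area=1e-4, eps_r=3.0, gap=50e-6):
    """Parallel-plate estimate (in newtons) of the attraction between a
    fingertip and an insulated electrode at the given drive voltage.
    area: contact area (m^2); eps_r: relative permittivity of the barrier;
    gap: effective dielectric thickness, insulator plus dry outer skin (m).
    All values are illustrative, not measured."""
    return eps_r * EPS0 * area * voltage**2 / (2 * gap**2)

def felt_friction(voltage, finger_force=0.5, mu=0.6):
    """Lateral friction (N) on a sliding finger: the electrostatic pull
    adds to the finger's own pressing force, so modulating the voltage
    modulates the friction the skin feels."""
    return mu * (finger_force + electrostatic_pull(voltage))

# Drive the panel with a 120 Hz, 120 V sine and watch the friction swing.
t = np.linspace(0, 0.05, 1000)              # a 50 ms window
drive = 120 * np.sin(2 * np.pi * 120 * t)   # illustrative drive waveform
friction = felt_friction(drive)
print(f"friction swings between {friction.min():.3f} N and {friction.max():.3f} N")
```

Because the pull scales with the square of the voltage, the finger feels a ripple at twice the drive frequency; sweeping that frequency and amplitude is what lets one flat panel impersonate different surfaces.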

Projects with similar goals are underway at the Linear Actuators lab at the Ecole Polytechnique Federale de Lausanne in Switzerland. In a video, graduate student Christophe Winter explains that you can change a person’s perception of the material they are touching—that is, the friction they feel—by changing how the surface vibrates.
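Putting the shopping scenario from IBM’s quote in software terms, one could imagine texture “recipes” that an app looks up and hands to the haptics driver. The sketch below is purely hypothetical: the texture names, amplitudes, and frequencies are invented for illustration and don’t reflect IBM’s or EPFL’s actual parameters or any real haptics API.

```python
import math

# Hypothetical texture recipes: higher frequency with lower amplitude is
# meant to read as smoother (silk), lower frequency with higher amplitude
# as coarser (denim). The numbers are invented for illustration only.
TEXTURE_PROFILES = {
    "silk":  {"freq_hz": 240.0, "amplitude": 0.15},
    "linen": {"freq_hz": 120.0, "amplitude": 0.45},
    "denim": {"freq_hz": 60.0,  "amplitude": 0.80},
}

def vibration_samples(texture, duration_s=0.2, sample_rate=8000):
    """Generate a drive waveform (values in -1..1) for the named texture,
    of the sort a vibration actuator or electrovibration panel could play
    back while the shopper's finger rests on the product image."""
    profile = TEXTURE_PROFILES[texture]
    n = int(duration_s * sample_rate)
    return [
        profile["amplitude"] * math.sin(2 * math.pi * profile["freq_hz"] * i / sample_rate)
        for i in range(n)
    ]

# The shopper touches what the webpage says is a silk shirt:
samples = vibration_samples("silk")
print(f"silk: {len(samples)} samples, peak amplitude {max(samples):.2f}")
```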

But though our tactile feedback from screens will be enriched, I think there are also ways we’ll be touching our devices less on the whole. Consider what the Kinect did for gaming, and the range of other uses the motion-sensing technology is being tested and developed for. As MIT Technology Review wrote earlier this year, there’s a fair chance we’ll be touching screens less, and gesturing at them more.
