A New Computer Screen Reaches Out to Touch You

An experimental new touch screen, the Obake, has a stretchable surface that reacts to user interaction in new ways.

An inexpensive new prototype device called the Obake adds a new dimension to touch screen technology. The surface of the device, developed by Dhairya Dand and Rob Hemsley of the MIT Media Lab, can react to how it’s being used by reaching out toward the user. It was relatively simple to make: the researchers used an open source software framework to enable the screen to react; the hardware costs between $50 and $60, Dand says.

The Obake, inspired by nature, can make a mountain and change the course of a river in this hands-on demonstration.

Six specialized motors located below a silicone liquid-rubber screen control the screen's movement. Push, pry, prod, pinch, or poke it: the surface is malleable enough to respond.

A small microphone below the screen picks up the noise of vibrations when a finger touches the surface. Cameras mounted above detect the movement of a user’s hand; an overhead projector is used to display images.

“There are many ways to detect,” says Dand. A matrix of bend sensors embedded in the material has been shown to achieve the same effect as depth cameras. “And you don’t need the projector on top. You can have it from the side or from the bottom,” he says.

Dand envisions many potential applications. The "display opens up all sorts of new options," he adds. "Like for Excel tables with rows and columns. What if it could pop out a bar chart and change the data?"

The name Obake comes from a shape-shifting mythological Japanese creature. Dand and Hemsley say they were heavily influenced by nature in their design, specifically in the malleability and feel of water. “Things that you can actually feel and touch, they have their own inherent beauty. That’s how nature works,” Dand says.

But both recognize that people are accustomed to what they have now. For today’s touchscreens, the world is still flat. “We need a paradigm shift in user interface,” says Dand.
