
inFORM: An Interactive Dynamic Shape Display that Physically Renders 3-D Content

An interactive dynamic display table.
December 30, 2013

While it’s debatable whether we’ll ever be able to teleport objects or people around the world at the speed of light, the inFORM system from the Tangible Media Group at MIT might be the seeds of the next best thing. inFORM moves physical “pixels” on a table surface in real time, in accordance with data from a Kinect motion-sensing input device. The system lets people manipulate objects from a distance and physically interact with data or temporary objects, and it could open the door to a wide variety of gaming, medical, and other interactive scenarios where participants are in remote locations.
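To make that data flow concrete, here is a minimal sketch, in Python with hypothetical names (the grid size is from the article; the pin travel, depth range, and function names are assumptions, not the group's actual software), of how a Kinect depth frame might be downsampled to drive a 30×30 grid of actuated pins:

```python
import numpy as np

GRID = 30            # inFORM's pin grid is 30x30 (900 pins)
PIN_TRAVEL_MM = 100  # assumed pin travel range; illustrative only

def depth_to_pin_heights(depth_frame: np.ndarray,
                         near_mm: float = 500.0,
                         far_mm: float = 1500.0) -> np.ndarray:
    """Downsample a Kinect depth image (in mm) to target pin heights (in mm).

    Each pin gets the mean depth of the image block it covers; nearer
    surfaces raise the pin higher, producing a physical relief of the scene.
    """
    h, w = depth_frame.shape
    # Trim to a multiple of GRID, then average each block down to one pin.
    blocks = depth_frame[:h - h % GRID, :w - w % GRID].reshape(
        GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
    # Map depth to height: closer objects (smaller depth) -> taller pins.
    norm = np.clip((far_mm - blocks) / (far_mm - near_mm), 0.0, 1.0)
    return norm * PIN_TRAVEL_MM

# Example: a fake 480x640 depth frame standing in for live Kinect data.
frame = np.full((480, 640), 1200.0)
frame[200:280, 280:360] = 700.0        # a "hand" held closer to the sensor
heights = depth_to_pin_heights(frame)  # 30x30 array of pin heights in mm
```

In a real pipeline this would run once per Kinect frame, with the resulting height array streamed to the pin actuators; the block-averaging step is also where the resolution limit discussed below comes from, since 900 pins can only render a 30×30 relief no matter how detailed the depth image is.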

One can only imagine the possibilities as the resolution of such a device increases. As mind-blowing as the video above is, the inFORM demonstrated here has a relatively low resolution of 30×30, for 900 moving “pixels.” As technology allows, what happens when the resolution doubles or quadruples and 3-D content begins to appear far more lifelike?

inFORM is currently under development at MIT’s Tangible Media Group and was designed by Daniel Leithinger, Sean Follmer, and Hiroshi Ishii, with help from numerous other software and hardware engineers. You can learn more on the Tangible Media Group’s website.
