
Dancing Chairs and 3-D Puppets Will Make Kids Love the Kinect

Playtime is about to get an upgrade.
October 11, 2012

Microsoft’s Kinect is well loved by the gaming community and is being taken apart and put back together in labs all around the world. But two Kinect hacks I saw this week tell me the device will become a favorite of a much younger crowd.

Take KinEtre, a 3-D animation tool that lets you control virtual objects on a screen with your body. Remember all that moving furniture in Beauty and the Beast? KinEtre lets you create your own virtual world of hopping chairs and dancing brooms, Microsoft researcher Jiawen Chen explained, using the Kinect’s depth-sensing camera and the KinectFusion real-time 3-D reconstruction software.

Say you want to play a chair, as the researchers do in this demo video. The first step is to scan the candidate object, which KinEtre represents as a 3-D mesh with an embedded skeleton. The Kinect, meanwhile, already knows how to track your body’s movement. As the video shows, a single spoken command (“Possess”) superimposes your tracked skeleton onto the back legs of the virtual chair, animating the chair according to the movement it detects as you jump, bend, and kick in front of the camera.
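
To make the idea concrete, here is a minimal sketch, in Python with NumPy, of how a tracked skeleton can drive a scanned mesh: each mesh vertex is bound to its nearest joint and then follows that joint’s motion frame by frame. The function names and the nearest-joint binding scheme are illustrative assumptions, not KinEtre’s actual deformation algorithm.

    import numpy as np

    # Illustrative sketch only: bind each scanned-mesh vertex to its nearest
    # tracked joint, then move the vertex with that joint every frame.
    # Function names and the binding scheme are hypothetical, not KinEtre's.

    def bind_vertices_to_joints(vertices, rest_joints):
        """Assign every mesh vertex to the index of its nearest joint at rest."""
        dists = np.linalg.norm(vertices[:, None, :] - rest_joints[None, :, :], axis=2)
        return np.argmin(dists, axis=1)

    def animate_mesh(vertices, bindings, rest_joints, live_joints):
        """Translate each vertex by the motion of its controlling joint."""
        offsets = live_joints - rest_joints        # per-joint motion this frame
        return vertices + offsets[bindings]        # apply each joint's offset to its vertices

    # Toy usage: four corners of a "chair back" possessed by two tracked joints.
    chair = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
    rest = np.array([[0.0, 0.5, 0.0], [1.0, 0.5, 0.0]])          # joint positions at rest
    live = rest + np.array([[0.0, 0.2, 0.0], [0.0, -0.1, 0.0]])  # joints after the user moves
    bindings = bind_vertices_to_joints(chair, rest)
    print(animate_mesh(chair, bindings, rest, live))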

The delightful part is that KinEtre recognizes animation cues from more than one person at a time. Chen joined a colleague on screen and the two of them animated a virtual horse, having “possessed” two legs each. Chen said the most obvious application for KinEtre would be in gaming, creating avatars that are truer to reality. It would also be an easy way to introduce computer graphics into home movies, and a quick way to throw together a 3-D animation using a cast of real people, say at a family Thanksgiving gathering.

The 3-D puppet project out of the Visualization Labs at UC Berkeley is also likely to win big points with kids, though parental guidance is advised. Like KinEtre’s creators, the Berkeley team specializes in 3-D animation for folks who aren’t CG whizzes. A puppet is first scanned into the system (captured using the ReconstructMe software). From then on, the puppet is recognized whenever it wanders into the camera’s field of view, and the software knows to ignore the puppeteer’s hand. The puppets are identified, their orientation and position detected, and their images rendered in a virtual storyland, all in real time.
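
As a rough illustration of the pose-detection step, here is one simple way to recover a rigid object’s position and orientation from its segmented depth points, using the point cloud’s centroid and principal axes. This is a hedged stand-in written in Python with NumPy; the function name and the principal-axes approach are assumptions for illustration, not the Berkeley team’s actual method.

    import numpy as np

    # Illustrative only: estimate a puppet's pose from its segmented 3-D points
    # via the centroid (position) and principal axes (orientation).
    # This is an assumed stand-in, not the Berkeley project's actual algorithm.

    def estimate_pose(points):
        """Return (centroid, axes) for a segmented point cloud of shape (N, 3)."""
        centroid = points.mean(axis=0)
        centered = points - centroid
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centroid, vt.T   # columns of vt.T are the cloud's principal axes

    # Toy usage: an elongated point cloud standing in for a puppet seen in depth.
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(500, 3)) * [0.30, 0.05, 0.05] + [0.1, 0.0, 1.2]
    position, axes = estimate_pose(cloud)
    print(position)   # where the puppet sits in camera space
    print(axes)       # a rough orientation frame for rendering it in the scene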

A few perks: the system lets you reposition the virtual “camera” viewing the scene you have created, and lets you turn the lighting up and down. For anyone interested in a hands-on experience, the team recently made the source code for its setup available for free.
