Second Skin Captures Motion

A new system could make special effects more affordable.

Researchers at MIT have developed a system that may provide a cheaper, more efficient way to track motion. The system, called Second Skin, could offer a low-cost alternative for creating movie special effects. The researchers also hope it will help people monitor their own movements, whether to practice physical therapy or to perfect their tai chi moves.

Coded light: This projector sends out pulses of light thousands of times a second. These light pulses are picked up by photosensors worn by the subject being tracked.

Traditional tracking systems involve high-speed cameras placed around a specially lit set. The subject being tracked wears special markers that reflect light emitted by the cameras. The cameras capture and record the reflected light many times a second to track the subject’s motion. When the system is used to make movies, software programs and a team of animators convert the data into an animated character. These motion-tracking systems can cost hundreds of thousands of dollars. Alternative systems have their own drawbacks: those based on magnets need even more extensive setup and calibration, those based on accelerometers are error prone, and exoskeletons are bulky and inflexible.

In contrast to traditional optical tracking systems, Second Skin doesn’t rely on cameras at all. Instead, the system uses inexpensive projectors that can be mounted on ceilings or outdoors. As a result, the system can be used indoors and out without special lighting, and it costs only a few thousand dollars, says Ramesh Raskar, an associate professor at MIT’s Media Lab, who leads the Second Skin research with graduate student Dennis Miaw.

“I think it’s a breakthrough technology,” says Chris Bregler, an associate professor of computer science at New York University, who works on computer vision systems for motion tracking and was not involved in the Second Skin research. “It lets you do motion capture in lots of scenarios where a lot of other people wanted to do motion capture before and couldn’t.”

Tiny photosensors embedded in regular clothes record movement. The projectors send out patterns of near-infrared light: approximately 10,000 different patterns a second. When the patterns hit the tiny photosensors embedded in the subject’s clothes, the photosensors capture the coded light and convert it into a binary signal that indicates each sensor’s position. Because the light patterns hit the sensors differently depending on where they are, each sensor receives a unique sequence of light. These readings are recorded about 500 times a second for each sensor. The sensors send the information to a thin, lightweight microcontroller worn by the subject under her clothes, which then transmits the data back to a computer via Bluetooth. The whole system can cost less than $1,000, with each photosensor costing about $2, a vibrating sensor $80, and a projector $50. (Raskar says that at least six projectors are required per system.) “Each photodetector is essentially decoding its own indoor location in a similar fashion to GPS,” he says.
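The article doesn’t spell out the coding scheme, but structured-light positioning systems of this kind commonly project Gray-code bit planes, with each successive pattern contributing one bit of a sensor’s location. The sketch below is an illustration under that assumption, not a description of Second Skin’s actual code; the function names are hypothetical.

```python
# Minimal sketch of coded-light position decoding, assuming Gray-code
# bit-plane patterns (an assumption, not confirmed by the article).
# Each photosensor reads one bit per projected pattern: lit = 1, dark = 0.

def gray_to_binary(gray: int) -> int:
    """Convert a Gray-code value to an ordinary binary integer."""
    binary = gray
    while gray:
        gray >>= 1
        binary ^= gray
    return binary

def decode_position(readings: list[int]) -> int:
    """Turn one sensor's sequence of on/off readings (most significant
    bit first) into a column index in the projector's image."""
    gray = 0
    for bit in readings:
        gray = (gray << 1) | bit
    return gray_to_binary(gray)

# A sensor that saw the 10-pattern sequence below sits in projector
# column 625 of 2**10 = 1024 possible columns.
print(decode_position([1, 1, 0, 1, 0, 0, 1, 0, 0, 1]))  # -> 625
```

With ten patterns resolving 1,024 columns (and another ten for rows), a projector cycling through roughly 10,000 patterns a second could refresh every sensor’s position about 500 times a second, which matches the sampling rate quoted above.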

Track and vibrate: Tiny photosensors are embedded in the black fabric shown above. Blue vibrators also surround the subject’s wrist (left). The photosensors capture light emitted from nearby projectors to determine the position of the subject’s arm. Software then compares the real arm position (red line) with the ideal arm position (yellow line). The circles correspond to the vibrators; the green one will vibrate to indicate the correct position to the subject.

Because each Second Skin sensor has a unique ID, Raskar says, the system will be more accurate than traditional optical systems, in which the reflectors worn by the subject are indistinguishable from one another and can cause errors when one reflector crosses another. What’s more, Raskar has shown that the system works outside and in the dark.

“In theory, everything his system can do, we could do by computer vision, but in practice we haven’t found a solution yet,” says Bregler. “With his sensors, it’s very accurate and very clear where [the person] is from frame to frame.”

For the movie industry, this potentially means that motion tracking can be done on a regular set, which would save production time and let the actors work in a natural setting. “These elaborate systems get in the way of trying to shoot these films,” says Steve Sullivan, the senior technology officer at Lucasfilm’s Industrial Light & Magic (ILM). “A lot of people see motion tracking as being a solved problem, but I think there’s much more we can do to make it more accessible to a range of people and less in the way.” ILM recently developed a proprietary system of high-contrast bands for on-set motion tracking in Pirates of the Caribbean 2 and Iron Man, but the actors have to wear a special suit, and extensive post-processing is required, says Sullivan. “It’s great to hear that researchers are trying to tackle this problem, because it’s such an issue to have to break up production and shoot scenes and actors separately on these stages.”

“I think a lot of the potential for these kinds of low-end tracking technology is … more for new applications, where you can’t spend all the time to make it perfect,” says Michael Gleicher, an associate professor of computer science at the University of Wisconsin-Madison. Gleicher says that motion tracking is becoming more popular in video games that can track a player and react in real time.

The researchers have also shown that the system can be used to track and correct motions in a simple tai chi exercise. In this setup, tiny vibrators buzz whenever the current position of the subject’s limb drifts off course, signaling which way to self-correct. The researchers speculate that the system could be used in much the same way for physical therapy, or to track a subject’s movements in order to prevent falls.
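The article doesn’t detail the feedback logic, but the caption above suggests the software compares the measured limb position with an ideal trajectory and activates the vibrator nearest the needed correction. Here is a minimal sketch of that idea, assuming a hypothetical ring of eight evenly spaced vibrators around the wrist; pick_vibrator and the coordinates are illustrative, not from the research.

```python
# Minimal sketch of vibrotactile guidance: buzz the wristband motor
# that points toward where the limb should move. The ring layout and
# threshold are assumptions, not details from the article.

import math

def pick_vibrator(current, target, n_vibrators=8, tolerance=0.01):
    """Return the index of the vibrator pointing from the current wrist
    position toward the target, or None if the pose is close enough.

    current, target: (x, y) positions in the plane of the wristband;
    vibrator i sits at angle 2*pi*i/n_vibrators around the wrist.
    """
    dx, dy = target[0] - current[0], target[1] - current[1]
    if math.hypot(dx, dy) < tolerance:   # on course: stay silent
        return None
    angle = math.atan2(dy, dx) % (2 * math.pi)
    step = 2 * math.pi / n_vibrators
    return round(angle / step) % n_vibrators

# Wrist at (0.40, 0.10) but the ideal pose is (0.40, 0.35): the motor
# at the top of the band (index 2 of 8) buzzes, cueing the subject to
# raise the arm.
print(pick_vibrator((0.40, 0.10), (0.40, 0.35)))  # -> 2
```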
