Super-Slow-Mo Video

New Media Lab camera captures a trillion frames per second

  • by Larry Hardesty
  • February 21, 2012
  • Light show: Andreas Velten (left) and Ramesh Raskar have captured light scattering through a bottle and below the surface of a tomato.

MIT researchers have created a new imaging system that can acquire visual data at an effective rate of one trillion exposures per second—fast enough to produce a slow-motion video of a burst of light traveling the length of a plastic bottle. “There’s nothing in the universe that looks fast to this camera,” says Media Lab postdoc Andreas Velten, one of the system’s developers.

The system relies on a technology called a streak camera, whose aperture is a narrow slit. Particles of light—photons—enter the camera through the slit and are converted into electrons, which pass through an electric field that deflects them in a direction perpendicular to the slit. Because the electric field is changing very rapidly, it deflects the electrons corresponding to late-arriving photons more than it does those corresponding to early-arriving ones. The camera can thus determine the time of arrival of photons passing through a one-dimensional slice of space. As a burst of light travels through a plastic bottle, some of its photons exit the bottle all along the way, and the camera captures where those photons exit.
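To make that mapping concrete, here is a minimal sketch in Python. It is illustrative only: the linear deflection ramp, the sweep duration, the pixel counts, and the function name are assumptions, not details of the researchers' instrument.

```python
import numpy as np

# Toy model of a streak camera's time-to-position mapping (illustrative only;
# the linear ramp and all parameter values below are assumptions).
SWEEP_DURATION_S = 1e-9   # assume the deflecting field ramps over roughly 1 ns
SENSOR_ROWS = 512         # vertical sensor axis encodes arrival time
SENSOR_COLS = 512         # horizontal sensor axis encodes position along the slit

def streak_record(slit_positions, arrival_times):
    """Accumulate photons into one streak image.

    slit_positions: array of positions along the slit, normalized to [0, 1)
    arrival_times:  array of arrival times within the sweep, in seconds
    """
    # Later photons see a stronger field, so they are deflected farther:
    # arrival time maps (here, linearly) onto a vertical pixel row.
    rows = (np.clip(arrival_times / SWEEP_DURATION_S, 0, 1 - 1e-9) * SENSOR_ROWS).astype(int)
    cols = (np.clip(slit_positions, 0, 1 - 1e-9) * SENSOR_COLS).astype(int)
    image = np.zeros((SENSOR_ROWS, SENSOR_COLS))
    np.add.at(image, (rows, cols), 1)   # count photons per (time, position) bin
    return image

# Example: 10,000 photons spread along the slit, arriving over ~0.5 ns
img = streak_record(np.random.rand(10_000), np.random.rand(10_000) * 0.5e-9)
```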

To produce their super-slow-mo videos, Velten, Media Lab associate professor Ramesh Raskar, and chemistry professor Moungi Bawendi must perform the same experiment—such as passing a light pulse through a bottle—over and over, continually repositioning the streak camera to acquire a new one-dimensional sample of the scene. It takes only a nanosecond—a billionth of a second—for light to scatter through a bottle, but it takes about an hour to collect all the data necessary to build up a two-dimensional image for the final video. For that reason, Raskar calls the new system “the world’s slowest fastest camera.”
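The scanning procedure can be sketched as a simple loop. The two control functions below, passed in as parameters, are hypothetical stand-ins for whatever hardware interface the group actually used; averaging repeated exposures of the same row is one plausible way to exploit the fact that the experiment is repeatable.

```python
import numpy as np

# Illustrative acquisition loop; move_slit_to_row and fire_pulse_and_record are
# hypothetical placeholders for the real hardware-control calls.
def acquire_scan(num_rows, pulses_per_row, move_slit_to_row, fire_pulse_and_record):
    streak_images = []
    for row in range(num_rows):
        move_slit_to_row(row)  # reposition the camera to view a new 1-D slice
        # The pulsed-light experiment is repeatable, so repeated exposures of
        # the same row can be averaged to reduce noise.
        exposures = [fire_pulse_and_record() for _ in range(pulses_per_row)]
        streak_images.append(np.mean(exposures, axis=0))
    return streak_images  # one (time x slit-position) image per scanned row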

After an hour, the researchers have accumulated hundreds of thousands of data sets, each of which plots the one-dimensional positions of photons against their times of arrival. Raskar, Velten, and other members of Raskar’s Camera Culture group at the Media Lab developed algorithms that can stitch the raw data into a set of sequential two-dimensional images.
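The core of that stitching step amounts to reindexing the scan: each streak image records (arrival time, position along the slit) for one scanned row, while each video frame records (row, slit position) for one arrival time. The sketch below shows only that bookkeeping, under the assumption of identically shaped inputs; the group's actual algorithms also handle calibration, alignment, and noise, which is omitted here.

```python
import numpy as np

def assemble_frames(streak_images):
    """Turn per-row streak images into per-time 2-D frames.

    streak_images: list of arrays, each of shape (time_bins, slit_pixels),
                   one per scanned row of the scene.
    Returns an array of shape (time_bins, rows, slit_pixels); frames[t] is a
    2-D snapshot of the scene at time bin t.
    """
    volume = np.stack(streak_images, axis=0)   # (rows, time_bins, slit_pixels)
    return np.transpose(volume, (1, 0, 2))     # (time_bins, rows, slit_pixels)
```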

Because the system requires multiple passes to produce its videos, it can’t record events that aren’t exactly repeatable. Any practical applications will probably involve cases where the way in which light scatters—or bounces around as it strikes different surfaces—is itself a source of useful information. Those cases may, however, include analyses of the physical structure of both manufactured materials and biological tissues—“like ultrasound with light,” as Raskar puts it.
