
New Camera Captures Light in Motion

The system records 0.6 trillion frames a second—good enough to follow the path of a laser beam as it bounces off objects.
December 14, 2011

Hollywood has to resort to trickery to show moviegoers laser beams traveling through the air. That’s because the beams move too fast to be captured on film. Now a camera that records frames at a rate of 0.6 trillion every second can truly capture the bouncing path of a laser pulse.

See a video of a laser pulse moving through a Coke bottle, or bouncing off a tomato.

The system was developed by researchers led by Ramesh Raskar at MIT’s Media Lab. Currently limited to a tabletop inside the group’s lab, the camera can record what happens when very short pulses of laser light—lasting just 50 femtoseconds (50 quadrillionths of a second)—hit objects in front of it. The camera captures the pulses bouncing between and reflecting off objects.

Raskar says the new camera could be used for novel kinds of medical imaging, tracking light inside body tissue. It could also enable novel kinds of photographic manipulation. In experiments, the camera has captured frames roughly 500 by 600 pixels in size.

The fastest scientific cameras on the market typically capture images at rates in the low millions of frames per second. They work much like a consumer digital camera, with a light sensor that converts light from the lens into a digital signal that’s saved to disk.

The Media Lab researchers had to take a different approach, says Andreas Velten, a member of the research team. An electronic system’s reaction time is inherently limited to roughly 500 picoseconds, he says, because it takes too long for electronic signals to travel along the wires and through the chips in such designs. “[Our shutter speed is] just under two picoseconds because we detect light with a streak camera, which gets around the electrical problem.”

More typically used to measure the timing of laser pulses than for photography, a streak camera doesn’t need any electronics to record light. Light entering the streak camera falls onto a specialized electrode—a photocathode—that converts the stream of photons into a matching stream of electrons. That electron beam hits a screen on the back of the streak camera that’s covered with chemicals that light up wherever the beam falls. The same mechanism is at work in a traditional cathode ray tube TV set.

Because a streak camera can only view a very narrow line of a scene at one time, the MIT system uses mirrors to build up a full view. A conventional digital camera captures the images from the back of the streak camera, and these images are then compiled by software into the final output. Each image captured by the digital camera records only the tiny fraction of a beam’s journey visible to the streak camera.

One result of this design is that videos captured by the team show the sequence of events as a laser pulse bounces around, but they don’t capture the fate of a single pulse of light. Rather, they capture a sequence of snapshots from the actions of many successive, identical light pulses, thanks to tight synchronization between the light pulses and streak camera. “We need an event that is repeatable to create an image or video,” says Velten.
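The line-by-line stitching described above can be sketched in a few lines of code. This is an illustrative mock-up, not the team's actual software: the function and array names are invented, and the "streak camera" here just returns synthetic data shaped like one horizontal line of the scene resolved over time.

```python
import numpy as np

N_LINES = 500    # vertical line positions scanned across the scene
N_TIME = 480     # time bins the streak camera resolves per pulse
N_PIXELS = 600   # horizontal pixels along each line

def capture_streak(line_index):
    """Stand-in for one streak-camera exposure: intensity vs. time
    for a single horizontal line of the scene. Seeding by line index
    mimics the repeatable, identical laser pulses the method needs."""
    rng = np.random.default_rng(line_index)
    return rng.random((N_TIME, N_PIXELS))

# Scan the scene: one identical laser pulse per line position,
# with mirrors (not modeled here) steering each line into view.
scans = np.stack([capture_streak(y) for y in range(N_LINES)])
# scans has shape (N_LINES, N_TIME, N_PIXELS).

# Reorder so each time bin becomes one full 2-D video frame.
video = scans.transpose(1, 0, 2)   # (N_TIME, N_LINES, N_PIXELS)
print(video.shape)                 # (480, 500, 600)
```

Each output frame here is 500 by 600 pixels, matching the frame sizes the article reports; because every line scan comes from a different (but identical) pulse, the assembled movie shows the average behavior of many pulses rather than the fate of one.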

That is in contrast to what is widely known as the “world’s fastest camera,” a system unveiled in 2009 by a research group at the University of California, Los Angeles, that captures 6.1 million frames per second and has a shutter speed of 163 nanoseconds, compared to the 1.7 picoseconds of the MIT group.
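For a sense of scale, the gap between the two shutter speeds quoted above works out to roughly five orders of magnitude:

```python
# Shutter speeds quoted in the article: UCLA's 2009 system vs.
# the MIT streak-camera system.
ucla_shutter = 163e-9   # 163 nanoseconds, in seconds
mit_shutter = 1.7e-12   # 1.7 picoseconds, in seconds

ratio = ucla_shutter / mit_shutter
print(f"MIT's shutter is ~{ratio:,.0f}x shorter")
# → MIT's shutter is ~95,882x shorter
```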

Because the MIT system can’t image events that don’t happen on a regular cycle, there are limits to what it can be used for, but Velten says there’s still value in slowing down the usually unobservable movement of light.

One possible application is a new kind of medical imaging that Velten and Raskar call “ultrasound with light.” That would involve firing laser pulses into tissue and using the camera’s ability to record light movements beneath a surface to learn about structures and other information invisible using normal illumination and cameras. The potential for that can be seen in the group’s videos, says Velten. “You can see reflections happening and light moving beneath the surface of objects.”

The MIT research group previously used a similar setup to gather images from around corners, by bouncing a laser around a corner and then capturing any light that bounced back.

Srinivasa Narasimhan, a Carnegie Mellon University professor who researches computational photography, calls the MIT fast imaging system “amazing.” He says physicists and chemists could use it to image very brief events and reactions, or to refine our understanding of how light interacts with objects. “We have known for a long time how to simulate light propagation,” he says. “Now we can actually see light propagate and interact with the scene in slow motion to verify these things. Seeing is believing.”

Because the MIT camera can see exactly how light interacts with a scene, it is also able to gather 3-D information that could be used to perform novel kinds of photographic manipulation, says Velten. “When you have that extra information about a scene, you can do things like change the lighting in a photo after you have taken it,” he says. Startup company Lytro recently launched a camera that records the path that light takes in order to perform similar tricks.

The MIT system’s impressive speed currently comes with some bulk: the camera setup covers a dining-table-sized bench, with the laser filling the space underneath. But Velten says the laser is over a decade old and could be replaced by one roughly the size of a desktop computer. He adds that research is underway to shrink the entire system to the size of a laptop.

Velten says the research team is now focusing on making the system more compact, identifying specific applications, and increasing the size of the images it collects. Further increasing the speed is a low priority, he says. “We’re already looking at light moving, so there’s no reason to go faster.”
