Hollywood has to resort to trickery to show moviegoers laser beams traveling through the air. That’s because the beams move too fast to be captured on film. Now a camera that records 0.6 trillion frames per second can truly capture the bouncing path of a laser pulse.
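To get a feel for what that frame rate means, a quick back-of-the-envelope calculation (not from the article itself, just the arithmetic implied by its numbers) shows how far light travels between consecutive frames:

```python
# How far does light move between frames at 0.6 trillion fps?
C = 299_792_458  # speed of light in vacuum, m/s
FPS = 0.6e12     # frame rate reported for the MIT system

frame_interval = 1 / FPS                 # seconds per frame
distance_per_frame = C * frame_interval  # metres light travels per frame

print(f"{frame_interval * 1e12:.2f} ps per frame")      # ~1.67 ps
print(f"{distance_per_frame * 1000:.2f} mm per frame")  # ~0.50 mm
```

In other words, a pulse advances only about half a millimeter from one frame to the next, which is why the videos can show light appearing to crawl across a tabletop scene.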
The system was developed by researchers led by Ramesh Raskar at MIT’s Media Lab. Currently limited to a tabletop inside the group’s lab, the camera can record what happens when very short pulses of laser light—lasting just 50 femtoseconds (50 quadrillionths of a second)—hit objects in front of them. The camera captures the pulses bouncing between and reflecting off objects.
Raskar says the new camera could be used for novel kinds of medical imaging, tracking light inside body tissue. It could also enable novel kinds of photographic manipulation. In experiments, the camera has captured frames roughly 500 by 600 pixels in size.
The fastest scientific cameras on the market typically capture images at rates in the low millions of frames per second. They work much like a consumer digital camera, with a light sensor that converts light from the lens into a digital signal that’s saved to disk.
The Media Lab researchers had to take a different approach, says Andreas Velten, a member of the research team. An electronic system’s reaction time is inherently limited to roughly 500 picoseconds, he says, because it takes too long for electronic signals to travel along the wires and through the chips in such designs. “[Our shutter speed is] just under two picoseconds because we detect light with a streak camera, which gets around the electrical problem.”
More typically used to measure the timing of laser pulses than for photography, a streak camera doesn’t need any electronics to record light. Light entering the streak camera falls onto a specialized electrode—a photocathode—that converts the stream of photons into a matching stream of electrons. That electron beam hits a screen on the back of the streak camera that’s covered with chemicals that light up wherever the beam falls. The same mechanism is at work in a traditional cathode ray tube TV set.
Because a streak camera can only view a very narrow line of a scene at one time, the MIT system uses mirrors to build up a full view. A conventional digital camera captures the images from the back of the streak camera, and these images are then compiled by software into the final output. Each image captured by the digital camera records only the tiny fraction of a beam’s journey visible to the streak camera.
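The scan-and-assemble step described above can be sketched in a few lines. This is a hypothetical illustration, not the team’s actual software; the array sizes are arbitrary and the random arrays are stand-ins for real streak-camera captures. Each capture records one spatial line versus time; stacking the captures from successive mirror positions yields a full 2-D frame for every time bin:

```python
import numpy as np

# Each streak-camera capture is a 2D array: one spatial line (x) versus
# time, so scan[t, x] is the brightness along that line at time-bin t.
# Sweeping a mirror across the scene yields one such capture per row (y).
n_rows, n_time, n_x = 500, 480, 600                           # illustrative sizes only
scans = [np.random.rand(n_time, n_x) for _ in range(n_rows)]  # stand-in data

# Stack the per-row captures so that video[t] is the full 2D frame
# (y by x) at time-bin t.
video = np.stack(scans, axis=1)  # shape: (n_time, n_rows, n_x)
print(video.shape)               # (480, 500, 600)
```

The key point is that no single capture contains a whole frame: the software interleaves many narrow line scans, taken at different mirror positions, into frames after the fact.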
One result of this design is that videos captured by the team show the sequence of events as a laser pulse bounces around, but they don’t capture the fate of a single pulse of light. Rather, they capture a sequence of snapshots from the actions of many successive, identical light pulses, thanks to tight synchronization between the light pulses and streak camera. “We need an event that is repeatable to create an image or video,” says Velten.
That is in contrast to what is widely known as the “world’s fastest camera,” a system unveiled in 2009 by a research group at the University of California, Los Angeles, that captures 6.1 million frames per second and has a shutter speed of 163 nanoseconds, compared to the 1.7 picoseconds of the MIT group.
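The quoted figures are internally consistent, as a quick check (my arithmetic, not the article’s) shows: at 6.1 million frames per second, the time between frames is about 164 nanoseconds, matching the 163-nanosecond shutter speed, while the MIT streak-camera exposure is roughly five orders of magnitude shorter:

```python
# Sanity check on the numbers quoted above.
ucla_interval_ns = 1 / 6.1e6 * 1e9  # time between frames at 6.1M fps, ~163.9 ns
mit_shutter_ps = 1.7                # MIT shutter speed from the article, in ps

ratio = ucla_interval_ns * 1000 / mit_shutter_ps
print(f"{ucla_interval_ns:.1f} ns vs {mit_shutter_ps} ps (~{ratio:,.0f}x faster)")
```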
Because the MIT system can’t image events that don’t happen on a regular cycle, there are limits to what it can be used for, but Velten says there’s still value in slowing down the usually unobservable movement of light.
One possible application is a new kind of medical imaging that Velten and Raskar call “ultrasound with light.” That would involve firing laser pulses into tissue and using the camera’s ability to record light movements beneath a surface to learn about structures and other information invisible using normal illumination and cameras. The potential for that can be seen in the group’s videos, says Velten. “You can see reflections happening and light moving beneath the surface of objects.”
The MIT research group previously used a similar setup to gather images from around corners, by bouncing a laser around a corner and then capturing any light that bounced back.
Srinivasa Narasimhan, a Carnegie Mellon University professor who researches computational photography, calls the MIT fast imaging system “amazing.” He says physicists and chemists could use it to image very brief events and reactions, or to refine our understanding of how light interacts with objects. “We have known for a long time how to simulate light propagation,” he says. “Now we can actually see light propagate and interact with the scene in slow motion to verify these things. Seeing is believing.”
Because the MIT camera can see exactly how light interacts with a scene, it is also able to gather 3-D information that could be used to perform novel kinds of photographic manipulation, says Velten. “When you have that extra information about a scene, you can do things like change the lighting in a photo after you have taken it,” he says. Startup company Lytro recently launched a camera that records the path that light takes in order to perform similar tricks.
The MIT system’s impressive speed currently comes with some bulk: the camera setup covers a dining-table-sized bench, with the laser filling the space underneath. But Velten says the laser is over a decade old and could be replaced by one roughly the size of a desktop computer. He adds that research is underway to shrink the entire system to the size of a laptop.
Velten says the research team is now focusing on making the system more compact, identifying specific applications, and increasing the size of the images it collects. Further increasing the speed is a low priority, he says. “We’re already looking at light moving, so there’s no reason to go faster.”