

Making the Invisible Visible: Matter, Methods, and Minds

The 2016 winner of the $500,000 Lemelson-MIT Prize, which honors outstanding mid-career inventors, discusses his work as an imaging scientist and his plan to expand peer-to-peer inventing opportunities for young people via the REDX platform.

Provided by the Lemelson-MIT Program

Inventing the future begins with imagining the impossible. The ability to drive through fog as if it were a sunny day, or to read a book without opening it, might seem farfetched. Can you imagine technology that detects circulating tumor cells with a device resembling a blood-pressure cuff? Such innovations may seem to extend beyond our current realm of ability, but in my Camera Culture Group at the MIT Media Lab, these inventions and others are well underway.


Such advances represent more than just fascinating science; they could have global impacts and significant, practical applications in everyday life. It’s not farfetched to think that consumer electronic devices will soon incorporate technologies that enable users to see through fog—and even walls—as well as around corners. In addition, these innovations will significantly improve biomedical technology, search-and-rescue operations, and imaging in hazardous situations.

Making the Invisible Visible

The invention of X-ray imaging enabled us to see inside our bodies. The invention of thermal infrared imaging enabled us to depict heat. Over the last few centuries, then, the key trick to making the invisible visible has been recording with a new slice of the electromagnetic spectrum. But the impossible photos of tomorrow won’t be recorded; they’ll be computed. Accordingly, we now need to co-design novel imaging hardware and computational algorithms.

For instance, in September 2015, at the Defense Advanced Research Projects Agency (DARPA) “Wait, What?” Future Technology Forum, I presented a talk titled “Extreme Computational Photography,” which showcased our latest work. Femto-photography, a field developed by my research group, uses a high-speed camera that allows us to visualize the world at nearly a trillion frames per second, so that we can create slow-motion movies of light in flight—and impossible photos from scattered light.

Video: Making the Invisible Visible: Matter, Methods, and Minds (MIT Media Lab, Camera Culture Group)

Normally, cameras record what’s in the direct line of sight, using direct light. However, fog, tissue, and room corners scatter light in ways that obscure the object of interest. The scientific community has been studying scattered light for decades, but my group decided to take things a step further. One complication in researching scattered light is that the noise dominates the signal. But one person’s noise is another person’s signal. So the question becomes: how can we exploit scattered light to gain more information about the scenes we’re exploring?

Let’s begin with seeing what’s around a corner. A laser pulse of light, lasting less than one-trillionth of a second, flashes through the air and explodes against a wall, sending photons scattering around the room. A small number of these photons will return to the starting point and be collected by a femto-camera at a rate equivalent to roughly half a trillion frames per second.
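To build intuition for how those arrival times encode geometry, here is a minimal sketch, assuming a toy 2D scene with a single hidden point. Each photon’s round-trip time constrains the hidden point to an ellipse whose foci are the laser spot and the detection point on the wall; backprojecting many such ellipses makes them intersect at the hidden object. The coordinates, grid, and tolerance below are illustrative assumptions, not our experimental parameters.

```python
import numpy as np

C = 3e8  # speed of light, m/s

# Toy 2D scene (meters): a laser illuminates wall point L_pt; a camera
# time-stamps photons returning via wall points D_pts. The hidden point
# H_true sits around the corner. All coordinates are illustrative.
L_pt = np.array([0.0, 0.0])
H_true = np.array([0.4, 0.6])                       # hidden object (unknown)
D_pts = np.array([[x, 0.0] for x in np.linspace(-0.5, 0.5, 21)])

def path_len(H, D):
    # Length of the bounce path: laser spot -> hidden point -> wall point.
    return np.linalg.norm(H - L_pt) + np.linalg.norm(H - D)

# Simulate the measured round-trip times for each detection point.
times = np.array([path_len(H_true, D) / C for D in D_pts])

# Backprojection: each (D, t) pair constrains H to an ellipse with foci
# L_pt and D; vote for grid cells whose path length matches the timing.
xs = np.linspace(-0.2, 1.0, 121)
ys = np.linspace(0.1, 1.0, 91)
votes = np.zeros((len(ys), len(xs)))
tol = 0.01  # path-length tolerance in meters (plays the role of a time bin)
for D, t in zip(D_pts, times):
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            if abs(path_len(np.array([x, y]), D) - t * C) < tol:
                votes[iy, ix] += 1

# The cell where the ellipses agree marks the hidden point.
iy, ix = np.unravel_index(votes.argmax(), votes.shape)
print(f"recovered hidden point ~ ({xs[ix]:.2f}, {ys[iy]:.2f})")  # near (0.40, 0.60)
```

With noisy, time-binned measurements the same idea holds, except that the votes blur into a heatmap whose peak marks the hidden geometry.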

By capturing and analyzing the scattered light at high time resolution, we can create a 3D image of the object that’s around the corner and out of sight. This idea of scattered-light imaging is also part of the ambitious new DARPA program Revolutionary Enhancement of Visibility by Exploiting Active Light-fields (REVEAL), led by Predrag Milojkovic, program manager for DARPA’s Defense Sciences Office. Scattered light can help us paint a picture of what lies beyond our scope of vision. Using fluorescence lifetime imaging (FLIM) techniques, as demonstrated by MIT PhD candidate and Camera Culture Group researcher Guy Satat and others, we can also detect cancer tumors hidden in deep tissue—potentially eliminating the need for X-rays and biopsies. Ultra-fast imaging can measure the intensity decay, or lifetime, of the fluorophores that tag tumor cells. More importantly, it can distinguish fluorescence lifetime decay from scattering-induced decay in tissue.
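The lifetime measurement itself can be illustrated with a toy calculation (the lifetime and sampling window below are assumed, illustrative values, not our experimental data): a fluorophore’s time-resolved intensity decays exponentially, and a log-linear fit recovers its lifetime.

```python
import numpy as np

# Toy fluorescence-lifetime estimate: time-resolved intensity decays as
# I(t) = A * exp(-t / tau); recovering tau is the core of FLIM.
# Noiseless, illustrative values: a 4 ns lifetime over a 20 ns window.
tau_true = 4.0e-9
t = np.linspace(0, 20e-9, 200)
intensity = 1000.0 * np.exp(-t / tau_true)

# Log-linear least squares: log I = log A - t / tau, so the slope is -1/tau.
slope, _ = np.polyfit(t, np.log(intensity), 1)
tau_est = -1.0 / slope
print(f"estimated lifetime: {tau_est * 1e9:.2f} ns")  # -> 4.00 ns
```

In tissue, scattering adds its own apparent decay, which is why separating the two components matters; a single-exponential fit like this is only the noise-free starting point.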

Our team is exploring ways to use time-resolved measurement to image through thick and highly scattering materials, with an eye toward major applications in biomedical imaging, subdermal diagnosis, and dental imaging.


If we add terahertz imaging to the mix, we can begin to see through other materials, such as paper. Our team has found a way to use time-of-flight terahertz spectroscopic imaging to read through the pages of an unopened book. Time-domain terahertz spectroscopy uses pulses much as radar and ultrasound do, giving us information about the depth and range of the pages by measuring the echo of terahertz pulses. In addition, the contrast in reflectivity between blank paper and inked paper enables us to recover the content of each page by mapping the distribution of pulses across the reconstructed surface of the page.
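The pulse-echo ranging idea can be sketched numerically. In this toy version (the page spacing, pulse width, and refractive index are assumed, illustrative values), each interface in the paper stack returns an echo, and the echo delay converts to depth via depth = c·t / (2n):

```python
import numpy as np

C = 3e8        # speed of light in vacuum, m/s
N_PAPER = 1.5  # assumed effective refractive index of the paper stack

# Toy time-domain trace: the reflected signal is a train of echoes, one
# per page interface, each delayed by its round-trip time t_i = 2*n*d_i/c.
# Interfaces every 20 microns; ~20 fs echo width. Illustrative values.
page_depths = np.arange(1, 6) * 20e-6
echo_times = 2 * N_PAPER * page_depths / C
t = np.linspace(0, 1.5e-12, 3000)
sigma = 2e-14
trace = sum(np.exp(-(t - t0) ** 2 / (2 * sigma ** 2)) for t0 in echo_times)

# Detect echo peaks (local maxima above a threshold), then convert each
# delay back into a depth.
is_peak = (trace[1:-1] > trace[:-2]) & (trace[1:-1] > trace[2:]) & (trace[1:-1] > 0.5)
peak_times = t[1:-1][is_peak]
depths_um = C * peak_times / (2 * N_PAPER) * 1e6
print(np.round(depths_um, 1))  # approximately [20. 40. 60. 80. 100.]
```

A real trace also carries the per-page reflectivity contrast that distinguishes ink from blank paper; this sketch covers only the depth-ranging half of the problem.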

Another prototype incorporates radio-frequency (RF) technology to unveil shapes on the other side of a wall. RF bounces off human-sized objects as if those objects were made of mirrors, so we can’t recover full 3D shapes from a single emitter and receiver. But by combining multiple RF frequencies and emitters, we “light up” the scene behind the wall, then examine the reflected RF energy in the time domain to compute blobby 3D shapes.
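One standard way to turn measurements at multiple RF frequencies into a time-domain range estimate is a stepped-frequency sweep: record the complex reflection at each frequency, then inverse-FFT the sweep into a range profile. The sketch below illustrates that general technique, not our specific prototype; the target range and sweep parameters are assumed values.

```python
import numpy as np

C = 3e8  # propagation speed, m/s

# Toy stepped-frequency measurement: probe at 128 frequencies (1 GHz
# start, 10 MHz steps) and record the complex reflection from a single
# reflector behind the wall. A round-trip delay of tau shows up as a
# linear phase ramp exp(-2j*pi*f*tau) across the sweep.
target_range = 3.75                       # meters to the hidden reflector
freqs = 1e9 + np.arange(128) * 10e6
delay = 2 * target_range / C
response = np.exp(-2j * np.pi * freqs * delay)

# Inverse FFT of the frequency sweep gives the time-domain range profile;
# the bin spacing corresponds to a range resolution of c / (2 * bandwidth).
profile = np.abs(np.fft.ifft(response))
bin_res = C / (2 * 128 * 10e6)
est_range = profile.argmax() * bin_res
print(f"estimated range: {est_range:.2f} m")  # -> 3.75 m
```

With several emitters, each such range profile constrains the reflector to a different shell, and intersecting the shells is what yields the blobby 3D shapes.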

Helping Young People Engage in Creative Invention

We are making the invisible visible at MIT. But we’re also passionate about bringing invisible problems to light—and making talent visible—worldwide.

When contemplating the future, we should ask ourselves: “how far can we go?” and “whose imagination will take us there?” The joys and satisfaction that my colleagues and I experience from developing inventions and new capabilities such as those described above—and the resulting benefits that can accrue to billions of people in need—can be experienced by young people as part of their daily lives anytime, anyplace. Inventors don’t need to be a particular age or be affiliated with universities to create technological solutions that make a difference. Now is the time to empower genuine peer-to-peer invention.  

To contribute toward this preferred future, over the next several years I will invest a portion of the Lemelson-MIT Prize money to support young inventors’ development. We’re enhancing REDX (Rethinking Engineering Design Execution), our peer-to-peer platform for young innovators. We hope other entities will also invest, making it possible for more young people to embark upon “invention education pathways,” allowing them to start early—as I did at age 10.

The REDX philosophy centers on a spot/probe/grow/launch model. The first step is spotting the right problem to work on, together with experts and stakeholders—a multi-stage process that weighs available resources against pressing problems. Later phases involve probing the solution, growing adoption, and launching the prototype.

The REDX philosophy has influenced six co-innovation hubs: REDX Mumbai, LVP MITRA (Hyderabad, India), Medhacker (São Paulo, Brazil), REDX Kumbhathon and the DISQ Center (both in Nashik, India), and the Emerging Worlds Special Interest Group at the MIT Media Lab. I am opening the REDX playbook to all so that anyone can apply to start a REDX co-innovation lab or club. I consider this a “flipped” venture-capital model for translating inventions into solutions with real impact.

Fast forward 10 years into the future. (Re)imagine a world in which young co-inventors around the globe are actively engaged in using online/offline collaboration, research data, innovative citizen-based technologies, and ways of thinking like inventors to address pressing challenges and leapfrog existing solutions. This is a future where great minds are engaged in spotting needs, probing solutions, developing prototypes, and deploying them to scale—ultimately, improving lives.

 

Ramesh Raskar is the 2016 winner of the $500,000 Lemelson-MIT Prize, which honors mid-career inventors dedicated to improving the world through outstanding technological invention. He is director of the Camera Culture research group at the MIT Media Lab and associate professor of Media Arts and Sciences at MIT. A pioneer in vision technologies and social innovation, he holds more than 75 patents and has received numerous awards for his work. He plans to use a portion of the Lemelson-MIT Prize money to enhance the REDX peer-to-peer platform for young inventors. Raskar received a bachelor’s degree in electronics and telecommunication from the Government College of Engineering in India and a PhD in computer science from the University of North Carolina at Chapel Hill.
