
Software Detects Motion that the Human Eye Can’t See

The video technique could lead to remote diagnostic methods, like the ability to detect the heart rate of someone on a screen.

A new set of software algorithms can amplify aspects of a video and reveal what is normally undetectable to human eyesight, making it possible, for example, to measure someone’s pulse by shooting a video of him and capturing the way blood flows across his face.

Looking deeper: Fredo Durand, associate professor of computer science at MIT, developed an algorithm that can highlight surprising things in nearly any video.

The software process, called “Eulerian video magnification” by the MIT computer scientists who developed it, breaks apart the visual elements of every frame of a video and reconstructs them after amplifying changes too subtle for the naked eye to detect, such as the variations in the redness of a man’s face caused by his pulse. “Just like optics has enabled [someone] to see things normally too small, computation can enable people to see things not visible to the naked eye,” says MIT computer scientist Fredo Durand, one of the coauthors of a paper about the research.
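
To make the general idea concrete, the sketch below shows one simplified way to do this kind of amplification: band-pass filter each pixel’s brightness over time and add an exaggerated copy of the filtered signal back onto the frames. It is a minimal illustration in Python, not the MIT implementation; the function name magnify, the 0.8–3 Hz band, and the amplification factor alpha are assumptions chosen for demonstration, and the published technique also decomposes each frame into a spatial pyramid before filtering.

```python
# A minimal sketch of the idea behind Eulerian-style video magnification,
# written with NumPy/SciPy. Illustrative only: the MIT method additionally
# decomposes each frame into a spatial (Laplacian) pyramid and amplifies
# each level separately, which this flat version skips.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt

def magnify(video, fps, low_hz=0.8, high_hz=3.0, alpha=50.0, blur_sigma=5.0):
    """Exaggerate subtle periodic changes, e.g. pulse-driven color shifts.

    video : float array of shape (num_frames, height, width), values in [0, 1]
    fps   : frames per second of the source footage
    """
    video = video.astype(np.float64)

    # Blur each frame so broad spatial changes (a flush of color across a
    # cheek) dominate over per-pixel sensor noise.
    smoothed = np.stack([gaussian_filter(frame, blur_sigma) for frame in video])

    # Band-pass filter every pixel's time series around the band of interest;
    # 0.8-3 Hz corresponds to roughly 48-180 heartbeats per minute.
    nyquist = fps / 2.0
    b, a = butter(2, [low_hz / nyquist, high_hz / nyquist], btype="band")
    bandpassed = filtfilt(b, a, smoothed, axis=0)

    # Add an amplified copy of the filtered signal back onto the original
    # frames, making the otherwise invisible variation visible.
    return np.clip(video + alpha * bandpassed, 0.0, 1.0)
```

Applied to a few seconds of footage of a face, a sketch like this should make the tiny rhythmic change in skin brightness visible from frame to frame.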

UC Berkeley professor Maneesh Agrawala, who has spent his career in visualization and computer graphics, says he is impressed with the work. “The many examples in the video they provide are really nice examples of visualizing things that are difficult to do otherwise,” he says.

Durand and his colleagues plan to make their software code available to others this summer. He predicts the primary application will be remote medical diagnostics, but the technique can detect any small motion; it might, for example, let structural engineers measure the way wind makes a building sway or deform slightly.

He adds that any video footage can be used, although noise and artifacts such as graininess will be amplified along with the signal, so the higher the quality of the footage, the better the result. “What’s really nice about this technique is that it can just take standard video, from just about any device, and then process it in a way that finds this hidden information in the signal,” Agrawala says.
