
Software Detects Motion that the Human Eye Can’t See

The video technique could lead to remote diagnostic methods, such as detecting the heart rate of someone on a screen.

A new set of software algorithms can amplify aspects of a video, revealing what is normally undetectable to the human eye and making it possible, for example, to measure a person’s pulse by filming him and capturing the way blood flows across his face.

Looking deeper: Fredo Durand, associate professor of computer science at MIT, developed an algorithm that can highlight surprising things in nearly any video.

The process, which the MIT computer scientists who developed it call “Eulerian video magnification,” breaks apart the visual elements of every frame of a video and reconstructs them with the algorithm, amplifying aspects of the video that are undetectable by the naked eye, such as the variations in redness in a man’s face caused by his pulse. “Just like optics has enabled [someone] to see things normally too small, computation can enable people to see things not visible to the naked eye,” says MIT computer scientist Fredo Durand, one of the coauthors of a paper about the research.
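For readers curious to see the idea in code, here is a minimal, hypothetical sketch in Python (not the researchers’ implementation): each pixel’s intensity is band-pass filtered over time around plausible heart-rate frequencies, and the filtered signal is scaled and added back to the frames. The synthetic video, the filter band, and the magnification factor alpha are all illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    # Hypothetical sketch of Eulerian-style magnification on synthetic data.
    fps = 30.0
    n_frames, h, w = 300, 32, 32

    # Synthetic grayscale video: a faint ~1 Hz "pulse" (60 beats per minute)
    # riding on a constant background, far too small to see directly.
    t = np.arange(n_frames) / fps
    pulse = 0.002 * np.sin(2 * np.pi * 1.0 * t)  # ~0.2% intensity swing
    video = 0.5 + pulse[:, None, None] + 0.0005 * np.random.randn(n_frames, h, w)

    # Temporal band-pass filter around plausible heart rates (0.8-3.0 Hz).
    b, a = butter(2, [0.8 / (fps / 2), 3.0 / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, video, axis=0)  # per-pixel filtering over time

    # Amplify the band-passed variation and add it back to the frames.
    alpha = 100.0  # magnification factor (assumed)
    magnified = np.clip(video + alpha * filtered, 0.0, 1.0)

    # The once-invisible pulse now dominates the mean intensity over time.
    print("original swing: %.4f" % np.ptp(video.mean(axis=(1, 2))))
    print("magnified swing: %.4f" % np.ptp(magnified.mean(axis=(1, 2))))

The published method also decomposes each frame spatially before the temporal filtering; this sketch skips that step to keep the core idea visible.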

UC Berkeley professor Maneesh Agrawala, who has spent his career in visualization and computer graphics, says he is impressed with the work. “The many examples in the video they provide are really nice examples of visualizing things that are difficult to do otherwise,” he says.

Durand and his colleagues plan to make their software code available to others this summer. He predicts the primary application will be remote medical diagnostics, but the technique can detect any small motion, so it might, for example, let structural engineers measure the way wind makes a building sway or deform slightly.

He adds that any video footage can be used, although noise and artifacts such as graininess will be amplified along with the signal, depending on the quality of the camera that captured the footage. The higher the quality of the footage, the better the result. “What’s really nice about this technique is that it can just take standard video, from just about any device, and then process it in a way that finds this hidden information in the signal,” Agrawala says.
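The noise point can be sketched the same way. In this hypothetical illustration, averaging neighboring pixels before any temporal filtering shrinks independent sensor noise, which is roughly what spatial pooling buys in such pipelines; block averaging here is a simplified stand-in, not the published method’s decomposition.

    import numpy as np

    def spatial_pool(video, factor=4):
        """Average each frame over non-overlapping factor x factor blocks."""
        n, h, w = video.shape
        blocks = video.reshape(n, h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(2, 4))  # -> (n, h // factor, w // factor)

    rng = np.random.default_rng(0)
    noise = 0.01 * rng.standard_normal((300, 32, 32))  # pure sensor noise
    print("raw noise std:    %.5f" % noise.std())
    print("pooled noise std: %.5f" % spatial_pool(noise).std())  # ~4x smaller

Averaging sixteen independent noisy pixels cuts the noise’s standard deviation by roughly a factor of four, so the same amplification boosts far less grain; higher-quality footage needs less of this smoothing to begin with, which is why it gives better results.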
