MIT News feature

Dina Katabi, SM ’99, PhD ’03

First she improved wireless data flow. Then she found an entirely new way to use wireless signals—in health care.
Dina Katabi (Image: Patrick Leger)

Dina Katabi, SM ’99, PhD ’03, came from a family of doctors in Syria, where she was born, but she abandoned medical school for a more math-focused life, studying electrical engineering at the University of Damascus and then computer science as a grad student at MIT, where she is now a professor.

In her early work, she came up with novel ways to prevent congestion in wireless networks, making things like Wi-Fi and cellular service faster and more efficient. To overcome the problem of interference—signals competing for the same pathway—she embraced it, developing a way to mix together signals from different sources and decode them on the receiving end.

A hallmark of her work, she says, is bridging disciplines, and her approach to wireless networks is unique in that it goes all the way from the signal to the application. “It’s not the traditional way people think about a field,” she says.

By the time she won a MacArthur “genius grant” in 2013, Katabi had turned her focus from how wireless signals carry data to how they bounce off people. Her team built a wall-mounted device that emits extremely low-powered radio waves and uses machine learning to extract information from the way those signals reflect off people’s bodies. Since the signals travel through walls, the device provides a kind of X-ray vision that can measure a person’s heartbeat, breathing, gait, and more without any kind of wearable sensor. This information can also reveal emotional states and even distinct stages of sleep.

“The vision is to do passive monitoring for health and wellness,” Katabi says. The device could detect subtle changes in Alzheimer’s patients, for example, and monitor some effects of their medications. She thinks her technology could also make metrics like the pain scale more objective, yet another way to improve medicine.
