Researchers at MIT and Boston Children’s Hospital have developed a system that can take MRI scans of a patient’s heart and, in a matter of hours, convert them into a physical model that surgeons can use to plan surgery.
The models could provide a more intuitive way for surgeons to assess and prepare for the anatomical idiosyncrasies of individual patients—a particular concern when heart abnormalities are the reason for surgery in the first place.
The project depends on a new technique developed by Mehdi Moghari, a physicist at Boston Children’s Hospital and one of the collaborators on the project, which increases the precision of cardiac MRI scans up to sevenfold. With Moghari’s system, a single scan generates roughly 200 2-D cross sections of the patient’s heart.
Like a black-and-white photograph, each cross section has regions of dark and light, and the boundaries between those regions may indicate the edges of anatomical structures. Then again, they may not.
Determining the boundaries between objects in an image is one of the central problems in computer vision, known as image segmentation. But general-purpose image-segmentation algorithms aren’t reliable enough to produce the very precise models that surgical planning requires. And a human expert might take 10 hours to segment all 200 cross sections.
So Polina Golland, a professor of electrical engineering and computer science at MIT, and Danielle Pace, a student in her group, instead asked experts to identify boundaries in just a few of the cross sections and allowed algorithms to take over from there. Their strongest results came when, rather than segmenting entire cross sections, the experts segmented only a small patch of each—one-ninth of the total area.
In that case, segmenting just 14 patches and letting the algorithm infer the rest yielded 90 percent agreement with expert segmentation of the entire collection of 200 cross sections. Human segmentation of just three patches yielded 80 percent agreement.
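The core idea — an expert labels a small patch, and an algorithm propagates those labels to the rest of the image — can be illustrated with a toy sketch. This is not the researchers' actual algorithm (which is not described in detail here); it is a minimal stand-in that learns a single intensity threshold from one labeled patch of a synthetic cross section and applies it everywhere, then measures agreement with ground truth, mirroring how the team scored their results.

```python
import numpy as np

# Toy illustration only: propagate labels from one small "expert-labeled"
# patch to an entire synthetic cross section.

rng = np.random.default_rng(0)

# Synthetic 100x100 slice: a bright square "structure" on a darker background.
image = rng.normal(0.2, 0.05, (100, 100))
image[30:70, 30:70] = rng.normal(0.8, 0.05, (40, 40))

# Ground-truth segmentation (known here only because the data is synthetic).
truth = np.zeros((100, 100), dtype=bool)
truth[30:70, 30:70] = True

# The "expert" labels only one small patch that straddles the boundary.
patch = (slice(25, 45), slice(25, 45))
patch_pixels = image[patch].ravel()
patch_labels = truth[patch].ravel()

# Learn a decision rule from the patch alone: the midpoint between the
# mean intensity of labeled structure and labeled background.
threshold = (patch_pixels[patch_labels].mean()
             + patch_pixels[~patch_labels].mean()) / 2

# Apply the learned rule to every pixel in the slice.
prediction = image > threshold

# Score the same way the article does: fraction of pixels agreeing
# with the full ground-truth segmentation.
agreement = (prediction == truth).mean()
print(f"Agreement with ground truth: {agreement:.1%}")
```

Because the two synthetic tissue classes are well separated in intensity, one labeled patch is enough for near-perfect agreement here; real cardiac MRI is far noisier, which is why the actual system needs more sophisticated inference and several labeled patches.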
“If somebody told me that I could segment the whole heart from eight slices out of 200, I would not have believed them,” Golland says.
Together, human segmentation of sample patches and algorithmic generation of a digital heart model takes about an hour. Using 3-D printing to create the model (which they’ve done with collaborators at Harvard’s Wyss Institute) takes a couple of hours more.
“We have used this type of model in a few patients and in fact performed ‘virtual surgery’ on the heart to simulate real conditions,” says Sitaram Emani, a cardiac surgeon at BCH who was not involved in the research. “Doing this really helped with the real surgery in terms of reducing the amount of time spent examining the heart and performing the repair.”