Researchers at MIT and Boston Children’s Hospital have developed a system that can take MRI scans of a patient’s heart and, in a matter of hours, convert them into a tangible, physical model that surgeons can use to plan surgery.
The models could provide a more intuitive way for surgeons to assess and prepare for the anatomical idiosyncrasies of individual patients—a particular concern when heart abnormalities are the reason for surgery in the first place.
The project depends on a new technique, developed by collaborator Mehdi Moghari, a physicist at Boston Children’s Hospital, that increases the precision of cardiac MRI scans as much as sevenfold. With Moghari’s system, a single scan generates roughly 200 2-D cross sections of the patient’s heart.
Like a black-and-white photograph, each cross section has regions of dark and light, and the boundaries between those regions may indicate the edges of anatomical structures. Then again, they may not.
Determining the boundaries between objects in an image is one of the central problems in computer vision, known as image segmentation. But general-purpose image-segmentation algorithms aren’t reliable enough to produce the very precise models that surgical planning requires. And a human expert might take 10 hours to segment all 200 cross sections.
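To see what even the crudest form of segmentation looks like, consider a toy sketch (this is an illustration for intuition, not the researchers’ method): global thresholding simply calls every pixel brighter than a cutoff "structure" and everything else "background."

```python
import numpy as np

# Toy stand-in for one grayscale MRI cross section:
# a bright "chamber" on a darker background.
section = np.full((8, 8), 0.2)
section[2:6, 2:6] = 0.9

# The crudest segmentation: every pixel above a global
# threshold is "structure", everything else "background".
mask = section > 0.5
print(mask.sum())  # 16 pixels fall inside the bright region
```

Real cardiac MRI is far noisier than this, which is exactly why such general-purpose rules are not precise enough for surgical planning.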
So Polina Golland, a professor of electrical engineering and computer science at MIT, and Danielle Pace, a student in her group, instead asked experts to identify boundaries in just a few of the cross sections and allowed algorithms to take over from there. Their strongest results came when, rather than segmenting entire cross sections, the experts segmented only a small patch of each—one-ninth of the total area.
In that case, segmenting just 14 patches and letting the algorithm infer the rest yielded 90 percent agreement with expert segmentation of the entire collection of 200 cross sections. Human segmentation of just three patches yielded 80 percent agreement.
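The patch-seeded idea can be sketched with a toy region-growing routine: a few expert-labeled pixels act as seeds, and labels spread outward to neighboring pixels of similar intensity. This is a hypothetical illustration of sparse-seed propagation, not the actual algorithm Golland and Pace used (which the article does not describe in detail); `propagate_labels` and its `tol` parameter are invented for the sketch.

```python
import numpy as np
from collections import deque

def propagate_labels(image, seeds, tol=0.15):
    """Grow expert labels outward from sparse seed pixels.

    image: 2-D float array (a grayscale cross section)
    seeds: 2-D int array; 0 = unlabeled, >0 = expert label
    tol:   max intensity difference a label may cross
    """
    labels = seeds.copy()
    queue = deque(zip(*np.nonzero(seeds)))
    h, w = image.shape
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                # Absorb the neighbor only if its intensity is close,
                # so labels stop at sharp dark/light boundaries.
                if abs(image[ny, nx] - image[y, x]) <= tol:
                    labels[ny, nx] = labels[y, x]
                    queue.append((ny, nx))
    return labels

# Usage: a bright block on a dark background, one seed in each region.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
seeds = np.zeros((10, 10), dtype=int)
seeds[4, 4] = 1   # expert label inside the bright region
seeds[0, 0] = 2   # expert label in the background
out = propagate_labels(img, seeds)
```

Two labeled pixels are enough to segment this toy image completely, which conveys why sparse expert input can go a long way when the algorithm handles the rest.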
“If somebody told me that I could segment the whole heart from eight slices out of 200, I would not have believed them,” Golland says.
Together, human segmentation of sample patches and algorithmic generation of a digital heart model takes about an hour. Using 3-D printing to create the model (which they’ve done with collaborators at Harvard’s Wyss Institute) takes a couple of hours more.
“We have used this type of model in a few patients and in fact performed ‘virtual surgery’ on the heart to simulate real conditions,” says Sitaram Emani, a cardiac surgeon at Boston Children’s Hospital who was not involved in the research. “Doing this really helped with the real surgery in terms of reducing the amount of time spent examining the heart and performing the repair.”