A new search tool developed by researchers at Microsoft indexes medical images of the human body, rather than the Web. On CT scans, it automatically finds organs and other structures, helping doctors navigate and work with 3-D medical imagery.
CT scans use X-rays to capture many slices through the body that can be combined to create a 3-D representation. This is a powerful tool for diagnosis, but it’s far from easy to navigate, says Antonio Criminisi, who leads a group at Microsoft Research Cambridge, U.K., that is attempting to change that. “It is very difficult even for someone very trained to get to the place they need to be to examine the source of a problem,” he says.
When a scan is loaded into Criminisi’s software, the program indexes the data and lists the organs it finds at the side of the screen, creating a table of hyperlinks for the body. A user can click on, say, the word “heart” and be presented with a clear view of the organ without having to navigate through the imagery manually.
Once an organ of interest has been found, the user is shown a 2-D and an enhanced 3-D view of structures in the area, and can navigate by touching the screen. A new scan can also be automatically and precisely matched up alongside a past one from the same patient, making it easy to see how a condition has progressed or regressed.
Criminisi’s software uses the pattern of light and dark in the scan to identify particular structures; it was developed by training machine-learning algorithms to recognize features in hundreds of scans in which experts had marked the major organs. Indexing a new scan takes only a couple of seconds, says Criminisi. The system was developed in collaboration with doctors at Addenbrooke’s Hospital in Cambridge, U.K.
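To give a feel for this kind of supervised approach, here is a minimal toy sketch: it labels CT intensity patches by comparing simple features (mean intensity and intensity range) against expert-marked training examples using a nearest-neighbor rule. This is not Microsoft's actual algorithm; the organ labels, intensity values, and 1-NN classifier are all illustrative assumptions.

```python
# Toy voxel-patch labeling from CT intensities. Illustrative only:
# the article says the real system was trained on hundreds of
# expert-marked scans; the data and classifier here are made up.

def features(patch):
    """Summarize a patch of CT intensities as (mean, range)."""
    return (sum(patch) / len(patch), max(patch) - min(patch))

def train(labeled_patches):
    """Store (feature, organ_label) pairs from expert-marked patches."""
    return [(features(p), label) for p, label in labeled_patches]

def classify(model, patch):
    """Label a new patch by its nearest training example (1-NN)."""
    f = features(patch)
    def dist(g):
        return sum((a - b) ** 2 for a, b in zip(f, g))
    return min(model, key=lambda m: dist(m[0]))[1]

# Hypothetical expert-marked training patches (Hounsfield units).
training = [
    ([40, 45, 50, 42], "heart"),          # soft tissue: mid intensities
    ([-900, -880, -910, -905], "lung"),   # air-filled: very low HU
    ([400, 420, 390, 410], "bone"),       # dense: high HU
]
model = train(training)
print(classify(model, [-895, -905, -890, -900]))  # prints "lung"
```

A production system would use far richer features and a stronger learner, and would aggregate per-voxel predictions into organ bounding boxes; the point here is only the train-on-marked-scans, classify-new-scans pattern the article describes.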
The Microsoft research group is exploring the use of gestures and voice to control the system. They can plug in the Kinect controller, ordinarily used by gamers to control an Xbox with body movements, so that surgeons can refer to imagery in mid-surgery without compromising their sterile gloves by touching a keyboard, mouse, or screen.