A new way to use the AI behind deepfakes could improve cancer diagnosis

Generative adversarial networks, the algorithms responsible for deepfakes, have developed a bit of a bad rap of late. But their ability to synthesize highly realistic images could also have important benefits for medical diagnosis.
Deep-learning algorithms are excellent at pattern matching in images: they can be trained to detect different types of cancer in a CT scan, differentiate diseases in MRIs, and flag abnormalities in an X-ray. But because of privacy concerns, researchers often don’t have enough training data. This is where GANs come in: they can synthesize medical images that are indistinguishable from real ones, effectively multiplying a data set to the necessary size.
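The data-multiplication idea can be sketched in a few lines. This is a minimal illustration, not the researchers' pipeline: `generator` stands in for an already-trained GAN generator (a hypothetical callable; the article does not show the actual model or its API), and the synthetic scans are simply appended to the scarce real ones before classifier training.

```python
import numpy as np

def synthesize_batch(generator, n, latent_dim, rng):
    # draw n latent vectors and map them through the (hypothetical)
    # trained GAN generator to get n synthetic scans
    z = rng.standard_normal((n, latent_dim))
    return generator(z)

def augment(real_images, generator, factor=2, latent_dim=64, seed=0):
    # multiply a scarce data set: keep all real scans and append
    # (factor - 1) times as many GAN-synthesized ones
    rng = np.random.default_rng(seed)
    n_fake = (factor - 1) * len(real_images)
    fake = synthesize_batch(generator, n_fake, latent_dim, rng)
    return np.concatenate([real_images, fake], axis=0)
```

The downstream diagnosis model then trains on the enlarged array exactly as it would on real data.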
There is another challenge, though. Deep-learning algorithms need to train on high-resolution images to produce the best predictions, yet synthesizing such high-res images, especially in 3D, takes a lot of computational power. That means it requires special and expensive hardware, making its large-scale use impractical in hospitals.
So researchers from the Institute of Medical Informatics at the University of Lübeck proposed a new approach that makes the process far less computationally intensive. They broke it into stages: the GAN first generates the whole image at low resolution, then fills in the details at full resolution one small section at a time. In experiments, the researchers showed not only that their method generated realistic high-resolution 2D and 3D images with modest computational resources, but also that the cost stayed constant regardless of image size.
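The staged scheme described above can be sketched as follows. This is a simplified sketch under stated assumptions, not the Lübeck group's implementation: `low_res_global` stands in for the first-stage GAN, and `refine_patch` (here just nearest-neighbour upsampling) stands in for the second-stage generator that would add realistic detail. The point it illustrates is structural: the refinement step only ever holds one fixed-size patch in memory, so peak cost per step does not grow with the final image size.

```python
import numpy as np

def low_res_global(shape_lr, rng):
    # stand-in for the first-stage GAN: one whole image at low resolution
    return rng.standard_normal(shape_lr)

def refine_patch(lr_patch, scale):
    # stand-in for the second-stage GAN: upsample one small section
    # (nearest-neighbour repeat instead of a learned generator)
    return np.repeat(np.repeat(lr_patch, scale, axis=0), scale, axis=1)

def synthesize(shape_hr, scale=4, patch_lr=8, seed=0):
    rng = np.random.default_rng(seed)
    h_lr, w_lr = shape_hr[0] // scale, shape_hr[1] // scale
    lr = low_res_global((h_lr, w_lr), rng)
    hr = np.empty(shape_hr)
    # the refinement stage only ever sees a patch_lr x patch_lr section,
    # so memory per step is constant no matter how large the output is
    for i in range(0, h_lr, patch_lr):
        for j in range(0, w_lr, patch_lr):
            patch = lr[i:i + patch_lr, j:j + patch_lr]
            hr[i * scale:(i + patch_lr) * scale,
               j * scale:(j + patch_lr) * scale] = refine_patch(patch, scale)
    return hr
```

The same loop extends to 3D by adding a third patch index; only the per-patch work, not the whole volume, has to fit on the accelerator at once.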