State-of-the-art machine-learning projects often require massive amounts of data and computational power. As a consequence, only a few groups with these resources control access to many machine-learning models. Gauri Joshi, 34, is working to change that by designing distributed computing algorithms that make it possible for such models to be trained using a network of devices such as cell phones or sensors. “It democratizes machine learning and makes it universally accessible without requiring expensive computing hardware and enormous amounts of training data,” Joshi says.
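The idea can be sketched with federated averaging (FedAvg), a standard algorithm for this style of distributed training. In the toy version below, three simulated devices each fit a shared model on their own private data, and only the learned weights are pooled by a central server; the data, seed, and model are illustrative and are not Joshi's actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # the pattern all devices are jointly learning

def local_step(w, X, y, lr=0.1, epochs=5):
    """Run a few gradient-descent steps on one device's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three "devices", each holding a private shard of data that never leaves it.
shards = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    shards.append((X, y))

w = np.zeros(2)  # global model held by the server
for _ in range(20):
    # Each device trains locally; the server averages the resulting weights.
    local_models = [local_step(w, X, y) for X, y in shards]
    w = np.mean(local_models, axis=0)

print(w)  # converges close to [2.0, -1.0] without pooling any raw data
```

The point is that the server only ever sees weight vectors, never the raw examples, which is what makes training on a network of phones or sensors plausible.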
Robots aren’t generally known for their flexibility, but there are exceptions. Laura Blumenschein, 29, co-invented the Vine robot, which moves and grows like a plant. Shaped like a tube, the soft robot uses air pressure to maneuver around and gets longer as material fed through its center comes out the other end. “Think of a flexible blade of grass strong enough to push itself through cracks in concrete sidewalks, but compliant enough to be bent in the wind,” she says. Possible applications, says Blumenschein, include flexible intravenous catheters that allow for safer surgeries, shape-changing antennas for avoiding interference, and archaeological tools for exploring tight tunnels and ruins.
When treating stroke victims, doctors use a long, thin device called a guide wire to unclog the blocked blood vessels in the brain. But these manually controlled wires provide limited access to difficult-to-reach areas. Yoonho Kim, 33, developed a teleoperated robotic system that can wind its way through the brain’s vascular network. “My invention enables robotically assisted procedures for treating stroke and aneurysms with much improved safety and accuracy,” Kim says.
Large biobanks with health records from millions of patients offer a view into how genetic variation can influence people’s health. To take advantage of this, Joelle Mbatchou, 32, has developed a machine-learning model called Regenie that makes analyzing the data quicker and cheaper while reducing the amount of computing power required. The method could allow researchers to more easily identify genetic variants associated with specific diseases. “With the increasing number of collaborations being established across large biobanks, many of them involving individuals from diverse populations, Regenie makes it possible to leverage those data and … potentially make discoveries which can lead to improved clinical care,” she says.
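The kind of analysis such tools accelerate can be sketched in miniature: regress a trait against each genetic variant and flag the variants with outsized effects. The synthetic data and threshold below are purely illustrative and do not reflect Regenie's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_variants = 500, 100

# Genotypes coded 0/1/2 (copies of the minor allele per person).
G = rng.integers(0, 3, size=(n_people, n_variants)).astype(float)

# Synthetic trait truly influenced by variant 7 only, plus noise.
trait = 0.8 * G[:, 7] + rng.normal(size=n_people)

# Per-variant linear regression: effect size and a z-like score.
Gc = G - G.mean(axis=0)
yc = trait - trait.mean()
beta = Gc.T @ yc / (Gc ** 2).sum(axis=0)          # least-squares slope per variant
resid_var = ((yc[:, None] - Gc * beta) ** 2).mean(axis=0)
se = np.sqrt(resid_var / (Gc ** 2).sum(axis=0))
z = beta / se

hits = np.flatnonzero(np.abs(z) > 5)
print(hits)  # variant 7 stands out; the rest look like noise
```

Real biobank studies run this over millions of variants and correlated individuals, which is where the computational savings matter.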
Many AI models need large amounts of human-labeled data to be accurate. Research from Ishan Misra, 31, shows that it’s possible to train these models on visual data alone, skipping the human labels. Misra believes that such self-supervised models will greatly expand the types of problems that AI can solve. “In domains like medical imaging, where labeling is expensive, self-supervised models can play a major role in rapidly developing AI models at a fraction of the cost,” he says. “These models can also enable AI models to learn new skills continuously from the stream of data they observe, without human supervision.” That could be especially useful for robots operating in environments that constantly change.
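One common way to skip human labels is a "pretext task," where the training labels are generated from the data itself. The sketch below uses image rotation as the pretext signal on toy pixel data; it is a generic illustration of the idea, not Misra's models.

```python
import numpy as np

rng = np.random.default_rng(2)
images = rng.normal(size=(8, 4, 4))  # eight tiny 4x4 "images"

views, labels = [], []
for img in images:
    for k in range(4):                  # rotate by 0/90/180/270 degrees
        views.append(np.rot90(img, k))
        labels.append(k)                # the "free" label: how much we rotated

views = np.stack(views).reshape(len(views), -1)
labels = np.array(labels)
print(views.shape, labels.shape)  # -> (32, 16) (32,)
```

A model trained to predict the rotation from the pixels must learn something about visual structure, and that learned representation is what gets reused for downstream tasks, with no human ever labeling an image.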
Advances in speech and language technologies have led to tools like voice-enabled search, text-to-speech apps, speech recognition, and machine translation, but such tools only work for the languages they’ve been trained to recognize—typically English, French, or Chinese. For many other languages, including ones spoken by millions of Africans, they remain out of reach. Kathleen Siminyu, 28, wants to change that. She launched a fellowship program through which contributors created nine open-source African-language data sets annotated for a variety of machine-learning tasks. She sees “a possible future where all the information readily available on the internet is equally accessible in African languages as it is in English.”
Kathryn Tunyasuvunakool, 32, was part of the team that developed AlphaFold, a machine-learning method for predicting a protein’s 3D structure from its amino acid sequence. She also led the team that used AlphaFold to predict and study the structures of all human proteins, data that was made freely available to the scientific community. “If you want to get a detailed understanding of how they work, it’s very helpful to know their structure,” Tunyasuvunakool says of proteins. “Experimental methods exist for solving protein structures, but they can take a long time and are labor intensive. In many cases, AlphaFold can provide good-quality, actionable structural information within minutes.”
Chemists are always trying to figure out how to make new kinds of molecules. Usually, this requires a lot of research and lab experiments to get it right. Alain Vaucher, 31, made it his goal to simplify the synthesis of novel compounds. He created an AI system that analyzes related compounds to determine the chemical recipe for any molecule you desire. Via an online platform, researchers can draw the skeletal structure of the molecules they want to make. The AI then predicts which ingredients it will need, and under what conditions and in which order they should be mixed. A robot connected to the cloud then executes the instructions.
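AI systems in this area commonly represent molecules and reactions as SMILES text strings, with reactants, reagents, and products separated by ">". The snippet below illustrates that notation on a textbook esterification; it is a format example, not output from Vaucher's platform.

```python
# Reaction SMILES: reactants > reagents > products.
# Acetic acid + ethanol --(sulfuric acid)--> ethyl acetate + water.
rxn = "CC(=O)O.CCO>OS(=O)(=O)O>CC(=O)OCC.O"

reactants, reagents, products = rxn.split(">")
print(reactants.split("."))  # ['CC(=O)O', 'CCO']
print(products.split("."))   # ['CC(=O)OCC', 'O']
```

Encoding chemistry as text is what lets such systems treat synthesis prediction like a language problem: given the product string, predict the reactant and reagent strings that a robot can then act on.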
Generative AI, which creates entirely new content and images from existing data, is not inherently good or bad, yet many of its applications have been harmful—deepfakes, fake news articles, or chatbots that respond in toxic ways. Sharon Zhou, 29, is working to characterize the problems and advantages by developing new benchmarks with which to evaluate these systems. She notes that generative models are among the most capable AI systems we have, and yet we understand their capabilities the least. She aims to “make it possible to understand how fast our generative models are progressing at the frontier, if at all, and how they’re progressing: is it safe, and to what extent can it be deployed?”