Pioneers

Their innovations are leading the way to better gene editing, smarter AI, and a safer internet.

  • Joy Buolamwini
    Age: 28
    Affiliation: MIT Media Lab and Algorithmic Justice League

    When AI misclassified her face, she started a movement for accountability.

    As a college student, Joy Buolamwini discovered that some facial-analysis systems couldn’t detect her dark-skinned face until she donned a white mask. “I was literally not seen by technology,” she says.

    That sparked the research for her MIT graduate thesis. After finding that existing data sets for facial-analysis systems contained predominantly pale-skinned and male faces, Buolamwini created a gender-balanced data set of more than a thousand faces of politicians from Africa and Europe. When she used it to test AI systems from IBM, Microsoft, and Face++, she found that their accuracy varied greatly with gender and skin color. In determining gender, the systems’ error rates were less than 1 percent for lighter-skinned males but as high as 35 percent for darker-skinned female faces. (A toy version of this kind of disaggregated audit is sketched at the end of this profile.)

    In some cases, as when Facebook mislabels someone in a photo, such mistakes are merely an annoyance. But with a growing number of fields coming to rely on AI—law enforcement is using it for predictive policing, and judges are using it to determine whether prisoners are likely to reoffend—the opportunities for injustice are frightening. “We have to continue to check our systems, because they can fail in unexpected ways,” Buolamwini says. 

    A former Rhodes scholar and Fulbright fellow, she founded the Algorithmic Justice League to confront bias in algorithms. Beyond merely bringing these biases to light, she hopes to develop practices to prevent them from arising in the first place—like making sure facial-recognition systems undergo accuracy tests.

    —Erika Beras
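
    The disparities Buolamwini measured come from a simple but revealing step: computing error rates separately for each demographic group rather than in aggregate. The sketch below is a minimal, hypothetical illustration of such a disaggregated audit; the groups, labels, and predictions are invented for the example, and this is not her benchmark or code.

        import pandas as pd

        # Hypothetical audit records: each row is one face image, with the true
        # label, the classifier's prediction, and the demographic group.
        results = pd.DataFrame({
            "group": ["lighter-skinned male", "lighter-skinned male",
                      "darker-skinned female", "darker-skinned female"],
            "true_gender": ["male", "male", "female", "female"],
            "predicted":   ["male", "male", "male", "female"],
        })

        # Aggregate accuracy can look fine while hiding large gaps between groups,
        # so compute the error rate for each group separately.
        results["error"] = results["true_gender"] != results["predicted"]
        print(results.groupby("group")["error"].mean())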

  • Alessandro Chiesa
    Age: 30
    Affiliation: University of California, Berkeley

    A cryptocurrency that’s as private as cash.

    For all the promise of blockchain, there’s a problem that comes with treating all transactions as public information: some of that stuff is just nobody’s business. But with Zcash, a cryptocurrency cofounded by Alessandro Chiesa, transactions can be not only secure but as anonymous as handing someone a $20 bill from your wallet.

    That’s because Zcash employs a cryptographic protocol called a succinct zero-knowledge proof (see “10 Breakthrough Technologies 2018: Perfect Online Privacy”)—that is, an efficient way for one party to convince another that a statement is true without divulging any other information. (A toy example of the underlying idea appears at the end of this profile.)

    Zcash has huge implications for transactions that might otherwise reveal a buyer’s or seller’s location, medical information, or other private data. It allows people to do transactions online without risking their privacy or exposing themselves to identity theft. Zcash, which Chiesa launched four years ago, now has a market cap of over a billion dollars.

    —Dan Solomon
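
    Zcash’s zk-SNARKs are far more sophisticated than anything that fits in a few lines, but the underlying idea of proving you know something without revealing it goes back to classic protocols. The toy sketch below is a Schnorr-style proof of knowledge made non-interactive with the Fiat–Shamir heuristic, not Zcash’s actual construction: the prover shows she knows the secret exponent x behind a public value y = g^x mod p without ever disclosing x. The parameters are tiny and for illustration only.

        import hashlib
        import secrets

        # Toy group parameters (far too small for real use): p = 2q + 1 with p and q
        # prime, and g = 4 generates the subgroup of order q inside Z_p*.
        p, q, g = 2039, 1019, 4

        def prove(x):
            """Prove knowledge of x such that y = g^x mod p, without revealing x."""
            y = pow(g, x, p)
            r = secrets.randbelow(q)                        # one-time secret nonce
            t = pow(g, r, p)                                # commitment
            c = int(hashlib.sha256(f"{g}:{y}:{t}".encode()).hexdigest(), 16) % q
            s = (r + c * x) % q                             # response
            return y, t, s                                  # reveals nothing about x itself

        def verify(y, t, s):
            c = int(hashlib.sha256(f"{g}:{y}:{t}".encode()).hexdigest(), 16) % q
            return pow(g, s, p) == (t * pow(y, c, p)) % p   # check g^s == t * y^c (mod p)

        secret = secrets.randbelow(q)        # known only to the prover
        print(verify(*prove(secret)))        # True, yet the verifier never learns `secret`

    In Zcash, a far more elaborate version of this trick lets the network verify that a transaction is valid without seeing the sender, the receiver, or the amount.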

  • Chelsea Finn
    Age: 25
    Affiliation: Berkeley Artificial Intelligence Lab

    Her robots act like toddlers—watching adults, copying them in order to learn.

    Chelsea Finn is developing robots that can learn just by observing and exploring their environment. Her algorithms require much less data than is usually needed to train an AI—so little that robots running her software can learn how to manipulate an object just by watching one video of a human doing it. (A toy sketch of training a model to adapt from only a few examples appears at the end of this profile.)

    Finn’s robots act like toddlers, watching adults do something and copying them. A wooden shape-sorting toy in her lab shows evidence of the process: marks from where a robot repeatedly bashed a red cube before learning to place it inside the square hole.

    Her ultimate goal is to create robots that can be sent off into the world and acquire a general set of skills—not because they’ve been programmed for those tasks but because they’ve been taught to learn by observing. This might mean factory robots that wouldn’t have to be trained by teams of engineers, or AI systems that recognize objects without being trained on labeled images.

    Finn thinks a good intermediate goal for her robots is to teach them how to set the table. The first step is to make robots that can learn how to arrange multiple objects. “In many ways, the capabilities of robotic systems are still in their infancy,” she says. “The goal is to have them gain common sense.” 

    —Katherine Bourzac
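
    Finn’s robot-learning systems are much richer than anything that fits here, but the idea of training a model so that it can adapt from only a handful of new examples can be shown in miniature. The sketch below is a toy first-order meta-learning loop over invented linear-regression “tasks,” not her code or her robotics pipeline: the shared parameters are tuned so that a single small gradient step on a few points from a new task reduces that task’s held-out error.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_task():
            # Each "task" is a random linear function y = a*x + b.
            a, b = rng.uniform(-2, 2, size=2)
            def data(n):
                x = rng.uniform(-1, 1, n)
                return x, a * x + b
            return data

        def loss_and_grad(w, x, y):
            # Mean-squared error and its gradient for the model y_hat = w[0]*x + w[1].
            err = w[0] * x + w[1] - y
            grad = np.array([np.mean(2 * err * x), np.mean(2 * err)])
            return np.mean(err ** 2), grad

        w = np.zeros(2)                  # meta-parameters shared across all tasks
        inner_lr, outer_lr = 0.1, 0.01

        for step in range(5000):
            data = sample_task()
            x_tr, y_tr = data(5)         # a handful of examples from the new task
            x_val, y_val = data(20)      # held-out points from the same task
            _, g = loss_and_grad(w, x_tr, y_tr)
            w_adapted = w - inner_lr * g                    # quick, task-specific adaptation
            _, g_outer = loss_and_grad(w_adapted, x_val, y_val)
            w = w - outer_lr * g_outer                      # first-order meta-update

        # The loop optimizes w so that one inner gradient step on a few points from a
        # new task lowers that task's held-out error (a first-order approximation of
        # MAML-style meta-learning).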

  • Alexandre Rebert
    Age: 28
    Affiliation: ForAllSecure

    He asked, what if a computer could fix itself?

    When a computer system gets hacked, people typically fix the problem after the fact. Alexandre Rebert created a machine that can fix itself as the attack is happening.

    Rebert recognized that computers may lack creativity, but they’re good at doing things quickly and on a massive scale. His system, called Mayhem, can analyze thousands of programs simultaneously, doing in a few hours what might take a human expert years to accomplish.

    Mayhem, an autonomous system, does this by combining two techniques. The first is coverage-based fuzzing—a standard in automated security testing, in which semi-random data is thrown at a program to see whether an input triggers new behavior. It is essentially a fast, brute-force search. The second, symbolic execution, reasons about the program’s logic to craft inputs that reach deeper code paths; it is slower but more thorough. The approaches complement each other, making the combination more powerful than either technique on its own. (A toy coverage-guided fuzzer is sketched at the end of this profile.)

    Rebert led the team that created Mayhem at ForAllSecure, the Pittsburgh-based cybersecurity company he cofounded. The company’s work and mission stem from his research at Carnegie Mellon. He thinks his invention could be especially useful for vulnerable systems like power grids, hospitals, and banks.

    “There is an increasing amount of software in our lives,” says Rebert. “And depending only on human expertise is insufficient and dangerous.”

    —Erika Beras
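
    Mayhem itself is a large autonomous system, but the coverage-based-fuzzing half of the recipe can be shown in miniature. The toy sketch below is a generic coverage-guided fuzzer, not ForAllSecure’s code: it mutates inputs, keeps any input that reaches a code path it has not seen before, and reports inputs that crash a deliberately buggy target function. Here the target reports its own coverage; real fuzzers get that signal from program instrumentation.

        import random

        def target(data: bytes):
            """A deliberately buggy toy parser; returns the branch labels it reached."""
            reached = set()
            if data[:1] == b"F":
                reached.add("saw F")
                if data[1:2] == b"U":
                    reached.add("saw FU")
                    if data[2:3] == b"Z":
                        reached.add("saw FUZ")
                        raise RuntimeError("crash on the buggy code path")
            return reached

        def mutate(data: bytes) -> bytes:
            buf = bytearray(data or b"A")
            buf[random.randrange(len(buf))] = random.randrange(256)  # flip one random byte
            if random.random() < 0.2:
                buf.append(random.randrange(256))                    # occasionally grow the input
            return bytes(buf)

        random.seed(0)
        corpus = [b"AAA"]          # seed inputs
        seen = set()               # all branch labels reached so far

        for i in range(500_000):
            candidate = mutate(random.choice(corpus))
            try:
                reached = target(candidate)
            except RuntimeError as err:
                print(f"crashing input {candidate!r} found after {i} tries: {err}")
                break
            if reached - seen:     # new coverage: keep this input for future mutation
                seen |= reached
                corpus.append(candidate)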

  • Nabiha Saklayen
    Age: 28
    Affiliation: Cellino Biotech

    She developed a way to edit genes with cheap lasers.

    Gene editing holds great promise for correcting mutations like the one that causes sickle-cell anemia. But biologists need better ways to get DNA and other ingredients into cells. Typically, the gene-editing ingredients are delivered by viruses, which can have dangerous side effects, or by electroporation, a technique that uses strong electrical pulses and kills many cells in the process.

    Lasers offer a gentler alternative, but those methods have had their own drawbacks. The lasers used have typically been very powerful and expensive, and capable of injecting only one cell at a time—too slow for clinical applications.

    Nabiha Saklayen’s innovation was to design nanostructured add-ons to the laser system that deliver pulses of laser light to large numbers of cells at once, making it possible to dose them with gene editors at clinically useful speeds. Her process doesn’t require an expensive laser, though it took her a while to convince other researchers and her advisor that relatively cheap ones were powerful enough. “It doesn’t matter to the cell,” she says.

    Saklayen has now founded a company, Cellino Biotech, to commercialize her idea and use gene-editing tools to engineer cells.

    Trained as a physicist, she is unusually comfortable moving between scientific fields, including laser physics, nanomaterials, and synthetic biology. She credits her upbringing in Saudi Arabia, Bangladesh, Germany, and Sri Lanka with her adaptability. “I’m comfortable in new places, and at the interface of different fields,” she says.

    —Katherine Bourzac

  • Julian Schrittwieser
    Age: 25
    Affiliation: DeepMind

    AlphaGo beat the world’s best Go player. He helped engineer the program that whipped AlphaGo.

    A few years ago, when Julian Schrittwieser joined the Google-owned artificial-intelligence firm DeepMind, the board game Go was often called the Holy Grail of machine learning. The two-player game, which originated in ancient China, is so open-ended and so driven by intuition that many thought it would take a decade for AI to best the world’s top players. But in March 2016, a program developed by Schrittwieser and his DeepMind colleagues defeated South Korea’s Lee Sedol, one of the world’s top Go players, in a five-game match that drew more than 100 million viewers. Go enthusiasts called it the match of the century.

    Schrittwieser and his teammates followed this up with an even more impressive accomplishment. In October 2017, their new program, AlphaGo Zero, defeated the earlier program, AlphaGo, 100 games to zero. Unlike AlphaGo, which learned the game by studying the play of humans, AlphaGo Zero learned by playing against itself—a feat with major implications for artificial intelligence. “With AlphaGo Zero, we see that even in areas where we don’t have human knowledge, we can bootstrap that knowledge and have a system that learns on its own,” Schrittwieser says. (A toy illustration of learning through self-play appears at the end of this profile.)

    Schrittwieser, an Austrian native, is the lead software engineer on the AlphaGo Zero project. He is also a driving force behind a third DeepMind initiative, AlphaZero—a more generalized algorithm that has already mastered Go, chess, and the Japanese board game shogi. The push toward generalization, Schrittwieser says, is key to DeepMind’s quest to build intelligent machines that are independent of human intuition and can therefore devise better solutions to problems where human biases might otherwise get in the way. Ultimately, he believes, this could lead to entirely new, AI-driven innovations in fields from pharmaceuticals to materials science.

    —Jonathan W. Rosen
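
    AlphaGo Zero pairs deep neural networks with Monte Carlo tree search, which is far beyond a short listing, but the core idea of improving purely through self-play can be shown on a much smaller game. In the toy sketch below (not DeepMind code), a single tabular agent plays a simple take-away game against a copy of itself—take one, two, or three stones from a pile of 21; whoever takes the last stone wins—and, from nothing but win/loss feedback, learns the optimal strategy of always leaving its opponent a multiple of four stones.

        import random
        from collections import defaultdict

        PILE, MOVES = 21, (1, 2, 3)      # take 1-3 stones; whoever takes the last stone wins
        ALPHA, EPSILON = 0.2, 0.1
        Q = defaultdict(float)           # Q[(pile, move)] from the player-to-move's point of view

        def legal(pile):
            return [m for m in MOVES if m <= pile]

        def choose(pile):
            if random.random() < EPSILON:                        # explore occasionally
                return random.choice(legal(pile))
            return max(legal(pile), key=lambda m: Q[(pile, m)])  # otherwise play greedily

        random.seed(0)
        for episode in range(50_000):
            pile = PILE
            while pile:
                move = choose(pile)
                rest = pile - move
                if rest == 0:
                    target = 1.0                                 # taking the last stone wins
                else:
                    # The opponent moves next, and it is the same agent: its best value
                    # from the next position is exactly what we stand to lose (negamax).
                    target = -max(Q[(rest, m)] for m in legal(rest))
                Q[(pile, move)] += ALPHA * (target - Q[(pile, move)])
                pile = rest                                      # hand the board to the other copy

        print(max(legal(PILE), key=lambda m: Q[(PILE, m)]))      # 1: leaves 20, a multiple of 4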

  • John Schulman
    Age: 30
    Affiliation: OpenAI

    Training AI to be smarter and better, one game of Sonic the Hedgehog at a time.

    John Schulman, a research scientist at OpenAI, has created some of the key algorithms in a branch of machine learning called reinforcement learning. It’s just what it sounds like: you train AI agents in the same way you might train a dog, by offering a treat for a correct response. For a machine, the “treat” might be to rack up a high score in a video game.

    Which explains why Schulman is so excited about the 1991 video game Sonic the Hedgehog. The game, he says, is a perfect benchmark for testing how well new machine-learning algorithms transfer learned skills to new situations. Since Sonic is the world’s fastest hedgehog, the game moves rapidly, and it also depicts some interesting physics. Once an AI agent learns how to play, it’s easy for researchers to test its ability to transfer that knowledge to different scenarios.

    Once trained, these algorithms might be applied in the real world, for instance to improve robot locomotion. Traditional approaches have been specialized for certain situations—which means that on new terrain, a robot programmed using older methods might fall down. One that uses reinforcement learning, Schulman hopes, would be able to get back up and try new things until it solves the problem. (A toy reinforcement-learning loop is sketched at the end of this profile.)

    —Katherine Bourzac
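
    Schulman’s algorithms are considerably more sophisticated, but the treat-for-a-correct-response idea can be shown with a toy policy-gradient loop, the family of methods his research focuses on. In the hypothetical sketch below, a softmax policy over three slot-machine levers is nudged, via a REINFORCE-style update with a running baseline, toward whichever lever pays the biggest reward.

        import numpy as np

        rng = np.random.default_rng(0)
        payouts = np.array([0.2, 0.5, 0.8])   # hidden average reward ("treat") of each lever

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        prefs = np.zeros(3)        # policy parameters: a preference for each lever
        baseline, lr = 0.0, 0.1

        for step in range(3000):
            probs = softmax(prefs)
            action = rng.choice(3, p=probs)
            reward = rng.normal(payouts[action], 0.1)   # noisy treat for the chosen lever
            baseline += 0.01 * (reward - baseline)      # running average of past rewards
            # REINFORCE: shift probability toward actions that beat the baseline.
            grad_log_pi = -probs
            grad_log_pi[action] += 1.0
            prefs += lr * (reward - baseline) * grad_log_pi

        print(softmax(prefs))      # most of the probability ends up on the best lever (index 2)

    In a video game like Sonic, the reward would be the score, and the policy would be a neural network rather than three numbers, but the learning signal works the same way.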

  • Humsa Venkatesh
    Age: 32
    Affiliation: Stanford University

    She discovered a secret to cancer growth that could lead to a new class of drugs.

    Humsa Venkatesh’s research revealed how cancers hijack the activity of neural networks to fuel their own growth. Her discovery opened a new area of research targeting this neural activity, which is seen in many different types of cancer. “These neuronal systems are signaling inputs that instruct how the tumor grows and functions,” she says. The results could lead to therapies that work against tumor cells in all their diversity.

    When Venkatesh was a teenager in California, her uncle, who lived in India at the time, learned he had kidney cancer. Though he sought treatment in both India and the US, the only options available to him were standard radiation and chemotherapy, neither of which was effective. He died less than two years after the diagnosis. The experience made Venkatesh realize how little doctors understood the fundamental mechanisms of tumor growth.

    So instead of becoming a doctor, as she’d originally hoped, she devoted herself to studying those mechanisms. “I wanted my contribution to be not just treating these patients on an individual level, but really advancing cancer research in a way that would help us come up with new ways to treat [them],” she says.

    Now Venkatesh is drawing on this understanding of tumors’ essentially parasitic behavior to develop drugs that might block the way they exploit neural networks. These therapies could reach the clinic faster than many others because prototype drugs already exist—they were developed for other purposes before scientists recognized their potential in cancer treatment.

    —Yiting Sun