Inioluwa Deborah Raji
Her research on racial bias in data used to train facial recognition systems is forcing companies to change their ways.
The spark that sent Inioluwa Deborah Raji down a path of artificial-intelligence research came from a firsthand realization that she remembers as “horrible.”
Raji was interning at the machine-learning startup Clarifai after her third year of college, working on a computer vision model that would help clients flag inappropriate images as “not safe for work.” The trouble was, it flagged photos of people of color at a much higher rate than those of white people. The imbalance, she discovered, was a consequence of the training data: the model was learning to recognize NSFW imagery from porn and safe imagery from stock photos—but porn, it turns out, is much more diverse. That diversity was causing the model to automatically associate dark skin with salacious content.
Though Raji told Clarifai about the problem, the company continued using the model. “It was very difficult at that time to really get people to do anything about it,” she recalls. “The sentiment was ‘It’s so hard to get any data. How can we think about diversity in data?’”
The incident pushed Raji to investigate further, looking at mainstream data sets for training computer vision. Again and again, she found jarring demographic imbalances. Many data sets of faces lacked dark-skinned ones, for example, leading to face recognition systems that couldn’t accurately differentiate between such faces. Police departments and law enforcement agencies were then using these same systems in the belief that they could help identify suspects.
“That was the first thing that really shocked me about the industry. There are a lot of machine-learning models currently being deployed and affecting millions and millions of people,” she says, “and there was no sense of accountability.”
Born in Port Harcourt, Nigeria, Raji moved to Mississauga, Ontario, when she was four years old. She remembers very little of the country she left other than the reason for leaving: her family wanted to escape its instability and give her and her siblings a better life. The transition proved tough. For the first two years, Raji’s father continued to work in Nigeria, flying back and forth between two continents. Raji attended seven different schools during their first five years in Canada.
Eventually, the family moved to Ottawa and things began to stabilize. By the time she applied to college, she was sure she was most interested in pre-med studies. “I think if you’re a girl and you’re good at science, people tell you to be a doctor,” she says. She was accepted into McGill University as a neuroscience major. Then, on a whim, and with her father’s encouragement, she visited the University of Toronto and met a professor who persuaded her to study engineering. “He was like, ‘If you want to use physics and you want to use math to build things that actually create impact, you get to do that in this program,’” she remembers. “I just fell for that pitch and overnight changed my mind.”
It was at university that Raji took her first coding class and quickly got sucked into the world of hackathons. She loved how quickly she could turn her ideas into software that could help solve problems or change systems. By her third year, she was itching to join a software startup and experience this in the real world. And so she found herself, a few months into her internship at Clarifai, searching for a way to fix the problem she had discovered. Having tried and failed to get support internally, she reached out to the only other researcher she knew of who was working on fighting bias in computer vision.
In 2016, MIT researcher Joy Buolamwini (one of MIT Technology Review’s 35 Innovators Under 35 in 2018) gave a TEDx talk about how commercial face recognition systems failed to detect her face unless she donned a white mask. To Raji, Buolamwini was the perfect role model: a black female researcher like herself who had successfully articulated the same problem she had identified. She pulled together all her code and the results of her analyses and sent Buolamwini an unsolicited email. The two quickly struck up a collaboration.
At the time, Buolamwini was already working on a project for her master’s thesis, called Gender Shades. The idea was simple yet radical: to create a data set that could be used to evaluate commercial face recognition systems for gender and racial bias. It wasn’t that companies selling these systems didn’t have internal auditing processes, but the testing data they used was as demographically imbalanced as the training data the systems learned from. As a result, the systems could perform with over 95% accuracy during the audit but have only 60% accuracy for minority groups once deployed in the real world. By contrast, Buolamwini’s data set would have images of faces with an even distribution of skin color and gender, making it a more comprehensive way to evaluate how well a system recognizes people from different demographic groups.
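The core idea behind an audit like Gender Shades—reporting accuracy per demographic subgroup rather than one overall number—can be illustrated with a short sketch. This is a toy Python example with made-up data and subgroup labels, not the Gender Shades benchmark or its actual methodology:

```python
from collections import defaultdict

def disaggregated_accuracy(predictions, labels, groups):
    """Compute accuracy overall and broken out by demographic subgroup.

    predictions/labels: predicted and true class labels
    groups: a subgroup tag for each sample (hypothetical tags here)
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, true, grp in zip(predictions, labels, groups):
        total["overall"] += 1
        total[grp] += 1
        if pred == true:
            correct["overall"] += 1
            correct[grp] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data: a classifier that is right far more often for one subgroup.
preds  = ["M", "M", "F", "M", "M", "M", "F", "F"]
truth  = ["M", "M", "F", "M", "F", "M", "M", "F"]
groups = ["lighter", "lighter", "lighter", "lighter",
          "darker", "darker", "darker", "darker"]
print(disaggregated_accuracy(preds, truth, groups))
# A respectable overall number can hide a much lower subgroup number.
```

The single "overall" figure here looks reasonable even though one subgroup fares far worse—exactly the gap a demographically balanced test set is designed to expose.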
Raji joined in the technical work, helping to prepare the data for Buolamwini’s audits. The results were shocking: among the companies tested—Microsoft, IBM, and Megvii (the company best known for making the software Face++)—the worst identified the gender of dark-skinned women 34.4% less accurately than that of light-skinned men. The other two didn’t do much better. The findings made headlines in the New York Times and forced the companies to do something about the bias in their systems.
Gender Shades showed Raji how auditing could be a powerful tool for getting companies to change. So in the summer of 2018, she left Clarifai to pursue a new project with Buolamwini at the MIT Media Lab, which would make its own headlines in January 2019. This time Raji led the research. Through interviews at the three companies they’d audited, she saw how Gender Shades had led them to change the ways they trained their systems in order to account for a greater diversity of faces. She also reran the audits and tested two more companies: Amazon and Kairos. She found that whereas the latter two had egregious variations in accuracy between demographic groups, the original three had dramatically improved.
The findings made a foundational contribution to AI research. Later that year, the US National Institute of Standards and Technology also updated its annual audit of face recognition algorithms to include a test for racial bias.
Raji has since worked on several other projects that have helped set standards for algorithmic accountability. After her time at the Media Lab, she joined Google as a research mentee to help the company make its AI development process more transparent. Whereas traditional software engineers have well-established practices for documenting the decisions they make while building a product, machine-learning engineers at the time did not. This made it easier for them to introduce errors or bias along the way, and harder to check such mistakes retroactively.
Along with a team led by senior research scientist Margaret Mitchell, Raji developed a documentation framework for machine-learning teams to use, drawing upon her experience at Clarifai to make sure it would be easy to adhere to. Google rolled out the framework in 2019 and built it into Google Cloud for its clients to use. A number of other companies, including OpenAI and natural-language processing firm Hugging Face, have since adopted similar practices.
Raji also co-led her own project at Google to introduce internal auditing practices as a complement to the external auditing work she did at the Media Lab. The idea: to create checks at each stage of an AI product’s development so problems can be caught and dealt with before it is put out into the world. The framework also included advice on how to get the support of senior management, so a product would indeed be held back from launching if it didn’t pass the audits.
With all her projects, Raji is driven by the desire to make AI ethics easier to practice—“to take the kind of high-level ethical ideals that we like to talk about as a community and try to translate that into concrete actions, resources, and frameworks,” she says.
It hasn’t always been easy. At Google, she saw how much time and effort it took to change the way things were done. She worries that the financial cost of eliminating a problem like AI bias deters companies from doing it. It’s one reason she has moved back out of industry to continue her work at the nonprofit research institute AI Now. External auditing, she believes, can still hold companies accountable in ways that internal auditing can’t.
But Raji remains hopeful. She sees that AI researchers are more eager than ever before to be more ethical and more responsible in their work. “This is such impactful technology,” she says. “I just really want us to be more thoughtful as a field as to how we build these things, because it does matter and it does affect people.”
Photo by David Vintiner
Update Sept 23, 2020: Some details of Raji's involvement with Gender Shades have been clarified.
Leilani Battle
Her program sifts through data faster so scientists can focus more on science.
When Leilani Battle was working on her PhD, she helped develop ForeCache, a tool designed to help researchers browse large arrays of data—for instance, scanning high-resolution satellite images to look for areas covered with snow. The goal is to reduce latency, so that a user can pan and zoom across the data set without perceptible delay. A common way to do this is to predict which parts of the data a user is likely to need and then “prefetch” them. But how to predict what to prefetch? That depends on understanding the user’s behavior.
Battle and her colleagues developed a more efficient prediction system. It attempts to discern first which “analysis phase” a user is in, and then what tiles of data might be wanted next. They dubbed the three phases “foraging,” “sensemaking,” and “navigation.” They suppose that users in the “foraging” phase are browsing at a coarse level, in order to come up with new ideas. “Sensemaking” is a closer examination meant to test those ideas, and “navigation” is a transition between the two.
This system allowed them, they said, to predict which tiles users wanted about 25% better than existing prefetching systems they benchmarked against, almost halving the latency.
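The two-step structure—classify the user’s analysis phase, then choose tiles to prefetch based on that phase—can be sketched in a few lines. The classification rules and tile-selection policies below are invented for illustration; they are not ForeCache’s actual models:

```python
def classify_phase(recent_actions):
    """Rough heuristic for the user's analysis phase (hypothetical rules):
    mostly pans and zoom-outs suggest coarse "foraging"; repeated
    zoom-ins suggest "sensemaking"; anything else is "navigation"."""
    zoom_in = recent_actions.count("zoom_in")
    zoom_out = recent_actions.count("zoom_out")
    if zoom_out + recent_actions.count("pan") > len(recent_actions) / 2:
        return "foraging"
    if zoom_in > len(recent_actions) / 2:
        return "sensemaking"
    return "navigation"

def tiles_to_prefetch(phase, current_tile):
    """Pick tiles to fetch ahead of time, keyed on the inferred phase."""
    x, y = current_tile
    if phase == "foraging":      # roaming widely: fetch a ring of neighbors
        return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]
    if phase == "sensemaking":   # drilling down: fetch higher-resolution children
        return [(2 * x + dx, 2 * y + dy) for dx in (0, 1) for dy in (0, 1)]
    return [(x + 1, y)]          # navigation: fetch along the travel direction

phase = classify_phase(["pan", "pan", "zoom_out", "pan"])
print(phase, tiles_to_prefetch(phase, (4, 4)))
```

Because the fetches happen before the user asks, a correct guess means the tile is already in memory when the pan or zoom arrives—which is where the latency savings come from.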
Battle has devoted her career to designing systems and interfaces that help researchers sifting through data do their work better and faster. She hopes to make exploration tools more interactive and visual so they’ll be less daunting. Perhaps this will allow scientists to spot data quirks that would otherwise go unnoticed.
Morgan Beller
She was a key player behind the idea of a Facebook cryptocurrency.
In the summer of 2017, Morgan Beller approached her supervisor on Facebook’s corporate development team with a proposal: what if she spent the bulk of her time researching how the social-media giant could enter the digital currency market?
Beller was so new at Facebook that she was still completing her orientation, but she’d cut her teeth at a venture capital firm, where she’d worked on early cryptocurrency investments. She could see that a seismic shift in the global financial community was coming.
When she realized that no one at Facebook was working on blockchain, she volunteered and quickly became the company’s digital currency evangelist, shepherding the development of both its open-source blockchain infrastructure, Libra, and its currency application and digital wallet, Novi. Today she serves as head of strategy for the latter, where she works with a team of digital currency developers.
Facebook and its founder, Mark Zuckerberg, endured sharp criticism after announcing the plans for Libra. Beller wasn’t surprised. “We’re trying to change the system, and there are a lot of people who are incentivized for the global financial system not to change,” she says.
Libra hasn’t even rolled out yet, but it’s already prompted several countries, including China, to accelerate the development of their own national digital currencies. The Libra Association recently announced plans to scale back Libra and first issue a coin backed by a local currency, but even with these modifications, Libra has already been disruptive.
Eimear Dolan
Medical implants are often thwarted as the body grows tissue to defend itself. She may have found a drug-free fix for the problem.
When Eimear Dolan first worked to develop implantable medical devices to treat type 1 diabetes, she and her colleagues had to overcome a common roadblock. Their problem was one that’s long dogged makers of devices like pacemakers, insulin delivery systems, and breast implants: when the body senses an implanted foreign object, it constructs a protective wall of fibrous tissue. This reaction, known as the foreign body response, is one of the main reasons medical implants fail.
Today, as a biomedical engineer at the National University of Ireland Galway, Dolan thinks she’s found a way to counteract the foreign body response. Her weapon is a small robotic device known as a dynamic soft reservoir. Developed through a collaboration between Dolan’s lab at NUI Galway and researchers at MIT, the device is made of a soft material that can be made to oscillate, creating enough fluid flow to alter the environment around the implant and keep protective tissue from forming.
Past researchers have sought to use drugs or modify the surface chemistry of an implant. Dolan’s innovation, which she and her colleagues have successfully tested in rats, marks the first time anyone has tackled the problem mechanically. “The beauty about it is it’s a drug-free approach,” Dolan says.
Her team is redesigning the dynamic soft reservoir as part of an effort to construct a “bioartificial pancreas,” an implantable reservoir of cells that produce insulin for people with type 1 diabetes. Early attempts at such devices have been particularly liable to be rejected by the body and fail. Dolan believes her team can change that—and ultimately improve the success of other implantable devices.
Photo by Lillie Paquette
Rose Faghih
Her sensor-laden wristwatch would monitor your brain states.
If Rose Faghih’s project pans out, a seemingly simple smart watch could determine what’s happening deep inside your brain.
Faghih has developed an algorithm to analyze otherwise imperceptible changes in sweat activity—a key indicator of stress and stimulation. Using two small electrodes attached to the back of a smart watch, she can monitor changes in skin conductance caused by sweat. Signal-processing algorithms then allow Faghih to correlate those changes with specific events, such as a PTSD-related flashback or even just wandering attention, in order to pinpoint the person’s brain state.
Typically, this kind of real-time data is available only by way of expensive scalp-based electrode systems like EEG or functional MRI. Faghih’s “Mindwatch” would in theory be cheap and portable enough to let people monitor their brain states anywhere.
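The first step in this kind of pipeline—picking sudden skin-conductance rises out of a slowly drifting signal—can be illustrated with a toy detector. The threshold, units, and data below are invented for illustration; this is not Faghih’s actual algorithm:

```python
def detect_scr_events(conductance, threshold=0.05):
    """Flag candidate skin-conductance responses: sample indices where
    conductance rises faster than `threshold` (in hypothetical
    microsiemens per sample). A toy rise detector for illustration."""
    events = []
    for i in range(1, len(conductance)):
        if conductance[i] - conductance[i - 1] > threshold:
            events.append(i)
    return events

# Simulated readings: a flat baseline, a sharp rise, then a slow decay.
signal = [2.00, 2.01, 2.02, 2.30, 2.45, 2.46, 2.44, 2.43]
print(detect_scr_events(signal))  # indices of the rapid-rise samples
```

The detected rise times would then be aligned with external events (a stimulus, a task change) to infer what provoked the response.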
Faghih hopes it will help people manage their own changing moods and mental states: a wearable with her technology could suggest that an agitated driver try some deep breathing or prompt a lonely shut-in to turn on mood-enhancing music. For people with mental illness or chronic conditions like diabetes, it could potentially even trigger an automated deep-brain stimulation device or an insulin pump.
Photo by Jeff Lautenberger, Cullen College of Engineering, University of Houston
Bo Li
By devising new ways to fool AI, she is making it safer.
A few years ago, Bo Li and her colleagues placed small black-and-white stickers on a stop sign in a graffiti-like pattern that looked random to human eyes and did not obscure the sign’s clear lettering. Yet the arrangement was deliberately designed so that if an autonomous vehicle approached, the neural networks powering its vision system would misread the stop sign as one posting a speed limit of 45 mph.
Such “adversarial attacks”—manipulation of input data that looks innocuous to a person but fools neural networks—had been tried before, but earlier examples had been mostly digital. For instance, a few pixels might be altered in an image, a change invisible to the naked eye. Li was one of the first to show that such attacks were possible in the physical world. They can be harder for an AI to detect because the methods developed to spot manipulated digital images don’t work on physical objects.
Li also devised subtle changes in the features of physical objects, like shape and texture, that again are imperceptible to humans but can make the objects invisible to image recognition algorithms. Her goal is to use this knowledge about potential attacks to make AI more robust. She pits AI systems against each other, using one neural network to identify and exploit vulnerabilities in another. This process can expose flaws in the training or structure of the target network. Li then develops strategies to patch these flaws and defend against future attacks.
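The digital flavor of adversarial attack mentioned above can be shown in miniature. The sketch below uses the classic fast gradient sign method (FGSM, due to Goodfellow et al.) against a toy linear classifier standing in for a neural network—it is not Li’s physical-world sticker attack, and the weights and inputs are made up:

```python
import numpy as np

def fgsm_perturb(x, w, y_true, eps=0.1):
    """Fast-gradient-sign perturbation against a linear classifier
    with score = w·x (a toy stand-in for a neural network).

    For logistic loss with labels in {-1, +1}, the gradient of the loss
    with respect to the input has the sign of -y_true * w; stepping in
    that direction pushes the score away from the correct class while
    changing each input coordinate by at most eps.
    """
    return x + eps * np.sign(-y_true * w)

# A point the classifier gets right ...
x = np.array([0.3, 0.2])
w = np.array([1.0, 2.0])
print(w @ x)        # positive score: classified as +1, matching y_true

# ... is flipped by a small, bounded perturbation.
x_adv = fgsm_perturb(x, w, y_true=1, eps=0.5)
print(w @ x_adv)    # negative score: now misclassified as -1
```

Against a real image classifier the same principle applies pixel by pixel, which is why the change can stay invisible to the naked eye while still flipping the prediction.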
Adversarial attacks can fool other types of neural networks too, not just image recognition algorithms. Imperceptible tweaks to audio can make a voice assistant misinterpret what it hears, for example. Some of Li’s techniques are already being used in commercial applications. IBM uses them to protect its Watson AI, and Amazon to protect Alexa. And a handful of autonomous-vehicle companies apply them to improve the robustness of their machine-learning models.
Zlatko Minev
His discovery could reduce errors in quantum computing.
Zlatko Minev overturned a mainstay of quantum physics that had troubled Niels Bohr and Albert Einstein alike. For most of the 20th century, it was assumed that atoms change from one energy level to another in abrupt, unpredictable, discrete quantum jumps. Minev proved otherwise.
“Quantum physics is not quite as unpredictable and discrete as we previously thought,” he says.
His experiment showed that when an atom is bombarded with energy in the form of light, it moves from one energy level to the next in a continuous, smooth way, not an instantaneous jump. What’s more, Minev was able to detect the change in an atom’s energy level quickly enough to control it so he could stop the jump midflight and reverse it before it was completed.
“In the short term,” he says, “with the monitoring that I developed for this project, we can actually have a window of predictability.”
Minev’s work could have major implications for quantum computing. Such systems are riddled with errors that occur when subatomic particles jump between energy levels, like the atoms in Minev’s experiment. The ability to detect and reverse such jumps before they finish should dramatically boost the power of quantum computers, allowing them to better crack encryption, model chemical reactions, and forecast weather.
Photo by Robert Jones
Miguel Modestino
He is reducing the chemical industry’s carbon footprint by using AI to optimize reactions with electricity instead of heat.
Miguel Modestino has cleared a major hurdle in electrifying the chemical industry, which produces compounds used in everything from plastics to fertilizer. His AI-based system teaches itself how to optimize the reactions for making various chemicals by zapping them with pulses of electricity instead of the conventional approach of heating them, which typically involves burning fossil fuels. And since electricity can come from renewable sources like wind or solar, electrifying chemical plants could greatly reduce emissions.
In an early lab project, Modestino’s team achieved more than a 30% boost in the production rate of adiponitrile (a chemical used chiefly in making nylon)—a greater improvement than any other method has shown in the last 50 years.
The key was using complex pulses of electrical current at constantly varying rates to optimize yields. Figuring out what patterns of pulses to use required machine learning. Modestino ran a few experiments making adiponitrile under different electrical conditions and then let his AI analyze the data to figure out how to make the compound with less energy, better yields, and less waste.
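The optimization loop—run experiments at different pulse settings, then search for the settings that maximize yield—can be sketched with a toy search. The yield function below is an invented stand-in for a lab measurement (not real electrochemistry), and this simple random search is only an illustration of the idea, not Modestino’s actual machine-learning system:

```python
import random

def measured_yield(pulse_on_ms, pulse_off_ms):
    """Stand-in for a lab experiment: a hypothetical yield surface
    that peaks at one particular on/off pulse pattern."""
    return 100 - (pulse_on_ms - 3.0) ** 2 - 2 * (pulse_off_ms - 1.5) ** 2

def random_search(n_trials=200, seed=0):
    """Try random pulse-timing settings and keep the best yield found."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        on = rng.uniform(0.0, 10.0)    # pulse-on duration (ms)
        off = rng.uniform(0.0, 5.0)    # pulse-off duration (ms)
        y = measured_yield(on, off)
        if best is None or y > best[0]:
            best = (y, on, off)
    return best

print(random_search())  # (best yield, pulse_on_ms, pulse_off_ms)
```

In practice each "trial" is an expensive physical experiment, which is why a learned model of the yield surface—rather than brute-force sampling—pays off.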
Modestino and two former students recently founded Sunthetics to apply the AI system to other chemical processes, like those involved in generating hydrogen fuel and making polymers. The company is also working to scale up the adiponitrile process for a full pilot reactor and to extend the approach to other processes.
Photo by Eduardo Whaite
Adriana Schulz
Her tools let anyone design products without having to understand materials science or engineering.
Adriana Schulz’s computer-based design tools let average users and engineers alike use graphical drag-and-drop interfaces to create functional, complex objects as diverse as robots and birdhouses without having to understand their underlying mechanics, geometries, or materials.
“What excites me is that we’re about to enter the next phase in manufacturing—a new manufacturing revolution,” says Schulz.
One of her creations is Interactive Robogami, a tool she built to let anyone design rudimentary robots. A user designs the shape and trajectory of a ground-based robot on the screen. Schulz’s system automatically translates the raw design into a schematic that can be built from standard or 3D-printed parts.
Another of the tools she and her collaborators built lets users design drones to meet their chosen requirements for payload, battery life, and cost. The algorithms in her system incorporate materials science and control systems, and they automatically output a fabrication plan and control software.
Schulz is now helping start the University of Washington Center for Digital Fabrication, which she will co-direct. She will work with local technology and manufacturing companies to move her tools out of the lab.
Photo by David Curtis
Dongjin “DJ” Seo
He is designing computer chips to seamlessly connect human brains and machines.
Six years ago Dongjin “DJ” Seo said he'd always wanted to be “a scientist with strong intuitions about how to improve the world through engineering.” At the time, he was working in a crowded corner of a lab at the University of California, Berkeley, on a concept called neural dust—ultra-small electronic sensors that could be sprinkled in an animal’s brain and controlled with acoustic waves.
The goal of that project was new types of brain-machine interfaces that could read the firing of neurons inside the cortex and even send information back in. That kind of technology might open up ways to read and write information from and to the brain.
Then, in 2016, Elon Musk tapped him to join a new company, Neuralink, which was ready to spend millions on engineering a seamless interface between human brains and computers. “The vision that Elon outlined—well, it was hard to say no,” Seo says. “It was everything I had imagined.”
Instead of neural dust, the startup is betting on a robot that plunges ultra-thin electrodes into animal brains. Seo is head of a team of about a dozen people designing low-power wireless computers that fit into a small burr hole that’s cut into the skull. He says his primary contribution is designing the necessary circuit boards and chips. “We need these chips to collect a signal that may look like noise, process it, and do all that without cooking your brain.”
After tests on animals, the company hopes to try the brain connection on someone with paralysis or a serious illness. Eventually, “augmentation” of healthy people “is an obvious result,” Seo says: “It’s being able to enhance our ability to interact with the world.”