The next time you open up Google’s Chrome Web browser, take a look at the little green icon that appears in the left corner of the URL bar whenever you’re on a secure website. It’s a lock, and if it’s green it signals that the website you’re on is encrypting data as it flows between you and the site. But not everyone knows what it is or what it represents, and that’s where Adrienne Felt comes in.
As a software engineer for Chrome, Felt has taken on the task of making the Internet more secure and helping users of the world’s most popular browser make smart, informed choices about their safety and privacy while online. This includes heading a years-long push to convince the world’s websites, which traditionally used unencrypted HTTP to send data from one point to another, to switch to the secure version, HTTPS.
Why is it tricky to come up with online security measures that work for all kinds of people?
Part of it is that security measures generally stop people from doing things. The way we keep you safe is by telling you no. But this has very real costs. You can scare people … you can keep people from using the Internet at all. On the other hand, if you don’t do anything you put people and their data at very real risk. So you have to figure out how to strike just the right balance. And with billions of users it’s very difficult to find a balance that makes everyone happy.
One way you are trying to make people safer while they’re online is by encouraging websites to use HTTPS. What makes this a complicated process?
Think about a site like the Washington Post. When you go to the Washington Post’s home page, there’s going to be 100 different [assets from various websites] that are loaded. All of those have to support HTTPS before the Washington Post itself can do it. Sites need to make sure there’s no revenue hit, they need to make sure there’s no [search] ranking hit, they need to make sure there’s no performance hit. And then they can switch. All these things can be done. Sites are transitioning very successfully at scale now. But it is work.
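The dependency problem Felt describes can be made concrete: a page served over HTTPS is only fully secure if every asset it loads is also served over HTTPS, so site operators have to audit their pages for “mixed content.” As a minimal sketch (the class and function names here are illustrative, not anything from Chrome, which does this check natively), here is how one might scan a page’s HTML for assets that would load over plain HTTP, using only the Python standard library:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class MixedContentScanner(HTMLParser):
    """Collects asset URLs on a page that would load over plain HTTP."""

    # Tag/attribute pairs that pull in sub-resources.
    ASSET_ATTRS = {("img", "src"), ("script", "src"),
                   ("link", "href"), ("iframe", "src")}

    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if (tag, name) in self.ASSET_ATTRS and value:
                # An explicit http:// scheme means unencrypted transport.
                if urlparse(value).scheme == "http":
                    self.insecure.append(value)


def find_mixed_content(html):
    """Return the list of insecurely loaded asset URLs in an HTML string."""
    scanner = MixedContentScanner()
    scanner.feed(html)
    return scanner.insecure
```

Run against a page that embeds, say, an ad image over `http://`, the scanner flags that URL while `https://` assets pass, which is exactly the audit a site has to clear for all of its hundred-odd embedded assets before flipping the switch.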
Now that many of the biggest websites have made the switch from HTTP to HTTPS, what are you focusing on?
The long tail is a big problem. There are lots and lots of sites that are out there. Some that are barely maintained, some that are run by your dentist, your hairdresser, a teacher at a local elementary school, and I don’t see them rushing to add support for HTTPS. The question is now, “Okay, we’ve hit all the really popular sites, we’re starting to get to the medium sites—what do we do for the rest of the Internet?” I don’t want to get in a state where oh, great, you’re secure if you go to a big company but not if you go to a small, independent site. Because I still want people to feel like they can go everywhere on the Web.
In his lab at the Broad Institute in Cambridge, Massachusetts, Viktor Adalsteinsson has put an automated system in place that scans blood samples for traces of tumor DNA—a so-called liquid biopsy. Collecting genetic information on advanced cancers might lead to clues about what drives the disease in later stages and what drugs to give patients. Adalsteinsson, whose mother succumbed to breast cancer while he was earning his PhD, is now looking to improve treatment as part of several projects, including one that sends blood collection tubes to women fighting breast cancer across America. “The doctors and patients cross their fingers and there’s a lot of watching and waiting,” says Adalsteinsson. “Now we can closely monitor patients’ responses to therapy and see what’s causing treatments to fail.”
Human-like artificial intelligence is still a long way off, but Greg Brockman believes the time to start thinking about its safety is now. That’s why, after helping to build the online-payments firm Stripe, he cofounded OpenAI along with Elon Musk and others. The nonprofit research group focuses on making sure AI continues to benefit humanity even as it increases in sophistication. Brockman plays many roles at the firm, from recruiting to helping researchers test new learning algorithms. In the long term, he says, a general AI system will need something akin to a sense of shame to prevent it from misbehaving. “It’s going to be the most important technology that humans ever create,” he says, “so getting that right seems pretty important.”
Silicon Valley loves data. But until recently, there was one subject where tech companies showed little interest in the numbers: the diversity of their workforces. It’s not that the statistics were downplayed—the numbers didn’t even exist.
Today most big tech companies have issued public reports on diversity, and there’s an independent, crowdsourced data repository on GitHub that collects information on tech workforces. And this has happened in no small part because Tracy Chou, a Pinterest software engineer at the time, wrote a post on Medium in the fall of 2013 called simply “Where are the numbers?”
Chou wrote the post after returning from a conference where she heard Facebook COO Sheryl Sandberg say the number of women in tech was dropping. “I didn’t think she was wrong,” Chou says. “But I also thought: ‘How does she know? There are no numbers.’ I knew there was this problem.”
Chou’s Medium post quickly went viral. And soon the numbers began to flow—first via Twitter, and then via that GitHub repository, which Chou set up. Within a few weeks, Chou had data on more than 50 companies (the repository now has numbers for hundreds), and by the summer of 2014, a host of the Valley’s most powerful companies had released demographic reports on their workforces. The numbers were dismal—in general, somewhere between 10 and 20 percent of workers in technology positions were women, and one study found that 45 percent of Silicon Valley companies didn’t have a single female executive. But at least the data now existed.
As this was happening, Chou continued her coding work at Pinterest, but she also found herself in demand as a speaker and panelist. Last spring, she teamed up with a group of seven other women—including venture capitalist Ellen Pao and Slack engineer Erica Joy Baker—to form Project Include, an organization designed to help CEOs implement diversity and inclusion strategies at their companies.
Chou isn’t, and doesn’t want to be, a professional activist. “It’s fulfilling to work on this issue, and I can have an impact here,” she says. “But I see it as a complement to my main work, which is building things and making products.” Nonetheless, she’s become a voice of authority on tech’s diversity problem because she’s unusually good at articulating the connections between the personal experience of women in the Valley and the systemic sexism they face, while also identifying how a lack of diversity hurts companies themselves. For instance, there is clearly a pipeline problem when it comes to gender and technology—not enough young women take classes in science, technology, engineering, and math or graduate with STEM degrees. But it’s also true, as Chou argues, that the pipeline problem can’t explain the high rate of attrition for women in tech, or the lack of women in senior positions. In other words, the pipeline for women gets even narrower once you’re inside a company.
Sometimes that’s because of garden-variety, extraordinarily retrograde sexism, exemplified by the recent problems at Uber or the men who regularly told Chou, “You’re too pretty to be a coder.” It’s also because at many companies there’s an implicit (and sometimes explicit) assumption that women are less naturally adept at coding, and less willing to work hard.
Chou, for example, went to Stanford for an undergrad degree in electrical engineering and got a master’s there in computer science, and had internships at Facebook and Google. Yet at her first job she regularly dealt with casually dismissive sexism, making her question whether she belonged in the industry. “I loved coding,” she says. “But I just felt something was off. I felt out of place, and I had serious questions about whether I was going to stay in tech. And I really thought the problem was me.”
A large body of research shows that making organizations and teams more diverse also improves their performance. Diversity makes teams less likely to succumb to groupthink and helps companies reach untapped markets. “Products tend to be built to solve the problems of the people building them,” Chou says. “And that’s not a bad thing, necessarily. But it means that in the Valley lots of energy and attention goes into solving the problems of young urban men with lots of disposable income, and that much less attention goes to solving the problems of women, older people, children, and so on.”
Despite the evidence, plenty of companies still need convincing. “There’s lots of diversity theater and lip service paid to the concept,” Chou says. “And maybe we’ve helped weed out some of the most egregious actors. But there’s a long way to go.”
Anca Dragan, an assistant professor of electrical engineering and computer science at UC Berkeley, is working to distill complicated or vague human behavior into simple mathematical models that robots can understand. She says many conflicts that arise when humans and robots try to work together come from a lack of transparency about each other’s intentions. Teaching a robot to understand how it might influence a person’s behavior could solve that. One pressing application for this work is in helping self-driving cars and human-driven cars to anticipate each other’s next moves.
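One common way to distill noisy human behavior into a simple mathematical model, and a standard choice in human-robot interaction research (though not necessarily the exact model Dragan’s group uses), is Boltzmann, or softmax, rationality: assume a person picks each action with probability proportional to the exponential of its reward. A minimal sketch, with illustrative names:

```python
import math


def action_probabilities(rewards, rationality=1.0):
    """Boltzmann-rational model of a human: the probability of each
    action is proportional to exp(rationality * reward).

    rationality = 0 models a fully unpredictable person (uniform
    probabilities); larger values model someone who reliably picks
    the higher-reward action.
    """
    weights = [math.exp(rationality * r) for r in rewards]
    total = sum(weights)
    return [w / total for w in weights]
```

With rewards for, say, “merge now” versus “yield,” a self-driving car could use these probabilities to anticipate which move a human driver is most likely to make, and to reason about how its own actions change those rewards and therefore the driver’s behavior.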
The business world is drowning in data, but Neha Narkhede is teaching companies to swim. As an engineer at LinkedIn, Narkhede helped invent an open-source software platform called Apache Kafka to quickly process the site’s torrent of incoming data from things like user clicks and profile updates. Sensing a big opportunity, she co-founded Confluent, a startup that builds Apache Kafka tools for companies, in 2014. She’s been the driving force behind the platform’s wide adoption—Goldman Sachs uses it to help deliver information to traders in real time, Netflix to collect data for its video recommendations, and Uber to analyze data for its surge-pricing system. Confluent’s products allow companies to use the platform to, for example, sync information across multiple data centers and monitor activity through a central console.
“We view our technology as a central nervous system for companies that aggregates data and makes sense of it within milliseconds, at scale,” she says. “We think virtually every company would benefit from that and we plan to bring it to them.”
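The core abstraction behind Kafka is an append-only log of events: producers write records to the end of the log, and consumers read from it at their own offsets, aggregating as they go. The toy sketch below (pure Python with hypothetical names; Kafka itself is a distributed, persistent system, and its real client APIs look nothing like this) illustrates that produce/consume-and-aggregate pattern:

```python
from collections import defaultdict


class EventLog:
    """Toy append-only log, standing in for a Kafka topic partition."""

    def __init__(self):
        self.events = []

    def produce(self, event):
        """Append an event (e.g. a user click) to the end of the log."""
        self.events.append(event)

    def consume_from(self, offset):
        """Return all events at or after `offset`, plus the next offset."""
        return self.events[offset:], len(self.events)


def count_by_type(log, offset=0):
    """A consumer that aggregates events it has not yet seen,
    returning the counts and the offset to resume from next time."""
    new_events, next_offset = log.consume_from(offset)
    counts = defaultdict(int)
    for event in new_events:
        counts[event["type"]] += 1
    return dict(counts), next_offset
```

Because every consumer tracks its own offset over the same shared log, many independent systems (a trading dashboard, a recommender, a pricing model) can each read the full event stream without interfering with one another, which is what makes the “central nervous system” framing apt.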
Amanda Randles, an assistant professor of biomedical engineering at Duke University, is building software that simulates blood flowing throughout the human body in a model based on medical images of a particular person. The code base is called “HARVEY,” after William Harvey, the 17th-century physician who first described the circulatory system. The software requires a supercomputer to crunch calculations on the fluid dynamics of millions of blood cells as they move through the blood vessels. Randles has other plans for her fluid-dynamic model of the circulatory system. Next up: scanning newborns with heart problems to guide surgeons and predicting how cancer cells move through the body.
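HARVEY solves the full three-dimensional fluid dynamics of blood on a supercomputer, far beyond a few lines of code, but the simplest closed-form special case of vessel flow, the Hagen-Poiseuille law for steady laminar flow in a rigid cylindrical tube, hints at why patient-specific geometry matters so much: flow rate scales with the fourth power of the vessel radius. A minimal sketch (the function name is illustrative):

```python
import math


def poiseuille_flow(delta_p, radius, length, viscosity):
    """Hagen-Poiseuille law: volumetric flow rate Q (m^3/s) of a
    viscous fluid through a rigid cylindrical tube,

        Q = pi * delta_p * r**4 / (8 * mu * L)

    where delta_p is the pressure drop (Pa), r the tube radius (m),
    mu the dynamic viscosity (Pa*s), and L the tube length (m).
    """
    return math.pi * delta_p * radius**4 / (8 * viscosity * length)
```

Because of that fourth-power dependence, halving a vessel’s radius cuts its flow rate sixteenfold, which is one reason small geometric differences visible in a patient’s medical images can matter so much clinically.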
Artificial intelligence has reached “a critical point,” says Gang Wang—it’s moved beyond the lab and is now ready for mass-market consumer products. Wang, who joined Alibaba’s AI lab in March, is at the forefront of the push to make AI practical for consumer products, and he’s doing it for one of the world’s most ambitious companies in the world’s biggest consumer market. He was one of the scientists behind the Tmall Genie, Alibaba’s first AI-based product, released in July. Analogous to Amazon’s Echo, the device can make purchases on Alibaba’s shopping sites and perform other tasks, such as playing music and checking calendars through voice commands.
“The design of neural networks needs to be intertwined with real-world applications,” says Wang. “Only in this way can we create a product that’s useful in a commercial environment.”