Finding ways to make promising perovskite-based solar cells practical.
Crystalline-silicon panels—which make up about 90 percent of deployed photovoltaics—are expensive, and they’re already bumping up against efficiency limits in converting sunlight to electricity. So a few years ago, Michael Saliba, a researcher at the Swiss Federal Institute of Technology in Lausanne, set out to investigate a new type of solar cell based on a family of materials known as perovskites. The first so-called perovskite solar cells, built in 2009, promised a cheaper, easier-to-process technology. But those early perovskite-based cells converted only about 4 percent of sunlight into electricity.
Saliba improved performance by adding positively charged ions to known perovskite formulations. He has since pushed solar cells built from these materials past 21 percent efficiency and pointed the way to versions with far higher potential.
Exploring new materials for better lithium-ion batteries.
As employee number seven at Tesla, Gene Berdichevsky was instrumental in solving one of its earliest challenges: the thousands of lithium-ion batteries the company planned to pack into its electric sports car caught fire far more often than manufacturers claimed. His solution: a combination of heat transfer materials, cooling channels, and battery arrangements that ensured any fire would be self-contained.
Now Berdichevsky has cofounded Sila Nanotechnologies, which aims to make better lithium-ion batteries. The company has developed silicon-based nanoparticles that can form a high-capacity anode. Silicon has almost 10 times the theoretical capacity of the material most often used in lithium-ion batteries, but it tends to swell during charging, causing damage. Sila’s particles are robust yet porous enough to accommodate that swelling, promising longer-lasting batteries.
The world’s narrowest fluid channel could transform filtration of water and gases.
Beneath a microscope in Radha Boya’s lab, a thin sheet of carbon has an almost imperceptible channel cutting through its center, the depth of a single molecule of water. “I wanted to create the most ultimately small fluidic channels possible,” explains Boya. Her solution: identify the best building blocks to reliably and repeatedly build a structure containing unimaginably narrow capillaries. She settled on graphene, a form of carbon that is a single atom thick.
She positions two sheets of graphene (a single sheet is just 0.3 nanometers thick) next to each other with a small lateral gap between them. That is sandwiched on both sides with slabs of graphite, a material made of many layers of graphene stacked on top of each other. The result is a channel 0.3 nanometers deep and 100 nanometers wide, cutting through a block of graphite. By adding extra layers of graphene, she can tune the size of the channel in 0.3-nanometer increments.
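The arithmetic behind that tunability is simple: the spacer's layer count sets the channel depth. A minimal sketch (the constant comes from the article's figure of 0.3 nanometers per graphene sheet; the function name is my own):

```python
GRAPHENE_LAYER_NM = 0.3  # thickness of a single graphene sheet

def channel_depth_nm(spacer_layers: int) -> float:
    """Depth of the capillary left by a spacer made of
    `spacer_layers` stacked graphene sheets."""
    if spacer_layers < 1:
        raise ValueError("need at least one graphene layer")
    return spacer_layers * GRAPHENE_LAYER_NM
```

A one-layer spacer yields the 0.3-nanometer channel described above; two layers yield the 0.6-nanometer channel that water flows through freely.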
But what fits through something so narrow? A water molecule, which itself measures around 0.3 nanometers across, can't pass through the single-layer channel unless pressure is applied. But with two layers of graphene and a 0.6-nanometer gap, water passes through at one meter per second. "The surface of graphene is slightly hydrophobic, so the water molecules stick to themselves rather than the walls," says Boya. That helps the liquid slide through easily.
Because the gaps are so consistently sized, they could be used to build precisely tuned filtration systems. Boya has performed experiments showing that her channels could filter salt ions from water, or separate large volatile organic compounds from smaller gas molecules. That uniformity lets her channels discriminate by size more sharply than conventional membranes, whose pores vary in width.
Boya currently works at the University of Manchester's National Graphene Institute in the U.K., a monolithic black slab of a building that opened in 2015 to industrialize basic research on the material. It brands itself as the "home of graphene," which seems appropriate given that Boya's office is on the same corridor as those of Andre Geim and Kostya Novoselov, who won a Nobel Prize for isolating the material.
Inventing a way for neural networks to get better by competing with each other.
A few years ago, after some heated debate in a Montreal pub, Ian Goodfellow dreamed up one of the most intriguing ideas in artificial intelligence. By applying game theory, he devised a way for a machine-learning system to effectively teach itself about how the world works. This ability could help make computers smarter by sidestepping the need to feed them painstakingly labeled training data.
Goodfellow was studying how neural networks can learn without human supervision. Usually a network needs labeled examples to learn effectively. While it’s also possible to learn from unlabeled data, this had typically not worked very well. Goodfellow, now a staff research scientist with the Google Brain team, wondered if two neural networks could work in tandem. One network could learn about a data set and generate examples; the second could try to tell whether they were real or fake, allowing the first to tweak its parameters in an effort to improve.
After returning from the pub, Goodfellow coded the first example of what he named a “generative adversarial network,” or GAN. The dueling-neural-network approach has vastly improved learning from unlabeled data. GANs can already perform some dazzling tricks. By internalizing the characteristics of a collection of photos, for example, a GAN can improve the resolution of a pixelated image. It can also dream up realistic fake photos, or apply a particular artistic style to an image. “You can think of generative models as giving artificial intelligence a form of imagination,” Goodfellow says.
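The idea can be boiled down to a toy. The sketch below is an illustration of the adversarial principle rather than Goodfellow's code: a one-dimensional "generator" (an affine map of noise) tries to mimic samples from a Gaussian, while a logistic-regression "discriminator" tries to tell real samples from fakes, and each update nudges the other:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 1). The generator maps noise z ~ N(0, 1)
# through an affine transform; the discriminator is logistic regression.
g_w, g_b = rng.normal(), 0.0   # generator parameters
d_w, d_b = rng.normal(), 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = rng.normal(size=32)
    fake = g_w * z + g_b
    real = rng.normal(4.0, 1.0, size=32)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # Generator update: adjust its parameters so D(fake) moves toward 1.
    p_fake = sigmoid(d_w * fake + d_b)
    grad_fake = (1 - p_fake) * d_w   # non-saturating generator gradient
    g_w += lr * np.mean(grad_fake * z)
    g_b += lr * np.mean(grad_fake)

gen_mean = float(np.mean(g_w * rng.normal(size=1000) + g_b))
```

Real GANs replace both scalar models with deep networks and train on images rather than numbers, but the alternating two-player update is the same.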
A design for a heart valve that’s biodegradable—potentially eliminating the need for repeat surgeries.
Problem: Over 85,000 Americans receive artificial heart valves, but such valves don’t last forever, and replacing them involves a costly and invasive surgery. In children, they must be replaced repeatedly.
Solution: Svenja Hinderer, who leads a research group at the Fraunhofer Institute in Stuttgart, Germany, has created a biodegradable heart valve that studies strongly suggest will be replaced over time by a patient’s own cells.
To accomplish this, Hinderer created a scaffolding of biodegradable fibers that mimic the elastic properties of healthy tissues. To it she attaches proteins with the power to attract the stem cells that naturally circulate in the blood. The idea is that once implanted, her heart valve would be colonized and then replaced by a patient’s own cells within two to three years.
An open-source autopilot for drones.
Lorenz Meier was curious about technologies that could allow robots to move around on their own, but in 2008, when he started looking, he was unimpressed—most systems had not yet even adopted the affordable motion sensors found in smartphones.
So Meier, now a postdoc at the Swiss Federal Institute of Technology in Zurich, built his own system instead: PX4, an open-source autopilot for autonomous drone control. Importantly, Meier’s system aims to use cheap cameras and computer logic to let drones fly themselves around obstacles, determine their optimal paths, and control their overall flight with little or no user input. It has already been adopted by companies including Intel, Qualcomm, Sony, and GoPro.
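Determining an optimal path around obstacles is, at its core, a graph-search problem. The grid-based A* search below is a generic textbook illustration of that idea, not PX4's actual planner, which must also handle three dimensions, vehicle dynamics, and noisy camera-derived maps:

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid
    (0 = free cell, 1 = obstacle); returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]          # (priority, cell) min-heap
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:              # reconstruct the path backwards
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[cur] + 1
                if nxt not in cost or new_cost < cost[nxt]:
                    cost[nxt] = new_cost
                    # Manhattan distance is an admissible heuristic here.
                    h = abs(goal[0] - nr) + abs(goal[1] - nc)
                    heapq.heappush(frontier, (new_cost + h, nxt))
                    came_from[nxt] = cur
    return None   # goal unreachable
```

Onboard, the occupancy grid would be built from the drone's camera depth estimates; the planner then runs repeatedly as new obstacles appear.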
Preparing for the security and privacy threats that augmented reality will bring.
What would hacks of augmented reality look like? Imagine a see-through AR display on your car helping you navigate—now imagine a hacker adding images of virtual dogs or pedestrians in the street.
Franzi Roesner, 31, recognized this challenge early on and is leading the thinking on what security and privacy protections AR devices will need, both for the devices themselves and for their users. Her research group at the University of Washington created a prototype AR platform that can, for example, block a windshield app from hiding any signs or people in the real world while a car is in motion.
“I’ve been asking the question, ‘What could a buggy or malicious application do?’” she says.
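The windshield example amounts to an output policy: before anything is drawn, the platform checks app content against real-world objects the system has detected. The sketch below illustrates that idea in the simplest possible form; the names and the rectangle-overlap test are my own, not the API of Roesner's prototype:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned screen rectangle (origin top-left)."""
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Box") -> bool:
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x
                    or self.y + self.h <= other.y or other.y + other.h <= self.y)

def filter_ar_content(app_boxes, safety_boxes, vehicle_moving):
    """Drop any app-drawn overlay that would occlude a detected
    real-world safety object (a sign, a pedestrian) while the car moves."""
    if not vehicle_moving:
        return list(app_boxes)
    return [b for b in app_boxes
            if not any(b.overlaps(s) for s in safety_boxes)]
```

The key design point is that the policy runs in the trusted platform, below the apps, so even a buggy or malicious app cannot paint over a stop sign.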
Employing crowdsourcing to vastly improve computer-vision systems.
“It’s hard to navigate a human environment without seeing,” says Olga Russakovsky, an assistant professor at Princeton who is working to create artificial-intelligence systems that have a better understanding of what they’re looking at.
A few years ago, machines were capable of spotting only about 20 categories of objects, a list that included people, airplanes, and chairs. Russakovsky devised a method, based partly on crowdsourcing the identification of objects in photos, that has led to AI systems capable of detecting 200 categories, including accordions and waffle irons.
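Crowdsourced labels are noisy, so any such pipeline needs a consensus step. The sketch below shows the simplest scheme, majority voting over several workers' answers per image; Russakovsky's actual method was considerably more sophisticated, and these names are illustrative only:

```python
from collections import Counter

def aggregate_labels(annotations):
    """Majority-vote consensus over crowdsourced labels.

    annotations: dict mapping image_id -> list of labels, one per worker.
    Returns image_id -> consensus label, or None when no strict majority.
    """
    consensus = {}
    for image_id, labels in annotations.items():
        (label, votes), = Counter(labels).most_common(1)
        consensus[image_id] = label if votes > len(labels) / 2 else None
    return consensus
```

Images with no majority would be sent back to the crowd for more votes, trading a little extra labeling cost for much cleaner training data.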
Russakovsky ultimately expects AI to power robots or smart cameras that allow older people to remain at home, or autonomous vehicles that can confidently detect a person or a trash can in the road. “We’re not there yet,” she says, “and one of the big reasons is because the vision technology is just not there yet.”
A woman in a field dominated by men, Russakovsky started AI4ALL, a group that pushes for greater diversity among those working in artificial intelligence. While she wants greater ethnic and gender diversity, she also wants diversity of thought. “We are bringing the same kind of people over and over into the field,” she says. “And I think that’s actually going to harm us very seriously down the line.”
If robots are to become an integral, integrated part of our lives, she reasons, why shouldn't there be people of varying professional backgrounds creating them, and helping them become attuned to what all types of people need?
Russakovsky took a rather conventional path from studying mathematics as an undergrad at Stanford, where she also earned a PhD in computer science, to a postdoc at Carnegie Mellon. But, she suggests, “We also need many others: biologists who are maybe not great at coding but can bring that expertise. We need psychologists—the diversity of thought really injects creativity into the field and allows us to think very broadly about what we should be doing and what type of problems we should be tackling, rather than just coming at it from one particular angle.”
Using an understanding of the brain to create smarter machines.
Greg Wayne, a researcher at DeepMind, designs software that gets better the same way a person might: by learning from its own mistakes. A 2016 Nature paper that Wayne coauthored demonstrated that such software can solve problems involving graphs, logic puzzles, and tree structures that stump the traditional neural networks used in artificial intelligence.
Wayne’s computing insights play off his interest in connections between neurons in the human brain—why certain structures elicit specific sensations, emotions, or decisions. Now he often repurposes the concepts behind those brain structures as he designs machines.
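One brain-inspired ingredient of the system described in that Nature paper is an external memory the network reads by content: it attends to stored rows that resemble a query, much as recall works by association. The toy below illustrates only that one ingredient, content-based addressing with softmax attention, not the full architecture:

```python
import numpy as np

def cosine_read(memory, key, beta=5.0):
    """Content-based read from an external memory matrix.

    memory: (rows, width) array of stored vectors.
    key:    (width,) query vector.
    beta:   sharpness of the attention; higher = closer to argmax.
    Returns the attention-weighted average of the memory rows.
    """
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    w /= w.sum()              # softmax attention weights over rows
    return w @ memory
```

Because the read is a smooth weighted average rather than a hard lookup, the whole operation is differentiable, so the network can learn what to store and retrieve by gradient descent.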