Showing computers how to learn might seem like a game, but it’s also serious business.
When he was 15 years old, Oriol Vinyals became obsessed with StarCraft, a video game in which three factions vie for control of the map—like chess if it were played not only with black and white pieces but also with red ones. Vinyals soon became the top-ranked player in Spain. “I almost knew the game would return later in my life,” he says. “I was fascinated by the artificial-intelligence problems it presents.”
It was more than a decade before Vinyals’s premonition came to pass. While he was studying at UC Berkeley, he helped to create an AI bot that was able to play StarCraft unassisted. The bot, forebodingly dubbed Overmind, represented a triumph in machine learning.
Later, while he was working on the Google AI team creating new techniques for language translation, inspiration struck. Vinyals decided to see whether a computer could accurately write a description of an image. It’s a form of translation, albeit from pixel to caption. “I remember it so well,” he says. “I changed a single line of code: instead of translating from French, I changed my code to input an image instead.” The next day, Vinyals showed his program a photograph of a busy market stall, the ground beside it littered with bananas. The caption read: “A group of people standing in the market buying fruits.” “It worked!” he recalls. “It wasn’t just saying ‘People on the street.’ It was reading the image with sophistication.” The technology, now being incorporated into Google Image Search, allows computers to caption images and show them to people who enter relevant search terms.
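The swap Vinyals describes — reusing a translation architecture with an image encoder in place of a source-sentence encoder — can be sketched in miniature. Everything below is a toy stand-in, not his actual system: the vocabulary, the 64-pixel "image," and all weights are random placeholders for trained parameters, and the point is only where the image feature plugs into the decoder.

```python
import numpy as np

# Toy vocabulary; a real captioner's vocabulary has tens of thousands of words.
VOCAB = ["<start>", "<end>", "a", "group", "of", "people", "market"]
V, D = len(VOCAB), 16
rng = np.random.default_rng(0)

W_img = rng.standard_normal((D, 64))  # stand-in "CNN": pixels -> feature vector
W_emb = rng.standard_normal((V, D))   # word embeddings
W_h = rng.standard_normal((D, D))     # recurrent weights
W_out = rng.standard_normal((V, D))   # hidden state -> vocabulary scores

def caption(pixels, max_len=10):
    """Greedy decoding. The image feature initializes the decoder state --
    exactly the slot a source-sentence encoding fills in a translator."""
    h = np.tanh(W_img @ pixels)       # encoder output seeds the decoder
    word = VOCAB.index("<start>")
    out = []
    for _ in range(max_len):
        h = np.tanh(W_h @ h + W_emb[word])  # advance the decoder one step
        word = int(np.argmax(W_out @ h))    # pick the highest-scoring word
        if VOCAB[word] == "<end>":
            break
        out.append(VOCAB[word])
    return out

print(caption(rng.standard_normal(64)))  # gibberish until trained, by design
```

With random weights the output is meaningless; the architecture is the point. Training the decoder on captioned photos is what turns this skeleton into something that can write "a group of people standing in the market buying fruits."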
Vinyals and his coworkers have developed a technology now used in Gmail called Smart Reply, which automatically suggests short replies to e-mails. And now, having joined the team at Google DeepMind in London, he has come full circle. There, he is working to create computers that can teach themselves how to play and win complex games—not by hard-coding the rules but by enabling them to learn from experience.
His inventions are helping IBM in its decade-plus quest to replace silicon transistors with more efficient carbon nanotubes.
IBM researchers devise a way to produce arrays of carbon-nanotube transistors.
IBM researchers show that nanotube transistors can carry more than twice the electric current of top-performing silicon transistor prototypes, the first evidence that nanotubes can outperform silicon.
The first integrated circuit using a single carbon nanotube is built at IBM.
During his doctoral studies at the University of Illinois, Qing Cao invents a way to print circuits of nanotubes on flexible plastic substrates.
At IBM, Cao develops a technique that applies mechanical force to push purified nanotubes in water together into high-density, neatly ordered arrays.
Cao overcomes a fundamental roadblock to commercially viable nanotube transistors. He devises a way to connect metal wires to carbon nanotubes by welding metal atoms to the nanotubes’ ends.
IBM incorporates carbon nanotubes into its in-house semiconductor research line to figure out how to refine and scale up the technology.
IBM aims to have its nanotube transistors ready to replace silicon transistors. The company estimates that nanotube transistors will perform two to three times better than silicon and require half as much power.
She knows how to print perfect plastic solar cells.
Flexible solar cells that are cheap to make could be “printed” on many surfaces, even windows. But the polymers that would be required have so far been lackluster at converting sunlight to electricity. One reason is that unlike more efficient solar materials such as crystalline silicon, polymer-based materials have a messy molecular structure that looks like cooked spaghetti.
Ying Diao is creating printing techniques that bring order to the otherwise chaotic assembly of plastic molecules. She has made organic solar cells with double the efficiency of previous ones. Diao came up with a microscopic “comb” that controls the flow of the molecules and lets them assemble into orderly structures during printing.
She uses nanocrystals to trap light and increase the efficiency of solar cells.
With her hands cloaked in aquamarine rubber gloves, Vivian Ferry, an assistant professor of chemical engineering and materials science, picks up a lipstick-size test tube filled with clear liquid. When she shines UV light through the tube, its contents turn a glowing shade of fluorescent orange. Tiny crystals suspended in the liquid explain the vial’s fiery glow: they absorb high-energy blue wavelengths and emit lower-energy reds.
Existing solar cells tend to absorb limited wavelengths of light, letting most of the sun’s energy pass through uncaptured. If solar cells could grab more light, they would generate more electricity and make solar power even cheaper. So in addition to the luminescent crystals, Ferry turned to tiny mirrors made of nanostructured metals that can trap specific wavelengths and steer light toward the solar cell.
For now, Ferry makes her luminescent nanocrystals with cadmium selenide and cadmium sulfide, neither of which is ideal since cadmium is a toxic metal. But her improvements—and subsequent drops in cost—stand to become so significant that the technology could still work well using substances that are more abundant and less toxic.
He teaches robots to watch and learn from their own successes.
While serving a nine-month stint at Google, Sergey Levine watched as the company’s AlphaGo program defeated the world’s best human player of the ancient Chinese game Go in March. Levine, a robotics specialist at the University of California, Berkeley, admired the sophisticated feat of machine learning but couldn’t help focusing on a notable shortcoming of the powerful Go-playing algorithms. “They never picked up any of the pieces themselves,” he jokes.
One way that the creators of AlphaGo trained the program was by feeding 160,000 previous games of Go to a powerful algorithm called a neural network, much the way similar algorithms have been shown countless labeled pictures of cats and dogs until they learn to recognize the animals in unlabeled photos. But this technique isn’t easily applicable to training a robotic arm.
So roboticists have instead turned to a different technique: the scientist gives a robot a goal, such as screwing a cap onto a bottle, but relies on the machine to figure out the specifics itself. By attempting the task over and over, it eventually attains the goal. But the learning process requires lots of attempts, and it doesn’t work with difficult tasks.
Levine’s breakthrough was to use the same kind of algorithm that has gotten so good at classifying images. After he gives the robot some easy-to-solve versions of the task at hand—instructing it to screw on the cap, for example—the robot retrospectively studies its own successes. It observes how the data from its vision system maps to the motor signals of the robotic hand doing the task correctly. The robot supervises its own learning. “It’s reverse-engineering its own behavior,” Levine says. It can then apply that learning to related tasks.
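The supervision signal Levine describes — learn to reproduce the motor commands the robot itself issued during its successes — can be illustrated with a toy regression. The numbers here are all hypothetical: an 8-number "vision feature," 3 "joint torques," and a random linear ground truth standing in for the real world. Levine uses deep networks rather than least squares, but the retrospective vision-to-motor mapping is the same idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical logs from successful attempts: each row pairs the camera's
# feature vector (8 numbers) with the motor command (3 torques) that worked.
true_map = rng.standard_normal((3, 8))        # stand-in for the real dynamics
observations = rng.standard_normal((200, 8))  # vision-system features
actions = observations @ true_map.T + 0.01 * rng.standard_normal((200, 3))

# Retrospective self-supervision: fit a policy that reproduces the motor
# signals recorded during the robot's own successes.
policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# Apply the learned mapping to a situation the robot has never seen.
new_obs = rng.standard_normal(8)
predicted_action = new_obs @ policy
```

The robot never needed a human to label the data: its own successful trials are the labels, which is what makes the approach scale to tasks where hand-annotation is impossible.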
With the AI technique, previously intractable robotics tasks have become approachable, thanks to the massive increase in training efficiency. Suddenly, robots are getting a lot more clever.
A computation whiz speeds up the search for catalysts that will make green chemistry possible.
Using enzymes honed over hundreds of millions of years of evolution, plants readily split water into oxygen and hydrogen that’s used to fuel metabolic reactions. Humans, too, could use hydrogen as a fuel and a way to store energy from intermittent renewable sources. But we don’t have millions of years to figure out how to make practical catalysts.
Aleksandra Vojvodic uses supercomputers to design new catalysts for water splitting and other reactions. The idea behind her work, she explains, is to “circumvent the trial and error of nature”—and of the chemistry lab.
Splitting water requires two catalysts, one for making hydrogen and the other for making oxygen. “The things that work efficiently are usually rare or expensive,” says Vojvodic. That’s where computational chemistry comes in. To predict the behavior of a catalyst, Vojvodic makes computer models that relate a material’s functions to its structure using the rules of quantum mechanics. Chemists know what functions the catalyst needs to have, and they know how different kinds of atoms and structures are likely to behave. Vojvodic’s computer experiments, at the SLAC National Accelerator Laboratory, have yielded oxygen-producing catalysts that match or outperform those made of expensive materials.
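The screening logic behind this kind of work can be sketched in a few lines. Everything below is illustrative, not Vojvodic's code: the candidate names, the binding energies, and the simple volcano-shaped activity model are hypothetical stand-ins for quantities that, in practice, come out of quantum-mechanical calculations on supercomputers.

```python
# Hypothetical peak of the activity "volcano": a catalyst that binds oxygen
# neither too strongly nor too weakly is predicted to work best.
OPTIMAL_BINDING = -1.6  # eV, illustrative value only

# Hypothetical candidates and their computed oxygen binding energies (eV).
candidates = {
    "oxide_A": -2.4,  # binds oxygen too strongly
    "oxide_B": -1.5,  # near the volcano peak
    "oxide_C": -0.7,  # binds too weakly
}

def predicted_activity(binding_energy):
    # Toy volcano relation: activity falls off on either side of the optimum.
    return -abs(binding_energy - OPTIMAL_BINDING)

best = max(candidates, key=lambda name: predicted_activity(candidates[name]))
print(best)  # -> oxide_B
```

Ranking candidates by a computed descriptor like this, instead of synthesizing and testing each one, is what lets a handful of promising materials be handed to experimentalists rather than thousands.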
Researchers have been using powerful computers to try to design better catalysts for years, with varying degrees of success. But today’s supercomputers are now capable of doing much more complex calculations. And Vojvodic has been exceptionally talented at taking advantage of computing power; identifying new ways to represent electronic properties, chemical structure, nanostructure, and other properties in mathematical calculations; and writing programs to carry them out. Working with experimentalists, she and her coworkers have recently made extremely efficient water-splitting catalysts that her modeling work predicted. The researchers are now eyeing other catalysts, including ones that can convert nitrogen and other abundant molecules into useful chemicals.
Pop-up nanostructures make it far easier to fabricate very tiny shapes.
Yihui Zhang likes to invite visitors to his office to stretch a piece of highly elastic silicone that has a soccer-ball-like structure attached to it. Once the silicone is pulled taut from four corners, the three-dimensional structure becomes a two-dimensional pattern that looks like a wheel with many adjacent hexagons and pentagons in the center. When the silicone is relaxed again, the flattened pattern pops back into its three-dimensional shape.
With this trick, Zhang has solved the challenge facing many researchers: how to fabricate complex three-dimensional nanoscale structures. Although the demonstration is done at the macro level, the idea works with nanostructures, too: easily created two-dimensional patterns can be attached to a substrate stretched taut and then buckled into three-dimensional structures as the substrate is relaxed. This process works with a wide range of materials such as metals and polymers.
The technique could be used to create nanostructures for a variety of uses. Ultimately, Zhang hopes to develop a database or algorithm that allows researchers to easily map the three-dimensional structures they want onto two-dimensional precursors. “It’s a tool,” he says. “People from different disciplines can build their own innovations.”
What to do if there is no clean water around.
Water is everywhere; safe drinking water is not. So Jia Zhu has created a thin metal sheet capable of floating on the surface of a body of water, absorbing lots of sunlight and using the energy to generate steam that condenses into clean water. “It only needs two things. The first is water—no matter what kind of water you have—and the second is the sun,” says Zhu.
The device could be used to desalinate seawater or treat polluted water: after the water is turned into steam, what’s left are salts or solidified contaminants that can be easily collected.
He also envisions other ways to put the ingenious apparatus to use. “The steam doesn’t have to be condensed,” he says, suggesting that it could be used to produce power.