Prineha Narang seeks to build technologies by starting small: with the atom.
As an assistant professor of computational materials science at Harvard, Narang studies the optical, thermal, and electronic behavior of materials at the nanoscale. Her research into how materials interact with light and other forms of electromagnetic radiation could drive innovations in electronics, energy, and space technologies.
Narang’s work builds on decades of advances in nanoscience that have brought the field closer to a long-held goal: the ability to engineer materials atom by atom.
Yet since its emergence in the 1980s, the discipline has focused mainly on nanostructures at or near equilibrium, their lowest energy state. At the temperatures they encounter in nature, however, most materials exist away from equilibrium, in so-called excited states, which remain poorly understood at the quantum level. “There’s so much more we can do with excited states that has just not been tried yet,” Narang says.
By studying these excited states, Narang is developing approaches that could lead to vastly improved materials. Applications could include improved reflectors and lenses for telescopes, lighter cell phones with better cameras, or synthetic fuels designed at the atomic level.
—Jonathan W. Rosen
Shehar Bano made it possible to fight state censorship of the internet—by pioneering the first systematic study of how it happens.
It all started when Bano’s homeland of Pakistan blocked YouTube in 2012. “Previously, people were under the delusion that this was magic,” she says of the inner workings of such restrictions. But she wanted to understand—and defeat—them.
So Bano probed three years of ISP data from Pakistan and experimented with ways to circumvent China’s Great Firewall. What she found was a set of relatively basic technical restrictions: censors, for example, watch for any request to load a specific website and then send signals to both the website’s server and the surfer’s browser to terminate the request. Understanding this let her devise ways around the restriction without resorting to encryption, such as sending an initial, fake request that the censor would see but ignore because of a misspelling, allowing the real request to slip through in the meantime.
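The decoy trick can be sketched as a toy model. Everything here is an illustrative assumption, not Bano’s actual tooling: a censor that inspects only the first request on each connection, a made-up blocklist, and a deliberately “misspelled” header that makes the censor dismiss the flow.

```python
# Toy model of the decoy-request trick: a stateful censor inspects only the
# first request on each flow, and a malformed decoy makes it give up on the
# flow entirely. Hypothetical illustration only.

def parse_host(request: bytes):
    """Return the Host header value, or None if the request is malformed."""
    for line in request.split(b"\r\n"):
        if line.startswith(b"Host: "):
            return line[len(b"Host: "):].decode()
    return None

class ToyCensor:
    """Stateful censor that checks only the first request it sees per flow."""
    def __init__(self, blocked):
        self.blocked = set(blocked)
        self.dismissed = set()

    def allows(self, flow_id, request):
        if flow_id in self.dismissed:
            return True                      # flow already written off
        if parse_host(request) is None:      # "misspelled" decoy: give up
            self.dismissed.add(flow_id)
            return True
        return parse_host(request) not in self.blocked

censor = ToyCensor({"blocked.example"})
decoy = b"GET / HTTP/1.1\r\nHoist: blocked.example\r\n\r\n"  # misspelled header
real = b"GET / HTTP/1.1\r\nHost: blocked.example\r\n\r\n"

assert not censor.allows("direct", real)   # alone, the real request is blocked
assert censor.allows("evasive", decoy)     # decoy gets the flow dismissed...
assert censor.allows("evasive", real)      # ...so the real request slips through
```

Real censorship middleboxes are far more varied than this, but the sketch captures why a request the censor cannot parse can open the door for one it could.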
Bano not only analyzed online censorship; she also looked into how users of anonymization and security software like Tor and ad blockers are treated differently from unprotected surfers, whether that means a worse user experience or an outright ban.
Bano has joined a wave of computer scientists working to protect the freedom of online communication. As a postdoc at University College London, she’s increasingly working with blockchain-based systems, like the smart-contract platform Chainspace, to improve online security and transparency by allowing transactions that are difficult for outside parties to monitor.
Growing up in Iran, Niki Bayat always wanted to use her aptitude in engineering to help people suffering from disease—especially after her father developed glaucoma and was unable to have eye surgery because of other health issues. She placed eighth in Iran’s countrywide university entrance exams and majored in chemical engineering at the country’s top university. For grad school, she set her sights on the University of Southern California and joined a collaboration between the labs of renowned chemist Mark Thompson and Mark Humayun, who developed the first artificial retina. “I convinced them that I could bridge the gap between polymer chemistry and biomedical engineering,” she says.
She did just that, using her chemical engineering expertise to develop materials that can help repair traumatic eye injuries and deliver ocular therapies. Bayat has created squishy, biocompatible polymers called hydrogels that become extremely sticky at body temperature, adhering as strongly as superglue. In cases of eye injury, they can be injected in the field, quickly sealing the wound to prevent blindness. Then, back at a hospital, a surgeon can flush the sealant with cold saline, remove it, and suture the wound. Bayat has also designed versions of these materials that can release glaucoma medication or antibiotics in a controlled manner.
In 2016, while still working on her PhD, Bayat started AesculaTech to commercialize these drug-delivering materials, which can be inserted into the tear ducts and release medication over periods of months—potentially preventing the need for patients to apply eye drops multiple times a day. AesculaTech plans to first seek approval for polymer devices to treat dry eye before trying to introduce drug-releasing versions. Her ultimate goal, she says, is to come up with a new and better treatment for glaucoma.
After collaborating with doctors in the intensive care unit at Beth Israel Deaconess Medical Center during her PhD studies, Marzyeh Ghassemi realized that one of their biggest challenges was information overload. So she designed a suite of machine-learning methods to turn messy clinical data into useful predictions about how patients will fare during a hospital stay.
It wasn’t easy. Areas where machine learning excels typically have huge, carefully labeled data sets. Medical data, on the other hand, comes in a bewildering variety of formats at erratic frequencies, ranging from daily written doctors’ notes to hourly blood tests to continuous heart-monitor data.
And while vision and language tasks are innately easy for humans to grasp, even highly trained medical specialists can disagree on diagnoses or treatment decisions. Despite these challenges, Ghassemi developed machine-learning algorithms that take diverse clinical data and accurately predict things like how long patients will stay in the hospital, how likely they are to die while there, and whether they’ll need interventions such as blood transfusions or ventilators.
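One core step any such system must handle is collapsing signals recorded at wildly different rates into a single fixed-length vector a model can consume. The sketch below illustrates that idea only; the field names and summary statistics are hypothetical, not Ghassemi’s actual pipeline.

```python
# Toy featurizer: turn a messy patient record (hourly labs, continuous
# monitor readings, free-text notes) into one fixed-length feature vector.
# Field names and statistics are illustrative assumptions.

def summarize(series):
    """Summary statistics for an irregularly sampled numeric series."""
    if not series:
        return [0.0, 0.0, 0.0]          # missing signal: neutral fill
    return [min(series), max(series), sum(series) / len(series)]

def featurize(patient):
    """Hypothetical patient record -> fixed feature vector."""
    feats = []
    feats += summarize(patient.get("hourly_lactate", []))  # hourly blood tests
    feats += summarize(patient.get("heart_rate", []))      # continuous monitor
    feats.append(float(len(patient.get("notes", []))))     # daily written notes
    return feats

patient = {
    "hourly_lactate": [1.1, 1.4, 2.0],
    "heart_rate": [88, 92, 90, 95],
    "notes": ["admitted", "improving"],
}
vec = featurize(patient)
assert len(vec) == 7  # same length for every patient, however sparse the record
```

Once every stay maps to the same-shaped vector, standard classifiers can be trained to predict outcomes such as length of stay or the need for an intervention.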
This fall Ghassemi joins the University of Toronto and the Vector Institute, where she’s hoping to test her algorithms at local hospitals.
As quantum computing starts to move from the lab to the factory, companies from Google to Intel are struggling to solve a tricky problem: how to faithfully route the quantum information such systems produce to traditional computers. Doing so is important because quantum systems, which are expected to have a profound impact on cryptography and other fields, will probably be useful only if regular computers can read out their calculations.
Archana Kamal, an assistant professor at UMass Lowell, solved the problem. Kamal demonstrated that quantum information could be steered and amplified for transmission before leaving the device where it was processed. Previously, the transmission required large magnets and complicated devices too big to fit on a single chip, leading to data latency and loss, a major impediment in scaling up current qubit systems.
Kamal’s innovation was to slightly alter the path of the transmission of light signals carrying information so as to shrink the components from the size of a quarter to a few micrometers. “That’s a huge difference,” she says. “Our schemes enable the bulk of quantum signal processing to be done on-chip while preserving the high fidelity of the signals.”
Brenden Lake created an AI program that can learn novel handwritten characters as well as a human can after seeing just a single example. That might seem mundane in a world where AI controls self-driving cars and beats the world’s best Go players. But today’s state-of-the-art deep-learning approaches train on thousands of examples and aren’t great at transferring their learning to new problems. A human who’s shown an unfamiliar object once, on the other hand, will be able to recognize a new example, draw it, and understand its various parts.
So Lake took inspiration from cognitive psychology. Instead of feeding his program thousands of examples of letters, he taught it how handwriting works. He showed his model motion-capture recordings of humans drawing letters from 30 alphabets so it could learn what pen movements are used to make strokes, how many strokes characters typically have, and how strokes are connected. When shown a character from an unfamiliar alphabet, the model can recognize and reproduce that character just as well as a person.
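Lake’s actual system is a Bayesian program-learning model; as a much cruder stand-in for the stroke-based idea, a new character can be matched against a single stored example per class by comparing stroke sequences. The coordinates, distance measures, and characters below are invented for illustration.

```python
# Crude one-shot classifier over stroke sequences (illustrative only; not
# Lake's Bayesian program-learning model). A character is a list of strokes;
# a stroke is a list of (x, y) pen positions.

def stroke_distance(a, b):
    """Point-by-point distance between two strokes, penalizing length mismatch."""
    d = sum(abs(x1 - x2) + abs(y1 - y2)
            for (x1, y1), (x2, y2) in zip(a, b))
    return d + abs(len(a) - len(b))

def char_distance(a, b):
    """Compare characters stroke by stroke, penalizing stroke-count mismatch."""
    d = sum(stroke_distance(s1, s2) for s1, s2 in zip(a, b))
    return d + 10 * abs(len(a) - len(b))

def classify(char, one_shot_examples):
    """One stored example per class; return the label of the closest one."""
    return min(one_shot_examples,
               key=lambda label: char_distance(char, one_shot_examples[label]))

# One training example per character class (toy coordinates).
examples = {
    "T": [[(0, 0), (2, 0)], [(1, 0), (1, 2)]],  # horizontal bar, vertical bar
    "L": [[(0, 0), (0, 2)], [(0, 2), (1, 2)]],  # vertical bar, horizontal foot
}
# A slightly wobbly new drawing of "T" still matches the right class.
wobbly_t = [[(0, 0), (2, 1)], [(1, 0), (1, 2)]]
assert classify(wobbly_t, examples) == "T"
```

The gap between this nearest-neighbor toy and Lake’s model is exactly his point: his system does not just match strokes, it infers the generative program behind them, which is why it can also redraw the character and parse its parts.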
He’s applied the same approach to get machines to recognize and reproduce spoken words after hearing one example, and also to mimic how people creatively ask questions when solving a problem.
Getting machines to learn the way humans do could prove crucial for AI applications where training on big data isn’t feasible. “If we want to have smart robots in the home, we can’t pre-train or pre-program the robot to know everything out of the box,” Lake says. “Children pick up new concepts every day, and a truly intelligent machine must do the same.”
Adam Marblestone wants to make the brain machine-readable. So he worked out the physical limits of what’s possible in recording brain activity and is now using that knowledge to set technology strategy at Kernel, a startup with $100 million in funding that’s building neural interfaces for humans.
As a PhD student, Marblestone was a lead author of a paper now considered a foundational strategic document for researchers building technology to read brain activity. Using the mouse brain as a model, he identified the engineering problems we’ll have to solve to simultaneously measure the activity of every neuron in the brain.
“It’s all about how do we, in the approaches that we take to studying the brain, somehow try to match the complexity of the brain itself?” he says.
As chief strategy officer at Kernel, he’s marshalling a network of leading researchers to identify the most promising approaches for making neural interfaces that can help us understand and treat neurological diseases. One day they could even make it possible to merge our brains with machines.
Menno Veldhorst has invented a faster path to real-world quantum circuits by making it possible for them to be printed on silicon—the way computer chips have been made for decades.
Quantum computers would allow powerful calculations that no traditional computer is capable of, but before Veldhorst’s innovation, it was considered impossible to make semiconductor-based quantum circuits on silicon that would be stable enough for useful computation. These machines—which are governed by the strange physics of subatomic particles—have instead been built with esoteric materials, including superconductors, that are easier to control in their fragile quantum states. The trade-offs: working with such technology is expensive, and producing such circuitry at scale would require entirely new industrial processes.
Veldhorst, a researcher at Delft University of Technology in the Netherlands, has found a way forward with the most replicated man-made structure on the planet: the transistor. He was able to demonstrate calculations on the basic units of quantum information, known as qubits, in silicon semiconductors.
Now, thanks to Veldhorst’s breakthrough, Intel is printing hundreds of thousands of such simple systems on the same type of 300-millimeter wafers the company uses to make its conventional chips. This means collaborators at Intel can increasingly spend their time on the microelectronics and algorithms necessary for complete quantum computers rather than working through the basic physics.
What’s most exciting to Veldhorst is that—just as with the transistor and the computer itself—a flood of quantum computers will need to be built just to figure out what they are capable of. His research has allowed just that.