Of the estimated 9.1 billion tons of plastic ever produced, only 9% has been recycled. Almost 80% ends up as waste, adding to growing landfills or polluting the natural environment, where it can take up to a thousand years to degrade. Such materials can also end up in the human body as microplastics, slowly accumulating with potentially harmful effects on health. One key to solving these problems could be bioplastics—plastic alternatives produced by bioengineered organisms that degrade naturally and far more quickly.
The idea of bioplastics isn’t exactly new, but it has been difficult to make them in the quantities, and with the properties, that industry needs. Avinash Manjula Basavanna, a postdoc at the Wyss Institute for Biologically Inspired Engineering at Harvard University, thinks he can do better. He and his colleagues have developed a new type of plastic based on living materials. Called AquaPlastic, it can be produced at commercial scale, exhibits the tough qualities of many petroleum-based plastics, and degrades in water in as little as two months.
The material itself is resistant to strong acids and bases. It can be applied as a coating using nothing but water, which makes the plastic turn adhesive—the first plastic of its kind to boast this feature. If it gets scratched, the coating can also be “healed” using water. And most important, “it’s flushable,” says Manjula Basavanna. “You don’t have to worry about it adding to our plastic and microplastic problem.” He and his partners are now in the beginning stages of forming a startup around AquaPlastic. If manufactured at scale, the cheap, biodegradable material could compete with conventional plastic coatings.
Early on in her days as a doctoral student at the University of Southern California, Ghena Alhanaee stumbled upon a disturbing set of facts. The countries of the Persian Gulf, including her native United Arab Emirates, were far more vulnerable to disaster than she’d realized. Not only was the Gulf itself one of the world’s largest oil and gas production zones, with more than 800 offshore platforms and thousands of tankers passing through its shallow waters every year, but the UAE was also building the Arab Peninsula’s first nuclear power plant. Meanwhile, several Gulf countries relied almost exclusively on desalinated Gulf water for drinking, with emergency supplies for just two or three days. “If something were to happen, and desalination plants weren’t able to operate, right now there really is no backup plan,” Alhanaee says.
Ever since, she has devoted her energy to tackling the Gulf’s disaster preparedness gap. She’s developing a data-driven framework to help the region better mitigate the risks of an oil spill or nuclear accident. Since the Gulf’s nuclear industry is nascent, and its oil and gas sector keeps its data private, she’s relying on information from the US: her statistical model draws on data from more than 4,000 reported safety incidents in the US nuclear and offshore oil industries over the past decade. The trick, she says, is to better understand which combinations of small incidents, under which scenarios, are most likely to snowball into something major.
Alhanaee’s framework seeks to do just that. She plans to apply her findings to a particularly vulnerable spot in the Gulf—in the vicinity of the Barakah nuclear power plant, which is nearing completion, and large-scale oil and desalination installations. Ultimately, she hopes her research will help the region’s governments develop more robust, and better coordinated, disaster mitigation strategies.
Lili Cai has created nanomaterial-based textiles the thickness of a normal T-shirt that can keep you warm or cool you off.
Cai’s work takes advantage of the fact that human skin strongly emits infrared radiation in a specific range of wavelengths. By manipulating the ways in which her fabrics block or transmit radiation in this band, she has produced multiple textiles that can have different effects on temperature.
To heat the body, Cai created a metallized polyethylene textile that can minimize heat radiation loss but is still breathable. Compared with normal textiles, it keeps people about 7 °C warmer. Under direct sunlight, her cooling fabric, a novel nanocomposite material, can cool the body by more than 10 °C.
Cai believes it’s extremely important to figure out how to make such textiles look as much as possible like normal clothing. Previous radiative cooling materials could only be produced in white, but in 2019 Cai figured out how to fabricate her textiles in different colors. Her goal is eventually to produce a single adaptive textile that keeps you warm when it’s cold out and cools you off in the heat.
As climate change introduces shifts in weather and temperature patterns globally, people will use even more energy to regulate building temperatures. If she can figure out how to cheaply make her textiles at scale, they will provide an alternative that could help cut those heating and cooling bills.
The amount of radiation it takes to kill a tumor depends on the level of oxygen in the tumor cells. This can vary greatly, but oncologists don’t currently adjust radiation doses to account for it. Gregory Ekchian, cofounder of Stratagen Bio, has developed a sensor for reading tumor oxygen levels to personalize cancer treatment.
Ekchian recognized a glaring need for a new sensing tool after discussions with clinicians at Brigham and Women’s Hospital in Boston. He developed a prototype for a cancer treatment technique called high-dose-rate brachytherapy. In this form of treatment, doctors puncture the tumor with a series of hollow catheter tubes and then drop radioactive seeds through the tubes to suffuse the tumor with radiation, removing them once the desired dose has been delivered.
For his prototype, Ekchian added a strip of a recently invented oxygen-sensitive polymer to the tips of a modified version of the catheters. During routine MRI scans, protons in the polymer are excited; these protons return to equilibrium far faster in catheters surrounded by high levels of oxygen than low levels. The speed at which they return to equilibrium can therefore be used to map out how oxygen levels vary in different parts of the tumor, allowing oncologists to pinpoint where radiation doses should go and tailor their length and intensity to be most effective.
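The readout is, in essence, a calibration problem: dissolved oxygen speeds up how fast excited protons relax back to equilibrium, so a measured relaxation time can be mapped back to an oxygen level. A minimal sketch, assuming a simple linear relaxometry model; the function name and the calibration constants `r1_base` and `r1_per_mmhg` are hypothetical, not values from Ekchian’s device:

```python
def oxygen_from_relaxation(t1_seconds, r1_base=0.5, r1_per_mmhg=0.002):
    """Invert a linear relaxometry model: R1 = r1_base + r1_per_mmhg * pO2.

    t1_seconds: measured relaxation time of the oxygen-sensitive polymer.
    r1_base, r1_per_mmhg: hypothetical calibration constants (1/s and 1/(s*mmHg)).
    Returns an estimated oxygen partial pressure in mmHg.
    """
    r1 = 1.0 / t1_seconds  # faster return to equilibrium = larger relaxation rate R1
    return (r1 - r1_base) / r1_per_mmhg

# Shorter relaxation time (faster relaxation) implies more oxygen:
low_o2 = oxygen_from_relaxation(1.9)   # slow relaxation
high_o2 = oxygen_from_relaxation(1.4)  # fast relaxation
assert high_o2 > low_o2
```

Repeating such an estimate at each catheter tip is what turns a set of MRI measurements into an oxygen map of the tumor.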
“If we weren’t worried about healthy tissue, we would just boost the dose to the entire tumor,” he says, but excess radiation can harm the patient. That means “it’s really important to figure out where those high doses need to go.”
Ekchian is preparing to publish the results of a clinical trial, the first in humans, involving seven patients with cervical cancer. He ultimately hopes to apply his oxygen-sensing technology to a wide range of clinical needs.
The world’s biggest machine, the Large Hadron Collider, was built to help answer some of the most important questions in physics. To do that, the scientists behind the particle collider have to be able to process and understand the massive amounts of data from the machine. They want to be able to tell whether certain particles are produced in high-energy collisions between particles traveling at nearly the speed of light.
The LHC can produce over a petabyte of data per second from a billion particle collisions per second, requiring about a million processor cores spread out around the world to analyze what would otherwise be chaos. What does all that data mean?
This is one of the most staggering problems facing Jennifer Glick, an IBM researcher whose work is to find big problems that can benefit from quantum computing and then either try to solve them with existing quantum algorithms or create new ones for the purpose.
Quantum computing promises enormous advances in processing power over classical computing for certain problems that are intractably large or time-consuming for classical computers—the kind of problems Glick looks for. A quantum computer’s strength can be credited to the superposition and entanglement of quantum bits, or qubits, which offer an exponentially large computational space. For example, 50 perfect qubits can represent over a quadrillion states to explore.
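The quadrillion figure is simply exponential scaling: an n-qubit register spans 2^n basis states, so even describing its state classically takes 2^n amplitudes. A quick check of the arithmetic:

```python
def n_basis_states(n_qubits: int) -> int:
    """Number of classical basis states an n-qubit register spans: 2**n."""
    return 2 ** n_qubits

print(n_basis_states(50))  # 1125899906842624 -- just over a quadrillion
```

Each extra qubit doubles that count, which is why even modest qubit numbers outrun classical simulation.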
Still, it’s a technology in its very early days. In two years at IBM, Glick has helped lead an effort to create partnerships that bring quantum technology into the real world. She spends a lot of her time hunting for problems and then developing and demonstrating ways in which a quantum computer could solve them faster than a classical one.
“What we’re looking at for the Large Hadron Collider is to use a quantum algorithm to predict whether or not a certain particle was produced,” she says. “Was that the particle I think was produced or not?”
In 2019, Glick and her colleagues tackled another big but more workaday problem with the banking giant Barclays. The challenge was managing the quadrillions of dollars processed each year in securities transaction settlements. These occur, for instance, when a financial institution buys shares, bonds, or derivatives. Clearinghouses must run complex optimization algorithms on the transactions to settle as many of them as possible within technical and legal constraints.
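The core difficulty of settlement is combinatorial: whether one transaction can settle depends on which others already have. A toy sketch of that coupling, assuming nothing about Barclays’ actual system (the party names, amounts, and greedy strategy are all illustrative; real clearinghouses solve a far harder constrained optimization):

```python
def settle_greedy(transactions, balances):
    """Toy settlement pass: each transaction moves cash from buyer to seller,
    and settles only if the buyer can cover it at that moment.
    transactions: list of (buyer, seller, amount). balances: dict of cash held.
    Returns the list of transactions that settled, in order attempted."""
    settled = []
    for buyer, seller, amount in transactions:
        if balances.get(buyer, 0) >= amount:
            balances[buyer] -= amount
            balances[seller] = balances.get(seller, 0) + amount
            settled.append((buyer, seller, amount))
    return settled

txs = [("B", "C", 100), ("A", "B", 80), ("C", "A", 50)]
print(settle_greedy(txs, {"A": 100, "B": 30, "C": 0}))  # [('A', 'B', 80)]
```

In this ordering only one transaction clears, but reordering the same three lets all of them settle: the number of orderings to consider explodes with the number of transactions, which is exactly the kind of search space a quantum approach might help prune.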
The results of the team’s research indicate that quantum technology could make this process more efficient, speeding up the time between trade and settlement. “When someone gives you an industry or business problem, there’s a lot of complications to start out with. It’s a very complex, gnarly problem,” Glick says. “Part of it is breaking it down into simpler pieces to be able to identify where the bottlenecks are with respect to classical computing methods that are being used today. And can any of those bottlenecks be removed by a quantum approach?”
Photo by David Vintiner
Getting computers to see—to actually see—has been an ambition of countless computer scientists for decades. Few have come closer than Andrej Karpathy, whose approach to deep neural networks allows machines to make sense of what is happening in images.
As a graduate student at Stanford, Karpathy extended techniques for building what are known as convolutional neural networks (CNNs)—systems that broadly mimic the neuron structure in the visual cortex. (In 2015 he also designed and was the primary instructor for the first deep-learning class at Stanford.)
By combining CNNs with other deep-learning approaches, he created a system that was not just better at recognizing individual items in images (say, a dog or a person), but capable of seeing an entire scene full of objects—multiple dogs and people interacting with each other—and effectively building a story of what was happening in it and what might happen next.
In 2017, Karpathy joined Tesla, where he oversees neural networks for the cars’ Autopilot feature. That includes collision detection, self-driving capabilities, and summoning (having a car drive autonomously from where it is parked).
Using Karpathy’s advances, Tesla is taking a different path from most other automakers. Typically, self-driving vehicles scan their surroundings with expensive laser range finders, build a virtual map, and then use AI to make decisions about what to do. Tesla’s approach uses traditional cameras. Not only can Karpathy’s method let the car spot objects in the road as a human driver would, but it can take in the entire scene (cars, people, intersections, stop signs, and more) and—if it works as intended—instantly infer what’s taking place. Doing so requires nearly 50 neural networks to constantly process data coming in as the more than a million cars in the fleet look and learn.
Siddharth Krishnan, a materials scientist at MIT, has developed a tiny sensor that could save people from a devastating and often deadly brain condition.
Between one and two in every 1,000 babies born in the United States have hydrocephalus, a condition in which cerebrospinal fluid builds up in the brain. It can also occur later in life, including after traumatic brain injury. Over a million people in the United States have hydrocephalus, and nearly all of them have a shunt installed that drains fluid from their brain into their chest or abdomen. The condition can be fatal if untreated, but if it’s dealt with promptly a full recovery is often possible.
When shunts fail, usually because they get clogged, fluid again builds up in the brain. This happens to about half of all shunts within six years, making it a major problem.
Earlier techniques for detecting shunt failure all had various shortcomings. Repeated CT scans, MRIs, or x-rays subject patients to dangerous doses of radiation, cost a lot, and—because they measure the performance of shunts only indirectly—are not all that reliable. Sometimes, invasive brain surgery is done just to verify that a shunt is working. And because checks are performed only a few times a year, patients and their families live with constant uncertainty, wondering whether their shunts are working properly.
In any case, because the flow of fluid from the brain is naturally intermittent, spot checks don’t necessarily catch problems.
Krishnan’s sensor offers a noninvasive way to monitor the flow in shunts: it can be placed over the skin on the neck, near the valve. It measures the temperature at several distinct spots, inferring from the temperature distribution at those spots whether or not liquid is flowing. Unlike an earlier generation of noninvasive sensors, which made fewer temperature measurements and required the use of an ice pack, his device can continuously measure the flow, reporting results via Bluetooth.
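One way to picture the inference, as a simplified stand-in for the device’s actual model: flowing fluid carries heat along with it, so temperature readings taken around the shunt become asymmetric when fluid is moving and stay roughly symmetric when it isn’t. The function and the `threshold` calibration value below are hypothetical:

```python
def flow_detected(t_upstream, t_downstream, threshold=0.05):
    """Crude flow check via thermal asymmetry: moving fluid carries heat
    from a gently warmed spot downstream, so a sustained downstream-minus-
    upstream temperature excess (degrees C) suggests flow, while
    near-symmetric readings suggest a blockage. threshold is a
    hypothetical calibration value, not one from Krishnan's device."""
    return (t_downstream - t_upstream) > threshold

assert flow_detected(30.00, 30.20)      # clear downstream skew: flowing
assert not flow_detected(30.10, 30.11)  # nearly symmetric: no flow detected
```

Running a check like this continuously, rather than a few times a year, is what removes the uncertainty of spot checks.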
So far, field trials on seven patients, reported in a paper earlier this year in the journal npj Digital Medicine, show that his sensor gives “robust, high-quality data” for hours at a time.
Krishnan hopes that his sensor will have applications beyond hydrocephalus, possibly monitoring other diseases like diabetes, where tiny changes beneath the skin can have huge effects.
Zika, Ebola, SARS, dengue fever, and covid-19. These diseases have fearsome reputations, yet the viruses that cause them are not really alive. To reproduce, viruses need to hijack a cell and use its components to produce more viruses.
To Andreas Puschnik, understanding which of our biomolecules viruses depend on could lead to new types of broad-acting antiviral drugs. “The idea is that viruses depend on specific cellular pathways which could themselves become drug targets,” says Puschnik.
Usually, the German-born researcher says, drug makers look to take out pathogens with chemicals designed to bind to and disable the molecular components of the virus itself. This “one drug, one bug” solution can work powerfully (think HIV drugs). The problem is that each drug has to be specially designed.
An alternative, called host-directed therapeutics, is in its early days. But Puschnik has helped speed it up using the gene-editing tool CRISPR. In a mass screening approach, he uses CRISPR to pepper millions of human cells growing in flasks with a hundred thousand different genetic mutations. If any of those cells survive infection with, say, yellow fever, it means a mutation has inactivated a molecular pathway the virus needs to reproduce.
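The screen’s readout boils down to an enrichment calculation: genes whose knockouts are over-represented among surviving cells are candidate host factors the virus depends on. A minimal sketch of that comparison; the gene names, counts, and fold-change threshold are all illustrative, and real screen analysis involves normalization and statistics this omits:

```python
import math

def enriched_genes(initial_counts, survivor_counts, min_log2fc=2.0):
    """Toy readout of a CRISPR survival screen: compare per-gene guide read
    counts before infection and among survivors. A pseudocount of 1 avoids
    division by zero; genes enriched beyond min_log2fc are reported as hits."""
    hits = []
    for gene, n0 in initial_counts.items():
        n1 = survivor_counts.get(gene, 0)
        log2fc = math.log2((n1 + 1) / (n0 + 1))
        if log2fc >= min_log2fc:
            hits.append(gene)
    return hits

before = {"GENE_A": 100, "GENE_B": 100, "GENE_C": 100}
survivors = {"GENE_A": 900, "GENE_B": 90, "GENE_C": 5}
print(enriched_genes(before, survivors))  # ['GENE_A']
```

Here only the cells carrying a knockout of the hypothetical GENE_A survived infection in large numbers, flagging it as a pathway the virus may need.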
Puschnik has already helped find an enzyme that mosquito-borne flaviviruses like dengue, Zika, and West Nile need to reproduce, as well as a drug to block it. Since all flaviviruses work similarly, he hopes the drug could be a “universal treatment” for them.
During California’s 2020 lockdown, the biologist remained at work at the Chan Zuckerberg Biohub, a new institute that picked him as its first scientific fellow. “It is still busy days for virologists,” says Puschnik, who now plans to turn his attention to the coronavirus that causes covid-19. Perhaps, he thinks, a drug that changes cells so they are less hospitable to coronaviruses could be ready for the next pandemic: “You might be able to treat viruses you don’t even know about yet.”