Dinesh Bharadia invented a telecommunications technology that everyone said would never work: he found a way to simultaneously transmit and receive data on the same frequency.
Because a radio’s own outgoing transmission can be 100 billion times louder than the signal it is trying to receive, it was long assumed that outgoing signals would invariably drown out incoming ones. That’s why radios typically send and receive on different frequencies or rapidly alternate between transmitting and receiving. “Even textbooks kind of assumed it was impossible,” Bharadia says.
Bharadia developed hardware and software that selectively cancel the far louder outgoing transmission so that a radio can decipher the incoming message. The creation of the first full-duplex radio, which eventually could be incorporated into cell phones, should effectively double available wireless bandwidth by letting every channel carry traffic in both directions at once. That would be a godsend for telecom companies and consumers alike.
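The principle behind this kind of self-interference cancellation can be shown in a toy baseband simulation: the radio knows exactly what it transmitted, so it can estimate the leakage path and subtract a reconstructed copy of its own signal from what the antenna hears. (The gains and signal levels below are illustrative, not Kumu’s actual design, which cancels in analog hardware as well as software.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

tx = rng.standard_normal(n)               # the radio's own transmission (known)
incoming = 1e-5 * rng.standard_normal(n)  # faint signal we actually want to hear
h = 0.8                                   # unknown leakage gain, transmitter -> receiver
rx = h * tx + incoming                    # what the antenna picks up: mostly ourselves

# Estimate the leakage channel from the known transmit signal (least squares),
# then subtract the reconstructed self-interference.
h_est = np.dot(tx, rx) / np.dot(tx, tx)
cleaned = rx - h_est * tx

print(np.var(rx) / np.var(incoming))       # before: self-interference dominates
print(np.var(cleaned) / np.var(incoming))  # after: residual is close to the true signal
```

A real radio must track a leakage channel that is frequency-selective and time-varying, which is where most of the engineering difficulty lies; this sketch only shows the core subtraction idea.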
Bharadia took a leave of absence from his PhD studies at Stanford so he could commercialize the radio through the startup Kumu Networks. Germany-based Deutsche Telekom began testing it last year, but since Bharadia’s prototype circuit board is too large to fit in a phone, it will be up to other engineers to miniaturize it.
When biomedical engineer Muyinatu Lediju Bell was an undergraduate at MIT, her mother died of breast cancer. Bell thought her mother might have survived if she had been diagnosed sooner, so she decided to investigate what makes some ultrasound images blurry, a problem that limits a doctor’s ability to screen for and diagnose cancer and other diseases.
As a doctoral candidate at Duke University, Bell developed and patented a novel signal processing technique that produces clearer ultrasound images in real time. The solution could particularly help diagnose problems in people who are obese, because fat tissue can scatter and distort ultrasound waves, delaying the detection of a serious disease. “I think it’s unfair that a long-standing technology does not serve a huge group of people that should be able to benefit from it,” she says.
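Bell’s patented technique isn’t spelled out here, but it builds on the standard way ultrasound images are formed: delay-and-sum beamforming, in which the echoes recorded by each transducer element are time-aligned for a chosen focal point and summed, so that echoes from that point add coherently. A toy single-scatterer sketch (array geometry, pulse shape, and sampling rate are illustrative):

```python
import numpy as np

c = 1540.0                                 # speed of sound in tissue, m/s
fs = 40e6                                  # sampling rate, Hz
elements = np.linspace(-0.01, 0.01, 32)    # 32-element array positions (m)
target = np.array([0.0, 0.03])             # point scatterer 3 cm deep

# Synthesize per-element echoes: a short Gaussian pulse arriving after the
# round-trip delay (plane wave down to depth, echo back to each element).
t = np.arange(4096) / fs
signals = np.zeros((len(elements), len(t)))
for i, x in enumerate(elements):
    dist = np.hypot(target[0] - x, target[1])   # scatterer -> element
    delay = (target[1] + dist) / c
    signals[i] = np.exp(-((t - delay) * fs / 8) ** 2)

# Delay-and-sum: align every channel's echo for a focal point, then sum.
def beamform(point):
    out = 0.0
    for i, x in enumerate(elements):
        dist = np.hypot(point[0] - x, point[1])
        delay = (point[1] + dist) / c
        out += signals[i][int(round(delay * fs))]
    return out

print(beamform(target))                     # focused: channels add coherently
print(beamform(np.array([0.005, 0.03])))    # off-target: echoes misalign, sum is weaker
```

Fat layers distort the assumed sound speed, so the computed delays are wrong and the coherent sum degrades; techniques like Bell’s aim to form images that are more robust to exactly this kind of error.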
Beyond ultrasound, Bell is now working to improve another type of noninvasive medical imaging technique. Called photoacoustic imaging, it uses a combination of light and sound to produce images of tissues in the body. She is especially interested in using it for real-time visualization of blood vessels during neurosurgeries to lower the risk of accidental harm to the carotid artery, which supplies blood to the brain. Her lab at Johns Hopkins plans to launch a pilot study of the technology in patients in 2017.
“At the company I cofounded, Skydio, we looked at all the things people wanted to do with drones and realized that the products are primitive compared to what’s possible. Today the typical consumer experience is you take it out of the box and run it into a tree.
“We’re building a drone for consumers that understands the physical world, reacts to you intelligently, and can use that information to make decisions. It has cameras positioned in a way so that computer vision can track its motion and understand the 3-D structure of the world. It also understands ‘This is a person,’ ‘This is a tree.’ We’ve demonstrated the ability to fly autonomously in close proximity to obstacles such as trees safely and reliably, and to follow someone walking, running, or cycling.
“On a week-to-week basis you can see the thing getting smarter and being capable of more. It shows up in the way it behaves and responds in different situations.
“We aren’t saying a lot about our product yet, but it’ll be a high-end consumer device smart enough to fly itself as well as or better than an expert pilot. Devices that understand the world and can respond to you and take actions will open up things that don’t exist today. A flying camera that can be anywhere around you would be a very powerful thing. Drones are likely to be the first widely deployed category of mobile robot. As they start to get out into the world and people start to interact with them, it’s going to lead to some interesting places.”
—as told to Tom Simonite
“I grew up in a small village in Xuzhou, China. When I was a child I saw a lot of people around me dying of different diseases. Many people don’t realize there’s a problem until it’s too late. I thought, in the future I should design a wearable electronic device to monitor health and tell us what’s going on and what’s going wrong before it gets bad.
“Our body is generating data all the time. There are so many wearable devices now—the Apple watch, the Fitbit—but they mainly track physical activities or vital signs. They can’t provide information at the molecular level.
“It came into my mind: what about sweat?”
This year, Gao made a sweatband that combines sensors with electronic processors and a Bluetooth transmitter on a flexible printed circuit board. If you wear the band, it wirelessly transmits data about what’s in your sweat to a cell phone running an app.
Gao’s device has sensors that interact with chemicals including glucose and lactate, producing a detectable change in the sensors’ electrical current. Other sensors change their voltage in response to sodium or potassium. A recent addition is a set of sensors that detect toxic heavy metals excreted in sweat.
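Turning those raw readings into concentrations takes a calibration step: amperometric sensors (glucose, lactate) respond roughly linearly in current, while potentiometric ones (sodium, potassium) follow a log-linear, Nernst-style voltage response. A minimal sketch, with made-up calibration constants rather than values from Gao’s device:

```python
# Illustrative calibration constants -- not measured values from the sweatband.
GLUCOSE_NA_PER_UM = 2.35   # amperometric: nA of current per uM of glucose
SODIUM_SLOPE_MV = 59.2     # potentiometric: ~59 mV per tenfold change (Nernstian)
SODIUM_E0_MV = 10.0        # hypothetical electrode potential at 1 mM sodium

def glucose_um(current_na: float) -> float:
    """Glucose concentration (uM) from sensor current (nA), linear calibration."""
    return current_na / GLUCOSE_NA_PER_UM

def sodium_mm(voltage_mv: float) -> float:
    """Sodium concentration (mM) from electrode voltage (mV), log-linear calibration."""
    return 10 ** ((voltage_mv - SODIUM_E0_MV) / SODIUM_SLOPE_MV)

print(glucose_um(117.5))   # 117.5 nA of current -> 50 uM glucose
print(sodium_mm(69.2))     # 69.2 mV -> 10 mM sodium (one decade above baseline)
```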
The challenge now is to figure out whether and how these measurements correspond to meaningful changes in health. So Gao is working with exercise physiologists on clinical studies to look for correlations that will help spot signs of trouble before it’s too late.
When we meet at a café in Beijing’s 798 Art District, a creative hub in China’s capital, Jiawei Gu has turned off the notification pings from Tencent’s WeChat, China’s ubiquitous messaging app, on his smartphone. When he glances quickly to check the screen, he has “more than 17,000 unread messages.” The way we interact with information technology is broken, he says: “I don’t want to be captive for checking buzzes.”
Gu is Baidu’s go-to engineer for designing better models of “human-computer interaction.” One example, DuLight, is an AI interface that helps blind or vision-impaired people. A camera mounted on a headset or a user’s phone can scan bills, train schedules, labels on boxes, or just about anything; the objects or words are then identified, using deep-learning algorithms and the processor on a mobile phone, and translated into speech that the user hears through an earpiece. “The facial recognition function is also getting really good,” says Gu.
Gu’s vision of the future is one in which people can enjoy the benefits of technology without being captive to cords and notification buzzes. “I want to bring humans back to an unplugged age,” he says.
No matter how good your smartphone camera is, it can show you only a fraction of the detail Alex Hegyi can with the one he’s built at Xerox’s PARC in Palo Alto, California. That’s because Hegyi’s camera also records parts of the spectrum of light that you can’t see.
Since Hegyi’s camera logs a wider range of wavelengths, it can be used for everything from checking produce at the grocery store (fruits increasingly absorb certain wavelengths as they ripen) to spotting counterfeit drugs (the real ones reflect a distinctive pattern). In the near future, Hegyi hopes, his technology can be added to smartphone cameras, so anyone can make and use apps that harness so-called hyperspectral imaging.
Such systems have been around for years, but they have been big and expensive, limiting them to non-consumer applications like surveillance and quality control for food and drugs. His version, which is much simpler and more compact, relies on a black-and-white USB camera. He adds a liquid crystal cell, set between polarizing filters, in front of its image sensor. He also created software, which he runs on a connected tablet computer, to process the images.
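A liquid crystal cell between polarizers acts as a variable retarder: as the retardance is swept, each pixel records an interferogram, and a Fourier transform of that interferogram recovers the pixel’s spectrum. A single-pixel toy reconstruction with illustrative numbers (this sketches the general Fourier-transform approach, not PARC’s exact processing):

```python
import numpy as np

# Simulate one pixel viewing two narrow spectral lines.
retardance = np.linspace(0, 50e-6, 2048)   # optical path delay swept by the cell (m)
wavenumbers = np.array([1.5e6, 2.2e6])     # line positions, cycles per meter
weights = np.array([1.0, 0.5])             # relative line intensities

# Interferogram: each spectral component contributes a cosine in retardance.
signal = sum(w * np.cos(2 * np.pi * k * retardance)
             for w, k in zip(weights, wavenumbers))

# Fourier transform of the interferogram recovers the spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(retardance), d=retardance[1] - retardance[0])
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))   # two peaks, near the two input wavenumbers
```

The strength of this design is that the spectrum comes out of software: the hardware only needs a cheap monochrome sensor and a retarder it can sweep.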
Three to five years from now, Hegyi thinks, your phone could be revealing information that isn’t available in the visible spectrum of light. With such a tool, he says, “consumers themselves don’t have to know anything about wavelengths—they can take a picture and the display can say ‘counterfeit’ or ‘real.’” Or it might say the peach is ripe.
Growing up in rural Montana, Kendra Kuhl watched the namesake ice formations of nearby Glacier National Park shrink. “We could see global warming happening,” she says. The sight drove her professional ambitions. “I liked the idea of putting atoms together in new ways that are potentially friendly to the environment,” she says.
That’s just what Kuhl hopes to do through the startup she cofounded in 2014. Opus 12 is working on a reactor that will take the carbon dioxide emitted by power plants and make useful chemicals from it.
At Cyclotron Road, a startup incubator at the Lawrence Berkeley National Laboratory, Kuhl shows off one of Opus 12’s prototypes, a small reactor with an input for carbon dioxide and an output spigot connected to an instrument that analyzes the products. The key to the technology is the design of the reactor, which incorporates a family of catalysts she helped develop during her graduate work at Stanford University. Sandwiched inside the metal reactor chamber is an electrode that uses a membrane coated with the catalysts. They enable the carbon reactions to occur at low temperature and pressure, without requiring large amounts of energy.
Opus 12 is not the first company to work on converting carbon dioxide into widely used chemicals. But its improved catalysts and scalable reactor design set the company apart, says Kuhl. Still, the company has far to go before it can begin competing with traditional chemical suppliers. By the end of 2017, Opus 12 plans to build a reactor with a stack of electrodes that can produce several kilograms of product a day.
Computer designers have long desired a universal memory technology to replace the combination of RAM—which is fast but expensive and volatile, meaning it requires a power supply to retain stored information—and flash, which is nonvolatile but relatively slow.
The urgency is increasing as Moore’s Law, which for so long governed the blistering pace at which silicon transistors shrank, begins to peter out. If we can’t fit many more transistors on a RAM chip, we need to find a fast, cheap new nonvolatile memory technology that can store vast amounts of data.
One promising alternative to the combination of RAM and flash is phase-change materials. This new type of memory stores data not by turning electric current on and off in transistors but by switching a type of material called chalcogenide glass between amorphous and crystalline states. Potentially, it is fast like RAM and nonvolatile like flash. Since 2010, Desmond Loke and his colleagues have solved several critical problems holding up its commercialization.
As a result of the advances, the Singapore researcher has now created a version of phase-change memory that is as fast as RAM chips and packs in many times more storage capacity than flash drives.
For years, researchers have been unable to get the speed at which a material changes from an orderly crystal to amorphous glass—the 1 and 0 states—any faster than about 50 nanoseconds, whereas RAM chips take less than a nanosecond to switch transistors on or off. But by applying a small, constant charge to the material, Loke found he could reduce switching time to half a nanosecond. He and his coworkers also reduced the size of a memory-cell bit to just a few nanometers. And he figured out how to vastly reduce power consumption and allow cells to be stacked in three dimensions to pack in even more memory capacity.
To truly understand the human genome, we need better insight into how individual cells differ. While every cell in a person’s body has basically the same DNA blueprint, there’s great variation in the way that genetic information is actually acted on, or expressed, at any given time. It’s the reason one cell becomes a neuron that plays a role in memory, while another cell becomes part of a person’s toenail. Even a given organ, like the brain, encompasses different types of cells, and individual cell types, too, have variations. Inadequate knowledge about how genes are expressed in different cells is greatly hampering progress in genomic medicine.
Evan Macosko has helped invent a technology called Drop-Seq, which allows a researcher to look at thousands of cells, one by one, to determine how each is carrying out its genetic instructions. Such analysis of a single cell can be done with existing tools, but it is typically painstaking, expensive work that involves dropping individual cells into tiny wells. “If you get two cells in a well, you’re screwed,” says Macosko.
To greatly speed up the process, Macosko figured out how to take each cell he wanted to analyze, break it apart, and attach the expressed genes to a tiny bar-coded bead. Once material from each cell is labeled, the genes can be analyzed rapidly—all for a cost of just seven cents a cell.
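The downstream analysis is conceptually simple: every sequencing read carries its bead’s cell barcode, so grouping reads by barcode yields one expression profile per cell. A minimal sketch with made-up barcodes and gene names:

```python
from collections import Counter, defaultdict

# Each read: (cell_barcode, gene). In real data the barcode comes from the
# bead's oligo sequence and the gene from aligning the transcript to a genome.
reads = [
    ("ACGTACGT", "Actb"), ("ACGTACGT", "Actb"), ("ACGTACGT", "Gapdh"),
    ("TTGGCCAA", "Snap25"), ("TTGGCCAA", "Actb"),
]

# Group reads by barcode to build one expression profile per cell.
expression = defaultdict(Counter)
for barcode, gene in reads:
    expression[barcode][gene] += 1

for barcode, counts in expression.items():
    print(barcode, dict(counts))
```

Because the barcodes let thousands of cells share one sequencing run, the marginal cost per cell drops to the seven cents the article cites, rather than the cost of a separate experiment per well.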
Macosko says he and his team have nearly finished profiling hundreds of thousands of cells spanning most of the mouse brain. Next stop: the 86 billion neurons and innumerable other cells that make up the human brain. By analyzing the great variation in the cells in our brains, he hopes to identify the rogue cells that are malfunctioning or interfering with normal function in disorders like schizophrenia, autism, and Alzheimer’s.