People often find robots baffling and even frightening. Leila Takayama, a social scientist, has found ways to smooth out their rough edges. Through numerous studies and experiments that look at how people react to every aspect of robots, from their height to their posture, Takayama has come up with key insights into how robots should look and act to gain acceptance and become more useful to people.
Takayama has had an especially big influence on the design of an advanced robot from Willow Garage, the startup she works for in Menlo Park, California. Called PR2 (see “Robots That Learn from People”), it’s an early prototype of a new generation of robots that promise to be indispensable to the elderly, people with physical challenges, or anyone who simply needs a little help around the home or office.
PR2 can fold laundry and fetch drinks, among other impressive tasks. But Takayama suspected that the nest of a half-dozen cameras originally perched on PR2’s head would alienate users. To find out, she turned to crowdsourcing, showing images of the robot head to an online audience recruited for the purpose. The results verified her concerns, and she successfully lobbied to jettison all but a few of the cameras, some of which were redundant.
More recently, Takayama has devoted effort to improving a robot called Project Texai, which is operated directly by humans rather than running autonomously. She ran an extensive field study to find out how Project Texai fit into the office environment of several different companies, coming by each office every two weeks to collect feedback and observe interactions between on-site staff and robots operated by remote colleagues. That study led to a surprising insight: “When you control a telepresence robot, there comes a point for a lot of people when they feel as if the robot is their body,” she explains. “They don’t want people to stand too close or touch the buttons on the screen.”
She also discovered that people in the offices ended up being less comfortable with Project Texai if they were allowed to dress it up. Personalizing the robot led people to feel more possessive about it and less accepting of the fact that someone else was controlling it. Project Texai should be personalized, Takayama concluded, but only by the “pilot,” and not by those who are around the machine. She also found that robot size can have a big impact on acceptance and is conducting a study to nail down the optimal height for Project Texai. Another key question: is it better to have the robot at eye level with a person who is sitting or standing?
Takayama is now conducting home interviews with the elderly and disabled to figure out which sorts of tasks would be most helpful to them. She predicts that someday soon, older people will employ personal robots to help them communicate with family and friends.
Tests to detect rheumatoid arthritis, lupus, and other autoimmune diseases can cost hundreds of dollars and take days, and they aren’t always accurate. To address those shortcomings, Ryan Bailey, a chemist at the University of Illinois, developed a silicon testing chip that fuses optical sensor technology with semiconductor fabrication methods.
Bailey’s chip is faster and more sensitive than many other optical tests, which typically look for color changes or fluorescence in response to telltale proteins. It also outperforms many tests that detect changes in the electric charge of proteins and DNA.
The device can detect vanishingly small concentrations of proteins in 10 minutes or less—which means test results can be put to clinical use during an office visit. For most assays, samples can be placed on the chip without any of the preparation required in current systems, making the test easy to run with little training. And at about one dollar per test, it costs a fraction as much as most others.
Each silicon chip has an array of 30-micrometer-wide rings. Each ring can be coated with a molecular trap for a different protein, gene, or biomarker. If light of a certain wavelength shines onto the empty rings, it will resonate and appear brighter to an optical scanner positioned over the chip. When a sample is washed over the chip, any sought-after molecules in the sample will be trapped on the rings—and the change causes the light to resonate at a different wavelength. The wavelength also varies with the amount of trapped material.
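The readout step the paragraph describes—turning a resonance-wavelength shift into a concentration—can be sketched roughly as follows. This is an illustration of the general principle, not Genalyte's actual software; the calibration slope, baseline, and biomarker names are hypothetical.

```python
# Hypothetical sketch: converting a ring's resonance-wavelength shift
# into an analyte concentration via a linear calibration curve.

def concentration_from_shift(shift_pm, sensitivity_pm_per_ng_ml, baseline_pm=0.0):
    """Estimate analyte concentration (ng/mL) from a resonance shift.

    shift_pm: measured resonance-wavelength shift in picometers.
    sensitivity_pm_per_ng_ml: calibration slope (shift per unit concentration).
    baseline_pm: shift observed with a blank sample (drift, nonspecific binding).
    """
    net_shift = shift_pm - baseline_pm
    if net_shift <= 0:
        return 0.0  # nothing above background was captured on the ring
    return net_shift / sensitivity_pm_per_ng_ml

# A chip carries many rings, each coated to trap a different biomarker,
# so one optical scan yields a whole panel of concentrations at once.
shifts = {"anti-dsDNA": 42.0, "anti-Sm": 3.0, "control": 0.5}
panel = {name: concentration_from_shift(s, sensitivity_pm_per_ng_ml=2.0,
                                        baseline_pm=0.5)
         for name, s in shifts.items()}
```

Because each ring reports independently, a single sample wash can screen for dozens of markers in parallel—the property that makes the 128-ring chips mentioned below useful.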
In 2007 Bailey helped launch a company called Genalyte; it recently introduced its first diagnostic assay for connective-tissue autoimmune diseases, with a focus on lupus.
The company is also working on applications of the technology in diagnostics for cancer and for cardiovascular and neurodegenerative disease. It is currently producing chips with 128 rings, but Bailey expects the number to go up. His group is also working to simultaneously detect two different kinds of biological molecules on a single chip, such as a protein and an RNA molecule.
Is there a way for a window to reflect heat in the summer and let it through in the winter?
A window that changed in response to the heat might behave in just that way. Sarbajit Banerjee, a materials chemist at the University at Buffalo in New York, is drawing on his work with a compound called vanadium oxide to develop a glass coating that makes this possible.
Banerjee had been studying vanadium oxide because he was interested in the physics of phase transition—for example, the way water freezes as the temperature drops. When the temperature reaches 153 °F, this compound’s crystalline structure changes from one that’s transparent to infrared light—that is, radiated heat—to one that reflects the light.
Using nanofabrication techniques to change the microscopic structure of the crystalline material, Banerjee found a way to lower the temperature at which that change occurs. When the material is formed as long, thin nanowires, it undergoes the transition at a mere 90 °F. A researcher at a window company suggested that this version had good characteristics for a switchable window coating.
Banerjee was able to bring that temperature down even further by mixing tungsten into the material. And perhaps most promising of all, he found that he could trigger the transition at a range of temperatures by sending an electric current through the material—holding out the promise of changing a room’s temperature with the flip of a switch, and without racking up an energy bill.
Banerjee is now in the process of licensing his heat-blocking window coating to a U.S. building-materials company; he predicts that it will cost just 50 cents per square foot. He also has a partnership with Tata Steel, a global manufacturer headquartered in Mumbai, India, and they are looking at how to use the material to deflect heat from the corrugated-steel roofs that commonly turn houses stifling in India and other parts of the developing world.
Office towers and commercial buildings account for nearly one-fifth of all energy consumed in the United States. Burcin Becerik-Gerber has found a cheap way to cut a building’s energy use by a third.
Today’s smart buildings can be programmed to default to energy-thrifty measures, such as turning down the heat or air-conditioning and turning off unnecessary lights—but occupants often just crank everything back up, or even work against the system by plugging in space heaters or opening windows. An assistant professor of civil and environmental engineering at the University of Southern California, Becerik-Gerber has come up with a way to save energy by essentially getting buildings to “negotiate” with their occupants, factoring in the perceptions and desires of each.
The system uses occupants’ smartphones to open up a line of communication. Becerik-Gerber worked with colleagues in social psychology and computer science to design an app that asks people how satisfied they are with the work environment’s current temperature, lighting, air quality, and even noise level. System software then fashions each user’s consumption patterns and preferences into a virtual “agent” that resides in his or her smartphone. “The agent works for you and tries to look after you,” she explains.
The system then works with all the building’s agents to find the most energy-efficient way of adjusting the settings so as to make the greatest number of people happy. To improve the results, it asks those users demanding more energy-intensive conditions if they’d be willing to compromise a bit, and it tells them what the resulting energy savings would be. “If people understand the consequences, they’re more tolerant,” says Becerik-Gerber. The optimized settings are then put in place and monitored automatically.
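The negotiation the system performs can be thought of as a simple optimization: search candidate settings for the one that satisfies the most occupants while penalizing energy-hungry choices. The sketch below is an illustrative toy, not Becerik-Gerber's actual algorithm; the comfort bands, energy proxy, and weighting are all assumptions.

```python
# Illustrative sketch (not the USC system): a building searches candidate
# temperature setpoints for the one that satisfies the most occupant
# "agents" while penalizing settings that demand more energy.

def satisfied(preference, tolerance, setpoint):
    """An agent is satisfied if the setpoint falls within its comfort band."""
    return abs(setpoint - preference) <= tolerance

def energy_cost(setpoint, outdoor_temp):
    """Toy proxy: cost grows with how far the air is conditioned from outdoors."""
    return abs(setpoint - outdoor_temp)

def negotiate(agents, candidates, outdoor_temp, energy_weight=0.5):
    """Pick the setpoint maximizing satisfaction minus weighted energy cost.

    agents: list of (preferred_temp, tolerance) pairs, one per occupant.
    candidates: the setpoints (deg F) the building will consider.
    """
    def score(setpoint):
        happy = sum(satisfied(p, t, setpoint) for p, t in agents)
        return happy - energy_weight * energy_cost(setpoint, outdoor_temp)
    return max(candidates, key=score)

agents = [(70, 2), (72, 1), (74, 3), (71, 2)]   # occupants' comfort bands
best = negotiate(agents, candidates=range(68, 78), outdoor_temp=90)
```

In this toy run the building settles on 72 °F: everyone's comfort band contains it, and it costs less than cooling further below the 90 °F outdoor air. The real system additionally asks the most demanding users whether they will compromise, which shifts their comfort bands before the optimization runs.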
Finding an optimal solution for as few as five occupants is difficult. Finding a way to coördinate the preferences of hundreds was massively challenging. The problem is especially acute in today’s popular open-plan offices: people with very different preferences often share space, typically guaranteeing that most of them will be unhappy with the environmental settings. But Becerik-Gerber’s simulations indicate that her algorithms could satisfy some 70 percent of occupants—while reducing overall energy consumption by more than 30 percent.
PROBLEM: Many power plants connected to the grid operate well below their full capacity, wasting fuel. But because there is no practical way to store large amounts of electricity or reliably predict power demand, maintaining idle capacity is the only way to respond quickly to surges in demand. The problem is particularly challenging in China, a huge consumer of electricity. Its push to add thousands of wind turbines, with their variable, difficult-to-predict output, will make it even harder to efficiently balance supply and demand.
SOLUTION: Software from electrical engineer Qixin Chen of Tsinghua University in China accurately forecasts power demand and helps utilities coördinate power plants. His software is already in use in nearly 200 cities and 10 provinces in China. One province, he says, reported saving $30 million and 240,000 tons of coal in a single year.
Chen found two ways to improve on existing demand-forecasting software. First, he designed the system to better choose the right forecasting approach for particular areas; differences in demand and weather patterns mean that some techniques are much better suited to some locations than others. Then he enabled his system to analyze its own previous prediction errors and adjust its formulas so as to minimize the errors the next time similar conditions occur. The resulting demand forecasts are reliable a month ahead. Other forecasting systems, in contrast, aren’t sufficiently accurate beyond a day or two, if that.
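The second of Chen's two improvements—learning from past prediction errors under similar conditions—can be sketched as a correction layer over a base forecast. Everything below is illustrative: the base model, the "similar conditions" bucketing, and the numbers are assumptions, not Chen's software.

```python
# Hedged sketch of a self-correcting forecaster: make a base prediction,
# then nudge it by the average error seen under similar past conditions.

from collections import defaultdict

class SelfCorrectingForecaster:
    def __init__(self, base_model):
        self.base_model = base_model        # e.g. the model chosen per region
        self.errors = defaultdict(list)     # condition bucket -> past errors

    def forecast(self, conditions):
        raw = self.base_model(conditions)
        past = self.errors[self._bucket(conditions)]
        correction = sum(past) / len(past) if past else 0.0
        return raw + correction             # adjust by mean past error

    def record_actual(self, conditions, actual):
        """After the fact, store how far off the base model was."""
        raw = self.base_model(conditions)
        self.errors[self._bucket(conditions)].append(actual - raw)

    @staticmethod
    def _bucket(conditions):
        # Group "similar conditions": weekday/weekend and a temperature band.
        return (conditions["weekend"], round(conditions["temp_c"] / 5))

# Toy base model: demand (MW) rises with temperature (air-conditioning load).
base = lambda c: 1000 + 20 * c["temp_c"]
f = SelfCorrectingForecaster(base)
hot_weekday = {"weekend": False, "temp_c": 30}
f.record_actual(hot_weekday, actual=1680)   # base said 1600; 80 MW low
f.record_actual(hot_weekday, actual=1640)   # 40 MW low
```

Having seen that it runs low on hot weekdays, the forecaster now adds the mean shortfall (60 MW) the next time those conditions recur—the feedback loop that, at scale and with far richer models, lets Chen's forecasts stay reliable a month out.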
The results are helping utilities dole out electricity more efficiently. Now Chen is working to adapt his forecasting software to predict the power output of wind turbines. His system would take into account wind data gathered for miles around the turbines, providing a sharper picture of which wind shifts are likely to affect them in the coming hours. That means utilities can know when to expect more power from the turbines so they can cut back on conventional power generation.
While doing his doctoral studies at Caltech, William Chueh showed that heat from the sun can turn cerium oxide—a relatively cheap material—into an effective catalyst for splitting water to yield hydrogen that can be used to make fuel. Most other hydrogen extraction processes rely on expensive catalysts made from precious metals such as platinum. “There’s simply not enough of those metals to make a dent in our fuel needs,” says Chueh, who is now a materials scientist at Stanford University.
His process relies on mirrors of the type that some solar plants use to concentrate sunlight by a factor of 1,500. The sunlight heats the cerium oxide to 1,500 °C, driving out its oxygen. As the cerium oxide cools, steam is fed to it, which then gives up its oxygen to the oxygen-starved material, freeing hydrogen gas. The hydrogen can be collected, and the cerium oxide can be reheated to repeat the process.
Chueh has used the same process to split carbon dioxide. The resulting carbon monoxide can be combined with the generated hydrogen to make hydrocarbon fuel such as methane—a renewable alternative to extracting it from the earth. The technique generates about 100 times more carbon monoxide than previous processes for a given amount of energy.
Chueh’s idea is to use his catalyst in combination with the type of large solar concentrators now used in power generation. Meanwhile, he’s working to make cerium oxide–based hydrogen generation work at lower temperatures, because the only containers that can hold the material at 1,500 °C without melting are made of exotic alloys that cost too much. He’s already developed a hybrid of cerium oxide and another material that shows the potential to work at 500 °C, which would allow the use of stainless-steel vessels.
Clean energy tends to come with drawbacks. Hydrogen has such low density that it’s hard to compress a useful amount of it into a container small enough to be practical; natural gas is more costly to transport and transfer than liquid fuels; batteries hold relatively little energy for their size and weight. But MIT chemistry professor Mircea Dincă has come up with a promising way to solve all these problems: sponges.
Dincă uses organic and metallic materials to form his sponges, which are so thoroughly riddled with microscopic chambers that in some cases, the surface area of just a gram would cover a football field if laid out flat. By mixing and matching these building blocks, he is able to control the size of the tiny chambers. Different configurations have different chemical and electrical properties.
Getting enough hydrogen on board a hydrogen-powered car requires either ultrahigh-compression tanks or cryogenic fuel tanks, but neither of these methods stores enough hydrogen to meet the U.S. Department of Energy’s target: a vehicle that can travel 300 miles without refueling. Dincă came up with a sponge capable of trapping twice as much hydrogen as ordinary tanks in a given volume. Adding a bit of heat or relieving some pressure coaxes the sponge to release the hydrogen when it’s needed.
Dincă’s sponges also make great sites for catalytic reactions, because the whole inner surface can be coated with a catalyst; the reaction can be controlled by altering the size of the sponge’s pores. He is developing variants of the sponges that could transform methane into a liquid fuel by efficiently catalyzing reactions that strip oxygen from air. He is also working on turning these sponges into materials for batteries and for organic photovoltaics.
In 1999 Daniel Ek was a 16-year-old Swedish programmer, getting rich building websites, when he started asking what he himself now says was a dumb question: How do you get people to pay for music that can, if illegally, be downloaded free—and without charging them for each song, the way Apple’s iTunes service does now?
Ek’s eventual solution: Spotify, a jukebox in the cloud that provides legal, on-demand access to millions of songs. Supported by paying subscribers, as well as by radio-style ads played only to nonsubscribers, the service debuted in the United States last year after operating for three years in Europe; it now has more than 15 million users, four million of whom pay. With an estimated value of $4 billion, Spotify is one of the hottest Internet companies in the world.
Spotify isn’t the only service to let listeners stream music on demand. But it distinguishes itself from Internet radio services like Pandora and Slacker through the vastness of its music libraries and its deep integration into social media. Spotify lets users seamlessly share playlists and swap music on social networks like Facebook and Twitter. And Spotify makes it easy for others to build apps that work with its platform in order to give users yet more ways to discover and share music. “The trick was to think through the social aspect of the service from the very beginning,” says Ek. “We didn’t want it to be an afterthought.”
Spotify’s users can access some 16 million songs—about 15 times more than Pandora makes available. The service offers all those terabytes of music without revealing any of the licensing complexities involved in the process. Ironing out the needed deals with record companies while refining the service ate up two years of Ek’s time before he launched in Europe in 2008. And it took a team of software engineers—the company now has 250 of them—to make the service easy to use in spite of all the programming code that works in the background to prevent music from being illegally copied and distributed. “The best thing about Spotify is that it works at all,” says Ek. “If you’re in Spain and you want to share your music with someone in the U.K., you don’t want to see how we take care of paying licensing fees in both places.”
Now Ek is trying to find ways to make it as easy to find and play music as it is to find and play videos on YouTube. This year the company introduced a radio service for computers and mobile devices, launched its first iPad app, and made it possible to embed a Spotify play button into any website. The Huffington Post, the blogging site Tumblr, and Rolling Stone’s website are among the many that now offer music that way.
For a man capable of turning his teenage vision into a mushrooming empire, Ek claims a surprisingly simple strategy for continued growth. “I just keep asking dumb questions,” he says.
Computers are good with information—but oblivious to our feelings. That’s a real shortcoming, believes MIT Media Lab scientist Rana el Kaliouby, because it leaves them unable to usefully respond to many of our needs until we take the trouble to tap out instructions. To close that gap, el Kaliouby has come up with technologies that help computers recognize facial expressions and other physical indicators of how someone is feeling. Someday this could help make our machines more adept at assisting us.
El Kaliouby is not the first researcher to try to map facial expressions. But where others have focused on trying to get computers to recognize a half-dozen exaggerated expressions recorded in the lab, she is identifying the more varied and subtle faces that people commonly make. “It’s a problem that requires pushing the state of the art of computer vision and machine learning,” she says.
To break the problem down, she zeroed in on 24 “landmarks” on the face. Then she trained a computer to identify how those parts of the face change shape in response to different emotions, creating expressions such as a furrowed brow. To ensure that the technology would work with people in different cultures, el Kaliouby, who lives in Cairo and spends one week a month at MIT, enlisted the help of thousands of people on six continents. They have allowed their computers’ embedded cameras to record their expressions while they watch a video, resulting in what she says is the largest database of facial images in the world.
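The landmark approach lends itself to a simple geometric framing: describe a face by how far each landmark has moved from that person's neutral expression, then match the pattern against known expressions. The sketch below uses a bare nearest-centroid rule with two landmarks; el Kaliouby's actual systems train far richer machine-learning models on thousands of labeled faces, so treat every name and number here as illustrative.

```python
# Illustrative sketch only: classifying an expression from facial-landmark
# displacements with a nearest-centroid rule.

import math

def displacement_features(landmarks, neutral):
    """Per-landmark (dx, dy) offsets from the person's neutral face."""
    return [(x - nx, y - ny) for (x, y), (nx, ny) in zip(landmarks, neutral)]

def distance(f1, f2):
    """Euclidean distance between two displacement-feature vectors."""
    return math.sqrt(sum((a - c) ** 2 + (b - d) ** 2
                         for (a, b), (c, d) in zip(f1, f2)))

def classify(features, centroids):
    """Return the label of the closest average ('centroid') expression."""
    return min(centroids, key=lambda label: distance(features, centroids[label]))

# Toy example with two landmarks (inner brow, mouth corner) and two
# expression centroids learned, hypothetically, from labeled training faces.
neutral = [(0.0, 0.0), (0.0, 0.0)]
observed = [(0.0, -1.8), (0.0, 0.1)]   # brow pulled sharply down
centroids = {
    "furrowed_brow": [(0.0, -2.0), (0.0, 0.0)],
    "smile":         [(0.0, 0.0), (1.0, 1.0)],
}
label = classify(displacement_features(observed, neutral), centroids)
```

Normalizing against each person's own neutral face is what makes the same model transferable across the thousands of volunteers on six continents: it measures change, not absolute geometry.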
One early experimental application of the technology was a set of camera-equipped glasses intended for people with Asperger’s syndrome, who tend to have difficulty recognizing others’ emotional states. The device could recognize whether someone facing the wearer appeared bored; if so, it could use small lights in the glasses to signal that to the wearer. (El Kaliouby herself was known to sport a head-cam in and out of the lab, tucked into the head scarf she wears.)
El Kaliouby has cofounded a company called Affectiva in Waltham, Massachusetts, to commercialize the facial recognition technology and a wristband that she helped develop to measure skin conductance, which is associated with emotional arousal and can be used to detect anxiety in real time. For now, Affectiva uses facial recognition mainly to give advertisers a better sense of how their ads are affecting viewers. The company convenes enormous virtual focus groups made up of online viewers who allow their expressions to be tracked, and then analyzes the resulting data. But in the longer term, el Kaliouby also wants to bring her technology to classrooms to help teachers identify which material students respond to best.
The technology could eventually become a critical component of many electronic devices, making it possible for them to recognize when we’re puzzled, frustrated, happy, or sad—and enabling them to respond with the right information, music, or human assistance. And there’s a lot to be said for getting our phones, PCs, and GPS systems to recognize when we just want to be left alone.
Robotic limbs are usually packed with multiple powerful motors, making them heavy and bulky. Engineer Ken Endo hit on an idea for lightening and streamlining the limbs: replacing some of the motors with a series of springs. His goal isn’t to build better robots; rather, he wants to make prosthetic limbs and orthopedic devices that can, as he puts it, “eradicate disability.” He hopes to make artificial limbs that function nearly as well as real ones, affording amputees near-effortless motion with no discomfort.
Endo had been focused on building more advanced robots until about seven years ago, when he found himself moved by the determination of a friend who had lost his legs to bone cancer. “He said he wanted to walk by himself,” Endo says. “That’s when I changed my research focus from robots to biomechanics.”
As a PhD student working in the MIT Media Lab’s Biomechatronics Group, led by Hugh Herr, Endo created the first computer program that closely simulates human walking, a surprisingly complex motion. Now back in his native Japan as a researcher with Sony, he’s enlisting that model to build legs with spring-based ankle and knee joints that he says work much like the real things. “The ankle joint also requires a motor,” he notes, “because the human ankle generates a huge amount of mechanical power.” But most of the work will be done by the springs, he says, making the legs far more efficient and leaving the wearer less tired and sore. Endo is now perfecting his joints on a walking robot. He hopes to have the bugs smoothed out in mere months, at which point he’ll start working to make the device suitable for amputees.
Another big challenge Endo has taken on is making prostheses affordable. More than half of all amputees live in poor countries, where many are victims of land mines. The price tag of $35,000 or more for a high-quality prosthetic leg in the United States is far out of reach for the vast majority of these amputees.
To address that, Endo has been working to design prostheses specifically for people in developing countries and to find ways to distribute them there. He has already achieved one breakthrough: a leg costing about $30 whose knee joint can bend when the leg is lifted off the ground but locks into place when the leg is weighted, leading to a less effortful, more natural-looking gait.
There’s never been a great way to safely and accurately test what’s going on in the womb. The mother’s bloodstream contains some fetal cells, but not many of them, so a maternal blood sample rarely yields enough for a useful analysis. Now Christina Fan has come up with an approach to measuring the chromosomes and genes in the fetus without having to isolate the fetal cells, enabling her to develop tests for Down syndrome and a range of inherited and other conditions.
While still a graduate student in bioengineering at Stanford, Fan developed a DNA sequencing technique as well as an algorithm for estimating how many of certain chromosomes—such as chromosome 21, the one implicated in Down syndrome—should be present in a sample of the mother’s blood if the fetus is contributing the expected number. Any excess in the sample means the fetus has more than the normal number of the chromosome, indicating that the child is likely to have Down syndrome. There are other blood tests for this condition, which affects cognitive and physical development, but these tests are much less accurate. There are also more accurate tests performed on fluid drawn from the amniotic sac, but collecting this fluid carries a small chance of triggering a miscarriage.
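The counting logic at the heart of the test can be sketched in a few lines: measure what fraction of sequenced DNA fragments map to chromosome 21, and flag samples whose fraction sits far above the range seen in pregnancies with the usual two fetal copies. The numbers and the three-standard-deviation cutoff below are illustrative, not Fan's published parameters.

```python
# Hedged sketch of the chromosome-counting idea: compare a sample's
# chromosome-21 read fraction against a reference set of normal pregnancies.

def chr21_zscore(chr21_reads, total_reads, ref_mean, ref_std):
    """Standard score of this sample's chr21 fraction vs. the reference set."""
    fraction = chr21_reads / total_reads
    return (fraction - ref_mean) / ref_std

def likely_trisomy_21(chr21_reads, total_reads, ref_mean, ref_std, cutoff=3.0):
    """An excess beyond `cutoff` standard deviations suggests an extra copy."""
    return chr21_zscore(chr21_reads, total_reads, ref_mean, ref_std) > cutoff

# Suppose reference pregnancies put about 1.35% of reads on chr21
# (illustrative figures). This sample shows 1.42% -- a clear excess.
z = chr21_zscore(chr21_reads=14_200, total_reads=1_000_000,
                 ref_mean=0.0135, ref_std=0.0002)
```

The key subtlety is that the excess is diluted: fetal DNA is only a small share of the cell-free DNA in maternal blood, so the extra chromosome shows up as a small but statistically detectable bump rather than a 50 percent jump.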
Fan realized that to expand her work to other inherited conditions, she had to go beyond simply counting chromosomes and look at the genes associated with those conditions. She was able to adapt her chromosome technique for these other conditions by calculating, from an analysis of both the mother’s and father’s cells, how much of a certain type of gene ought to turn up in a sample of the mother’s blood. If the sample contains higher levels than expected, the excess is coming from the fetus. “We used this method to build the entire inherited fetal genome from maternal blood,” she explains.
Some of the conditions that are detectable this way can be prevented from causing problems if they’re treated promptly at birth. The metabolic disorder phenylketonuria, for example, can be managed through diet if treatment begins when the patient is a newborn.
People studying public-health issues must cope with surprisingly shoddy data. Plenty of numbers are available, but epidemiologists and policy makers often don’t trust them, because they are frequently incomplete, inconsistent, and inaccurate. “When I came to global health, I was shocked by how little we knew,” says mathematician Abraham Flaxman, an assistant professor of global health at the Institute for Health Metrics and Evaluation at the University of Washington.
In response, Flaxman has developed improved models and algorithms that can automatically fill in the gaps in flawed health data. His breakthrough approach, which is now widely used, came from a realization that improving the quality of a large data set requires not just analyzing it on its own but also cross-analyzing it against other relevant data sets that have at least some variables in common.
Flaxman started off as a postdoctoral fellow with Microsoft Research’s Theory Group, where he studied complex networks, but he soon yearned to apply his mathematical and modeling skills to big health problems. When he made the jump to academia, he immediately discovered that public health was beset by serious data problems, and he began trying to address those problems.
Flaxman is using his methodology to track the spread and treatment of a wide range of diseases. His latest model, called DisModIII, starts with all the available data on the incidence and mortality of a specific disease. It then integrates and cross-analyzes the data to produce consistent estimates of the way the disease progresses through a population as a function of age, time, gender, and geography.
About 800 researchers use DisModIII to track more than 140 diseases, including hepatitis B and cholera. Researchers and policy makers had long dealt with data that had to be taken on faith and data analysis tools that were unique to each disease. Applying Flaxman’s one tool to different data sets covering many different diseases provides credible, apples-to-apples comparisons of their relative impact. That helps policy makers direct health funds to the interventions likely to save the most lives.
Flaxman continues to come across new areas in which his modeling approaches can play a pivotal role. One of them is determining causes of death. Death certificates in developing countries are often incomplete or inaccurate—if they exist at all. The conventional approach is to collect information about symptoms and other matters from relatives and friends of the deceased person, and have a physician review the results to make an educated guess—a labor-intensive technique called a “verbal autopsy.”
Flaxman created a computer program that examined available information about a wide range of deceased people whose causes of death were known in order to come up with accurate correlations between observations and causes. Now the software can determine causes of death far more cheaply than physicians can conduct verbal autopsies, getting it right more often as well.
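One simple way such a program could learn symptom-to-cause correlations from records with known causes is a naive-Bayes tally, sketched below. This is an illustration of the general idea only; Flaxman's published work uses more sophisticated machine-learning methods, and the causes and symptoms here are invented.

```python
# Hedged sketch: learn symptom-cause correlations from deaths with known
# causes, then assign the most probable cause to a new symptom report.

from collections import defaultdict

def train(records):
    """records: list of (cause, set_of_reported_symptoms) pairs."""
    cause_counts = defaultdict(int)
    symptom_counts = defaultdict(lambda: defaultdict(int))
    for cause, symptoms in records:
        cause_counts[cause] += 1
        for s in symptoms:
            symptom_counts[cause][s] += 1
    return cause_counts, symptom_counts

def predict(symptoms, cause_counts, symptom_counts):
    """Pick the cause whose training records best match the symptoms."""
    total = sum(cause_counts.values())
    def score(cause):
        p = cause_counts[cause] / total          # prior from known deaths
        for s in symptoms:                       # Laplace-smoothed likelihoods
            p *= (symptom_counts[cause][s] + 1) / (cause_counts[cause] + 2)
        return p
    return max(cause_counts, key=score)

records = [
    ("cardiac", {"chest pain", "sudden onset"}),
    ("cardiac", {"chest pain", "breathlessness"}),
    ("malaria", {"fever", "chills"}),
    ("malaria", {"fever", "sweating"}),
]
model = train(records)
guess = predict({"fever", "chills"}, *model)
```

Once trained, classifying a new verbal-autopsy record is nearly free, which is why the automated approach scales so much more cheaply than physician review.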
Flaxman wants to do much more to improve health data; he’s driven by the knowledge that the policy and public-health decisions informed by his tools are matters of life and death on a large scale. “These are data that really matter,” he says.
A stumbling block to increasing our reliance on electricity from cleaner energy sources such as solar panels and wind farms has always been figuring out how to efficiently store the energy for use when the wind isn’t blowing and the sun isn’t shining. Danielle Fong could make clean energy significantly more practical on a large scale by introducing a novel way to use tanks of compressed air for energy storage. “It could radically reorient the economics of renewable energy,” she says.
The idea of using compressed air to store energy is not new. Electricity from solar panels or wind turbines can turn a motor that’s used to compress the air in a large tank, and the air pressure can then be converted into power to drive a generator when the power is needed. The problem is that during compression the air reaches temperatures of almost 1,000 °C. That means energy is lost in the form of heat, and storage in conventional steel vessels becomes impractical.
Fong stumbled on a possible solution while skimming through a nearly century-old book: water spray is great at cooling air. She asked, why not spray water into the air while compressing it, so that the air stays cool? To make the process practical, she developed a technique for separating the heated water from the compressed air and diverting the water into a tank, so the heat can be recaptured to minimize energy loss. The process is about as efficient as the best batteries: for every 10 kilowatt-hours of electricity that goes into the system, seven kilowatt-hours can be used when needed.
Fong founded a company called LightSail Energy in Berkeley, California, to develop the technology. Initially, she planned to produce compressed-air-powered scooters. But backer Vinod Khosla of the venture capital firm Khosla Ventures convinced her to go after the much bigger market of electricity for the power grid.
Batteries are the current state of the art in storing excess wind and solar energy, but Fong says the LightSail system will cost less to purchase and will last for a decade or more. Over the long term, she says, the system could cost as little as one-tenth as much to own and operate as batteries do. A single system, which is about the size of a shipping container plus a car-size unit, will store the energy generated by a one-megawatt wind turbine running for three hours.
Fong and the LightSail team had to come up with a filtering system capable of separating the water from the highly compressed air. Another challenge was to design a system that could handle both compressing the air and expanding it to drive a generator; previous efforts have required two separate systems.
Not only did LightSail meet those challenges, but it managed to find a compound—the company won’t provide details—that can be used more efficiently than steel to make compressed-air storage tanks. Tanks made from this material also don’t need the costly underground installation that’s normally required. And unlike standard systems, LightSail’s doesn’t need the turbine to run at a fairly constant speed to get efficient compression, meaning it is better able to cope with intermittent wind conditions.
Fong says there are no technical barriers to building units large enough to power entire cities. The company plans to manufacture the systems, and she says several renewable-energy developers have already signed on as customers. The first pilot unit is scheduled to ship in late 2013 or 2014—but she is still hoping to see those compressed-air scooters.
Saikat Guha is convinced that privacy and profit don’t have to conflict online. The Microsoft Research India computer scientist has developed a software platform that allows advertisers to precisely target potential customers without exposing the customers’ personal information.
The trick involves flipping the basic model of targeted advertising. Companies now track your browsing and purchasing behavior and then sell your data to advertisers. But instead of acquiring data from your phone or PC so that companies can send the right ads to websites you visit, Guha’s system calls for companies to send potential ads to you; then software on your device figures out which of them are targeted effectively. Thus, if you search for video games, the software will fetch entertainment-related ads. If your computer or phone recognizes that, say, you often buy DVDs, the device will pick out a DVD ad to show you. Guha’s ad-selecting software could be built into browsers, or into websites such as Facebook. And he estimates that the ads wouldn’t take up significant amounts of memory on your machine.
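The flipped model can be sketched in a few lines. Everything here is an illustrative assumption, not Guha's actual design: candidate ads arrive tagged with target interests, and software on the device scores them against a profile that never leaves the machine.

```python
# Client-side ad selection: the interest profile stays on the device;
# only the choice of ad (not the data behind it) is ever revealed.

def select_ad(candidate_ads, local_profile):
    """Pick the best-matching ad locally from a batch of candidates."""
    def score(ad):
        # Count how many of the ad's target interests appear in the profile.
        return len(set(ad["targets"]) & local_profile)
    best = max(candidate_ads, key=score)
    return best["id"] if score(best) > 0 else None

# Browsing and purchase history, kept local.
profile = {"video games", "dvds", "entertainment"}
ads = [
    {"id": "ad-cars", "targets": {"autos", "insurance"}},
    {"id": "ad-dvd", "targets": {"dvds", "movies"}},
]
print(select_ad(ads, profile))  # ad-dvd
```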
Since companies would never see or store your data, and so could never toss it around the Web and risk accidental leakage, even information normally too private to share with advertisers could be brought to bear in choosing among ads.
Today, for instance, Google can’t determine your birth date unless you offer it up. But Guha’s software might come across it on your PC and use it to enhance the targeting of Google’s ad network, without ever revealing the date to Google. It’s a privacy protection scheme that, unlike almost all others, indirectly gives businesses an even richer set of data to work with.
Guha has also addressed the privacy threat from smartphone apps that package and sell sensitive information such as a user’s name and location. “Today someone could construct a full history of where you are at any given time of the day,” he says. His idea is a platform that cryptographically splits information such as a person’s name, the name of the store the person is visiting, and the amount of time spent at the previous store into disconnected fragments before sending it to the cloud. Software on the phone or tablet could then use all or most of those fragments to target advertisements, but no party involved could connect them to create a privacy-violating portrait of the user.
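The fragmentation idea can be illustrated with a toy version. Guha's platform does this cryptographically; here each field is simply uploaded under an unlinkable random token, with the mapping needed to reunite the pieces held only on the device.

```python
import secrets

def fragment(record):
    """Split a record into (token, field, value) fragments plus a local key map."""
    key_map = {}    # stays on the device
    fragments = []  # what the cloud sees: no two fragments share an identifier
    for field, value in record.items():
        token = secrets.token_hex(8)
        key_map[field] = token
        fragments.append((token, field, value))
    return fragments, key_map

frags, keys = fragment({"name": "Alice", "store": "BookWorld", "minutes": 24})
# Each fragment carries a different token, so no party in the cloud can link
# "Alice" to "BookWorld" without the key map held on the phone.
```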
There will always be those who will try to get around privacy protection schemes to scope out more about you than you care to share. Guha is on top of that problem, too. He’s working on algorithms that detect when websites and apps are surreptitiously using your personal data, so you can block them.
Chris Harrison recently helped develop an invention, called Touché, that can turn practically anything into a computer input device—a table, a doorknob, a pool of water, your hand. To do this, he relies on the natural conductivity of some things, or he adds electrodes to objects that aren’t conductive. Then he wires up a controller that registers the range of electronic signals the objects generate when they are changed by, say, a particular hand gesture or body posture. A sensor attached to a sofa, for instance, can continuously monitor voltage changes to detect the signatures of particular motions and events and link them to actions. A dog leaping on the couch might trigger a harsh noise to scare it off; a person sitting down might cause the TV to switch on. (Yes, even a couch potato’s life can be made easier.)
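The sofa example amounts to matching a measured signal profile against stored templates for known events. This nearest-neighbor sketch is an assumption about the general approach, not Disney Research's actual implementation; the template values are invented.

```python
# Classify a capacitive-sensing profile by finding the closest stored template.

def classify(reading, templates):
    """Return the name of the gesture template nearest to the reading."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: distance(reading, templates[name]))

templates = {
    "sit_down": [0.9, 0.7, 0.4, 0.2],  # response at four probe frequencies
    "dog_jump": [0.3, 0.8, 0.9, 0.5],
}
action = {"sit_down": "tv_on", "dog_jump": "harsh_noise"}

# A new reading close to the "sit_down" signature triggers the TV.
print(action[classify([0.85, 0.72, 0.38, 0.25], templates)])  # tv_on
```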
Harrison, a PhD student in Carnegie Mellon’s Human-Computer Interaction Institute, says his mission is to liberate our fingers from having to command our phones and other devices by poking at squished keyboards and teensy screens. “If you think about all the ways we use our hands, being limited to only poking would make the world really hard to use,” he says.
He is enlisting technologies ranging from cameras to stethoscopes to miniature projectors. Before Touché, which he developed while at Disney Research, he invented a device called Skinput that turns skin into the equivalent of an interactive touch screen: a tiny body-mounted optical system projects “buttons” onto the wearer’s hand and arm and detects any tapping of the buttons so that a device can be controlled. As an intern at Microsoft, he helped create OmniTouch, a roughly similar system that makes it possible to turn any object in the environment into a multitouch screen. And he’s made a device called Scratch Input that uses a modified stethoscope and generic microphone to convert the sound of a fingernail dragging over just about any surface into an electrical control signal.
Harrison notes that as computers become better integrated into almost everything we do, we will find it increasingly convenient to be able to interact with them in a variety of ways, without always having to resort to a screen or keyboard. “Eventually we’ll develop input technologies so good that we don’t need a touch screen,” he says. Our tired fingers salute that quest.
In 2005 John Hering made headlines with a hacking “rifle” called the BlueSniper, which let him take control of a Nokia handset from a record-setting distance of 1.2 miles. But though he’s been a hacker since childhood, Hering isn’t the kind of hacker you have to worry about. In fact, his mission is to keep your cell phone safe from malware.
The BlueSniper stunt was all about exposing security weaknesses in Bluetooth technology. Hering used the attention he got from it to further a more ambitious idea: that there should be a central database of information about phone malware. In 2007 he cofounded Lookout Mobile Security with two college buddies and created a free app that protects Android users from malicious apps—say, a fake version of a game that tacks an easy-to-miss $5 charge onto your monthly smartphone bill. Lookout found 1,000 instances of virus-infected apps last year and found that Android users had a 4 percent chance of encountering malware, a number expected to rise.
To stay on top of the bad guys, Lookout has built what it calls the Mobile Threat Network: a giant database, tallying more than a million rogue apps, that it continuously adds to as the company’s software scans and analyzes apps worldwide. When an Android smartphone owner uses Lookout’s app, it compares installed apps against its database of known threats and notifies the user when it detects a match.
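The matching step can be sketched simply. Fingerprinting an app binary with a hash and looking it up in a threat database is a standard technique; the details below are assumptions, not Lookout's actual code.

```python
import hashlib

def scan(installed_apps, threat_db):
    """Return installed apps whose binary fingerprint matches a known threat."""
    flagged = []
    for name, binary in installed_apps.items():
        fingerprint = hashlib.sha256(binary).hexdigest()
        if fingerprint in threat_db:
            flagged.append(name)
    return flagged

# A database of one known-bad app, and a device with two apps installed.
threat_db = {hashlib.sha256(b"fake-game-v1").hexdigest()}
apps = {"Fun Game": b"fake-game-v1", "Notes": b"legit-notes-app"}
print(scan(apps, threat_db))  # ['Fun Game']
```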
Users can help by allowing Lookout to collect data from their mobile devices, essentially crowdsourcing the job of finding threats. That approach to identifying malware stands in contrast to the methods used by traditional security software for desktop computers, which rely on professionals working in the background to find threats in the digital wild.
Last year, Lookout blocked millions of mobile threats, according to the company. More than 20 million people have downloaded the app. (Most of Lookout’s revenue comes from users who pay $3 a month to subscribe to a premium service that also secures mobile devices’ Web browsers and makes it possible to lock or erase stolen phones remotely. But Hering won’t say whether the privately held company is profitable yet.)
Hering says he thinks of his approach to mobile security as one that will empower users, not hamper them, as desktop security programs sometimes do. “Security is typically something that’s thought of as a burden,” he says. “It slows down your computer, it tries to scare you. It’s all these things that we don’t stand for.”
One day in 2009, Drew Houston and his business partner, Arash Ferdowsi, pulled their Zipcar into Apple headquarters in Cupertino, California. “We went to the front desk,” Houston recalls. “And what do you say at that point? ‘We’re here to see Steve.’”
Steve Jobs had invited them largely because he wanted to explore acquiring Houston’s fast-growing company, Dropbox. Founded in 2007, Dropbox conferred iPhone-like ease and reliability on cloud-based file storage—something Apple itself hadn’t yet begun offering. People using any browser or operating system, on any kind of device, could drag any kind of file to Dropbox’s icon of an open blue box. The files were stored on Dropbox’s servers and synched each time you saved a file, so that it would be available on any device running Dropbox.
Houston and his team hammered out thousands of issues to create an easy system free of the typical annoyances. Dropbox knows that while Linux file names are case-sensitive, Windows file names aren’t, so a Windows file called “ABC.doc” will overwrite one called “abc.doc.” It can keep antivirus software from interfering with its file-synching system. It integrates smoothly with different user interfaces: on a Mac, for example, the Finder displays a check mark in the Dropbox icon when files are in sync.
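One of those thousand rough edges, sketched: on Windows, “ABC.doc” and “abc.doc” name the same file, so a sync engine must detect case collisions before writing. This check illustrates the idea only; it is not Dropbox's code.

```python
def find_case_collisions(filenames):
    """Group names that would collide on a case-insensitive filesystem."""
    seen = {}
    for name in filenames:
        # Case-insensitive filesystems treat names as equal if they match
        # when lowercased, so bucket by the lowercased form.
        seen.setdefault(name.lower(), []).append(name)
    return [group for group in seen.values() if len(group) > 1]

print(find_case_collisions(["ABC.doc", "abc.doc", "notes.txt"]))
# [['ABC.doc', 'abc.doc']]
```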
Its ability to shield users from myriad mind-numbing details and housekeeping chores—“the acrobatics to support all these different situations,” as Houston puts it—is what made Dropbox a hit. “It sounds like what we do is simple,” says Houston, who wrote the original code on a bus ride from Boston to New York and is now Dropbox’s CEO. “But sanding down the thousand rough edges to make something work 100 percent of the time is really, really hard. Even something simple, like synching a file, is actually really complicated to do in a bulletproof way a billion times.”
That’s how many times people are updating files with Dropbox every two days. And as consumers slide more stuff into their Dropbox folders, more of them blow past the free two-gigabyte limit and start paying $10 a month for additional storage. Dropbox says it now has more than 50 million users, with 4 percent paying.
The other big technical challenge was how to make Dropbox work fast on any device. Users often store thousands of files, and tracking and synching every one of them could easily eat up memory and processor time. The first version of the service hogged two full gigabytes of memory, but Dropbox eventually whittled that down to a mere 100 megabytes. And to keep Dropbox from dropping the ball when operating systems are revised or upgraded on users’ PCs, the company created custom analysis tools that rapidly detect and resolve any software conflicts.
Houston’s team is now working on advanced capabilities for synching and sharing photos, and gearing up for the demands that will be imposed on the software by continued rapid growth. “We’re designing a system that can connect billions of devices,” he says. The company has tripled its staff in the past year, to 150, and taken over a large office space in San Francisco.
Back at that meeting at Apple in 2009, Houston told Jobs he wasn’t interested in selling, after which Apple went on to bring out its competing iCloud service. But it’s hard to argue that Houston was being shortsighted, given that private investors recently valued Dropbox at $4 billion.
Quantum dots are crystalline particles, measuring tens to thousands of atoms across, that can absorb and emit different wavelengths of light or move electric charges around. Now Prashant Jain, a chemistry professor at the University of Illinois, has figured out a way to create tunable quantum dots that can be adjusted on the fly. His innovation could be key to designing optical computers and ultra-efficient solar panels.
Jain makes quantum dots out of copper sulfide, varying the ratio of copper atoms to sulfur atoms. At certain ratios, the amount and distribution of electrical charges inside the dots becomes sensitive to small changes in voltage—and it’s that charge distribution that mostly determines the dots’ properties, such as which wavelengths of light they’ll absorb and emit. “You can controllably push and pull charges into these semiconductor nanocrystals and thus turn on and off their ability to interact with light,” he explains.
That means the dots could function as submicroscopic optical switches—potentially, core components of an ultrafast optical computer that replaces electricity with beams of light. Jain’s tunable-quantum-dot switch is about one-sixth the size of today’s smallest transistors, and about a hundredth the size of current optical switches. Jain is also making quantum dots out of titanium oxide mixed with bismuth. These dots absorb solar light and convert it to electrochemical energy, which is used to generate hydrogen fuel from water.
Jain’s dots are still very much in the research stage, and he predicts it will take an enormous amount of additional research to achieve practical optical computers or the super-efficient hydrogen production needed for energy applications. “There’s a lot more fundamental work to be done,” he says.
Pulling medical tape off newborn babies in hospitals can be extremely painful and even potentially dangerous. To find something safer, Bryan Laulicht, a postdoctoral fellow at Harvard University and MIT, tested dozens of adhesive materials commonly used in medicine. He soon discovered that the adhesives fell into two groups: those that stuck securely and those that could be removed painlessly. None of them met both criteria.
But Laulicht knew that evolution had long since solved the problem. The feet of the gecko, for example, sport pads that adhere strongly to surfaces for climbing, but when rotated in a certain way, the pads release easily so the animal can run. Convinced that an artificial material ought to be able to do the same, Laulicht hunted for a way to fabricate it.
Using existing adhesives and a new quick-release backing layer, Laulicht developed a dry adhesive, suitable for bandages and medical tape, that was inspired by the gecko’s feet. Though he won’t give more details before the results are published, he says that he and colleagues are gearing up to test his creation on humans.
Newborns are the immediate intended beneficiaries of the adhesive technology, but Laulicht says elderly patients and others with sensitive or injured skin need it, too. Because the adhesive is based mostly on materials found in existing types of tape, he hopes his bandage will find its way to the clinic quickly.
Better integrating electronics with human tissue holds out the promise of monitoring the body more conveniently and accurately than is possible with sensors that are worn or taped on. Nanshu Lu is developing long-lasting “electronic tattoos” that can bond to skin and track and report on the wearer’s vital signs or translate small muscle movements into commands for controlling devices. Future versions may play critical roles inside the body in watching for signs of disease or damage. They could even treat problems automatically.
Lu, an assistant professor in the department of aerospace engineering and engineering mechanics at the University of Texas at Austin, has solved a big problem in building electronics for biological tissue: silicon semiconductor circuits are flat, rigid, and brittle, making them a terrible match for the soft, pliable tissue. (See “Making Stretchable Electronics.”) What is needed is a soft device better able to make intimate contact with skin.
To create a more tissue-friendly chip, Lu enlisted a flexible polymer substrate on which she could deposit small islands of silicon. That technique had been tried by other researchers, but these devices had limited flexibility, since ordinary wires used to connect the silicon tear as the substrate stretches or twists with the tissue’s movement. Lu solved the wiring problem by eliminating the islands and replacing them with a serpentine mesh of nanoribbons; this webbing stands up to twisting and pulling without breaking.
The resulting device is a 30-micrometer-thick patch of supersoft, transparent silicone. Lu has built a prototype of the device that carries sensors to measure temperature, strain, and electrical signals. The patch could also be equipped with LEDs to enable visual signaling.
The circuits are printed onto silicone that’s supported by a stiffer layer of water-soluble polymer. When the patch is placed on dry skin and then wetted, the polymer layer dissolves; intermolecular attractions between the silicone and skin make the silicone adhere tightly. In tests, the silicone patches have adhered to skin for a week, hanging on even through showers and exercise. And the patches don’t irritate skin the way adhesives often do.
Lu and collaborators have already tested the devices in a few applications. For example, they have been attached to people’s necks to enable them to control Sokoban games simply by speaking commands; the patches measure the electrical activity of throat muscles during speech, with enough fidelity to distinguish between the spoken words “left,” “right,” “up,” and “down.”
Now Lu wants to see the patches used in a wide variety of health-related applications. She hopes to stick the devices on foreheads to directly monitor electrical activity in the brain, to place them on skin during plastic surgery so that strain gauges on the patch can alert surgeons if the procedure is overly stretching skin, to monitor heart rate and muscle activity during exercise, and to track the progress of healing in wounds and burns.
Lu is working on new versions of the devices. For example, she’s trying to create stronger physical and electrical connections by integrating arrays of microneedles on the bottom of the silicone patches. That, in turn, could enable the patches to stick to heart muscle so doctors could detect early signs of heart-attack risk, such as reduced blood flow.
Lu also hopes that a version of the patch with two-way communication capabilities might be able to sense heart arrhythmias and instantly respond by delivering small electric shocks to restore an even beat. And she envisions transdermal electronics that could detect the level of a protein in the body associated with a specific disorder and then release drugs to treat it.
In 2008, when Shishir Mehrotra joined YouTube to take charge of advertising, the booming video-sharing service was getting hundreds of millions of views a day. YouTube, which had been acquired by Google in 2006, was also spending as much as $700 million on Internet bandwidth, content licensing, and other costs. With revenue of only $200 million, YouTube was widely viewed as Google’s folly.
Mehrotra, an MIT math and computer science alum who had never worked in advertising, thought he had a solution: skippable ads that advertisers would pay for only when people watched them. That would be a radical change from the conventional media model of paying for ad “impressions” regardless of whether the ads are actually viewed, and even from Google’s own pay-per-click model. He reckoned his plan would provide an incentive to create better advertising and increase the value for advertisers of those ads people chose to watch. But the risk was huge: people might not watch the ads at all.
Mehrotra’s gamble paid off. YouTube will gross $3.6 billion this year, estimates Citi analyst Mark Mahaney. The $2.4 billion that YouTube will keep after sharing ad revenue with video content partners is nearly six times the revenue the streaming video service Hulu raked in last year from ads and subscriptions. And that suggests Mehrotra has helped Google solve a problem many fast-growing Web companies continue to struggle with: how to make money off the huge audience that uses its service free.
In 2008, Mehrotra was working for Microsoft and hankered to have his own startup, but he agreed to talk to a Google executive he knew about working there instead. He decided against it—but that evening he kept thinking about how the exec was frustrated that most ad dollars go to TV, even though nobody watches TV ads. Yet at his Super Bowl party two weeks earlier, Mehrotra recalled, guests kept asking him to replay the ads. Was there a way, he wondered, to make TV ads as captivating as Super Bowl ads, every day?
The answer came to him in a flash. The next day, he had changed his mind about working at Google. After he tried his idea for skippable ads on a television project, the company asked him to bring the idea to YouTube.
YouTube was searching for alternatives to standard “pre-roll” ads, which performed poorly because viewers didn’t want to sit through a 30-second ad to watch a two-minute video. In 2010, Mehrotra’s alternative came to fruition as YouTube rolled out its TrueView ads. One type lets viewers choose from three ads. Another lets them skip an ad after five seconds; advertisers pay only if their ads are watched in their entirety, or for at least 30 seconds if the ads are longer than that.
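The TrueView charging rule as the article describes it fits in one line: the advertiser pays only if the ad is watched to the end, or for at least 30 seconds when the ad runs longer than that.

```python
def advertiser_pays(ad_length_s, watched_s):
    """Charge only for a complete view, or 30+ seconds of a longer ad."""
    return watched_s >= min(ad_length_s, 30)

print(advertiser_pays(20, 20))  # True  - short ad watched in full
print(advertiser_pays(60, 30))  # True  - long ad watched for 30 seconds
print(advertiser_pays(60, 5))   # False - skipped after five seconds
```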
Thousands of advertisers piled in. Now some 65 percent of ads inside YouTube videos are skippable. But YouTube has found that only 10 percent of viewers always skip ads, and viewership is 40 percent higher on videos running TrueView than on those with non-skippable ads. As a result, Mehrotra says, video viewed on YouTube brings in more ad revenue per hour than cable TV.
Thanks to Mehrotra’s ad model—and to Google’s crackdown on piracy of television shows and films—YouTube now attracts top-line content producers such as the nonprofit academic-tutorial producer Khan Academy, Paramount, and the NBA. Revenues paid to YouTube’s 30,000-plus video-making partners have doubled in each of the past four years. Thousands of partners get six-figure annual revenues from the ads, and a few take in tens of millions of dollars.
The result is a virtuous cycle. “The more money we bring in, the better content they produce, the more there is for viewers to watch, and so on,” Mehrotra says.
Now Mehrotra’s goal is to try to grab a big chunk of the $60 billion U.S. television business. But to do that, and fend off TV-content-oriented online rivals such as Hulu, YouTube has to become a bit more like conventional TV. To that end, it organized itself last year into TV-like channels, investing $100 million in cable-quality launches from Ashton Kutcher, Madonna, the Wall Street Journal, and dozens of others. More and more TV advertisers are being won over, says David Cohen, chief media officer at the media buying agency Universal McCann. “They’re getting marketers to think about YouTube as a viable outlet,” he says.
Mehrotra, who last year became YouTube’s vice president of product, envisions millions of online channels disrupting TV, just as cable’s 400 channels disrupted the four broadcast networks. “We want to be the host of that next generation of channels,” he says.
—Robert D. Hof
It’s hard to radically improve the internal-combustion engine. But Shannon Miller may have done it, by getting one to work at extremely high compression and expansion ratios. Initially designed to generate electricity in homes or businesses, not to power cars, Miller’s engines use 25 percent less fuel than conventional gas-powered generators.
Miller knew that operating engines at high compression and expansion ratios could make them far more efficient, but that’s easier said than done. High compression ratios create extreme temperatures, wasting energy. And high pressure increases friction between the piston and the cylinder.
So she turned to a “free-piston” design, an old idea that allows each piston to bounce up and down independently of any rod or crankshaft. The approach had not been used to operate pistons at very high compression ratios. “To make this work, you can’t just change one or two things,” she says. “You really need to change the whole architecture of the engine.”
Miller cofounded and is CEO of a company called EtaGen, which aims to bring the engine to market. The company has built a prototype that runs for hours at target performance levels. She says the results indicate that upcoming versions of the engine should be about as efficient as large power plants—the current gold standard for energy efficiency—once the energy the plants lose during distribution is factored in.
EtaGen’s first product will be a replacement for conventional diesel and natural-gas generators, allowing businesses to operate a building off the grid or to ride through power outages. Eventually, Miller says, the same basic engine design could be used to make onboard generators for electric cars like GM’s Volt. In either case, the engines would run on common fuels like diesel and natural gas.
Today’s digital cameras do the focusing for you, but they occasionally blow the shot with a blurred subject. That’s never a problem with Ren Ng’s camera. His company, Lytro, sells a $399 model that captures light in a very different way from conventional cameras, recording the angle at which each ray enters the lens. The resulting photo can be sharply focused on any part of the scene, and then refocused on a different part—all long after the picture has been taken. “This is going to drive even larger transformations than the transition from film to digital photography,” says Ng.
Ng’s camera is at the leading edge of the new field of computational photography, which uses software to wring new tricks out of conventional optical components and a few novel ones. Lytro is preparing to release software upgrades that will allow shots taken with one of its cameras to be viewed in 3-D, and it is developing methods that could get professional-quality shots from cameras with cheap lenses, such as those on cell phones.
The focusing trick is an impressive enough start. When a photo taken with the Lytro camera is displayed on a computer, anyone can click on any object in the picture to get the software to instantly bring that object (and anything else in the photo that was the same distance from the camera) into sharp focus, leaving the rest artfully blurred. The focus point can then be changed with a click elsewhere in the photo. Friends can refocus Lytro photos for themselves when they are shared on Facebook or elsewhere online.
Whereas a conventional digital camera captures a focused image as light strikes a sensor chip, the Lytro camera has a plastic sheet of thousands of tiny lenses directly in front of its sensor. These lenses take rays that come into the camera at different angles and direct them to different points on the sensor. That leaves an unfocused image, but it doesn’t matter—because Ng’s software in the camera can use the information about the angle of the light rays to bring any part of the image into sharp focus.
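The refocusing principle can be shown in one dimension: each sub-aperture view is shifted in proportion to its viewing angle and the views are averaged, with the shift amount selecting the focal depth. This shift-and-add sketch illustrates the general idea behind light-field refocusing, not Lytro's actual algorithm.

```python
def refocus(views, shift_per_view):
    """Average sub-aperture views after shifting each one by angle * shift."""
    width = len(views[0])
    out = []
    for x in range(width):
        samples = []
        for angle, view in enumerate(views):
            # Offset each view according to its angle; clamp at the edges.
            xs = min(max(x + angle * shift_per_view, 0), width - 1)
            samples.append(view[xs])
        out.append(sum(samples) / len(samples))
    return out

# Three views of a single point of light, showing one pixel of parallax
# between neighboring viewing angles.
views = [
    [0, 0, 1, 0],  # straight-on view: point at x = 2
    [0, 1, 0, 0],  # one step over: point shifted to x = 1
    [1, 0, 0, 0],  # two steps over: point at x = 0
]
focused = refocus(views, -1)
print(focused[2])  # 1.0 - the right shift snaps the point into sharp focus
```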
In 2006 Ng was a PhD student at Stanford University studying the illumination of virtual objects. But he wanted to work on something with a more tangible impact, so he put off finishing his degree and started researching ideas for better camera designs. He wasn’t sure how to proceed until one day he found himself staring in frustration at a poorly focused photo he had recently taken. “I thought, ‘Does the camera have to focus before you take the shot?’” he recalls. He had a strong hunch the answer was no, and he immediately set out to prove it.
Once he hit on the idea for his camera system with multiple lenses inside, Ng started tearing apart and rejiggering conventional digital cameras to build prototypes. When he wasn’t screwing together camera parts, he was networking to scrounge up the expertise, technology, and funding he needed. After about nine months, he finally found himself at his kitchen table assembling what he hoped would be his first fully functioning prototype capable of after-the-fact focus. It worked, and became the subject of Ng’s prize-winning PhD thesis.
Ng decided to start a company based on the technology. The easier path would have been to license it to one of the established camera manufacturers, such as Nikon or Canon, rather than trying to take them on. But he feared that a big company would simply try to add the technology to its existing cameras as an incremental improvement. “A transformational technology requires a transformational product,” he says. So he started Lytro, and after four years of stealthy development, the company’s first camera began shipping in February.
Lytro has raised over $50 million in investments. It is currently working on introducing software to expand the capabilities of the existing camera model, with the 3-D upgrade expected this year. A bit further down the road, says Ng, could be cameras that will take refocusable videos.
Juan Sebastián Osorio
Nearly 85 percent of babies born before 34 weeks stop breathing for 20 seconds or more, often because their undeveloped nervous systems fail to signal their lungs. That can be fatal. The babies are typically hooked up to monitors, but sometimes the systems fail to sound the alarm—and Juan Sebastián Osorio discovered why.
Osorio, then a biomedical-engineering student with the Antioquia School of Engineering and CES University in Medellín, Colombia, realized that the sensors used on the infants were poorly adapted to their small size. Electrodes are placed on either side of the infant’s chest to watch for stoppages in motion. But the tiny chests move so little that the monitor can mistake heartbeats for breathing motions long after respiration has stopped.
Osorio and colleagues came up with a prototype detector attuned to the rhythms of infant physiology. The monitor combines heart rate recordings, electrical signals from the diaphragm muscle, and blood oxygen measurements for a potentially more precise and reliable way to measure a baby’s breathing. Eventually the device could predict the risk of apnea by analyzing the measurements along with information about the baby’s weight and gestational age. Osorio says that could help hospitals discharge low-risk babies earlier, saving costs and sparing the babies from extended ICU stays.
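The fusion idea behind the prototype can be sketched with invented thresholds: raise an alarm only when diaphragm activity and the other signals agree, so a heartbeat alone can no longer masquerade as breathing. The numbers and logic here are illustrative assumptions, not the device's actual rules.

```python
def apnea_alarm(diaphragm_emg_uv, spo2_percent, seconds_without_breath):
    """Alarm when breathing effort has stopped long enough or oxygen falls."""
    no_effort = diaphragm_emg_uv < 5   # diaphragm electrically quiet
    low_oxygen = spo2_percent < 90
    return no_effort and (seconds_without_breath >= 20 or low_oxygen)

print(apnea_alarm(2, 88, 12))  # True  - no effort and oxygen dropping
print(apnea_alarm(40, 97, 0))  # False - normal breathing
```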
Osorio is testing his system and seeking to license it commercially. He’s integrating it with a mobile-phone app he developed that helps parents recognize signs of risk for sudden infant death syndrome. Next he plans to couple his detector with a video camera to make it easier for parents to monitor babies at high risk for apnea. If a problem comes up, the system will connect to pediatricians remotely.
Optical communications could be a boon for data centers, reducing electricity use and heat buildup by replacing electronic signals with light signals. But the technology has been cost-effective only over distances of a kilometer or more, and using it in data centers would mean sending signals mere meters or centimeters. Joyce Poon may have solved the problem by creating new optical modulators with microscopic loop-the-loops through which light can shuttle data between servers and even from chip to chip within a single server.
To make light-based data communications work over short distances, Poon, an assistant professor of electrical and computer engineering at the University of Toronto, knew she needed to come up with a much smaller version of an optical modulator, a device that converts an electronic signal into an optical one. She designed tiny rings that can be built onto computer chips. When laser light is sent into a ring, it races around the ring over and over before a bit of it emerges through a waveguide at the bottom. The trick was to control how much light came out. Other researchers working with micro-rings have tried to do that by adjusting the properties of the ring, in order to alter the length of the light’s path or the amount of light the ring absorbs. Poon realized she could leave the ring alone and simply control the gateway between the ring and the rest of the chip.
The resulting optical modulator can be both faster and more efficient. With a team from IBM, Poon is working to create a version that is competitive with today’s optical data rates.
The jump to optical data transmission in servers can’t come soon enough. Data centers consumed at least 200 billion kilowatt-hours of electricity in 2010, and the proliferation of smartphones and cloud storage is only going to push that higher, driving up costs and the risk of heat-related outages.
PROBLEM: We’re forced to interact with smartphones in much the same way that we do with desktop computers—by selecting applications, typing in information, choosing from menus, hunting down snippets on websites, and clicking links. That’s okay at a desk, but it can be a huge inconvenience when you’re dealing with a tiny screen on the go.
SOLUTION: Hossein Rahnama, research and innovation director of the Digital Media Zone at Toronto’s Ryerson University, decided that smartphones ought to offer us useful information where and when we need it.
Through his startup, Flybits, Rahnama is laying the technical groundwork for a wave of mobile software that can identify and respond to contextual cues like location and time of day—and integrate them with information such as a user’s travel itinerary. It can then guess which information would be most relevant to display, such as directions to the car-rental counter when you step off a plane.
Others have been working on so-called context-aware computing, but Rahnama’s software platform is already being used as the basis for inexpensive, commercially practical applications that also protect privacy. Several Canadian airports and the transit systems in Toronto and Paris have used the Flybits platform to create apps that automatically serve up personalized, location-keyed guidance to travelers, and a small U.K. telecommunications company is using it to develop apps that can route calls to the appropriate number to help you avoid roaming fees (for example, it knows to send your mom’s call to your hotel landline rather than your cell if it detects that you’re overseas).
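A context-aware rule like the call-routing example can be sketched in a few lines. This is a hypothetical illustration of the idea only; the function and field names are invented and bear no relation to the actual Flybits platform or its APIs.

```python
from dataclasses import dataclass

@dataclass
class Context:
    location: str      # e.g. "airport_arrivals", "hotel"
    country: str       # where the user currently is
    home_country: str  # where the user's phone number lives

def route_incoming_call(ctx: Context, hotel_landline: str, cell: str) -> str:
    """Send a call to the hotel landline when the user is roaming abroad."""
    if ctx.country != ctx.home_country and ctx.location == "hotel":
        return hotel_landline
    return cell

def suggest_info(ctx: Context) -> str:
    """Pick the most context-relevant snippet to display."""
    if ctx.location == "airport_arrivals":
        return "Directions to the car-rental counter"
    return "No suggestion"
```

The point of a platform like Flybits is that app developers compose rules like these from shared context signals, rather than each app re-implementing its own location and itinerary plumbing.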
Flybits can also make it easier to find the people most relevant to your location and interests. The company is rolling out a service called Flybits Lite that prompts users to form spontaneous social networks limited to a certain space, such as the office or a concert. So eventually, after you’ve navigated the Metro to the Louvre, perhaps you can find out who else is there to admire the Mona Lisa.
Pinterest became a household name seemingly overnight in the spring of 2012. Founder Ben Silbermann had seen what other tech companies were overlooking: existing social networks, while letting users share information in just about any form, did not offer an emotionally warm and visually rewarding experience tied to individual passions. Guided by this conviction and his interest in collecting things, Silbermann directed his engineers—he’s no programmer—to create a site that did.
Users of Pinterest create and curate virtual boards of photos clipped from websites and other users’ boards, gathering up shots of lusted-after products and other stimulating images. When you log in, you’re presented with a grid of new content that past activity suggests you might want to “pin” to your own boards. Silbermann describes it as a more interactive and social version of the lifestyle section of a newsstand: a place to find visually interesting, emotionally resonant content related to stuff you love—and often want to buy.
That vision initially gained momentum not at the elite colleges and California coffee shops that often function as the Web’s proving ground for new ideas but by word of mouth in Silbermann’s home state of Iowa. Perhaps as a result, Pinterest is big with the mainstream audience that other Web companies struggle to attract after they’ve conquered Silicon Valley. It’s used by 34 million people worldwide each month, mostly in the United States. Google’s DoubleClick advertising unit estimates that 79 percent of them are female.
Silbermann refined the idea for two mostly unpromising years after he talked a few friends into starting the company, running it from his own apartment until he received his first significant backing from investors in the summer of 2011. Though he initially had no users to offer feedback, he sweated countless details, having his lone designer, cofounder Evan Sharp, create 50 fully functional versions of the site’s basic layout that varied spacing and image sizes by just fractions of an inch. Silbermann personally wrote to the first few thousand users to gather their impressions.
Now with over 60 employees and a spacious office in San Francisco, Pinterest has received a total of $138 million in venture capital funding; in the last cash injection, the company was valued at $1.5 billion. Silbermann says he’s focused on improving the product rather than figuring out how to make money on it. But retail brands are discovering that they can use Pinterest to boost sales by encouraging people to share images of their products on what are essentially eye-catching shopping wish lists. And that would seem to leave the company well positioned to start charging brands for the privilege. There’s a lot of value in, as Silbermann puts it, “helping people to discover things that they didn’t know they wanted.”
Christopher Soghoian sniffs out security holes and privacy shortcomings on the Web. Then he urges companies that are responsible—Google, AT&T, and Dropbox have been among them—to halt practices that put consumers’ personal information at risk. If they don’t, he’ll write about the flaws publicly and try to get regulators to crack down. “I see myself as a combination horse whisperer and Paul Revere–type character,” he says.
Soghoian’s credentials as a computer scientist are substantial—he helped develop the Do Not Track mechanism that lets people prevent websites from following their online activity—but most of his work relies on techniques that suggest Woodward and Bernstein more than a basement hacker: he seeks information by filing Freedom of Information Act requests or cajoling corporate lawyers and congressional aides over late-night beers in Washington, D.C.
Insinuating himself into the world of Washington as a privacy gadfly didn’t come easily to Soghoian, 30, an earnest geek with a beard and a ponytail. “I didn’t own a suit until 2009,” he says. Wearing one to face executives and lawyers is “not pleasant,” he adds. But he has learned that his impact as a security researcher is much greater if he steps into power corridors and directly addresses the people there.
That lesson began in 2006. Soghoian, then a grad student at Indiana University, wrote a blog post about how easily someone could gin up a legitimate-appearing boarding pass to get past airport security checkpoints. To prove the point, he put a widget on his blog that made it possible for people to create their own. That inflamed the Homeland Security apparatus, and the FBI seized his computers for a month. When the furor subsided, a few rational officials in Washington pointed out that Soghoian was actually helping the Transportation Security Administration by identifying a flaw in its defenses. The episode taught him that if he framed his message in the right way, he could get people to listen.
In 2009, while working as a student fellow at Harvard’s Berkman Center for Internet and Society, Soghoian led an effort to get Google to turn on SSL encryption in Gmail by default. SSL, the technique used to secure banking and e-commerce websites, essentially ensures that people using Gmail in a public Wi-Fi café aren’t vulnerable to having their accounts plundered by criminals. After Soghoian and 36 cosigners wrote an open letter to then-CEO Eric Schmidt, Google eventually said it would indeed turn on SSL by default. This doesn’t make Gmail totally private: law enforcement can still subpoena Google for an unencrypted look at the contents. But it does ensure that political dissidents’ e-mail is out of the reach of repressive governments with which Google doesn’t coöperate. Because of that, “if I’m 5 percent responsible for Google turning on SSL, it’s the most important thing I’ve done in my life,” Soghoian says. Today he’s lobbying for SSL to become the default setting on other online services, notably Facebook. (Facebook spokesman Frederic Wolens says the company is working on it; in the meantime, SSL is available to Facebook users who activate it themselves.)
In 2009, Soghoian stepped a bit too far into the establishment for his comfort: he became a staff technologist for the U.S. Federal Trade Commission. In October of that year, he went to a telecom-industry event and recorded a Sprint Nextel executive explaining how often the company fed data about subscribers to law enforcement. To him, this is a crucial subject—his recently completed PhD dissertation is all about the ways that police get around outdated wiretapping laws by having telecommunications and Web companies do surveillance for them. He argues that these companies, without sufficient public recognition, have effectively replaced judges as arbiters of whether the authorities are acting appropriately. But that’s not entirely in the FTC’s purview—and in any case, Soghoian had made the secret recording after using his FTC badge to get into the closed event. He ultimately lost his job.
Now he’s probably found a more natural outlet for his work: in September he will become a principal technologist and senior policy analyst for the American Civil Liberties Union, where he plans to keep raising alarms about how easily law enforcement, spies, and criminals can delve into our ever-growing storehouses of personal data. “My goal,” he says, “is to move to a world where everybody has access to secure communication.”
Combining tools used to manufacture printed circuit boards with the spirit of origami, Pratheev Sreetharan has found a way to build tiny machines and complex objects that were previously impossible to fabricate without assembling them manually. Some of the results: a robotic bee created in a day, a tiny, precise icosahedron, and a small chain of interlocking carbon-fiber links. The small, intricate items demonstrate a fundamentally new fabrication approach that Sreetharan believes can be broadly applicable in making a range of new medical devices, robots, and components of analytical instruments.
If Sreetharan is successful, he could open up the manufacturing no-man’s-land between the micrometer-scale features of silicon chips and the centimeter-plus scale of everyday items. It’s a size range that’s of critical importance in biology and medicine. But today there’s simply no practical way to mass-produce three-dimensional objects and complex machines on this in-between scale.
Sreetharan’s prize creation is the robot bee, fabricated through a series of steps inspired by pop-up books. As a graduate student in the lab of Harvard microrobotics pioneer Robert Wood (a member of the 2008 TR35), Sreetharan was familiar with the task of laboriously gluing the miniature robots together under a microscope, and his fabrication approach was born of his determination to find a better way.
He began by adapting standard lamination and micromachining techniques from circuit board manufacturing to carve the needed parts into a flat substrate. But the real trick came in adding features that allowed the parts to pop up and lock into place in one step, creating the bee.
Sreetharan, who spent a recent summer in the Indian region of Tamil Nadu teaching Sri Lankan refugees about renewable energy and designing a solar-powered computer charger, recently got his PhD from Harvard and founded a startup called Vibrant Research in Cambridge, Massachusetts, to adapt his fabrication methods to advanced manufacturing.
He is still deciding which specific products the company will focus on, but he says he is able to routinely make objects that have never before existed. And he hopes the novel production methods will create new opportunities in manufacturing. That would be a pretty good way to build on the buzz from his robot bee.
“Cyborg tissue could allow us to put multifunctional prosthetics in humans,” says Bozhi Tian. That goal is still a long way off, but Tian has taken a key step by creating artificially grown tissue that’s intelligent. So far, he’s developed a synthetic blood vessel that can detect the pH of solutions flowing through it. And with different nanoelectronic sensors embedded in that and other tissue replacements, Tian thinks, the technology could one day wirelessly monitor proteins linked to cancer and other diseases.
Tian’s cyborg tissue project grew out of another impressive feat: an innovative method for detecting electrical changes in living cells. Instead of sticking fine-tipped glass pipettes into the cells, a conventional technique that ends up killing them within a few hours at most, Tian created a semiconductor device made of a kinked nanowire less than 50 nanometers wide at the tip.
He then coated the tip of his probe with molecules similar to those found in cell membranes, enabling the device to enter the cell with minimal damage. The implanted nanowires can potentially send information for days, and cells can tolerate multiple wires, making it possible to map complex changes across the cell.
By coating the wire with antibodies, which can be designed to latch onto a specific molecule, researchers could enable the tool to detect the presence of specific proteins seen when a particular disease state is getting better or worse. That could be useful for monitoring how cells respond to different compounds being considered for use as drugs.
Tian, an assistant professor at the University of Chicago, is currently working on equipping cells with electronic components that don’t merely monitor activity but actively affect it. Get ready for the cyborg cell.
Eben Upton thought a new generation of youngsters might never develop valuable hardware and software hacking skills unless they had access to cheap, hobbyist-friendly computers. So he set out to build one himself. The resulting tiny box, which sells for just $25, has been a big hit. It could boost computer skills not only among children but among adults in poor countries as well.
Upton came up with the idea in 2006, when he was finishing his PhD in computer science at the University of Cambridge. Having agreed to help out with undergraduate computer science admissions, he was looking forward to interacting with teenagers who loved messing around with computers as much as he had when he was younger.
Upton had done all that messing around partly for the thrill of bending the machines to his will, and partly because the 1980s boom in video games had made it easy to imagine making a fortune working with computers. “I was a mercenary child,” he says, sounding a bit apologetic. “One of the things that drew me to computing was that there were 15-year-old kids who made so much money from computing they actually bought Ferraris.”
To judge by the applicants Upton was looking at, however, kids had lost interest. They were still messing around on computers, but they weren’t messing around with them. They weren’t writing programs and taking apart circuit boards. They were the kinds of kids who played World of Warcraft and exchanged cat pictures on Facebook. They had changed from active hackers to passive consumers.
Perhaps the dot-com bust had killed some of the enthusiasm for hacking. But to Upton, one other possible factor loomed large. In the 1980s, he and his friends had learned basic computer science on a BBC Micro, a line of computers built for the British Broadcasting Corporation by Acorn Computers and installed in most English schools. Small, rugged, inexpensive, and expandable, the Micro introduced a generation of British children to hardware engineering and software programming.
There was no contemporary equivalent to the Micro. “Sure, everyone in the middle class has a PC,” Upton says. “But even then, often there is only the one family PC. You won’t let kids screw around with it.” Schools aren’t going to let students take apart their machines, either. As a result, he observes, “computing” classes teach children how to use Microsoft Word and PowerPoint. “Even Microsoft wants schools to produce software engineers,” Upton says. To successfully restore literacy in computer tinkering, he decided, the world needed a modern analogue of the BBC Micro.
Being a hardware guy at heart, Upton went ahead and built a prototype of a next-generation hobbyist machine—the sort of stripped-down device that would enable its users to become acquainted with the guts of a computer. It would also allow its users to put the machine to work in projects ranging from robotics to wearable computing to gaming. He eventually took up a Cambridge professor’s suggestion to call his device Raspberry Pi, tipping his hat to the old tech tradition of naming computers after fruit. But he didn’t immediately see a way to produce Raspberry Pi in sufficient numbers to make a difference, so he reluctantly mothballed the project.
After finishing his PhD, Upton went to work at the Cambridge, U.K., office of Broadcom, a networking company based in Southern California. (He is now one of the company’s technical directors for Europe.) Upton was instrumental in the creation of Broadcom’s first microprocessor intended for multimedia applications—the BCM2835. Released in 2011, it is a single chip that’s small enough to fit in a phone but big enough to contain vital parts such as a central processing unit and a graphics processor. By some measures it was the most powerful chip in the mobile market at the time, and it was a tremendous success for Broadcom.
It was also, Upton realized, the way to restart Raspberry Pi, given that a single-chip computer would be much less costly to produce. He and half a dozen volunteers worked on the new version on evenings and weekends. But the BCM2835 wasn’t easy to deal with: it was dauntingly jammed with tiny components, including no fewer than five power supplies.
To keep Raspberry Pi small and cheap, the team wanted to build it on a single circuit board that could be stamped out, no further assembly required. But to enable the phone chip to work with computer peripherals and run full-scale computer software, they would, it appeared, need to build a board with more than eight stacked layers of circuitry, a prohibitively complex and expensive proposition. Working furiously to simplify the circuitry, the team eventually managed to shave the board design down to six layers.
The first prototypes were ready in December 2011, but Upton discovered, to his horror, that they didn’t work at all. Fighting panic at the thought of the subtle flaws that might be buried in all those layers of tangled circuitry, the team discovered that one pin on the chip had been inadvertently disconnected. It was a blessedly easy fix, and within minutes the board popped to life.
The Raspberry Pi is strikingly unlike other computers. About the size of an Altoids box, the computer has no keyboard, monitor, or disk drive—it doesn’t even have an internal clock or an operating system. In other words, the machine requires a fair amount of hardware and software tinkering just to get started. It almost dares you to take it on and try to hack together a robot or gaming system.
It can’t get by on looks. Lacking a case, the Raspberry Pi offers a dense, bristling cluster of tiny electronics to the owner’s view, with five ports: HDMI, to hook the computer up to a television; USB, to hook it to multiple devices; Ethernet, for data; and analog TV and analog stereo. But having to face the guts of the device is a good thing, according to Upton. “Kids can see what they ordinarily can’t see, unless they smash a phone,” he says.
The really surprising feature of the Raspberry Pi is the $25 price: about a tenth the cost of the lowest-priced computers available in stores (if you ignore tablets, which no one can hack anyway).
It was intended for kids, but hackers of all ages wanted it, and so did budding computer scientists in poor countries. Almost the instant the Raspberry Pi went on sale, orders crashed the websites of its two vendors, RS Components and Premier Farnell. The companies reported that they were taking in orders fast enough to tear through the entire initial stock of 10,000 computers in minutes.
Thrilled with the reception, Upton is making more of the devices through a nonprofit Raspberry Pi Foundation he put together—his mercenary tendencies having abated over the years. In fact, he says, he intends to sell two million Raspberry Pis a year in order to reach a critical mass that will support an active community of owners to share tips and applications. He also hopes that the existence of this community will prompt schools to adopt the Raspberry Pi for courses.
Even more important, Upton hopes, is that kids start to take them apart. “That would be real success,” he says.
—Charles C. Mann
Nothing moves too fast for Andreas Velten’s camera—not even light. Last year Velten, who built the camera while a postdoc at the MIT Media Lab’s Camera Culture Group, made a video of laser light zipping through a plastic soda bottle. Captured at the equivalent of 600 billion frames per second, the slow-mo footage showed a ghostly light moving from one end of the bottle to the other. Equally remarkable, the camera can harness light reflected off surfaces to see around corners. Because the camera is so fast, it can detect how long it takes the different light rays to reach it, and an image can be reconstructed from that information.
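The arithmetic behind that reconstruction is simple time-of-flight: multiply a photon's travel time by the speed of light. A quick back-of-the-envelope sketch, with illustrative numbers only:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def bounce_distance(arrival_time_s):
    """Round-trip path length implied by a photon's time of flight (one bounce)."""
    return C * arrival_time_s / 2

# At the equivalent of 600 billion frames per second, one frame lasts
# about 1.7 picoseconds, during which light covers only about half a
# millimeter. That is why individual pulses become visible in the footage:
frame_duration = 1 / 600e9                   # seconds per frame
light_travel_per_frame = C * frame_duration  # ~0.0005 m
```

Seeing around a corner works the same way in reverse: from the arrival times of many such bounces off a wall, the geometry of the hidden scene can be solved for.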
It’s not just amazing gimmickry. Velten’s technology could lead to ultrafast medical imagers and scanners that use light instead of sound to detect tiny imperfections, whether in cancerous tissue or in airplane wings. It also suggests an approach to taking high-quality photos of scenes lit only by the tiny flash on a cell phone.
Velten’s table-mounted camera uses 672 carefully positioned and timed optical sensors, each capable of capturing a trillionth of a second’s worth of reflected laser light. The technical advance was figuring out how to modify a streak camera, a common piece of equipment in chemistry labs that measures the optical properties of laser light. That type of camera can capture only one horizontal line, or “streak,” of light at a time. Velten, combining his expertise in optics and computer science, developed custom software to repeat the scan over and over and combine the resulting data.
Now at the Morgridge Institute at the University of Wisconsin, Velten is applying his ultrafast imaging techniques to help develop new types of microscopy and biomedical imaging for clinical applications. One of the tools he envisions, for example, is a less invasive endoscope that could travel shorter distances to see deeper inside the body.
Light beams are so fast that using them to replace electrons would make for vastly more powerful and energy-efficient chips, even paving the way for quantum computing. At times, though, light is too fast. That’s why Zheng Wang decided to slow it down. “The speed is very good for optical communications but very bad for processing signals on-chip,” he says.
To slow light, Wang, an assistant professor of electrical and computer engineering at the University of Texas at Austin, created nanometer-size ridges on a chip. The ridges are so slender and flexible that they can be deformed by electric fields. When light is delivered by optical fiber to the ridges at the edge of the chip, they convert the light waves to high-frequency sound waves, which travel at about a hundred-thousandth the speed of light. The same trick works in reverse after the sound waves have traversed the chip, with the ridges converting the sound back into light to continue its higher-speed journey via optical fiber.
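The payoff of the light-to-sound conversion is easy to see with rough numbers. The acoustic speed below is a generic ballpark for solids, not a measured value from Wang's chips:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
V_SOUND = 5_000.0   # ballpark speed of sound in a solid, m/s (illustrative)

path = 0.01  # one centimeter across the chip

delay_as_light = path / C                   # ~33 picoseconds
delay_as_sound = path / V_SOUND             # 2 microseconds
slowdown = delay_as_sound / delay_as_light  # roughly 60,000x
```

Two microseconds is an eternity on a chip: long enough to buffer, read, and route a signal that would otherwise flash past in tens of picoseconds.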
Other researchers had accomplished similar feats with light—but only by enlisting a high-powered pulse laser that generates acoustic pulses, a much less efficient and larger-scale process that can’t be handled on a chip.
Sound waves are much easier to read and route within the tiny confines of a chip. And they offer the huge advantage of not generating the heat that electronics do. That makes Wang’s approach promising for applications in information processing, as well as in nanoscale microscopy.
There’s been a lot of excitement, both in the scientific community and in the popular media, about the possibility of creating cloaking materials that make people or military vehicles appear to vanish. But that goal seemed nearly impossible until Baile Zhang came up with a simple, promising solution. While Zhang’s technique has serious limitations—for one thing, it works only in an exotic medium called laser oil—it does suggest a possible path to making practical invisibility cloaks.
Most previously developed invisibility cloaks were made with materials painstakingly fabricated in the lab to have micro- or nanoscale patterns that bend light waves. But labs couldn’t turn out more than tiny amounts of these materials. What’s more, most existing examples of cloaking materials work only with microwaves and other nonvisible forms of light.
Reading about these exotic materials, Zhang, a professor at the Nanyang Technological University in Singapore, remembered a high-school physics demonstration of how calcite, an inexpensive natural mineral, bends light in strange ways. That, in turn, led him to come up with a simpler way to make a large cloak: gluing two pieces of calcite together.
Zhang demonstrated that his calcite sandwich could hide the middle section of a Post-it note rolled into a tube and placed on a mirror submerged in a liquid. The calcite cloak on top of the tube guides light from the space behind the tube to a point directly over it, so that the eye is, in effect, seeing right “through” the rolled-up paper. It turns out calcite’s crystal structure already resembles the sorts of artificial nanoscale patterns that other labs have been struggling to fabricate with electron beams.
“This shows better than any other experiment that the basic concept of cloaking can work,” says Steven Cummer, an engineering professor at Duke University, who was on the team that made the first cloaking device. But Cummer cautions that Zhang has a lot of work ahead to make this simple cloak more practical.
Right now the calcite trick works only if the medium around it helps to bend the light, which means the medium has to have just the right refractive index. The bath of laser oil used for the initial demonstration did the trick, but water or air won’t work.
Zhang is hoping, however, that some new tricks he has in mind will allow the cloak to work in air. That’s a project worth keeping an eye on.
There are surprisingly few ways to directly observe how cells and proteins work inside living creatures. Weian Zhao devised simple sensors that let scientists do exactly that.
Zhao starts by identifying a short, single-stranded piece of DNA called an aptamer that selectively binds with a protein or other biomolecule researchers are interested in. He attaches a fluorescent dye to the aptamer and then attaches the aptamer-dye combination to the surface of a type of stem cell, found in bone marrow and fat tissue, that homes in on inflamed tissue and tumors.
When the combination of dye, aptamer, and stem cell is injected into a living organism, the stem cell seeks out the targeted biomolecules. For example, if researchers want to look at unhealthy tissue, the aptamer latches onto the biomolecule suspected of being at the root of the problem, and the dye lights up or changes color.
By putting mice that have been injected with these sensors under a special microscope designed to hold a living animal and spot fluorescent dye, Zhao can see where in the organism the dye ends up. He can observe the action down to the level of individual cells, and he can even watch in real time how the biomolecular traffic is altered by the presence of drugs or by other changes in the organism. That’s never before been possible.
Zhao’s lab is working on a way to rapidly create vast libraries of aptamers that bind to almost any molecule. He foresees scientists using these libraries to build a collection of cellular sensors not only for use in basic research but also to improve the drug discovery process. At present, drug discovery suffers because what happens in cell cultures rarely duplicates what happens in living animals, sometimes misleading researchers and wasting time. Zhao’s sensor will allow scientists instead to immediately observe what the drug does inside animals, which can help speed a promising drug toward human trials.
Zhao is also currently working toward getting his stem-cell-based sensors to bind to various cancer markers found in whole blood, in the hope of developing a faster, less expensive, and potentially more accurate diagnostic tool that could even in many cases eliminate the need for a biopsy. He figures that if the work pans out, some of these tests could be on the market within as little as five years.