
10 Breakthrough Technologies 2011

Emerging Technologies: 2011

April 19, 2011

Every year, Technology Review looks at the advances that have happened over the previous year and chooses 10 emerging technologies that we think will have the greatest impact. The ultimate criterion is straightforward: is the technology likely to change the world? This year’s group includes high-energy batteries that could make cheaper hybrid and electric vehicles possible and a new class of electrical transformers that could stabilize power grids. Some of our choices will alter how you use technology: you’ll be tapping into computationally intensive applications on mobile devices, or using gestures to command computers that are embedded in televisions and cars. Other choices could improve your health; for instance, doctors will craft more effective cancer treatments by understanding the genetics of individual tumors. But no matter the category, all 10 promise to make our lives better.


This story was part of our May/June 2011 issue.


10 Breakthrough Technologies

  • Homomorphic Encryption

    Making cloud computing more secure
    Ciphering: Gentry’s system allows encrypted data to be analyzed in the cloud. In this example, we wish to add 1 and 2. The data is encrypted so that 1 becomes 33 and 2 becomes 54. The encrypted data is sent to the cloud and processed: the result (87) can be downloaded from the cloud and decrypted to provide the final answer (3).

    Craig Gentry is creating an encryption system that could solve the problem keeping many organizations from using cloud computing to analyze and mine data: it’s too much of a security risk to give a public cloud provider such as Amazon or Google access to unencrypted data.

    The problem is that while data can be sent to and from a cloud provider’s data center in encrypted form, the servers that power a cloud can’t do any work on it that way. Now Gentry, an IBM researcher, has shown that it is possible to analyze data without decrypting it. The key is to encrypt the data in such a way that performing a mathematical operation on the encrypted information and then decrypting the result produces the same answer as performing an analogous operation on the unencrypted data. The correspondence between the operations on unencrypted data and the operations to be performed on encrypted data is known as a homomorphism. “In principle,” says Gentry, “something like this could be used to secure operations over the Internet.”

    With homomorphic encryption, a company could encrypt its entire database of e-mails and upload it to a cloud. Then it could use the cloud-stored data as desired—for example, to search the database to understand how its workers collaborate. The results would be downloaded and decrypted without ever exposing the details of a single e-mail.
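The caption's toy arithmetic can be reproduced with a real additively homomorphic scheme. The sketch below uses the Paillier cryptosystem rather than Gentry's far more complex lattice-based construction, and deliberately tiny primes, so it is illustrative only: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, which the server doing the work never sees.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). Tiny primes for
# clarity -- this parameter size offers no security whatsoever.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse of L(g^lam mod n^2)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(1), encrypt(2)
c_sum = (c1 * c2) % n2     # the "cloud" operates on ciphertexts only
print(decrypt(c_sum))      # -> 3, without c1 or c2 ever being decrypted
```

Note that Paillier supports only addition of ciphertexts; Gentry's breakthrough was a scheme in which both addition and multiplication work, which is what makes arbitrary computation on encrypted data possible.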

    Gentry began tackling homomorphic encryption in 2008. At first he was able to perform only a few basic operations on encrypted data before his system started producing garbage. Unfortunately, a task like finding a piece of text in an e-mail requires chaining together thousands of basic operations. His solution was to use a second layer of encryption, essentially to protect intermediate results when the system broke down and needed to be reset.

    “The problem of how to create true homomorphic encryption has been debated for more than 30 years, and Craig was the first person who got it right and figured out how to make the math work,” says Paul Kocher, the president of the security firm Cryptography Research. However, Kocher warns, because Gentry’s scheme currently requires a huge amount of computation, there’s a long way to go before it will be widely usable.

    Gentry acknowledges that the way he applied the double layer of encryption was “a bit of a hack” and that the system runs too slowly for practical use, but he is working on optimizing it for specific applications such as searching databases for records. He estimates that these applications could be ready for the market in five to 10 years.

  • Cancer Genomics

    Deciphering the genetics behind the disease
    Decoding cancer: Elaine Mardis uses sequencing to study the genomes of diseased cells.

    In the fall of 2006, a new machine arrived at what’s now known as the Genome Institute at Washington University in St. Louis. It was capable of reading DNA a thousand times as quickly as the facility’s earlier machines, and at far less cost. Elaine Mardis, the center’s codirector, immediately began using it to sequence cancer tissues, scouring their DNA for mutations. Just five years later, Mardis and her collaborators have sequenced both cancerous and healthy tissue from several hundred patients and identified tens of thousands of mutations. Some of the findings have led to new approaches to treating cancer, while others have opened new avenues of research.

    Cancer develops when cells accumulate genetic mistakes that allow them to grow and divide faster than healthy cells. Identifying the mutations that underlie this transformation can help predict a patient’s prognosis and identify which drugs are most likely to work for that patient. The information could also identify new targets for cancer drugs. “In a single patient, you have both the tumor genome and the normal genome,” Mardis says. “And you can get at answers much more quickly by comparing the two.”

    In 2008, Mardis and her team became the first to publish the sequence of a cancer genome, derived by comparing the DNA of healthy and cancerous cells in a patient with acute myeloid leukemia (AML), a cancer of the blood and bone marrow. Further studies have suggested that patients with mutations in a particular gene may fare better with bone marrow transplants than with traditional chemotherapy, a less risky treatment that physicians usually try first. Mardis predicts that soon all AML patients will be genetically tested, allowing their physicians to make more informed decisions about treatment.

    As the cost and speed of DNA sequencing have dropped—Mardis estimates that sequencing genomes from a patient’s cancerous and healthy tissue today costs about $30,000, compared with $1.6 million for the first AML genome—the technology is being applied to oncology more broadly. Research groups have now sequenced the genomes of multiple cancers, and in a handful of cases, they have used the results to guide treatment decisions for a patient (see “Cancer’s Genome,” January/February 2011). A few companies are now offering cancer genome analysis to researchers, and at least one is planning to offer the service to physicians and patients.

    The decreasing cost of sequencing also means that Mardis can use the technology in drug development and testing. Her latest project is part of a clinical trial assessing hormone therapy for breast cancer. She has developed a preliminary genetic profile of cancers most likely to respond to a popular set of drugs called aromatase inhibitors, which are given to most breast cancer patients whose tumor cells have estrogen receptors on the surface. The goal is to identify the patients who will benefit from the drugs and those who won’t. (Preliminary evidence suggests that only about half the patients in the trial respond to the drugs.)

    Understanding cancer genomes isn’t easy. Mardis’s team had to invent techniques to distinguish the rare cancer mutations from the mistakes that routinely occur when sequencing DNA. And scientists must figure out which mutations are actually driving the growth of the tumors and which are harmless. Then comes what might be the most challenging part: determining how the mutations trigger cancer. Mardis says she is leaving that challenge to the scientists around the world who are working to understand the mutations that she and others have identified. “It’s really gratifying to see others pick that up,” she says.
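The error-filtering step can be pictured with a deliberately simplified sketch (this is not Mardis's actual pipeline; the function, thresholds, and data are invented for illustration): a real somatic mutation is supported by many independent tumor reads and is absent from the matched normal tissue, while a random sequencing error shows up in only a stray read or two.

```python
from collections import Counter

# Hypothetical somatic-variant filter: accept a tumor variant only if it is
# (a) different from the normal-tissue base and (b) supported by enough
# reads to rule out a per-read sequencing error.
def call_somatic(normal_base, tumor_reads, min_fraction=0.3, min_reads=4):
    counts = Counter(tumor_reads)
    alt, alt_count = counts.most_common(1)[0]
    if alt == normal_base:
        return None  # tumor matches normal: no somatic change here
    if alt_count < min_reads or alt_count / len(tumor_reads) < min_fraction:
        return None  # too few supporting reads: likely sequencing error
    return alt

# A real mutation is seen in many independent reads ...
print(call_somatic("A", ["G"] * 12 + ["A"] * 8))   # -> G
# ... while a one-off machine error is rejected.
print(call_somatic("A", ["A"] * 19 + ["G"]))       # -> None
```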

  • Solid-State Batteries

    High-energy cells for cheaper electric cars
    Charged up: Sakti3’s Ann Marie Sastry is developing ways to mass-produce lighter and cheaper batteries.

    Ann Marie Sastry wants to rid electric vehicles’ battery systems of most of the stuff that doesn’t store energy, such as cooling devices and supporting materials within the battery cells. It all adds up to more than half the bulk of typical lithium-ion-based systems, making them cumbersome and expensive. So in 2007, she founded a startup called Sakti3 to develop solid-state batteries that don’t require most of this added bulk. They save even more space by using materials that store more energy. The result could be battery systems a third to half the size of conventional ones.

    Cutting the size of a battery system in half could cut its cost by as much as half, too. Since the battery system is the most expensive part of an electric car (often costing as much as $10,000), that would make electric cars far cheaper. Alternatively, manufacturers could keep the price constant and double the 100-mile range typical of electric cars.
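The arithmetic behind that trade-off is simple enough to spell out (the numbers are the ones quoted above; the linear size-to-cost relationship is a simplification):

```python
# Back-of-envelope from the figures in the text (assumed, not Sakti3 data).
pack_cost = 10_000       # dollars, typical EV battery system
typical_range = 100      # miles, typical electric car
size_factor = 0.5        # solid-state pack at half the size and cost

print(pack_cost * size_factor)        # -> 5000.0: a far cheaper car, or ...
print(typical_range / size_factor)    # -> 200.0: double the range at the same price
```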

    The limitations of the lithium-ion batteries used in electric cars are well known. “Most liquid electrolytes are flammable. The cathode dissolves,” says Sastry. Keeping the electrolyte from bursting into flames requires safety systems. And to extend the electrode’s lifetime and prevent heat buildup, the battery must be cooled and prevented from ever fully charging or discharging, resulting in wasted capacity. All this adds bulk and cost. So Sastry wondered if she could make a battery that simply didn’t need this much management.

    Sastry’s solid-state batteries are still based on lithium-ion technology, but they replace the liquid electrolyte with a thin layer of material that’s not flammable. Solid-state batteries are also resilient: some prototypes demonstrated by other groups can survive thousands of charge-discharge cycles. And they can withstand high temperatures, which will make it possible to use materials that can double or triple a battery’s energy density (the amount of energy stored in a given volume) but that are too dangerous or unreliable for use in a conventional lithium-ion battery.

    To make solid-state batteries that are practical and inexpensive to produce, Sastry has written simulation software to identify combinations of materials and structures that will yield compact, reliable high-energy devices. She can simulate these materials and components precisely enough to accurately predict how they will behave when assembled together in a battery cell. She is also developing manufacturing techniques that lend themselves to mass production. “If your overall objective is to change the way people drive, your criteria can no longer only be the best energy density ever achieved or the greatest number of cycles,” she says. “The ultimate criterion is affordability, in a product that has the necessary performance.”

    Although it may be several years before the batteries come to market, GM and other major automakers, such as Toyota, have already identified solid-state batteries as a potentially key component of future electric vehicles. There’s a limit to how much better conventional batteries can get, says Jon Lauckner, president of GM Ventures, which pumped over $3 million into Sakti3 last year. If electric vehicles are ever to make up more than a small fraction of cars on the road, “something fundamental has to change,” he says. He believes that Sakti3 is “working well beyond the limits of conventional electrochemical cells.”

    Sastry is aware that success isn’t guaranteed. Her field is something of a technological battleground, with many different approaches competing to power a new generation of cars. “None of this is obvious,” she says.

  • Smart Transformers

    Controlling the flow of electricity to stabilize the grid
    Powerful electronics: The smart transformer can handle AC and DC power and, thanks to semiconductors capable of handling high voltages, be programmed to redirect the flow of electricity in response to fluctuations in supply and demand. A. High-voltage semiconductor-based AC rectifier. B. High-voltage semiconductor-based DC converter. C. High-frequency transformers. D. Control circuitry.

    In a lab wired up to simulate a residential neighborhood, Alex Huang is working to revamp aging power grids into something more like the Internet—a network that might direct energy not just from centralized power stations to consumers but from any source to any destination, by whatever route makes the most sense. To that end, Huang, a professor of electrical engineering at North Carolina State University, is reinventing the transformers that currently reduce the voltage of the electricity distributed to neighborhoods so that it’s suitable for use in homes and offices.

    His new transformer will make it easier for the grid to cope with things it was never designed for, like charging large numbers of electric vehicles and tapping surplus electricity from residential solar panels. Smart meters in homes and offices can help by providing fine-grained information about the flow of electricity, but precise control over that flow is needed too. Not only would this stabilize the grid, but it would better balance supply and demand, reducing spikes so that fewer power plants would be needed to guarantee the electricity supply.

    “We need a radically new device to sit between homes and grid to provide a buffer, so that the grid will remain stable no matter what is going on in the homes,” Huang says. Conventional transformers handle only AC power and require manual adjustment or bulky electromechanical switches to redirect energy. What he wants is a compact transformer that can handle DC as well as AC and can be electronically controlled so that it will respond almost instantaneously to fluctuations in supply and demand. If one neighbor plugged an electric car into an AC charger, for example, it could respond by tapping otherwise unneeded DC power from another neighbor’s solar panels.
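The kind of decision such a transformer would make each control cycle can be sketched as follows (the function and its structure are invented for illustration, not Huang's design): match local surplus against local demand before touching the grid.

```python
# Hypothetical neighborhood dispatch step. Each house reports net power in
# watts; negative means generating (e.g., rooftop solar).
def dispatch(net_loads_w):
    surplus = -sum(p for p in net_loads_w.values() if p < 0)
    demand = sum(p for p in net_loads_w.values() if p > 0)
    local = min(surplus, demand)      # watts routed neighbor-to-neighbor
    from_grid = demand - local        # remainder drawn from the grid
    to_grid = surplus - local         # leftover generation exported
    return {"local_w": local, "from_grid_w": from_grid, "to_grid_w": to_grid}

# An EV charger (+6.6 kW) largely offset by a neighbor's solar (-5 kW):
print(dispatch({"ev_charger": 6600, "solar_roof": -5000, "tv": 150}))
# -> {'local_w': 5000, 'from_grid_w': 1750, 'to_grid_w': 0}
```

An electronically controlled transformer could rerun a decision like this many times per second, which is what lets it absorb fluctuations that mechanical switches cannot.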

    To build such a transformer, Huang started developing transistors and other semiconductor-based devices that can handle thousands of volts, founding the Future Renewable Electric Energy Delivery and Management Systems Center at NC State in 2008. His first transformer had silicon-based components, but silicon is too unreliable for large-scale use at high voltages. So Huang has pioneered the development of transformers with semiconductors based on silicon carbide or gallium nitride, compounds that are more reliable in high-power applications. He expects to have a test version of the silicon carbide transformer ready in two years and to have a device that utilities can test in five years.

    Huang’s transformers would make connecting a solar panel or electric car to the grid as simple as connecting a digital camera or printer to a computer. That would reduce our reliance on fossil fuels by making it easier for small-scale sources of cleaner energy to contribute to the grid. He says, “The real benefit to society will come when there’s an aggregate effect from many, many small generators, which we hope will be renewable and sustainable energy sources.”

  • Gestural Interfaces

    Controlling computers with our bodies
    Determining depth: PrimeSense’s sensor determines depth by combining a number of techniques, including structured light, in which an infrared pattern (red lines) is projected onto objects; how the pattern is distorted gives information about distances. The example illustrated here is an interactive airport information display (gray box); below it is the depth sensor (blue box).

    How do you issue complex commands to a computer without touching it? It’s a crucial issue now that televisions are connected to social networks and cars are fitted with computerized systems for communication, navigation, and entertainment. So Alexander Shpunt has designed a 3-D vision system that lets anyone control a computer just by gesturing in the air.

    Shpunt spent five years developing the system at Tel Aviv-based PrimeSense, and Microsoft adopted the technology to power its popular Kinect controller for the Xbox 360 game console. Players can use it to direct characters with their bodies alone—no need for the wands, rings, gloves, or colored tags that previous gestural interfaces relied on to detect the user’s movements.

    The key to dispensing with those props was getting the computer to see the world in three dimensions, rather than the two captured by normal cameras. Sensing depth makes it relatively easy to distinguish, say, an arm from a table in the background, and then track the arm’s movement.

    Shpunt recalls that when he started developing his system there were a few ways to sense depth—primarily “time of flight” (determining distance from a sensor by measuring how long it takes light or sound to bounce off an object) and “structured light” (projecting patterns of light onto objects and analyzing how the patterns are distorted by the object’s surface). Although there was a lot of academic activity and a few companies built prototypes, there was “nothing really mature” that could be mass-produced, he says. Instead, he built his own system, cobbling together an approach that borrowed from those two techniques as well as stereoscopy—comparing images of the same scene from two different viewpoints.
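Both structured light and stereoscopy ultimately reduce to triangulation, which a few lines make concrete (the focal length and baseline below are assumed values for illustration, not PrimeSense's calibration): a projected or observed feature shifts sideways by a "disparity" that shrinks as distance grows.

```python
# Triangulation by similar triangles: Z = f * B / d, where f is the focal
# length in pixels, B the emitter-camera baseline, and d the disparity.
def depth_mm(focal_px, baseline_mm, disparity_px):
    return focal_px * baseline_mm / disparity_px

f, B = 580.0, 75.0   # assumed focal length (px) and baseline (mm)
for d in (87.0, 43.5, 21.75):
    # Disparity halves each time the object moves twice as far away.
    print(round(depth_mm(f, B, d)))   # -> 500, 1000, 2000
```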

    The Kinect is only the beginning of what Shpunt believes will be a gestural-interface revolution. A small army of hackers, encouraged by PrimeSense, is already retooling the controller to other ends. Researchers at Louisiana State University have rigged a helmetless, gloveless virtual-reality system out of a Kinect unit and an off-the-shelf 3-D TV set. In Australia, a logistics software firm quickly put together a gesture-controlled system for monitoring air traffic. Further real-world applications are easy to imagine, says Shpunt: gaze-tracking heads-up controls for automobiles, touchless interactive displays for shopping malls and airports.

    For now, Shpunt is working with computer maker Asus to build gestural controls for today’s increasingly complex and network-connected televisions—essentially turning a TV into a giant iPad that can be operated from the couch without a remote control.

  • Social Indexing

    Facebook remaps the Web to personalize online services
    Results you’ll like: Bret Taylor, who wields Facebook’s “Like” button.

    Bret Taylor wants to make online services more attuned to what you really want. Search sites could take your friends’ opinions into account when you look for restaurants. Newspaper sites could use their knowledge of what’s previously captured your attention online to display articles you are interested in. “Fundamentally, the Web would be better if it were more oriented around people,” says Taylor, who is Facebook’s chief technology officer. To bring this idea to fruition, he is creating a kind of social index of the most frequently visited chunks of the Web.

    Many sites have tried to personalize what they offer by remembering your past behavior and showing information they presume will be relevant to you. But the social index could be much more powerful because it also mines your friends’ interests and collects information from multiple sites. As a result, the index can give websites a sense of what is likely to interest you even if you’ve never been there before.

    This ambitious project gets much of its information from the simple “Like” button, a thumbs-up logo that adorns many Web pages and invites visitors to signal their appreciation for something—a news story, a recipe, a photo—with a click. Taylor created the concept in 2007 at FriendFeed, a social network that he cofounded, which was acquired by Facebook in 2009. Back then, the button was just a way to encourage people to express their interests, but in combination with Facebook’s user base of nearly 600 million people, it is becoming a potent data-collecting tool. The code behind the Like button is available to any site that wants to add it to its pages. If a user is logged in to Facebook and clicks the Like button anywhere on the Web, the link is shared with that person’s Facebook friends. Simultaneously, that thumbs-up vote is fed into Taylor’s Web-wide index.

    That’s how the Wall Street Journal highlights articles that a person’s friends enjoyed on its site. This is what lets Microsoft’s Bing search engine promote pages liked by a person’s friends. And it’s how Pandora creates playlists based on songs or bands a person has appreciated on other sites.
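In miniature, the mechanism behind those examples is just a map from pages to the people who liked them (the structure below is a sketch of the idea, not Facebook's implementation): each click appends a user to a page's entry, and a site scores a page for a visitor by counting how many of the visitor's friends liked it.

```python
from collections import defaultdict

# Hypothetical social index: URL -> set of users who clicked Like there.
index = defaultdict(set)

def record_like(user, url):
    index[url].add(user)

def score(url, friends):
    # How many of this visitor's friends liked the page?
    return len(index[url] & set(friends))

record_like("ana", "wsj.com/markets-story")
record_like("bo", "wsj.com/markets-story")
record_like("bo", "recipes.example/soup")

print(score("wsj.com/markets-story", ["ana", "bo", "cy"]))  # -> 2
print(score("recipes.example/soup", ["ana", "cy"]))         # -> 0
```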

    This method of figuring out connections between pieces of content is fundamentally different from the one that has ruled for a decade. Google mathematically indexes the Web by scanning the hyperlinks between pages. Pages with many links from other sites rise to the top of search results on the assumption that such pages must be relatively useful or interesting. The social index isn’t going to be a complete replacement for Google, but for many types of activity—such as finding products, entertainment, or things to read—the new system’s personal touch could make it more useful.

    Google itself acknowledges this: it recently rolled out a near-clone of the Like button, which it calls “+1.” It lets people signify for their friends which search results or Web pages they’ve found useful. Google is also using Twitter activity to augment its index. If you have connected your Twitter and Google accounts, Web links that your friends have shared on Twitter may come up higher in Google search results.

    Another advantage of a social index is that it could be less vulnerable to manipulation: inflating Google rankings by creating extra links to a site is big business, but buying enough Facebook likes to make a difference is nearly impossible, says Chris Dixon, cofounder of Hunch, a Web startup that combines its own recommendation technology with tools from Facebook and Twitter. “Social activity provides a really authentic signal of what is authoritative and good,” says Dixon. That’s why Hunch and other services, including an entertainment recommendation site called GetGlue, are building their own social indexes, asking people to record their positive feelings about content from all over the Web. If you’re browsing for something on Amazon, a box from GetGlue can pop up to tell you which of your friends have liked that item.

    A social index will be of less use to people who don’t have many online connections. And even Facebook’s map covers just a small fraction of the Web for now. But about 10,000 additional websites connect themselves to Facebook every day.

  • Cloud Streaming

    Bringing high-performance software to mobile devices
    This computationally intensive 3-D animation software appears to be running on a tablet, but is actually running on OnLive’s remote servers.

    In the Silicon Valley conference room of OnLive, Steve Perlman touches the lifelike 3-D face of a computer-generated woman displayed on his iPad. Swiping the screen with his fingers, Perlman rotates her head; her eyes move to compensate, so that she continues to stare at one spot. None of this computationally intensive animation and visualization is actually taking place on the iPad. The device isn’t powerful enough to run the program responsible—an expensive piece of software called Autodesk Maya. Rather, Perlman’s finger-swipe inputs are being sent to a data center running the software. The results are returned as a video stream that seems to respond instantaneously to his touch.

    To make this work, Perlman has created a way of compressing a video stream that overcomes the problems marring previous attempts to use mobile devices as remote terminals for graphics-intensive applications. The technology could make applications such as sophisticated movie-editing or architectural-design tools accessible on hundreds of millions of Internet-connected tablets, smart phones, and the like. And not only professional animators and architects would benefit. For consumers, it will allow streaming movies to be fast-forwarded and rewound in real time, as with a DVD player, while schools anywhere could gain easy access to software. “The long-term vision is actually to move all computing out to the cloud,” says Perlman, OnLive’s CEO.

    Perlman’s biggest innovation is dispensing with the buffers that are typically used to store a few seconds or minutes of streaming video. Though buffers allow time for any lost or delayed data to be re-sent before it’s needed, they create a lag that makes it impossible to do real-time work. Instead, Perlman uses various strategies to fill in or hide missing details—in extreme cases even filling in entire frames by extrapolating from frames received earlier—so that the eye does not detect a problem should some data get lost or delayed. The system also continually checks the network connection’s quality, increasing the amount of video compression and decreasing bandwidth requirements as needed. To save precious milliseconds, Perlman has even negotiated with Internet carriers to ensure that data from his servers is carried directly on high-speed, high-capacity Internet backbones.
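The adaptive step can be sketched as a simple control loop (the formula and numbers are invented for illustration, not OnLive's algorithm): with no buffer to absorb trouble, the encoder reacts every frame, compressing harder as measured loss or round-trip time worsens so that frames keep arriving on time.

```python
# Hypothetical per-frame bitrate controller for a bufferless stream.
def choose_bitrate_kbps(loss_rate, rtt_ms, max_kbps=5000, min_kbps=500):
    # Degrade quality smoothly with packet loss and round-trip time;
    # 40 ms is an assumed "comfortable" RTT for this sketch.
    quality = (1.0 - loss_rate) * min(1.0, 40.0 / rtt_ms)
    return max(min_kbps, int(max_kbps * quality))

print(choose_bitrate_kbps(loss_rate=0.0, rtt_ms=20))   # -> 5000: healthy link, full rate
print(choose_bitrate_kbps(loss_rate=0.1, rtt_ms=80))   # -> 2250: lossy and slow, compress harder
```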

    The goal is to respond to user inputs within 80 milliseconds, a key threshold for visual perception. Reaching that threshold is crucial for a broad range of applications, says Vivek Pai, a computer scientist at Princeton University: “If you see a delay between what you are doing and the result of what you are doing, your brain drifts off.”

    Perlman founded OnLive in 2007 to commercialize his streaming technology, and last year he launched a subscription service offering cloud-based versions of popular action games, a particularly demanding application in terms of computing power and responsiveness. But games are just a start—OnLive’s investors include movie studio Warner Brothers and Autodesk, which, besides Maya, also makes CAD software for engineers and designers. Perlman believes that eventually, “any mobile device will be able to bring a huge level of computing power to any person in the world with as little as a cellular connection.”

  • Separating Chromosomes

    A more precise way to read DNA will change how we treat disease
    Chromosome chip: This matchbox-size device uses tiny valves, channels, and chambers to separate the 23 pairs of chromosomes in the human genome so they can be analyzed individually.

    The clear rubber chip sitting under a microscope in Stephen Quake’s lab is a complex maze of tiny channels, chambers, and pumps, hooked up to thin plastic tubes that supply reagents and control 650-plus minuscule valves. Using this microfluidic chip, Quake, a biophysicist at Stanford University, has engineered a way of obtaining data that’s missing from nearly all human genome sequences: which member of a pair of chromosomes a gene belongs to.

    Technology that makes it easier to identify the variations between chromosomes could have a huge impact on fundamental genomic research and personalized medicine. “This is definitely the next frontier,” says Nicholas Schork, a statistical geneticist at the Scripps Research Institute. Right now, he says, “we’re missing out on all sorts of biological phenomena that occur as a result of humans’ having [paired chromosomes].”

    When scientists sequence human genomes, they largely ignore the fact that chromosomes come in pairs, with one copy inherited from the mother and one from the father. (The Y chromosome, which determines sex, is the exception.) Standard techniques blend genetic data from the two chromosomes to yield a single sequence.

    Quake’s alternative is to physically separate chromosomes before genomic analysis. Cells are piped into the chip; when Quake spots one that’s preparing to divide (a stage at which the chromosomes are easier to manipulate), he traps the cell in a chamber and bursts its membrane, causing the chromosomes to spill out. They are randomly distributed into 48 smaller chambers. While it is possible for more than one chromosome to end up in a single chamber, it’s very unlikely that a chromosome will end up with its pair. Using standard techniques, the chromosomes are then sequenced or screened for genetic variants.
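The claim that a chromosome rarely ends up with its pair is easy to check numerically: scattered uniformly into 48 chambers, a given pair lands together with probability 1/48, about 2 percent. A quick simulation (illustrative only, not part of Quake's workflow):

```python
import random

# Scatter 23 chromosome pairs uniformly into 48 chambers and count how many
# pairs collide (both members in the same chamber).
def pair_collisions(pairs=23, chambers=48):
    hits = 0
    for _ in range(pairs):
        a, b = random.randrange(chambers), random.randrange(chambers)
        hits += (a == b)
    return hits

random.seed(1)
trials = 20_000
collision_free = sum(pair_collisions() == 0 for _ in range(trials)) / trials
# (47/48)**23 is about 0.62: most runs separate every single pair.
print(collision_free)
```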

    Other groups have pursued different strategies to sequence individual chromosomes. But Quake thinks his has an advantage because it doesn’t rely on decoding and reconstructing chromosomes from a mixed pool of DNA fragments, as others do. “By the way we physically prepare the sample, we know [the result is] right,” he says.

    If costs can come down enough, Quake’s technique will be widely used, says Meredith Yeager, a senior scientist at the National Cancer Institute’s Core Genotyping Facility. The ability to routinely tell where genetic variants lie on different chromosomes “really is a big deal,” Yeager says. “Context matters.”

    For example, if testing detects two separate mutations in a disease-related gene, it’s now impossible to tell whether one chromosome has both mutations or each chromosome has one. A patient who has at least one good copy of the gene is much more likely to escape the disease or experience it in a relatively mild form. Whether the aim is to predict responses to an asthma drug or to find better matches for bone marrow transplants, the accuracy of personalized medicine could eventually hinge on understanding the variation between chromosomes.

    Fluidigm, the South San Francisco company that Quake cofounded in 1999 to commercialize microfluidic chips, is now looking at ways to automate the chromosome separation chip so that it doesn’t require so much expertise to use. Quake hopes to discover “something really interesting” about human diversity or the region of the genome that defines immune system responses. This region has been difficult to understand because it has so much genetic variation, and scientists lacked a tool to study it carefully—until now.

  • Synthetic Cells

    Designing new genomes could speed the creation of vaccines and biofuel-producing bacteria
    1: Bacterial genomes take the form of rings of DNA. An artificial genome is designed on a computer, including a sequence that “watermarks” the genome (red arc) and one that confers resistance to antibiotics (yellow arc). The genome is then synthesized as 1,078 overlapping DNA fragments. 2: Yeast cells stitch together 10 sequential fragments at a time. The longer strands that are produced are in turn stitched together by other yeast cells, and the process is repeated until copies of the whole genome are assembled. 3: The synthetic genomes are added to a colony of bacteria. Some of the bacterial cells entering the process of division absorb the synthetic genomes alongside their own. 4: When the bacterial cells divide, each daughter inherits one genome. An antibiotic is used to kill cells with the natural genome, leaving a colony of bacteria with the synthetic genome.

    The bacteria growing on stacks of petri dishes in Daniel Gibson’s lab are the first living creatures with a completely artificial genome. The microbes’ entire collection of genes was edited on a computer and assembled by machines that create genetic fragments from chemicals and by helper cells that pieced those fragments together. Gibson hopes that being able to design and create entire genomes, instead of just short lengths of DNA, will dramatically speed up the process of engineering microbes that can carry out tasks such as efficiently producing biofuels or vaccines.

    Until last year, biologists hadn’t been able to make large enough pieces of DNA to create an entire genome; though living cells routinely make long stretches of DNA, a DNA synthesis machine can’t do the same. In May, Gibson and his colleagues at the J. Craig Venter Institute announced their solution to this problem. Gibson used yeast cells to stitch together thousands of fragments of DNA made by a machine, pooled the longer pieces, and repeated the process until the genome was complete. Next he inserted the genome into bacterial cells that were about to divide and grew the bacteria in a medium hostile to all cells except the ones harboring the synthetic genome.
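The staged assembly can be sketched as repeated tenfold joining (the fragment count and group size come from the text; the chunking arithmetic is illustrative, as the actual protocol's stage sizes differ in detail):

```python
# Hierarchical assembly sketch: stitch up to 10 neighboring pieces per
# round until a single genome-length molecule remains.
def assembly_rounds(n_fragments, per_join=10):
    pieces, rounds = n_fragments, 0
    while pieces > 1:
        pieces = -(-pieces // per_join)   # ceiling division: joins this round
        rounds += 1
    return rounds

print(assembly_rounds(1078))   # -> 4 rounds: 1078 -> 108 -> 11 -> 2 -> 1
```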

    “When we began in 2004,” he says, “assembling a complete bacterial genome didn’t seem like an easy thing to do”—even though the Venter Institute researchers started with one of the smallest bacterial genomes that have been sequenced, that of a mycoplasma. After finally overcoming the technical hurdles involved, Gibson says, creating the synthetic cell itself was exciting but almost anticlimactic. Going from computer screen to bacterial colony now seems easy.

    Gibson has also developed a faster, yeast-free way to assemble large pieces of DNA in a bottle. His colleagues are using these methods to rapidly synthesize the viral DNA needed to speed up the production of influenza vaccines. The nonprofit Venter Institute is working with Synthetic Genomics, a company that commercializes work done at the institute, to develop products.

    The creation of the synthetic cell is part of an effort to design a “minimal cell” containing only the most basic genome required for life. Gibson and his colleagues at the Venter Institute believe that synthetic biologists could use this minimal cell as the basis for cells that efficiently produce biofuels, drugs, and other industrial products.

    Right now, Gibson’s technique for incorporating his synthetic genome into living cells works only with mycoplasmas, which are useful for experimentation but not for industrial purposes. If Gibson can adapt this system to work with a broader group of bacteria, it could be used to speed up the process of engineering microbes that make a wide variety of products. At least two major challenges remain: developing appropriate recipient cells for genome transplants, and finding ways of working with even larger pieces of DNA. “We’re still in the early stages,” he says, “and we don’t know what the limits are.”

  • Crash-Proof Code

    Making critical software safer
    Fail-safe: June Andronick uses mathematical analysis to create crash-proof software.

    When a computer controls critical systems in vehicles and medical devices, software bugs can be disastrous: “unnecessarily risky” programs could put lives in danger, says June Andronick, a researcher at NICTA, Australia’s national IT research center. As a result of one recently discovered software vulnerability, she notes by way of example, “a car could be controlled by an attack on its stereo system.” She is trying to reduce these risks by building the most important part of an operating system—the core, or kernel—in such a way that she can prove it will never crash.

    The currently favored approach to creating reliable software is to test it under as many conditions as time and imagination allow. Instead, Andronick is adapting a technique known as formal verification, which microchip designers use to check their designs before making an integrated circuit: they create a mathematical representation of the chip’s subsystems that can be used to prove that the chip will behave as intended for all possible inputs. Until now, formal verification had been considered impractical for large programs such as operating systems, because analyzing a representation of the program code would be too complicated. But in a computing and mathematical tour de force, Andronick and her colleagues, working in Gerwin Klein’s lab at NICTA, were able to formally verify the code that makes up most of the kernel of an operating system designed for processors often found embedded in smart phones, cars, and electronic devices such as portable medical equipment. Because this code is what ultimately passes software instructions from other parts of the system to hardware for execution, bulletproofing it has a major impact on the reliability of the entire system.
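    The gap between testing and verification can be seen in miniature. The sketch below is illustrative Python, unrelated to NICTA’s tools, which construct machine-checked mathematical proofs over a model of the C code; here, exhaustively checking a property over a small finite input space plays the role of a proof that holds for all inputs, not just the ones a tester happened to try.

```python
BITS = 8
N = 1 << BITS  # arithmetic wraps modulo 2**BITS, as in fixed-width hardware

def mid_naive(lo, hi):
    """Midpoint via (lo + hi) // 2 in 8-bit unsigned arithmetic."""
    return ((lo + hi) % N) // 2

def mid_safe(lo, hi):
    """Midpoint via lo + (hi - lo) // 2; the intermediate sum never wraps."""
    return (lo + (hi - lo) // 2) % N

def verified(mid):
    """Check the specification lo <= mid(lo, hi) <= hi for EVERY valid
    input pair: the finite-domain analogue of proving it for all inputs."""
    return all(lo <= mid(lo, hi) <= hi
               for lo in range(N) for hi in range(lo, N))
```

    A handful of random tests could easily miss the inputs where lo + hi wraps past 255, but the exhaustive check is guaranteed to find them, which is why mid_naive fails verification while mid_safe passes.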

    “The work is hugely significant,” says Lawrence Paulson, a computer science professor at the University of Cambridge. Beyond showing that there’s no bug in the kernel that could cause it to crash, he says, the verification guarantees that the kernel will perform, without error, every function it was programmed to perform.

    The task was made a little easier by the choice to develop a so-called microkernel. Microkernels delegate as many functions as possible—such as handling input and output—to software modules outside the kernel. Consequently, they are relatively small—in this case, about 7,500 lines of C code and 600 lines of assembler. “That’s really small for a kernel, but really large for formal verification,” Andronick says. The analysis was targeted at the thousands of lines of C code; new software and mathematical tools had to be developed for the task. The kernel was released in February, and the team is working on another version designed for the popular line of x86 processor chips.
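    The delegation idea behind a microkernel can be sketched in a few lines. This is a hypothetical toy model, not the verified kernel’s actual interface: the kernel does nothing but route messages between services that live outside it, so a fault in a service is reported rather than propagated.

```python
class ToyMicrokernel:
    """Toy model: the kernel's only job is routing messages to services
    registered outside it, keeping faults contained in those services."""

    def __init__(self):
        self.services = {}

    def register(self, name, handler):
        self.services[name] = handler

    def send(self, name, message):
        handler = self.services.get(name)
        if handler is None:
            return ("error", "no such service: " + name)
        try:
            return ("ok", handler(message))
        except Exception as exc:
            # A crashing service is reported, not propagated:
            # the kernel itself keeps running.
            return ("error", type(exc).__name__)

kernel = ToyMicrokernel()
kernel.register("echo", lambda msg: msg.upper())
kernel.register("flaky", lambda msg: 1 // 0)
```

    Because the routing core is so small, it is the only part that must be verified; everything pushed outside it can fail without bringing the system down, which is what makes the 7,500-line scale of the real kernel feasible for proof.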

    Andronick doesn’t expect that the technique will scale to much larger software, but she doesn’t think it has to. Using verified code in key subsystems would allow developers to make sure that bugs in less rigorously examined programs—such as those used to interface with a car stereo—can’t affect critical hardware. It could also prevent a computer from locking up if it encounters a problem. Andronick wants more software developers to embrace formal verification “in fields where safety and security really matter,” she says. “We show that it is possible.”