
10 Breakthrough Technologies 2007


March 1, 2007

Each year, Technology Review selects what it believes are the 10 most important emerging technologies. The winners are chosen based on the editors’ coverage of key fields. The question that we ask is simple: is the technology likely to change the world? Some of these changes are on the largest scale possible: cheaper, more efficient solar cells based on quantum dots aim at tackling global warming in the years ahead. Other changes will be more local and involve how we use technology: for example, peer-to-peer delivery of digital video, augmented reality on mobile phones, and optical antennas that could multiply the capacity of a DVD. And new ways to control neurons, stop bleeding, monitor patients, and analyze single cells will affect us on the most intimate level of all, with the promise of making our lives healthier.

This story was part of our March/April 2007 issue.

10 Breakthrough Technologies

  • Peering into Video’s Future

    The Internet is about to drown in digital video. Hui Zhang thinks peer-to-peer networks could come to the rescue.

    Ted Stevens, the 83-year-old senior senator from Alaska, was widely ridiculed last year for a speech in which he described the Internet as "a series of tubes." Yet clumsy as his metaphor may have been, Stevens was struggling to make a reasonable point: the tubes can get clogged. And that may happen sooner than expected, thanks to the exploding popularity of digital video.

    TV shows, YouTube clips, animations, and other video applications already account for more than 60 percent of Internet traffic, says CacheLogic, a Cambridge, England, company that sells media delivery systems to content owners and Internet service providers (ISPs). "I imagine that within two years it will be 98 percent," adds Hui Zhang, a computer scientist at Carnegie Mellon University. And that will mean slower downloads for everyone.

    Zhang believes help could come from an unexpected quarter: peer-to-peer (P2P) file distribution technology. Of course, there's no better playground for piracy, and millions have used P2P networks such as Gnutella, Kazaa, and BitTorrent to help themselves to copyrighted content. But Zhang thinks this black-sheep technology can be reformed and put to work helping legitimate content owners and Internet-backbone operators deliver more video without overloading the network.

    For Zhang and other P2P proponents, it's all a question of architecture. Conventionally, video and other Web content gets to consumers along paths that resemble trees, with the content owners' central servers as the trunks, multiple "content distribution servers" as the branches, and consumers' PCs as the leaves. Tree architectures work well enough, but they have three key weaknesses: If one branch is cut off, all its leaves go with it. Data flows in only one direction, so the leaves'--the PCs'--capacity to upload data goes untapped. And perhaps most important, adding new PCs to the network merely increases its congestion--and the demands placed on the servers.

    In P2P networks, by contrast, there are no central servers: each user's PC exchanges data with many others in an ever-shifting mesh. This means that servers and their overtaxed network connections bear less of a burden; data is instead provided by peers, saving bandwidth in the Internet's core. If one user leaves the mesh, others can easily fill the gap. And adding users actually increases a P2P network's power.

    There are just two big snags keeping content distributors and their ISPs from warming to mesh architectures. First, to balance the load on individual PCs, the most advanced P2P networks, such as BitTorrent, break big files into blocks, which are scattered across many machines. To reassemble those blocks, a computer on the network must use precious bandwidth to broadcast "metadata" describing which blocks it needs and which it already has.

    Second, ISPs are loath to carry P2P traffic, because it's a big money-loser. For conventional one-way transfers, ISPs can charge content owners such as Google or NBC.com according to the amount of bandwidth they consume. But P2P traffic is generated by subscribers themselves, who usually pay a flat monthly fee regardless of how much data they download or upload.

    Zhang and others believe they're close to solving both problems. At Cornell University, computer scientist Paul Francis is testing a P2P system called Chunkyspread that combines the best features of trees and meshes. Members' PCs are arranged in a classic tree, but they can also connect to one another, reducing the burden on the branches.

    Just as important, Chunkyspread reassembles files in "slices" rather than blocks. A slice consists of the nth bit of every block--for example, the fifth bit in every block of 20 bits. Alice's PC might obtain a commitment from Bob's PC to send bit five from every block it possesses, from Carol's PC to send bit six, and so forth. Once these commitments are made, no more metadata need change hands, saving bandwidth. In simulations, Francis says, Chunkyspread far outperforms simple tree-based multicast methods.
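
    To make the contrast concrete: a BitTorrent-style peer must keep announcing which blocks it holds, while a Chunkyspread-style peer makes a one-time commitment to a slice. The minimal Python sketch below illustrates the slice scheme described above, with made-up sizes and plain lists standing in for file data; it is an illustration, not Francis's implementation.

    ```python
    # Sketch of slice-based distribution (illustrative, not Chunkyspread's code).
    # Slice n collects the nth unit of every block, so a peer that commits to
    # "slice n" can keep streaming with no further per-block metadata exchange.

    BLOCK_SIZE = 20  # units per block (the article's example uses 20-bit blocks)

    def make_blocks(data):
        """Split data into fixed-size blocks, padding the last one."""
        padded = data + [0] * (-len(data) % BLOCK_SIZE)
        return [padded[i:i + BLOCK_SIZE] for i in range(0, len(padded), BLOCK_SIZE)]

    def slice_of(blocks, n):
        """Slice n = the nth unit of every block."""
        return [block[n] for block in blocks]

    def reassemble(slices):
        """Rebuild all blocks from a complete set of slices."""
        num_blocks = len(slices[0])
        return [[slices[n][b] for n in range(BLOCK_SIZE)] for b in range(num_blocks)]

    data = list(range(100))  # stand-in for a video file
    blocks = make_blocks(data)
    # Each peer commits to one slice: Bob sends slice 5, Carol slice 6, and so on.
    slices = [slice_of(blocks, n) for n in range(BLOCK_SIZE)]
    assert reassemble(slices) == blocks  # no ongoing metadata needed
    print("reassembled", len(blocks), "blocks from", len(slices), "slices")
    ```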

    Zhang thinks new technology can also make carrying P2P traffic more palatable for ISPs. Right now, operators have little idea what kind of data flows through their networks. At his Pittsburgh-based stealth startup, Rinera Networks, Zhang is developing software that will identify P2P data, let ISPs decide how much of it they're willing to carry, at what volume and price, and then deliver it as reliably as server-based content distribution systems do--all while tracking everything for accounting purposes. "We want to build an ecosystem such that service providers will actually benefit from P2P traffic," Zhang explains. Heavy P2P users might end up paying extra fees--but in the end, content owners and consumers won't gripe, he argues, since better accounting should make the Internet function more effectively for everyone.

    If this smells like a violation of the Internet's tradition of network neutrality--the principle that ISPs should treat all bits equally, regardless of their origin--then it's because the tradition needs to be updated for an era of very large file transfers, Zhang believes. "It's all about volume," he says. "Of course, we don't want the service providers to dictate what they will carry on their infrastructure. On the other hand, if P2P users benefit from transmitting and receiving more bits, the guys who are actually transporting those bits should be able to share in that."

    Networking and hardware companies have their eyes on technologies emerging from places like Rinera and Francis's Cornell lab, even as they build devices designed to help consumers download video and other files over P2P networks. Manufacturers Asus, Planex, and QNAP, for example, are working with BitTorrent to embed the company's P2P software in their home routers, media servers, and storage devices. With luck, Senator Stevens's tubes may stay unblocked a little longer.

  • Nanocharging Solar

    Arthur Nozik believes quantum-dot solar power could boost output in cheap photovoltaics.
    Arthur Nozik hopes quantum dots will enable the production of more efficient and less expensive solar cells, finally making solar power competitive with other sources of electricity.

    No renewable power source has as much theoretical potential as solar energy. But the promise of cheap and abundant solar power remains unmet, largely because today’s solar cells are so costly to make.

    Photovoltaic cells use semiconductors to convert light energy into electrical current. The workhorse photovoltaic material, silicon, performs this conversion fairly efficiently, but silicon cells are relatively expensive to manufacture. Some other semiconductors, which can be deposited as thin films, have reached market, but although they’re cheaper, their efficiency doesn’t compare to that of silicon. A new solution may be in the offing: some chemists think that quantum dots–tiny crystals of semiconductors just a few nanometers wide–could at last make solar power cost-competitive with electricity from fossil fuels.

    By dint of their size, quantum dots have unique abilities to interact with light. In silicon, one photon of light frees one electron from its atomic orbit. In the late 1990s, Arthur Nozik, a senior research fellow at the National Renewable Energy Laboratory in Golden, CO, postulated that quantum dots of certain semiconductor materials could release two or more electrons when struck by high-energy photons, such as those found toward the blue and ultraviolet end of the spectrum.

    In 2004, Victor Klimov of Los Alamos National Laboratory in New Mexico provided the first experimental proof that Nozik was right; last year he showed that quantum dots of lead selenide could produce up to seven electrons per photon when exposed to high-energy ultraviolet light. Nozik’s team soon demonstrated the effect in dots made of other semiconductors, such as lead sulfide and lead telluride.
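
    A back-of-the-envelope calculation shows where the ceiling on this effect sits: at best, a photon can free one electron for each band gap’s worth of energy it carries. In the sketch below, the band-gap figure is illustrative (quantum confinement raises a dot’s gap well above the bulk value), not a number taken from Klimov’s measurements.

    ```python
    # Energy bookkeeping for carrier multiplication: a photon can free at
    # most one electron per band gap's worth of energy it carries.
    # The 0.65 eV gap below is an illustrative value, not a measurement.

    HC = 1239.84  # photon energy in eV = HC / wavelength in nm

    def max_electrons(wavelength_nm, band_gap_ev):
        return int((HC / wavelength_nm) // band_gap_ev)

    # A lead selenide dot with a confinement-widened ~0.65 eV gap, struck
    # by 250 nm ultraviolet light (about 5 eV per photon):
    print(max_electrons(250, 0.65))  # -> 7 electrons per photon
    ```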

    These experiments have not yet produced a material suitable for commercialization, but they do suggest that quantum dots could someday increase the efficiency of converting sunlight into electricity. And since quantum dots can be made using simple chemical reactions, they could also make solar cells far less expensive. In results that have not yet been published, researchers in Nozik’s lab recently demonstrated the extra-electron effect in quantum dots made of silicon; these dots would be far less costly to incorporate into solar cells than the large crystalline sheets of silicon used today.

    To date, the extra-electron effect has been seen only in isolated quantum dots; it was not evident in the first prototype photovoltaic devices to use the dots. The trouble is that in a working solar cell, electrons must travel out of the semiconductor and into an external electrical circuit. Some of the electrons freed in any photovoltaic cell are inevitably “lost,” recaptured by positive “holes” in the semiconductor. In quantum dots, this recapture happens far faster than it does in larger pieces of a semiconductor; many of the freed electrons are immediately swallowed up.

    The Nozik team’s best quantum-dot solar cells have managed only about 2 percent efficiency, far less than is needed for a practical device. However, the group hopes to boost the efficiency by modifying the surfaces of the quantum dots or improving electron transport between dots.

    The project is a gamble, and Nozik readily admits that it might not pay off. Still, the enormous potential of the nanocrystals keeps him going. Nozik calculates that a photovoltaic device based on quantum dots could have a maximum efficiency of 42 percent, far better than silicon’s maximum efficiency of 31 percent. The quantum dots themselves would be cheap to manufacture, and they could do their work in combination with materials like conducting polymers that could also be produced inexpensively. A working quantum dot-polymer cell could eventually place solar electricity on a nearly even economic footing with electricity from coal. “If you could [do this], you would be in Stockholm–it would be revolutionary,” says Nozik.

    A commercial quantum-dot solar cell is many years away, assuming it’s even possible. But if it is, it could help put our fossil-fuel days behind us.

  • Neuron Control

    Karl Deisseroth’s genetically engineered “light switch,” which lets scientists turn selected parts of the brain on and off, may help improve treatments for depression and other disorders.

    In his psychiatry practice at the Stanford Medical Center, Karl Deisseroth sometimes treats patients who are so severely depressed that they can’t walk, talk, or eat. Intensive treatments, such as electroconvulsive therapy, can literally save such patients’ lives, but often at the cost of memory loss, headaches, and other serious side effects. Deisseroth, who is both a physician and a bioengineer, thinks he has a better way: an elegant new method for controlling neural cells with flashes of light. The technology could one day lead to precisely targeted treatments for psychiatric and neurological disorders; that precision could mean greater effectiveness and fewer side effects.

    While scientists know something about the chemical imbalances underlying depression, it’s still unclear exactly which cells, or networks of cells, are responsible for it. In order to identify the circuits involved in such diseases, scientists must be able to turn neurons on and off. Standard methods, such as electrodes that activate neurons with jolts of electricity, are not precise enough for this task, so Deisseroth, postdoc Ed Boyden (now an assistant professor at MIT; see “Engineering the Brain”), and graduate student Feng Zhang developed a neural controller that can activate specific sets of neurons.

    They adapted a protein from a green alga to act as an “on switch” that neurons can be genetically engineered to produce (see “Artificially Firing Neurons,” TR35, September/October 2006). When the neuron is exposed to light, the protein triggers electrical activity within the cell that spreads to the next neuron in the circuit. Researchers can thus use light to activate certain neurons and look for specific responses–a twitch of a muscle, increased energy, or a wave of activity in a different part of the brain.

    Deisseroth is using this genetic light switch to study the biological basis of depression. Working with a group of rats that show symptoms similar to those seen in depressed humans, researchers in his lab have inserted the switch into neurons in different brain areas implicated in depression. They then use an optical fiber to shine light onto those cells, looking for activity patterns that alleviate the symptoms. Deisseroth says the findings should help scientists develop better antidepressants: if they know exactly which cells to target, they can look for molecules or delivery systems that affect only those cells. “Prozac goes to all the circuits in the brain, rather than just the relevant ones,” he says. “That’s part of the reason it has so many side effects.”

    In the last year, Deisseroth has sent his switch to more than 100 research labs. “Folks are applying it to all kinds of animals, including mice, worms, flies, and zebrafish,” he says. Scientists are using this and similar switches to study everything from movement to addiction to appetite. “These technologies allow us to advance from observation to active intervention and control,” says Gero Miesenböck, a neuroscientist at Yale University. By evoking sensations or movements directly, he says, “you can forge a much stronger connection between mental activity and behavior.”

    Deisseroth hopes his technology will one day become not just a research tool but a treatment in itself, used alongside therapies that electrically stimulate large areas of the brain to treat depression or Parkinson’s disease. By activating only specific neurons, a specially engineered light switch could limit those therapies’ side effects. Of course, the researchers will need to solve some problems first: they’ll need to find safe gene-therapy methods for delivering the switch to the target cells, as well as a way to shine light deep into the brain. “It’s a long way off,” says Deisseroth. “But the obstacles aren’t insurmountable.” In the meantime, neuroscientists have the use of a powerful new tool in their quest to uncover the secrets of the brain.

  • Nanohealing

    Tiny fibers will save lives by stopping bleeding and aiding recovery from brain injury, says Rutledge Ellis-Behnke.

    In the break room near his lab in MIT’s brand-new neuroscience building, research scientist Rutledge Ellis-Behnke provides impromptu narration for a video of himself performing surgery. In the video, Ellis-Behnke makes a deep cut in the liver of a rat, intentionally slicing through a main artery. As the liver pulses from the pressure of the rat’s beating heart, blood spills from the wound. Then Ellis-Behnke covers the wound with a clear liquid, and the bleeding stops almost at once. Untreated, the wound would have proved fatal, but the rat lived on.

    The liquid Ellis-Behnke used is a novel material made of nanoscale protein fragments, or peptides. Its ability to stop bleeding almost instantly could be invaluable in surgery, at accident sites, or on the battlefield. Under conditions like those inside the body, the peptides self-assemble into a fibrous mesh that to the naked eye appears to be a transparent gel. Even more remarkably, the material creates an environment that may accelerate healing of damaged brain and spinal tissue.

    Ellis-Behnke stumbled on the material’s capacity to stanch bleeding by chance, during experiments designed to help restore vision to brain-damaged hamsters. And his discovery was itself made possible by earlier serendipitous events. In the early 1990s, Shuguang Zhang, now a biomedical engineer at MIT, was working in the lab of MIT biologist Alexander Rich. Zhang had been studying a repeating DNA sequence that coded for a peptide. He and a colleague inadvertently found that under certain conditions, copies of the peptide would combine into fibers. Zhang and his colleagues began to reëngineer the peptides to exhibit specific responses to electric charges and water. They ended up with a 16-amino-acid peptide that looks like a comb, with water-loving teeth projecting from a water-repelling spine. In a salty, aqueous environment–such as that inside the body–the spines spontaneously cluster together to avoid the water, forming long, thin fibers that self-assemble into curved ribbons. The process transforms a liquid peptide solution into a clear gel.

    Originally, Ellis-Behnke intended to use the material to promote the healing of brain and spinal-cord injuries. In young animals, neurons are surrounded by materials that help them grow; Ellis-Behnke thought that the peptide gel could create a similar environment and prevent the formation of scar tissue, which obstructs the regrowth of severed neurons. “It’s like if you’re walking through a field of wheat, you can walk easily because the wheat moves out of the way,” he says. “If you’re walking through a briar patch, you get stuck.” In the hamster experiments, the researchers found that the gel allowed neurons in a vision-related tract of the brain to grow across a lesion and reëstablish connections with neurons on the other side, restoring the hamster’s sight.

    It was during these experiments that Ellis-Behnke discovered the gel’s ability to stanch bleeding. Incisions had been made in the hamsters’ brains, but when the researchers applied the new material, all residual bleeding suddenly stopped. At first, Ellis-Behnke says, “we thought that we’d actually killed the animals. But the heart was still going.” Indeed, the rodents survived for months, apparently free of negative side effects.

    The material has several advantages over current methods for stopping bleeding. It’s faster and easier than cauterization and does not damage tissue. It could protect wounds from the air and supply amino-acid building blocks to growing cells, thereby accelerating healing. Also, within a few weeks the body completely breaks the peptides down, so they need not be removed from the wound, unlike some other blood-stanching agents. The synthetic material also has a long shelf life, which could make it particularly useful in first-aid kits.

    The material’s first application will probably come in the operating room. Not only would it stop the bleeding caused by surgical incisions, but it could also form a protective layer over wounds. And since the new material is transparent, surgeons should be able to apply a layer of it and then operate through it. “When you perform surgery, you are constantly suctioning and cleaning the site to be able to see it,” says Ram Chuttani, a gastroenterologist and professor at Harvard Medical School. “But if you can seal it, you can continue to perform the surgery with much clearer vision.” The hope is that surgeons will be able to operate faster, thus reducing complications. The material may also make it possible to perform more procedures in a minimally invasive way by allowing a surgeon to quickly stop bleeding at the end of an endoscope.

    Chuttani, who was not involved with the research, cautions that the work is still “very preliminary,” with no tests yet on large animals or humans. But if such tests go well, Ellis-Behnke estimates, the material could be approved for use in humans in three to five years. “I don’t know what the impact is going to be,” he says. “But if we can stop bleeding, we can save a lot of people.” Ellis-Behnke and his colleagues are also continuing to explore the material’s nerve regeneration capabilities. They’re looking for ways to increase the rate of neuronal growth so that doctors can treat larger brain injuries, such as those that can result from stroke. But such a treatment will take at least five to ten years to reach humans, Ellis-Behnke says.

    Even without regenerating nerves, the material could save countless lives in surgery or at accident sites. And already, the material’s performance is encouraging research by demonstrating how engineering nanostructures to self-assemble in the body could profoundly improve medicine.

  • Augmented Reality

    Markus Kähäri wants to superimpose digital information on the real world.
    In Nokia’s mobile-augmented-reality prototype, a user can point a phone’s camera at a nearby building; the system calculates the building’s location and uses that information to identify it. Boxes appear on the phone’s screen, highlighting known businesses and landmarks, such as the Empire State Building. The user can click one of these boxes to download information about that location from the Web.

    Finding your way around a new city can be exasperating: juggling maps and guidebooks, trying to figure out where you are on roads with no street signs, talking with locals who give directions by referring to unfamiliar landmarks. If you’re driving, a car with a GPS navigation system can make things easier, but it still won’t help you decide, say, which restaurant suits both your palate and your budget. Engineers at the Nokia Research Center in Helsinki, Finland, hope that a project called Mobile Augmented Reality Applications will help you get where you’re going–and decide what to do once you’re there.

    Last October, a team led by Markus Kähäri unveiled a prototype of the system at the International Symposium on Mixed and Augmented Reality. The team added a GPS sensor, a compass, and accelerometers to a Nokia smart phone. Using data from these sensors, the phone can calculate the location of just about any object its camera is aimed at. Each time the phone changes location, it retrieves the names and geographical coördinates of nearby landmarks from an external database. The user can then download additional information about a chosen location from the Web–say, the names of businesses in the Empire State Building, the cost of visiting the building’s observatories, or hours and menus for its five eateries.
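
    A stripped-down version of that identification step might look like the Python below: match the phone’s GPS fix and compass heading against a small landmark database by comparing bearings. The landmark coordinates, tolerance, and matching rule are invented for illustration; Nokia’s prototype code is not public.

    ```python
    # Hypothetical sketch of sensor-based landmark identification: find the
    # database entry whose bearing from the phone best matches the compass
    # heading the camera is pointed along.
    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees."""
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        dlon = lon2 - lon1
        y = math.sin(dlon) * math.cos(lat2)
        x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360

    def identify(phone_lat, phone_lon, heading, landmarks, tolerance=5.0):
        """Return the landmark within `tolerance` degrees of the heading, if any."""
        best, best_err = None, tolerance
        for name, lat, lon in landmarks:
            err = abs((bearing_deg(phone_lat, phone_lon, lat, lon) - heading + 180) % 360 - 180)
            if err < best_err:
                best, best_err = name, err
        return best

    landmarks = [("Empire State Building", 40.7484, -73.9857),
                 ("Chrysler Building", 40.7516, -73.9755)]
    # Standing in Bryant Park, camera pointed roughly south-southwest:
    print(identify(40.7536, -73.9832, 200.0, landmarks))
    ```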

    The Nokia project builds on more than a decade of academic research into mobile augmented reality. Steven Feiner, the director of Columbia University’s Computer Graphics and User Interfaces Laboratory, undertook some of the earliest research in the field and finds the Nokia project heartening. “The big missing link when I started was a small computer,” he says. “Those small computers are now cell phones.”

    Despite the availability and fairly low cost of the sensors the Nokia team used, some engineers believe that they introduce too much complexity for a commercial application. “In my opinion, this is very exotic hardware to provide,” says Valentin Lefevre, chief technology officer and cofounder of Total Immersion, an augmented-reality company in Suresnes, France. “That’s why we think picture analysis is the solution.” Relying on software alone, Total Immersion’s system begins with a single still image of whatever object the camera is aimed at, plus a rough digital model of that object; image-recognition algorithms then determine what data should be superimposed on the image. The company is already marketing a mobile version of its system to cell-phone operators in Asia and Europe and expects the system’s first applications to be in gaming and advertising.

    Nokia researchers have begun working on real-time image-recognition algorithms as well; they hope the algorithms will eliminate the need for location sensors and improve their system’s accuracy and reliability. “Methods that don’t rely on those components can be more robust,” says Kari Pulli, a research fellow at the Nokia Research Center in Palo Alto, CA.

    All parties agree, though, that mobile augmented reality is nearly ready for the market. “For mobile-phone applications, the technology is here,” says Feiner. One challenge is convincing carriers such as Sprint or Verizon that customers would pay for augmented-reality services. “If some big operator in the U.S. would launch this, it could fly today,” Pulli says.

  • Invisible Revolution

    Artificially structured metamaterials could transform telecommunications, data storage, and even solar energy, says David R. Smith.
    David R. Smith led the team that built the world’s first “invisibility shield” (above). The shield consists of concentric circles of fiberglass circuit boards, printed with C-shaped split rings. Microwaves of a particular frequency behave as if objects inside the cylinder aren’t there--but everything remains in plain view.

    The announcement last November of an “invisibility shield,” created by David R. Smith of Duke University and colleagues, inevitably set the media buzzing with talk of H. G. Wells’s invisible man and Star Trek’s Romulans. Using rings of printed circuit boards, the researchers managed to divert microwaves around a kind of “hole in space”; even when a metal cylinder was placed at the center of the hole, the microwaves behaved as though nothing were there.

    It was arguably the most dramatic demonstration so far of what can be achieved with metamaterials, composites made up of precisely arranged patterns of two or more distinct materials. These structures can manipulate electromagnetic radiation, including light, in ways not readily observed in nature. For example, photonic crystals–arrays of identical microscopic blocks separated by voids–can reflect or even inhibit the propagation of certain wavelengths of light; assemblies of small wire circuits, like those Smith used in his invisibility shield, can bend light in strange ways.

    But can we really use such materials to make objects seem to vanish? Philip Ball spoke with Smith, who explains why metamaterials are literally changing the way we view the world.

    Technology Review: How do metamaterials let you make things invisible?

    David R. Smith: It’s a somewhat complicated procedure but can be very simple to visualize. Picture a fabric formed from interwoven threads, in which light is constrained to travel along the threads. Well, if you now take a pin and push it through the fabric, the threads are distorted, making a hole in the fabric. Light, forced to follow the threads, is routed around the hole. John Pendry at Imperial College in London calculated what would be required of a metamaterial that would accomplish exactly this. The waves are transmitted around the hole and combined on the other side. So you can put an object in the hole, and the waves won’t “see” it–it’s as if they’d crossed a region of empty space.
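
    For the cylindrical shield, the prescription Pendry worked out (published with Schurig and Smith in Science in 2006) can be summarized as a coordinate map: every point of the disk r < R₂ is pushed outward into the shell between radii R₁ and R₂, opening a central hole of radius R₁ that the waves never enter.

    ```latex
    % Cylindrical cloak map (after Pendry, Schurig, and Smith, 2006):
    % the disk r < R_2 is compressed into the annulus R_1 < r' < R_2.
    r' = R_1 + r\,\frac{R_2 - R_1}{R_2}, \qquad \theta' = \theta, \qquad z' = z,
    \qquad 0 \le r \le R_2.
    ```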

    TR: And then you made it?

    DRS: Yes–once we had the prescription, we set about using the techniques we’d developed over the past few years to make the material. We did the experiment at microwave frequencies because the techniques are very well established there and we knew we would be able to produce a demonstration quickly. We printed millimeter-scale metal wires and split rings, shaped like the letter C, onto fiberglass circuit boards. The shield consisted of about 10 concentric cylinders made up of these split-ring building blocks, each with a slightly different pattern.

    TR: So an object inside the shield is actually invisible?

    DRS: More or less, but when we talk about invisibility in these structures, it’s not about making things vanish before our eyes–at least, not yet. We can hide them from microwaves, but the shield is plain enough to see. This isn’t like stealth shielding on military aircraft, where you just try to eliminate reflection–the microwaves seem literally to pass through the object inside the shield. If this could work with visible light, then you really would see the object vanish.

    TR: Could you hide a large object, like an airplane, from radar by covering its surface with the right metamaterial?

    DRS: I’m not sure we can do that. If you look at stealth technology today, it’s generally interested in hiding objects from detection over a large radar bandwidth. But the invisibility bandwidth is inherently limited in our approach. The same is true for hiding objects from all wavelengths of visible light–that would certainly be a stretch.

    TR: How else might we use metamaterials?

    DRS: Well, this is really an entirely new approach to optics. There’s a huge amount of freedom for design, and as is usual with new technology, the best uses probably haven’t been thought of yet.

    One of the most provocative and controversial predictions came from John Pendry, who predicted that a material with a negative refractive index could focus light more finely than any conventional lens material. The refractive index measures how much light bends when it passes through a material–that’s what makes a pole dipped in water look as though it bends. A negative refractive index means the material bends light the “wrong” way. So far, we and others have been working not with visible light but with microwaves, which are also electromagnetic radiation, but with a longer wavelength. This means the components of the metamaterial must be correspondingly bigger, and so they’re much easier to make. Pendry’s suggestion was confirmed in 2005 by a group from the University of California, Berkeley, who made a negative-refractive-index metamaterial for microwaves.
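
    Snell’s law makes the “wrong” way concrete: with a negative index on one side of an interface, the refraction angle changes sign, so the transmitted ray emerges on the same side of the surface normal as the incoming ray.

    ```latex
    % Snell's law; a negative n_2 flips the sign of the refraction angle.
    n_1 \sin\theta_1 = n_2 \sin\theta_2
    \quad\Longrightarrow\quad
    \theta_2 < 0 \;\text{ when } n_2 < 0.
    ```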

    Making a negative-index material that works for visible light is more difficult, because the building blocks have to be much smaller–no bigger than 10 to 20 nanometers. That’s now very possible to achieve, however, and several groups are working on it. If it can be done, these metamaterials could be used to increase the amount of information stored on CDs and DVDs or to speed up transmission and reduce power consumption in fiber-optic telecommunications.

    We can also concentrate electromagnetic fields–the exact opposite of what the cloak does–which might be valuable in energy-harvesting applications. With a suitable metamaterial, we could concentrate light coming from any direction–you wouldn’t need direct sunlight. Right now we’re trying to design structures like this. If we could achieve that for visible light, it could make solar power more efficient.

  • Digital Imaging, Reimagined

    Richard Baraniuk and Kevin Kelly believe compressive sensing could help devices such as cameras and medical scanners capture images more efficiently.

    Richard Baraniuk and Kevin Kelly have a new vision for digital imaging: they believe an overhaul of both hardware and software could make cameras smaller and faster and let them take incredibly high-resolution pictures.

    Today’s digital cameras closely mimic film cameras, which makes them grossly inefficient. When a standard four-megapixel digital camera snaps a shot, each of its four million image sensors characterizes the light striking it with a single number; together, the numbers describe a picture. Then the camera’s onboard computer compresses the picture, throwing out most of those numbers. This process needlessly chews through the camera’s battery.

    Baraniuk and Kelly, both professors of electrical and computer engineering at Rice University, have developed a camera that doesn’t need to compress images. Instead, it uses a single image sensor to collect just enough information to let a novel algorithm reconstruct a high-resolution image.

    At the heart of this camera is a new technique called compressive sensing. A camera using the technique needs only a small percentage of the data that today’s digital cameras must collect in order to build a comparable picture. Baraniuk and Kelly’s algorithm turns visual data into a handful of numbers that it randomly inserts into a giant grid. There are just enough numbers to enable the algorithm to fill in the blanks, as we do when we solve a Sudoku puzzle. When the computer solves this puzzle, it has effectively re-created the complete picture from incomplete information.
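
    The Sudoku analogy can be made concrete with a toy reconstruction. The Python below recovers a sparse signal from far fewer random measurements than unknowns, using orthogonal matching pursuit, one standard recovery algorithm from the compressive-sensing literature; it is a stand-in for, not a copy of, the Rice group’s method.

    ```python
    # Toy compressive-sensing reconstruction via orthogonal matching pursuit.
    # A k-sparse signal of length n is recovered from m << n random measurements.
    import numpy as np

    def omp(Phi, y, k):
        """Greedy sparse recovery: pick columns most correlated with the residual."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(Phi.T @ residual)))
            if j not in support:
                support.append(j)
            # least-squares fit on the columns chosen so far
            coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coeffs
        x_hat = np.zeros(Phi.shape[1])
        x_hat[support] = coeffs
        return x_hat

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 5                       # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
    y = Phi @ x                                 # only m "sensor" readings taken
    print(np.allclose(omp(Phi, y, k), x, atol=1e-6))  # -> True
    ```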

    Compressive sensing began as a mathematical theory whose first proofs were published in 2004; the Rice group has produced an advanced demonstration in a relatively short time, says Dave Brady of Duke University. “They’ve really pushed the applications of the theory,” he says.

    Kelly suspects that we could see the first practical applications of compressive sensing within two years, in MRI systems that capture images up to 10 times as quickly as today’s scanners do. In five to ten years, he says, the technology could find its way into consumer products, allowing tiny mobile-phone cameras to produce high-quality, poster-size images. As our world becomes increasingly digital, compressive sensing is set to improve virtually any imaging system, providing an efficient and elegant way to get the picture.

  • Personalized Medical Monitors

    John Guttag says using computers to automate some diagnostics could make medicine more personal.
    John Guttag believes that computers can improve diagnostic tests and make medicine more personal by automating the interpretation of complex medical data such as the brain wave tracings shown above, or electrocardiogram readings from heart patients.

    In late spring 2000, John Guttag came home from surgery. It had been a simple procedure to repair a torn ligament in his knee, and he had no plans to revisit the hospital anytime soon. But that same day his son, then a junior in high school, complained of chest pains. Guttag’s wife promptly got back in the car and returned to the hospital, where their son was diagnosed with a collapsed lung and immediately admitted. Over the next year, Guttag and his wife spent weeks at a time in and out of the hospital with their son, who underwent multiple surgeries and treatments for a series of recurrences.

    During that time, Guttag witnessed what became a familiar scenario. “The doctors would come in, take a stethoscope, listen to his lungs, and make a pronouncement like ‘He’s 10 percent better than yesterday,’ and I wanted to say, ‘I don’t believe that,’” he says. “You can’t possibly sit there and listen with your ears and tell me you can hear a 10 percent difference. Surely there’s a way to do this more precisely.”

    It was an observation that any concerned parent might make, but for Guttag, who was then head of MIT’s Department of Electrical Engineering and Computer Science, it was a personal challenge. “Health care just seemed like an area that was tremendously in need of our expertise,” he says.

    The ripest challenge, Guttag says, is analyzing the huge amounts of data generated by medical tests. Today’s physicians are bombarded with physiological information–temperature and blood pressure readings, MRI scans, electrocardiogram (EKG) readouts, and x-rays, to name a few. Wading through a single patient’s record to determine signs of, say, a heart attack or stroke can be difficult and time consuming. Guttag believes computers can help doctors efficiently interpret these ever-growing masses of data. By quickly perceiving patterns that might otherwise be buried, he says, software may provide the key to more precise and personalized medicine. “People aren’t good at spotting trends unless they’re very obvious,” says Guttag. “It dawned on me that doctors were doing things that a computer could do better.”

    For instance, making sense of the body’s electrical signals seemed, to Guttag, to be a natural fit for computer science. Some of his earlier work on computer networks caught the attention of physicians at Children’s Hospital Boston. The doctors and the engineer set out to improve the detection of epileptic seizures; ultimately, Guttag and graduate student Ali Shoeb designed personalized seizure detectors. In 2004, the team examined recordings of the brain waves of more than 30 children with epilepsy, before, during, and after seizures. They used the data to train a “classification algorithm” to distinguish between seizure and nonseizure waveforms. With the help of the algorithm, the researchers identified seizure patterns specific to each patient.
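
    In outline, such a patient-specific detector can be trained like the sketch below, which fits a classifier to labeled windows of brain-wave data. The synthetic signals, band-power features, and choice of a support vector machine are illustrative stand-ins, not the MIT group’s actual pipeline.

    ```python
    # Toy patient-specific seizure/non-seizure classifier. Signals, features,
    # and the SVM are illustrative stand-ins, not the researchers' own methods.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    FS = 256  # samples per second

    def eeg_window(seizure):
        """Fake 2-second EEG window; 'seizures' get extra rhythmic power."""
        t = np.arange(2 * FS) / FS
        x = rng.normal(0, 1.0, t.size)
        if seizure:
            x += 3.0 * np.sin(2 * np.pi * 5 * t)  # strong 5 Hz component
        return x

    def features(x):
        """Signal power in a few coarse frequency bands."""
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(x.size, 1 / FS)
        bands = [(0, 4), (4, 8), (8, 13), (13, 30)]
        return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

    X = [features(eeg_window(s)) for s in [0] * 50 + [1] * 50]
    y = [0] * 50 + [1] * 50
    clf = SVC().fit(X, y)                            # train on one patient's data
    print(clf.predict([features(eeg_window(1))]))    # -> [1]: seizure detected
    ```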

    The team is now working on a way to make that type of information useful to people with epilepsy. Today, many patients can control their seizures with an implant that stimulates the vagus nerve. The implant typically works in one of two ways: either it turns on every few minutes, regardless of a patient’s brain activity, or patients sweep a magnet over it, activating it when they sense a seizure coming on. Both methods have their drawbacks, so Guttag is designing a noninvasive, software-driven sensor programmed to measure the wearer’s brain waves and determine what patterns–specific to him or her–signify the onset of a seizure. Once those patterns are detected, a device can automatically activate an implant, stopping the seizure in its tracks.

    Guttag plans to test the sensor, essentially a bathing cap of electrodes that fits over the scalp, on a handful of patients at Beth Israel Deaconess Medical Center this spring. Down the line, such a sensor could also help people without implants, simply warning them to sit down, pull over, or get to a safe place before a seizure begins. “Just a warning could be enormously life changing,” says Guttag. “It’s all the collateral damage that people really fear.”

    Now he’s turned his attention to patterns of the heart. Like the brain, cardiac activity is governed by electrical signals, so moving into cardiology is a natural transition for Guttag.

    He began by looking for areas where large-scale cardiac-data analysis was needed. Today, many patients who have suffered heart attacks go home with Holter monitors that record heart activity. After a day or so, a cardiologist reviews the monitor’s readings for worrisome signs. But it can be easy to miss an abnormal pattern in thousands of minutes of dense waveforms.

    That’s where Guttag hopes computers can step in. Working with Collin Stultz, a cardiologist and assistant professor of electrical engineering and computer science at MIT, and graduate student Zeeshan Syed, Guttag is devising algorithms to analyze EKG readings for statistically meaningful patterns. In the coming months, the team will compare EKG records from hundreds of heart attack patients, some of whose attacks were fatal. The immediate goal is to pick out key similarities and differences between those who survived and those who didn’t. There are known “danger patterns” that physicians can spot on an EKG readout, but the Guttag group is leaving it up to the computer to find significant patterns, rather than telling it what to look for. If the computer’s search isn’t influenced by existing medical knowledge, Guttag reasons, it may uncover unexpected relationships.
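
    One hedged illustration of letting the computer hunt for patterns: cluster heartbeat waveforms without telling the algorithm what “danger” looks like, then check which clusters are over-represented among the patients who died. The synthetic beats and k-means clustering below are stand-ins; the group’s actual methods are not described in this article.

    ```python
    # Illustrative unsupervised search for EKG patterns: cluster beat shapes
    # and compare cluster frequencies across outcome groups. Synthetic data.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 100)

    def beat(widened):
        """Fake single heartbeat; 'widened' beats have a broader peak."""
        width = 0.12 if widened else 0.05
        return np.exp(-((t - 0.5) ** 2) / (2 * width ** 2)) + rng.normal(0, 0.05, t.size)

    # Survivors: mostly normal beats. Fatal cases: many widened beats.
    survivor_beats = np.array([beat(rng.random() < 0.1) for _ in range(200)])
    fatal_beats = np.array([beat(rng.random() < 0.6) for _ in range(200)])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(
        np.vstack([survivor_beats, fatal_beats]))
    # A cluster far more common among fatal cases is a candidate
    # "significant pattern" for clinicians to inspect.
    print(np.bincount(km.labels_[:200], minlength=2),
          np.bincount(km.labels_[200:], minlength=2))
    ```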

    Joseph Kannry, director of the Center for Medical Informatics at the Mount Sinai School of Medicine, calls Guttag’s work a solid step toward developing more accurate automated medical readings. “It’s promising. The challenge is going to be in convincing a clinician to use it,” says Kannry.

    Still, Guttag feels he is well on his way toward integrating computing into medical diagnostics. “People have very different reactions when you tell them computers are going to make decisions for you,” he says. “But we’ve gotten to the point where computers fly our airplanes for us, so there’s every reason to be optimistic.”

  • A New Focus for Light

    Kenneth Crozier and Federico Capasso have created light-focusing optical antennas that could lead to DVDs that hold hundreds of movies.

    Researchers trying to make high-capacity DVDs, as well as more-powerful computer chips and higher-resolution optical microscopes, have for years run up against the “diffraction limit.” The laws of physics dictate that the lenses used to direct light beams cannot focus them onto a spot whose diameter is less than half the light’s wavelength. Physicists have been able to get around the diffraction limit in the lab–but the systems they’ve devised have been too fragile and complicated for practical use. Now Harvard University electrical engineers led by Kenneth Crozier and Federico Capasso have discovered a simple process that could bring the benefits of tightly focused light beams to commercial applications. By adding nanoscale “optical antennas” to a commercially available laser, Crozier and Capasso have focused infrared light onto a spot just 40 nanometers wide–one-twentieth the light’s wavelength. Such optical antennas could one day make possible DVD-like discs that store 3.6 terabytes of data–the equivalent of more than 750 of today’s 4.7-gigabyte recordable DVDs.
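
    The figures in that claim follow from simple arithmetic, checked below.

    ```python
    # Quick check of the numbers quoted above.
    spot_nm = 40
    print(spot_nm * 20)       # -> 800 nm: the focused laser is near-infrared
    print(3.6e12 / 4.7e9)     # -> ~766 standard DVDs per 3.6-terabyte disc
    ```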

    Crozier and Capasso build their device by first depositing an insulating layer onto the light-emitting edge of the laser. Then they add a layer of gold. They carve away most of the gold, leaving two rectangles of only 130 by 50 nanometers, with a 30-nanometer gap between them. These form an antenna. When light from the laser strikes the rectangles, the antenna has what Capasso calls a “lightning-rod effect”: an intense electrical field forms in the gap, concentrating the laser’s light onto a spot the same width as the gap.

    “The antenna doesn’t impose design constraints on the laser,” Capasso says, because it can be added to off-the-shelf semiconductor lasers, commonly used in CD drives. The team has already demonstrated the antennas with several types of lasers, each producing a different wavelength of light. The researchers­ have discussed the technology with storage-device companies Seagate and Hitachi Global Storage Technologies.

    Another application could be in photolithography, says Gordon Kino, professor emeritus of electrical engineering at Stanford University. This is the method typically used to make silicon chips, but the lasers that carve out ever-smaller features on silicon are also constrained by the diffraction limit. Electron-beam lithography, the technique that currently allows for the smallest chip features, requires a large machine that costs millions of dollars and is too slow to be used in mass production. “This is a hell of a lot simpler,” says Kino of Crozier and Capasso’s technique, which relies on a laser that costs about $50.

    But before the antennas can be used for lithography, the engineers will need to make them even smaller: the size of the antennas must be tailored to the wavelength of the light they focus. Crozier and Capasso’s experiments have used infrared lasers, and photolithography relies on shorter-wavelength ultraviolet light. In order to inscribe circuitry on microchips, the researchers must create antennas just 50 nanometers long.

    Capasso and Crozier’s optical antennas could have far-reaching and unpredictable implications, from superdense optical storage to superhigh-resolution optical microscopes. Enabling engineers to simply and cheaply break the diffraction limit has made the many applications that rely on light shine that much brighter.

  • Single-Cell Analysis

    Norman Dovichi believes that detecting minute differences between individual cells could improve medical tests and treatments.
    Analyzing individual cells allows researchers to distinguish between a uniform population of cells (above left) and a group of cells with members having, say, different protein content (above right). The ability to recognize such differences could be essential to understanding diseases such as cancer or diabetes.

    We all know that focusing on the characteristics of a group can obscure the differences between the individuals in it. Yet when it comes to biological cells, scientists typically derive information about their behavior, status, and health from the collective activity of thousands or millions of them. A more precise understanding of differences between individual cells could lead to better treatments for cancer and diabetes, just for starters.

    The past few decades have seen the advent of methods that allow astonishingly detailed views of single cells–each of which can produce thousands of different proteins, lipids, hormones, and metabolites. But most of those methods have a stark limitation: they rely on “affinity reagents,” such as antibodies that attach to specific proteins. As a result, researchers can use them to study only what’s known to exist. “The unexpected is invisible,” says Norman Dovichi, an analytical chemist at the University of Washington, Seattle. And most every cell is stuffed with mysterious components. So Dovichi has helped pioneer ultrasensitive techniques to isolate cells and reveal molecules inside them that no one even knew were there.

    Dovichi’s lab–one of a rapidly growing number of groups that focus on single cells–has had particular success at identifying differences in the amounts of dozens of distinct proteins produced by individual cancer cells. “Ten years ago, I would have thought it would have been almost impossible to do that,” says Robert Kennedy, an analytical chemist at the University of Michigan-Ann Arbor, who analyzes insulin secretion from single cells to uncover the causes of the most common type of diabetes.

    And Dovichi has a provocative hypothesis: he thinks that as a cancer progresses, cells of the same type diverge more and more widely in their protein content. If this proves true, then vast dissimilarities between cells would indicate a disease that is more likely to spread. Dovichi is working with clinicians to develop better prognostics for esophageal and breast cancer based on this idea. Ultimately, such tests could let doctors quickly decide on proper treatment, a key to defeating many cancers.
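
    Dovichi’s hypothesis suggests a natural statistic: score a sample by how widely its cells diverge in protein content. The sketch below computes one such score, the average coefficient of variation across proteins, on synthetic data; any real prognostic test would of course require clinical validation.

    ```python
    # Toy heterogeneity score for Dovichi's hypothesis: the more widely cells
    # of a sample diverge in protein content, the higher the score. Synthetic
    # numbers; the scoring rule is purely illustrative.
    import numpy as np

    rng = np.random.default_rng(3)

    def heterogeneity(sample):
        """Mean coefficient of variation across proteins (rows = cells)."""
        return float(np.mean(sample.std(axis=0) / sample.mean(axis=0)))

    # 50 cells x 30 proteins: a uniform population vs. a divergent one
    early = rng.normal(100, 5, (50, 30))
    late = rng.normal(100, 40, (50, 30)).clip(min=1)
    print(heterogeneity(early), heterogeneity(late))  # late scores far higher
    ```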

    A yellow, diamond-shaped sign in Dovichi’s office warns that a “laser jock” is present. Dovichi helped develop the laser-based DNA sequencers that became the foundation of the Human Genome Project, and his new analyzers rely on much of the same technology to probe single cells for components that are much harder to detect than DNA: proteins, lipids, and carbohydrates.

    For proteins, the machines mix reagents with a single cell inside an ultrathin capillary tube. A chemical reaction causes lysine, an amino acid recurring frequently in proteins, to fluoresce. The proteins, prodded by an electric charge, migrate out of the tube at different rates, depending on their size. Finally, a laser detector records the intensity of the fluorescence. This leads to a graphic that displays the various amounts of the different-sized proteins inside the cell.

    Although the technique reveals differences between cells, it does not identify the specific proteins. Still, the analyzer has an unprecedented sensitivity and makes visible potentially critical differences. “For our cancer prognosis projects, we don’t need to know the identity of the components,” Dovichi says.

    Dovichi is both excited about the possibilities of single-cell biology and sober about its limitations. Right now, he says, analyses take too much time and effort. “This is way early-stage,” says Dovichi. “But hopefully, in 10, 20, or 30 years, people will look back and say those were interesting baby steps.”