
10 Breakthrough Technologies 2005


Of the numerous technologies now in gestation at companies and universities, we have chosen 10 that we think will make particularly big splashes. They’re raw, but they’ll transform the Internet, computing, medicine, energy, nanotechnology, and more. They range from bacterial factories to silicon photonics to quantum wires, and any one of them could change your world.
May 1, 2005



10 Breakthrough Technologies

  • Airborne Networks

    AVIATION An Internet in the sky could let planes fly safely without ground controllers.


    The technology that underpins the air traffic control system hasn’t changed much in a half-century. Planes still depend on elaborate ground-based radar systems, plus thousands of people who watch blips on screens and issue verbal instructions, for takeoffs, landings, and course changes. The system is expensive, hard to scale up, and prone to delays when storms strike.

    An entirely different approach is possible. Each plane could continually transmit its identity, precise location, speed, and heading to other planes in the sky via an airborne network. Software would then take over, coördinating the system by issuing instructions to pilots on how to stay separated, optimize routes, avoid bad weather, and execute precise landings in poor visibility.
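
    To make the idea concrete, here is a minimal sketch, in Python, of the kind of position broadcast each plane might send and the sort of separation check the coordinating software could run. The message fields, distances, and thresholds are illustrative assumptions, not a real avionics protocol; once every plane broadcasts its state, conflict detection reduces to simple geometry.

        import math
        from dataclasses import dataclass

        # Illustrative only: fields loosely modeled on the kind of state each
        # aircraft would broadcast; real systems differ in many details.
        @dataclass
        class Broadcast:
            ident: str
            lat: float      # degrees
            lon: float      # degrees
            alt_ft: float   # feet
            heading: float  # degrees

        def too_close(a: Broadcast, b: Broadcast,
                      min_horiz_nm: float = 5.0, min_vert_ft: float = 1000.0) -> bool:
            """Crude separation check using flat-earth distance in nautical miles."""
            dlat = (a.lat - b.lat) * 60.0  # one degree of latitude is about 60 nm
            dlon = (a.lon - b.lon) * 60.0 * math.cos(math.radians(a.lat))
            return math.hypot(dlat, dlon) < min_horiz_nm and abs(a.alt_ft - b.alt_ft) < min_vert_ft

        planes = [Broadcast("N123", 36.57, -79.34, 4000, 90),
                  Broadcast("N456", 36.60, -79.30, 4300, 270)]
        for i, a in enumerate(planes):
            for b in planes[i + 1:]:
                if too_close(a, b):
                    print(f"advise {a.ident} and {b.ident}: adjust altitude or heading")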

    In the near term, such technology could save travelers time and might reduce fuel consumption. Long term, it could revolutionize air travel by enabling more planes to fill the sky without the addition of infrastructure and staff. Vastly greater numbers of small planes could zip in and out of thousands of small airfields (there are 5,400 in the U.S. alone), even those with no radar at all. “The biggest holdback to the number of airplanes that can be in the sky is that air traffic controllers are separating aircraft by hand,” says Sally Johnson, an aerospace engineer at NASA’s Langley Research Center. “Until you get away from that paradigm, we are at the limits of what you can do.”

    As a practical matter, airborne networks that rely on software and cockpit computers rather than humans to issue instructions are still decades away. But in June, NASA plans to demonstrate a prototype of such an automated system at a small airport in Danville, VA. A computer at a ground station near the airport will receive data from multiple planes and give the pilots their initial holding fixes, then tell them what planes they’re following and where to go if they miss their approaches. In the planes, cockpit displays will show pilots where the other planes are, and a computer will give them instructions that guide their trajectories.

    Future systems might go further: planes would communicate not just via a computer on the ground (or via satellite) but directly with each other, relaying information from other planes in an Internet-like fashion. This radical advance in airborne networking could come from research funded by the Pentagon – the midwife of today’s terrestrial Internet. The vision is that not only navigational data but information about targets, real-time intelligence, and bombing results would flow freely among manned and unmanned military planes, to vehicles on the ground, and up and down chains of command. “There is a terrestrial backbone of hardwired connections, and there will be a space backbone between satellites. What we are talking about adding, for aircraft, is an equivalent third backbone in the sky,” says Dave Kenyon, division chief of the Technical Architectures Division at the U.S. Air Force Electronic Systems Center in Bedford, MA.

    The U.S. Air Force is beginning to define the architecture of an airborne network and hopes to begin actively developing and testing the network itself between 2008 and 2012, Kenyon says. Taken together, the military research and the related air traffic control research into airborne communications networks could change how we travel in the decades to come.

  • Quantum Wires

    POWER TRANSMISSION Wires spun from carbon nanotubes could carry electricity farther and more efficiently.

    Richard Smalley toys with a clear plastic tube that holds a thin, dark gray fiber. About 15 centimeters long, the fiber comprises billions of carbon nanotubes, and according to the Rice University chemist, it represents the first step toward a new type of wire that could transform the electrical power grid.

    Smalley’s lab has embarked on a four-year project to create a prototype of a nanotube-based “quantum wire.” Cables made from quantum wires should conduct much better than copper. The wires’ lighter weight and greater strength would also allow existing towers to carry fatter cables with a capacity ten times that of the heavy and inefficient steel-reinforced aluminum cables used in today’s aging power grid.

    The goal is to make a wire with so little electrical resistance that it does not dissipate electricity as heat. Smalley says quantum wires could perform at least as well as existing superconductors – without the need for expensive cooling equipment. The reason: on the nanometer scale, the weird properties of quantum physics take over, and a wire can carry current without resistance. But until a couple of years ago, no one knew whether this amazing property would hold up when nanotubes were assembled into a macroscopic system. Then Jianping Lu, a physicist at the University of North Carolina at Chapel Hill, calculated that electrons could travel down a wire of perfectly aligned, overlapping carbon nanotubes with almost no loss of energy.

    Smalley’s group has already produced 100-meter-long fibers consisting of well-aligned nanotubes. But the fibers are mixtures of 150 different types of nanotubes, which limits their conductivity. The best wire would consist of just one kind of nanotube – ideally the so-called 5,5-armchair nanotube, named for the arrangement of its carbon atoms. Existing production techniques generate multiple types of nanotubes, indiscriminately. But Smalley believes that adding tiny bits of a single carbon nanotube at the beginning of the process could catalyze the production of huge numbers of identical nanotubes – in essence, “cloning” the original tube.

  • Silicon Photonics

    OPTOELECTRONICS Making the material of computer chips emit light could speed data flow.

    The Internet lives on beams of light. One hair-thin glass fiber can carry as much data as thousands of copper wires. But inside your computer, copper still rules. The advantages of light haven’t translated from long-distance connections on the Internet to the short jump between computer chips, in part because the lasers used in optical communications are made from exotic semiconductors incompatible with the standard processes for making silicon computer chips. As computers get faster and faster, they’re nearing the physical limit of copper’s ability to carry more information, and they’ll need something like the fiber-optic network in order to keep improving at the rate we’ve come to expect.

    Getting silicon to emit light could be the solution. A light signal’s frequency is much higher than an electrical signal’s, so it can carry thousands of times as much information. Light also overcomes another problem with electrical signals; as transistors get closer together, the electrical signals passing through them start to interfere with each other, like radio stations broadcasting at the same frequency. But turning silicon into a light emitter has proved an extraordinarily difficult challenge. The problem is rooted in an energy-level mismatch between silicon’s electrons and its positively charged “holes” (electron vacancies in its crystal structure): when an electron meets a hole, it’s more likely to release its excess energy as vibration than as light.

    But last fall, a team at the University of California, Los Angeles, became the first to make a laser out of silicon. In February, Intel scientists upped the ante, reporting a silicon laser that put out a continuous instead of a pulsed beam, a necessity for data communications. “Once you identify the right piece of physics, everything falls into place,” says UCLA electrical-engineering professor Bahram Jalali, who made the first silicon laser.

    The right piece of physics is the Raman effect. Some photons of light that pass through a material pick up energy from the natural vibration of its atoms and change to another frequency. Jalali fires light from a nonsilicon laser into silicon. Because of the Raman effect, the photons emerge as a laser beam at a different frequency. This Raman laser is “a fundamental scientific breakthrough,” says Mario Paniccia, director of Intel’s Photonics Technology Lab, which is working to create the devices needed for optical communications in silicon. In addition to building a laser, he and his colleagues created a silicon modulator, which allows them to encode data onto a light beam by making it stronger or weaker. Paniccia’s group is working to more than double the speed at which it can modulate a beam. A multibillion-dollar infrastructure is already in place for making silicon chips, so Intel believes silicon lasers will be a cost-effective way to raise the computing speed limit.
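
    The modulator’s job – encoding bits by making the beam stronger or weaker – amounts to simple on-off keying. The Python sketch below illustrates the principle only; the intensity values and threshold are made up, and a real modulator switches at gigahertz rates.

        # On-off keying: a bright interval encodes a 1, a dim interval a 0.
        def encode(bits, high=1.0, low=0.1):
            """Map a bit string to per-interval optical intensities."""
            return [high if b == "1" else low for b in bits]

        def decode(intensities, threshold=0.5):
            return "".join("1" if level > threshold else "0" for level in intensities)

        signal = encode("1011001")
        assert decode(signal) == "1011001"
        print(signal)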

    Photonics-based interconnects between chips should start to appear in about five years, researchers say. The ultimate goal is to enable light-wave communication between components on the same chip, which is several years further out. Philippe Fauchet, professor of optics at the University of Rochester, believes on-chip optical communications will require a silicon laser powered by electricity, which would be cheaper and less complicated than one that depends on an external laser. If such a laser can be built, it will mean that everything from supercomputers on opposite sides of the globe down to the tiniest transistors can talk to each other at the speed of light.

  • Metabolomics

    MEDICINE A new diagnostic tool could mean spotting diseases earlier and more easily.

    In their quest to develop more-accurate medical diagnostic tests, researchers are turning to a new field called metabolomics – the analysis of the thousands of small molecules such as sugars and fats that are the products of metabolism. If metabolomic information can be translated into diagnostic tests, it could provide earlier, faster, and more accurate diagnoses for many diseases.

    Doctors have been measuring a few metabolites for decades to tell what’s wrong with patients; glucose for diabetes is a familiar example. Metabolomics researchers, however, sort through hundreds of molecules to tease out a dozen or so that can serve as the signature of a particular disease. “We’re hoping that many diseases will have metabolic fingerprints that we can measure,” says Maren Laughlin, codirector of a new National Institutes of Health (NIH) metabolomics initiative. Initially, metabolic researchers are hunting for the signatures of conditions such as autism and Huntington’s disease.

    Metabolomics is, in some ways, a natural offshoot of recent advances in genomics and proteomics, which have allowed researchers to begin to identify many of the genes and proteins involved in diseases. Now researchers are realizing that they need to study metabolites in the same systematic fashion to get a complete picture of the body’s processes. And new software and increasingly powerful computers are helping them do it.

    A few small companies aim to have their metabolite-based diagnostic tests on the market within several years. Metabolon of Research Triangle Park, NC, for example, is working with Massachusetts General Hospital to look for metabolic markers for amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease, for which there’s no definitive blood test. To determine ALS’s biochemical profile, the researchers analyzed more than 1,000 molecules in patient blood samples. Using new software to sift through the mountains of data, they found 13 chemicals that showed up consistently at high levels in ALS patients. If larger human trials confirm this 13-chemical profile to be an accurate ALS indicator, it could form the basis of a quick and easy blood test for the deadly disease. Another company, Phenomenome Discoveries of Saskatoon, Saskatchewan, is developing metabolite-based diagnostics for Alzheimer’s disease and bipolar disorder.
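
    The kind of sifting such software performs can be sketched in a few lines of Python: compare each metabolite’s average level in patient samples against controls and keep those that are consistently elevated. The data, metabolite names, and fold-change cutoff below are invented for illustration; real analyses use far more samples and much more careful statistics.

        from statistics import mean

        # Toy data: metabolite -> (levels in patients, levels in controls).
        samples = {
            "metabolite_A": ([8.1, 7.9, 8.4], [3.0, 3.2, 2.9]),
            "metabolite_B": ([4.0, 4.2, 3.9], [4.1, 4.0, 4.3]),
        }

        def signature(data, fold_change=2.0):
            """Keep metabolites whose mean level in patients is well above controls."""
            return [name for name, (patients, controls) in data.items()
                    if mean(patients) >= fold_change * mean(controls)]

        print(signature(samples))   # -> ['metabolite_A']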

    There are drawbacks to using metabolites as disease markers. Their concentrations tend to fluctuate, since they’re heavily influenced by diet; doctors will therefore need to make sure samples are taken from patients under the proper conditions. But that’s true of many existing diagnostic tests, says Arthur Castle, the other codirector of the NIH metabolomics initiative. Metabolites may also prove not to be the best markers for every disease; in some cases, analysis of proteins may give a more reliable diagnosis. But metabolomics will give researchers a more comprehensive look at the complex changes under way in hundreds of molecules as a disease begins to develop – which can’t help but add to our store of medical knowledge.

  • Magnetic-Resonance Force Microscopy

    IMAGING The promise is a 3-D view of the molecular world.

    In nanotechnology and molecular biology, researchers are often severely limited by the inability to observe atoms and molecules in three dimensions. Proteins, for instance, fold into complex patterns that are largely invisible to the biologists trying to work out the biomolecules’ functions.

    So researchers are working to develop a tool that could provide a 3-D view of the nanoworld. The technology – called magnetic-resonance force microscopy (MRFM) – is a hybrid of magnetic-resonance imaging (MRI) and atomic force microscopy (AFM), which is widely used in nanotech. Physicists at the IBM Almaden Research Center in San Jose, CA, led by Daniel Rugar, recently used MRFM to detect the faint magnetic signal – the “spin” – of a single electron. While that accomplishment is still far from the goal of a 3-D snapshot of an atom or molecule, it is a critical step in proving that MRFM could perform atomic-scale imaging.

    MRFM works by dangling a tiny magnetic tip from the end of an ultrasensitive cantilever that bends in response to even an exceedingly small force. Under just the right conditions, the magnetic force between the tip and an electron changes the vibrations of the cantilever in a measurable way. Scanning a molecule in a 3-D raster pattern could, in theory, generate an image.
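
    In code, the scanning idea is just a loop over a 3-D grid of tip positions, recording the cantilever’s response at each point. The Python sketch below stands in a random number for the real measurement; the grid size and step are arbitrary.

        import random

        def cantilever_shift(x, y, z):
            """Stand-in for the real measurement: returns a fake signal value."""
            return random.gauss(0.0, 1.0)

        def raster_scan(nx=4, ny=4, nz=2, step_nm=1.0):
            """Visit a 3-D grid of tip positions and record a signal at each voxel."""
            volume = {}
            for ix in range(nx):
                for iy in range(ny):
                    for iz in range(nz):
                        pos = (ix * step_nm, iy * step_nm, iz * step_nm)
                        volume[pos] = cantilever_shift(*pos)
            return volume

        image = raster_scan()
        print(len(image), "voxels sampled")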

    By helping pharmaceutical researchers more directly work out the structures of proteins, MRFM could provide invaluable clues toward the development of safer and more effective drugs. The standard technique for determining the complex three-dimensional structure of proteins involves crystallizing them and then analyzing the diffraction pattern of x-rays that bounce off atoms in the crystal. But not all proteins crystallize, and puzzling out x-ray diffraction patterns is painstaking and tricky.

    Researchers at IBM developed the scanning tunneling microscope, which provides images of atoms, and coinvented AFM, which has become a standard tool for atomic-scale manipulation, making possible much of nanotechnology. Whether MRFM will have the same impact is uncertain. But IBM’s experimental result is an encouraging signal for those desperate for a clearer, fuller view of the atomic and molecular world.

  • Universal Memory

    NANOELECTRONICS Nanotubes make possible ultradense data storage.

    Nantero CEO Greg Schmergel holds a circular wafer of silicon, about the size of a compact disc, sealed in an acrylic container. It’s a piece of hardware that stores 10 billion bits of digital information, but what’s remarkable about it is the way it does it. Each bit is encoded not by the electric charge on a circuit element, as in conventional electronic memory, nor by the direction of a magnetic field, as in hard drives, but by the physical orientation of nanoscale structures. This technology could eventually allow vastly greater amounts of data to be stored on computers and mobile devices. Experts estimate that within 20 years, you may be able to fit the content of all the DVDs ever made on your laptop computer or store a digital file containing every conversation you have ever had on a handheld device.

    Nantero’s approach is part of a broader effort to develop “universal memory” – next-generation memory systems that are ultradense and low power and could replace everything from the flash memory in digital cameras to hard drives. Nantero’s technology is based on research that the Woburn, MA, company’s chief scientist, Thomas Rueckes, did as a graduate student at Harvard University. Rueckes noted that no existing memory technologies seemed likely to prove adequate in the long run. Static and dynamic random-access memory (RAM), used in laptops and PCs, are fast but require too much space and power; flash memory is dense and nonvolatile – it doesn’t need power to hold data – but is too slow for computers. “We were thinking of a memory that combines all the advantages,” says Rueckes.

    The solution: a memory each of whose cells is made of carbon nanotubes, each less than one-ten-thousandth the width of a human hair and suspended a few nanometers above an electrode. This default position, with no electric current flow between the nanotubes and the electrode, represents a digital 0. When a small voltage is applied to the cell, the nanotubes sag in the middle, touch the electrode, and complete a circuit – storing a digital 1. The nanotubes stay where they are even when the voltage is switched off. That could mean instant-on PCs and possibly the end of flash memory; the technology’s high storage density would also bring much larger memory capacities to mobile devices. Nantero claims that the ultimate refinement of the technology, where each nanotube encodes one bit, would enable storage of trillions of bits per square centimeter – thousands of times denser than what is possible today. (By comparison, a typical DVD holds less than 50 billion bits total.) The company is not yet close to that limit, however; its prototypes store only about 100 million bits per square centimeter.
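
    A cell’s behavior, and the density arithmetic, can be captured in a short Python sketch. The class below is a toy model of the mechanism described above, and the density figures simply restate the article’s numbers.

        # Each cell holds a nanotube that is either suspended (0) or sagging
        # onto its electrode (1); the state persists with the power off.
        class NanotubeCell:
            def __init__(self):
                self.touching = False            # suspended: digital 0

            def write(self, bit: int):
                self.touching = bool(bit)        # a voltage pulse sets or releases the tube

            def read(self) -> int:
                return 1 if self.touching else 0 # current flows only when touching

        cell = NanotubeCell()
        cell.write(1)
        assert cell.read() == 1                  # state held without power

        prototype_bits_per_cm2 = 100e6           # today's prototypes
        projected_bits_per_cm2 = 1e12            # one bit per nanotube, per Nantero
        print(projected_bits_per_cm2 / prototype_bits_per_cm2)   # ~10,000-fold headroom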

    Nantero has partnered with chip makers such as Milpitas, CA-based LSI Logic to integrate its nanotube memory with silicon circuitry. The memory sits on top of a layer of conventional transistors that read and write data, and the nanotubes are processed so that they don’t contaminate the accessing circuits. By late 2006, Schmergel predicts, Nantero’s partners should have produced samples of nanotube memory chips. Early applications may come in laptops and PDAs. Ultimately, however, the goal is to replace all memory and disk storage in all computers.

    Suspending nanotubes is not the only way to build a universal memory. Other strategies include magnetic random-access memory, which Motorola and IBM are pursuing, and molecular memory, where Hewlett-Packard is a research leader. But industry experts are watching Nantero’s progress with cautious optimism. “They have a very good approach, and it’s further along than any other,” says Ahmed Busnaina, professor of electrical engineering at Northeastern University and director of the National Science Foundation-funded Center for High-Rate Nanomanufacturing. If successful, this new kind of memory could put a world of data at your fingertips instantly, wherever you go.

  • Bacterial Factories

    PHARMACEUTICALS Overhauling a microbe’s metabolism could yield a cheap malaria drug.

    In the valleys of central China, a fernlike weed called sweet wormwood grows in fields formerly dedicated to corn. The plant is the only source of artemisinin, a drug that is nearly 100 percent effective against malaria. But even with more farmers planting the crop, demand for artemisinin exceeds supply, driving its cost out of reach for many of the 500 million afflicted with malaria every year. University of California, Berkeley, bioengineer Jay Keasling aims to solve the supply problem – and reduce the cost of treatment to less than 25 cents – by mass-producing the compound in specially engineered bacteria.

    Keasling’s efforts are an example of metabolic engineering, a field in which researchers try to optimize the complex processes whereby a cell produces or breaks down a particular substance. These processes rely on the step-by-step direction of genes; changing even one gene can alter the outcome. Most metabolic engineering has previously focused on modifying a cell’s natural processes by inserting, mutating, or deleting a few key genes. According to James Collins, a biological engineer at Boston University, “what Jay is doing is a bit more radical”: creating entirely new metabolic pathways by integrating multiple genes from different organisms into a host microbe.

    Keasling began his artemisinin project by inserting a set of yeast genes into the common bacterium E. coli. These genes induce the bacterium to make the chemical precursor to terpenes – the family of compounds to which artemisinin belongs. Adding in another two genes causes the bacterium to make a specific artemisinin precursor. Introducing a few more genes from sweet wormwood should get the microbe to make artemisinic acid, which is one simple chemical step away from artemisinin. But since E. coli don’t normally produce these chemicals, each step of the process will have to be carefully contrived and optimized. “There’s a lot of engineering still,” says Keasling.
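
    One way to picture the engineering is as an ordered chain of conversions, each enabled by a new set of genes. The Python listing below paraphrases the steps described above; the step names and the starting material are simplifications, not the actual biochemistry.

        # Each step: (genes or process added, compound produced).
        pathway = [
            ("yeast genes",              "terpene precursor"),
            ("two further genes",        "artemisinin precursor"),
            ("sweet wormwood genes",     "artemisinic acid"),
            ("one simple chemical step", "artemisinin"),
        ]

        compound = "E. coli metabolites"   # simplified starting point
        for step, product in pathway:
            print(f"{compound} --[{step}]--> {product}")
            compound = product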

    A $42.6 million grant from the Bill and Melinda Gates Foundation should help. In December, the foundation awarded the money to Keasling, his Emeryville, CA, startup Amyris Biotechnologies, and San Francisco’s Institute for OneWorld Health, a nonprofit that aims to secure U.S. Food and Drug Administration approval for bacteria-derived artemisinin within five years.

    The promise of bacterial factories doesn’t end with artemisinin. Amyris Biotechnologies hopes to adapt Keasling’s terpene precursor pathway to make prostratin, a promising anti-HIV compound found in the bark of the mamala tree on Samoa. With different alterations to the pathway, bacteria could make paclitaxel, the breast cancer drug sold under the brand Taxol and now isolated from yew trees.

    Ultimately, Keasling believes, new technologies for analyzing and understanding cellular pathways will enable researchers to engineer microbes to produce a huge range of chemicals, from drugs to plastics. And unlike conventional chemical engineering, bacterial production is clean, neither requiring nor producing environmentally harmful compounds. “We’ve got all these great tools,” Keasling says. “Now we can start to put these to use to solve this one particular problem: how to engineer a cell to do the kinds of chemistries that you want it to do.”

  • Enviromatics

    ENVIRONMENT Computer forecasts enhance farm production and species diversity.

    Environmental scientists think of computers as old friends. They’ve long used them to crunch the data they collect in the field, whether to map the habitats of endangered species or predict the effects of greenhouse gas emissions on the global climate. But three trends are pushing information technology from the periphery of environmental studies to its very core, according to the proponents of a new field called environmental informatics, or enviromatics.

    First, there’s a fresh avalanche of raw data about the environment, a product of networked sensors that monitor ecosystems in real time. Second, there’s the rise of Internet standards such as the Extensible Markup Language (XML), which can tie together data stored in varying formats in different locations. The third trend – the decreasing cost of computing power – means that researchers can use inexpensive desktop machines to run analyses and simulations that once required supercomputers. Just as the invention of fast gene sequencers a decade ago gave rise to bioinformatics, a new wealth of data about the oceans, the atmosphere, and the land is leading to a wider embrace of sensing, simulation, and mapping tools – and hopefully to more reliable predictions about the future.

    Environmental modeling, of course, is nothing new: the ratification of the Kyoto Protocol was spurred in part by global climate models that predict average temperature increases of 1 °C to 6 °C over the next century. But such large-scale, long-range climate models don’t help with more immediate and local questions – such as whether the humidity this month in Butler County, PA, means that farmers should apply fungicides early to prevent infections. At Pennsylvania State University’s Center for Environmental Informatics, researcher Douglas Miller is pouring data from weather stations throughout the wheat-growing states into a Web-based program that can predict where a devastating wheat fungus infection called fusarium head blight may strike next. Farmers can log into a website, enter their locations and the flowering dates of their crops, and get local maps showing color-coded levels of risk. “We’re putting environmental information into people’s hands so they can make decisions,” says Miller.
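
    Stripped to its essentials, the farmer-facing tool is a lookup that joins a field’s location and flowering date with recent weather data and returns a color-coded risk level. The humidity figures, thresholds, and county data in this Python sketch are invented; the real model is far more sophisticated.

        # Toy risk lookup for fusarium head blight.
        station_humidity = {"Butler County, PA": 0.86, "Lancaster County, PA": 0.55}

        def blight_risk(county: str, days_since_flowering: int) -> str:
            """Return a color-coded risk level for a field."""
            humidity = station_humidity.get(county, 0.0)
            if humidity > 0.8 and days_since_flowering <= 10:
                return "red (high risk: consider early fungicide)"
            if humidity > 0.6:
                return "yellow (moderate risk)"
            return "green (low risk)"

        print(blight_risk("Butler County, PA", 4))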

    Enviromatics is even helping to manage urban growth. In San Diego County, officials compiled a detailed geographical and biological database mapping which vernal pools – basins that fill with rainwater in the winter and spring – harbor the most-endangered strains of species such as the San Diego fairy shrimp and therefore deserve the most protection. Science is rarely the main driver of land management or other decisions affecting the natural environment, but enviromatics may make it harder than ever for politicians to skirt the long-term implications of their decisions.

  • Cell-Phone Viruses

    TELECOM Wireless devices catch bad code through the air and then infect supposedly secure computer systems.

    ValleZ has released a digital epidemic – or maybe he’s delivered an early inoculation.

    ValleZ is the online handle of a 24-year-old computer programmer from Spain who, last June, wrote the first malicious program targeting cellular phones, the Cabir worm. Now, security experts fear that the rush to integrate cell phones into every aspect of our daily lives might make them the perfect carriers for digital diseases. Bruce Schneier, founder and chief technology officer of Counterpane Internet Security in Mountain View, CA, assesses the threat bluntly: “We’re screwed,” he says.

    Or maybe not. ValleZ is a member of an international cabal of programmers called 29A, which specializes in malicious software, or “malware.” These “ethical hobbyists” send their creations to security labs so that experts can research cures. “[Cabir] was a manner of saying that the antiviral people should be watching out for this,” says ValleZ, whom Technology Review tracked down via e-mail.

    ValleZ shared the code for his original, nonmalicious version of the worm with other members of 29A. Shortly after, it was passed to a Brazilian programmer who posted his own variation on his website in December. Now, bad guys everywhere are spinning off new versions that are melded with other malware that locks up phones or autodials obscure numbers. As of March, the Helsinki, Finland-based security company F-Secure reported that 15 variations of Cabir had popped up in 14 countries.

    Cabir spreads like an airborne disease through Bluetooth wireless connections, a popular means of transferring data at close proximity between cell phones and everything from other phones to car GPS navigation systems. Even antiviral researchers have found themselves worrying that viruses under examination might spread wirelessly to mobile devices outside their labs’ doors. Travis Witteveen, vice president of F-Secure’s North American division, says his company now runs its main mobile-security lab out of an old military bomb shelter.
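
    The epidemiology of such a worm is easy to simulate crudely: in each time step, every infected phone can pass the worm to any phone within Bluetooth range. All of the numbers in this Python toy – phone count, spacing, range – are invented for illustration.

        import random

        random.seed(1)
        positions = [random.uniform(0, 100) for _ in range(30)]   # phone locations, meters
        infected = {0}                                            # phone 0 carries the worm
        BLUETOOTH_RANGE = 10.0                                    # meters, roughly

        for step in range(5):
            newly = {j for j in range(len(positions)) for i in infected
                     if abs(positions[i] - positions[j]) < BLUETOOTH_RANGE}
            infected |= newly
            print(f"step {step}: {len(infected)} phones infected")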

    The cell-phone worm’s task could be as simple as swiping your address book or spewing out costly and annoying text-message spam. Or it could mount a “denial of service” attack on your wireless-service provider by making your phone rapidly dial many numbers in succession. As people start using their “smart” cell phones to tap into computer networks, the damage caused by malware could grow more severe. If, as promised, cell phones soon begin to serve as payment devices, mobile malware that nabs your identity and taps directly into your credit line could follow. Theoretically, a corporate accountant’s phone could pick up a worm and, when synched to a PC, let it loose on the company’s network, jumbling accounts.

    And mobile malware will be able to infect systems not vulnerable to conventional viruses. A car owner could link her Bluetooth-enabled phone to her dashboard computer, so that she can control the phone via buttons on her steering wheel. As she drives down the road, her phone might connect to another in a passing car. Suddenly, her navigation system fails. “This type of threat is probably inevitable,” says Schneier. In the future, cars will include computer systems that permit remote diagnosis of problems. They should be kept physically separate from hardware that regulates mechanical systems – performing calibrations, for instance – lest a virus cause steering or brake controls to fail.

    Protection against this nascent peril is beginning to appear. Symbian, the company whose mobile-device operating system has been targeted by every cell-phone virus so far, has released a version of its software that grants Bluetooth access only to programs tagged with secure digital IDs. Antiviral software is not currently preinstalled on most privately purchased cell phones and so is found almost exclusively on business-issued phones. But companies like McAfee and InnoPath Software are developing easy ways for individual consumers to download antiviral software. According to research firm IDC, spending on mobile security will leap from around $100 million in 2004 to nearly $1 billion by 2008 – with a significant portion going toward antiviral protection.

    ValleZ says he’s done coding mobile malware – for a little while, at least. Of course, that won’t stop others from concocting their own electronic pests. Another, completely new and more virulent mobile virus, CommWarrior, was found in late February. It sends out costly multimedia messages but contains so many bugs that it doesn’t pose a major threat. The next malicious piece of code, however, may be neither a warning exercise nor a self-defeating pest but a full-bore attack on the wireless world.

  • Biomechatronics

    PROSTHETICS Mating robotics with the nervous system creates a new generation of artificial limbs that work like the real thing.

    Conventional leg prostheses frequently leave their users, especially above-the-knee amputees, stumbling and falling or walking with abnormal gaits. Hugh Herr, a professor at MIT’s Media Laboratory, is building more-reliable prostheses that users can control more precisely. Some of the latest prosthetic knees on the market already have microprocessors built into them that can be programmed to help the limbs move more naturally. But Herr has taken this idea one step further. He has developed a knee with built-in sensors that can measure how far the knee is bent, as well as the amount of force the user applies to it while walking. This artificial knee – recently commercialized by the Icelandic company Össur – also contains a computer chip that analyzes the sensor data to create a model of the user’s gait, and adapt the movement and resistance of the knee accordingly.

    Now Herr is working to distribute those sensors beyond the knee joint, using them to detect not just the mechanical forces of the body but also neural signals from the muscles near the joint. This work is part of an emerging discipline called biomechatronics, in which researchers are building robotic prostheses that can communicate with users’ nervous systems. In five to seven years, predicts Herr, spinal-cord injury patients will move their limbs again by controlling robotic exoskeletons strapped onto them (or at least they will in research settings). Biomechatronics is receiving more attention now in part because of the Iraq War, which is sending a high number of U.S. soldiers home with crippling injuries. Herr, who leads the Media Lab’s biomechatronics group, is part of a new $7.2 million research project run by the U.S. Department of Veterans Affairs (VA) to develop new technologies for amputees who lost limbs as the result of combat injuries.

    Herr, a double leg amputee, plans on becoming his own first test subject for his latest prosthetic ankle prototype. By early next year, at least three small sensors will be implanted into the muscles of one of his legs below the knee. As Herr flexes his leg muscles in ways that once moved his ankle, these sensors will measure electrical activity in the muscles and transmit that information to a computer chip in the prosthetic ankle, which will translate those impulses into instructions for the ankle’s motors. Herr hopes to be able to move the ankle by firing up the residual muscles near the joint and feeling it respond, just as he would with a natural joint. Nor will communication be just one way. Herr should also be able to sense the ankle’s position through vibrations emanating from the joint. “We regard this work as extraordinarily promising,” says Roy Aaron, a professor of orthopedics at Brown Medical School who is heading up the VA project.
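
    The control idea – muscle signals in, motor commands out – can be sketched as a simple mapping from EMG amplitude to a target ankle angle. The readings, gain, and angle limit in this Python sketch are invented; the actual controller Herr describes is far richer.

        # Toy control loop: a normalized EMG reading (0..1) drives the ankle's target angle.
        def ankle_command(emg_amplitude: float, gain: float = 30.0,
                          max_angle_deg: float = 20.0) -> float:
            """Map an EMG reading to an ankle angle in degrees, clipped to a safe range."""
            angle = gain * emg_amplitude
            return max(-max_angle_deg, min(max_angle_deg, angle))

        for emg in [0.05, 0.2, 0.6, 0.9]:        # one reading per control tick
            print(f"EMG {emg:.2f} -> ankle {ankle_command(emg):5.1f} deg")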

    Having lost his lower legs to frostbite while mountain climbing as a teenager, Herr says he’s looking forward to trying out the device. “I think it will be quite profound to control my ankles again,” he says. Herr’s vision for the field is to combine biomechatronics with tissue engineering and create limbs made of both artificial materials and human tissue. Says Herr, “I think, inevitably, we’ll end up with hybrid devices.”