The Virtual Cell
When Harley McAdams was a few years shy of 60, he became a biologist. He had spent two decades of his working life as a systems engineer at AT&T’s Bell Laboratories, and four years at Lockheed Missiles and Space in Sunnyvale, CA, working on data systems architecture for military satellites. In 1994, however, he took to attending biology seminars at Stanford University, where his wife, Lucy Shapiro, was chair of the developmental biology department. McAdams had his epiphany while listening to an eminent geneticist describe the complex biological circuitry that turns genes on and off in yeast. To the uninitiated, the diagram of this system was vaguely reminiscent of a plate of spaghetti, with various arrows and stop and go signs attached. To McAdams, it looked like nothing so much as an electric circuit, with the kinds of feedback loops and regulatory and control mechanisms that constituted the meat and potatoes of his systems engineering work.
After the lecture, says McAdams, he and his wife made a deal. He would teach her Boolean algebra, the mathematical logic of computer circuitry, and in return she would teach him genetics. And so they spent the next year, or at least the nights and weekends, educating each other, until the day McAdams claimed that he could apply the rules of electrical circuitry, along with the computer modeling techniques engineers typically use to analyze and design such circuitry, to a genetic circuit. By so doing, McAdams was able to provide an understanding of how the genetic system worked that went far beyond what biologists had managed to achieve. “At one point, he just walked out of Lockheed and started working at home,” says Shapiro. “I nearly had a heart attack.”
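The flavor of the circuit analogy can be captured in a few lines of code: treat each gene as a logic element that is either on or off, write its regulatory inputs as a Boolean rule, and iterate the circuit to see what dynamics fall out. The three genes and their rules below are invented purely for illustration; McAdams’s actual models were far more detailed.

```python
# A minimal sketch of a gene-regulatory network treated as a Boolean
# circuit. The genes ("geneA", "geneB", "geneC") and their regulatory
# logic are hypothetical, chosen only to show the idea.

def step(state):
    """One synchronous update of the regulatory logic.
    state maps gene name -> bool (expressed or not)."""
    return {
        # geneA is active unless repressed by geneC's product
        "geneA": not state["geneC"],
        # geneB requires geneA's product to be transcribed
        "geneB": state["geneA"],
        # geneC is induced only when geneA and geneB are both on,
        # closing a negative-feedback loop back onto geneA
        "geneC": state["geneA"] and state["geneB"],
    }

def run_to_cycle(state, max_steps=20):
    """Iterate until the circuit revisits a state (fixed point or cycle)."""
    seen = []
    for _ in range(max_steps):
        if state in seen:
            break
        seen.append(state)
        state = step(state)
    return seen

trajectory = run_to_cycle({"geneA": True, "geneB": False, "geneC": False})
for s in trajectory:
    print(s)
```

Running this toy circuit shows it never settles: the delayed negative feedback through geneC makes the expression pattern cycle indefinitely, the kind of dynamic behavior that circuit-style analysis exposes and that is hard to read off a static “spaghetti” diagram.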
By 2000, McAdams and Shapiro had published a seminal paper in the journal Science on the application of systems engineering to biology, and McAdams had acquired his own biology laboratory at Stanford University School of Medicine and funding from the U.S. Defense Advanced Research Projects Agency (DARPA) to pursue biological research. Even Shapiro began to view herself and her work in an entirely different light. “Now if somebody asks me what I do for a living,” she says, “I say I am a biological systems engineer.”
Shapiro and McAdams can be considered among the more senior members of the avant-garde of a revolution in biology, in which the immediate goal is to create computer simulations, styled after systems engineering, of the regulatory mechanisms of genes and, eventually, entire cells, tissues and organs. These simulations will allow researchers to do biology experiments “in silico,” inexpensively and remarkably quickly. Ultimately, researchers will use such computer simulations to identify new drug targets and to design and screen new drugs that will lead to entirely new treatments, if not cures. “It’s just a wholly new way of doing biology,” says Jim Anderson, who directs a new program to fund such research at the National Institutes of Health.
In the past three years, new departments and entire research institutes have been founded to pursue in silico biology at Stanford University, Caltech, Harvard University, the University of California, Berkeley, and the University of Washington, to name a few. All have the explicit goal of uniting biologists with physicists, engineers, mathematicians and computer scientists to create computer simulations that probe the outstanding problems of biology and medicine.
Forcing this in silico revolution are several inescapable facts: first is the sequencing of a host of complete genomes (the human genome being the most mediagenic) and the accompanying explosion in genomics technology. As a result, for the first time in history, researchers have what amounts to a genetic parts list for living organisms from bacteria to humans. This in turn has produced a shift in emphasis away from the traditional focus of biology, “on intensive analysis of the individual components of complex biological systems,” as Whitehead Institute biologists Eric Lander and Robert Weinberg recently described it in Science, toward a focus on how those components work together in networks and entire cellular systems.
Keep It Simple
The point that the chemotaxis field reached in the early 1990s, when there was finally enough experimental data available to begin to think about computer simulations, has now been reached by all of biology, says Simon. At the moment, most practitioners of in silico biology are starting with the simplest possible cells and the simplest possible systems and hoping to work up from there. The main exception to this rule is the handful of companies that have recently begun marketing simulations of complicated disease processes, on the level of organs and cells, to pharmaceutical companies (see “In Silico, Inc.”). But the bulk of the work in academic research laboratories and biotechnology firms has targeted relatively simple systems, like bacterial chemotaxis, or particular cellular pathways that have been studied for decades and on which copious detailed information has already been collected.
From these simple systems, in silico biologists hope to identify what they have taken to calling “modules” or “control motifs” that are common to other types of cells-or even to all cells. Then researchers can begin hooking these modules together in ever more comprehensive simulations that begin to realistically approximate what’s happening in entire cells.
“You look at a cell as composed of discrete cellular machinery,” says Ravi Iyengar, a biologist-turned-modeler at Mount Sinai School of Medicine in New York. “You understand them one at a time and then eventually put it together. If we want to dream, we can say eventually we should be able to construct a simulation of a whole mammalian cell this way. We don’t have the knowledge to do it now. But it’s do-able.”
This is the goal, challenging as it may be, faced by an ambitious new project known as the Alliance for Cellular Signaling and led by Nobel Prize-winning biologist Alfred Gilman of the University of Texas Southwestern Medical Center at Dallas (see “The Proteomics Payoff,” TR October 2001). The alliance hopes to someday simulate all the “signal pathways” in two specific types of mouse cells-B lymphocytes, which are part of the immune system, and the heart muscle cells known as cardiac myocytes. One such pathway might, for instance, carry a signal from a hormone or a toxin outside the cell to the cell’s nucleus, turning specific genes on or off. “Our long-term goal is to be able to watch the flux of information come through the pathways of the cell and see how those pathways control that flux,” says Gilman. “And that ultimately becomes a model of what’s happening inside the cell.”
To pull this off, Gilman and his collaborators have enlisted the help of 50 participating investigators, all senior researchers, and of another 300 researchers, each an expert on specific molecules involved in the pathways. They’ve raised over $10 million a year in funding for the next 10 years, and have obtained space to build seven dedicated labs where researchers will work on every stage of the project, from preparing and analyzing cells to building new machinery for making the necessary measurements to doing the modeling itself.
Gilman acknowledges that the plan is highly ambitious, a “crazy idea,” he says, but still believes that in only five years they can have a “pretty complete” parts list of the two cells and know “a hell of a lot about the complete dictionary” of interactions between the parts, and how the information flows through the cells. He seems equally proud of having managed to convince six pharmaceutical companies to contribute to the alliance. “We’re putting all the data out [on a Web site] in real time for everyone’s use,” he says, “and making no claims to intellectual property. This makes it particularly interesting that we’re getting money from the pharmaceutical industry. But if we really did understand signaling systems thoroughly, and if we had the equivalent of a piece of a virtual cell in terms of a quantitative model of all the signaling systems, that would be an incredible drug discovery engine and of great value to the industry.”
One hypothetical way drug companies might use in silico biology, says NIH’s Anderson, would be to find new drug targets, and maximize the effectiveness of drug candidates, by doing what’s known in the jargon as a sensitivity analysis. In effect, the researchers would simulate signaling pathways that are known to lead to, say, cancer when they go awry. Then the simulation could tell the researchers exactly where a drug molecule could intervene to have the maximum effect on the errant pathways.
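In code, the sensitivity-analysis idea amounts to nudging each rate constant in a pathway model and measuring how much the output shifts: the parameter the output is most sensitive to marks the most promising point of intervention. The two-step cascade and its rate constants below are invented for illustration, not drawn from any real pathway model.

```python
# Hedged sketch of sensitivity analysis on a toy signaling cascade:
# signal -> kinase1 -> kinase2. All reactions and rate constants are
# hypothetical, chosen only to illustrate the technique.

def pathway_output(params):
    """Steady-state fraction of kinase2 active, from simple
    activation/deactivation balances at each step."""
    k_act1, k_deact1, k_act2, k_deact2 = params
    # fraction of kinase1 active at steady state
    a1 = k_act1 / (k_act1 + k_deact1)
    # kinase2 activation is driven by active kinase1
    return (k_act2 * a1) / (k_act2 * a1 + k_deact2)

def sensitivities(params, eps=1e-6):
    """Relative (logarithmic) sensitivity of the output to each rate
    constant, estimated by finite differences: d(ln y) / d(ln p)."""
    base = pathway_output(params)
    result = []
    for i, p in enumerate(params):
        bumped = list(params)
        bumped[i] = p * (1 + eps)  # perturb one parameter by a tiny fraction
        result.append((pathway_output(bumped) - base) / base / eps)
    return result

params = [1.0, 0.5, 2.0, 1.0]  # k_act1, k_deact1, k_act2, k_deact2
for name, s in zip(["k_act1", "k_deact1", "k_act2", "k_deact2"],
                   sensitivities(params)):
    print(f"{name}: {s:+.3f}")
```

With these made-up numbers, the second step’s rate constants dominate the output, so a drug acting there would, in this toy model, move the pathway most per unit of effect; real in silico efforts apply the same logic to far larger, experimentally grounded models.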
Another project, already under way in biotech firms, says Anderson, is to create computer simulations of the biological systems that bacteria, for instance, use to produce such bestselling antibiotics as erythromycin. “Antibiotics are usually synthesized by soil bacteria, and these use extremely elaborate biosynthetic pathways that have been very difficult for us to crack,” says Anderson. “Every drug company would like to improve the yield of their most costly antibiotics by being able to deliberately engineer the bacteria to do x, y and z and to do it better. So what you want to do is simulate in silico the biological systems that regulate and control the synthesis of the antibiotics in the bacteria. Then you can learn how to modify the bacteria genetically in order to accentuate the production or change the characteristics of the particular antibiotic that you’re already making.”
More ambitious still, and perhaps another decade or more in the making, is the Digital Human Project, an inchoate national initiative that is just now taking shape in funding agencies in Washington, DC. The idea grew out of the Defense Advanced Research Projects Agency, says Shankar Sastry, former head of the agency’s information technology office and now chair of the electrical engineering and computer sciences department at the University of California, Berkeley. The eventual goal, he says, is a “fully functional model of an entire human body from intercellular through the tissue level through the organ level right up to the functioning of the entire body.”
Such a model would require at least as much effort and collaboration as went into the Human Genome Project and might cost a billion or more dollars a year to build. It would be used eventually for teaching (“every university in this country will be able to bring anatomy and physiology to life,” says Sastry) and for pharmaceutical research. “If you have simulations you can trust,” says Sastry, “you can try your drug out on the simulation to understand all its complex interactions.” DARPA is now gearing up to spend $80 to $100 million on programs that would seed the necessary technologies to make such a digital human a virtual reality.
As the in silico revolution explodes through the biological community, those who have already caught the bug are confident that sooner or later everyone will be doing it, at least, says biologist Tom Pollard of the Salk Institute for Biological Studies in La Jolla, CA, “if they want to understand how any biology works.”
Less certain, however, is how quickly the revolution will pay off. The handful of companies selling simulations of tissues, cells or entire disease processes believe their models can already benefit pharmaceutical researchers by providing them, if nothing else, with a more structured way to think about the diseases they’re attacking. But talk to enough biologists-turned-modelers or engineers-turned-biologists, and you’ll get estimates ranging from a decade for a reasonable simulation of a simple cell to a century for an equally accurate simulation of a human from the genetic level on up.
Occasionally the discussions of the future of in silico biology take on a catch-22-like tone: the computer simulations will be indispensable tools for anyone who wants to truly understand the inner workings of cells, tissues and organs, but those computer simulations are going to be crippled until researchers can inform them with a better understanding of the cells, tissues and organs they’re studying. Until then, progress will be made as both researchers and simulations bat new data and hypotheses back and forth between them and slowly converge on reality. “This is not a short path to glory,” says Drew Endy, a civil-engineer-turned-biologist at Berkeley, CA’s Molecular Sciences Institute. “This is a decades-long effort.”