
Biology’s Next Breakthroughs

Biotech pioneer Leroy Hood explains how systems biology will impact medicine.

In the 1980s, Leroy Hood was something of a maverick. At a time when most biologists wanted nothing to do with the tools and methods of engineering, Hood built instruments that revolutionized biological science. As a professor at Caltech, he developed four fundamental automated instruments that helped make the comprehensive study of the human genome possible: a DNA sequencer, a DNA synthesizer, a protein synthesizer, and a protein sequencer. But the Caltech administration wasn't interested in commercializing these technologies, so Hood cofounded the company that became Applied Biosystems. (He has also helped found several other biotech companies, including Amgen.)

Biotech pioneer: Leroy Hood, president and cofounder of the Institute for Systems Biology, in Seattle, invented several tools, including the automated DNA sequencer that helped make it possible to sequence the human genome.

In 2000, after a stint at the University of Washington, he started up the Seattle-based Institute for Systems Biology, where he is president. Traditional biology tends to study one gene or protein or process at a time. Systems biology takes a cue from engineering and treats organisms as complex systems. Systems biologists, often using computer models, try to understand how genes, proteins, cells, and tissues interact to create complex organisms. By mapping out, rather than reducing, biological complexity, systems biologists hope to reach a new understanding of the fundamental processes of life, from embryonic development to normal metabolism to the emergence of diseases like cancer.

The approach has expanded biologists’ understanding of simple organisms like E. coli. But dramatic success has been slow in coming. So far, systems biology’s successes have been at the level of single cells, not tissues or whole animals. At the Institute for Systems Biology’s International Symposium this April, Hood talked to Technology Review about how systems biology will eventually change human medicine and even materials science.

Technology Review: What are the challenges in applying systems biology to human disease?

Leroy Hood: What we’ve seen with systems biology in the last eight years or so is that it’s very powerful in approaching single-celled organisms, be they bacteria or yeast. Their genomes are much smaller, and our ability to manipulate bacteria and yeast genetically, environmentally, and so forth is much, much greater. As a consequence, we’ve learned an enormous amount about these single-celled organisms, and in fact we’ve developed very powerful tools for unraveling networks that begin mechanistically to explain how they respond to their environments.

One of the grand challenges in systems biology is to move from simple, single-celled model organisms up to higher organisms–flies and worms, eventually to mice, and ultimately to humans. Those transitions are enormously complex, both because of the greater number of genes and the greater number of combinatorial possibilities.

TR: How are you making this transition to higher organisms?

LH: A powerful approach is to apply these techniques to individual [human] cells. One important reason for doing single-cell analysis is to be able to use some of the very powerful tools we’ve developed in single-celled organisms. But when you do single-cell analysis, you lose out on the context, the interaction with other cells [that happens in tissues]. One of the fascinating and unanswered questions is, are you going to be throwing the baby out with the bathwater?

TR: When researchers talk about single-cell analysis, they often emphasize that the activities of individual cells within the same tissue are unique, and that averaging them together provides a blurry picture. How do you deal with this complexity and translate information about single cells into something that’s clinically useful?

LH: What has become increasingly clear with the advent of single-cell studies in higher organisms is that the responses are enormously heterogeneous. In fact, when we look at large populations of cells and average their signals together, we often miss some of the most definitive features of these cells’ responses.
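To make the averaging problem concrete, here is a minimal simulation (my own illustration, not from the interview): a hypothetical gene that is strongly induced in half the cells and nearly silent in the rest produces a population average that describes almost no individual cell.

```python
# Illustrative sketch: why population averages can mask heterogeneous
# single-cell responses. We simulate a hypothetical gene's expression in two
# subpopulations -- "responders" and "non-responders" -- and show that the
# population mean lands where almost no individual cell actually sits.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bimodal response: half the cells induce the gene strongly,
# half barely respond at all.
responders = rng.normal(loc=100.0, scale=10.0, size=500)   # high expression
non_responders = rng.normal(loc=5.0, scale=2.0, size=500)  # low expression
population = np.concatenate([responders, non_responders])

mean = population.mean()
print(f"population mean: {mean:.1f}")  # ~52.5, a value almost no cell has

near_mean = np.mean(np.abs(population - mean) < 10.0)
print(f"fraction of cells within 10 units of the mean: {near_mean:.3f}")  # ~0
```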

One of the fundamental questions in doing single-cell studies is whether each cell is utterly unique, so that whatever measurements you take, every cell differs from every other, or whether the cells in fact fall into discrete populations, discrete states. My own firm conviction is that when we learn how to do these studies properly, there will be discrete states we can look at. Knowing those states, and then reconstituting them to see how populations work, is going to give us deep insights into developmental, physiologic, and disease mechanisms. If, on the other hand, there aren't discrete states, if there is a continuous distribution of variability, that will represent an interesting challenge.
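One common computational way to pose Hood's question, sketched here with simulated data, is to fit mixture models with increasing numbers of components and compare them with an information criterion: a clear optimum at more than one component points toward discrete states, while a score that keeps improving smoothly is more consistent with a continuum. The three-state toy data and the use of scikit-learn's GaussianMixture are my assumptions, not part of the institute's methods.

```python
# Hedged sketch: test "discrete states vs. continuum" by fitting Gaussian
# mixture models of increasing size to single-cell measurements and comparing
# BIC scores. Simulated data; a real analysis would use measured markers.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Simulated two-marker measurements drawn from three hypothetical cell states.
states = [([2, 2], 0.3), ([8, 3], 0.5), ([5, 9], 0.4)]
cells = np.vstack([
    rng.normal(loc=center, scale=spread, size=(300, 2))
    for center, spread in states
])

# A sharp BIC minimum at k > 1 suggests discrete states.
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(cells)
    print(f"k={k}  BIC={gmm.bic(cells):.0f}")  # lowest BIC near k=3 here
```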

TR: You’ve said that solutions to biological complexity will be applied to complex problems in other fields. Can you explain what you mean by this?

LH: Evolution has had four billion years to figure out really clever solutions for new materials, new chemistries, new types of molecular machines, even new approaches to computing. I think by studying living organisms and deducing the mechanisms that underlie these evolutionarily sculpted solutions to complexity, those solutions can be applied to other fields. A classic example is materials science. The spectrum of different materials that organisms have evolved to make is enormous.

TR: For the past several years, researchers at your institute have talked about a diagnostic “nanochip” that would detect markers of disease from all over the body. Can you update me on that project?

LH: What we’re interested in doing is developing strategies that will let us identify proteins in the blood that will permit us to interrogate the state of individual organs: the liver, the heart, the muscle–whatever you’d like to look at.

The basic idea is that the organ-specific proteins from, say, the liver will reflect the operation of the networks in the liver. So they'll be at one set of concentrations for a normal liver, and a different set of concentrations for a liver that has cancer, hepatitis, cirrhosis, or other diseases. These blood fingerprints, then, are not assays for a disease; they're assays for all disease. We've looked at two organ systems, the brain and the liver, and we've verified these principles in general ways.

We’d like to be able to identify fiftyish organ-specific blood proteins from each of the organs, and then be able to measure them so we could have an organ-wide assay. We’d like to give you a very broad-spectrum screen of all the different major organs in disease. The challenge is to be able to do the measurements in the blood, because that’s the only organ that’s readily accessible; that’s the only organ that bathes all other organs; and it’s an organ whose fluid properties make it easily manipulable for measurement and so forth.
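In computational terms, the fingerprint idea treats each blood sample as a vector of roughly 50 organ-specific protein concentrations and asks whether organ states are separable in that space. The sketch below is a hypothetical illustration with simulated data and a generic classifier; none of it reflects the institute's actual assay or markers.

```python
# Minimal sketch (simulated data, hypothetical profiles) of the "blood
# fingerprint" idea: represent each sample as a vector of ~50 organ-specific
# protein concentrations and classify the organ's state from that vector.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_markers = 50  # Hood's "fiftyish" organ-specific proteins

# Simulate log-concentrations: healthy livers cluster around one profile,
# diseased livers around a shifted profile.
healthy_profile = rng.normal(size=n_markers)
disease_shift = rng.normal(scale=1.5, size=n_markers)

healthy = healthy_profile + rng.normal(scale=1.0, size=(200, n_markers))
diseased = healthy_profile + disease_shift + rng.normal(scale=1.0, size=(200, n_markers))

X = np.vstack([healthy, diseased])
y = np.array([0] * 200 + [1] * 200)  # 0 = healthy, 1 = diseased

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```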

TR: What progress have you made?

LH: This is really a challenging project. We’ve been collaborating with James Heath at Caltech for about four years. We have a little nanochip, if you will, that can make 20 different measurements of blood proteins. It can make the measurements in about five minutes’ time and is as sensitive as any assay out there right now. And it will probably operate across six to eight orders of magnitude [in terms of] concentration difference–that’s really important if you want to make blood measurements, because a big organ like the liver puts a lot of proteins in the blood, and a small organ like the beta cells of the pancreas puts out very few. You have to be able to span many orders of magnitude if you’re going to make appropriate measurements.
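For a sense of why dynamic range matters, the back-of-the-envelope sketch below uses rough, illustrative plasma concentrations (my numbers, not Hood's) to show how quickly the span between an abundant liver-derived protein and a scarce beta-cell marker reaches eight orders of magnitude.

```python
# Back-of-the-envelope illustration of the dynamic-range problem. The
# concentrations below are rough, illustrative values, not measurements from
# the interview: abundant plasma proteins sit near fractions of a mg/mL,
# while markers shed by tiny tissues can sit near pg/mL.
import math

concentrations_g_per_mL = {
    "abundant liver-derived protein (~0.1 mg/mL)": 1e-4,
    "typical mid-range marker (~ng/mL)":           1e-9,
    "scarce beta-cell marker (~pg/mL)":            1e-12,
}

values = list(concentrations_g_per_mL.values())
span = math.log10(max(values) / min(values))
print(f"dynamic range to cover: about {span:.0f} orders of magnitude")  # ~8
```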

TR: When can we expect this nanochip?

LH: There are two challenges with the chip that we're currently facing. One, getting good antibody reagents is really difficult and really expensive, so we're going to explore alternative chemistries for creating protein-capture agents. The second big challenge for the nanochips is learning how to manufacture them at a scale that will bring these measurements down to a few pennies per protein; the cost we have now is on the order of $50 per chip. And of course manufacturing also matters for quality control and the reproducibility of chip features. We're optimistic that both of those problems can be solved and that we can scale the chips up to make thousands of measurements.
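A quick sanity check of the cost target, using only the figures Hood gives ($50 per chip, 20 proteins today, "a few pennies per protein" as the goal), shows why thousands of measurements per chip is the natural scale; the implied measurement count is my extrapolation, not a stated plan.

```python
# Arithmetic on the cost target, using only figures from the interview.
# The implied measurement count is an extrapolation, not a stated plan.
chip_cost_usd = 50.0
proteins_per_chip_now = 20

cost_per_protein_now = chip_cost_usd / proteins_per_chip_now
print(f"today: ${cost_per_protein_now:.2f} per protein")  # $2.50

target_cost_per_protein = 0.03  # "a few pennies"
# At the same chip price, hitting the target needs roughly this many
# measurements per chip:
needed = chip_cost_usd / target_cost_per_protein
print(f"needed: ~{needed:.0f} measurements per chip")  # ~1700, i.e. thousands
```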
