Can Pfizer Deliver?
Don’t bother mentioning “the genomic revolution” these days in the halls of Pfizer’s sprawling research and development facility in Groton, CT. Yes, the company’s scientists will acknowledge, the latest techniques in genomics and proteomics are interesting and occasionally useful; just don’t expect them to revolutionize drug discovery anytime soon. And as Pfizer’s R&D executives are equally quick to point out, the new technologies are expensive: very expensive.
The impatience of Pfizer’s researchers with the hype surrounding genomics is not surprising. After a decade of promises and millions of dollars of investments in high-powered new genomic tools, pharmaceutical companies are mired in their most prolonged and painful dry spell in years. Despite skyrocketing R&D spending, which reached $32 billion in 2002, the U.S. industry’s output of new drugs has been spiraling downward since 1996. In 2002, the U.S. Food and Drug Administration approved only 17 “new molecular entities” (the agency’s jargon for drugs based on a novel active ingredient), the lowest number since 1983, when U.S. drug companies spent only about $3 billion on research and development. (Through the end of October 2003, the FDA had approved 18 new molecular entities.) “There is a research productivity problem, no doubt about it, and it’s getting worse,” says biologist Anthony Sinskey, codirector of MIT’s Program on the Pharmaceutical Industry.
Pfizer’s R&D headquarters in New London, CT, sits across the Thames River from its Groton research facility, overlooking a large estuary flowing into Long Island Sound. But the idyllic scene of sailboats outside his office window does little to soften the intensity of Stephen Williams, Pfizer’s executive director of clinical technology. Williams’s job is, after all, to worry about failures, especially very expensive ones.
Although failure is a fact of life for drugmakers, the timing of such failures is key. If a compound proves ineffective or possibly toxic while still in the lab, it’s no big deal. But if a compound survives early lab tests only to fail years later during large-scale and expensive human testing, it can cause losses of tens of millions, or even hundreds of millions, of dollars, not to mention the time wasted that could have been spent developing other drugs. Less than 20 percent of compounds beginning human clinical testing survive to the end, and, says Williams, the survival rates “for really novel drugs are worse.” The “horrifically expensive” failures, he adds, are those that occur in Phase III trials, the final set of human clinical tests that often involves thousands of patients in studies that can last years.
One promising means of avoiding these failures is more accurate tests that detect, at an early stage, subtle biological changes going on in a patient that reflect whether a drug is succeeding, failing, or perhaps proving toxic. Such “biomarkers” can help researchers prove a drug is working. But they can also serve as a cheap, easy, and more effective way to weed out drug candidates. “Just by identifying early and cheaply the failures, you make the [productivity] problem go away,” maintains Williams.
The early detection of liver toxicity is one pressing challenge. According to Williams, Pfizer has wasted about $2 billion over the last decade on drugs that failed in advanced human testing (or, in a few instances, were forced off the market) because of liver toxicity problems. Consider the antibiotic drug Trovan, a treatment for severe infections. Pfizer launched the medicine in early 1998 to much fanfare and amid predictions that it would be the company’s next blockbuster. Later that year came the news that all drug manufacturers dread: the medicine was apparently causing potentially fatal liver damage in some patients. In 1999 the FDA severely limited use of the once promising medicine.
A potential method for avoiding a recurrence of this nightmare is to use advanced software to spot otherwise invisible biomarkers. Pfizer mathematicians have developed algorithms to parse out subtle signs of liver toxicity that are missed in conventional analysis of blood tests performed during clinical trials. Normally, reviews of such tests would flag only highly elevated levels of a particular factor. Minor changes are ignored as long as they fall within the normal range. But the new algorithms look for certain patterns within these minor changes. Preliminary testing on a small number of failed drugs showed that such patterns did, in fact, exist, says Williams. To validate the findings, the researchers now plan to go back over the company’s vast database of blood tests, which covers years of clinical trials and millions of patients, to see if they can further pinpoint patterns correlated with toxicity.
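The idea of mining for patterns inside the normal range can be illustrated with a toy sketch. Everything here is hypothetical: the reference range is a commonly cited one for the liver enzyme ALT, but the trend rule and patient data are invented for illustration, and Pfizer's actual algorithms are not public.

```python
# Hypothetical sketch: flag a sustained upward drift in a liver enzyme
# (ALT) across serial blood draws, even when every single value stays
# inside the normal reference range. A conventional review, which only
# flags out-of-range values, would miss this patient entirely.

NORMAL_ALT_RANGE = (7, 56)  # U/L, a commonly cited adult reference range


def flags_subtle_trend(alt_values, min_rise_fraction=0.5):
    """Return True if ALT rises steadily by more than min_rise_fraction
    of the baseline value while remaining in the normal range throughout."""
    if len(alt_values) < 3:
        return False
    in_range = all(NORMAL_ALT_RANGE[0] <= v <= NORMAL_ALT_RANGE[1]
                   for v in alt_values)
    monotonic = all(b >= a for a, b in zip(alt_values, alt_values[1:]))
    relative_rise = (alt_values[-1] - alt_values[0]) / alt_values[0]
    return in_range and monotonic and relative_rise > min_rise_fraction


# Steadily rising, yet never above 56 U/L: flagged by the trend rule.
print(flags_subtle_trend([20, 26, 33, 41]))  # True
# Stable values: no flag.
print(flags_subtle_trend([22, 21, 23, 22]))  # False
```

The point of the sketch is only the contrast between threshold-based review (is any value above 56?) and pattern-based review (is there a consistent drift?); a real algorithm would work across many analytes and many patients at once.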
This project will be complex and costly, but if Pfizer could recover even a substantial fraction of the $2 billion it spent on liver-damaging drugs, the savings would roughly equal the annual revenues of a new blockbuster product. And for patients, it could mean avoiding the suffering caused by another Trovan.
Better biomarkers could also help find drugs for chronic, progressive diseases like Parkinson’s, in which symptoms can take years to develop, and for mood disorders like depression, whose symptoms are difficult to quantify. Because it’s hard to measure the effectiveness of drugs for these diseases, drugmakers are often reluctant to even attempt to develop them. “If you don’t have a good way of measuring [the progress] of a disease, it is almost impossible to develop a drug for it,” Williams says.
One unconventional but simple biomarker that could help is the sound of a patient’s voice. Pfizer researchers are trying to leverage recent scientific findings that measurable changes in a person’s voice can predict his or her sleepiness; they hope to extend that finding to correlate changes in voice to mood swings in patients with depression or to brain damage caused by neurodegenerative diseases. Pfizer’s preliminary studies indicate a patient’s mood could in fact be gauged by changes in his or her voice. Likewise, the company has encouraging results suggesting that researchers can measure vocal changes in Parkinson’s patients. “It is pretty obvious that there are changes,” says Williams. “You can hear them. But we showed that we could measure changes before they became audible.”
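One standard acoustic measure in voice research is "jitter," the cycle-to-cycle variation in pitch period, which has long been studied in Parkinson's patients. The article does not say which measures Pfizer used, so the following is only an illustrative sketch with invented numbers.

```python
# Illustrative only: "local jitter" is the mean absolute difference
# between consecutive pitch periods, expressed as a percentage of the
# mean period. The pitch periods below (in milliseconds) are invented.

def jitter_percent(periods_ms):
    """Local jitter: cycle-to-cycle pitch-period variation, in percent."""
    diffs = [abs(b - a) for a, b in zip(periods_ms, periods_ms[1:])]
    mean_diff = sum(diffs) / len(diffs)
    mean_period = sum(periods_ms) / len(periods_ms)
    return 100 * mean_diff / mean_period


steady = [5.0, 5.01, 4.99, 5.0, 5.01]     # a stable voice
irregular = [5.0, 5.3, 4.8, 5.4, 4.7]     # a tremulous voice
print(jitter_percent(steady))             # well under 1 percent
print(jitter_percent(irregular))          # roughly 10 percent
```

An instrument computing a measure like this could, in principle, register a rise in vocal irregularity long before a listener would hear anything unusual, which is the effect Williams describes.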
The availability of such inexpensive means of measuring whether a compound is having any effect on a disease could be a boon for researchers testing drugs for such progressive conditions as Parkinson’s and Alzheimer’s. Instead of waiting, say, five to 10 years as symptoms wax or wane, researchers could quickly and easily determine whether a drug is working. Not only would that allow them to test greater numbers of different compounds, it would, says Williams, encourage far more research on diseases that have long been “handicapped by difficulties in measuring them.”
The most obvious way to improve the chances of a compound’s surviving the drug development process, though, is to start off with the right molecule in the first place. Traditionally, this has meant a mix of good old-fashioned intuition, a vast knowledge of different compounds, and lots of chemical ingenuity.
Take Pfizer’s billion-dollar arthritis drug Celebrex. In the early 1990s, John Talley was a medicinal chemist at G. D. Searle, the drug unit of Monsanto, when university researchers discovered the gene that makes an enzyme thought to be involved in causing inflammation. (Pharmacia merged with Monsanto in 2000; in turn, Pfizer bought Pharmacia early last year.) The enzyme was called cox-2, and the finding ignited an industrywide race to produce an arthritis drug that would inhibit it. It’s at this point in the story that Talley grows animated: this is when the chemistry really begins.
At a scientific conference, a Searle colleague of Talley’s heard about a compound DuPont researchers had synthesized that seemed to have anti-inflammatory properties. For various reasons, it clearly was not the right compound to make into an anti-arthritis drug, but Talley realized that it could be a starting point, providing critical clues to the chemistry of a drug that might serve as a cox-2 inhibitor. Talley and his coworkers began to chemically tear apart the DuPont molecule to figure out what gave it its biological activity. Armed with that insight, the Searle chemists then began to systematically design a new molecule that would both be effective in blocking cox-2 and have the properties required of any drug, such as lack of toxicity. After more than a year and a half of testing, redesigning, and tweaking more than 2,500 compounds, Talley and his coworkers finally produced a suitable molecule. “The eureka moment comes when you’ve made the compound,” says Talley, who is now vice president of drug discovery at Microbia, a Cambridge, MA-based biotech startup. If you can’t make the right compound, he says, the biological knowledge is “just a cool idea.”
Talley’s belief in chemistry as the linchpin of drug discovery is widely shared by Pfizer’s researchers and R&D executives. Genomics and other biological tools may provide new disease targets, but the hard (and expensive) job is still to come up with the right compound. “Genomics is not the savior of the industry. The renaissance is in chemistry,” says Rod MacKenzie, Pfizer’s vice president of discovery research in Ann Arbor, MI.
Pfizer considers its huge library of compounds, housed in a large windowless room at its Groton research labs, the Sistine Chapel of that renaissance. Like any library, this one tells of a collective history-of numerous failures, a few spectacular successes, and most commonly, long-forgotten efforts that never made much of an impression either way. In this library, however, the tales are told in small glass vials, each neatly labeled with a bar code that describes the properties of the compound within and how it was made. Pfizer’s chemists around the world can request a chemical, and a robotic librarian scoots down the aisle, retrieving the vial and neatly depositing it on a tray, where it waits to be shipped off.
Pfizer is spending $500 million over the next five years to upgrade and enlarge this collection of millions of druglike chemicals. Not only will the library give Pfizer’s chemists ideas and lessons on what works and what doesn’t, but it will provide the seed corn for a highly automated, ultrafast new system for discovering drugs. In essence, the system will perform many of the same tasks (chemically designing, testing, and refining a molecule) that Talley and his coworkers handled in inventing Celebrex. But instead of relying on instinct and intuition, the drug discovery machines will rely on automation and brute computing power to quickly perform and interpret a vast number of experiments.
While automation has become routine in pharmaceutical labs, MacKenzie says high-throughput instruments have been limited in the types of chemical reactions they can carry out. That, he says, has changed recently, and automated machines can now produce many more of the types of compounds that interest drug developers. Throw in improvements in the rapid screening of compounds for biological activity and toxicity, as well as improved computational design tools, and an automated system could soon take over much of the drug discovery process, says MacKenzie.
Here’s how it might work. A molecule is plucked out of the company’s library. The automated system screens it against multiple disease targets and tests it for such things as toxicity. The results are fed back into a computational design and synthesis process, which tweaks the structure of the molecule. The cycle repeats, continually optimizing the compounds based on the results of the screening and testing. Pieces of such a system are already in place, says MacKenzie, and this year Pfizer researchers will begin to link them together into a “closed loop” technology. “It’s the old traditional process for doing drug discovery, but it’s enabled in a parallel world to move incredibly fast,” says MacKenzie. “It is now ready to change the paradigm of drug discovery.”
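The screen, tweak, and repeat cycle MacKenzie describes is, structurally, an iterative optimization loop. A minimal sketch, with an invented scoring function standing in for the biological screens and random perturbation standing in for computational redesign (real systems synthesize and assay actual compounds):

```python
# Toy "closed loop": score a candidate, feed the result back into a
# design step that tweaks it, and repeat, keeping only improvements.
# A candidate here is just a pair of numeric properties; the assay and
# tweak functions are invented for illustration.
import random


def assay(candidate):
    # Hypothetical screen: reward potency, penalize a toxicity proxy.
    potency, toxicity = candidate
    return potency - 2.0 * toxicity


def tweak(candidate, step=0.1):
    # Stand-in for computational design: perturb each property slightly.
    return tuple(x + random.uniform(-step, step) for x in candidate)


def closed_loop(start, iterations=200):
    """Screen, redesign, and rescreen, keeping the best candidate so far."""
    best, best_score = start, assay(start)
    for _ in range(iterations):
        trial = tweak(best)
        score = assay(trial)
        if score > best_score:  # the screening result drives the next design
            best, best_score = trial, score
    return best, best_score


best, score = closed_loop((0.5, 0.5))
print(best, score)  # score can only improve on the starting candidate
```

The design choice that makes the loop "closed" is that each round's screening results directly seed the next round's design, with no chemist in between; the speedup MacKenzie describes comes from running many such cycles in parallel.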
No one is sure what exactly lies behind the drug industry’s productivity slump, and few are ready to hazard a guess about when it will end. But Kenneth Kaitin, director of Tufts University’s Center for the Study of Drug Development, points to several likely causes, including management distractions brought about by a rash of industry mergers and acquisitions and a tightening in FDA regulatory requirements that has made it more difficult to get a drug to market. There is also a suspicion, says Kaitin, that the drug industry spent too much too soon on new biotechnologies, like genomics and proteomics. “It led to increased cost without an increase in productivity,” asserts Kaitin. Still, he adds, “there is no going back. The technology is not going away. You need to find ways to efficiently utilize it.”
Indeed, it seems certain that any replenishment of the industry’s R&D pipeline will be tied to drug companies’ learning to better take advantage of these new biological technologies, which have given researchers an unprecedented window into disease mechanisms and how the body works. But as Pfizer and others have learned over the last few years, turning this wealth of information into actual pills is a tough challenge. And future success will likely depend, at least in part, on how well companies are able to use emerging tools like biomarkers and automated drug-discovery systems to make sense of the increasingly complex biological data. The challenge, as Williams puts it, is to find efficient ways to sort “the gold from the lead.”
While increasing the productivity of drug discovery is an industrywide challenge, it is hard to overstate the importance of Pfizer in getting it right. At the disposal of this giant organization is an annual R&D budget of $7 billion and an exploding cache of new knowledge of human genetics and biology. Pfizer’s attempt to turn these resources into an efficient flow of new and innovative medicines over the next few years is an experiment well worth watching. No one can be sure of the prognosis. But the results are sure to affect the health of us all.