Ever heard of Eroom’s Law? It’s a rule—and a joke—in the pharmaceutical industry: the cost of developing new drugs is getting higher, not lower, despite improvements in science and technology.
The name is a literal reverse of Moore’s Law, the famous dictum of exponential growth. In fact, according to Daphne Koller of Insitro, the cost of bringing a new drug to market has risen from $200 million 30 years ago to $2.5 billion today.
Speaking at EmTech Digital, an event organized by MIT Technology Review, Koller explained how leading researchers and scientists were trying to use AI algorithms and machine learning to reverse that drift.
“This is a problem of prediction,” she said. “And prediction is what machine learning has become really good at. So is there a role that machine learning can play in driving the costs down?”
While discovery costs have risen because of a variety of factors, including regulatory oversight, Koller added that Insitro hoped to have the systems and data in place to make its first discoveries by 2021.
However, she warned that machine learning would not solve all the problems of drug discovery—especially if algorithms are fed bad inputs or aimed at the wrong targets. She pointed to the failure of many drugs developed to treat Alzheimer’s, many of them built on the belief that the disease is caused by the buildup of a protein called beta-amyloid. (After Roche called off two more trials earlier this year, a consensus is building that beta-amyloid is correlated with Alzheimer’s rather than a cause of it.)
“Machine learning is a very two-edged sword, and the more powerful it is, the easier it is to fall into those gaps,” she said.
Avoiding such pitfalls requires better data at the kind of scale seen in AI fields other than health care, Koller suggested. “The kind of data sets we’re talking about in biology don’t even exist,” she said.
That’s largely because of the fierce privacy protections that surround people’s medical data. But Koller said such measures were unnecessarily blocking innovation—and she proposed a solution to make things move faster.
“We can’t even ask [patients] to opt in to having their data shared with organizations trying to create better treatments,” she said. “If you made it the default that people’s data was shared with privacy protections, we’d have a lot more data.
“Some countries in Europe have created a system where organ donation is an opt-out rather than opt-in—and it turns out that has quadrupled the level of organ donations without limiting the control that people have over their own bodies.”
It’s not just drug discovery where new AI-driven techniques are being deployed, either. Artificial intelligence is having a significant impact on the way new chemical compounds and materials are invented, too.
Jill Becker, the CEO of Kebotix, a materials discovery startup that launched publicly at the end of 2018, told the conference that she was investing heavily in machine-learning techniques to identify potential new chemicals and materials.
Becker said that she was specifically steering away from pharmaceuticals because of the shadow of Eroom’s Law—and the regulatory oversight, in particular.
“We’re hoping to create 100 molecules a week; a hundred top-notch molecules,” she said. “And there are two kinds of chemists: those that like to make drugs and those that like to blow things up. I am one of the latter. I have zero interest in waiting for the FDA. I have no patience.”