Scientific discoveries made using machine learning cannot be automatically trusted, a statistician from Rice University has warned.
A growing trend: Scientists across many disciplines increasingly use machine-learning systems to refine and speed up data analysis, helping them make new discoveries faster—for example, uncovering new pharmaceutical compounds.
The problem? Genevera Allen, associate professor at Rice University, has warned that the adoption of machine-learning techniques is contributing to a growing “reproducibility crisis” in science, in which a worrying number of research findings cannot be repeated by other researchers, casting doubt on the validity of the initial results. “I would venture to argue that a huge part of that does come from the use of machine-learning techniques in science,” Allen told the BBC. In many situations, she argued, discoveries made this way shouldn’t be trusted until they have been independently checked.
On the plus side: There is work under way on the next generation of machine-learning systems to make sure they’re able to assess the uncertainty and reproducibility of their predictions, Allen said.