
Why does AI being good at math matter?

AI systems that can solve complex math could allow us to build more powerful AI tools.

A digital concept of a brain over an abstract collage of math problems.
Sarah Rogers/MITTR | Getty

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Last week the AI world was buzzing over a new paper in Nature from Google DeepMind, in which the lab managed to create an AI system that can solve complex geometry problems. Named AlphaGeometry, the system combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions, writes my colleague June Kim. You can read more about AlphaGeometry here.
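To get a feel for what "using symbols and logical rules to make deductions" means, here is a toy forward-chaining deducer in Python. This is purely illustrative and is not AlphaGeometry's actual engine: the facts, the single transitivity rule, and all the names are made up for the sketch.

```python
# Toy symbolic deduction: start from known facts and repeatedly apply a
# logical rule until no new facts can be derived (a fixed point).
# Facts are tuples like ("parallel", "AB", "CD"), meaning line AB ∥ line CD.

facts = {("parallel", "AB", "CD"), ("parallel", "CD", "EF")}

def apply_rules(facts):
    """One rule for the sketch: parallelism is transitive.
    If X ∥ Y and Y ∥ Z, deduce X ∥ Z."""
    new = set()
    for (r1, x, y) in facts:
        for (r2, y2, z) in facts:
            if r1 == r2 == "parallel" and y == y2 and x != z:
                new.add(("parallel", x, z))
    return new - facts  # only genuinely new deductions

# Chain deductions until nothing new appears.
while True:
    derived = apply_rules(facts)
    if not derived:
        break
    facts |= derived

print(("parallel", "AB", "EF") in facts)  # → True: deduced, never stated
```

A real geometry engine uses a far richer rule set (angles, congruence, circles) and tracks proofs, not just facts, but the core loop — apply rules, collect new conclusions, repeat — is the same idea.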

This is the second time in recent months that the AI world has gotten excited about math. The rumor mill went into overdrive last November, when reports suggested that the boardroom drama at OpenAI, which saw CEO Sam Altman temporarily ousted, was caused by a powerful new AI breakthrough. The system in question was reportedly called Q* and could solve complex math calculations. (The company has not commented on Q*, and we still don’t know whether there was any link to Altman’s ouster.) I unpacked the drama and hype in this story.

You don’t need to be really into math to see why this stuff is potentially very exciting. Math is really, really hard for AI models. Complex math, such as geometry, requires sophisticated reasoning skills, and many AI researchers believe that the ability to crack it could herald more powerful and intelligent systems. Innovations like AlphaGeometry show that we are edging closer to machines with more human-like reasoning skills. This could allow us to build more powerful AI tools that help mathematicians solve equations, and perhaps lead to better tutoring tools.

Work like this can help us use computers to reach better decisions and be more logical, says Conrad Wolfram of Wolfram Research. The company is behind WolframAlpha, an answer engine that can handle complex math questions. I caught up with him last week in Athens at EmTech Europe. (We’re hosting another edition in London in April! Join us? I’ll be there!) 

But there’s a catch. In order for us to reap the benefits of AI, humans need to adapt too, he says. We need to have a better understanding of how the technology works so we can approach problems in a way that computers can solve. 

“As computers get better, humans need to adjust to this and know more, get more experience about whether that works, where it doesn’t work, where we can trust it, or we can’t trust it,” Wolfram says. 

Wolfram argues that as we enter the AI age with more powerful computers, humans need to adopt “computational thinking,” which involves defining and understanding a problem and then breaking it down into pieces so that a computer can calculate the answer.
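As a small illustration of that decomposition, consider turning a fuzzy everyday question ("what would this loan cost me each month?") into precise pieces a computer can calculate. The function, its steps, and the numbers below are invented for the example, not drawn from Wolfram's work.

```python
# "Computational thinking" in miniature: define the problem precisely,
# break it into steps, and let the machine do the arithmetic.

def monthly_payment(principal, annual_rate, years):
    """Standard amortized-loan payment, decomposed into explicit steps."""
    r = annual_rate / 12           # step 1: convert to a monthly rate
    n = years * 12                 # step 2: count the payment periods
    # step 3: apply the amortization formula
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical numbers: a $200,000 loan at 5% over 30 years.
payment = monthly_payment(200_000, 0.05, 30)
print(round(payment, 2))
```

The value of framing it this way is that each step is inspectable: if the answer looks wrong, you can check the rate conversion, the period count, and the formula separately — which is exactly the kind of "knowing where we can trust it" Wolfram describes.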

He compares this moment to the rise of mass literacy in the late 18th century, which put an end to the era when just the elite could read and write.  

“The countries that did that first massively benefited for their industrial revolution ... Now we need a mass computational literacy, which is the equivalent of that.” 

Deeper Learning

How satellite images and AI could help fight spatial apartheid in South Africa  

Raesetje Sefala grew up sharing a bedroom with her six siblings in a cramped township in the Limpopo province of South Africa. The township’s inhabitants, predominantly Black people, had inadequate access to schools, health care, parks, and hospitals. But just a few miles away in Limpopo, white families lived in big, attractive houses, with easy access to all these things. The physical division of communities along economic and racial lines is just one damaging inheritance from South Africa’s era of apartheid.

Fixing the problem using AI: Alongside computer scientists Nyalleng Moorosi and Timnit Gebru at the nonprofit Distributed AI Research Institute (DAIR), which Gebru set up in 2021, Sefala is deploying computer vision tools and satellite images to analyze the impacts of racial segregation in housing, with the ultimate hope that their work will help to reverse it. Read more from Abdullahi Tsanni.

Bits and Bytes

A new AI-based risk prediction system could help catch deadly pancreatic cancer cases earlier
The system outperformed current diagnostic standards. One day it could be used in a clinical setting to identify patients who might benefit from early screening or testing, helping catch the disease earlier and save lives. (MIT Technology Review)

Meta says it is developing open-source AGI
Et tu, Zuck? Meta is now an AGI company. In an Instagram Reels video, CEO Mark Zuckerberg announced a new long-term goal to build open-source “full general intelligence.” The company is doing this by bringing its generative AI and AI research teams closer together, and building the next version of its Llama model and a massive computing infrastructure to support that. (Meta)

Read the full text of the AI Act
The EU reached a political agreement on the AI Act late last year. Negotiators are still finalizing technical details of the bill, and it still needs to go through a round of approvals before it enters into force. Euractiv’s Luca Bertuzzi got hold of the nearly 900-page final text of the bill, along with a comparison with earlier drafts. Here is a simpler version of the bill.

Sharing deepfake nudes could soon become a federal crime in the US
The bipartisan Preventing Deepfakes of Intimate Images Act was introduced in the US last week. It could outlaw the nonconsensual sharing of digitally altered nude images. It was prompted by an incident at a New Jersey high school where teenage boys were sharing AI-generated images of their female classmates. (Wall Street Journal)

A “shocking” amount of the web is already AI-translated trash
The internet is already full of machine-translated garbage, particularly in languages spoken in Africa and the Global South, researchers at Amazon Web Services found. Over half the sentences on the web have been machine-translated into other languages. This could have severe consequences for the quality of data used to train future AI models. I wrote about this phenomenon all the way back in 2022. (Vice)
