Three scientists who kickstarted an AI revolution by studying the learning abilities of large artificial neural networks have been awarded the most prestigious accolade in computer science: the $1 million Turing Award.
For decades, Geoffrey Hinton, Yann LeCun, and Yoshua Bengio persevered with neural networks when the rest of the AI community viewed them as a dead end, preferring to focus instead on symbolic approaches (meaning rules of logic that are encoded by hand rather than learned).
Big breakthrough: In 2012, these deep neural networks suddenly proved to be astonishingly good at image recognition. The key was feeding them huge quantities of training data and running them on powerful graphics processing chips, which are well suited to the parallelized computations required.
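The contrast with hand-coded symbolic rules can be seen in a toy sketch (far simpler than the 2012 deep networks): a single artificial neuron that learns the logical AND function from examples, rather than having the logic encoded by hand.

```python
import math
import random

# Toy illustration: one artificial neuron learns AND from examples.
random.seed(0)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # learned weights
b = 0.0                                             # learned bias
lr = 0.5                                            # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))   # sigmoid activation

# Gradient descent on squared error: the "rules" emerge from data.
for _ in range(5000):
    for x, y in data:
        p = predict(x)
        grad = (p - y) * p * (1 - p)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

print([round(predict(x)) for x, _ in data])  # learned outputs for the 4 inputs
```

The deep networks that triumphed in 2012 stacked millions of such units; the inner loop above is essentially repeated matrix multiplication, which is why GPUs suit it so well.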
Network effects: Deep learning is now just about everywhere. It is used to process images on Facebook, target ads on Google, and help self-driving cars perceive the world around them. Non-tech companies keen to make themselves more efficient are also rapidly adopting the technology.
Big guns: As the power of deep learning became apparent, Hinton and LeCun were quickly recruited by Google and Facebook. And the rise of the technology has raised hopes of breakthroughs in AI that long seemed like science fiction. LeCun, for instance, has led an effort within Facebook to develop not just powerful image and video recognition capabilities but also more capable personal assistants.
Rising risks: The rise of deep learning, and of AI in general, has happened so rapidly that many, including Hinton, LeCun, and Bengio, have sometimes wondered whether things are moving too quickly. Deep learning has supercharged face recognition and other forms of surveillance, for example. The technology has also consolidated power in the hands of businesses with lots of data and compute power.
Turing test: The moment is sweet vindication for the trio. For Hinton, it also reflects a fundamental truth about AI that goes back to the man who first speculated about intelligent machines. “One person who strongly believed the root of intelligence was learning was Turing,” he told MIT Technology Review.