Demis Hassabis knows a thing or two about artificial intelligence: he founded the London-based AI startup DeepMind, which was purchased by Google for $650 million back in 2014. Since then, his company has wiped the floor with humans at the complex game of Go and begun taking steps toward crafting more general AIs.
But now he’s come out and said that he believes the only way for artificial intelligence to realize its true potential is with a dose of inspiration from human intellect.
Currently, most AI systems are based on layers of mathematics that are only loosely inspired by the way the human brain works. But different types of machine learning, such as speech recognition or identifying objects in an image, require different mathematical structures, and the resulting algorithms are only able to perform very specific tasks.
Building AI that can perform general tasks, rather than niche ones, is a long-held desire in the world of machine learning. But the truth is that expanding those specialized algorithms to something more versatile remains an incredibly difficult problem, in part because human traits like inquisitiveness, imagination, and memory don’t exist or are only in their infancy in the world of AI.
In a paper published today in the journal Neuron, Hassabis and three coauthors argue that only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve.
First, they say, better understanding of how the brain works will allow us to create new structures and algorithms for electronic intelligence. Second, lessons learned from building and testing cutting-edge AIs could help us better define what intelligence really is.
The paper itself reviews the history of neuroscience and artificial intelligence to understand the interactions between the two. It argues that deep learning, which uses layers of artificial neurons to understand inputs, and reinforcement learning, where systems learn by trial and error, both owe a great deal to neuroscience.
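The two techniques named above can be illustrated concretely. The sketch below is not code from the paper; it is a minimal, illustrative example, with function names and numbers chosen purely for demonstration: a single layer of artificial neurons (weighted sums passed through a nonlinearity), and a tabular Q-learning update, a standard trial-and-error rule in reinforcement learning.

```python
import math

def dense_layer(inputs, weights, biases):
    """One layer of artificial neurons: each output is a weighted
    sum of the inputs plus a bias, squashed by a sigmoid."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid activation
    return outputs

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Trial-and-error learning: after taking `action` in `state` and
    observing `reward` and `next_state`, nudge the stored value
    estimate toward the observed return."""
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Forward pass through one tiny layer: two inputs, two neurons.
activations = dense_layer([1.0, -1.0], [[0.5, 0.5], [1.0, 0.0]], [0.0, 0.0])

# One step of trial-and-error value learning.
q = {"s0": {"a": 0.0}, "s1": {"a": 1.0}}
q_update(q, "s0", "a", reward=1.0, next_state="s1")
```

Deep networks stack many such layers and learn the weights from data; reinforcement-learning agents repeat updates like the one above across many trials. Both ideas, as the paper notes, trace back to models of biological neurons and of reward-driven learning in animals.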
But it also points out that more recent advances haven’t leaned on biology as effectively, and that a general intelligence will need more human-like characteristics—such as an intuitive understanding of the real world and more efficient ways of learning. The solution, Hassabis and his colleagues argue, is a renewed “exchange of ideas between AI and neuroscience [that] can create a 'virtuous circle' advancing the objectives of both fields.”
Hassabis is not alone in this kind of thinking. Gary Marcus, a professor of psychology at New York University and former director of Uber’s AI lab, has argued that machine-learning systems could be improved using ideas gathered by studying the cognitive development of children.
Even so, implementing those findings digitally won’t be easy. As Hassabis explains in an interview with The Verge, artificial intelligence and neuroscience have become “two very, very large fields that are steeped in their own traditions,” which makes it “quite difficult to be expert in even one of those fields, let alone expert enough in both that you can translate and find connections between them.”