Searching Rat Brains for Clues on How to Make Smarter Machines
A $28 million government project is betting that tapping rodent brains will pay big dividends for AI research.
Larger data sets and faster computers have enabled a recent flurry of progress—and investment—in artificial intelligence. David Cox of Harvard thinks the next big jump will depend on understanding what happens inside the head of a rat when it plays video games.
Cox leads a $28 million project called Ariadne, funded by the U.S. Office of the Director of National Intelligence, that is looking for clues in mammalian brains to make software smarter. “This is a huge, moonshot-like effort to go into the brain and see what clues and tricks are hiding there for us to find,” he said today at EmTech MIT 2016.
Recent progress in tasks such as image recognition and translation sprang from putting more computing power behind a technique known as deep learning, which is loosely inspired by neuroscience. But Cox points out that despite powering better speech recognition and mastering the game of Go, this software still isn’t very smart.
For example, it’s easy to modify photos so that deep-learning software sees things that aren’t there. Cox showed a photo of MIT Technology Review’s editor in chief subtly altered to appear to image recognition software as an ostrich. (You can try this trick yourself using this online demo from Cox’s lab.)
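The trick described above relies on adversarial perturbations: tiny, targeted changes to pixel values that shift a model's output. The article doesn't specify the method used, but the idea can be sketched with a fast-gradient-sign-style perturbation on a toy linear "classifier" (the model and data here are invented for illustration):

```python
import numpy as np

# Toy linear "image classifier": score = w . x
rng = np.random.default_rng(0)
w = rng.normal(size=100)        # stand-in model weights
x = rng.normal(size=100)        # a stand-in "clean image"

# Fast-gradient-sign-style attack: nudge every pixel a tiny
# amount in the direction that most increases the score.
# (For score = w . x, the gradient w.r.t. x is simply w.)
eps = 0.1
x_adv = x + eps * np.sign(w)

print(np.dot(w, x))      # score on the clean input
print(np.dot(w, x_adv))  # score rises, though x_adv looks almost identical
```

The perturbation is bounded by `eps` per pixel, which is why the altered photo looks unchanged to a human while the model's output flips.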
Cox also highlighted how the software requires thousands of labeled examples to recognize a new type of object. Human children can learn to recognize a new object, such as a new type of tool, with a single example.
Cox said that looking more closely at brains is the best way to address those shortcomings. “We think there’s still something brains can offer beyond this initial loose inspiration,” he said.
The Ariadne project only started in January, but Cox is already challenging rats with video games designed to exercise their visual recognition skills. The researchers use newly developed microscopes to watch the activity of cells in the brain and try to figure out how the neurons interpret the world.
“This is like a wiretap on a huge number of cells in the brain; you’re watching the rat have a thought,” said Cox. “We can ask unprecedented questions about how the brain is doing computation.”
Another strand of the project involves trying to reconstruct the connectivity and structure of neurons in rat brains in 3-D, using stacks of 30-nanometer slices of brain tissue imaged with an electron microscope.
The 3-D models that emerge are fiendishly complex. Neuroscientists still don’t know what all the different cells do. But Cox says their bewildering intricacy is encouraging, because it suggests brains can still teach us much more about how to build artificial intelligence.
“This is one of those smoking guns—there are way more things going on,” he said. “Our hope is that by understanding this, we can make deep-learning systems that are closer to what the brain does.”