A Less-Artificial Intelligence
A fair number of engineers working on artificial intelligence don’t care whether their systems resemble real brains or not, as long as they perform well. But even today’s best systems can generalize only if fed thousands of samples, and they can’t transfer their generalizations to new contexts. This leaves AI vulnerable to attackers, who can trick it with tiny tweaks to the data. Neuroscientist Andreas Tolias believes that brain-like features could fix these problems.
In 2016, he founded Neuroscience-Inspired Networks for Artificial Intelligence (NINAI), a tag team of neuroscientists, physicists, mathematicians, and computer scientists that’s part of a larger effort to understand neural function (see “Inside the Moonshot Effort to Finally Figure Out the Brain,” November/December 2017). Their relay race toward better AI starts in Tolias’s lab at Baylor College of Medicine, which records all the neurons firing inside a one-millimeter cube of a mouse’s cortex. In December, they captured the activity of 70,000 neurons in one mouse—a feat that would have been impossible without the two- and three-photon imaging techniques Tolias’s lab helped advance. The mice then go to the Allen Institute in Seattle, which slices and photographs their brains so a third team, at Princeton, can diagram which neurons are connected. By comparing this diagram with their recordings, Tolias’s lab deduces how the cells are influencing each other and what purpose each cell serves. If, as many neuroscientists suspect, the cortex is essentially built from a few common, repeated configurations of neurons, then explaining the activity in a one-millimeter cube could reveal the building blocks for all cognition.
Tolias has zeroed in on two key structural differences between brains and AI. First, a mouse’s brain has roughly a hundred types of neurons, while a typical AI network has only two or three varieties of artificial neurons. The brain’s extra cell types include interneurons, which can stop large groups of other neurons from firing; AI has no direct equivalent. Brains also have more types of connections between neurons than AI networks do. Most AI networks are “feed-forward,” meaning signals travel in only one direction, from one layer of the network to the next. Unlike real brains, these networks don’t have recurrent connections (which carry feedback signals from later layers back to earlier ones) or lateral connections (which link neurons within the same layer). The few types of AI networks with recurrent and lateral connections show promise, but the role of feedback in the cortex needs much more study. “The brain didn’t create all this recurrence for the fun of it,” says Tolias. He also suspects interneurons may be regulating the brain’s lateral connections to create the generalizing powers that AI lacks.
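To make the three connection types concrete, here is a toy sketch in plain Python (not drawn from NINAI’s actual models; the scalar weights and the inhibition rule are illustrative assumptions). It contrasts a one-way feed-forward pass, a recurrent unit whose own past activity feeds back in, and a lateral-inhibition step in which units within one layer suppress each other, roughly the kind of influence interneurons exert:

```python
def feed_forward(x, w1, w2):
    # Feed-forward: signals flow one way, input -> hidden -> output.
    hidden = max(0.0, w1 * x)            # ReLU-style unit
    return max(0.0, w2 * hidden)

def recurrent(x, w_in, w_back, steps=3):
    # Recurrent: the unit's previous activity h re-enters as input
    # on each time step, a feedback loop feed-forward nets lack.
    h = 0.0
    for _ in range(steps):
        h = max(0.0, w_in * x + w_back * h)
    return h

def lateral_inhibition(activities, strength=0.5):
    # Lateral: each unit in a layer is suppressed in proportion to its
    # neighbors' activity, so strong responses silence weaker ones.
    total = sum(activities)
    return [max(0.0, a - strength * (total - a)) for a in activities]
```

Running `lateral_inhibition([1.0, 0.2])` leaves the strong unit mostly intact and drives the weak one to zero, a crude version of the sharpening that within-layer inhibition can provide.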
Tolias hopes to use neuro-inspired components, including lateral connections, interneurons, and feedback, to build AI capable of one-shot learning, or generalizing from a single example. Success would be a big deal for AI, and for neuroscience too, since it would identify which features of neural circuits are needed for abstract thought. Tolias explains his quest in the words of Richard Feynman: “What I cannot create, I do not understand.”
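In machine-learning terms, one-shot learning is often operationalized as matching a new input against a single stored example per class (the function name and raw-vector comparison below are illustrative assumptions, not NINAI’s method):

```python
def one_shot_classify(query, prototypes):
    # prototypes maps each label to the one example seen for that class;
    # the query is assigned the label of the nearest stored example.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: sq_dist(query, prototypes[label]))
```

Real systems compare learned feature embeddings rather than raw vectors, but the principle is the same: generalize from one example per category, with no thousands-of-samples training run.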