MIT Technology Review

“Maggie is a very smart monkey,” says Tim Buschman, a graduate student in Professor Earl Miller’s neuroscience lab. Maggie isn’t visible – she’s in a biosafety enclosure meant to protect her from human germs – but the signs of her intelligence flow over two monitors in front of Buschman. For the last seven years, Maggie has “worked” for MIT’s Department of Brain and Cognitive Sciences (BCS). Three hours a day, the macaque plays computer games, most of them designed to require her to generate abstract representations and then use those abstractions as tools. “Even I have trouble with this one,” Buschman says, nodding at a game that involves classifying logical operations. But Maggie is on a roll, slamming through problems, taking about half a second for each and getting about four out of five right.

Maggie’s gaming lies at the intersection of artificial intelligence (AI) and neuroscience. Under the tutelage of Buschman and Michelle Machon, another graduate student, she is contributing to research on how the brain learns and constructs logical rules, and how its performance of those tasks compares with that of the artificial neural networks used in AI.

Forty years ago, the idea that neuroscience and AI might converge in labs like Miller’s would have been all but unthinkable. Back then, the two disciplines operated at arm’s length. While neuroscience focused on uncovering and describing the details of neuroanatomy and neural activity, AI was trying to develop an independent, nonbiological path to intelligence. (Historically, technology hasn’t really needed to copy nature that slavishly; airplanes don’t fly like birds and cars don’t run like horses.) And it was AI that seemed to be advancing much more rapidly. Neuroscience knew hardly anything about what the brain was, let alone how it worked, whereas everyone with an ounce of sense believed that the day when computers would be able to do everything humans did (and do it better) was well within sight. In 1962, President Kennedy himself was persuaded of the point, pronouncing automation (or as it was often called then, “cybernation”) the core domestic challenge of the 1960s, because of the threat that it would put humans out of work.

But something derailed the AI express. Although computers could be made to handle simple objects in a controlled setting, they failed miserably at recognizing complex objects in the natural world. A microphone could distinguish sound levels but not summarize what had been said; a manipulator could pick up a clean new object lying in an ordered array but not a dirty old one lying in a jumbled heap. (Nor could it, in Marvin Minsky’s inspired example, put a pillow in a pillowcase.) Today we worry far more about competition from humans overseas than about competition from machines.

While AI’s progress has been slower than expected, neuroscience has gotten much more sophisticated in its understanding of how the brain works. Nowhere is this more obvious than in the 37 labs of MIT’s BCS Complex. Groups here are charting the neural pathways of most of the higher cognitive functions (and their disorders), including learning, memory, the organization of complex sequential behaviors, the formation and storage of habits, mental imagery, number management and control, goal definition and planning, the processing of concepts and beliefs, and the ability to understand what others are thinking. The potential impact of this research could be enormous. Discovering how the brain works – exactly how it works, the way we know how a motor works – would rewrite almost every text in the library. Just for starters, it would revolutionize criminal justice, education, marketing, parenting, and the treatment of mental dysfunctions of every kind. (Earl Miller is hoping the research done in his lab will aid in the development of therapies for learning disorders.)

Such progress is one reason the once bright line between neuroscience and AI is beginning to blur at MIT – and not just in Miller’s lab. Vision research under way at the Institute also illustrates how the two disciplines are beginning to collaborate. “The fields grew up separately,” says James DiCarlo, assistant professor of neuroscience, “but they’re not going to be separate much longer.” These days, AI researchers follow the advance of neuroscience with great interest, and the idea of reverse-engineering the brain is no longer as implausible as it once seemed.
