
The Memory Trick Making Computers Seem Smarter

One startup’s memory-based approach to machine learning shows the value of applying new ideas to deep learning.

After several decades in the doldrums, AI is experiencing quite a renaissance. In recent years, impressive progress has been made using so-called deep learning: training algorithms with large amounts of data so that they can recognize subtle patterns. Such approaches have enabled computers to recognize faces in images or transcribe speech into text, often with eerily human accuracy.

It’s becoming clear, however, that fundamentally new approaches will be needed if machines are to demonstrate more meaningful intelligence. One technique, being applied by a Silicon Valley startup called MetaMind, shows how adding novel memory capabilities to deep learning can produce impressive results when it comes to answering questions about the content of images. MetaMind was founded by Richard Socher, a machine-learning expert who left an academic post at Stanford to start the company.


Socher’s creation uses what the company calls a dynamic memory network (DMN) to enable computers to infer useful things from various inputs. The DMN lets a deep-learning system store and update facts as it parses more information. Previously, the company showed that its system could ingest a series of sentences and answer some fairly sophisticated questions requiring inference. That ability has now been applied to answering questions about the contents of images.
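To make the mechanism concrete, here is a minimal, purely illustrative sketch of the episodic-memory idea behind a DMN. This is not MetaMind’s implementation: the random toy word embeddings, the mean-pooling sentence encoder, the additive memory update, the two-pass loop, and the example facts are all assumptions chosen to show the general pattern, in which the model re-reads the input several times and an attention gate decides which facts to fold into an evolving memory vector.

```python
# Illustrative sketch only -- NOT MetaMind's system. Trained encoders and a
# learned gating network are replaced here with toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Hypothetical stand-in for a learned sentence encoder: each word gets a
# random vector, and a sentence is the mean of its word vectors.
vocab = {}
def encode(sentence):
    vecs = []
    for w in sentence.lower().split():
        if w not in vocab:
            vocab[w] = rng.normal(size=DIM)
        vecs.append(vocab[w])
    return np.mean(vecs, axis=0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def episodic_memory(facts, question, passes=2):
    """Iteratively attend over encoded facts, updating a memory vector."""
    f = np.stack([encode(s) for s in facts])   # (num_facts, DIM)
    q = encode(question)
    m = q.copy()                               # memory starts at the question
    for _ in range(passes):
        # Attention gate: score each fact against the question AND the current
        # memory, so later passes can pick up facts needed for multi-step
        # inference that the first pass missed.
        scores = f @ q + f @ m
        gates = softmax(scores)
        episode = gates @ f                    # gated summary of this pass
        m = m + episode                        # fold the episode into memory
    return m, gates

facts = [
    "John went to the garden",
    "John picked up the football",
    "John walked to the kitchen",
]
memory, attention = episodic_memory(facts, "Where is the football ?")
for fact, g in zip(facts, attention):
    print(f"{g:.2f}  {fact}")
```

The point of the multiple passes is inference: answering “Where is the football?” requires first attending to the sentence about the football, then to the sentence about where John went next. In the published DMN work this gating is learned end to end; the image-question version swaps the sentence encoder for features extracted from regions of an image.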

Richard Socher, founder of MetaMind.

As a piece on MetaMind in the New York Times explains, the results are quite basic, and nowhere near as sophisticated as a human’s ability to understand what’s going on in an image. But the technology shows how new approaches, especially ones that take inspiration from the way memory seems to work in biological brains, may hold the key to the next big step forward in AI.

(Read more: New York Times, “Computers Are Getting a Dose of Common Sense,” “Teaching Machines to Understand Us,” “Next Big Test for AI: Making Sense of the World”)
