
The Memory Trick Making Computers Seem Smarter

One startup’s approach to teaching computers to learn shows the value of applying new ideas to machine learning.
March 7, 2016

After several decades in the doldrums, AI is experiencing quite a renaissance. In recent years, amazing progress has been made using so-called deep learning, which trains algorithms on large amounts of data so that they can recognize subtle patterns. Such approaches have enabled computers to recognize faces in an image or transcribe speech, often with eerily human accuracy.

It’s becoming clear, however, that fundamentally new approaches will be needed if machines are to demonstrate more meaningful intelligence. One technique, being applied by a Silicon Valley startup called MetaMind, shows how adding novel memory capabilities to deep learning can produce impressive results when it comes to answering questions about the content of images. The company was founded by Richard Socher, a machine-learning expert who left an academic post at Stanford to start it.

Socher’s creation uses what the company calls a dynamic memory network (DMN) to enable computers to infer useful things from various inputs. The network lets a deep-learning system store and update facts as it parses more information. Previously the company showed how its system can ingest a series of sentences and figure out how to answer some fairly sophisticated questions that require inference. This ability has now been applied to answering questions about the contents of images.
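The core idea of an episodic memory like the DMN's can be illustrated with a toy sketch: encode each fact as a vector, then repeatedly attend to the facts most relevant to the question and fold them into a memory vector that is refined over several passes. The sketch below is a deliberately simplified illustration, not MetaMind's implementation; real DMNs use learned recurrent encoders and a trained answer module, while here the vocabulary, bag-of-words encoding, and example sentences are all invented for demonstration.

```python
import numpy as np

# Toy vocabulary and sentences, invented for illustration only.
VOCAB = ["john", "mary", "went", "to", "the", "garden", "kitchen",
         "picked", "up", "apple", "where", "is"]

def encode(sentence):
    """Encode a sentence as a bag-of-words vector over the toy vocabulary."""
    v = np.zeros(len(VOCAB))
    for w in sentence.lower().split():
        if w in VOCAB:
            v[VOCAB.index(w)] += 1.0
    return v

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def best_supporting_fact(facts, question, passes=2):
    """Iteratively refine a memory vector by attending over the facts,
    then return the index of the fact the final memory points to."""
    f = np.stack([encode(s) for s in facts])  # one row per fact
    q = encode(question)
    m = q.copy()                  # memory starts as the question itself
    for _ in range(passes):
        scores = f @ (q + m)      # relevance of each fact to question + memory
        attention = softmax(scores)
        m = m + attention @ f     # fold the attended facts into memory
    return int(np.argmax(f @ (q + m)))

facts = ["John went to the garden",
         "Mary went to the kitchen",
         "Mary picked up the apple"]
idx = best_supporting_fact(facts, "Where is Mary")
print(facts[idx])  # the location fact about Mary
```

The multiple passes are the point: the first pass finds every fact mentioning Mary, and the second, with that context folded into memory, settles on the one that answers the "where" question.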

Richard Socher, founder of MetaMind.

As a piece on MetaMind in the New York Times explains, the results are quite basic, and nowhere near as sophisticated as a human’s ability to understand what’s going on in images. But the technology shows how new approaches, especially ones that take inspiration from the way memory seems to work in biological brains, may hold the key to the next big step forward in AI.

(Read more: New York Times, “Computers Are Getting a Dose of Common Sense,” “Teaching Machines to Understand Us,” “Next Big Test for AI: Making Sense of the World”)

