The Memory Trick Making Computers Seem Smarter

One startup’s approach to teaching computers to learn shows the value of applying new ideas to machine learning.
March 7, 2016

After several decades in the doldrums, AI is experiencing quite a renaissance. In recent years, amazing progress has been made using so-called deep learning, which trains algorithms on large amounts of data so that they can recognize subtle patterns. Such approaches have enabled computers to recognize faces in images or transcribe speech, often with eerily human accuracy.

It’s becoming clear, however, that fundamentally new approaches will be needed if machines are to demonstrate more meaningful intelligence. One technique, being applied by a Silicon Valley startup called MetaMind, shows how adding novel memory capabilities to deep learning can produce impressive results when it comes to answering questions about the content of images. MetaMind was founded by Richard Socher, a machine-learning expert who left an academic post at Stanford to start the company.

Socher’s creation uses what the company calls a dynamic memory network (DMN) to enable computers to infer useful things from various inputs. The DMN lets a deep-learning system store and update facts as it parses more information. The company previously showed how its system could ingest a series of sentences and answer some fairly sophisticated questions that require inference. That ability has now been applied to answering questions about the contents of images.
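
To make the idea concrete, here is a minimal sketch in Python of the episodic-memory loop at the heart of the published DMN architecture: the network repeatedly attends to the input facts that matter for the question, then folds them into a running memory. Everything here is a toy stand-in for illustration, not MetaMind's actual code; the vector sizes, the `episode` helper, and the single weight matrix `W` are our own assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def episode(facts, question, memory, W):
    """One pass over the facts: attend to those relevant to the
    question and the current memory, then fold them into a new memory."""
    # Score each encoded fact against the question and the memory so far.
    scores = np.array([(f * question).sum() + (f * memory).sum() for f in facts])
    weights = softmax(scores)
    # Summarize the attended facts into a single "episode" vector.
    e = (weights[:, None] * facts).sum(axis=0)
    # Gated update, loosely GRU-like: blend the old memory with the episode.
    gate = 1.0 / (1.0 + np.exp(-(W @ np.concatenate([memory, e]))))
    return gate * e + (1.0 - gate) * memory

rng = np.random.default_rng(0)
d = 8
facts = rng.normal(size=(5, d))   # stand-ins for encoded input sentences
question = rng.normal(size=d)     # stand-in for the encoded question
memory = question.copy()          # memory is initialized from the question
W = rng.normal(size=(d, 2 * d))   # toy, untrained gate weights

# Multiple passes let the model chain facts together, which is what
# supports the kind of inference described above.
for _ in range(3):
    memory = episode(facts, question, memory, W)
print(memory)  # in a real DMN this final memory feeds an answer module
```

In the full architecture, the same loop runs over learned sentence or image-region encodings rather than random vectors, and the number of passes is what lets the system combine several stored facts to answer a question none of them answers alone.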

Richard Socher, founder of MetaMind.

As a piece on MetaMind in the New York Times explains, the results are quite basic, and nowhere near as sophisticated as a human’s ability to understand what’s going on in an image. But the technology shows how new approaches, especially ones that take inspiration from the way memory seems to work in biological brains, may hold the key to the next big step forward in AI.

(Read more: New York Times, “Computers Are Getting a Dose of Common Sense,” “Teaching Machines to Understand Us,” “Next Big Test for AI: Making Sense of the World”)
