
TR: Why is inference possible now?

AC: One thing is that computing systems are now able to tap into all the data that’s available on the Internet and learn from it. For instance, object recognition in machines is getting better because we are able to learn from all the pictures available on the Internet. (See “Better, More-Accurate Image Search.”) The same thing goes for language translation systems making use of the United Nations’ corpus of documents in Arabic and Chinese. This is also being fueled by disk drives getting big and cheap, and the powerful transition to nonvolatile memory. Being able to have random access to data with very low power is going to have a revolutionary impact.

TR: How does terascale computing fit into all this?

AC: In order to figure out what you’re doing, the computing system needs to be reading data from sensor feeds, doing analysis, and computing all the time. This takes multiple processors running complex algorithms simultaneously. The machine-learning algorithms being used for inference are based on rich statistical analysis of how different sensor readings are correlated, and they tease out obscure connections. Right now these algorithms work on large systems built for a specific purpose, and it takes a PhD to get these things to work. We are looking forward to having these algorithms be in an API [application programming interface] that you can call on, like a platform service which is as reliable to access as a file system. This way, the average programmer without a PhD can make use of these machine-learning algorithms.
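[A loose illustration of the idea described above: machine-learning inference over correlated sensor readings, wrapped behind a simple platform-style API that an average programmer could call. This is a hypothetical sketch, not Intel's actual interface; all names here (ActivityInferencer, fit, infer) are invented for the example.]

# Hypothetical sketch of "inference as a platform service": a client feeds in
# sensor feature vectors and asks what activity they most likely indicate.
import numpy as np

class ActivityInferencer:
    """Toy nearest-centroid model over sensor feature vectors."""

    def __init__(self):
        self.centroids = {}  # activity label -> mean feature vector

    def fit(self, samples, labels):
        """samples: (n, d) array of sensor features; labels: n activity strings."""
        samples = np.asarray(samples, dtype=float)
        for label in set(labels):
            mask = np.array([l == label for l in labels])
            self.centroids[label] = samples[mask].mean(axis=0)

    def infer(self, reading):
        """Return (best_label, confidence) for a single feature vector."""
        reading = np.asarray(reading, dtype=float)
        dists = {label: np.linalg.norm(reading - c)
                 for label, c in self.centroids.items()}
        best = min(dists, key=dists.get)
        # Crude confidence score: inverse distance, normalized over all labels.
        inv = {k: 1.0 / (v + 1e-9) for k, v in dists.items()}
        return best, inv[best] / sum(inv.values())

# Example usage with made-up accelerometer/microphone features.
model = ActivityInferencer()
model.fit(samples=[[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]],
          labels=["typing", "typing", "walking", "walking"])
print(model.infer([0.85, 0.15]))  # -> ('walking', high confidence)

[The point of the sketch is the calling convention, not the model: the statistical machinery stays behind a fit/infer interface the way a file system hides its block layout, which is the "platform service" analogy above.]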

TR: How far away are we from seeing this in consumer gadgets?

AC: Machine learning and inference technology have been accepted by a broad slice of the research community, but we’re mired in a moderate level of quality. It’s not unusual for these systems to get things right 80 percent of the time. The scientific community says that’s great. But it wouldn’t be helpful to have a personal assistant that looked at you and only correctly knew what you were doing 80 percent of the time. Likewise, a computer isn’t going to be helpful if it’s wrong part of the time.

Ultimately, I think it’s a dance between how well the algorithms will be able to work, and how people react to them being wrong. Within five years, I think you’re going to see significant advances in performance. You’ll see demonstrations in the research world that are credible. I think the mainstream marketplace could pick up on it three years later, but at that point it’s hard to predict. The precursors for this technology are all there, though, and I see a huge need for it.

