
Pattie Maes and Natan Linder, a research student at the Media Lab, inspect a novel interface device that Linder created.

What will smart phones be like five years from now?

Phones may know not just where you are but that you are in a conversation, and who you are talking to, and they may make certain information and documents available based on what conversation you’re having. Or they may silence themselves, knowing that you’re in an interview.

They may get some information from sensors and some from databases about your calendar, your habits, your preferences, and which people are important to you.

Once the phone is more aware of the user’s current situation, and the user’s context and preferences and all that, then it can do a lot more. It can change the way it operates based on the current context.
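A minimal sketch of the kind of context-driven rule Maes describes, assuming a hypothetical ContextSnapshot that fuses a calendar flag with a microphone-derived conversation signal; none of these names correspond to a real phone API:

```python
# Hypothetical sketch: the phone fuses sensor and calendar context
# and adjusts its own behavior. ContextSnapshot and choose_ringer_mode
# are illustrative names, not part of any real phone platform.

from dataclasses import dataclass

@dataclass
class ContextSnapshot:
    in_meeting: bool        # from the user's calendar
    in_conversation: bool   # e.g., inferred from the microphone
    contact_priority: int   # how important the caller is (0-10)

def choose_ringer_mode(ctx: ContextSnapshot) -> str:
    """Pick a ringer mode from the current context."""
    if ctx.in_meeting or ctx.in_conversation:
        # Silence the phone during an interview or meeting,
        # unless the caller is someone the user ranks as critical.
        return "vibrate" if ctx.contact_priority >= 9 else "silent"
    return "ring"

print(choose_ringer_mode(ContextSnapshot(True, False, 3)))   # -> silent
print(choose_ringer_mode(ContextSnapshot(False, False, 3)))  # -> ring
```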

Ultimately, we may even have phones that constantly listen in on our conversations and are just always ready with information and data that might be relevant to whatever conversation we’re having.

How will mobile interfaces be different?

Speech is just one element. There may be other things—like phones talking to one another. So if you and I were to meet in person, our phones would be aware of that and then could make all the documents available that might be relevant to our conversation, like all the e-mails we exchanged before we engaged in the meeting.
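As a rough illustration of that meeting scenario, assume proximity detection (say, over Bluetooth) has already identified the other person; the phone then filters the mail store for messages the two participants exchanged. The Email class and shared_thread function here are invented for illustration, not a real mail API:

```python
# Hypothetical sketch of the meeting scenario: once two phones detect
# they are near each other, surface the e-mails the two people exchanged.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    recipient: str
    subject: str

def shared_thread(inbox: list[Email], me: str, other: str) -> list[Email]:
    """Return the messages the two meeting participants exchanged."""
    pair = {me, other}
    return [m for m in inbox if {m.sender, m.recipient} == pair]

inbox = [
    Email("alice@example.com", "bob@example.com", "Agenda for Tuesday"),
    Email("carol@example.com", "bob@example.com", "Unrelated note"),
]
# Suppose proximity sensing tells Bob's phone that Alice's phone is nearby:
for m in shared_thread(inbox, "bob@example.com", "alice@example.com"):
    print(m.subject)  # -> Agenda for Tuesday
```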

Just as the ads are highly relevant to your query when you do a Google search, I can imagine a situation where the phone always has a lot of recommendations and things that may be useful to the user, given what the user is trying to do.

Another idea is expanding the interaction that the user has with the phone beyond just touch and speech. Maybe you can use gestures to interact. SixthSense, which we built, can recognize gestures; it can recognize if something is in front of you and then potentially overlay information, or interfaces, on top of the things in front of you.
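Gesture input can be sketched in miniature: the toy classifier below labels a stroke of (x, y) points by its dominant direction. Real systems such as SixthSense use camera-based tracking and are far richer; this is only an assumed, simplified stand-in:

```python
# Hypothetical sketch of gesture input: classify a stroke of (x, y)
# sample points as a swipe direction by its overall displacement.

def classify_swipe(points: list[tuple[float, float]]) -> str:
    """Label a stroke by its dominant direction of motion."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

print(classify_swipe([(0, 0), (5, 1), (12, 2)]))  # -> right
```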

What do you think of Google’s augmented-reality project, its so-called Google Goggles?

People—like Google, but others before them—have looked at heads-up displays for augmented reality, so that the phone can constantly present visual as well as auditory information related to your environment.

The technologies that I’ve seen for augmented-reality heads-up displays really leave a lot to be desired. Maybe Google has some technology I’m not familiar with, but all the heads-up displays that I’ve used are not very interesting, for a variety of reasons: they have a narrow field of view, and they’re very heavy, really gigantic, bulky things.

Maybe they’re working with something that I don’t know about—they’re very secretive about a lot of the work—but I don’t expect these things to take off right away.

I suspect these are early prototypes and it may be a while before these become consumer products.


Credit: MIT Media Lab/Fluid Interfaces group

Tagged: Computing, smartphones, augmented reality
