
Three Questions for Pattie Maes

Maes, whose research group studies human-computer interaction, says mobile devices may soon eavesdrop on their owners to anticipate their needs.

What will smart phones be like five years from now?

Phones may know not just where you are but that you are in a conversation, and who you are talking to, and they may make certain information and documents available based on what conversation you’re having. Or they may silence themselves, knowing that you’re in an interview.


They may get some information from sensors and some from databases about your calendar, your habits, your preferences, and which people are important to you.


Once the phone is more aware of the user’s current situation, and the user’s context and preferences and all that, then it can do a lot more. It can change the way it operates based on the current context.

Ultimately, we may even have phones that constantly listen in on our conversations and are just always ready with information and data that might be relevant to whatever conversation we’re having.

How will mobile interfaces be different?

Speech is just one element. There may be other things, like phones talking to one another. So if you and I were to meet in person, our phones would be aware of that and could make available all the documents that might be relevant to our conversation, like all the e-mails we exchanged before the meeting.

Just as the ads you see when you do a Google search are highly relevant to that search, I can imagine the phone always having recommendations and information that may be useful given what the user is trying to do.

Another idea is expanding the user’s interaction with the phone beyond touch and speech. Maybe you can use gestures to interact. SixthSense, which we built, can recognize gestures; it can recognize what is in front of you and then potentially overlay information, or interfaces, on top of it.

What do you think of Google’s augmented-reality project, its so-called Google Goggles?


People—like Google, but others before them—have looked at heads-up displays for augmented reality, so that the phone can constantly present visual as well as auditory information related to your environment.

The technologies that I’ve seen for augmented-reality heads-up displays really leave a lot to be desired. Maybe Google has some technology I’m not familiar with, but all the heads-up displays that I’ve used are not very interesting for a variety of reasons: they have a narrow field of view, and they’re very heavy, really gigantic, bulky things.

Maybe they’re working with something that I don’t know about—they’re very secretive about a lot of the work—but I don’t expect these things to take off right away.

I suspect these are early prototypes and it may be a while before these become consumer products.
