
Augmented Reality
An exciting emerging interface is augmented reality, an approach that fuses virtual information with the real world.

The earliest augmented-reality interfaces required complex and bulky motion-sensing and computer-graphics equipment. More recently, cell phones featuring powerful processing chips and sensors have begun to bring the technology within the reach of ordinary users.

Examples of mobile augmented reality include Nokia’s Mobile Augmented Reality Application (MARA) and Wikitude, an application developed for Google’s Android phone operating system. Both allow a user to view the real world through the phone’s camera display with virtual annotations and tags overlaid on it. With MARA, this virtual data is harvested from the points of interest stored in the NavTeq satellite navigation application. Wikitude, as the name implies, gleans its data from Wikipedia.

These applications work by monitoring data from an arsenal of sensors: GPS receivers provide positioning information, digital compasses (magnetometers) determine which way the device is pointing, and accelerometers calculate its orientation. A project called Nokia Image Space takes this a step further by allowing people to store experiences (images, video, and sounds) in a particular place so that other people can retrieve them at the same spot.
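
To make the overlay idea concrete, here is a minimal sketch of how an application in this style could place a tag on screen: it compares the bearing from the user to a geotagged point of interest (from GPS) with the direction the phone is facing (from the compass) and maps the angular offset into the camera’s field of view. The function names, the 60-degree field of view, and the simplified bearing math are illustrative assumptions, not code from MARA or Wikitude.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate initial bearing (degrees from north) between two GPS fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def screen_x_for_poi(user_lat, user_lon, heading_deg, poi_lat, poi_lon,
                     screen_width_px=480, fov_deg=60.0):
    """Return the horizontal pixel position for a POI tag, or None if off-screen."""
    # Signed angle between where the phone points and where the POI lies
    offset = (bearing_deg(user_lat, user_lon, poi_lat, poi_lon) - heading_deg + 540) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None  # POI is outside the camera's field of view
    return int((offset / fov_deg + 0.5) * screen_width_px)

# Example: user at a street corner, phone pointing roughly east (90 degrees)
print(screen_x_for_poi(52.3730, 4.8924, 90.0, 52.3731, 4.8940))
```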

Spatial Interfaces
In addition to enabling augmented reality, the GPS receivers now found in many phones can track people geographically. This is spawning a range of new games and applications that let you use your location as a form of input.

Google’s Latitude, for example, lets users show their position on a map by installing software on a GPS-enabled cell phone. As of October 2008, some 3,000 iPhone apps were already location aware. One such application is iNap, which is designed to monitor a person’s position and wake her up before she misses her train or bus stop. The idea came to Jelle Prins, of the Dutch software-development company Moop, after he worried about missing his stop on the way to the airport. The app can connect to a popular train-scheduling program used in the Netherlands and automatically identify a rider’s stops based on previous travel routines.
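
A minimal sketch of the kind of geofence check an app like iNap could run on each GPS fix: measure the distance to the destination stop and sound the alarm once the rider is inside a wake-up radius. The haversine distance, the one-kilometre radius, and the coordinates below are assumptions for illustration, not details of the actual app.

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def check_wake_up(fix, destination, wake_radius_m=1000.0):
    """Return True when the latest GPS fix is close enough to sound the alarm."""
    return haversine_m(fix[0], fix[1], destination[0], destination[1]) <= wake_radius_m

# Example: successive fixes while approaching a destination station
destination = (52.3095, 4.7614)
for fix in [(52.3570, 4.8910), (52.3300, 4.8200), (52.3110, 4.7650)]:
    if check_wake_up(fix, destination):
        print("Wake up - your stop is coming up!")
```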

SafetyNet, a location-aware application developed for Google’s Android platform, lets users define parts of town that they deem generally unsafe. If they accidentally wander into one of these no-go areas, the program becomes active; a quick shake will then sound an alarm and automatically place a 911 call on speakerphone.
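
The sketch below combines the two checks such an application needs: whether the current position falls inside a user-defined zone, and whether the accelerometer trace looks like a deliberate shake. The circular zones, the 2.5 g threshold, and the printed emergency action are illustrative assumptions rather than SafetyNet’s published logic.

```python
import math

G = 9.81  # one unit of gravity in m/s^2

def inside_zone(fix, zone_center, zone_radius_m):
    """Crude flat-earth distance check; adequate for neighbourhood-sized zones."""
    dlat_m = (fix[0] - zone_center[0]) * 111320.0
    dlon_m = (fix[1] - zone_center[1]) * 111320.0 * math.cos(math.radians(fix[0]))
    return math.hypot(dlat_m, dlon_m) <= zone_radius_m

def is_shake(accel_samples, threshold_g=2.5, min_spikes=3):
    """Count acceleration spikes well above gravity to recognise a quick shake."""
    spikes = sum(1 for (ax, ay, az) in accel_samples
                 if math.sqrt(ax * ax + ay * ay + az * az) > threshold_g * G)
    return spikes >= min_spikes

def on_sensor_update(fix, accel_samples, zones):
    # Arm the alarm only while the user is inside one of the no-go areas
    armed = any(inside_zone(fix, center, radius) for (center, radius) in zones)
    if armed and is_shake(accel_samples):
        print("ALARM: sounding siren and dialling emergency services on speakerphone")

# Example: one unsafe zone of 300 m radius, plus a burst of hard shakes
zones = [((37.7815, -122.4167), 300.0)]
shakes = [(0, 0, 30.0), (0, 28.0, 0), (27.0, 0, 0), (0, 0, 9.8)]
on_sensor_update((37.7813, -122.4170), shakes, zones)
```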

Brain-Computer Interfaces
Perhaps the ultimate computer interface, and one that remains some way off, is mind control.

Surgical implants or electroencephalogram (EEG) sensors can be used to monitor the brain activity of people with severe forms of paralysis. With training, this technology can allow “locked-in” patients to control a computer cursor to spell out messages or steer a wheelchair.
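
One common way to turn a trained, roughly binary EEG signal into spelled text is a scanning speller: the interface highlights rows of letters in turn, then letters within the chosen row, and the decoded “select” signal picks the highlighted item. The toy sketch below stands in a simple threshold for the decoder; real systems train classifiers on each patient’s recordings, and nothing here is taken from a specific clinical device.

```python
GRID = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ_.,?"]

def decode_select(eeg_feature, trained_threshold=0.7):
    """Pretend decoder: 'select' when the extracted EEG feature crosses a threshold."""
    return eeg_feature > trained_threshold

def scan_and_spell(feature_stream):
    """Highlight rows, then letters in the chosen row; EEG 'selects' pick one letter."""
    feats = iter(feature_stream)
    for row in GRID:                      # first scan: choose a row
        if decode_select(next(feats)):
            for letter in row:            # second scan: choose a letter in that row
                if decode_select(next(feats)):
                    return letter
    return None

# Simulated features: select the second row, then its third letter ("I")
print(scan_and_spell([0.1, 0.9, 0.2, 0.3, 0.95]))
```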

Some companies hope to bring the same kind of brain-computer interface (BCI) technology to the mainstream. Last month, Neurosky, based in San Jose, CA, announced the launch of its Bluetooth gaming headset designed to monitor simple EEG activity. The idea is that gamers can gain extra powers depending on how calm they are.
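
Consumer headsets of this kind typically reduce the raw EEG to a single score. A rough sketch of one way to do that is to report the share of signal power in the alpha band (8-12 Hz), which tends to rise during relaxed states; the commercial meter is proprietary, so the band choice and 0-100 scaling below are assumptions for illustration.

```python
import numpy as np

def calm_score(samples, fs=512):
    """Return a 0-100 score: relative alpha power in a one-second EEG window."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    total = spectrum[(freqs >= 1) & (freqs <= 30)].sum()   # broad EEG band
    alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()   # relaxation-linked band
    return 0.0 if total == 0 else 100.0 * alpha / total

# Example: a synthetic signal dominated by a 10 Hz (alpha) rhythm scores high
t = np.arange(0, 1, 1.0 / 512)
relaxed = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)
print(round(calm_score(relaxed), 1))
```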

Beyond gaming, BCI technology could perhaps be used to help relieve stress and information overload. A BCI project called the Cognitive Cockpit (CogPit) uses EEG information in an attempt to reduce the information overload experienced by jet pilots.

The project, which was formerly funded by the U.S. government’s Defense Advanced Research Projects Agency (DARPA), is designed to discern when the pilot is being overloaded and manage the way that information is fed to him. For example, if he is already verbally communicating with base, it may be more appropriate to warn him of an incoming threat using visual means rather than through an audible alert. “By estimating their cognitive state from one moment to the next, we should be able to optimize the flow of information to them,” says Blair Dickson, a researcher on the project with U.K. defense-technology company Qinetiq.
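
As a concrete illustration of that routing decision, here is a small policy sketch in the spirit of the description above: a new warning goes to whichever of the pilot’s channels is less loaded. The two-channel model, the thresholds, and the field names are assumptions for illustration, not CogPit’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class PilotState:
    talking_on_radio: bool   # verbal/auditory channel already in use
    visual_load: float       # 0.0 (idle scan) to 1.0 (saturated)
    auditory_load: float     # 0.0 to 1.0, e.g. estimated from EEG and comms activity

def choose_alert_channel(state: PilotState) -> str:
    """Route a new threat warning to the less-loaded channel."""
    if state.talking_on_radio or state.auditory_load > 0.7:
        return "visual"      # show it on the display instead of speaking over the radio
    if state.visual_load > 0.7:
        return "audio"
    return "audio+visual"    # neither channel is saturated: use both for redundancy

# Example: pilot mid-conversation with base, eyes relatively free
print(choose_alert_channel(PilotState(talking_on_radio=True, visual_load=0.3, auditory_load=0.8)))
```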
