
Computing

What's Augmented Reality's Killer App?

Researchers plan to offer more than just directions with innovations in software and hardware.

Augmented reality (AR), which involves superimposing virtual objects and information on top of the real world, may be coming to a phone near you. As mobile phones become packed with more sensors, better video capabilities, and faster processing power, many experts predict that AR will become increasingly common. But in a panel discussion today at EmTech@MIT in Cambridge, MA, panelists admitted that several obstacles remain and that the “killer app” for augmented reality has yet to emerge.

Augmented games: In this game, developed by researchers at Columbia University, a player holds a flat board and sees three-dimensional objects projected onto it through a head-worn display. The player tilts the game board to control a virtual ball.

Several AR apps have already been released for cell phones with positioning sensors. For example, PresseLite’s Metro Paris app and Acrossair’s Nearest Tube both provide iPhone users with augmented directions to nearby subway stops. AR apps are also available for phones powered by Google’s Android platform. Layar, developed by SPRXmobile, based in the Netherlands, overlays information from Twitter, Flickr, and Wikipedia on real-world locations, while Wikitude, from Austria-based Mobilizy, displays tourist information collected from Wikipedia. (See five new augmented reality apps.)
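Apps like Nearest Tube and Metro Paris typically work by combining the phone’s GPS fix and compass heading: compute the bearing from the user to each point of interest, then map the angular offset between that bearing and the heading into a horizontal screen position. The following is a minimal Python sketch of that placement math; the function names, the 60-degree field of view, and the 320-pixel screen width are illustrative assumptions, not details taken from any of these apps.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user (point 1) to a
    point of interest (point 2), in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) \
        - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(poi_bearing, heading, fov_deg=60, screen_w=320):
    """Map a point of interest's bearing to a horizontal pixel
    position, given the phone's compass heading and the camera's
    horizontal field of view. Returns None when it is off-screen."""
    # Signed angular offset from the heading, normalized to -180..180.
    delta = (poi_bearing - heading + 180) % 360 - 180
    if abs(delta) > fov_deg / 2:
        return None  # outside the camera's field of view
    return screen_w / 2 + (delta / (fov_deg / 2)) * (screen_w / 2)
```

A real app would also filter points of interest by distance, smooth the noisy compass readings, and account for the phone’s tilt before drawing the label.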

Some researchers believe that AR represents a fundamentally new way to organize and interact with information. “In the future, we see augmented reality as a component of any kind of digital media interaction,” says Mobilizy’s co-CEO, Alexander Igelsboeck, who will speak at the EmTech@MIT session.

This week Mobilizy released a new language for AR called Augmented Reality Markup Language (ARML). With ARML, Mobilizy hopes to make it easier for programmers to create location-based content for AR applications. The company envisions ARML as the equivalent of HTML for the Web, and Igelsboeck emphasizes the importance of open content and standardization for AR to take off. “We want to open those standards to be available for developer communities that can create innovative applications around this augmented experience,” he says.

But many challenges remain. For instance, the positioning technology currently available in cell phones falls short for sophisticated AR applications. The GPS receivers built into smart phones “were really not designed for AR,” says panelist Steven Feiner, a professor of computer science at Columbia University. “They were designed for simpler applications.”

Feiner, who has worked on AR for over a decade, notes that early examples of AR required wearing a computer backpack and using cumbersome head-mounted displays. “[But] the tracking that we used [in 2001] was much, much better,” he says.

Feiner is focusing on less-mainstream applications for AR. He has developed one program that shows levels of carbon monoxide in Manhattan (see image below), and another that shows virtual labels for engineers, such as a floating tag that reads, “Remove this bolt using a 1/4-inch socket wrench.” He adds that better object recognition and pose tracking, as well as a way to deal with direct sunlight, will help AR become more practical.

Pollution visualized: Another application developed at Columbia shows carbon monoxide levels projected over New York City. The height of each ball reflects concentrations of the pollutant.

Another potential obstacle for AR is social acceptance. While people already text or check e-mail while they walk, looking through a phone can be awkward. Feiner suggests that well-designed goggles could help. “There’s a very high bar of what people are willing to wear on their heads,” he says.

Last spring, a group at the MIT Media Lab demonstrated an interface that avoids the need to look at a display altogether. Graduate student Pranav Mistry, a 2009 TR35 winner, developed SixthSense, a device that combines a webcam and a projector worn around the neck, along with colored markers on the fingers, to recognize a user’s gestures and project information onto nearby surfaces. (See a TR video of SixthSense in action.)

“Your world can be augmented without you having to change your behavior and do anything extra [like] taking out your cell phone and starting an application,” says MIT professor Pattie Maes, who heads the SixthSense project. Maes’s group is also exploring technical applications for AR. “If my car stops working, I might open the hood and an expert might remotely see what I see and [then] project information in front of the engine, saying things like, ‘Open this valve,’” explains Maes.

Nokia’s Mobile Augmented Reality Applications and Mixed Reality Experiences projects aim to combine several kinds of hardware in AR applications. Ville-Veikko Mattila, a senior research manager at Nokia Research Center, believes that combining visual and audio information could be the most practical approach. “I think it’s clear that people won’t be walking and holding a device upright. Therefore, the use of audio may be more intuitive,” he says.

Mattila adds that AR could potentially combine social information and location-based services to give user-tailored recommendations. For example, an application could show what your friends think of a particular restaurant, instead of providing a guidebook’s reviews.

“There’s a lot of hype obviously,” Feiner says. But ultimately he agrees that AR may be able to help people with their daily lives. “Like being able to get somewhere, find information, or recognize a face of a person you know, but can’t remember the name of,” he says.
