
When Google first introduced Glass over a year and a half ago, one question loomed: what kind of apps could make it worth wearing a head-mounted computer everywhere you go?

There’s still no good answer for that, in part because Glass is still not publicly available; it is expected to be released sometime this year. However, a select group of developers have had Glass in their hands (and on their heads) for months, and the apps they’re developing hint at what they think will make Glass—and other head-worn computers—a mainstream hit.

Some of these apps seek to make technology less distracting (for the wearer, at least, since Glass is hard to ignore on someone’s face). Others focus on activities that may be better suited for a computer on your face than a smartphone or a laptop.

Satish Sampath and Kenny Stoltz have created an app called Moment Camera that takes advantage of Glass’s five-megapixel camera. The app takes pictures every few seconds when it detects the presence of faces. It uses Glass’s accelerometer, gyroscope, and compass to figure out the most opportune time to snap a shot, then uploads the photos to a remote server and sorts out the ones it believes are the best. “Glass has this sort of built-in awareness that a phone that’s in your pocket or sitting face-down on a table doesn’t have,” Stoltz says.

The benefit is that people would not have to stop what they are doing to take pictures themselves, Stoltz says: “We want to give people back attention.”
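A capture heuristic built on those motion sensors might look something like the following sketch. This is a hypothetical illustration of the general approach, not Moment Camera’s actual code; the thresholds and sensor conventions are assumptions.

```python
import math

# Hypothetical sketch: snap a photo only when the motion sensors suggest
# the wearer's head is steady and faces are in view. All thresholds are
# illustrative assumptions, not values from the actual app.

GRAVITY = 9.81          # m/s^2
ACCEL_TOLERANCE = 0.5   # max deviation from gravity while "steady"
TURN_TOLERANCE = 0.2    # max angular speed (rad/s) while "steady"

def is_steady(accel, gyro):
    """True if accelerometer/gyroscope readings look stable."""
    accel_magnitude = math.sqrt(sum(a * a for a in accel))
    turn_rate = math.sqrt(sum(g * g for g in gyro))
    return (abs(accel_magnitude - GRAVITY) < ACCEL_TOLERANCE
            and turn_rate < TURN_TOLERANCE)

def should_capture(faces_detected, accel, gyro):
    """Capture only when a face is in view and the head is steady."""
    return faces_detected > 0 and is_steady(accel, gyro)
```

When the wearer’s head is still (acceleration close to gravity, negligible rotation) and a face is detected, the heuristic fires; any significant motion suppresses the shot.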

A similar idea motivates Georgia Tech professor Thad Starner, the technical lead on Glass for Google, as he develops an app called Captioning on Glass. It transcribes the words someone speaks into a smartphone and shows them on the Glass display of a wearer with impaired hearing. “By having a head-up display, the wearer can stay ‘in the flow’ of the conversation, attending the other person’s face to get as much information as possible while speeding the natural conversation,” he wrote recently for Wired.
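One practical detail a captioning app has to handle is fitting a continuous transcript onto a small head-up display. The sketch below shows one plausible way to break transcribed speech into short rolling caption lines; the line width and line count are illustrative assumptions, not values from Captioning on Glass.

```python
def wrap_caption(text, width=30, max_lines=2):
    """Split a transcribed utterance into short lines for a small
    head-up display, keeping only the most recent lines so captions
    roll like TV subtitles. Width/line limits are assumed, not real
    Glass display metrics."""
    words = text.split()
    lines, current = [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        if len(candidate) <= width:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines[-max_lines:]
```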

While many apps are being newly built for Glass, a number of developers see Glass as an even better way to present apps they previously developed for smartphones, chiefly because its display—equivalent to a 25-inch high-definition display seen from eight feet away—is hands-free and can be ever-present.

That’s the thinking at Quest Visual, whose Android and iOS translation app Word Lens uses your smartphone’s screen to translate signs in front of you, in real time, without needing an Internet connection. “We’d been keeping tabs on Glass for quite a while because it seemed like quite the ideal device for Word Lens because it has a camera wherever you’re looking,” says Bryan Lin, who leads Quest Visual’s Android development.

The company’s Glass app, built over two months, is essentially the same as its Android app, but with a different user interface. Glass wearers say, “OK, Glass, translate this” while looking at a sign they’d like to understand. The app pulls images from Glass’s camera, runs some character recognition software on them, and shows the translated text on Glass’s projected display.
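The flow described above can be sketched as a simple pipeline. Everything here is a hypothetical stand-in, since Quest Visual’s actual code is not public; the fake classes exist only to make the sketch self-contained.

```python
# Illustrative end-to-end flow for a Glass translation app: voice
# trigger -> camera frame -> character recognition -> translated text
# on the display. All names are hypothetical stand-ins.

class FakeCamera:
    def capture(self):
        return "sign-image"

class FakeRecognizer:
    def recognize(self, frame):
        return "salida"   # pretend OCR found the Spanish word for "exit"

class FakeTranslator:
    def translate(self, text):
        # offline dictionary lookup -- no network connection required
        return {"salida": "exit"}.get(text, text)

class FakeDisplay:
    def show(self, text):
        self.last_shown = text

def handle_voice_command(command, camera, recognizer, translator, display):
    """Run the translation pipeline only for the expected voice trigger."""
    if command != "translate this":
        return None
    frame = camera.capture()                  # pull an image from the camera
    text = recognizer.recognize(frame)        # run character recognition on it
    translated = translator.translate(text)   # translate the recognized text
    display.show(translated)                  # show it on the projected display
    return translated
```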

Since Glass is an Android device at its core, developers who are already familiar with creating apps for that operating system are relatively at ease doing so for Glass. However, developers have access to only a smattering of capabilities on Glass. One area conspicuously absent is any kind of facial recognition—given Glass’s position on your face, you can imagine it helping you pull up details on the fly about an old college buddy you just ran into on the street. Wary that this could be perceived as too invasive, Google has said it will not bring the feature to products like Glass “without having strong privacy protections in place” and won’t approve apps that include it. (At least one developer—Stephen Balaban, founder of facial-recognition startup Lambda Labs—has built facial-recognition software tools that developers could use to add the capability to their apps, regardless.)

Although Glass looks different from so many of the laptops, cell phones, and tablets we’re used to, developers say they still grapple with a familiar problem: figuring out how to make apps as battery-efficient as possible. Google’s Glass specification sheet indicates you’ll get “one day of typical use” out of Glass, and that features like making video calls and recording videos “are more battery intensive.” Unfortunately, this means that apps that rely on a number of Glass’s functions can quickly run down its power, and it can be a difficult issue to fix.

Quest Visual addressed this by having users zoom in on whatever they want to translate. The app runs its translation algorithm only while zoomed in, which reduces the work for Glass’s CPU. “This is the part of what took us a month to get right—how to do it without draining the battery in under 15 minutes,” Lin says.
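That battery-saving pattern—gating expensive per-frame work behind an explicit user action—can be sketched as follows. The zoom flag and frame loop here are illustrative assumptions, not Quest Visual’s implementation.

```python
# Sketch of the pattern described above: run the CPU-heavy translation
# step only while the user has explicitly zoomed in, instead of on every
# camera frame. Names and structure are illustrative.

class TranslationLoop:
    def __init__(self, translate_fn):
        self.translate_fn = translate_fn
        self.zoomed_in = False
        self.frames_processed = 0

    def on_frame(self, frame):
        if not self.zoomed_in:
            return None          # idle: skip the expensive OCR/translation
        self.frames_processed += 1
        return self.translate_fn(frame)
```

Frames arriving while the user is not zoomed in are dropped immediately, so the processor (and battery) is only taxed during deliberate translation attempts.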


Credit: Image courtesy of Quest Visual
