
An Expert’s View on Google’s Goggles

Mark Changizi, a neurobiologist and the author of The Vision Revolution, discusses Google’s augmented-reality glasses.

Project Glass, the latest sci-fi concept to come out of Google’s X Lab, has gotten a lot of attention online in the past 24 hours thanks to a clever demo video that shows a user donning a pair of augmented-reality eyeglasses that project a heads-up display of video chats, location check-ins, and appointment reminders.

Reactions to the product design have ranged from skeptical to enthusiastic, but I was curious about the psychological and visual-cognitive aspects of the user experience. What would these “digital overlays” actually look and feel like? Would they really be as sharp and legible as the ones shown in the video? (I don’t know about you, but I can’t focus sharply on anything less than an inch away from my eyeball, which is where the eyeglasses’ tiny screen would be dangling.) Would they obstruct my vision and make me motion-sick? How would my brain make perceptual and physical sense of the graphics: where would I “look,” exactly, in order to “watch” the tiny picture-in-picture video chat shown at the conclusion of the clip?
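For a rough sense of the focus problem, think in diopters, the reciprocal of viewing distance in meters: a screen an inch from the eye demands about 40 diopters of accommodation, several times what even a young eye can manage, which is why near-eye displays use collimating optics to place a virtual image much farther out. Here is a minimal back-of-the-envelope sketch in Python; the specific distances are illustrative assumptions, not anything Google has published:

```python
# Back-of-the-envelope optics: why a bare screen an inch from the eye
# can't be brought into focus, and why near-eye displays use collimating
# optics to push the *virtual* image much farther away.
# All distances below are illustrative assumptions, not Project Glass specs.

def accommodation_demand_diopters(distance_m: float) -> float:
    """Diopters of focusing power needed to fixate a target at distance_m."""
    return 1.0 / distance_m

targets = [
    ("bare screen on the frame", 0.025),  # ~1 inch from the eye
    ("young adult's near point", 0.10),   # ~10 cm, closest comfortable focus
    ("collimated virtual image", 2.5),    # a plausible post-optics distance
]

for label, d in targets:
    print(f"{label:>26}: {accommodation_demand_diopters(d):5.1f} D")
# bare screen on the frame:  40.0 D  <- far beyond the ~10 D a young eye can supply
```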


I asked Mark Changizi, an evolutionary neurobiologist and author of The Vision Revolution, to answer some of these questions in an audio commentary track on the video, which you can watch above.


“The graphics are not going to look like they’re floating out in front of you, because it’s only being displayed to one eye,” Changizi explains. Instead, the experience would be similar to “seeing through” the image of your own nose, which hovers semi-transparently in the periphery of your visual field at all times (even though you rarely pay attention to it). “Having non-corresponding images coming from each eye is actually something we are very much used to already,” Changizi says. “It’s not uncomfortable.” So Google’s one-eyed screen design seems biologically savvy.

Then again, Changizi continues, “they’re presenting text to you, and in order to discern that kind of detail, you need to have it in front of your fovea”—the tiny patch at the center of the retina that handles the central degree or two of your visual field. “That’s typically *not* where we’re used to ‘seeing through’ parts of our own bodies, like our noses.” Which means that those crisp, instant-message-like alerts won’t be as simple to render as the video makes it seem.
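Changizi’s foveal constraint is easy to quantify with a little trigonometry: the fovea covers only about the central one to two degrees of the visual field, so text has to land inside that window to be read. The sketch below computes the visual angle subtended by a line of text; the text height and virtual-image distance are hypothetical stand-ins, since Google hasn’t published optical specs:

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (in degrees) subtended by an object of size_m at distance_m."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# Hypothetical numbers for illustration: a 2 cm-tall line of text on a
# virtual image 2.5 m away, compared with a ~1.5-degree foveal window.
text_height_m = 0.02
image_distance_m = 2.5
fovea_deg = 1.5

angle = visual_angle_deg(text_height_m, image_distance_m)
print(f"text subtends {angle:.2f} degrees")          # ~0.46 degrees
print(f"fits inside the fovea: {angle < fovea_deg}")
# True -- but only while you point your eyes directly at it, which is
# exactly the "looking at" behavior Changizi flags as unnatural here.
```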

“The more natural place to put [these interface elements], especially if it’s not text, is in the parts of your visual field where your face-parts already are,” Changizi says. This could be in the left and right periphery, where the ghost-image of your nose resides, or in the upper or bottom edges of your visual field, where you can see your cheeks when you smile or your brow when you frown. “There could be very broad geometrical or textural patterns that you could perceive vividly without having to literally ‘look at’ them,” he says. This would also make the digital overlays “feel like part of your own body,” rather than “pasted on” over the real world in an artificial or disorienting way. That experience might feel more like “sensing” the digital interface semi-subconsciously, rather than looking at it directly as if it were an iPhone screen.
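How much detail the periphery can actually carry is well studied: the minimum resolvable detail grows roughly linearly with eccentricity, a standard approximation often written as MAR(E) ≈ MAR₀ · (1 + E/E₂), with E₂ on the order of two degrees. A quick sketch under those textbook constants (the exact values vary across studies) shows why coarse patterns survive out where the nose’s ghost image lives while fine text does not:

```python
# Standard linear model of peripheral acuity: the minimum angle of
# resolution (MAR) grows roughly linearly with eccentricity E:
#   MAR(E) = MAR0 * (1 + E / E2)
# MAR0 and E2 are typical textbook values; exact constants vary by study.

MAR0 = 1.0  # arcminutes resolvable at the fovea (~20/20 vision)
E2 = 2.0    # eccentricity (degrees) at which resolvable detail has doubled

def size_multiplier(eccentricity_deg: float) -> float:
    """How much larger a detail must be at this eccentricity to match foveal legibility."""
    return 1.0 + eccentricity_deg / E2

for ecc in (0, 5, 10, 20, 40):
    print(f"{ecc:3d} deg out: detail must be ~{size_multiplier(ecc):4.1f}x foveal size")
# At 20-40 degrees out -- roughly where the nose's ghost image sits -- text
# would need to be 10-20x foveal size, but broad shapes and textures stay visible.
```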

A Google employee (who preferred not to be identified) confirmed to Technology Review that “the team is involved in many kinds of experimentation, and some of that will involve outdoor testing,” but wouldn’t provide any details about what that testing has revealed about the perceptual aspects of the user experience. Clearly, the concept video is meant to convey the basic premise of Project Glass, rather than render the user experience in a biologically accurate way.

But if Google really does plan to bring this product to market before the end of 2012, as it has claimed, it is exactly these psychological and phenomenological details that will have to be examined closely.

For his part, Changizi is optimistic. “Right now we have everyone walking around focusing their vision on tiny four-inch screens held in their hands, bumping into each other,” he says. “Whatever Google does with Project Glass, it’ll surely be an improvement over that.”
