An Expert’s View on Google’s Goggles

Mark Changizi, a neurobiologist and the author of The Vision Revolution, discusses Google’s augmented-reality glasses.
April 6, 2012

Project Glass, the latest sci-fi concept to come out of Google’s X Lab, has gotten a lot of attention online in the past 24 hours thanks to a clever demo video that shows a user donning a pair of augmented-reality eyeglasses that project a heads-up display of video chats, location check-ins, and appointment reminders.

Reactions to the product design have ranged from skeptical to enthusiastic, but I was curious about the psychological and visual-cognitive aspects of the user experience. What would these “digital overlays” actually look and feel like? Would they really be as sharp and legible as the ones shown in the video? (I don’t know about you, but I can’t focus sharply on anything less than an inch away from my eyeball, which is where the eyeglasses’ tiny screen would be dangling.) Would they obstruct my vision and make me motion-sick? How would my brain make perceptual and physical sense of the graphics: where would I “look,” exactly, in order to “watch” the tiny picture-in-picture video chat shown at the conclusion of the clip?

I asked Mark Changizi, an evolutionary neurobiologist and author of The Vision Revolution, to answer some of these questions in an audio commentary track on the video, which you can watch above.

“The graphics are not going to look like they’re floating out in front of you, because it’s only being displayed to one eye,” Changizi explains. Instead, the experience would be similar to “seeing through” the image of your own nose, which hovers semi-transparently in the periphery of your visual field at all times (even though you rarely pay attention to it). “Having non-corresponding images coming from each eye is actually something we are very much used to already,” Changizi says. “It’s not uncomfortable.” So Google’s one-eyed screen design seems biologically savvy.

Then again, Changizi continues, “they’re presenting text to you, and in order to discern that kind of detail, you need to have it in front of your fovea”—the tiny, central part of your visual field. “That’s typically *not* where we’re used to ‘seeing through’ parts of our own bodies, like our noses.” Which means that those crisp, instant-message-like alerts won’t be as simple to render as the video makes it seem.

“The more natural place to put [these interface elements], especially if it’s not text, is in the parts of your visual field where your face-parts already are,” Changizi says. This could be in the left and right periphery, where the ghost-image of your nose resides, or in the upper or bottom edges of your visual field, where you can see your cheeks when you smile or your brow when you frown. “There could be very broad geometrical or textural patterns that you could perceive vividly without having to literally ‘look at’ them,” he says. This would also make the digital overlays “feel like part of your own body,” rather than “pasted on” over the real world in an artificial or disorienting way. That experience might feel more like “sensing” the digital interface semi-subconsciously, rather than looking at it directly as if it were an iPhone screen.

A Google employee (who preferred not to be identified) confirmed to Technology Review that “the team is involved in many kinds of experimentation, and some of that will involve outdoor testing,” but wouldn’t provide any details about what that testing has revealed about the perceptual aspects of the user experience. Clearly, the concept video is meant to convey the basic premise of Project Glass, rather than render the user experience in a biologically accurate way.

But if Google really does plan to bring this product to market before the end of 2012, as it has claimed, it is exactly these psychological and phenomenological details that will have to be examined closely.

For his part, Changizi is optimistic. “Right now we have everyone walking around focusing their vision on tiny four-inch screens held in their hands, bumping into each other,” he says. “Whatever Google does with Project Glass, it’ll surely be an improvement over that.”
