
Google Glass Needs Phatic Interaction, Stat

When you’ve got a computer strapped to your face, do you really want to be talking to it all the time?
February 21, 2013

Google Glass’s new demo video is impressive. The product is looking less like magic (the original teaser video made visual and experiential claims that just weren’t plausible) and more like reality. The most interesting thing about the video is that it finally answers the most mundane, and most important, question about Google Glass’s user experience: how do you control the damn thing? Glass, apparently, relies on a Siri-like interaction: you invoke it by saying “OK Glass” and then issue further instructions.

The team at Google arrived at this solution after testing “dozens and dozens” of nonverbal head gestures and deeming them all too weird, annoying, or uncomfortable. Voice commands were the lesser evil, but even Steve Lee, Glass’s product-design lead, acknowledges that jabbering at your headset dozens of times a day is not an ideal way of interacting with a wearable computer. “I think there will likely be some way to move your head, which is comfortable and natural for a user, as well as not make them look odd and strange,” he told Fast Company last summer.

In other words: Glass needs phatic interactions. And soon. 

The term “phatic” comes from linguistics, and describes verbal expressions that aren’t meant to carry information or content, but are simply there to “keep the channel open.” It’s meta-communication. Small talk is phatic; saying “Can you hear me now?” or “You’re breaking up” over a bad cellular connection is phatic, too.

Phatic expressions can also be nonverbal–especially when applied to technology interfaces, says Laura Seargeant Richardson, an Experience Design Director at Frog. “I consider a phone’s vibration that indicates a text message to be phatic,” she told me. “It’s the interrupt, the attention-getting moment, the connection between you and the data or information the technology affords.”

Phatic feedback–meta-communication from the device to the user–is already commonplace. And a wearable computer like Google Glass has to employ lots of phatic feedback, if only to avoid being too visually distracting. But Glass’s cumbersome voice-control system shows that nonverbal phatic interactions will need to flow in the other direction, too: from the user to the device. 

Google knows this. “OK Glass” is already a phatic expression, says Richardson: “That’s very much like saying, ‘What’s up, how have you been, good to see you’ and so forth. It establishes a connection.” The real vision of Glass, though, is less like a smartphone and more like an omnipresent companion that’s always paying at least a low level of attention to whatever it is you’re doing. “OK Glass” isn’t the equivalent of waking your iPhone up from “sleep.” It’s not an object you turn on and off; it’s an assistant whose awareness you direct. Nonverbal, “nudge-like” phatic interactions will make that process much more fluid–and much less socially awkward. 

Figuring out exactly what those phatic interactions should be is a problem that Google has decided to punt on for now. Maybe it makes more sense to let them emerge from real-world use, much as the “@ message” and hashtagging conventions on Twitter did. Whether they arrive top-down or bottom-up, though, phatic interfaces are the future.
