Google Glass’s new demo video is impressive. The product is looking less like magic–the original teaser video made visual and experiential claims that just weren’t plausible–and more like reality. The most interesting thing about the video is how it finally confirms the most mundane, and important, aspect of Google Glass’s user experience: how do you control the damn thing? Google Glass, apparently, relies on a Siri-like interaction: you invoke it by saying “OK Glass” and then issue further instructions.
The team at Google arrived at this solution after testing “dozens and dozens” of nonverbal head-gestures, and deeming them all too weird, annoying or uncomfortable. Voice commands were the lesser evil–but even Steve Lee, Glass’s product-design lead, acknowledges that jabbering at your headset dozens of times a day is not an ideal way of interacting with a wearable computer. “I think there will likely be some way to move your head, which is comfortable and natural for a user, as well as not make them look odd and strange,” he told Fast Company last summer.
In other words: Glass needs phatic interactions. And soon.
The term “phatic” comes from linguistics, and describes verbal expressions that aren’t meant to carry information or content, but are simply there to “keep the channel open.” It’s meta-communication. Small talk is phatic; saying “Can you hear me now?” or “You’re breaking up” over a bad cellular connection is phatic, too.
Phatic expressions can also be nonverbal–especially when applied to technology interfaces, says Laura Seargeant Richardson, an Experience Design Director at Frog. “I consider a phone’s vibration that indicates a text message to be phatic,” she told me. “It’s the interrupt, the attention-getting moment, the connection between you and the data or information the technology affords.”
Phatic feedback–meta-communication from the device to the user–is already commonplace. And a wearable computer like Google Glass has to employ lots of phatic feedback, if only to avoid being too visually distracting. But Glass’s cumbersome voice-control system shows that nonverbal phatic interactions will need to flow in the other direction, too: from the user to the device.
Google knows this. “OK Glass” is already a phatic expression, says Richardson: “That’s very much like saying, ‘What’s up, how have you been, good to see you’ and so forth. It establishes a connection.” The real vision of Glass, though, is less like a smartphone and more like an omnipresent companion that’s always paying at least a low level of attention to whatever it is you’re doing. “OK Glass” isn’t the equivalent of waking your iPhone up from “sleep.” It’s not an object you turn on and off; it’s an assistant whose awareness you direct. Nonverbal, “nudge-like” phatic interactions will make that process much more fluid–and much less socially awkward.
Figuring out exactly what those phatic interactions should be is a problem that Google has decided to punt on for now. Maybe it makes more sense to let them emerge from real-world use, much like the “@ message” and hashtagging conventions on Twitter did. Whether they arrive from a top-down or bottom-up process, though, phatic interfaces are the future.