
Wearable Computing Pioneer Says Google Glass Offers “Killer Existence”

Thad Starner thinks people will soon crave the ultrafast communication that Google Glass makes possible.

Few gadgets have generated as much excitement and hostility as Google Glass, a voice-activated computer-monitor combo worn on eyeglass frames. Now being tested by early adopters, Glass is an ambitious attempt to advance “wearable computing.” It’s also a milestone for Thad Starner, a Georgia Tech professor who has been building and wearing head-mounted computers since 1993. A decade ago, he showed Google founders Larry Page and Sergey Brin a clunky version of such a device; in 2010 they hired Starner to be a technical lead for Project Glass. He met recently with MIT Technology Review IT editor Rachel Metz.

What was your first wearable computing device?

The first one that really worked was a Private Eye [head-mounted display] hooked up to a 12-megahertz [Intel 286 processor] with two megabytes of RAM and an 85-megabyte hard disk. That was hooked up to a two-pound car cell phone, and all that was driven by a seven-pound motorcycle battery. All in a shoulder bag. I wore the display on my head. The bag had all the components and also a Twiddler [a small handheld keyboard].

Does the arrival of Glass make you feel that everyone else can finally catch up to your way of seeing the world?

Yes. Real smartphones didn't come out until the mid-2000s. For [wearable-computing advocates], the smartphone was kind of a letdown, because it's something that takes your attention off the real world. It's very hard to use effectively while walking down the street. It's so fast for me to get information in and out [of the wearable computer] that it's much less socially obtrusive.

How is Glass less obtrusive than a smartphone? You’re wearing something on your face.

For me to go back and find the last message I sent you takes a few seconds. It's something I can do all the time. It's not something you can do all the time with a smartphone.

There’s already been a backlash, in large part because people can use Glass to make hands-free videos of their surroundings. Users are being called “Glassholes.” Does this surprise you?

I've been seen with interactive systems since 1993. There's nothing I've heard [about Glass] that I haven't heard before. And most of the time, the people who talk about these issues haven't actually used one; they've never even seen somebody use one. Can bystanders tell when you're using it? As a matter of fact, Glass does a very good job of that. You can see what the person is doing; you can see when the camera is on. Glass makes a horrible, horrible spy device.

Still, a lot of people think it’s ridiculous.

So were most new devices when they were introduced. So were cell phones, right? So were eyeglasses. So were cars.

That’s lofty, isn’t it, to compare Glass to things like the automobile?

I believe if we reduce the time between intention and action, it causes a major change in what you can do, period. When you actually get it down to two seconds, it’s a different way of thinking, and that’s powerful. And so I believe, and this is what a lot of people believe in academia right now, that these on-body devices are really the next revolution in computing.

If I want a wearable computer, couldn’t I just get a wristwatch device like Pebble?

They don't quite have the functionality. How do you take a picture of your baby's first steps with a wristwatch?

You can take out your phone. Isn’t it okay to get a photo of the third step?

It takes 20 seconds to get that picture. By then it's already happened; it's already passed. It's the same with a wristwatch. You don't really have a way of taking a good first-person picture with a wristwatch. I think the heads-up display is a better interface for most things you want to do.

How will Glass change the way we interact with each other?

Well, now you’ll actually be able to capture your baby’s first steps.

But in terms of having a conversation with your wife or your kids, you don’t think people will find it distracting?

If I walk by my students at Georgia Tech and you ask them, “Was he wearing it or not?” they can’t tell you. It’s just so a part of me, they don’t even notice it anymore.

What other applications would you like to see Glass have in the future?

I'll tell you one thing I found compelling early on—this is something from 1993, called the Remembrance Agent. Imagine that as you're, say, writing up this article, it pulls up articles or notes from your past that might be relevant to what you're currently typing. Having something that continually watches what you're typing and helps pull up your past memories is surprisingly powerful. [A sketch of this idea follows the interview.]

[Overall] we're going to see calm technology that helps mediate interruptions instead of just adding to them. Part of that is the fast interaction—if you have a good idea, get it out of your head, onto the device, and go on with your life quickly. Having a device that knows what's important to you and what's not at any given time helps mediate those interruptions. That is a very powerful assistant. [Another] thing is that we're going to see interfaces that augment the user's eyes, ears, and mind in a way that actually helps with daily life instead of distracting from it. Suppose you're playing a video game or watching TV. Having something that shows you the TV guide or a second screen while you're doing other things is really powerful.

There are a lot of ways to improve it. A lot of it is going to be in how people use it, how they integrate it into their lifestyles. People always talk about the killer app, but this is more a killer lifestyle. It's a killer existence.
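As a rough illustration of the Remembrance Agent idea Starner describes above, the Python sketch below compares the text a user is typing against a small corpus of past notes and returns the closest matches, using TF-IDF cosine similarity. This is not the original system's code; the note text, function names, and choice of similarity method are assumptions made for clarity.

    # Illustrative sketch only -- not the original Remembrance Agent code.
    # Idea: continuously compare the text being typed against past notes
    # and surface the closest matches. Note text and names are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    past_notes = [
        "Meeting notes: head-mounted display latency and battery life",
        "Draft: social acceptability of wearable cameras in public",
        "Reading list: interfaces for hands-free note-taking",
    ]

    # Index the past notes once; a live agent would update this incrementally.
    vectorizer = TfidfVectorizer(stop_words="english")
    note_vectors = vectorizer.fit_transform(past_notes)

    def suggest(current_text, top_k=2):
        """Return the past notes most similar to what is being typed now."""
        query = vectorizer.transform([current_text])
        scores = cosine_similarity(query, note_vectors)[0]
        ranked = sorted(range(len(past_notes)), key=lambda i: scores[i], reverse=True)
        return [past_notes[i] for i in ranked[:top_k] if scores[i] > 0]

    # A live agent would re-run this on every few keystrokes and render
    # the matches in the heads-up display rather than printing them.
    print(suggest("writing an article about wearable displays"))

A real agent would also need to index notes incrementally and throttle how often suggestions update, but the core loop is just this: query on every change, rank by similarity, show the top results.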
