Who Will Solve Wearable Computing’s “Jetpack” Problem?
One of the best and most short-lived blogs about the future of technology wasn’t about the future at all, but the present–which is where people actually live, buy things, and put technology to use in their daily routines. In a word, it was against “jetpack” technology: stuff that sounds awesome and looks futuristic, but doesn’t really make a ton of sense from an interaction design or user experience perspective. Can we build jetpacks? Sure. Does anyone actually need or want them? No.
I can’t help thinking about jetpacks as the emerging field of “wearable computing” picks up steam. Google, Apple, and Microsoft are all plunging headlong into head-mounted augmented reality, which some consultancies project will be a multibillion-dollar industry by 2015. There’s no denying that the technology behind Google Glass is awesome: the automatic picture-taking functionality seems especially cool. But will you, me, and everyone we know really be wearing a Star Trek prop on our faces in three years, just so we can do the same stuff we already do with our smartphones, except without using our hands? Farhad Manjoo had his skepticism turned around by a hands-on demo for Technology Review, but I’m still not convinced.
It’s not that I don’t think someone will make this technology go mainstream. They will. But they’ll do it by solving the jetpack problem, not by gee-whizzing the hell out of tech journalists. There are two paths that most people see Google Glass-style wearables taking: that of the Bluetooth headset (inessential to most, useful to a few, and inescapably dorky) or the iPhone (essential to many, useful to most, and inescapably desirable). They’re both wrong. To truly go mainstream, wearables have to not be “technology” at all–in the same way that eyeglasses and wristwatches aren’t. They have to be part of the present, not “the future.”
This isn’t just a look-and-feel problem, although that’s part of it. A pair of Google Glasses designed by Gucci might not make you look like a cyborg, but the lack of a truly compelling mainstream use case still looms large. Sergey Brin used Google Glass to take a zillion photos while driving. A famous fashion designer used them to make a short film. Thad Starner used them to look up answers to questions from a technology journalist from this magazine (and also get invisible tips from his PR handler at Google).
You know what would be much more instructive? Giving the glasses to 50 regular folks (a FedEx delivery guy, an office worker, a college student, a stay-at-home mom, a traffic cop, a sales rep, a barista) for a week and seeing how–or if–they fit the technology into their daily lives. They’re the ones who will have to be convinced that wearable computing makes sense. And none of them need a jetpack.