Augmented-reality technologies overlay digitally generated audio, visual, or haptic feedback on a user’s perception of the physical world. A technological dream since the 1960s, AR is now on the cusp of commercial viability: 2016 saw the massive popularity of the AR-based smartphone game Pokemon Go, and AR is appearing in more sophisticated, dedicated devices such as Microsoft’s HoloLens and Meta’s Meta 2 headset, as well as automotive windshields. These advances are happening quickly, and AR promises exciting new user experiences in domains ranging from training and education to games to everyday life.
While the technology and applications underlying AR are rapidly advancing, little thought has been given to how these systems should protect the security, privacy, and safety of their users. Starting in 2011, before Google Glass was announced and when such technologies were still largely in the realm of science fiction, my collaborators and I have been working to understand and address this gap.
For example, imagine moving around the world wearing an AR headset that provides useful functionality: it recognizes colleagues and reminds you of your next meeting with them; it shows walking and driving directions overlaid directly on the road; it automatically translates text and speech when you travel; and it lets you play Pokemon with your kids. Now imagine accidentally installing a malicious application that blocks your view of oncoming cars as you’re crossing the street, startles you with scurrying spiders, makes people you know look like strangers, or plasters everything with distracting advertisements. At the same time, you might find it a bit creepy that the device and its applications have access to a constant video and audio feed of your surroundings, not to mention that you’re being recorded by other people’s devices. Keiichi Matsuda’s short film “Hyper-Reality” offers one vision of such a dystopian future.