At CES, a Preview of Tomorrow's Wearable Computers
Eyeglasses that overlay data and imagery onto the real world will unlock new kinds of mobile computing.
The Consumer Electronics Show in Las Vegas isn’t just a place to see new products from gadget giants like Samsung and Sony; it’s also a place to spot small companies whose disruptive ideas may become the big consumer technologies of the future.
This year, several of the most promising small exhibitors were showing off technology that could free us from having to peer down at our mobile devices—glasses that can overlay digital data onto the world around us.
One of those companies was Lumus Optics, based near Tel Aviv, Israel. It demonstrated prototype glasses that display translucent imagery appearing to fill the wearer’s view like a 10-foot-wide TV seen from two feet away. Ari Grobman, business development manager for Lumus, told Technology Review that his company was working with “a number of top 10 consumer device companies” interested in commercializing the technology. He said nondisclosure agreements prevented him from saying more.
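Taken literally, the "10-foot TV at two feet" figure implies a very wide apparent field of view. A quick back-of-the-envelope check (the numbers come from the demo description; the calculation is just geometry, not a Lumus specification):

```python
import math

# Apparent angular width of a 10-foot-wide virtual screen
# viewed from 2 feet away (figures from the demo description).
width_ft = 10.0
distance_ft = 2.0

# Half-angle from the viewer to one edge of the screen, doubled.
fov_deg = 2 * math.degrees(math.atan((width_ft / 2) / distance_ft))
print(round(fov_deg, 1))  # roughly 136 degrees
```

That is far wider than any head-worn display of the era actually covered, which suggests the comparison describes perceived immersiveness rather than a measured field of view.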
“We have a crazy amount of computing horsepower and bandwidth in our small mobile devices, but you can’t get the full utility of that,” says Grobman. “This will change that.”
In demonstrations, the glasses overlaid video of dancers or of a mocked-up GPS navigation app onto the wearer’s vision.
The glasses rely on a computer or phone to provide them with imagery, a link that can be made using Bluetooth. Adding sensors like accelerometers and a camera to the glasses will enable sophisticated apps, says Grobman, such as one that uses facial recognition to call up useful information about people. The technology to enable this is already available. Facebook and Google use facial recognition to help users tag photos, while Israeli company Face.com provides a facial recognition service that can be built into other software.
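The lookup step Grobman describes is essentially nearest-neighbor matching over face "embeddings" returned by a recognition service. A minimal sketch, assuming a hypothetical gallery of precomputed embeddings (the names, vectors, and threshold here are illustrative, not from any real service):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical gallery: name -> face embedding produced by some
# recognition service (e.g. the kind Face.com exposed via an API).
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def identify(query, threshold=0.9):
    """Return the best-matching name, or None if nothing is close."""
    name, score = max(
        ((n, cosine_similarity(query, e)) for n, e in gallery.items()),
        key=lambda t: t[1],
    )
    return name if score >= threshold else None

print(identify([0.88, 0.12, 0.28]))  # matches "alice"
```

A real pipeline would first detect and crop the face in the camera frame before computing the embedding; the matching step stays the same.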
“Once you have it, the community of developers will bring stuff we haven’t thought of yet, the same as with touch screens and the iPhone,” says Grobman. He guesses that consumer devices will appear in “two years, maybe less.”
Vuzix of Rochester, New York, estimates that its augmented-reality technology will reach consumers in a similar time frame. At CES the company displayed a monocular display that will go on sale later in 2012 for $5,000 to $10,000. That first product will be aimed at the military and industry, says Clark Dever, marketing manager for Vuzix, but the company plans to develop a more consumer-friendly version, too.
The industrial version is intended for tasks like overlaying the schematics of a machine onto the vision of a mechanic. “When you call tech support, they can draw guidance in your field of view,” says Dever. The lens of a Vuzix display is made from glass etched with waveguides that channel light emitted from the frame toward mirrors patterned into the glass, which direct it into the wearer’s eye. The display can also connect with any device that uses Bluetooth.
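Waveguides like these typically carry light along the glass by total internal reflection. The governing relation is Snell's law; a minimal sketch with an illustrative refractive index (not Vuzix's actual optical specs):

```python
import math

# Critical angle for total internal reflection at a glass/air
# boundary: sin(theta_c) = n_air / n_glass (from Snell's law).
# Light hitting the boundary more steeply than this stays inside.
n_glass = 1.5   # illustrative index for common optical glass
n_air = 1.0

theta_c = math.degrees(math.asin(n_air / n_glass))
print(round(theta_c, 1))  # about 41.8 degrees
```

The etched mirror pattern then breaks this confinement at controlled points, redirecting the trapped light out of the glass and into the eye.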
Most companies working on smart glasses got started building systems for the military. Lumus’s technology is used by pilots in the U.S. Air Force and U.S. National Guard. Vuzix got started working on pilot goggles for the U.S. Air Force that overlaid a preview of a weapon’s blast radius onto the place being targeted, to increase awareness of potential collateral damage.
The software needed to offer heads-up augmented reality is far ahead of the hardware. Various augmented reality apps for smart phones can already recognize markers, text, or even landmarks in front of a device’s camera and respond by overlaying text or visuals onto a view of a scene.
One example of this type of augmented reality at CES was Aurasma, a division of software company Autonomy. Aurasma’s app can recognize images or landmarks and add virtual 3-D objects, for example showing dragons circling London’s Big Ben to promote a Harry Potter movie. The app has been downloaded over two million times.
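Once the marker or landmark is tracked, the overlay step itself is ordinary image compositing. A toy sketch of alpha blending a virtual object's pixel onto a camera frame's pixel (real AR apps do this per pixel across the rendered object after estimating the camera pose; the values here are made up):

```python
def blend(camera_px, overlay_px, alpha):
    """Alpha-blend an overlay pixel onto a camera pixel (RGB tuples)."""
    return tuple(
        round(alpha * o + (1 - alpha) * c)
        for c, o in zip(camera_px, overlay_px)
    )

# A grey camera pixel combined with a mostly opaque red dragon pixel.
frame_px = (100, 100, 100)
dragon_px = (255, 0, 0)
print(blend(frame_px, dragon_px, 0.6))  # (193, 40, 40)
```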
Aurasma can also recognize hand gestures, making virtual content interactive—a feature that would be valuable in smart goggles. “What we’ve got today would work just as well with goggles if they were available,” says Matt Mills, head of partnerships and innovation at Aurasma.
Mills says many Google searches are made to find out more about things that are literally right in front of us. “We’re trying to move things on to the point where this is the way you get your information, rather than having to use a Web browser,” says Mills. “It’s much faster to snap a picture.”
Another headset on display at CES demonstrated how smart goggles could immerse the wearer in a world far removed from the one in front of them. Sensics, a Columbia, Maryland, company, showed off its Smart Goggles, which cover a person’s eyes and ears, and look like an updated version of Robocop’s helmet. A video display in front of each eye allows the Smart Goggles to completely immerse the wearer in a virtual 3-D environment. The front of the Smart Goggles is studded with 11 cell-phone cameras, which can be used to detect hand and arm gestures to allow a person to interact with what they see.
“When you’re playing Fruit Ninja, you can really be swiping with your hand instead of tapping on your phone’s screen,” says Jason Kaplan, who leads business development at Sensics. Inside the goggles, a dual-core mobile processor runs the latest version of Google’s Android mobile operating system, Ice Cream Sandwich.
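Camera-based gesture detection of this kind often begins with simple frame differencing: flag regions where consecutive frames disagree, then interpret the motion. A toy sketch using tiny grayscale frames as nested lists (the frame data and thresholds are invented for illustration):

```python
def motion_mask(prev, curr, threshold=30):
    """Mark pixels whose brightness changed by more than threshold."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

def motion_detected(prev, curr, min_pixels=2, threshold=30):
    """Crude gesture trigger: enough pixels changed between frames."""
    mask = motion_mask(prev, curr, threshold)
    return sum(sum(row) for row in mask) >= min_pixels

# A hand sweeping through the right side of a 3x4 frame.
frame_a = [[10, 10, 10, 10],
           [10, 10, 10, 10],
           [10, 10, 10, 10]]
frame_b = [[10, 10, 200, 200],
           [10, 10, 200, 200],
           [10, 10, 10, 10]]
print(motion_detected(frame_a, frame_b))  # True
```

A production system would track the changed region across frames and classify its trajectory as a swipe, tap, or other gesture.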
Bradford Schmidt, head of media at GoPro, which makes small digital cameras used by extreme sports enthusiasts to capture their experiences firsthand, said he’d like to connect Sensics’ technology with his own. “We do a lot of RC planes, and we would love to have a set of goggles where you can fly the plane from its point of view.”