Augmented reality, or the superimposition of the virtual world on top of the real, is one of those almost-here technologies that has been bouncing around so long it takes a company like Google to remind us of its enormous potential.
For those just tuning in, Google just pulled back the curtain on its “Project Glass” effort to roll out an AR display affordable and simple enough for the consumer market.
But why is Google diving into this technology at this precise moment in time? The answer is simple: The technology to realize first-order augmented reality is mature, or nearly so. Let’s list the barriers to getting this tech to market that are about to be crushed.
1. Bright, lightweight displays small enough to fit into a pair of glasses.
A display sized to fit within a typical eyeglass lens can’t be wider than about an inch. What’s a reasonable resolution at that distance from your eye? Try holding a non-retina iPhone up to one of your eyes about as close as you can before it goes out of focus – that’s 480 pixels wide, and the results are a little blocky. A ‘retina’ display will yield better results, and that’s 960 pixels wide.
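The thought experiment above can be made concrete with a little trigonometry. This is a rough sketch under my own assumptions (a one-inch display viewed at roughly the 10 cm near-focus distance the article describes), not figures from Google or Vuzix:

```python
import math

# Angular resolution of a ~1-inch-wide display held at near-focus distance.
# Assumptions (mine, not the article's): 2.54 cm display width, 10 cm viewing
# distance -- about as close as most eyes can focus.
display_width_cm = 2.54
viewing_distance_cm = 10.0

# Visual angle subtended by the display, in degrees.
angle_deg = 2 * math.degrees(math.atan((display_width_cm / 2) / viewing_distance_cm))

for pixels in (480, 960):
    ppd = pixels / angle_deg  # pixels per degree of visual angle
    print(f"{pixels:4d} px wide -> {ppd:.0f} pixels/degree")

# 480 px works out to roughly 33 pixels/degree and 960 px to roughly 66;
# ~60 pixels/degree is the usual threshold for a "retina" display, which
# is why 480 px looks blocky and 960 px does not.
```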
Vuzix, one of the few companies of any size attempting to bring AR glasses to consumers, already has a display 720 pixels wide that is viewable through a monocle, currently aimed at defense and industrial customers. (It’s also priced accordingly, at $2,500 to $5,000 per unit.)
That means Vuzix managed to create a display with a higher pixel density than the iPad 3’s, bright enough to be seen in daylight but transparent when switched off. Compare that to the company’s efforts just two years ago to sell a significantly worse pair of AR glasses to get an idea of just how far this technology has come in a very short time.
Vuzix says it will offer a consumer model of its high-end display for $600 or less by 2013.
2. A battery of sensors intended for smartphones that are accurate enough to tell the exact location and orientation of your head.
If you thought the display problem in AR was hard, just wait until you get a load of the registration problem. Unless your AR headset knows the exact location of your head in space – to within inches – and its orientation – to within a degree or so – the things popping up in your field of view won’t make any sense, because they won’t actually be on top of or even next to the objects you’re looking at.
Traditionally, this problem has been solved with highly accurate location systems that work only indoors. But a Google Glasses setup will require that the registration problem be solved in the real world, indoors and out.
Luckily, a bevy of sensors already present in your phone can be repurposed for this trick. First up, location. Thanks to vendors like Skyhook and Broadcom, a combination of GPS, cell phone tower and WiFi hotspot data can position you fairly accurately, and fast.
Next, there’s the orientation issue. Phones have compasses for absolute heading, plus MEMS-based gyros to track changes in roll, pitch and yaw. Some Android phones even have a barometric pressure sensor to determine your altitude.
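Fusing those sensors is the standard trick: the gyro is smooth but drifts over time, while the compass is noisy but absolute. A minimal sketch of one common approach, a complementary filter, is below. The numbers (drift rate, blend factor) are illustrative assumptions of mine, not anything Google or Vuzix has published, and real code would also have to handle the 0°/360° wrap-around:

```python
def complementary_filter(heading, gyro_rate, compass_heading, dt, alpha=0.98):
    """Fuse a fast-but-drifting gyro with a slow-but-absolute compass.

    Integrating the gyro's angular rate gives smooth short-term tracking;
    blending in a little of the compass reading each step pulls the
    estimate back toward the true absolute heading.
    (Simplified: ignores the wrap-around at 0/360 degrees.)
    """
    gyro_estimate = heading + gyro_rate * dt  # integrate rotation rate
    return alpha * gyro_estimate + (1 - alpha) * compass_heading

# Simulate 10 seconds of standing still facing due east (90 degrees),
# with a gyro that falsely reports a constant 0.5 deg/s of rotation.
heading = 90.0
for _ in range(1000):
    heading = complementary_filter(heading, gyro_rate=0.5,
                                   compass_heading=90.0, dt=0.01)

# The uncorrected gyro alone would have drifted about 5 degrees;
# the compass blend keeps the estimate within a fraction of a degree.
```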
3. Geographic information at a resolution high enough to make AR worthwhile.
There’s no point in rolling out an AR display if there’s nothing worth displaying. But the whole story of the location space over the past few years has been companies working behind the scenes to figure out things as mundane as the exact dimensions of countless businesses, so that when you try to check into the corner coffee shop in Foursquare, it doesn’t think you’re actually at the pizza parlor next door.
Google is even trying to map the interior of buildings, which is presumably how its AR system will perform the trick of showing you which section of a bookstore has the book you want, as pictured in its promotional video.
Meanwhile, projects like OpenStreetMap are allowing amateur cartographers to contribute to a “wikipedia of maps” that in many locations has the best resolution – both spatially and temporally – of any set of map tiles, ever.
A market so ripe that Google won’t be alone for long.
Vuzix, Google’s much less well-known competitor in this space, has partnered with Nokia on its latest effort to create AR glasses. The problems Google is tackling are so challenging that no one player in this space is likely to come out of the gate with a totally satisfactory solution. That means the next place mobile device manufacturers compete on features could well be glasses-type displays.
In fact, let’s make a bet: Apple is working on something like this already, and has been for some time. In 2010, Cupertino named wearable computing expert Richard DeVaul to the post of Senior Prototype Engineer.