
Why Google Glass Is Just the Beginning

In Google’s backyard, a startup has its eyes on glasses that offer more ways to interact with the digital world.

While Google toils to perfect the head-worn mobile computer known as Google Glass, a startup located literally down the street from its Silicon Valley campus is hard at work on a similar system that it believes will let users touch and move virtual objects instead of just viewing them.

On display: A user manipulates a 3-D cube while viewing it through Atheer’s prototype glasses.

Software being developed by Atheer Labs could lead to computerized glasses and other wearable devices that let you conduct video conference calls with people as though they were actually in the room with you, navigate a map by moving your head, or play 3-D games that feel truly interactive. And whereas Google Glass—for now, at least—is available only on Google’s purpose-built eyewear, founder and CEO Soulaiman Itani says Atheer (a play on “ether”) will roll out its platform with the expectation that it will be built into glasses, smart watches, or other gadgets.

At Atheer’s office, I tried two different demo devices that looked like dark glasses with electronic components attached. Each was fixed in place on a tripod and wired to a nearby computer, so I could stand looking through the lenses and move the device around, but not walk around with it.

One of the devices, which featured a depth sensor, let me turn a levitating cube with swipes of my finger and pop bubbles by poking them. The other, which incorporated two cameras into the glasses, let me navigate a newspaper that appeared in front of my face by swiping my finger around, and check out 3-D images and videos embedded in the seemingly flat page.

Reaching out: Atheer founder Soulaiman Itani tries out a prototype augmented-reality device.

Essentially, the cameras or depth sensor detect where your hand is and then deduce where your fingertips are and how your hand is moving. In this way, the system can detect actions like a midair push of a virtual button.
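In software terms, a midair button press reduces to checking whether a tracked fingertip has crossed a virtual surface. Below is a minimal sketch of that idea; Atheer hasn't published its pipeline, so the coordinate frame, the VirtualButton structure, and the thresholds here are illustrative assumptions, not its actual code.

```python
# Hypothetical sketch of midair button-press detection, not Atheer's code.
# Assumes a sensor that reports fingertip positions in meters, in a
# head-centered frame with z pointing away from the wearer.
from dataclasses import dataclass

@dataclass
class VirtualButton:
    center: tuple        # (x, y, z) of the button face, in meters
    radius: float        # circular touch-target radius, in meters
    press_depth: float   # how far past the face still counts as a press

def is_pressed(fingertip, button):
    """Return True if the fingertip has pushed 'through' the button face."""
    fx, fy, fz = fingertip
    cx, cy, cz = button.center
    # Is the fingertip inside the circular target, seen face-on?
    within_target = ((fx - cx) ** 2 + (fy - cy) ** 2) ** 0.5 <= button.radius
    # Has it pushed just past the face along the depth axis?
    pushed_through = cz <= fz <= cz + button.press_depth
    return within_target and pushed_through

# Example: a 3 cm button floating 0.4 m in front of the user.
button = VirtualButton(center=(0.0, 0.0, 0.4), radius=0.03, press_depth=0.02)
print(is_pressed((0.01, -0.01, 0.41), button))  # True: a midair "push"
print(is_pressed((0.10, 0.00, 0.41), button))   # False: outside the target
```

A real system would also smooth the noisy fingertip track over several frames and require the press to be held briefly, so a single jittery sensor reading doesn't fire the button by accident.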

The displays inside the glasses, meanwhile, need to be see-through so you can see both what’s really in front of you and what’s virtually there. Atheer’s software also needs to determine where your gaze is focused and make sure that it shows images at the right distance and perspective for each eye.
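That per-eye requirement comes down to stereo rendering: the same virtual point is projected from two slightly different viewpoints, one per eye, and the resulting horizontal offset (the disparity) is what makes the object appear at a particular depth. Here is a minimal pinhole-camera sketch; the 63 mm interpupillary distance and the focal length are illustrative defaults, not figures from Atheer.

```python
# Illustrative stereo projection, not Atheer's renderer. A virtual point
# lands at slightly different horizontal pixel positions in each eye's
# view; that disparity is what places the object at a perceived depth.
IPD = 0.063      # interpupillary distance in meters (typical adult value)
FOCAL = 800.0    # pinhole-camera focal length, in pixels

def project_for_eye(point, eye_offset_x):
    """Project a 3-D point (meters, head frame) onto one eye's image plane."""
    x, y, z = point
    x_eye = x - eye_offset_x   # shift the point into this eye's camera frame
    return FOCAL * x_eye / z, FOCAL * y / z

def project_stereo(point):
    left = project_for_eye(point, -IPD / 2)   # left eye sits at -IPD/2
    right = project_for_eye(point, +IPD / 2)  # right eye sits at +IPD/2
    return left, right

# A point 0.5 m ahead shows about 100 px of disparity at these settings.
(lu, lv), (ru, rv) = project_stereo((0.0, 0.0, 0.5))
print(lu - ru)  # horizontal disparity in pixels
```

Doubling the viewing distance halves the disparity, which is why getting distance and perspective right for each eye matters: a wrong offset places the object at the wrong depth.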

It was neat when it worked, but I often found it tricky to manipulate and press the virtual objects because the sensor and cameras seemed to have difficulty detecting my gestures if my fingers weren’t positioned at the right angle. Some of this could be solved by calibrating the system for me, which you wouldn’t do with a demo unit that’s constantly used by different people. But Itani says the biggest obstacle to manipulating 3-D objects in space is that the technology can’t yet capture information fast enough. He’s hoping upcoming depth sensors will help.

Itani says Atheer’s software can be integrated with any operating system. So far, the team has been using it with a modified version of Android, Google’s open-source mobile software, that incorporates 3-D features and understands how to display things so that they appear superimposed on the world around you instead of on a fixed screen.

The company is concentrating on getting software developers to start building apps for its platform and hopes to have its technology in the hands of users in some form next year. When it’s available, Itani estimates, a stand-alone headset running Atheer software will cost around $500 to $600, while one that connects with a smartphone and relies on it to run the software would cost about $200 to $300.

“In a few years you will be able to go to a restaurant, leave a note in the middle of the air for your friend, and they come and they can see it,” he says. “You can have movies and entertainment that surrounds you, and you can look around.”

A few years can be an eternity in the tech market, though, and Atheer is still in its infancy.

Jason Leigh, a computer science professor and director of the Electronic Visualization Laboratory at the University of Illinois at Chicago, also sees major challenges ahead before such wearable devices become popular. For example, he says, people have to be trained to use them properly: attempts to use similar types of glasses for military purposes have come to grief because users became so engrossed in the on-screen imagery that they lost track of their physical surroundings.

“It’s just like any new piece of technology,” he says. “Now that you have something else interfering with your primary sense, you’ve got to adapt to it so you don’t crash into a tree.”
