MIT Technology Review

 


In a keynote speech this morning at the Society for Information Display’s annual Display Week conference in Seattle, Steven Bathiche, research director of Microsoft’s Applied Sciences Group, demonstrated an immersive computing system that expands on the company’s Surface technology. Surface is a tabletop display that uses a set of four cameras to detect the location of objects placed on its surface, and special software to identify them. But even with excellent software, this camera-based approach can only do so much.

During his talk, Bathiche played a video showing what’s possible when this concept is combined with better hardware: some nifty (but sketchily explained) optics and a transparent display. Transparent displays can do more than provide heads-up information while letting you see what’s in front of you (for example, showing traffic information on a windshield). A transparent display can also look back at you. Bathiche’s group has combined a flat lens, called a wedge lens, with a transparent light-emitting-diode display. The system can act as a touch screen, and it can also detect gestures made from several feet away.

In a video of a demo system in which the display is mounted on top of the flat lens, a man walks up to the display and then steps back several feet while the display shows his image; that image is captured through the lens rather than by an external camera. With this form factor, each hand can be assigned a different function: the left hand draws while the right moves the “paper” on screen. Even when the hands cross, the system keeps track of which hand is which and what it does.
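Microsoft did not explain how its hand tracking works, but the behavior described, with each hand keeping its assigned role even when the hands cross, is the kind of thing a simple frame-to-frame tracker can provide. The sketch below is purely illustrative (the `HandTracker` class and its greedy nearest-neighbor matching are assumptions, not Microsoft’s method): each new detection is matched to the closest previously tracked hand, so the role bound to a hand travels with it.

```python
# Illustrative sketch only: persistent per-hand roles via greedy
# nearest-neighbor tracking. Not Microsoft's actual implementation.
import math


class HandTracker:
    def __init__(self):
        self.hands = {}   # track id -> last known (x, y)
        self.roles = {}   # track id -> "draw" or "pan"
        self._next_id = 0

    def update(self, detections):
        """detections: list of (x, y) hand positions in this frame.
        Returns {track_id: role} for the hands seen this frame."""
        assigned = {}
        unmatched = list(self.hands.items())
        for pos in detections:
            if unmatched:
                # Match this detection to the nearest existing track,
                # so a hand keeps its identity as it moves.
                hid, prev = min(unmatched,
                                key=lambda h: math.dist(h[1], pos))
                unmatched.remove((hid, prev))
            else:
                # New hand: first one seen draws, second pans the paper.
                hid = self._next_id
                self._next_id += 1
                self.roles[hid] = "draw" if not self.roles else "pan"
            assigned[hid] = pos
        self.hands = assigned
        return {hid: self.roles[hid] for hid in assigned}
```

A greedy match like this can still swap identities if the hands pass very close together in one frame; a production system would presumably use richer cues (velocity, arm geometry, appearance) to disambiguate.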

Another system Bathiche showed on video uses a camera and image-processing software to determine two viewers’ positions, and the positions of their eyes, then uses that information to sequentially and directionally display a different image to each viewer. Bathiche said this enables side-by-side, glasses-free 3-D viewing.
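The scheme as described amounts to time multiplexing: the display alternates frames, steering each one toward one tracked viewer’s eyes so that each person perceives only their own image stream. The toy simulation below sketches that idea under stated assumptions; the `Viewer` and `Display` classes are illustrative stand-ins for the camera tracking and steerable optics, not a real API.

```python
# Illustrative simulation of time-multiplexed directional display:
# alternate frames, each steered at one tracked viewer's eye position.
class Viewer:
    def __init__(self, name, eye_pos):
        self.name = name
        self.eye_pos = eye_pos  # stand-in for camera-based eye tracking
        self.seen = []          # frames this viewer perceives

class Display:
    def __init__(self, viewers):
        self.viewers = viewers

    def show_directional(self, image, eye_pos):
        # Model the steerable optics: only the viewer whose eyes are
        # at eye_pos receives light for this frame.
        for v in self.viewers:
            if v.eye_pos == eye_pos:
                v.seen.append(image)

def run(display, viewers, streams, ticks):
    """On each tick, show the next frame of the current viewer's
    stream, aimed at that viewer's tracked eye position."""
    for t in range(ticks):
        v = viewers[t % 2]                      # alternate viewers
        display.show_directional(streams[v.name][t // 2], v.eye_pos)
```

The same trick, with the two “viewers” replaced by one viewer’s left and right eyes, is presumably what yields the glasses-free 3-D viewing Bathiche described.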


Tagged: Computing, displays, Microsoft Research, Microsoft Surface, interfaces, display week

