When Ryder Ziola places a bell pepper on the kitchen work surface in front of him, the tabletop springs to life, suggesting recipes and other information. He can also use the work surface like a touch screen, selecting options with a finger to see, for example, what ingredients might go well with his pepper. Ziola, a graduate student at the University of Washington, developed the system, dubbed Oasis, with researchers at Intel Labs Seattle led by senior scientist Beverly Harrison. Ziola is demonstrating Oasis at the ninth annual Intel Research Day, held at the Computer History Museum in Mountain View, CA.

“If you put, for example, a steak on the surface, it will recognize the steak and come up with a recipe,” says Ziola. “It may also come up with nutritional information.” The camera can also track the motion of a person’s hand and discern whether or not he is touching the surface, allowing the surface to be interactive.
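The article doesn't describe how Oasis decides that a fingertip is actually touching the tabletop, but a common approach with a depth camera is to compare each live depth frame against a reference capture of the empty surface. The following sketch illustrates that idea only; the depth maps, threshold, and find_touch_points helper are hypothetical, not Intel's implementation.

import numpy as np

# Hypothetical depth maps in millimetres: 'surface' captured once for the
# empty tabletop, 'frame' captured live with a hand in view.
TOUCH_THRESHOLD_MM = 15  # treat anything within ~1.5 cm of the table as a touch

def find_touch_points(frame: np.ndarray, surface: np.ndarray) -> np.ndarray:
    """Return (row, col) pixels where something hovers close enough to
    the tabletop to count as a touch."""
    height_above_surface = surface - frame  # closer to the camera means a smaller depth value
    touching = (height_above_surface > 0) & (height_above_surface < TOUCH_THRESHOLD_MM)
    return np.argwhere(touching)

# Synthetic example: a flat table at 800 mm with one "fingertip" 10 mm above it.
surface = np.full((240, 320), 800, dtype=np.int32)
frame = surface.copy()
frame[120, 160] = 790
print(find_touch_points(frame, surface))  # -> [[120 160]]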

A touch of a finger can bring up a timer, or summon images or video offering guidance on a particular step in the recipe. When two ingredients are placed on the surface together, Oasis suggests recipes that combine them. Any of the information displayed on the surface can be dismissed by sweeping a hand across the projected images.
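One simple way such pairing could work is to match the set of recognized ingredients against an ingredient index and rank recipes by overlap. The sketch below is purely illustrative; the RECIPES table and suggest function are made up, not part of Oasis.

# Hypothetical recipe index: each recipe names the ingredients it needs.
RECIPES = {
    "stuffed peppers": {"bell pepper", "rice", "ground beef"},
    "pepper steak":    {"bell pepper", "steak", "onion"},
    "steak salad":     {"steak", "lettuce", "tomato"},
}

def suggest(detected: set[str]) -> list[str]:
    """Rank recipes by how many of the detected ingredients they use."""
    scored = [(len(needs & detected), name) for name, needs in RECIPES.items()]
    return [name for hits, name in sorted(scored, reverse=True) if hits > 0]

print(suggest({"bell pepper", "steak"}))  # -> ['pepper steak', 'stuffed peppers', 'steak salad']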

Oasis uses a palm-sized “pico-projector” made by Microvision to project images onto the surface. The positioning and recognition of objects is worked out using a depth-perceiving camera made by PrimeSense, the company that supplies sensors for Microsoft’s Kinect gesture controller for the Xbox. Although the camera could be used to recognize objects by their 3-D shape, recognition currently relies only on color information. “Being able to sense depth can make recognition easier and more robust,” says Ziola, who adds that this feature will eventually be added to the system.
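The article says recognition currently uses color alone. One standard way to do color-based matching is to compare hue-saturation histograms with OpenCV; the sketch below shows what that might look like. The color_signature and recognize functions and the 0.4 acceptance threshold are assumptions for illustration, not the actual Oasis code.

import cv2
import numpy as np

def color_signature(bgr_image: np.ndarray) -> np.ndarray:
    """Hue-saturation histogram used as a simple color 'fingerprint'."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def recognize(crop: np.ndarray, known_objects: dict[str, np.ndarray]) -> str:
    """Match a cropped object image against stored signatures by histogram correlation."""
    best_name, best_score = "unknown", 0.4  # arbitrary acceptance threshold
    query = color_signature(crop).astype("float32")
    for name, signature in known_objects.items():
        score = cv2.compareHist(query, signature.astype("float32"), cv2.HISTCMP_CORREL)
        if score > best_score:
            best_name, best_score = name, score
    return best_name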

Oasis can rapidly be trained to recognize new objects. When the system was presented with a pack of gum, only a few clicks of a mouse were needed to tell it this was a new object to track. “It really just needs a snapshot of it,” says Ziola.
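Training from a single snapshot fits naturally with the histogram approach sketched above: enrolling a new object amounts to storing one color signature under a new name. Again, the enroll helper and file path here are hypothetical, not the Oasis training interface.

import cv2
import numpy as np

known_objects: dict[str, np.ndarray] = {}

def signature(bgr: np.ndarray) -> np.ndarray:
    # Same hue-saturation histogram as in the recognition sketch above.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def enroll(name: str, snapshot_path: str) -> None:
    """One snapshot is enough: store its color signature under the new name."""
    known_objects[name] = signature(cv2.imread(snapshot_path))

# e.g. enroll("pack_of_gum", "gum_snapshot.png"), after which the recognize
# sketch above could return "pack_of_gum" for matching crops.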

Credit: Technology Review
Video by Tom Simonite, edited by Brittany Sauser

Tagged: Computing, Intel, touchscreens, virtual machines, camera imaging system
