
Intel Says Laptops and Tablets with 3-D Vision Are Coming Soon

Your next laptop or tablet may have 3-D sensors that let it recognize gestures or augment a real scene with virtual characters.
September 12, 2014

Laptops with 3-D sensors in place of conventional webcams will go on sale before the end of this year, according to chip maker Intel, which is providing the sensing technology to manufacturers. And tablets with 3-D sensors will hit the market in 2015, the company said at its annual developers’ conference in San Francisco on Wednesday.

Look out: Intel’s 3-D sensing technology is small enough to fit inside this new tablet from Dell, which is only six millimeters thick.

Intel first announced its 3-D sensing technology at the Consumer Electronics Show in January (see “Intel’s 3-D Camera Heads to Laptops and Tablets”). It has developed two different types of depth sensor. One is designed for use in place of a front-facing webcam, to sense human movement such as gestures. The other is designed for use on the back of a device, to scan objects as far as four meters away. Both sensors allow a device to capture the color and 3-D shape of a scene, making it possible for a computer to recognize gestures or find objects in a room.

Intel is working with software companies to develop applications that use the technology. In the next few weeks, the chip maker will release a free software development kit that any developer can use to build apps for the sensors.
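The article doesn't describe that software's interface, so as a rough illustration of what application code for a depth camera looks like, here is a minimal sketch using the open-source pyrealsense2 bindings for Intel's RealSense cameras (my substitution, not the SDK described above). It reads a single depth frame and queries the distance at the center pixel.

```python
# Minimal sketch: read one depth frame from a RealSense camera and report the
# distance at the image center. Uses pyrealsense2 as a stand-in for the SDK
# mentioned in the article.
import pyrealsense2 as rs

pipeline = rs.pipeline()          # manages streaming from the attached camera
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)  # depth at 30 fps

pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()          # block until a frameset arrives
    depth = frames.get_depth_frame()
    cx, cy = depth.get_width() // 2, depth.get_height() // 2
    print(f"Distance at center pixel: {depth.get_distance(cx, cy):.2f} m")
finally:
    pipeline.stop()
```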

Partners already working with Intel include Microsoft’s Skype unit, the movie and gaming studio DreamWorks, and the 3-D design company Autodesk, according to Achin Bhowmik, general manager for Intel’s perceptual computing business unit.

None of those partners showed off what they’re working on at the event this week. But Intel showed several demonstrations of its own. One, developed with a startup called Volumental, lets you snap a 3-D photo of your foot to get an accurate shoe size measurement—something that could help with online shopping.

Another demonstration showed how a 3-D sensor could measure the dimensions of a sofa in a store, and how it might gauge the true size of a fisherman’s catch from a photo of the fish dangling from his rod.
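Measurements like these are possible because a depth pixel, combined with the camera’s intrinsics, can be back-projected into a 3-D point, and the distance between two such points is a true metric length. The sketch below uses the standard pinhole-camera model with placeholder intrinsics; it illustrates the principle only and is not Intel’s code.

```python
# Sketch: measure a real-world length from two pixels in a depth image using
# the pinhole camera model. The intrinsics (fx, fy, cx, cy) are made-up
# placeholder values; a real app would read them from the sensor.
import math

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth_m metres into a 3-D camera-space point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

fx = fy = 600.0          # placeholder focal length in pixels
cx, cy = 320.0, 240.0    # placeholder principal point for a 640x480 image

# Two pixels the user tapped (say, the two ends of a sofa) and their sensed depths.
p1 = deproject(210, 260, 2.05, fx, fy, cx, cy)
p2 = deproject(480, 255, 2.10, fx, fy, cx, cy)

length = math.dist(p1, p2)   # Euclidean distance in metres
print(f"Measured length: {length:.2f} m")
```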

Bhowmik also showed how data from a tablet’s 3-D sensor can be used to build convincingly accurate augmented reality games, in which a virtual character viewed on the device’s screen is integrated into the real environment. In one demo, a flying robot appeared on-screen and selected a landing spot on top of a box on a cluttered table. As the tablet was moved around, the robot stayed perched in place, and even disappeared behind objects that passed between it and the camera.
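That occlusion effect comes down to a per-pixel depth test: the virtual character is drawn only where it is closer to the camera than the real surface the sensor reports at that pixel. The sketch below illustrates the idea with made-up arrays; it is not Intel’s implementation.

```python
# Sketch: per-pixel occlusion test for depth-aware augmented reality.
# The virtual character is drawn only where it sits in front of the real
# geometry measured by the depth sensor. All values are illustrative.
import numpy as np

H, W = 480, 640
sensed_depth = np.full((H, W), 1.5)        # metres to the real scene (the tabletop)
sensed_depth[200:300, 250:330] = 0.8       # a box that sits closer to the camera

char_depth = np.full((H, W), np.inf)       # per-pixel depth of the rendered character
char_depth[220:280, 300:360] = 1.0         # character hovers 1.0 m from the camera

# The character is visible only where it is nearer than the real surface,
# so the part behind the box (0.8 m) is hidden while the rest shows.
visible = char_depth < sensed_depth
print("Visible character pixels:", int(visible.sum()))
```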

“You can bring all these digital characters into the real world,” said Bhowmik. “It could be your favorite Disney character or something from a game.”

Intel also showed how the front-facing 3-D sensors can be used to recognize gestures to play games on a laptop, or take control of some features of Windows. Those demonstrations were reminiscent of Microsoft’s Kinect sensor for its Xbox gaming console, which introduced gamers to depth sensing and gesture control in 2010. Microsoft launched a version of Kinect aimed at Windows PCs in 2012, and significantly upgraded its depth-sensing technology in 2013, but Kinect devices are too large to fit inside a laptop or tablet.

Some of Intel’s demos were rough around the edges, suggesting that its compact sensors are less accurate than Microsoft’s larger ones. However, Bhowmik said that such glitches would not be noticeable in the fully polished apps that will appear on commercial devices.

Intel’s two sensors work in slightly different ways. The front sensor calculates the position of objects by observing how they distort an invisible pattern of infrared light cast by a tiny projector in the sensor. The rear sensor scans a scene using twin cameras that gauge depth through stereo vision, combined with an infrared camera that helps fine-tune the results.
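The stereo half of that design relies on the textbook relationship between depth and disparity: depth = focal length × baseline / disparity, so the farther an object is, the smaller the disparity and the noisier the estimate. The numbers in the sketch below are illustrative, not Intel’s specifications.

```python
# Sketch: the stereo relation that twin-camera depth sensing rests on.
# depth = focal_length * baseline / disparity. All numbers are assumed.
focal_length_px = 700.0   # focal length in pixels (placeholder)
baseline_m = 0.05         # spacing between the two cameras, in metres (placeholder)

for disparity_px in (35.0, 17.5, 8.75):
    depth_m = focal_length_px * baseline_m / disparity_px
    print(f"disparity {disparity_px:5.2f} px  ->  depth {depth_m:.2f} m")

# Halving the disparity doubles the estimated depth, which is why accuracy
# falls off for distant objects near a sensor's multi-metre range limit.
```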

Intel’s new sensors are roughly the same size as the camera components used in existing devices, said Bhowmik. The rear sensor in particular is compact enough to fit into very slim devices. On Monday, Dell announced that the sensors will appear later this year in its Venue 8 7000 tablet, which at just six millimeters thick is thinner than any other tablet on the market.
