Intel Says Laptops and Tablets with 3-D Vision Are Coming Soon
Laptops with 3-D sensors in place of conventional webcams will go on sale before the end of this year, according to chip maker Intel, which is providing the sensing technology to manufacturers. And tablets with 3-D sensors will hit the market in 2015, the company said at its annual developers’ conference in San Francisco on Wednesday.
Intel first announced its 3-D sensing technology at the Consumer Electronics Show in January (see “Intel’s 3-D Camera Heads to Laptops and Tablets”). It has developed two different types of depth sensor. One is designed for use in place of a front-facing webcam, to sense human movement such as gestures. The other is designed for use on the back of a device, to scan objects as far as four meters away. Both sensors allow a device to capture the color and 3-D shape of a scene, making it possible for a computer to recognize gestures or find objects in a room.
Intel is working with software companies to develop applications that use the technology. In the next few weeks the chip maker will release free software that any software developer can use to build apps for the sensors.
Partners already working with Intel include Microsoft’s Skype unit, the movie and gaming studio Dreamworks, and the 3-D design company Autodesk, according to Achin Bhowmik, general manager for Intel’s perceptual computing business unit.
None of those partners showed off what they’re working on at the event this week. But Intel showed several demonstrations of its own. One, developed with a startup called Volumental, lets you snap a 3-D photo of your foot to get an accurate shoe size measurement—something that could help with online shopping.
Another demonstration showed how a 3-D sensor could measure the dimensions of a sofa in a store, and how it might gauge the true size of a fisherman’s catch from a photo of the fish dangling from his rod.
Bhowmik also showed how data from a tablet’s 3-D sensor can be used to build highly accurate augmented-reality games, in which a virtual character viewed on the device’s screen integrates into the real environment. In one demo, a flying robot appeared on-screen and selected a landing spot on top of a box on a cluttered table. As the tablet was moved around, the robot stayed perched on the tabletop and even disappeared behind occluding objects.
“You can bring all these digital characters into the real world,” said Bhowmik. “It could be your favorite Disney character or something from a game.”
Intel also showed how the front-facing 3-D sensors can be used to recognize gestures to play games on a laptop, or take control of some features of Windows. Those demonstrations were reminiscent of Microsoft’s Kinect sensor for its Xbox gaming console, which introduced gamers to depth sensing and gesture control in 2010. Microsoft launched a version of Kinect aimed at Windows PCs in 2012, and significantly upgraded its depth-sensing technology in 2013, but Kinect devices are too large to fit inside a laptop or tablet.
Some of Intel’s demos were rough around the edges, suggesting that its compact sensors are less accurate than Microsoft’s larger ones. However, Bhowmik said that any such glitches would be unnoticeable in the fully polished apps that will appear on commercial devices.
Intel’s two sensors work in slightly different ways. The front sensor calculates the position of objects by observing how they distort an invisible pattern of infrared light cast by a tiny projector inside the sensor. The rear sensor scans a scene using twin cameras that gauge depth through stereo vision, combined with an infrared camera that helps fine-tune the results.
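The stereo approach used by the rear sensor rests on a simple geometric relationship: the farther away a point is, the smaller the shift, or disparity, between its position in the two camera views. A minimal sketch of that calculation follows; the focal length and baseline figures are illustrative examples, not Intel’s actual specifications.

```python
# Illustrative sketch of stereo-vision depth estimation (depth = f * B / d).
# The numbers below are made up for demonstration, not Intel's sensor specs.

def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Depth of a point seen by two parallel cameras.

    A feature shifted by `disparity_px` pixels between the left and
    right images lies at depth = focal_length * baseline / disparity.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: 700 px focal length, 5 cm camera separation, 10 px disparity
# gives a depth of 3.5 m, within the rear sensor's stated four-meter range.
print(depth_from_disparity(700, 0.05, 10))  # 3.5
```

Because disparity shrinks with distance, depth resolution degrades for far-away objects, which is one reason the rear sensor adds an infrared camera to refine the stereo result.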
Intel’s new sensors are roughly the same size as the camera components used in existing devices, said Bhowmik. The rear sensor in particular is compact enough to fit in very slim devices. On Monday, Dell announced that the sensors will appear later this year in its Venue 8 7000 tablet, which is only six millimeters thick, thinner than any other tablet on the market.