Startup Wants You to Capture the World in 3-D

Mantis Vision is developing 3-D scanning technology that could end up in lots of tablets.
July 29, 2014

Gur Bittan envisions a future where you don't just capture a regular video of a child's first steps with a smartphone; you capture it in 3-D and share it with friends, who can manipulate the video to watch it from different perspectives, even the kid's point of view, provided you've scanned the scene from enough angles.

On display: Mantis Vision’s technology, which uses a projected infrared light pattern to capture 3-D images and videos, is included in Google’s Project Tango tablet, shown here.

Bittan is the chief technology officer of Mantis Vision, an Israel-based 3-D technology company that hopes to make this kind of experience commonplace. If its 3-D technology is included in mobile gadgets like smartphones and tablets, it could make something as simple as communicating with friends more immersive.

The company’s software and hardware designs are part of Google’s Project Tango tablet, which can map environments and objects. It is also working on a pocket-sized 3-D scanner and already offers an enterprise 3-D scanner called the F5.

Mantis Vision has also been working with electronics designer and manufacturer Flextronics on a tablet called Aquila that should be available in September to manufacturers who want to take it into production. And its technology will be added to some other gadgets, though cofounder and CEO Amihai Loven won’t give specifics (Google has said it is working with LG on a Project Tango consumer device; Loven won’t say if his company is involved). “All I can say is in 2015 it will be in the market,” he says.

The company recently raised $12.5 million in venture capital funding from the venture investment arms of Flextronics and Qualcomm, as well as from Sunny Optical Technology and Samsung.

The method Mantis Vision uses to capture 3-D data—projecting an infrared light pattern onto the environment—is similar to that used by PrimeSense, a company Apple purchased last year. But Mantis Vision believes that its method, which works whether a camera is moving or still, maps detailed things in 3-D more easily and accurately than other technologies. And it hopes this will generate more interest from cell-phone and tablet makers, not to mention consumers. The uses the company envisions for 3-D include gaming, gestural interfaces, and indoor navigation.

To capture 3-D information, a projector overlays an infrared light pattern onto whatever it is you’re trying to scan—a teddy bear, for instance. Then a digital camera and a depth sensor, synched to the projector, capture the scene with the light reflected by the bear. The technology works even in complete darkness, since it includes its own illumination; in bright environments the quality of the resulting image depends on the hardware used.

Via Skype, Bittan showed me a scan of a telephone, which looked as if it were covered with a bunch of interlocking letters in various shades of black, white, and gray, covered in turn with an evenly spaced grid of dots. Mantis Vision’s software analyzes the projected pattern and uses it to create a depth map of the object.
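The article doesn't disclose Mantis Vision's actual depth algorithm, but structured-light systems like this one typically recover depth by triangulation: the projector and camera form a stereo pair, so the sideways shift (disparity) of each projected dot, relative to where it would land on a flat reference surface, encodes distance. A minimal sketch of that idea, with all numbers, names, and the dot-matching step invented for illustration:

```python
# Toy structured-light depth recovery (illustrative only, not Mantis
# Vision's method). Depth follows the stereo triangulation relation:
#     depth = focal_length * baseline / disparity
# where disparity is how far a projected dot shifts in the camera image.

FOCAL_LENGTH_PX = 600.0   # assumed camera focal length, in pixels
BASELINE_M = 0.075        # assumed projector-to-camera separation, meters

def depth_from_disparity(disparity_px: float) -> float:
    """Depth in meters for one matched dot; a larger shift means a closer object."""
    if disparity_px <= 0:
        raise ValueError("dot not matched or at infinity")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

def sparse_depth_map(reference_dots, observed_dots):
    """Map each reference dot position (x, y) to a depth estimate,
    given where the same dot was actually observed by the camera."""
    return {
        ref: depth_from_disparity(obs[0] - ref[0])
        for ref, obs in zip(reference_dots, observed_dots)
    }

# A dot expected at x=100 that appears at x=115 shifted by 15 pixels:
# depth = 600 * 0.075 / 15 = 3.0 meters.
```

A real pipeline adds the hard parts this sketch skips: uniquely identifying each dot in the coded pattern (the "interlocking letters" Bittan showed serve exactly that purpose), calibrating the projector-camera geometry, and densifying the sparse dot measurements into a full depth map.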

During an in-person meeting, Loven showed me, on his smartphone, a pixelated-looking 3-D video of a woman leaning back in a chair against a black background. He twisted the image around by sliding his finger on the screen to show it from another perspective, an effect made possible by having circled the woman with the camera while the video was shot. The result was not photorealistic, and there were plenty of black spots that were devoid of details, but it was pretty cool-looking.

Even the high-profile project with Google and the upcoming tablet may not be enough to win over consumers immediately: 3-D technology has existed for years in various forms, and it has struggled to move beyond the movie theater. But Loven says that’s because the technology is still “not mature enough.” Mantis Vision hopes to change that. “Let’s develop new technology and bring it,” he says.
