Virtual Creatures in a Box, Controlled by You

A startup uses an old parlor trick and smartphone sensing to let you control virtual objects in a see-through box.
February 27, 2015

A Canadian startup is working to make monsters, fish, and other creatures seem to come alive inside a tabletop box. The company, H+, hopes you’ll use the device to play games and do other activities with friends.

H+ Technology is building Holus, a tabletop box that can project virtual images that users can interact with.

The startup is still in the prototype stage with Holus—a see-through box roughly the size of a microwave. Inside is a coated plexiglass prism within which projected images appear, allowing you to see virtual characters and content from different viewpoints. The company has built five Holus units so far, and hopes to start shipping them next year.

H+’s chief technology officer, Dhruv Adhia, says Holus combines elements of 3-D projection with an old optical trick called “Pepper’s Ghost,” wherein a hidden object is reflected on a glass panel to make it appear to be in the room with you. (More recent applications of Pepper’s Ghost use digital images rather than real objects, such as a projected performance by the deceased rapper Tupac at the Coachella music festival in 2012.)

A projector inside the lid of Holus beams four images of the same object onto the walls of the prism, and to the user they appear to form a single image. Users can control the images with a smartphone connected via Bluetooth or Wi-Fi. A tablet computer or laptop attached to the box runs an app that feeds images to the projector, and adjusts what you see based on input from the controller. At this year’s International Consumer Electronics Show in Las Vegas, H+ used Holus to let visitors play a multiplayer dice game controlled with an iPod Touch.
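H+ hasn’t published how its software generates those four views, but the setup described above lends itself to a simple pattern: render one scene four times, once per prism face, and let the controller’s orientation offset all four camera angles. The sketch below is a hypothetical Python illustration of that idea; the function name, the face labels, and the 90-degree spacing are assumptions, not details from H+.

```python
def pepper_ghost_views(controller_yaw_deg=0.0):
    """Return (prism_face, camera_yaw_deg) pairs for one projected frame.

    Hypothetical sketch, not H+'s code: the same scene is rendered from
    four camera angles spaced 90 degrees apart, and each render is placed
    in the quadrant of the projector image that reflects off the matching
    face of the prism. Offsetting every angle by the controller's yaw makes
    the object appear to rotate as the handset is moved.
    """
    faces = ["front", "right", "back", "left"]
    return [
        (face, (controller_yaw_deg + 90.0 * i) % 360.0)
        for i, face in enumerate(faces)
    ]


if __name__ == "__main__":
    # Example: the controller has been rotated 30 degrees since the last frame.
    for face, yaw in pepper_ghost_views(controller_yaw_deg=30.0):
        print(f"{face:>5} face <- render scene with camera yaw {yaw:.0f} deg")
```

In a real pipeline, each face-and-yaw pair would presumably drive an off-screen render that is composited into the corresponding quadrant of the frame sent to the projector.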

A woman gazes at a demo game for Holus, in which players must work together to defeat a monster.

In a video chat, Adhia gave me a sense of how this looks. He demonstrated characters like a woman in a sort of spacesuit and a sword- and shield-wielding skeleton, whose positions he could change by moving an iPod Touch. He said the Holus app determines the user’s perspective by tracking the controller’s motion and adjusting the projected view in real time.

Images appeared to be visible from multiple angles and were responsive to swipes and movements he made with the iPod. But the effect looked far more primitive than some other 3-D and augmented-reality efforts. Microsoft unveiled a sophisticated augmented-reality headset called HoloLens in January (see “Microsoft Headset Rewrites Reality with Holograms”), and the startup Magic Leap showed me impressive imagery when I visited the company late last year (see “10 Breakthrough Technologies 2015: Magic Leap”). Another company, called Leia, is developing a new optical technique that brings glasses-free holographic images to mobile gadgets (see “New Display Technology Lets LCDs Produce Princess Leia-Style Holograms”).

Michael Bove, the leader of the object-based media group at the MIT Media Lab, also noted that the H+ technology appears to be neither holographic nor actually 3-D, meaning you couldn’t walk around it and see a smooth 360-degree view of the image being projected.

H+ hopes to drum up interest by selling its device through Kickstarter this spring. It wants to convince buyers to shell out about $850 for a “home” version of Holus, or about $950 for a larger one geared toward developers.
