Microsoft is making fast progress with HoloLens, the device that you can wear over your eyes to blend 3-D virtual objects with reality.
Microsoft hopes that the holographic gadget becomes its next big computing platform, with applications ranging from video games to design, education, and architecture. The company hasn’t yet said precisely when it will release HoloLens, just that it will be available “in the Windows 10 time frame,” though it’s unclear what that means; the operating system itself is due this summer.
When Microsoft unveiled HoloLens in January during a press event at its headquarters in Redmond, Washington, it looked like a bulbous pair of black ski goggles (see “Microsoft’s New Idea: A Hologram Headset to Rewrite Reality”). Reporters invited to try it afterward didn’t use such a sleek device, however; they had a more rudimentary headset that was tethered to a computer and a so-called holographic processing unit. This was my experience when I went to Redmond in March to check it out (see “Reality Check: Comparing HoloLens and Magic Leap”).
This week, though, at its Build conference for software developers in San Francisco, Microsoft brought hundreds of self-contained HoloLens units that looked just like the black gadget with red accents it showed off in January. The idea was to let attendees try them out and get developers interested in making apps for the device.
On Thursday, I got a chance to see how far Microsoft’s come with HoloLens: I participated in a 90-minute demo session at a hotel in San Francisco with a few dozen other reporters who “built” and tried out a simple app for the device. (I use quotation marks because we didn’t actually do any coding; rather, we assembled different pieces, such as pre-written scripts and 3-D spheres, and enabled different kinds of interactions, such as gesture and voice control.)
Each of us was seated at a computer, and every pair of reporters had a mentor from Microsoft sitting between them to help. On a small stage in the center of the room, HoloLens team members explained that we’d be putting together an app called “Project Origami,” which consisted of several origami-like objects (colorful paper planes, boxes, and spheres) atop a base meant to look like a pad of paper.
Before we got started with the app, each of us was handed a HoloLens prototype—matte black, with soft-feeling rubbery sides and a padded inner ring that swiveled and used a small dial in the back to tighten around your head.
It was a big change from the uncomfortable tethered prototype I tried about six weeks ago, showing how far Microsoft has come in its quest to cram a ton of computing power into a bulky but not uncomfortable headset. A row of four tiny LEDs on the back of one of the unit’s outer arms served as a power indicator, according to my mentor, and the arms also included a headphone jack and a micro USB port for connecting the headset to a computer. Two red-accented areas, one on each side of the headset, housed speakers, and the front, a big, shiny black visor, sat in front of a number of cameras pointed in different directions.
Once we began putting the app together, it was clear how many pieces are involved in developing software for a device like HoloLens.
We used a popular game development tool called Unity to make our apps viewable from the HoloLens wearer’s ever-shifting perspective. We could place the origami objects so that the wearer would see them, and we added a gaze-controlled cursor and gesture and voice controls. We brought in spatial mapping so that the 3-D objects could respond to real-world obstacles (allowing, say, a sphere to roll onto an actual coffee table), and we added sound that changed with our position and what was happening in the scene.
Each time we added a function, we exported the app from Unity to Microsoft’s Visual Studio app development program and then loaded it onto HoloLens so we could check out how it had changed.
Eventually, we had a simple but functional app: paper airplanes and blocks sitting on a notebook, with two spherical objects (one resembling a ball of paper, the other a multifaceted, multicolored starlike shape) floating above them. If I fixed my gaze-controlled cursor—a red circle—on one of the balls and uttered my chosen verbal command (“Don’t touch that!”), the ball would drop and hit the notebook with a paper-crinkling sound, then bounce off and hit whatever real-world object it would naturally roll onto next. Saying “Reset world” brought the balls back to their starting point. I could also use a finger gesture to move the entire collection to a different spot in the room.
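The interaction logic we assembled can be summed up in a small sketch. This is purely illustrative Python, not the Unity/C# the demo actually used; every name and value here is invented to show how gaze targeting and voice commands combined:

```python
# Hypothetical sketch of the "Project Origami" demo's interaction rules
# (illustrative only; the real app was assembled in Unity, and all names
# and values here are assumptions).

class OrigamiScene:
    START_HEIGHT = 1.0  # assumed height, in meters, of the floating balls

    def __init__(self):
        self.ball_height = self.START_HEIGHT
        self.dropped = False

    def on_voice_command(self, phrase, gazed_object):
        """Dispatch a recognized voice command against the gaze target."""
        if phrase == "Don't touch that!" and gazed_object == "ball":
            # The gazed-at ball drops onto the notebook (crinkle sound),
            # then physics takes over against the mapped room geometry.
            self.dropped = True
            self.ball_height = 0.0
        elif phrase == "Reset world":
            # Everything returns to its starting position.
            self.dropped = False
            self.ball_height = self.START_HEIGHT

scene = OrigamiScene()
scene.on_voice_command("Don't touch that!", gazed_object="ball")
print(scene.dropped)       # True: the ball has fallen
scene.on_voice_command("Reset world", gazed_object=None)
print(scene.ball_height)   # back at the starting height
```

The point of the pairing is that the voice command alone is ambiguous; it only acts on whatever object the gaze-controlled cursor is fixed on at that moment.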
I placed the whole collection on a nearby couch, for instance, and the ball rolled into its corners. When my mentor pushed both of his fists into the cushions, the ball rolled into the depression he’d created.
To be clear, the images I saw didn’t look different from those I saw in March. They appeared to be of similar sharpness and brightness, and the viewing area on the HoloLens headset didn’t look any bigger than it did previously. That said, it definitely didn’t look worse, which is a feat given that the technology was crammed into a much smaller space.
The device did have some glitches; for instance, a few times my collection of 3-D objects disappeared from the space in front of me and then reappeared several feet away. And while the 3-D objects I saw were visible and sharp from any angle, if I got within about 70 centimeters (about two feet) they would start to disappear.
It’s also telling that HoloLens still has a small viewing area, making it hard to see entire objects once you get close to them. In contrast, a prototype that competitor Magic Leap showed me late last year let me see much more at different depths (see “10 Breakthrough Technologies: Magic Leap”). Then again, Magic Leap’s device was fixed in place—the startup has not yet shown the technology in a wearable headset as Microsoft has. On that front, at least, Microsoft seems closer to bringing long-sought dreams of augmented reality to life.