
Why NASA Wants Microsoft’s HoloLens in Space

The rocket carrying two HoloLens headsets to the International Space Station blew up in June, and NASA is planning another launch.
September 8, 2015

The explosion in June of the SpaceX rocket that was headed to the International Space Station felt like “a punch in the gut” to Jeff Norris, the project manager for two HoloLens projects that NASA is working on at its Jet Propulsion Laboratory (JPL) in Pasadena, California. Among the items on board were two HoloLens headsets—Microsoft’s forthcoming augmented-reality gadgets.

Astronaut Luca Parmitano uses HoloLens at the underwater Aquarius Reef Base, which is located below the Florida Keys.

But within a couple of weeks, Norris says, his team at NASA and his counterparts at Microsoft had new HoloLens hardware that they were certifying for launch into space. That’s now scheduled to happen December 3 as part of a commercial cargo launch by the aerospace company Orbital Sciences to resupply the space station.

Here on Earth, augmented-reality devices may eventually be used for everything from games that mix digital 3-D creatures with the real world to conversations with remote friends who appear to be sitting in your living room. But NASA sees a number of practical, and potentially time-saving, uses for the technology in space.

NASA hopes to use HoloLens aboard the space station to allow astronauts to work with a remote expert who can see what the astronaut sees and help with unfamiliar tasks. The device might also act as an augmented-reality instruction manual that, say, uses 3-D images to show an astronaut where to place some equipment or what handle to turn. (Microsoft CEO Satya Nadella recently said in an interview that HoloLens will be available to developers within the next year; the timing of a consumer release is still unknown.)
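The "augmented-reality instruction manual" idea can be pictured as an ordered set of steps, each tied to a spot in the room where a 3-D cue should appear. The sketch below is a hypothetical illustration of that structure only; the class names and fields are invented for this example and are not NASA's or Microsoft's code.

```python
# Hypothetical sketch: modeling an AR "instruction manual" as ordered steps,
# each tied to a physical anchor point where a 3-D cue would be rendered.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ProcedureStep:
    instruction: str                              # text shown or read to the astronaut
    anchor_position: Tuple[float, float, float]   # where to render the 3-D cue, in room coordinates
    cue: str                                      # e.g. "arrow", "highlight", "animated_handle"

@dataclass
class Procedure:
    name: str
    steps: List[ProcedureStep]
    current: int = 0

    def current_step(self) -> ProcedureStep:
        return self.steps[self.current]

    def advance(self) -> bool:
        """Move to the next step; return False when the procedure is complete."""
        if self.current + 1 >= len(self.steps):
            return False
        self.current += 1
        return True

# Example: a two-step checklist of the kind described above.
check_gear = Procedure(
    name="Check emergency breathing equipment",
    steps=[
        ProcedureStep("Turn the supply valve to OPEN", (1.2, 0.8, 0.4), "animated_handle"),
        ProcedureStep("Plug the regulator into port B", (1.5, 0.7, 0.4), "arrow"),
    ],
)
```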

In late July and early August, NASA astronauts tried using HoloLens to help with several tasks at the Aquarius Reef Base.

Norris, who also leads the Ops Lab at JPL, says NASA is working on other applications for HoloLens as well, such as using augmented reality for inventory management. Keeping track of where things are stored and how to find them is a big challenge on the space station, even though objects carry bar codes and are catalogued in a database. NASA has prototyped an app that recognizes an object and shows the HoloLens wearer a path to follow to where the object should be stowed, Norris says.
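As described, the inventory prototype chains three pieces: recognizing an object by its bar code, looking up where it belongs in a database, and drawing a path for the wearer to follow. The Python sketch below illustrates that chain using invented names and a toy database; it is not the actual NASA app.

```python
# Hypothetical sketch of the inventory workflow described above: recognize an
# object from its bar code, look up its stowage location, and return waypoints
# a headset could draw as a path for the wearer to follow.
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]

# Toy stowage database: bar code -> (item name, storage location in station coordinates).
STOWAGE_DB: Dict[str, Tuple[str, Point]] = {
    "0012345": ("torque wrench", (4.0, 1.2, -0.5)),
    "0098765": ("air filter",    (7.5, 0.3,  1.1)),
}

def lookup_item(barcode: str) -> Tuple[str, Point]:
    """Recognize an object by its bar code and return its name and stowage location."""
    if barcode not in STOWAGE_DB:
        raise KeyError(f"Unknown bar code: {barcode}")
    return STOWAGE_DB[barcode]

def path_to(start: Point, goal: Point, segments: int = 5) -> List[Point]:
    """Straight-line waypoints from the wearer to the stowage location.
    A real system would route around the station's geometry; this just interpolates."""
    return [
        tuple(s + (g - s) * i / segments for s, g in zip(start, goal))
        for i in range(segments + 1)
    ]

# Usage: the headset scans a bar code, then renders the returned waypoints as a path.
name, location = lookup_item("0012345")
waypoints = path_to(start=(0.0, 0.0, 0.0), goal=location)
print(f"Follow the path to stow the {name}:", waypoints)
```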

In the meantime, to get a sense of what it will be like to use HoloLens on the space station, NASA experimented with the device at the Aquarius underwater research station off the coast of Key Largo, Florida, in late July and early August. Astronauts used it for tasks such as checking emergency breathing gear, which required a series of steps from turning valves to finding and plugging in components, and setting up hardware to support an undersea robot.

In both cases, an expert in a control center on dry land assisted by using a Skype program Microsoft built for HoloLens (see “Reality Check: Comparing HoloLens and Magic Leap”), in which a forward-facing camera on the headset let the expert see what the astronaut saw. When needed, the remote expert could draw in midair to point out things that the astronauts would then see through the HoloLens headset; throughout, the astronauts could also see a floating video feed of the expert in front of them. Norris thinks the tasks would have taken “many times as long” if they had simply been spelled out as written procedures to follow.
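One plausible way to make a remote expert's midair drawing stay pinned to the astronaut's surroundings is to cast each 2-D point the expert draws on the video feed as a ray from the headset's camera and anchor the annotation where that ray meets a surface. The sketch below illustrates that idea with a single flat surface standing in for the room's spatial map; it is an assumption about how such a feature could work, not Microsoft's implementation.

```python
# Hypothetical sketch: anchoring a remote expert's 2-D drawing in the 3-D scene.
import numpy as np

def ray_from_pixel(pixel: np.ndarray, intrinsics: np.ndarray, cam_pose: np.ndarray):
    """Turn a 2-D pixel (u, v) into a world-space ray (origin, direction)."""
    u, v = pixel
    # Back-project through a pinhole camera model.
    direction_cam = np.linalg.inv(intrinsics) @ np.array([u, v, 1.0])
    rotation, translation = cam_pose[:3, :3], cam_pose[:3, 3]
    direction_world = rotation @ direction_cam
    return translation, direction_world / np.linalg.norm(direction_world)

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Intersect the ray with one surface (here a plane standing in for the spatial map)."""
    denom = direction @ plane_normal
    if abs(denom) < 1e-6:
        return None  # ray is parallel to the surface
    t = ((plane_point - origin) @ plane_normal) / denom
    return origin + t * direction if t > 0 else None

# Usage: the expert draws at pixel (640, 360); the annotation is anchored where
# the ray meets a wall two meters in front of the camera.
intrinsics = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
cam_pose = np.eye(4)  # headset camera at the world origin, looking down +z
origin, direction = ray_from_pixel(np.array([640, 360]), intrinsics, cam_pose)
anchor = intersect_plane(origin, direction, np.array([0, 0, 2.0]), np.array([0, 0, -1.0]))
print("Anchor the drawing at:", anchor)
```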

Though he thinks it can be helpful, Norris also says there are “enormous challenges” associated with building augmented-reality applications, such as figuring out how an application menu should look and how the user should interact with it when it’s not shown on a laptop or smartphone screen.

“The rules are different when you’re now rendering information all around a person,” he says.
