Science purists might find much to complain about in the newest installment of the Iron Man franchise, starring Robert Downey Jr. Admittedly, Tony Stark “creates an element,” and heroes and villains alike seem able to break into high-level computer systems with little more than the wave of an iPhone look-alike. But I expect computer scientists and designers will be impressed by the movie’s natural user interfaces.
There’s a long tradition of the interfaces envisioned in movies becoming research projects in real life. For years, techies have chased the “Minority Report interface” inspired by a scene in the Tom Cruise action flick in which the main character does his police work by donning a pair of gloves and diving into a hands-on manipulation of his data. Johnny Lee, a researcher in Microsoft’s hardware division, gained acclaim for hacking together a version of the interface using a Nintendo Wii. The company Oblong has been working for years on the g-speak interface, a slicker implementation of the same concept.
Earlier this year at South by Southwest Interactive, a computing and design conference in Austin, Texas, I noticed that Iron Man had stolen designers’ hearts. I attended multiple panels where designers showed clips from the first film and described what it would take to make that vision a reality.
Iron Man 2 had several enticing scenes of the main character interacting with his computer. Leaving aside the computer’s improbable level of intelligence, Stark interacts with it through sophisticated voice recognition. This is a feature taken for granted in almost all science fiction films. More intriguing are the suggestions for gestural interfaces.
When Stark is mid-design and doesn’t like what he’s working on, he grabs it off the projection and throws it into the trash, discarding a virtual idea with the same expressiveness he might bring to one that had taken physical form. At one point, he performs a 3-D scan of a physical model because he wants a version he can manipulate more easily. He lifts a projected image off the physical object and can then spin it, change its size, and alter it with flicks of the hand.
What makes the interface look most attractive is how physically involved Stark becomes in design. With the power of voice and gesture combined, Stark is able to give small, quiet commands when contemplative, and become more expansive and hands-on when excited. The vision of a computing device that’s able to adapt so smoothly to the user’s mood and circumstance is compelling to say the least.
Stark’s natural interface also displays a problem that designers still have to overcome with this type of design. Watching closely, it’s clear that it’s ambitious to call such an interface “natural.” Stark knows an entire vocabulary of gestures that would not be obvious to someone approaching the interface for the first time. For natural user interfaces to take off in the real world, designers will have to convince users that learning this new method of interaction provides value that can’t be had with keyboard and mouse.