Science purists might find much to complain about in the newest installment of the Iron Man franchise, starring Robert Downey Jr. Admittedly, Tony Stark “creates an element,” and heroes and villains alike seem able to break into high-level computer systems with little more than the wave of an iPhone look-alike. But I expect computer scientists and designers will be impressed by the movie’s natural user interfaces.
There’s a long tradition of interfaces envisioned in movies becoming research projects in real life. For years, techies have chased the “Minority Report interface,” inspired by a scene in the Tom Cruise action flick in which the main character does his police work by donning a pair of gloves and diving into a hands-on manipulation of his data. Johnny Lee, a researcher in Microsoft’s hardware division, gained acclaim for hacking together a version of the interface using a Nintendo Wii remote. The company Oblong has been working for years on g-speak, a slicker implementation of the same concept.
Earlier this year at South by Southwest Interactive, a computing and design conference in Austin, Texas, I noticed that Iron Man had stolen the designers’ hearts. I attended multiple panels where designers showed clips from the first film and described what it would take to make that vision a reality.
Iron Man 2 had several enticing scenes of the main character interacting with his computer. Leaving aside the computer’s improbable level of intelligence, Stark interacts with it through sophisticated voice recognition. This is a feature taken for granted in almost all science fiction films. More intriguing are the suggestions for gestural interfaces.
When Stark is mid-design and doesn’t like what he’s working on, he grabs it off the projection and throws it into the trash, discarding a virtual idea with the same physical expressiveness he’d use on one that had taken material form. At one point, he performs a 3-D scan of a physical model because he wants a version he can manipulate easily. He lifts a projected image off the physical object and can then spin it, resize it, and alter it with flicks of the hand.
What makes the interface look most attractive is how physically involved Stark becomes in design. With the power of voice and gesture combined, Stark is able to give small, quiet commands when contemplative, and become more expansive and hands-on when excited. The vision of a computing device that’s able to adapt so smoothly to the user’s mood and circumstance is compelling to say the least.
Stark’s natural interface also exposes a problem that designers have yet to overcome with this type of design. Watch closely and it’s clear that calling such an interface “natural” is ambitious: Stark knows an entire vocabulary of gestures that would not be obvious to someone approaching the interface for the first time. For natural user interfaces to take off in the real world, designers will have to convince users that learning this new method of interaction provides value that can’t be had with keyboard and mouse.