Iron Man 2 Envisions the Future of Computing Interfaces
Science purists might find much to complain about in the newest installment of the Iron Man franchise, starring Robert Downey Jr. Admittedly, Tony Stark “creates an element,” and heroes and villains alike seem able to break into high-level computer systems with little more than the wave of an iPhone look-alike. But I expect computer scientists and designers will be impressed by the movie’s natural user interfaces.
There’s a long tradition of interfaces envisioned in movies becoming research projects in real life. For years, techies have chased the “Minority Report interface,” inspired by a scene in the Tom Cruise action flick in which the main character does his police work by donning a pair of gloves and diving into hands-on manipulation of his data. Johnny Lee, a researcher in Microsoft’s hardware division, gained acclaim for hacking together a version of the interface using a Nintendo Wii remote. The company Oblong has been working for years on g-speak, a slicker implementation of the same concept.
Earlier this year at South by Southwest Interactive, a computing and design conference in Austin, Texas, I noticed that Iron Man had stolen designers’ hearts. I attended multiple panels where designers showed clips from the first film and described what it would take to make that vision a reality.
Iron Man 2 has several enticing scenes of the main character interacting with his computer. Leaving aside the computer’s improbable level of intelligence, Stark controls it through sophisticated voice recognition, a feature taken for granted in almost all science fiction films. More intriguing are the suggestions for gestural interfaces.
When Stark is mid-design and doesn’t like what he’s working on, he grabs it off the projection and throws it into the trash, discarding a virtual idea with the same physical expressiveness he could bring to a paper sketch or a clay model. At one point, he performs a 3-D scan of a physical model because he wants a version he can manipulate easily. He lifts a projected image off the physical object and can then spin it, change its size, and alter it with flicks of the hand.
What makes the interface look most attractive is how physically involved Stark becomes in design. With the power of voice and gesture combined, Stark is able to give small, quiet commands when contemplative, and become more expansive and hands-on when excited. The vision of a computing device that’s able to adapt so smoothly to the user’s mood and circumstance is compelling to say the least.
Stark’s natural interface also exposes a problem that designers have yet to overcome. Watch closely and it’s clear that calling such an interface “natural” is ambitious: Stark knows an entire vocabulary of gestures that would not be obvious to someone approaching the system for the first time. For natural user interfaces to take off in the real world, designers will have to convince users that learning this new method of interaction provides value they can’t get from a keyboard and mouse.