MIT Technology Review

CES: The Future of Interfaces

Say goodbye to the one-size-fits-all approach to interacting with computers.

The mouse and keyboard have been the primary way nearly everyone interacts with computers since Apple and Microsoft brought them to the mainstream in the mid-1980s. There have been periodic attempts to replace these devices, mainly on the grounds that because they are so old, there must be something better by now.

But as the speakers on a panel on the future of the human-computer interface at the Consumer Electronics Show pointed out, the mouse and keyboard haven’t changed much because how people use computers hasn’t changed much. We still typically use computers with a relatively large screen on some kind of tabletop. The panelists were drawn from organizations such as Microsoft’s mobile division, HP, and Sony’s PlayStation group.


Now, however, mobile computing is firmly established, moving people away from the desktop, and new applications are driving new interfaces: smartphones with touch screens, voice-controlled automotive entertainment systems, and motion-based game controllers. The struggle for interface designers is to establish some kind of common grammar across all these systems, so that people can move seamlessly from device to device without having to learn how to operate each one individually. This is as much a marketing challenge as a technical one: however intuitive it might feel today, people had to be taught that making pinching motions on a screen equaled zooming in and out.


Looking towards the future, it’s likely that application designers will have to start taking into account contextual shifts between different interfaces on the same device: for example, a navigation application on a smartphone could be designed for a touch-based interface, but if a user starts driving a car, the application should be able to switch over to voice-based input and output, possibly tapping into the car’s built-in hands-free phone system. Beyond that? Maybe mind control, the panel suggested, tapping electrical impulses to control the computers around us, although they admitted this is still a long way from mainstream adoption.

