At Apple’s annual Macworld event last January, showman and CEO Steve Jobs unveiled the iPhone. Holding it onstage, Jobs tapped on its surface to type, flicked a finger to scroll through songs, and pinched two fingers together to shrink pictures. The crowd went wild. But while the iPhone is the world’s most prominent example of a multi-input touch screen, other equally innovative technologies came to prominence this year. Jeff Han, a researcher at New York University and founder of the startup Perceptive Pixel, believes that multi-input touch screens should be large: the size of a wall. (See “Touch Screens for Many Fingers” and “Jeff Han on a Better Interface.”) Microsoft, for its part, unveiled a multitouch computing table that lets users manipulate virtual objects on its surface. (See “Your Coffee Table as a Computer.”) And a Microsoft researcher, Patrick Baudisch, is working on touch-screen technology that’s still a few years from consumers: a double-sided touch screen that lets a user see her fingers on the far side of a tablet PC or phone. (See “Two-Sided Touch Screen.”)
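Gestures like the pinch reduce to simple geometry: the zoom factor is the ratio of the distance between the two fingers now to the distance when the gesture began. Here is a minimal sketch in Python; the function name and point format are illustrative, not any real touch API:

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Return the zoom factor implied by a two-finger pinch gesture.

    Each argument is an (x, y) touch point. A value > 1 means the fingers
    spread apart (zoom in); < 1 means they pinched together (zoom out).
    """
    d_start = math.dist(p1_start, p2_start)  # initial finger spread
    d_now = math.dist(p1_now, p2_now)        # current finger spread
    return d_now / d_start
```

An application would multiply an image’s displayed size by this factor on each touch update, which is why pinching inward makes a picture smaller.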
When Apple did away with the physical keyboard on its phone, it also took away the tactile feedback people get when they press a button. Research suggests that smooth touch screens lead to more typing errors than traditional keypads do, especially in bumpy environments such as a car or a train. Researchers such as Stephen Brewster at the University of Glasgow are exploring ways to add a tactile cue that lets a person know when a button on a smooth screen has been tapped. (See “Better Touch Screens for Mobile Phones.”) Techniques from this burgeoning field, called haptics, are also used to make virtual-reality experiences feel more real. Yoshinori Dobashi, at Hokkaido University in Japan, has simulated the feel of water. (See “Recreating the Feel of Water.”) And one company is adding tactile feedback to a vest that can be worn while playing video games. (See “Making Games Physical.”)
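The basic idea behind such tactile cues can be sketched in a few lines: when a tap lands inside a virtual button’s bounds, fire a brief vibration pulse so the finger “feels” the press. A toy illustration in Python, with `vibrate` standing in for whatever actuator API a real device would expose (all names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Button:
    """A virtual on-screen button: top-left corner, size, and label."""
    x: int
    y: int
    w: int
    h: int
    label: str

    def contains(self, tx, ty):
        """True if the touch point (tx, ty) falls inside this button."""
        return self.x <= tx < self.x + self.w and self.y <= ty < self.y + self.h

def handle_tap(buttons, tx, ty, vibrate):
    """Return the label of the tapped button, firing a tactile pulse.

    `vibrate` is a stand-in for a device's actuator call; the short pulse
    is the cue that tells the user the tap registered.
    """
    for button in buttons:
        if button.contains(tx, ty):
            vibrate(duration_ms=20)  # brief pulse = tactile "click"
            return button.label
    return None  # tap missed every button: no pulse, no confirmation
```

Brewster’s research concerns exactly this confirmation signal: without the pulse, a user in a bumpy environment cannot tell a registered tap from a missed one.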
People are growing accustomed to having their cell phones or laptops with them at all times. Useful as these gadgets are, they could be even more helpful if they automatically suggested things to do or gave directions to a nearby restaurant. This year, a number of products and research projects tried to make phones and other gadgets smarter in just that way. Nokia, for instance, introduced a powerful tablet PC with a Global Positioning System (GPS) chip. (See “Nokia’s GPS-Enabled Pocket Computer.”) But not all gadgets have GPS. Google recently announced a technology that sidesteps the issue, using information from a cell-phone tower to place a person on a map to within about 1,000 meters. (See “Finding Yourself without GPS.”) Similarly, the German startup Plazes offers a service that, among other things, lets a person locate herself using a Wi-Fi signal. (See “Marking Your Territory.”) And what to do with all this location information? Researchers at the Palo Alto Research Center have developed a phone application that suggests activities, places to eat and shop, and things to see, based on location, time of day, past preferences, and even text-message conversations. (See “Smart Phone Suggests Things to Do.”)
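The cell-tower approach is conceptually simple: the phone already knows the ID of the tower it is talking to, so a lookup table mapping tower IDs to coordinates yields a coarse position with no GPS hardware at all. A toy sketch of the idea, assuming an invented ID format and a tiny hand-made database (a real system like Google’s relies on a far larger tower database):

```python
# Hypothetical database mapping cell-tower IDs to tower coordinates.
# Both the ID strings and the (latitude, longitude) values are invented
# for illustration.
TOWER_DB = {
    "310-410-1234": (40.7410, -73.9896),
    "310-410-5678": (40.7484, -73.9857),
}

def locate(cell_id, db=TOWER_DB):
    """Estimate the phone's position from its serving cell tower.

    Returns (latitude, longitude, uncertainty_m), or None if the tower
    is unknown. The estimate is simply the tower's own location, which
    is why the accuracy is on the order of 1,000 meters rather than the
    few meters GPS can deliver.
    """
    if cell_id not in db:
        return None
    lat, lon = db[cell_id]
    return (lat, lon, 1000)
```

The trade-off is coarse accuracy in exchange for working on any phone, indoors as well as out, which is good enough to center a map or pick nearby restaurants.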