
The Year in Hardware

The past 12 months have featured touch screens, context-aware gadgets, autonomous vehicles, and brain-computer interfaces.
December 26, 2007

Touch Screens
At Apple’s annual Macworld event last January, showman and CEO Steve Jobs unveiled the iPhone. Holding it onstage, Jobs tapped on its surface to type, flicked his finger to scroll through songs, and pinched his fingers to shrink pictures. The crowd went wild. But while the iPhone is the world’s most prominent example of a multi-input touch screen, other equally innovative technologies came to prominence this year. Jeff Han, a researcher at New York University and founder of the startup Perceptive Pixel, believes that multi-input touch screens should be large: the size of a wall. (See “Touch Screens for Many Fingers” and “Jeff Han on a Better Interface.”) Microsoft, for its part, unveiled a multitouch computing table that lets users manipulate virtual objects on its surface. (See “Your Coffee Table as a Computer.”) And a Microsoft researcher, Patrick Baudisch, is working on touch-screen technology that’s still a few years away from consumers: a double-sided touch screen that lets a user see her fingers on the far side of a tablet PC or phone. (See “Two-Sided Touch Screen.”)

A touching year: A user demonstrates a large multitouch display, a technology that promises to revolutionize the way that people view information and collaborate on projects. Multitouch displays made headlines this year with Apple’s iPhone and Microsoft’s Surface.
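
The pinch gesture Jobs demonstrated comes down to simple geometry: track the distance between two finger contacts and scale the image by the ratio of the current distance to the starting one. Below is a minimal sketch of that idea in Python; the TouchPoint structure and PinchTracker class are illustrative inventions, not Apple’s actual API.

```python
import math
from dataclasses import dataclass

@dataclass
class TouchPoint:
    x: float
    y: float

def distance(a: TouchPoint, b: TouchPoint) -> float:
    """Euclidean distance between two finger contacts, in pixels."""
    return math.hypot(a.x - b.x, a.y - b.y)

class PinchTracker:
    """Tracks a two-finger pinch and reports a zoom factor."""

    def __init__(self, first: TouchPoint, second: TouchPoint):
        # Distance between the fingers when the gesture began.
        self.start = distance(first, second)

    def scale(self, first: TouchPoint, second: TouchPoint) -> float:
        # Fingers moving apart -> scale > 1 (zoom in);
        # fingers pinching together -> scale < 1 (zoom out).
        return distance(first, second) / self.start

# Example: fingers start 100 px apart and pinch in to 50 px.
tracker = PinchTracker(TouchPoint(100, 100), TouchPoint(200, 100))
print(tracker.scale(TouchPoint(125, 100), TouchPoint(175, 100)))  # 0.5
```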

Tactile Feedback
When Apple did away with the keyboard on its phone, it also took away the tactile feedback that people get when they press a physical button. Research suggests that smooth touch screens lead to more typing errors than traditional keypads do, especially in bumpy environments such as a car or a train. Researchers such as Stephen Brewster at the University of Glasgow are exploring ways to add a tactile cue that lets a person know when a button on a smooth screen has been tapped. (See “Better Touch Screens for Mobile Phones.”) Techniques from this burgeoning field, called haptics, are also used to make virtual-reality experiences more realistic. Yoshinori Dobashi, at Hokkaido University, in Japan, has simulated the feel of water. (See “Recreating the Feel of Water.”) And one company is adding tactile feedback to a vest that can be worn while playing video games. (See “Making Games Physical.”)
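
The basic mechanism behind work like Brewster’s is straightforward: when a touch lands inside an on-screen key’s bounds, fire a brief vibration pulse so the user feels the press register. Here is a minimal sketch of that pattern; the vibrate() function is a hypothetical stand-in for a handset’s actuator driver, not a real phone API.

```python
from dataclasses import dataclass

@dataclass
class Button:
    label: str
    x: int   # top-left corner, in pixels
    y: int
    w: int   # width and height, in pixels
    h: int

    def contains(self, tx: int, ty: int) -> bool:
        return self.x <= tx < self.x + self.w and self.y <= ty < self.y + self.h

def vibrate(duration_ms: int) -> None:
    """Hypothetical stand-in for the phone's vibration-motor API."""
    print(f"buzz {duration_ms} ms")

def on_touch(buttons: list[Button], tx: int, ty: int) -> str | None:
    """Return the tapped key's label and fire a short tactile pulse."""
    for b in buttons:
        if b.contains(tx, ty):
            vibrate(20)  # brief pulse: "key registered"
            return b.label
    return None  # touch missed every key: no cue, mimicking a missed press

keys = [Button("Q", 0, 0, 40, 40), Button("W", 40, 0, 40, 40)]
print(on_touch(keys, 50, 10))  # buzzes, then prints "W"
```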

Context-Aware Gadgets
People are growing accustomed to having their cell phones or laptops with them at all times. Useful as these gadgets are, they could be more helpful still if they automatically suggested things to do or gave directions to a nearby restaurant. This year, a number of products and research projects tried to make phones and other gadgets smarter in just this way. Nokia, for instance, introduced a powerful Internet tablet with a Global Positioning System (GPS) chip. (See “Nokia’s GPS-Enabled Pocket Computer.”) But not all gadgets have GPS capabilities. Google recently announced a technology that sidesteps the GPS requirement and helps a person place himself on a map, within about 1,000 meters, using information from a nearby cell-phone tower. (See “Finding Yourself without GPS.”) Similarly, the German startup Plazes offers a service that lets a person locate herself using a Wi-Fi signal, among other features. (See “Marking Your Territory.”) And what to do with all this location information? Researchers at the Palo Alto Research Center have developed a phone application that suggests things the user might want to do, places to eat and shop, and sights to see, based on location, time of day, past preferences, and even text-message conversations. (See “Smart Phone Suggests Things to Do.”)
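
The cell-tower approach trades precision for ubiquity: the phone reports the identifiers of the tower currently serving it, and a server looks up that tower’s surveyed coordinates, placing the user somewhere within its coverage area. The following rough sketch shows that lookup under stated assumptions; the database entries and the coordinate values are invented for illustration.

```python
# Maps (mobile country code, network code, location area, cell ID)
# to a tower's surveyed latitude and longitude. Entries are invented.
TOWER_DB = {
    (310, 410, 1234, 56789): (37.4419, -122.1430),
    (310, 410, 1234, 56790): (37.4275, -122.1697),
}

def locate_by_cell(mcc: int, mnc: int, lac: int, cid: int):
    """Return (lat, lon, accuracy_m) for the serving tower, if known.

    The user is somewhere inside the tower's coverage area, so the
    reported accuracy is coarse: on the order of 1,000 meters.
    """
    coords = TOWER_DB.get((mcc, mnc, lac, cid))
    if coords is None:
        return None  # tower not in the database; fall back to GPS, if any
    lat, lon = coords
    return lat, lon, 1000  # ~1 km uncertainty radius

print(locate_by_cell(310, 410, 1234, 56789))
```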

Brain-Computer Interfaces
As processing power increases, researchers and companies are looking at novel ways of taking advantage of it. One idea is to improve the user interface itself. The startup Emotiv is betting on a wireless electroencephalograph (EEG) cap for gamers that lets them control a game by concentrating on certain tasks. (See “Connecting Your Brain to the Game.”) Another startup, called EmSense, believes that EEG can help it collect better market-research data about how people respond to advertisements, video games, and political speeches. (See “Brain Sensor for Market Research.”) Microsoft researcher Desney Tan is leveraging EEG in a different way: he’s using it to collect people’s subconscious responses to pictures in order to teach computers to recognize certain types of images. Ideally, computers would learn to differentiate between images of animate and inanimate objects. (See “Human-Aided Computing.”)
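
Consumer EEG systems like these typically reduce the raw scalp signal to power in a few frequency bands and feed those features to a classifier. The toy sketch below assumes a single channel and a fixed beta-to-alpha power ratio threshold in place of the trained, per-user models a real headset would use; the sampling rate and threshold are illustrative assumptions.

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(signal: np.ndarray, low: float, high: float) -> float:
    """Power in the [low, high] Hz band, from the signal's spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return float(spectrum[mask].sum())

def is_concentrating(signal: np.ndarray, threshold: float = 2.0) -> bool:
    # A common heuristic: focused attention raises beta-band (13-30 Hz)
    # power relative to alpha (8-12 Hz). The threshold here is invented;
    # a real system would learn it for each user.
    beta = band_power(signal, 13, 30)
    alpha = band_power(signal, 8, 12)
    return beta / (alpha + 1e-9) > threshold

# One second of synthetic "EEG" with a strong 20 Hz (beta) component.
t = np.arange(FS) / FS
sample = np.sin(2 * np.pi * 20 * t) + 0.3 * np.random.randn(FS)
print(is_concentrating(sample))  # True for this beta-heavy signal
```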

Multimedia

  • Watch a video about Jeff Han’s multitouch screen.

Multicore Computing
As transistors shrink in size every two years or so, companies such as Intel and AMD are cramming more and more of them onto a single processor. But they are also adding more processing cores to computers to make them faster and more energy efficient. This year, consumers became accustomed to dual-core chips (processors with two number-crunching engines), and ever more powerful computers with many more cores are on their way. (See “The Promise of Personal Supercomputers.”) But as each generation of processor arrives with a larger number of cores, engineers will run into problems. No one quite knows how best to design a consumer processor with tens or hundreds of cores, and no one knows how to make such a chip easy to program. MIT spinoff Tilera has an approach that it hopes will work for some video applications: it has built its chip around a network structure that ensures that all the cores have access to the resources, including memory, that they need at any given time. (See “A New Design for Computer Chips.”) Another group of MIT researchers has developed software that may make it easier to write programs that naturally take advantage of multiple cores, a task that is usually difficult and time consuming. Saman Amarasinghe has designed a compiler (a tool that converts code into instructions a computer can execute) that identifies which programming tasks are independent. The compiler places separate tasks on different cores so that they won’t interfere with each other or contend for the same portion of memory. (See “Simpler Programming for Multicore Computers.”)
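
Such a compiler automates what programmers otherwise do by hand: identify tasks that share no data and dispatch them to separate cores. The sketch below shows that pattern written out manually with Python’s standard multiprocessing pool; the tasks themselves are arbitrary stand-ins for independent, CPU-bound work, not Amarasinghe’s actual system.

```python
from multiprocessing import Pool

def sum_of_squares(n: int) -> int:
    """A stand-in for one independent, CPU-bound task."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # These four tasks touch no shared memory, so they can run on
    # different cores without interfering: exactly the independence
    # property a parallelizing compiler tries to prove automatically.
    tasks = [2_000_000, 2_000_000, 2_000_000, 2_000_000]
    with Pool(processes=4) as pool:  # one worker process per core
        results = pool.map(sum_of_squares, tasks)
    print(results)
```

Doing this by hand is manageable for four identical tasks; the hard part, and the point of the compiler, is proving independence across the tangled tasks of a real program.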

Autonomous Vehicles
This year, the Defense Advanced Research Projects Agency (DARPA) held a robotic-car competition that attracted the world’s best minds in robotics and artificial intelligence. Two years ago, DARPA put on the Grand Challenge, in which cars drove for miles on an empty desert course. This year’s Urban Challenge required them to obey traffic laws and interact with other cars on the road (including other robotic cars). An early favorite was Stanford’s entry, named Junior, because the team’s previous vehicle had won the 2005 Grand Challenge. (See “Stanford’s New Driverless Car.”) An MIT team competed with an autonomous Land Rover that packed more computational power and sensors than any other vehicle in the race. (See “A Land Rover That Drives Itself.”) Technology Review was at the race, held at an abandoned Air Force base in Victorville, CA, to interview team leaders and meet the robots. (See “Prelude to a Robot Race.”) In the end, the vehicle from Carnegie Mellon completed the course the fastest, and with the most sensible driving of any of the six that crossed the finish line. Stanford came in second, and Virginia Tech’s entry took third place. As for MIT, it rolled in at a respectable fourth. (See “Champion Robot Car Declared.”)
