The last 12 months changed the shape and definition of computers, which no longer necessarily involve a keyboard, a monitor, and a mouse. Apple started the year by launching its tablet (The iPad, Like an iPhone, Only Bigger), which soon spawned many imitators (Androids Will Challenge the iPad). Google started the year by showing off the most powerful smart phone yet (Google Reveals its New Phone) and ended it with a personal computer that relies entirely on the Web, by way of Chrome OS (The Browser Takes All).
Another new category of computers grew out of the industry’s obsession with adding computing power to television. Google’s ambitious but troubled effort (Google TV Faces Some Prime-Time Challenges) joined a more established apps-for-TV scheme from Yahoo (Yahoo Brings Apps to TVs) and a stripped-down entrant from Apple (Apple Shows a Facebook Rival and Apple TV 2.0). All put Web-streamed content and social networking at the heart of their strategies, trying to connect living-room viewing with online friends (Making TV Social, Virtually).
The new kinds of computers required new kinds of controls. The year brought enhancements to touch technology, such as a way to simulate the sensation of texture on a flat screen (Touch Screens that Touch Back) and a more powerful version of the laptop track pad (Upgrading the Laptop’s Touch Pad). New physical interfaces also appeared, such as Microsoft’s technology for gestural control (Hackers Take the Kinect to New Levels) and a prototype device that the user controls by tapping a forearm (Putting Virtual Controls on Your Arm). More speculative projects showed that it’s possible to control a cell phone with your eyes (Eye Tracking for Mobile Control) or brain (Mobile Phone Mind Control).
All these innovations were made possible by continuing advances in the power and compactness of computer components. Both Intel (Computing at the Speed of Light) and IBM (Electricity and Light in One Chip) explored one route: overcoming the limitations of electrical signaling by developing chips that move data as light instead. Another radical idea, realized by a startup, was to create chips that work with probabilities rather than definite 1s and 0s, an approach that could speed cryptography and other statistical calculations (A New Kind of Microchip).
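The probability-processing idea can be sketched in miniature: logic gates that operate on the probability that each input is 1, rather than on hard 0/1 values. The toy Python sketch below is an illustration of the general concept, not the startup’s actual design, and it assumes the inputs are statistically independent:

```python
# Toy "probabilistic logic": each signal is P(bit == 1), not a hard 0 or 1.
# Assumes inputs are statistically independent (an illustrative simplification).

def p_not(pa: float) -> float:
    return 1.0 - pa

def p_and(pa: float, pb: float) -> float:
    return pa * pb

def p_or(pa: float, pb: float) -> float:
    return pa + pb - pa * pb

# With certain inputs, the gates reduce to ordinary Boolean logic...
assert p_and(1.0, 1.0) == 1.0 and p_or(0.0, 1.0) == 1.0

# ...but uncertain inputs propagate as probabilities, which is what makes
# this style of hardware natural for statistical inference.
print(p_or(0.5, 0.5))  # 0.75: chance that at least one noisy bit is 1
```

Because probabilities flow through the gates directly, statistical calculations that would take many steps on a conventional chip can, in principle, be evaluated in a single pass.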
Meanwhile, Apple (What’s Inside the iPad’s Chip?) and the Chinese government (China: a New Processor for a New Market) each took chip design in a new direction. Apple is striving to make chips for the iPad that balance portability with power, while China aims to make computing available inexpensively in parts of its huge country that are as yet unwired.
But while Moore’s law held, exponentially increasing the computing power that can fit into a given space, batteries and power supplies have improved far more slowly. That puts a premium on less-energy-intensive ways to use computing, and it motivated research showing that Wi-Fi on mobile devices uses much more power than necessary (How Wi-Fi Drains Your Cell Phone). Intel demonstrated that chips allowed to make more errors use significantly less power and still get the job done (Intel Prototypes Low-Power Circuits). And a way to cut the power use of desktop computers by an average of 60 percent was introduced; it works by putting a virtual copy of a desktop computer on a cloud server, which stands in for the machine while it sleeps (PCs that Work While they Sleep).
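The stand-in idea can be sketched conceptually: a lightweight proxy answers routine network traffic on behalf of the sleeping PC and wakes the real machine only for requests it cannot serve. This is a hypothetical illustration of the general approach, not the specific system the article describes:

```python
# Hypothetical sketch of a sleep proxy: routine traffic is answered by a
# stand-in while the PC sleeps; only heavy requests wake the real machine.

LIGHTWEIGHT = {"ping", "arp", "presence"}  # requests the proxy can answer itself

def proxy_handle(request: str, wake_machine) -> str:
    if request in LIGHTWEIGHT:
        return f"proxy answered {request!r}; PC stays asleep"
    wake_machine()  # only now does the real PC pay the full power cost
    return f"woke PC to handle {request!r}"

wakeups = []
print(proxy_handle("ping", lambda: wakeups.append(1)))
print(proxy_handle("file transfer", lambda: wakeups.append(1)))
print(f"wakeups: {len(wakeups)}")  # 1: only the heavy request woke the PC
```

The energy savings come from the asymmetry: most desktop traffic is routine chatter that a low-power stand-in can absorb, so the full machine runs only when real work arrives.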
A relatively new feature of computers, whether smart phones or TVs, is the cloud: the distant servers whose ample computing resources and storage space are accessed over the Internet. The cloud seems sure to become significantly more useful. Two startups showed that the cloud can enable small devices to act like much bigger, more powerful ones (Cloud Services Let Gadgets Punch Above Their Weight). The security worries that come with entrusting all your data to others also inspired cryptographers to hone a method that could let servers work with your data without being able to read (and potentially leak) it (Computing with Secrets, but Keeping Them Safe).
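Computing on data you cannot read may sound paradoxical, but homomorphic encryption makes it concrete. The fully homomorphic schemes the cryptographers were refining are far more elaborate; a simpler relative, the Paillier cryptosystem, shows the core trick: a server can add two encrypted numbers by multiplying their ciphertexts, without ever seeing the plaintexts. A minimal Python sketch, using toy-sized primes for illustration only:

```python
import random
from math import gcd

# Paillier cryptosystem with toy-sized primes -- illustration only.
p, q = 293, 433              # real deployments use primes hundreds of digits long
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# The homomorphic property: the server multiplies ciphertexts, which adds
# the hidden plaintexts -- it never learns 20 or 22, only their sum's cipher.
c_sum = (encrypt(20) * encrypt(22)) % n2
print(decrypt(c_sum))  # 42
```

Paillier supports only addition; the breakthrough in fully homomorphic schemes is allowing both addition and multiplication, which in principle lets a server run arbitrary computations on data it can never read.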
Google invented a new kind of cloud service when it made rudimentary AI available to all (Google Offers Cloud-Based Learning Engine). Researchers also tackled some of the logistical challenges to cloud computing, and came up with ways to easily move desktop software into the cloud (Drag and Drop Into the Cloud) and to compare the abilities of different cloud providers (Pitting Cloud Against Cloud).
Of course, even computers that incorporate the best of these ideas are still likely to crash. Fortunately, the last year brought new ideas that may make future machines more reliable. One new system can automatically diagnose a PC’s problems (Software Works Out What’s Troubling a PC); another can learn computer maintenance and repair by watching how an expert tunes a system (Software that Learns by Watching). A Stanford research project showed that building chips with transistors dedicated to spotting problems can create more reliable hardware (Speedier Bug Catching).
Security flaws, too, are universal: researchers demonstrated that even the computer systems in cars (Is Your Car Safe From Hackers?) and ATMs (How to Make an ATM Spew Out Money) can be compromised remotely. New ideas about boosting security came from other researchers who bravely installed malware on a high-performance research computer (Raising a Botnet in Captivity) and from a company that can add computer smarts to the plastic in people’s wallets (A Credit Card with a Computer Inside).