The Year in Computing
2011 saw the personal computer continue to be marginalized. Although PCs are still the workhorse computing device in homes and offices, the most exciting innovations over the last 12 months were centered on very small-scale computing, very large-scale computing, and networked combinations of the two.
Developments in small-scale computing, in the shape of consumer mobile hardware such as the iPad 2 tablet or Galaxy Nexus smart phone, were naturally the most visible. Most of these lightweight devices use ARM-based chips, prompting Intel, best known for its desktop and laptop processors, to develop prototype smart phones and tablets that will almost certainly herald the arrival of new challengers to Apple's iPad and iPhone in 2012 and 2013.
The software that runs on these devices saw tremendous development activity as well. Hewlett-Packard tried (and failed) to break into the mobile market with an operating system, WebOS, that was generally judged to be better than the hardware it ran on. The jury is still out on whether Research in Motion can revive the flagging fortunes of the BlackBerry following the announcement of its new mobile OS. And there were major updates from the two heavyweights in the mobile OS world, Google and Apple. Google released Android 4.0 (also known as "Ice Cream Sandwich"), which means tablet and smart-phone makers no longer have to run separate versions of Android, and which boasts features such as facial recognition. In October, coinciding with the launch of the iPhone 4S, Apple released its iOS 5 operating system, featuring the Siri voice-activated digital assistant.
These new devices and operating systems have driven an explosion of mobile applications in areas such as electronic payments, health care, augmented reality, and games. The richness and range of these applications is causing a shift in corporate IT, with increasing numbers of companies allowing workers to bring their own devices to work.
But a lot of this mobile flexibility is possible only thanks to cloud computing, which lets devices seamlessly hand off complex tasks to data centers. The Kindle Fire is essentially just a front end to Amazon’s cloud services, for example, and Apple’s Siri won’t work without a network connection. Consequently, companies have been building a new generation of data centers, designed to handle cloud-computing loads as efficiently as possible. HP and Calxeda are pioneering the use in servers of low-power chips originally designed for battery-constrained mobile devices, in a bid to slash the massive electricity bills of data centers.
Tying cloud computing to consumer devices meant that a lot of personal (and corporate) information migrated into cyberspace this year, putting an even sharper emphasis on security and privacy. While some argue that your data is better off in the cloud, researchers continue to worry about holes that criminals can exploit.