A Look Back at Predictive Assistants and Lurching Giants
In 2012, hardware and software brought us usability advances, faster chips, and gesture control.
One of the most interesting threads of innovation in computing over the past 12 months can be traced back to the preceding year. In 2011, Apple’s virtual assistant Siri showed how software and computers could be more than just tools—something closer to collaborators. In 2012, Apple’s competitors extended that notion in ways that could shape all kinds of technology for years to come.
SRI, the research institute where Siri was first created, built a similar system capable of working as a bank teller. Meanwhile, Google launched two alternative versions of a mobile assistant. Google Now, built into newer Android smartphones and tablets, works like a search engine in reverse—offering up data such as weather forecasts, traffic reports, or transit times when it thinks a person needs that information. A similar app, called Field Trip, is intended for use when exploring a new city; it notifies users about nearby attractions, well-reviewed businesses, and events. Both are currently ad-free but show obvious potential for location-based offers. Just this month, a slick app closely modeled on Google Now launched for the iPhone.
One of Microsoft’s leading researchers, Eric Horvitz, contributed to the trend with a browser that can identify and explore landmark events in a person’s past. However, software is still a long way from matching human abilities to process, filter, and construct information. One university research project showed as much, in September, by making a virtual assistant that draws on crowdsourcing as well as AI software to carry on intelligent conversations. Just a few months later, one startup announced that such an assistant would soon be available as a product.
All those advances—Siri included—owe much to improvements in machine learning, a branch of artificial intelligence concerned with enabling software to consume data and figure things out for itself.
One result: Google’s English-language speech recognition became 20 percent more accurate this year, thanks to an upgrade to the company’s machine-learning software that will soon be rolled out for other languages. Engineers moved from purely statistics-based models to so-called artificial neural networks, which are loosely modeled on biological neurons. The same machine-learning technology powered a remarkable demonstration by Google researchers of software that learned to recognize cats by watching YouTube videos, and a demonstration by Microsoft in which spoken English was translated into spoken Chinese in real time. (For those for whom such work raises questions about reality, here’s how to test whether the world around you is a computer simulation.)
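For readers curious what the basic unit of such a network looks like, here is a minimal sketch of a single artificial neuron: a weighted sum of inputs passed through a nonlinearity. The weights and inputs below are purely illustrative, and this is a toy, not how Google’s or Microsoft’s production systems are built.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a sigmoid so the output lies between 0 and 1.
    Loosely analogous to a biological neuron firing more strongly
    as its total stimulus grows."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# A layer is just many such neurons; deep networks stack layers,
# and "learning" means adjusting the weights to reduce error.
out = neuron([0.5, 0.8], [1.2, -0.7], 0.1)
```

Training adjusts `weights` and `bias` across millions of examples, which is why the approach benefits so much from the data and computing power companies like Google have on hand.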
Touch and Feel
To become close collaborators, computers need to be able to understand us, a more likely prospect thanks to this year’s improvements in the interfaces we use to communicate with software and machines.
Some researchers demonstrated ways to make conventional touch screens more expressive. Startup Qeexo showed off hardware and software upgrades that enable a touch screen to distinguish fingers from knuckles, while another young company, Tactus, demonstrated a shape-shifting screen capable of flipping between a flat surface and one with raised buttons.
Companies large and small also looked beyond the touch screen, moving us toward a time when it will become normal to control a computer or mobile device using gestures. Startup Leap Motion wowed many—this writer included—with its $70 gesture controller. Intel showed off laptops with similar technology inside, while Microsoft readied a version of its Kinect gaming accessory for PCs and other home computers.
The most radical new vision of all for interacting with computing came from Google: a pair of eyeglass frames holding a small display. A slick promotional video for the product, known as Google Glass, gave hints about what it might allow, and Google’s founders were seen trying out the technology near TR’s San Francisco office, but even the company admitted it needed help from outside developers to find the best applications for it.
Better Building Blocks
All the advances mentioned so far rested on improvements in hardware. Researchers continued developing faster, more powerful, and more efficient hardware, with much focus on propping up Moore’s Law—the metronomic growth in the density of transistors on computer chips that has continued for nearly 50 years.
Progress in this area has relied on finding ways to carve finer features into chips, and in April, Intel launched the Ivy Bridge line, the first chips with details as small as 22 nanometers. Intel also led chip companies in launching a multibillion-dollar collaboration on “extreme” ultraviolet technology, intended to ensure that the size of components keeps dropping further still.
Smaller-scale manufacturing techniques are crucial to hard-disk technology, too—and a breakthrough in self-assembly designs, announced last month, suggests a way forward. A more unusual chip technology story came from devotees of the virtual currency Bitcoin, who started designing custom chips to help them “mine” digital cash faster.
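The “mining” those Bitcoin devotees were racing to accelerate boils down to a brute-force hash search, which is why custom chips help so much. Below is a heavily simplified Python sketch of the idea; the real protocol hashes structured block headers against a numeric target rather than counting leading zero hex digits, so treat this as illustration only.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Try nonces until the double-SHA-256 hash of the block data
    plus nonce starts with `difficulty` zero hex digits — a toy
    version of Bitcoin's proof-of-work puzzle."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(f"{block_data}{nonce}".encode()).digest()
        ).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero digit makes the search ~16x harder, so miners
# with faster hashing hardware win more often.
winning_nonce = mine("example block", 3)
```

Because the search is nothing but repeated SHA-256 hashing, chips designed to do only that (rather than general-purpose CPUs) can mine orders of magnitude faster per watt.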
But Intel—and Moore’s Law—were mentioned rarely in relation to smartphones and tablets, most of which are powered by more efficient processors based on designs from U.K. chip designer ARM. The company’s CEO told me last month that he considers Moore’s Law irrelevant and that ARM plans to compete in other areas of computing. Its technology could help out companies such as Facebook, which in 2012 released figures on the energy consumed by the vast data centers that serve its one billion users.
A wilder bet on the future of computing hardware was placed by Amazon founder Jeff Bezos and the CIA. They both invested in a Canadian company that may (or may not) have figured out how to tap weird quantum-mechanical effects to analyze data faster than a conventional computer can.
The biggest stories from computing giants Apple and Microsoft also involved ways of making computers easier for us to understand and relate to. Apple, whose late founder Steve Jobs had dismissed tablets smaller than 10 inches on the diagonal as “tweeners … dead on arrival,” backtracked and copied rivals such as Google by launching just such a tablet, the iPad Mini. The new device was well received by reviewers: the reduced size and weight made it much easier to use without compromising on power, so it seemed more likely to become a person’s constant companion.
Microsoft’s own efforts in the field of human-computer relations came to the fore with the release of Windows 8, an effort to reimagine the operating system used by some 1.3 billion people and help the company regain influence in an industry now shaped more by mobile devices than by PCs.
Both the desktop version of Windows 8 and Windows Phone 8 received generally positive reviews from our writers. But features seemingly designed for a future era when every PC has a touch screen like a tablet’s caused some confusion, a feeling echoed by other reviewers and early upgraders.
Despite such carping, the executive in charge of Windows 8 product development told MIT Technology Review that data automatically collected from some users of the operating system suggests people are adjusting to its novel design features just fine. But as one of our bloggers pointed out, the arguments over Windows 8’s design are as much a reminder that all computer operating systems have their shortcomings as an indictment of Microsoft’s choices.
The coming year will be a test for many of the computing technologies that gained attention in 2012; we will certainly find out whether Microsoft’s grand experiment has been a success or a titanic flop.