2013: The Year’s Most Important Computing Stories
By Tom Simonite
Although the groundwork was laid in previous years, in 2013 it became apparent that computers mounted on your wrist and face will preoccupy the computing industry for years to come.
One of our first stories of the year noted the launch of the crowdfunded smart watch Pebble, a product that seemingly catalyzed the resurgence of interest in wrist-mounted computers from gadget buyers and companies large and small. Soon after that, we picked smart watches as one of MIT Technology Review’s 10 Breakthrough Technologies of 2013.
Rumors swirled throughout the year, without any definitive proof, that Microsoft, Google, and Apple were all working on smart watches. But the world’s largest smartphone maker, Samsung, did launch a smart watch called the Galaxy Gear, which has the processing power of a smartphone and even a 1.9-megapixel camera. Qualcomm, which makes more smartphone processors than any other company, also launched a smart watch. Called the Toq, it features a novel display technology that borrows a trick from butterfly wings to display crisp colors even in bright light.
We also saw researchers and startups create novel technologies that could make future smart watches easier to use. Many mobile developers adapted their apps for wrist-mounted screens or created new ones for such devices; a company called Chirp developed a compact ultrasonic sensor that allows a small device to recognize gestures; and researchers at Carnegie Mellon invented a simple touch keyboard app for accurate typing on tiny screens.
Despite the activity, it’s not yet clear quite how smart watches will fit into our lives. Intel’s lead gadget anthropologist, Genevieve Bell, told us that the computing industry has yet to figure out what problem smart watches solve for people. However, as we noted in a lengthy review of the smart watches available so far, few live up to what seems to be the promise of the form factor—helping people manage their digital life with fewer interruptions to their offline life.
Meanwhile, the spectacle-style computer Google Glass reached a larger number of early testers, and the company behind it worked hard to ready the technology, and the world, for a consumer launch in 2014.
Two employees working on Google Glass gave MIT Technology Review their takes on what it was like to live with the device. Wearable-computing pioneer Thad Starner, a Georgia Tech professor and technical lead on the project, said that the device offers a “killer existence.” Mary Lou Jepsen, who works on display technology for Google’s secretive Google X division, said that Glass had become deeply embedded into her life, describing it as “a way of amplifying you.”
Several small companies showed off products inspired by Google Glass in 2013, and some of them may point to what this type of gadget will do in the future. One startup, Meta, teamed up with Steve Mann, a researcher who built some of the first wearable displays and computers, to make glasses that include a 3-D camera and can sense the wearer’s gestures. Another startup, Atheer Labs, showed a similar device with depth sensing and the ability to overlay 3-D imagery onto a person’s vision. Starner revealed an even quirkier technology being developed with academic colleagues: a wearable device for dogs, called FIDO.
Notable improvements were also made in 2013 to better-established computer designs. The 41-megapixel camera that Nokia added to one smartphone showed that big strides can still be made in mobile cameras. And Apple and Motorola both launched smartphones with dedicated chips intended to improve their ability to understand a person’s activities and needs. Apple’s version is a motion-sensing chip in the new iPhone that should allow for smarter fitness tracking and help apps guess at things like whether you are driving or walking, so that they know whether to interrupt. Motorola’s new Moto X phone launched with a processor optimized to listen for voice commands at all times.
Leading hard-drive manufacturer HGST started pumping helium into its drives to reduce friction, and Apple began building iPad displays using a material called indium gallium zinc oxide, a sign that extremely pixel-dense displays will soon be heading to TVs and large monitors. Google also tried to invent a new class of simple gadget to get Web video onto conventional TV screens with the thumb-drive-size Chromecast.
Meanwhile, a growing realization that conventional computers are poorly matched to tasks such as understanding images and other messy data led major technology companies to invest in efforts to reinvent their basic design.
Google and NASA teamed up to open a quantum computing lab, based on computers supplied by the controversial Canadian company D-Wave. The move was motivated by a desire to find more efficient ways to analyze large volumes of data—something both Google and NASA have in greater quantities than they can currently process. Microsoft’s incoming research boss also told us that his company was expanding its investment in quantum computing “dramatically,” something he expected to produce significant gains in security and privacy technology.
The poor efficiency of computers at processing real-world data such as images led IBM and some other companies to make major investments in neuromorphic computing, which involves crafting hardware that processes data in a way similar to how biological brains do. For certain tasks, neuromorphic chips can be significantly more power efficient than conventional processors. Leading smartphone chip maker Qualcomm unveiled a neuromorphic research program that involved tests on a trainable robot; IBM released a programming language for its neuromorphic processors, hoping that many coders will make use of the language in coming years; and a startup in Switzerland used the principle to create a camera sensor modeled on the wiring of the human retina.