Thanks to renewed interest in hands-on computing, researchers have continued to push the boundaries of displays and interfaces. This year, researchers at Microsoft demoed a back-of-the-screen touch pad, and Perceptive Pixel, a startup based in New York, came up with an intuitive way to slide one on-screen object underneath another based on touch force. (See “What’s Next for Computer Interfaces?”) Touch screens also came down in cost, putting them within reach of the average hacker. Engineers at Nordt, a research studio based in New York, introduced TouchKit, a product that lets anyone build and modify a touch-screen table for less than $1,000. (See “Open-Source, Multitouch Display.”) And Microsoft researchers demonstrated an easy, cheap way to turn a normal display into a multitouch surface. (See “A Low-Cost Multitouch Screen.”) Taking things a step further, Samsung partnered with software provider Reactrix to remove the need for touch entirely: its gesture-based interface uses computer-vision software to “see” the hand movements of people standing in front of the screen. (See “A Display That Tracks Your Movements.”)
Storing More for Less
Advances in flash memory continued according to Moore’s Law, which states that the number of transistors on chips doubles roughly every two years. Even so, this year, researchers provided details of up-and-coming memory technologies that could overcome some of the drawbacks of flash: its slowness and the way it starts leaking data after about 10 years. One possible successor, phase-change memory, which stores data by altering the crystal structure of a material (rather than using the charge within transistors), seems likely to enter the market in 2009. Over the past year, companies including Samsung and a Swiss startup called Numonyx have begun sending out test samples to gadget makers. (See “A Memory Breakthrough” and “A New Memory Company.”) Stu Parkin, the IBM researcher who developed the magnetic spin valve used in hard drives, introduced a technology called racetrack memory, in which nanowires hold data in the form of magnetic spin. (See “IBM’s Faster, Denser Memory.”) According to Parkin, racetrack memory could match the durability of flash memory, the speed of phase-change memory, and the capacity of spinning magnetic hard disks.
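Moore’s Law, as stated above, is simple exponential growth. A quick sketch makes the scale concrete; the starting transistor count below is a hypothetical round number chosen for illustration, not a figure from the article.

```python
def projected_transistors(start_count, years, doubling_period=2.0):
    """Project a chip's transistor count, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Assuming a hypothetical 1-billion-transistor chip as a baseline,
# a decade of doubling every two years yields a 32x increase.
print(f"{projected_transistors(1e9, 10):.1e} transistors after 10 years")
```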
The microchip industry and research community are always hunting for ways to make electronics more energy efficient, and this year they were prompted to rethink fundamental aspects of microprocessor design. At Lawrence Berkeley National Laboratory, researchers found a way to get more performance out of a supercomputer than ever before, while slashing its power consumption, by borrowing design tricks from the cell-phone industry. (See “A Smarter Supercomputer.”) And a team at the University of Michigan designed a special chip for small sensor applications that consumes only 30 picowatts of power when idle and 2.8 picojoules of energy per computing cycle. The chip is so energy efficient that it can be powered by a battery no larger than the chip itself. (See “A Picowatt Processor.”) Continuing the low-power theme, Intel launched Atom, a power-efficient processor designed for small notebooks and handheld gadgets. (See “Inside Intel’s New Chip.”) The chip maker also provided details about Nehalem, its newest multicore design, which has a novel memory structure that lets data flow to the processor more efficiently. (See “Intel’s Power Play.”) Intel could soon face a foreign challenge, however: this year, Chinese researchers released details of their latest multicore chip, Godson-3. (See “A Chinese Challenge to Intel.”)
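The Michigan chip’s figures (30 picowatts idle, 2.8 picojoules per cycle) are easy to sanity-check with back-of-the-envelope arithmetic. The battery capacity below is a hypothetical thin-film cell assumed for illustration; only the power and energy figures come from the article.

```python
IDLE_POWER_W = 30e-12          # 30 picowatts, from the article
ENERGY_PER_CYCLE_J = 2.8e-12   # 2.8 picojoules per cycle, from the article

# Hypothetical thin-film battery: 1 microamp-hour at 1.5 V.
# Energy (J) = charge (coulombs) x voltage: 1e-6 Ah * 3600 s/h * 1.5 V
battery_energy_j = 1e-6 * 3600 * 1.5   # = 5.4 millijoules

# How long the battery could sustain the chip's idle draw,
# and how many computing cycles it could fund in total.
idle_years = battery_energy_j / IDLE_POWER_W / (3600 * 24 * 365)
total_cycles = battery_energy_j / ENERGY_PER_CYCLE_J

print(f"Idle lifetime on the assumed battery: ~{idle_years:.1f} years")
print(f"Total compute budget: ~{total_cycles:.2e} cycles")
```

Even a tiny cell sustains the idle draw for years, which is consistent with the claim that a battery no larger than the chip can power it.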
It may sometimes seem as if the United States is stuck in a wireless rut, with patchy access to Wi-Fi and relatively slow cellular networks, but better connectivity could be just around the corner. In March, when the FCC auctioned off new slices of wireless spectrum, Verizon obtained a significant portion, which it promised to make open, enabling access to previously off-limits airwaves. (See “What the FCC Auction Means” and “Opening the Airwaves.”) Another slice of spectrum will open up when television stations switch from analog to digital broadcasting in February, and researchers spent much of 2008 developing devices that can use this so-called white space. (See “The Coming Wireless Revolution.”) Wi-Fi could also get a boost from an Intel research project that involved rewriting the software in routers to quickly beam data over more than 60 miles. (See “Long-Distance Wi-Fi.”) Another emerging point-to-point wireless technology announced this year makes use of an underused part of the spectrum to blast more than 10 gigabits of data per second through the air. (See “Wireless at Fiber Speeds.”)
For people who love smart phones, 2008 was a big year. Apple opened its iPhone to developers and launched the App Store, letting them sell software for the phone directly to users. The apps released so far range from the sublime to the ridiculous: applications that search the Web via voice commands, games that make use of the iPhone’s built-in accelerometer, and virtual musical instruments that rely on the touch screen and microphone. (See “What to Expect from the Open iPhone” and “What Does Apple Want?”) Google’s big mobile play, a mobile operating system called Android, also finally arrived on its first phone, the T-Mobile G1. (See “Awaiting the Google Phone” and “Android Has Arrived.”) But for some people, smaller and simpler is still better when it comes to cell phones. An Israeli startup called Modu Mobile introduced a modular handset that slides into a number of different skins and even a car adapter. (See “Rethinking the Cell Phone.”) With so many cell phones available, obsolete devices are rapidly piling up in desk drawers, but there’s good news for the environmentally conscious: more companies than ever are shipping old phones to specialized recycling centers, where they are either refurbished or melted down for the precious metals that they contain. (See “Where Cell Phones Go to Die.”)