As processing power increases, researchers and companies are looking at novel ways of taking advantage of it. One idea is to improve the user interface. Startup Emotiv is betting on a wireless electroencephalograph (EEG) cap for gamers that lets them control a game by concentrating on certain tasks. (See “Connecting Your Brain to the Game.”) Another startup, called Emsense, believes that EEG can help it collect better market-research data about how people respond to advertisements, video games, and political speeches. (See “Brain Sensor for Market Research.”) Microsoft researcher Desney Tan is leveraging EEG in a different way: he’s collecting people’s subconscious responses to pictures in order to teach computers to recognize certain types of images. Ideally, computers will be able to differentiate between images of animate and inanimate objects. (See “Human-Aided Computing.”)
As transistors shrink in size every two years or so, companies such as Intel and AMD are cramming more and more of them onto a single processor. But they are also adding more processors to computers to make them faster and more energy efficient. This year, consumers became accustomed to dual-core chips, processors with two number-crunching engines–and ever more powerful computers with many more cores are on their way. (See “The Promise of Personal Supercomputers.”) But as each generation of processor comes out with a larger number of cores, engineers will run into problems. No one quite knows how best to design a consumer processor with tens or hundreds of cores, and no one knows how to make such a processor easy to program. MIT spinoff Tilera has an approach that it hopes will work for some video applications: it has built its chip around a network structure that ensures that all the cores have access to the resources, including memory, that they need at any given time. (See “A New Design for Computer Chips.”) A different group of MIT researchers has developed software that may make it easier to write programs that naturally take advantage of multiple cores–a task that is usually difficult and time consuming. Saman Amarasinghe has designed a compiler–a tool that converts code into instructions that a computer can read–that identifies which programming tasks are independent of one another. The compiler places separate tasks on different cores so that they won’t interfere with each other or try to use the same portion of memory. (See “Simpler Programming for Multicore Computers.”)
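The principle behind this kind of parallelization can be seen in a few lines of code. The sketch below is not Amarasinghe’s compiler or Tilera’s hardware; it is a hand-written illustration, in Python, of the idea the paragraph describes: when tasks are independent (each reads only its own input and touches no shared memory), a runtime can safely place them on separate cores.

```python
# Illustrative sketch only: independent tasks spread across cores by hand,
# the placement decision that a parallelizing compiler would automate.
from multiprocessing import Pool

def sum_of_squares(n):
    # An independent task: it reads only its own argument and writes
    # no shared state, so it cannot interfere with other tasks.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Pool() starts one worker process per available core by default;
    # map() hands each independent task to a separate worker.
    with Pool() as pool:
        results = pool.map(sum_of_squares, [10, 100, 1000])
    print(results)
```

The safety of this placement rests entirely on the independence of the tasks; deciding automatically whether two arbitrary pieces of code are independent is the hard analysis problem such a compiler must solve.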
This year, the Defense Advanced Research Projects Agency (DARPA) held a robotic-car competition that attracted the world’s best minds in robotics and artificial intelligence. Two years ago, DARPA put on the Grand Challenge, in which cars drove for miles on an empty desert road. This year’s Urban Challenge required them to obey traffic laws and interact with other cars on the road (including other robotic cars). An early favorite in the competition was Stanford’s entry, named Junior, since the team’s previous vehicle had won the 2005 Grand Challenge. (See “Stanford’s New Driverless Car.”) An MIT team competed with an autonomous Land Rover that had more computational power and sensors than any other vehicle in the race. (See “A Land Rover That Drives Itself.”) Technology Review was at the race, held at an abandoned air-force base in Victorville, CA, to interview team leaders and meet the robots. (See “Prelude to a Robot Race.”) In the end, the vehicle from Carnegie Mellon completed the race the fastest, with the most sensible driving of any of the six that crossed the finish line. Stanford came in second, and Virginia Tech’s entry was awarded third place. As for MIT, it rolled in at a respectable fourth place. (See “Champion Robot Car Declared.”)