Intel, Microsoft Push Parallel Computing
Yesterday, in a teleconference, Intel and Microsoft announced that they will start to fund serious efforts to make parallel computing mainstream. The companies are injecting $20 million into computer-science research at the University of California, Berkeley, and at the University of Illinois, Urbana-Champaign, dubbing them the Universal Parallel Computing Research Centers (UPCRC).
For decades, parallel computing – the tricky job of dividing programming tasks among a number of processors – has been the province of academics who write programs for supercomputers. But with the advent of multicore chips – chips with more than one processing center – personal computers are on track to become supercomputers as well.
The main problem outlined at yesterday’s teleconference is that there is still no consensus on the best way to easily program general-purpose chips, like the ones in consumer computers, which have a large number of cores. Historically, programmers have been able to wait for the next generation of chips to come out to make their programs run faster. Now that the next generation of chips includes multiple cores, it’s not as obvious how to divvy up the tasks among the cores to achieve those same gains.
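To see what "divvying up the tasks among the cores" means in practice, here is a minimal illustrative sketch (not from the teleconference) using Python's standard multiprocessing module: a large summation is split into chunks, and each chunk is summed by a separate worker process that can run on its own core.

```python
# Illustrative sketch: dividing one task (a big sum) across cores.
# The function names (partial_sum, parallel_sum) are invented for
# this example, not part of any library discussed in the article.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker sums its own slice of the data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Divide the data into roughly one chunk per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Map the chunks onto a pool of worker processes, then
    # combine the partial results back into one answer.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1_000_000))))
```

Even this toy example hints at the hard questions the researchers face: how to split the work evenly, how to combine partial results, and whether the overhead of coordinating workers outweighs the speedup.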
“Programmers have no choice: if they want fast programs, they’re going to have to write parallel programs,” says Dave Patterson, the director of the UPCRC and a computer-science professor at Berkeley.
Although no details were given on specific research projects, Tony Hey of Microsoft Research mentioned some areas of interest, which include developing parallel code libraries with chunks of code that are ready to use, testing different memory approaches, and exploring different types of parallel languages.
Below is a roundup of the most interesting quotes from the teleconference.
“The shift in hardware technology from a single core to multicore will have a profound effect in the way we do programming in the future. We’re really in the midst of a revolution in the computing industry.” –Tony Hey, Microsoft Research
“Parallelism is coming not just to high-end systems, but across a very broad slice of computer science, and we expect [it] to permeate every corner of computer science going forward.” –Andrew Chien, director of Intel Research
“The technology revolution that Andrew [Chien] outlined means that every computer will be a parallel computer. Every problem must be a parallel problem … We want to democratize parallelism.” –Marc Snir, codirector of the UPCRC and a professor of computer science at the University of Illinois
“My perspective here is, there’s not that much that universities haven’t invented [in parallel computing], but there was no strong pull from industry because parallelism wasn’t pervasive; it was a small subset of the industry … Now that the pipeline is unclogged, now that there’s strong interest from Microsoft and Intel … work that’s been going on for many years will become much more fruitful.” –Marc Snir