
Collective Computing

The future of big computing may lie in distributing the work.

For years, researchers have worked to tap the powers of many remote computers to accomplish large computational tasks, from cracking encryption algorithms to gene sequence analysis. While this “distributed computing” isn’t new, the proliferation of home and office computing, along with the spread of the Internet, is heightening interest in developing technologies to exploit today’s multitude of machines. One example is SETI@home, a free program from the University of California, Berkeley. When a user’s screen saver kicks on, the program signals the SETI@home host computer over the Internet that the local computer is available to join the Search for Extraterrestrial Intelligence. The PC is sent a slice of cosmic radio-frequency data, which it analyzes for anomalies indicative of alien communication (none found so far, sorry).

But while some experts call distributed computing a generic approach to which nobody has a proprietary claim, U.S. patent examiners think otherwise. Last year IBM won an apparently broad patent on a way to broker large computing tasks. Rather than locking machines into one SETI-like job, the method can handle many tasks. And while Big Blue is guarded about how its patent relates to other distributed-computing work, one insider says it could bring distributed computing into general use. “Distributed computing is part of the way we’re looking at how future computing networks are going to operate,” the executive says. “That affects our software strategy, hardware, server [and] service [businesses].”

The patent, filed in March 1998, before SETI@home became available, describes how a coordinating computer splits up a large task. Peripheral computers “subscribe” to this central computer and get a special screen saver. When the screen saver is activated, the subscribing computer alerts the central machine and receives its job. It works on this task whenever it’s idle and sends back the results. “We saw there was a huge amount of machines sitting around idle at corporations that were connected to the Internet but were not used for any tasks,” says co-inventor Reiner Kraft of IBM’s Almaden Research Center in San Jose, CA. “We thought of expediting this kind of work, and making it easy for people to use.”
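The workflow the patent describes — a central machine splits a large job, idle subscribers pull slices and send back results — can be illustrated with a minimal single-process sketch. This is not IBM's implementation; the names (`Coordinator`, `request_work`, `submit_result`) and the toy "analysis" are hypothetical, chosen only to show the shape of the scheme:

```python
import queue


class Coordinator:
    """Central machine: splits a large task into independent work
    units, hands them out to subscribers, and collects results."""

    def __init__(self, data, chunk_size):
        self.pending = queue.Queue()
        # Split the large task into slices of work.
        for i in range(0, len(data), chunk_size):
            self.pending.put(data[i:i + chunk_size])
        self.results = []

    def request_work(self):
        """Called by a subscriber when its screen saver activates."""
        try:
            return self.pending.get_nowait()
        except queue.Empty:
            return None  # no work left

    def submit_result(self, result):
        """Subscriber sends back the result of its slice."""
        self.results.append(result)


def subscriber(coordinator, analyze):
    """An idle machine: fetch slices, analyze them, return results."""
    while (chunk := coordinator.request_work()) is not None:
        coordinator.submit_result(analyze(chunk))


# Toy example: the "analysis" is just summing each slice.
coord = Coordinator(list(range(10)), chunk_size=3)
subscriber(coord, analyze=sum)
print(sorted(coord.results))  # -> [3, 9, 12, 21]
```

In the patented scheme the subscriber would run on a remote PC and talk to the coordinator over the Internet; here both sides share one process purely for illustration. The key property survives, though: work units are independent, so any number of subscribers can drain the queue in parallel.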

Big Blue says this approach has applications for high-end computing workloads including genome analysis, crunching financial data, weather forecasting and anything graphics-intensive, especially in small and medium-sized companies that can’t afford a dedicated mainframe. IBM even envisions spot markets in computation, in which a broker would take on big jobs and farm out the work to home or office machines whose owners receive a small payment (a few cents) in return for making their PCs available.

IBM’s patent is not yet well known, so it’s hard to get experts to comment on its likely impact. But the field is hot, with many corporate and university researchers developing other distributed-computing strategies. “Many things that actually occur in the Internet today really are distributed tasks,” says Rick Rashid, senior vice president at Microsoft Research.

IBM agrees, and believes it has staked a key claim on the area. So even if your computer never finds alien life, it might identify a gene sequence that helps save lives here at home. And earn you some pocket change, to boot.
