Virtual Supercomputers Sign Up for Business

Vendors orchestrate thousands of PCs to tackle truly tough problems.
March 13, 2001

What do most personal computers spend most of their time doing? Absolutely nothing. But quite a few small vendors are out to change that by tying thousands of PCs together to create a virtual supercomputer.

The best-known example of this approach, distributed computing (sometimes called parallel computing), is SETI@Home. The University of California at Berkeley program invites us all to lend our computing resources, now said to total three million computers worldwide, to analyze data from radio telescopes trained to listen for extraterrestrial intelligence. (The software acts as a screen saver, doing its business in the background.)
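Stripped to its essentials, a SETI@Home-style client is a loop: fetch a work unit, crunch it while the machine is otherwise idle, and report the result back. The sketch below is a toy illustration, not the actual SETI@Home client; the work queue, the analysis function, and the idle check are all hypothetical stand-ins for the real network protocol and signal processing.

```python
# Toy model of a distributed-computing client. In a real client,
# fetch_work_unit and the result hand-off would be network calls
# to the project's server.
def fetch_work_unit(queue):
    return queue.pop(0) if queue else None

def analyze(samples):
    # Placeholder analysis: a sum of squares stands in for the
    # actual signal processing done on telescope data.
    return sum(x * x for x in samples)

def run_client(queue, results, is_idle=lambda: True):
    """Process work units only while the host machine is idle
    (e.g. while the screen saver is active)."""
    while (unit := fetch_work_unit(queue)) is not None:
        if is_idle():
            results.append((unit["id"], analyze(unit["samples"])))

pending = [{"id": i, "samples": [i, i + 1, i + 2]} for i in range(3)]
done = []
run_client(pending, done)
print(done)  # [(0, 5), (1, 14), (2, 29)]
```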

Closer to home, startup vendors now compete to sell aggregated computer services and the necessary infrastructure to businesses. A few even have paying customers in compute-intensive industries such as bioinformatics and financial analysis.

Distributed computing faces stiff competition from traditional compute farms, which typically employ similar aggregation and coordination techniques but run them on servers and workstations bought for that purpose.

Distributed computing’s key cost benefit is its ability to exploit the unused processing power, storage and network bandwidth of existing hardware. But vendors are also confronting the following key challenges:

  • enticing individuals or corporations to rent out their PCs
  • efficiently coordinating the decentralized hardware
  • delivering the final aggregated compute power to paying customers
  • selling the whole concept to likely customers
  • making a profit while competing against all the rivals

Getting Down to Business

SETI’s influence is strong at United Devices in Austin, TX. “Our founder, Ed Hubbard, downloaded SETI@Home and started noticing that very soon [it] had several teraflops of computing power,” recalls spokesperson Andy Prince. (A teraflop is one trillion floating-point mathematical operations per second, a common measure of computing power. Supercomputers generally provide several teraflops of power.) The company’s chief technology officer, Dr. David Anderson, was the technical lead and director of the SETI@Home project.

One of United Devices’ first major commercial customers is Exodus Communications of Santa Clara, CA. This giant Internet-hosting provider uses United Devices’ Global MetaProcessor Service to test network performance on the PCs of United Devices’ “member community.” (A version announced in February, the MetaProcessor Platform, lets companies do the same with their own internal computers.) “It gives you a much better experience of what the user experience is than would lab-generated tests on a Web site,” says United Devices senior product manager Robert Reynolds. The company also has pilots with six bioinformatics companies that Reynolds won’t name.

Another early entry in the Web-testing market, Popular Power of San Francisco, names BEA Systems of Dallas as its first commercial customer. BEA uses distributed computing resources to test its WebLogic Server software. The second is an unnamed major pharmaceutical company.

Staking out the financial opportunity is NY-based Data Synapse, which has about a dozen customers using the technology to run trading applications, risk management and decision support, according to vice president Tom Ricciardi.

The company claiming the most experience is Entropia of San Diego. Formed in 1997, it provided technology for SETI@Home and for The Great Internet Mersenne Prime Search, a search for the largest prime numbers. Entropia is also the largest vendor, having raised $30 million in venture capital, claims chief executive officer Jim Madsen. “We’ve put together a network of over 15,000 PCs,” Madsen says. He claims that the aggregated resource provides more than 6 teraflops of peak computing power, a bit less than half that of SETI@Home but accomplished on far fewer computers.
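Madsen's figures imply a modest average contribution per machine, which a quick back-of-the-envelope check makes concrete (the numbers are the article's; the breakdown is ours):

```python
# Sanity check of Entropia's claim: 6 teraflops of peak power
# spread across 15,000 PCs.
peak_flops = 6e12          # 6 teraflops, peak
num_pcs = 15_000
per_pc_megaflops = peak_flops / num_pcs / 1e6
print(per_pc_megaflops)    # 400.0
```

About 400 megaflops per machine, which is a plausible peak rating for a desktop PC of the era.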

Entropia’s customers include Envive of Mountain View, CA, a provider of Web performance-management solutions; SolidSpeed Networks of Ann Arbor, MI, which sells value-added Web bandwidth; and The Scripps Research Institute in San Diego, a major biomedical research organization. Madsen can’t disclose the names of several large pharmaceutical customers.

Other distributed computing players include Distributed Science of Pasadena, CA, with its ProcessTree Network, and Parabon Computation of Fairfax, Virginia.

In short, the nascent industry is already crowded. “You can expect a certain amount of attrition,” says Alex Veytsel, a research associate at Aberdeen Group in Boston. “It’s a nice vertical [market], but it’s not going to let you take over the world.”

In House and Out

These distributed computing vendors all offer two kinds of service, depending on how much processing power the customer needs.

For some projects, vendors help design and install the client/server infrastructure solely on the customer’s hardware, typically charging site or per-computer licenses. While this is similar to conventional projects, the need to “borrow” PCs from other groups can create some internal politics. “The business user who has the project may not be the owner of the resources, so you have to cross organizational boundaries to get access to those resources,” notes Data Synapse’s Ricciardi.

For situations requiring more compute power, or for customers who don’t want to pony up for their very own in-house supercomputer, the provider may offer resources gleaned from the public Internet, serving as a kind of broker.

To entice people to download the client software and turn over their computer’s idle cycles to strangers, the vendors donate part of the public computing resource to non-profit research organizations or offer small monetary enticements.

Entropia, for example, powers Scripps’ FightAIDSAtHome, a SETI@Home-like project that helps design anti-HIV drugs. Popular Power helps develop influenza vaccines. United Devices runs sweepstakes for MP3 players, gives $300 cash prizes and offers additional cash contests for employees who bring in group donations from their corporations. It also helps develop cancer-fighting drugs.

Beyond Batch

Distributed computing is moving from today’s “batch” processing (in which typically a single large task is parceled out) to more sophisticated applications crunching through a set of related tasks, says Nelson Minar, chief technology officer at Popular Power. The result: ever bigger and smarter virtual supercomputers.
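The "batch" model Minar describes amounts to splitting one large job into independent work units, farming them out, and merging the partial results. The sketch below is a minimal illustration under assumed details: the task (summing squares) and the thread pool standing in for a network of volunteer PCs are ours, not any vendor's actual system.

```python
from concurrent.futures import ThreadPoolExecutor

def split(task, n_units):
    """Parcel one large batch task into independent work units."""
    step = (len(task) + n_units - 1) // n_units
    return [task[i:i + step] for i in range(0, len(task), step)]

def work(unit):
    # Stand-in computation; each participating PC would run this
    # on its own work unit.
    return sum(x * x for x in unit)

def run_batch(task, n_units=4):
    units = split(task, n_units)
    with ThreadPoolExecutor() as pool:   # pool stands in for the PC network
        partials = pool.map(work, units)
    return sum(partials)                 # merge step, back at the server

print(run_batch(list(range(10))))  # 285, same answer as one big machine
```

The move Minar anticipates is from this one-shot split/merge pattern to pipelines of related tasks whose results feed one another interactively.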

“My interest is in turning networks of computers into interactive things,” Minar adds. “It’s ridiculous that computers are sitting idle all the time.”
