MIT Technology Review
Cloud computing offers a cheap way to carry out data-intensive computing, allowing companies to effectively lease processing power from an online supplier. But uploading large amounts of data to cloud computing systems has remained costly and time-consuming.

Today, Amazon announced a new, ultrafast file transfer protocol designed to make uploading to its cloud service easier. The move could broaden the appeal of cloud computing by allowing smaller organizations and even individuals to upload data without expensive infrastructure.

“The biggest bottleneck in cloud computing is without a doubt the data transmission: uploading and downloading data to and from the cloud,” says Ian Sommerville of the Co-laboratory for Cloud Computing at the University of St. Andrews, Scotland. Small businesses often must choose between enduring slow data transfer rates and investing in extra infrastructure, Sommerville says.

The heart of the problem is the way one of the Internet’s core features, the transmission control protocol (TCP), works. TCP regulates the flow of data by breaking it into small packets, sending a limited window of packets at a time, and waiting for acknowledgments that they have been received before sending more. If a packet does not arrive, TCP resends it and, assuming the network is overloaded, initiates an aggressive congestion-control strategy, slowing the data rate down to avoid triggering a network collapse.

While TCP works fine for sending relatively small amounts of data over short distances, it can cause major headaches for cloud-computing customers. The distance that data has to travel, measured geographically as well as by the number of network nodes it must pass through, affects both latency and the rate of packet loss. For example, transferring data across the United States on a 100-megabits-per-second (Mbps) Internet link can involve a latency of 100 milliseconds and a loss of about 1 percent of packets, which translates to real transfer rates of just 10 Mbps or less.
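Those figures can be sanity-checked with a widely used back-of-the-envelope model for TCP throughput, rate ≈ MSS / (RTT × √loss), attributed to Mathis and colleagues. The calculation below plugs in the article's example values (100 ms latency, 1 percent loss) plus an assumed typical Ethernet segment size; it is a rough ceiling estimate, consistent with the "10 Mbps or less" figure above:

```python
# Back-of-the-envelope TCP throughput ceiling using the Mathis
# approximation: rate ~ MSS / (RTT * sqrt(loss)).
# Parameters are the article's example values plus an assumed
# 1,460-byte segment size; this is an estimate, not a measurement.
import math

mss_bits = 1460 * 8      # typical Ethernet TCP segment, in bits (assumed)
rtt = 0.100              # 100 ms coast-to-coast round-trip latency
loss = 0.01              # 1 percent packet loss

ceiling_mbps = mss_bits / (rtt * math.sqrt(loss)) / 1e6
print(f"TCP throughput ceiling: {ceiling_mbps:.1f} Mbps")  # ~1.2 Mbps
```

Under this model a single TCP stream on such a path tops out around 1 Mbps regardless of the nominal 100 Mbps link speed, which is why bulk-transfer protocols like FASP abandon TCP's loss-driven rate control.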

Nick Trigg of Constellation Technologies, a cloud-computing company spun out of the Rutherford Appleton Laboratory, in Oxfordshire, U.K., and CERN, in Geneva, Switzerland, says that TCP can be a dramatic bottleneck for large amounts of data; sometimes, he says, it is faster to deliver data physically on a disk than to upload it.

To solve this problem, Amazon Web Services will use technology developed by Aspera, based in Emeryville, CA, called the Fast And Secure Protocol (FASP).

“Our core technology is an alternate bulk data moving protocol,” says Michelle Munson, Aspera’s CEO and cofounder. “The inefficiency [with TCP] is really very noticeable when transferring large amounts of data,” she says.


Credits: Technology Review, Aspera

Tagged: Computing, cloud computing, data, Amazon, protocols, TCP, data transmission
