
A Faster Way to the Cloud

A new file transfer protocol adopted by Amazon should make accessing the cloud faster and more reliable.
September 11, 2009

Cloud computing offers a cheap way to carry out data-intensive computing, allowing companies to effectively lease processing power from an online supplier. But uploading large amounts of data to cloud computing systems has remained costly and time-consuming.

Today, Amazon announced a new, ultrafast file transfer protocol designed to make uploading to its cloud service easier. The move could broaden the appeal of cloud computing by allowing smaller organizations and even individuals to upload data without expensive infrastructure.

“The biggest bottleneck in cloud computing is without a doubt the data transmission–uploading and downloading data to and from the cloud,” says Ian Sommerville, of the Co-laboratory for Cloud Computing at the University of St. Andrews, Scotland. Small businesses often must choose between enduring slow data transfer rates or investing in extra infrastructure, Sommerville says.

The heart of the problem is the way one of the Internet’s core features–the Transmission Control Protocol (TCP)–works. TCP regulates the flow of data by breaking it up into small packets of information, sending each packet, and then waiting for an acknowledgment that the packet has been received before sending more. If a packet does not arrive, TCP either resends it or assumes that the network is overloaded and initiates an aggressive congestion-control strategy, slowing the data rate to avoid triggering a network collapse.
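
In rough Python terms, the pattern looks something like the sketch below. This is a deliberately simplified illustration of the acknowledge-before-continuing idea (real TCP keeps a sliding window of unacknowledged packets in flight), and the send and wait_for_ack functions are hypothetical stand-ins for the network layer:

    # Toy sketch of acknowledge-before-continuing transfer (not real TCP).
    def send_file(packets, send, wait_for_ack, timeout=1.0):
        for seq, packet in enumerate(packets):
            while True:
                send(seq, packet)
                # Block for up to one round trip waiting for the receiver.
                if wait_for_ack(seq, timeout):
                    break  # acknowledged; move on to the next packet
                # No acknowledgment: assume congestion, resend, and back off.
                timeout *= 2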

While TCP works fine for sending relatively small amounts of data over short distances, it can cause major headaches for cloud-computing customers. The distance that data has to travel–measured geographically as well as by the number of network nodes it has to pass through–affects the number of errors that creep into the signal. For example, transferring data across the United States on a 100-megabits-per-second (Mbps) Internet link can involve a latency of 100 milliseconds and a loss of about 1 percent of packets, which translates to effective transfer rates of just 10 Mbps or less.
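
A standard back-of-envelope model of TCP throughput (the Mathis et al. approximation) shows why: the achievable rate is capped by packet size and latency, and it falls off with the square root of the loss rate, no matter how fast the underlying link is. The figures below are illustrative, assuming a typical 1,460-byte packet payload:

    from math import sqrt

    MSS = 1460 * 8   # typical packet payload, in bits
    RTT = 0.100      # 100 ms round-trip latency, in seconds
    LOSS = 0.01      # 1 percent packet loss

    # Mathis et al.: throughput <= (MSS / RTT) * (C / sqrt(loss)), C ~ 1.22
    throughput = (MSS / RTT) * (1.22 / sqrt(LOSS))
    print(f"{throughput / 1e6:.1f} Mbps")  # ~1.4 Mbps, on a 100 Mbps link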

Nick Trigg of Constellation Technologies, a cloud-computing company spun out of the Rutherford Appleton Laboratory, in Oxfordshire, U.K., and CERN, in Geneva, Switzerland, says that TCP can be a dramatic bottleneck for large amounts of data–so much so that it is sometimes faster to physically deliver data on a disk than to upload it.

To solve this problem, Amazon Web Services will use the Fast And Secure Protocol (FASP), a technology developed by Aspera, based in Emeryville, CA.

“Our core technology is an alternate bulk data moving protocol,” says Michelle Munson, Aspera’s CEO and cofounder. “The inefficiency [with TCP] is really very noticeable when transferring large amounts of data,” she says.

Data dashboard: Aspera’s user interface lets a user control data-transfer rates and shows transfer times and real-time network information.

Unlike TCP, FASP does not wait for confirmation of receipt, but simply assumes that all packets have arrived, says Simon Hudson of Cloud2, a provider of cloud-computing services in East Yorkshire, U.K., and an early adopter of FASP. Under this protocol, only packets that are confirmed to have been dropped are re-sent. “And instead of sending lots of small packets, it sends fewer large packets,” Hudson says. The result is that the available bandwidth is used more efficiently–more data gets through, and it gets there faster.
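
FASP itself is proprietary, so the sketch below is only a toy illustration of the general idea Hudson describes: send the data without per-packet acknowledgments, then retransmit only what the receiver reports missing (a negative-acknowledgment scheme). The send and recv_missing functions are hypothetical stand-ins for a UDP-style channel:

    # Toy negative-acknowledgment bulk transfer (illustrative, not Aspera's code).
    def bulk_send(blocks, send, recv_missing):
        # Send every block without waiting for confirmation of receipt.
        for seq, block in enumerate(blocks):
            send(seq, block)

        # The receiver reports only the sequence numbers it never got;
        # resend those until nothing is reported missing.
        missing = recv_missing()
        while missing:
            for seq in missing:
                send(seq, blocks[seq])
            missing = recv_missing()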

Another issue is traffic monitoring, says Anna Liu, a cloud-computing researcher at the University of New South Wales in Sydney, Australia. “In cloud computing, the challenge is the unpredictable nature of the public network,” she says. “You can’t control what else is happening on the network due to other people’s activities.”

FASP handles this unpredictability by monitoring network traffic and adjusting the size of packets and the rate and order in which they are sent, according to the available bandwidth and current congestion. This way, the data flow can be regulated, ensuring that FASP data gets through without saturating the network. It also becomes possible to guarantee file-transfer times, says Munson. When transferring data over a 100 Mbps connection, she says, “FASP will achieve about 95 Mbps or better.”
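
A predictable rate makes transfer times straightforward to quote. As a rough illustration, taking the 95 Mbps figure at face value and ignoring protocol overhead (the 50 GB dataset below is an invented example, and the TCP rate is the estimate from the earlier sketch):

    FILE_SIZE = 50 * 8e9  # a hypothetical 50 GB dataset, in bits
    FASP_RATE = 95e6      # sustained rate on a 100 Mbps link, bits per second
    TCP_RATE = 1.4e6      # effective TCP rate estimated above, bits per second

    print(f"FASP: {FILE_SIZE / FASP_RATE / 3600:.1f} hours")  # ~1.2 hours
    print(f"TCP:  {FILE_SIZE / TCP_RATE / 86400:.1f} days")   # ~3.3 days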

Since Amazon is such a big player in cloud computing, its adoption of FASP could broaden the appeal of the technology, says Trigg. “It’s the 800-pound gorilla in the market,” he says. “If you improve the network connection, you lower the hurdle and allow more people to use it.”
