To bring some order to the fuzzy world of cloud computing, the U.S. government’s National Institute of Standards and Technology has created a standard definition and a Cloud Computing Reference Architecture. Both are in the form of “Special Publications,” which are not official U.S. government standards but are designed to provide guidance to specific communities of practitioners and researchers.
The NIST Definition of Cloud Computing, currently in draft form, is based on NIST-sponsored workshops and public comments. The single definition helps ensure that government workers, industry, and other groups are talking about the same thing when they use the same words.
The draft document defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
The definition specifies five “essential” characteristics of cloud computing: self-service; accessibility from desktops, laptops, and mobile phones; resources that are pooled among multiple users and applications; elastic resources that can be rapidly reapportioned as needed; and measured service. These characteristics combine to make cloud computing a kind of infrastructure or utility. It’s not cloud computing when a company rents a specific computer in a rack at a facility that happens to be in Denver; it is cloud computing when a company rents a virtual host generated by machines that might physically reside in Denver, Atlanta, or New York.
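The interplay of these characteristics can be made concrete with a toy sketch. The following Python snippet is purely illustrative (the class and names are invented, not any vendor's API): many tenants draw capacity from one shared pool, apportionments grow and shrink on demand, and every change is logged so that service can be measured and billed.

```python
# A toy model of the pooled, elastic, metered service NIST describes.
# All names here are hypothetical; no real provider works exactly this way.

class CloudPool:
    def __init__(self, total_units):
        self.total_units = total_units  # shared physical capacity
        self.allocations = {}           # tenant -> units currently held
        self.usage_log = []             # measured service: every change recorded

    def provision(self, tenant, units):
        """On-demand self-service: a tenant claims capacity with no operator."""
        in_use = sum(self.allocations.values())
        if in_use + units > self.total_units:
            raise RuntimeError("pool exhausted")
        self.allocations[tenant] = self.allocations.get(tenant, 0) + units
        self.usage_log.append((tenant, +units))

    def release(self, tenant, units):
        """Rapid elasticity: freed capacity returns to the pool for others."""
        self.allocations[tenant] -= units
        self.usage_log.append((tenant, -units))

pool = CloudPool(total_units=100)
pool.provision("acme", 30)    # one tenant scales up...
pool.provision("globex", 50)  # ...while another shares the same pool...
pool.release("acme", 20)      # ...and capacity shrinks back when not needed
```

The point of the sketch is the contrast drawn above: no tenant holds a specific machine, only a metered share of a common pool.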
NIST defines three “service models,” or types of service, that a cloud provider might sell.
Infrastructure as a Service (IaaS) is the most basic. Customers can buy processing, storage, and network services (typically using a Web-based self-service tool) and then build their own systems on top of this infrastructure. Two of the best-known IaaS providers are Amazon and Rackspace. While customers have the illusion that they are renting specific servers, hard drives, and network switches, this equipment is in fact simulated by virtualization software, allowing multiple customers to be served by the same physical device.
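That illusion can be sketched in a few lines. This hypothetical Python example (the classes and method names are invented, not a real provider's API) shows the placement logic in miniature: each customer receives what looks like a dedicated server, while the provider's virtualization layer packs several such "servers" onto one physical machine.

```python
# A hypothetical sketch of the IaaS illusion: self-service requests for
# "servers" are satisfied by carving virtual machines out of shared
# hardware. None of these names belong to any real provider's API.

class PhysicalHost:
    def __init__(self, name, ram_gb):
        self.name = name
        self.ram_gb = ram_gb
        self.vms = []  # (customer, ram_gb) pairs placed on this host

class IaasProvider:
    def __init__(self, hosts):
        self.hosts = hosts

    def rent_server(self, customer, ram_gb):
        """Self-service: place a virtual server on any host with spare RAM."""
        for host in self.hosts:
            used = sum(ram for _, ram in host.vms)
            if used + ram_gb <= host.ram_gb:
                host.vms.append((customer, ram_gb))
                return f"vm-{customer}-on-{host.name}"
        raise RuntimeError("no capacity")

provider = IaasProvider([PhysicalHost("rack1", ram_gb=16)])
vm_a = provider.rent_server("alice", ram_gb=4)  # two customers end up
vm_b = provider.rent_server("bob", ram_gb=8)    # on the same physical box
```

Each customer sees only their own virtual server; that both live on "rack1" is invisible to them, which is exactly what lets the provider pool hardware across tenants.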
Platform as a Service (PaaS) is one step up: vendors provide preconfigured computers running operating systems and applications. Amazon is a player here too, with a variety of highly specialized offerings that provide scalable databases, message queues, Web-accessible storage, and the "Mechanical Turk" system for organizing human computation. Google App Engine, a platform for building and running scalable Web applications, is another example. Microsoft is also a supplier: its Azure service provides preconfigured computers running Windows and SQL Server. All of these are elastic pay-as-you-go services. For example, you can tell Microsoft you want a "small" computer (one-gigahertz CPU, 1.75 gigabytes of RAM) and a 10-gigabyte SQL database. Your cost is $190 per month. Likewise, many Web hosting providers now sell "virtual private servers" priced at $17 to $400 per month or more.