
The Technological Underpinnings of the Cloud

Introduction to Enterprise Cloud Computing

Whether out of curiosity or because it makes us better drivers and owners, most of us learn the basics of how our car works. Similarly, so that we can choose the right kind of Cloud and then use it most effectively, in this article we will learn about the basic components needed to build a Cloud:

This article is based on The Cloud at Your Service. It is being reproduced here by permission from Manning Publications. Manning early access books and ebooks are sold exclusively through Manning. Visit the book’s page for more information.


  • We need servers on a network, and they need a home. That physical home and all the gear in it make up a data center;
  • To utilize a large bank of servers effectively, they need to be virtualized; otherwise the economics of running that many servers would never make the Cloud cost effective;
  • At a minimum we need a way to access the Cloud, to provision new servers, to get data in and out of storage, to start and stop applications on those servers, and to decommission servers we no longer need. That is a description of a Cloud API, without which the virtualized servers in that data center would be very quiet and lonely;
  • We need some storage: a place to store what are called virtual machine images, our applications, and the persistent data those applications need;
  • Most applications also need structured data during execution, and that usually means some sort of database; and finally,
  • Because one of the great attractions of Cloud computing in the first place is the ability to scale an application when we are fortunate enough to have one that a lot of people want to use, we need elasticity: a way to expand and contract (i.e., scale) our application as demand for it grows and shrinks.

The Modern Data Centers Used by the Cloud Providers

Using our vehicle analogy, the data center is like the car's engine. A data center, such as one you might find inside any large company, is simply a facility (usually secure) used to house a large collection of computers and networking and communications equipment. But over the years the large Internet-based companies, such as Amazon.com, Yahoo!, Google, and Intuit, have built up what have to be considered mega-data centers with thousands of servers; these are the starting point for what the Cloud providers are building out. It is useful to understand how these massive data centers are built so you know how much scale you will have access to, how reliable your Cloud computing will be, how secure your data will be, and where the economics of the public Clouds are going, and also so you can understand what your own economics might look like should you decide to build a private Cloud. We have much more on Private Clouds later in this chapter, and Chapter 3 is dedicated to the topic.

The structure of a data center
A data center can occupy one room of a building, one or more floors, or an entire building. Most of the equipment takes the form of servers mounted in 19-inch rack cabinets, which are usually placed in single rows forming corridors between them. This allows people access to the front and rear of each cabinet. Servers differ greatly in size, from 1U servers (which occupy one of 42 slots in a standard rack) to large freestanding storage silos that occupy many tiles on the floor. Some equipment, such as mainframe computers and storage devices, is often as big as the racks themselves and is placed alongside them. Very large data centers may use shipping containers packed with 1,000 or more servers each; when repairs or upgrades are needed, whole containers are replaced rather than individual servers repaired.

Clean, unwavering power, and lots of it, is essential. Data centers need to keep their computers running at all times, so they must be prepared to handle power outages and even brownouts. The power must be conditioned, and backup batteries and diesel generators must be available to keep power flowing no matter what.

All of that power means that a lot of heat is created, so data centers have to cool their racks of equipment. The most common mode of cooling is air conditioning, although water cooling is also employed where it is readily available, as at some of the new data centers being built along the Columbia River in Washington State. Cooling is not the only reason air conditioning is needed; controlling humidity is also important to avoid condensation or static electricity buildup.

Network connectivity and ample bandwidth to and from the network backbones are vital to handle all the input and output of the entire collection of servers and storage units: all these servers will be very lonely if no one can access them.

Physical and logical security is important, as the bigger data centers are targets for hackers all over the world. Some freestanding data centers begin with security through obscurity and disguise the fact that a data center even exists at that location. Guards, mantraps, and state-of-the-art authentication technology keep the unauthorized from physically entering. Firewalls, VPN gateways, intrusion detection software, and the like keep the unauthorized from entering over the network.

Finally, data centers must always assume the worst and have disaster recovery contingencies in place that prevent loss of data and keep any loss of service short should disaster strike.

Data centers scaling for the Cloud
A large data center today costs in the range of $100 million to $200 million. But the total cost of building the largest mega data centers used to provide Cloud services is now on the order of $500 million or more. What goes into that much higher cost, and what can the biggest Cloud data centers do that normal companies cannot?

The largest data center operators, such as Google, Amazon, and Microsoft, situate their data centers in geographic proximity to heavy usage areas to keep network latency to a minimum and to provide failover options. They also choose geographies with access to cheap power. The Pacific Northwest is particularly advantageous, as the available hydropower is the cheapest power in the country and air conditioning needs are low to zero. Major data centers can draw a whopping amount of power and cost their owners upwards of $30 million a year for electricity.

In fact, data center servers now account for 1.2 percent of total power consumption in the U.S., and that share is going up. The Cloud data centers use so much power and have so much clout that they can negotiate huge volume discounts on electricity.

Similarly, these giant data centers buy so much hardware that they can negotiate huge volume discounts far beyond the reach of even the largest company building its own dedicated data center. For example, Amazon spent almost $90 million for 50,000 servers from Rackable/SGI in 2008, which would have cost $215 million without massive volume discounts.

Servers still dominate data center costs. This is why Google and others are trying to get cheaper and cheaper servers and have taken to building their own from components. Google relies on cheap computers with conventional multi-core processors. A single data center has tens of thousands of these inexpensive processors and disks, held together with Velcro tape in a practice that makes for easy swapping of components.

To reduce the machines’ energy appetite, Google fitted them with high-efficiency power supplies and voltage regulators, variable-speed fans, and system boards stripped of all unnecessary components like graphics chips. Google has also experimented with a CPU power-management feature called dynamic voltage/frequency scaling. It reduces a processor’s voltage or frequency during certain periods (for example, when you don’t need the results of a computing task right away). The server executes its work more slowly, thus reducing power consumption. Google engineers have reported energy savings of around 20 percent on some of their tests.
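
On Linux servers that expose the kernel's cpufreq interface, you can see this mechanism at work directly. The following is a minimal sketch (the sysfs paths and governor names are assumptions that vary by distribution and hardware, and writing the governor requires root) that inspects CPU 0's frequency policy and switches it to a power-saving governor:

```python
# Minimal sketch: inspect and adjust Linux cpufreq settings for CPU 0.
# Assumes a Linux host exposing /sys/devices/system/cpu/cpu0/cpufreq
# (paths and governor names vary by kernel and hardware).

CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"

def read(name):
    with open("%s/%s" % (CPUFREQ, name)) as f:
        return f.read().strip()

def set_governor(governor):
    # Requires root; the "ondemand" governor lowers clock speed when idle.
    with open("%s/scaling_governor" % CPUFREQ, "w") as f:
        f.write(governor)

if __name__ == "__main__":
    print("available governors: " + read("scaling_available_governors"))
    print("current governor:    " + read("scaling_governor"))
    print("current frequency:   " + read("scaling_cur_freq") + " kHz")
    set_governor("ondemand")
```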

In 2006, Google built two Cloud Computing data centers in The Dalles, Oregon, each of which has the acreage of a football field, with four floors and two four-story cooling plants, all plainly visible in the photo shown in Figure 1.

The Dalles Dam is strategic for the significant energy and cooling needs of these data centers. Today’s large data centers rely on cooling towers, which use evaporation to remove heat from the cooling water, instead of traditional energy-intensive chillers. The Dalles data center also benefits from good fiber connectivity to various locations in the U.S., Asia, and Europe, thanks to a large surplus of fiber optic networking, a legacy of the dot-com boom.

In 2007, Google built at least four new data centers at an average cost of $600 million each, adding to its Googleplex, the massive global computer network estimated to span 25 locations and 450,000 servers. Amazon also chose a location in The Dalles, just down the river, for its largest data center.

Figure 1. Google's top-secret data center in The Dalles, OR, built near The Dalles Dam for access to cheap power. Note the large cooling towers on the end of each football-field-sized building on the left. These towers cool through evaporation rather than through the much more power-hungry chillers normally used.

Meanwhile, Yahoo and Microsoft chose Quincy, Washington. Microsoft's new facility there has more than 477,000 square feet of space, or nearly the area of 10 football fields. The company is tight-lipped about the number of servers at the site, but it does say the facility uses 3 miles of chiller piping, 600 miles of electrical wire, 1 million square feet of drywall, and 1.6 tons of batteries for backup power. And the data center consumes 48 megawatts, enough power for 40,000 homes.

World’s servers surpassing Holland’s emissions
The management consulting firm McKinsey & Co. reports that the world’s 44 million servers consume 0.5 percent of all electricity and produce 0.2 percent of all carbon dioxide emissions, or 80 megatons a year, approaching the emissions of entire countries like Argentina or the Netherlands.

Cloud data centers becoming more efficient and more flexible through modularity
Already, through volume purchasing, custom server construction, and careful geographic locality, the world's largest data center owners can build data centers at a fraction of the cost per CPU operation of private corporations. These economies of scale will continue to move in their favor as they become dramatically more efficient through modular data centers. These highly modular, scalable, efficient, just-in-time data centers can provide capacity that can be delivered anywhere in the world very quickly and cheaply. An artist's rendering (photographs of such facilities are closely guarded) is shown in Figure 2.

Figure 2. Expandable, modular Cloud data center. There is no roof. New containers with servers, power, cooling and network taps can be swapped in and out as needed. Source: IEEE Spectrum magazine.

The goal behind modular data centers is to standardize data centers and move away from custom designs, enabling a commoditized manufacturing approach. Most striking is their appearance: there is no roof. Like Google, Microsoft is driven by energy costs and by environmental pressure to reduce emissions and increase efficiency. Its goal is a Power Usage Effectiveness (PUE) at or below 1.125 by 2012 across all its data centers.

Power Usage Effectiveness (PUE)
Power usage effectiveness (PUE) is a metric used to determine the energy efficiency of a data center. PUE is determined by dividing the amount of power entering a data center by the power used to run the computer infrastructure within it. PUE is therefore expressed as a ratio, with overall efficiency improving as the quotient decreases toward 1. According to the Uptime Institute, the typical data center has an average PUE of 2.5. This means that for every 2.5 watts in at the utility meter, only one watt is delivered out to the IT load. Uptime estimates most facilities could achieve a PUE of 1.6 using the most efficient equipment and best practices. Google and Microsoft are both approaching 1.125, far exceeding what any corporate or co-hosted data center can achieve.
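
As a quick worked example using the averages quoted above (illustrative numbers, not measurements from any particular facility), the PUE arithmetic looks like this:

```python
# PUE = total power entering the facility / power delivered to the IT load.
def pue(total_facility_kw, it_load_kw):
    return total_facility_kw / float(it_load_kw)

print(pue(2500, 1000))   # typical data center: 2.5
print(pue(1600, 1000))   # best practices per the Uptime Institute: 1.6
print(pue(1125, 1000))   # the Google/Microsoft target: 1.125
```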

Virtualization Ensures High Server Utilization in the Cloud

Virtualization, in our car analogy, is like the suspension. It provides the high server utilization we need by smoothing out the variation between applications that barely need any CPU time (and can share a CPU with other applications) and those that are compute intensive and need every CPU cycle they can get. Virtualization is the single most revolutionary Cloud technology, whose broad acceptance and deployment truly enabled the Cloud Computing trend to begin. Without virtualization and the 60-plus percent server utilization it allows, the economics of the Cloud would simply not work, and the Cloud could never have taken off.

Virtualization is not new at all. In fact, IBM mainframes used virtualization in the 1960s to enable large numbers of people to share a large computer without interacting or interfering with each other. Previously, one had to live with the constraints of scheduling dedicated time on these machines, where you had to get all your work for the day done in that scheduled time slot. Similarly, the concept of virtual memory, introduced around 1962, was considered pretty radical but ultimately freed programmers from the straitjacket of constantly worrying about how close they were to the limits of physical memory. Now server virtualization is proving just as dramatic for application deployment and scaling and, of course, it is the key enabler for the Cloud. How did this happen?

The average server in a corporate data center has typical utilization of only 5% and at peak load utilization is still usually no better than 20%. Even in the best-run data centers, servers only run on average at 15% or less of their maximum capacity. But when these same data centers fully adopt server virtualization, their server utilization increases to as high as 80%. For this reason, in just a few short years, most corporate data centers have deployed hundreds or thousands of virtual servers in place of their previous model of one server on one physical computer box. How does server virtualization work to make utilization jump this much?

How it works
Server virtualization transforms or “virtualizes” the hardware resources of a computer-including the CPU, RAM, hard disk and network controller-to create a fully functional virtual machine that can run its own operating system and applications just like a “real” computer. This is accomplished by inserting a thin layer of software directly on the computer hardware that contains a virtual machine monitor (VMM) or “hypervisor” that allocates hardware resources dynamically and transparently. Multiple guest operating systems run concurrently on a single physical computer and share hardware resources with each other. By encapsulating an entire machine, including CPU, memory, operating system, and network devices, a virtual machine can be made completely compatible with all standard operating systems, applications, and device drivers. This virtual machine architecture for VMware on the x86 is shown in figure 3.

Figure 3. Virtual machine architecture, using VMware as an example. The virtualization layer interfaces directly with all hardware components, including the CPU. That layer then presents each guest operating system with its own array of virtual hardware resources. The guest OS operates no differently than it would if installed on the bare hardware, but now several guest OSs, with all their applications, can share a single physical device, which can therefore run at much higher effective utilization.
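
As a concrete illustration of several guests sharing one physical host, a hypervisor can be queried for its guests programmatically. This is a minimal sketch using the libvirt Python bindings against a local QEMU/KVM host; the connection URI is an assumption, and any libvirt-supported hypervisor would behave similarly:

```python
# Minimal sketch: list the guest VMs sharing one physical host via libvirt.
# Assumes the libvirt Python bindings and a local QEMU/KVM hypervisor.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
host_cpus = conn.getInfo()[2]           # physical CPUs on this host

print("Host %s has %d physical CPUs" % (conn.getHostname(), host_cpus))
for dom in conn.listAllDomains():
    state, max_mem_kb, mem_kb, vcpus, cpu_time = dom.info()
    print("guest %-20s vCPUs=%d memory=%d MB" % (dom.name(), vcpus, mem_kb // 1024))
conn.close()
```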

Virtualization as applied to the Cloud
When virtualization passed muster with enterprise architects and CIOs, it began its rapid penetration. It was all about saving money. Enterprises started seeing utilization of their hardware assets increase pretty dramatically. It was easy to go from the typical 5 to 6% to 20%. With good planning 60% utilization or better was attainable.

Besides increased utilization and the associated cost savings, virtualization in corporate data centers set the stage for Cloud Computing in several interesting ways: it decoupled users from implementation; it brought speed, flexibility, and agility never before seen in corporate data centers; and it broke the old model of software pricing and licensing. Let's go through these before we finally look at virtualization as used in the Cloud.

Virtualization in corporate data centers:

  • Decouples users from implementation. The concept of a virtual server frees users from worrying about the physical servers and their location and lets them focus on service level agreements.
  • Brings speed, flexibility, and agility to server provisioning. Getting a (physical) server requisitioned, installed, configured, and deployed takes large organizations 60 to 90 days, and some as long as 120 days. In the virtual server model, it takes literally minutes or hours from request to fully ready for application deployment, depending on how much automation has been put in place.
  • Breaks software pricing and licensing. Software can no longer be priced per physical server or for every server it might run on; instead, charges have to be based on actual usage. This is a totally new model for IT.

Not only do these changes begin to paint a picture of how the Cloud providers offer their services, but they also point to a growing recognition and readiness within the enterprise for the model change virtualization has already brought, which prepares enterprises to adapt more easily to the Cloud computing model.

Now imagine thousands of physical servers, each of which is virtualized, can run any number of guest OSs, can be configured and deployed in minutes, and is set up to bill by the CPU-hour. The combination of cheap, abundant hardware and virtualization, wrapped with automated provisioning and billing, allows the huge economies of scale now achievable in the mega data centers to be harnessed through Cloud computing. All of this has been enabled by virtualization, just as suspension systems suddenly enabled cars to go fast without killing the occupants every time they hit a bump.

But a powerful engine (the data center) and a smooth suspension (virtualization) are not enough; we need a set of controls to start, stop, and steer our car. In our analogy, we need an API to control our Cloud.

A Cloud API Controls the Remote Servers Running Your Applications

The API to a Cloud is like the dashboard and controls in your car. There is a lot of power under that hood, but you need the dials and readouts to know what it is doing, and you need the steering wheel, accelerator, and brake to control it.

Once we have a Cloud, we need a way to access it in order to make use of it. The highest-level Clouds, those offering Software-as-a-Service (SaaS) applications, simply offer a browser-based Web interface. Lower-level Clouds, such as those offering Infrastructure-as-a-Service (IaaS), need their own access mechanism. So each type of Cloud has to provide some kind of API that can be used to grab resources, configure and control them, and release them when they are no longer needed.

An API is necessary to “engage” the service of a Cloud provider. It’s a way for the vendor to expose service features and potentially enable competitive differentiation. For example, Amazon’s EC2 API is a SOAP- and HTTP Query-based API used to send proprietary commands to create, store, provision, and manage Amazon Machine Images (AMIs). Sun’s Project Kenai Cloud API Specification is a RESTful API for creating and managing cloud resources, including compute, storage, and networking components.

As your Cloud applications will be the lifeblood of your company, you will want to make sure only authorized parties can do anything with them. If an application were running in your company's secure data center, protected by layers of physical and logical security, you could be quite certain that no one unauthorized could access it. Here, since everything having to do with your application and the server it runs on is by definition accessible over the Internet, the approach Amazon and others take to security is to issue X.509 key pairs up front and then to require a key on every API call to make sure the caller has the credentials to access the infrastructure.

To understand a Cloud API (for which there is not yet an accepted standard), it is best to look at Amazon's Cloud API, as it has the best chance of becoming the de facto standard.

Table 1 outlines some of the basic definitions and operations central to the Amazon Cloud API.

AMIs: An Amazon Machine Image (AMI) is an encrypted machine image suitable for running in a virtual server environment. For example, it might contain Linux, Apache, and MySQL, as well as the AMI owner's application.

AMIs can be public (provided by Amazon), private (custom designed by their creators), paid (purchased from a third party), or shared (created by the community for free).

AMIs are stored in Amazon's Simple Storage Service (S3).

Instances: The result of launching an AMI is a running system called an instance. When an instance terminates, the data on it vanishes. For all intents and purposes, an instance looks identical to a traditional host computer.
Standard flow (a code sketch of this flow follows the table):
  1. Use a standard AMI or customize an existing one.
  2. Bundle the AMI and get an AMI ID so that as many instances of the AMI as needed can be launched.
  3. Launch one or more instances of this AMI.
  4. Administer and use the running instance(s).
Connecting: From a web browser, you simply go to http://<hostname>, where <hostname> is your instance's public hostname.

If instead you want to connect to a just-launched public AMI that has not been modified, you run the ec2-get-console-output command.

The result in either case enables you to log in as root and exercise full control over the instance, just like any host computer you could walk up to in a data center.
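
As a rough illustration of the standard flow, here is a minimal sketch using the boto Python library against EC2; the region, AMI ID, key pair name, and credentials are placeholders, and boto is just one of several client libraries that wrap the SOAP/Query API:

```python
# Minimal sketch of the launch-and-connect flow against EC2 using boto.
# The region, AMI ID, key pair, and credentials below are placeholders.
import time
import boto.ec2

conn = boto.ec2.connect_to_region(
    "us-east-1",
    aws_access_key_id="YOUR-ACCESS-KEY",
    aws_secret_access_key="YOUR-SECRET-KEY")

# Launch one instance of a (public or custom) AMI.
reservation = conn.run_instances(
    "ami-12345678",            # placeholder AMI ID
    key_name="my-keypair",     # placeholder key pair for SSH access
    instance_type="m1.small")
instance = reservation.instances[0]

# Wait until it is running, then administer it over the network.
while instance.state != "running":
    time.sleep(5)
    instance.update()

print("connect with: ssh -i my-keypair.pem root@%s" % instance.public_dns_name)

# Decommission the server when it is no longer needed.
instance.terminate()
```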

We have barely scratched the surface of all the concepts and corresponding API calls that exist in Amazon's API, which is extensively documented at http://docs.amazonwebservices.com. The APIs also cover areas such as:

  • Using instance addressing
  • Using network security
  • Using regions and availability zones
  • Using Amazon Elastic Block Store
  • Using Auto Scaling, Elastic Load Balancing and Amazon CloudWatch
  • Using Public Data Sets
  • Using Amazon Virtual Private Cloud

We will now talk about the next important layer in what it takes to set up and use a Cloud: Cloud Storage.

Cloud Storage is Where You Save Persistent Data

Your car needs a place for you to put your groceries, your suitcase, and your sports equipment. Similarly, the Cloud needs to provide you a place to store your machine images, your applications, and any data your applications need.

Just like Cloud computing, Cloud storage has also been increasing in popularity recently, for many of the same reasons. Cloud storage delivers virtualized storage on demand over a network, based on a request for a given quality of service (QoS). There is no need to purchase storage or, in some cases, even provision it before storing data. Typically, you pay for the transit of data into the cloud and then a recurring fee based on the amount of storage your data actually consumes.

Cloud storage is used in many different ways. For example: local data (such as on a laptop) can be backed up to cloud storage; a virtual disk can be “synched” to the cloud and distributed to other computers; and the cloud can be used as an archive to retain (under some policy) data for regulatory or other purposes.

For applications that provide data directly to their clients via the network, cloud storage can be used to store that data and the client can be redirected to a location at the cloud storage provider for the data. Media such as audio and video files are an example of this, and the network requirements for streaming data files can be made to scale in order to meet the demand without affecting the application.

The type of interface used for this is just HTTP. Fetching the file can be done from a browser without having to do any special coding, and the correct application is invoked automatically. But how do you get the file there in the first place and how do you make sure the storage you use is of the right type and QoS? Again many offerings expose an interface for these operations, and it’s not surprising that many of these interfaces use REST principles as well. This is typically a data object interface with operations for creating, reading, updating, and deleting the individual data objects via HTTP operations.
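
Such a data object interface boils down to plain HTTP verbs. The following minimal sketch uses the Python requests library against a purely hypothetical storage endpoint; the URL and authorization header are made-up placeholders, and real services such as S3 additionally require signed requests:

```python
# Minimal sketch of a REST-style data object interface: create, read, and
# delete an object with plain HTTP verbs. The endpoint and auth header are
# hypothetical placeholders, not any real provider's API.
import requests

BASE = "https://storage.example.com/mybucket"
HEADERS = {"Authorization": "placeholder-token"}

# Create/update: PUT the object's bytes at its URL.
requests.put(BASE + "/notes/hello.txt", data=b"Hello, object storage", headers=HEADERS)

# Read: GET the object back (a browser could fetch this same URL).
body = requests.get(BASE + "/notes/hello.txt", headers=HEADERS).content

# Delete: remove the object.
requests.delete(BASE + "/notes/hello.txt", headers=HEADERS)
```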

A Cloud Storage standard?
The Storage Networking Industry Association has created a technical work group to address the need for a cloud storage standard. The new Cloud Data Management Interface (CDMI) is meant to enable interoperable cloud storage and data management. In CDMI, the underlying storage space exposed by the interfaces described above is abstracted using the notion of a container. A container is not only a useful abstraction for storage space, but also serves as a grouping of the data stored in it and a point of control for applying data services in the aggregate.

Keeping with Amazon's APIs as good examples to study, the simple API for dealing with Amazon's Simple Storage Service (S3) is outlined in Table 2.

Table 2. The basics of dealing with Amazon's Simple Storage Service (S3)

Objects: Objects are the fundamental entities stored in S3. Each object can range in size from 1 byte to 5 GB. Each object has object data and metadata. Metadata is a set of name-value pairs that describe the data.
Buckets: Buckets are the fundamental containers in S3 for data storage. Objects are uploaded into buckets. There is no limit to the number of objects you can store in a bucket. The bucket provides a unique namespace for the management of the objects it contains. Bucket names are global, so each developer can own only up to 100 buckets at a time.
Keys: A key is the unique identifier for an object within a bucket. A bucket name plus a key uniquely identifies an object within all of S3.
Usage (illustrated in the sketch below):
  1. Create a bucket in which to store your data
  2. Upload (write) data (objects) into the bucket
  3. Download (read) the data stored in the bucket
  4. Delete some data stored in the bucket
  5. List the objects in the bucket
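
A minimal sketch of those five steps using the boto Python library; the credentials and bucket name are placeholders, and bucket names must be globally unique:

```python
# Minimal sketch of the S3 usage steps above using boto.
# Credentials and the bucket name are placeholders.
from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection("YOUR-ACCESS-KEY", "YOUR-SECRET-KEY")

bucket = conn.create_bucket("my-unique-bucket-name")   # 1. create a bucket

k = Key(bucket)
k.key = "notes/hello.txt"
k.set_contents_from_string("Hello, S3")                # 2. upload an object

print(k.get_contents_as_string())                      # 3. download it

for obj in bucket.list():                              # 5. list the bucket's objects
    print(obj.key)

bucket.delete_key("notes/hello.txt")                   # 4. delete the object
```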

In many cases the coarse granularity and unstructured nature of Cloud storage services like S3 are not sufficient for the kind of data access an application requires. An alternative, structured data storage method is needed. Let's explore next how databases in the Cloud work (and don't).

Cloud Databases Store Your Application’s Structured Data

The navigation system in your car takes your current location and your destination and provides you constant updates during your journey about where you have been and the route you need to take. It is data that is critical for just that trip and not useful after you arrive. The navigation system is to the car what a Cloud database is to an application running in the Cloud. It is transactional data created and used during the running of that application. When we think of databases we usually think of Relational Databases.

What is a Relational Database Management System (RDBMS) and why will you frequently hear that they do not work in the Cloud? An RDBMS is a database management system in which data is stored in the form of tables and the relationship among the data is also stored in the form of tables, as shown in the simple relation in Figure 4.

Figure 4. A simple example of how a relational database works. Four tables map out relationships between the data. Because the car makes are listed in a separate table from the color table, there is no need for a red Nissan to be listed separately from a blue Nissan.

The challenge for an RDBMS in the Cloud is scaling. An application requiring an RDBMS that has a fixed number of users and whose workload is known not to expand will not have any problems using that RDBMS in the Cloud. But as more and more applications are launched in environments that have massive workloads, such as web services, their scalability requirements can, first of all, change very quickly and, secondly, grow very large. The first scenario can be difficult to manage if you have a relational database sitting on a single in-house server. For example, if your load triples overnight, how quickly can you upgrade your hardware? The second scenario can be too difficult to manage with a relational database in general.

As we have already shown, one of the core benefits of the Cloud is the ability to quickly (or, as we will show, automatically) add more servers to an application as its load increases, thereby scaling it to heavier workloads. But it is very hard to expand an RDBMS this way. Data must either be replicated across the new servers or partitioned between them. In either case, adding a machine requires data to be copied or moved to the new server. Because this data shipping is a time-consuming and expensive process, databases cannot be dynamically and efficiently provisioned on demand.

A big challenge when partitioning or replicating an RDBMS is maintaining referential integrity. Referential integrity requires that every value of one attribute (column) of a relation (table) exist as a value of another attribute in a different (or the same) relation (table). A little less formally, this means that any field in a table that is declared a foreign key can contain only values from a parent table's primary key or a candidate key. In practice, this means that deleting a record whose value is referenced by a foreign key in another table breaks referential integrity. When you partition or replicate a database, it becomes nearly impossible to guarantee that referential integrity is maintained across all the databases. In other words, the very property that makes an RDBMS so useful, being constructed out of lots of small index tables that are referred to by values in records, becomes unworkable when these databases have to scale to the huge workloads that Cloud applications are otherwise ideally suited for.

The new type of database that does scale well, and so has been used in the Cloud, is generically a key-value database. Key-value databases are item-oriented, meaning all relevant data relating to an item are stored within that item. A table can therefore contain vastly different items. For example, a table may contain car make, car model, and car color items. This means that data are commonly duplicated between items in a table (another item also contains Color: Green), as can be seen in Figure 5. This is accepted practice because disk space is relatively cheap. But this model allows a single item to contain all relevant data, which improves scalability by eliminating the need to join data from multiple tables. With a relational database, such data needs to be joined to regroup relevant attributes. This is the key issue for scaling: if a join that depends on shared tables is needed, then replicating the data is very hard and easy scaling is blocked.

Figure 5. The same data as used in Figure 4 is shown here for a key-value type of database. Because all data for an item (row) is contained within that item, this type of database is trivial to scale because a datastore can be split (just copy some of the items) or replicated (copy all of the items to an additional datastore) and referential integrity is maintained.

When Amazon set out to create a public computing cloud and when Google set out to build massively parallel, redundant, and economical data-driven applications, relational databases proved untenable for both. Each needed a way of managing data that was almost infinitely scalable, inherently reliable, and cost effective. Consequently, both came up with non-relational database systems based on this key-value concept that can handle massive scale. Amazon's Cloud database offering is called SimpleDB, and Google's is called BigTable.

Google's BigTable solution was to develop a relatively simple storage management system that could provide fast access to petabytes of data, potentially redundantly distributed across thousands of machines. Physically, BigTable resembles a B-tree index-organized table in which branch and leaf nodes are distributed across multiple machines. Like a B-tree, nodes "split" as they grow, and, since nodes are distributed, this allows for very high scalability across large numbers of machines. Data elements in BigTable are identified by a primary key, column name, and, optionally, a timestamp. Lookups via primary key are predictable and relatively fast. BigTable provides the data storage mechanism for Google App Engine, Google's Platform-as-a-Service Cloud-based application environment.

Google charges $180 per terabyte per month for BigTable storage. See Listing 1 for an example of BigTable usage.
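
As a rough illustration of the kind of usage Listing 1 refers to, the following sketch uses App Engine's Python datastore API, which is backed by BigTable; the Car model is a hypothetical example echoing Figure 5, not code from the listing:

```python
# Minimal sketch of storing and querying data in BigTable through
# Google App Engine's (Python) datastore API. The Car model is a
# hypothetical example echoing Figure 5.
from google.appengine.ext import db

class Car(db.Model):
    make = db.StringProperty()
    model = db.StringProperty()
    color = db.StringProperty()

# Each entity carries all of its own attributes (item-oriented storage).
Car(make="Nissan", model="Pathfinder", color="Green").put()
Car(make="Nissan", model="Maxima", color="Blue").put()

# Lookups are by key or by simple filtered scans; there are no joins.
for car in Car.all().filter("color =", "Green").fetch(10):
    print("%s %s" % (car.make, car.model))
```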

Amazon’s SimpleDB is conceptually similar to BigTable and forms a key part of the Amazon Web Services (AWS) Cloud computing environment. (Microsoft’s SQL Server Data Services [SSDS] provides a similar capability.) Like BigTable, this is a key-value type of database. The basic organizing entity is a domain. Domains are collections of items that are described by attribute-value pairs. An abbreviated list of the SimpleDB API calls with their functional description is shown in Table 3.

API call: API functional description
CreateDomain: Create a domain that contains your dataset.
DeleteDomain: Delete a domain.
ListDomains: List all domains.
DomainMetadata: Retrieve information about the domain's creation time and its storage, both as counts of item names and attributes and as total size in bytes.
PutAttributes: Add or update an item and its attributes, or add attribute-value pairs to items that already exist. Items are automatically indexed as they are received.
BatchPutAttributes: For greater overall throughput of bulk writes, perform up to 25 PutAttributes operations in a single call.
DeleteAttributes: Delete an item, an attribute, or an attribute value.
GetAttributes: Retrieve an item and all or a subset of its attributes and values.
Select: Query the data set using the familiar "select target from domain_name where query_expression" syntax. Supported value tests are =, !=, <, >, <=, >=, like, not like, between, is null, is not null, and every(). Example: select * from mydomain where every(keyword) = 'Book'. Order results using the SORT operator, and count items that meet the conditions specified by the predicates in a query using the Count operator.
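
A minimal sketch of a few of these calls using the boto Python library; the credentials and domain name are placeholders:

```python
# Minimal sketch of the SimpleDB calls above using boto.
# Credentials and the domain name are placeholders.
import boto

sdb = boto.connect_sdb("YOUR-ACCESS-KEY", "YOUR-SECRET-KEY")

domain = sdb.create_domain("mydomain")                      # CreateDomain

# PutAttributes: each item carries its own attribute-value pairs.
domain.put_attributes("car1", {"Make": "Nissan", "Color": "Green"})
domain.put_attributes("car2", {"Make": "Ford", "Color": "Blue"})

print(domain.get_attributes("car1"))                        # GetAttributes

# Select: query with the familiar select ... from ... where ... syntax.
for item in domain.select("select * from mydomain where Color = 'Green'"):
    print(item)

sdb.delete_domain("mydomain")                               # DeleteDomain
```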

Converting an existing application to use one of these Cloud-based databases is somewhere between difficult and not worth the trouble. But for applications already built on Object-Relational Mapping (ORM) frameworks, these cloud databases can easily provide core data management functionality, and they can do it with compelling scalability and the same economic benefits as Cloud computing in general.

CLOUD DATABASE DRAWBACKS
Transactional support and referential integrity: Applications using cloud databases are largely responsible for maintaining the integrity of transactions and the relationships between "tables."
Complex data access: Cloud databases (and ORM in general) excel at single-row transactions: get a row, save a row, and so on. However, most non-trivial applications also have to perform joins and other operations that Cloud databases cannot do.
Business intelligence: Application data has value not only in powering applications but also as information that drives business intelligence. The dilemma of the pre-relational era, in which valuable business data was locked inside impenetrable application data stores, is not something to which business will willingly return.

Cloud databases could displace the relational database for a significant segment of next-generation, cloud-enabled applications. However, business is unlikely to be enthusiastic about an architecture that prevents application data from being leveraged for business intelligence and decision support. An architecture that delivered the scalability and other advantages of cloud databases without sacrificing information management would be just the ticket.

The last technological underpinning we need to learn about is elasticity, the transmission in our ongoing vehicle analogy.

Elasticity Allows Your Application to Scale as Demand Rises and Falls

It is the transmission that smoothly adapts the speed of the car's wheels to the engine speed as you go faster and then slower. Similarly, it is elasticity that enables an application running in a Cloud to smoothly expand and contract as demand grows and shrinks. More precisely, elasticity is the ability to add capacity as demand increases and to release that capacity when you are done with it. There are many famous examples where the failure to have capacity when it was needed spelled disaster, or close to it, for big organizations you would expect to be better prepared.

Elasticity and celebrity deaths
In July 2009, two celebrity deaths occurred on the same day. First, Charlie's Angels star Farrah Fawcett died, which resulted in a minor news flurry. Then, later in the afternoon, a major web storm erupted when news of Michael Jackson's death hit the social web. Unexpectedly, Twitter had major scaling issues dealing with the sudden influx of hundreds of thousands of tweets as word of Michael Jackson's death spread. But Twitter wasn't alone. According to TechCrunch, "Various reports had the AOL-owned TMZ, which broke the story, being down at multiple points throughout the ordeal. As a result, Perez Hilton's hugely popular blog may have failed as people rushed there to try and confirm the news. Then it was the LA Times which had a report saying Jackson was only in a coma rather than dead, so people rushed there, and that site went down. (The LA Times eventually confirmed his passing.)" Numerous other examples exist of a news story, a product announcement, or even the infamous Victoria's Secret commercial sending people directly to a web site that then crashed: too much traffic met insufficient capacity, with catastrophic results. People's reaction when they are directed to a site and it breaks is not to try again, so these failures really hurt a company's business. That is why the ability to scale capacity dynamically as demand grows is so desirable.

Scalability means the cloud platform can handle an increased load of users working on a Cloud application. Elasticity, then, is the ability of the cloud platform to scale up or down based on need without disrupting the way the business is handled. Without this, the economics of moving a business or application to the cloud do not make sense.

In Listing 2, you set up an EC2 application to be load balanced and auto-scaled with a minimum of two instances and a maximum of 20 instances. Auto Scaling in this example is configured to scale out by one instance when the application's average CPU utilization exceeds a threshold of 80% and to scale in by one instance when it drops below 40% for 10 minutes.
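
The following sketch shows roughly what such a configuration looks like using the boto Python library; the group, load balancer, and key pair names, the AMI ID, the availability zone, and the use of environment-supplied credentials are all illustrative assumptions, and the symmetric scale-in policy with its CPU-below-40% alarm would be defined the same way:

```python
# Rough sketch: a load-balanced EC2 application that auto-scales between
# 2 and 20 instances on average CPU utilization, using boto 2.x. All names,
# the AMI ID, and the availability zone are illustrative placeholders;
# AWS credentials are assumed to come from the environment or boto config.
import boto
from boto.ec2.autoscale import LaunchConfiguration, AutoScalingGroup, ScalingPolicy
from boto.ec2.cloudwatch import MetricAlarm

autoscale = boto.connect_autoscale()
cloudwatch = boto.connect_cloudwatch()

# Launch configuration: what each newly started instance looks like.
lc = LaunchConfiguration(name="web-lc", image_id="ami-12345678",
                         instance_type="m1.small", key_name="my-keypair")
autoscale.create_launch_configuration(lc)

# Auto Scaling group: 2 to 20 instances behind an existing load balancer.
group = AutoScalingGroup(group_name="web-asg", launch_config=lc,
                         load_balancers=["web-elb"],
                         availability_zones=["us-east-1a"],
                         min_size=2, max_size=20)
autoscale.create_auto_scaling_group(group)

# Scale-out policy: add one instance when triggered.
autoscale.create_scaling_policy(ScalingPolicy(
    name="scale-out", as_name="web-asg",
    adjustment_type="ChangeInCapacity", scaling_adjustment=1, cooldown=300))
scale_out = autoscale.get_all_policies(as_group="web-asg",
                                       policy_names=["scale-out"])[0]

# CloudWatch alarm: fire the policy when average CPU exceeds 80%.
cloudwatch.create_alarm(MetricAlarm(
    name="cpu-high", namespace="AWS/EC2", metric="CPUUtilization",
    statistic="Average", comparison=">", threshold=80,
    period=300, evaluation_periods=2,
    alarm_actions=[scale_out.policy_arn],
    dimensions={"AutoScalingGroupName": "web-asg"}))
# A "scale-in" policy (scaling_adjustment=-1) with a CPU < 40% alarm
# over two 5-minute periods completes the configuration described above.
```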

It is time for us to leave our vehicle analogy behind, but it has been useful for seeing how we get a Cloud to work. There isn't only one flavor of cloud computing, however. The six critical enabling technologies we just discussed, along with a knowledge of the Cloud types, help you understand how these different flavors of Clouds work, what they offer, and how they differ, and that in turn helps you understand which one is best for you.

Summary

Here we focused on how the Cloud works by looking under the hood and examining the technological underpinnings of the Cloud. The Cloud providers have made leaps and bounds in the economies of scale they can get when creating their data centers. This means their costs keep getting lower and lower, while their specialized expertise in operating these massive data centers keeps getting better and better.

We examined some of the core enabling technologies upon which Clouds are built. First and foremost is virtualization, which most corporate data centers have already embraced as a way to increase their server utilization and thereby lower costs. Since a Cloud is a virtualized server environment where new instances of machines or of applications can be created quickly, all of which can be controlled over the network, both automation and network access are also vital in Cloud computing. An API to create, operate, expand elastically, and destroy instances is also required, and we examined what such an API looks like. Trends seem to be leading in the direction of Amazon's API becoming an industry standard. We looked at Cloud storage, focusing on Amazon's S3 as an example. We also looked at why relational databases do not scale, because they have to be partitioned or replicated, and why new key-value types of databases are instead becoming the norm in the Cloud; here we compared Amazon's SimpleDB to Google's BigTable. One of the biggest benefits of moving to the Cloud is the ability to scale almost infinitely as demand for your application grows. This is called elasticity, and we looked at how it works in some detail, showing the calls required in Amazon EC2 to create an automatically scaled, load-balanced EC2 application.
