
2012

Demystifying the Cloud


This eBook explains the concepts of Cloud Computing in simple terms. The first three chapters introduce the key concepts and terminology of the Cloud. The remaining chapters cover the major implementations of Cloud Computing, including Amazon Web Services, the Microsoft Windows Azure Platform, Google App Engine, OpenStack and Cloud Foundry. It is targeted at beginner and intermediate developers with a basic understanding of web technologies.

Get Cloud Ready

2012 Janakiram & Associates

Janakiram MSV
www.GetCloudReady.com

About the Author


A technologist at heart, I have been championing Distributed Computing for the last 14 years. In my current role as an unbiased, neutral Cloud Specialist, I offer strategic advice and coaching to enterprises and start-ups, helping them adopt and take advantage of the Cloud. As the founder and driving force of an initiative called GetCloudReady.com, I enable Architects, Developers and IT Professionals to get ready for the cloud, and thereby help people, processes, products and business applications leverage its benefits.

My greatest imaginable dream is to continue to be a catalyst of the cloud revolution by being an independent authority on Cloud Computing as a Blogger, a Cloud Evangelist, a Speaker, and an Advisor and Mentor to startups.

I am also the chief editor at CloudStory.in where I blog about the latest trends in Cloud
Computing. As a passionate speaker, I have chaired the Cloud Computing track at premier
events in India. Having spoken at over 50 events in 2011, I have been consistently rated as
the top speaker at premier technical events.

My time as the Web Services Evangelist at Amazon and as Technology Architect (Cloud) at Microsoft Corporation has given me breadth and depth of knowledge and expertise in the Cloud Computing domain. Prior to this, for about 10 years at Microsoft Corporation, I was involved in selling, marketing and evangelizing the Microsoft Application Platform and Tools.

Founded on a strong belief that technology needs to create a lasting impact on businesses, people and society at large, I relentlessly strive to deliver value to my customers and the community.

My Coordinates
Principal Consultant, Janakiram & Associates - http://www.janakiramm.net
Cloud Specialist, Get Cloud Ready - http://www.GetCloudReady.com
Chief Editor, CloudStory.in - http://www.CloudStory.in


Chapter 1
Defining the Cloud
Evolution of ISP
There are multiple factors that led to the evolution of Cloud Computing. One of the key factors is the way Internet Service Providers (ISPs) matured over a period of time. I am borrowing this analogy from Forrester Research.

Evolution of ISP

From the initial days of offering basic Internet connectivity to offering Software as a Service (SaaS), ISPs have come a long way. ISP 1.0 was all about providing Internet access to customers. ISP 2.0 was the phase where ISPs offered hosting capabilities. The next step was co-location, through which ISPs started leasing out rack space and bandwidth. With this, companies could host their own servers running custom, Line of Business (LoB) applications that could be accessed over the public Internet by their employees, trading partners and customers. ISP 3.0 offered applications on subscription, resulting in the Application Service Provider (ASP) model. The latest trend of Software as a Service is a mature ASP model. The next logical step for ISPs would be to embrace the Cloud.

The Programmable Web

Web Services made the web programmable. They enabled developers to look at the Internet as a class library or an object model. Protocols and formats like the Simple Object Access Protocol (SOAP), Representational State Transfer (REST), JavaScript Object Notation (JSON) and Plain Old XML (POX) fueled the growth of APIs on the web. Today, every popular search engine, social networking site and syndication portal has APIs exposed to developers.

Programmable Web
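To make this idea concrete, here is a minimal sketch of consuming a JSON-based web API with Python's standard library; the endpoint URL and the response fields are made-up placeholders, not a real service.

import json
import urllib2

# Call a hypothetical search API and parse its JSON response
response = urllib2.urlopen("http://api.example.com/search?q=cloud")
results = json.loads(response.read())

# Treat the web as an object model: iterate over the returned records
for item in results.get("items", []):
    print item.get("title")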

Virtualization
Virtualization is the most discussed term among CIOs and IT decision
makers. Through Virtualization, the data center infrastructure can be
consolidated from hundreds of servers to just tens of servers. All the
physical server roles like Web Servers, Database Servers and Messaging
Servers run as virtualized instances. This results in a lower Total Cost of Ownership (TCO) and brings substantial savings on power bills and the cost of cooling equipment.

Virtualized Infrastructure

Though the evolution of ISPs, the programmable web and virtualization are independent trends, together they contributed to the emergence of Cloud Computing.

Understanding Cloud Computing


If you are wondering what is so special about the Cloud in Cloud Computing, here is an explanation: traditionally, developers and architects used the picture of a cloud to illustrate a remote resource connected via the web. Eventually, the cloud became the logical connector between local and remote resources on the Internet.

Many developers get confused when they encounter the term Cloud Computing. According to them, their Web Services are already hosted on the Cloud and can therefore be called Cloud Services. While there is some truth in this argument, it is not a very accurate way of describing Cloud Computing. Let's look at Cloud Computing through the eyes of a developer.

Think Web Services


Most developers are familiar with Web Services, which are based on a few simple concepts. Every Web Service accepts a request and returns a response (even if there is no explicit return value, an HTTP 200 OK status is considered a response). They are units of code that can be invoked over the web. Typically, Web Services accept one or more input parameters and invoke processing logic that results in an output. Web Services are part of web applications that run on a typical stack consisting of hardware, a server OS and an application development platform. For a moment, think about how you could expose every layer powering your web application as a Web Service.

Web Services Stack


Cloud OS
Visualize a scenario where the hardware and the Operating System (OS) are
exposed as a Web Service over the public Internet. Based on the principles of
Web Services, we could send a request to this service along with a few
parameters. Since the OS is expected to act as an interface to the CPU and
the devices, we can potentially invoke a service that accepts a job that will
be processed by the OS and the underlying hardware. Technically, this Web
Service has just turned the OS + H/W combination into a Service. We can
start consuming this service by submitting CPU intensive tasks to this new
breed of Web Service. What do you call an OS that is exposed on the web as a service? Maybe a Cloud OS? We will answer this in the coming sections.

Exposing the hardware and the OS as a Service

Cloud FX
Developers build and deploy their applications on application development platforms, the most popular of which are .NET and Java. In the previous scenario, we saw how the OS + H/W combination can be offered as a service. Now, imagine a scenario where the application development platform itself is offered to you as a service. Through this, you would be able to develop and test your applications on a low-end, inexpensive notebook PC but submit the code to run on the most powerful hardware infrastructure. It is the same programming language, SDK and runtime that run on your development environment. If the hardware, OS, language runtime and SDK are offered to you as a service, what would you call this? A Cloud Platform, or maybe Cloud FX? We will address this in the next section.

Exposing the Runtime + SDK as a Service


Cloud Application
Today, most traditional desktop applications like word processors and spreadsheet packages are available over the web. This new breed of applications just needs a browser and offers high fidelity with desktop software. This fundamentally changes the way software is deployed and licensed. You need not double-click setup.exe to install an Office suite on your desktop. Just subscribe to the applications and the features that you need, and only pay for what you use. This is almost equivalent to exposing the application as a service. These applications may be called Cloud Applications. We will revisit this later.

Web Application as a Service


Welcome to the World of Services

Infrastructure as a Service
In the previous section we discussed the Cloud OS. What the Cloud OS offers is infrastructure services. You may choose to use a REST API to manage this OS, or use an SSH or Remote Desktop console. Technically, when you are able to delegate a program to execute on a remote OS running on the web, you are leveraging Infrastructure as a Service (IaaS). This is different from classic web hosting. Web hosting only hosts web pages and cannot execute code that needs low-level access to the OS API, nor can it dynamically scale on demand. IaaS enables you to run your computing task on a virtually unlimited number of machines. Remember that through IaaS, you have just moved a server running in your backyard into the Cloud. You are still responsible for managing, patching, securing and monitoring the health of the remote servers. Amazon EC2, Rackspace, IBM SmartCloud and OpenStack are examples of IaaS offerings.

Cloud OS = Infrastructure as a Service

Platform as a Service


Platform as a Service or PaaS goes one level above the Cloud OS. Through this, developers can leverage a scalable platform to run their applications. The advantage of PaaS is that developers need not worry about installing, maintaining, securing and patching the server. The PaaS provider takes responsibility for the infrastructure and exposes the platform alone as a service. Through this, developers can achieve a higher level of scalability, reliability and availability for their applications. Microsoft Windows Azure, Google App Engine, Force.com, Heroku, Engine Yard and Cloud Foundry are some examples of PaaS.

Cloud FX = Platform as a Service

Software as a Service
Software as a Service (SaaS) is a silent revolution in the world of traditional software products. With the availability of tablets and inexpensive netbooks combined with abundant bandwidth, more and more applications are moving to the Cloud to be offered as services. Consumers can now use inexpensive devices capable of connecting to the web to get their work done. This reduces the upfront investment in software and brings in the Pay-as-you-go model. Google Apps, Salesforce.com and Microsoft Office365 are examples of SaaS.


Cloud App = Software as a Service

What does Cloud Computing mean to you?


IT Professionals and System Administrators
For IT Professionals, Cloud Computing is all about consolidating and outsourcing the infrastructure. They are typically focused on Infrastructure as a Service. IT Pros will move away from managing individual servers in their data centers to using a unified console to manage, track and monitor the health of the remote server instances running on the Cloud.

IaaS is the focus area of IT Pros and system administrators

Developers and Architects

Platform as a Service is an offering meant for developers and architects. They need to design applications keeping in mind the statelessness of the Cloud. Architects should start thinking about the patterns that will make applications seamlessly scale on the fly across hundreds of servers.

PaaS is the focus area of developers and architects

Consumers
Consumers will experience the Cloud through a variety of applications that
they will use in their day-to-day life. If you have ever used Google Docs, Dropbox or Microsoft Live Mesh, you are already leveraging the Cloud.
Consumers will subscribe to Software as a Service offerings.

SaaS delivers software through a subscription for the consumers


Chapter 2
The Tenets of the Cloud
The 4 Key Tenets
I want to quickly recap the definition of Cloud Computing. It is all about
outsourcing your infrastructure and applications to run on a remote
resource. The remote resource phrase in the definition can be misleading
and creates an illusion that running your web app on a server hosted abroad
is Cloud Computing. So, what qualifies a remote resource to be called the Cloud?

Here are the 4 key capabilities that Cloud Computing offers:
Elasticity
This is the most important attribute of the Cloud. You might start running your application on just a single server, but in no time Cloud Computing enables you to scale it to run on hundreds of servers. Once the traffic and usage of your application decreases, you can scale down to tens of servers. All this happens almost instantly, and the best thing is that your application and your customers don't even realize it. This dynamic capability of scaling up and scaling down is called Elasticity. Elasticity brings an illusion of infinity. Though nothing is infinite in this world, your application can get as many resources as it demands. This is the biggest promise of the Cloud. Now, think of web hosting. When you want to add another server to your web application, your hoster has to manually provision it for you. Adding additional servers and configuring the network topology introduces a time lag that your business cannot afford. Most Cloud Computing vendors offer an intuitive way of manipulating your server configuration and topology.


Elasticity is the single most important attribute of the Cloud.

Elasticity of Cloud Computing

Pay-By-Use


Elasticity and Pay-By-Use go hand in hand. When you are scaling up your application by adding more resources, you know exactly how much it is going to cost you. Pay-By-Use is a boon for startups. As an entrepreneur, you have to balance your investment between human resources and IT resources. The biggest benefit of Pay-By-Use is that it reduces CAPEX and turns your IT investment into OPEX. The analogy that I typically use is that of a Cable or DTH TV subscription. During the Cricket World Cup or NBA season, you would want to subscribe to the sports channels and unsubscribe the moment the event is over. With Pay-By-Use, you can subscribe and unsubscribe to IT infrastructure based on your needs, and you only pay for what you use. This is the most optimal way of spending your IT budget.

Self Service
When you are able to enjoy the capability of scaling up and scaling down and only pay for what you use, you never want to wait for someone in the data center to add an additional server to your application. The Cloud can deliver its promise only when there is self service. Through this, you control the resources all by yourself without an intermediary. When you add a new CPU core, a server instance or extra storage, you do it yourself using the console offered by the Cloud provider. This results in a reduction in IT support and maintenance. Today, most organizations have dedicated IT teams to provision a new machine, storage, collaboration portal and mailboxes as part of on-boarding new employees. Through Self Service, a fairly non-technical person can accomplish these tasks, and you don't need certified system administrators to do it. For example, when you sign up with Google Apps, it is very simple and intuitive to configure the mailboxes for the employees. With more and more applications moving to the Cloud, Self Service becomes the preferred way of configuring and managing the IT infrastructure.

AWS Management Console


Microsoft Windows Azure Management Portal

Programmability
This is another critical attribute of the Cloud, and it makes developers extremely important. Developers are familiar with the concepts of multithreading, where they spawn new threads to achieve scalability and responsiveness in their applications, incorporating logic to create additional threads on demand. The programmability aspect of the Cloud adds a new dimension to development. Developers can now create additional machines and add them to their applications on demand. They can treat the entire data center, its servers and machines, as an object model that can be programmed. They can do a For-Each loop on every server instance and decide what to do with each instance. Amazon Web Services has the most mature API for programmatically controlling Cloud based resources. Windows Azure provides a management API that lets developers programmatically deploy and manage Azure applications. By leveraging these APIs, developers are building applications to manage the infrastructure, and some of these frontends run on iPad and Android devices. Now, imagine tapping on your mobile phone to add a dozen servers to your application. Thanks to the Cloud, developers are more important than ever!
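As an illustration of this "For-Each over your servers" idea, here is a hedged sketch using boto, a Python SDK for AWS from that era; the region, the placeholder credentials and the decision to start stopped instances are assumptions for illustration only.

import boto.ec2

# Connect to a region; real credentials would come from your AWS account
conn = boto.ec2.connect_to_region(
    "us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY")

# Each reservation groups one or more instances; loop over every server
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print instance.id, instance.state
        if instance.state == "stopped":
            instance.start()   # programmatically bring the server back up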

AWS SDK for .NET Visual Studio

AWS Plug-in for Eclipse


iPhone App to manage AWS

So, let's summarize what we just discussed. Cloud Computing has 4 key tenets:
1) Elasticity, 2) Pay-By-Use, 3) Self Service, and 4) Programmability.

Hosting vs. Cloud Computing


Revisiting the ongoing debate of hosting vs. Cloud Computing, let's see which of these attributes the hosting model exposes. Hosting can never meet the promise of elasticity; even if it does, it won't match the economics of the Cloud. Hosting does offer some level of Self Service, but not to the extent of manipulating the server configuration on the fly. The Pay-By-Use attribute is emulated by some hosting companies, but it is not the norm in the hosting business. Programmability is too expensive to be supported by hosters, as they cannot invest in the SDKs and tools to manage the infrastructure. So, it is clearly evident that hosting is not the same as Cloud Computing.

Having understood the key attributes of the Cloud, you might start wondering how you can bring these capabilities to the data center in your enterprise. The reality is that these capabilities can be applied to your data center, and that is officially called the Private Cloud. It is time for us to discuss the various implementations of the Cloud. We will look at 4 different ways the Cloud can be implemented.

Hosting vs. the Cloud

The 4 Implementations of the Cloud


Public Cloud
This is the most popular incarnation of the Cloud. Many businesses and individuals experience the Cloud through the Public Cloud implementation. It needs a huge investment, and only well-established companies with deep pockets like Microsoft, Amazon and Google can afford to set one up. A Public Cloud is implemented on thousands of servers running across hundreds of data centers deployed across tens of locations around the world. The best thing about the Public Cloud is that customers can choose a location for their application to be deployed. This reduces the latency when consumers access the application. For example, a London-based business can choose to deploy its app in a European data center, while an American company may prefer a data center in North America. With this geographical spread, Public Clouds like Amazon Web Services, Rackspace and Microsoft Windows Azure also offer Content Delivery Network (CDN) features. Through this, static content is automatically cached across data centers around the globe, increasing the scalability of the application and offering a better experience to end users.


Public Cloud

Private Cloud
Simply put, Private Clouds are normal data centers within an enterprise with all the 4 attributes of the Cloud: Elasticity, Self Service, Pay-By-Use and Programmability. By setting up a Private Cloud, enterprises can consolidate their IT infrastructure. They will need fewer IT staff to manage the data center. They will also realize reduced power bills because of lower electricity consumption and reduced cooling needs. A Private Cloud empowers employees within an organization through Self Service of their IT needs. It becomes easy to provision new machines and quickly assign them to project teams. The Private Cloud borrows some of the best practices of the Public Cloud but is limited to an organizational boundary. A Private Cloud can be set up using a variety of offerings from VMware, Microsoft, IBM, Sun and others. There are also open source implementations like Eucalyptus and OpenStack. We will discuss the Private Cloud further in the coming chapters.


Private Cloud

Hybrid Cloud
There are scenarios where you need a combination of Private Cloud and Public Cloud. Due to regulations and compliance requirements in some countries, sensitive data like citizen information, patient medical history and financial transactions cannot be stored on servers that are not physically located within the political boundaries of the country. In some scenarios, enterprise customers want to get the best of both worlds by logically connecting their Private Cloud and the Public Cloud. Through this, they can offer seamless scalability by moving some of the on-premise and Private Cloud based applications to the Public Cloud. Security plays a critical role in connecting the Private Cloud to the Public Cloud. Realizing its importance, Amazon Web Services offers Virtual Private Cloud (VPC), which securely bridges a Private Cloud and Amazon Web Services. It is a way of extending your infrastructure beyond the organizational boundary and the firewall in a secure way. Windows Azure AppFabric brings the concept of the Hybrid Cloud to Microsoft's enterprise customers.


Hybrid Cloud
Community Cloud
A Community Cloud is implemented when a set of businesses have a similar requirement and share the same context. It is made available to a select set of organizations. For example, the Federal government in the US may decide to set up a government-specific Community Cloud that can be leveraged by all the states. Through this, individual bodies like state governments are freed from investing in, maintaining and managing their local data centers. Similarly, the Reserve Bank of India (RBI) or the Unique Identification Authority of India (UIDAI) may set up a Community Cloud for all the financial institutions that share common goals and requirements. So, a Community Cloud is a sort of Private Cloud, but one that goes beyond just one organization.

Community Cloud

Chapter 3
The Anatomy of the Cloud
Introduction to Virtualization
Virtualization is abstracting the hardware to run virtual instances of multiple
guest operating systems on a single host operating system. You can see
Virtualization in action by installing Microsoft Virtual PC, VMware Player or
Oracle VirtualBox. These are desktop virtualization solutions that let you
install and run an OS within the host OS. The virtualized guest OS images are
called Virtual Machines. The benefit of virtualization is realized more on the
servers than on the desktops.

Server Virtualization
There are many reasons for running Virtualization on the servers running in
a traditional data center. Here are a few:
Mean Time To Restore
It is far more flexible and faster to restore a failed web server, app server or
a database server that is running as a virtualized instance. Since these
instances are physical files on the hard disk for the host operating system,
just copying over a replica of the failed server image is faster than restoring
a failed physical server. Administrators can maintain multiple versions of the VMs that come in handy during restoration. The best thing about this is that the whole copy-and-restore process can be automated as part of a disaster recovery plan.
Maximizing the server utilization
It is very common that certain servers in the data center are less utilized
while some servers are maxed out. Through virtualization, the load can be
evenly spread across all the servers. There are management software
offerings that will automatically move VMs to idle servers to dynamically
manage the load across the data center.
Reduction in maintenance cost
Virtualization has a direct impact on the bottom line. First, by consolidating the data center to run on fewer but more powerful servers, there is a significant cost reduction. The power consumed by the data center and the maintenance cost of the cooling equipment come down drastically. The other problem that virtualization solves is the migration of servers. When the hardware reaches the end of its lifecycle, the physical servers need to be replaced. Backing up and restoring data and reinstalling software on a production server is very complex and expensive. Virtualization makes this process extremely simple and cost effective: the physical servers are replaced and the VMs just get restarted without any change in configuration. This has a significant impact on IT budgets.
Efficient management
All major virtualization products have a centralized console to manage, maintain, track and monitor the health of the physical servers and the VMs running on them. Because of this simplicity and these dynamic capabilities, IT administrators spend less time managing the infrastructure. This results in better management and cost savings for the company.

Virtualization on the Server


Let's understand more about server virtualization. Typically, the OS is designed to act as an interface between the applications and the hardware. It is not specifically designed to run guest OS instances on top of it.


An OS Managing the Applications

In fact, in the server virtualization scenario, the host OS is not very significant; it is confined to booting up and running the VMs. Given that the OS is not ideal for running multiple VMs and has little role to play, there is a new breed of software called the Hypervisor that takes the place of the OS. A Hypervisor is an efficient Virtual Machine Manager (VMM) that is designed from the ground up to run multiple high-performance VMs. So, a Hypervisor is to VMs what an OS is to processes.

A Hypervisor Managing the Virtual Machines

A Hypervisor can potentially replace the OS and can even boot directly from a VM. This is called the bare-metal approach to virtualization. These Hypervisors have a low footprint of a few megabytes (VMware ESXi is just 32MB in size!) and have an embedded OS with them. Hypervisors are assisted by the hardware virtualization features built into the latest Intel and AMD CPUs. This combination of hardware and Hypervisor turns the server into a lean and mean machine to host multiple VMs. The VM that is used by the Hypervisor to boot as a host is called a para-virtualized VM. This concept makes virtualization absolutely powerful. Imagine a server booting in a few seconds, and the required para-virtualized (host) VM getting copied over gigabit Ethernet to run multiple guest VMs. This enables the data center to be very dynamic and agile. The Hypervisor can be controlled by a central console and can be instructed which host VM to boot and which guest VMs to run on it.

Applications in guest VMs running on a para-virtualized host VM, the Hypervisor and the hardware
A look at the Hypervisor market


Citrix XenServer
This product is based on the proven, open source Hypervisor called Xen. Xen's paravirtualization technology is widely acknowledged as the fastest and most secure virtualization software in the industry, and it is enhanced by taking full advantage of the latest Intel VT and AMD-V hardware virtualization assist capabilities. This product is free and can be downloaded from Citrix.com.
VMware ESXi

This product is another bare-metal Hypervisor, from the virtualization leader VMware. It is one of the best Hypervisors, with just a 32MB footprint. ESXi ships with the Direct Console User Interface (DCUI), which provides the basic UI required for administering and managing the Hypervisor. Through its standard Common Information Model (CIM) system, it also exposes APIs to control the infrastructure.
Microsoft Hyper-V Server
This is a free Hypervisor from Microsoft based on the same Hypervisor that ships with the Microsoft Windows Server Hyper-V edition. It is well suited for Virtual Desktop Infrastructure (VDI) because of its compatibility with Windows Vista and Windows 7. Hyper-V does not have a local GUI but can be managed from System Center Virtual Machine Manager (SCVMM).

Virtualization and the Cloud


The architecture that we discussed forms the heart and soul of Cloud Computing. Here is how:
Elasticity
We know that the key attribute of the Cloud is Elasticity, which is the ability to scale up and scale down on the fly. This capability is achieved only through virtualization. Scaling up is technically adding more server VMs to an application, and scaling down is detaching VMs from the application.
Self Service
The next attribute is Self Service. The Hypervisor comes with an API and the required agents to manage it remotely. This functionality can surface through the Self Service portals that the Cloud vendor offers. So, when you move a slider to increase the number of servers in your web tier, you are essentially talking to the Hypervisor to act on that request.
Pay-By-Use


Pay-By-Use is the next attribute of the Cloud. By leveraging the management and monitoring capabilities of the Hypervisor, metering the usage of resources like CPU, RAM and storage can be easily achieved.
Programmable Infrastructure
Programmable Infrastructure is the last key tenet of the Cloud. We already saw how the APIs wired into Hypervisors can be leveraged. Developers can directly talk to the Hypervisor through the native APIs or the Web Services exposed by the Cloud vendors, and through this they can take control of the VMs.
It is very obvious that the Cloud relies heavily on virtualization and efficient Hypervisors to achieve its goal.

Dissecting the Cloud


Now that we know how Virtualization forms the core of the Cloud, let me put things in perspective. Let's see what actually goes on inside the Cloud.
Geographic location
We start by deciding where to physically run our application. Most Cloud providers give you an option to host your application at a specific location. Depending on the customer base and the expected user location, you can choose a location. This ensures that all your components like storage, compute and database services are hosted within the same data center, which reduces latency and makes the application more responsive.


Geographically spread Cloud data centers

Data Center
Though you do not choose the specific data center, your app will be deployed at a data center physically located at the place that you have chosen. These data centers typically run thousands of powerful servers that offer a lot of storage and computing power.

A Cloud data center runs hundreds of servers


Server
You never know which physical server is responsible for running your code and application. In most cases, the app that you deploy may be powered by more than one server running within the same data center. You cannot assume that the same physical server will run the next instances of your app. Servers are treated as a commodity resource to host the VMs, and there is no affinity between a VM and a physical server. Each server in the data center is optimally utilized at any given point.

Each server runs the Hypervisor and the VM(s)

Virtual Machine
This is the layer that you will directly interact with. In Platform as a Service (PaaS), you may not realize that you are dealing with a VM, but in reality most Cloud implementations will host your code or app on a VM. VMs are essential to delivering the 4 tenets of the Cloud. Your application runs on a VM that is managed by the Hypervisor running across all the servers. These VMs are moved across servers based on server utilization, and there is no guarantee that the VM you launch will run on the same physical server. A load balancer ensures that your applications are scalable by exploiting the power of all the VMs associated with your application.


Under the hood of a server


Chapter 4
Introducing Amazon Web Services
Overview of Amazon Web Services
Amazon Web Services is one of the earliest and also the most successful implementations of the Public Cloud. Many well-known online properties leverage AWS. Amazon initially started by offering a Cloud based message queuing service called Amazon Simple Queue Service, or SQS. It eventually added services like Mechanical Turk, the Simple Storage Service (S3), the Elastic Compute Cloud (EC2), a CDN service called CloudFront, and a flexible, distributed database service called SimpleDB. In the last few years, the number of services offered by AWS has grown significantly.

Amazon Web Services


Given that Amazon offers the core capabilities to run a complete web application or a Line of Business application, it is obvious that it is Infrastructure as a Service (IaaS). AWS is truly the platform of platforms. You can choose the OS, app server and programming language of your choice. AWS SDKs and APIs are available for most of the popular languages, including Java, .NET, PHP, Python and Ruby.

Let's take a closer look at some of the key service offerings from Amazon Web Services.

Amazon Elastic Compute Cloud


In simple terms, EC2 is like hiring a server running at a remote location. These servers are actually Virtual Machines running in Amazon's powerful data centers. Amazon calls the virtualized server images Amazon Machine Images, or AMIs. Instances launched from an AMI come in different sizes that you can choose from; please refer to http://aws.amazon.com/ec2/#instance for more details on the instance types. There are many pre-configured AMIs that you can choose from. The typical workflow on EC2 is to choose a pre-configured AMI, launch it, customize it by adding additional software and loading an app, and finally save it as your custom AMI. You can launch multiple instances of your AMI and attach them to an IP called an Elastic IP. Because of this dynamic capability of launching multiple instances of the same AMI to scale up and terminating them to scale down, it is called the Elastic Compute Cloud.
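Here is a hedged sketch of that workflow using boto, the Python SDK for AWS; the AMI ID, key pair name and instance type are placeholders, and credentials are assumed to be available in the environment.

import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")   # picks up credentials from the environment

# Launch one instance from a chosen AMI
reservation = conn.run_instances(
    "ami-12345678",             # placeholder AMI ID
    min_count=1, max_count=1,
    key_name="my-keypair",      # SSH key pair used to log in later
    instance_type="m1.small")   # one of the published instance sizes

instance = reservation.instances[0]
print "Launched", instance.id

# Attach an Elastic IP so the instance has a stable public address
# (in practice, wait until the instance is running before associating)
address = conn.allocate_address()
conn.associate_address(instance.id, address.public_ip)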

Amazon Simple Storage Service


Amazon's Simple Storage Service, or S3, is a great way to store data on the Cloud so that it can be accessed by any application with access to the Internet. S3 can store any arbitrary data as objects accompanied by metadata, and these objects can be organized into buckets. Every bucket and object has a set of permissions defined in an Access Control List (ACL). The objects stored in S3 can be anything from a document, a media file or serialized objects to Virtual Machine images. Each object can be up to 5TB in size, while the metadata can be up to 2KB. All the objects can be accessed using simple REST or SOAP calls. This makes S3 an ideal storage solution to centrally store and retrieve data across multiple clients. Some tools let you treat S3 as a virtual file system, providing persistent storage for backup and archiving scenarios.
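Here is a hedged sketch of storing and retrieving an object with boto; the bucket name, key name and metadata are placeholders (bucket names must be globally unique).

from boto.s3.connection import S3Connection
from boto.s3.key import Key

conn = S3Connection("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY")
bucket = conn.create_bucket("my-unique-backup-bucket")   # buckets organize objects

key = Key(bucket)
key.key = "reports/summary.txt"                # object name within the bucket
key.set_metadata("author", "demo")             # optional user-defined metadata
key.set_contents_from_string("Hello from S3")  # upload the object over REST

# Read the object back from any client with Internet access
print bucket.get_key("reports/summary.txt").get_contents_as_string()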

Amazon Simple Queue Service


SQS is a message queue on the Cloud. It supports the programmatic sending of messages via web service applications as a way to communicate over the Internet. Message Oriented Middleware (MOM) is a popular way of ensuring that messages are delivered once and only once. Building that infrastructure on the web by yourself is expensive and hard to maintain. SQS gives you this capability on demand through the pay-by-use model. SQS is accessible through REST and SOAP based APIs.
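A hedged sketch of the send/receive cycle with boto follows; the queue name and message body are arbitrary placeholders.

import boto.sqs
from boto.sqs.message import Message

conn = boto.sqs.connect_to_region("us-east-1")
queue = conn.create_queue("order-processing")   # created on first use

# Producer side: put a message on the queue
msg = Message()
msg.set_body("order-id:42")
queue.write(msg)

# Consumer side: read a message and delete it once processed
received = queue.get_messages(1)
if received:
    print received[0].get_body()
    queue.delete_message(received[0])   # acknowledge so it is not redelivered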

Amazon CloudFront
When your web application targets global users, it makes sense to serve static content through a server that is closer to the user. One of the solutions based on this principle is called a Content Delivery Network (CDN). But this infrastructure of geographically spread servers to serve static content can be very expensive. CloudFront is CDN as a service. Amazon leverages its data center presence across the globe by serving content through its edge locations. CloudFront works in conjunction with S3 by replicating the contents of buckets across multiple edge servers. Amazon charges you only for the data that is served through CloudFront, and there is no upfront payment required.

Amazon SimpleDB

If S3 offers storage for arbitrary binary data, SimpleDB is a flexible way to store name/value pairs on the Cloud. This dramatically reduces the overhead of continuously maintaining a relational database. SimpleDB is accessed through REST and HTTP calls and can be easily consumed by any client that can parse an HTTP response. Many Web 2.0 applications built using AJAX, Flash and Silverlight can easily access data from SimpleDB.
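A hedged sketch with boto shows the name/value style of SimpleDB; the domain, item and attribute names are made up for illustration.

import boto

conn = boto.connect_sdb("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY")
domain = conn.create_domain("customers")

# Items are schemaless bags of attributes; no table definition is needed
domain.put_attributes("cust-001", {"name": "Asha", "city": "Bangalore"})

item = domain.get_item("cust-001")
print item["city"]

# SQL-like queries over attributes
for row in domain.select("select * from `customers` where city = 'Bangalore'"):
    print row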

Amazon Relational Database Service


Amazon RDS offers relational databases on the Cloud. It supports the popular MySQL and Oracle databases. When you are moving a traditional Line of Business application to the Cloud and want to maintain high fidelity with existing systems, you can choose RDS. The advantage of RDS is that you do not install, configure, manage or maintain the DB server; you only consume it and Amazon takes care of the rest. Routine operations like patching the server and backing up the databases are taken care of for you. RDS is priced on the Pay-as-you-go model and there is no upfront investment required. It is accessible through REST and SOAP based APIs.
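Because an RDS instance is just a managed MySQL (or Oracle) server, applications connect to it with their usual drivers; the sketch below uses the MySQLdb module with a placeholder endpoint, database and credentials.

import MySQLdb

conn = MySQLdb.connect(
    host="mydb.abc123xyz.us-east-1.rds.amazonaws.com",  # placeholder RDS endpoint
    user="admin", passwd="secret", db="inventory")

cursor = conn.cursor()
cursor.execute("SELECT item, quantity FROM stock WHERE quantity < %s", (10,))
for item, quantity in cursor.fetchall():
    print item, quantity
conn.close()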

Amazon Virtual Private Cloud


Amazon VPC is a secure way of connecting a company's existing IT
infrastructure to the AWS cloud. Amazon VPC enables enterprises to connect
their existing infrastructure to a set of isolated AWS compute resources via a
Virtual Private Network (VPN) connection, and to extend their existing
management capabilities such as security services, firewalls, and intrusion
detection systems to include their AWS resources.

AWS Scenarios
Scalable Web Application


If you are an aspiring entrepreneur and want to go live with your app without an upfront investment, Amazon is the place to go. By running your web app on Amazon, you can dynamically scale your application on demand and only pay for what you use. This can be the best playground for you to determine your server capacity needs and assess the peak traffic patterns before the commercial launch of your web app.

Line of Business Application


If your enterprise has to open up an internal LOB application to its employees and trading partners, it can extend the application to the Cloud by leveraging an AWS concept called the Virtual Private Cloud (VPC). This achieves Hybrid Cloud capabilities by partially moving an application to the Cloud while still running the sensitive and proprietary parts of the LOB application securely behind the firewall. VPC enables organizations to securely extend themselves to the Cloud.

Data Archival
Data that is not very frequently accessed but may be required due to data
retention policies can be easily archived on Amazon S3. By building a simple,
searchable frontend, this data can be searched and retrieved on-demand.
Moving the data to the Cloud will ensure that it is available from anywhere, at any time.

High-Performance Computing On Demand


For many enterprises, there is an occasional requirement for high-performance computing. Investing in high-end servers is not an optimal solution because they may not be utilized after the task is done. With AWS, companies can virtually hire as much computing power as they need and pay only for what they use. This eliminates the expensive proposition of investing in the infrastructure.

Scalable Media Delivery


A TV channel might want to start delivering its recorded shows to a global audience. Since most of the content is static, it can leverage CDN capabilities. Signing up with services like Akamai and LimeLight can be expensive. If the media content is already stored on S3, it is very easy and cost effective to leverage Amazon's CloudFront to deliver the media content through the geographically spread edge locations.


Chapter 5
Introduction to Microsoft
Windows Azure
Overview of Windows Azure Platform
Microsoft Windows Azure Platform is a Platform as a Service offering from
Microsoft. It was announced in 2008 and became commercially available in 2010. Since
then Microsoft has been constantly improving the platform by adding new
features.

The Windows Azure Platform aims to become the preferred platform for both new-age web developers and enterprises. For web developers building scalable web applications, it offers compute, storage and caching along with CDN services. It also has enterprise-specific features like Active Directory integration, virtual networking and business intelligence capabilities.

Microsoft Windows Azure Platform


Though the Windows Azure Platform is designed for developers building applications on the Microsoft platform, developers building applications in Java and PHP environments can also leverage it. Microsoft is investing in the right set of tools and plug-ins for Eclipse and other popular developer environments.

I will first explain each of the components of Windows Azure Platform and
then walk you through the scenarios for deploying applications on this
platform.

Windows Azure
Windows Azure is the heart and soul of the Azure Platform. It is the OS that runs on each and every server in the data centers across multiple geographic locations. It is interesting to note that the Windows Azure OS is not available as a retail OS; it is a homegrown version exclusively designed to power Microsoft's Cloud infrastructure. Windows Azure abstracts the underlying hardware and creates the illusion that it is just one instance of the OS. Because this OS runs across multiple physical servers, there is a layer on top that coordinates the execution of processes. This layer is called the Fabric. In between the Fabric and the Windows Azure OS, there are Virtual Machines (VMs) that actually run the code and the applications. As a developer, you will only see two services at the top of this stack: 1) Compute and 2) Storage.

Compute
You interact with the Compute service when you deploy your applications on Windows Azure. Applications are expected to run within one of three roles: the Web Role, the Worker Role and the VM Role. The Web Role is meant to host typical ASP.NET web applications or any other CGI web applications. The Worker Role is meant to host long-running processes that do not have any UI. Think of the Web Role as an IIS container and the Worker Role as a Windows Services container. The Web Role and the Worker Role can talk to each other in multiple ways. The Web Role can also host WCF Services that expose an HTTP endpoint. The code within the Worker Role runs independently of the Web Role. Through the Worker Role, you can port either .NET applications or native COM applications to Windows Azure, and it is also how Windows Azure offers support for non-Microsoft environments like PHP, Java and Node.js. The VM Role enables running applications within a custom Windows Server 2008 R2 image. This enables enterprises to easily port applications that have dependencies on 3rd-party components and legacy software.

Storage
When you run an application, you definitely need storage, whether for simple configuration data or more complex binary data. Windows Azure Storage comes in three flavors: 1) Blobs, 2) Tables and 3) Queues.

Blobs can store large binary objects like media files, documents and even serialized objects. Tables offer flexible name/value based storage. Finally, Queues are used to deliver reliable messages between applications, and they are the best mechanism for communication between the Web Role and the Worker Role. The data stored in Azure Storage can be accessed through HTTP and REST calls.

Service Bus
This service enables the seamless integration of services that run within an organization behind a firewall with services that are hosted on the Cloud. It forms a secure bridge between legacy applications and Cloud services. The Service Bus provides secure connectivity between on-premise and Cloud services. It can be used to register, discover and consume services irrespective of their location. Services hosted behind firewalls and NAT can be registered with the Service Bus, and these services can then be invoked by Cloud services. The Service Bus abstracts the physical location of the service by providing a URI that can be invoked by any potential consumer.

Access Control Service


Access Control is a mechanism to secure your Cloud services and
applications. It provides a declarative way of defining rules and claims
through which callers can gain access to Cloud services. Access Control
rules can be easily and flexibly configured to cover a variety of security
needs and different identity-management infrastructures. Access Control
enables enterprises to integrate their on-premise security mechanisms like
Active Directory with Cloud based authentication. Developers can
program Access Control through simple WCF based services.

Caching
Caching provides an in-memory caching service for applications hosted on
Windows Azure. This avoids the disk I/O and enables applications to quickly
fetch data from a high-speed cache. Cache can store multiple types of data
including XML, binary data, rows or serialized CLR objects. Web applications that need to access read-only data frequently can use the cache for better performance. ASP.NET developers can move session state to the cache to avoid a single point of failure.

SQL Azure
SQL Azure is Microsoft SQL Server on the Cloud. Unlike Azure Storage, which is meant for unstructured data, SQL Azure is a full-blown relational database engine. It is based on the same DB engine as MS SQL Server and can be queried with T-SQL. Because of its fidelity with MS SQL, on-premise applications can quickly start consuming this service. Developers can talk to SQL Azure using the ADO.NET or ODBC APIs, and PHP developers can consume it through the native PHP API. Through Microsoft SQL Azure Data Sync, data can be easily synchronized between an on-premise SQL Server and SQL Azure. This is a very powerful feature for building hubs of data on the Cloud that always stay in sync with your local databases. For all practical purposes, SQL Azure can be treated exactly like a DB server running in your data center, without the overhead of your teams maintaining and managing it. Because Microsoft is responsible for the installation, maintenance and availability of the DB service, businesses can focus solely on manipulating and accessing data as a service. With the Pay-as-you-go approach, there is no upfront investment and you will only pay for what you use.
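As a hedged sketch, a Python client could reach SQL Azure through ODBC as shown below; the server name, database, credentials and the exact ODBC driver string are placeholders that depend on your subscription and installed driver.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=tcp:myserver.database.windows.net;"
    "DATABASE=inventorydb;"
    "UID=admin@myserver;PWD=secret;Encrypt=yes;")

cursor = conn.cursor()
cursor.execute("SELECT TOP 5 ProductName, UnitsInStock FROM Products")  # plain T-SQL
for name, units in cursor.fetchall():
    print name, units
conn.close()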

Scenarios for Microsoft Windows Azure Platform

Scalable Web Application


Because the Windows Azure Platform is based on the familiar .NET platform, ASP.NET developers can design and develop web applications on fairly inexpensive machines and then deploy them on Azure. This empowers developers to instantly scale their web apps without worrying about the cost and complexity of infrastructure. Even PHP developers can enjoy the benefits of the elasticity and pay-by-use attributes of the platform.

Compute Intensive Application


The Windows Azure Platform can be used to run processing-intensive applications that occasionally need high-end computing resources. By leveraging the Worker Role, developers can move code that can run across multiple instances in parallel. The data generated by either the Web Role or on-premise applications can be fed to the Worker Roles through Azure Storage.

Centralized Data Access


When data has to be made accessible to a variety of applications running
across the browser, desktop and mobile, it makes sense to store that in a
central location. Azure Cloud-based storage can be a great solution for
persisting and maintaining data that can be easily consumed by desktop
applications, Silverlight, Flash and AJAX based web applications or mobile
applications. With the Pay-as-you-grow model, there is no upfront
investment and you will only pay for what you use.

Hybrid Applications (Cloud + On-Premise)


There may be a requirement to extend a part of an application to the Cloud or to build a Cloud façade for an existing application. By utilizing services like the Service Bus and Access Control, on-premise applications can be seamlessly and securely extended to the Cloud. The Service Bus and a technology called Windows Azure Connect enable the Hybrid Cloud scenario.

Cloud Based Data Hub


Through SQL Azure, companies can securely build data hubs that are open to trading partners and mobile employees. For example, the inventory of a manufacturing company can be securely hosted on the Cloud and kept always in sync with the local inventory database. The Cloud based DB can then be opened up for B2B partners to directly query and place orders. SQL Azure and SQL Azure Data Sync enable such interesting scenarios.


Chapter 6
Introducing Google App Engine
Google App Engine is a platform to deploy and run web applications on
Google's infrastructure. It comes with a dynamic web server with full
support for common web technologies. It offers a transactional data store for
persisting data. Developers can integrate their web application with Google
Accounts through the APIs. The biggest advantage of running web
applications on GAE is the scalability that it offers. Your web application will
be as scalable as some of the popular Google services like search.

Google App Engine currently supports Python, Java and a brand new
language from Google called Go. Java developers will be able to deploy and
run JSPs and Servlets, while Python developers can use the standard library.
Since GAE runs in a sandbox, not all operations are possible. For example,
opening and listening on sockets is disabled. The applications running on
GAE live in a sandbox that provides multi-tenancy and isolation across
applications.

Components of Google App Engine


The next logical layer is a set of APIs and Services to support the web
application developers. This layer has a persistent Datastore, User
Authentication services, Task Scheduler and Task Queue, URL Fetch, a Mail
component, MemCache and Image Manipulation. All these services are
exposed through native API bindings. For example, Java developers will be
able to use JDO/JPA to talk to the datastore.


Let's take a closer look at some of the services provided by GAE.

Java Runtime
GAE is based on the Java 6 VM and a Servlet 2.5 container. The datastore can be accessed through the JDO/JPA APIs. It supports JSR 107 for the MemCache API, and mail can be accessed through the javax.mail API. The java.net.URLConnection API provides access to the URL Fetch service. Apart from the core Java language, other JVM-based languages like JRuby and Scala are also supported.

Python Runtime
GAE comes with a rich set of API and tools for developing web applications
based on Python. It supports Python 2.5.2, and Python 3 is being considered for future releases. You can also take advantage of a wide variety of
mature libraries and frameworks for Python web application development,
such as Django. The Python environment provides rich Python APIs for the
datastore, Google Accounts, URL fetch, and email services. App Engine also
provides a simple Python web application framework called webapp to make
it easy to start building applications.
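The canonical minimal handler for the webapp framework looks roughly like the sketch below, based on the Python SDK of that era.

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from Google App Engine!')

application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()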

Datastore
App Engine comes with a very powerful data store that can scale dynamically. It also features a query engine and support for transactions. The datastore is different from traditional relational databases. The objects stored in the datastore are called Entities, and they are schemaless. These entities have a set of properties that can be queried using a SQL-like grammar called GQL, the Google Query Language. The datastore is strongly consistent and supports optimistic concurrency control.
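A hedged sketch using the Python db API illustrates schemaless entities and a GQL query; the model and property names are made up for illustration.

from google.appengine.ext import db

class Greeting(db.Model):
    author = db.StringProperty()
    content = db.StringProperty(multiline=True)
    date = db.DateTimeProperty(auto_now_add=True)

# Store an entity; no schema or table definition is required
Greeting(author='janakiram', content='Hello Datastore').put()

# Query it back with GQL, the SQL-like grammar
recent = db.GqlQuery(
    "SELECT * FROM Greeting WHERE author = :1 ORDER BY date DESC LIMIT 10",
    'janakiram')
for greeting in recent:
    print greeting.content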

User Authentication
One of the advantages of using GAE is its integration with Google Accounts.
This empowers developers to leverage Google's secure authentication
engine for their custom applications. While a user is signed in to the
application, the app can access the user's email address, as well as a unique
user ID. The app can also detect whether the current user is an
administrator, making it easy to implement admin-only areas of the app.

URL Fetch
This service fetches external web pages using the same high-bandwidth infrastructure that many other Google applications use.

Mail
This will enable developers to programmatically send email messages from
custom web applications.

MemCache
The Memcache service provides applications with a high-performance, in-memory key-value cache that is accessible by multiple instances of the application. Memcache is useful for data that does not need the persistence and transactional features of the datastore, such as temporary data or data copied from the datastore to the cache for high-speed access.
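A hedged sketch of the cache-aside pattern with the Memcache API follows; the cache key and the expensive_datastore_query() helper are hypothetical.

from google.appengine.api import memcache

def get_product_catalog():
    catalog = memcache.get('product-catalog')
    if catalog is None:
        catalog = expensive_datastore_query()                # hypothetical slow lookup
        memcache.set('product-catalog', catalog, time=300)   # cache for five minutes
    return catalog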

Image Manipulation
Through this service, developers can manipulate images. With this API, you
can resize, crop, rotate and flip images in JPEG and PNG formats.


Scheduled Tasks
Scheduled Tasks are also called cron jobs. Other than running interactive
web applications, GAE can also schedule tasks that can be invoked at a
specific time.

To get started on Google App Engine, download the Eclipse plug-in and the
SDK.

The SDK emulates the GAE environment locally and enables you to design,
develop and test applications on your machine before finally deploying on
GAE.


Chapter 7
Introducing OpenStack
In July 2010, Rackspace, one of the IaaS players, partnered with NASA to launch an open source cloud project called OpenStack. The key objective of OpenStack is to create a unified platform that powers both the Private Cloud and the Public Cloud. This makes it easy to move workloads seamlessly between the Private Cloud and the Public Cloud. Enterprises can deploy OpenStack within their data centers, and service providers can leverage it to set up a Public Cloud. Though another open source project called Eucalyptus was started with the same mission, it did not garner as much attention as OpenStack did. OpenStack has the credibility of NASA and the track record of Rackspace. This attracted over 20 companies, including big names like Intel, Dell and Citrix, to join the initiative. Since its launch, the list has grown to over 120 companies. One of the recent wins for OpenStack came from HP, which recently announced HP Cloud, completely built on OpenStack. It is definitely an indication that the industry is open to embracing OpenStack. In October 2011, Rackspace announced plans to hand over responsibility for the project to the OpenStack Foundation, with the intent of making it a true industry standard for the Cloud. Internap, one of the Public Cloud providers, introduced the first commercial implementation of OpenStack.

Unlike the commercial implementations, OpenStack is Hypervisor agnostic. Instead of focusing on building a new Hypervisor, it is designed to run on popular Hypervisors including Citrix XenServer, Microsoft Hyper-V, Xen, KVM, VMware ESX and others. This enables enterprises and service providers to set up a cloud based on their existing Hypervisors, thus protecting their investments.

So, what are the key components of OpenStack? The basic requirement for any IaaS environment is compute and
storage services. In the case of Amazon Web Services, EC2 and S3 deliver the
compute and storage services. Beyond these two core services, IaaS also
needs a service to catalog and manage the virtual machine images. Of
course, there are many other key components that make the offering more
complete and mature, but compute, storage and a VM catalog manager form the core of an IaaS platform. OpenStack's compute service is code-named Nova, the storage service is code-named Swift, and the catalog manager is code-named Glance. These three form the building blocks of OpenStack. Let's take a closer look at these services.

OpenStack Compute (Nova)


Nova is responsible for provisioning VMs, managing the networking of VMs and providing redundancy for the compute layer. It exposes an API and provides a control panel to manage the environment. Nova is designed to run on all major Hypervisors. It also runs a fabric to orchestrate and coordinate VM provisioning and de-provisioning across multiple physical machines. Nova is the foundation for large cloud deployments that span data centers and locations. It closely resembles the Amazon EC2 environment.

OpenStack Storage (Swift)


Swift is the object store of OpenStack and is based on Rackspace Cloud Files. It is designed to handle petabytes of storage using standard commodity hardware. Swift is massively scalable and provides redundant object storage: objects are written to multiple physical storage devices to ensure data integrity and redundancy. The structure of Swift is comparable to the Amazon S3 environment. The APIs are compatible with the Amazon S3 API, and just by changing the endpoint, existing Amazon S3 based applications can talk to Swift.
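As a hedged illustration of that claim, an S3 client such as boto can be pointed at a Swift endpoint; the host, port and credentials below are placeholders, and the Swift deployment is assumed to have the S3 compatibility middleware enabled.

from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection(
    aws_access_key_id="SWIFT_ACCESS_KEY",
    aws_secret_access_key="SWIFT_SECRET_KEY",
    host="swift.example.internal",            # private OpenStack endpoint (placeholder)
    port=8080,
    is_secure=False,
    calling_format=OrdinaryCallingFormat())   # path-style URLs instead of virtual hosts

bucket = conn.create_bucket("backups")        # same bucket/object calls as with Amazon S3
print [b.name for b in conn.get_all_buckets()]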

OpenStack Image Service (Glance)


Glance provides discovery, registration and delivery services for virtual disk
images. It maintains the metadata of the VM images that can be queried to
get a catalog of existing images. It can also maintain the details of images
stored on Amazon S3. Glance supports many formats of virtual disk images
including AMI (Amazon EC2), VHD (Microsoft Hyper-V), VDI (Oracle
VirtualBox) and VMDK (VMware).


Chapter 8
Introduction to Cloud Foundry
Cloud Foundry originally started as a platform to deploy Java Spring applications on Amazon Web Services. In April 2011, VMware acquired Cloud Foundry and made it into an open source, multi-language and multi-framework PaaS offering. Cloud Foundry supports multiple languages and multiple runtimes such as Java, Spring, Ruby, Scala and Node.js. VMware calls this an Open PaaS, as Cloud Foundry can run on anything from a notebook PC to a Public Cloud.

Cloud Foundry as Open PaaS

Cloud Foundry has three dimensions to the platform. The first is the choice of frameworks, the second is the choice of application services, and the final dimension is the choice of deployment.

Framework Choice


Cloud Foundry supports Spring for Java, Rails and Sinatra for Ruby, Node.js and JVM languages and frameworks like Groovy, Grails and Scala. It also supports the Microsoft .NET Framework and became the first non-Microsoft platform to support .NET. This makes Cloud Foundry one of the first polyglot PaaS offerings.

Application Choice
Cloud-era developers need the support of a reliable messaging system and NoSQL databases along with relational databases. Cloud Foundry includes support for RabbitMQ for messaging, MongoDB and Redis for NoSQL, and MySQL as the relational database. The list of supported services is growing, and PostgreSQL support was recently added to the platform.

Deployment Choice
Cloud Foundry can be deployed on notebook PCs through Micro Cloud Foundry, a complete version of Cloud Foundry designed to run in a virtual machine on a developer's PC or Mac. It can also be deployed on a Private Cloud running within an enterprise, or on a Public Cloud like Amazon Web Services. This makes Cloud Foundry an extremely flexible PaaS.

Deploying applications on Cloud Foundry


Developers can choose to deploy applications either through the SpringSource Tool Suite (STS) or through the command-line Ruby gem called VMC.


Logical view of Cloud Foundry

Messaging is the nervous system of Cloud Foundry. It is the central communication system that enables all the components to talk to each other.

Routers handle all the HTTP traffic targeting the applications. They route URLs to applications and also load-balance the traffic across instances.

Cloud Controllers are the key components that handle packaging and staging
of the applications. They bind various services to applications. The external
REST API is exposed through them.

The Health Manager monitors the health of all running applications. In case an application fails, it informs the Cloud Controllers to take action.

DEA stands for Droplet Execution Agent. Every unit of executable code is packaged as a Droplet in Cloud Foundry. Droplets abstract the underlying code and expose a generic executable unit. The DEA is responsible for executing the code within each Droplet, providing the OS and runtime environment.

Summary
Cloud Foundry is quickly gaining ground as an Open PaaS. Many vendors are
announcing their support and bringing newer platforms and services into the
stack. It may provide tough competition to commercial PaaS vendors like
Microsoft and Google.
