

Lecture I

a. Introduction
b. Cloud Architecture
c. Big Data and Cloud Technologies
d. The Cloud and the Fog
e. Thriving in the Cloud
f. ERP and the Cloud
g. Risks of Adopting Cloud Computing

Lecture II

a. Mobile Cloud
b. Cloud Security Issues
c. Mobile Cloud Computing - Security
d. Security Analysis in the Migration to Cloud Environments

Lecture I

I) a. Introduction

Computing is being transformed to a model consisting of services that are commoditized
and delivered in a manner similar to traditional utilities such as water, electricity, gas, and
telephony. In such a model, users access services based on their requirements without
regard to where the services are hosted or how they are delivered. Several computing
paradigms have promised to deliver this utility computing vision and these include cluster
computing, Grid computing, and more recently Cloud computing. The latter term denotes
the infrastructure as a Cloud from which businesses and users are able to access
applications from anywhere in the world on demand. Thus, the computing world is rapidly
transforming towards developing software for millions to consume as a service, rather
than to run on their individual computers. At present, it is common to access content
across the Internet independently without reference to the underlying hosting
infrastructure. This infrastructure consists of data centers that are monitored and
maintained around the clock by content providers.

Cloud computing is an extension of this paradigm wherein the capabilities of business
applications are exposed as sophisticated services that can be accessed over a network.
Cloud service providers are incentivized by the profits to be made by charging consumers
for accessing these services. Consumers, such as enterprises, are attracted by the
opportunity for reducing or eliminating costs associated with in-house provision of
these services. However, since Cloud applications may be crucial to the core business
operations of the consumers, it is essential that the consumers have guarantees from
providers on service delivery. Typically, these are provided through Service Level
Agreements (SLAs) brokered between the providers and consumers. Providers such as
Amazon, Google, Salesforce, IBM, Microsoft, and Sun Microsystems have begun to
establish new data centers for hosting Cloud computing applications in various locations
around the world to provide redundancy and ensure reliability in case of site failures.
Since user requirements for Cloud services are varied, service providers have to ensure
that they can be flexible in their service delivery while keeping the users isolated from the
underlying infrastructure. Recent advances in microprocessor technology and software
have led to the increasing ability of commodity hardware to run applications within
Virtual Machines (VMs) efficiently. VMs allow both the isolation of applications from the
underlying hardware and other VMs, and the customization of the platform to suit the
needs of the end-user. Providers can expose applications running within VMs, or provide
access to VMs themselves as a service (e.g. Amazon Elastic Compute Cloud) thereby
allowing consumers to install their own applications. While convenient, the use of VMs
gives rise to further challenges such as the intelligent allocation of physical resources for
managing competing resource demands of the users. In addition, enterprise service
consumers with global operations require fast response times, and thus benefit from
distributing workload requests to multiple Clouds in various locations at the same time.
This creates the need for an environment for dynamically interconnecting and
provisioning Clouds from multiple domains within and across enterprises. There are
many challenges involved in creating such Clouds and Cloud interconnections.

Fig. I.1. Cloud computing is a term that describes the use of computing resources
delivered over the Internet. Cloud computing is characterized by on-demand service,
elasticity, and payment by usage.

I) a1. Emergence of the Cloud Paradigm

Cloud computing shortens the time from planning an application architecture to actual
deployment. Cloud computing incorporates virtualization, on-demand deployment,
Internet delivery of services, and open source software. From one perspective, Cloud
computing is nothing new because it uses approaches, concepts, and best practices that
have already been established. From another perspective, everything is new because Cloud
computing changes how we invent, develop, deploy, scale, update, maintain, and pay for
applications and the infrastructure on which they run. In this lecture, we examine the
trends and how they have become core to what Cloud computing is all about.

I) a2. Virtual machines as the standard deployment object

Over the last several years, virtual machines have become a standard deployment object.
Virtualization further enhances flexibility because it abstracts the hardware to the point
where software stacks can be deployed and redeployed without being tied to a specific
physical server. Virtualization enables a dynamic datacenter where servers provide a pool
of resources that are harnessed as needed, and where the relationship of applications to
compute, storage, and network resources changes dynamically in order to meet both
workload and business demands. With application deployment decoupled from server
deployment, applications can be deployed and scaled rapidly, without having to first
procure physical servers. Virtual machines have become the prevalent abstraction and
unit of deployment because they are the least-common denominator interface between
service providers and developers. Using virtual machines as deployment objects is
sufficient for 80 percent of usage, and it helps to satisfy the need to rapidly deploy and
scale applications. Virtual appliances, virtual machines that include software that is
partially or fully configured to perform a specific task such as a Web or database server,
further enhance the ability to create and deploy applications rapidly. The combination of
virtual machines and appliances as standard deployment objects is one of the key features
of Cloud computing.

Table I.1.

The choice of the right deployment model is influenced by a number of factors including
cost, manageability, integration, security, compliance and quality of service. This table
summarizes how each deployment model compares on the influencing attributes.

Compute Clouds are usually complemented by storage Clouds that provide virtualized
storage through APIs that facilitate storing virtual machine images, source files for
components such as Web servers, application state data, and general business data.

The on-demand, self-service, pay-by-use model
The on-demand, self-service, pay-by-use nature of Cloud computing is also an extension
of established trends. From an enterprise perspective, the on-demand nature of Cloud
computing helps to support the performance and capacity aspects of service-level agreements (SLAs).

The self-service nature of Cloud computing allows organizations to create elastic
environments that expand and contract based on the workload and target performance
parameters. And the pay-by-use nature of Cloud computing may take the form of
equipment leases that guarantee a minimum level of service from a Cloud provider.
Virtualization is a key feature of this model. IT organizations have understood for years
that virtualization allows them to quickly and easily create copies of existing
environments sometimes involving multiple virtual machines to support test,
development, and staging activities. The cost of these environments is minimal because
they can coexist on the same servers as production environments and use few
resources. Likewise, new applications can be developed and deployed in new virtual
machines on existing servers, opened up for use on the Internet, and scaled if the
application is successful in the marketplace. This lightweight deployment model has
already led to a Darwinistic approach to business development where beta versions of
software are made public and the market decides which applications deserve to be scaled
and developed further or quietly retired. Cloud computing extends this trend through
automation. Instead of negotiating with an IT organization for resources on which to
deploy an application, a compute Cloud is a self-service proposition where a credit card
can purchase compute cycles, and a Web interface or API is used to create virtual
machines and establish network relationships between them. Instead of requiring a
long-term contract for services with an IT organization or a service provider, Clouds
work on a pay-by-use, or pay-by-the-sip, model where an application may exist to run a
job for a few minutes or hours, or it may exist to provide services to customers on a
long-term basis.
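The self-service flow described above can be sketched as a small simulation. The CloudProvider class, its method names, and the flat hourly rate are hypothetical illustrations, not any real provider's API:

```python
import uuid

class CloudProvider:
    """Toy model of a self-service compute Cloud: no negotiation with an
    IT organization, just an API call that returns a running VM."""

    def __init__(self, rate_per_hour=0.10):
        self.rate_per_hour = rate_per_hour  # assumed flat hourly price
        self.vms = {}

    def create_vm(self, image):
        """Provision a VM from a library image and return its identifier."""
        vm_id = str(uuid.uuid4())
        self.vms[vm_id] = {"image": image, "hours": 0}
        return vm_id

    def run(self, vm_id, hours):
        """Accumulate metered usage for a running VM."""
        self.vms[vm_id]["hours"] += hours

    def bill(self):
        """Pay-by-use: charge only for hours actually consumed."""
        return sum(vm["hours"] * self.rate_per_hour for vm in self.vms.values())

cloud = CloudProvider()
web = cloud.create_vm("web-server-appliance")
db = cloud.create_vm("database-appliance")
cloud.run(web, 3)  # a short-lived job: three hours of Web serving
cloud.run(db, 3)
print(round(cloud.bill(), 2))  # 6 VM-hours at $0.10
```

A real provider would of course charge the credit card behind the scenes; the point is that the entire lifecycle is driven through API calls rather than a procurement process.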

Compute Clouds are built as if applications are temporary, and billing is based on resource
consumption: CPU hours used, volumes of data moved, or gigabytes of data stored. The
ability to use and pay for only the resources used shifts the risk of how much
infrastructure to purchase from the organization developing the application to the Cloud
provider. It also shifts the responsibility for architectural decisions from application
architects to developers. This shift can increase risk; enterprises have processes in
place for a reason, and the expertise of system, network, and storage architects needs
to factor into Cloud computing designs.

The infrastructure is programmable
This shift of architectural responsibility has significant consequences.
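Consumption-based billing is simple arithmetic over the metered quantities named above. The per-unit rates below are made-up illustrations, not any provider's actual prices:

```python
def monthly_charge(cpu_hours, gb_transferred, gb_stored,
                   cpu_rate=0.10, transfer_rate=0.12, storage_rate=0.15):
    """Usage-based bill: CPU hours used, data moved, and data stored.
    All three rates are hypothetical per-unit prices."""
    return (cpu_hours * cpu_rate
            + gb_transferred * transfer_rate
            + gb_stored * storage_rate)

# A job that ran 200 CPU-hours, moved 50 GB, and kept 100 GB stored:
print(round(monthly_charge(200, 50, 100), 2))  # 41.0
```

Because the bill tracks actual consumption, an application that runs for an hour and is torn down costs almost nothing, which is precisely how the risk of over-purchasing infrastructure shifts to the provider.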

In the past, architects would determine how the various components of an application

would be laid out onto a set of servers, how they would be interconnected, secured,
managed, and scaled. Now, a developer can use a Cloud provider's API to specify not
only an application's initial composition onto virtual machines, but also how it scales and
evolves to accommodate workload changes. Consider this analogy: historically, a
developer writing software in the Java programming language determines when
it is appropriate to create new threads to allow multiple activities to progress in parallel.
Today, a developer can discover and attach to a service with the same ease, allowing an
application to scale to the point where it might engage thousands of virtual machines in
order to accommodate a huge spike in demand. The ability to program an application
architecture dynamically puts enormous power in the hands of developers, with a
commensurate amount of responsibility. To use Cloud computing most effectively, a
developer must also be an architect, and that architect needs to be able to create a
self-monitoring and self-expanding application.

The developer/architect needs to understand when it is appropriate to create a new
thread versus a new virtual machine, along with the architectural patterns for how they
are interconnected. When this power is well understood and harnessed, the results can
be spectacular. A story that is already becoming legendary is Animoto's mashup tool
that creates a video from a set of images and music. The company's application scaled
from 50 to 3,500 servers in just three days, due in part to an architecture that allowed it
to scale easily. For this to work, the application had to be built to scale horizontally, keep
limited state, and manage its own deployment through Cloud APIs. For every success
story such as this, there will likely be a similar story where an application is not capable
of self-scaling and fails to meet consumer demand. The importance of this shift from
developer to developer/architect cannot be overstated. Consider whether your enterprise
datacenter could scale an application this rapidly to accommodate such a rapidly growing
workload, and whether Cloud computing could augment your current capabilities.
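The thread-versus-VM decision can be made concrete. Within one machine, a developer parallelizes with threads; in a Cloud, the analogous operation is a provider API call that adds whole VMs. A sketch of the in-process half, with the work function invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def render_segment(segment):
    """Stand-in for one unit of work, e.g. rendering one video segment."""
    return f"rendered:{segment}"

segments = ["s1", "s2", "s3", "s4"]

# In-process scaling: threads share one machine's CPU and memory.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(render_segment, segments))

print(results)  # ['rendered:s1', 'rendered:s2', 'rendered:s3', 'rendered:s4']

# The Cloud-level analogue replaces ThreadPoolExecutor with a provider
# API call that launches additional VMs as the segment queue grows; the
# developer/architect chooses which level of scaling fits the workload.
```

Threads are cheap but bounded by one machine; VMs are slower to start but scale past any single server, which is the trade-off the developer/architect must weigh.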

Fig. I.2. Four service models. According to NIST there are three service models:
infrastructure as a service (IaaS), platform as a service (PaaS), and software as a
service (SaaS). To get a better understanding of what each service model comprises,
the image depicts the layers of which a typical IT solution consists. An infrastructure as
a service solution includes vendor-managed network, storage, server, and virtualization
layers for a client to run their application and data on. Platform as a service builds on
top of infrastructure as a service, adding vendor-managed middleware such as web,
application, and database software. Software as a service builds on top of that in turn,
most of the time adding applications that implement specific user functionality such as
email, CRM, or HRM. IBM and other major IT and analyst firms have added a fourth
service model, namely business process as a service (BPaaS). BPaaS, as the term
implies, offers an entire horizontal or vertical business process and builds on top of any
of the previously depicted Cloud service models.

I) a3. Applications are composed and are built to be composable

Another consequence of the self-service, pay-by-use model is that applications are
composed by assembling and configuring appliances and open-source software as much as
they are programmed. Applications and architectures that can be refactored in order to
make the most use of standard components are those that will be the most successful in
leveraging the benefits of Cloud computing. Likewise, application components should be
designed to be composable by building them so they can be consumed easily. This
requires having simple, clear functions, and well-documented APIs. Building large,
monolithic applications is a thing of the past as the library of existing tools that can be

used directly or tailored for a specific use becomes ever larger. For example, tools such as
Hadoop, an open-source MapReduce implementation, can be used in a wide range of
contexts in which a problem and its data can be refactored so that many parts of it can
execute in parallel. When The New York Times wished to convert 11 million articles and
images in its archive to PDF format, its internal IT organization estimated that the job
would take seven weeks. Instead, one developer using 100 Amazon EC2 instances
running Hadoop completed the job in 24 hours for less than $300. (This did not include
the time required to upload the data or the cost of the storage.) Even large
corporations can use Cloud computing in ways that solve significant problems in less time
and at a lower cost than with traditional enterprise computing.
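The MapReduce pattern that Hadoop implements can be sketched in a few lines: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step combines each group. This toy word count only mimics the structure; Hadoop adds the distribution, fault tolerance, and storage that make it work at scale:

```python
from collections import defaultdict

def map_phase(documents):
    """Emit (word, 1) for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Group emitted values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Combine each key's values; for word count, summing the 1s."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cloud", "the fog and the cloud"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 3
```

Because each map call and each reduce call is independent, the framework is free to run them on as many machines as the Cloud will rent, which is what made the overnight PDF conversion possible.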

As an example of how the combination of virtualization and self-service facilitates
application deployment, consider a two-tier Web application deployment into a Cloud:

1. A developer might choose a load balancer, Web server, and database server appliances
from a library of preconfigured virtual machine images.

2. The developer would configure each component to make a custom image. The load
balancer would be configured, the Web server populated with its static content by
uploading it to the storage Cloud, and the database server appliances populated with
dynamic content for the site.

3. The developer layers custom code into the new architecture, making the components
meet specific application requirements.

4. The developer chooses a pattern that takes the images for each layer and deploys them,
handling networking, security, and scalability issues.

5. The secure, high-availability Web application is up and running. When the application
needs to be updated, the virtual machine images can be updated, versioned, copied across
the development-test-production chain, and the entire infrastructure redeployed.
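The five steps above can be condensed into a declarative sketch. The image names and the deploy function are hypothetical stand-ins for a provider's image library and deployment API, not a real interface:

```python
# Hypothetical appliance images chosen from a provider's library (step 1),
# assumed already customized with content and custom code (steps 2-3).
architecture = {
    "load_balancer": {"image": "lb-appliance-1.2", "count": 1},
    "web":           {"image": "web-appliance-2.0", "count": 2},
    "database":      {"image": "db-appliance-5.1", "count": 1},
}

def deploy(architecture):
    """Step 4: turn each layer's image into running instances, in tier
    order (load balancer -> web -> database). A real pattern would also
    wire up networking, security, and scaling rules."""
    running = []
    for tier, spec in architecture.items():
        for n in range(spec["count"]):
            running.append(f"{tier}-{n}:{spec['image']}")
    return running

instances = deploy(architecture)
print(len(instances))  # 4 instances: 1 LB + 2 web + 1 DB
```

Updating the application then means editing the image versions in the dictionary and calling deploy again, which is exactly the redeploy-everything posture described above.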

Cloud computing assumes that everything is temporary, and it is just as easy to redeploy
an entire application as it is to manually patch a set of individual virtual machines. In
this example, the abstract nature of virtual machine images supports a composition-based

approach to application development. By refactoring the problem, a standard set of

components can be used to quickly deploy an application. With this model, enterprise
business needs can be met quickly, without the need for the time-consuming, manual
purchase, installation, cabling, and configuration of servers, storage, and network
infrastructure.

Services are delivered over the network
It almost goes without saying that
Cloud computing extends the existing trend of making services available over the
network. Virtually every business organization has recognized the value of Web-based
interfaces to their applications, whether they are made available to customers over the
Internet, or whether they are internal applications that are made available to authorized
employees, partners, suppliers, and consultants.

The advantage of Internet-based service delivery, of course, is that applications can be
made available anywhere, and at any time. While enterprises are well aware of the ability
to secure communications using Secure Socket Layer (SSL) encryption along with strong
authentication, bootstrapping trust in a Cloud computing environment requires carefully
considering the differences between enterprise computing and Cloud computing. When
properly architected, Internet service delivery can provide the flexibility and security
required by enterprises of all sizes.

The role of open source software
Open source software plays an important role in Cloud computing by allowing its basic
software elements, virtual machine images and appliances, to be created from easily
accessible components. This has an amplifying effect:

Developers, for example, can create a database appliance by layering MySQL software
onto an instance of the OpenSolaris Operating System and performing customizations.
Appliances such as these enable Cloud computing applications to be created, deployed,

and dynamically scaled on demand. Consider, for example, how open source software
allows an application such as that created by Animoto to scale to 3,500 instances in a
matter of days. Appliances can be created by layering open source software into a virtual
machine image and performing customizations that simplify their deployment. In this
example, a database appliance is created by layering MySQL software on top of the
OpenSolaris Operating System.

The ease with which open source components can be used to assemble large applications
generates more open source components. This, in turn, makes the role of open source
software even more important. The need for a MapReduce implementation that could
run in a Cloud computing environment, for example, was one of the factors stimulating
Hadoop's development. Now that the tool has been created, it is being used to further
raise the level at which developers program Cloud computing applications.

I) b. Cloud Architecture

There are many considerations for Cloud computing architects to make when moving
from a standard enterprise application deployment model to one based on Cloud
computing. There are public and private Clouds that offer complementary benefits, there
are three basic service models to consider, and there is the value of open APIs versus
proprietary ones.

Public, private, and hybrid Clouds
IT organizations can choose to deploy applications on public, private, or hybrid Clouds,
each of which has its trade-offs. The terms public, private, and hybrid do not dictate
location. While public Clouds are typically "out there" on the Internet and private Clouds
are typically located on premises, a private Cloud might be hosted at a colocation facility
as well. Companies may weigh a number of considerations in choosing which Cloud
computing model to employ, and they might use more than one model to solve different
problems. An application needed on a temporary basis might be best suited for
deployment in a public Cloud, because doing so avoids the need to purchase additional
equipment to solve a temporary need. Likewise, a permanent application, or one that has
specific requirements on quality of service or location of data, might best be deployed in
a private or hybrid Cloud.

I) b1. Public Clouds

Public Clouds are run by third parties, and applications from different customers are likely
to be mixed together on the Cloud's servers, storage systems, and networks. Public
Clouds are most often hosted away from customer premises, and they provide a way to
reduce customer risk and cost by providing a flexible, even temporary extension to
enterprise infrastructure. If a public Cloud is implemented with performance, security, and
data locality in mind, the existence of other applications running in the Cloud should be
transparent to both Cloud architects and end users. Indeed, one of the benefits of public
Clouds is that they can be much larger than a company's private Cloud might be, offering
the ability to scale up and down on demand, and shifting infrastructure risks from the
enterprise to the Cloud provider, if even just temporarily. Portions of a public Cloud can
be carved out for the exclusive use of a single client, creating a virtual private datacenter.
Rather than being limited to deploying virtual machine images in a public Cloud, a virtual
private datacenter gives customers greater visibility into its infrastructure. Now customers
can manipulate not just virtual machine images, but also servers, storage systems, network
devices, and network topology. Creating a virtual private datacenter with all components
located in the same facility helps to lessen the issue of data locality because bandwidth is
abundant and typically free when connecting resources within the same facility.

I) b2. Private Clouds

Private Clouds are built for the exclusive use of one client, providing the utmost control
over data, security, and quality of service. The company owns the infrastructure and has
control over how applications are deployed on it. Private Clouds may be deployed in an
enterprise datacenter, and they also may be deployed at a colocation facility. Private
Clouds can be built and managed by a company's own IT organization or by a Cloud
provider. In this "hosted private" model, a company such as Sun can install, configure,
and operate the infrastructure to support a private Cloud within a company's enterprise
datacenter. This model gives companies a high level of control over the use of Cloud
resources while bringing in the expertise needed to establish and operate the environment.

I) b3. Hybrid Clouds

Hybrid Clouds combine both public and private Cloud models. They can help to provide
on-demand, externally provisioned scale. The ability to augment a private Cloud with the
resources of a public Cloud can be used to maintain service levels in the face of rapid
workload fluctuations. This is most often seen with the use of storage Clouds to support
Web 2.0 applications. A hybrid Cloud also can be used to handle planned workload
spikes: in an approach sometimes called "surge computing," periodic tasks that can be
deployed easily on a public Cloud are run there. Hybrid Clouds introduce the complexity
of determining how to distribute applications across both a public and private Cloud.
Among the issues that need to be considered is the relationship between data and
processing resources. If the data is small, or the application is stateless, a hybrid Cloud can
be much more successful than if large amounts of data must be transferred into a public
Cloud for a small amount of processing.
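The data-versus-processing trade-off can be estimated with back-of-the-envelope arithmetic. The link speed and job sizes below are illustrative assumptions, not measurements:

```python
def transfer_hours(data_gb, link_mbps=100):
    """Hours needed to move data into the public Cloud over the link."""
    megabits = data_gb * 8 * 1000  # GB -> gigabits -> megabits
    return megabits / link_mbps / 3600

# A stateless job moving 1 GB: transfer takes about 80 seconds, so
# surging into a public Cloud is attractive.
small = transfer_hours(1)

# A data-heavy job moving 10 TB: transfer alone takes over 200 hours
# (more than a week), dwarfing a short burst of processing.
large = transfer_hours(10_000)

print(round(small, 3), round(large, 1))
```

If the transfer time dominates the processing time, the hybrid split fails its purpose, which is the lecture's point: small or stateless workloads hybridize well, data-heavy ones do not.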

Architectural layers of Cloud computing
Sun's view of Cloud computing is an inclusive one: Cloud computing can describe
services being provided at any of the traditional layers
from hardware to applications. In practice, Cloud service providers tend to offer services
that can be grouped into three categories: software as a service, platform as a service, and
infrastructure as a service.

I) b4. Software as a service (SaaS)

Software as a service features a complete application offered as a service on demand. A
single instance of the software runs on the Cloud and services multiple end users or client
organizations. The most widely known example of SaaS is Salesforce.com, though many
other examples have come to market, including the Google Apps offering of basic
business services such as email and word processing. Although Salesforce.com preceded
the definition of Cloud computing by a few years, it now operates by leveraging its
companion, which can be defined as a platform as a service.

I) b5. Platform as a service (PaaS)

Platform as a service encapsulates a layer of software and provides it as a service that can
be used to build higher-level services. There are at least two perspectives on PaaS,
depending on whether one is a producer or a consumer of the services:

Someone producing PaaS might produce a platform by integrating an OS, middleware,
application software, and even a development environment that is then provided to a
customer as a service. For example, someone developing a PaaS offering might base it on
a set of Sun xVM hypervisor virtual machines that include a NetBeans integrated
development environment, a Sun GlassFish Web stack and support for additional
programming languages such as Perl or Ruby.

Someone using PaaS would see an encapsulated service that is presented to them through
an API. The customer interacts with the platform through the API, and the platform does
what is necessary to manage and scale itself to provide a given level of service. Virtual
appliances can be classified as instances of PaaS. A content switch appliance, for example,
would have all of its component software hidden from the customer, and only an API or
GUI for configuring and deploying the service provided to them. PaaS offerings can
provide for every phase of software development and testing, or they can be specialized
around a particular area such as content management.

A commercial example of PaaS is the Google App Engine, which serves applications on Google's infrastructure.

PaaS services such as these can provide a powerful basis on which to deploy
applications; however, they may be constrained by the capabilities that the Cloud
provider chooses to deliver.

I) b6. Infrastructure as a service (IaaS)

Infrastructure as a service delivers basic storage and compute capabilities as standardized
services over the network. Servers, storage systems, switches, routers, and other systems
are pooled and made available to handle workloads that range from application
components to high-performance computing applications. Commercial examples of IaaS
include Joyent, whose main product is a line of virtualized servers that provide a highly
available on-demand infrastructure.

I) b7. Cloud application programming interfaces

One of the key characteristics that distinguishes Cloud computing from standard enterprise
computing is that the infrastructure itself is programmable. Instead of physically
deploying servers, storage, and network resources to support applications, developers
specify how the same virtual components are configured and interconnected, including
how virtual machine images and application data are stored and retrieved from a storage
Cloud. They specify how and when components are deployed through an API that is
specified by the Cloud provider. An analogy is the way in which File Transfer Protocol
(FTP) works: FTP servers maintain a control connection with the client that is kept open
for the duration of the session. When files are to be transferred, the control connection is
used to provide a source or destination file name to the server, and to negotiate a source
and destination port for the file transfer itself.

In a sense, a Cloud computing API is like an FTP control channel: it is open for the
duration of the Cloud's use, and it controls how the Cloud is harnessed to provide the end
services envisioned by the developer. The use of APIs to control how Cloud infrastructure
is harnessed has a pitfall: unlike the FTP protocol, Cloud APIs are not yet standardized, so
each Cloud provider has its own specific APIs for managing its services. This is the
typical state of an industry in its infancy, where each vendor has its own proprietary
technology that tends to lock in customers to their services because proprietary APIs make
it difficult to change providers. Look for providers that use standard APIs wherever
possible. Standard APIs can be used today for access to storage; APIs for deploying and
scaling applications are likely to be standardized over time. Also look for Cloud providers
that understand their own market and provide, for example, ways to archive and deploy
libraries of virtual machine images and preconfigured appliances.
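Until Cloud APIs are standardized, a common defensive pattern is a thin adapter layer that confines provider-specific calls to one place. Both provider classes below are fabricated for illustration; they stand in for two vendors with incompatible APIs:

```python
class ProviderA:
    """Fake provider with one vendor-specific call signature."""
    def launch_instance(self, image_name):
        return f"A:{image_name}"

class ProviderB:
    """Fake provider with a different, incompatible signature."""
    def run(self, image, zone):
        return f"B:{image}@{zone}"

class CloudAdapter:
    """The application codes against this one interface, so switching
    providers means changing the adapter, not the application."""
    def __init__(self, provider, default_zone="zone-1"):
        self.provider = provider
        self.default_zone = default_zone

    def create_vm(self, image):
        # Provider-specific translation is isolated here.
        if isinstance(self.provider, ProviderA):
            return self.provider.launch_instance(image)
        return self.provider.run(image, self.default_zone)

print(CloudAdapter(ProviderA()).create_vm("web"))  # A:web
print(CloudAdapter(ProviderB()).create_vm("web"))  # B:web@zone-1
```

The adapter does not remove lock-in, but it shrinks the surface that must change when moving between providers, which is the practical mitigation available while each vendor's API remains proprietary.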

I) b8. Growth of the Cloud Computing landscape

Cloud computing has transformed the way organizations approach IT, enabling them to
become more agile, introduce new business models, provide more services, and reduce IT
costs. Cloud computing technologies can be implemented in a wide variety of
architectures, under different service and deployment models, and can coexist with other
technologies and software design approaches. The Cloud computing landscape continues
to realize explosive growth. The worldwide public Cloud services market was projected
to grow nearly 20 percent in 2012, to a total of $109 billion, with 45.6 percent growth for
IaaS, which is the fastest growing market segment.

However, for security professionals, the Cloud presents a huge dilemma: How do you
embrace the benefits of the Cloud while maintaining security controls over your
organization's assets? It becomes a question of balance to determine whether the
increased risks are truly worth the agility and economic benefits. Maintaining control over
the data is paramount to Cloud success. A decade ago, enterprise data typically resided in
the organization's physical infrastructure, on its own servers in the enterprise's data
center, where one could segregate sensitive data on individual physical servers. Today,
with virtualization and the Cloud, data may be under the organization's logical control
but physically reside in infrastructure owned and managed by another entity.

Specific security challenges pertain to each of the three Cloud service models: Software
as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).

SaaS deploys the provider's applications running on a Cloud infrastructure; it offers
anywhere access, but also increases security risk. With this service model it is essential
to implement policies for identity management and access control to applications. For
example, with Salesforce.com, only certain salespeople may be authorized to access and
download confidential customer sales information.
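An identity-and-access policy of the kind described can be sketched as a deny-by-default role check. The roles, actions, and resource names are invented for illustration, not any SaaS product's actual model:

```python
# Hypothetical policy: which roles may perform which actions on a resource.
POLICY = {
    ("sales_rep", "read", "customer_sales"): True,
    ("sales_rep", "download", "customer_sales"): False,
    ("sales_manager", "download", "customer_sales"): True,
}

def is_authorized(role, action, resource):
    """Deny by default; allow only what the policy explicitly grants."""
    return POLICY.get((role, action, resource), False)

# Only certain roles may download confidential sales data:
assert is_authorized("sales_manager", "download", "customer_sales")
assert not is_authorized("sales_rep", "download", "customer_sales")
assert not is_authorized("intern", "read", "customer_sales")  # unknown role
```

The deny-by-default shape matters: in a multi-tenant SaaS application, any identity or permission not explicitly granted must resolve to "no access".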

PaaS is a shared development environment, such as Microsoft Windows Azure, where
the consumer controls deployed applications but does not manage the underlying Cloud
infrastructure. This Cloud service model requires strong authentication to identify users,
an audit trail, and the ability to support compliance regulations and privacy requirements.
IaaS lets the consumer provision processing, storage, networks, and other fundamental
computing resources and controls operating systems, storage, and deployed applications.
As with Amazon Elastic Compute Cloud (EC2), the consumer does not manage or control
the underlying Cloud infrastructure. Data security is typically a shared responsibility
between the Cloud service provider and the Cloud consumer. Data encryption without the
need to modify applications is a key requirement in this environment, to remove the
custodial risk of IaaS infrastructure personnel accessing sensitive data.

This shift in control is the number one reason new approaches and techniques are
required to ensure organizations can maintain data security. When an outside party owns,
controls, and manages infrastructure and computational resources, how can you be
assured that business or regulatory data remains private and secure, that your
organization is protected from damaging data breaches, and that you can still satisfy the
full range of reporting, compliance, and regulatory requirements? The second lecture in
this tutorial will discuss:

- Cloud Computing security challenges

- techniques for protecting data in the Cloud

- strategies for secure transition to the Cloud

Data protection tops the list of Cloud concerns today. "Vendor security capabilities are key
to establishing strategic value," reports the 2012 Computerworld Cloud
Computing study, which measured Cloud computing trends among technology decision
makers. When it comes to public, private, and hybrid Cloud solutions, the possibility of
compromised information creates tremendous angst. Organizations expect third-party
providers to manage the Cloud infrastructure, but are often uneasy about granting them
visibility into sensitive data. Derek Tumulak, vice president of product management at
Vormetric, explains, "Everyone wants to use the Cloud due to cost savings and new agile
business models. But when it comes to Cloud security, it's important to understand the
different threat landscape that comes into play." There are complex data security
challenges in the Cloud:

-The need to protect confidential business, government, or regulatory data

-Cloud service models with multiple tenants sharing the same infrastructure

-Data mobility and legal issues relative to such government rules as the EU Data Privacy
Directive

-Lack of standards about how Cloud service providers securely recycle disk space and
erase existing data

-Auditing, reporting, and compliance concerns

-Loss of visibility to key security and operational intelligence that no longer is available to
feed enterprise IT security intelligence and risk management

-A new type of insider who does not even work for your company, but may have control
and visibility into your data

Such issues give rise to tremendous anxiety about security risks in the Cloud. Enterprises
worry whether they can trust their employees or need to implement additional internal
controls in the private Cloud, and whether third-party providers can provide adequate
protection in multitenant environments that may also store competitor data. There's also
ongoing concern about the safety of moving data between the enterprise and the Cloud, as
well as how to ensure that no residual data remnants remain upon moving to another
Cloud service provider.

Without question, virtualized environments and the private Cloud involve new challenges
in securing data, mixed trust levels, and the potential weakening of separation of duties
and data governance. The public Cloud compounds these challenges with data that is
readily portable, accessible to anyone connecting with the Cloud server, and replicated for

availability. And with the hybrid Cloud, the challenge is to protect data as it moves back
and forth from the enterprise to a public Cloud.

Chou's Theories of Cloud Computing: The 5-3-2 Principle

Yung Chou, 3 Mar 2011, 8:15 AM

Notice that the 5-3-2 Principle in Theory 3 is based on NIST SP 800-145. However,
the latter categorizes 4 Cloud deployment models, including public, private,
community and hybrid, while the former states two Cloud
deployment models, considering a hybrid Cloud to be a private Cloud
variant and a community Cloud to be a private Cloud of an associated
community.

Theory 1: You cannot productively discuss Cloud computing without first
clearly defining what it is.

Cloud computing can be confusing since everyone seems to have a different
definition of it. Notice the issue is not a lack of definitions, nor
the need for an agreed definition. The issue is not having a sound
definition to operate upon. Without first properly defining what it is, a
conversation about Cloud computing all too often becomes non-productive. The
reason is simple: if one can't define what it is, how can one tell what is
good, secure, sufficient or not? Not to mention, Cloud computing is a
generational shift in how IT manages resources and deploys services. In my
view, Cloud computing is essentially a set of capabilities applicable to all
aspects of IT: acquisitions, infrastructure, architecture, development,
deployment, operations, automation, optimization, manageability, cost, et al.
Based on an individual's background and experience, Cloud means different
things to different people. Without a clear baseline of Cloud computing,
miscommunication and misunderstanding should be expected.

Theory 2: The 5-3-2 principle defines the essence and scopes the subject
domain of Cloud computing.

Employ the 5-3-2 principle as a message framework to facilitate
discussions and improve awareness of Cloud computing. The message of
Cloud computing itself is, however, up to individuals to formulate. A system
administrator and an application developer may have very different views of
Cloud computing. Processes, operations and tasks may be at variance, but the
characteristics of Cloud computing should nevertheless be consistent. Stay
with this framework and focus on translating the capabilities of Cloud
computing into business values to realize the applicability of Cloud
computing to an examined business scenario.

Theory 3: The 5-3-2 principle of Cloud computing describes the 5 essential
characteristics, 3 delivery methods, and 2 deployment models of Cloud
computing.

The 5 characteristics of Cloud computing, shown below, are the expected
attributes for an application to be classified as a Cloud application. These are
the differentiators. Questions like "I am running X, do I still need Cloud?"
can be clearly answered by determining if these characteristics are expected
for X.

The 3 delivery methods of Cloud computing, as shown below, are the
frequently heard Software as a Service, Platform as a Service, and
Infrastructure as a Service, namely SaaS, PaaS, and IaaS respectively. Here,
the key is to first understand what a service is. All 3 delivery methods are
presented as services in the context of Cloud computing. Without a clear
understanding of what a service is, there is a danger of not grasping the
fundamentals and thereby misunderstanding all the rest.

The 2 deployment methods of Cloud computing are public Cloud and private
Cloud. Public Cloud is intended for public consumption, while private Cloud is
a Cloud (and notice a Cloud should exhibit the 5 characteristics) whose
infrastructure is dedicated to an organization. Private Cloud, although
frequently assumed to be inside a private data center, as depicted below, can be on
premises or hosted off premises by a 3rd party. Hybrid deployment is an
extended concept of a private Cloud, with resources deployed both on premises
and off premises.

The 5-3-2 principle is a simple, structured, and disciplined way of conversing
about Cloud computing. 5 characteristics, 3 delivery methods, and 2 deployment
models together explain the key aspects of Cloud computing. A Cloud
discussion is to validate the business needs for the 5 characteristics, the
feasibility of delivering an intended service with SaaS, PaaS, or IaaS, and
whether public Cloud or private Cloud is the preferred deployment model. Under the
framework provided by the 5-3-2 principle, there is now a structured way to
navigate through the maze of Cloud computing and a direction toward an
ultimate Cloud solution. Cloud computing becomes clear and easy to
understand with the 5-3-2 principle as follows:
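The 5-3-2 breakdown can be captured as a small checklist. In the sketch below, the five characteristic labels are taken from NIST SP 800-145, and the two deployment models follow Chou's framing, which folds hybrid and community Clouds into private-Cloud variants:

```python
# The 5-3-2 principle as a data structure: 5 essential characteristics,
# 3 delivery methods, 2 deployment models. Characteristic names follow
# NIST SP 800-145.

PRINCIPLE_532 = {
    "characteristics": [
        "on-demand self-service",
        "broad network access",
        "resource pooling",
        "rapid elasticity",
        "measured service",
    ],
    "delivery_methods": ["SaaS", "PaaS", "IaaS"],
    "deployment_models": ["public Cloud", "private Cloud"],
}

# The name "5-3-2" is simply the size of each group:
sizes = tuple(len(v) for v in PRINCIPLE_532.values())
print(sizes)  # (5, 3, 2)
```

Validating a business scenario against this checklist is exactly the structured Cloud discussion the principle prescribes: confirm the five characteristics are needed, pick a delivery method, then pick a deployment model.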

I) b9. IaaS

Infrastructure as a Service is a type of Cloud computing platform wherein the customer
organization outsources its IT infrastructure, including storage, processing, networking,
and other resources. Customers access these resources over the internet, i.e. the Cloud
computing platform, on a pay-per-use model. IaaS, earlier called hardware as a service
(HaaS), is a Cloud computing platform-based model. In traditional hosting services, IT
infrastructure was rented out for specific periods of time, with a pre-determined hardware
configuration. The client paid for the time and configuration, regardless of actual use.
With the IaaS Cloud computing platform, clients can dynamically scale the configuration to
meet changing needs, and are billed only for the services actually used. The IaaS Cloud
computing platform eliminates the need for every organization to maintain its own IT
infrastructure. SMBs can curtail their IT investments using the IaaS Cloud computing
platform.

Enterprises can fulfill contingent needs with IaaS. IaaS Cloud computing platform
providers host IT infrastructure on a large scale, segmented for different customers,
creating economies of scale. The IaaS Cloud platform can bring vast computing power,
previously available only to governments and large corporations, to smaller
organizations. IaaS is offered in three models: private, public, and hybrid Cloud.
Private Cloud implies that the infrastructure resides at the customer's premises. In the
case of public Cloud, it is located at the Cloud computing platform vendor's data center;
and hybrid Cloud is a combination of the two, with the customer choosing the best of
both worlds.

Fig. I.3. While companies' reasons for considering IaaS differ, among SMBs and
Enterprises alike, cost savings remains a key objective. A recent Yankee Group survey,
focusing on cost savings, illustrates the top five motivations specified by respondents as
reasons to use IaaS.

Pros and cons of the IaaS Cloud computing platform

Pros:
- Dynamically choose a CPU, memory, and storage configuration to suit your needs
- Access to vast computing power available on the IaaS Cloud platform
- Eliminates the need for investment in rarely used IT hardware
- IT overheads are handled by the IaaS Cloud computing platform vendor
- In-house IT infrastructure can be dedicated to activities central to the organization

Cons:
- There is a risk of the IaaS vendor gaining access to the organization's data; this can
be avoided by opting for private Cloud.
- The IaaS model is dependent on internet availability.
- There is dependence on the availability of virtualization services.
- IaaS may limit user privacy and customization options.

Points to consider before making a choice:
- The IaaS Cloud computing platform may not replace traditional hosting. Where resource
requirements are predictable, viz. for internal databases, applications, and email,
traditional hosting may remain the viable option. Apart from contingency needs, the IaaS
Cloud computing platform is useful for application development and testing.
- The IaaS Cloud computing platform may not eliminate the need for an in-house IT
department, which will be needed to monitor the IaaS setup. IT salary expenditure might
not reduce significantly, although other IT expenses will.
- Breakdowns at the IaaS vendor's end can bring your business to a halt. Assess the IaaS
vendor's finances and stability. Ensure that the SLAs provide backups for hardware,
network, data, and application failures. Image portability and third-party support are a plus.
- The IaaS Cloud computing platform vendor can gain access to your sensitive data.
Engage only with credible players. Study their security policies and precautions.

IaaS market developments
The IaaS Cloud computing platform is a new technology, and therefore evolving. Amazon Web
Services (AWS) is the first and most popular IaaS Cloud computing platform vendor. The
AWS suite offers technologies and skills developed or acquired by Amazon to run its
own websites. Other key international players in the IaaS market are Rackspace, Google,
GoGrid, and Joyent. In India, ground infrastructure in the form of widespread internet
connectivity and virtualization services remains insufficiently developed. However, that is
changing, and studies suggest that the IaaS Cloud computing platform will be commonplace
in Indian enterprises in the near future. Notable Indian players include Reliance, Tata,
Sify, and Netmagic Solutions. Netmagic was the first to offer IaaS in India.

Traditionally, companies met their growing IT needs by investing in more capital
equipment. Today, competitive pressures continue to demand improvements in quality of
service despite growing numbers of users and applications. At the same time, the
challenging economic environment has increased pressure on IT departments to keep costs
down. The convergence of those trends, with other advances of the last several years, has
made it possible to take infrastructure outsourcing to a new level. Building on the
foundation of managed services such as colocation, hosting, and virtualization services,
IaaS has emerged as an easily deployed service that enables companies to flexibly and
cost-effectively anticipate and evolve with their customers' rapidly changing business
needs.

With IaaS, as with any new development, there are concerns about risks, readiness, and
managing the transition. Frequently asked questions center on costs, the transition process
from a data center to IaaS, minimizing risk, ensuring performance, and managing the new
environment.

Total Cost of Ownership (TCO)
To determine if transitioning to IaaS really is a strategic move from a cost perspective,
calculating TCO is a must. This determination must include costs such as upkeep, salaries
of IT personnel, and the time commitment of senior management when planning, building,
and managing a data center. With static, continuous loads, an IaaS environment will
generally bring cost savings, and with bursty and dynamic loads, those savings will be
even greater.
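A TCO comparison of this kind can be sketched in a few lines. All figures below are hypothetical; the point is that the on-premises total must fold in the hidden costs the text lists, not just hardware:

```python
# Illustrative 3-year TCO comparison. On-premises TCO includes upkeep,
# IT salaries, and senior-management time; IaaS TCO includes the metered
# usage bill plus a one-time migration effort.

def on_prem_tco(hardware, upkeep, it_salaries, mgmt_time, years):
    """Up-front hardware plus recurring annual costs over the period."""
    return hardware + (upkeep + it_salaries + mgmt_time) * years

def iaas_tco(annual_usage_bill, migration_cost, years):
    """Metered usage plus a one-time migration effort."""
    return migration_cost + annual_usage_bill * years

on_prem = on_prem_tco(hardware=200_000, upkeep=30_000,
                      it_salaries=120_000, mgmt_time=25_000, years=3)
cloud = iaas_tco(annual_usage_bill=140_000, migration_cost=40_000, years=3)

print(on_prem)          # 725000
print(cloud)            # 460000
print(cloud < on_prem)  # True -> IaaS wins on TCO in this scenario
```

With different inputs the comparison can go the other way, which is precisely why the text insists that the calculation be done rather than assumed.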

Migrating to IaaS
The prospect of migrating existing applications from a data center to IaaS is a primary
concern of enterprise IT managers. IaaS offers encouragement here in that it provides a
great deal of flexibility: anything that can be virtualized can be run on IaaS. In the end,
the question is whether the benefits of IaaS outweigh the investment in learning new APIs
and web interfaces, and the risks of migration.

Managing Risk
In industries such as healthcare, where privacy of data is a key concern, IT administrators
are often apprehensive that using Cloud computing services, versus on-premises data
management, may risk higher exposure of confidential information. IaaS providers are
addressing these risks with features such as federation capabilities, which span multiple
Clouds, and by offering enterprise versions of the service.

Ensuring Performance
Service Level Agreements (SLAs) accompany voice, bandwidth, and a number of IT
services. However, an SLA does not necessarily affect the actual operations; its terms and
conditions are only recited when things go awry, and it typically does not protect a
business from loss of system uptime. The same holds true with SLAs and IaaS providers.
In the end, the quality of the uptime is directly related to the sophistication of the IT
department, not the strength of the SLA. Choosing an IaaS provider that employs best

practices in design and operations and promotes transparency offers the greatest assurance
of performance.

Managing the Cloud
The system management tools available from IaaS providers represent an additional
concern, since, like any other service (e.g., virtualization), they will require a learning
curve. Just as the move to virtualization added tools for VMware, Xen, and alternatives,
IaaS will require learning new tools. Many companies will find, however, that the time
sacrifice is worthwhile, especially when using IaaS in situations where it is particularly
advantageous (e.g., transient projects) over other services.

IaaS Deployment Models
Thus far, the most basic and widely used Cloud offering among IaaS providers is the public
Cloud, which involves sharing compute resources among any number of customers
("multitenant"), without physically segregating them. IaaS providers have also started
to develop alternative deployment models to address the concerns of Enterprises, which
often center on security and the public Cloud. These models include:
Virtual Private IaaS
Dedicated IaaS
Private Community IaaS
Hybrid IaaS

Making the Move to IaaS
Because they generally lack the resources and expertise required to deploy internal IT
infrastructures, the early adopters of IaaS and other Cloud-computing models have mostly
been Web 2.0 start-ups, small Independent Software Vendors (ISVs), and SMBs.
Enterprises, with a different set of criteria and priorities, have followed more slowly,
though many are undertaking low-risk approaches to trial IaaS.

For the Enterprise
The transition to IaaS for some enterprises merely represents an evolutionary step
following virtualization. For others, it will entail a dramatic change in the way they do
business. However, it is important to note that the adoption of IaaS (or any
Cloud-computing model) is not an all-or-nothing endeavor. From bringing in a new
application to migrating an existing one, there are many strategies for evaluating whether,
how, and in what ways an IaaS solution can best benefit an organization.

Choosing an IaaS Provider

As with any service a business evaluates, the features and benefits of IaaS, the price, and
the provider must all be taken into consideration. The stakes are particularly high when
moving IT resources from an in-house (or other) arrangement to an IaaS provider. For this
reason, an IaaS service provider must be chosen carefully. From service-related questions,
such as what the minimum charge unit is (i.e., hours versus minutes), to service-provider-
related questions, such as whether the chosen provider has the expertise, scale and
geographic coverage to meet a company's unique needs, there are many different concerns
that need to be evaluated.

The overall objective in choosing an IaaS provider should be a long-term relationship.
Turning over part, or all, of a business's IT to an outside organization will have
challenges, not the least of which will be a perceived loss of control. The right IaaS
partner will provide an elevated sense of control, bringing to bear its expertise, a
comprehensive tool set for management, monitoring, and reporting, and responsive
customer service.

The promise of Cloud computing has long been a new height of convenience: easily and
rapidly provisioned pay-per-use computing resources, scaling automatically and instantly
to meet changing demands. Emerging at the convergence of major computing trends such
as virtualization, service-oriented architectures, and standardization of the Internet, IaaS
comes closer than ever before to fulfilling that vision. IaaS is being deployed by world-
class organizations as well as aggressive SMBs. The next several years will see IaaS
embraced by companies of all sizes, using all manner of deployment models, as the
overwhelming economic benefits and flexibility of its elastic metered services prevail over
other IT solutions.

As with disruptive business models from the past, certain technical, legal, and personnel
challenges must be overcome before IaaS will enter the mainstream. Nonetheless,
organizations would do well to begin the evaluation process by:
- Amassing available literature on IaaS
- Contacting IaaS providers for a consultation and audit of current practices
- Developing an accurate TCO of current IT solutions
- Working with an IaaS provider to develop a migration plan
- Testing IaaS with a new application launch or nonbusiness-critical application
- Benchmarking costs and performance of current solutions vs. IaaS candidate applications

Companies that effectively leverage the benefits of an IaaS environment may be able to
gain an edge in a rapidly evolving economy.

I) b10. PaaS

PaaS potentially offers the greatest impact of any aspect of Cloud computing: it
brings custom software development to the Cloud. NIST describes PaaS as: "The
capability provided to the consumer to deploy onto the Cloud infrastructure consumer-
created or acquired applications created using programming languages and tools supported
by the provider."

In simpler terms, PaaS provides developers (the consumer) with easier ways to create and
deploy software on Cloud infrastructure. Those easier ways may be graphical user
interfaces (GUIs), sandboxes, programming languages, shared services, application
programming interfaces (APIs) and other online tools for developers. PaaS
implementations vary from vendor to vendor. Keep in mind that the concept of
development tools and platforms is not entirely new, although the underlying
infrastructures have changed significantly. In the 1990s, desktop platforms (operating
systems) and development tools catapulted the sale of PCs by empowering developers and
making PCs easier to use. In the next 10 years, PaaS will drive demand for the Cloud in
similar ways. So why is PaaS so important? Because it speeds development and saves a lot
of money! Using PaaS, it is possible to save millions of dollars on a single, large-scale
software development project. Developers can create and deploy software faster. Agencies
can lower their risks, promote shared services and improve software security via a
common security model. Data centers can leverage PaaS to make their infrastructure more
valuable. PaaS can lower the skill requirements to engineer new systems and can lower
risks by taking advantage of pretested technologies. It has been said that an order-of-
magnitude change in economics will change an industry. PaaS has been shown to provide
enormous improvements in the economics of engineering and deploying custom software.
An early 2009 IDC study demonstrated a 720-percent return on investment for
stakeholders. Since that time, several new products have emerged. It is
reasonable to expect the economics to improve as the market matures over time.
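To put the 720-percent figure in perspective, ROI is conventionally computed as benefit minus cost, divided by cost. The figures below are illustrative, not taken from the cited IDC study:

```python
# ROI = (benefit - cost) / cost. A 720-percent ROI therefore means the
# project returned 8.2 dollars for every dollar spent.

def roi(benefit: float, cost: float) -> float:
    return (benefit - cost) / cost

print(roi(benefit=8_200_000, cost=1_000_000))  # 7.2, i.e. 720 percent
```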

Despite its many advantages, PaaS is not perfect. For example, many PaaS vendors
require their customers to make long-term commitments to proprietary infrastructures.
Some early adopters of PaaS have unknowingly made casual long-term commitments to
infrastructure providers. It's somewhat like buying gum at the counter, but needing to rent
the store for 10 years. That's why NIST is stressing the importance of openness and
portability. The NIST Cloud Computing Reference Architecture depicts PaaS as playing
an integral role. In fact, platforms will play the same vital role in the Cloud computing

model as with prior computing models: namely desktops and mainframes. The value is
simply too significant to ignore. A Gartner report in 2011 predicted that PaaS would
become mainstream, going so far as to say "the battle for leadership in PaaS and the key
PaaS segments will engulf the software industry." According to Fabrizio Biscotti, a
research director at Gartner, "PaaS is the least developed [of the service models], and
it is where we believe the battle between vendors is set to intensify." Mr. Biscotti goes on
to say, "Clearly, from the attention given to this segment by the industry's giants, it is
likely that they are viewing PaaS as a strategic undertaking as much as an incremental
market opportunity."

For the IT industry, PaaS will drive sales of software, infrastructure and integration
services. As we approach 2030, the interest in PaaS is reaching critical mass and the
market is poised for hypergrowth. System integrators are leveraging PaaS to win more
proposals and deliver faster. IaaS providers are leveraging PaaS to radically differentiate
their offerings. IT buyers are looking toward PaaS to turn around troubled projects.
Enterprise software companies are acquiring PaaS solutions to establish new identities.
Understood or not, PaaS is quickly becoming the new way to build and integrate
software on the Cloud. In a few years, PaaS will likely be the norm rather than the
exception. Soon, it will be unfathomable to build a software system without leveraging
shared platform services.

What Is PaaS?
Let's append the definition in Section I) b5 again with the full NIST definition of
PaaS: "The capability provided to the consumer to deploy onto the Cloud infrastructure
consumer-created or acquired applications created using programming languages and tools
supported by the provider." NIST goes on to say, "The consumer does not manage or
control the underlying Cloud infrastructure including network, servers, operating systems
or storage, but has control over the deployed applications and possibly configuration
settings for the application-hosting environment." Said another way, PaaS provides
developers with easier ways to create and deploy software onto Cloud infrastructure.
Those easier ways typically exist as GUIs, sandboxes, programming languages, shared
services, APIs and other online tools for software developers.

Fig. I.4. PaaS provides developers with easier ways to create and deploy software onto
Cloud infrastructure. Those easier ways typically exist as GUIs, sandboxes,
programming languages, shared services, APIs and other online tools for software
developers. WSO2 Private PaaS is built on top of Apache Stratos. It is the most complete,
enterprise-grade solution, offering an open Platform as a Service, enriched with all the
generic features that a PaaS would include. More significantly, it adds functionality to host
pre-integrated, fully multi-tenant WSO2 Carbon middleware products as a composition of
cartridges that deliver a wide range of Cloud PaaS services.

To better understand the basic concept of PaaS, imagine logging onto a website that lets
you provision a "hello world" software application on top of Cloud infrastructure. Now
imagine using online tools and programming languages to build out your application for a
more serious business need. Imagine adding forms, features and reports; integrating with
legacy systems; and deploying your software on the Cloud, with zero uploads, installations
or system configurations to worry about. It's all managed for you. Whereas PaaS offerings
may differ greatly from vendor to vendor, the purpose remains primarily for developers to
create software on Cloud infrastructure.

According to NIST, PaaS consumers employ the tools and execution resources provided
by Cloud providers to develop, test, deploy and manage the operation of PaaS applications

hosted in a Cloud environment. PaaS consumers can be application developers who design
and implement application software; application testers who run and test applications in a
Cloud-based environment; application deployers who publish applications into the Cloud;
and application administrators who configure, monitor and manage applications deployed
in a Cloud. PaaS consumers can be billed according to the number of PaaS users; the
processing, storage and network resources consumed by the PaaS application; and the
duration of the platform usage.

PaaS is not a single technology, but rather a collection of related services for creating and
deploying software on the Cloud. That collection of technologies is growing. Early PaaS
offerings included limited feature sets, such as forms, databases and simple APIs. As the
market matures, we are seeing PaaS offerings that manage user subscriptions, security,
resource metering, workflow, commerce, role-based security, reporting and other shared
services. These integrated PaaS offerings are evolving into operating systems for the
Cloud.

Service Model Delivery
By definition, PaaS is provided as a service: you can use it over the internet with no
need to ever install, upgrade or host. That means that PaaS is provided on demand in ways
that support the essential characteristics of Cloud computing. It is elastic: it can be scaled
up and down quickly based on needs. It also takes advantage of a shared pool of
computing resources to handle surges. Developers can deploy their SaaS in a way that
consumers only pay for what they use. This has huge implications for software
economics.
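The "elastic, pay for what you use" behavior can be sketched numerically: capacity tracks demand hour by hour, and the bill meters only what was provisioned. The scaling rule and all rates below are illustrative assumptions, not any provider's real pricing:

```python
import math

# Elastic scaling sketch: instances follow the hourly load, and the bill is
# metered on instance-hours actually provisioned.

RATE_PER_INSTANCE_HOUR = 0.05
REQUESTS_PER_INSTANCE = 1000       # assumed capacity of one instance

def instances_needed(load: int) -> int:
    """Scale up and down with demand, keeping at least one instance."""
    return max(1, math.ceil(load / REQUESTS_PER_INSTANCE))

hourly_load = [200, 800, 5000, 12000, 3000, 400]   # a surge and its decay
fleet = [instances_needed(x) for x in hourly_load]
bill = sum(fleet) * RATE_PER_INSTANCE_HOUR         # 23 instance-hours total

print(fleet)           # [1, 1, 5, 12, 3, 1]
print(round(bill, 2))  # 1.15
```

Under fixed provisioning, the same six hours would have required paying for 12 instances throughout (72 instance-hours); metering charges only for the 23 actually used.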

Until the emergence of PaaS, the term "Cloud computing" was nearly synonymous with
infrastructure services. The SaaS segment has been dominated by giants like
Microsoft and Google. With PaaS, system integrators are empowered to enter the
space with Cloud-enabled mission solutions. In essence, PaaS is the enabling technology
that finally brings custom software to the Cloud.


Fig. I.5. Until the emergence of PaaS, the term "Cloud computing" was nearly
synonymous with infrastructure services. In essence, PaaS is the enabling technology that
finally brings custom software to the Cloud.

PaaS is readily distinguished from traditional web platforms, which require installations,
uploads, downloads and managed hosting. "As a service" means that developers can
provision and manage instances of the platform on demand with no need to coordinate
with their information technology (IT) departments or manage the underlying servers.
More importantly, if you build Cloud software on top of a PaaS, your solution is
inherently Cloud ready, taking advantage of underlying Cloud infrastructure and as-a-
service delivery models. Along the same lines, PaaS is often confused with application
frameworks, such as Ruby on Rails or .Net. In short, there is little or no comparison:
with PaaS, there is no need for uploading, configuring permissions and troubleshooting,
because it is delivered over the internet as a Cloud service. Application frameworks and
PaaS may coexist to support SaaS solutions, such as with the Heroku platform (owned
by Salesforce) and SaaS Maker. Such platforms facilitate integration and
deployment of applications that were written in a variety of programming languages.

Why Is PaaS So Important?
PaaS has been shown to speed development of complex software, while making it easier
to deploy and manage applications on the Cloud. It shields developers from the underlying
complexities of installing and configuring applications on low-level operating systems. As
a result, IT stakeholders benefit in several ways:

Lower costs: PaaS has been shown to reduce costs by more than half, and in some cases
improve return on investment (ROI) by more than 700 percent.
Faster time to market: PaaS dramatically reduces time to market by serving as a launch
pad for software applications and managing common functions.
Lower risks: PaaS can reduce risks because common functions are already tested,
sometimes over a period of years.
Rapid prototyping: PaaS provides unique capabilities for developers to create and deploy
concept applications on the Cloud for their customers. It provides a way to demonstrate
results faster to end users.
Higher security and interoperability: The Federal Cloud Computing Strategy
describes potential platform strength as "greater uniformity and homogeneity,"
resulting in "improved information assurance, security response, system management,
reliability and maintainability."

PaaS is a component of the NIST Reference Model for Cloud Computing. If you're
developing a custom software system without a PaaS, then you are likely building a
stovepipe. The NIST Cloud Computing Reference Architecture depicts PaaS as the
middle layer in a three-layered architecture. The mysterious inverted "L"s imply that
SaaS may be created in either of two ways: as a layered architecture (on top of PaaS and
IaaS) or as traditional ground-up stovepipes, avoiding PaaS altogether. Many of today's
established software vendors deliver SaaS without leveraging PaaS or IaaS technologies,
often because PaaS was unavailable at development time.

Trends will rapidly move toward PaaS to knock down stovepipes and deliver shared
services. The stovepipe problem has existed for many years, with redundant approaches
to managing security, workflow and multi-tenancy. PaaS consolidates common functions
into shared services that are easy to consume. As a result, applications share common
ways to do things, and they achieve higher levels of integration and interoperability.

Fig. I.6. Although developers are the primary users of PaaS, all IT stakeholders will
ultimately benefit from its advantages. IT buyers will benefit because they are the ones
suffering from astronomical software engineering costs and delays. The end users will
benefit from the lower usage fees and by gaining access to their applications sooner.

How Big Is the Problem Being Solved?
PaaS solves the biggest problem with software development projects today: web-based
software is extremely complicated, risky and expensive to engineer. These problems are
largely related to stovepipe development projects. The U.S. Chief Information Officer's
(CIO) 25 Point Implementation Plan to Reform Federal Information Technology
Management sheds light on the problem this way: "Too often, agencies build large
standalone systems from scratch, segregated from other systems. These systems often
duplicate others already within the federal government, wasting taxpayer dollars."
U.S. CIO Steven VanRoekel recently prescribed a "shared-first" initiative aimed at
solving the stovepipe problem. Among other advantages, platforms serve as a way to
share services across an enterprise without reinventing the wheel. The reason stovepipes
are expensive is that they require labor-intensive and error-prone engineering and
integration. It's like building a house from a pile of nails, lumber, pipes and wires, often
costing millions of dollars and taking years to construct. Instead, imagine building a
prefabricated house: you specify the color, size, type, carpet and more. The prefab house
may be delivered in a fraction of the time, risk and cost.

Enterprise software systems are similar; using PaaS, it's possible to order a prefabricated
software architecture over the internet as a Cloud service. Much of the integration is
already done for you, saving months or years of engineering time. For example, the
architecture may already support single user sign-on, common search, records
management, workflow, reporting and a multi-tenant architecture. If you're asking the
question, "What is a multi-tenant architecture?" then that's exactly the point!
Application multi-tenancy is highly complex when integrated with role-based security and
reporting. You wouldn't want to program it into every application. PaaS provides these
features as shared services, so there's no need to reinvent the wheel.
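To make the multi-tenancy point concrete, here is a minimal, hypothetical sketch of tenant-scoped, role-based record filtering, the kind of cross-cutting function a PaaS can supply as a shared service so that individual applications do not re-implement it. All names and data below are invented for illustration.

```python
# Hypothetical shared-service sketch: every query is scoped to a tenant,
# and role-based access control decides which rows a user may see.
RECORDS = [
    {"tenant": "acme",   "owner": "alice", "doc": "Q1 forecast"},
    {"tenant": "acme",   "owner": "bob",   "doc": "Q1 pipeline"},
    {"tenant": "globex", "owner": "carol", "doc": "Q1 forecast"},
]

def visible_records(records, tenant, user, role):
    """Filter records by tenant first; managers see all of their tenant's
    records, while other roles see only records they own."""
    rows = [r for r in records if r["tenant"] == tenant]
    if role != "manager":
        rows = [r for r in rows if r["owner"] == user]
    return rows
```

In a real PaaS this filtering would live in the platform's data and security layers, not in each application; the sketch only shows why centralizing it avoids reinventing the wheel per application.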

Things to Consider
The term "platform" is plagued with market confusion and is often misused to refer to
customizable software. Software that can be customized is simply that: customizable
software. Some infrastructure vendors promote their products inaccurately as PaaS.
Amidst the confusion are many "Cloud-washed" Web 2.0 and middleware products that
have been rebranded for the PaaS space. Traditional platform technologies have existed
for years, and are long associated with Web 2.0 projects. PaaS comes in many shapes and
sizes:
-Google is currently dominating the consumer application platform with its App Engine.
- is emerging as a major player in the enterprise application platform market.
-SaaS Maker provides integrated development tools, shared services and open APIs.
-Amazon's Elastic Beanstalk provides sandbox capabilities on Amazon's infrastructure.
-Heroku provides automated scaling and application management services.
-Azure provides enterprise infrastructure and database services by way of APIs.

As Cloud adoption increases, enterprise companies appear to be struggling for new
identities. Oracle and other enterprise vendors appear to be rebranding traditional
middleware offerings as PaaS. Similarly, many large system integration firms are still
defending the old way of building software.

History has demonstrated that companies must successfully transition to new platform
models to survive. It's important to understand that Cloud computing (including PaaS) is a
highly disruptive technology in the same way that cell phones disrupted the land-line
business or light bulbs disrupted gas lighting. It represents a true transformational shakeup
of the IT industry, in which new leaders will emerge and former leaders will fall by the
wayside.

Fig. I.6. Cloud computing (including PaaS) is a highly disruptive technology. It represents
a true transformational shakeup of the IT industry, in which new leaders will emerge and
former leaders will fall by the wayside.

This is a time to value innovation. Of the types of platforms that are offered as a service,
enterprise business platforms may provide the greatest value to government, simply
because enterprise business systems are extremely expensive, sometimes costing
millions of dollars to engineer.

Here are a few questions to consider:
1. Is it delivered as a Cloud service? By definition, PaaS delivers its platform as a Cloud
service and allows software to be published as a service. If it does not do both, then it's
not a true PaaS.
2. Is it portable? Can you run your applications on multiple Cloud infrastructures?
3. Does the PaaS do what you need it to do? For example: does it support features for
forms, reports and workflow? Does it support role-based access control? Does it allow
apps to be published as a Cloud service?
4. Is it an open platform? Are you overly reliant on a single software vendor, computer
language, database or other technology for your PaaS-enabled applications?

Why Are Open Platforms So Important?
Open platforms allow multiple vendors to build on the platform using a variety of
vendor-independent languages and technologies; in doing so, open platforms lower the
long-term costs of ownership, mitigate vendor lock-in and increase solution choices. IT buyers
have an opportunity to learn from history.

During the 1980s, the Department of Health and Human Services ran its personnel and
payroll systems on Wang computers. If HHS needed computing services, the agency
needed to buy them from Wang at whatever price, or else invest in migrating to more
open Unix platforms, which HHS eventually did over the course of a decade at great
expense. We don't want to repeat history as we move into the Cloud. This is an ideal
time to explore open platforms. That's why open platforms are important as the Cloud
unfolds. The term "open" has many meanings. The reality is that platforms usually have
levels of openness rather than an all-or-nothing openness. For example, Windows was
much more open than Wang, because any vendor could develop on Windows. With
Windows, we could buy software from thousands of vendors instead of being restricted to
a single vendor. The Windows platform also supported multiple hardware (infrastructure)
providers. A more open platform may actually make its APIs available as proposed
standards so other platform vendors can adopt and implement them. In such cases, the
software can run on any platform that supports the open standard interfaces. PaaS will
similarly evolve with levels of openness. In some cases, PaaS may appear open, but will
require a proprietary data center commitment. Developers should consider the possibility
of porting their apps or data to a future platform, but not resort to ground-up stovepipes to
do so. Instead, it is important to consider levels of openness when choosing a platform.

PaaS as Operating Systems for Data Centers
Modern PaaS offerings are evolving into operating systems for Cloud-enabled data
centers. Similar to desktop operating systems, PaaS shields users from the underlying
complexities of the infrastructure, provides central administration and runs software
applications. PaaS supports development tools and APIs for integrating on top of the
platform. It's critical to understand that the Cloud is a low-level computing platform that
needs an operating system, just like its desktop predecessor. The need is growing as the
Cloud increases in complexity, with web services scattered across the Internet. Don
Tapscott, author of Wikinomics, talks about the growing software complexity problem this
way: "The Web look[s] increasingly like a traditional librarian's nightmare: a noisy
library full of chatty components that interact and communicate with one another." Mr.
Tapscott is referring to the Cloud as a cluttered hodgepodge of web apps and services,
each with its own logins, data sources and security/resource functions. In the absence
of Cloud platforms, we are reinventing the wheel millions of times over. In a few years, the
redundancies will drive up costs by billions within federal IT systems, health IT systems
and other enterprise IT systems that rely on Cloud services. All these IT systems will
struggle with disparate security models and interoperability concerns. As with desktop
operating systems, PaaS provides a common user interface, a common security model and
core functionality (e.g., workflow, reporting), and manages resources, while shielding
users from underlying complexities.

This section is dedicated to common questions about PaaS.

1. Should I build ground-up to avoid a platform?

Absolutely not! If there's one lesson IT history has taught, it is that
ground-up stovepipes are the most costly forms of proprietary systems in
existence. These one-offs usually result in schedule and budget overruns and
long-term dependencies on a single vendor. That is why stovepipes so often fail.
These mega-million-dollar stovepipes continue to cost the federal
government billions of dollars each year with their overreliance on a handful
of large system integration firms. On the other hand, developers can instantly
leverage a PaaS to save years of development time and cost. In doing so, they
are taking advantage of reusable technology and services that are shared
with other organizations facing similar challenges. It is important, however,
that IT buyers avoid platforms that implement proprietary programming
languages or specific infrastructures, to avoid long-term over-dependencies.
We use the term "over-dependencies" to emphasize that dependencies are not
necessarily bad; otherwise we would rarely leverage commercial software.
IT buyers can save years and millions on large-scale projects by leveraging
platforms with open APIs and portability across data centers.

2. How is PaaS different from application frameworks?

Application frameworks (e.g., Ruby on Rails or .NET) are not inherently
offered as a service. Some software companies are making frameworks
available as part of a bundled hosting plan. That approach more closely
resembles glorified hosting, because it falls short of supporting NIST's
essential characteristics of Cloud computing.

3. Is PaaS only relevant to new software projects?

No. You may be able to finish a project already underway faster by switching
to PaaS. The easiest way to start is to try a small prototype using a fraction of
the larger budget. PaaS is an outstanding way to turn around failing software
development projects.

By 2030, PaaS will become mainstream, just as platforms have played central roles in
prior computing models. Forward-thinking CIOs are already looking toward platforms as
part of their migration strategies to do more with less as their budgets shrink. The U.S.
CIO's 25 Point Implementation Plan to Reform Information Technology Management is
a sign of the growing trend toward platforms. The report describes shared services as a
solution for the type of "out-of-control IT project that is outdated on the day it
starts." These same sentiments are reinforced by the U.S. CIO's suggestion of
a "shared-first" initiative and similar federal initiatives.

-Open Platforms. First, we will see trends toward open platforms. If we look back at the
history of computing, we are reminded of the importance of openness versus the high cost
of proprietary platforms. In a perfect world, we would see open standards established long
before platforms are developed. However, such idealism is unrealistic in such a young
market. The NIST Cloud Computing Reference Architecture was based on a guiding
principle to "develop a solution that does not stifle innovation by defining a prescribed
technical solution." This approach creates a level playing field for industry to discuss and
compare their Cloud offerings with the U.S. government.

-Software Shakeout. We will see new software vendors emerge amidst innovation, while
many large companies struggle for new identities. In the shakeout, there will be winners
and losers. Some of today's enterprise software leaders will remain sternly committed to
the attractive revenue streams from their legacy technologies. Meanwhile, new leaders,
such as Salesforce, will emerge as major players in the enterprise platform market. The
same will be true for large software integration firms that are major beneficiaries of
stovepipe development and integration projects. Agile and lean Cloud
development companies will emerge and displace many of the federal projects that have
so visibly failed in years past.

-Changes in Project Awards. We will see changes in the way projects are awarded. PaaS
makes it uniquely possible for contracting officers to try a rapid prototype, rather than
blindly vetting a technology for two years before making a long-term commitment. The
shift toward Cloud will be fully realized when contracting officers realize the opportunity
to buy differently.

-Special Purpose Platforms. We will see several special-purpose platforms, rather than the
emergence of a single de facto platform as was the case with the Microsoft
Windows phenomenon. The reason for this is that the IT landscape is dramatically
different than it was for desktop computing 30 years ago. An unprecedented number of
software developers and software applications now exist across many industries. We will
see special-purpose PaaS offerings emerge for healthcare, manufacturing, financial
management, federal systems and many other domains.

The Next Big Trend in PaaS: Portability
Many vendors are tightly coupling their PaaS offerings with their own
infrastructures. One of the most important trends in platforms is toward openness and
portability. IT buyers should ensure that their PaaS solution is portable across data centers
to avoid long-term lock-in to a single infrastructure provider. In the absence of this
understanding, some government agencies are making casual, long-term commitments to
vendors that may span 20 years. The authors of this paper have compared it to buying a
pack of gum with a requirement to rent the store.

Fig. I.7. IT buyers should ensure their PaaS solution is portable across data centers to
avoid long-term lock-in to a single infrastructure provider. In the absence of this
understanding, some government agencies are making casual, long-term commitments to
vendors that may span 20 years.

Various interpretations of PaaS have led to a broad misconception that a Cloud PaaS
provider will also always provide the underlying IaaS resources. This misconception
arises from the common commercial practice of bundling PaaS offerings with an
underlying proprietary IaaS platform; Azure and App Engine exemplify
this practice. The NIST U.S. Government Cloud Computing Technology Roadmap,
Release 1.0 (Draft) includes the following language to further describe Platform as a
Service: For PaaS, the Cloud provider manages the computing infrastructure for the
platform and runs the Cloud software that provides the components of the platform, such
as the runtime software execution stack, databases, and other middleware components.

The PaaS Cloud provider typically also supports the development, deployment, and
management process of the PaaS Cloud consumer by providing tools such as integrated
development environments (IDEs), development versions of Cloud software, software
development kits (SDKs), and deployment and management tools. The PaaS Cloud
consumer has control over the applications and possibly over some of the hosting
environment settings, but has no or limited access to the infrastructure underlying the
platform, such as the network, servers, operating systems (OSs) or storage. It is important
to highlight that while the PaaS Cloud provider manages the computing infrastructure for
the platform, there is no requirement to actually provide the computing infrastructure. To
emphasize this point, consider the separation of personal computer hardware vendors and
operating system providers.

PaaS will evolve in similar ways to former computing models, which have clearly proven
the significance of portability. These are not new concepts. Today's Microsoft
Windows and Linux operating systems thankfully run on hardware from any number of
vendors. This allows federal buyers to invest in large software systems that will run
across hardware from a variety of vendors. The same will (and must) be true of
next-generation platforms on the Cloud.

I) b11. SaaS

Software-as-a-Service helps organizations avoid capital expenditure and pay for
functionality as an operational expenditure. Though enterprises are unlikely to use the
SaaS model for all their information systems needs, certain business functionalities, such
as Sales Force Automation (SFA), are increasingly implemented using the SaaS model.
Such demand has prompted quite a few vendors to offer SFA functionality as SaaS.
Enterprises need to adopt an objective approach to ensure they select the most appropriate
SaaS product for their needs. This paper presents an approach that makes use of the
Analytic Hierarchy Process (AHP) technique for prioritizing the product features and also
for expert-led scoring of the products.

Fig. I.8. Composition architecture is designed to draw from a number of different sources
of different types and in different locations. [MSDN]

SaaS is a software delivery paradigm in which the software is hosted off-premise and
delivered via the web. The mode of payment follows a subscription model. SaaS helps
organizations avoid capital expenditure and lets them focus on their core business instead
of support services such as IT infrastructure management, software maintenance, etc.
Hence, we see an increasing number of organizations adopting SaaS for business
applications like sales force automation, payroll, and e-commerce. In a Forrester survey,
the sales force automation application was found to be the top-ranked application being
used as SaaS. When several vendors offer SaaS-based products, the selection of a product
becomes a key issue. It involves analysis of the selection parameters and the product
offerings of the vendors. As multiple criteria are involved in decision-making, it is a
multi-criteria decision-making (MCDM) problem. Being a problem involving multiple
criteria and multiple products, it can't be solved with mere judgment or intuition.
Judgment may work fine only when the selection parameters are few. During the
selection process, the features are usually ranked or prioritized. The prioritization
involves deciding the weights of the parameters. While assigning judgmental weights, it
is quite likely that the user's judgment may be biased towards the key parameters only.
This may lead to improper priorities and incorrect weights being assigned to the
parameters. To make an informed decision, it is necessary to have quantifiable values in
place of subjective opinions. We propose the widely accepted, expert-driven Analytic
Hierarchy Process approach to deal with this problem. The remaining part of this section
discusses SaaS product selection parameters based on a literature study, the methodology
adopted, and the application of AHP to the problem at hand, followed by a conclusion.

SaaS product selection parameters
Many factors are involved in the selection of a software product. Based on experience
and interviews with experts, we propose factors for SaaS selection such as: Functionality,
Architecture, Usability, Vendor Reputation, and Cost. These factors are selected primarily
considering our case study of sales force automation.

Functionality: The functionality factor includes attributes that are typically called the
functional modules of SFA. It includes:
(i) Contact and Activity Management, for tracking customer contacts. It ensures sales
efforts are not duplicated.
(ii) Opportunity Management helps track and manage opportunities through every stage of
the sales pipeline. It includes functionality such as lead creation, lead-to-opportunity
conversion, opportunity tracking, etc.
(iii) Sales Performance Management supports territory and quota assignment to multiple
levels of sales organizations, from regions and districts to individual salespersons.
(iv) The Sales Analysis module provides dashboards and reports.

Architecture: The architecture factors are as follows:
(i) The Integration attribute covers the ability of the product to integrate with other
applications. Integration becomes quite relevant for SaaS products, as they are hosted
off-premise and hence can be perceived as difficult to integrate with on-premise legacy
applications.
(ii) Scalability refers to the SaaS product's ability to maintain reasonable response times
for users even during peak load.
(iii) Reliability refers to the SaaS product's ability to remain available to users for
given time windows. It requires vendors to deploy monitoring and diagnostic tools.
(iv) Security is considered to be the major concern for SaaS products. A vendor having
certifications such as ISO 27000 helps ensure the security adopted for handling customer
data.

Usability: Usability-related attributes are as follows:
(i) User interface includes facets such as intuitiveness, ease of use for frequently required
tasks and the aesthetic nature of graphical elements.
(ii) The Help attribute refers to the availability of easy-to-use user manuals, eLearning
modules, and context-sensitive help.
(iii) Support for mobile devices has become important, as the modern sales workforce
extensively depends on mobile devices such as PDAs.
(iv) Offline support is important. It means the SaaS product supports a mechanism that
lets users work on the system in offline mode and synchronize once connected to the
internet.

Vendor Reputation: The vendor reputation factor includes two attributes:
(i) The number of clients/users indicates the level of usage, which roughly indicates
whether the product is a fairly new entry or a well-established one.
(ii) The brand value of the vendor is also important, as sometimes a new product from a
well-known vendor may be preferred over a product having a vast customer base but
provided by a not-so-well-known vendor.

Cost: The cost factor includes two attributes: annual subscription and one-time
implementation cost. Usually, the cost of hardware and support personnel is covered
under the annual subscription, while the cost of initial consulting, configuration efforts,
etc. is covered under the one-time implementation.

Analytical hierarchy process
SaaS selection based solely on judgment is a highly cognitive and tedious process which
can be quite error prone. Humans are, however, generally good at one-to-one
comparison. If a problem is decomposed into clusters, and attributes are compared
pairwise within each cluster, then decision problems can be solved with reduced
cognitive load. Saaty developed the Analytic Hierarchy Process (AHP) method, which is
very useful in simplifying multi-criteria problems into a hierarchy, thus forming the
comparison matrix used to judge the weights. AHP deals with intuitive, rational and/or
irrational, multi-objective, multi-criteria decision making with certainty and/or uncertainty
for any number of alternatives. It breaks down a problem into its smaller constituent parts,
forming a hierarchy, and then calls for only simple pairwise comparison judgments. AHP
has a formal way of dealing with inconsistencies and hence is not dependent on the
decision analyst's ad hoc rules for consistency assurance.

The AHP process starts with hierarchy development. An advantage of a hierarchy is that
it allows focusing judgment separately on each of several properties, which is essential
for making a sound decision. Each element is compared with every other element to
decide the importance of one element over the other on a 1-to-9 scale. The elements at
every level of the hierarchy are compared in a similar way. The comparisons are checked
for inconsistency, which should not exceed 10%. The comparison matrices are
normalized, and eigenvectors (or priorities) are calculated from them. AHP has a
provision to synthesize the feedback of multiple experts to get a final prioritization.

SaaS selection methodology
The methodology adopted starts with a literature study to understand the parameters
satisfying the application requirements. These parameters are discussed with the experts
in the next phase, and the hierarchy is developed. The AHP survey instruments are
developed from this hierarchy. Two types of AHP survey instruments are developed for
pairwise comparison: one for comparison of parameters and the other for comparison of
products. Each pair is judged on the 1-9 scale. The survey respondents are experts only;
hence, the number of responses required is limited. Five experts were selected for each
survey. The mandatory requirement for an expert is experience in using the SFA
products; each expert should evaluate the product before responding to the survey. A
three-part methodology is adopted for the SaaS product selection. The first part covers the
prioritization of parameters, while the second part is about product comparison. The third
part combines the results obtained from the first two parts to rank the products.
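As a rough illustration of the AHP mechanics described above, the following sketch computes priorities from a pairwise comparison matrix by power iteration and checks Saaty's consistency ratio. The matrix values here are invented for illustration; they are not the survey data from this case study.

```python
def ahp_priorities(matrix, iterations=100):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix by repeated multiplication, normalizing after each pass."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

def consistency_ratio(matrix, w):
    """Saaty's CR = CI / RI; CR <= 0.10 is conventionally acceptable."""
    n = len(matrix)
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random indices
    return ci / ri

# Example: three selection factors compared on the 1-9 scale.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
weights = ahp_priorities(A)          # priorities, summing to 1
cr = consistency_ratio(A, weights)   # should fall below 0.10
```

The same two steps, eigenvector extraction and the consistency check, are applied to every comparison matrix in the hierarchy before the priorities are synthesized.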

Sales force automation case study
We have selected a case study of SaaS product selection for SFA at a mid-size
professional services organization. SFA is one of the key ingredients of Customer
Relationship Management systems.

The hierarchy considered for the SFA is shown in Fig. I.9. The hierarchy covers only the
selection parameters, not the products. The pairwise comparison matrices at level 1 and
level 2 of the hierarchy, shown in Table I.2, give the global and local prioritizations
respectively. These two prioritizations are synthesized to find the weight of every
attribute.

The local weights of attributes are converted into global weights using the global weights
of factors shown in Table I.2. We have considered three leading SaaS products for SFA,
referred to as A, B, and C instead of their real names. A pairwise one-to-one comparison
survey was conducted for these products with respect to each attribute shown at level 2 of
the hierarchy. This comparison gives the scoring of every product with respect to the
attributes. The local weight of every attribute and the raw score of every product are
multiplied to get the weighted score of the product for each attribute. The sum of
weighted scores, ranked in descending order, gives the ranking of the products as shown
in Table I.3. The sum shows that product C is the most suitable option.
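The synthesis step, multiplying each attribute's global weight by a product's raw score and summing, can be sketched as follows. The weights and scores below are made-up placeholders, not the actual survey results behind the tables.

```python
# Hypothetical global attribute weights (as would come from the parameter survey).
weights = {"functionality": 0.40, "architecture": 0.25,
           "usability": 0.20, "vendor": 0.10, "cost": 0.05}

# Hypothetical raw AHP scores of each product against each attribute.
scores = {
    "A": {"functionality": 0.30, "architecture": 0.25, "usability": 0.35,
          "vendor": 0.30, "cost": 0.30},
    "B": {"functionality": 0.25, "architecture": 0.30, "usability": 0.25,
          "vendor": 0.30, "cost": 0.30},
    "C": {"functionality": 0.45, "architecture": 0.45, "usability": 0.40,
          "vendor": 0.40, "cost": 0.40},
}

# Weighted score per product, then rank in descending order.
totals = {p: sum(weights[a] * s[a] for a in weights) for p, s in scores.items()}
ranking = sorted(totals, key=totals.get, reverse=True)
```

With these illustrative numbers the top-ranked product is C, mirroring the outcome reported in the case study.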

Fig. I.9 . Hierarchy

Table I.2.

Table I.3.

Table I.4.

The local weight of every attribute and the raw score of every product are multiplied to
get the weighted score of the product for each attribute. The sum of weighted scores,
ranked in descending order, gives the ranking of the products as shown in Table I.4. The
sum shows that product C is the most suitable option.

Related work
Though SaaS is a recent phenomenon, a good amount of research has been reported in the
areas of configurability [Nitu, "Configurability in SaaS (software as a service)
applications", ISEC '09: Proceedings of the 2nd Annual India Software Engineering
Conference, February 2009, pp. 19-26], security, integration [A. V. Hudli, B.
Shivaradhya, and R. V. Hudli, "Level-4 SaaS applications for healthcare industry",
COMPUTE '09: Proceedings of the 2nd Bangalore Annual Compute Conference, ACM,
January 2009], networking challenges [D. Greschler, T. Mangan, "Networking lessons in
delivering Software as a Service: part II", International Journal of Network Management,
Volume 12, Issue 6, John Wiley & Sons Inc., November 2002, pp. 317-321] and business
models [H. Liao, C. Tao, "An Anatomy to SaaS Business Mode Based on Internet",
ICMECG 2008: International Conference on Management of e-Commerce and
e-Government, 2008, pp. 215-220]. However, there is no explicit guidance available on
the selection of a SaaS product for a business application such as sales force automation.
At a generic level, guidance on using quantitative methods for software selection and
evaluation is available [M. S. Bandor, "Quantitative Methods for Software Selection and
Evaluation", Technical Note CMU/SEI-2006-TN-026, September 2006], which was
adapted in the methodology described in this paper by suitably modifying the Decision
Analysis Spreadsheet.

The selection of the best possible SaaS product, satisfying most of the requirements from
among the available alternatives, is an MCDM problem. This problem needs a thorough
understanding of requirements and product offerings. The selection process involves
multiple criteria and multiple products; hence, selection based on judgment alone fails to
identify a suitable choice. The ranking process requires a crucial step of prioritizing the
parameters and products. This step is usually performed manually and may be judgmental
or based on judgmental scales. These scales lack rigor. This work suggests the use of
AHP as a quantitative technique to address this issue. We have used AHP to calculate the
weights of selection parameters and the scores for products. These weights and scores are
more rational than subjective opinions. A case study provides a complete understanding
of the importance and significance of a quantitative method to solve SaaS selection. This
work also discusses the major parameters which are useful in a SaaS selection.


Q1: What is the definition of SaaS?
Gartner definition: Software as a Service (SaaS) is software that is owned,
delivered and managed remotely by one or more providers. The provider
delivers software based on one set of common code and data definitions that
is consumed in a one-to-many model by all contracted customers, at any
time, on a pay-for-use basis or as a subscription based on usage metrics. While
not all SaaS solutions from your provider may fit this exact definition, the
minimum criteria that all SaaS solutions from your provider meet include:

A. They are all owned, delivered and managed remotely by your provider or
a provider service delivery partner.
B. All offerings are subscription priced.
C. All offerings have a well-defined target SLA.
D. All upgrades are performed by your provider, and all customer settings
are preserved through the upgrade.

Q2: What is the difference between Cloud Computing and SaaS?
The term Cloud generally refers to a collection of infrastructure technology
and software that can be consumed over the Internet. At a fundamental level,
it's a collection of computers, servers and databases that are connected
together in a way that lets users lease access to share their combined power.
The computing power is scalable, so that buyers can dynamically increase or
decrease the amount of computing power they consume. The Cloud can be
understood to refer to anything that's hosted remotely and delivered to users
via the Internet. SaaS is a subset of Cloud computing and refers specifically
to software applications that are consumed as a service and paid for based on
usage. The customer is agnostic to the environment (hardware, operating
system, software, database and storage) on which it is installed. Given the
widespread growth of Cloud accessibility, it's widely considered to be easier,
faster and less expensive for customers to buy SaaS solutions, particularly
from larger software vendors that provide a comprehensive set of solutions.
Today, nearly every type of core business function, from human resources to
enterprise resource planning, is available via SaaS.

Q3: What are some of the key benefits of SaaS?
Organizations that have deployed SaaS find that one of its greatest benefits is
the speed with which they can deploy and scale applications, while reducing
operational cost and risk. The specific benefits that customers most
frequently cite include:
- Faster time-to-value with rapid deployment and lower upfront costs
- Opex instead of Capex spend
- Ease of use and accessibility from anywhere
- On-demand and highly scalable: scales up or down based on demand
- Automatic upgrades offering the latest solution features with minimal customer involvement
- Reduced risk by using a secure and ready-to-use infrastructure
- Faster implementations
- High solution adoption by end users leveraging best-practice implementations
- Assured service levels provided by the vendor
- Easier compliance with regulatory requirements

SaaS Economics
Q4: Will SaaS actually cost me more than my on-premises software over
the long term?
SaaS is a different pricing and consumption model compared to on-premises
perpetual software licensing. Each approach has its own cost advantages
specific to customer requirements, meaning different organizations may find
different values for both TCO (total cost of ownership) and ROI (return on
investment). In order to perform a fair
comparison, you will need an accurate estimate of the internal cost for
housing and managing the same service offered by SaaS, which may be
difficult to establish. The more practical way to address this question is to
highlight major considerations that may impact the TCO comparison between
these two software consumption models.

These benefits of SaaS accrue only when the SaaS offering is used as-is
without too many custom changes or extensions. By definition, SaaS vendors
optimize their operations by making things repeatable and standardized. Also,
large-scale organizations may find that they have internal economies of
operation that compare well with those of SaaS providers. Refer to the section on
Customization and Integration for more answers to questions on whether you
should choose to deploy the solution yourself or subscribe to SaaS.

Q5: Who owns my data and how much control do I have over the data?
Using terms that have been formalized in the promulgation of privacy laws,
SaaS solutions from the provider are configured so that the customer
assumes the role of the data controller, while the provider acts as the data
processor. The data controller determines how the data is used; who has the
right to access, amend and delete it; and how the data is to be downloaded
and stored locally, at any time they wish. At any point, the data controller
can request to stop using the SaaS solution, and the data can be extracted
and returned in a secure manner. The provider, acting as the data processor,
does not retain customer data beyond the need to:
- Deliver the service or comply with regulatory requirements
- Provide supporting financial data for billing inquiries
- Effectively implement backups and disaster recovery as outlined in the applicable SaaS Listing

During all periods in which the provider has or retains customer data, that
possession is governed by a security policy. Providers have policies and
procedures in place designed to protect the security, integrity and
confidentiality of their customers' data, and adherence to these policies is
validated through regular, third-party external audits.

Q6: How does SaaS solutions from a provider safeguard my data?
Top-tier providers have a dedicated SaaS Operations group that is responsible
for running and monitoring the SaaS solution. Because they may have
customers that span multiple industries (banking, insurance, pharmaceutical,
healthcare, energy and government), they adopt very stringent policies that
often exceed the requirements for any one industry, allowing all their
customers to benefit from such heightened requirements. They also have
detailed procedures in place to ensure the necessary levels of physical
security, network security, application security, internal systems security,
operating system security and third-party certification. As a check on all
these procedures, your provider should have an independent compliance team
that sets the policies and coordinates internal audits and third-party audits
to ensure that the requirements are being met.

Your provider will select the location for storing and accessing customer data
in accordance with the security needs of the SaaS solution. Whether they use a
co-location facility or an Infrastructure-as-a-Service (IaaS) vendor, they
understand that they are responsible for the security of their customers' data.
Top-tier providers select only best-in-class vendors and require all the
services those vendors provide to be subject to similar reviews and audits.
Likewise, they carefully select personnel and require them to undergo
background checks as a standard process before taking up any activity on the
SaaS infrastructure.
These background checks are applicable to all provider employees,
contractors and sub-contractors.

Q7: Where is our data physically located and how is it managed?
All customer data, primary and backup, is stored in data centers in the region
specified in the applicable SaaS Listing. Data is stored only on devices that
are attached to the applicable server and not on devices such as flash drives,
compact discs or tape. Data is backed up and retained per the data retention
policies defined in the SaaS Listing for the specific offering. Access to data is
limited to individuals whose role requires such access. There are procedures
in place to ensure only authorized individuals gain access to the data. These
procedures apply to all individuals whom your provider employs to deliver
SaaS services, whether they are provider employees, provider-hired
contractors or subcontractors.

Q8: What happens to our data upon termination?
Upon termination or expiration of the subscription, customer data is subject
to the following conditions:
- If requested by the customer, the data is exported to an industry-standard format and shared with the customer
- A portion of the data or metadata that is required for billing and audit purposes is retained; all other data is securely deleted from the primary and backup locations

Q9: Our company needs to adhere to strict internal and external
regulatory controls. Does that limit us to on-premises software?
The regulatory controls generally apply to all infrastructure and software
operations, irrespective of whether it is deployed on-premises or SaaS. Most
enterprises are distributed and use dedicated hosting centers. Therefore in all
likelihood, even with on-premises, your servers are not located in your own
building; nor are your operators sitting at the console when interfacing with
the servers. Very rarely do regulations expressly require that the software
reside in-house. They typically require a set of documented controls and
demonstrable implementation of those controls. In that sense, SaaS may
actually help you. Due to the size and diversity of its customer base, your
provider should be able to invest much more in security, monitoring and
automation than most large enterprises. Of course, it helps that providers
author much of the software used by enterprises. Furthermore, providers
should undergo stringent security procedural audits that test the data
center's level of security.

Q10: What types of certifications and/or third-party audits do SaaS
solutions from your provider undergo?
A top-tier provider undergoes multiple audits. Not all SaaS solutions are
audited against all standards, but the majority of their operational procedures
are written to address the requirements of these standards. So while your
specific SaaS solution may not require an explicit audit, your provider may
be holding the offering to the additional standards built into the provider's
policies and procedures. Many provider SaaS solutions undergo an SSAE-16
Type I audit. Additionally, your provider should undergo (or will undergo in
the near term) the following audits:
- Payment Card Industry (PCI) Data Security Standard (DSS): applicable to credit/debit card oriented solutions
- Visa ACS: applicable to SaaS solutions that hold card-issuer-specific cryptographic keys
- SSAE-16 Type II SOC 1: some providers have added SOC 2 for some applications and will extend it to other SaaS solutions over the near term
- FedRAMP: some providers are already in process for some SaaS solutions and expect to complete audits soon

Q11: What is the Payment Card Industry (PCI) Data Security Standard (DSS)?

PCI refers to the Payment Card Industry (i.e., issuers of credit, debit, prepaid,
e-purse, ATM and point-of-sale cards) and, in this context, specifically to
the requirements issued by the PCI Security Standards Council (PCI SSC) to
protect the security and confidentiality of credit card data. PCI SSC was
founded by leading credit card issuers and payment processing networks
including American Express, Discover Financial Services, JCB International,
MasterCard and Visa. PCI DSS outlines 12 specific top-level controls that are
further detailed into 300+ sub-controls.

Q12: What does it mean to have an SSAE 16 Type II SOC 1 and an SSAE 16
Type II SOC 2?
A Service Organization Controls (SOC) report, the successor to the SAS 70
Type II report, is a report against a well-recognized auditing standard
(Statement on Standards for Attestation Engagements (SSAE)) developed by
the American Institute of Certified Public Accountants (AICPA) and
applicable to service providers such as SaaS vendors. The Type II report,
produced after an annual or twice-yearly audit, covers the activities of the
SaaS provider over a period of time (the audit period) and examines
conformance to documented controls over that period. The range of controls
is broad, covering everything from hiring, the setup and hardening of servers,
the granting and revoking of access to secure systems, and the retention and
review of logs, to customer onboarding and change management. The SOC 2,
in addition to confirming adherence to
the set of controls covered in SOC 1, provides an attestation from the auditors
on the effectiveness of the controls for meeting the Trust Services Principles:
security, availability, processing integrity, confidentiality and privacy.

Q13: What is the FedRAMP program? The Federal Risk and Authorization
Management Program (FedRAMP) is a government-wide program that
provides a standardized approach to security assessment, authorization and
continuous monitoring for Cloud products and services. This approach uses a
"do once, use many times" framework that saves the cost, time and staff
required to conduct redundant agency security assessments.

Q14: We have separate regulations for my country and my industry.
How do you support regional and vertical-specific requirements for
Security and Data privacy?
Your provider should self-certify and conform to Safe Harbor requirements.
In addition, many providers have mapped their controls to EU security and
data privacy regulations as a data processor. Some providers are in the
process of expanding their security frameworks to map onto other standards,
including ISO 27001. While they do not directly undergo vertical-specific
certifications such as HIPAA or CFR 21 Part 11, you can use their controls to
map to these requirements. If you have additional requirements specific to
your region or business, your provider will work with you to understand the
requirements and find the right SaaS solution that fits your needs.

Q15: Are all certifications available for all SaaS solutions from your provider?
No. Different SaaS solutions from your provider require different
certifications. For more information on a specific SaaS solution from your
provider, please refer to the applicable SaaS Listing.

Q16: Where are SaaS solutions from your provider hosted?
SaaS solutions from your provider are hosted in data centers across North
America, Europe and Asia Pacific. Many provider data centers meet or
exceed Tier 3 standards as defined by the Uptime Institute. Their facilities
and control processes have been designed to meet the requisite standards for
availability and security.

Q17: What steps do you take to protect a SaaS application instance
against infrastructure failures?
Your provider's data centers should meet or exceed Tier-3 standards, which
ensure that both the infrastructure and application layers are protected from
events such as power failures and network outages through redundant
Internet connectivity and power supply, including UPS and generators. In
addition, the components they use typically have built-in redundancy,
including dual power connectors, multiple CPUs and RAID storage, guarding
against single points of failure. Many providers have a 24x7 fully staffed
Network Operations Center (NOC) that is constantly watching for any issues
reported by their monitoring software and is trained to respond to any critical
issue immediately.

Q18: How is my instance of the SaaS application protected against access
by another customer or failures caused by another tenant customer?
SaaS solutions from your provider may use different architectures. In some
cases, each customer runs on a separate instance, so the credentials and URL
used to access one customer's instance are different from those of other
customers. In cases where the provider has a multi-tenant, single-instance
architecture, many have an access control layer that allows each customer to
access only their own configurations and data. All configurations and data
are tagged to each customer, so the access control layer can block potential
compromise points. Customers are protected from failures caused by another
customer in one of two ways. In single-tenant instances, each customer is
deployed with their own stack of the solution and is thus isolated from other
customers' solution stacks. In multi-tenant solutions, the application
addresses customer separation, which prevents one tenant from affecting the
solution stack, while the deployment is redundant to ensure the application is
highly available.

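The tenant-tagging approach described above can be illustrated with a short sketch. This is a hypothetical model (the `TenantScopedStore` class and `tenant_id` field are invented for illustration), not any particular provider's implementation; real platforms enforce isolation at several layers.

```python
# Illustrative sketch of a multi-tenant access-control layer: every record is
# tagged with the owning tenant's ID, and the data-access layer filters on the
# authenticated tenant before any result is returned or modified.
# (Hypothetical example; not a specific provider's architecture.)

class TenantAccessError(Exception):
    """Raised when one tenant attempts to touch another tenant's record."""

class TenantScopedStore:
    def __init__(self):
        # Each record carries a mandatory tenant tag.
        self._records = []  # list of dicts: {"tenant_id": ..., "data": ...}

    def insert(self, tenant_id, data):
        self._records.append({"tenant_id": tenant_id, "data": data})

    def query(self, authenticated_tenant_id):
        # Only rows tagged with the caller's own tenant ID are ever returned,
        # so tenants cannot read each other's data.
        return [r["data"] for r in self._records
                if r["tenant_id"] == authenticated_tenant_id]

    def delete(self, authenticated_tenant_id, data):
        for r in list(self._records):
            if r["data"] == data:
                if r["tenant_id"] != authenticated_tenant_id:
                    # Cross-tenant writes are blocked at the access layer.
                    raise TenantAccessError("cross-tenant access blocked")
                self._records.remove(r)

store = TenantScopedStore()
store.insert("acme", {"invoice": 1})
store.insert("globex", {"invoice": 2})
assert store.query("acme") == [{"invoice": 1}]  # each tenant sees only its own rows
```

The same check-on-every-access idea applies whether the "tag" is a database column, a schema, or a separate instance per customer.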
Q19: We are a 24x7 operation. Can we expect around-the-clock support
if we move to a SaaS model?
Not all SaaS providers are created equal and not every SaaS vendor can
provide 24x7 support, so it is important to evaluate your SaaS vendor
carefully. For top-tier providers, outstanding round-the-clock software
support is part of their DNA. Many established providers have provided
software support to enterprises of all sizes, including most of the Global 100,
for a number of years. In addition to software-based proactive monitoring,
providers have a staffed 24x7 Network Operations Center (NOC) where they
continuously monitor the SaaS solutions and take immediate corrective
action as soon as they detect any issue. Support for SaaS solutions from your
provider should include multiple access methods and support services to
meet your operational and business needs, including:
- Online support for self-service and case management
- 24x7x365 telephone support for Severity 1 cases
- Direct telephone support for Severity 2 to 4 cases during local business hours

Backup and Recovery
Q20: How do you manage data backups?
At present, data backups are managed separately for each SaaS solution from
your provider; however, as a general rule, local backups are completed
(typically multiple versions) at least every 24 hours and stored locally in the
event that data needs to be recovered or restored due to a server or storage
failure. Offsite backups are taken at regular daily or weekly intervals
(depending on the SaaS offering) and stored either at one of the provider's
alternate hosting sites or at an industry-standard backup/escrow provider.
The offsite backups are used to recover or restore data at a secondary hot or
cold site (depending on the SaaS solution) in the event the primary site is
down. Please refer to the applicable SaaS Listing for details on the
availability of data backups and the location of data.
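As a rough sketch of retention logic of this kind, assume a hypothetical policy of seven daily local backups plus one offsite backup per ISO week for four weeks; the actual retention periods are defined per offering in the applicable SaaS Listing, so both numbers here are assumptions.

```python
# Hypothetical backup-retention sketch: given backup dates, decide which
# local (daily) and offsite (weekly) copies to keep under an assumed policy
# of 7 daily and 4 weekly backups.
from datetime import date, timedelta

def backups_to_keep(backup_dates, today, daily_keep=7, weekly_keep=4):
    """Return the set of backup dates retained under the assumed policy."""
    keep = set()
    # Keep every backup from the last `daily_keep` days (local copies).
    for d in backup_dates:
        if (today - d).days < daily_keep:
            keep.add(d)
    # Keep the most recent backup per ISO week for the last `weekly_keep`
    # weeks (offsite copies).
    weeks = {}
    for d in sorted(backup_dates, reverse=True):
        key = d.isocalendar()[:2]  # (ISO year, ISO week number)
        if key not in weeks and (today - d).days < weekly_keep * 7:
            weeks[key] = d
            keep.add(d)
    return keep

# Example: 30 consecutive daily backups ending today.
today = date(2024, 3, 31)
dates = [today - timedelta(days=n) for n in range(30)]
kept = backups_to_keep(dates, today)  # 7 daily copies + 3 older weekly picks
```

Real rotation schemes (e.g. grandfather-father-son) add monthly tiers, but the pruning principle is the same.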

Business Continuity and Disaster Recovery
Q21: Do providers have a Business Continuity and Disaster Recovery plan?
Yes. Your provider's Business Continuity Management (BCM) program
consists of crisis management, business continuity planning and disaster
recovery. BCM ensures that the organization is prepared to respond to
unplanned business interruptions that affect the availability of critical
business processes and the IT services that support those processes. Your
provider establishes and maintains policies and procedures relevant to
contingency planning, recovery planning and proper risk controls to help
ensure the continued performance of its SaaS solutions in the event of any
business disruption worldwide. These plans provide for off-site backup of
critical data files, program information, software, documentation, forms and
supplies, as well as alternative means of transmitting and processing program
information at an alternate site with resource support. The recovery strategy
provides for recovery after both short-term and long-term disruptions in
facilities, environmental support and data processing equipment. Testing is
performed on a routine basis following industry-standard best practices.

Q22: Since the data centers are geographically dispersed and far from
my office, how do I make sure there is no delay in the response times?
There are three contributors to application response times:
- Latency: the time it takes for data to travel between the end user's system and the server that is processing the data. Many providers set up services in various regions of the world to minimize the number of hops that data has to travel.
- Bandwidth: the size of the connection to the servers. The best providers run their services from data centers that are Tier-3 or better and subscribe to plans that allow them to increase bandwidth based on demand. This allows for minimal delay in returning the data requested by the end-user system. Of course, the response seen at the end-user system will also depend on other factors, such as the available last-mile bandwidth to the customer infrastructure.
- Application performance: the processing time of the transaction from request to response at the server itself, unaffected by latency or bandwidth. Your provider's applications should be tested for high performance under load, with performance monitored continuously so that corrective action can be taken in the event of any degradation.

To achieve consistent and predictable operations, your provider's SaaS
solutions should start with clear design principles and targets to ensure
performance. For each product, specific objectives for network latency,
response time and availability are proactively monitored and reported upon
to ensure service levels are met and action is taken when an indicator
suggests a problem could occur. The provider monitors synthetic connectivity
and response times from various locations to make sure there is minimal latency.
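The three contributors can be combined in a back-of-the-envelope model. All figures below are hypothetical placeholders, not measured values from any provider:

```python
# Back-of-the-envelope decomposition of end-to-end response time into the
# three contributors discussed above: network latency (round trip), transfer
# time governed by bandwidth, and server-side processing time.
def response_time_ms(latency_ms, payload_kb, bandwidth_kbps, processing_ms):
    """Estimate total response time in milliseconds."""
    # Transfer time: payload in kilobits divided by link speed in kbit/s.
    transfer_ms = payload_kb * 8 / bandwidth_kbps * 1000
    # One latency hop for the request, one for the response.
    return 2 * latency_ms + transfer_ms + processing_ms

# Hypothetical example: 40 ms one-way latency, a 200 KB response over a
# 10 Mbps link, and 120 ms of server processing.
total = response_time_ms(latency_ms=40, payload_kb=200,
                         bandwidth_kbps=10_000, processing_ms=120)
# total == 360.0 ms; here latency and transfer together outweigh processing,
# which is why providers place services close to users.
```

Halving the one-way latency (e.g. by serving from a nearer region) removes two hops' worth of delay, while extra bandwidth only shrinks the transfer term.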

Customization and Integration
Q23: Do I need an on-premises solution if I want to customize my application?
That really depends on the level of customization that you require. Many of
the organizations your provider works with have experienced the
consequences of highly customized off-the-shelf software, only to be stuck
later with an implementation that no longer resembles the original product
and can't be upgraded without a significant investment in time and money.
SaaS solutions from a provider should be designed and built on a principle
of "Configure, Don't Code" to help protect customers from customizing
themselves into a corner. SaaS solutions from the provider allow service
resources, customer application resources, or other third-party consultants to
sensibly mold the application to support the identified business requirements
through configuration parameters rather than creating custom code. This
method ensures the application is easily updated as software releases become
available without any significant cost or time investment. Remember that the
reduced cost of operations in a SaaS model is predicated upon the fact that
each customer does not have a separate, customized codebase. The common
architecture and code enable the provider to automate operations and reduce
the total cost of ownership.

Q24: SaaS makes sense for smaller companies but does it make sense for
a larger enterprise like mine?
In the past, smaller business departments within large enterprises and even
some vertical industries were early adopters of SaaS solutions. But today,
enterprise SaaS is mainstream, offering a variety of solutions to a wide
spectrum of medium, large and very large customers whose employee base
ranges from hundreds to tens of thousands of potential SaaS solution users.
While enterprises will choose the appropriate solutions to address their
business requirements, for a great number of enterprises, SaaS solutions offer
the best means of cutting costs, meeting project timelines, and increasing
solution adoption. The speed and ease of deployment, limited capital expense
and lower TCO are the most critical factors driving SaaS growth from all
segments of business today. In fact, many SaaS customers today are large
enterprises that are leveraging the benefits of the provider's SaaS delivery
model without compromising the functionality and capabilities often
associated with on-premises software solutions. Their infrastructure and
software are architected such that they can quickly scale the service up or
down depending on your level of usage.

Q25: Do I need an on-premises solution if I need to integrate several
applications? Most of CA Technologies' products leverage integration with
on-premises or other Cloud applications. Providers have designed products
with ease of integration in mind. These applications have web-services-based
APIs that allow them to be integrated easily. Complex customer-specific
integrations are implemented with ease and speed because of your
provider's experience with application integrations.

Q26: What are the providers guidelines for customizations and who can
perform the customizations?
As discussed above, provider products follow the "Configure, Don't Code"
framework. However, there are a few solutions that do allow you to write and
deploy custom code. You can engage a developer of your choice or use
services offered by your provider to develop the customizations. However,
providers have coding guidelines that specify how customizations should be
developed, with a tightly controlled mechanism for testing and deploying the
custom code in production.

They have three separate environments for solutions that require extensive
configurations and customizations: the development environment to develop
the code, a staging environment to test the code in conditions that are very
similar to production and the production environment. Once the custom code
is rigorously tested in the development and staging environments, it is
migrated to the production environment. This change management process
ensures tight control on custom code and minimizes errors and performance
issues in the production environment.
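The three-environment change-management process above can be sketched as a simple state machine. The `ChangeRequest` class below is invented for illustration; real promotion tooling is provider-specific:

```python
# Sketch of the controlled promotion pipeline described above: custom code
# moves development -> staging -> production only in order, and only after
# it has passed testing in the current stage. (Hypothetical model of the
# change-management process, not a specific provider's tooling.)
STAGES = ["development", "staging", "production"]

class ChangeRequest:
    def __init__(self, name):
        self.name = name
        self.stage = "development"
        self.tested = set()  # stages in which the change has passed testing

    def record_test_pass(self):
        self.tested.add(self.stage)

    def promote(self):
        # Refuse to advance untested code or to go past production.
        if self.stage not in self.tested:
            raise RuntimeError(f"{self.name}: not yet tested in {self.stage}")
        i = STAGES.index(self.stage)
        if i == len(STAGES) - 1:
            raise RuntimeError(f"{self.name}: already in production")
        self.stage = STAGES[i + 1]

cr = ChangeRequest("custom-report-widget")
cr.record_test_pass()   # tested in development
cr.promote()            # -> staging
cr.record_test_pass()   # tested in staging
cr.promote()            # -> production
```

The point of the model is that skipping a stage is impossible by construction, which is exactly what keeps errors out of the production environment.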

Maintenance and Upgrades
Q27: How will my provider make sure my applications are up-to-date?
One of the key benefits of SaaS is that all updates and upgrades of the
product are managed by the SaaS provider; the customer does not need to
worry about installing software patches or updates, or about keeping up with
changing compliance requirements for that product's usage. Although
security patches are deployed immediately, for non-urgent patches and
upgrades providers will have a periodic schedule for applying them.
Maintenance falls into three categories:
- Scheduled Periodic Maintenance: these windows are typically scheduled for the whole year, at least 3 months in advance, and during local non-business hours. There is limited customer input over these scheduled windows, as infrastructure maintenance performed during them may impact multiple or all clients. Security patches and other operating system updates are applied during these windows. A reminder notification will be sent at least five days prior to these maintenance windows.
- Critical Planned Maintenance: periodically, critical maintenance involving security or system stability may be required, and putting it off until the next available scheduled maintenance window may not be feasible. A 72-hour notice will be provided to customers for these activities. Customers may request small adjustments to these maintenance plans, and your provider will make reasonable accommodations for these requests when possible.
- Unplanned Maintenance: unplanned downtime is defined as any loss of production system availability without at least 72 hours of advance notice to customers. These downtimes are generally system-fault issues but can also be proactive, emergency maintenance performed to prevent a system failure from occurring. These unplanned events will typically be charged against the SLA target.

Occasionally, the provider's software development team, software and
hardware vendors, or security authorities provide emergency patches that
must be applied to your environment to prevent attacks or outages. This
emergency patching may result in an unplanned service interruption. Notices
of service interruption will be sent as soon as the maintenance is scheduled
or monitoring has determined a customer's system is unavailable; a minimum
of 72 hours of advance notice is provided when practical. In all cases, the
provider will make every effort to minimize, and even eliminate, any system
outage during maintenance, and customers must provide contacts that can be
notified before and during the maintenance window. Most providers
recommend that customers set up email aliases so that notifications are
received and nothing is missed because an individual is on vacation or
traveling. The exact patch schedule may vary by product; customers should
refer to the service documentation for applicable maintenance windows and
notification methods.
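The notice periods described above can be captured in a small helper. The lead times mirror the text (five days for scheduled windows, 72 hours for critical planned maintenance); the window date itself is a made-up example, and any real schedule comes from the provider's service documentation.

```python
# Sketch of the notification-lead-time rules: compute the latest moment a
# notice may be sent for a given maintenance window. (Illustrative only;
# the lead times are taken from the text above.)
from datetime import datetime, timedelta

LEAD_TIMES = {
    "scheduled": timedelta(days=5),        # reminder at least 5 days prior
    "critical_planned": timedelta(hours=72),  # at least 72 hours' notice
}

def latest_notice_time(window_start, maintenance_type):
    """Latest permissible notification time for a maintenance window."""
    return window_start - LEAD_TIMES[maintenance_type]

# Hypothetical maintenance window starting at 02:00 local time.
window = datetime(2024, 6, 15, 2, 0)
scheduled_deadline = latest_notice_time(window, "scheduled")        # June 10, 02:00
critical_deadline = latest_notice_time(window, "critical_planned")  # June 12, 02:00
```

Anything announced after `critical_deadline` would, by the definition above, count as unplanned downtime against the SLA target.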

Q28: How will the upgrade process impact my configurations?
The software is designed in a way that customer configurations are preserved
through patches and upgrades. The configurations are stored in the database
or as files in a specific location. Once the provider applies the patch or
upgrade, the data is migrated to the newer version and the configurations are
migrated automatically. This is a major benefit of the SaaS model.

SaaS Solutions Strategy
Q29: Is the on-premises software option still available to provider customers?
That will depend on the specific software. In some cases, the on-premises
software option will be available to provider customers; however, there will
be cases where the specific software application will only be available as a
SaaS option, as described below. For products that have nearly identical
on-premises and SaaS versions, customers can purchase either option and the
options are interchangeable. The customer can move from a SaaS version to
an on-premises version without any significant loss of functionality.
However, if a customer has implemented an on-premises version, they may
have to remove all customizations and pay for a migration to the SaaS model.
Certain products have different packaging for on-premises and SaaS. In that
case, the customer will have to decide on a delivery model. Moving from one
model to another may still be possible, but there may be significant
differences in functionality between the two models. Some products are only
available in the SaaS model and have no on-premises option.

Q30: Do providers provide a proof of concept of their product including
new features?
Providers should offer a proof of concept for their SaaS products. However,
for many SaaS solutions, a proof of concept (POC) is useful only when it is
integrated with your other on-premises or third-party systems. That means
the cost and effort of a proof of concept are similar to those of an
implementation of a production system. Therefore, it is recommended that
customers who are new to the product/service purchase the SaaS solution for
a small user population for a short subscription period, with an option to
extend usage at a later date. This approach is typically more effective for
customers and generally provides better results than a POC. Customers are
able to integrate on-site and SaaS components, use the service in a realistic
setting (with a pilot team) and draw conclusions from the experience.
Customers are also able to iteratively customize and extend the application
based on initial experience before deciding on a larger rollout.

I) c. Big Data and Cloud Technologies

We live in an era of Big Data, one that embeds huge potential but also
increased information complexity, risk and insecurity, as well as information
overload and irrelevance. Business intelligence and analytics are also
important in dealing with data-driven problems and solutions in
contemporary society and the economy. Analysts, computer scientists,
economists, mathematicians, political scientists, sociologists and other
scholars are clamouring for access to massive quantities of data in order to
extract meaningful information and knowledge. Very large data sets are
generated by and about organisations, people, and their collaboration and
interactions in digital business ecosystems. For example, connected devices
such as smartphones, RFID readers, webcams and sensor networks add a
huge number of autonomous data sources. Scholars debate the potential
benefits, limitations and risks of accessing and analysing huge amounts of
data such as financial data, genetic sequences, social media interactions,
medical records, phone/email logs, government records and other digital
traces generated by people and organisations. With the development of
Internet communication and collaboration, data is playing a central and
crucial role, and data-intensive applications are now widely developed and
used. Applications such as Google+, Twitter, LinkedIn and Facebook
generate massive amounts of data, and data-intensive applications such as
eBay and Amazon store and process data in a Cloud environment.

Big Data could be beneficial in resolving critical issues, providing the
potential for new insights for the advancement of medical research
(especially cancer research), global security, logistics and transportation
solutions, the identification and prediction of terrorist activities, and dealing
with socio-economic and environmental issues. The logistics sector is ideally
placed to benefit from the technological and methodological advancements
of big data. Logistics providers manage a massive flow of goods, which
creates massive data sets. Millions of shipments every day, of different
origins, destinations, sizes, weights, contents and locations, are tracked
across global delivery networks (e.g., DHL, UPS). However, this present and
past tracking data is not fully exploited to deliver business value. Most likely
there is huge untapped potential for improving operational efficiency and
customer experience, and for creating useful new business models based on
the exploration of big data.

Big Data implies a complex data infrastructure, and powerful new data
technologies and management approaches are needed. These solutions are
directed at improving decision-making processes and forecasting through the
application of advanced exploratory data techniques, data mining, predictive
analytics and knowledge discovery. The key characteristics that define Big
Data are volume, velocity, variety and value; veracity can also be considered
an additional characteristic. The related big data models are presented in
Fig. I.10.

Fig. I.10. Most enterprises are trying to build specific, tailored solutions in-house to address their basic needs. The
Big Data solution space is still evolving and there are many opportunities for innovation and creativity; the solution
market for Big Data remains largely untapped.

The story is a bit different when it comes to real-time analytics. Enterprises clearly understand the importance of
real-time analytics and the value it provides to their current business. As a result, vendors have already built the
real-time analytical solutions that the market wants and that help enterprises reshape their existing business models.

Because of its characteristics, the Cloud is an enabler of big data acquisition and of
the associated software processing tools and strategies. According to Gartner's
estimate, 50% of data will be stored in the Cloud by 2016. In reality, however, the
Cloud has not yet been widely used for data analytics, especially in practical
applications. The availability of Cloud-based solutions has dramatically lowered the
cost of storage, amplified by the use of commodity hardware, even on a pay-as-you-go
basis, making it possible to process large data sets effectively and in a timely manner.
Big data can thus be analyzed "as a service": Google BigQuery, for example, provides
real-time insights into massive data sets on a Cloud-based platform.

In Cloud computing, data and software applications are defined, developed and
implemented as services. These services form a multi-layered infrastructure and are
described as follows:

1. SaaS: applications are hosted and delivered online via a web browser offering
traditional desktop functionality

2. PaaS: the Cloud provides the software platform on which systems are built and run
(as opposed to just the underlying infrastructure)

3. IaaS: a set of virtual computing resources, such as storage and computing capacity,
is hosted in the Cloud; customers deploy and run their own applications on these
resources to obtain the services they need.

There is, however, a recognized tension between Big Data strategies and solutions on the
one hand and information security and data privacy requirements on the other. Big data
might enable privacy violations and information security breaches and, by consequence,
decrease trust in data-as-a-service in the Cloud. Big Data stored and processed in the
Cloud can lack centralized control and ownership.

According to the McKinsey Global Institute, big data is seen as the next frontier for
innovation, competition and productivity, and as such the related applications will
contribute to economic growth. The positive impacts of big data offer huge potential
for organizations. In order to achieve these aspirations, several issues should be
analyzed and discussed in the context of complex systems, using systems approaches such
as holistic thinking and system dynamics. Major issues are therefore emerging, and this
work-in-progress attempts to discuss a few key aspects of the development and adoption
of data mining techniques and strategies for Cloud-based big data applications.

I) c1. Background and Research Approach

Analysts Haluk Demirkan and Dursun Delen (2013) have defined some research directions,
including dealing with affordable analytics for Big Data in the Cloud. This means using
open-source, free-of-charge data/text mining algorithms and associated commercial tools
(e.g. R, RapidMiner, Weka, GATE, etc.). New approaches need to provide solutions for
moving these tools to the Cloud and producing efficient and affordable applications for
discovering knowledge and patterns from very large/big data sets, directed at supporting
business intelligence and decision support systems applications.

The principles of data/information-as-a-service, data/information-security-as-a-service,
and analytics-as-a-service are explained in the context of service-oriented
architecture. However, Cloud platforms do not completely follow service-oriented
thinking, and there is even a debate as to whether Cloud computing is different from
service-oriented architectures and grid computing.

The main motivations for adopting Cloud computing for analytics applied to large (big)
data sets are the accessibility of Cloud solutions from outside the organization's
firewall-secured network. Cloud-based business analytics are also cost-effective and
easy to set up and test, and the results are easy to share outside the organization.
Greg Sheldon, CIO of Elite Brands, said: "The biggest benefit is to be able to access
huge amounts of information from anywhere you have web access, specifically on an iPad.
This is beneficial to our field sales team when information is needed on the ..."
The main research questions are related, but not limited to the following aspects:

1. In the context of Cloud-based big data, how will analytics (e.g. data mining) and the
information and knowledge management disciplines and strategies evolve?

2. What should be the techniques, strategies and practices to increase the benefits and
minimize the information risks?

3. How to deal with the growing number of security breaches and cyber security risks and
increase organizational awareness, business agility and resilience?

4. How to adapt existing legislation, such as data protection law, regulations and
standards? Ethical issues will also be considered.

I) c2. Efforts and Challenges of Big Data Mining and Discovery

Considering Big Data to be a collection of large, complex data sets that are difficult
to process and mine for patterns and knowledge using traditional database management
tools or data processing and mining systems, this paragraph provides a briefing on
existing efforts and challenges. While the term big data presently concerns literal data
volumes, Wu et al. (2013) have introduced the HACE theorem, which describes the key
characteristics of big data as:

(1) huge based on heterogeneous and diverse data sources,

(2) autonomous with distributed and decentralized control, and

(3) complex and evolving in data and knowledge associations.

Generally, business intelligence applications use analytics grounded mostly in data
mining and statistical methods and techniques. These strategies are usually based on
mature commercial software systems such as RDBMS, data warehousing, and OLAP.

Since the late 1980s, various data mining algorithms have been developed, mainly within
the artificial intelligence and database communities. At the IEEE 2006 International
Conference on Data Mining (ICDM), the 10 most influential data mining algorithms were
identified based on expert nominations, citation counts, and a community survey. In
ranked order, these techniques are: C4.5, k-means, SVM (support vector machines),
Apriori, EM (expectation maximization), PageRank, AdaBoost, kNN (k-nearest neighbors),
Naïve Bayes, and CART. These algorithms cover classification, clustering, regression,
association rules, and network analysis, and most of them have been implemented and
deployed in commercial and open-source data mining systems. Analysts have compared
database management systems and analytics, as well as ETL, with the use of MapReduce and
Hadoop. Hadoop was originally a (distributed) file system applying the MapReduce
framework, a software approach introduced by Google in 2004 to support distributed
computing on large/big data sets. Recently, Hadoop has developed into, and is used as, a
complex ecosystem that includes a wider range of software systems, such as HBase (a
distributed table store), ZooKeeper (a reliable coordination service), and the Pig and
Hive high-level languages that compile down to MapReduce components (Rabkin and Katz,
2013). Therefore, in recent conceptual approaches Hadoop is primarily considered an
ecosystem, infrastructure or framework, and not just the file system alongside MapReduce
components.
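As a rough illustration of the MapReduce programming model discussed above, the classic
word-count job can be expressed as a map phase, a shuffle, and a reduce phase. This is a
minimal sketch in plain Python, not the Hadoop API; the function names are illustrative:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group intermediate values by key, as the framework would."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data in the cloud", "mining big data"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts["big"] == 2, counts["cloud"] == 1
```

In a real Hadoop deployment the shuffle is performed by the framework across the
cluster; only the map and reduce functions would be supplied by the programmer.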

Big Data and Cloud computing frameworks include Google MapReduce, Hadoop MapReduce,
Twister, Hadoop++, HaLoop, and Spark, which are used to process big data and run
computational tasks. Cloud databases are used to store the massive structured and
semi-structured data generated by different types of applications; the most important
include BigTable, HBase, and HadoopDB. In order to implement an efficient big data
mining and analysis framework, data warehouse processing is also important; the most
important data warehouse processing technologies include Pig and Hive.

Catalin Strimbei (Smart Data Web Services, Informatica Economica) has suggested a
different conceptual interpretation of OLAP technology in light of the emergence of web
services, Cloud computing and big data. One of the most important consequences could be
widely open access to web analytical technologies; the related approach evaluated the
viability of OLAP web services in the context of Cloud-based architectures. There are
also a few reported practical applications of Big Data mining in the Cloud. Pankesh
Patel et al. (Service Level Agreement in Cloud Computing) have explored a practical
solution to the big data problem using a Hadoop data cluster and the Hadoop Distributed
File System alongside the MapReduce framework, with a big data prototype application and
scenarios. The outcomes obtained from various experiments indicate promising results for
addressing Big Data implementation problems.

The challenges for moving beyond existing data mining and knowledge discovery
techniques (NESSI, 2012, Witten et al, 2011) are as follows:

1. a solid scientific foundation to support the selection of a suitable analytical method and
a software design solution

2. new efficient and scalable algorithms and machine learning techniques

3. the motivation for using Cloud architecture for big data solutions, and how to
achieve the best performance when implementing data analytics on a Cloud platform (e.g.,
big-data-as-a-service)

4. dealing with data protection and privacy in the context of exploratory or predictive
analysis of Big Data

5. software platforms and architectures alongside adequate knowledge and development
skills to be able to implement them

6. the ability to understand not only the data structures (and their usability for a
given processing method), but also the information and business value that is extracted
from Big Data.


The emergence of the Big Data movement has energized the data mining, knowledge
discovery in databases, and associated software development communities, and it has
introduced complex, interesting questions for researchers and practitioners. As
organizations continue to increase the amount and value of collected data, formalizing
the process of big data analysis and analytics becomes overwhelming. In this tutorial,
we discussed some existing approaches and analyzed the main issues of big data mining
and knowledge and pattern discovery in the data-driven Cloud computing environment. This
research will progress by providing theoretical and practical approaches that will be
tested through the development of case studies for the application of Big Data,
particularly in collaborative logistics.

I) d. The Cloud and the Fog

Fog computing is a new paradigm that exploits the benefits of virtualized IT
infrastructures closer to end-users. In short, Fog computing offers an appealing
combination of computational power, storage capacity, and networking services at the
edge of the network. It supports applications and services that require very low
latency, location awareness, and mobility (including vehicular mobility). The spectrum
of potential use cases is huge, and Fog computing works in concert with Cloud computing;
indeed, Fog promises to extend the reach of, and complement, current Cloud services.
Smart cities, the smart grid, and smart connected vehicles are active areas where Fog
plays a significant role. Emerging distributed services and applications at the edge of
the network is the theme of the FOG workshop, which will be an excellent forum to
present and discuss hierarchical partitioning of computation and data, distributed
algorithms for data and computation placement, security issues in a multi-tenant
environment, and network-based computing and storage. The FOG workshop aims at bringing
together researchers from academia and industry to identify and discuss technical
challenges, exchange novel ideas, and explore enabling technologies.

Cloud computing promises to significantly change the way we use computers and access
and store our personal and business information. With these new computing and
communications paradigms arise new data security challenges. Existing data protection
mechanisms, such as encryption, have failed to prevent data theft attacks, especially
those perpetrated by insiders at the cloud provider.

Researchers at Columbia University have suggested a different approach for securing data
in the cloud, using offensive decoy technology. They monitor data access in the cloud
and detect abnormal data access patterns. When unauthorized access is suspected, and
then verified using challenge questions, they launch a disinformation attack by
returning large amounts of decoy information to the attacker. This protects against the
misuse of the user's real data. Experiments conducted in a local file setting provide
evidence that this approach may provide unprecedented levels of user data security in a
Cloud environment.

Businesses, especially startups and small and medium businesses (SMBs), are increasingly
opting to outsource data and computation to the Cloud. This obviously supports better
operational efficiency, but comes with greater risks, perhaps the most serious of which
are data theft attacks. Data theft attacks are amplified if the attacker is a malicious
insider, which the Cloud Security Alliance considers one of the top threats to cloud
computing. While most Cloud computing customers are well aware of this threat, they are
left only with trusting the service provider when it comes to protecting their data. The
lack of transparency into, let alone control over, the Cloud provider's authentication,
authorization, and audit controls only exacerbates this threat. The Twitter incident is
one example of a data theft attack from the Cloud: several Twitter corporate and
personal documents were exfiltrated to the technology website TechCrunch, and
customers' accounts, including the account of U.S. President Barack Obama, were
illegally accessed. The attacker used a Twitter administrator's password to gain access
to Twitter's corporate documents, hosted on Google's infrastructure as Google Docs. The
damage was significant both for Twitter and for its customers.

While this particular attack was launched by an outsider, stealing a customer's password
is much easier if perpetrated by a malicious insider. Research by F. Rocha and M.
Correia ["Lucy in the sky without diamonds: Stealing confidential data in the cloud", in
Proceedings of the First International Workshop on Dependability of Clouds, Data Centers
and Virtual Computing Environments (DCDV 11), Hong Kong, June 2011] outlines how easily
passwords may be stolen by a malicious insider at the Cloud service provider. The
Columbia researchers also demonstrated how Cloud customers' private keys might be
stolen, and how their confidential data might be extracted from a hard disk. After
stealing a customer's password and private key, the malicious insider gets access to all
customer data, while the customer has no means of detecting this unauthorized access.
Much research in Cloud computing security has focused on ways of preventing unauthorized
and illegitimate access to data by developing sophisticated access control and
encryption mechanisms. However, these mechanisms have not been able to prevent data
compromise.

Research by M. Van Dijk and A. Juels ["On the impossibility of cryptography alone for
privacy-preserving cloud computing", in Proceedings of the 5th USENIX Conference on Hot
Topics in Security (HotSec 10), Berkeley, CA, USA: USENIX Association, 2010] has shown
that fully homomorphic encryption, often acclaimed as the solution to such threats, is
not a sufficient data protection mechanism when used alone. The Columbia researchers
propose a completely different approach to securing the cloud using decoy information
technology, which they call Fog computing. This technology is used to launch
disinformation attacks against malicious insiders, preventing them from distinguishing
real sensitive customer data from fake, worthless data. They propose two ways of using
Fog computing to prevent attacks such as the Twitter attack: by deploying decoy
information within the Cloud (by the Cloud service customer) and within personal online
social networking profiles (by individual users).

Fog security
Numerous proposals for cloud-based services describe methods to store documents, files,
and media in a remote service that may be accessed wherever a user can connect to the
Internet. A particularly vexing problem standing in the way of broad acceptance of such
services concerns guarantees for securing a user's data in a manner that ensures only
the user, and no one else, can gain access to that data. Providing security for
confidential information remains a core security problem that, to date, has not reached
the levels of assurance most people desire. Many proposals have been made to secure
remote data in the Cloud using encryption and standard access controls, and it is fair
to say that all of the standard approaches have been demonstrated to fail from time to
time for a variety of reasons, including insider attacks, misconfigured services, faulty
implementations, buggy code, and the creative construction of effective and
sophisticated attacks not envisioned by the implementers of the security procedures.

Building a trustworthy cloud computing environment is not enough, because accidents
continue to happen, and when they do and information gets lost, there is no way to get
it back. One needs to prepare for such accidents. The basic idea is that the damage of
stolen data can be limited by decreasing the value of that stolen information to the
attacker. This can be achieved through a preventive disinformation attack. The
researchers posit that secure Cloud services can be implemented given two additional
security features:
1) User Behavior Profiling: Access to a user's information in the Cloud is expected to
exhibit a normal means of access. User profiling is a well-known technique that can be
applied here to model how, when, and how much a user accesses their information in the
Cloud. Such "normal user" behavior can be continuously checked to determine whether
abnormal access to a user's information is occurring. This method of behavior-based
security is commonly used in fraud detection applications. Such profiles would naturally
include volumetric information: how many documents are typically read and how often.
These simple user-specific features can serve to detect abnormal Cloud access based
partially upon the scale and scope of data transferred.

2) Decoys: Decoy information, such as decoy documents, honeyfiles, honeypots, and
various other bogus information, can be generated on demand and serves as a means of
detecting unauthorized access to information and of poisoning the thief's exfiltrated
information. Serving decoys will confound and confuse an adversary into believing they
have exfiltrated useful information when they have not. This technology may be
integrated with user behavior profiling technology to secure a user's information in the
Cloud. Whenever abnormal access to a cloud service is noticed, decoy information may be
returned by the Cloud and delivered in such a way as to appear completely legitimate and
normal. The true user, who is the owner of the information, would readily identify when
decoy information is being returned by the Cloud, and hence could alter the Cloud's
responses, through a variety of means such as challenge questions, to inform the Cloud
security system that it has inaccurately detected an unauthorized access. In the case
where the access is correctly identified as unauthorized, the Cloud security system
would deliver unbounded amounts of bogus information to the adversary, thus securing the
user's true data from unauthorized disclosure.
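The decision logic described above can be sketched in a few lines. This is an
illustrative assumption about how the pieces fit together, not the researchers' actual
system; the function and parameter names are hypothetical:

```python
def respond_to_access(is_abnormal, challenge_passed, real_data, make_decoy):
    """Decide what the Cloud returns for one access request.

    is_abnormal      -- verdict from the user-behavior profiler
    challenge_passed -- outcome of a challenge question (asked only on anomaly)
    make_decoy       -- callable producing bogus-but-plausible documents
    """
    if not is_abnormal:
        return real_data            # normal access: serve the real data
    if challenge_passed:
        return real_data            # false alarm: legitimate user verified
    # verified unauthorized access: inundate the attacker with decoys
    return [make_decoy(i) for i in range(100)]

# A verified intruder receives only bogus documents:
served = respond_to_access(True, False, ["secret.txt"],
                           lambda i: f"decoy_{i}.txt")
```

In practice the amount of decoy material returned would be effectively unbounded; the
fixed count here only keeps the sketch finite.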

The decoys, then, serve two purposes: (1) validating whether data access is authorized
when abnormal information access is detected, and (2) confusing the attacker with bogus
information. The researchers posit that the combination of these two security features
will provide unprecedented levels of security for the Cloud; no current Cloud security
mechanism provides this level of security. They have applied these concepts to detect
illegitimate access to data stored on a local file system by masqueraders, i.e.
attackers who impersonate legitimate users after stealing their credentials. One may
consider illegitimate access to Cloud data by a rogue insider to be the malicious act of
a masquerader. Their experimental results in a local file system setting show that
combining both techniques yields better detection results, and their results suggest
that this approach may work in a Cloud environment, as the Cloud is intended to be as
transparent to the user as a local file system. The following is a brief review of the
experimental results achieved by using this approach to detect masquerade activity in a
local file setting.

A. Combining User Behavior Profiling and Decoy Technology for Masquerade Detection
1) User Behavior Profiling: Legitimate users of a computer system are familiar with the
files on that system and where they are located. Any search for specific files is likely
to be targeted and limited. A masquerader, however, who gains access to the victim's
system illegitimately, is unlikely to be familiar with the structure and contents of the
file system; their search is likely to be widespread and untargeted. Based on this key
assumption, the researchers profiled user search behavior and developed user models
trained with a one-class modeling technique, namely one-class support vector machines.
The importance of using one-class modeling stems from the ability to build a classifier
without having to share data from different users; the privacy of the users and their
data is therefore preserved. They monitor for abnormal search behaviors that exhibit
deviations from the user baseline. According to the researchers' assumption, such
deviations signal a potential masquerade attack. Their previous experiments validated
this assumption and demonstrated that they could reliably detect all simulated
masquerade attacks using this approach, with a very low false positive rate of 1.12%.
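The profiling idea can be sketched with a deliberately simplified stand-in for the
one-class SVM: a baseline of per-session search counts is built from the user's own
history only, and sessions that deviate strongly from it are flagged. The function names
and the 3-sigma threshold are illustrative assumptions, not the researchers'
implementation:

```python
from statistics import mean, stdev

def build_profile(training_sessions):
    """Baseline of per-session search counts from the user's own history."""
    return mean(training_sessions), stdev(training_sessions)

def is_abnormal(session_count, profile, threshold=3.0):
    """Flag a session whose search volume deviates strongly from the baseline."""
    mu, sigma = profile
    if sigma == 0:
        return session_count != mu
    return abs(session_count - mu) / sigma > threshold

# A legitimate user issues few, targeted searches per session ...
profile = build_profile([4, 6, 5, 7, 5, 6, 4, 5])
# ... while a masquerader's search is widespread and untargeted,
# producing a far larger count that falls outside the baseline.
```

A one-class SVM plays the same role as this threshold check but can learn a nonlinear
boundary over many features (search frequency, directory traversal patterns, timing)
rather than a single volumetric statistic.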

2) Decoy Technology: The team placed traps within the file system. The traps are decoy
files downloaded from a Fog computing site, an automated service that offers several
types of decoy documents such as tax return forms, medical records, credit card
statements, eBay receipts, etc. The decoy files are downloaded by the legitimate user
and placed in highly conspicuous locations that are not likely to cause any interference
with normal user activities on the system. A masquerader, who is not familiar with the
file system and its contents, is likely to access these decoy files if he or she is in
search of sensitive information, such as the bait information embedded in them.
Therefore, monitoring access to the decoy files should signal masquerade activity on the
system. The decoy documents carry a keyed-Hash Message Authentication Code (HMAC), which
is hidden in the header section of the document. The HMAC is computed over the file's
contents using a key unique to each user. When a decoy document is loaded into memory,
the system verifies whether the document is a decoy by computing an HMAC over all the
contents of that document and comparing it with the HMAC embedded within the document.
If the two HMACs match, the document is deemed a decoy and an alert is issued. The
advantages of placing decoys in a file system are threefold:
(A) the detection of masquerade activity

(B) the confusion of the attacker and the additional costs incurred to distinguish real from
bogus information, and

(C) the deterrence effect, which, although hard to measure, plays a significant role in
preventing masquerade activity by risk-averse attackers.
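The HMAC marking scheme described above can be sketched as follows. The header field
name and document layout are hypothetical (the real system hides the tag inside the
document format), but the keyed-hash computation and constant-time comparison use
Python's standard `hmac` module:

```python
import hashlib
import hmac

PREFIX = b"X-Decoy-HMAC: "  # hypothetical header field for the embedded tag

def make_decoy(content: bytes, user_key: bytes) -> bytes:
    """Embed a keyed HMAC of the content in the document's header line."""
    tag = hmac.new(user_key, content, hashlib.sha256).hexdigest().encode()
    return PREFIX + tag + b"\n" + content

def is_decoy(document: bytes, user_key: bytes) -> bool:
    """Recompute the HMAC over the body and compare it with the embedded tag."""
    header, _, body = document.partition(b"\n")
    if not header.startswith(PREFIX):
        return False  # no embedded tag: treat as a regular document
    embedded = header[len(PREFIX):]
    expected = hmac.new(user_key, body, hashlib.sha256).hexdigest().encode()
    # constant-time comparison avoids leaking tag bytes through timing
    return hmac.compare_digest(embedded, expected)

key = b"per-user-secret-key"
doc = make_decoy(b"Fake 1040 tax return ...", key)
```

Because the key is unique to each user, only that user's monitor can recognize the
decoys, while to the attacker they are indistinguishable from real documents.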

3) Combining the Two Techniques: Correlating search behavior anomaly detection with
trap-based decoy files should provide stronger evidence of malfeasance, and therefore
improve a detector's accuracy. The team hypothesized that detecting abnormal search
operations performed before a decoy file is opened corroborates the suspicion that the
user is indeed impersonating the victim user. This scenario covers the threat model of
illegitimate access to Cloud data. Furthermore, an accidental opening of a decoy file by
a legitimate user might be recognized as an accident if the search behavior is not
deemed abnormal. In other words, detecting abnormal search together with decoy traps may
make a very effective masquerade detection system; combining the two techniques improves
detection accuracy.

Decoys were used as a flag for validating the alerts issued by the sensor monitoring the
user's file search and access behavior. In their experiments, the team did not generate
the decoys on demand at the time an alert was issued. Instead, they made sure that the
decoys were conspicuous enough for the attacker to access them if they were indeed
trying to steal information, by placing them in highly conspicuous directories and
giving them enticing names. With this approach, the team was able to improve the
accuracy of their detector. Crafting the decoys on demand would improve the accuracy of
the detector even further: combining the two techniques and having the decoy documents
act as an oracle for the detector when abnormal user behavior is detected may lower the
detector's overall false positive rate.

The team trained eighteen classifiers with computer usage data from 18 computer science
students, collected over a period of 4 days on average. The classifiers were trained
using the search behavior anomaly detection described in a prior paper. They also
trained another 18 classifiers using a detection approach that combines user behavior
profiling with monitoring access to decoy files placed in the local file system, as
described above. The team tested these classifiers using simulated masquerader data.
Fig. I.11 displays the AUC scores achieved by both detection approaches, by user model.
The results show that the models using the combined detection approach achieve equal or
better results than the search profiling approach alone.

Fig. I.11. AUC Comparison By User Model for the Search Profiling and Integrated Approaches.

The results of these experiments suggest that user profiles are accurate enough to
detect unauthorized Cloud access. When such unauthorized access is detected, one can
respond by presenting the user with a challenge question or with a decoy document to
validate whether the access was indeed unauthorized, similar to how decoys were used in
the local file setting to validate the alerts issued by the anomaly detector that
monitors user file search and access behavior.

In this subsection, a new approach to securing personal and business data in the Cloud
was presented. It proposed monitoring data access patterns by profiling user behavior to
determine if and when a malicious insider illegitimately accesses someone's documents in
a Cloud service. Decoy documents stored in the Cloud alongside the user's real data also
serve as sensors to detect illegitimate access. Once unauthorized data access or
exposure is suspected, and later verified, with challenge questions for instance, the
malicious insider is inundated with faulty information in order to dilute the user's
real data. Such preventive attacks that rely on disinformation technology could provide
unprecedented levels of security in the Cloud and in social networks.
[This subsection is based on work supported by the Defense Advanced Research Projects Agency (DARPA) under the
ADAMS (Anomaly Detection at Multiple Scales) Program with grant award number W911NF-11-1-0140 and through
the Mission-Resilient Clouds (MRC) program under Contract FA8650-11-C-7190]

I) e. Thriving in the Cloud

Cloud technology has entered a new phase; several times removed from the staid,
cumbersome back-office functionality of the 1980s and 1990s, this new phase of Cloud
technology is transforming entire business sectors and forging new revenue streams from
previously inconceivable avenues. Although there are plenty of agile, ambitious start-ups
set on using Cloud technology to disrupt and innovate business models, multinationals can
also be found at the bleeding edge of Cloud-based business innovation. If these
multinationals succeed, you can soon expect seismic shifts across both the public and
private sectors, reverberating across all areas of industry.

Furthermore, research from Harvard Business Review shows a correlation between more
mature use of the Cloud and a variety of new business activities. Cloud leaders, that
is, companies that take a more managed, enterprise approach, are significantly more
likely to have launched new products and expanded into new markets than companies that
take a more ad-hoc approach. Video rental and streaming company Netflix, for example,
transitioned away from an online subscription-based postal DVD rental service to launch
its Cloud-based film-streaming service. By 2014, the service had 62 million subscribers
in over 50 countries, with Cloud-based entertainment streaming accounting for 89% of the
business's revenue of US$1.6bn in the first quarter of 2015, up from 84% a year earlier
and 76% in the first quarter of 2013. Not only has Netflix gained access to new revenue
streams across multiple jurisdictions, but it is also disrupting traditional
content-creation business models in the entertainment industry by commissioning and
distributing its own content, for example the global hit shows House of Cards and Orange
is the New Black.

Cloud-based innovation is not just the preserve of the entertainment sector, or even just
consumer-facing businesses. Pearson, a UK-based media conglomerate, has launched
Cloud-based educational curricula that can provide a data feedback loop on student
progress and which introduce a potential new revenue stream in Cloud-based professional
development for teachers. This report will first explore the types and variety of
opportunities offered by Cloud technologies. In this context, the report will examine the
case study of the Pearson System of Courses, illustrating how a multinational is forging
new revenue streams by putting Cloud technology at the center of new business ventures.
The report will then extrapolate from these early adopters to consider how Cloud
technology is likely to affect multinationals business models and revenue streams in the
near future.

Despite its fluffy moniker, Cloud computing simply refers to data stored on a physical
server that can be accessed through an Internet connection from anywhere, at any time,
using any Internet-connected device. Faster Internet speeds, fixed and mobile, have
improved the delivery of Cloud-based services through high-quality, reliable content
delivery (including multimedia) and near real-time updating of Cloud-based data. Cloud
applications have facilitated the delivery of a vast range of services through
Internet-connected devices, from streaming services such as Netflix, Google Play, Amazon
Prime and Spotify, through real-time international game play using Sony PlayStation, to
multinationals accessing Cloud-based client relationship management software, such as …

Reimagining ownership: products as a service
Brian David Johnson, a futurist at a multinational technology company, Intel, says: "Technology has radically redefined what ownership means for businesses because Cloud technology is now supported by the Internet infrastructure. This means that not only can you store and move all that data, but you also have the computational power to do things with that data."

He explains that it is the computational power that allows us to, say, "watch TV in the Cloud, play games in the Cloud and begin to have more enterprise ambition in the Cloud". Just as US-based Apple revolutionised the concept of music ownership and distribution with the launch of its iTunes music store in 2003, so now is Cloud technology contributing to a cultural shift in how people interpret the meaning of ownership, according to Mr. Johnson. This provides far-reaching opportunities for multinationals to deliver services to anyone with an Internet-connected device. Apple's iTunes was (and is) a software-based online shop that first introduced selling single songs via electronic file downloads, rather than as physical products (such as vinyl, cassettes and compact discs). By 2006, within three years of launching, Apple had sold 1 billion songs; within five years it had sold 5bn; and after expanding into selling TV shows, films and apps, in 2014 Apple's CEO, Tim Cook, said that iTunes software and services were the fastest-growing part of the business. Mr. Johnson explains: "Apple was able to create a business model and strike the business deals that radically redefine what the ownership of a song is and that's not from a consumer standpoint but from a business standpoint."

Cloud and the sharing economy
Digital innovation, coupled with consumer demands for more flexible yet personalised products and services, has carved a new economic era: the so-called sharing economy. Facilitated by Cloud technology, the sharing economy allows people to share property, resources, time and skills across online platforms. This enables micro-entrepreneurs to unlock previously unused or underused assets, such as renting a spare room to holidaymakers via Airbnb, or to access expensive assets only when consumers want them; for example, renting cars, tools, or luxury watches and bags through peer-to-peer lending schemes. This means that the business-to-business (B2B) and business-to-consumer (B2C) markets are accepting (and increasingly demanding) products delivered as a service.

Paul Smith, senior vice-president and general manager for EMEA at Salesforce Marketing
Cloud, says that it is at the intersection of the sharing economy and the convenience
economy that multinationals can find significant opportunities for new revenue streams.

Using the example of the Dollar Shave Club, which operates a subscription business model for replacement razors, Mr. Smith explains that there are many similar products that "could very easily become a service [which is] a lot more transformative in the way we've done things, from being product-ownership-driven into being more service-driven".

Mr. Smith says that this shift, powered by Cloud technology, means that multinationals
that had previously considered themselves B2B could grow new B2C revenue streams by
delivering products as a service, particularly where convenience is a factor.

Any products that a consumer has to remind themselves to purchase are primed, Mr. Smith
explains, for a subscription service enabling the producer to post the product directly to
the consumer. The latter benefits from the ease of service, while the company gains from
higher customer-retention rates and brand loyalty, as well as less dependence on an
intermediary sales channel (such as a high-street shop).

Data-driven personalized journeys at scale
Echoing Mr. Johnson's view that increased computational power helps Cloud technology to transform multinational businesses, Mr. Smith believes that multinationals now have the power to collect and use both internal
customer data and an increasing volume and variety of external data (such as website
behavior, tracking of Internet Protocol addresses and social data). These data can be
shared in real time across geographies and time zones to enhance customer service and
improve sales and marketing functions. In retail, this can mean that rule-based automation
prevents inappropriate marketing messages from reaching a specific customer who may be
complaining to customer service about a faulty product. In entertainment, it can facilitate
cross-selling opportunities; the European division of Sony Computer Entertainment
(SCE), a Japanese multinational, uses Salesforce Marketing Cloud to personalise and
target real-time SMS and email notifications based on in-game play on its PlayStation-connected devices.
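The rule-based suppression described above can be illustrated with a short sketch. The function and field names below are hypothetical (no real marketing platform's API is assumed); the point is simply that a campaign list is filtered against complaint data before any message goes out:

```python
def should_send_promotion(customer: dict, campaign_product: str) -> bool:
    """Rule-based suppression: skip the marketing message when the customer
    has an open complaint about the very product the campaign promotes."""
    return customer.get("open_complaint_product") != campaign_product

# Hypothetical customer records; in practice these would come from the
# cloud-based CRM's real-time data feed.
customers = [
    {"id": "C1", "open_complaint_product": None},
    {"id": "C2", "open_complaint_product": "razor-v2"},   # complaining now
    {"id": "C3", "open_complaint_product": "blender-x"},
]

sendable = [c["id"] for c in customers
            if should_send_promotion(c, campaign_product="razor-v2")]
print(sendable)  # ['C1', 'C3']
```

A production system would layer many such rules (channel preferences, regional regulations, frequency caps), but each is the same shape: a predicate evaluated against shared, real-time customer data.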

Mr. Smith says that SCE matches these data with targeted and personalized sales content
delivered through SMS or email. The result has been a marked increase in the company's
engagement rate on its outbound marketing communications and its conversion rates on
targeted calls to action.

I) d1. Regional variations

Multinationals using Cloud computing to process and share internal and external data to create personalised journeys at scale must consider regional technological preferences and consumer behavior. In developed markets such as Europe, the variations are less country-specific and more about the concentration of industries in certain countries and markets that are innovating using Cloud technology, Mr. Smith explains; examples include Unilever in the UK, food and beverage company Nestlé in Switzerland and technology company Philips in the Netherlands. In Asia, Singapore is at the forefront of incorporating Cloud technology into its ambition to become a "smart nation". Singapore has one of the highest smartphone penetration rates in the world.

However, this is only the beginning. A government agency, the Infocomm Development
Authority (IDA), has the task of transforming Singapore into a country of complete
connectivity. Part of this ambition is an enabling environment for Cloud technology. A key
focus for IDA is developing capabilities in local SaaS companies, "to enhance their market competitiveness and consumer focus". New initiatives in Cloud and data analytics include a Data-as-a-Service pilot.

As of late April 2015, 21 companies from various industries were participating in the pilot. Africa, meanwhile, has been described as not only a "mobile-first" continent but a "mobile-only" continent, with mobile phones as common in South Africa and Nigeria as in the US. However, with significantly constrained smartphone penetration, multinationals' Cloud-supported, data-driven omnichannel business campaigns must be sensitive to the bias towards SMS in this region. Mr. Smith says that, as smartphone costs fall, multinationals operating in African countries, as well as in the Middle East, are increasingly seeing success from GPS-based push notifications to smartphone devices, in effect leapfrogging desktop computing and email. Such regional variations in technology culture mean that, while Europe and North America are evolving "mobile-first" business and marketing strategies, multinationals are becoming far more innovative in the use of Cloud and mobile technology in Africa and the Middle East, says Mr. Smith. He singles out consumer-goods company Unilever and beverage firm SABMiller as strong examples in this regard.

Looking at the forecast for Cloud technology and its impact on multinationals, three key
trends are likely to stand out:

1) the growing market opportunities arising from computational power;

2) the rise in corporate partnerships; and

3) the opportunities provided by collaboration between five generations.

The smart, connected computational power of everything
Mr. Johnson points out that the next step in Cloud technology is to understand that "we are beginning to see computational power approach zero, which means we will be able to turn anything into a computer: our homes, our cities, our offices". This can already be seen with the influx of wearable technology, the Internet of Things and "smart" Internet-connected cities such as Singapore, Stockholm and Seoul. The market opportunity for multinationals is significant. A business-intelligence provider, IDTechEx, forecasts that the market for wearable technology will grow from US$20bn in 2015 to US$70bn by 2025.

Similarly, an information-technology research and advisory company, Gartner, predicts that 4.9bn items will be connected to the Internet during 2015, increasing to 25bn items by 2020. Cloud technology is instrumental in joining up the dots of real-time data flow between these devices.

The rise in corporate partnerships
Partnerships between multinationals, as well as between multinationals and competitive new entrants, are nothing new. What Cloud technology is changing are the types and nature of those relationships. Pearson, for example, is nurturing a consultative relationship with customers, which influenced its decision to partner with Internet infrastructure corporations.

The Internet of Everything is brokering non-traditional, cross-sector partnerships and collaborations, too, as consumers expect a higher level of product and service interconnectedness and compatibility. Data access and sharing will continue to be a thorny issue for multinationals, as data-sharing among partners fuels data-privacy concerns.
Cloud-based apps are also introducing multinationals to new revenue streams (and

Case study: Pearson System of Courses

Pearson is the world's largest book publisher and the market leader for textbooks. Thematically, the business is transitioning from selling printed material to selling education services; since 2013 Pearson's online content and software revenue has surpassed that of its traditional printed products, generating 60% of the company's revenue. In its 2014 results, Pearson said that it was taking advantage of "our new cloud-based, mobile-ready and data-analytics capabilities".

Pearson has sounded its commitment to cloud-based interactive learning; in 2014 it launched REVEL for US university students and the Pearson System of Courses (PSOC) for American kindergarten students. Both are cloud-based, multimedia immersive programmes designed to replace textbook-based learning with digital content delivered through mobile devices. For the PSOC, Pearson partnered with US-based Microsoft, using its Azure Cloud Services to process interactions with users that amounted to tens of millions of events in the PSOC's first four-month period. Most PSOC users have selected a hybrid cloud solution. The PSOC's kindergarten students also share use of tablet devices.

The integral people component
Michael Chai, senior vice-president of school-product technology at Pearson,
says that an important component of cloud-based education provision is the
power of large-scale data feedback loops to improve the quality and
personalization of education. Teachers have different access to the courses,
including an analytical component; a teacher can monitor group and
individual student activity in real time, as well as analyse recorded and stored
data about student performance, individually or as a class.

Mr. Chai explains: "The success of cloud technology, in other sectors as much as in education, is dependent on the people component and the capital component. Delivering reliable, effective technology to the classroom is a key enabler for improving the efficacy of the entire system. We have had enquiries globally for quite some time, so we do believe there is underlying market demand for this approach and, I would say, we're seeing a convergence happening at the global level around connecting teachers and students in this digital world." From a business perspective, the data collected and stored on the cloud provide Pearson with useful insights about patterns of learning behaviour and about the most effective teaching methods. They can be used at micro level (PSOC analytics can provide granular insights selected by classroom and time of day, for example) or, in theory, at macro level, where schools could benchmark themselves against competing institutions nationally or even internationally. Mr. Chai says that PSOC data can be paired with third-party statistics, such as the socioeconomic status of children in a school or region, and thus be used to benchmark schools' performance against national norms.

Acknowledging the cultural squeamishness around data treatment and application, Mr. Chai says data privacy and security are "extremely important topics and at Pearson we take these matters very seriously"; at a larger level, the education industry may find positive lessons in how the healthcare industry is handling an analogous transition to digital. He adds that, although Internet infrastructure poses a short-term challenge to cloud-based business opportunities, the biggest longer-term challenge is around data privacy and data-treatment laws. Although the PSOC was initially launched in the US, benchmarked against the country's Common Core State Standards, the course content can be customised to respond to the rigorous requirements of different countries' education curricula, without changing the functionality and product structure. This should generate growing revenue across different geographies, the scope for expanding beyond English and Maths subjects, and additional revenue streams from professional-development modules available to teachers. Mr. Chai adds: "Not only can a cloud-enabled solution lead to a subscription model, it also includes a services component that we consider critical to implementation success."

potential disruptors), such as peer-to-peer lending and crowd-sourcing/crowd-funding. An
online money-management tool, Geezeo, which uses mobile and Cloud technology to help
people to manage their household finances, supplies its Personal Financial Management
tool as an overlay to US-listed Regions Bank, which offers this mobile option alongside its
e-banking and traditional services.

The 5G workforce
For the first time in history, five generations are working together: the traditional generation (born pre-1945); the Baby Boomers (1946-64); Generation X (1965-80); Generation Y, also known as Millennials (1981-95); and the Linkster generation (born after 1995). The latter two are considered "digital natives", having grown up using computers, and are comfortable with sharing personal data in the Internet-connected
environment. Intel s Mr. Johnson says that this will create incredible business
innovation . He adds: The post-Millennials a generation that has never known a
time before the Internet, global connectivity and the Cloud set the bar for the new
generation to do incredible business innovations. I think what keeps [multinationals] safe,
what keeps us profitable, what keeps the engine of global commerce going are these five
generations working together, with the newest generation a very powerful addition to the
global workforce. Multinationals will need to juggle meeting the needs of the digital
natives all that innovation coming through the [multinationals ] door while
supporting Baby Boomers who have this incredible bedrock of knowledge . Global
demographic changes, Mr. Johnson predicts, will fuel corporate innovation because all
those people will have computational power in their pockets or on their wrists, and the
ability to connect to a massive amount of computational power in the Cloud , which
provides multinationals with a vast distribution network, as well as powerful knowledgesharing.

Successful businesses are never fully formed; businesses that thrive are constantly shaped
by the business and cultural environments around them. The Information Age has heralded
new tools, skills, revenue streams and expectations, while making others obsolete. Cloud technology is part of that digital evolution. Its new phase, as a harbinger of new distribution channels for delivering products as a service and of faster, more live information flows between corporations and their customers, is demanding that multinationals think bigger, and think differently. As computational power moves towards zero, making it easier to add an increasing array of physical objects to the Internet and to process large volumes of data from multiple sources, multinationals are ideally positioned to take advantage of the nimbleness and joined-up thinking facilitated by Cloud technology. Speed is the new currency; with Cloud-based applications facilitating cross-border, real-time collaboration, reducing duplication and streamlining business processes, the potential for major time savings when taking a new product or concept to market is enormous. When multinationals can apply a quick-to-market "test and iterate" lean methodology to new products and services, the risk from similarly nimble startups, which have less financial clout, fewer staff, lower brand penetration and higher barriers to international markets, diminishes considerably.

Klaus Schwab, founder and executive chairman of the World Economic Forum, puts it this way: "In the new world, it is not the big fish which eats the small fish, it's the fast fish which eats the slow fish." Today, multinationals are being challenged to re-imagine ownership, to consider which of their products could be delivered as a service. Multinationals must commit to exploring what the intersection between the sharing economy and the convenience economy means for their business model(s). This requires a cultural and mental shift from both multinational leaders and each of their
stakeholder groups, challenging every preconceived notion that they have about people,
products, processes and place. Although Cloud adoption by businesses and individuals has
matured considerably in the past five-to-ten years, challenges remain. In the cultural
sphere, there is some discomfort around data and privacy issues. There are concerns
around security and the threat of cybercrime. And challenges remain regarding reliable
and uniform connectivity infrastructure. Nonetheless, these challenges also offer
opportunities for innovation.

The digital juggernaut means that the potential for harnessing Cloud technology to
reinvent business models will continue to grow. Cloud technology is a top-ten strategic
trend that will have a significant impact on organizations during the 2020s. According to David Cearley, vice-president and Gartner Fellow at Gartner Research, "Cloud is the new style of elastically scalable, self-service computing, and both internal applications and external applications will be built on this new style." But unlike for start-ups, business
agility can be a challenge for multinationals, which are often encumbered by large legacy
systems and product lines, and typically have large workforces spanning multiple
geographies. In addition, many multinationals operate in the glare of the spotlight, with
shareholders, board members and the media interrogating their strategy and research and
development spending, along with potential regulatory complexities. Against this
backdrop, how do multinationals navigate their behemoth businesses towards Cloud-based
business models, systems and revenue streams? Here are three essential considerations:

1) Invest in self-disrupting technology. Invest in technology, even when it disrupts existing product lines and business systems. John Chambers, CEO of the multinational technology company Cisco, says: "A whole series of shifts have occurred in the kinds of technology companies rely on... All of them required companies to make big investments in technology. Those that didn't were, once again, left behind. For Cisco, each transition required a decision about when to jump from selling a profitable product to a new technology, often one that would cannibalize our existing product line. These jumps were critical, though, if we wanted to stay ahead of the curve."
["Top 10 Strategic Technology Trends for 2015: Cloud/Client Computing", IT Business, at: ]

2) Innovate for, and leverage, existing customers. Multinationals have a significant advantage in their existing brand power and customer base. The latest phase in Cloud technology is its swift rise and pervasiveness. Helping a customer base navigate the new digital world paves the way for innovative Cloud-facilitated services, as well as potentially opening a new revenue stream in itself. Consumer (B2B and B2C) expectations are fast changing, and the data deluge, if analysed correctly, can provide tailored insights into a multinational's customer base, replacing redundant traditional demographic ideas.

3) Encourage technological maturity in all roles. Pearson's Mr. Chai says that a solid foundation of technological proficiency among staff, a technology-embracing internal business culture, and robust infrastructure and policies to support daily use are essential groundwork for multinationals to harness the Cloud. He explains: "If you want to make change happen, by necessity it means changing the everyday paradigm; in our case that means teachers, in healthcare it means physicians. Then you need to facilitate leadership readiness from inside the business but also from institutional stakeholders, so, for us, the superintendent role, the head-teacher role, the teacher role, the parent role and the student role all have to work together towards this."

I) f. ERP and the Cloud

As a wide variety of information technology services move to online offerings in the
cloud, more and more IT executives are considering whether to move their enterprise
resource planning (ERP) systems there as well. Although some IT organizations have
succeeded in moving a portion of their fringe ERP services, such as human resources
systems, into the cloud, many CIOs remain skeptical of doing the same with core financial
and supply chain operations. There are a number of factors that executives should consider
in deciding whether and how to use cloud-based services for their ERP systems. Industry
type, company size, solution complexity, security needs, and several other organizational
issues must all be addressed. In this Perspective, we analyze the pros and cons of moving
ERP services to the cloud and present a framework that CIOs can use to evaluate the
viability of cloud-based ERP systems for their organizations. Whether or not you choose
to jump in now, it is essential that this be marked on your agenda.

I) f1. Three models for housing ERP

Ever since the advent of full-scale enterprise resource planning (ERP) systems in the early
1990s, companies have struggled to balance the systems' high costs and complexity
against the need for customized features and flexibility. Early on, the only choice was an
on-premises model: Long available from companies like SAP and Oracle, these systems
are still the preferred choice for some organizations. The early 2000s saw the arrival of
hosted solutions, in which the platform is managed off-site but the software must be
installed on end-users' computers.

Recently, a third model has arisen, in which the ERP system is distributed from the cloud
and accessed by end-users via Web browsers. This solution can offer substantial benefits,
including decreased capital expenditures, lower overall costs, and quicker implementation.
Indeed, much of the ERP market is already moving in this direction: SAP recently announced that its HANA platform-based applications will be available via the cloud, and Oracle's cloud-based offering for ERP, budgeting, and planning continues to build interest (see "Selected Cloud-based ERP Vendor Offerings", ahead). Although significant concerns remain (limited functionality, the potential loss of internal control, performance reliability, and security among them), cloud-based models continue to gain traction (Fig. I.12). So is the cloud the right choice? Not necessarily. And even when it is,
there are several approaches IT leaders should consider. We offer an analysis of the
benefits and challenges of these systems and a framework for how to choose.

Fig. I.12. ERP systems deployment models.

I) f2. The benefits of cloud-based ERP

The brief history of ERP systems has been marked by both significant successes and notorious failures; no surprise, given the cost and complexity of these huge implementations. The cloud promises a new way to address ERP's most notorious shortcomings.

Lower costs

Rather than being purchased outright, cloud-based ERP implementations are paid for
through a subscription model, which typically includes not just the software but also the
hosting and support costs. Thus, the initial capital expenditure required for implementation
is significantly lower than for traditional systems, and operating costs can often be lower
as well. Cloud-based providers can scale up their offerings with relative ease as an
organization's needs evolve.

Vendors are responsible for maintaining both the hardware and the software, including
all patches, upgrades, and refreshes. They also provide the necessary backups, system
monitoring, and user support. Transferring all of this responsibility elsewhere should
allow companies to reduce the size of their IT support organizations and free up resources
for other activities that cannot be outsourced.

Overall, the total cost of ownership for a cloud-based solution can be 50 to 60 percent less than for traditional solutions over a 10-year period (Fig. I.13).
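The arithmetic behind that range is easy to check. The sketch below uses purely illustrative dollar figures (assumptions, not data from this report): the traditional model front-loads capital expenditure plus annual operations, while the cloud model is a smaller implementation fee plus an annual subscription that bundles hosting and support.

```python
def tco_on_premises(capex: int, annual_ops: int, years: int) -> int:
    """Total cost of ownership for a traditional on-premises ERP system."""
    return capex + annual_ops * years

def tco_cloud(implementation: int, annual_subscription: int, years: int) -> int:
    """TCO for subscription-based cloud ERP (hosting and support bundled)."""
    return implementation + annual_subscription * years

# Hypothetical figures in US$, chosen only to illustrate the comparison.
years = 10
on_prem = tco_on_premises(capex=2_000_000, annual_ops=400_000, years=years)
cloud = tco_cloud(implementation=300_000, annual_subscription=250_000, years=years)

savings = 1 - cloud / on_prem
print(f"10-year savings: {savings:.0%}")  # prints "10-year savings: 53%"
```

With these assumed inputs the cloud option comes in roughly 53% cheaper over ten years, inside the 50 to 60 percent band cited above; real savings depend heavily on the organization's actual cost structure.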

Rapid deployment
One major drawback to both in-house and hosted ERP systems is that vendors and system
integrators frequently use existing templates that must be customized and configured to
match a company's specific practices and processes. Implementations typically take
months and sometimes years.

Cloud-based solutions, on the other hand, offer a basic configuration with a limited range of options that are designed to meet the requirements of most businesses, an approach that can significantly reduce deployment time while still addressing the most critical needs of the organization. How long it takes to roll out a cloud-based ERP system is determined not by the time required to build the system, but by the time needed to update any affected business processes and convert the pertinent data. In other words, companies must revamp their business practices to fit the system, a reversal of traditional ERP implementations that can significantly reduce complexity. And despite the limits on configuration, cloud-based systems are designed to let companies quickly add new business functionalities (sales lead generation, for example) while meeting common requirements such as high availability and disaster recovery.

Flexibility and scalability
Vendors have been developing new ways for companies to acquire additional software and functions without going through the usual cumbersome software delivery process. SAP, among other vendors, offers bolt-on applications for advanced analytics, collaboration, finance management, and the like through Web-based app stores that resemble the iTunes store. This makes cloud-based systems even more appropriate for companies that are quickly evolving to meet a changing competitive environment. Although the benefits of a cloud-based solution seem clear, many companies are apprehensive about adopting this technology for ERP systems.

Fig. I.13. Cost comparison of in-house and cloud-based solutions.

I) f3. Limitations of the cloud

Because cloud-based ERP services are still new to the market and their maturity is a concern to CIOs, some companies remain wary. Other primary concerns include restricted functionality and customization, and perceived data risk.

Limited functionality and availability
So far, vendors of cloud-based ERP systems have focused on delivering core ERP
functionality such as general accounting, purchasing, and accounts receivable and payable.
They continue to invest in developing new functions like statistical forecasting,
constraint-based planning, social media, and production management, but these
offerings have not caught up to the advanced functionality of traditional on-premises and
hosted ERP offerings. Furthermore, cloud-based applications are currently confined to
certain geographies, in part because they cannot yet support the financial reporting
requirements of every region in which a company might operate.

Reduced customization and integration
Compared with traditional on-premises and hosted applications, cloud-based solutions
typically offer a limited range of configuration options. That makes cloud options most
appropriate for companies that use highly standardized business processes in areas like
sales, purchasing, and accounts receivable. Cloud-based ERP may not be able to handle
the needs of companies with either highly tailored business processes or highly developed
application architectures (such as those involving multiple points of integration across a
variety of legacy IT systems, highly customized software, or packaged software). For
example, SAP's current on-demand ERP system for small and medium enterprises offers only standard connections via NetWeaver and integration with common applications.

Perceived data risks
Companies choosing a cloud-based ERP system must be willing to trust a third-party
provider with sensitive company information, such as financial data or customer orders,
where it may be mingled with that of other companies. But cloud providers, including
Oracle and SAP, have invested heavily in state-of-the-art security that may exceed what a
hosted solution, or even an on-premises solution, can provide. Some of them are even
willing to guarantee that the data will stay in the same national jurisdiction or in a specific
data center. Moreover, many providers of human resources software already host and
manage sensitive employee data for companies that compete with one another. It's important to note that certain regulatory requirements, such as the U.S. International Traffic in Arms Regulations, and specific business needs that involve storing highly confidential intellectual property may be too stringent for a cloud-based system. Given the measures that cloud providers have taken to ensure security, however, the perception of increased risk tends to be based more on a lack of familiarity with these emerging options than on actual security risks (see "Is the cloud secure enough?" ahead).

Organizational resistance
IT organizations at most companies have already put in place the teams and developed the
skills needed to operate their ERP environment, including data-center hosting, support,
maintenance, and ongoing application development. Like any outsourcing decision,
moving ERP to the cloud can create significant organizational disruptions that must be
taken into account when considering the options. IT organizations with a strong culture of
pride of ownership of technology solutions, or those that are new to application and
infrastructure outsourcing, are likely to feel threatened by moving ERP applications into
the cloud.

I) f4. The evaluation framework

Given the trade-offs involved, companies must carefully evaluate whether a cloud-based
ERP system is the right choice. In our experience, two key factors stand out from all the
others: implementation size and system complexity. These issues take on different
intensities depending on whether the company is implementing an ERP solution for the
first time, migrating from its current ERP system, or extending its current system's
capabilities to include additional functionality. Fig. I.14 provides a decision framework

for evaluating whether a cloud-based ERP system would work for your company.

Fig. I.14. Likelihood of success with a cloud-based ERP system.

Implementation size
At present, small to midsized companies are the most likely candidates for cloud-based
ERP systems, because implementation and support costs are relatively low. Many large,
complex companies will find that Cloud-based systems do not yet meet their enterprise-level
needs, although they may be suitable for smaller divisions if the cloud-based solution
can be integrated into the existing enterprise-wide ERP platform. Companies with large-scale
ERP systems may simply find the benefits of scale gained from in-house ownership
to be greater than the potential cost savings offered by a cloud-based solution today.

System complexity
The complexity of any ERP system is measured along three dimensions: the extent of
integration, the amount of functionality, and the size of the footprint. Corporate
environments that require basic functionality, minimal customization, and limited
integration are particularly appropriate for cloud-hosted solutions. More complex
organizations will likely find that cloud-based solutions are not the best option right now.
Some companies may benefit from so-called hybrid models, where some ERP
functionality is retained in a traditional hosted environment while other applications are
implemented through the cloud. A large company with complex supply chain
requirements, for example, might continue to maintain its customized ERP solution while

using a cloud provider for selected business processes, such as talent management. A
business with multiple subsidiaries might keep a centralized, hosted ERP solution to run
the enterprise while providing its subsidiaries with a cost-efficient cloud-based solution to
run their local operations.

Is the cloud secure enough?

Cloud-based technology solutions require companies to loosen their control of critical data. Companies
must take a comprehensive approach to the risks, from both the business and the IT security
perspectives. Industry security standards are evolving rapidly, and cloud-based ERP providers have
invested millions of dollars in building state-of-the-art security capabilities and information
management processes. In response, IT security managers need to reevaluate how they classify
applications and data based on level of risk, better identify specific security requirements and the
controls required to manage risk, and more thoroughly understand the ability of cloud providers to
meet their security requirements. And although cloud-based ERP solutions offer distinct advantages in
terms of business continuity and disaster recovery, companies still must conduct due diligence to ensure
that any Cloud-based solution meets their business continuity requirements. Even if the cloud provider
has robust site-failover and other disaster-recovery capabilities, clients may lose access to critical
business systems if the network path itself is compromised. Therefore, cloud solutions may force
companies to place greater importance on ensuring network redundancy to provide continued access in
the case of a disruption.


When is adopting a cloud-based ERP system the right choice? That depends. Providers are
investing significantly in enhancing their offerings, expanding the functionality and
availability of their services, and reducing the risks of adoption. Smaller companies that
want to gain the benefits of scale, lower their costs, and drive standardization should
consider this option now, as should larger companies looking to lower costs and drive
standardization within divisions or functional units. ERP in the cloud is the future, and
even companies that have good reason not to take the plunge yet should be monitoring
developments and considering their longer-range plans. [Source: Booz & Company]

I) g. Risks of Adopting Cloud Computing

The process of creating and managing a secure Cloud space is a more challenging task
than creating a secure classical IT environment. Given the immaturity of this technology,
the new resources and the reallocation of traditional ones are not fully tested and come
with new risks that are still under research. The main risks of adopting Cloud computing
identified by this subsection are:

a. Misunderstanding responsibilities. In a traditional scenario, the security of data is
entirely the burden of the company owning the data. In the Cloud computing scenario, the
responsibilities are divided between the two actors: the Cloud provider and the client.
There is a tremendous potential for misguided risk management decisions if Cloud
providers do not disclose the extent to which security controls are implemented and the
consumer does not know which additional controls need to be adopted.

Different kinds of Cloud services adopted mean different responsibilities for the service
provider and the customer. If an IaaS service model is adopted, then the provider is
responsible for physical security, environment security and the virtualization software
security, whereas the consumer is responsible for securing everything else above this layer
including operating system, applications and data.

However, in a SaaS Cloud service model the provider is responsible not only for the
physical and environmental security but also for all the software services used in order
to provide that particular software service to the client. In this case, the security
responsibilities of the consumer are much reduced.
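
The split described above can be sketched as a simple lookup table. This is an
illustrative summary only: the layer names follow the text, but the assumption that
under SaaS the consumer keeps only account and access management is our own reading,
not a provider's official responsibility matrix.

```python
# Illustrative sketch of the shared-responsibility split described above.
# The SaaS consumer duty ("account and access management") is an assumption.
RESPONSIBILITY = {
    "IaaS": {
        "physical security": "provider",
        "environmental security": "provider",
        "virtualization software": "provider",
        "operating system": "consumer",
        "applications": "consumer",
        "data": "consumer",
    },
    "SaaS": {
        "physical security": "provider",
        "environmental security": "provider",
        "virtualization software": "provider",
        "operating system": "provider",
        "applications": "provider",
        "data": "provider",
        "account and access management": "consumer",
    },
}

def consumer_duties(model: str) -> list[str]:
    """Return the layers the customer must secure under a given service model."""
    return sorted(layer for layer, owner in RESPONSIBILITY[model].items()
                  if owner == "consumer")
```

Under IaaS the consumer's list is long (operating system, applications, data); under
SaaS it shrinks, matching the point that SaaS much lowers the consumer's security burden.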

b. Issues: Data security and confidentiality. One of the biggest security concerns people
have when moving to the Cloud is related to the problem of keeping data secure and
confidential. In this respect, some particular problems arise: who can create data, where
the data is stored, who can access and modify data, what happens when data is deleted,
how the back-up is done, how the data transfer occurs, etc. All of this is known as the
data security lifecycle. This lifecycle exists also in the classic architecture, but in a
Cloud environment its stages are much more complex, posing higher security risks and
requiring more careful management. Worth noting in this respect is that it is much more
difficult for the Cloud customer to effectively check the data handling practices of the
Cloud provider and thus be sure that the data is handled in a proper way. To counter such
a risk, strategies like data encryption, particular public key infrastructures, data
dispersion, standardization of APIs, etc. are proposed to customers as security measures
to create a trusted and secure environment.
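
The lifecycle stages implied by the questions above (creation, storage, access, transfer,
deletion) can be modelled as an ordered sequence whose transitions are checked. The six
stage names below follow the commonly cited data security lifecycle and are an
illustrative assumption, not a quotation from the text:

```python
# Hypothetical six-stage data security lifecycle; stage names are an assumption
# based on the questions raised in the text (who creates data, where it is
# stored, who accesses it, how it is shared, backed up, and deleted).
LIFECYCLE = ["create", "store", "use", "share", "archive", "destroy"]

class DataRecord:
    """Tracks a data item through the lifecycle, forbidding backward moves."""

    def __init__(self) -> None:
        self.stage = "create"
        self.history = ["create"]

    def advance(self, next_stage: str) -> None:
        # Each forward transition is a control point where the text's
        # questions apply; backward moves (e.g. using destroyed data) fail.
        if LIFECYCLE.index(next_stage) <= LIFECYCLE.index(self.stage):
            raise ValueError(f"cannot move from {self.stage} to {next_stage}")
        self.stage = next_stage
        self.history.append(next_stage)
```

In a Cloud deployment each transition may cross a provider boundary, which is exactly
where the customer's visibility into data handling practices is weakest.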

c. Lack of Standards. The immaturity of this technology makes it difficult to develop a
comprehensive and commonly accepted set of standards. As a result, many standards
development organizations were established in order to research and develop the
specifications. Organizations like Cloud Security Alliance, European Network and
Information Security Agency, Cloud Standards Customer Council, etc. have developed
best practices, regulations and recommendations. Other establishments, like the Distributed
Management Task Force, The European Telecommunications Standards Institute, Open
Grid Forum, Open Cloud Consortium, National Institute of Standards and Technology,
Storage Networking Industry Association etc., centered their activity on the development
of working standards for different aspects of the Cloud technology.

The excitement around Cloud has created a flurry of standards and open source activity
leading to market confusion. That is why certain working groups like Cloud Standards
Coordination, TM Forum, etc. act to improve collaboration, coordination, information and
resource sharing between the organizations acting in this research field.

d. Interoperability issues. The Cloud computing technology offers a degree of resource
scalability which has never been reached before.

Companies can benefit from additional computational capacity, storage space, bandwidth
allocation, etc. whenever they need it, without great investments to support peak-load
demands. If demand falls back, the additional capacity can be shut down just as quickly
as it was scaled up, without any hardware equipment sitting idle.
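
The scale-up/scale-down behaviour described above can be sketched as a simple capacity
rule. The per-instance throughput figure and the 20 % headroom are illustrative
assumptions, not parameters of any particular provider:

```python
import math

def desired_instances(demand_rps: float, per_instance_rps: float = 250.0,
                      headroom: float = 1.2, minimum: int = 1) -> int:
    """Instances needed to serve `demand_rps` requests/s with spare headroom.

    Scales up for peak load and back down when demand falls, never below
    `minimum`, so no hardware sits idle beyond the chosen floor.
    """
    needed = math.ceil(demand_rps * headroom / per_instance_rps)
    return max(minimum, needed)
```

At a peak of 1 000 requests/s this rule asks for 5 instances; when demand falls back to
100 requests/s it drops to 1, mirroring the point that extra capacity can be released as
quickly as it was acquired.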

This great advantage also has a major drawback. It comes with the risk of
managing data within a shared environment (computation, storage, and network) with
other Cloud clients. Additionally, at one time one company may have multiple Cloud
providers for different services which have to be interoperable. In time, for different
reasons, companies may decide to move their services to another Cloud and in such a case
the lack of interoperability can block or raise heavy obstacles to such a process.

Cloud providers may find the customer lock-in system attractive, but for the customers
interoperability issues mean that they are vulnerable to price increases, quality of services
not meeting their needs, closure of one or more Cloud services, the provider going out of
business, or disputes with the Cloud provider.

e. Reliability breakdowns. Another important aspect of the Cloud computing is the
reliability or availability of services. The breakdown of an essential service operating in a
Cloud has an impact on many clients. For example, in April 2012, there was a Gmail
disruption that made Gmail services unavailable for almost one hour. The company first
said that it affected less than 2 % of its customers, then updated the figure to 10 %,
which amounts to around 35 million clients out of a total of 350 million users. These
incidents are not rare, and they evidence the customers' lack of control over their data.
The irony is that, in terms of
reliability, Cloud providers have set high standards which are rarely achieved in an
internal environment.

However, because these outages affect large numbers of consumers, they cast doubt in the
minds of IT decision makers over the viability of replacing desktop functionality with the
functionality offered by the Cloud. Also, in this industry, the leading companies have set
high quality-of-service levels. Those levels are not easy to reach for other Cloud service
providers that do not have such a well-developed infrastructure. Unfortunately for the
clients, these quality services may come at higher costs, and sometimes decision makers,
lured by cheaper services, will be reluctant to collaborate with such a provider.

f. Malicious insider. A malicious insider is a person motivated to create a bad impact on
the organization's mission by taking action that compromises information confidentiality,
integrity, and/or availability.

When sensitive data is processed outside the enterprise the organizational managers are
less immediately aware of the nature and level of risk and they do not possess quick and
direct capability to control and counter these risks. Experienced security specialists are
highly aware of the inverse relationship between loyalty and risk. Even though trusted
company employees can make mistakes or commit fraud, and outsiders are not automatically
less ethical than they are, it is prudent to invest the company's long-term employees
with higher trust. The malicious activities of an insider could potentially have an
impact on the confidentiality, integrity and availability of all kinds of data and
services, affecting internal activities, the organization's reputation and customer
trust. This is especially
important in the case of Cloud computing due to the fact that Cloud architectures require
certain roles, like Cloud administrators, Cloud auditors, Cloud security personnel, which
are extremely high-risk.


Cloud computing is based on technologies like virtualization, distributed computing,
grid computing, utility computing, but also on networking, web and software services. The
benefits of adopting this technology draw decision makers' attention, and nowadays many
companies are engaged in adopting or researching Cloud adoption. Specialists who
analyze this sector forecast that the global market for Cloud computing will experience a
significant increase in the coming years and will replace the traditional IT environment.

In the process of adopting Cloud-based services, companies and IT organizations should
evaluate the business benefits and risks.

The Cloud s economies of scale and flexibility are both a friend and a foe from a security
point of view. The management of security risk involves users, the technology itself, the
Cloud service providers, and the legal aspects of the data and services being used. The
massive concentrations of resources and data present a more attractive target to attackers,
but Cloud-based defenses can be more robust, scalable and cost-effective than traditional
ones. To help reduce the threat, Cloud computing stakeholders should invest in
implementing security measures to ensure that the data is being kept secure and private
throughout its lifecycle.

Lecture II

a. Mobile Cloud 99
b. Cloud Security Issues 107
c. Mobile Cloud Computing - Security 152
d. Security Analysis in the Migration
to Cloud Environments 159

IIa. Mobile Cloud

Currently, mobile computing and mobile applications are gaining momentum and playing a
significant role in enhancing the Internet computing infrastructure. Mobile devices and
their applications are more capable than ever before, and mobile Cloud computing is
expected to generate significantly more innovative applications. Mobile computing
involves mobile communication, mobile hardware and mobile software, and currently there
are many mobile Cloud applications such as web browsing, email access, video playback,
Cisco's WebEx on the iPad, document editing, image editing, Google Maps, Gmail for
iPhone, etc. These applications use the software-as-a-service model. In this lecture, the
state of the art of mobile Cloud computing and the ways it can be implemented are
presented. Some of the challenging issues as well as future research directions will also
be discussed.

Rapid development of the information technology (IT) industry over the last several
decades has introduced many new terms. It started with the invention of the first
computing device, and since then computing has been revolutionized many times in various
areas. In those early days, mainframe computers were expected to lead the future of
computing, when huge-scale machines and mainframe computers were used to implement
different tasks and various applications.

Nowadays, we do the same tasks in a flexible, much cheaper, and portable manner, using
desktop computers or mobile devices connected to several types of servers tied together
to create a so-called Cloud Computing System (CCS). There are many approaches to and
debates about Cloud computing, as it is now among the most active research areas in the
information technology industry and in education. Moreover, many applications show how
Cloud computing provides resources and computing infrastructure on urgent demand from
consumers in different sectors.

Meanwhile, consumers can use the services and applications in the Cloud through the
Internet. Nowadays, Cloud computing is not limited to the personal computer; it also has
a profound influence on mobile technology. New electronic devices like tablets, netbooks
and various smart phones are considered effective mobile computing devices. They
typically have a display screen with touch input and/or a miniature keyboard and weigh
fewer than 2 pounds (0.91 kg). Samsung, Apple, HTC, LG, Research In Motion (RIM) and
Motorola Mobility are just a few examples of the many manufacturers that produce these
kinds of devices. These Cloud computing resources are converging in the new and
fast-emerging field of Mobile Cloud Computing (MCC).

In addition, to meet growing demand, mobile applications also require more resources to
make the user experience better. Resources such as Google App Engine and Amazon EC2 are
considered suitable Cloud platforms on which MCC can serve as a new model for mobile
applications. MCC solutions can be divided into two approaches: a simple approach and a
mobile device approach. The simple approach implies that both data storage and data
processing are implemented outside the mobile device; Cloud resources are utilized for
processing and storage purposes. The benefit of this concept is that MCC applications are
not constrained to certain types of mobile devices or operating systems, and there are no
concerns about storage capacity or computing speed constraints. The mobile device
approach, meanwhile, implies that both data storage and data processing are performed on
the mobile device itself. The main reason is that current mobile devices (smart phones,
tablets, etc.) are more intelligent and highly efficient. The benefit of this approach is
that it gives the user full ownership and control over the data stored and maintained on
the mobile device.
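
The choice between the two approaches can be caricatured as a placement policy. The
inputs and thresholds below are invented for illustration; real systems also weigh
energy, latency and privacy:

```python
def choose_placement(task_mb: float, device_free_mb: float,
                     cpu_heavy: bool, connected: bool) -> str:
    """Pick 'cloud' (simple approach) or 'device' (mobile device approach).

    Illustrative rule: offload storage and processing when the device lacks
    space or the task is CPU-heavy, but only if connectivity exists.
    """
    if connected and (cpu_heavy or task_mb > device_free_mb):
        return "cloud"
    return "device"
```

A disconnected device must always fall back to local processing, which is one reason the
device approach remains attractive despite hardware constraints.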

This lecture starts with some background to mobile Cloud computing, and followed by
the definitions of related terms. Also highlighted is the concept of mobile Cloud
computing application and a summary of its importance. Afterwards, two mobile Cloud
computing solutions will be explained and the general purpose of mobile Cloud
computing and its applications on specific mobile will also be discussed. The benefits of
both solutions will also be explained; and a discussion of some potential issues of mobile
Cloud computing will follow.

II) a1. Terms and definitions

Mobile Cloud computing is, generally, state-of-the-art mobile distributed computing
which involves three components: mobile computing, Cloud computing and wireless
networks. MCC aims to enhance the computational capabilities of resource-constrained
mobile devices towards a rich and improving user experience.

MCC offers the business and education sectors opportunities for mobile network
operators as well as Cloud providers. More comprehensively, MCC can be defined as a
rich mobile computing technology that leverages the unified, flexible resources of
diverse Clouds and network technologies toward storage and mobility, serving a multitude
of different mobile devices anywhere, anytime, over Ethernet or the Internet, regardless
of heterogeneous environments and platforms, based on the pay-as-you-use principle; it
may include consumer and enterprise settings, femtocells, transcoding, end-to-end
security, home gateways, and mobile broadband-enabled services.

Thus, MCC is defined as an expansion of Cloud computing with a new ad-hoc
infrastructure which depends on mobile devices. MCC consists of a complex network and
involves many relationships among infrastructure providers, Application Service
Providers (ASPs), end users and developers, all connected over the Internet.

II) a2. The Need for Cloud Computing

In this era, information in all sectors is at our fingertips any place, at any time, a
vision driven by mobile Cloud computing. Only in this case can the user have a better
experience in a mobile Cloud computing environment over mobile devices. In addition,
mobile Cloud computing contributes the user's information in terms of location and
context, accessed through services, applications and network intelligence. Furthermore,
MCC offers effective solutions to limitations currently faced by Cloud computing, such
as constrained bandwidth capacity and poor network connectivity.

Most affluent Americans aged 18-54 already have smartphones. The Americans who don't
have smartphones, meanwhile, tend to be those who make under $100,000 per year and
are older than 55. The least-penetrated segment of the U.S. population, and therefore
the segment that will likely see the most new-user smartphone growth in the next few
years, is Americans aged 65 and older who make less than $50,000 a year.

Thus, to cope with these constraints, one solution is to instantiate customized service
software on a nearby Cloudlet and then use the service over a wireless network. Over the
last two decades, the number of mobile users in all domains has increased tremendously,
and so has the number of smart phones. In the modern era of innovative technology, the
majority of mobile devices are much better, whether in memory capacity, display speed,
battery power or network connectivity, which allows the user to flexibly access diverse
applications and many services on the mobile Cloud.

II) a3. Suitable Solutions for Mobile Cloud Computing

There are many methods that help to provide suitable solutions for mobile Cloud
computing, and in this lecture they will be categorized into two families:
general-purpose MCC and application-specific MCC. Each of them has its advantages and
disadvantages, and they are not mutually exclusive.

General Purpose Mobile Cloud Computing (GPMCC): In GPMCC, a public system is built
which uses the Cloud infrastructure to help improve mobile device performance and
efficiency. It is important that a mobile device can obtain, over the Internet, whatever
specific resource or special application is in high demand. A number of individual
applications can perform these tasks, but why not use these resources in a more
general-purpose mode, so that the computational power limitation of mobile devices is
alleviated incrementally as mobile computing develops? Thus, some general tasks that
would normally be computed locally on many mobile devices are outsourced to the Cloud as
they arise. In this manner the computing resources of many remote computers are
leveraged, and there is no need to develop specific applications for that purpose.

1) Using Clone Clouds to Boost Performance for Smart Phones. A number of researchers
have introduced the idea of improving the performance of hardware-restricted smart
phones through a proposed clone Cloud architecture. They create virtual clones of the
smart phone's execution environment in the Cloud (a computer, laptop or many servers)
and transfer execution tasks to those virtual devices, thus offloading execution from
the smart phone to a computational infrastructure hosting a Cloud of smart phone clones.
If the smart phone is lost or destroyed, the clone can be used as a backup. Another
benefit is that the hardware restrictions of the smart phone are overcome, because the
task is transferred to effective, high-computation devices in the Cloud. It also makes
the developer's job flexible and easy, as few or no amendments are needed to their
applications.
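
The offload-with-fallback behaviour of a clone Cloud can be sketched as a wrapper that
tries the remote clone first and degrades to local execution. The `clone` callable and
its `ConnectionError` failure mode are assumptions for illustration, not the researchers'
actual interface:

```python
from typing import Any, Callable, Optional

def run_task(task: Callable[..., Any], args: tuple,
             clone: Optional[Callable[..., Any]] = None) -> Any:
    """Execute `task(*args)` on a Cloud clone if possible, else locally.

    The clone mirrors the phone's execution environment, so the result is
    the same either way; an unreachable clone only costs us the speed-up.
    """
    if clone is not None:
        try:
            return clone(task, args)   # offload to the virtual clone
        except ConnectionError:
            pass                       # clone unreachable: run on the phone
    return task(*args)
```

The lost-phone scenario is the inverse direction: the clone's copy of the execution
state serves as the backup.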

2) Latency and network bandwidth. There are many other issues in mobile Cloud
computing, including restricted bandwidth and high network latency. For instance, the
bandwidth for 4G cellular systems may be restricted by cell-tower bandwidth; in an area
with weaker signal reception this leads to lower bandwidth and higher latency.

Wi-Fi is a suitable solution to improve network latency, but as the number of mobile
users grows, the per-user bandwidth decreases. Upgrading to 5G wireless networks or
beyond can be a good solution to the bandwidth and latency limitations. Another
convenient solution is the use of Cloudlets.

3) Fragmentation and network availability. Internet efficiency requires a constant,
high-speed connection, which must be guaranteed in mobile Cloud computing. The modern
mobile device is expected to be connected to the Cloud any place, anytime, in the
easiest way the user wants, for different needs. HTML5, as a current technology, offers
a convenient solution by enabling data caching on the mobile device, which makes it
possible for a Cloud application to keep working effectively in case of interrupted
connectivity.
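
The caching idea can be sketched language-agnostically as a read-through cache that
serves the last known value when the network drops. The `fetch` callable and its
`ConnectionError` failure mode are assumptions standing in for HTML5 application-cache
behaviour, not its actual API:

```python
from typing import Any, Callable

class OfflineCache:
    """Read-through cache: fresh data when online, cached data when not."""

    def __init__(self, fetch: Callable[[str], Any]) -> None:
        self.fetch = fetch           # network call; may raise ConnectionError
        self.store: dict[str, Any] = {}

    def get(self, key: str) -> Any:
        try:
            value = self.fetch(key)  # online: refresh the cache
            self.store[key] = value
            return value
        except ConnectionError:
            if key in self.store:    # offline: fall back to last known value
                return self.store[key]
            raise                    # offline and never cached: nothing to serve
```

Data that was never fetched while online remains unavailable offline, which is why such
applications typically pre-cache their critical assets.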

4) Security concerns. Developments in technology have also brought many new security
hazards. Every user wants strong protection of his or her data and is concerned about
it. In this respect, there are two main security issues regarding mobile Cloud
computing: the first is mobile device security and the second is Cloud security, since
mobile devices use the Clouds for computing resources and applications. Nowadays the
majority of smart phone devices have built-in, high-quality security features to protect
the devices from abuse.

Meanwhile, Google's device application privacy policy gives users the facility and
flexibility to remotely lock or wipe the information on their mobile devices and protect
it if the devices are lost or stolen. In addition, countermeasures such as Cloud access
protection and established device identity can be adopted for better security of smart
phones and the Clouds.

Summary and future research

In this lecture, we've presented a comprehensive overview of mobile Cloud computing.
Suitable solutions for mobile Cloud computing have also been discussed so that students
can gain a better understanding of mobile Cloud computing and its applications. Some
critical and challenging issues, as well as problems that exist in mobile Cloud
computing and the solutions proposed for them by experts, have also been presented. In
addition, as mobile Cloud computing is a new model, it still offers opportunities for
future research expansion in the following areas:

1) Security issues are still daunting, and appropriate solutions for them are needed.

2) Architectures for the mobile Cloud over diverse wireless networks should be investigated.

3) A single access platform for mobile Cloud computing via various operating systems
platforms (e.g. Android, Symbian, Apple, Chrome, MeeGo) needs to be established.

Mobile Cloud computing is one of the most rapidly emerging branches of Cloud computing,
and it has invaded our lives in all sectors. The main aim is to use Cloud computing
techniques for implementing efficient applications and storage, with the processing of
data on mobile devices.

Mobile Cloud computing will bring many benefits to mobile device users and application
enterprises. The mobile industry has broadened rapidly and continues to grow. The number
of mobile users has risen swiftly, and smart phones and other sophisticated mobile
devices are in the hands of almost every individual.

Internet usage and the concern for mobility have leaped to the point of obsession, so we
predict that mobile Cloud computing, with its new innovations, will invade every sector
of daily life.

IIb. Cloud security issues

Cloud Computing represents one of the most significant shifts in information technology
many of us are likely to see in our lifetimes. Reaching the point where computing
functions as a utility has great potential, promising innovations we cannot yet imagine.
Customers are both excited and nervous at the prospects of Cloud Computing. They are
excited by the opportunities to reduce capital costs. They are excited for a chance to
divest themselves of infrastructure management and focus on core competencies. Most of
all, they are excited by the agility offered by the on-demand provisioning of computing
and the ability to align information technology with business strategies and needs more
readily.

However, customers are also very concerned about the risks of Cloud Computing if not
properly secured, and the loss of direct control over systems for which they are
nonetheless accountable. To aid both Cloud customers and Cloud providers, CSA
developed "Security Guidance for Critical Areas in Cloud Computing", initially released
in April 2009 and revised in December 2009. This guidance has quickly become the
industry-standard catalogue of best practices to secure Cloud Computing, consistently
lauded for its comprehensive approach to the problem across 13 domains of concern.

Numerous organizations around the world are incorporating the guidance to manage their
Cloud strategies. The guidance document can be downloaded at .

The great breadth of recommendations provided by CSA guidance creates an implied
responsibility for the reader. Not all recommendations are applicable to all uses of
Cloud Computing.

Some Cloud services host customer information of very low sensitivity, while others
represent mission critical business functions. Some Cloud applications contain regulated
personal information, while others instead provide Cloud-based protection against external
threats. It is incumbent upon the Cloud customer to understand the organizational value of
the system they seek to move into the Cloud. Ultimately, CSA guidance must be applied
within the context of the business mission, risks, rewards, and Cloud threat environment
using sound risk management practices.

The purpose of this subsection is to provide needed context to assist organizations in
making educated risk management decisions regarding their Cloud adoption strategies. In
essence, this threat research document should be seen as a companion to "Security
Guidance for Critical Areas in Cloud Computing".

There has been much debate about what is in scope for this research. We expect this
debate to continue and for future versions of pertinent literature to reflect the consensus
emerging from those debates. While many issues, such as provider financial stability,
create significant risks to customers, providers have tried to focus on issues they feel are
either unique to or greatly amplified by the key characteristics of Cloud Computing and its

shared, on-demand nature. We identify the following threats in this document:

Abuse and Nefarious Use of Cloud Computing

Insecure Application Programming Interfaces

Malicious Insiders

Shared Technology Vulnerabilities

Data Loss/Leakage

Account, Service & Traffic Hijacking

Unknown Risk Profile

The threats are not listed in any order of severity. Selecting appropriate security controls
and otherwise deploying scarce security resources optimally require a correct reading of
the threat environment. For example, to the extent Insecure APIs (Application
Programming Interfaces) is seen as a top threat, a customer's project to deploy custom
line-of-business applications using PaaS will dictate careful attention to application
security domain guidance, such as robust software development lifecycle (SDLC)
practices.

By the same token, to the extent Shared Technology Vulnerabilities is seen as a top threat,
customers must pay careful attention to the virtualization domain best practices, in order
to protect assets commingled in shared environments. In addition to the flagship CSA
guidance and other research in the roadmap, this research should be seen as
complementary to the high-quality November 2009 research document produced by
ENISA (European Network and Information Security Agency), "Cloud Computing:
Benefits, Risks and Recommendations for Information Security". ENISA's research
provides a comprehensive risk management view of Cloud Computing and contains
numerous solid recommendations. The ENISA document has been a key inspiration, and
we have leveraged the ENISA risk assessment process to analyze our taxonomy of
threats. We encourage readers of this document to also read the ENISA document.

Threat #1: Abuse and Nefarious Use of Cloud Computing
Description:

IaaS providers offer their customers the illusion of unlimited compute, network, and
storage capacity often coupled with a frictionless registration process where anyone
with a valid credit card can register and immediately begin using Cloud services. Some
providers even offer free limited trial periods. By abusing the relative anonymity behind
these registration and usage models, spammers, malicious code authors, and other
criminals have been able to conduct their activities with relative impunity. PaaS providers
have traditionally suffered most from this kind of attack; however, recent evidence shows
that hackers have begun to target IaaS vendors as well. Future areas of concern include
password and key cracking, DDOS, launching dynamic attack points, hosting malicious
data, botnet command and control, building rainbow tables, and CAPTCHA solving farms.

Examples: IaaS offerings have hosted the Zeus botnet, InfoStealer trojan horses, and
downloads for Microsoft Office and Adobe PDF exploits. Additionally, botnets have used
IaaS servers for command and control functions. Spam continues to be a problem; as a
defensive measure, entire blocks of IaaS network addresses have been publicly blacklisted.

Remediation:

Stricter initial registration and validation processes.

Enhanced credit card fraud monitoring and coordination.

Comprehensive introspection of customer network traffic.

Monitoring public blacklists for one's own network blocks.
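
The last remediation item can be automated. Below is a minimal sketch of checking an address against a DNS-based blacklist; the zone name and addresses are illustrative, and a real monitor would iterate over every address in one's own network blocks.

```python
import socket

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the reverse-octet DNSBL query name for an IPv4 address.

    DNS blacklists answer queries of the form d.c.b.a.<zone>, where
    a.b.c.d is the address being checked. The zone here is illustrative.
    """
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(reversed(octets)) + "." + zone

def is_blacklisted(ip, zone="zen.spamhaus.org"):
    """Return True if the DNSBL publishes an A record for this address."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:  # NXDOMAIN: not listed (or zone unreachable)
        return False
```

A provider could run `is_blacklisted` periodically over its allocated blocks and alert on any newly listed address.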

Impact: Criminals continue to leverage new technologies to improve their reach, avoid
detection, and improve the effectiveness of their activities.

Cloud Computing providers are actively being targeted, partially because their relatively
weak registration systems facilitate anonymity, and providers' fraud detection capabilities
are limited.

Threat #2: Insecure Interfaces and APIs

Description:
Cloud Computing providers expose a set of software interfaces or APIs that customers use
to manage and interact with Cloud services. Provisioning, management, orchestration, and
monitoring are all performed using these interfaces. The security and availability of
general Cloud services is dependent upon the security of these basic APIs. From
authentication and access control to encryption and activity monitoring, these interfaces
must be designed to protect against both accidental and malicious attempts to circumvent
policy.
Furthermore, organizations and third parties often build upon these interfaces to offer
value-added services to their customers. This introduces the complexity of the new layered
API; it also increases risk, as organizations may be required to relinquish their credentials
to third parties in order to enable their agency.

Examples: Anonymous access and/or reusable tokens or passwords, clear-text
authentication or transmission of content, inflexible access controls or improper
authorizations, limited monitoring and logging capabilities, and unknown service or API
dependencies.

Remediation:

Analyze the security model of Cloud provider interfaces.

Ensure strong authentication and access controls are implemented in concert with
encrypted transmission.

Understand the dependency chain associated with the API.
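
As a sketch of the first two recommendations, an API can require each request to carry an HMAC signature over a canonical string that includes a timestamp, verified in constant time on the server. The scheme and names below are hypothetical, and the request must still travel over TLS so credentials and payload stay confidential in transit.

```python
import hmac
import hashlib

def sign_request(secret: bytes, method: str, path: str, body: str, timestamp: str) -> str:
    """Sign the canonical request string with HMAC-SHA256.

    Including the timestamp in the signed string limits replay attacks;
    the canonical form (illustrative) is method, path, timestamp, body.
    """
    canonical = "\n".join([method.upper(), path, timestamp, body])
    return hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

def verify_request(secret, method, path, body, timestamp, signature):
    """Recompute the signature and compare in constant time."""
    expected = sign_request(secret, method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)
```

Any tampering with the method, path, body, or timestamp invalidates the signature, which illustrates why understanding the full dependency chain (every party that holds the secret can sign) matters.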

Impact: While most providers strive to ensure security is well integrated into their service
models, it is critical for consumers of those services to understand the security
implications associated with the usage, management, orchestration and monitoring of
Cloud services. Reliance on a weak set of interfaces and APIs exposes organizations to a
variety of security issues related to confidentiality, integrity, availability, and
accountability.

Threat #3: Malicious Insiders

Description:

The threat of a malicious insider is well-known to most organizations. This threat is
amplified for consumers of Cloud services by the convergence of IT services and
customers under a single management domain, combined with a general lack of
transparency into provider process and procedure. For example, a provider may not reveal
how it grants employees access to physical and virtual assets, how it monitors these
employees, or how it analyzes and reports on policy compliance. To complicate matters,
there is often little or no visibility into the hiring standards and practices for Cloud
employees. This kind of situation clearly creates an attractive opportunity for an
adversary, ranging from the hobbyist hacker to organized crime, corporate
espionage, or even nation-state sponsored intrusion. The level of access granted could
enable such an adversary to harvest confidential data or gain complete control over the
Cloud services with little or no risk of detection.

Examples: No public examples are available at this time.

Remediation:

Enforce strict supply chain management and conduct a comprehensive supplier
assessment.
Specify human resource requirements as part of legal contracts.

Require transparency into overall information security and management practices, as
well as compliance reporting.

Determine security breach notification processes.

Impact: The impact that malicious insiders can have on an organization is considerable,
given their level of access and ability to infiltrate organizations and assets. Brand damage,
financial impact, and productivity losses are just some of the ways a malicious insider can
affect an operation. As organizations adopt Cloud services, the human element takes on an
even more profound importance. It is critical therefore that consumers of Cloud services
understand what providers are doing to detect and defend against the malicious insider
threat.
Threat #4: Shared Technology Issues

Description:
IaaS vendors deliver their services in a scalable way by sharing infrastructure. Often, the
underlying components that make up this infrastructure (e.g., CPU caches, GPUs, etc.)
were not designed to offer strong isolation properties for a multi-tenant architecture. To

address this gap, a virtualization hypervisor mediates access between guest operating
systems and the physical compute resources. Still, even hypervisors have exhibited flaws
that have enabled guest operating systems to gain inappropriate levels of control or
influence on the underlying platform.

A defense in depth strategy is recommended, and should include compute, storage, and
network security enforcement and monitoring. Strong compartmentalization should be
employed to ensure that individual customers do not impact the operations of other tenants
running on the same Cloud provider. Customers should not have access to any other
tenant's actual or residual data, network traffic, etc.

Examples:

Joanna Rutkowska's Red and Blue Pill exploits.

Kortchinsky's CloudBurst presentations.

Remediation:

Implement security best practices for installation/configuration.

Monitor environment for unauthorized changes/activity.

Promote strong authentication and access control for administrative access and
operations.
Enforce service level agreements for patching and vulnerability remediation.

Conduct vulnerability scanning and configuration audits.
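
The last two remediation items lend themselves to automation. A minimal configuration-audit sketch, comparing parsed directives against a hardening baseline; the directives shown are illustrative, in the style of an sshd_config:

```python
# Baseline is illustrative: each entry is (directive, required value).
BASELINE = [
    ("PermitRootLogin", "no"),
    ("PasswordAuthentication", "no"),
    ("Protocol", "2"),
]

def parse_config(text):
    """Parse simple 'Directive value' lines, ignoring comments and blanks."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(" ")
        settings[key] = value.strip()
    return settings

def audit(text, baseline=BASELINE):
    """Return (directive, expected, actual) findings for every deviation."""
    settings = parse_config(text)
    findings = []
    for key, expected in baseline:
        actual = settings.get(key, "<unset>")
        if actual != expected:
            findings.append((key, expected, actual))
    return findings
```

Running such an audit on every guest image and alerting on findings gives a concrete footing for the SLA-enforced patching and remediation recommended above.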

Impact: Attacks have surfaced in recent years that target the shared technology inside
Cloud Computing environments. Disk partitions, CPU caches, GPUs, and other shared
elements were never designed for strong compartmentalization. As a result, attackers
focus on how to impact the operations of other Cloud customers, and how to gain
unauthorized access to data.

Threat #5: Data Loss or Leakage

Description:

There are many ways to compromise data. Deletion or alteration of records without a
backup of the original content is an obvious example. Unlinking a record from a larger
context may render it unrecoverable, as can storage on unreliable media. Loss of an
encoding key may result in effective destruction. Finally, unauthorized parties must be
prevented from gaining access to sensitive data. The threat of data compromise increases
in the Cloud, due to the number of and interactions between risks and challenges which
are either unique to Cloud, or more dangerous because of the architectural or operational
characteristics of the Cloud environment.

Examples: Insufficient authentication, authorization, and audit (AAA) controls;
inconsistent use of encryption and software keys; operational failures; persistence and
remanence challenges; disposal challenges; risk of association; jurisdiction and political
issues; data center reliability; and disaster recovery.

Remediation:

Implement strong API access control.

Encrypt and protect integrity of data in transit.

Analyze data protection at both design and run time.

Implement strong key generation, storage and management, and destruction practices.

Contractually demand providers wipe persistent media before it is released into the pool.

Contractually specify provider backup and retention strategies.
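
A sketch of the key generation and destruction practices above, using only the standard library. The in-process wipe is best-effort (Python may have copied the bytes elsewhere), so production key custody would rest with a KMS or hardware security module; this only illustrates the lifecycle.

```python
import secrets

def generate_key(nbytes: int = 32) -> bytearray:
    """Generate a cryptographically random key in a mutable buffer
    so it can be overwritten when no longer needed."""
    return bytearray(secrets.token_bytes(nbytes))

def destroy_key(key: bytearray) -> None:
    """Best-effort in-process wipe: overwrite the buffer with zeros.

    This is a hygiene measure, not a guarantee; swap files, garbage
    collection, and interning can leave copies behind.
    """
    for i in range(len(key)):
        key[i] = 0
```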

Impact: Data loss or leakage can have a devastating impact on a business. Beyond the
damage to one s brand and reputation, a loss could significantly impact employee,
partner, and customer morale and trust. Loss of core intellectual property could have
competitive and financial implications. Worse still, depending upon the data that is lost or
leaked, there might be compliance violations and legal ramifications.

Threat #6: Account or Service Hijacking

Description:
Account or service hijacking is not new. Attack methods such as phishing, fraud, and
exploitation of software vulnerabilities still achieve results. Credentials and passwords are

often reused, which amplifies the impact of such attacks. Cloud solutions add a new threat
to the landscape. If an attacker gains access to your credentials, they can eavesdrop on
your activities and transactions, manipulate data, return falsified information, and redirect
your clients to illegitimate sites. Your account or service instances may become a new
base for the attacker. From here, they may leverage the power of your reputation to launch
subsequent attacks.

Examples: No public examples are available at this time.

Remediation:

Prohibit the sharing of account credentials between users and services.

Leverage strong two-factor authentication techniques where possible.

Employ proactive monitoring to detect unauthorized activity.

Understand Cloud provider security policies and SLAs.
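
Two-factor authentication is commonly implemented with time-based one-time passwords. A compact sketch of RFC 6238 TOTP using only the standard library, shown here to make the recommendation concrete:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then
    dynamic truncation (RFC 4226) to a short decimal code."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time // step)
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The shared secret never crosses the wire at login time; only the short-lived code does, which is why a stolen password alone no longer suffices to hijack the account.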

Impact: Account and service hijacking, usually with stolen credentials, remains a top
threat. With stolen credentials, attackers can often access critical areas of deployed Cloud
computing services, allowing them to compromise the confidentiality, integrity and
availability of those services. Organizations should be aware of these techniques as well as
common defense in depth protection strategies to contain the damage (and possible
litigation) resulting from a breach.

Threat #7: Unknown Risk Profile

Description:
One of the tenets of Cloud Computing is the reduction of hardware and software
ownership and maintenance to allow companies to focus on their core business strengths.
This has clear financial and operational benefits, which must be weighed carefully against
the contradictory security concerns, complicated by the fact that Cloud deployments are
driven by anticipated benefits, by groups who may lose track of the security ramifications.

Versions of software, code updates, security practices, vulnerability profiles, intrusion
attempts, and security design are all important factors for estimating your company's
security posture. Information about who is sharing your infrastructure may be pertinent, in
addition to network intrusion logs, redirection attempts and/or successes, and other logs.

Security by obscurity may be low effort, but it can result in unknown exposures. It may
also impair the in-depth analysis required by highly controlled or regulated operational areas.

Examples:

IRS asked Amazon EC2 to perform a C&A; Amazon refused.

Heartland Data Breach: Heartland's payment processing systems were using known-vulnerable
software and were actually infected, but Heartland was willing to do only the bare
minimum to comply with state laws, instead of taking the extra effort to notify every
single customer, regardless of law, about whether their data had been stolen.

Remediation:

Disclosure of applicable logs and data.

Partial/full disclosure of infrastructure details (e.g., patch levels, firewalls, etc.).

Monitoring and alerting on necessary information.
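
The monitoring item can start as simply as counting failed authentications per source in logs the provider already discloses; the log format below is illustrative:

```python
from collections import Counter

def failed_login_sources(log_lines, threshold=3):
    """Count failed-login lines per source address and return the
    sources at or above the alert threshold.

    The 'Failed password ... from <addr>' format is illustrative,
    modeled loosely on sshd-style auth logs.
    """
    counts = Counter()
    for line in log_lines:
        if "Failed password" in line:
            src = line.rsplit("from", 1)[-1].split()[0]
            counts[src] += 1
    return {src: n for src, n in counts.items() if n >= threshold}
```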

Impact: When adopting a Cloud service, the features and functionality may be well
advertised, but what about details or compliance of the internal security procedures,
configuration hardening, patching, auditing, and logging? How are your data and related
logs stored, and who has access to them? What information, if any, will the vendor disclose
in the event of a security incident? Often such questions are not clearly answered or are
overlooked, leaving customers with an unknown risk profile that may include serious
threats.
II) b1. Governance and Enterprise Risk Management

Effective governance and enterprise risk management in Cloud Computing environments
follows from well-developed information security governance processes, as part of the
organization's overall corporate governance obligations of due care. Well-developed
information security governance processes should result in
information security management programs that are scalable with the business, repeatable
across the organization, measurable, sustainable, defensible, continually improving, and
cost-effective on an ongoing basis. The fundamental issues of governance and enterprise
risk management in Cloud Computing concern the identification and implementation of
the appropriate organizational structures, processes, and controls to maintain effective
information security governance, risk management, and compliance. Organizations should
also assure reasonable information security across the information supply chain,
encompassing providers and customers of Cloud Computing services and their supporting
third party vendors, in any Cloud deployment model.

Governance Recommendations:
A portion of the cost savings obtained by Cloud Computing services must be invested
into increased scrutiny of the security capabilities of the provider, application of security
controls, and ongoing detailed assessments and audits, to ensure requirements are
continuously met.

Both Cloud Computing service customers and providers should develop robust
information security governance, regardless of the service or deployment model.
Information security governance should be a collaboration between customers and
providers to achieve agreed-upon goals which support the business mission and
information security program. The service model may adjust the defined roles and
responsibilities in collaborative information security governance and risk management
(based on the respective scope of control for user and provider), while the deployment
model may define accountability and expectations (based on risk assessment).

User organizations should include review of specific information security governance
structure and processes, as well as specific security controls, as part of their due diligence
for prospective provider organizations.

The provider's security governance processes and capabilities should be assessed for
sufficiency, maturity, and consistency with the user's information security management
processes.

The provider's information security controls should be demonstrably risk-based and
clearly support these management processes.

Collaborative governance structures and processes between customers and providers

should be identified as necessary, both as part of the design and development of service
delivery, and as service risk assessment and risk management protocols, and then
incorporated into service agreements.

Security departments should be engaged during the establishment of Service Level
Agreements and contractual obligations, to ensure that security requirements are
contractually enforceable.

Metrics and standards for measuring performance and effectiveness of information
security management should be established prior to moving into the Cloud. At a
minimum, organizations should understand and document their current metrics and how
they will change when operations are moved into the Cloud, where a provider may use
different (potentially incompatible) metrics.

Wherever possible, security metrics and standards (particularly those relating to legal
and compliance requirements) should be included in any Service Level Agreements and
contracts. These standards and metrics should be documented and demonstrable.
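
As an example of a metric that can be documented before migration and carried into an SLA, consider a patch-currency measure. The 30-day window and input shape are assumptions for illustration, not a standard:

```python
def patch_compliance(hosts, max_age_days=30):
    """Percentage of hosts whose last successful patch run falls
    within the agreed window.

    `hosts` maps hostname -> days since last successful patch run;
    an empty inventory is treated as fully compliant.
    """
    if not hosts:
        return 100.0
    compliant = sum(1 for age in hosts.values() if age <= max_age_days)
    return round(100.0 * compliant / len(hosts), 1)
```

Measuring the same figure before and after moving workloads makes it possible to tell whether the provider's metrics are compatible with the customer's own.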

Enterprise Risk Management Recommendations: As with any new business process, it's
important to follow best practices for risk management. The practices should be
proportionate to your particular usages of Cloud services, which may range from
innocuous and transient data processing up through mission critical business processes
dealing with highly sensitive information.

Here are some Cloud-specific recommendations you can incorporate into your existing
risk management processes.

Due to the lack of physical control over infrastructure in many Cloud Computing
deployments, Service Level Agreements, contract requirements, and provider
documentation play a larger role in risk management than with traditional, enterprise-owned infrastructure.

Due to the on-demand provisioning and multi-tenant aspects of Cloud Computing,
traditional forms of audit and assessment may not be available, or may be modified. For
example, some providers restrict vulnerability assessments and penetration testing, while
others limit availability of audit logs and activity monitoring. If these are required per your
internal policies, you may need to seek alternative assessment options, specific contractual
exceptions, or an alternative provider better aligned with your risk management
requirements.

Relating to the use of Cloud services for functions critical to the organization, the risk
management approach should include identification and valuation of assets, identification
and analysis of threats and vulnerabilities and their potential impact on assets (risk and
incident scenarios), analysis of the likelihoods of events/scenarios, management-approved
risk acceptance levels and criteria, and the development of risk treatment plans with
multiple options (control, avoid, transfer, accept).
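
The likelihood/impact analysis and the four treatment options above can be sketched as a simple qualitative scoring helper. The 1-5 scales and band thresholds are illustrative, standing in for the management-approved risk acceptance levels and criteria the text calls for:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Classic qualitative score: likelihood x impact, both on 1-5 scales."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be in 1..5")
    return likelihood * impact

def suggest_treatment(score: int, accept_below: int = 5) -> str:
    """Map a score band to one of the four treatment options.

    The bands are illustrative; real thresholds come from the
    organization's approved risk criteria.
    """
    if score < accept_below:
        return "accept"
    if score < 12:
        return "control"
    if score < 20:
        return "transfer"
    return "avoid"
```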

The outcomes of risk treatment plans should be incorporated into service agreements.
Risk assessment approaches between provider and user should be consistent, with
consistency in impact analysis criteria and definition of likelihood.

The user and provider should jointly develop risk scenarios for the Cloud service; this
should be intrinsic to the provider's design of service for the user, and to the user's
assessment of Cloud service risk.

Asset inventories should account for assets supporting Cloud services and under the
control of the provider. Asset classification and valuation schemes should be consistent
between user and provider.

The service, and not just the vendor, should be the subject of risk assessment. The use of
Cloud services, and the particular service and deployment models to be utilized, should be
consistent with the risk management objectives of the organization, as well as with its
business objectives. Where a provider cannot demonstrate comprehensive and effective
risk management processes in association with its services, customers should carefully
evaluate use of the vendor as well as the user's own abilities to compensate for the
potential risk management gaps.

Customers of Cloud services should ask whether their own management has defined risk
tolerances with respect to Cloud services and accepted any residual risk of utilizing Cloud
services.

Information Risk Management Recommendations:

Information Risk Management is the act of aligning exposure to risk and capability of
managing it with the risk tolerance of the data owner. In this manner, it is the primary
means of decision support for information technology resources designed to protect the
confidentiality, integrity, and availability of information assets.

Adopt a risk management framework model to evaluate IRM, and a maturity model to
assess the effectiveness of your IRM model.

Establish appropriate contractual requirements and technology controls to collect
necessary data to inform information risk decisions (e.g., information usage, access
controls, security controls, location, etc.).

Adopt a process for determining risk exposure before developing requirements for a
Cloud Computing project. Although the categories of information required to understand
exposure and management capability are general, the actual evidential metrics gathered
are specific to the nature of the Cloud computing SPI model and what can be feasibly
gathered in terms of the service.

When utilizing SaaS, the overwhelming majority of information will have to be provided
by the service provider. Organizations should structure analytical information gathering
processes into contractual obligations of the SaaS service.

When utilizing PaaS, build in information gathering as per SaaS above, but where
possible include the ability to deploy and gather information from controls as well as
creating contractual provisions to test the effectiveness of those controls.

When utilizing an IaaS service provider, build information transparency into contract
language for information required by risk analysis.

Cloud service providers should include metrics and controls to assist customers in
implementing their Information Risk Management requirements.

Third Party Management Recommendations:

Customers should view Cloud services and security as supply chain security issues. This
means examining and assessing the provider's supply chain (service provider
relationships and dependencies), to the extent possible. This also means examining the
provider's own third party management.

Assessment of third party service providers should specifically target the provider's
incident management, business continuity and disaster recovery policies, and processes
and procedures; and should include review of co-location and back-up facilities. This
should include review of the provider's internal assessments of conformance to its own
policies and procedures, and assessment of the provider's metrics to provide reasonable
information regarding the performance and effectiveness of its controls in these areas.

The user's business continuity and disaster recovery plan should include scenarios for
loss of the provider's services, and for the provider's loss of third party services and
third party-dependent capabilities. Testing of this part of the plan should be coordinated
with the Cloud provider.

The provider's information security governance, risk management, and compliance
structures and processes should be comprehensively assessed:
o Request clear documentation on how the facility and services are assessed for risk and
audited for control weaknesses, the frequency of assessments, and how control
weaknesses are mitigated in a timely manner.
o Require definition of what the provider considers critical service and information
security success factors, key performance indicators, and how these are measured relative
to IT Service and Information Security Management.
o Review the provider s legal, regulatory, industry, and contractual requirements capture,
assessment, and communication processes for comprehensiveness.
o Perform full contract or terms-of-use due diligence to determine roles, responsibilities,
and accountability. Ensure legal review, including an assessment of the enforceability of
local contract provisions and laws in foreign or out-of-state jurisdictions.
o Determine whether due diligence requirements encompass all material aspects of the
Cloud provider relationship, such as the provider's financial condition, reputation (e.g.,
reference checks), controls, key personnel, disaster recovery plans and tests, insurance,
communications capabilities, and use of subcontractors.

II) b2. Legal and Electronic Discovery

Cloud Computing creates new dynamics in the relationship between an organization and
its information, involving the presence of a third party: the Cloud provider. This creates
new challenges in understanding how laws apply to a wide variety of information
management scenarios. A complete analysis of Cloud Computing-related legal issues
requires consideration of functional, jurisdictional, and contractual dimensions.

The functional dimension involves determining which functions and services in Cloud
Computing have legal implications for participants and stakeholders.

The jurisdictional dimension involves the way in which governments administer laws
and regulations impacting Cloud Computing services, the stakeholders, and the data assets
involved.

The contractual dimension involves the contract structures, terms and conditions, and
enforcement mechanisms through which stakeholders in Cloud Computing environments
can address and manage the legal and security issues.

Cloud Computing in general can be distinguished from traditional outsourcing in three
ways: the time of service (on-demand and intermittent), the anonymity of the identity of
the service provider(s), and the anonymity of the location of the server(s) involved. When
considering IaaS and PaaS specifically, a great deal of orchestration, configuration, and
software development is performed by the customer, so much of the responsibility
cannot be transferred to the Cloud provider. Compliance with recent legislative and
administrative requirements around the world forces stronger collaboration among lawyers
and technology professionals. This is especially true in Cloud Computing, due to the
potential for new areas of legal risk created by the distributed nature of the Cloud,
compared to traditional internal or outsourced infrastructure.

Numerous compliance laws and regulations in the United States and the European Union
either impute liability to subcontractors or require business entities to impose liability
upon them via contract. Courts now are realizing that information security management
services are critical to making decisions as to whether digital information may be accepted
as evidence. While this is an issue for traditional IT infrastructure, it is especially
concerning in Cloud Computing due to the lack of established legal history with the
Cloud.
Customers and Cloud providers must have a mutual understanding of each other's roles
and responsibilities related to electronic discovery, including such activities as litigation
hold, discovery searches, who provides expert testimony, etc.

Cloud providers are advised to assure their information security systems are responsive
to customer requirements to preserve data as authentic and reliable, including both
primary and secondary information such as metadata and log files.

Data in the custody of Cloud service providers must receive equivalent guardianship as
in the hands of their original owner or custodian.

Plan for both expected and unexpected termination of the relationship in the contract
negotiations, and for an orderly return or secure disposal of assets.

Pre-contract due diligence, contract term negotiation, post-contract monitoring, and
contract termination, and the transition of data custodianship are components of the duty
of care required of a Cloud services client.

Knowing where the Cloud service provider will host the data is a prerequisite to
implementing the required measures to ensure compliance with local laws that restrict the
cross-border flow of data.

As the custodian of the personal data of its employees or clients, and of the company's
other intellectual property assets, a company that uses Cloud Computing services should
ensure that it retains ownership of its data in its original and authenticable format.

Numerous security issues, such as suspected data breaches, must be addressed in specific
provisions of the service agreement that clarify the respective commitments of the Cloud
service provider and the client.

The Cloud service provider and the client should have a unified process for responding
to subpoenas, service of process, and other legal requests.

The Cloud services agreement must allow the Cloud services client or designated third
party to monitor the service provider's performance and test for vulnerabilities in the
service.

The parties to a Cloud services agreement should ensure that the agreement anticipates
problems relating to recovery of the client's data after their contractual relationship
ends.
II) b3. Compliance and Audit

With Cloud Computing developing as a viable and cost effective means to outsource entire
systems or even entire business processes, maintaining compliance with your security
policy and the various regulatory and legislative requirements to which your organization
is subject can become more difficult to achieve and even harder to demonstrate to auditors
and assessors. Of the many regulations touching upon information technology with which
organizations must comply, few were written with Cloud Computing in mind.

Auditors and assessors may not be familiar with Cloud Computing generally or with a
given Cloud service in particular. That being the case, it falls upon the Cloud customer to
understand:
Regulatory applicability for the use of a given Cloud service

Division of compliance responsibilities between Cloud provider and Cloud customer

Cloud provider s ability to produce evidence needed for compliance

Cloud customer s role in bridging the gap between Cloud provider and auditor/assessor


Involve Legal and Contracts Teams. The Cloud provider's standard terms of service
may not address your compliance needs; therefore, it is beneficial to have both legal and
contracts personnel involved early to ensure that Cloud services contract provisions are
adequate for compliance and audit obligations.

Right to Audit Clause. Customers will often need the ability to audit the Cloud provider,
given the dynamic nature of both the Cloud and the regulatory environment. A right to
audit contract clause should be obtained whenever possible, particularly when using the
Cloud provider for a service for which the customer has regulatory compliance
responsibilities. Over time, the need for this right should be reduced and in many cases
replaced by appropriate Cloud provider certifications.

Analyze Compliance Scope. Determine whether the compliance regulations to which the
organization is subject will be impacted by the use of Cloud services for a given set of
applications and data.

Analyze Impact of Regulations on Data Security. Potential end users of Cloud
Computing services should consider which applications and data they are considering
moving to Cloud services, and the extent to which they are subject to compliance
regulations.

Review Relevant Partners and Services Providers. This is general guidance for ensuring
that service provider relationships do not negatively impact compliance. Assessing which
service providers are processing data that is subject to compliance regulations, and then
assessing the security controls provided by those service providers, is fundamental.
Several compliance regulations have specific language about assessing and managing
third party vendor risk. As with non-Cloud IT and business services, organizations need to
understand which of their Cloud business partners are processing data subject to
compliance regulations.

Understand Contractual Data Protection Responsibilities and Related Contracts. The
Cloud service model to an extent dictates whether the customer or the Cloud service
provider is responsible for deploying security controls. In an IaaS deployment scenario,
the customer has a greater degree of control and responsibility than in a SaaS scenario.
From a security control standpoint, this means that IaaS customers will have to deploy
many of the security controls for regulatory compliance. In a SaaS scenario, the Cloud
service provider must provide the necessary controls.

From a contractual perspective, understanding the specific requirements, and ensuring that
the Cloud services contract and service level agreements adequately address them, are
essential.
Analyze Impact of Regulations on Provider Infrastructure. In the area of infrastructure,
moving to Cloud services requires careful analysis as well. Some regulatory requirements
specify controls that are difficult or impossible to achieve in certain Cloud service types.

Analyze Impact of Regulations on Policies and Procedures. Moving data and
applications to Cloud services will likely have an impact on policies and procedures.
Customers should assess which policies and procedures related to regulations will have to
change. Examples of impacted policies and procedures include activity reporting, logging,
data retention, incident response, controls testing, and privacy policies.

Prepare Evidence of How Each Requirement Is Being Met. Collecting evidence of
compliance across the multitude of compliance regulations and requirements is a
challenge. Customers of Cloud services should develop processes to collect and store
compliance evidence, including audit logs and activity reports, copies of system
configurations, change management reports, and other test procedure output. Depending
on the Cloud service model, the Cloud provider may need to provide much of this
evidence.
Auditor Qualification and Selection. In many cases the organization has no say in
selecting auditors or security assessors. If an organization does have selection input, it is
highly advisable to pick a Cloud-aware auditor, since many auditors are not familiar with
Cloud and virtualization challenges. Asking about their familiarity with the IaaS, PaaS,
and SaaS nomenclature is a good starting point.

Cloud Provider's SAS 70 Type II. Providers should have this audit statement at a
minimum, as it will provide a recognizable point of reference for auditors and assessors.

Since an SAS 70 Type II audit only assures that controls are implemented as documented,
it is equally important to understand the scope of the SAS 70 audit, and whether these
controls meet your requirements.

Cloud Provider's ISO/IEC 27001/27002 Roadmap. Cloud providers seeking to provide
mission-critical services should embrace the ISO/IEC 27001 standard for information
security management systems. If the provider has not achieved ISO/IEC 27001
certification, they should demonstrate alignment with ISO 27002 practices.

ISO/IEC 27001/27002 Scoping. The Cloud Security Alliance is issuing an industry call to
action to align Cloud providers behind the ISO/IEC 27001 certification, to assure that
scoping does not omit critical certification criteria.

II) b4. Information Lifecycle Management

One of the primary goals of information security is to protect the fundamental data that
powers our systems and applications. As we transition to Cloud Computing, our
traditional methods of securing data are challenged by Cloud-based architectures.
Elasticity, multi-tenancy, new physical and logical architectures, and abstracted controls
require new data security strategies. With many Cloud deployments we are also
transferring data to external or even public environments, in ways that would have
been unthinkable only a few years ago.

Information Lifecycle Management
The Data Security Lifecycle is different from Information Lifecycle Management,
reflecting the different needs of the security audience. The Data Security Lifecycle
consists of six phases: Create, Store, Use, Share, Archive, and Destroy.

Key challenges regarding data lifecycle security in the Cloud include the following:

Data security. Confidentiality, Integrity, Availability, Authenticity, Authorization,
Authentication, and Non-Repudiation.

Location of the data. There must be assurance that the data, including all of its copies and
backups, is stored only in geographic locations permitted by contract, SLA, and/or
regulation. For instance, use of compliant storage as mandated by the European Union
for storing electronic health records can be an added challenge to the data owner and
Cloud service provider.

Data remanence or persistence. Data must be effectively and completely removed to be
deemed destroyed. Therefore, techniques for completely and effectively locating data
in the Cloud, erasing/destroying data, and assuring the data has been completely removed
or rendered unrecoverable must be available and used when required.

Commingling data with other Cloud customers. Data, especially classified/sensitive data,
must not be commingled with other customer data without compensating controls while
in use, storage, or transit. Mixing or commingling the data will be a challenge when
concerns are raised about data security and geo-location.

Data backup and recovery. Data must be available, and data backup and recovery
schemes for the Cloud must be in place and effective, in order to prevent data loss,
unwanted data overwrite, and destruction. Don't assume Cloud-based data is backed up
and recoverable.

Data discovery. As the legal system continues to focus on electronic discovery, Cloud
service providers and data owners will need to focus on discovering data and assuring
legal and regulatory authorities that all data requested has been retrieved. In a Cloud
environment that question is extremely difficult to answer and will require administrative,
technical and legal controls when required.

Data aggregation and inference. With data in the Cloud, there are added concerns of data
aggregation and inference that could result in breaching the confidentiality of sensitive
and confidential information. Hence, practices must be in place to assure the data owner
and data stakeholders that the data is still protected from subtle breach when data is
commingled and/or aggregated, thus revealing protected information (e.g., medical
records containing names and medical information mixed with anonymous data that
contains the same crossover field).


Understand how integrity is maintained and how compromise of integrity is detected and
reported to customers. The same recommendation applies to confidentiality where
required.

The Cloud Computing provider must assure the data owner that they provide full
disclosure (aka transparency) regarding security practices and procedures as stated in
their SLAs.

Ensure specific identification of all controls used during the data lifecycle. Ensure there
are clear specifications of which entity, the data owner or the Cloud service provider, is
responsible for each control.

Maintain a fundamental philosophy of knowing where your data is. Ensure your ability
to know the geographical location of storage. Stipulate this in your SLAs and contracts.

Ensure that appropriate controls regarding country location restrictions are defined and
enforced.

Understand circumstances under which storage can be seized by a third party or
government entity.

Ascertain that your SLA with the Cloud provider includes advance notification to the data
owner (if possible) that the data owner's information has been or will be seized.

In some instances, a subpoena or e-discovery writ may be placed against the Cloud
Computing services provider. In this case, when the provider has custody of customer
data, the Cloud services provider should be required to inform the data owner that it is
compelled to disclose the data owner's data.

A system of service penalties should be included in the contract between the data owner
and the Cloud service provider. Specifically, data that would be subject to state and
international data breach laws (e.g., California Senate Bill 1386 or the new HIPAA data
breach rules) should be protected by the Cloud service provider. It is the data owner's
responsibility to determine who should access the data, what their rights and privileges
are, and under what conditions these access rights are provided.

The data owner should maintain a Default Deny All policy for both data owner
employees and the Cloud service provider. Cloud service providers should offer
contractual language that warrants the denial of access to data as a fundamental
philosophy (i.e., Default Deny All). This specifically applies to Cloud service employees
and customers other than the data owner's employees and authorized personnel. It is the
data owner's responsibility to define and identify the data classification; it is the Cloud
service provider's responsibility to enforce the data owner's access requirements based on
that classification. Such responsibilities should be stated in the contract, and enforced and
audited for compliance.

When a customer is compelled to disclose information, contamination of the data must not
occur. Not only does the data owner need to ensure that all data requested for hold orders,
subpoenas, e-discovery rulings, etc. are intact and disclosed properly; the data owner must
also ensure that no other data are affected.
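
The Default Deny All philosophy can be sketched in a few lines: every request is refused unless an explicit grant exists for that principal, action, and data classification. This is an illustrative model only; the principals, grant table, and function names below are hypothetical, not part of any provider's API.

```python
# Explicit grants: (principal, action) -> set of data classifications allowed.
# Anything absent from this table is denied by default.
GRANTS = {
    ("alice@dataowner.example", "read"): {"public", "internal"},
    ("audit-svc@dataowner.example", "read"): {"public", "internal", "confidential"},
}

def is_allowed(principal: str, action: str, classification: str) -> bool:
    """Return True only if an explicit grant covers this request (default deny)."""
    return classification in GRANTS.get((principal, action), set())

# Anything not explicitly granted is denied, including provider employees.
assert is_allowed("alice@dataowner.example", "read", "internal")
assert not is_allowed("alice@dataowner.example", "read", "confidential")
assert not is_allowed("provider-admin@csp.example", "read", "public")
```

The point of the design is that provider personnel fall through to the empty default just like any other unlisted principal; access for them must be granted as deliberately as for anyone else.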

Encrypt data at rest and encrypt data in transit (see Domain 11, Encryption and Key
Management). Identify trust boundaries throughout the IT architecture and abstraction
layers.

Ensure subsystems only span trust boundaries as needed and with appropriate safeguards
to prevent unauthorized disclosure, alteration, or destruction of data.

Understand what compartmentalization techniques are employed by a provider to isolate
its customers from one another. A provider may use a variety of methods depending upon
the types and number of services offered.

Understand the Cloud provider's data search capabilities and limitations when attempting
to view inside the dataset for data discovery.

Understand how encryption is managed on multi-tenant storage. Is there a single key for
all data owners, one key per data owner, or multiple keys per data owner? Is there a
system to prevent different data owners from having the same encryption keys?
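
One common answer to the one-key-versus-many question is to derive a distinct key per data owner from a master secret, so tenants on shared storage can never share key material. The sketch below is illustrative only, assuming an HMAC-based derivation; a real deployment would keep the master key in an HSM or key management service, never in code.

```python
import hashlib
import hmac

MASTER_KEY = b"\x00" * 32  # placeholder only; never hard-code real key material

def tenant_key(tenant_id: str) -> bytes:
    """Derive a distinct 256-bit data key per tenant from the master key
    (HKDF-Expand-like construction using HMAC-SHA256)."""
    info = b"tenant-data-key:" + tenant_id.encode("utf-8")
    return hmac.new(MASTER_KEY, info, hashlib.sha256).digest()

k_a = tenant_key("data-owner-a")
k_b = tenant_key("data-owner-b")
assert k_a != k_b                        # tenants never share an encryption key
assert k_a == tenant_key("data-owner-a") # derivation is deterministic per tenant
```

Because derivation is deterministic, the provider stores no per-tenant key database, yet two data owners can never end up with the same key unless they share the same tenant identifier.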

Data owners should require Cloud service providers to ensure that their backed-up data
is not commingled with other Cloud service customer data.

Understand Cloud provider storage retirement processes. Data destruction is extremely
difficult in a multi-tenant environment, and the Cloud provider should be using strong
storage encryption that renders data unreadable when storage is recycled, disposed of, or
accessed by any means outside of authorized applications, processes, and entities. Data
retention and destruction schedules are the responsibility of the data owner. It is the Cloud
service provider's responsibility to destroy the data upon request, with special emphasis
on destroying all data in all locations, including slack in data structures and on media. The
data owner should enforce and audit this practice if possible.

Understand the logical segregation of information and protective controls implemented.

Understand the privacy restrictions inherent in data entrusted to your company; you may
have to designate your Cloud provider as a particular kind of partner before entrusting
them with this information.

Understand Cloud provider policies and processes for data retention and destruction and
how they compare with internal organizational policy. Be aware that data retention
assurance may be easier for the Cloud provider to demonstrate, while data destruction may
be very difficult.

Negotiate penalties payable by the Cloud provider for data breaches to ensure this is
taken seriously. If practical, customers should seek to recover all breach costs as part of
their provider contract. If impractical, customers should explore other risk transference
vehicles, such as insurance, to recover breach costs.

Perform regular backup and recovery tests to assure that logical segregation and controls
are effective. Ensure that Cloud provider personnel controls are in place to provide a
logical segregation of duties.

Data Security Recommendations by ILM Phase

Some of our general recommendations, as well as other specific controls, are listed within
the context of each lifecycle phase.

Please keep in mind that depending upon the Cloud service model (SaaS, PaaS, or IaaS),
some recommendations need to be implemented by the customer, and others must be
implemented by the Cloud provider.

Create:

Identify available data labeling and classification capabilities.

Enterprise Digital Rights Management may be an option.

User tagging of data is becoming common in Web 2.0 environments and may be
leveraged to help classify the data.
Store:

Identify access controls available within the file system, DBMS, document management
system, etc.

Encryption solutions, such as for email, network transport, database, files, and
filesystems.

Content discovery tools (often DLP, or Data Loss Prevention) can assist in identifying
and auditing data which requires controls.

Use:

Activity monitoring and enforcement, via logfiles and/or agent-based tools.

Application logic.

Object level controls within DBMS solutions.

Share:

Activity monitoring and enforcement, via logfiles and/or agent-based tools.

Application logic.

Object level controls within DBMS solutions.

Identify access controls available within the file system, DBMS, and document
management system.

Encryption, such as for email, network transport, database, files, and filesystems.

Data Loss Prevention for content-based data protection.

Archive:

Encryption, such as for tape backup and other long term storage media.

Asset management and tracking.

Destroy:

Crypto-shredding: the destruction of all key material related to encrypted data.

Secure deletion through disk wiping and related techniques.

Physical destruction, such as degaussing of physical media.

Content discovery to confirm destruction processes.
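
Crypto-shredding can be illustrated with a toy example: if data is only ever stored in encrypted form, then destroying the key is equivalent to destroying the data, wherever the ciphertext copies live. The XOR keystream below is a teaching stand-in, not a real cipher; a production system would use an authenticated cipher such as AES-GCM.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream 'cipher': XOR data with a SHA-256-based keystream.
    Symmetric, so the same call encrypts and decrypts. NOT secure; demo only."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        pad = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

key = secrets.token_bytes(32)                       # per-dataset key, held by the data owner
stored = keystream_xor(key, b"patient record #42")  # only ciphertext ever hits storage

assert keystream_xor(key, stored) == b"patient record #42"  # recoverable while the key exists
key = None  # "shred" the key: every copy of the ciphertext is now unrecoverable noise
```

This is why the guidance above pairs crypto-shredding with strong storage encryption: media that is recycled or disposed of never needs to be physically wiped if the only thing it ever held was ciphertext whose keys are gone.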

II) b5. Portability and Interoperability

Organizations must approach the Cloud with the understanding that they may have to
change providers in the future. Portability and interoperability must be considered up
front as part of the risk management and security assurance of any Cloud program. Large
Cloud providers can offer geographic redundancy in the Cloud, hopefully enabling high
availability with a single provider. Nonetheless, it's advisable to do basic business
continuity planning to help minimize the impact of a worst-case scenario.

In the future, various companies will suddenly find themselves with an urgent need to
switch Cloud providers, for reasons including:

An unacceptable increase in cost at contract renewal time.

A provider ceases business operations.

A provider suddenly closes one or more services being used, without acceptable
migration plans.

An unacceptable decrease in service quality, such as a failure to meet key performance
requirements or achieve service level agreements.

A business dispute between Cloud customer and provider.

Some simple architectural considerations can help minimize the damage should these
kinds of scenarios occur. However, the means to address these issues depend on the type
of Cloud service.

With SaaS, the Cloud customer will by definition be substituting new software
applications for old ones. Therefore, the focus is not upon portability of applications, but
on preserving or enhancing the security functionality provided by the legacy application
and achieving a successful data migration.

With PaaS, the expectation is that some degree of application modification will be
necessary to achieve portability. The focus is minimizing the amount of application
rewriting while preserving or enhancing security controls, along with achieving a
successful data migration.

With IaaS, the focus and expectation is that both the applications and data should be able
to migrate to and run at a new Cloud provider.

Due to a general lack of interoperability standards, and the lack of sufficient market
pressure for these standards, transitioning between Cloud providers may be a painful
manual process. From a security perspective, our primary concern is maintaining
consistency of security controls while changing environments.

Recommendations

For All Cloud Solutions:

Substituting Cloud providers is in virtually all cases a negative business transaction for at
least one party, which can cause an unexpected negative reaction from the legacy Cloud
provider. This must be planned for in the contractual process as outlined in Domain 3, in
your Business Continuity Program as outlined in Domain 7, and as a part of your overall
governance in Domain 2.

Understand the size of data sets hosted at a Cloud provider. The sheer size of data may
cause an interruption of service during a transition, or a longer transition period than
anticipated. Many customers have found that using a courier to ship hard drives is faster
than electronic transmission for large data sets.
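
The courier-versus-network claim is easy to verify with back-of-the-envelope arithmetic. The link speeds and utilization figure below are illustrative assumptions, not measurements.

```python
def transfer_days(size_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days needed to move size_tb terabytes over a link of link_gbps gigabits
    per second, at a given effective utilization (protocol overhead, contention)."""
    bits = size_tb * 8e12                       # 1 TB = 8e12 bits
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86400

# 100 TB over a dedicated 1 Gbps link at 70% efficiency takes roughly two weeks:
assert transfer_days(100, 1.0) > 13
# Even a 10 Gbps link needs longer than the ~1 day an overnight courier would:
assert transfer_days(100, 10.0) > 1.3
```

The crossover point depends on the dataset size and the bandwidth actually achievable end to end, which is why the guidance is to understand the size of hosted data sets before a transition is needed, not during one.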

Document the security architecture and configuration of individual component security
controls so they can be used to support internal audits, as well as to facilitate migration to
new providers.

For IaaS Cloud Solutions:

Understand how virtual machine images can be captured and ported to new Cloud
providers, who may use different virtualization technologies.

Identify and eliminate (or at least document) any provider-specific extensions to the
virtual machine environment.

Understand what practices are in place to make sure appropriate deprovisioning of VM
images occurs after an application is ported from the Cloud provider.

Understand the practices used for decommissioning of disks and storage devices.

Understand hardware/platform-based dependencies that need to be identified before
migration of the application/data.

Ask for access to system logs, traces, and access and billing records from the legacy
Cloud provider.

Identify options to resume or extend service with the legacy Cloud provider in part or in
whole if new service proves to be inferior.

Determine if there are any management-level functions, interfaces, or APIs being used
that are incompatible with or unimplemented by the new provider.

For PaaS Cloud Solutions:

When possible, use platform components with a standard syntax, open APIs, and open
standards.

Understand what tools are available for secure data transfer, backup, and restore.

Understand and document application components and modules specific to the PaaS
provider, and develop an application architecture with layers of abstraction to minimize
direct access to proprietary modules.

Understand how base services like monitoring, logging, and auditing would transfer over
to a new vendor.

Understand control functions provided by the legacy Cloud provider and how they would
translate to the new provider.

When migrating to a new platform, understand the impacts on performance and
availability of the application, and how these impacts will be measured.

Understand how testing will be completed prior to and after migration, to verify that the
services or applications are operating correctly. Ensure that both provider and user
responsibilities for testing are well known and documented.

For SaaS Solutions:

Perform regular data extractions and backups to a format that is usable without the SaaS
application.
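
As a sketch of such an extraction, the snippet below converts a hypothetical JSON export into plain CSV, which stays readable with commodity tools even after the SaaS contract ends. The record structure is assumed for illustration and is not any particular vendor's format.

```python
import csv
import io
import json

# Hypothetical SaaS export payload (structure assumed for illustration).
export = json.loads(
    '[{"id": 1, "name": "Acme", "tier": "gold"},'
    ' {"id": 2, "name": "Globex", "tier": "silver"}]'
)

# Re-serialize to vendor-neutral CSV for offline retention alongside backups.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name", "tier"])
writer.writeheader()
writer.writerows(export)

backup_csv = buf.getvalue()
assert backup_csv.splitlines()[0] == "id,name,tier"
assert "Globex" in backup_csv
```

Running an export like this on a schedule, and verifying the output actually parses, is what turns "our data is portable" from a contract clause into an operational fact.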

Understand whether metadata can be preserved and migrated.

Understand that any custom tools being implemented will have to be redeveloped, or the
new vendor must provide those tools.

Assure consistency of control effectiveness across old and new providers.

Assure the possibility of migration of backups and other copies of logs, access records,
and any other pertinent information which may be required for legal and compliance
purposes.
Understand management, monitoring, and reporting interfaces and their integration
between environments. Is there a provision for the new vendor to test and evaluate the
applications before migration?

Understand control functions provided by the legacy Cloud provider and how they would
translate to the new provider.

When migrating to a new platform, understand the impacts on performance and
availability of the application, and how these impacts will be measured.

Understand how testing will be completed prior to and after migration, to verify that the
services or applications are operating correctly. Ensure that both provider and user
responsibilities for testing are well known and documented.

II) b6. Operating in the Cloud

Traditional Security, Business Continuity, and Disaster Recovery
The body of knowledge accrued within traditional physical security, business continuity
planning, and disaster recovery remains quite relevant to Cloud Computing. The rapid
pace of change and lack of transparency within Cloud Computing require that traditional
security, Business Continuity Planning (BCP), and Disaster Recovery (DR) professionals
be continuously engaged in vetting and monitoring your chosen Cloud providers.

Our challenge is to collaborate on risk identification, recognize interdependencies, and
integrate and leverage resources in a dynamic and forceful way. Cloud Computing and its
accompanying infrastructure help to diminish certain security issues, but may increase
others, and can never eliminate the need for security.

While major shifts in business and technology continue, traditional security principles
remain relevant.


Keep in mind that centralization of data means the risk of insider abuse from within the
Cloud provider is a significant concern.

Cloud providers should consider adopting as a security baseline the most stringent
requirements of any customer. To the extent these security practices do not negatively
impact the customer experience, stringent security practices should prove to be cost
effective in the long run by reducing risk as well as customer-driven scrutiny in several
areas of concern.

Providers should have robust compartmentalization of job duties, perform background
checks, require/enforce non-disclosure agreements for employees, and limit employee
knowledge of customers to that which is absolutely needed to perform job duties.

Customers should perform onsite inspections of Cloud provider facilities whenever
possible.

Customers should inspect Cloud provider disaster recovery and business continuity plans.

Customers should identify physical interdependencies in provider infrastructure. Ensure
there is an authoritative taxonomy stated in contracts to clearly define contractual
obligations related to security, recovery, and access to data.

Customers should ask for documentation of the provider's internal and external security
controls, and adherence to any industry standards.

Ensure customer Recovery Time Objectives (RTOs) are fully understood and defined in
contractual relationships and baked into the technology planning process.

Ensure technology roadmaps, policies, and operational capabilities can satisfy these
requirements.

Customers need to confirm that the provider has an existing BCP Policy approved by the
provider's board of directors.

Customers should look for evidence of active management support and periodic review
of the BC Program to ensure that the BC Program is active.

Customers should check whether the BC Program is certified and/or mapped to
internationally recognized standards such as BS 25999.

Customers should ascertain whether the provider has any online resource dedicated to
security and BCP, where the program's overview and fact sheets are available for review.

Ensure Cloud suppliers are vetted via the company Vendor Security Process (VSP) so
there is a clear understanding of what data is to be shared and what controls are to be
utilized. The VSP determination should feed the decision-making process and assessment
of whether the risk is acceptable.

The dynamic nature of Cloud Computing and its relative youth justify more frequent
cycles of all the above activities to uncover changes not communicated to customers.

II) b7. Data Center Operations

The number of Cloud Computing providers continues to increase as business and
consumer IT services move to the Cloud. There has been similar growth in data centers to
fuel Cloud Computing service offerings. Cloud providers of all types and sizes, including
well-known technology leaders and thousands of startups and emerging growth
companies, are making major investments in this promising new approach to IT service
delivery.
Sharing IT resources to create efficiencies and economies of scale is not a new concept.
However, the Cloud business model works best if the traditionally enormous investments
in data center operations are spread over a larger pool of consumers. Historically, data
center architectures have been deliberately oversized to exceed periodic peak loads, which
means during normal or low demand periods, data center resources are often idle or
underutilized for long stretches of time. Cloud service providers, on the other hand, seek
to optimize resource usage, both human and technological, to gain competitive advantage
and maximize operating profit margins.

The challenge for consumers of Cloud services is how to best evaluate the provider's
capabilities to deliver appropriate and cost-effective services, while at the same time
protecting the customer's own data and interests. Do not assume that the provider has
the best interests of their customers as their top priority. With the common carrier model
of service delivery, of which Cloud Computing is a form, the service provider normally
has little or no access to or control over the customers' data or systems beyond the
contracted level of management. Certainly, this is the correct approach to take, but some
Cloud architectures might take liberties with customers' data integrity and security that
the customer would not be comfortable with if they became aware. Consumers must
educate themselves about the services they are considering by asking appropriate
questions and becoming familiar with the basic architectures and potential areas for
security vulnerabilities. When making a decision to move all or part of IT operations to the
Cloud, it first helps to understand how a Cloud provider has implemented Domain 1's
Five Principal Characteristics of Cloud Computing, and how that technology architecture
and infrastructure impacts its ability to meet service level agreements and address
security concerns.

The provider's specific technology architecture could be a combination of IT products
and other Cloud services, such as taking advantage of another provider's IaaS storage
service. The technology architecture and infrastructure of Cloud providers may differ, but
to meet security requirements they must all be able to demonstrate comprehensive
compartmentalization of systems, data, networks, management, provisioning, and
personnel. The controls segregating each layer of the infrastructure need to be properly
integrated so they do not interfere with each other. For example, investigate whether the
storage compartmentalization can easily be bypassed by management tools or poor key
management.
Lastly, understand how the Cloud provider handles resource democratization and
dynamism to best predict proper levels of system availability and performance through
normal business fluctuations. Remember, Cloud Computing theory still somewhat
exceeds its practice: many customers make incorrect assumptions about the level of
automation actually involved.

As provisioned resource capacity is reached, the provider is responsible for ensuring that
additional resources are delivered seamlessly to the customer.

It is imperative that an organization considering purchasing Cloud services, of whatever
kind, be fully aware of exactly what services are being contracted for and what is not
included. Below is a summary of information that needs to be reviewed as part of the
vendor selection process, and additional questions to help qualify providers and better
match their services against organizational requirements.

Regardless of which certifications Cloud providers maintain, it is important to obtain a
commitment or permission to conduct customer or external third-party audits.

Cloud customers should understand how Cloud providers implement Domain 1's Five
Principal Characteristics of Cloud Computing, and how that technology architecture and
infrastructure impact their ability to meet service level agreements.

While the technology architectures of Cloud providers differ, they must all be able to
demonstrate comprehensive compartmentalization of systems, networks, management,
provisioning, and personnel.

Understand how resource democratization occurs within your Cloud providers to best
predict system availability and performance during your business fluctuations.

If feasible, discover the Cloud provider's other clients to assess the impact their business
fluctuations may have on your customer experience with the Cloud provider. However,
this is no substitute for ensuring the service level agreements are clearly defined,
measurable, enforceable, and adequate for your requirements.

Cloud customers should understand their Cloud providers' patch management policies
and procedures and how these may impact their environments. This understanding should
be reflected in contract language.

Continual improvement is particularly important in a Cloud environment, because any
improvement in policies, processes, procedures, or tools for a single customer could
result in service improvement for all customers.

Look for Cloud providers with standard continual improvement processes in place.
Technical support or the service desk is often a customer's window into the provider's
operations. To achieve a smooth and uniform customer support experience for your end
users, it is essential to ensure that the provider's customer support processes, procedures,
tools, and support hours are compatible with yours. As in Domain 7, review business
continuity and disaster recovery plans from an IT perspective, and how they relate to
people and processes. A Cloud provider's technology architecture may use new and
unproven methods for failover, for example. Customers' own business continuity plans
should also address impacts and limitations of Cloud computing.

II) b8. Incident Response, Notification, and Remediation

The nature of Cloud Computing makes it more difficult to determine who to contact in
case of a security incident, data breach, or other event that requires investigation and
reaction. Standard security incident response mechanisms can be used with modifications
to accommodate the changes required by shared reporting responsibilities. This domain
provides guidance on how to handle these incidents. The problem for the Cloud customer
is that applications deployed to Cloud fabrics are not always designed with data integrity
and security in mind. This may result in vulnerable applications being deployed into
Cloud environments, triggering security incidents. Additionally, flaws in infrastructure
architecture, mistakes made during hardening procedures, and simple oversights present
significant risks to Cloud operations. Of course, similar vulnerabilities also endanger
traditional data center operations. Technical expertise is obviously required in incident
handling, but privacy and legal experts have much to contribute to Cloud security. They
also play a role in incident response regarding notification, remediation, and possible
subsequent legal action.

An organization considering using Cloud services needs to review what mechanisms have
been implemented to address questions about employee data access that is not governed
by user agreements and privacy policies. Application data not managed by a Cloud
provider's own applications, such as in IaaS and PaaS architectures, generally has
different controls than data managed by a SaaS provider's application. The complexities
of large Cloud providers delivering SaaS, PaaS, and IaaS capabilities create significant
incident response issues that potential customers must assess for acceptable levels of risk.
When evaluating providers it is important to be aware that the provider may be hosting
hundreds of thousands of application instances. From an incident monitoring perspective,
any foreign applications widen the responsibility of the security operations center (SOC).
Normally a SOC monitors alerts and other incident indicators, such as those produced by
intrusion detection systems and firewalls, but the number of sources that must be
monitored and the volume of notifications can increase exponentially in an open Cloud
environment, as the SOC may need to monitor activity between customers as well as
external incidents. An organization will need to understand the incident response strategy
for their chosen Cloud provider. This strategy must address identification and notification,
as well as options for remediation of unauthorized access to application data. To make
matters more complicated, application data management and access have different
meanings and regulatory requirements depending on the data location. For example, an
incident may occur involving data in Germany, whereas if the same data had been stored
in the US it might not have been considered an issue. This complication makes incident
identification particularly challenging.


Cloud customers need to clearly define and communicate to Cloud providers what they
consider incidents (such as data breaches) versus mere events (such as suspicious intrusion
detection alerts) before service deployment.

Cloud customers may have very limited involvement with the provider's incident
response activities. Therefore it is critical for customers to understand the prearranged
communication paths to the provider's incident response team.

Cloud customers should investigate what incident detection and analysis tools providers
use to make sure they are compatible with their own systems. A provider's proprietary
or unusual log formats could be major roadblocks in joint investigations, particularly those
that involve legal discovery or government intervention. Poorly designed and protected
applications and systems can easily overwhelm everyone's incident response capabilities.

Conducting proper risk management on the systems and utilizing defense-in-depth
practices are essential to reduce the chance of a security incident in the first place.
Security Operations Centers (SOCs) often assume a single governance model related to
incident response, which is inappropriate for multi-tenant Cloud providers. A robust and
well maintained Security Information and Event Management (SIEM) process that
identifies available data sources (application logs, firewall logs, IDS logs, etc.) and merges
these into a common analysis and alerting platform can assist the SOC in detecting
incidents within the Cloud computing platform.
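The merge-and-alert flow described above can be sketched in a few lines. The record formats, field names, and correlation rule here are hypothetical simplifications of what a real SIEM ingests:

```python
# Hypothetical raw records from three sources with different field names.
firewall_log = {"ts": "2024-01-01T10:00:00Z", "src": "10.0.0.5", "action": "DENY"}
ids_alert = {"time": "2024-01-01T10:00:02Z", "attacker": "10.0.0.5", "sig": "SQLi probe"}
app_log = {"when": "2024-01-01T10:00:05Z", "client": "10.0.0.5", "event": "login failure"}

def normalize(record, mapping, source):
    """Map source-specific field names onto a common event schema."""
    return {
        "source": source,
        "timestamp": record[mapping["timestamp"]],
        "actor": record[mapping["actor"]],
        "detail": record[mapping["detail"]],
    }

events = [
    normalize(firewall_log, {"timestamp": "ts", "actor": "src", "detail": "action"}, "firewall"),
    normalize(ids_alert, {"timestamp": "time", "actor": "attacker", "detail": "sig"}, "ids"),
    normalize(app_log, {"when": "when", "timestamp": "when", "actor": "client", "detail": "event"}, "app"),
]

def correlate(events, threshold=3):
    """Simple correlation rule: flag an actor that appears in several sources."""
    by_actor = {}
    for e in events:
        by_actor.setdefault(e["actor"], set()).add(e["source"])
    return [actor for actor, sources in by_actor.items() if len(sources) >= threshold]

print(correlate(events))  # the single actor 10.0.0.5 spans all three sources
```

The point of the normalization step is that alerting logic is written once against the common schema, regardless of how many source formats the SOC must ingest.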

To greatly facilitate detailed offline analyses, look for Cloud providers with the ability to
deliver snapshots of the customer's entire virtual environment: firewalls, network
(switches), systems, applications, and data. Containment is a race between damage
control and evidence gathering. Containment approaches that focus on the
confidentiality-integrity-availability (CIA) triad can be effective.

Remediation highlights the importance of being able to restore systems to earlier states,
and even a need to go back six to twelve months for a known-good configuration.
Keeping legal options and requirements in mind, remediation may also need to support
forensic recording of incident data. Any data classified as private for data breach
regulations should always be encrypted to reduce the consequences of a breach incident.
Customers should stipulate encryption requirements contractually, per Domain 11.

Some Cloud providers may host a significant number of customers with unique
applications. These Cloud providers should consider application layer logging
frameworks to provide granular narrowing of incidents to a specific customer. These
Cloud providers should also construct a registry of application owners by application
interface (URL, SOA service, etc.). Application-level firewalls, proxies, and other
application logging tools are key capabilities currently available to assist in responding to
incidents in multi-tenant environments.
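The application-owner registry described above can be sketched as a longest-prefix lookup; the registry contents and contact addresses are hypothetical:

```python
# Hypothetical registry of application owners keyed by interface (URL prefix).
REGISTRY = {
    "/tenants/acme/": "acme-security@example.com",
    "/tenants/globex/": "globex-soc@example.com",
}

def owner_for(request_path):
    """Narrow an incident to a specific customer by longest-prefix match."""
    matches = [prefix for prefix in REGISTRY if request_path.startswith(prefix)]
    return REGISTRY[max(matches, key=len)] if matches else None

assert owner_for("/tenants/acme/api/orders") == "acme-security@example.com"
assert owner_for("/public/health") is None  # no owner: escalate to the provider SOC
```

With such a registry in place, an application-layer log line carrying a request path is enough to route an incident notification to the right tenant.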

Application Security
Cloud environments, by virtue of their flexibility, openness, and often public
availability, challenge many fundamental assumptions about application security. Some
of these assumptions are well understood; however, many are not. This section is intended
to document how Cloud Computing influences security over the lifetime of an application
from design to operations to ultimate decommissioning. This guidance is for all
stakeholders including application designers, security professionals, operations
personnel, and technical management on how to best mitigate risk and manage
assurance within Cloud Computing applications.

Cloud Computing is a particular challenge for applications across the layers of SaaS,
PaaS, and IaaS. Cloud-based software applications require a design rigor similar to
applications residing in a classic DMZ. This includes a deep up-front analysis covering all
the traditional aspects of managing information confidentiality, integrity, and availability.

Applications in Cloud environments will both impact and be impacted by the following
major aspects:

Application Security Architecture: Consideration must be given to the reality that most
applications have dependencies on various other systems. With Cloud Computing,
application dependencies can be highly dynamic, even to the point where each
dependency represents a discrete third party service provider. Cloud characteristics make
configuration management and ongoing provisioning significantly more complex than
with traditional application deployment. The environment drives the need for
architectural modifications to assure application security.

Software Development Life Cycle (SDLC): Cloud computing affects all aspects of the
SDLC, spanning application architecture, design, development, quality assurance,
documentation, deployment, management, maintenance, and decommissioning.

Compliance: Compliance clearly affects data, but it also influences applications (for
example, regulating how a program implements a particular cryptographic function),
platforms (perhaps by prescribing operating system controls and settings) and processes
(such as reporting requirements for security incidents).

Tools and Services: Cloud computing introduces a number of new challenges around
the tools and services required to build and maintain running applications. These include
development and test tools, application management utilities, the coupling to external
services, and dependencies on libraries and operating system services, which may
originate from Cloud providers. Understanding the ramifications of who provides, owns,
operates, and assumes responsibility for each of these is fundamental.

Vulnerabilities: These include not only the well-documented and continuously
evolving vulnerabilities associated with web apps, but also vulnerabilities associated
with machine-to-machine Service-Oriented Architecture (SOA) applications, which are
increasingly being deployed into the Cloud.


Software Development Lifecycle (SDLC) security is important, and should at a high
level address these three main areas of differentiation with Cloud-based development: 1)
updated threat and trust models, 2) application assessment tools updated for Cloud
environments, and 3) SDLC processes and quality checkpoints to account for application
security architectural changes.

IaaS, PaaS, and SaaS create different trust boundaries for the software development
lifecycle, which must be accounted for during the development, testing, and production
deployment of applications. For IaaS, a key success factor is the presence of trusted virtual
machine images. The best alternative is the ability to provide your own virtual machine
image conforming to internal policies. The best practices available to harden host systems
within DMZs should be applied to virtual machines.

Limiting services available to only those needed to support the application stack is
appropriate. Securing inter-host communications must be the rule; there can be no
assumption of a secure channel between hosts, whether in a common data center or even
on the same hardware device.

Managing and protecting application credentials and key material are critical. Extra care
should be undertaken with the management of files used for application logging and
debugging, as the locations of these files may be remote or unknown and the information
could be sensitive.

Account for external administration and multi-tenancy in the application's threat model.

Applications sufficiently complex to leverage an Enterprise Service Bus (ESB) need to
secure the ESB directly, leveraging a protocol such as WS-Security. The ability to
segment ESBs is not available in PaaS environments.

Metrics should be applied to assess effectiveness of application security programs.
Among the direct application security-specific metrics available are vulnerability scores
and patch coverage. These metrics can indicate the quality of application coding. Indirect
data handling metrics, such as the percentage of data encrypted, can indicate that
responsible decisions are being made from an application architecture perspective.

Cloud providers must support dynamic analysis web application security tools against
applications hosted in their environments.

Attention should be paid to how malicious actors will react to new Cloud application
architectures that obscure application components from their scrutiny. Hackers are likely
to attack visible code, including but not limited to code running in the user context. They
are likely to attack infrastructure and perform extensive black box testing.

Customers should obtain contractual permission to perform remote vulnerability
assessments, including traditional (network/host), and application vulnerability
assessments. Many Cloud providers restrict vulnerability assessments due to the
provider's inability to distinguish such tests from actual attacks, and to avoid potential
impact upon other customers.

Encryption and Key Management
Cloud customers and providers need to guard against data loss and theft. Today,
encryption of personal and enterprise data is strongly recommended, and in some cases
mandated by laws and regulations around the world. Cloud customers want their providers
to encrypt their data to ensure that it is protected no matter where the data is physically
located. Likewise, the Cloud provider needs to protect its customers' sensitive data.
Strong encryption with key management is one of the core mechanisms that Cloud
Computing systems should use to protect data.

While encryption itself doesn't necessarily prevent data loss, safe harbor provisions in
laws and regulations treat lost encrypted data as not lost at all. The encryption provides
resource protection while key management enables access to protected resources.

Encryption for Confidentiality and Integrity
Cloud environments are shared with many tenants, and service providers have privileged
access to the data in those environments. Thus, confidential data hosted in a Cloud must
be protected using a combination of access control (see Domain 12), contractual liability
(see Domains 2, 3, and 4), and encryption, which we describe in this section. Of these,
encryption offers the benefits of minimum reliance on the Cloud service provider and lack
of dependence on detection of operational failures.

Encrypting data in transit over networks
It is essential to encrypt multi-use credentials, such as credit card numbers,
passwords, and private keys, in transit over the Internet. Although Cloud provider
networks may be more secure than the open Internet, they are by their very architecture
made up of many disparate components, and disparate organizations share the Cloud.
Therefore, it is important to protect this sensitive and regulated information in transit even
within the Cloud provider's network. Typically, this can be implemented with equal ease
in SaaS, PaaS, and IaaS environments.
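As a concrete baseline for protecting data in transit, Python's standard library illustrates what a correctly configured TLS client looks like: the default context verifies server certificates and hostnames, and can additionally be pinned to a modern protocol floor.

```python
import ssl

# A client context with certificate verification and hostname checking enabled
# by default; this is the stdlib baseline for encrypting data in transit.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# Pin a floor on the protocol version to exclude legacy TLS.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Such a context would then be passed to the socket or HTTP layer; the key point for Cloud deployments is that the same verification discipline applies to intra-provider traffic, not just the open Internet.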

Encrypting data at rest
Encrypting data on disk or in a live production database has value, as it can protect against
a malicious Cloud service provider or a malicious co-tenant as well as against some types
of application abuse.

For long-term archival storage, some customers encrypt their own data and then send it as
ciphertext to a Cloud data storage vendor. The customer then controls and holds the
cryptographic keys and decrypts the data, if necessary, back on their own premises.
Encrypting data at rest is common within IaaS environments, using a variety of provider
and third party tools. Encrypting data at rest within PaaS environments is generally more
complex, requiring instrumentation of provider offerings or special customization.
Encrypting data at rest within SaaS environments is a feature Cloud customers cannot
implement directly, and need to request from their providers.
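The encrypt-before-upload workflow for archival storage can be illustrated with a deliberately simple one-time pad, using only the standard library. This is a teaching construction, not a recommendation: production systems should use a vetted authenticated cipher such as AES-GCM. What the sketch shows is the division of custody, where only ciphertext leaves the premises and the key stays with the customer.

```python
import secrets

def encrypt_otp(plaintext: bytes):
    """Illustrative one-time pad: a random key as long as the message, used once.
    Real deployments should use a vetted authenticated cipher (e.g. AES-GCM)."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt_otp(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# Workflow: encrypt on-premises, ship only the ciphertext to the storage
# vendor, and keep the key at home for later decryption.
data = b"archival record"
ciphertext, key = encrypt_otp(data)
uploaded = ciphertext            # only this leaves the premises
assert decrypt_otp(uploaded, key) == data
```

The design point is that the storage vendor never holds material sufficient to recover the plaintext, which is exactly the property the paragraph above describes.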

Encrypting data on backup media
This can protect against misuse of lost or stolen media. Ideally, the Cloud service provider
implements it transparently. However, as a customer and provider of data, it is your
responsibility to verify that such encryption takes place. One consideration for the
encryption infrastructure is dealing with the longevity of the data. Beyond these common
uses of encryption, the possibility of exotic attacks against Cloud providers also warrants
further exploration of means for encrypting dynamic data, including data residing in memory.

Key Management
Existing Cloud service providers may provide basic encryption key schemes to secure
Cloud-based application development and services, or they may leave all such protective
measures up to their customers. While Cloud service providers are progressing towards
supporting robust key management schemes, more work is needed to overcome barriers to
adoption. Emerging standards should solve this problem in the near future, but work is
still in progress.

There are several key management issues and challenges within Cloud Computing:
Secure key stores. Key stores must themselves be protected, just as any other sensitive
data. They must be protected in storage, in transit, and in backup. Improper key storage
could lead to the compromise of all encrypted data.

Access to key stores. Access to key stores must be limited to the entities that specifically
need the individual keys. There should also be policies governing the key stores, which
use separation of roles to help control access; an entity that uses a given key should not be
the entity that stores that key.
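The separation-of-roles rule above (an entity that uses a key should not also store it) can be sketched as a minimal key store; the class, entity names, and policy format are hypothetical:

```python
# Minimal sketch of a key store enforcing separation of roles: an entity may
# either custody a key or use it, never both. All names are hypothetical.
class KeyStore:
    def __init__(self):
        self._keys = {}       # key_id -> key material
        self._usage_acl = {}  # key_id -> set of entities allowed to use it

    def store(self, custodian, key_id, material, users):
        """The custodian registers a key but must not appear among its users."""
        if custodian in users:
            raise PermissionError("key custodian must not also be a key user")
        self._keys[key_id] = material
        self._usage_acl[key_id] = set(users)

    def use(self, entity, key_id):
        """Release key material only to entities on the usage ACL."""
        if entity not in self._usage_acl.get(key_id, ()):
            raise PermissionError(f"{entity} may not use {key_id}")
        return self._keys[key_id]

ks = KeyStore()
ks.store("custodian", "db-key", b"\x01\x02", users=["billing-app"])
assert ks.use("billing-app", "db-key") == b"\x01\x02"
try:
    ks.use("custodian", "db-key")   # the custodian is refused usage access
except PermissionError:
    pass
```

In a real deployment the ACL check would sit behind an HSM or key-management service rather than in application code, but the policy shape is the same.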

Key backup and recoverability. Loss of keys inevitably means loss of the data that those
keys protect. While this is an effective way to destroy data, accidental loss of keys
protecting mission-critical data would be devastating to a business, so secure backup and
recovery solutions must be implemented.

There are a number of standards and guidelines applicable to key management in the
Cloud. The OASIS Key Management Interoperability Protocol (KMIP) is an emerging
standard for interoperable key management in the Cloud. The IEEE 1619.3 standards
cover storage encryption and key management, especially as they pertain to storage IaaS.


Use encryption to separate data holding from data usage.

Segregate the key management from the Cloud provider hosting the data, creating a
chain of separation. This protects both the Cloud provider and customer from conflicts
when compelled to provide data due to a legal mandate.

When stipulating encryption in contract language, assure that the encryption adheres to
existing industry and government standards, as applicable.

Understand whether and how Cloud provider facilities provide role management and
separation of duties. In cases where the Cloud provider must perform key management,
understand whether the provider has defined processes for a key management lifecycle:
how keys are generated, used, stored, backed up, recovered, rotated, and deleted. Further,
understand whether the same key is used for every customer or if each customer has its
own key set.

Assure regulated and/or sensitive customer data is encrypted in transit over the Cloud
provider s internal network, in addition to being encrypted at rest. This will be up to the
Cloud customer to implement in IaaS environments, a shared responsibility between
customer and provider in PaaS environments, and the Cloud provider s responsibility in
SaaS environments.

In IaaS environments, understand how sensitive information and key material otherwise
protected by traditional encryption may be exposed during usage. For example, virtual
machine swap files and other temporary data storage locations may also need to be protected.

II) b9. Identity and Access Management

Managing identities and access control for enterprise applications remains one of the
greatest challenges facing IT today. While an enterprise may be able to leverage several
Cloud Computing services without a good identity and access management strategy, in the
long run extending an organization's identity services into the Cloud is a necessary
precursor towards strategic use of on-demand computing services.

Supporting today's aggressive adoption of an admittedly immature Cloud ecosystem
requires an honest assessment of an organization s readiness to conduct Cloud-based
Identity and Access Management (IAM), as well as understanding the capabilities of that
organization's Cloud Computing providers. We will discuss the following major IAM
functions that are essential for successful and effective management of identities in the
Cloud:

Identity provisioning/deprovisioning

Authentication

Federation

Authorization & user profile management

Compliance is a key consideration throughout.

Identity Provisioning: One of the major challenges for organizations adopting Cloud
Computing services is the secure and timely management of on-boarding (provisioning)
and off-boarding (deprovisioning) of users in the Cloud. Furthermore, enterprises that
have invested in user management processes within an enterprise will seek to extend those
processes and practices to Cloud services.

Authentication: When organizations start to utilize Cloud services, authenticating users in
a trustworthy and manageable manner is a vital requirement. Organizations must address
authentication-related challenges such as credential management, strong authentication
(typically defined as multi-factor authentication), delegated authentication, and managing
trust across all types of Cloud services.

Federation: In a Cloud Computing environment, Federated Identity Management plays a
vital role in enabling organizations to authenticate their users of Cloud services using the
organization's chosen identity provider (IdP). In that context, exchanging identity
attributes between the service provider (SP) and the IdP in a secure way is also an
important requirement. Organizations considering federated identity management in the
Cloud should understand the various challenges and possible solutions to address those
challenges with respect to identity lifecycle management, available authentication
methods to protect confidentiality and integrity, while supporting non-repudiation.

Authorization & user profile management: The requirements for user profiles and access
control policy vary depending on whether the user is acting on their own behalf (such as a
consumer) or as a member of an organization (such as an employer, university, hospital, or
other enterprise). The access control requirements in SPI environments include
establishing trusted user profile and policy information, using it to control access within
the Cloud service, and doing this in an auditable way.

Identity Provisioning Recommendations:

Capabilities offered by Cloud providers are not currently adequate to meet enterprise
requirements.
Customers should avoid proprietary solutions such as creating custom connectors unique
to Cloud providers, as these exacerbate management complexity.

Customers should leverage standard connectors provided by Cloud providers to the
extent practical, preferably built on SPML schema. If your Cloud provider does not
currently offer SPML, you should request it. Cloud customers should modify or extend
their authoritative repositories of identity data so that they encompass applications and
processes in the Cloud.

Authentication Recommendations: Both the Cloud provider and the customer enterprises
should consider the challenges associated with credential management and strong
authentication, and implement cost effective solutions that reduce the risk appropriately.
SaaS and PaaS providers typically provide the options of either built-in authentication
services to their applications or platforms, or delegating authentication to the enterprise.
Customers have the following options:

Authentication for enterprises. Enterprises should consider authenticating users via their
Identity Provider (IdP) and establishing trust with the SaaS vendor by federation.

Authentication for individual users acting on their own behalf. Enterprises should
consider using user-centric authentication such as Google, Yahoo, OpenID, Live ID, etc.,
to enable use of a single set of credentials valid at multiple sites.

Any SaaS provider that requires proprietary methods to delegate authentication (e.g.,
handling trust by means of a shared encrypted cookie or other means) should be
thoroughly evaluated with a proper security evaluation, before continuing. The general
preference should be for the use of open standards. For IaaS, authentication strategies can
leverage existing enterprise capabilities.

For IT personnel, establishing a dedicated VPN will be a better option, as they can
leverage existing systems and processes.

Some possible solutions include creating a dedicated VPN tunnel to the corporate
network or federation. A dedicated VPN tunnel works better when the application
leverages existing identity management systems (such as a SSO solution or LDAP based
authentication that provides an authoritative source of identity data).

In cases where a dedicated VPN tunnel is not feasible, applications should be designed to
accept authentication assertions in various formats (SAML, WS-Federation, etc.), in
combination with standard network encryption such as SSL. This approach enables
organizations to deploy federated SSO not only within an enterprise, but also to Cloud
services.

OpenID is another option when the application is targeted beyond enterprise users.
However, because control of OpenID credentials is outside the enterprise, the access
privileges extended to such users should be limited appropriately.

Any local authentication service implemented by the Cloud provider should be OATH
compliant. With an OATH-compliant solution, companies can avoid becoming locked into
one vendor s authentication credentials.

In order to enable strong authentication (regardless of technology), Cloud applications
should support the capability to delegate authentication to the enterprise that is consuming
the services, such as through SAML.

Cloud providers should consider supporting various strong authentication options such as
One-Time Passwords, biometrics, digital certificates, and Kerberos. This will provide
another option for enterprises to use their existing infrastructure.

Federation Recommendations: In a Cloud Computing environment, federation of identity
is key for enabling allied enterprises to authenticate, provide single or reduced sign-on
(SSO), and exchange identity attributes between the Service Provider (SP) and the
Identity Provider (IdP). Organizations considering federated identity management in the
Cloud should understand the various challenges and possible solutions to address them
with respect to identity lifecycle management, authentication methods, token formats,
and non-repudiation.

Enterprises looking for a Cloud provider should verify that the provider supports at least
one of the prominent standards (SAML and WS-Federation). SAML is emerging as a
widely supported federation standard and is supported by major SaaS and PaaS Cloud
providers. Support for multiple standards enables a greater degree of flexibility.

Cloud providers should have flexibility to accept the standard federation formats from
different identity providers. However most Cloud providers as of this writing support a
single standard, e.g., SAML 1.1 or SAML 2.0. Cloud providers desiring to support
multiple federation token formats should consider implementing some type of federation
gateway.
Organizations may wish to evaluate Federated Public SSO versus Federated Private
SSO. Federated Public SSO is based on standards such as SAML and WS-Federation
with the Cloud provider, while Federated Private SSO leverages the existing SSO
architecture over VPN. In the long run Federated Public SSO will be ideal, however an
organization with a mature SSO architecture and limited number of Cloud deployments
may gain short-term cost benefits with a Federated Private SSO.

Organizations may wish to opt for federation gateways in order to externalize their
federation implementation, in order to manage the issuance and verification of tokens.
Using this method, organizations delegate issuing various token types to the federation
gateway, which then handles translating tokens from one format to another.
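A federation gateway's token translation can be sketched as a mapping between formats. The field names below are hypothetical simplifications of real SAML and WS-Federation assertions, which in practice are signed XML documents:

```python
# Sketch of a federation gateway translating between token formats.
# Field names are hypothetical stand-ins for SAML / WS-Federation structures.
def saml_to_internal(saml_assertion: dict) -> dict:
    """Normalize an inbound (simplified) SAML assertion to a common shape."""
    return {
        "subject": saml_assertion["NameID"],
        "issuer": saml_assertion["Issuer"],
        "attributes": saml_assertion.get("AttributeStatement", {}),
    }

def internal_to_wsfed(token: dict) -> dict:
    """Re-issue the common token in a (simplified) WS-Federation shape."""
    return {
        "wsu:Subject": token["subject"],
        "wsfed:Issuer": token["issuer"],
        "Claims": token["attributes"],
    }

incoming = {"NameID": "alice@idp.example", "Issuer": "https://idp.example",
            "AttributeStatement": {"role": "auditor"}}
outgoing = internal_to_wsfed(saml_to_internal(incoming))
assert outgoing["wsu:Subject"] == "alice@idp.example"
```

The benefit the text describes falls out of the shape: each Cloud provider or IdP integrates once with the gateway's common format instead of pairwise with every other party.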

II) b10. Access Control Recommendations

Selecting or reviewing the adequacy of access control solutions for Cloud services has
many aspects, and entails consideration of the following:

Review appropriateness of the access control model for the type of service or data.

Identify authoritative sources of policy and user profile information.

Assess support for necessary privacy policies for the data.

Select a format in which to specify policy and user information.

Determine the mechanism to transmit policy from a Policy Administration Point (PAP)
to a Policy Decision Point (PDP).

Determine the mechanism to transmit user information from a Policy Information Point
(PIP) to a Policy Decision Point (PDP).

Request a policy decision from a Policy Decision Point (PDP).

Enforce the policy decision at the Policy Enforcement Point (PEP).

Log information necessary for audits.
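The PAP/PDP/PEP flow in the list above can be sketched end to end; the policy format is hypothetical, and evaluation is default-deny:

```python
# Policies authored at the Policy Administration Point (hypothetical format).
POLICIES = [
    {"role": "auditor", "resource": "reports", "action": "read", "effect": "permit"},
]

def pdp_decide(user_profile, resource, action):
    """Policy Decision Point: match the request against the policy set."""
    for rule in POLICIES:
        if (rule["role"] in user_profile["roles"]
                and rule["resource"] == resource
                and rule["action"] == action):
            return rule["effect"]
    return "deny"  # default-deny when no rule matches

def pep_enforce(user_profile, resource, action, audit_log):
    """Policy Enforcement Point: apply the decision and log it for audit."""
    decision = pdp_decide(user_profile, resource, action)
    audit_log.append((user_profile["id"], resource, action, decision))
    return decision == "permit"

log = []
alice = {"id": "alice", "roles": ["auditor"]}  # profile from the PIP
assert pep_enforce(alice, "reports", "read", log) is True
assert pep_enforce(alice, "reports", "delete", log) is False
assert len(log) == 2  # every decision is logged for audit
```

In a Cloud deployment the PDP and PEP are typically separate services with the policy and user information transmitted between them, which is exactly what the mechanism-selection steps above are deciding.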

IDaaS Recommendations: Identity as a Service should follow the same best practices that
an internal IAM implementation does, along with added considerations for privacy,
integrity, and auditability.

For internal enterprise users, custodians must review the Cloud provider's options to
provide secured access to the Cloud, either through a direct VPN or through an industry
standard such as SAML and strong authentication. The reduction of cost from using the
Cloud needs to be balanced against risk mitigation measures to address the privacy
considerations inherent in having employee information stored externally.

For external users such as partners, the information owners need to incorporate
interactions with IAM providers into their SDLC, as well as into their threat assessments.

Application security must also be considered: the interactions of the various components
with each other, and the vulnerabilities created thereby (such as SQL Injection and
Cross-Site Scripting, among many others), must be protected against.

PaaS customers should research the extent to which IDaaS vendors support industry
standards for provisioning, authentication, communication about access control policy,
and audit information.

Proprietary solutions present a significant risk for components of IAM environments in
the Cloud, because of the lack of transparency into the proprietary components.
Proprietary network protocols, encryption algorithms, and data communication are often
less secure, less robust, and less interoperable. It is important to use open standards for
the components of IAM that you are externalizing.

For IaaS customers, third-party images used for launching virtual servers need to be
verified for user and image authenticity. A review of the support provided for life cycle
management of the image must verify the same principles as with software installed on
your internal network.

II) b11. Virtualization

The ability to provide multi-tenant Cloud services at the infrastructure, platform, or
software level is often underpinned by the ability to provide some form of virtualization to
create economic scale. However, use of these technologies brings additional security
concerns. This domain looks at these security issues. While there are several forms of
virtualization, by far the most common is the virtualized operating system, and this is the
focus in this version of our guidance.

If Virtual Machine (VM) technology is being used in the infrastructure of the Cloud
services, then we must be concerned about compartmentalization and hardening of those
VM systems. The reality of current practices related to management of virtual operating
systems is that many of the processes that provide security-by-default are missing, and
special attention must be paid to replacing them. The core virtualization technology itself
introduces new attack surfaces in the hypervisor and other management components, but
more important is the severe impact virtualization has on network security.

Virtual machines now communicate over a hardware backplane, rather than a network. As
a result, standard network security controls are blind to this traffic and cannot perform
monitoring or in-line blocking. These controls need to take a new form to function in the
virtual environment. Commingling of data in centralized services and repositories is
another concern. A centralized database as provided by a Cloud Computing service
should in theory improve security over data distributed over a vast number and mixture of
endpoints. However this is also centralizing risk, increasing the consequences of a breach.

Another concern is the commingling of VMs of different sensitivities and security. In
Cloud Computing environments, the lowest common denominator of security will be
shared by all tenants in the multi-tenant virtual environment unless a new security
architecture can be achieved that does not wire in any network dependency for protection.


Identify which types of virtualization your Cloud provider uses, if any.

Virtualized operating systems should be augmented by third party security technology to
provide layered security controls and reduce dependency on the platform provider alone.

Understand which security controls are in place internal to the VMs other than the
built-in hypervisor isolation, such as intrusion detection, anti-virus, vulnerability
scanning, etc.
Secure by default configuration must be assured by following or exceeding available
industry baselines.

Understand which security controls are in place external to the VMs to protect
administrative interfaces (web-based, APIs, etc.) exposed to the customers.

Validate the pedigree and integrity of any VM image or template originating from the
Cloud provider before using.

VM-specific security mechanisms embedded in hypervisor APIs must be utilized to
provide granular monitoring of traffic crossing VM backplanes, which will be opaque to
traditional network security controls.

Administrative access and control of virtualized operating systems is crucial, and should
include strong authentication integrated with enterprise identity management, as well as
tamper-proof logging and integrity monitoring tools.
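Tamper-evident logging of administrative actions can be sketched with a hash chain, where each entry commits to its predecessor so retroactive edits are detectable. This is a minimal illustration, not a production audit system:

```python
import hashlib
import json

# Tamper-evident administrative log: each entry carries a hash over its own
# record plus the previous entry's hash, so any retroactive edit breaks the chain.
GENESIS = "0" * 64

def append_entry(chain, record):
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    prev_hash = GENESIS
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash
                or hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

chain = []
append_entry(chain, "admin login: root@hypervisor")
append_entry(chain, "vm snapshot taken")
assert verify(chain)
chain[0]["record"] = "nothing happened"  # retroactive tampering
assert not verify(chain)
```

Production systems additionally sign or externally anchor the chain head so an attacker cannot simply recompute every hash after editing an entry, but the detection principle is the same.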

Explore the efficacy and feasibility of segregating VMs and creating security zones by
type of usage (e.g., desktop vs. server), production stage (e.g., development, production,
and testing) and sensitivity of data on separate physical hardware components such as
servers, storage, etc.

Have a reporting mechanism in place that provides evidence of isolation and raises alerts
if there is a breach of isolation.

Be aware of multi-tenancy situations with your VMs where regulatory concerns may
warrant segregation.


II.c. Mobile Cloud Computing - Security

Mobile Cloud Computing exposes private data of the mobile user to different security
risks. Users' data can be stored on the mobile side or on the Cloud side, can be accessed
by applications (or application components) running on the mobile device or in the
Cloud, or can be transmitted between the mobile device application components and
Cloud application components. This section presents in the first part the security issues
related to Mobile Cloud Computing and highlights in the second part the state-of-the-art
work proposed to address these security issues. As we have said previously, Mobile
Cloud Computing is a combination of mobile and Cloud Computing. Thus, the security
issues in Mobile Cloud Computing are due to the security threats against the Cloud, the
mobile devices, and the applications running on these devices. These threats can be
classified as follows: mobile threats and Cloud threats. The main purpose of these threats
is to steal personal data (e.g. credit card numbers, passwords, contact database, calendar,
location) or to exploit mobile device resources.

II) c1. Mobile Threats

Not long ago, malware development for mobile devices was considered a myth because of
their hardware and software limitations.

Nowadays, the increasing use and development of mobile devices has led to the evolution
of mobile threats: from the first case of mobile malware in 2004, which targeted Symbian,
to the DroidDream, DroidKungFu, and Plankton code discovered in 2011 in the official
Android Market. Recent studies classify mobile attacks into several categories:
application-based attacks, web-based attacks, network-based attacks, and physical attacks.

Application-based attacks concern both offline and online applications. They include
malware, spyware, and privacy threats.

Malware is software that performs malicious behavior on a device without the user being
aware of it (e.g. sending unsolicited messages that increase the phone's bill, or allowing
an attacker to take control of the device).

Spyware is software designed to collect private data without the user's knowledge (e.g.
phone call history, text messages, camera pictures).

Privacy threats are caused by applications (malicious or not) that need access to sensitive
data, such as location, in order to run (e.g. location-based applications).

Web-based attacks are specific to online applications and include phishing scams,
drive-by downloads, and browser exploits.

Phishing scams aim at stealing information such as account logins and passwords.

Drive-by-Downloads is a technique that allows the automatic download of applications
when a user visits a certain web page. In addition to these attacks, attackers use different
techniques to obtain private data: repackaging, misleading disclosure and update.

Repackaging was the most widely used technique in 2011 to infect applications running
under Android. In this kind of attack, an attacker takes a legitimate application, modifies
it by adding malicious code, and then republishes it.

The main difference between the legitimate and the modified application is that the latter
requires more access-control permissions, such as permission to access the phone contacts
or to send SMS messages.
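The permission gap described above can be checked mechanically once the permission sets of both builds are known. A toy Python sketch (the permission names follow Android's convention; real tooling would extract them by parsing each APK's AndroidManifest.xml):

```python
# Permissions declared by the original, legitimate build (illustrative).
ORIGINAL = {"INTERNET", "ACCESS_NETWORK_STATE"}

# Permissions declared by a suspected repackaged build (illustrative).
SUSPECT = {"INTERNET", "ACCESS_NETWORK_STATE", "READ_CONTACTS", "SEND_SMS"}

# Permissions commonly abused for data theft or premium-SMS fraud.
HIGH_RISK = {"READ_CONTACTS", "SEND_SMS", "RECEIVE_SMS", "READ_SMS"}

def added_permissions(original, suspect):
    """Permissions the suspect build requests beyond the original's."""
    return suspect - original

extra = added_permissions(ORIGINAL, SUSPECT)
risky = extra & HIGH_RISK
assert risky == {"READ_CONTACTS", "SEND_SMS"}   # flags the suspicious additions
```

A permission diff alone is only a heuristic, but it captures exactly the symptom the text describes: the repackaged copy asks for more than the original ever needed.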

Misleading disclosure is a technique an attacker uses to hide the undesirable functionality
of an application so that a user does not notice it and agrees to it. The undesirable
functionality is usually hidden in the application's terms and conditions, and attackers rely
on the fact that users rarely pay attention to the terms and conditions during installation.
Such applications are difficult to block or remove because they do not violate their own
terms of service or any application market's user agreement.

The update technique was recently used by malware writers as an attack method in the
Android Market. First, the malware writer publishes an uninfected application; then the
application is updated with a malicious version. Using this technique, the attacker takes
advantage of the users' trust in the application market, and the number of infected devices
increases: even users who only download applications from the official market are
affected. A consequence of this attack technique is a decrease in users' confidence in the
application market, which may lower the number of market customers and therefore the
market's profits.

II) c2. Cloud Threats

The Cloud is similar to a big black box in which nothing inside is visible to the clients.
Therefore, clients have no idea of, or control over, what happens to their assets. Cloud
Computing is about clients transferring the control of their resources (e.g., data,
applications) and responsibilities to one or more third parties (Cloud service providers).
This greatly increases the risk to which client assets are exposed.

Before the Cloud's emergence, companies generally kept their data inside their perimeter
and protected it from any risks caused by malicious intruders. A malicious intruder was
considered to be an outside attacker or a malicious employee. Now, if a company chooses
to move its assets into the Cloud, it is forced to trust the Cloud provider and the security
solutions it offers, when provided. However, even if the Cloud provider is honest, it can
have malicious employees (e.g., system administrators) who can tamper with the virtual
machines and violate the confidentiality and integrity of the client's data.

In Cloud Computing, the obligations in terms of security are divided between the Cloud
provider and the Cloud user. In the case of SaaS, this means that the provider must ensure
data and application security, so service levels, security, governance, compliance, and
liability expectations of the service are contractually stipulated and enforced. In the case
of PaaS or IaaS, the security responsibility is shared between the consumer and the
provider. The responsibility of the consumer's system administrators is to effectively
manage data security; the responsibility of the provider is to secure the underlying
platform and infrastructure components and to ensure the basic services of availability and
security. Several analyses have been conducted to identify the main security issues
regarding Cloud Computing. Following these analyses, security issues have been
classified in terms of concerns: domain concerns, service concerns, threats, actor
concerns, and property concerns.

The domain concerns are divided into two types: 1) governance concerns and 2) operation
concerns. Governance addresses strategic and policy security issues within Cloud
Computing. The highlighted issues are data ownership and data location.

Data ownership refers to the ownership of purchased digital data. Thanks to the Cloud, it
is possible to store purchased media files, such as audio, video, or e-books, remotely rather
than locally. This can lead to concerns regarding the true ownership of the data: if a user
purchases media using a given service and the media itself is stored remotely, there is a
risk of losing access to the purchased media. The service used could go out of business,
for example, or could deny access to the user for some other reason. Data location raises
many issues because of the compliance problem of privacy laws, which differ from one
country to another. For example, the laws in the European Union (EU) and South America
differ from the laws in the United States (US) regarding data privacy. Under EU and South
American law, personal data can be collected only under strict conditions and for a
legitimate purpose; in the US, there is no all-encompassing law regulating the collection
and processing of personal data. Operation addresses technical security issues within
Cloud Computing, such as: 1) the security of data stored in the Cloud, 2) the security of
data transmitted between Cloud services, 3) the security of data transmitted between
Cloud services and a mobile platform, and 4) data access and integrity. If an application
relies on remote data storage and Internet access in order to function, then any changes to
these data can significantly affect the user.
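One way a client can check the integrity of data stored in the Cloud without trusting the provider is to compute a keyed MAC before upload and recheck it on download. A minimal sketch, assuming the key stays with the data owner (the key and payload here are illustrative):

```python
import hashlib
import hmac

# Assumed secret held only by the data owner, never given to the provider.
SECRET = b"owner-side integrity key"

def tag(data):
    """MAC computed before upload; stored alongside the object."""
    return hmac.new(SECRET, data, hashlib.sha256).hexdigest()

def verify(data, stored_tag):
    """Recompute on download; constant-time compare resists timing attacks."""
    return hmac.compare_digest(tag(data), stored_tag)

uploaded = b"quarterly-report-v1"
t = tag(uploaded)
assert verify(uploaded, t)                    # unchanged data passes
assert not verify(b"quarterly-report-v2", t)  # any modification fails
```

This protects integrity only; confidentiality would additionally require encrypting the data client-side before it leaves the organization.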

The threats class identifies the main security issues an organization may face when it
wants to move its assets into the Cloud. The main concerns mentioned are data loss,
insecure application interfaces, denial of service, and malicious insiders. The actor class
identifies the main security issues that may be caused by the Cloud provider, by the Cloud
clients, or by an outsider. A Cloud provider may be affected by malicious Cloud clients'
activities: malicious Cloud clients can target honest clients' data; they can legitimately be
on the same physical machine as the target and gather information about it.

A Cloud client may be affected by a malicious Cloud provider. The malicious provider
may log the client's communication and read unencrypted data; it may also peek into the
virtual machines or make copies of the virtual machines assigned to run client assets. In
this way a Cloud provider gains information about client data or behavior and can sell the
information or even use it itself. An outsider can also affect a Cloud client: the outsider
may listen to the network traffic, or may insert malicious traffic and launch denial-of-service
attacks. The services class lists the security issues that may occur while using any of the
Cloud-provided services: SaaS, PaaS, or IaaS. The fundamental security challenges are
data storage security, data transmission security, application security, and security related
to third-party resources. The properties that bring out the security issues encountered in
the Cloud are privacy, security, and trust. Security in general is related to the following
aspects: data confidentiality, data integrity, and data availability. Privacy is one of the
significant concerns in Mobile Cloud Computing. For example, some smartphone
applications use the Cloud to store users' data; the main risk in this context is that
unauthorized people can access and obtain those data. Other examples concern
location-aware applications, such as applications that find nearby restaurants for the user,
or applications that allow the user's friends and family to receive updates regarding
her/his location.
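A common mitigation for such location privacy risks is to coarsen coordinates before they leave the device, so the Cloud service only ever sees an approximate position. A trivial sketch (the coordinates are illustrative):

```python
def coarsen(lat, lon, decimals=2):
    """Round coordinates before sharing; 2 decimals is roughly 1 km."""
    return round(lat, decimals), round(lon, decimals)

precise = (48.858370, 2.294481)   # exact device position
shared = coarsen(*precise)        # what the Cloud service receives
assert shared == (48.86, 2.29)
```

The `decimals` parameter trades utility for privacy: a restaurant finder still works at 1 km granularity, while the exact position never reaches the provider.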


Mobile Cloud Computing is a model that can be described as the availability of Cloud
Computing resources to mobile environments. From a security point of view, Mobile
Cloud Computing introduces many security issues because it combines mobile devices
with Cloud services. This section presented the security issues that can jeopardize Mobile
Cloud users' private data or applications. The issues were divided into two types: mobile
threats and Cloud threats. For each threat type, the security issues that may affect the data,
the applications, the device (in the case of mobile threats), and the users' privacy were
presented. The section also presented an overview of the main Mobile Cloud Computing
characteristics, which were used to provide a definition of Mobile Cloud Computing.

IBM Guardium

IBM InfoSphere Guardium provides the simplest, most robust solution for
assuring the privacy and integrity of trusted information in your data center,
and reducing costs by automating the entire compliance auditing process in
heterogeneous environments.

Deploy centralized and standardized controls for real-time database security
and monitoring, fine-grained database auditing, automated compliance
reporting, data-level access control, database vulnerability management and
auto-discovery of sensitive data.

The InfoSphere Guardium products address the database security and
compliance lifecycle with a unified web console, back-end data store, and
workflow automation system, which are intended to enable you to:

Locate and classify sensitive information in corporate databases
Assess database server and operating system vulnerabilities and configuration
Ensure configurations are locked down after recommended changes are applied
Provide high visibility and granularity into data transactions and activity across all supported platforms and protocols - with an audit trail that supports
separation of duties and that is designed to be secure and tamper-proof

Track activities on major file and document sharing platforms such as
Microsoft SharePoint
Monitor and enforce your policies with alerting and blocking for sensitive
data access, privileged user actions, change control, application user
activities, and security exceptions such as failed logins
Automate the entire compliance auditing process, including report
distribution to oversight teams, sign-offs, and escalations with preconfigured
reports relating to Sarbanes-Oxley (SOX), PCI DSS, and data privacy
Create a single, centralized audit repository for enterprise-wide compliance
reporting, performance optimization, investigations, and forensics
Easily scale from safeguarding a single database to protecting a large number
of databases in distributed data centers around the world
Enable deeper data activity insights to IT Security Information and Event
Management (SIEM) tools for more accurate and effective security

IBM Guardium Products

Data Activity Monitoring

InfoSphere Guardium offers continuous monitoring of databases, warehouses,
file shares, document-sharing solutions, and big data environments while
automating compliance.

Guardium Architecture

The InfoSphere Guardium products offer a simple, robust solution designed
to prevent unauthorized data access, changes, and leaks from databases, data
warehouses, file shares, document-sharing solutions, and big data
environments such as Hadoop, helping to ensure the integrity of information
in the data center and automating compliance controls. They provide a
scalable platform, intended to enable continuous data activity monitoring
from heterogeneous sources, as well as enforcement of your policies for
sensitive data access enterprise-wide. Designed to be a secure, centralized
audit repository combined with an integrated compliance workflow
automation application, the products are designed to streamline compliance
validation activities across a wide variety of mandates.

The InfoSphere Guardium product architecture enables users to select the
modules appropriate for their immediate needs, adding additional modules as
requirements grow and change. Available modules include:

Data Activity Monitor and Audit - Standard:
Data Activity Monitoring for databases, file sharing, document sharing,
warehouses, and Hadoop
Application user activity monitoring (Application End-User Identifier)
Data Activity Monitor and Audit - Advanced: All capabilities in Data Activity
Monitor and Audit - Standard, plus the ability to block data traffic
according to policy (data-level access control)
Vulnerability Assess and Monitor - Standard:
Database Vulnerability Assessment Application
Database Protection Knowledge Base
Vulnerability Assess and Monitor - Advanced: All capabilities in
Vulnerability Assess and Monitor - Standard, plus Configuration Audit
System Application and Entitlement Reports Applications
Central Manager and Aggregator Pack:
Central Manager and Aggregator Application

Advanced Compliance Workflow Application

Base appliances:
Physical or virtual appliance image or both
Enterprise Integrator
Sensitive Data Finder Application
Guardium Database Security Solutions

Monitor Data Activity in Real Time

Identify unauthorized or suspicious activities by continuously monitoring
access to databases, data warehouses, Hadoop systems and file share
platforms in real-time.

Audit and Validate Compliance

Simplify SOX, PCI-DSS, and Data Privacy processes with pre-configured
reports and automated oversight workflows (electronic sign-offs, escalations,
etc.) to satisfy mandates.

Secure and Protect Big Data Environments

Build security into big data environments to prevent breaches, ensure data
integrity and satisfy compliance.

Protect Data Privacy

Develop a holistic approach to data protection to ensure compliance and
reduce costs.

Assess Vulnerabilities

Scan the entire data infrastructure for vulnerabilities and receive an ongoing
evaluation of your data security posture, using both real-time and historical data.
Safeguard both Structured and Unstructured Data

Ensure structured and unstructured data is identified, transformed, and protected.
Protect and Secure Data in the Cloud and Virtual environments

Providing comprehensive data protection for Cloud, virtual, and physical environments.
II.d. Security Analysis in the Migration to Cloud Environments

Cloud computing is a new paradigm that combines several computing concepts and
technologies of the Internet, creating a platform for more agile and cost-effective business
applications and IT infrastructure. The adoption of Cloud computing has been increasing
for some time and the maturity of the market is steadily growing. Security is the question
most consistently raised as consumers look to move their data and applications to the
Cloud. This section justifies the importance of security in the migration of legacy systems
and analyzes different approaches to security in migration processes to the Cloud, with the
aim of identifying the needs, concerns, requirements, aspects, opportunities, and benefits
of security in the migration of legacy systems.

Neediest Industry Adopting Cloud Computing

A recent Cloud Computing survey of over 10,500 participants from 88 countries
highlighted the fact that global NGOs (Non-Governmental Organizations) have similar
reasons to other industries for why they have or have not adopted Cloud Computing, and
NGOs may in fact be taking the lead in adopting this new technology, with 90% of
respondents worldwide indicating they are using Cloud Computing.

Why should other industries care? Generally NGOs

Are stretched thin on resources
Have smaller IT budgets
Cannot afford a "redo" on any IT mistake

Sound familiar? Granted, outside of core IT and business applications like
HR, CRM, accounting/financial management, social collaboration, etc., most,
if not all, industries' unique needs do not align with NGOs'. However, when
considering any new technology, there is always value in gaining insight into
what others perceive as advantages or deterrents for that technology.

The highlights from this TechSoup Global study were that Simplified
Administration, Rapid Deployment, and Improved Costs were identified as the
primary advantages of Cloud Computing, while Lack of Knowledge was identified
as the primary barrier. Also, as in other studies, Costs and Data Security
were listed both as an advantage and as a deterrent, highlighting potential
variances by solution or provider.

The study's charts of reported major advantages and major deterrents are omitted here.

[Peter Johnson, 12 October 2012]

Cloud Computing appears as a computational model or paradigm whose main objective is
to provide secure, quick, convenient data storage and network computing services, with all
computing resources virtualized as services and delivered over the Internet. The Cloud
enhances collaboration, agility, scaling, and availability, the ability to scale to fluctuations
in demand, and the acceleration of development work, and it provides the potential for
cost reduction through optimized and efficient computing.

Cloud computing combines a number of computing concepts and technologies, such as
SOA, Web 2.0, and virtualization, with reliance on the Internet, providing common
business applications online through web browsers to satisfy the computing needs of
users, while the software and data are stored on servers. There is commercial pressure on
businesses to adopt Cloud computing models, but customers need to ensure that their
Cloud services are driven by their own business needs rather than by providers' interests,
which are driven by short-term revenues and sales targets together with long-term market
share aspirations. The global presence of the Internet and the introduction of wireless
networking and mobile devices featuring always-on Internet connectivity have raised
users' expectations and the demand for services over the Internet. However, the
architecture required by service providers to enable Web 2.0 has created an IT service that
is differentiated by resilience, scalability, reusability, interoperability, security, and open
platform development. This has effectively become the backbone of Cloud computing and
is considered by a number of vendors and services to be an operating system layer of its
own.

The importance of Cloud computing is increasing and it is receiving growing attention in
the scientific community. In fact, a Gartner study ranked Cloud computing first among the
top 10 technologies with the best prospects for 2011 and subsequent years for companies
and organizations. NIST defines Cloud computing as a model for enabling convenient,
on-demand network access to a shared pool of configurable computing resources (e.g.,
networks, servers, storage, applications, and services) that can be rapidly provisioned and
released with minimal management effort or service provider interaction. This Cloud
model promotes availability and is composed of five essential characteristics, three
service models, and four deployment models. Among the essential characteristics are
on-demand self-service, broad network access, resource pooling, rapid elasticity, highly
abstracted resources, near-instant scalability and flexibility, and measured service.

In another study about Cloud computing, the majority of the participants cited three main
drivers of Cloud computing: more flexibility, followed by cost savings and better
scalability of their IT. Cloud computing can bring relief through the faster deployment of
applications at lower cost. In the same study, an overwhelming majority of participants
considered security issues to be their main concern regarding the use of Cloud computing.
In addition, legal, privacy, and compliance issues are considered to be areas of risk.
Focusing on the security issue, the majority of participants agreed that security concerns
are blocking their move to the Cloud. It appears that they are not worried primarily about
the lack of security measures in themselves, but about the lack of transparency on the side
of the Cloud providers.

The ENISA report highlights the benefits that some small and medium-size companies
can realize with Cloud computing. A smaller, cost-constrained organization may find that
a Cloud deployment allows it to take advantage of large-scale infrastructure security
measures that it could not otherwise afford. Some of the possible advantages include
DDoS (distributed denial of service) protection, forensic image support, logging
infrastructure, timely patch and update support, scaling resilience, and perimeter
protection (firewalls, intrusion detection and prevention services).

The adoption of Cloud computing has been increasing for some time and the maturity of
the market is steadily growing; not just in volume, choice and functionality, but also in
terms of the ability of suppliers to answer the complex security, regulatory and compliance
questions that security oversight functions are now asking. In part this growth has been
driven by the continued view that Cloud services will deliver cost savings and increased
flexibility. Legacy information systems typically form the backbone of the information
flow within an organization and are the main vehicle for consolidating information about
the business. As a solution to the problems these systems pose such as brittleness,
inflexibility, isolation, non-extensibility, lack of openness, etc., many organizations are
migrating their legacy systems to new environments which allow the information system
to be more easily maintained and adaptable to new business requirements.

The essence of legacy system migration is to move an existing, operational system to a
new platform, retaining the functionality of the legacy system while causing as little
disruption to the existing operational and business environment as possible.

Legacy system migration is a very expensive procedure which carries a definite risk of
failure. Consequently, before any decision to migrate is taken, an intensive study should be
undertaken to quantify the risks and benefits and fully justify the redevelopment of the
legacy system involved. Enterprises need to migrate their IT systems to profit from the
wide set of benefits offered by Cloud environments. It is not surprising that one of the
many opportunities facing established companies in today's competitive environment is
how best to leverage the Cloud as a resource, and by extension how to migrate their
existing IT environment into a Cloud. Of particular concern to the CIO are two aspects
associated with migration: cost and risk. Security consistently raises the most questions as
consumers look to move their data and applications to the Cloud. Cloud computing does
not introduce any security issues that have not already been raised for general IT security.

The concern in moving to the Cloud is that implementing and enforcing security policies
now involves a third party. This loss of control emphasizes the need for transparency from
Cloud providers. In some cases the Cloud will offer a better security posture than an
organization could otherwise provide. We want to analyze the different existing
approaches in the literature on migration processes to Cloud computing, taking into
account the security aspects that also have to be moved to the Cloud. Different initiatives
aim to show the growing importance of migration processes in modernizing legacy
systems and advancing the business needs and services that organizations offer to a
growing market, now and in the future.

We want to first analyze the different existing proposals to identify and study the most
interesting aspects of migration to the Cloud, and then extract the main advantages and
disadvantages and identify gaps, challenges, and opportunities to be further investigated.
In this study we also focus on the security issues considered in migration processes, as
security in these open environments is very important and of high value to organizations
that wish to move their applications to the Cloud.

II) d1. Security Benefits and Challenges in Cloud Computing

Cloud Computing is not necessarily more or less secure than the current environment,
although, as with any new technology, it does create new risks, new threats, new
challenges, and new opportunities. In some cases moving to the Cloud provides an opportunity to
re-architect older applications and infrastructure to meet or exceed modern security
requirements. At other times the risk of moving sensitive data and applications to an
emerging infrastructure might exceed the required tolerance.

Although there is a significant benefit to leveraging Cloud computing, security concerns
have led organizations to hesitate to move critical resources to the Cloud. Corporations
and individuals are often concerned about how security and compliance integrity can be
maintained in this new environment. With the Cloud model, you lose control over physical
security because you are sharing computing resources with other companies (in a public
Cloud); moreover, if you decide to move the storage services provided by one Cloud
vendor to another, those storage services may be incompatible with the new vendor's
services.

It is recommended that your development tool of choice should have a security model
embedded in it to guide developers during the development phase and restrict users only
to their authorized data when the system is deployed into production. In the rush to take
advantage of the benefits of Cloud computing, not least of which is significant cost
savings, many corporations are seemingly rushing into Cloud computing without a serious
consideration of the security implications.

To overcome the customer concerns about application and data security, vendors must
address these issues head-on. There is a strong apprehension about insider breaches, along
with vulnerabilities in the applications and systems availability that could lead to loss of
sensitive data and money. Such challenges can dissuade enterprises from adopting
applications within the Cloud. Therefore, the focus is not upon portability of applications,
but on preserving or enhancing the security functionality provided by the legacy
application and achieving a successful application migration.

Cloud providers and vendors have advanced in this direction, improving the security
aspects and solutions offered to customers who wish to move their applications and data
to the Cloud, which has made the Cloud a very attractive paradigm because of its
perceived economic and operational benefits. Among this attractive set of benefits are the
security benefits offered by Cloud providers to customers who choose to move their
applications to the Cloud. The most popular security benefits in Cloud computing include
the following:

Security and the benefits of scale: put simply, all kinds of security measures are cheaper
when implemented on a larger scale, due to the massive concentration of resources. The
concentrated data presents a more attractive target to attackers, but Cloud-based defenses
can be more robust, scalable, and cost-effective. This includes all kinds of defensive
measures such as filtering, patch management, hardening of virtual machine instances and
hypervisors, etc.

Security as a market differentiator: security is a priority concern for many Cloud
customers; many of whom will make buying choices on the basis of the reputation for
confidentiality, integrity and resilience of the provider as well as the security services
offered by the provider.

Standardized interfaces for managed security services: large Cloud providers can offer a
standardized, open interface to managed security services providers. This creates a more
open and readily available market for security services.

Rapid, smart scaling of resources: the ability of the Cloud provider to dynamically
reallocate resources for filtering, traffic shaping, authentication, encryption, etc., to
defensive measures (e.g., against DDoS attacks) has obvious advantages for resilience.

In addition to these benefits, the Cloud also offers others, such as more timely, effective,
and efficient updates and defaults. There are some good security traits that come with
centralizing your data: Cloud providers can have staff specialize in security, privacy, and
other areas of high interest and concern to the organization; the structure of Cloud
computing platforms is typically more uniform than that of most traditional computing
centers; and greater uniformity and homogeneity facilitate platform hardening and enable
better automation of security management activities such as configuration control,
vulnerability testing, security audits, and security patching of platform components.
Further benefits include resource availability, backup and recovery, and redundancy.

Disaster recovery capabilities are built into Cloud computing environments, and
on-demand resource capacity can be used for better resilience when facing increased
service demands or distributed denial of service attacks, as well as for quicker recovery
from serious incidents. The architecture of a Cloud solution extends to the client at the
service endpoint, used to access hosted applications. Data maintained and processed in the
Cloud can present less of a risk to an organization with a mobile workforce than having
that data dispersed on portable computers or removable media out in the field, where theft
and loss of devices routinely occur.

II) d2. Security Issues in Public, Private and Hybrid Clouds

While Cloud models provide rapid and cost-effective access to business technology, not
all of these services provide the same degree of flexibility or security control. In most
organizations, data protection levels vary depending on the use of technology. Public
Clouds (or external Clouds) describe Cloud computing in the traditional mainstream
sense, whereby resources are dynamically provisioned on a fine-grained, self-service basis
over the Internet, via web applications or web services, from an off-site, third-party
provider who shares resources and bills on a fine-grained, utility-computing basis. In a
public Cloud, day-to-day security management operations are delegated to the third-party
vendor, who is responsible for the public Cloud service offering. Private Clouds differ
from public Clouds in that the network, computing, and storage infrastructure associated
with private Clouds is dedicated to a single organization and is not shared with any other
organizations (i.e., the Cloud is dedicated to a single organizational tenant).

The security management and day-to-day operation of hosts are delegated to internal IT or
to a third party with contractual SLAs. By virtue of this direct governance model, a
customer of a private Cloud should have a high degree of control and oversight of the
physical and logical security aspects of the private Cloud infrastructure.

A hybrid Cloud environment consisting of multiple internal and/or external providers is a
possible deployment for organizations. With a hybrid Cloud, organizations might run
non-core applications in a public Cloud, while maintaining core applications and sensitive
data in-house in a private Cloud. Providing security in a private Cloud or a public Cloud
is easier than in a hybrid Cloud, since a private Cloud or a public Cloud commonly has
only one service provider. Providing security in a hybrid Cloud consisting of multiple
service providers is much more difficult, especially for key distribution and mutual
authentication.

Also, for users to access the services in a Cloud, a user digital identity is needed so that
the servers of the Cloud can manage access control. However, there are many different
kinds of Clouds, and each of them has its own identity management system. A user who
wants to access services from different Clouds therefore needs multiple digital identities
from different Clouds, which is inconvenient for users. With federated identity
management, each user has a single unique digital identity with which he or she can
access different services from different Clouds.
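The federated model described above can be sketched as a single identity provider issuing signed tokens that multiple Clouds verify. The following is a toy illustration using a shared HMAC key; real federations (e.g., SAML or OpenID Connect) use public-key signatures, and the provider and service names here are hypothetical:

```python
import hashlib
import hmac

class IdentityProvider:
    """Single identity provider trusted by all Clouds in the federation (hypothetical)."""
    def __init__(self, secret: bytes):
        self.secret = secret

    def issue_token(self, user: str) -> str:
        # Sign the user name so relying Clouds can verify it was issued by this IdP.
        sig = hmac.new(self.secret, user.encode(), hashlib.sha256).hexdigest()
        return f"{user}:{sig}"

class CloudService:
    """A Cloud that verifies tokens against the federation's shared IdP secret."""
    def __init__(self, name: str, idp_secret: bytes):
        self.name, self.secret = name, idp_secret

    def grant_access(self, token: str) -> bool:
        user, _, sig = token.partition(":")
        expected = hmac.new(self.secret, user.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

idp = IdentityProvider(b"shared-federation-key")
token = idp.issue_token("alice")

# One identity works across every Cloud in the federation.
print(CloudService("cloud-A", b"shared-federation-key").grant_access(token))  # True
print(CloudService("cloud-B", b"shared-federation-key").grant_access(token))  # True
```

The point of the sketch is that the user authenticates once, and each Cloud only needs to trust the identity provider rather than maintain its own account for the user.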

II) d3. Approaches of Migration Processes

There are different approaches in which we analyze and define migration processes or
recommend guides of migration to Cloud computing. When making the decision to
migrate a project to an external Cloud, the user should:

(1) Look for an established vendor with a track record;
(2) Decide whether the project really needs to be migrated;
(3) Consider data security;
(4) Plan the data transfer;
(5) Consider data storage and location;
(6) Plan for scaling;
(7) Obtain service-level guarantees;
(8) Review upgrade and maintenance schedules;
(9) Review the software architecture; and
(10) Check with the lawyers.
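A checklist like the one above is often turned into a simple weighted scoring model to support the go/no-go decision. The sketch below is illustrative only: the criteria weights, scores, and the 7.0 threshold are hypothetical values, not taken from the source:

```python
# Score each checklist criterion 0-10 and weight it by importance (illustrative values).
criteria = {
    # name: (weight, score)
    "vendor track record":  (0.20, 8),
    "data security":        (0.25, 6),
    "data transfer effort": (0.10, 7),
    "scaling benefit":      (0.15, 9),
    "SLA guarantees":       (0.15, 7),
    "legal/compliance fit": (0.15, 5),
}

def migration_score(criteria: dict) -> float:
    """Weighted average score in [0, 10]; higher favors migration."""
    total_weight = sum(w for w, _ in criteria.values())
    return sum(w * s for w, s in criteria.values()) / total_weight

score = migration_score(criteria)
print(f"migration score: {score:.2f}/10")
print("recommendation:", "migrate" if score >= 7.0 else "re-assess")
```

Such a model does not replace the qualitative checks (e.g., "check with the lawyers"), but it makes the trade-offs between criteria explicit and repeatable across candidate projects.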

Other important steps that can be taken in preparation for Cloud computing adoption are:

(i) Identify all potential opportunities for switching from existing computing arrangements
to Cloud services;

(ii) Ensure that in-house infrastructure complements Cloud-based services;

(iii) Develop a cost/benefit and risk evaluation framework to support decisions about
where, when, and how Cloud services can be adopted;

(iv) Develop a roadmap for optimizing the current ICT environment for adoption of public
and/or private Cloud services;

(v) Identify which data cannot be held in public Cloud computing environments for legal
and/or risk-mitigation reasons;

(vi) Identify and secure in-house competencies that will be required to manage effective
adoption of Cloud services;

(vii) Designate a cross-functional team to continually monitor which new services,
providers, and standards are in this space, and to determine if they affect the roadmap;

(viii) Evaluate technical challenges that must be addressed when moving any current
information or applications into a Cloud environment;

(ix) Ensure that the networking environment is ready for Cloud computing.

Listed below are the points to take into account in the migration:

(i) Deciding on the applications and data to be migrated;

(ii) Risk mitigation;

(iii) Understanding the costs;

(iv) Making sure the regulatory things are handled;

(v) Training the developers and staff.

A phased migration strategy has also been presented, in which the author describes a
step-by-step guide with six phases:
(1) Cloud Assessment Phase;
(2) Proof of Concept Phase;
(3) Data Migration Phase;
(4) Application Migration Phase;
(5) Leverage of the Cloud; and
(6) Optimization Phase.

In this strategy, some security aspects are indicated and several security best practices
are defined, such as safeguarding credentials, restricting user access to resources,
protecting data by encrypting it at rest (e.g., AES) and in transit (e.g., SSL/TLS), and
adopting a recovery strategy.
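The in-transit half of that recommendation can be sketched with Python's standard `ssl` module, which lets a client enforce certificate verification and a minimum TLS version before any data leaves the host (a minimal configuration sketch; the at-rest AES side would typically be handled by a dedicated library and is not shown here):

```python
import ssl

# Build a client-side TLS context that refuses weak or unauthenticated connections.
context = ssl.create_default_context()            # loads the system's CA certificates
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocol versions
context.check_hostname = True                     # hostname must match the certificate
context.verify_mode = ssl.CERT_REQUIRED           # server must present a valid certificate

# The context is then passed to socket or HTTP client code; the checks below
# simply confirm the policy is in force before any connection is attempted.
print(context.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
print(context.verify_mode == ssl.CERT_REQUIRED)           # True
```

Centralizing this configuration in one context object makes it easy to audit that every outbound connection from a migrated application enforces the same in-transit encryption policy.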

The alternative migration strategies which Gartner suggests IT organizations should
consider are:

(i) Rehost, i.e., redeploy applications to a different hardware environment and change the
application's infrastructure configuration;

(ii) Refactor, i.e., run applications on a Cloud provider's infrastructure;

(iii) Revise, i.e., modify or extend the existing code base to support legacy modernization
requirements, then use rehost or refactor options to deploy to Cloud;

(iv) Rebuild, i.e., discard the code for an existing application, re-architect the
application, and rebuild the solution on PaaS;

(v) Replace, i.e., discard an existing application (or set of applications) and use
commercial software delivered as a service.

As we can see, the approaches to the migration process identify and define a set of steps
or points to follow and consider in the migration to the Cloud, which can be used for our
purpose of migrating security aspects to the Cloud. However, these initiatives either do
not consider security at all or consider only specific security aspects that do not
guarantee a full migration of all the security features of the legacy systems, and it is
this that we want to achieve.

II) d4. Analysis of Approaches of Migration to Cloud

Legacy system migration encompasses many research areas. A single migration project
could, quite legitimately, address the areas of reverse engineering, business
re-engineering, schema mapping and translation, data transformation, application
development, human-computer interaction, and testing.

Some proposals have been presented in these areas; for instance, some authors have
presented a realistic strategy for conducting migration by considering both the business
needs of the organization and the technical content of the organization's legacy system
portfolio. One of the strategies for the migration of legacy systems to SOA is the black
box strategy.

Finally, a re-engineering approach that is used to restructure legacy system code and to
facilitate legacy system code extraction for web service code construction has been
proposed. Sooner or later, enterprises will want to rewrite or replace their legacy
applications with those written using a modern architecture, migrate them to the Cloud,
and manage and control them remotely. Moving critical applications and sensitive data to
public and shared Cloud environments is of great concern for those corporations that are
moving beyond their data center's network perimeter defense. To alleviate these
concerns, a Cloud solution provider must ensure that customers will continue to have the
same security and privacy controls over their applications and services, provide evidence
to customers that their organization and customers are secure and that they can meet their
service-level agreements, and prove compliance to auditors.

Organizations and enterprises are asking how Cloud providers secure data at rest (on
storage devices) and data in transit, how users are authenticated, how one customer's
data and applications are separated from those of other customers (who may be hackers or
competitors), how legal and regulatory issues related to Cloud computing are addressed,
how providers respond to incidents and how customers are involved, how the customer and
the vendor will respond to incidents in the Cloud, who is charged with responding to each
type of incident, and whether forensic investigations can be conducted to determine what
caused an incident. These kinds of questions related to security are not clear in Cloud
computing and
hence organizations and enterprises do not trust the migration of their applications to
Cloud environments. In this section, we have carried out a review of the existing
approaches regarding migration to Cloud computing, not only in order to summarize the
existing approaches, models, tools, techniques and strategies but also to identify and
analyze the security issues considered in these migration approaches with the aim of
identifying the possible solutions offered which respond to the security concerns or
security needs to be developed or researched. We have carried out a review of the most
relevant sources, such as Google Scholar, ScienceDirect, and DBLP, obtaining a set of
approaches that we believe are the most interesting for our analysis and which are
detailed as follows.

Model-Based Migration of Legacy Software Systems into the Cloud: The CloudMIG
This approach presents a specific model for migrating legacy systems into the Cloud. It is
called CloudMIG and, in the words of its authors, it is still at an early stage. CloudMIG

is composed of six activities for migrating an enterprise system to PaaS and IaaS-based
Cloud environments:
(1) Extraction: A model describing the actual architecture of the legacy system is extracted
by means of a software architecture reconstruction methodology;

(2) Selection: Common properties of different Cloud environments are described in a
Cloud environment meta-model;

(3) Generation: The generation activity produces three artifacts, namely a target
architecture, a mapping model, and a model characterizing the target architecture's
violations of the Cloud environment constraints;

(4) Adaptation: This activity allows the re-engineer to adjust the target architecture
manually towards case-specific requirements that could not be fulfilled during the
generation activity (activity 3);

(5) Evaluation: This activity evaluates the outcomes of activities 3 and 4. The
evaluation involves static and dynamic analyses of the target architecture;

(6) Transformation: This activity comprises the manual transformation of the enterprise
system towards the aimed Cloud environment according to the generated and improved
target architecture.
The approach provides model-driven generation of considerable parts of the system's
target architecture and fosters resource efficiency and scalability at an architectural level.

The work does not deal with security issues, though the third activity (Generation)
provides a model with the target architecture violations of the Cloud environment
constraints. However, it does not seem to be specific either about security constraints of
the legacy or of the target. This approach does not consider security aspects in the process
but it would be possible to incorporate some security aspects into each activity in such a
way that these aspects would be extracted from the legacy system through the use of a
modernization technique or a software architecture reconstruction methodology.

A target security architecture could then be generated using a specific Cloud environment
model together with a security mapping model, and a transformation to secure a migrated
system would be possible with this same approach.

Migrating Legacy Applications to the Service Cloud

The authors present a generic methodology which shows how to migrate legacy
applications to the service Cloud computing platform and they describe a case study for
scientific software from the oil spill risk analysis domain. This methodology defines seven steps:

(1) architectural representation of the legacy: based on the source code and text
descriptions, they can analyze the legacy system and reconstruct an architectural model of
the legacy application;

(2) redesign of the architecture: redesign the original architecture model and in particular
identify services that can be provided in a SaaS architecture, specified in a SoaML model;

(3) MDA transformation: with MDA transformation technology, they can easily transform
architecture models such as SoaML, SysML and UML into target code such as WSDL and JEE
annotations;

(4) web service generation: they can generate the target Web service based on the WSDL
or JEE Annotation;

(5) web service based invocation of legacy functionalities: the service-based application
invokes the functionalities from the identified function and service points in the legacy
system;

(6) selection of the Cloud computing platform: according to the specific requirements of
the target system, the most suitable Cloud computing platform will be chosen to support
the execution of the Web services;

(7) Web service deployment in the service Cloud: end users can consume the legacy
functionalities through the Web services that run on the Cloud. The approach only deals
with security issues in the last step (migration to the Cloud).

And there it only mentions security in a general, non-specific manner, along with
scalability and networking. Nor does it appear to provide a detailed treatment of the
security constraints of the legacy system. Nevertheless, this approach could be expanded
with security aspects in such a way that the security code of the legacy system could be
identified, and an architectural security model of the legacy application could be
reconstructed to redesign and identify security services that could be provided in a SaaS
architecture, specified in a SoaML4Security model, by carrying out MDA and MDS
(Model-Driven Security) transformations and generating Web services based on WSDL,
WS-Security, XACML, SAML, etc.

REMICS-REuse and Migration of Legacy Applications to Interoperable Cloud Services
REMICS (REuse and Migration of legacy applications to Interoperable Cloud Services) is
a research project whose main objective is to provide tools for model-driven migration of
legacy systems to loosely coupled systems following a bottom up approach; from recovery
of legacy system architecture (using OMG's ADM, Architecture-Driven Modernization)
to deployment in a Cloud infrastructure allowing further evolution of the system in a
forward engineering process. The migration process consists of understanding the legacy
system in terms of its architecture, business processes and functions, designing a new
Service-Oriented Architecture (SOA) application, and verifying and implementing the
new application in the Cloud. These methods will be complemented with generic Design
by Service Composition methods providing developers with tools simplifying
development by reusing the services and components available in the Cloud. During
the "Migrate" activity, the new architecture of the migrated system will be built by
applying specific SOA/Cloud computing patterns and methods, like architecture
decomposition, legacy component wrapping and legacy component replacement with
newly discovered Cloud services. The migration process will be supported by two
complementary activities: "Model-Driven Interoperability" and "Validate, Control and
Supervise". The system will be rebuilt for a new platform in a forward MDA process by
applying specific transformations dedicated to service Cloud platforms. This work does
not deal specifically with security in the migration process but the authors could expand
their approach by considering security aspects in the technological approach in parallel,
incorporating new activities focused on the extraction of security aspects, the building of a
security architecture for the Cloud platform, and the implementation of Cloud security
services using other security techniques such as Model-Driven Security (MDS) or
UMLsec for UML class and deployment diagrams.

A Benchmark of Transparent Data Encryption for Migration of Web Applications in the Cloud
In this approach the authors analyze privacy requirements for the Cloud applications and
discuss data encryption approaches for securing ecommerce applications in the Cloud. To
provide quantitative estimation of performance penalties caused by data encryption, they
present a case study for an online marketplace application. The authors argue that both
user related data and critical business transaction data should be encrypted and they
examine available encryption approaches on the different layers: The storage layer
encryption relies on the encryption of storage devices such as file system and disk or
partition encryption; Database layer encryption relies on the encryption functions provided
by the DBMS; mainstream databases like Oracle, DB2, MS SQL Server and MySQL offer built-in
encryption functions. The middleware layer encryption takes place between front-end
applications and backend databases and hides encryption details from the applications.
Application layer encryption, in contrast to middleware layer encryption, requires
applications themselves to deal with the encryption and decryption of data stored in the
database.

They compare the advantages and disadvantages of those encryption approaches and
specifically recommend middleware layer encryption as the most appropriate option for
the migration of legacy ecommerce applications to the Cloud, owing to its transparency,
scalability and vendor independence. This approach analyzes privacy requirements for the
migration of ecommerce applications to the Cloud and argues that both user-related data
and critical business transaction data should be encrypted.

This work is therefore focused on the encryption of data and the transactions of the owners
when they migrate their data and applications to Cloud, thus assuring data privacy and
providing control of access to the information assets. However, the authors do not indicate
any aspect of how the migration should be carried out and what other security aspects
should be considered.

A Case Study of Migrating an Enterprise IT System to IaaS
This approach describes a case study for the migration of a legacy IT system in the oil &
gas industry based in the UK. The authors present the cost analysis they made for the
company and the use of a decision support tool to assess the migration of businesses into
the Cloud.
This case study identifies the potential benefits and risks associated with the migration of
the studied system from the perspectives of: project managers, technical managers,
support managers, support staff, and business development staff. The approach is based
upon data collected from an IT solutions company when considering the migration of one
of their systems to Amazon EC2. The proposed tool is useful for decision-makers as it
helps to address the feasibility challenges of Cloud adoption in enterprises, but this work
does not propose any legacy application migration processes, nor does it deal with the
security constraints of the legacy applications, and the authors do not consider security as
an important point in the migration. Security could be incorporated into this approach by
adding a new perspective of security managers and experts and by taking into account a
cost analysis for the security necessities of the application for decision-makers so that
security is also an important factor in the migration to Cloud.

Decision Support Tools for Cloud Migration in the Enterprise
This approach describes two tools that aim to support decision making during the
migration of IT systems to the Cloud. The first is a modeling tool that produces cost
estimates for using public IaaS Clouds. The tool enables IT architects to model their

applications, data and infrastructure requirements in addition to their computational

resource usage patterns. The tool can be used to compare the cost of different Cloud
providers, deployment options and usage scenarios. The second tool is a spreadsheet that
outlines the benefits and risks of using IaaS Clouds from an enterprise perspective; this
tool provides a starting point for risk assessment. Two case studies were used to evaluate
the tools. The tools were useful as they informed decision makers about the costs, benefits
and risks of using the Cloud. The tools were evaluated using two case studies representing
a technical system managed by a small team, and a corporate enterprise system.

The first case represented a small enterprise that is free from the organizational hierarchy
and overheads of large enterprises. The second case study represented a typical enterprise
division that has its own independently-managed systems, which are part of a large
interconnected corporate IT environment. This paper describes one tool for benefit and
risk assessment that aims to support decision making during the migration of IT systems
to public IaaS Clouds; it provides a starting point for risk assessment, as it outlines
the organizational, legal, security, technical and financial benefits and risks of using
IaaS Clouds from an enterprise perspective. As can be observed, the authors present two
support tools (one of which is related to security) for decision making, but do not
propose any migration process.
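The cost-modeling side of such a tool can be reduced to a small comparison of provider price schedules against an expected usage pattern. The sketch below is illustrative: the provider names, prices, and usage figures are hypothetical, not taken from the tools described:

```python
# Hypothetical hourly VM prices and per-GB egress rates for two IaaS providers.
providers = {
    "provider-A": {"vm_hour": 0.10, "gb_out": 0.09},
    "provider-B": {"vm_hour": 0.12, "gb_out": 0.05},
}

def monthly_cost(prices: dict, vm_hours: float, gb_out: float) -> float:
    """Estimated monthly bill for a given usage pattern."""
    return prices["vm_hour"] * vm_hours + prices["gb_out"] * gb_out

# Usage pattern: 4 VMs running all month (~730 h each), 2 TB of data egress.
usage = {"vm_hours": 4 * 730, "gb_out": 2000}
for name, prices in providers.items():
    print(name, f"${monthly_cost(prices, **usage):,.2f}")
```

Even this crude model exposes the kind of result the tools aim at: the cheaper provider depends on the usage pattern (here the lower egress rate outweighs the higher VM price), so the comparison must be re-run per application.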

Service Migration in a Cloud Architecture
This approach examines service migration in a Cloud computing environment by
examining security and integration issues associated with service implementation. The
authors believe that the categories of acquisition, implementation, and security, offer the
greatest challenges to service migration in the Cloud from the consumer perspective
because they represent the slowest and most costly components of the migration problem.
They highlight some of the critical problems facing small to medium organizations as they
consider Cloud computing as a means of obtaining computational services. The authors
consider security as a challenge in service migration and they take into account issues
such as: if the user moves to a competing service provider, can you take your data with
you? Do you lose access to (and control and ownership of) your data if you fail to pay
your bill? What level of control over your data do you retain: for example, the ability to
delete data that you no longer want? If your data is subpoenaed by a government agency,
who surrenders the data (i.e., who is the target of the subpoena)? If a customer's
information is in the Cloud, does this violate privacy law? How does an organization
determine that a Cloud provider is meeting the security standards it espouses? What legal
and financial provisions are made for violations of security and privacy laws on the part
of the Cloud provider? Will users be able to access their data and applications without
hindrance from the Cloud provider, third parties, or the government? As we can see,
security is treated as an important aspect to take into account in applications once they
are migrated to the Cloud, but the authors do not propose how these security aspects
should be migrated from the legacy applications to the Cloud.

Dynamic Service and Data Migration in the Clouds
The authors propose in this work a framework to facilitate service migration and design a
cost model with a decision algorithm to determine the tradeoffs in service selection and
migration. The important issues addressed in this work include:

(1) It is necessary to consider the infrastructure support in the Cloud to achieve service
migration. The computation resources (computer platforms) in the Cloud need to be able
to support execution of dynamically migrated services. They develop a virtual machine
environment and corresponding infrastructure to provide such support;

(2) It is also essential to have a strong decision support to help determine whether to
migrate some services and where to place them. The consideration involves the service
migration cost, consistency maintenance cost, and the communication cost gains due to
migration. They develop a cost model to correctly capture these costs and help determine
the tradeoffs in service selection and migration in Clouds. Then, they use a genetic
algorithm to search the decision space and make service selection and migration decisions
based on the cost tradeoffs.
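The migrate-or-not decision in point (2) can be sketched as comparing the one-off migration and recurring consistency-maintenance costs against the recurring communication-cost savings. This is a toy break-even model; the cost figures and the decision rule are illustrative, not the authors' actual cost model or genetic algorithm:

```python
def should_migrate(migration_cost: float,
                   consistency_cost_per_month: float,
                   comm_savings_per_month: float,
                   horizon_months: int) -> bool:
    """Migrate if cumulative savings over the horizon outweigh all costs."""
    total_cost = migration_cost + consistency_cost_per_month * horizon_months
    total_gain = comm_savings_per_month * horizon_months
    return total_gain > total_cost

# Example: $500 one-off migration, $20/month consistency upkeep,
# $80/month saved in communication, evaluated over two horizons.
print(should_migrate(500, 20, 80, 12))  # 960 > 740 -> True
print(should_migrate(500, 20, 80, 6))   # 480 > 620 -> False
```

In the authors' setting, a search procedure (they use a genetic algorithm) would evaluate many such cost tradeoffs jointly across candidate services and placements rather than one service at a time.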

From a security viewpoint, the authors consider security as a critical issue and they
propose mutual authentication and access control among different platforms and services
using certificate authority (CA) services to achieve this goal. They define a Security
Manager that interacts with CAs and performs service validation, authentication, and
authorization. The Security Manager also responds to authentication requests issued by
services from other virtual machines (VM). Since VM isolates multiple execution
environments and supports the ability to run multiple software stacks with different
security levels, they use VM to enforce fine-grained access control to services and local
computing platform resources. As will be noted, this approach does not consider security
in the migration process, but it does consider it in the support infrastructure and at the
virtual machine level.

Results and Discussion

The modernization of state IT legacy systems is emerging as a significant financial,
technical and programmatic challenge to the states' ability to deliver services to
citizens and conduct day-to-day business. Although state governments have advanced their
IT environment with investments in new technologies, flexible programming and a portfolio
of online services, most still live with legacy systems. Many state systems have become
obsolete, difficult to secure and costly to operate and support. Without investments in
legacy system renovation, modernization or replacement, the ability of the states to
operate as modern organizations and serve their citizens is at risk. In order to sum up
the results of the systematic review, we present in Table 1 a summary of the quantity of
studies by initiative.

Overview of studies per topic

The initiatives are obtained from the main topics found in the approaches analyzed in the
review of migration processes to the Cloud. The initiatives are: whether the approaches
analyzed define frameworks or methodologies, whether they are focused on standards,
whether they present support tools, whether they propose transformations of models in the
migration process, whether security is considered, and whether they show a case study.

Also, we consider technology as an initiative when the approaches are focused on Cloud
technology, and, finally, whether the approaches indicate and define meta-models and are
based on re-engineering techniques. All these approaches are interesting from the point of
view of migration to the Cloud, offering application methodologies, decision tools,
meta-models for semi-automated migration with transformations (some of them based on
MDA), case studies of migration with specific technology and specific Cloud providers,
and so on, providing interesting aspects to take into account in the migration of legacy
systems to Cloud computing. Some of them show how to implement the migration approaches
in real applications with the help of support tools, which gives more credibility and
robustness to the proposals analyzed. However, taking into account the importance of
security in the Cloud, justified by numerous approaches and initiatives in the literature,
and given our view and experience that the security of legacy systems has to be migrated,
and even reinforced, in the same way as any other aspect, function or service of the
system to be migrated, we have been surprised. This is because we have not seen this
importance and concern reflected in the proposals considered in our review, where only
some of them address security-related issues when making decisions or issues that should
be considered when migrating to the Cloud.

Organizations moving systems into a Cloud environment, or procuring Cloud services, may
find themselves faced with tough questions about how to ensure security and privacy, how
to balance security against cost-effectiveness, how to increase the availability of
systems, and whether a viable exit strategy exists. Although there are four initiatives
that indicate security aspects to take into account in the migration, none of them
presents an approach indicating which are the most important issues to consider, how to
perform the migration of these security aspects, what set of security requirements has to
be considered, which are the most appropriate mechanisms for implementing certain
security services for the Cloud, or what security standards are most appropriate, taking
into account the different standards for areas such as healthcare (e.g., the Health
Insurance Portability and Accountability Act (HIPAA)), finance (e.g., the Payment Card
Industry Data Security Standard (PCI DSS)), security (e.g., ISO 27001, ITIL, COBIT), and
audit (e.g., Statements on Standards for Attestation Engagements (SSAE) No. 16) [20].
That is, what is needed is a migration process to guide us and to indicate the steps,
tasks, recommendations, mechanisms, standards, and decisions to follow, with the main
objective of migrating security aspects and services to the Cloud.

Organizations which want to move to the Cloud because of insufficient security
infrastructure of their own, or which want to add new security services to their systems,
have clear security benefits, but no one can ensure that the security and privacy levels
will be equal to or higher than those the organizations had in their local systems.
Organizations want a complete migration process, offering the same services, and even new
and improved services provided by Cloud environments, but with a security level that is
the same as if the system were within their own organization.

When organizations decide to move to the Cloud, they want to migrate their systems
together with their security, adapted, of course, to the new environment. This is
achieved with a complete migration process in which aspects of security and
security-related decisions are considered and different solutions are proposed, depending
on the level of security required, the scope of the applications and the selected
technology providers. What we have observed in carrying out this analysis of the
literature, in which security in the Cloud has great importance, is a lack of studies and
approaches on security issues in the migration to the Cloud. There are no initiatives in
which a migration process is proposed for security aspects, which is very important for
an application that provides services in the Cloud. There is therefore an urgent need to
provide methodologies, techniques and tools, not only for accessing the data and services
which are locked in these closed systems with a high level of security, but also to
provide a strategy which will allow the migration of the systems to new platforms and
architectures, indicating all the security aspects that have to be considered and covered
in the migration process.


The Cloud is growing because Cloud solutions provide users with access to high
computational power, acquired on demand, at a fraction of the cost of buying such a
solution outright; the network becomes an important element in the Cloud, where users can
buy what they need when they need it. Although industry leaders and customers have
wide-ranging expectations for Cloud computing, privacy and security concerns remain a
major impediment to widespread adoption.

The benefits of Cloud computing are the first argument when organizations or companies
consider moving their applications and services to the Cloud, analyzing the advantages it
entails and the improvements they can obtain. If customers decide to move their
businesses, or part of them, to the Cloud, they need to take into account a number of risks
and threats that arise, the possible solutions that can protect their applications, services
and data from those risks, and the best practices or recommendations that may be helpful
when integrating their applications in the Cloud. In addition, organizations and customers
require guidelines or processes that indicate the necessary and advisable steps to follow,
the most suitable techniques, and the most appropriate mechanisms and technologies for a
successful migration of all security aspects of their systems to the Cloud, so that they have
complete assurance that their systems, data and assets are secured in the same way as in
their own organization or company. After the analysis carried out on such issues in the
literature, we can conclude that there are proposals that attempt to migrate legacy
applications to the Cloud with some security aspects, but they do not integrate security
issues into the migration process of the legacy systems themselves.

For future work, we will carry out a systematic review of the literature in a formal way,
extending the search to migration processes from legacy systems to Cloud computing and
to initiatives in Cloud-related technologies such as SOA, Web services, Grid or virtual
machines, always considering security aspects in this search. In this way we will obtain
more information and can extract the most important aspects needed to define a migration
process for legacy systems to the Cloud that takes security into account within the
migration process, since security has to be migrated like any other service, requirement or
need.

Also, we will study the implementation of a legacy application together with a Cloud
implementation of the same application and we will compare the aspects, functions,
services and issues of security which have to be considered in the migration processes.
Finally, we will develop a migration process considering security aspects of the process,
adapting and transforming the security components of a legacy application to security
services offered by the Cloud.


Cloud computing benefits
In order to benefit the most from Cloud computing, developers must be able to refactor
their applications so that they can best use the architectural and deployment paradigms
that Cloud computing supports.

The benefits of deploying applications using Cloud computing include reducing run time
and response time, minimizing the risk of deploying physical infrastructure, lowering the
cost of entry, and increasing the pace of innovation.

For applications that use the Cloud essentially for running batch jobs, Cloud computing
makes it straightforward to use 1000 servers to accomplish a task in 1/1000 the time that a
single server would require. The New York Times example cited previously is the perfect
example of what is essentially a batch job whose run time was shortened considerably
using the Cloud. For applications that need to offer good response time to their customers,
refactoring applications so that any CPU-intensive tasks are farmed out to worker virtual
machines can help to optimize response time while scaling on demand to meet customer
load. The Animoto application cited previously is a good example of how the Cloud can
be used to scale applications and maintain quality-of-service levels.
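The farm-out pattern described above can be sketched as follows. This is a minimal illustration in which a local thread pool stands in for worker virtual machines; the `render_segment` task and `farm_out` helper are hypothetical names, and a real Cloud deployment would dispatch jobs to remote workers through a queue service rather than a local pool:

```python
from concurrent.futures import ThreadPoolExecutor

def render_segment(segment_id: int) -> str:
    """Stand-in for a CPU-intensive task, e.g. rendering one video
    segment in an Animoto-style pipeline."""
    return f"segment-{segment_id}-rendered"

def farm_out(segments, workers: int = 4):
    """Farm tasks out to a pool of workers; with a Cloud provider the
    pool size could grow on demand to meet customer load."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order, so results line up with segments
        return list(pool.map(render_segment, segments))

results = farm_out(range(8))
print(results[0])  # segment-0-rendered
```

The same structure applies to the batch case: with enough workers, the wall-clock time approaches the time for a single task, which is the "1000 servers in 1/1000 the time" observation above.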

Minimize infrastructure risk
IT organizations can use the Cloud to reduce the risk inherent in purchasing physical
servers. Will a new application be successful? If so, how many servers are needed, and can
they be deployed as quickly as the workload increases? If not, will a large investment in
servers go to waste? If the application's success is short-lived, will the IT organization
invest in a large amount of infrastructure that is idle most of the time? When pushing an
application out to the Cloud, scalability and the risk of purchasing too much or too little
infrastructure become the Cloud provider's issue.

In a growing number of cases, the Cloud provider has such a massive amount of
infrastructure that it can absorb the growth and workload spikes of individual customers,
reducing the financial risk they face.

Another way in which Cloud computing minimizes infrastructure risk is by enabling surge
computing, where an enterprise datacenter (perhaps one that implements a private Cloud)
augments its ability to handle workload spikes by a design that allows it to send overflow
work to a public Cloud. Application lifecycle management can be handled better in an
environment where resources are no longer scarce, and where resources can be better
matched to immediate needs, and at lower cost.
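The surge-computing idea above can be sketched with a simple capacity threshold. This is only an illustration: the `route_job` helper and the queue names are hypothetical, and real overflow routing would involve datacenter schedulers and provider APIs rather than Python lists:

```python
def route_job(job, local_capacity: int, local_queue: list, cloud_queue: list) -> str:
    """Keep work in the enterprise datacenter (private Cloud) while it
    has spare capacity; overflow spills to the public Cloud."""
    if len(local_queue) < local_capacity:
        local_queue.append(job)
        return "private"
    cloud_queue.append(job)
    return "public"

local, cloud = [], []
# A spike of 5 jobs against a private capacity of 3
placements = [route_job(j, 3, local, cloud) for j in range(5)]
print(placements)  # ['private', 'private', 'private', 'public', 'public']
```

The point of the design is that the enterprise sizes its own datacenter for typical load and pays the public Cloud only for the spike.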

Lower cost of entry
There are a number of attributes of Cloud computing that help to reduce the cost to enter
new markets:

Because infrastructure is rented, not purchased, the cost is controlled, and the capital
investment can be zero. In addition to the lower cost of purchasing compute cycles and
storage "by the sip," the massive scale of Cloud providers helps to minimize cost, further
reducing the cost of entry.
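As a back-of-the-envelope illustration of renting versus buying, with entirely hypothetical prices (actual provider rates and server costs vary widely):

```python
def cost_to_rent(hours: float, rate_per_hour: float) -> float:
    """Pay-as-you-go: cost scales with usage and capital investment is zero."""
    return hours * rate_per_hour

def cost_to_buy(server_price: float, servers: int) -> float:
    """Up-front purchase: the capital is spent whether or not the servers stay busy."""
    return server_price * servers

# Hypothetical figures, for illustration only.
rent = cost_to_rent(hours=100 * 24, rate_per_hour=0.25)  # 100 days, rented by the hour
buy = cost_to_buy(server_price=3000.0, servers=1)
print(rent, buy)  # 600.0 3000.0 -- renting lowers the cost of entry
```

The comparison shifts, of course, if utilization stays high for years, which is why the text frames this as a lower cost of *entry* rather than a lower cost in every case.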

Applications are developed more by assembly than programming. This rapid application
development is the norm, helping to reduce the time to market, potentially giving
organizations deploying applications in a Cloud environment a head start against the
competition.

Increased pace of innovation
Cloud computing can help to increase the pace of innovation. The low cost of entry to new
markets helps to level the playing field, allowing start-up companies to deploy new
products quickly and at low cost. This allows small companies to compete more
effectively with traditional organizations whose deployment process in enterprise
datacenters can be significantly longer. Increased competition helps to increase the pace of
innovation, and with many innovations being realized through the use of open source
software, the entire industry benefits from the increased pace of innovation that Cloud
computing promotes.

About the author: George Haynes, industrial designer, social theorist, and futurist, is the
author of numerous books involving manufacturing and information technologies and
social issues. He is currently establishing a string of ebook publishing companies, in
addition to developing his startup company, Logistics-Industrial Design Management. He
is a frequent contributor to LinkedIn. Many of his publications are available through the

Media type: Tutorial (2 lectures)
Size: 6.17 MB
Date completed: 26 Jan 2016
Pages: 175 pgs

CLOUD COMPUTING 101, copyright Cyber Press, all rights reserved. This
publication cannot be reproduced through photocopying, electromechanical, or digital