
UNIT-III

Introduction – Advancing towards a Utility Model – Evolving IT infrastructure – Evolving Software


Applications – Continuum of Utilities- Standards and Working Groups- Standards Bodies and
Working Groups- Service Oriented Architecture- Business Process Execution Language-
Interoperability Standards for Data Center Management – Utility Computing Technology-
Virtualization – Hyper Threading – Blade Servers- Automated Provisioning- Policy Based
Automation- Application Management – Evaluating Utility Management Technology – Virtual Test
and development Environment – Data Center Challenges and Solutions – Automating the Data Center.

Advancing towards a Utility Model:


A utility model is a patent-like intellectual property right to protect inventions. This type of right is
only available in some countries. Although a utility model is similar to a patent, it is generally cheaper
to obtain and maintain, has a shorter term (generally 6 to 15 years), shorter grant lag, and less stringent
patentability requirements. In some countries, it is only available for inventions in certain fields of
technology and/or only for products. Utility models can be described as second-class patents.

A utility model is a statutory exclusive right granted for a limited period of time (the so-called "term")
in exchange for an inventor providing sufficient teaching of his or her invention to permit a person of
ordinary skill in the relevant art to perform the invention. The rights conferred by utility model laws
are similar to those granted by patent laws, but are more suited to what may be considered as
"incremental inventions". Specifically, a utility model is a "right to prevent others, for a limited period
of time, from commercially using a protected invention without the authorization of the right holder(s)".

Strengths of the Utility Model


Carr correctly highlights the concept of a general-purpose technology. This class of technology has
historically been the greatest driver of productivity growth in modern economies. Such technologies
contribute not only directly but also by catalyzing myriad complementary innovations. For electricity,
these included electric lighting, motors, and machinery; for IT, they include transaction processing,
ERP, online commerce and myriad other applications and even business-model innovations.

Some of the economies of scale and cost savings of cloud computing are also akin to those in
electricity generation. Through statistical multiplexing, centralized infrastructure can run at higher
utilization than many forms of distributed server deployment. One system administrator, for example,
can tend more than 1,000 servers in a very large data center, while his or her counterpart in a
medium-sized data center typically manages only a small fraction of that number.

By moving data centers closer to energy production, cloud computing creates additional cost savings.
It is far cheaper to move photons over the fiber optic backbone of the Internet than it is to transmit
electrons over our power grid. These savings are captured when data centers are located near low-cost
power sources like the hydroelectric dams of the northwest U.S.

Along with its strengths, however, the electric utility analogy also has three technical weaknesses and
three business model weaknesses.

Technical Weaknesses of the Utility Model
The Pace of Innovation: The pace of innovation in electricity generation and distribution happens on
the scale of decades or centuries. In contrast, Moore's Law is measured in months. In 1976, the basic
computational power of a $200 iPod would have cost one billion dollars, while the full set of
capabilities would have been impossible to replicate at any price, much less in a shirt pocket.
Unlike managing stable technologies, managing innovative and rapidly changing systems requires the
attention of skilled, creative people, even when the innovations themselves are created by others.

The Limits of Scale: The rapid availability of additional server instances is a central benefit of cloud
computing, but it has its limits. In the first place, parallel problems are only a subset of difficult
computing tasks: some problems and processes must be attacked with other architectures of
processing, memory, and storage, so simply renting more nodes will not help. Secondly, many
business applications rely on consistent transactions supported by an RDBMS. The CAP theorem holds
that a distributed data store cannot simultaneously guarantee consistency, availability, and partition
tolerance, so highly scalable cloud stores typically relax consistency. The resulting lack of scalable
cloud data storage with an API as rich as SQL makes it difficult for high-volume, mission-critical
transaction systems to run in cloud environments.

Evolving IT Infrastructure:
A major shift in enterprise computing is under way, one that will change the IT infrastructure forever.
It is now clear how a product like VMware changed server computing from physical to virtual and
overhauled the makeup and footprint of the server room. This revolutionary change in server computing, though epic,
was focused in scope and only affected a single segment of the IT infrastructure. Leveraging this
momentum, the next wave of change in IT infrastructure will trigger a transition from physical to
virtual, hardware-driven to software-driven and in-house to the Cloud.

Transition to the Cloud


Over the last few years, the Cloud has become an over-hyped, over-used term. For those of you who
have lost faith in the Cloud, do not be fooled; it will continue to gain momentum and will shine bright
once in full swing. Looking forward, enterprises will move their servers, storage and backups to the
Cloud, whether it be public or a purpose-built hybrid cloud. Offering agility, scalability and cost
benefits, the IaaS cloud delivery model will continue to be the platform of choice. Compliance and
security will be addressed as cloud providers are gearing up their toolset to provide higher levels of
security in order to meet compliance concerns. In the meantime, sensitive data can be stored locally as
part of the hybrid cloud model. No longer will IT have to go through the lengthy, time-consuming and
cumbersome process of server or storage requisition, a process that could easily span 30-90 days from
inception to implementation. The Cloud can provide a new server or additional storage in minutes,
greatly increasing both the agility of IT and how it is perceived by the business.
Another use case for the Cloud gaining traction is Disaster Recovery as a Service (DRaaS), due in large
part to its cost effectiveness. Providers of DRaaS leverage server virtualization technologies to
reduce their cost, sharing storage and networking components across many customers, making their
economics almost untouchable by the typical enterprise. The upside of this new service is that many
smaller companies who were unable to afford disaster recovery in the past can now not only afford it,
but also enjoy its near real-time replication and recovery offerings.
Physical to Virtual and Hardware Driven to Software Driven
Another component of the evolving IT infrastructure is the movement from a hardware-driven IT
environment to a software-driven data center. Though hardware will always be required as software

can only run on hardware, the role of hardware will diminish as the intelligence is removed from the
hardware devices and centralized in software layers separated from the hardware they control. Just as
server virtualization allowed a single physical server to be carved into multiple, compartmentalized
'virtual servers,' all running on and sharing the resources of the single physical host, other
virtualization technologies are also gaining traction, and all of them are driven by software.
Software Defined Networks move the control plane, the decision-making software, from the network
switch to a common management platform that controls the entire network. Cloud providers will
jump-start enterprise adoption as these companies build their hybrid clouds with integrations into both
public and private cloud services.
Another trend that will affect the IT infrastructure is storage virtualization, or Virtual SANs, which
have been around for a while now. These virtual SANs do not face the same throughput limitations as
existing controller-based SANs. Again, this technology is driven by cloud providers looking to
build massively scalable storage solutions to cost-effectively meet the needs of their customers. Once
the technology is commonplace in the cloud, enterprises will feel comfortable deploying it on their
internal hybrid cloud platforms, and they will never return to the traditional controller-based SANs
currently deployed.
Software to the Cloud
Another movement gaining traction is the Software as a Service (SaaS) Cloud delivery model. The days
of IT deploying, managing and maintaining all of its software applications are dwindling. There is
already a large movement to the Cloud of enterprise applications that have traditionally been housed
internally. This greatly reduces the time IT needs to devote to deploying, upgrading and maintaining
these applications. SaaS is also gaining momentum with other mission-critical applications such as
SAP and CRM, which traditionally required expensive hardware and specially trained administrators to
run and support. Moving forward, the breadth of applications available via SaaS will be fueled by
companies such as Microsoft, IBM and others who are committed to making their entire application
portfolio available via a SaaS delivery model.

Evolving Software Applications:


The applications of cloud computing are practically limitless. With the right middleware, a
cloud computing system could execute all the programs a normal computer could run. Potentially,
everything from generic word processing software to customized computer programs designed for a
specific company could work on a cloud computing system.

Why would anyone want to rely on another computer system to run programs and store data? Here are
just a few reasons:
 Clients would be able to access their applications and data from anywhere at any time. They
could access the cloud computing system using any computer linked to the Internet. Data
wouldn't be confined to a hard drive on one user's computer or even a corporation's internal
network.
 It could bring hardware costs down. Cloud computing systems would reduce the need for
advanced hardware on the client side. You wouldn't need to buy the fastest computer with the
most memory, because the cloud system would take care of those needs for you. Instead, you
could buy an inexpensive computer terminal. The terminal could include a monitor, input
devices like a keyboard and mouse and just enough processing power to run the middleware
necessary to connect to the cloud system. You wouldn't need a large hard drive because you'd
store all your information on a remote computer.
 Corporations that rely on computers have to make sure they have the right software in place to
achieve goals. Cloud computing systems give these organizations company-wide access to
computer applications. The companies don't have to buy a set of software or software licenses
for every employee. Instead, the company could pay a metered fee to a cloud computing
company.
 Servers and digital storage devices take up space. Some companies rent physical space to store
servers and databases because they don't have it available on site. Cloud computing gives these
companies the option of storing data on someone else's hardware, removing the need for
physical space on the front end.
 Corporations might save money on IT support. Streamlined hardware would, in theory, have
fewer problems than a network of heterogeneous machines and operating systems.
 If the cloud computing system's back end is a grid computing system, then the client could take
advantage of the entire network's processing power. Often, scientists and researchers work with
calculations so complex that it would take years for individual computers to complete them. On
a grid computing system, the client could send the calculation to the cloud for processing. The
cloud system would tap into the processing power of all available computers on the back end,
significantly speeding up the calculation, as sketched below.
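A rough, hedged sketch of this idea in Python follows: one large calculation is split into chunks and farmed out to several worker processes, much as a grid-style back end fans work out across many machines. The chunk sizes and worker count are arbitrary assumptions for illustration, not part of any particular cloud product.

from multiprocessing import Pool

def partial_sum(bounds):
    """Compute one chunk of the overall calculation."""
    start, end = bounds
    return sum(i * i for i in range(start, end))

def grid_style_sum(n, workers=4):
    """Split the range [0, n) into chunks and process them in parallel."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as sum(i * i for i in range(10_000_000)), but the work is
    # spread across several processes instead of a single machine's one core.
    print(grid_style_sum(10_000_000))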
(or)
Cloud computing entails accessing services, including storage, applications and servers, through the
Internet, making use of another company's remote services for a fee. This enables a
company to store and access data or programs virtually, i.e. in a cloud, rather than on local hard drives
or servers.
The benefits include the cost advantage of the commoditization of hardware (such as on-demand, utility
and cloud computing, and service-oriented infrastructure), software (the software-as-a-service model,
service-oriented architecture), and even business processes.
The other catalysts were grid computing, which allowed major problems to be addressed through parallel
computing; utility computing, which enabled computing resources to be offered as a metered service;
and SaaS, which allowed network-based subscriptions to applications. Cloud computing, therefore,
owes its emergence to all these factors.
The three prominent types of cloud computing for businesses are Software-as-a-Service (SaaS), which
requires a company to subscribe to it and access services over the Internet; Infrastructure-as-a-Service
(IaaS), in which large cloud computing companies deliver virtual infrastructure; and
Platform-as-a-Service (PaaS), which gives the company the freedom to build its own custom applications
that can be used by its entire workforce.
 Clouds are of four types: public, private, community, and hybrid. Through public cloud, a
provider can offer services, including storage and application, to anybody via the Internet.
They can be provided freely or charged on a pay-per-usage method.
 Public cloud services are easier to install and less expensive, as costs for application, hardware
and bandwidth are borne by the provider. They are scalable, and the users avail only those
services that they use.
 A private cloud is also referred to as an internal cloud or corporate cloud; it is so called because it
offers a proprietary computing architecture through which hosted services can be provided to a
restricted number of users protected by a firewall. A private cloud is used by businesses that
want to wield more control over their data.
 As far as the community cloud is concerned, it consists of resources shared by more than one
organization whose cloud needs are similar.
 A combination of two or more clouds is a hybrid cloud. Here, the clouds used are a
combination of private, public, or community clouds.
 Cloud computing is now being adopted by mobile phone users too, although there are
limitations, such as storage capacity, battery life and restricted processing power.

Some of the most popular cloud applications globally are Amazon Web Services (AWS), Google
Compute Engine, Rackspace, Salesforce.com, IBM Cloud Managed Services, among others. Cloud
services have made it possible for small and medium businesses (SMBs) to be on par with large
companies.
Mobile cloud computing is being harnessed by bringing into existence a new infrastructure that
combines mobile devices and cloud computing. This infrastructure allows the cloud to execute massive
tasks and store huge volumes of data, as the processing and storage of data take place not within
mobile devices but beyond them. Mobile computing is getting a fillip as customers want to use their
companies' applications and websites wherever they are.
The emergence of 4G, Worldwide Interoperability for Microwave Access (WiMAX) and other network
technologies is also scaling up the connectivity of mobile devices. In addition, new technologies for
mobile, such as CSS3, Hypertext Markup Language (HTML5), hypervisors for mobile devices and Web 4.0,
will only power the adoption of mobile cloud computing.
The main benefits of cloud computing for companies are that they need not buy any
infrastructure, thus lowering their maintenance costs. They can discontinue the services they use once
their business demands have been met. It also gives firms the comfort of having huge resources at their
beck and call if they suddenly acquire a major project.
On the other hand, transferring their data to the cloud means businesses share their data security
responsibility with the provider of cloud services. This means that the consumer of cloud services
places a great deal of trust in the provider of those services. Cloud consumers also have less control
over the services they use than over on-premise IT resources.

Continuum of Utilities:
There is an important link between fog computing and cloud computing. Fog computing is often called an
extension of the cloud to where connected IoT 'things' are or, in its broader scope of the
"Cloud-to-Thing continuum," to where data-producing sources are. Fog computing has been evolving since
its early days. As you'll read and see below, fog computing is seen as a necessity not only for IoT but
also for 5G, embedded artificial intelligence (AI) and advanced distributed and connected systems.
Fog computing is designed to deal with the challenges of traditional cloud-based IoT systems in
managing IoT data and data generated by sources along this cloud-to-thing continuum. It does so by
decentralizing data analytics, applications and management into the network with its distributed and
federated compute model; in other words, in IoT at the edge.
According to IDC, 43 percent of all IoT data will be processed at the edge before being sent to a data
center by 2019, further boosting fog computing and edge computing. And when looking at the impact
of IoT on IT infrastructure, 451 Research sees that most organizations today process IoT workloads at
the edge to enhance security, process real-time operational action triggers, and reduce IoT data storage
and transport requirements. This is expected to change over time as big data and AI drive analysis at
the edge with more heavy data processing at that edge.
In other words: in fog computing, the fog IoT application decides on the best place for analyzing each
piece of data, depending on the data, and then sends it to that place.
If the data is highly time-sensitive (typically below or even very far below a second) it is sent to the
fog node which is closest to the data source for analysis. If it is less time-sensitive (typically seconds to
minutes) it goes to a fog aggregation node, and if it can essentially wait it goes to the cloud for, among
other things, big data analytics.
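The placement decision described above can be summarized in a short Python sketch. The latency thresholds simply mirror the sub-second / seconds-to-minutes / can-wait split in the text, and the tier names are illustrative rather than taken from any fog platform.

def choose_analysis_tier(max_latency_seconds: float) -> str:
    """Return where a reading should be analyzed, given how long it can wait."""
    if max_latency_seconds < 1.0:
        return "nearest fog node"        # highly time-sensitive data
    if max_latency_seconds <= 60.0:
        return "fog aggregation node"    # less time-sensitive data
    return "cloud"                       # can wait; suited to big data analytics

if __name__ == "__main__":
    for latency in (0.05, 30.0, 3600.0):
        print(f"tolerates {latency}s -> {choose_analysis_tier(latency)}")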

Standards and Working Groups:


Technologies like cloud computing and virtualization have been embraced by enterprise IT
managers seeking to better deliver services to their customers, lower IT costs and improve operational
efficiencies.

Cloud Working Groups


This guide informs cloud computing ecosystem participants (cloud vendors, service providers, and users)
of standards-based choices in areas such as application interfaces, portability interfaces, management
interfaces, interoperability interfaces, file formats, and operation conventions. The guide groups these
choices into multiple logical profiles, which are organized to address different cloud personalities.
It assists cloud computing vendors and users in developing, building, and using standards-based cloud
computing products and services, which should lead to increased portability, commonality, and
interoperability. Cloud computing systems contain many disparate elements. For each element there
are often multiple options, each with different externally visible interfaces, file formats, and
operational conventions. In many cases these visible interfaces, formats, and conventions have
different semantics. This guide enumerates options, grouped in a logical fashion called "profiles," for
such definitions of interfaces, formats, and conventions, from a variety of sources. In this way, cloud
ecosystem participants will tend towards more portability, commonality, and interoperability, growing
cloud computing adoption.

 Cloud Management Working Group (CMWG) - Models the management of cloud services and the
operations and attributes of the cloud service lifecycle through its work on the Cloud Infrastructure
Management Interface (CIMI).
 Cloud Auditing Data Federation Working Group (CADF) - Defines the CADF standard, a full
event model anyone can use to fill in the essential data needed to certify, self-manage and self-
audit application security in cloud environments.
 Software Entitlement Working Group (SEWG) - Focuses on the interoperability with which
software inventory and product usage are expressed, allowing the industry to better manage
licensed software products and product usage.
 Open Virtualization Working Group (OVF) - Produces the OVF standard, which provides the
industry with a standard packaging format for software solutions based on virtual systems.
Resources:
 Virtualization Management (VMAN) - DMTF's VMAN is a set of specifications that address the
management lifecycle of a virtual environment.
 Cloud Standards Wiki - The Cloud Standards Wiki is a resource documenting the activities of the
various Standards Development Organizations (SDOs) working on cloud standards.

Standards Bodies and Working Groups:
Cloud computing standardization was started by industrial organizations, which develop what are
called forum standards. Since late 2009, de jure standards bodies, such as ITU-T and ISO/IEC JTC1,
and ICT-oriented standards bodies, such as IEEE (Institute of Electrical and Electronics Engineers) and
IETF (Internet Engineering Task Force), have also begun to study it. In the USA and Europe,
government-affiliated organizations are also discussing it. The activities of major forum standards
bodies, ICT-oriented standards bodies, de jure standards bodies, and government-affiliated bodies are
described below.

Forum standards bodies


There are two groups of forum standards bodies related to cloud computing. Those in the first
group, including DMTF, OGF, and SNIA, have been active in the field of grids and distributed
processing management and have newly added cloud computing to their agendas. The bodies in the
second group, including OCC, CSA and GICTF, were newly founded to address cloud computing.
(1) DMTF (Distributed Management Task Force)
DMTF has defined the Open Virtualization Format (OVF), which is a standard virtual machine
image format. It established the Open Cloud Standards Incubator (OCSI) in April 2009 and is studying
standards that will allow
interoperability between cloud systems. The Cloud Management Working Group (CMWG), in which
VMware, Fujitsu, Oracle, and others are proposing a relevant API, was established in June 2010.
DMTF issued a white paper on interoperability between cloud systems in November 2009 and another
on use cases of cloud management and interactions in June 2010. Its Board Members include
VMware, Microsoft, IBM, Citrix, Cisco, and Hitachi.
(2) OGF (Open Grid Forum)
OGF formed the OCCI-WG (WG: Working Group) in April 2009 and defined and released an API
specification, OCCI [2], which makes possible lifecycle management of virtual machines and
workloads through IaaS. OCCI is implemented in Europe's OpenNebula Project, etc. The main
participants in OCCI are Fujitsu, EMC, and Oracle.
(3) SNIA (Storage Networking Industry Association)
SNIA established the Cloud Storage Technical Working Group in April 2009 and released CDMI,
which is an interface specification for cloud storage data management. In October 2009, it formed a
sub-working group called the Cloud Storage Initiative (CSI) to educate users and promote the cloud
storage market through the Cloud BUR SIG (Cloud Backup and Recovery Special Interest Group)
project. SNIA's membership includes EMC, IBM, Fujitsu, and Hitachi. Its Japan Chapter was
established in 2010.
(4) OMG (Object Management Group)
OMG held a Cloud Standards Summit in July 2009 and inaugurated Cloud Standards Coordination,
which is a round-table conference of cloud-related standards bodies. Participants in the Coordination
currently include DMTF, OGF, SNIA, TM Forum (Tele Management Forum), OASIS, OCC, CSA,
ETSI, and NIST in addition to OMG.

(5) OASIS (Organization for the Advancement of Structured Information Standards)
OASIS established the Identity in the Cloud Technical Committee (ID Cloud TC) in May 2010,
surveyed existing ID management standards, and developed use cases of cloud ID management and
guidelines on reducing vulnerability. It also developed basic security standards, such as SAML
(Security Assertion Markup Language) and maintains liaison with CSA and ITU-T. The main
members are IBM, Microsoft, and others.
(6) OCC (Open Cloud Consortium)
OCC is a nonprofit organization formed in January 2009 under the leadership of the University of
Illinois at Chicago. It aims to develop benchmarks using a cloud testbed and achieve interoperability
between cloud systems. Its Working Groups include Open Cloud Testbed, Project Matsu, which is a
collaboration with NASA, and Open Science Data Cloud, which covers the scientific field. The main
members include NASA, Yahoo, Cisco, and Citrix.
(7) Open Cloud Manifesto
Open Cloud Manifesto is a nonprofit organization established in March 2009 to promote the
development of cloud environments that incorporate the user's perspective under the principle of open
cloud computing. It published cloud use cases and requirements for standards as a white paper in
August 2009. The latest version of the white paper is version 4.0 (V4) [3], which included for the first
time the viewpoint of the service level agreement (SLA). The participants include IBM, VMware,
Rackspace, AT&T, and TM Forum. A Japanese translation of the white paper is available [4].
(8) CSA (Cloud Security Alliance)
CSA is a nonprofit organization established in March 2009 to study best practices in ensuring cloud
security and promote their use. It released guidelines on cloud security in April 2009. The current
version is version 2.1 [5], which proposes best practices in thirteen fields, such as governance and
compliance. The main members are PGP, ISACA, ENISA, IPA, IBM, and Microsoft. A distinctive
feature of the membership is that it includes front runners in cloud computing, such as Google and
Salesforce. A Japan Chapter of CSA (NCSA) was inaugurated in June 2010.
(9) CCF (Cloud Computing Forum)
CCF is a Korean organization established in December 2009 to develop cloud standards and
promote their application to public organizations. Its membership consists of 32 corporate members
and more than 60 experts. CCF comprises six Working Groups, including Media Cloud, Storage
Cloud, and Mobile Cloud.
(10) GICTF (Global Inter-Cloud Technology Forum)
GICTF is a Japanese organization studying inter-cloud standard interfaces, etc. in order to enhance
the reliability of clouds. As of March 2011, it has a membership of 74 corporate members and four
organizations from industry, government, and academia. In June 2010, it released a white paper on use
cases of inter-cloud federation and functional requirements. The main members include NTT, KDDI,
NEC, Hitachi, Toshiba Solutions, IBM, and Oracle.

ICT-oriented standards bodies
Major standards bodies in the ICT field have also, one after another, established study groups on
cloud computing. These study groups are holding lively discussions.
(1) IETF
IETF had been informally discussing cloud computing in a bar BOF (discussions over drinks in a
bar; BOF: birds of a feather) before November 2010 when, at IETF79, it agreed to establish the Cloud
OPS WG (WG on cloud computing and maintenance), which is discussing cloud resource
management and monitoring, and Cloud-APS BOF (BOF on cloud computing applications), which is
mainly discussing matters related to applications. Since around the end of 2010, it has been receiving
drafts for surveys of the cloud industries and standards bodies, reference frameworks, logging, etc.
(2) IEEE
IEEE formed the Cloud Computing Standards Study Group (CCSSG) in March 2010. It announced
the launch of two new standards development projects in April 2011: P2301, Guide for Cloud
Portability and Interoperability Profiles (CPIP), and P2302, Standard for Intercloud Interoperability
and Federation (SIIF).
(3) TM Forum
In December 2009, TM Forum established the Enterprise Cloud Buyers Council (ECBC) to resolve
issues (on standardization, security, performance, etc.) faced by enterprises when they host private
clouds and thereby to promote the use of cloud computing. In May 2010, it started the Cloud Services
Initiative, which aims to encourage cloud service market growth. The main members of this initiative
are Microsoft, IBM, and AT&T.

De jure standards bodies


Since late 2009, major de jure standards bodies have taken up cloud computing as part of their study
subjects. All these bodies are conducting a gap analysis based on the studies made by forum standards
bodies in order to identify the target areas where standardization by de jure standards bodies is desired.
Specific activities to develop recommendations are expected to start in 2011.
(1) ITU-T
In February 2010, ITU-T launched the Focus Group on Cloud Computing, which is discussing the
benefits of clouds and target issues requiring standardization from the telecommunication perspective.
The Group is currently developing six documents on topics such as the cloud ecosystem, functional
architecture, cloud security, and utilization of networks in clouds. Afterwards, relevant Study Groups
will develop recommendations for these issues.
(2) ISO/IEC JTC1
In its Sub Committee 38 (SC38) meeting held in November 2009, ISO/IEC JTC1 established a
Study Group to study cloud computing. Its secretariat is provided by the American National Standards
Institute (ANSI). The Study Group is classifying cloud computing, sorting out terminology, and
maintaining liaison with other organizations. In Japan, SC38 Technical Committee was launched in
February 2010. In addition, SC27 is studying requirements for Information Security Management
Systems (ISMSs).
(3) ETSI (European Telecommunications Standards Institute)
ETSI has established a Technical Committee on grids and clouds. The TC Cloud has released a
Technical Report (TR) on standards required in providing cloud services.

Government-affiliated bodies
Government-affiliated bodies in the USA and Europe are active in cloud-related standardization.
Government systems constitute a large potential cloud market. It is highly likely that the specifications
used by governmental organizations for procurement will be adopted as de facto standards.
(1) NIST
NIST is a technical department belonging to the U.S. Department of Commerce. "The NIST
Definition of Cloud Computing", which was published in October 2009, is referred to on various
occasions. NIST undertakes cloud standardization with five WGs. One of them, Standards
Acceleration to Jumpstart Adoption of Cloud Computing (SAJACC), is intended to promote the
development of cloud standards based on actual examples and use cases. It discloses a number of
different specifications and actual implementation examples on its portal. It also discloses test results
for the developed standard specifications.
(2) ENISA (European Network and Information Security Agency)
In November 2009, ENISA, an EU agency, released two documents: "Cloud Computing: Benefits,
Risks and Recommendations for Information Security", which deals with cloud security, risk, and
assessment, and "Cloud Computing Information Assurance Framework", which is a framework for
ensuring security in cloud computing.

Service Oriented Architecture:


Like cloud computing, SOA brings with it a number of key benefits and risks, including:

 Dependence on the network: SOA is fundamentally dependent on the network to connect the service
provider with the consumer. For example, Web Service protocols ride on Internet protocols to invoke
software functions distributed across the network. Poorly performing networks can make a large
impact on the availability of Web Services to the consumer.
 Provider costs: Creating a generic reusable software component for a broad audience takes more
resources (20 percent to 100 percent more) than creating a less generic point solution. The costs of
reuse, therefore, shift to the service providers, which benefits the consumers.
 Enterprise standards: When many components are being simultaneously developed by individual
teams, it becomes critical for the interface of a provider's service to match up to the "call" of a
consumer. Similarly, it helps everyone involved if the interfaces across services have some
commonality in structure and security access mechanisms. Choosing and communicating a
comprehensive set of enterprise standards is a responsible approach to aid in enterprise SOA
integration.
 Agility: When we discuss "agility" as it relates to SOA, we are often referring to organizational
agility, or the ability to more rapidly adapt a Federal organization's tools to meet its current
requirements. An organization's requirements of IT might change over time for a number of reasons,
including changes in the business or mission, changes in organizational reporting requirements,
changes in the law, new technologies in the commercial marketplace, attempts to combine diverse data
sources to improve the organization's operational picture, and many other reasons. The larger promise
of an enterprise SOA is that once a sufficient quantity of legacy-wrapped
components exist, and are accessible on the internet protocol (IP) wide area network (WAN), they can
be reassembled more rapidly to solve new problems.

Comparing Cloud Computing and SOA

Cloud computing and SOA have important overlapping concerns and common considerations, as shown in
Figure 4. The most important overlap occurs near the top of the cloud computing stack, in the area of
Cloud Services, which are network accessible application components and software services, such as
contemporary Web Services. (See the notional cloud stack in Figure 1.)

Both cloud computing and SOA share concepts of service orientation. Services of many types are
available on a common network for use by consumers. Cloud computing focuses on turning aspects of
the IT computing stack into commodities that can be purchased incrementally from the cloud based
providers and can be considered a type of outsourcing in many cases. For example, large-scale online
storage can be procured and automatically allocated in terabyte units from the cloud. Similarly, a
platform to operate web-based applications can be rented from redundant data centers in the cloud.
However, cloud computing is currently a broader term than SOA and covers the entire stack from
hardware through the presentation layer software systems. SOA, though not restricted conceptually to
software, is often implemented in practice as components or software services, as exemplified by the
Web Service standards used in many implementations. These components can be tied together and
executed on many platforms across the Network to provide a business function.
Network dependence: Both cloud computing and SOA count on a robust network to connect
consumers and producers and in that sense, both have the same foundational structural weakness when
the network is not performing or is unavailable. John Naughton elaborates on this concern when he
writes that "with gigabit ethernet connections in local area networks, and increasingly fast broadband,
network performance has improved to the point where cloud computing looks like a feasible
proposition.... If we are betting our futures on the network being the computer, we ought to be sure
that it can stand the strain."
Forms of outsourcing: Both concepts require forms of contractual relationships and trust between
service providers and service consumers. Reuse of an SOA service by a group of other systems is in
effect an "outsourcing" of that capability to another organization. With cloud computing, the
outsourcing is more overt and often has a fully commercial flavor. Storage, platforms, and servers are
rented from commercial providers who have economies of scale in providing those commodities to a
very large audience. Cloud computing allows the consumer organization to leave the detailed IT
administration issues to the service providers.
Standards: Both cloud computing and SOA provide an organization with an opportunity to select
common standards for network accessible capabilities. SOA has a fairly mature set of standards with
which to implement software services, such as Representational State Transfer (REST), SOAP, and
Web Services Description Language (WSDL), among many others. Cloud computing is not as mature,
and many of the interfaces offered are unique to a particular vendor, thus raising the risk of vendor
lock-in. Simon Wardley writes, "The ability to switch between providers overcomes the largest
concerns of using such service providers, the lack of second sourcing options and the fear of vendor
lock-in (and the subsequent weaknesses in strategic control and lack of pricing competition)." This is
likely to change over time as offerings at each layer in the stack become more homogenous. Wardley
continues, "The computing stack, from the applications we write, to the platforms we build upon, to
the operating systems we use are now moving from a product- to a service-based economy. The shift
towards services will also lead to standardization of lower orders of the computing stack to internet
provided components."

Major objectives of SOA


There are three major objectives of SOA, all of which focus on a different part of the application
lifecycle; a brief sketch illustrating the first two follows the list below.

 The first objective aims to structure procedures or software components as services. These
services are designed to be loosely coupled to applications, so they are only used when needed.
They are also designed to be easily utilized by software developers, who have to create
applications in a consistent way.
 The second objective is to provide a mechanism for publishing available services, which
includes their functionality and input/output (I/O) requirements. Services are published in a
way that allows developers to easily incorporate them into applications.
 The third objective of SOA is to control the use of these services to avoid security and
governance problems. Security in SOA revolves heavily around the security of the individual
components within the architecture, identity and authentication procedures related to those
components, and securing the actual connections between the components of the architecture.
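The following Python sketch illustrates the first two objectives: services are loosely coupled (callers depend only on a published name and an input/output contract) and are published in a registry that developers can browse. The registry class and service names are invented for this example and do not correspond to any particular SOA product or standard.

from typing import Callable, Dict

class ServiceRegistry:
    """Toy registry where services are published and later discovered."""

    def __init__(self) -> None:
        self._services: Dict[str, dict] = {}

    def publish(self, name: str, contract: str, endpoint: Callable) -> None:
        """Publish a service together with a human-readable I/O contract."""
        self._services[name] = {"contract": contract, "endpoint": endpoint}

    def describe(self) -> Dict[str, str]:
        """What a developer browses when composing an application."""
        return {name: entry["contract"] for name, entry in self._services.items()}

    def invoke(self, name: str, **kwargs):
        """Callers are coupled only to the published name and parameters."""
        return self._services[name]["endpoint"](**kwargs)

registry = ServiceRegistry()
registry.publish("currency.convert",
                 "in: amount, rate; out: converted amount",
                 lambda amount, rate: amount * rate)

if __name__ == "__main__":
    print(registry.describe())
    print(registry.invoke("currency.convert", amount=100.0, rate=0.92))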

Business Process Execution Language:

Business Process Execution Language (BPEL) is an Organization for the Advancement of Structured
Information Standards (OASIS) executable language for exporting and importing business information
using only the interfaces available through Web services.

BPEL is concerned with the abstract process of "programming in the large", which involves the high-
level state transition interactions of processes. The language includes such information as when to
send messages, when to wait for messages and when to compensate for unsuccessful transactions. In
contrast, "programming in the small" deals with short-lived programmable behavior such as a single
transaction involving the logical manipulation of resources.
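To make the "send, wait, compensate" idea concrete, here is a hedged sketch, written in Python rather than BPEL's XML syntax, of a long-running process that invokes partner services and compensates the completed step when a later step fails. The service names and fault type are invented for illustration.

class ServiceFault(Exception):
    """Signals that a partner service reported a failure."""

def reserve_hotel(order):
    """'Invoke' a partner service and return its reply (illustrative stub)."""
    return {"reservation_id": 42}

def cancel_hotel(reservation):
    """Compensation handler for reserve_hotel."""
    print(f"compensating: cancelling reservation {reservation['reservation_id']}")

def charge_card(order):
    """A later step that fails, to show compensation of earlier steps."""
    raise ServiceFault("card declined")

def booking_process(order):
    """Orchestrate the partner services; compensate finished steps on failure."""
    compensations = []            # handlers for completed steps, newest last
    try:
        reservation = reserve_hotel(order)
        compensations.append(lambda: cancel_hotel(reservation))
        charge_card(order)
        return "booked"
    except ServiceFault as fault:
        for compensate in reversed(compensations):   # undo in reverse order
            compensate()
        return f"aborted ({fault})"

if __name__ == "__main__":
    print(booking_process({"guest": "A. Smith"}))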

BPEL was developed to address the differences between programming in the large and programming
in the small. This term is also known as Web Services Business Process Execution Language (WS-
BPEL), and is sometimes written as business process execution language for Web Services.

Microsoft and IBM both developed their own programming in the large languages, which are
very similar, and called XLANG and WSFL respectively. In view of the popularity of a third
language, BPML, Microsoft and IBM decided to combine their two languages into another
called BPEL4WS. After submitting the new language to OASIS for standardization, it
emerged from a technical committee in 2004 as WS-BPEL 2.0.

Web services interactions in BPEL are described in two ways:


1. Executable business processes, which model the actual behavior of a participant in a business interaction
2. Abstract business processes, which are partially specified processes not intended to be
executed, with some of the required concrete operational details hidden

Both models serve a descriptive role and have more than one possible use case. BPEL
should be used both between businesses and within a given business. The BPEL4People
language and WS-Human Task specifications were published in 2007 and describe how
people can interact with BPEL processes.

The 10 original design goals of BPEL are:


1. Define business processes that interact with Web-service operations
2. Define business processes that employ an XML-based language
3. Define a set of Web service orchestration concepts to be used by both the abstract
and the executable views of a business process
4. Provide and implement both hierarchical and graph-like control regimes
5. Provide for data manipulations as needed to define process data and control flow
6. Support an identification methodology for process instances as defined by partners,
while recognizing that they may change
7. Support the implicit creation and termination of process instances
8. Define a long-running transaction model based on proven techniques
9. Use Web-based services as a model for process decomposition and assembly
10. Build on Web service standards.

Interoperability Standards for Data Center Management:


The cloud-computing community typically uses the term interoperability to refer to the ability to easily
move workloads and data from one cloud provider to another or between private and public clouds. Even
though this definition corresponds to the meaning of the term portability—the ability to move a system from
one platform to another—the community refers to this property as interoperability, and I will use this term in
one platform to another—the community refers to this property as interoperability, and I will use this term in
this report. In general, the cloud-computing community sees the lack of cloud interoperability as a barrier to
cloud-computing adoption because organizations fear “vendor lock-in.” Vendor lock-in refers to a situation in
which, once an organization has selected a cloud provider, either it cannot move to another provider or it can
change providers but only at great cost [Armbrust 2009, Hinchcliffe 2009, Linthicum 2009, Ahronovitz 2010,
Harding 2010, Badger 2011, and Kundra 2011]. Risks of vendor lock-in include reduced negotiation power in
reaction to price increases and service discontinuation because the provider goes out of business. A common
tactic for enabling interoperability is the use of open standards [ITU 2005]. A representative of the military, for
example, recently urged industry to take a more open-standards approach to cloud computing to increase
adoption [Perera 2011]. The Open Cloud Manifesto published a set of principles that its members suggest that
the industry follow, including using open standards and “playing nice with others” [Open Cloud 2009]. Cerf
emphasizes the need for “inter-cloud standards” to improve asset management in the cloud [Krill 2010].
However, other groups state that using standards is just “one piece of the cloud interoperability puzzle” [Lewis
2008, Hemsoth 2010, Linthicum 2010b, Considine 2011]. Achieving interoperability may also require sound
architecture principles and dynamic negotiation between cloud providers and users. This report explores the
role of standards in cloud-computing interoperability. The goal of the report is to provide greater insight into
areas of cloud computing in which standards would be useful for interoperability and areas in which standards
would not help or would need to mature to provide any value.

Use cases in the context of cloud computing refer to typical ways in which cloud consumers and
providers interact. NIST, OMG, DMTF, and others (as part of their efforts related to standards for data
portability, cloud interoperability, security, and management) have developed use cases for cloud
computing.

NIST defines 21 use cases classified into three groups: cloud management, cloud interoperability,
and cloud security [Badger 2010]. These use cases are listed below [Badger 2010]:

• Cloud Management Use Cases

− Open an Account

− Close an Account

− Terminate an Account

− Copy Data Objects into a Cloud

− Copy Data Objects out of a Cloud

− Erase Data Objects on a Cloud

− VM [virtual machine] Control: Allocate VM Instance

− VM Control: Manage Virtual Machine Instance State

− Query Cloud-Provider Capabilities and Capacities

• Cloud Interoperability Use Cases

− Copy Data Objects Between Cloud-Providers

− Dynamic Operation Dispatch to IaaS Clouds

− Cloud Burst from Data Center to Cloud

− Migrate a Queuing-Based Application

− Migrate (fully-stopped) VMs from One Cloud Provider to Another

• Cloud Security Use Cases

− Identity Management: User Account Provisioning

− Identity Management: User Authentication in the Cloud

− Identity Management: Data Access Authorization Policy Management in the Cloud

− Identity Management: User Credential Synchronization Between Enterprises and the

Cloud

− eDiscovery

− Security Monitoring

− Sharing of Access to Data in a Cloud

OMG presents a more abstract set of use cases as part of the Open Cloud Manifesto [Ahronovitz
2010]. These are much more generic than those published by NIST and relate more to deployment
than to usage. The use cases "Changing Cloud Vendors" and "Hybrid Cloud" are the ones of interest
from a standards perspective because they are the main drivers for standards in cloud computing
environments. "Changing Cloud Vendors" particularly motivates organizations that do not want to be
in a vendor lock-in situation. The full list is presented below [Ahronovitz 2010]:

• End User to Cloud: applications running in the public cloud and accessed by end users

• Enterprise to Cloud to End User: applications running in the public cloud and accessed by employees
and customers

• Enterprise to Cloud: applications running in the public cloud integrated with internal IT capabilities

• Enterprise to Cloud to Enterprise: applications running in the public cloud and interoperating with
partner applications (supply chain)

• Private Cloud: a cloud hosted by an organization inside that organization's firewall

• Changing Cloud Vendors: an organization using cloud services decides to switch cloud providers or
work with additional providers

• Hybrid Cloud: multiple clouds work together, coordinated by a cloud broker that federates data,
applications, user identity, security, and other details

DMTF produced a list of 14 use cases specifically related to cloud management [DMTF 2010]:

• Establish Relationship

• Administer Relationship

• Establish Service Contract

• Update Service Contract

• Contract Reporting

• Contract Billing

• Terminate Service Contract

• Provision Resources

• Deploy Service Template

• Change Resource Capacity

• Monitor Service Resources

• Create Service Template

• Create Service Offering

• Notification of Service Condition or Event

Across the complete set of use cases proposed by NIST, OMG, and DMTF, four types of use cases
concern consumer–provider interactions that would benefit from the existence of standards. These
interactions relate to interoperability and can be mapped to the following four basic cloud
interoperability use cases:

1. User Authentication: A user who has established an identity with a cloud provider can use the same
identity with another cloud provider.

2. Workload Migration: A workload that executes in one cloud provider can be uploaded to another
cloud provider.

3. Data Migration: Data that resides in one cloud provider can be moved to another cloud provider.

4. Workload Management: Custom tools developed for cloud workload management can be used to
manage multiple cloud resources from different vendors.

The remainder of this section describes existing standards and specifications that support these four
main types of use cases.

User Authentication
The use case for user authentication corresponds to a user or program that needs to be identified in the
cloud environment. It is important to differentiate between two types of users of cloud environments:
end users and cloud-resource users.

End users are users of applications deployed on cloud resources. Because these users register and
identify with the application and not with the infrastructure resources, they are usually not aware that
the application is running on cloud resources.

Cloud-resource users are typically administrators of the cloud resources. These users can also set
permissions for the resources based on roles, access lists, IP addresses, domains, and so forth. This
second type of user is of greater interest from an interoperability perspective.

Some of the standardization efforts, as well as technologies that are becoming de facto standards, that
support this use case are

• Amazon Web Services Identity Access Management (AWS IAM): Amazon uses this mechanism for
user authentication and management, and it is becoming a de facto standard [Amazon 2012d]. It
supports the creation and the permissions management for multiple users within an AWS account.
Each user has unique security credentials with which to access the services associated with an account.
Eucalyptus also uses AWS IAM for user authentication and management.

• OAuth: OAuth is an open protocol by the Internet Engineering Task Force (IETF) [OAuth 2010]. It
provides a method for clients to access server resources on behalf of the resource owner. It also
provides a process for end users to authorize third-party access to their server resources without
sharing their credentials. The current version is 1.0, and IETF's work continues for Version 2.0.
Similarly to WS-Security, OAuth Version 2.0 will support user identification information in Simple
Object Access Protocol (SOAP) messages. Cloud platforms that support OAuth include Force.com,
Google App Engine, and Microsoft Azure.

• OpenID: OpenID is an open standard that enables users to be authenticated in a decentralized manner
[OpenID 2012]. Users create accounts with an OpenID identity provider and then use those accounts
(or identities) to authenticate with any web resource that accepts OpenID authentication. Cloud
platforms that support OpenID include Google App Engine and Microsoft Azure. OpenStack has an
ongoing project to support OpenID.

• WS-Security: WS-Security is an OASIS security standard specification [OASIS 2006]. The current
release is Version 1.1. WS-Security describes how to secure SOAP messages using Extensible Markup
Language (XML) Signature and XML Encryption and attach security tokens to SOAP messages.
Cloud platforms that support WS-Security for message authentication include Amazon EC2 and
Microsoft Azure.
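As a hedged illustration of how a client presents an OAuth-style token to a cloud API, the Python sketch below sends an HTTP request carrying a bearer token in the Authorization header (the pattern defined for OAuth 2.0). The endpoint URL and token value are placeholders, not any real provider's API.

import json
import urllib.request

API_ENDPOINT = "https://cloud.example.com/v1/resources"   # hypothetical endpoint
ACCESS_TOKEN = "example-access-token"                      # obtained out of band

def list_resources() -> dict:
    """Call the (fictitious) API, authorizing with an OAuth 2.0-style bearer token."""
    request = urllib.request.Request(
        API_ENDPOINT,
        headers={
            # Bearer-token pattern: the token authorizes the client on behalf of
            # the resource owner without sharing the owner's credentials.
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    # Would fail here because the endpoint is fictitious; shown for shape only.
    print(list_resources())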

Workload Migration:
The use case for workload migration corresponds to the migration of a workload, typically represented
as a virtual-machine image, from one cloud provider to a different cloud provider. The migration of a
workload requires (1) the extraction of the workload from one cloud environment and (2) the upload
of the workload to another cloud environment. Some of the standards that support this use case are
of the workload to another cloud environment. Some of the standards that support this use case are

• Amazon Machine Image (AMI): An AMI is a special type of virtual machine that can be deployed
within Amazon EC2 and is also becoming a de facto standard [Amazon 2012b]. Eucalyptus and
OpenStack support AMI as well.

• Open Virtualization Format (OVF): OVF is a virtual-machine packaging standard developed and
supported by DMTF [DMTF 2012]. Cloud platforms that support OVF include Amazon EC2,
Eucalyptus, and OpenStack.

• Virtual Hard Disk (VHD): VHD is a virtual-machine file format supported by Microsoft [Microsoft
2006]. Cloud platforms that support VHD include Amazon EC2 and Microsoft Azure.
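As a rough illustration of what a packaging descriptor gives a migration tool, the Python sketch below parses a heavily simplified OVF-style descriptor and lists the virtual systems and disk files it claims to contain. Real OVF descriptors use DMTF namespaces and many additional sections; the XML here is a stripped-down stand-in, not a conformant OVF document.

import xml.etree.ElementTree as ET

SIMPLIFIED_OVF = """
<Envelope>
  <References>
    <File id="file1" href="web-server-disk1.vmdk" size="1073741824"/>
  </References>
  <VirtualSystem id="web-server">
    <Name>web-server</Name>
  </VirtualSystem>
</Envelope>
"""

def summarise_package(descriptor_xml: str) -> dict:
    """List the virtual systems and disk files the descriptor says it contains."""
    root = ET.fromstring(descriptor_xml)
    return {
        "virtual_systems": [vs.get("id") for vs in root.iter("VirtualSystem")],
        "files": [f.get("href") for f in root.iter("File")],
    }

if __name__ == "__main__":
    print(summarise_package(SIMPLIFIED_OVF))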

Data Migration and Management:


The use case for data migration and management corresponds to the migration of data from one cloud
provider to another. As with workload migration, it requires (1) the extraction of the data from one
cloud environment and (2) the upload of the data to another cloud environment. In addition, in an
interoperability context, once the data has been moved to the new provider, any program that
performed create, retrieve, update, or delete (CRUD) operations on that data in the original cloud
provider should continue to work in the new cloud provider.

There are two types of cloud storage. Typed-data storage works similarly to an SQL-compatible
database and enables CRUD operations on user-defined tables. Object storage enables CRUD
operations of generic objects that range from data items (similar to a row of a table), to files, to virtual-
machine images.

Some of the standards that support this use case, especially for object storage, are

• Cloud Data Management Interface (CDMI): CDMI is a standard supported by the Storage
Networking Industry Association (SNIA) [SNIA 2011]. CDMI defines an API to CRUD data elements
from a cloud-storage environment. It also defines an API for discovery of cloud storage capabilities
and management of data containers.

• SOAP: Even though SOAP is not a data-specific standard, multiple cloud-storage providers support
data- and storage-management interfaces that use SOAP as a protocol. SOAP is a W3C specification
that defines a framework to construct XML-based messages in a decentralized, networked
environment [W3C 2007]. The current version is 1.2, and HTTP is the primary transport mechanism.
Amazon S3 provides a SOAP-based interface that other cloud-storage environments, including
Eucalyptus and OpenStack, also support.

• Representational State Transfer (REST): REST is not a data-specific standard either, but multiple
cloud-storage providers support RESTful interfaces. REST is considered an architecture and not a
protocol [IBM 2008]. In a REST implementation, every entity that can be identified, named, addressed,
or handled is considered a resource. Each resource is addressable via its universal resource identifier
and provides the same interface, as defined by HTTP: GET, POST, PUT, DELETE. Amazon S3 provides
a RESTful interface that Eucalyptus and OpenStack also support. Other providers with RESTful
interfaces for data management include Salesforce.com's
Force.com, Microsoft Windows Azure (Windows Azure Storage), OpenStack (Object Storage), and
Rackspace (Cloud Files). The API defined by CDMI is a RESTful interface.
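A hedged sketch of the uniform RESTful interface described above follows: a single object URI is the target of PUT, GET and DELETE, which map to create/update, retrieve and delete. The endpoint is a placeholder rather than any specific provider's storage API, so running it verbatim would fail; it is shown only for the shape of the calls.

from typing import Optional
import urllib.request

OBJECT_URI = "https://storage.example.com/container/report.txt"   # hypothetical endpoint

def rest_call(method: str, body: Optional[bytes] = None) -> int:
    """Issue one HTTP request against the object's URI and return the status."""
    request = urllib.request.Request(OBJECT_URI, data=body, method=method)
    if body is not None:
        request.add_header("Content-Type", "text/plain")
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    # The same addressable resource is created, read and deleted with the
    # standard HTTP verbs (endpoint is fictitious; shown for shape only).
    rest_call("PUT", b"quarterly figures")   # create or update
    rest_call("GET")                         # retrieve
    rest_call("DELETE")                      # delete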

Workload Management:
The use case for workload management corresponds to the management of a workload deployed in the
cloud environment, such as starting, stopping, changing, or querying the state of a virtual instance. As
with the data-management use case, in an interoperability context an organization can ideally use any
workload-management program with any provider. Even though most environments provide a form of
management console or command-line tools, they also provide APIs based on REST or SOAP.
Providers that offer SOAP-based or RESTful APIs for workload management include Amazon EC2,
Eucalyptus, GoGrid Cloud Servers, Google App Engine, Microsoft Windows Azure, and OpenStack
(Image Service).
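In the absence of common standards, interoperability of management tooling is often achieved with an adapter layer. The Python sketch below shows the general idea: custom management logic is written against one small interface, and per-provider adapters translate those calls into each provider's own REST or SOAP API. All class and method names here are invented for illustration.

from abc import ABC, abstractmethod

class WorkloadManager(ABC):
    """Provider-neutral interface that custom management tools code against."""

    @abstractmethod
    def start(self, instance_id: str) -> None: ...

    @abstractmethod
    def stop(self, instance_id: str) -> None: ...

    @abstractmethod
    def status(self, instance_id: str) -> str: ...

class ExampleProviderAdapter(WorkloadManager):
    """Stand-in adapter; a real one would call the provider's documented API."""

    def __init__(self) -> None:
        self._state = {}

    def start(self, instance_id: str) -> None:
        self._state[instance_id] = "running"

    def stop(self, instance_id: str) -> None:
        self._state[instance_id] = "stopped"

    def status(self, instance_id: str) -> str:
        return self._state.get(instance_id, "unknown")

def restart(manager: WorkloadManager, instance_id: str) -> str:
    """Management logic written once, usable with any provider adapter."""
    manager.stop(instance_id)
    manager.start(instance_id)
    return manager.status(instance_id)

if __name__ == "__main__":
    print(restart(ExampleProviderAdapter(), "vm-001"))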

(or)

Data models describe the structure of data and are used by applications. They enable the
applications to interpret and process the data. They apply to data that is held in store and accessed by
applications, and also to data in messages passed between applications.

A data model may exist as:


 An undocumented shared understanding between programmers
 A human-readable description, possibly a standard
 A machine-readable description

While there are some standard data models, such as that defined by the ITU-T X.500 standards for
directories, most data models are application-specific. There is, however, value in standardizing how
these specific data models are described. This is important for data portability and for interoperability
between applications and software services.

Relational database schemas are the most commonly-encountered data models. They are based on the
relational database table paradigm. A schema typically exists in human-readable and machine-readable
form. The machine-readable form is used by the Database Management System (DBMS), which is the
application that directly accesses the data. Applications that use the DBMS to access the data
indirectly do not often use the machine-readable form of the schema; they work because their
programmers read the human-readable form. The Structured Query Language (SQL) standard [SQL]
applies to relational databases.

The semantic web standards can be used to define data and data models in machine-readable form.
They are based on yet another paradigm, in which data and metadata exist as subject-predicate-object
triples. They include the Resource Description Framework (RDF) [RDF] and the Web Ontology
Language (OWL) [OWL]. With these standards, all applications use the machine-readable form, and
there is less reliance on programmers' understanding of the human-readable form.
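As a small illustration of a machine-readable data model, the sketch below uses the rdflib library to
record a few subject-predicate-object triples and serialize them as Turtle. The example namespace and
property names are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/schema#")   # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)
g.add((EX.order42, RDF.type, EX.Order))                # "order42 is an Order"
g.add((EX.order42, EX.totalAmount, Literal(99.50)))    # "order42 has totalAmount 99.50"

print(g.serialize(format="turtle"))   # a form any RDF-aware application can parse
```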

Application-Application Interfaces

These are interfaces between applications. Increasingly, applications are web-based and
intercommunicate using web service APIs. Other means of communication, such as message queuing
or shared data, are and will continue to be widely used, for performance and other reasons, particularly
between distributed components of a single application. However, APIs to loosely-coupled services
form the best approach for interoperability.

Some cloud service providers make client libraries available to make it easier to write programs that
use their APIs. These client libraries may be available in one or more of the currently popular
programming languages.

A service provider may go further, and make available complete applications that run on client devices
and use its service ("client apps"). This is a growing phenomenon for mobile client devices such as
tablets and smartphones. For example, many airlines supply "apps" that passengers can use to manage
their bookings and check in to flights.

If a service provider publishes and guarantees to support an API then it can be regarded as an
interoperability interface. Use of a library or client app can enable the service provider to change the
underlying HTTP or SOAP interface without disrupting use of the service. In such a case a stable
library interface may still enable interoperability. A service that is available only through client apps is
unlikely to be interoperable.

A web service API is a protocol interface with four layers.

Web Service APIs

Applications are concerned with the highest layer, which is the message content layer. This provides
transfer of information about the state of the client and the state of the service, including the values of
data elements maintained by the service.
An application-application API specification defines the message content, the syntax in which it is
expressed, and the envelopes in which it is transported.

The platforms supporting the applications handle the Internet, HTTP, and message envelope layers of
a web service interface, and enable a service to send and receive arbitrary message contents. These
layers are discussed under Interfaces below.

Message content is application-specific:

• Some standardization may be appropriate within particular applications (for example, the Open
Travel Alliance (OTA) [OTA] defines standard APIs to services such as hotel reservation, to enable
collaboration between hotels and booking agents).

• The amount of application-specific processing required can be reduced by using standards for
message syntax and semantics.

The Cloud Data Management Interface (CDMI) [CDMI] defined by the Storage Networking Industry
Association (SNIA) is a standard application-specific interface for a generic data storage and retrieval
application. (It is a direct HTTP interface, follows REST principles, and uses JSON to encode data
elements.) It also provides some management capabilities.
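The sketch below shows roughly what a CDMI-style object creation could look like over HTTP, based
on CDMI's convention of JSON bodies and CDMI-specific media types. The endpoint, container name,
and specification-version header value are assumptions for illustration rather than a tested request
against a real CDMI server.

```python
import json
import requests

url = "https://cdmi.example-cloud.test/container1/hello.txt"   # hypothetical CDMI endpoint
headers = {
    "Content-Type": "application/cdmi-object",        # CDMI object media type
    "Accept": "application/cdmi-object",
    "X-CDMI-Specification-Version": "1.1",             # assumed version header value
}
body = {"mimetype": "text/plain", "value": "hello, cloud"}

resp = requests.put(url, headers=headers, data=json.dumps(body))
print(resp.status_code)
```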

Message contents are essentially data. Standards for describing data models can be applied to service
functional interface message contents and improve service interoperability.

Application Management Interfaces:

These are web service APIs, like Application-Application Interfaces, but are presented by applications
to expose management capabilities rather than functional capabilities.

Standardization of some message content is appropriate for these and other management interfaces.
This is an active area of cloud standardization, and there are a number of emerging standards. Some
are generic, while others are specific to applications, platform, or infrastructure management. None,
however, appear to be widely adopted yet.

There are two generic standards, TOSCA and OCCI, which apply to application management.

The OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA) [TOSCA] is
an XML standard language for descriptions of service-based applications and their operation and
management regimes. It can apply to complex services implemented on multiple interacting servers.

The Open Cloud Computing Interface (OCCI) [OCCI] of the Open Grid Forum is a standard interface
for all kinds of cloud management tasks. OCCI was originally initiated to create a remote management
API for IaaS model-based services, allowing for the development of interoperable tools for common
tasks including deployment, autonomic scaling, and monitoring. The current release is suitable to
serve many other models in addition to IaaS, including PaaS and SaaS.
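As a rough sketch of what an OCCI request can look like, the example below posts a new compute
resource using the OCCI text rendering (Category and X-OCCI-Attribute headers). The endpoint is
hypothetical, and the exact header syntax should be checked against the OCCI rendering specification
before use.

```python
import requests

endpoint = "https://occi.example-cloud.test/compute/"   # hypothetical OCCI endpoint
headers = {
    "Content-Type": "text/occi",
    "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
    "X-OCCI-Attribute": "occi.compute.cores=2, occi.compute.memory=4.0",
}

resp = requests.post(endpoint, headers=headers)
print(resp.status_code, resp.headers.get("Location"))   # Location of the created resource, if any
```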

Platform Management Interfaces:


These are web service APIs, like Application-Application Interfaces, that are presented by platforms to
expose management capabilities.

The Cloud Application Management for Platforms (CAMP) [CAMP] is a PaaS management
specification that is to be submitted to OASIS for development as an industry standard. It defines an
API using REST and JSON for packaging and controlling PaaS workloads.

Infrastructure Management Interfaces:


These are web service APIs, like Application-Application Interfaces, that are presented by
infrastructure services to expose management capabilities.

There are some standard frameworks that make it possible to write generic management systems that
interoperate with vendor-specific products.

• The IETF Simple Network Management Protocol (SNMP) [SNMP] is the basis for such a
framework. It is designed for Internet devices. (A minimal query sketch follows this list.)

• The Common Management Information Service (CMIS) [CMIS] and the Common Management
Information Protocol (CMIP) [CMIP] are the basis for another such framework. They are designed
for telecommunication devices.

• The Distributed Management Task Force (DMTF) [DMTF] has defined a Common Information
Model (CIM) [CIM] that provides a common definition of management information for systems of
all kinds.
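The sketch below shows the kind of generic query such a framework enables: an SNMP GET for a
device's system description using the pysnmp high-level API. The device address and community
string are placeholders.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=1),          # SNMPv2c community (placeholder)
           UdpTransportTarget(("192.0.2.10", 161)),      # hypothetical managed device
           ContextData(),
           ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)))
)

if error_indication:
    print("query failed:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```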

Utility Computing Technology:


Utility computing is a service provisioning model in which a service provider makes computing
resources and infrastructure management available to the customer as needed, and charges them for
specific usage rather than a flat rate. Like other types of on-demand computing (such as grid
computing), the utility model seeks to maximize the efficient use of resources and/or minimize
associated costs.

The word utility is used to make an analogy to other services, such as electrical power, that seek to
meet fluctuating customer needs, and charge for the resources based on usage rather than on a flat-rate
basis. This approach, sometimes known as pay-per-use or metered service, is becoming increasingly
common in enterprise computing and is sometimes used for the consumer market as well, for Internet
service, Web site access, file sharing, and other applications.
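A tiny sketch of what metered billing means in practice: charges accrue per unit of consumption rather
than as a flat fee. The resource names and rates below are invented for illustration.

```python
# Hypothetical per-unit rates for metered (pay-per-use) billing
RATES = {"vm_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.09}

def monthly_bill(usage: dict) -> float:
    """Charge only for what was actually consumed."""
    return sum(RATES[item] * amount for item, amount in usage.items())

print(monthly_bill({"vm_hours": 720, "gb_stored": 500, "gb_transferred": 120}))  # 56.8
```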

Another version of utility computing is carried out within an enterprise. In a shared pool utility model,
an enterprise centralizes its computing resources to serve a larger number of users without unnecessary
redundancy.
This model is based on that used by conventional utilities such as telephone services, electricity and
gas. The principle behind utility computing is simple. The consumer has access to a virtually unlimited
supply of computing solutions over the Internet or a virtual private network, which can be sourced and
used whenever it's required. Management and delivery of the back-end infrastructure and computing
resources are governed by the provider.

Utility computing solutions can include virtual servers, virtual storage, virtual software, backup and
most IT solutions.

Cloud computing, grid computing and managed IT services are based on the concept of utility
computing.

Virtualization:
Virtualization is the creation of virtual servers, infrastructures, devices and computing
resources. A great example of how it works in your daily life is the separation of your hard drive into
different parts. While you may have only one hard drive, your system sees it as two, three or more
different and separate segments. This technology has in fact been used for a long time: it started as the
ability to run multiple operating systems on one hardware set, and now it is a vital part of testing and
cloud-based computing.

Virtualization vs. Cloud Computing

• Virtualization changes the hardware-software relationship and is one of the foundational elements
of cloud computing technology that helps utilize cloud computing capabilities to the full. Unlike
virtualization, cloud computing refers to the service that results from that change: the delivery of
shared computing resources, SaaS and on-demand services through the Internet.
• Most of the confusion occurs because virtualization and cloud computing work together to provide
different types of services, as is the case with private clouds.
• The cloud often includes virtualization products as part of its service package. The difference is
that a true cloud provides the self-service feature, elasticity, automated management, scalability
and pay-as-you-go service that are not inherent in virtualization alone.

A technology called the Virtual Machine Monitor (also known as the virtual manager) encapsulates the
very basics of virtualization in cloud computing. It is used to separate the physical hardware from its
emulated parts. This often includes the CPU's memory, I/O and network traffic. A secondary operating
system that would usually interact with the hardware is now a software emulation of that hardware, and
often the guest operating system has no idea it's on the virtualized hardware. Despite the fact that the
performance of the virtual system is not equal to that of the "true hardware" operating system, the
technology still works because most secondary OSs and applications don't need the full use of the
underlying hardware. This allows for greater flexibility, control and isolation by removing the
dependency on a given hardware platform.

The layer of software that enables this abstraction is called the "hypervisor". A study in the International
Journal of Scientific & Technology Research defines it as "a software layer that can monitor and
virtualize the resources of a host machine conferring to the user requirements." The most common
hypervisor is referred to as Type 1; by talking to the hardware directly, it virtualizes the hardware
platform and makes it available to be used by virtual machines. There is also a Type 2 hypervisor,
which requires an operating system. Most often, you can find it being used in software testing and
laboratory research.
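To make the hypervisor discussion concrete, the sketch below talks to a local hypervisor through the
libvirt Python bindings, lists its guests, and starts one. The connection URI assumes a KVM/QEMU
host, and the guest name is hypothetical.

```python
import libvirt   # libvirt-python bindings

conn = libvirt.open("qemu:///system")      # connect to the local KVM/QEMU hypervisor
for dom in conn.listAllDomains():
    print(dom.name(), "running" if dom.isActive() else "stopped")

dom = conn.lookupByName("test-vm")          # hypothetical guest name
if not dom.isActive():
    dom.create()                            # boot the guest on hypervisor-emulated hardware
conn.close()
```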

Types of Virtualization in Cloud Computing


Here are six methodologies to look at when talking about virtualization techniques in cloud computing:

Network Virtualization
Network virtualization in cloud computing is a method of combining the available resources in a
network by splitting up the available bandwidth into different channels, each being separate and
distinguished. They can be either assigned to a particular server or device or stay unassigned
completely — all in real time. The idea is that the technology disguises the true complexity of the
network by separating it into parts that are easy to manage, much like your segmented hard drive
makes it easier for you to manage files.

Storage Virtualization
Using this technique gives the user an ability to pool the hardware storage space from several
interconnected storage devices into a simulated single storage device that is managed from one single
command console. This storage technique is often used in storage area networks. Storage manipulation
in the cloud is mostly used for backup, archiving, and recovering of data by hiding the real and
physical complex storage architecture. Administrators can implement it with software applications or
by employing hardware and software hybrid appliances.

Server Virtualization
This technique is the masking of server resources. It simulates physical servers by changing their
identity, numbers, processors and operating systems. This spares the user from continuously managing
complex server resources. It also makes a lot of resources available for sharing and utilizing, while
maintaining the capacity to expand them when needed.

Data Virtualization
This kind of cloud computing virtualization technique is abstracting the technical details usually used
in data management, such as location, performance or format, in favor of broader access and more
resiliency that are directly related to business needs.

Desktop Virtualization
As compared to other types of virtualization in cloud computing, this model enables you to emulate a
workstation load, rather than a server. This allows the user to access the desktop remotely. Since the
workstation is essentially running in a data center server, access to it can be both more secure and
portable.

Application Virtualization
Software virtualization in cloud computing abstracts the application layer, separating it from the
operating system. This way the application can run in an encapsulated form without being dependent
upon the operating system underneath. In addition to providing a level of isolation, an application
created for one OS can run on a completely different operating system.
Hyper-Threading:
Hyper-Threading (HT) is a technology used by some Intel microprocessors that allows a single
microprocessor to act like two separate processors to the operating system and the application
programs that use it. It is a feature of Intel's IA-32 processor architecture.

A superscalar CPU architecture executes multiple instructions in parallel, a capability known as
instruction-level parallelism (ILP). A CPU with multithreading capability can simultaneously execute
different program parts, known as threads.
HT allows a multithreaded application to run threads in parallel on a single processor that would
otherwise execute them one after another. HT's main advantage is that it allows the simultaneous
execution of multiple threads, which improves response and reaction time while enhancing system
capabilities and support.
An HT processor contains two sets of registers: the control registers and the basic registers. A control
register is a processing register that controls or changes the CPU's overall performance by switching
address mode, interrupt control or coprocessor control. A basic register is a storage location and part of
the CPU. Both logical processors share the same bus, cache and execution units. During execution,
each register set handles its threads individually.
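A quick way to observe Hyper-Threading from the operating system's point of view is to compare the
physical core count with the logical processor count, as in this sketch using the psutil library.

```python
import psutil

logical = psutil.cpu_count(logical=True)     # hardware threads visible to the OS
physical = psutil.cpu_count(logical=False)   # physical cores

print(f"{physical} physical cores, {logical} logical processors")
if logical and physical and logical > physical:
    print("Hyper-Threading (or another form of SMT) appears to be enabled")
```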

Older designs achieved similar results with dual-processing software threads, dividing instructions into
several streams so that more than one processor executed the commands. PCs that multithread
simultaneously have hardware support and the ability to execute more than one thread of instructions
in parallel.

To ensure optimum results, a PC system requires several components, including a compatible
motherboard chipset, a basic input/output system (BIOS) and HT technology-supported upgrades, and
a compatible operating system (OS).

HT was developed by Digital Equipment Corporation but was brought to market in 2002, when Intel
introduced the Foster-based Xeon MP and released the Northwood-based Pentium 4 at 3.06 GHz. Other
HT processors entered the marketplace, including the Pentium 4 HT, Pentium 4 Extreme Edition and
Pentium Extreme Edition.

Blade Servers:
A blade server is a server chassis housing multiple thin, modular electronic circuit boards, known as
server blades. Each blade is a server in its own right, often dedicated to a single application. The blades
are literally servers on a card, containing processors, memory, integrated network controllers, an
optional Fibre Channel host bus adapter (HBA) and other input/output (IO) ports.
Blade servers are designed to overcome the space and energy restrictions of a typical data
center environment. The blade enclosure, also known as chassis, caters to the power,
cooling, network connectivity and management needs of each blade. Each blade server in an
enclosure may be dedicated to a single application. A blade server can be used for tasks
such as:
• File sharing
• Database and application hosting
• SSL encryption of Web communication
• Hosting virtual server platforms
• Streaming audio and video content

The components of a blade may vary depending on the manufacturer. Blade servers offer
increased resiliency, efficiency, dynamic load handling and scalability. A blade enclosure
pools, shares and optimizes power and cooling requirements across all the blade servers,
resulting in multiple blades in a typical rack space.

Some of the benefits of blade servers include:
• Reduced energy costs
• Reduced power and cooling expenses
• Space savings
• Reduced cabling
• Redundancy
• Increased storage capacity
• Reduced data center footprint
• Minimum administration
• Low total cost of ownership

Blade servers continue to evolve as a powerful computing solution, offering improvements in terms of
modularity, performance and consolidation.
Automated Provisioning:
Automated provisioning, also called self-service provisioning, is the ability to deploy an information
technology or telecommunications service by using pre-defined procedures that are carried out
electronically without requiring human intervention.
The term provisioning, which originated in telecommunications, is the act of acquiring a service. In a
traditional setting, provisioning is a manual process that requires the assistance of several people in
several roles and involves multiple steps. It could take days or even weeks to move a request from the
submission phase through the actual activation of service. Automating provisioning allows customers
to set up and make changes to services themselves by using a Web browser or other client interface. It
can provide a more efficient and rapid response to business requests and cut service activation or
service change time down to hours or even minutes.

Automated provisioning is a type of policy-based management and provisioning rights can be granted
on either a permissions-based or role-based basis. Once automated provisioning has been implemented,
it is up to the service provider to ensure that operational processes are being followed and governance
policies are not being circumvented.

Policy Based Automation:


Policy-based management is an administrative approach that is used to simplify the management of a
given endeavor by establishing policies to deal with situations that are likely to occur.
Policies are operating rules that can be referred to as a way to maintain order, security, consistency, or
otherwise further a goal or mission. For example, a town council might have a policy against hiring the
relatives of council members for civic positions. Each time that situation arises, council members can
refer to the policy, rather than having to make decisions on a case-by-case basis.

In the computing world, policy-based management is used as an administrative tool throughout an
enterprise or network, or on workstations that have multiple users. Policy-based management includes
policy-based network management, the use of delineated policies to control access to and priorities for
the use of resources. Policy-based management is often used in systems management.

Policy-based management of a multi-user workstation typically includes setting individual policies for
such things as access to files or applications, various levels of access (such as "read-only" permission,
or permission to update or delete files), the appearance and makeup of individual users' desktops and so
on. There are a number of software packages available to automate some elements of policy-based
management. In general, the way these work is as follows: business policies are input to the products,
and the software communicates to network hardware how to support those policies.
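As a minimal sketch of how a policy engine turns declared rules into decisions, the snippet below
checks requested actions against a role-based policy table. The roles, resource types, and actions are
hypothetical.

```python
# Hypothetical policy table: role -> resource type -> allowed actions
POLICIES = {
    "developer": {"vm": {"start", "stop"}, "storage": {"read"}},
    "admin":     {"vm": {"start", "stop", "delete"}, "storage": {"read", "write"}},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Return True if the role's policy permits the action on the resource type."""
    return action in POLICIES.get(role, {}).get(resource, set())

print(is_allowed("developer", "vm", "delete"))   # False: the policy does not permit it
print(is_allowed("admin", "storage", "write"))   # True
```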

Application Management:
Cloud application management for platforms (CAMP) is a specification developed for the management
of applications specifically in Platform as a Service (PaaS) based cloud environments.
The CAMP specification provides a framework for enabling application developers to manage their
applications through open API structures based on representational state transfer (REST).

Cloud Application Management for Platforms (CAMP):


CAMP was primarily developed by Oracle Corporation in collaboration with CloudBees, CloudSoft,
Huawei, Rackspace, Red Hat and Software AG. These specifications allow the direct interaction
between the cloud provider that builds and provisions the PaaS service and the cloud consumer that is
using that platform to build applications and services. This allows the cloud consumer to self-serve
management of the application while sourcing core PaaS offerings.
CAMP's key characteristics include managing applications throughout their life cycle and being as
interoperable as possible. Moreover, the application management services are handled through
common RESTful APIs that operate on multiple cloud platforms/environments.

Management Components:
Patterns of this category describe how management functionality can be integrated with components
providing application functionality.

• Provider Adapter
• Managed Configuration
• Elasticity Manager
• Elastic Load Balancer
• Elastic Queue
• Watchdog

Management Processes:
Patterns of this category describe how distributed and componentized cloud applications may address
runtime challenges, such as elasticity and failure handling in an automated fashion.
• Elasticity Management Process
• Feature Flag Management Process
• Update Transition Process
• Standby Pooling Process
• Resiliency Management Process

Provider Adapter:
The Provider Adapter encapsulates all provider-specific implementations required for authentication,
data formatting etc. in an abstract interface. The Provider Adapter, thus, ensures separation of
concerns between application components accessing provider functionality and application
components providing application functionality. It may also offer synchronous provider-interfaces to
be accessed asynchronously via messages and vice versa.

Managed Configuration:
Application components of a Distributed Application often have configuration parameters. Storing
configuration information together with the application component implementation can be impractical,
as it results in more overhead in case of configuration changes: each running instance of the
application component must be updated separately, and component images stored in Elastic
Infrastructures or Elastic Platforms also have to be updated upon configuration change.

Commonly, configuration information is therefore stored in a Relational Database, Key-Value Storage,
or Blob Storage, from where it is accessed by all running component instances, either by accessing the
storage periodically or by sending messages to the components.

Elasticity Manager:
Application components of a Distributed Application hosted on an Elastic Infrastructure or Elastic
Platform shall be scaled out. The instances of application components, thus, shall be provisioned and
decommissioned automatically based on the current workload experienced by the application.
The utilization of cloud resources on which application component instances are deployed is
monitored. This could be, for example, the CPU load of a virtual server. This information is used to
determine the number of required instances.
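A minimal sketch of the scaling decision an Elasticity Manager might make, assuming a simple
target-utilization rule; the target level and bounds are arbitrary illustration values.

```python
import math

def desired_instances(cpu_utilisations, target=0.6, min_instances=1, max_instances=20):
    """Choose an instance count so average CPU utilisation approaches the target."""
    current = len(cpu_utilisations)
    average = sum(cpu_utilisations) / current
    needed = math.ceil(current * average / target)
    return max(min_instances, min(max_instances, needed))

print(desired_instances([0.90, 0.85, 0.80]))   # suggests scaling out to 5 instances
```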

Elastic Load Balancer:


Application components of a Distributed Application shall be scaled out automatically. Requests sent
to the application shall be used as an indicator of the currently experienced workload, from which the
required number of component instances shall be deduced.
Based on the number of synchronous requests handled by a load balancer, and possibly other utilization
information, the required number of component instances is determined.

Elastic Queue:
A Distributed Application is comprised of multiple application components that are accessed
asynchronously and deployed to an Elastic Infrastructure or an Elastic Platform. The required
provisioning and decommissioning operations to scale this application should be performed in an
automated fashion.
Queues that are used to distribute asynchronous requests among multiple application component
instances are monitored. Based on the number of enqueued messages, the Elastic Queue adjusts the
number of application component instances handling these requests.
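A comparable sketch for the Elastic Queue pattern, sizing the worker pool from the number of waiting
messages; the messages-per-instance capacity is an assumed figure.

```python
import math

def workers_for_queue(queue_length, msgs_per_worker=100, min_workers=1):
    """Derive the number of handler instances from the queue backlog."""
    return max(min_workers, math.ceil(queue_length / msgs_per_worker))

print(workers_for_queue(850))   # -> 9 handler instances
```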

Watchdog:
Applications cope with failures automatically by monitoring and replacing application component
instances if the provider-assured availability is insufficient.
If a Distributed Application is comprised of many application components it is dependent on the
availability of all component instances. To enable high availability under such conditions, applications
have to rely on redundant application component instances and the failure of these instances has to be
detected and coped with automatically.

Individual application components rely on external state information by implementing the Stateless
Component pattern. Components are scaled out and multiple instances of them are deployed to
redundant resources. The component instances are monitored by a separate Watchdog component and
replaced in case of failures.
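A hedged sketch of the Watchdog idea: probe each redundant instance's health endpoint and hand
unresponsive ones to a replacement routine. The instance URLs and the /health path are hypothetical.

```python
import requests

INSTANCES = ["http://10.0.0.11:8080", "http://10.0.0.12:8080"]   # hypothetical instance endpoints

def check_and_replace(instances, replace):
    """Probe each instance and invoke the platform's replacement routine on failure."""
    for url in instances:
        try:
            healthy = requests.get(f"{url}/health", timeout=2).status_code == 200
        except requests.RequestException:
            healthy = False
        if not healthy:
            replace(url)   # provisioning callback supplied by the Elastic Infrastructure/Platform

check_and_replace(INSTANCES, replace=lambda url: print("replacing", url))
```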

Evaluating Utility Management Technology:


Existing evaluations have considered the performance of various types of applications, such as
scientific computing, e-commerce and web applications. For example, the work of Iosup et al. focuses
on analyzing many-task applications on clouds by considering the performance parameter only. The
CloudCmp framework compares the performance of different cloud services, such as Amazon EC2,
Windows Azure and Rackspace, by considering low-level performance parameters such as CPU and
network throughput. Other frameworks have been proposed to measure quality and prioritize services,
but they lack a flexible view of quality and thus limit the overall evaluation. Saravanan and Kantham
proposed a framework for ranking and advance reservation of cloud services using Quality of Service
(QoS) attributes, but not all of the QoS characteristics that have been discussed are considered for
evaluation purposes.

Another QoS ranking prediction framework for cloud services considers the consumers' historical
service usage data; this framework facilitates inexpensive real-world service invocations. A generic
QoS framework consisting of four components has also been proposed for cloud workflow systems,
but it is not suitable for solving complex problems such as multi-QoS-based service selection,
monitoring and violation handling.

Evaluation techniques

One of the popular decision-making techniques utilized for cloud service evaluation and ranking is the
Analytic Hierarchy Process (AHP). One proposed framework utilizes an AHP-based ranking
mechanism that can evaluate cloud services for different applications depending on their QoS
requirements. Various other works have applied the AHP technique to cloud resources; some combine
AHP with expert scoring to evaluate SaaS products, while others use AHP to evaluate IaaS products.
Though other techniques have also been employed to evaluate and rank cloud services, they are limited
in handling both qualitative and quantitative criteria and in computational effectiveness. Although AHP
is an effective decision-making tool, disadvantages such as complex pairwise comparison and
subjectivity make it cumbersome, and the comparisons become computationally intensive and
unmanageable when the criteria and alternatives are large in number. An effective alternative is the
Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), which can consider a large
number of criteria (qualitative and quantitative) and alternatives to evaluate and rank cloud services,
and which is computationally effective and easily manageable.
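To show how TOPSIS handles many criteria at once, the sketch below ranks three hypothetical
providers on cost, availability and throughput. The scores and weights are invented; only the method
itself follows the standard TOPSIS steps (normalise, weight, measure distance to the ideal and
anti-ideal solutions).

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns) using TOPSIS.

    benefit[j] is True where larger values are better (e.g. availability)
    and False where smaller values are better (e.g. cost)."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))           # vector normalisation
    v = norm * np.asarray(weights, dtype=float)        # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                      # closeness to ideal: higher is better

# Hypothetical providers scored on [cost $/hr, availability %, throughput MB/s]
closeness = topsis([[0.10, 99.90, 400],
                    [0.08, 99.50, 350],
                    [0.12, 99.99, 500]],
                   weights=[0.4, 0.3, 0.3],
                   benefit=[False, True, True])
print("ranking (best first):", (np.argsort(-closeness) + 1).tolist())
```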

Evaluation criteria and metrics

Metrics have been proposed for measuring the scalability, cost, peak-load handling and fault tolerance
of cloud environments. One proposed framework is used only for quantifiable QoS attributes such as
Accountability, Agility, Assurance of Service, Cost, Performance, Security, Privacy and Usability; it is
not suitable for non-quantifiable QoS attributes such as Service Response Time, Sustainability,
Suitability, Accuracy, Transparency, Interoperability, Availability, Reliability and Stability. Garg et al.
proposed metrics for Data Center Performance per Energy, Data Center Infrastructure Efficiency,
Power Usage Efficiency, Suitability, Interoperability, Availability, Reliability, Stability, Accuracy,
Cost, Adaptability, Elasticity, Usability, Throughput and Efficiency, and Scalability. A few of these
metrics, such as suitability, are not computationally effective.

Common criteria mentioned and covered in many studies fall under the following headings: cost,
performance, availability, reliability, security, usability, agility and reputation.

QoS Evaluation Framework

In this section, a QoS evaluation and ranking framework for cloud services is described. Fig. 1
illustrates the framework, in which a plurality of cloud services can be evaluated and ranked.

The framework includes components such as the cloud administrator, cloud data discovery, cloud
service discovery and the cloud services themselves. The cloud administrator is composed of a cloud
service measurement component and a cloud manager component. The cloud administrator
communicates with the cloud data discovery component to obtain the required service parameter data.
The cloud data discovery component is composed of a cloud monitor component and a history manager
component. In evaluating and ranking a cloud service, the proposed framework does not necessarily
select the most cost-effective, i.e. least expensive, cloud service provider, because the service
measurement depends on multiple other parameters that directly or indirectly affect the cost of the
service. The cloud administrator component is responsible for computing the QoS of a cloud service by
generating cloud service rankings in the form of indices. The cloud service measurement component
receives the customer's request for cloud service evaluation, collects all of the requirements, and
performs the discovery and ranking of suitable services using the other components. The cloud manager
component keeps track of customers' SLAs with cloud providers and their fulfillment history. The cloud
service measurement component uses one or more QoS parameters to generate a service index
indicating which cloud service provider best fits the user's service request requirements. The cloud
manager component manages the smooth gathering of information from the cloud administrator
component and delegates it to the cloud service measurement component.
service measurement component. The cloud data discovery component deals with gathering requisite

Cloud computing 65
service level data to compute QoS of cloud service ranking. The cloud monitor component first
discovers the Cloud services which can satisfy user‘s essential QoS requirements. Then, it monitors
the performance of the Cloud services. The history manager component stores the history of services
provided by the cloud provider. The cloud monitor component gathers data such as speed of VM,
memory, scaling latency, storage performance, network latency and available bandwidth. It also keeps
track of how SLA requirements of previous customers are being satisfied by the Cloud provider. The
history manager component stores the past customer feedback, interaction and service experience
information about the cloud service for each cloud vendors.

Virtual Test and development Environment:


A cloud IDE is a web-based integrated development environment (IDE).

An IDE is a programming environment that has been packaged as an application, typically consisting
of a code editor, a compiler, a debugger, and a graphical user interface (GUI) builder. Frequently,
cloud IDEs are not only cloud-based but also designed for the creation of cloud apps. However, some
cloud IDEs are optimized for the creation of native apps for smartphones, tablets and other mobile
devices.

A virtual testing and development environment lets you test OSes and applications before deployment.
You can even build a virtual test lab at home.

A cloud IDE moves the entire development workspace into the cloud. The developer's environment is
a combination of the IDE, the local build system, the local runtime (to test and debug the locally edited
code), the connections between these components and their dependencies on tools such as Continuous
Integration or central services such as Web Services, specialized data stores, legacy applications or
partner-provided services.

The cloud-based workspace is centralized, making it easy to share. Developers can invite others into
their workspace to co-edit, co-build, or co-debug. Developers can communicate with one another in
the workspace itself – changing the entire nature of pair programming, code reviews and classroom
teaching. The cloud can offer improvements in system efficiency & density, giving each individual
workspace a configurable slice of the available memory and compute resources.

The benefits of cloud IDEs include accessibility from anywhere in the world, from any compatible
device; minimal-to-nonexistent download and installation; and ease of collaboration among
geographically dispersed developers.

The emergence of HTML 5 is often cited as a key enabler of cloud IDEs because that standard
supports browser-based development. Other key factors include the increasing trends toward mobility,
cloud computing and open source software.

Data Center Challenges and Solutions:


Cloud computing is a next-generation, Internet-based computing model that provides easy and
customizable services to users for accessing and working with various cloud applications. It provides a
way to store and access data from anywhere by connecting to a cloud application over the Internet. By
choosing cloud services, users are able to store their local data on remote data servers. The data stored
in a remote data center can be accessed or managed through the cloud services provided by the cloud
service providers, so storing and processing data in a remote data center must be done with utmost
care.

Cloud computing security is a major concern that must be addressed. If security measures are not
provided properly for data operations and transmissions, then data is at high risk. Since cloud
computing allows a group of users to access the stored data, there is a possibility of high data risk, so
the strongest security measures have to be implemented by identifying the security challenges and the
solutions that handle them. As the figure below indicates, data security and privacy are the most
important and critical factors to be considered.

Fig. 2. Data Security Challenges.


Security:
When multiple organizations share resources, there is a risk of data misuse. To avoid this risk it is
necessary to secure data repositories, as well as the data involved in storage, transit or processing.
Protection of data is one of the most important challenges in cloud computing. To enhance security in
cloud computing, it is important to provide authentication, authorization and access control for data
stored in the cloud. The three main areas of data security are:

1. Confidentiality: Top vulnerabilities should be checked to ensure that data is protected from
attacks, so security testing has to be done to protect data from malicious users, covering issues
such as cross-site scripting and access control mechanisms.
2. Integrity: To provide security for client data, thin clients are used where only a few resources
are available. Users should not store personal data such as passwords on them, so that integrity
can be assured.
3. Availability: Availability is a critical issue, as several organizations face downtime as a major
problem. It depends on the agreement between the vendor and the client.

Locality:
In cloud computing, data is distributed over a number of regions and finding the location of data is
difficult. When data is moved to different geographic locations, the laws governing that data can also
change, so compliance with data privacy laws is an issue in cloud computing. Customers should know
where their data is located, and the service provider should disclose it.

Integrity:
The system should maintain security such that data can only be modified by authorized persons. In a
cloud-based environment, data integrity must be maintained correctly to avoid data loss. In general,
every transaction in cloud computing should follow the ACID properties to preserve data integrity.
Many web services face frequent problems with transaction management because they use HTTP,
which does not support transactions or guarantee delivery. This can be handled by implementing
transaction management in the API itself.

Access:
Keys are distributed only to authorized parties using various key distribution mechanisms. To secure
data from unauthorized users, data security policies must be strictly followed. Since access is given
through the Internet to all cloud users, it is necessary to manage privileged user access. Users can also
apply data encryption and protection mechanisms to reduce security risk.

Confidentiality:
Data is stored on remote servers by cloud users, and content such as data and videos can be stored with
a single cloud provider or multiple providers. When data is stored on a remote server, data
confidentiality is one of the most important requirements. To maintain confidentiality, users should
understand and classify their data, and be aware of which data is stored in the cloud and who can
access it.

Breaches:
Data breaches are another important security issue to be addressed in the cloud. Since large volumes of
data from various users are stored in the cloud, there is a possibility of a malicious user entering the
cloud, leaving the entire cloud environment prone to a high-value attack. A breach can occur due to
accidental transmission issues or due to an insider attack.

Segregation:
One of the major characteristics of cloud computing is multi-tenancy. Since multi-tenancy allows
multiple users to store data on the same cloud servers, there is a possibility of data intrusion. Data can
be intruded upon by injecting client code or by using any application. There is therefore a need to store
data separately from other customers' data.

Storage:
Data stored in virtual machines raises several issues; one such issue is the reliability of data storage.
Virtual machines need to be stored on physical infrastructure, which may itself introduce security
risk.

Data Center Operation:


In case of data transfer bottlenecks or disaster, organizations using cloud computing applications need
to protect users' data without any loss. If data is not managed properly, then data storage and data
access become an issue. In case of disaster, the cloud providers are responsible for the loss of data.

Data Security Challenges:


Encryption is suggested as a good solution for securing information: before storing data on a cloud
server, it is better to encrypt it. The data owner can give permission to particular group members so
that the data can be easily accessed by them. Heterogeneous, data-centric security is to be used to
provide data access control. A data security model comprising authentication, data encryption, data
integrity, data recovery and user protection has to be designed to improve data security in the cloud. To
ensure privacy and data security, data protection can be offered as a service.
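A minimal sketch of encrypting data before it leaves the client, using symmetric (Fernet) encryption
from the Python cryptography package; key management is left out, and the key must be kept outside
the cloud store.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key with the data owner, not in the cloud
f = Fernet(key)

ciphertext = f.encrypt(b"customer record 42")   # only the ciphertext is uploaded to the provider
plaintext = f.decrypt(ciphertext)                # decrypt locally after retrieval
assert plaintext == b"customer record 42"
```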

An RSA-based data integrity check can be provided by combining identity-based cryptography with
RSA signatures. SaaS requires clear boundaries at both the physical level and the application level to
segregate data from different users. A distributed access control architecture can be used for access
management in cloud computing. To identify unauthorized users, credential-based or attribute-based
policies work well. Permission-as-a-service can be used to tell a user which parts of the data can be
accessed. A fine-grained access control mechanism enables the owner to delegate most computation-
intensive tasks to cloud servers without disclosing the data contents. A data-driven framework can be
designed for secure data processing and sharing between cloud users.
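As a hedged sketch of an RSA-based integrity check (signatures only; the identity-based part mentioned
above is not shown), the snippet signs an object before upload and verifies the signature after download
using the cryptography package.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
data = b"object stored in the cloud"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(data, pss, hashes.SHA256())   # computed before upload

# Verification after download raises InvalidSignature if the object was modified
private_key.public_key().verify(signature, data, pss, hashes.SHA256())
print("integrity verified")
```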

Automating the Data Center:


The dynamic nature of cloud computing has pushed data center workload, server, and even hardware
automation to whole new levels. Now, any data center provider looking to get into cloud computing
must look at some form of automation to help them be as agile as possible in the cloud world.

New technologies are forcing data center providers to adopt new methods to increase efficiency,
scalability and redundancy. Let's face facts: there are numerous big trends that have driven the
increased use of data center facilities. These trends include:

• More users
• More devices
• More cloud
• More workloads
• A lot more data
As infrastructure improves, more companies have looked towards the data center provider to offload a
big part of their IT infrastructure. With better cost structures and even better incentives in moving
towards a data center environment, organizations of all sizes are looking at colocation as an option for
their IT environment.
With that, data center administrators are teaming with networking, infrastructure and cloud architects
to create an even more efficient environment. This means creating intelligent systems from the
hardware to the software layer. This growth in data center dependency has resulted in direct growth
around automation and orchestration technologies.
Now, organizations can granularly control resources, both internally and in the cloud. This type of
automation can be seen at both the software layer and the hardware layer. Vendors like BMC,
ServiceNow, and Microsoft (with SCCM/SCOM) are working towards unifying massive systems under
one management engine to provide a single pane of glass into the data center workload environment.
