
MODULE 1 – Cloud Computing
Dr. Shrirang Ambaji Kulkarni
CSE Department
MODULE 1 – Cloud Computing
Defining Cloud Computing
Cloud computing refers to applications and services that run on a distributed network
using virtualized resources and accessed by common Internet protocols and networking
standards.
It is distinguished by the notion that resources are virtual and limitless and that details of
the physical systems on which software runs are abstracted from the user.
Common SaaS services include Dropbox, Salesforce, Cisco WebEx. PaaS services include
Google App Engine, Apache Stratos and OpenShift.
Some well-known IaaS services are Amazon Web Services (AWS), Cisco Metapod and
Microsoft Azure.
MODULE 1 – Cloud Computing
Distributed networking is a distributed computing arrangement in which components of a program and its data reside on multiple machines.
In a distributed network, computer programs, software, and data are spread across more than one computer; the computers (nodes) communicate by exchanging messages and depend on one another.
The goal of a distributed network is to share resources, typically to accomplish a single or
similar goal.
Client/server computing is a type of distributed computing where one computer, a client,
requests data from the server, a primary computing center, which responds to the client
directly with the requested data, sometimes through an agent. Client/server distributed networking is also popular in web-based computing.
MODULE 1 – Cloud Computing
Cloud computing
Enterprises with rapid growth and scaling needs may find it challenging to maintain their
own distributed network under the traditional client/server computing model.
Cloud computing is the delivery of distributed computing as a utility: Internet-based applications, storage, and computing services.
A cloud is a cluster of computers or servers that are closely connected to provide scalable,
high-capacity computing or related tasks.
A computer cluster is a set of computers that work together so that they can be viewed
as a single system.
MODULE 1 – Cloud Computing
In a Beowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "Master," a specific computer that handles the scheduling and management of the slaves.
In a typical implementation the Master has two network interfaces, one that
communicates with the private Beowulf network for the slaves, the other for the general
purpose network of the organization.
The slave computers typically have their own version of the same operating system, and
local memory and disk space.
However, the private slave network may also have a large and shared file server that
stores global persistent data, accessed by the slaves as needed
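To make the master/slave division concrete, here is a minimal sketch using Python's multiprocessing module as a stand-in for a Beowulf master dispatching work to slave nodes; a real cluster would distribute these tasks over the private network with MPI or a batch scheduler.

```python
# A minimal master/worker sketch. The process pool stands in for the slave
# nodes; the main process plays the role of the Beowulf master, which
# schedules tasks and gathers results.
from multiprocessing import Pool

def compute(task):
    # Placeholder for the work a slave node would perform.
    return task * task

if __name__ == "__main__":
    tasks = range(16)                 # work items queued on the master
    with Pool(processes=4) as pool:   # four "slave" workers
        results = pool.map(compute, tasks)  # master gathers all results
    print(results)
```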
MODULE 1 – Cloud Computing
Virtualization is a process that allows for more efficient utilization of physical computer
hardware and is the foundation of cloud computing.
Virtualization uses software to create an abstraction layer over computer hardware that
allows the hardware elements of a single computer—processors, memory, storage and
more—to be divided into multiple virtual computers, commonly called virtual machines
(VMs).
Each VM runs its own operating system (OS) and behaves like an independent computer,
even though it is running on just a portion of the actual underlying computer hardware.
MODULE 1 – Cloud Computing
A hypervisor is the software layer that coordinates VMs. It serves as an interface between the
VM and the underlying physical hardware, ensuring that each has access to the physical
resources it needs to execute. It also ensures that the VMs don’t interfere with each other by
impinging on each other’s memory space or compute cycles.
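As an illustration of how software can interrogate a hypervisor, the sketch below uses the libvirt Python bindings, assuming a local QEMU/KVM host; the connection URI, permissions, and the VMs present depend on the installation.

```python
# A sketch of querying a hypervisor for the VMs it coordinates, via the
# libvirt Python bindings; assumes a local QEMU/KVM host.
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
for dom in conn.listAllDomains(0):      # every VM this hypervisor manages
    state, _reason = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(dom.name(), "running" if running else "not running")
conn.close()
```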
MODULE 1 – Cloud Computing
Defining Cloud Computing
Cloud computing takes the technology, services, and applications that are similar to those on
the Internet and turns them into a self-service utility.
The use of the word “cloud” refers to two essential concepts:
Abstraction: Cloud computing abstracts the details of system implementation from users and
developers.
Applications run on physical systems that aren’t specified, data is stored in locations that are
unknown, administration of systems is outsourced to others, and access by users is ubiquitous.
Virtualization: Cloud computing virtualizes systems by pooling and sharing resources.
Systems and storage can be provisioned as needed from a centralized infrastructure, costs are
assessed on a metered basis, multi-tenancy is enabled, and resources are scalable with agility.
MODULE 1 – Cloud Computing
In cloud computing, multitenancy means that multiple customers of a cloud vendor are using
the same computing resources.
Despite the fact that they share resources, cloud customers aren't aware of each other, and
their data is kept totally separate.
To help clarify how cloud computing has changed the nature of commercial system
deployment, consider these three examples:
Google: In the last decade, Google has built a worldwide network of datacenters to service
its search engine.
In doing so Google has captured a substantial portion of the world’s advertising revenue.
That revenue has enabled Google to offer free software to users based on that infrastructure
and has changed the market for user-facing software.
This is the classic Software as a Service (SaaS) model.
MODULE 1 – Cloud Computing
The first and best-known offerings are an extensive set of very popular applications that
Google offers to the general public.
These applications include Google Docs, Google Health, Picasa, Google Mail, Google Earth, and
many more.

Azure Platform: By contrast, Microsoft is creating the Azure Platform.


It enables .NET Framework applications to run over the Internet, as an alternative to running Microsoft developer software on desktops.

Amazon Web Services: One of the most successful cloud-based businesses is Amazon Web
Services, which is an Infrastructure as a Service offering that lets you rent virtual computers on
Amazon’s own infrastructure.
MODULE 1 – Cloud Computing
Cloud Types
Most people separate cloud computing into two distinct sets of models:
Deployment models: This refers to the location and management of the cloud’s infrastructure.
Service models: This consists of the particular types of services that you can access on a cloud
computing platform.
The NIST model
The U.S. National Institute of Standards and Technology (NIST) has a set of working definitions
(http://csrc.nist.gov/groups/SNS/cloudcomputing/cloud-def-v15.doc) that separate cloud
computing into service models and deployment models.
Those models and their relationship to essential characteristics of cloud computing are shown
in Figure 1.1.
MODULE 1 – Cloud Computing
The NIST model originally did not require a cloud to use virtualization to pool resources, nor
did it absolutely require that a cloud support multi-tenancy in the earliest definitions of cloud
computing.
Multi-tenancy is the sharing of resources among two or more clients.
The latest version of the NIST definition does require that cloud computing networks use
virtualization and support multi-tenancy.
The Cloud Cube Model
The Open Group maintains an association called the Jericho Forum
(https://www.opengroup.org/jericho/index.htm) whose main focus is how to protect cloud
networks.
The group has an interesting model that attempts to categorize a cloud network based on four dimensions.
MODULE 1 – Cloud Computing
The four dimensions of the Cloud Cube Model are shown in Figure 1.2 and listed here:
Physical location of the data: Internal (I) / External (E) determines your organization’s
boundaries.
Ownership: Proprietary (P) / Open (O) is a measure of not only the technology ownership,
but of interoperability, ease of data transfer, and degree of vendor application lock-in.
Security boundary: Perimeterised (Per) / De-perimeterised (D-p) is a measure of whether
the operation is inside or outside the security boundary or network firewall.
Sourcing: Insourced or Outsourced means whether the service is provided by the customer
or the service provider.
MODULE 1 – Cloud Computing
Taken together, the first three dimensions yield eight possible cloud forms, which the security boundary splits into two groups: Per (IP, IO, EP, EO) and D-p (IP, IO, EP, EO).
The sourcing dimension addresses the deliverer of the service.
What the Cloud Cube Model is meant to show is that the traditional notion of a network
boundary being the network’s firewall no longer applies in cloud computing.
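As a quick illustration, the eight cloud forms above can be enumerated mechanically from the three binary dimensions (the sourcing dimension then applies to each form). A small Python sketch, with illustrative names, prints them in the notation used above:

```python
# Enumerate the eight cloud forms of the Cloud Cube Model from its three
# binary dimensions; the fourth dimension (sourcing) applies to each form.
from itertools import product

location  = ["I", "E"]       # Internal / External
ownership = ["P", "O"]       # Proprietary / Open
boundary  = ["Per", "D-p"]   # Perimeterised / De-perimeterised

for loc, own, per in product(location, ownership, boundary):
    print(f"{per}({loc}{own})")   # e.g., Per(IP), D-p(IP), ...
```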
Deployment models
A deployment model defines the purpose of the cloud and the nature of how the cloud is
located.
The NIST definition for the four deployment models is as follows:
Public cloud: The public cloud infrastructure is available for public use, or alternatively for a large industry group, and is owned by an organization selling cloud services.
MODULE 1 – Cloud Computing
Private cloud: The private cloud infrastructure is operated for the exclusive use of an
organization. The cloud may be managed by that organization or a third party. Private
clouds may be either on- or off-premises.
Hybrid cloud: A hybrid cloud combines multiple clouds (private, community, or public) where those clouds retain their unique identities, but are bound together as a unit.
A hybrid cloud may offer standardized or proprietary access to data and applications, as well
as application portability.
Community cloud: A community cloud is one where the cloud has been organized to serve a
common function or purpose.
It may be for one organization or for several organizations, but they share common concerns
such as their mission, policies, security, regulatory compliance needs, and so on
Service models
In the deployment model, different cloud types are an expression of the manner in which infrastructure
is deployed.
You can think of the cloud as the boundary between where a client’s network, management, and responsibilities ends
and the cloud service provider’s begins. As cloud computing has developed, different vendors offer clouds that have
different services associated with them.
The portfolio of services offered adds another set of definitions called the service model.
MODULE 1 – Cloud Computing
Three service types have been universally accepted:
Infrastructure as a Service: IaaS provides virtual machines, virtual storage, virtual
infrastructure, and other hardware assets as resources that clients can provision.
The IaaS service provider manages all the infrastructure, while the client is responsible for all
other aspects of the deployment.
This can include the operating system, applications, and user interactions with the system.
Platform as a Service: PaaS provides virtual machines, operating systems, applications,
services, development frameworks, transactions, and control structures.
The client can deploy its applications on the cloud infrastructure or use applications that were
programmed using languages and tools that are supported by the PaaS service provider.
The service provider manages the cloud infrastructure, the operating systems, and the
enabling software.
The client is responsible for installing and managing the application that it is deploying.
MODULE 1 – Cloud Computing
Software as a Service: SaaS is a complete operating environment with applications,
management, and the user interface.
In the SaaS model, the application is provided to the client through a thin client interface (a
browser, usually), and the customer’s responsibility begins and ends with entering and
managing its data and user interaction.
Everything from the application down to the infrastructure is the vendor’s responsibility.
MODULE 1 – Cloud Computing
A native app, or native application, is a software application built in a specific programming language for a specific device platform, such as iOS or Android.
A web app “is an application that is accessed via a web browser over a network such as the Internet.”
A web app provides additional functionality and interactivity; Facebook is a familiar example.
[Web applications are websites with functionality and interactive elements.]
API Management
APIs–Application Programming Interfaces–are the building blocks of modern technology and
business infrastructure.
At its most basic level, an API is a software interface that allows two applications to talk to each other.
API management is the process of developing, designing, monitoring, testing, securing, and analyzing APIs for organizations. API management platforms provide a robust set of software and processes with this functionality, hosted on premises, in the cloud, or in a hybrid environment.
MODULE 1 – Cloud Computing
API management can be delivered on-premises, through the cloud, or using a hybrid on-
premises – SaaS (Software as a Service) approach.
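As a minimal illustration of “two applications talking to each other,” the sketch below issues a single HTTP API call with Python's requests library; the endpoint URL is hypothetical, and real APIs typically also require authentication.

```python
# A minimal API call: one application requesting data from another over HTTP.
# The endpoint URL is a placeholder, not a real service.
import requests

response = requests.get("https://api.example.com/v1/status", timeout=5)
response.raise_for_status()   # surface HTTP errors instead of ignoring them
print(response.json())        # most modern APIs exchange JSON payloads
```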

HVAC - What is HVAC (heating, ventilation and air conditioning)?


In computing and especially in enterprise data centers, HVAC systems control the ambient environment (temperature, humidity, air flow and air filtering) and must be planned for and operated along with other data center components such as computing hardware, cabling, data storage, fire protection, and physical security.
MODULE 1 – Cloud Computing
Infrastructure as a service (IaaS)
A vendor provides clients pay-as-you-go access to storage, networking, servers, and other
computing resources in the cloud.
Key features:
Instead of purchasing hardware outright, users pay for IaaS on demand.
Infrastructure is scalable depending on processing and storage needs.
Saves enterprises the costs of buying and maintaining their own hardware.
Because data is in the cloud and can be replicated across sites, there need be no single point of failure.
Enables the virtualization of administrative tasks, freeing up time for other work.
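To illustrate the on-demand, pay-as-you-go character of IaaS described above, here is a hedged sketch of provisioning a virtual machine on AWS EC2 with the boto3 library; it assumes configured AWS credentials, and the AMI ID below is a placeholder, not a real image.

```python
# A sketch of on-demand IaaS provisioning: renting a virtual machine from
# AWS EC2 via boto3. Assumes configured credentials; the AMI ID is fake.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t2.micro",          # pay-as-you-go instance size
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)   # metered billing starts with the VM
```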
MODULE 1 – Cloud Computing
Platform as a service (PaaS)
A service provider offers access to a cloud-based environment in which users can build and
deliver applications. The provider supplies underlying infrastructure.
Key features:
PaaS provides a platform with tools to test, develop, and host applications in the same
environment.
Enables organizations to focus on development without having to worry about underlying
infrastructure.
Providers manage security, operating systems, server software and backups.
Facilitates collaborative work even if teams work remotely.
MODULE 1 – Cloud Computing
Software as a service (SaaS)
A service provider delivers software and applications through the internet. Users subscribe to
the software and access it via the web or vendor APIs.
Key features:
SaaS vendors provide users with software and applications via a subscription model.
Users do not have to manage, install or upgrade software; SaaS providers manage this.
Data is stored in the cloud, so local equipment failure does not result in loss of data.
Use of resources can be scaled depending on service needs.
Applications are accessible from almost any internet-connected device, from virtually
anywhere in the world.
MODULE 1 – Cloud Computing
Characteristics of Cloud Computing
Cloud computing builds on so many older concepts in computer technology that it can be hard
for people newly introduced to the concept to grasp that it represents a paradigm shift in
computing.
Paradigm shift
When you choose a cloud service provider, you are renting or leasing part of an enormous
infrastructure of datacenters, computers, storage, and networking capacity.
Many of these datacenters are multi-million-dollar investments by the companies that run
them. Most of the large cloud computing service providers have multiple datacenters located
all over the world.
In total, Google operates or is developing nearly 30 data centers around the world. These data centers support Google’s current and planned 44 cloud regions – physical locations where it clusters data centers – which are designed to be highly available, fault-tolerant, and concurrently maintainable.
MODULE 1 – Cloud Computing
Datacenters have been sited to do the following:
Have access to low-cost power
Leverage renewable power sources
Be near abundant water
Be sited where high-speed network backbone connections can be made
Keep land costs modest and occupation unobtrusive
Obtain tax breaks
Optimize the overall system latency
These characteristics make cloud computing networks highly efficient and capture enough
margin to make utility computing profitable
MODULE 1 – Cloud Computing
Cloud vs Datacenter
Cloud is a virtual resource that helps businesses to store, organize, and operate data efficiently.
Data Center is a physical resource that helps businesses to store, organize, and operate data
efficiently.
A cloud storage service is a version of a data center that is not located on your company’s
physical premises.
It enables access to your business data via the internet.
According to the research firm IDC, the following areas were the top five cloud applications in
use in 2010:
Collaboration applications [Slack is an online collaborative tool for teams to come together and
streamline communication.]
Web applications/Web serving [Amazon, eBay, banking websites, and Facebook are some of the
popular examples of web-based applications. ] 
MODULE 1 – Cloud Computing
Cloud backup [a leading cloud backup service is IDrive]
Business applications [Project Management – Asana (manage workflows, etc.)]
Personal productivity applications [Hive – Project Management; Calendly – Scheduling
meetings]
Benefits of cloud computing
“The NIST Definition of Cloud Computing” by Peter Mell and Tim Grance also assigns five essential characteristics that cloud computing systems must offer.
On-demand self-service: A client can provision computer resources without the need for
interaction with cloud service provider personnel.
Broad network access: Access to resources in the cloud is available over the network using
standard methods in a manner that provides platform-independent access to clients of all
types.
MODULE 1 – Cloud Computing
This includes a mixture of heterogeneous operating systems, and thick and thin platforms such as laptops, mobile phones, and PDAs.
Resource pooling: A cloud service provider creates resources that are pooled together in a
system that supports multi-tenant usage.
Rapid elasticity: Resources can be rapidly and elastically provisioned.
The system can add resources by either scaling up systems (more powerful computers) or
scaling out systems (more computers of the same kind), and scaling may be automatic or
manual.
From the standpoint of the client, cloud computing resources should look limitless and can be
purchased at any time and in any quantity.
MODULE 1 – Cloud Computing
Measured service: The use of cloud system resources is measured, audited, and reported to
the customer based on a metered system.
A client can be charged based on a known metric such as amount of storage used, number of transactions, network I/O (Input/Output) or bandwidth, amount of processing power used, and so forth.
A client is charged based on the level of services provided.
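A toy calculation shows how metered charges compose; all rates below are invented for illustration and do not reflect any provider's actual pricing.

```python
# A toy metered-billing calculation; rates are hypothetical.
STORAGE_RATE_PER_GB = 0.023   # per GB-month (invented)
EGRESS_RATE_PER_GB  = 0.090   # per GB transferred out (invented)
COMPUTE_RATE_PER_HR = 0.012   # per instance-hour (invented)

def monthly_bill(storage_gb, egress_gb, instance_hours):
    # Each metered resource contributes independently to the bill.
    return (storage_gb * STORAGE_RATE_PER_GB
            + egress_gb * EGRESS_RATE_PER_GB
            + instance_hours * COMPUTE_RATE_PER_HR)

# e.g., 500 GB stored, 120 GB egress, one small VM running all month
print(f"${monthly_bill(500, 120, 720):.2f}")   # -> $30.94
```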
Additional advantages:
Lower costs: Because cloud networks operate at higher efficiencies and with greater utilization, significant cost reductions are often encountered.
Ease of utilization: Depending upon the type of service being offered, you may find that
you do not require hardware or software licenses to implement your service.
MODULE 1 – Cloud Computing
Quality of Service: The Quality of Service (QoS) is something that you can obtain under
contract from your vendor.
Reliability: The scale of cloud computing networks and their ability to provide load balancing
and failover makes them highly reliable, often much more reliable than what you
can achieve in a single organization.
Outsourced IT management: A cloud computing deployment lets someone else manage
your computing infrastructure while you manage your business. In most instances, you
achieve considerable reductions in IT staffing costs.
Simplified maintenance and upgrade: Because the system is centralized, you can easily apply
patches and upgrades. This means your users always have access to the latest software
versions.
MODULE 1 – Cloud Computing
Low Barrier to Entry: In particular, upfront capital expenditures are dramatically
reduced. In cloud computing, anyone can be a giant at any time.
Disadvantages of cloud computing
All cloud computing applications suffer from the inherent latency that is intrinsic in their WAN
connectivity.
While cloud computing applications excel at large-scale processing tasks, if your application
needs large amounts of data transfer, cloud computing may not be the best model for you.
Additionally, cloud computing is a stateless system, as is the Internet in general. In order for
communication to survive on a distributed system, it is necessarily unidirectional in nature.
All the requests you use in HTTP (PUTs, GETs, and so on) are requests to a service provider. The service provider then sends a response.
Although it may seem that you are carrying on a conversation between client and provider,
there is an architectural disconnect between the two.
MODULE 1 – Cloud Computing
That lack of state allows messages to travel over different routes and for data to arrive out of
sequence, and many other characteristics allow the communication to succeed even when the
medium is faulty.
Therefore, to impose transactional coherency upon the system, additional overhead in the
form of service brokers, transaction managers, and other middleware must be added to the
system.
This can introduce a very large performance hit into some applications.
If you had to pick a single area of concern in cloud computing, that area would undoubtedly be
privacy and security.
When your data travels over and rests on systems that are no longer under your control, you
have increased risk due to the interception and malfeasance of others.
You can’t count on a cloud provider maintaining your privacy in the face of government actions.
MODULE 1 – Cloud Computing
SYSTEM MODELS FOR DISTRIBUTED AND CLOUD COMPUTING
Distributed and cloud computing systems are built over a large number of autonomous
computer nodes.
These node machines are interconnected by SANs, LANs, or WANs in a hierarchical manner.
With today’s networking technology, a few LAN switches can easily connect hundreds of
machines as a working cluster.
A WAN can connect many local clusters to form a very large cluster of clusters.
In this sense, one can build a massive system with millions of computers connected to edge
networks.
In Table 1.2, massive systems are classified into four groups: clusters, P2P networks,
computing grids, and Internet clouds over huge data centers
MODULE 1 – Cloud Computing
A peer-to-peer overlay network is a computer network built on top of an existing network,
usually the Internet.
Peer-to-peer overlays are commonly used for file sharing and real-time data streaming.
A computational grid is a loose network of computers linked to perform grid computing. In a
computational grid, a large computational task is divided up among individual machines,
which run calculations in parallel and then return results to the original computer.
These individual machines are nodes in a network, which may span multiple administrative
domains and may be geographically distant.
Each of the nodes may be thought of as a discrete system that can perform work and has
access to a network.
Computational grids are often more cost-effective than supercomputers of equal computing
power.
MODULE 1 – Cloud Computing
The distinction between grid computing and cluster computing is that grid computing is a heterogeneous network whose devices have diverse hardware components and diverse operating systems connected in a grid, while cluster computing is a homogeneous network whose devices have similar hardware parts and a similar OS connected in a cluster.

Clusters of Cooperative Computers


A computing cluster consists of interconnected stand-alone computers which work
cooperatively as a single integrated computing resource.
In the past, clustered computer systems have demonstrated impressive results in handling
heavy workloads with large data sets.
MODULE 1 – Cloud Computing
Figure 1.15 shows the architecture of a typical server cluster built around a low-latency, high-bandwidth interconnection network.
This network can be as simple as a SAN (e.g., Myrinet) or a LAN (e.g., Ethernet).
To build a larger cluster with more nodes, the interconnection network can be built with multiple
levels of Gigabit Ethernet, Myrinet, or InfiniBand switches.
Through hierarchical construction using a SAN, LAN, or WAN, one can build scalable clusters with
an increasing number of nodes.
The cluster is connected to the Internet via a virtual private network (VPN) gateway.
Network gateways are designed to translate traffic between different protocols.
These translations enable networks using different protocols or data formats to be linked together
with the network gateway performing in-line translation.
The gateway IP address locates the cluster.
The system image of a computer is decided by the way the OS manages the shared cluster
resources.
Most clusters have loosely coupled node computers.
MODULE 1 – Cloud Computing
A loosely coupled multiprocessor system is a type of multiprocessing where the individual
processors are configured with their own memory and are capable of executing user and
operating system instructions independent of each other.
This type of architecture paves the way for parallel processing.
All resources of a server node are managed by their own OS.
Single-System Image
Greg Pfister [38] has indicated that an ideal cluster should merge multiple system images into
a single-system image (SSI).
Cluster designers desire a cluster operating system or some middleware to support SSI at
various levels, including the sharing of CPUs, memory, and I/O across all cluster nodes.
An SSI is an illusion created by software or hardware that presents a collection of resources as
one integrated, powerful resource.
SSI makes the cluster appear like a single machine to the user.
MODULE 1 – Cloud Computing
An SSI presents users with a globalized view of all the resources in the cluster, irrespective of
the nodes to which they are physically connected, while hiding the fact that they are
associated with different nodes.
SSI also ensures multiprocessing and even load balancing among the systems.
Hardware, Software, and Middleware Support
Clusters exploring massive parallelism are commonly known as MPPs.
Almost all HPC clusters in the Top 500 list are also MPPs.
The building blocks are computer nodes (PCs, workstations, servers, or SMP), special
communication software such as PVM or MPI, and a network interface card in each computer
node.
Parallel Virtual Machine (PVM):
PVM is a software tool designed for heterogeneous distributed computing. It allows a heterogeneous network, made from a collection of distributed processors, to be viewed as a single parallel virtual machine.
MODULE 1 – Cloud Computing
It permits a collection of heterogeneous UNIX and/or Windows machines to be used as a single large parallel computer.
Parallel programs for it are written in languages such as C/C++ and FORTRAN.
Message Passing Interface (MPI):
MPI is a message-passing library standard.
It is a library, not a programming language; programs call its routines.
It is the method most commonly used for programming distributed-memory systems.
The standard defines a set of communication routines.
It is useful on distributed-memory machines.
It has bindings for programming languages such as C, C++, FORTRAN, and Java.
It supports four communication modes: standard, synchronous, buffered, and ready.
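A minimal message-passing sketch using the mpi4py Python bindings (assuming MPI and mpi4py are installed) shows the master/worker send/receive pattern; launch it with, for example, mpirun -n 4 python example.py.

```python
# A minimal MPI sketch via mpi4py: rank 0 acts as the master and sends a
# task to every other rank using standard-mode sends.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()            # this process's ID in the communicator

if rank == 0:
    for worker in range(1, comm.Get_size()):
        comm.send({"task": worker}, dest=worker)   # standard-mode send
else:
    task = comm.recv(source=0)    # blocking receive from the root process
    print(f"rank {rank} received {task}")
```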
MODULE 1 – Cloud Computing
Most clusters run under the Linux OS.
The computer nodes are interconnected by a high-bandwidth network (such as Gigabit
Ethernet, Myrinet, InfiniBand, etc.).
Special cluster middleware supports are needed to create SSI or high availability (HA).
Both sequential and parallel applications can run on the cluster, and special parallel
environments are needed to facilitate use of the cluster resources.

For example, distributed memory has multiple images. Users may want all distributed memory to be shared by all servers by forming distributed shared memory (DSM).
Many SSI features are expensive or difficult to achieve at various cluster operational levels.
Instead of achieving SSI, many clusters are loosely coupled machines.
Using virtualization, one can build many virtual clusters dynamically, upon user demand.
MODULE 1 – Cloud Computing
Major Cluster Design Issues
Unfortunately, a cluster-wide OS for complete resource sharing is not available yet.
Middleware or OS extensions were developed at the user space to achieve SSI at selected
functional levels.
Without this middleware, cluster nodes cannot work together effectively to achieve
cooperative computing.
The software environments and applications must rely on the middleware to achieve high
performance.
The cluster benefits come from scalable performance, efficient message passing, high system
availability, seamless fault tolerance, and cluster-wide job management.
MODULE 1 – Cloud Computing
LSF stands for Load Sharing Facility.
IBM Spectrum LSF (“LSF,” short for load sharing facility) is enterprise-class workload management software. LSF distributes work across existing heterogeneous IT resources to create a shared, scalable, and fault-tolerant infrastructure that delivers faster, more reliable workload performance and reduces cost.
LSF balances load and allocates resources, and provides access to those resources.
Computational Grids
Like an electric utility power grid, a computing grid offers an infrastructure that couples
computers,
software/middleware, special instruments, and people and sensors together.
The grid is often constructed across LAN, WAN, or Internet backbone networks at a regional,
national, or global scale.
Enterprises or organizations present grids as integrated computing resources.
MODULE 1 – Cloud Computing
They can also be viewed as virtual platforms to support virtual organizations.
The computers used in a grid are primarily workstations, servers, clusters, and
supercomputers. Personal computers, laptops, and PDAs can be used as access devices to a
grid system.
Grid computing is distinguished from conventional high-performance computing systems such
as cluster computing in that grid computers have each node set to perform a different
task/application.
Grid computers also tend to be more heterogeneous and geographically dispersed (thus not
physically coupled) than cluster computers.
Although a single grid can be dedicated to a particular application, commonly a grid is used for
a variety of purposes.
Grid computing offers a way to solve Grand Challenge problems such as protein folding,
financial modeling, earthquake simulation, and climate/weather modeling, and was integral in
enabling the Large Hadron Collider at CERN.
MODULE 1 – Cloud Computing
Grids offer a way of using the information technology resources optimally inside an
organization.
They also provide a means for offering information technology as a utility for commercial and
noncommercial clients, with those clients paying only for what they use, as with electricity or
water.
MODULE 1 – Cloud Computing
Broadband is a telecommunications technology that provides high-speed Internet access
using multiple channels of simultaneous digital signals.
Broadband may transmit on a wired connection using coaxial, fiber optic, twisted-pair cable,
or wirelessly on a cellular network.
Figure shows an example computational grid built over multiple resource sites owned by
different organizations.
The resource sites offer complementary computing resources, including workstations, large
servers, a mesh of processors, and Linux clusters to satisfy a chain of computational needs.
The grid is built across various IP broadband networks including LANs and WANs already used
by enterprises or organizations over the Internet.
The grid is presented to users as an integrated resource pool as shown in the upper half of the
figure.
MODULE 1 – Cloud Computing
Special instruments may be involved, such as the radio telescope used in the SETI@home search for life in the galaxy and the astrophysics@Swinburne search for pulsars.
At the server end, the grid is a network.
At the client end, we see wired or wireless terminal devices.
The grid integrates the computing, communication, contents, and transactions as rented
services.
Grid Families
Grid technology demands new distributed computing models, software/middleware support,
network protocols, and hardware infrastructures.
National grid projects are followed by industrial grid platform development by IBM, Microsoft,
Sun, HP, Dell, Cisco, EMC, Platform Computing, and others.
New grid service providers (GSPs) and new grid applications have emerged rapidly, similar to the
growth of Internet and web services in the past two decades.
In Table 1.4, grid systems are classified in essentially two categories: computational or data
grids and P2P grids.
MODULE 1 – Cloud Computing

The TeraGrid integrated high-performance computers, data resources and tools, and
experimental facilities. Resources included more than a petaflops of computing capability and
more than 30 petabytes of online and archival data storage, with rapid access and retrieval
over high-performance computer network connections. Researchers could also access more
than 100 discipline-specific databases.
MODULE 1 – Cloud Computing
petaflops
a unit of computing speed equal to one thousand million million (10^15) floating-point operations per second.
The new Intel® Core™ i9-7980XE Extreme Edition Processor runs at about 4.3 GHz (faster if overclocked) and thus should calculate to about 1.3 teraflops (1 teraflops = 10^12 floating-point operations per second).
Petabyte
An extremely large unit of digital data; one petabyte is equal to 1,000 terabytes.
1 TB equals 1,000 gigabytes.
One gigabyte (GB) is about 1 billion bytes (or 1,024 megabytes in binary terms).
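For clarity, the decimal (SI) ladder used above sits alongside the binary units that the 1,024 figure refers to:

```latex
1~\text{PB} = 10^{3}~\text{TB} = 10^{6}~\text{GB} = 10^{15}~\text{bytes} \quad \text{(decimal, SI)}
\qquad
1~\text{GiB} = 2^{30}~\text{bytes} = 1024~\text{MiB} \quad \text{(binary)}
```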
MODULE 1 – Cloud Computing
Peer-to-Peer Network Families
An example of a well-established distributed system is the client-server architecture. In this
scenario, client machines (PCs and workstations) are connected to a central server for
compute, e-mail, file access, and database applications.
The P2P architecture offers a distributed model of networked systems.
First, a P2P network is client-oriented instead of server-oriented.
P2P Systems
In a P2P system, every node acts as both a client and a server, providing part of the system resources.
Peer machines are simply client computers connected to the Internet.
All client machines act autonomously to join or leave the system freely.
This implies that no master-slave relationship exists among the peers.
MODULE 1 – Cloud Computing
No central coordination or central database is needed.
In other words, no peer machine has a global view of the entire P2P system.
The system is self-organizing with distributed control.
Figure 1.17 shows the architecture of a P2P network at two abstraction levels. Initially, the
peers are totally unrelated.
Each peer machine joins or leaves the P2P network voluntarily.
Only the participating peers form the physical network at any time.
Unlike the cluster or grid, a P2P network does not use a dedicated interconnection network.
The physical network is simply an ad hoc network formed at various Internet domains
randomly using the TCP/IP and NAI protocols.
Thus, the physical network varies in size and topology dynamically due to the free
membership in the P2P network.
MODULE 1 – Cloud Computing
Overlay Networks
Data items or files are distributed in the participating peers.
Based on communication or file-sharing needs, the peer IDs form an overlay network at the
logical level.
This overlay is a virtual network formed by mapping each physical machine with its ID,
logically, through a virtual mapping as shown in Figure.
When a new peer joins the system, its peer ID is added as a node in the overlay network.
When an existing peer leaves the system, its peer ID is removed from the overlay network
automatically.
Therefore, it is the P2P overlay network that characterizes the logical connectivity among the
peers.
MODULE 1 – Cloud Computing
There are two types of overlay networks: unstructured and structured. An unstructured overlay
network is characterized by a random graph.
There is no fixed route to send messages or files among the nodes.
Often, flooding is applied to send a query to all nodes in an unstructured overlay, thus resulting
in heavy network traffic and nondeterministic search results.
Structured overlay networks follow certain connectivity topology and rules for inserting and
removing nodes (peer IDs) from the overlay graph.
Routing mechanisms are developed to take advantage of the structured overlays.
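One common structured-overlay rule is consistent hashing, as in Chord-style DHTs: peers and keys are hashed onto a single identifier ring, and a key is stored on the first peer whose ID follows it clockwise. Here is a small sketch with illustrative peer names and a 16-bit ring.

```python
# A sketch of the structured-overlay idea behind Chord-style DHTs: peers
# and keys share one hashed identifier ring, so lookups are deterministic
# and need no central directory.
import hashlib
from bisect import bisect_right

def ring_id(name, bits=16):
    digest = hashlib.sha1(name.encode()).hexdigest()
    return int(digest, 16) % (2 ** bits)   # position on the identifier ring

peers = sorted(ring_id(p) for p in ["peerA", "peerB", "peerC", "peerD"])

def responsible_peer(key):
    # First peer ID after the key's position, wrapping around the ring.
    idx = bisect_right(peers, ring_id(key)) % len(peers)
    return peers[idx]

print(responsible_peer("song.mp3"))
```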
P2P Application Families
Based on application, P2P networks are classified into four groups, as shown in Table 1.5.
The first family is for distributed file sharing of digital contents (music, videos, etc.) on the P2P
network.
This includes many popular P2P networks such as Gnutella, Napster, and BitTorrent, among
others. Collaboration P2P networks include MSN or Skype chatting, instant messaging, and
collaborative design, among others.
MODULE 1 – Cloud Computing
The third family is for distributed P2P computing in specific applications.
For example, SETI@home provides 25 Tflops of distributed computing power, collectively, over
3 million Internet host machines.
Other P2P platforms include JXTA, .NET, and FightingAID@home.
Is .NET a P2P platform?
The .NET Framework and Visual Studio .NET make it easier for developers to write peer applications by offering a rich UI client programming model, support for scalable Web services, and easy-to-use networking classes, all within the same powerful development environment.
MODULE 1 – Cloud Computing

Particularly, as P2P systems require the active collaboration of the participants beyond their
selfish interest, many systems include methods designed to lure the most resourceful users
into broader participation, to provide an overall better service.
The methods devised to attract the contribution of users are unfortunately vulnerable to a
particular class of attacks: collusion.
Collusion is broadly defined as any malicious coordinated behavior of a group of users aimed at gaining undeserved benefits, or at damaging (some) well-behaved users.
Open networking describes a network that uses open standards and commodity hardware.
In other words, open networking is the ability of a computer system to be open in terms of component compatibility (in both hardware and software), expandability, and extendibility.
MODULE 1 – Cloud Computing
P2P Computing Challenges
P2P computing faces three types of heterogeneity problems in hardware, software, and
network requirements.
There are too many hardware models and architectures to select from; incompatibility exists
between software and the OS; and different network connections and protocols make it too
complex to apply in real applications.
P2P performance is affected by routing efficiency and self-organization by participating peers.
Fault tolerance, failure management, and load balancing are other important issues in using
overlay networks.
Lack of trust among peers poses another problem. Peers are strangers to one another.
Security, privacy, and copyright violations are major worries by those in the industry in terms
of applying P2P technology in business applications.
Data is easily lost in failed nodes; replicating data across multiple peers mitigates this.
MODULE 1 – Cloud Computing
Cloud Computing over the Internet
Gordon Bell, Jim Gray, and Alex Szalay [5] have advocated: “Computational science is changing to be data-intensive. In the future, working with large data sets will typically mean sending the computations (programs) to the data, rather than copying the data to the workstations.”
This reflects the trend in IT of moving computing and data from desktops to large data centers,
where there is on-demand provision of software, hardware, and data as a service.
This data explosion has promoted the idea of cloud computing.
IBM, a major player in cloud computing, has defined it as follows: “A cloud is a pool of
virtualized computer resources. A cloud can host a variety of different workloads, including
batch-style backend jobs and interactive and user-facing applications.”

Based on this definition, a cloud allows workloads to be deployed and scaled out quickly through rapid provisioning of virtual or physical machines.
MODULE 1 – Cloud Computing
The cloud supports redundant, self-recovering, highly scalable programming models that allow
workloads to recover from many unavoidable hardware/software failures.
Finally, the cloud system should be able to monitor resource use in real time to enable
rebalancing of allocations when needed.
Internet Clouds
Cloud computing applies a virtualized platform with elastic resources on demand by
provisioning hardware, software, and data sets dynamically (see Figure 1.18).
The idea is to move desktop computing to a service-oriented platform using server clusters
and huge databases at data centers.
Cloud computing leverages its low cost and simplicity to benefit both users and providers.
Machine virtualization has enabled such cost-effectiveness.
Cloud computing intends to satisfy many user applications simultaneously.
The cloud ecosystem must be designed to be secure, trustworthy, and dependable.
MODULE 1 – Cloud Computing
Some computer users think of the cloud as a centralized resource pool.
Others consider the cloud to be a server cluster which practices distributed computing over
all the servers used.
MODULE 1 – Cloud Computing
The Cloud Landscape
Cloud computing as an on-demand computing paradigm resolves or relieves us from these
problems. Figure 1.19 depicts the cloud landscape and major cloud players, based on three
cloud service models.
Infrastructure as a Service (IaaS) This model puts together infrastructures demanded by users
—namely servers, storage, networks, and the data center fabric.
Platform as a Service (PaaS) This model enables the user to deploy user-built applications
onto a virtualized cloud platform.
PaaS includes middleware, databases, development tools, and some runtime support such as
Web 2.0 and Java.
The platform includes both hardware and software integrated with specific programming
interfaces.
The provider supplies the API and software tools (e.g., Java, Python, Web 2.0, .NET).
MODULE 1 – Cloud Computing
Software as a Service (SaaS) This refers to browser-initiated application software over thousands of paid
cloud customers.
The SaaS model applies to business processes, industry applications, consumer relationship management
(CRM), enterprise resources planning (ERP), human resources (HR), and collaborative applications.
Internet clouds offer four deployment modes: private, public, managed, and hybrid.
The following list highlights eight reasons to adopt the cloud for upgraded Internet applications and web services:
1. Desired location in areas with protected space and higher energy efficiency
2. Sharing of peak-load capacity among a large pool of users, improving overall utilization
3. Separation of infrastructure maintenance duties from domain-specific application development
4. Significant reduction in cloud computing cost, compared with traditional computing paradigms
5. Cloud computing programming and application development
6. Service and data discovery and content/service distribution
7. Privacy, security, copyright, and reliability issues
8. Service agreements, business models, and pricing policies
MODULE 1 – Cloud Computing
Service-oriented architecture (SOA) is a software development model that allows services to
communicate across different platforms and languages to form applications.
In SOA, a service is a self-contained unit of software designed to complete a specific task.
Service-oriented architecture allows various services to communicate using a loose coupling
system to either pass data or coordinate an activity.
Loose coupling refers to a client of a service remaining independent of the service it requires.
In addition, the client -- which can also be a service -- can communicate with other services,
even when they are not related.
SOA benefits organizations by creating interoperability between apps and services. SOA will
also ensure existing applications can be easily scaled, while simultaneously reducing costs
related to the development of business service solutions.
MODULE 1 – Cloud Computing
The defining concepts of SOA are:
The business value is more important than the technical strategy.
The strategic goals are more important than benefits related to specific projects.
Basic interoperability is more important than custom integration.
Shared services are more important than implementations with a specific purpose.
Continued improvement is more important than immediate perfection.
In this architectural style, business processes are implemented as software services, accessed
through a set of strictly defined application program interfaces (APIs) and bound into
applications through dynamic service orchestration.
As application integration and component-sharing operations became linked to pools of
hosting resources and distributed databases, enterprises needed a way to adapt their
procedure-based development model to the use of remote, distributed components. 
MODULE 1 – Cloud Computing
The general concept that emerged was the remote procedure call (RPC), which allowed one software process to invoke another as though it were local, even though the other process might be resident in a different application, computer, or data center.
The concept of services introduced by SOA remains an integral part of modern cloud computing and virtualization.
Implementation
SOA is independent of vendors and technologies. This means a wide variety of products can be
used to implement the architecture. The decision of what to use depends on the end goal of
the system.
SOA is typically implemented with web services standards such as the Simple Object Access Protocol (SOAP) and the Web Services Description Language (WSDL). Other implementation options are listed after the following overview of web services.
MODULE 1 – Cloud Computing
A web service is a software system that supports interoperable machine-to-machine
interaction over a network.
It has an interface described in a machine-processable format (specifically, the Web Services Description Language, or WSDL).
Web services fulfill a specific task or a set of tasks.
A web service is described using a standard, formal XML notation, called its service description, that provides all of the details necessary to interact with the service, including message formats (that detail the operations), transport protocols, and location.
The nature of the interface hides the implementation details of the service so that it can be
used independently of the hardware or software platform on which it is implemented and
independently of the programming language in which it is written.
This independence allows and encourages web-service-based applications to be loosely coupled, component-oriented, cross-technology implementations.
Web services can be used alone or with other web services to carry out a complex aggregation or a business transaction.
MODULE 1 – Cloud Computing
Even though SOAP has been a preferred option for many companies, for others it is sometimes
too complex and not flexible enough. It is for this reason that many are starting to use services
based on REST to manage large quantities of data.
Other available implementation options include:
Windows Communication Foundation (WCF);
gRPC; and
messaging -- such as with Java Message Service (JMS), ActiveMQ and RabbitMQ.
The WS model of SOA uses WSDL to connect interfaces with services and SOAP to define procedure or component APIs. WS principles link applications via an enterprise service bus (ESB), which helps businesses integrate their applications, ensure efficiency, and improve data governance.
This approach never gained broad traction and is no longer favored by developers.
The REST model has instead gained attention. RESTful APIs offer low overhead and are easy to
understand.
MODULE 1 – Cloud Computing

SOA and microservices


REST and the internet supplanted SOA before it really gained significant traction, confining
SOA to legacy applications, often built around the ESB.
The broader notion of "services" was converted by RESTful thinking into "software processes"
instead of "business processes," and the software-centric view became known as
microservices -- the term in use today.
MODULE 1 – Cloud Computing
SOFTWARE ENVIRONMENTS FOR DISTRIBUTED SYSTEMS AND CLOUDS
Service-Oriented Architecture (SOA)
In grids/web services, Java, and CORBA, an entity is, respectively, a service, a Java object, and a
CORBA distributed object in a variety of languages.
These architectures build on the traditional seven Open Systems Interconnection (OSI) layers
that provide the base networking abstractions.
On top of this we have a base software environment, which would be .NET or Apache Axis for
web services, the Java Virtual Machine for Java, and a broker network for CORBA.
On top of this base environment one would build a higher level environment reflecting the
special features of the distributed computing environment.
This starts with entity interfaces and inter-entity communication, which rebuild the top four OSI
layers but at the entity and not the bit level.
Figure 1.20 shows the layered architecture for distributed entities used in web services and grid
systems.
MODULE 1 – Cloud Computing
In the case of fault tolerance, the features in the Web Services Reliable Messaging (WSRM)
framework mimic the OSI layer capability (as in TCP fault tolerance) modified to match the
different abstractions (such as messages versus packets, virtualized addressing) at the entity levels.
Security is a critical capability that either uses or reimplements the capabilities seen in concepts
such as Internet Protocol Security (IPsec) and secure sockets in the OSI layers.
Web Services Reliable Messaging (WSRM) is a specification that allows two systems to send
messages between each other reliably.
The aim of this is to ensure that messages are transferred properly from the sender to the
receiver. Reliable Messaging is a complex thing to define, but you can think about WSRM as
providing a similar level of guarantee for XML messaging that a JMS system provides in the Java
world.
There is one key difference though - JMS is a standard API or programming model, with lots of
different implementations and wire-protocols underneath it.
WSRM is the opposite - a standard wire-protocol with no API or programming model of its own.
Instead, it composes with existing SOAP-based systems.
MODULE 1 – Cloud Computing
Entity communication is supported by higher level services for registries, metadata, and
management of the entities.
Here, one might get several models with, for example, Jini and the Java Naming and Directory Interface (JNDI) illustrating different approaches within the Java distributed object model.
The CORBA Trading Service, UDDI (Universal Description, Discovery, and Integration), LDAP
(Lightweight Directory
Access Protocol), and ebXML (Electronic Business using eXtensible Markup Language) are
other examples of discovery and information services.
In the earlier years, CORBA and Java approaches were used in distributed systems, rather than today’s SOAP, XML, or REST (Representational State Transfer).
MODULE 1 – Cloud Computing
Web Services and Tools
Loose coupling and support of heterogeneous implementations make services more attractive
than distributed objects.
Figure 1.20 corresponds to two choices of service architecture: web services or REST systems.
Both web services and REST systems have very distinct approaches to building reliable
interoperable systems.
In web services, one aims to fully specify all aspects of the service and its environment.
This specification is carried with communicated messages using Simple Object Access Protocol
(SOAP).
The hosting environment then becomes a universal distributed operating system with fully
distributed capability carried by SOAP messages.
This approach has mixed success as it has been hard to agree on key parts of the protocol and
even harder to efficiently implement the protocol by software such as Apache Axis.
MODULE 1 – Cloud Computing
In the REST approach, one adopts simplicity as the universal principle and delegates most of
the difficult problems to application (implementation-specific) software.
In a web services language, REST has minimal information in the header, and the message
body (that is opaque to generic message processing) carries all the needed information.
REST architectures are clearly more appropriate for rapid technology environments.
However, the ideas in web services are important and probably will be required in mature
systems at a different level in the stack (as part of the application).
Note that REST can use XML schemas but not those that are part of SOAP; “XML over HTTP” is
a popular design choice in this regard.
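A minimal RESTful service sketch (assuming the Flask library is installed) makes the contrast concrete: the HTTP verb and URL carry the intent, and the body is plain JSON, with no SOAP envelope or WSDL contract. The route and payload here are illustrative.

```python
# A minimal RESTful service sketch: a GET on a resource URL returns a JSON
# representation of that resource. Route and data are illustrative only.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/orders/<int:order_id>", methods=["GET"])
def get_order(order_id):
    # A real service would consult a data store here.
    return jsonify({"id": order_id, "status": "shipped"})

if __name__ == "__main__":
    app.run(port=8080)   # e.g., GET http://localhost:8080/orders/42
```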
MODULE 1 – Cloud Computing
The Evolution of SOA
As shown in Figure 1.21, service-oriented architecture (SOA) has evolved over the years.
SOA applies to building grids, clouds, grids of clouds, clouds of grids, clouds of clouds (also
known as interclouds), and systems of systems in general.
A large number of sensors provide data-collection services, denoted in the figure as SS
(sensor service).
A sensor can be a ZigBee device, a Bluetooth device, a WiFi access point, a personal computer, a GPS device, or a wireless phone, among other things.
Raw data is collected by sensor services.
All the SS devices interact with large or small computers, many forms of grids, databases, the
compute cloud, the storage cloud, the filter cloud, the discovery cloud, and so on.
Filter services (fs in the figure) are used to eliminate unwanted raw data, in order to respond to specific requests from the web, the grid, or web services.
A collection of filter services forms a filter cloud.
MODULE 1 – Cloud Computing
SOA aims to search for, or sort out, the useful data from the massive amounts of raw data items.
Processing this data will generate useful information, and subsequently, the knowledge for our daily
use.
In fact, wisdom or intelligence is sorted out of large knowledge bases.
Finally, we make intelligent decisions based on both biological and machine wisdom.
Grids versus Clouds
In general, a grid system applies static resources, while a cloud emphasizes elastic resources.
For some researchers, the differences between grids and clouds lie only in dynamic resource allocation based on virtualization and autonomic computing.
One can build a grid out of multiple clouds.
This type of grid can do a better job than a pure cloud, because it can explicitly support
negotiated resource allocation.
Thus one may end up building with a system of systems: such as a cloud of clouds, a grid of clouds, or
a cloud of grids, or inter-clouds as a basic SOA architecture.
MODULE 1 – Cloud Computing
PERFORMANCE, SECURITY, AND ENERGY EFFICIENCY
Performance metrics are needed to measure various distributed systems.
Performance Metrics
In a distributed system, performance is attributed to a large number of factors.
System throughput is often measured in MIPS, Tflops (tera floating-point operations per second), or TPS (transactions per second).
The number of instructions to be executed over the peak period is generally given in million instructions per second (MIPS), which identifies the processing power a CPU requires to handle the message rate.
Throughput is the number of instructions that complete in each clock cycle.
For a non-superscalar machine, the throughput will be one instruction per clock cycle.
In a RISC architecture like MIPS each instruction takes one cycle to execute, though memory
references will usually take many cycles to read or write memory.
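These metrics relate through a standard identity; for example, an idealized 2 GHz processor completing one instruction per cycle (CPI = 1) delivers 2,000 MIPS:

```latex
\text{MIPS} = \frac{\text{instruction count}}{\text{execution time} \times 10^{6}}
            = \frac{\text{clock rate (Hz)}}{\text{CPI} \times 10^{6}}
            = \frac{2 \times 10^{9}}{1 \times 10^{6}} = 2000
```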
MODULE 1 – Cloud Computing
Dimensions of Scalability
Users want to have a distributed system that can achieve scalable performance.
Size scalability This refers to achieving higher performance or more functionality by increasing
the machine size.
The word “size” refers to adding processors, cache, memory, storage, or I/O
channels.
The most obvious way to determine size scalability is to simply count the number of
processors installed.
Not all parallel computer or distributed architectures are equally size-scalable.
For example, the IBM S2 was scaled up to 512 processors in 1997. But in 2008, the
IBM BlueGene/L system scaled up to 65,000 processors.
MODULE 1 – Cloud Computing
Dimensions of Scalability
Software scalability This refers to upgrades in the OS or compilers, adding mathematical and
engineering libraries, porting new application software, and installing more user-friendly
programming environments.
Some software upgrades may not work with large system configurations.
Testing and fine-tuning of new software on larger systems is a nontrivial job.

Several modern supercomputers such as the Tianhe-2 use Slurm, which arbitrates contention
for resources across the system.
Slurm is open source, Linux-based, very scalable, and can manage thousands of nodes in a
computer cluster with a sustained throughput of over 100,000 jobs per hour.
MODULE 1 – Cloud Computing
Application scalability This refers to matching problem size scalability with machine size
scalability.
Problem size affects the size of the data set or the workload increase.
Instead of increasing machine size, users can enlarge the problem size to enhance system efficiency or cost-effectiveness.
Technology scalability This refers to a system that can adapt to changes in building technologies, such as the component and networking technologies discussed in Section 3.1.
When scaling a system design with new technology, one must consider three aspects: time, space, and heterogeneity.
(1) Time refers to generation scalability. When changing to new-generation processors, one must consider the impact on the motherboard, power supply, packaging and cooling, and so forth. Based on past experience, most systems upgrade their commodity processors every three to five years.
(2) Space is related to packaging and energy concerns. Technology scalability demands harmony and portability among suppliers.
(3) Heterogeneity refers to the use of hardware components or software packages from different vendors. Heterogeneity may limit scalability.
MODULE 1 – Cloud Computing
Scalability versus OS Image Count
In Figure 1.23, scalable performance is estimated against the multiplicity of OS images in
distributed systems deployed up to 2010.
Scalable performance implies that the system can achieve higher speed by adding more
processors or servers, enlarging the physical node’s memory size, extending the disk capacity,
or adding more I/O channels.
The OS image is counted by the number of independent OS images observed in a cluster, grid,
P2P network, or the cloud.
SMP and NUMA are included in the comparison.
An SMP (symmetric multiprocessor) server has a single system image, which could be a single
node in a large cluster.
By 2010 standards, the largest shared-memory SMP node was limited to a few hundred processors.
The scalability of SMP systems is constrained primarily by packaging and the system
interconnect used.
MODULE 1 – Cloud Computing
NUMA (nonuniform memory access) machines are often made out of SMP nodes with
distributed, shared memory.
A NUMA machine can run with multiple operating systems, and can scale to a few thousand
processors communicating with the MPI library.
For example, a NUMA machine may have 2,048 processors running 32 SMP operating systems,
resulting in 32 OS images in the 2,048-processor NUMA system.
The cluster nodes can be either SMP servers or high-end machines that are loosely coupled
together.
Therefore, clusters have much higher scalability than NUMA machines.
The number of OS images in a cluster is based on the cluster nodes concurrently in use.
The cloud could be a virtualized cluster.
As of 2010, the largest cloud was able to scale up to a few thousand VMs.
MODULE 1 – Cloud Computing
The grid node could be a server cluster, or a mainframe, or a supercomputer, or an MPP.
Therefore, the number of OS images in a large grid structure could be hundreds or thousands
fewer than the total number of processors in the grid.
A P2P network can easily scale to millions of independent peer nodes, essentially desktop
machines
MODULE 1 – Cloud Computing

At the most basic level, Amdahl's Law is a way of showing that unless a program (or part of a program) is 100% efficient at using multiple CPU cores, you will receive less and less of a benefit by adding more cores.
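The law's standard form makes this precise: if a fraction P of the work parallelizes perfectly over N cores, the speedup is bounded by the serial remainder, so even P = 0.9 caps the speedup at 10 no matter how many cores are added.

```latex
S(N) = \frac{1}{(1-P) + \frac{P}{N}},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{1-P}
```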
MODULE 1 – Cloud Computing
Network Threats and Data Integrity
Clusters, grids, P2P networks, and clouds demand security and copyright protection if they are
to be accepted in today’s digital society.
This section introduces system vulnerability, network threats, defense countermeasures, and
copyright protection in distributed or cloud computing systems.
Threats to Systems and Networks
Network viruses have threatened many users in widespread attacks.
These incidents have created a worm epidemic by pulling down many routers and servers, and
are responsible for the loss of billions of dollars in business, government, and services.
Figure 1.25 summarizes various attack types and their potential damage to users.
As the figure shows, information leaks lead to a loss of confidentiality.
MODULE 1 – Cloud Computing
Loss of data integrity may be caused by user alteration, Trojan horses, and service spoofing
attacks.
A denial of service (DoS) results in a loss of system operation and Internet connections.
Lack of authentication or authorization leads to attackers’ illegitimate use of computing
resources.
Open resources such as data centers, P2P networks, and grid and cloud infrastructures
could become the next targets.
Users need to protect clusters, grids, clouds, and P2P systems.
Otherwise, users should not use or trust them for outsourced work.
