
TYCS Cloud Computing Unit 1 Notes
Compiled by: Ms. Nivedita Tiwari

Introduction to Cloud Computing


What is Cloud Computing technology?
 Cloud computing is a technology that puts your entire computing infrastructure, both hardware and software applications, online. It uses the internet and remote central servers to maintain data and applications.
 Gmail, Yahoo Mail, Facebook, Hotmail, Orkut, etc. are among the most basic and widely used examples of cloud computing. One does not need one's own PC or laptop to check stored mail/data/photos; any computer with an internet connection will do, since the data is stored with the mail service provider on a remote cloud.
 The technology, in essence, is a geographical shift in the location of our data from personal computers to a centralized server, or 'cloud'.
 Typically, cloud services charge their customers on a usage basis; hence the model is also called Software as a Service (SaaS). It aims to provide infrastructure and resources online in order to serve its clients, offering dynamism, abstraction and resource sharing.

Varieties of Cloud Computing


Cloud Computing is classified under various heads. On the basis of type, usage and location, it is classified under the following heads:
• Public Cloud- When a cloud is available to the general public on a pay-per-use basis, it is called a 'Public Cloud'. The customer has no visibility into the location of the cloud computing infrastructure. It is based on the standard cloud computing model. Examples of public clouds are Amazon EC2, the Windows Azure service platform and IBM's Blue Cloud.
• Private Cloud- The internal data centers of business organizations which are not made available to the general public are termed private clouds. As the name suggests, a private cloud is dedicated to the customer itself. Private clouds are more secure than public clouds and use virtualization technology. A private cloud is hosted on the company's own servers. Examples of private cloud technologies are Eucalyptus and VMware.
• Hybrid Cloud- A combination of private and public cloud is called a hybrid cloud. Companies use their own infrastructure for normal usage and hire the cloud during events of heavy network traffic or high data load.

Cloud Computing in Detail :

 Cloud computing is a process that entails accessing services, including storage, applications and servers, through the Internet, making use of another company's remote services for a fee. This enables a company to store and access data or programs virtually, i.e. in a cloud, rather than on local hard drives or servers.

 Cloud computing has its roots as far back as the 1950s, when mainframe computers came into existence. At that time, several users accessed the central computer via dumb terminals, whose only task was to enable users to access the mainframe. The prohibitive cost of mainframe devices made it economically infeasible for organizations to buy one for every user. That was when the idea of providing shared access to a single computer occurred to companies as a way to save costs.

 In the 1970s, IBM came out with an operating system (OS) named VM, which allowed more than one OS to operate simultaneously. A guest operating system could run on each VM, with its own memory and other infrastructure, making it possible to share these resources. This caused the concept of virtualization in computing to gain popularity.


 The 1990s witnessed telecom operators begin offering virtualized private network connections, whose quality of service was as good as that of point-to-point (dedicated) services, at a lower cost. This paved the way for telecom companies to offer many users shared access to a single physical infrastructure.

 The other catalysts were grid computing, which allowed major issues to be addressed via parallel computing; utility computing, which facilitated computing resources being offered as a metered service; and SaaS, which allowed network-based subscriptions to applications. Cloud computing, therefore, owes its emergence to all these factors.

 The three prominent types of cloud computing for businesses are Software-as-a-Service (SaaS), which requires a company to subscribe to it and access services over the Internet; Infrastructure-as-a-Service (IaaS), a solution where large cloud computing companies deliver virtual infrastructure; and Platform-as-a-Service (PaaS), which gives the company the freedom to build its own custom applications to be used by its entire workforce.

 Clouds are of four types: public, private, community, and hybrid. Through a public cloud, a provider can offer services, including storage and applications, to anybody via the Internet. These can be provided free of charge or on a pay-per-usage basis.

 Public cloud services are easier to install and less expensive, as costs for applications, hardware and bandwidth are borne by the provider. They are scalable, and users pay only for the services they use.

 A private cloud is also referred to as an internal cloud or corporate cloud. It is called so because it offers a proprietary computing architecture through which hosted services can be provided to a restricted number of users protected by a firewall. A private cloud is used by businesses that want to wield more control over their data.

 As far as the community cloud is concerned, it is a resource shared by more than one organization whose cloud needs are similar.

 A combination of two or more clouds is a hybrid cloud. Here, the clouds used are a
combination of private, public, or community.

 Cloud computing is now being adopted by mobile phone users too, although there are limitations, such as storage capacity, battery life and restricted processing power.

 Some of the most popular cloud applications globally are Amazon Web Services
(AWS), Google Compute Engine, Rackspace, Salesforce.com, IBM Cloud Managed
Services, among others. Cloud services have made it possible for small and medium
businesses (SMBs) to be on par with large companies.

 Mobile cloud computing is being harnessed through a new infrastructure that brings together mobile devices and cloud computing. This infrastructure allows the cloud to execute massive tasks and store huge amounts of data, as the processing and storage of data do not take place within the mobile devices themselves but beyond them. Mobile cloud computing is getting a fillip as customers want to use their companies' applications and websites wherever they are.

 The emergence of 4G and Worldwide Interoperability for Microwave Access (WiMAX), among others, is also scaling up the connectivity of mobile devices. In addition, new technologies for mobile, such as CSS3, Hypertext Markup Language (HTML5), hypervisors for mobile devices, Web 4.0, etc., will only power the adoption of mobile cloud computing.

 The main benefit companies get from using cloud computing is that they need not buy any infrastructure, which lowers their maintenance costs. They can drop the services they use once their business demands have been met. It also gives firms the comfort of having huge resources at their beck and call if they suddenly acquire a major project.

 On the other hand, transferring their data to the cloud makes businesses share their data security responsibility with the cloud service provider. This means that the consumer of cloud services places a lot of trust in the provider of those services. Cloud consumers' control over the services they use is also less than over on-premise IT resources.

Vision of Cloud Computing:

Here, the vision of cloud computing means the idea behind cloud computing.

1. Cloud computing provides the facility to provision virtual hardware, runtime environments and services to anyone who is willing to pay for them.
2. All of these can be used for as long as the user needs them, with no requirement for an upfront commitment.
3. The whole computing system is transformed into a collection of utilities, which can be provisioned and composed together to deploy systems in hours rather than days, with no maintenance costs.
4. The long-term vision of cloud computing is that IT services are traded as utilities in an open market, without technological and legal barriers.
5. In the near future we can imagine that it will be possible to find the solution that matches our requirements by simply entering our request in a global digital market that trades cloud computing services.
6. The existence of such a market will enable the automation of the discovery process and its integration into existing software systems.
7. A global platform for trading cloud services will also help service providers potentially increase their revenue.
8. A cloud provider can also become a consumer of a competitor's service in order to fulfill its promises to customers.

What is cloud computing?


 Cloud computing has two meanings. The most common refers to running workloads
remotely over the internet in a commercial provider’s data center, also known as
the “public cloud” model. Popular public cloud offerings—such as Amazon Web
Services (AWS), Salesforce’s CRM system, and Microsoft Azure—all exemplify this
familiar notion of cloud computing. Today, most businesses take a multicloud
approach, which simply means they use more than one public cloud service.
 The second meaning of cloud computing describes how it works: a virtualized pool
of resources, from raw compute power to application functionality, available on
demand.
 When customers procure cloud services, the provider fulfills those requests using
advanced automation rather than manual provisioning. The key advantage is
agility: the ability to apply abstracted compute, storage, and network resources to
workloads as needed and tap into an abundance of prebuilt services.
Cloud Computing Reference Model:

 The reference model for cloud computing is an abstract model that characterizes
and standardizes a cloud computing environment by partitioning it into abstraction
layers and cross-layer functions.


Cloud computing reference model

If we look into the reference model shown in the image above, we find the following classification of cloud computing services:

1. Infrastructure-as-a-Service (IaaS),
2. Platform-as-a-Service (PaaS),
3. Software-as-a-Service (SaaS), and
4. Web 2.0.

1. Infrastructure as a service (IaaS) is a cloud computing offering in which a vendor provides users access to computing resources such as servers, storage and networking (a minimal provisioning sketch follows this list).
2. Platform as a service (PaaS) is a cloud computing offering that provides users with
a cloud environment in which they can develop, manage and deliver applications.

3. Software as a service (SaaS) is a cloud computing offering that provides users with
access to a vendor’s cloud-based software. Users do not install applications on their local
devices. Instead, the applications reside on a remote cloud network accessed through the
web or an API. Through the application, users can store and analyze data and collaborate
on projects.
4. Web 2.0 is the term used to describe a variety of web sites and applications that allow
anyone to create and share online information or material they have created. A key
element of the technology is that it allows people to create, share, collaborate &
communicate.
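
As a brief illustration of the IaaS model described in item 1 above, the sketch below provisions (and then releases) a single virtual server through a public cloud API. This is a minimal sketch only, assuming the AWS boto3 SDK is installed and credentials are already configured; the AMI ID, region and instance type are placeholder values, not recommendations.

```python
# Hypothetical IaaS provisioning sketch using the AWS boto3 SDK (assumed installed
# and configured). The AMI ID, region and instance type below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t2.micro",           # small, pay-per-use instance size
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# The same on-demand model lets us release the resource when it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The point of the sketch is the self-service, pay-per-use workflow: the consumer requests infrastructure through an API call and releases it just as easily, without owning any hardware.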

CHARACTERISTICS OF CLOUD COMPUTING AS PER NIST:

NIST stands for the National Institute of Standards and Technology.


According to NIST there are five essential characteristics of cloud computing:


1. On Demand Self Service


2. Broad network access
3. Resource pooling
4. Rapid elasticity
5. Measured service

1. On Demand Self Service


Users get on-demand self-service: they can obtain computing services like email, applications, etc. as needed, without requiring human interaction with each service provider.

Some of the cloud service providers are- Amazon Web Service, Microsoft, IBM,
Salesforce.com

2. Broad network access


Cloud services are available over the network and can be accessed through different clients
such as mobile, laptops etc.

3. Resource pooling
The same resources can be used by more than one customer at the same time.

For example, storage and network bandwidth can be used by any number of customers without them knowing the exact location of that resource.

4. Rapid elasticity
Cloud services can be provisioned and released on the user's demand. To the consumer, cloud capabilities often appear unlimited and can be used in any quantity at any time.

5. Measured service
Resource usage can be monitored and controlled, and the resulting reports are available to both the cloud provider and the consumer.
On the basis of these measured reports, cloud systems automatically control and optimize resources based on the type of service, for example storage, processing, bandwidth, etc.

Benefits of using the cloud for your business


There are many potential advantages to adopting cloud-based solutions for your business.
Depending on your business and data needs, migrating to a cloud environment can result
in the following benefits:
Cost savings
Although the initial price tag for migrating to the cloud can give some businesses sticker
shock, there are attractive opportunities for ROI and cost savings. Operating on the cloud
typically means adopting a pay-as-you-go model, which means you no longer have to pay
for IT you’re not using (whether that’s storage, bandwidth, etc.).
Plus, cloud solutions are particularly affordable for smaller businesses who don’t have the
capital to build out and manage their own IT infrastructures. Greater efficiencies and
economies of scale mean more money in your pocket in the long run.
Reliability
A managed cloud platform is generally much more reliable than an in-house IT
infrastructure, with fewer instances of downtime or service interruptions. Most providers
offer 24/7 support and over 99.9% availability.


With backups for their backups, you can rest assured your data and applications will be
available whenever you need them.
Mobility
The cloud brings a level of portability unheard of with traditional IT delivery. By managing
your data and software on the cloud, employees can access necessary information and
communicate with each other whenever and wherever they want from their laptop,
smartphone, or other Internet-connected devices.
Cloud-based solutions open up opportunities for more remote work and higher productivity
and efficiency as everyone is assured access to the same updated information at the touch
of a button.

Other Benefits for CSPs and CSCs :


 Automatic service on demand
 Rapid Elasticity
 Measurable Service
 Multiple Tenants
 Sharing of pool of resources
 Price based utilities (Pay what you use)

Understanding the basics of cloud


Cloud computing is taking the world by storm. In fact, 94% of workloads and compute
instances will be processed through cloud data centers by 2021, compared to only 6% by
traditional data centers, according to research by Cisco.
The principle of the cloud isn’t new, but as more and more companies and businesses
switch to cloud-based services, it’s important to understand the nuances of cloud
computing terminology and concepts.

Future Challenges For Cloud Computing


 Harmful competition: The problem with any open technology is that too many
players can step in and crowd the stage. The result is a virtual fistfight where some
are not shy of using wrong tactics to gain the upper hand. This can include false
claims or trying to damage the reputation of a competitor. Obviously, too much
heat leads to unhealthy competition.

 Major failures: A few recent instances have cast a shadow over the reliability of cloud computing. Issues related to outages and data security need to be taken more seriously by vendors, especially the big players. This is also causing businesses to instead opt for private cloud, which excludes the benefits of big data technology.
 Government regulation: Given the critical data involved, governments are
trying to set up compliance standards for cloud computing vendors. If this comes
into effect, many businesses will be forced to close shop or get into a long-drawn-
out cycle of deciphering arcane guidelines. While the intention can be understood,
governments usually should stay away from technology decisions.
 Security issues: Security risks of cloud computing became the top concern in 2018, cited by 77% of respondents in the referenced survey. For the longest time, the lack of resources/expertise was the number one voiced cloud challenge; in 2018, however, security inched ahead.

Distributed Systems
A distributed system contains multiple nodes that are physically separate but linked together using a network. All the nodes in this system communicate with each other and handle processes in tandem. Each of these nodes contains a small part of the distributed operating system software.


Types of Distributed Systems


The nodes in the distributed systems can be arranged in the form of client/server systems
or peer to peer systems. Details about these are as follows −

Client/Server Systems
In client server systems, the client requests a resource and the server provides that
resource. A server may serve multiple clients at the same time while a client is in contact
with only one server. Both the client and server usually communicate via a computer
network and so they are a part of distributed systems.
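
As a concrete illustration of the request/response interaction just described, here is a minimal sketch of one client and one server communicating over a TCP socket; the port number and the "resource" exchanged are arbitrary choices for the example.

```python
# Minimal client/server sketch over TCP sockets; the port number and the
# request/response format are arbitrary choices for this example.
import socket
import threading
import time

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("localhost", 5050))
        srv.listen(1)
        conn, _ = srv.accept()                      # serve one client request
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"resource for: {request}".encode())

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                                     # give the server time to start listening

# The client requests a resource and waits for the server's reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("localhost", 5050))
    cli.sendall(b"printer")
    print(cli.recv(1024).decode())                  # -> resource for: printer
```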

Peer to Peer Systems


Peer-to-peer systems contain nodes that are equal participants in data sharing. All the tasks are equally divided between the nodes. The nodes interact with each other as required and share resources. This is done with the help of a network.

Advantages of Distributed Systems


Some advantages of Distributed Systems are as follows −

 All the nodes in the distributed system are connected to each other. So nodes can
easily share data with other nodes.
 More nodes can easily be added to the distributed system i.e. it can be scaled as
required.
 Failure of one node does not lead to the failure of the entire distributed system.
Other nodes can still communicate with each other.


 Resources like printers can be shared with multiple nodes rather than being
restricted to just one.
Disadvantages of Distributed Systems
Some disadvantages of Distributed Systems are as follows −

 It is difficult to provide adequate security in distributed systems because the


nodes as well as the connections need to be secured.
 Some messages and data can be lost in the network while moving from one node
to another.
 The database connected to the distributed systems is quite complicated and
difficult to handle as compared to a single user system.
 Overloading may occur in the network if all the nodes of the distributed system
try to send data at once.

Distributed Architecture
In distributed architecture, components are presented on different platforms and several
components can cooperate with one another over a communication network in order to
achieve a specific objective or goal.
 In this architecture, information processing is not confined to a single machine
rather it is distributed over several independent computers.
 A distributed system can be demonstrated by the client-server architecture which
forms the base for multi-tier architectures; alternatives are the broker architecture
such as CORBA, and the Service-Oriented Architecture (SOA).
 There are several technology frameworks to support distributed architectures,
including .NET, J2EE, CORBA, .NET Web services, AXIS Java Web services, and
Globus Grid services.
 Middleware is an infrastructure that appropriately supports the development and
execution of distributed applications. It provides a buffer between the applications
and the network.
 It sits in the middle of system and manages or supports the different components
of a distributed system. Examples are transaction processing monitors, data
convertors and communication controllers etc.
Middleware as an infrastructure for distributed system

The basis of a distributed architecture is its transparency, reliability, and availability.


The following table lists the different forms of transparency in a distributed system −

Sr.No. | Transparency | Description
1 | Access | Hides the way in which resources are accessed and the differences in data platforms.
2 | Location | Hides where resources are located.
3 | Technology | Hides the different technologies, such as the programming language and OS, from the user.
4 | Migration / Relocation | Hides the fact that resources in use may be moved to another location.
5 | Replication | Hides the fact that resources may be copied at several locations.
6 | Concurrency | Hides the fact that resources may be shared with other users.
7 | Failure | Hides the failure and recovery of resources from the user.
8 | Persistence | Hides whether a resource (software) is in memory or on disk.

Key Characteristics of Distributed Systems

Key characteristics of distributed systems are

 Resource Sharing


Resource sharing means that the existing resources in a distributed system can be accessed, or remotely accessed, across multiple computers in the system. Computers in distributed systems share resources like hardware (disks and printers), software (files, windows and data objects) and data. Hardware resources are shared for reductions in cost and for convenience. Data is shared for consistency and the exchange of information.

 Heterogeneity

In distributed systems, components can have variety and differences in networks, computer hardware, operating systems, programming languages and implementations by different developers.

 Openness

Openness is concerned with extensions and improvements of distributed systems. The distributed system must be open in terms of hardware and software. In order to make a distributed system open:

1. A detailed and well-defined interface of components must be published.
2. The interfaces of components should be standardized.
3. New components must be easily integrated with existing components.

 Concurrency

Concurrency is a property of a system representing the fact that multiple activities are executed at the same time. The concurrent execution of activities takes place in different components running on multiple machines as part of a distributed system. In addition, these activities may perform some kind of interaction among themselves. Concurrency reduces latency and increases the throughput of the distributed system.

 Scalability

Scalability is mainly concerned with how the distributed system handles growth as the number of users of the system increases. Mostly we scale a distributed system by adding more computers to the network. Components should not need to be changed when we scale the system; they should be designed in such a way that they are scalable.

 Fault Tolerance

In a distributed system, hardware, software or the network, anything can fail. The system must be designed in such a way that it is available all the time, even after something has failed.

 Transparency

Distributed systems should be perceived by users and application programmers as a whole rather than as a collection of cooperating components. Transparency can be of various types, like access, location, concurrency, replication, etc.

Write short note on Web2.0

When it comes to defining Web 2.0, the term means internet applications which offer sharing and collaboration opportunities to people and help them to express themselves online.

It is simply an improved version of the first worldwide web, characterized specifically by the change from static to dynamic or user-generated content and also the growth of social media.
The concept behind Web 2.0 refers to rich web applications, web-oriented architecture, and the social web. It refers to changes in the ways web pages are designed and used by users, without any change in technical specifications.


Wikis

A wiki is a collaborative tool that allows students to contribute to and modify one or more pages of course-related materials. Wikis are collaborative in nature and facilitate community-building within a course. Essentially, a wiki is a web page with an open-editing system.

Blogging

The word blog is actually a shortened form of its original name, "weblog." These weblogs
allowed early internet users to "log" the details of their day in diary-style entries. Blogs
often allow readers to comment, so as they became more common, communities sprung
up around popular blogs.

Social network

Alternatively referred to as a virtual community or profile site, a social network is a website that brings people together to talk, share ideas and interests, or make new friends. This type of collaboration and sharing is known as social media. Unlike traditional media, which is created by no more than ten people, social media sites contain content created by hundreds or even millions of different people.

Examples of social networks

 Classmates : One of the largest websites for connecting with high school friends, keeping in touch with them and planning future reunions.

 Facebook : The most popular social networking website on the Internet. Facebook is a popular destination for users to set up a personal space and connect with friends, share pictures and movies, talk about what they're doing, etc.

 Google+ : The latest social networking service from Google.

 Instagram : A mobile photo-sharing service and application available for the iPhone, Android, and Windows Phone platforms.

Service-oriented computing
Service orientation is the core reference model for cloud computing systems. This approach adopts the concept of services as the main building blocks of application and system development. Service-oriented computing (SOC) supports the development of rapid, low-cost, flexible, interoperable, and evolvable applications and systems.

A service is an abstraction representing a self-describing and platform-agnostic component that can perform any function, anything from a simple task to a complex business process. Virtually any piece of code that performs a task can be turned into a service and expose its functionalities through a network-accessible protocol. A service is supposed to be loosely coupled, reusable, programming-language independent, and location transparent.

Service-Oriented Architecture


Service-Oriented Architecture (SOA) is an architectural approach in which applications


make use of services available in the network. In this architecture, services are
provided to form applications, through a communication call over the internet.

 SOA allows users to combine a large number of facilities from existing services to
form applications.
 SOA encompasses a set of design principles that structure system development and
provide means for integrating components into a coherent and decentralized
system.
 SOA based computing packages functionalities into a set of interoperable services,
which can be integrated into different software systems belonging to separate
business domains.
There are two major roles within Service-oriented Architecture:

1. Service provider: The service provider is the maintainer of the service and the
organization that makes available one or more services for others to use. To
advertise services, the provider can publish them in a registry, together with a
service contract that specifies the nature of the service, how to use it, the
requirements for the service, and the fees charged.
2. Service consumer: The service consumer can locate the service metadata in the
registry and develop the required client components to bind and use the service.

Services might aggregate information and data retrieved from other services, or create workflows of services to satisfy the request of a given service consumer. This practice is known as service orchestration. Another important interaction pattern is service choreography, which is the coordinated interaction of services without a single point of control.
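
The toy sketch below illustrates the provider, registry and consumer roles described above in plain code; the registry class, service name and contract string are simplified assumptions for illustration, not a real SOA product or standard.

```python
# Toy sketch of the SOA roles described above: a provider publishes a service in a
# registry, and a consumer discovers it and binds to it. Names are illustrative only.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name, contract, endpoint):
        # The provider advertises the service together with its contract.
        self._services[name] = {"contract": contract, "endpoint": endpoint}

    def lookup(self, name):
        # The consumer locates the service metadata.
        return self._services[name]

# --- Service provider side ---
def currency_conversion(amount, rate):
    """A self-contained piece of functionality exposed as a service."""
    return round(amount * rate, 2)

registry = ServiceRegistry()
registry.publish(
    name="CurrencyConversion",
    contract="convert(amount: float, rate: float) -> float",
    endpoint=currency_conversion,   # in a real SOA this would be a network address
)

# --- Service consumer side ---
service = registry.lookup("CurrencyConversion")
print(service["contract"])               # inspect the published contract
print(service["endpoint"](100.0, 83.2))  # bind and invoke: 8320.0
```

In a real system the endpoint would be a network address and the contract a formal description document, but the division of roles is the same.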
Components of SOA:


Guiding Principles of SOA:

1. Standardized service contract: Specified through one or more service description documents.
2. Loose coupling: Services are designed as self-contained components and maintain relationships that minimize dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and description
documents. They hide their logic, which is encapsulated within their
implementation.
4. Reusability: Designed as components, services can be reused more effectively,
thus reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from a
service consumer point of view, there is no need to know about their
implementation.
6. Discoverability: Services are defined by description documents that constitute
supplemental metadata through which they can be effectively discovered. Service
discovery provides an effective means for utilizing third-party resources.
7. Composability: Using services as building blocks, sophisticated and complex
operations can be implemented. Service orchestration and choreography provide
solid support for composing services and achieving business goals.
Advantages of SOA:

 Service reusability: In SOA, applications are made from existing services. Thus,
services can be reused to make many applications.
 Easy maintenance: As services are independent of each other they can be
updated and modified easily without affecting other services.
 Platform independent: SOA allows making a complex application by combining
services picked from different sources, independent of the platform.


 Availability: SOA facilities are easily available to anyone on request.


 Reliability: SOA applications are more reliable because it is easier to debug small services than huge code bases.
 Scalability: Services can run on different servers within an environment, which increases scalability.
Disadvantages of SOA:

 High overhead: Validation of the input parameters of services is done whenever services interact; this decreases performance as it increases load and response time.
 High investment: A huge initial investment is required for SOA.
 Complex service management: When services interact they exchange messages for tasks, and the number of messages may run into millions. It becomes a cumbersome task to handle such a large number of messages.
Practical applications of SOA: SOA is used in many ways around us whether it is
mentioned or not.

1. SOA infrastructure is used by many armies and air forces to deploy situational awareness systems.
2. SOA is used to improve healthcare delivery.
3. Nowadays many apps are games and they use inbuilt functions to run. For example, an app might need GPS, so it uses the inbuilt GPS functions of the device. This is SOA in mobile solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and content.

Service-Oriented Architecture – SOA Features and Benefits

SOA Features, Benefits, and Infrastructure

Feature | Benefits | Supporting Infrastructure
Service | Improved information flow; ability to expose internal functionality; organizational flexibility | -
Service Re-use | Lower software development and management costs | Service repository
Messaging | Configuration flexibility | Messaging program
Message Monitoring | Business intelligence; performance measurement; security attack detection | Activity monitor
Message Control | Application of management policy; application of security policy | PDPs and PEPs
Message Transformation | Data translation | Data translator
Message Security | Data confidentiality and integrity | Encryption engine
Complex Event Processing | Simplification of software structure; ability to adapt quickly to different external environments; improved manageability and security | Event processor
Service Composition | Ability to develop new function combinations rapidly | Composition engine
Service Discovery | Ability to optimize performance, functionality, and cost; easier introduction of system upgrades | Service registry
Asset Wrapping | Ability to integrate existing assets | -
Virtualization | Improved reliability; ability to scale operations to meet different demand levels | -
Model-driven Implementation | Ability to develop new functions rapidly | -

UTILITY COMPUTING
Utility computing is a model in which computing resources are provided to the customer
based on specific demand. The service provider charges exactly for the services
provided, instead of a flat rate.
The foundational concept is that users or businesses pay the providers of utility computing for the amenities used, such as computing capabilities, storage space and application services. The customer is thus absolved of the responsibility of maintaining and managing the hardware. Consequently, the financial outlay is minimal for the organization.
Utility computing helps eliminate data redundancy, as huge volumes of data are distributed across multiple servers or backend systems. The client, however, can access the data anytime and from anywhere.
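
As a simple illustration of the pay-for-what-you-use idea, the sketch below computes a metered bill from recorded usage; the resource names and unit prices are made-up values, not any provider's actual rates.

```python
# Hypothetical metered-billing sketch for utility computing. Resources and unit
# prices are made-up illustration values, not real provider rates.
UNIT_PRICES = {
    "compute_hours": 0.05,   # price per VM-hour
    "storage_gb":    0.02,   # price per GB-month
    "bandwidth_gb":  0.01,   # price per GB transferred
}

def metered_bill(usage):
    """Charge exactly for what was consumed, instead of a flat rate."""
    return sum(UNIT_PRICES[resource] * amount for resource, amount in usage.items())

monthly_usage = {"compute_hours": 300, "storage_gb": 120, "bandwidth_gb": 50}
print(f"This month's bill: ${metered_bill(monthly_usage):.2f}")   # $17.90
```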


Introduction to Parallel Computing


Computer software was conventionally written for serial computing. This meant that to solve a problem, an algorithm divides the problem into smaller instructions. These discrete instructions are then executed on the Central Processing Unit of a computer one by one. Only after one instruction is finished does the next one start.
A real-life example of this would be people standing in a queue waiting for a movie ticket with only one cashier. The cashier gives tickets one by one to the persons in the queue. The complexity of this situation increases when there are 2 queues and only one cashier.
So, in short, Serial Computing is the following:
1. In this, a problem statement is broken into discrete instructions.
2. Then the instructions are executed one by one.
3. Only one instruction is executed at any moment of time.
Look at point 3. This was causing a huge problem in the computing industry, as only one instruction was executed at any moment of time. This was a huge waste of hardware resources, as only one part of the hardware would be running for a particular instruction at a time. As problem statements were getting heavier and bulkier, so did the amount of time spent executing them. Examples of such processors are the Pentium 3 and Pentium 4.

Now let's come back to our real-life problem. We could definitely say that the complexity will decrease when there are 2 queues and 2 cashiers giving tickets to 2 persons simultaneously. This is an example of Parallel Computing.
Parallel Computing –
It is the use of multiple processing elements simultaneously for solving any problem. Problems are broken down into instructions and are solved concurrently, as each resource that has been applied to the work is working at the same time.
Advantages of Parallel Computing over Serial Computing are as follows:
1. It saves time and money, as many resources working together reduce the time and cut potential costs.
2. It can be impractical to solve larger problems on Serial Computing.
3. It can take advantage of non-local resources when the local resources are finite.
4. Serial Computing 'wastes' the potential computing power; Parallel Computing makes better use of the hardware.
Types of Parallelism:
1. Bit-level parallelism: It is the form of parallel computing which is based on increasing the processor's word size. It reduces the number of instructions that the system must execute in order to perform a task on large-sized data.
Example: Consider a scenario where an 8-bit processor must compute the sum of two 16-bit integers. It must first sum the 8 lower-order bits, then add the 8 higher-order bits, thus requiring two instructions to perform the operation. A 16-bit processor can perform the operation with just one instruction.
2. Instruction-level parallelism: On its own, a processor can issue at most one instruction in each clock cycle phase. However, instructions can be re-ordered and grouped so that they are later executed concurrently without affecting the result of the program. This is called instruction-level parallelism.
3. Task Parallelism: Task parallelism employs the decomposition of a task into subtasks and then allocates each of the subtasks for execution. The processors perform the execution of the subtasks concurrently.
Why parallel computing?
 The whole real world runs in a dynamic nature, i.e. many things happen at a certain time but at different places concurrently. This data is extremely huge to manage.
 Real-world data needs more dynamic simulation and modeling, and for achieving that, parallel computing is the key.
 Parallel computing provides concurrency and saves time and money.
 Complex, large datasets and their management can be organized only by using the parallel computing approach.
 It ensures the effective utilization of resources. The hardware is guaranteed to be used effectively, whereas in serial computation only some part of the hardware is used and the rest is rendered idle.
 Also, it is impractical to implement real-time systems using serial computing.
Applications of Parallel Computing:
 Databases and data mining.
 Real-time simulation of systems.
 Science and engineering.
 Advanced graphics, augmented reality and virtual reality.
Limitations of Parallel Computing:
 It introduces issues such as communication and synchronization between multiple sub-tasks and processes, which are difficult to achieve.
 The algorithms must be managed in such a way that they can be handled in the parallel mechanism.
 The algorithms or programs must have low coupling and high cohesion, but it is difficult to create such programs.
 Only more technically skilled and expert programmers can code parallelism-based programs well.
Future of Parallel Computing: The computational landscape has undergone a great transition from serial computing to parallel computing. Tech giants such as Intel have already taken a step towards parallel computing by employing multicore processors. Parallel computation will revolutionize the way computers work in the future, for the better. With all the world connecting to each other even more than before, parallel computing plays a key role in helping us stay connected. With faster networks, distributed systems, and multi-processor computers, it becomes even more necessary.

Parallel Computing Definition


Parallel computing is a type of computing architecture in which several processors


simultaneously execute multiple, smaller calculations broken down from an overall
larger, complex problem.

Difference Between Parallel Processing and Parallel Computing

Parallel processing is a method in computing in which separate parts of an overall


complex task are broken up and run simultaneously on multiple CPUs, thereby reducing
the amount of time for processing.

Dividing and assigning each task to a different processor is typically executed by


computer scientists with the aid of parallel processing software tools, which will also
work to reassemble and read the data once each processor has solved its particular
equation. This process is accomplished either via a computer network or via a computer
with two or more processors.

Parallel processing and parallel computing occur in tandem, therefore the terms are
often used interchangeably; however, where parallel processing concerns the number of
cores and CPUs running in parallel in the computer, parallel computing concerns the
manner in which software behaves to optimize for that condition.

Difference Between Sequential and Parallel Computing


Sequential computing, also known as serial computation, refers to the use of a single
processor to execute a program that is broken down into a sequence of discrete
instructions, each executed one after the other with no overlap at any given time.
Software has traditionally been programmed sequentially, which provides a simpler
approach, but is significantly limited by the speed of the processor and its ability to
execute each series of instructions. Where uni-processor machines use sequential data
structures, data structures for parallel computing environments are concurrent.

Measuring performance in sequential programming is far less complex and important


than benchmarks in parallel computing as it typically only involves identifying
bottlenecks in the system. Benchmarks in parallel computing can be achieved with
benchmarking and performance regression testing frameworks, which employ a variety
of measurement methodologies, such as statistical treatment and multiple repetitions.
The ability to avoid this bottleneck by moving data through the memory hierarchy is
especially evident in parallel computing for data science, machine learning parallel
computing, and parallel computing artificial intelligence use cases.

Sequential computing is effectively the opposite of parallel computing. While parallel


computing may be more complex and come at a greater cost up front, the advantage of
being able to solve a problem faster often outweighs the cost of acquiring parallel
computing hardware.

Hardware architecture (parallel computing)


1. Era of computing –
The two fundamental and dominant models of computing are sequential and
parallel. The sequential computing era began in the 1940s and the parallel (and
distributed) computing era followed it within a decade.
2. Computing –
So, now the question arises: what is computing?
Computing is any goal-oriented activity requiring, benefiting from, or creating computers. Computing includes designing, developing and building hardware and software systems; designing a mathematical sequence of steps known as an algorithm; and processing, structuring and managing various kinds of information.
3. Type of Computing –
Following are two types of computing :

1. Parallel computing
2. Distributed computing
Parallel computing –

Processing of multiple tasks simultaneously on multiple processors is called parallel processing. A parallel program consists of multiple active processes (tasks) simultaneously solving a given problem.
Now that we know what parallel computing is and its types, let us go deeper into the topic and understand the concept of the hardware architecture of parallel computing.


Hardware architecture of parallel computing –

The hardware architecture of parallel computing is divided into the following categories, as given below:
1. Single-instruction, single-data (SISD) systems
2. Single-instruction, multiple-data (SIMD) systems
3. Multiple-instruction, single-data (MISD) systems
4. Multiple-instruction, multiple-data (MIMD) systems

SISD

SISD stands for 'Single Instruction and Single Data Stream'. It represents the
organization of a single computer containing a control unit, a processor unit, and a
memory unit.

Instructions are executed sequentially, and the system may or may not have internal
parallel processing capabilities.

Most conventional computers have SISD architecture like the traditional Von-Neumann
computers.

Parallel processing, in this case, may be achieved by means of multiple functional units
or by pipeline processing.

Where CU = Control Unit, PE = Processing Element, M = Memory.

Instructions are decoded by the Control Unit and then the Control Unit sends the
instructions to the processing units for execution.

Data Stream flows between the processors and memory bi-directionally.


Examples:

Older generation computers, minicomputers, and workstations

SIMD

SIMD stands for 'Single Instruction and Multiple Data Stream'. It represents an
organization that includes many processing units under the supervision of a common
control unit.

All processors receive the same instruction from the control unit but operate on different
items of data.

The shared memory unit must contain multiple modules so that it can communicate with
all the processors simultaneously.

SIMD is mainly dedicated to array processing machines. However, vector processors can also be seen as a part of this group.
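
As a software-level illustration of the "same instruction applied to multiple data items" idea, the sketch below uses NumPy (assumed to be installed) to apply one addition to whole arrays at once; on most modern CPUs such array operations are in fact executed with SIMD/vector instructions.

```python
# Illustration of the SIMD idea at the software level using NumPy (assumed installed):
# one logical instruction (an addition) is applied to many data items at once.
import numpy as np

a = np.arange(8)            # data items [0, 1, 2, ..., 7]
b = np.full(8, 10)          # data items [10, 10, ..., 10]

# A single vectorized operation instead of eight scalar additions in a loop.
c = a + b
print(c)                    # [10 11 12 13 14 15 16 17]
```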


MISD

MISD stands for 'Multiple Instruction and Single Data Stream'.

MISD structure is only of theoretical interest since no practical system has been
constructed using this organization.

In MISD, multiple processing units operate on one single data stream. Each processing unit operates on the data independently via a separate instruction stream.

Where M = Memory Modules, CU = Control Unit, P = Processor Units.

Example:

The experimental Carnegie-Mellon C.mmp computer (1971)

MIMD
MIMD stands for 'Multiple Instruction and Multiple Data Stream'.

In this organization, all processors in a parallel computer can execute


different instructions and operate on various data at the same time.

In MIMD, each processor has a separate program and an instruction stream


is generated from each program.


Examples:

Cray T90, Cray T3E, IBM-SP2

Types of Parallelism in Processing Execution
Data Parallelism
Data parallelism means concurrent execution of the same task on multiple computing cores.
Let's take an example: summing the contents of an array of size N. For a single-core system, one thread would simply sum the elements [0] . . . [N − 1]. For a dual-core system, however, thread A, running on core 0, could sum the elements [0] . . . [N/2 − 1] while thread B, running on core 1, could sum the elements [N/2] . . . [N − 1]. The two threads would then run in parallel on separate computing cores.
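
A minimal sketch of the dual-core example above, using two threads that each sum one half of an array. (This is illustrative only; in CPython, genuine parallel speed-up for CPU-bound work would normally require processes rather than threads.)

```python
# Data parallelism sketch: the same task (summing) applied to different halves
# of the data by two workers, mirroring the dual-core example above.
import threading

N = 1_000_000
data = list(range(N))
partial = [0, 0]

def sum_half(worker, start, end):
    partial[worker] = sum(data[start:end])   # each worker sums its own slice

t_a = threading.Thread(target=sum_half, args=(0, 0, N // 2))    # "core 0"
t_b = threading.Thread(target=sum_half, args=(1, N // 2, N))    # "core 1"
t_a.start(); t_b.start()
t_a.join(); t_b.join()

print(sum(partial) == sum(data))   # True: the partial sums combine to the full sum
```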

Task Parallelism
Task parallelism means concurrent execution of different tasks on multiple computing cores.
Considering the array example above, an example of task parallelism might involve two threads, each performing a unique statistical operation on the array of elements. The threads again operate in parallel on separate computing cores, but each performs a unique operation.
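
Continuing the same array example, the sketch below runs two different operations concurrently, one thread computing the sum and another the maximum; the choice of operations is arbitrary.

```python
# Task parallelism sketch: two threads perform different operations on the same data.
import threading

data = list(range(1_000_000))
results = {}

def compute_sum():
    results["sum"] = sum(data)      # task A: summation

def compute_max():
    results["max"] = max(data)      # task B: maximum

threads = [threading.Thread(target=compute_sum), threading.Thread(target=compute_max)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)   # e.g. {'sum': 499999500000, 'max': 999999}
```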

Bit-level parallelism
Bit-level parallelism is a form of parallel computing which is based on increasing the processor word size. In this type of parallelism, increasing the word size reduces the number of instructions the processor must execute in order to perform an operation on variables whose sizes are greater than the length of the word.
E.g., consider a case where an 8-bit processor must add two 16-bit integers. The processor must first add the 8 lower-order bits from each integer, then add the 8 higher-order bits plus the carry, thus requiring two instructions to complete a single operation. A 16-bit processor would be able to complete the operation with a single instruction.
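
The sketch below mimics the 8-bit scenario above by adding the low bytes and the high bytes (plus carry) of two 16-bit integers in two separate steps; the integer values are arbitrary.

```python
# Mimics an 8-bit processor adding two 16-bit integers: low bytes first, then
# high bytes plus the carry (two "instructions" instead of one). Values are arbitrary.
x, y = 0x1234, 0x0FCD                    # two 16-bit integers

low = (x & 0xFF) + (y & 0xFF)            # step 1: add the 8 lower-order bits
carry = low >> 8
high = (x >> 8) + (y >> 8) + carry       # step 2: add the 8 higher-order bits + carry

result = ((high & 0xFF) << 8) | (low & 0xFF)
print(hex(result), result == (x + y) & 0xFFFF)   # 0x2201 True
```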

Instruction-level parallelism
Instruction-level parallelism means the simultaneous execution of multiple
instructions from a program. While pipelining is a form of ILP, we must exploit it to
achieve parallel execution of the instructions in the instruction stream.
Example
for (i=1; i<=100; i= i+1)
y[i] = y[i] + x[i];

This is a parallel loop. Every iteration of the loop can overlap with any other iteration,
although within each loop iteration there is little opportunity for overlap.

Distributed computing for efficient digital infrastructures
Today, distributed computing is an integral part of both our digital work life and private life.
Anyone who goes online and performs a Google search is already using distributed computing.
Distributed system architectures are also shaping many areas of business and providing
countless services with ample computing and processing power. In the following, we will explain
how this method works and introduce the system architectures used and its areas of application.

What is distributed computing?


The term “distributed computing” describes a digital infrastructure in which a network of
computers solves pending computational tasks. Despite being physically separated, these
autonomous computers work together closely in a process where the work is divvied up.
The hardware being used is secondary to the method here. In addition to high-performance
computers and workstations used by professionals, you can also integrate minicomputers and
desktop computers used by private individuals.

Distributed hardware cannot use a shared memory due to being physically separated, so the participating computers exchange messages and data (e.g. computation results) over a network. This inter-machine communication occurs locally over an intranet (e.g. in a data center) or across the country and world via the internet. Messages are transferred using internet protocols such as TCP/IP and UDP.

In line with the principle of transparency, distributed computing strives to present itself
externally as a functional unit and to simplify the use of technology as much as possible. For
example, users searching for a product in the database of an online shop perceive the shopping
experience as a single process and do not have to deal with the modular system architecture
being used.

In short, distributed computing is a combination of task distribution and coordinated


interactions. The goal is to make task management as efficient as possible and to find practical
flexible solutions.


How does distributed computing work?


In distributed computing, a computation starts with a special problem-solving strategy. A single
problem is divided up and each part is processed by one of the computing units. Distributed
applications running on all the machines in the computer network handle the operational
execution.

Distributed applications often use a client-server architecture. Clients and servers share the work
and cover certain application functions with the software installed on them. A product search is
carried out using the following steps: The client acts as an input instance and a user
interface that receives the user request and processes it so that it can be sent on to a server.
The remote server then carries out the main part of the search function and searches a
database. The search results are prepared on the server-side to be sent back to the client and
are communicated to the client over the network. In the end, the results are displayed on the
user’s screen.

Middleware services are often integrated into distributed processes. Acting as a special software layer, middleware defines the (logical) interaction patterns between partners and
ensures communication, and optimal integration in distributed systems. It provides interfaces
and services that bridge gaps between different applications and enables and monitors their
communication (e.g. through communication controllers). For operational implementation,
middleware provides a proven method for cross-device inter-process
communication called remote procedure call (RPC) which is frequently used in client-server
architecture for product searches involving database queries.
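
A minimal sketch of a remote procedure call in the spirit of the product-search example, using Python's built-in xmlrpc modules; the function name, port and catalogue data are illustrative assumptions only.

```python
# Minimal RPC sketch: the client calls a remote function as if it were local,
# which is the kind of translation service middleware provides. The service name,
# port and catalogue contents are illustrative only.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def find_product(keyword):
    # Stand-in for a real database query on the server side
    catalogue = {"ssd": ["480 GB SSD", "1 TB SSD"], "ram": ["8 GB DDR4"]}
    return catalogue.get(keyword, [])

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True, logRequests=False)
server.register_function(find_product)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the remote call looks like an ordinary function call.
client = ServerProxy("http://localhost:8000")
print(client.find_product("ssd"))      # ['480 GB SSD', '1 TB SSD']
```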

This integration function, which is in line with the transparency principle, can also be viewed as
a translation task. Technically heterogeneous application systems and platforms normally
cannot communicate with one another. Middleware helps them to “speak one language” and
work together productively. In addition to cross-device and cross-platform interaction, middleware
also handles other tasks like data management. It controls distributed applications’ access to
functions and processes of operating systems that are available locally on the connected
computer.


What are the different types of distributed computing?


Distributed computing is a multifaceted field with infrastructures that can vary widely. It is thus
nearly impossible to define all types of distributed computing. However, this field of computer
science is commonly divided into three subfields:

 cloud computing
 grid computing
 cluster computing

Cloud computing uses distributed computing to provide customers with highly scalable cost-
effective infrastructures and platforms. Cloud providers usually offer their resources through
hosted services that can be used over the internet. A number of different service models have
established themselves on the market:

 Software as a service (SaaS): In the case of SaaS, the customer uses the cloud
provider’s applications and associated infrastructure (e.g. servers, online storage,
computing power). The applications can be accessed with a variety of devices via a thin
client interface (e.g. a browser-based web app). Maintenance and administration of the
outsourced infrastructure is handled by the cloud provider.
 Platform as a service (PaaS): In the case of PaaS, a cloud-based environment is
provided (e.g. for developing web applications). The customer retains control over the


applications provided and can configure customized user settings while the technical
infrastructure for distributed computing is handled by the cloud provider.
 Infrastructure as a service (IaaS): In the case of IaaS, the cloud provider supplies a
technical infrastructure which users can access via public or private networks. The
provided infrastructure may include the following components: servers, computing and
networking resources, communication devices (e.g. routers, switches, and firewalls),
storage space, and systems for archiving and securing data. As for the customer, they
retain control over operating systems and provided applications.

Grid computing is based on the idea of a supercomputer with enormous computing power.
However, computing tasks are performed by many instances rather than just one. Servers and
computers can thus perform different tasks independently of one another. Grid computing can
access resources in a very flexible manner when performing tasks. Normally, participants will
allocate specific resources to an entire project at night when the technical infrastructure tends to
be less heavily used.

One advantage of this is that highly powerful systems can be quickly used and
the computing power can be scaled as needed. There is no need to replace or upgrade an
expensive supercomputer with another pricey one to improve performance.

Since grid computing can create a virtual supercomputer from a cluster of loosely
interconnected computers, it is specialized in solving problems that are particularly
computationally intensive. This method is often used for ambitious scientific projects
and decrypting cryptographic codes.

Cluster computing cannot be clearly differentiated from cloud and grid computing. It is a more
general approach and refers to all the ways in which individual computers and their computing
power can be combined together in clusters. Examples of this include server clusters, clusters in
big data and in cloud environments, database clusters, and application clusters. Computer
networks are also increasingly being used in high-performance computing which can solve
particularly demanding computing problems.

Different types of distributed computing can also be defined by looking at the system
architectures and interaction models of a distributed infrastructure. Due to the complex
system architectures in distributed computing, the term distributed systems is more often used.

The following are some of the more commonly used architecture models in distributed
computing:

 client-server model
 peer-to-peer model
 multilayered model (multi-tier architectures)
 service-oriented architecture (SOA)

Client-Server Architecture
The client-server architecture is the most common distributed system architecture
which decomposes the system into two major subsystems or logical processes −
 Client − This is the first process that issues a request to the second process
i.e. the server.
 Server − This is the second process that receives the request, carries it out,
and sends a reply to the client.


In this architecture, the application is modelled as a set of services that are provided
by servers and a set of clients that use these services. The servers need not know
about clients, but the clients must know the identity of the servers, and the mapping of
processors to processes is not necessarily 1 : 1.
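As a concrete illustration, the sketch below shows a minimal client and server in Python using TCP sockets; the port number and the echo-style behaviour are arbitrary choices for this example, not part of any particular system.

# server.py - the second process: receives a request, carries it out
# (here it simply upper-cases the text), and sends a reply to the client.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("127.0.0.1", 5000))      # port chosen arbitrarily for the example
    srv.listen()
    conn, addr = srv.accept()
    with conn:
        request = conn.recv(1024)      # the client's request arrives here
        conn.sendall(request.upper())  # the "service": upper-case the text

# client.py - the first process: issues a request and waits for the reply.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 5000))   # the client must know the server's identity
    cli.sendall(b"hello server")
    print(cli.recv(1024))              # prints b'HELLO SERVER'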

Client-server Architecture can be classified into two models based on the functionality
of the client −

Thin-client model

In the thin-client model, all the application processing and data management is carried
out by the server. The client is simply responsible for running the presentation software.
 Used when legacy systems are migrated to client-server architectures, in which the
legacy system acts as a server in its own right with a graphical interface
implemented on the client.
 A major disadvantage is that it places a heavy processing load on both the
server and the network.

Thick/Fat-client model

In the thick-client model, the server is only in charge of data management. The software
on the client implements the application logic and the interactions with the system
user.
 Most appropriate for new C/S systems where the capabilities of the client
system are known in advance.
 More complex than a thin-client model, especially for management. New
versions of the application have to be installed on all clients.

Advantages

 Separation of responsibilities such as user interface presentation and business
logic processing.
 Reusability of server components and potential for concurrency.
 Simplifies the design and the development of distributed applications.
 It makes it easy to migrate or integrate existing applications into a distributed
environment.
 It also makes effective use of resources when a large number of clients are
accessing a high-performance server.

Disadvantages

 Lack of heterogeneous infrastructure to deal with the requirement changes.
 Security complications.
 Limited server availability and reliability.
 Limited testability and scalability.
 Fat clients with presentation and business logic together.

Multi-Tier Architecture (n-tier Architecture)


Multi-tier architecture is a client–server architecture in which the functions such as
presentation, application processing, and data management are physically
separated. By separating an application into tiers, developers obtain the option of
changing or adding a specific layer, instead of reworking the entire application. It
provides a model by which developers can create flexible and reusable applications.


The most common form of multi-tier architecture is the three-tier architecture. A three-
tier architecture is typically composed of a presentation tier, an application tier, and a
data storage tier, and each tier may execute on a separate processor.

Presentation Tier

The presentation layer is the topmost level of the application, which users access
directly, such as a webpage or an operating system GUI (Graphical User Interface). The
primary function of this layer is to translate tasks and results into something the
user can understand. It communicates with the other tiers, presenting results to the
browser/client and passing requests on to the rest of the network.

Application Tier (Business Logic, Logic Tier, or Middle Tier)

The application tier coordinates the application, processes commands, makes logical
decisions and evaluations, and performs calculations. It controls an application's
functionality by performing detailed processing. It also moves and processes data
between the two surrounding layers.

Data Tier

In this layer, information is stored in and retrieved from the database or file system. The
information is then passed back to the application tier for processing and eventually
back to the user. This tier includes the data persistence mechanisms (database servers,
file shares, etc.) and provides an API (Application Programming Interface) to the
application tier, offering methods for managing the stored data.
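A minimal sketch of the three tiers as plain Python functions is given below; the function names and the in-memory "database" are invented purely for illustration.

# Data tier: persistence mechanism plus an API for managing stored data.
_db = {"alice": 120, "bob": 80}          # stands in for a real database

def load_balance(user):
    return _db[user]

# Application tier: business logic, decisions, and calculations.
def can_withdraw(user, amount):
    return load_balance(user) >= amount

# Presentation tier: translates results into something the user understands.
def show_withdrawal(user, amount):
    if can_withdraw(user, amount):
        print(f"{user}: withdrawal of {amount} approved")
    else:
        print(f"{user}: insufficient funds")

show_withdrawal("alice", 100)            # alice: withdrawal of 100 approved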


Advantages
 Better performance than a thin-client approach and is simpler to manage than
a thick-client approach.
 Enhances the reusability and scalability − as demands increase, extra servers
can be added.
 Provides multi-threading support and also reduces network traffic.
 Provides maintainability and flexibility
Disadvantages
 Unsatisfactory Testability due to lack of testing tools.
 More critical server reliability and availability.

Architectural Design
Software needs an architectural design to represent its overall design. IEEE
defines architectural design as "the process of defining a
collection of hardware and software components and their interfaces to
establish the framework for the development of a computer system." The
software that is built for computer-based systems can exhibit one of
many architectural styles.
Each style describes a system category that consists of:
 A set of components (e.g. a database, computational modules) that will
perform a function required by the system.
 A set of connectors that help in coordination, communication, and
cooperation between the components.
 Conditions on how components can be integrated to form the system.
 Semantic models that help the designer to understand the overall
properties of the system.
The use of architectural styles is to establish a structure for all the
components of the system.


Taxonomy of Architectural styles:


1. Data centred architectures:
 A data store resides at the center of this architecture and is
accessed frequently by the other components, which update, add, delete,
or modify the data present within the store.
 The figure illustrates a typical data-centered style. The client software
accesses a central repository. Variations of this approach turn the
repository into a blackboard that notifies client software when data of
interest to a client changes.
 This data-centered architecture promotes integrability: existing
components can be changed and new client components can be added to the
architecture without affecting other clients.
 Data can be passed among clients using the blackboard mechanism.

2. Data flow architectures:


 This kind of architecture is used when input data is to be transformed
into output data through a series of computational or manipulative
components.
 The figure represents a pipe-and-filter architecture, since it uses both
pipes and filters: it has a set of components called filters connected
by pipes.
 Pipes are used to transmit data from one component to the next.
 Each filter works independently and is designed to take data input of
a certain form and produce data output of a specified form for the next
filter. The filters don't require any knowledge of the workings of
neighboring filters.
 If the data flow degenerates into a single line of transforms, it is
termed batch sequential. This structure accepts a batch of data
and then applies a series of sequential components to transform it.


3. Call and Return architectures: This style is used to create a program that is
easy to scale and modify. Many sub-styles exist within this category. Two
of them are explained below.
 Remote procedure call architecture: The components of a main program
or subprogram architecture are distributed among multiple computers
on a network.
 Main program or subprogram architectures: The main program
decomposes into a number of subprograms or functions organised in a
control hierarchy. The main program contains a number of subprograms
that can invoke other components.

4. Object-oriented architecture: The components of the system encapsulate
data and the operations that must be applied to manipulate the data. The
coordination and communication between the components are
established via message passing.
5. Layered architecture:
 A number of different layers are defined, with each layer performing a
well-defined set of operations. Moving inwards, each layer's operations
become progressively closer to the machine instruction set.
 At the outer layer, components handle user interface
operations, and at the inner layers, components perform
operating system interfacing (communication and coordination with the OS).


 Intermediate layers provide utility services and application software
functions.

Batch Sequential
Batch sequential is a classical data processing model, in which a data transformation
subsystem can initiate its process only after its previous subsystem is completely
through −
 The flow of data carries a batch of data as a whole from one subsystem to
another.
 The communications between the modules are conducted through temporary
intermediate files which can be removed by successive subsystems.
 It is applicable for those applications where data is batched, and each
subsystem reads related input files and writes output files.
 Typical applications of this architecture include business data processing such
as banking and utility billing.
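The sketch below is a toy batch-sequential run in Python, assuming a small text file of billing amounts: each subsystem runs to completion and hands a whole temporary file to the next one. The file names are invented for the example.

# Stage 1: read the whole input batch, validate it, write an intermediate file.
def validate(in_path, tmp_path):
    with open(in_path) as f, open(tmp_path, "w") as out:
        for line in f:
            if line.strip().isdigit():
                out.write(line)

# Stage 2: only starts after stage 1 is completely finished.
def total(tmp_path, out_path):
    with open(tmp_path) as f, open(out_path, "w") as out:
        out.write(str(sum(int(x) for x in f)))

validate("bills.txt", "valid_bills.tmp")   # hypothetical input file
total("valid_bills.tmp", "bill_total.txt")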

Advantages

 Provides simpler divisions on subsystems.
 Each subsystem can be an independent program working on input data and
producing output data.

Disadvantages

 Provides high latency and low throughput.


 Does not provide concurrency and interactive interface.
 External control is required for implementation.

Pipe and Filter Architecture


This approach lays emphasis on the incremental transformation of data by successive
components. In this approach, the whole system is driven by the flow of data and is
decomposed into components of data sources, filters, pipes, and data sinks.
The connection between modules is a data stream, a first-in/first-out buffer
that can be a stream of bytes, characters, or any other type of such kind. The main
feature of this architecture is its concurrent and incremental execution.
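A common way to mimic pipes and filters in Python is with generators, where each filter consumes an incoming stream incrementally and yields a transformed stream; the particular filters below (parse and square) are invented for illustration.

def source(lines):              # data source: emits the raw stream
    for line in lines:
        yield line

def parse(stream):              # filter 1: text -> integers, item by item
    for item in stream:
        yield int(item)

def square(stream):             # filter 2: transforms each value incrementally
    for n in stream:
        yield n * n

def sink(stream):               # data sink: consumes the final stream
    for n in stream:
        print(n)

# The "pipes" are simply the generator connections between the filters.
sink(square(parse(source(["1", "2", "3"]))))   # prints 1, 4, 9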

Filter

A filter is an independent data stream transformer or stream transducer. It
transforms the data of the input data stream, processes it, and writes the transformed
data stream over a pipe for the next filter to process. It works in an incremental mode,
in which it starts working as soon as data arrives through the connected pipe. There are
two types of filters − active filters and passive filters.
Active filter
An active filter lets connected pipes pull data in and push out the transformed data. It
operates with a passive pipe, which provides read/write mechanisms for pulling and
pushing. This mode is used in the UNIX pipe-and-filter mechanism.
Passive filter
A passive filter lets connected pipes push data in and pull data out. It operates with an
active pipe, which pulls data from a filter and pushes data into the next filter. It must
provide a read/write mechanism.

Advantages

 Provides concurrency and high throughput for extensive data processing.
 Provides reusability and simplifies system maintenance.
 Provides modifiability and low coupling between filters.
 Provides simplicity by offering clear divisions between any two filters connected
by a pipe.
 Provides flexibility by supporting both sequential and parallel execution.

Disadvantages

 Not suitable for dynamic interactions.
 A low common denominator is needed for transmission of data in ASCII
formats.
 Overhead of data transformation between filters.
 Does not provide a way for filters to cooperatively interact to solve a problem.
 Difficult to configure this architecture dynamically.

Pipe

Pipes are stateless; they carry the binary or character streams that exist between
two filters. A pipe can move a data stream from one filter to another. Pipes use little
contextual information and retain no state information between instantiations.

Process Control Architecture


It is a type of data flow architecture where the data is neither a sequential batch nor a
pipelined stream. The flow of data comes from a set of variables which control the
execution of the process. It decomposes the entire system into subsystems or modules
and connects them.

Types of Subsystems

A process control architecture would have a processing unit for changing the
process control variables and a controller unit for calculating the amount of
changes.
A controller unit must have the following elements −
 Controlled Variable − Controlled Variable provides values for the underlying
system and should be measured by sensors. For example, speed in cruise
control system.
 Input Variable − Measures an input to the process. For example, temperature
of return air in temperature control system
 Manipulated Variable − Manipulated Variable value is adjusted or changed by
the controller.
 Process Definition − It includes mechanisms for manipulating some process
variables.


 Sensor − Obtains values of process variables pertinent to control and can be
used as a feedback reference to recalculate manipulated variables.
 Set Point − It is the desired value for a controlled variable.
 Control Algorithm − It is used for deciding how to manipulate process
variables.
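The sketch below maps these elements onto a toy cruise-control loop in Python; the sensor, the simple proportional control algorithm, and all the numbers are invented for illustration only.

set_point = 100.0          # desired value of the controlled variable (speed)
speed = 80.0               # controlled variable, as measured by the sensor
throttle = 0.5             # manipulated variable, adjusted by the controller

def sensor():
    return speed           # obtains the current value of the process variable

def control_algorithm(measured):
    # decides how to change the manipulated variable (simple proportional rule)
    return 0.01 * (set_point - measured)

for _ in range(5):                         # the control loop
    throttle += control_algorithm(sensor())
    speed += 40 * throttle - 0.2 * speed   # crude process model, illustration only
    print(round(speed, 1), round(throttle, 2))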

Application Areas

Process control architecture is suitable in the following domains −
 Embedded system software design, where the system is manipulated by
process control variable data.
 Applications whose aim is to maintain specified properties of the outputs of the
process at given reference values.
 Car cruise control and building temperature control systems.
 Real-time system software to control automobile anti-lock brakes, nuclear
power plants, etc.

Inter Process Communication (IPC)


A process can be of two types:
 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes while a co-
operating process can be affected by other executing processes. Though one can think
that those processes, which are running independently, will execute very efficiently, in
reality, there are many situations when co-operative nature can be utilised for
increasing computational speed, convenience and modularity. Inter process
communication (IPC) is a mechanism which allows processes to communicate with
each other and synchronize their actions. The communication between these processes
can be seen as a method of co-operation between them. Processes can communicate
with each other through both:
1. Shared Memory
2. Message passing
The Figure 1 below shows a basic structure of communication between processes via
the shared memory method and via the message passing method.

An operating system can implement both methods of communication. First, we will
discuss the shared memory method of communication and then message passing.
Communication between processes using shared memory requires processes to share
some variable, and it completely depends on how the programmer implements it. One
way of communication using shared memory can be imagined like this: Suppose
process1 and process2 are executing simultaneously and they share some resources or
use some information from another process. Process1 generates information about
certain computations or resources being used and keeps it as a record in shared
memory. When process2 needs to use the shared information, it will check the
record stored in shared memory, take note of the information generated by
process1, and act accordingly. Processes can use shared memory for extracting
information as a record from another process as well as for delivering any specific
information to other processes.
Let’s discuss an example of communication between processes using shared memory
method.


i) Shared Memory Method


Ex: Producer-Consumer problem
There are two processes: Producer and Consumer. Producer produces some item and
Consumer consumes that item. The two processes share a common space or memory
location known as a buffer where the item produced by Producer is stored and from
which the Consumer consumes the item, if needed. There are two versions of this
problem: the first one is known as unbounded buffer problem in which Producer can
keep on producing items and there is no limit on the size of the buffer, the second one
is known as the bounded buffer problem in which Producer can produce up to a
certain number of items before it starts waiting for Consumer to consume it. We will
discuss the bounded buffer problem. First, the Producer and the Consumer will share
some common memory; then the Producer will start producing items. If the total number
of produced items is equal to the size of the buffer, the Producer will wait for them to be
consumed by the Consumer. Similarly, the Consumer will first check for the availability
of an item. If no item is available, the Consumer will wait for the Producer to produce it.
If there are items available, the Consumer will consume them. The pseudocode to
demonstrate this is provided below:

Shared Data between the two Processes

#define buff_max 25
#define mod %

struct item{
    // different members of the produced data
    // or consumed data
    ---------
};

// An array is needed for holding the items.
// This is the shared place which will be
// accessed by both processes.
item shared_buff[buff_max];

// Two variables which will keep track of
// the indexes of the items produced by the producer
// and consumed by the consumer. The free index points to
// the next free index. The full index points to
// the first full index.
int free_index = 0;
int full_index = 0;

Producer Process Code

item nextProduced;

while(1){
    // produce an item and store it in nextProduced
    // ...

    // check if there is no space for production;
    // if so, keep waiting.
    while((free_index + 1) mod buff_max == full_index);

    shared_buff[free_index] = nextProduced;
    free_index = (free_index + 1) mod buff_max;
}


Consumer Process Code

item nextConsumed;

while(1){
    // check if there is an available item for consumption;
    // if not, keep waiting for one to be produced.
    while(free_index == full_index);

    nextConsumed = shared_buff[full_index];
    full_index = (full_index + 1) mod buff_max;
}

In the above code, the Producer will start producing again only when the slot at
(free_index + 1) mod buff_max is free; if it is not free, the buffer is full and there are
still items that can be consumed by the Consumer, so there is no need to produce more.
Similarly, if free_index and full_index point to the same index, there are no items
to consume.

ii) Messaging Passing Method


Now we will discuss communication between processes via
message passing. In this method, processes communicate with each other without
using any kind of shared memory. If two processes p1 and p2 want to communicate
with each other, they proceed as follows:


 Establish a communication link (if a link already exists, there is no need to
establish it again).
 Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)

The message size can be fixed or variable. If it is of fixed size, it is easy
for the OS designer but complicated for the programmer; if it is of variable size, then
it is easy for the programmer but complicated for the OS designer. A standard message
has two parts: a header and a body.
The header part is used for storing the message type, destination id, source id, message
length, and control information. The control information contains details such as
what to do if the process runs out of buffer space, the sequence number, and the
priority. Generally, messages are sent in FIFO style.
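A minimal sketch of the send/receive primitives using Python's multiprocessing module is shown below; the Queue acts as the FIFO communication link between the two processes, and the message content is invented for the example.

from multiprocessing import Process, Queue

def p1(link):
    link.put({"type": "greeting", "body": "hello from p1"})   # send(message)

def p2(link):
    message = link.get()          # receive(message): blocks until one arrives
    print("p2 received:", message["body"])

if __name__ == "__main__":
    link = Queue()                # the established communication link (FIFO)
    a = Process(target=p1, args=(link,))
    b = Process(target=p2, args=(link,))
    a.start(); b.start()
    a.join(); b.join()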

Remote Procedure Call (RPC)

A remote procedure call is an interprocess communication technique that is used for
client-server based applications. It is also known as a subroutine call or a function call.
A client has a request message that the RPC translates and sends to the server. This
request may be a procedure or a function call to a remote server. When the server
receives the request, it sends the required response back to the client. The client is
blocked while the server is processing the call and only resumes execution after the
server has finished.
The sequence of events in a remote procedure call are given as follows −

 The client stub is called by the client.


 The client stub makes a system call to send the message to the server and puts the
parameters in the message.
 The message is sent from the client to the server by the client’s operating system.
 The message is passed to the server stub by the server operating system.
 The parameters are removed from the message by the server stub.
 Then, the server procedure is called by the server stub.
A diagram that demonstrates this is as follows −

Advantages of Remote Procedure Call


Some of the advantages of RPC are as follows −

 Remote procedure calls support process oriented and thread oriented models.
 The internal message passing mechanism of RPC is hidden from the user.
 The effort to re-write and re-develop the code is minimum in remote procedure calls.
 Remote procedure calls can be used in distributed environment as well as the local
environment.
 Many of the protocol layers are omitted by RPC to improve performance.
Disadvantages of Remote Procedure Call
Some of the disadvantages of RPC are as follows −

 The remote procedure call is a concept that can be implemented in different ways. It
is not a standard.
 There is no flexibility in RPC for hardware architecture. It is only interaction based.
 There is an increase in cost because of a remote procedure call.

The following steps take place during a RPC:


1. A client invokes a client stub procedure, passing parameters in the
usual way. The client stub resides within the client’s own address space.
2. The client stub marshalls (packs) the parameters into a message.
Marshalling includes converting the representation of the parameters into
a standard format, and copying each parameter into the message.
3. The client stub passes the message to the transport layer, which sends
it to the remote server machine.
4. On the server, the transport layer passes the message to a server stub,
which demarshalls(unpack) the parameters and calls the desired server
routine using the regular procedure call mechanism.
5. When the server procedure completes, it returns to the server
stub (e.g., via a normal procedure call return), which marshalls the
return values into a message. The server stub then hands the message to
the transport layer.
6. The transport layer sends the result message back to the client
transport layer, which hands the message back to the client stub.
7. The client stub demarshalls the return parameters and execution
returns to the caller.

Implementing RPC
An RPC system is typically built on three request-reply communication primitives:
 doOperation
 getRequest
 sendReply
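A minimal sketch of these three request-reply primitives over UDP sockets in Python is given below. It is a simplification (no retries, timeouts, or message identifiers), and the buffer size and addresses are arbitrary choices for the example.

import socket

BUF = 4096   # arbitrary maximum message size for this sketch

def doOperation(server_addr, request_bytes):
    # Client side: send the request message and block until the reply arrives.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(request_bytes, server_addr)
        reply, _ = s.recvfrom(BUF)
        return reply

def getRequest(server_socket):
    # Server side: wait for the next request message from any client.
    return server_socket.recvfrom(BUF)       # returns (request, client_addr)

def sendReply(server_socket, reply_bytes, client_addr):
    # Server side: send the reply back to the client that issued the request.
    server_socket.sendto(reply_bytes, client_addr)

# A typical server loop might look like this (address chosen arbitrarily):
# srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# srv.bind(("127.0.0.1", 6000))
# while True:
#     request, client = getRequest(srv)
#     sendReply(srv, request.upper(), client)   # the "procedure" just upper-cases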

Distributed Object Framework?


Distributed Object Framework
The acronym DOF (Distributed Object Framework) refers to a technology that allows many different
products, using many different standards, to work together and share information effortlessly across
many different networks (e.g., LAN, WAN, Intranet, Internet—any type of network or mesh). At its core,
DOF technology was designed to network embedded devices, whether simple or complex. However, to
support advanced networking functions for those devices, DOF technology has also evolved into a server
technology, appropriate for services that expand the functionality of networked devices, whether those
services reside on your own physical servers, or you are taking advantage of advanced cloud technology,
such as Amazon Web Services. Ultimately, DOF technology has the flexibility to enhance all products,
from the simplest resource-constrained device to the most powerful of computer networks.
How Does DOF Technology Work?
 DOF technology includes object-oriented networking. This means devices of almost any type can be
networked—securely, with little configuration. We accomplish this through a few simple concepts and
“tools.”
 Actor: An actor is a universal primitive that can send or receive messages, make local decisions, create
more actors, and determine how to respond to consecutive messages.


 DOF Object: In the DOF Object Model, a DOF object can be anything that provides the functionality
defined in a DOF interface. All DOF objects are providers.
 Requestor: A requestor is a node on a DOF network that requests functionality from a provider.
 Provider: A provider is a node on a DOF network that provides functionality defined in one or more DOF
interfaces.
 DOF Interface: A DOF interface defines the items of functionality that a DOF object must provide.
Interface items of functionality are properties, methods, events, and exceptions.
 Node: A node is simply a connection point: either redistributing communication (data) or acting as a
communication endpoint.
 Service: A service is a type of object that can provide centralized functionality to other objects.
 Security: By leveraging common libraries and components, using DOF technology makes security easier
to understand, manage, and integrate into products.

What are Web Services?


Different books and different organizations provide different definitions to Web
Services. Some of them are listed here.
 A web service is any piece of software that makes itself available over the
internet and uses a standardized XML messaging system. XML is used to
encode all communications to a web service. For example, a client invokes a
web service by sending an XML message, then waits for a corresponding XML
response. As all communication is in XML, web services are not tied to any
one operating system or programming language—Java can talk with Perl;
Windows applications can talk with Unix applications.
 Web services are self-contained, modular, distributed, dynamic applications
that can be described, published, located, or invoked over the network to
create products, processes, and supply chains. These applications can be
local, distributed, or web-based. Web services are built on top of open
standards such as TCP/IP, HTTP, Java, HTML, and XML.
 Web services are XML-based information exchange systems that use the
Internet for direct application-to-application interaction. These systems can
include programs, objects, messages, or documents.
 A web service is a collection of open protocols and standards used for
exchanging data between applications or systems. Software applications
written in various programming languages and running on various platforms
can use web services to exchange data over computer networks like the
Internet in a manner similar to inter-process communication on a single
computer. This interoperability (e.g., between Java and Python, or Windows
and Linux applications) is due to the use of open standards.
To summarize, a complete web service is, therefore, any service that −
 Is available over the Internet or private (intranet) networks


 Uses a standardized XML messaging system
 Is not tied to any one operating system or programming language
 Is self-describing via a common XML grammar
 Is discoverable via a simple find mechanism

Components of Web Services


The basic web services platform is XML + HTTP. All the standard web services work
using the following components −
 SOAP (Simple Object Access Protocol)
 UDDI (Universal Description, Discovery and Integration)
 WSDL (Web Services Description Language)

How Does a Web Service Work?


A web service enables communication among various applications by using open
standards such as HTML, XML, WSDL, and SOAP. A web service takes the help of

 XML to tag the data
 SOAP to transfer a message
 WSDL to describe the availability of service.
You can build a Java-based web service on Solaris that is accessible from your Visual
Basic program that runs on Windows.
You can also use C# to build new web services on Windows that can be invoked from
your web application that is based on JavaServer Pages (JSP) and runs on Linux.
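To make the "XML + HTTP" idea concrete, the sketch below builds a SOAP envelope by hand and POSTs it with Python's standard library. The service URL and the GetPrice operation are hypothetical, so the actual request is left commented out.

import urllib.request

SOAP_BODY = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/stock"> <!-- hypothetical operation -->
      <Symbol>IBM</Symbol>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    "http://example.com/stockservice",           # hypothetical service URL
    data=SOAP_BODY.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
# with urllib.request.urlopen(req) as resp:      # not executed: the URL is fictional
#     print(resp.read().decode("utf-8"))         # the reply would also be XML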

Web service Design approaches

Design approaches in Web service


There are two approaches to implementing a web service:

1. Top down approach

2. Bottom up approach


Top down approach

This approach is also called the contract-first approach.

In this approach, we first define the service contract, which is the WSDL; that is why it
is called the contract-first approach.

The WSDL contains all the information about the service, such as the service definition,
message format, transport protocol, and security details. Using this WSDL, the service
code skeleton is generated, and the service code is then filled in as per the need.

So, in simple words, we could say that the top down approach is WSDL first, service later.


Bottom Up approach

This approach is exactly the reverse of the top down approach.

In this approach the WSDL is generated last, so the bottom up approach is also called
the contract-last approach.

First, the service code is written; then, using the service code, the WSDL is generated.

Many tools are available to generate the WSDL file automatically based on the service code.

So, in simple words, we could say that the bottom up approach is WSDL last, service first.

Cloud Computing Architecture


As we know, cloud computing technology is used by both small and large
organizations to store information in the cloud and access it from anywhere at
any time using an internet connection.

Cloud computing architecture is a combination of service-oriented


architecture and event-driven architecture.

Cloud computing architecture is divided into the following two parts -

o Front End


o Back End

The below diagram shows the architecture of cloud computing -

Front End
The front end is used by the client. It contains client-side interfaces and applications
that are required to access the cloud computing platforms. The front end includes
web browsers (such as Chrome, Firefox, Internet Explorer, etc.), thin & fat clients,
tablets, and mobile devices.

Back End
The back end is used by the service provider. It manages all the resources that are
required to provide cloud computing services. It includes a huge amount of data
storage, security mechanism, virtual machines, deploying models, servers, traffic
control mechanisms, etc.

Note: Both the front end and the back end are connected to each other through a
network, generally using an internet connection.

Components of Cloud Computing Architecture


There are the following components of cloud computing architecture -

1. Client Infrastructure


Client Infrastructure is a front end component. It provides a GUI (Graphical User
Interface) to interact with the cloud.

2. Application

The application may be any software or platform that a client wants to access.

3. Service

The cloud service component manages which type of service you access according to
the client’s requirement.

Cloud computing offers the following three type of services:

i. Software as a Service (SaaS) – It is also known as cloud application
services. Mostly, SaaS applications run directly through the web browser, which means
we do not need to download and install these applications. Some important examples
of SaaS are given below –

Example: Google Apps, Salesforce, Dropbox, Slack, HubSpot, Cisco WebEx.

ii. Platform as a Service (PaaS) – It is also known as cloud platform services.
It is quite similar to SaaS, but the difference is that PaaS provides a platform for
software creation, whereas with SaaS we access software over the internet without
the need for any platform.

Example: Windows Azure, Force.com, Magento Commerce Cloud, OpenShift.

iii. Infrastructure as a Service (IaaS) – It is also known as cloud
infrastructure services. It provides fundamental computing resources such as virtual
machines, storage, and networks, on top of which the customer manages the
applications, data, middleware, and runtime environments.

Example: Amazon Web Services (AWS) EC2, Google Compute Engine (GCE), Cisco
Metapod.

4. Runtime Cloud

Runtime Cloud provides the execution and runtime environment to the virtual
machines.

5. Storage

Storage is one of the most important components of cloud computing. It provides a
huge amount of storage capacity in the cloud to store and manage data.

6. Infrastructure

It provides services on the host level, application level, and network level.
Cloud infrastructure includes hardware and software components such as servers,
storage, network devices, virtualization software, and other storage resources that
are needed to support the cloud computing model.


7. Management

Management is used to manage components such as the application, service, runtime
cloud, storage, infrastructure, and other security issues in the back end and to establish
coordination between them.

8. Security

Security is an in-built back end component of cloud computing. It implements a
security mechanism in the back end.

9. Internet

The Internet is the medium through which the front end and back end interact and
communicate with each other.

Cloud Ecosystem: A cloud ecosystem typically involves the following entities:

1. Users or cloud subscribers
2. Service conceptualizers
3. Resource allocators
4. Cloud providers
5. Service controller
6. Pricer
7. VM monitor

Cloud Computing Infrastructure as a Service


(IaaS)
Infrastructure-as-a-Service provides access to fundamental resources such as
physical machines, virtual machines, virtual storage, etc. Apart from these resources,
the IaaS also offers:

 Virtual machine disk storage


 Virtual local area network (VLANs)
 Load balancers
 IP addresses
 Software bundles
All of the above resources are made available to end user via server
virtualization. Moreover, these resources are accessed by the customers as if they
own them.


Benefits
IaaS allows the cloud provider to freely locate the infrastructure over the Internet in a
cost-effective manner. Some of the key benefits of IaaS are listed below:
 Full control of the computing resources through administrative access to VMs.
 Flexible and efficient renting of computer hardware.
 Portability, interoperability with legacy applications.

Full control over computing resources through administrative access to
VMs

IaaS allows the customer to access computing resources through administrative
access to virtual machines in the following manner:
 The customer issues an administrative command to the cloud provider to run a
virtual machine or to save data on the cloud server.
 The customer issues administrative commands to the virtual machines they own to
start a web server or to install new applications.
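As an illustration of such administrative commands, the sketch below uses the AWS SDK for Python (boto3), assuming it is installed and credentials are configured, to launch and later stop a virtual machine. The AMI ID is a placeholder, not a real machine image.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Run the virtual machine": ask the provider for one small VM instance.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",        # placeholder machine image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# Later, an administrative command to stop the same VM.
ec2.stop_instances(InstanceIds=[instance_id])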

Flexible and efficient renting of computer hardware

IaaS resources such as virtual machines, storage devices, bandwidth, IP addresses,
monitoring services, firewalls, etc. are made available to the customers on rent. The
payment is based upon the amount of time the customer retains a resource. Also, with
administrative access to virtual machines, the customer can run any software, even
a custom operating system.

Portability, interoperability with legacy applications

It is possible to move legacy applications and workloads between IaaS
clouds. For example, network applications such as a web server or e-mail server that
normally run on customer-owned server hardware can also run from VMs in an IaaS
cloud.

Issues
IaaS shares issues with PaaS and SaaS, such as Network dependence and browser
based risks. It also has some specific issues, which are mentioned in the following
diagram:

Compatibility with legacy security vulnerabilities

Because IaaS allows the customer to run legacy software in the provider's infrastructure,
it exposes the customer to all of the security vulnerabilities of such legacy software.

Virtual Machine sprawl

VMs can become out-of-date with respect to security updates because IaaS
allows the customer to operate virtual machines in running, suspended, and off
states. The provider can automatically update such VMs, but this mechanism
is hard and complex.

Robustness of VM-level isolation

IaaS offers an isolated environment to individual customers through the hypervisor.
The hypervisor is a software layer that uses hardware support for virtualization to split
a physical computer into multiple virtual machines.

Data erase practices

The customer uses virtual machines that in turn use the common disk resources
provided by the cloud provider. When the customer releases the resource, the cloud
provider must ensure that the next customer to rent the resource does not observe data
residue from the previous customer.

Characteristics
Here are the characteristics of IaaS service model:
 Virtual machines with pre-installed software.
 Virtual machines with pre-installed operating systems such as Windows, Linux,
and Solaris.
 On-demand availability of resources.
 Allows to store copies of particular data at different locations.
 The computing resources can be easily scaled up and down.

Cloud Computing Platform as a Service (PaaS)


Platform-as-a-Service offers the runtime environment for applications. It also offers
the development and deployment tools required to develop applications. PaaS offers
point-and-click tools that enable non-developers to create web applications.
Google App Engine and Force.com are examples of PaaS vendors. Developers may log
on to these websites and use the built-in APIs to create web-based applications.
But the disadvantage of using PaaS is that the developer is locked in to a particular
vendor. For example, an application written in Python against the APIs of Google App
Engine is likely to work only in that environment.
The following diagram shows how PaaS offers an API and development tools to the
developers and how it helps the end user to access business applications.
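A minimal sketch of the kind of web application a PaaS typically hosts is shown below, assuming the Flask framework is available; on a platform such as Google App Engine, the provider would supply the runtime, web server, and scaling around code like this.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Business logic lives here; the platform supplies the runtime,
    # scaling, and patching around it.
    return "Hello from a PaaS-hosted app"

if __name__ == "__main__":
    # Run locally for development; a PaaS would start the app itself.
    app.run(port=8080)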


Benefits
Following are the benefits of PaaS model:


Lower administrative overhead

Customer need not bother about the administration because it is the responsibility of
cloud provider.

Lower total cost of ownership

Customer need not purchase expensive hardware, servers, power, and data storage.

Scalable solutions

It is very easy to scale the resources up or down automatically, based on their
demand.

More current system software

It is the responsibility of the cloud provider to maintain software versions and patch
installations.

Issues
Like SaaS, PaaS also places significant burdens on customer's browsers to maintain
reliable and secure connections to the provider’s systems. Therefore, PaaS shares
many of the issues of SaaS. However, there are some specific issues associated with
PaaS as shown in the following diagram:


Lack of portability between PaaS clouds

Although standard languages are used, the implementations of platform services
may vary. For example, the file, queue, or hash table interfaces of one platform may
differ from those of another, making it difficult to transfer workloads from one
platform to another.

Event based processor scheduling

PaaS applications are event-oriented, which poses resource constraints on
applications, i.e., they have to answer a request in a given interval of time.

Security engineering of PaaS applications

Since PaaS applications are dependent on network, they must explicitly use
cryptography and manage security exposures.

Characteristics
Here are the characteristics of PaaS service model:
 PaaS offers a browser-based development environment. It allows the
developer to create a database and edit the application code either via an
Application Programming Interface or point-and-click tools.
 PaaS provides built-in security, scalability, and web service interfaces.


 PaaS provides built-in tools for defining workflow, approval processes, and
business rules.
 It is easy to integrate PaaS with other applications on the same platform.
 PaaS also provides web services interfaces that allow us to connect the
applications outside the platform.

PaaS Types
Based on the functions, PaaS can be classified into four types as shown in the
following diagram:

Stand-alone development environments

The stand-alone PaaS works as an independent entity for a specific function. It does
not include licensing or technical dependencies on specific SaaS applications.

Application delivery-only environments

The application delivery PaaS includes on-demand scaling and application
security.

Open platform as a service

Open PaaS offers open source software that helps a PaaS provider run
applications.

Add-on development facilities

The add-on PaaS allows customization of an existing SaaS platform.

Cloud Computing Software as a Service (SaaS)


The Software-as-a-Service (SaaS) model allows software applications to be provided as
a service to end users. It refers to software that is deployed on a hosting service and
is accessible via the Internet. Several SaaS applications are listed below:

 Billing and invoicing system


 Customer Relationship Management (CRM) applications
 Help desk applications
 Human Resource (HR) solutions
Some SaaS applications are not customizable, such as the Microsoft Office
Suite. But SaaS provides an Application Programming Interface (API), which
allows the developer to develop customized applications.

Characteristics
Here are the characteristics of SaaS service model:
 SaaS makes the software available over the Internet.
 The software applications are maintained by the vendor.
 The license to the software may be subscription based or usage based, and it
is billed on a recurring basis.
 SaaS applications are cost-effective since they do not require any maintenance
at the end user side.
 They are available on demand.
 They can be scaled up or down on demand.
 They are automatically upgraded and updated.
 SaaS offers a shared data model. Therefore, multiple users can share a single
instance of the infrastructure. It is not required to hard-code functionality for
individual users.
 All users run the same version of the software.

Benefits
Using SaaS has proved to be beneficial in terms of scalability, efficiency and
performance. Some of the benefits are listed below:

 Modest software tools
 Efficient use of software licenses
 Centralized management and data
 Platform responsibilities managed by provider
 Multitenant solutions

Modest software tools

SaaS application deployment requires little or no client-side software
installation, which results in the following benefits:

 No requirement for complex software packages at the client side
 Little or no risk of configuration at the client side


 Low distribution cost

Efficient use of software licenses

The customer can have a single license for multiple computers running at different
locations, which reduces the licensing cost. Also, there is no requirement for license
servers because the software runs in the provider's infrastructure.

Centralized management and data

The cloud provider stores data centrally. However, the cloud providers may store data
in a decentralized manner for the sake of redundancy and reliability.

Platform responsibilities managed by providers

All platform responsibilities such as backups, system maintenance, security,
hardware refresh, power management, etc. are performed by the cloud provider. The
customer does not need to bother about them.

Multitenant solutions

Multitenant solutions allow multiple users to share a single instance of different
resources in virtual isolation. Customers can customize their applications without
affecting the core functionality.
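A very simplified sketch of multitenancy is shown below: one running instance serves several tenants, with each tenant's data and settings kept in virtual isolation. The tenant names and settings are invented for illustration.

# One shared application instance; per-tenant data and customization
# are looked up by tenant id, so tenants never see each other's records.
tenants = {
    "acme": {"theme": "dark", "records": ["invoice-1"]},
    "globex": {"theme": "light", "records": ["invoice-7", "invoice-9"]},
}

def list_records(tenant_id):
    tenant = tenants[tenant_id]              # virtual isolation by tenant key
    return {"theme": tenant["theme"], "records": list(tenant["records"])}

print(list_records("acme"))   # only acme's data, rendered with acme's settings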

Issues
There are several issues associated with SaaS, some of them are listed below:

 Browser based risks
 Network dependence
 Lack of portability between SaaS clouds

Browser based risks

If the customer visits a malicious website and the browser becomes infected,
subsequent access to the SaaS application might compromise the customer's data.
To avoid such risks, the customer can use multiple browsers and dedicate a specific
browser to accessing SaaS applications, or can use a virtual desktop while accessing
the SaaS applications.

Network dependence

The SaaS application can be delivered only when the network is continuously available.
The network should also be reliable, but network reliability cannot be guaranteed either
by the cloud provider or by the customer.


Lack of portability between SaaS clouds

Transferring workloads from one SaaS cloud to another is not easy because workflows,
business logic, user interfaces, and support scripts can be provider specific.

Open SaaS and SOA


Open SaaS refers to SaaS applications that are developed using open source
programming languages. These SaaS applications can run on any open source
operating system and database. Open SaaS has several benefits, listed below:

 No License Required
 Low Deployment Cost
 Less Vendor Lock-in
 More portable applications
 More Robust Solution
The following diagram shows the SaaS implementation based on SOA:

Public Cloud Model


Public Cloud allows systems and services to be easily accessible to the general public.
IT giants such as Google, Amazon and Microsoft offer cloud services via the
Internet. The Public Cloud Model is shown in the diagram below.

Benefits
There are many benefits of deploying cloud as public cloud model. The following
diagram shows some of those benefits:


Cost Effective

Since the public cloud shares the same resources with a large number of customers, it
turns out to be inexpensive.

Reliability

The public cloud employs a large number of resources from different locations. If any
of the resources fails, the public cloud can employ another one.

Flexibility

The public cloud can smoothly integrate with private cloud, which gives customers a
flexible approach.

Location Independence

Public cloud services are delivered through the Internet, ensuring location
independence.

Utility Style Costing

Public cloud is also based on pay-per-use model and resources are accessible
whenever customer needs them.

High Scalability


Cloud resources are made available on demand from a pool of resources, i.e., they
can be scaled up or down according to the requirement.

Disadvantages
Here are some disadvantages of public cloud model:

Low Security

In the public cloud model, data is hosted off-site and resources are shared publicly;
therefore, it does not ensure a high level of security.

Less Customizable

It is comparatively less customizable than private cloud.

Private Cloud Model


Private Cloud allows systems and services to be accessible within an organization.
The private cloud is operated only within a single organization. However, it may be
managed internally by the organization itself or by a third party. The private cloud
model is shown in the diagram below.

Benefits
There are many benefits of deploying cloud as private cloud model. The following
diagram shows some of those benefits:


High Security and Privacy

Private cloud operations are not available to the general public and resources are
shared from a distinct pool of resources. Therefore, it ensures
high security and privacy.

More Control

The private cloud has more control over its resources and hardware than the public
cloud because it is accessed only within an organization.

Cost and Energy Efficiency

The private cloud resources are not as cost effective as resources in public clouds
but they offer more efficiency than public cloud resources.

Disadvantages
Here are the disadvantages of using private cloud model:

Restricted Area of Operation

The private cloud is only accessible locally and is very difficult to deploy globally.

High Priced

Purchasing new hardware in order to fulfill the demand is a costly transaction.

Limited Scalability

The private cloud can be scaled only within capacity of internal hosted resources.


Additional Skills

In order to maintain the cloud deployment, the organization requires skilled staff.

Hybrid Cloud Model


Hybrid Cloud is a mixture of public and private cloud. Non-critical activities are
performed using public cloud while the critical activities are performed using private
cloud. The Hybrid Cloud Model is shown in the diagram below.

Benefits
There are many benefits of deploying cloud as hybrid cloud model. The following
diagram shows some of those benefits:

Scalability

It offers features of both, the public cloud scalability and the private cloud scalability.


Flexibility

It offers secure resources and scalable public resources.

Cost Efficiency

Public clouds are more cost effective than private ones. Therefore, hybrid clouds can
be cost saving.

Security

The private cloud in hybrid cloud ensures higher degree of security.

Disadvantages
Networking Issues

Networking becomes complex due to the presence of both private and public clouds.

Security Compliance

It is necessary to ensure that cloud services are compliant with security policies of
the organization.

Infrastructure Dependency

The hybrid cloud model is dependent on the internal IT infrastructure; therefore, it is
necessary to ensure redundancy across data centers.

Community Cloud Model


Community Cloud allows systems and services to be accessible to a group of
organizations. It shares the infrastructure between several organizations from a
specific community. It may be managed internally by the organizations or by a third
party. The Community Cloud Model is shown in the diagram below.


Benefits
There are many benefits of deploying cloud as community cloud model.

Cost Effective

Community cloud offers the same advantages as a private cloud at lower cost.

Sharing Among Organizations


Community cloud provides an infrastructure to share cloud resources and capabilities
among several organizations.

Security

The community cloud is comparatively more secure than the public cloud but less
secure than the private cloud.

Issues
 Since all data is located at one place, one must be careful in storing data in
community cloud because it might be accessible to others.
 It is also challenging to allocate responsibilities of governance, security and
cost among organizations.
