
CLOUD COMPUTING

What is Cloud computing?


Cloud computing is the delivery of computing as a service rather than a product, whereby shared
resources, software, and information are provided to computers and other devices as a utility (like the
electricity grid) over a network (typically the Internet). Cloud computing is a computing paradigm
where a large pool of systems is connected in private or public networks to provide dynamically
scalable infrastructure for application, data, and file storage.

Forrester defines cloud computing as:

“A pool of abstracted, highly scalable, and managed compute infrastructure capable of hosting end-customer applications and billed by consumption.”

Characteristics of Cloud computing?


Cloud computing exhibits the following key characteristics:

- On-demand self-service: computer services such as email, applications, network or server services can be provided without requiring human interaction with each service provider. Cloud service providers offering on-demand self-service include Amazon Web Services (AWS), Microsoft, Google, IBM and Salesforce.com. The New York Times and NASDAQ are examples of companies using AWS (NIST). Gartner describes this characteristic as “service based”.

- Broad network access: Cloud capabilities are available over the network and accessed
through standard mechanisms that promote use by heterogeneous thin or thick client
platforms such as mobile phones, laptops and PDAs.

- Resource pooling: The provider’s computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand. The resources include,
among others, storage, processing, memory, network bandwidth, virtual machines and email
services. The pooling of resources builds economies of scale (Gartner).

- Rapid elasticity: Cloud services can be rapidly and elastically provisioned, in some cases
automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer,
the capabilities available for provisioning often appear to be unlimited and can be purchased
in any quantity at any time.

- Measured service: Cloud computing resource usage can be measured, controlled, and
reported, providing transparency for both the provider and consumer of the utilised service.
Cloud computing services use a metering capability that enables providers and consumers to
control and optimise resource use. This implies that, just like airtime, electricity or municipal
water, IT services are charged per usage metrics – pay per use. The more you utilise, the
higher the bill. Just as utility companies sell power to subscribers, and telephone companies
sell voice and data services, IT services such as network security management, data centre
hosting or even departmental billing can now be easily delivered as a contractual service.
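As a rough illustration of pay-per-use metering, the short Python sketch below computes a monthly bill from metered usage (the usage figures and tariff rates are made up for the example):

```python
# Hypothetical metered usage for one month, and a hypothetical tariff per unit.
usage = {"cpu_hours": 120, "gb_stored": 50, "gb_transferred": 200}
tariff = {"cpu_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.01}

# Pay per use: the bill is simply metered quantity times unit price.
bill = sum(usage[meter] * tariff[meter] for meter in usage)
print(f"monthly bill: ${bill:.2f}")  # the more you utilise, the higher the bill
```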

 
Why use Clouds?
Clouds can provide users with a number of different benefits. Many businesses, large and
small, use cloud computing today either directly (e.g. Google or Amazon) or indirectly (e.g.
Twitter) instead of traditional on-site alternatives. There are a number of reasons why cloud
computing is so widely used among businesses today.
- Reduction of costs – unlike on-site hosting, the price of deploying applications in the cloud
can be lower due to reduced hardware costs from more effective use of physical resources.
- Universal access – cloud computing allows remotely located employees to access
applications and work via the internet.
- Up-to-date software – a cloud provider can upgrade software centrally, taking into account
feedback from previous software releases.
- Choice of applications – this allows flexibility for cloud users to experiment and choose the
best option for their needs. Cloud computing also allows a business to use, access and pay
only for what it uses, with a fast implementation time.
- Potential to be greener and more economical – the average amount of energy needed for a
computational action carried out in the cloud is far less than the average amount for an on-site
deployment. This is because different organisations can share the same physical resources
securely, leading to more efficient use of the shared resources.
- Flexibility – cloud computing allows users to switch applications easily and rapidly, using
the one that suits their needs best. However, migrating data between applications can be an
issue.

Driving factors towards the cloud


A number of pressing factors are driving the growth of cloud computing. I’ll cover some of
the biggest drivers towards cloud computing adoption here.
Improved IT Agility: As recently as a few years ago, it took far too long for many IT
departments to respond to increasing demand for computing capacity. Too much paperwork,
too many approvals, and a reliance on hard-to-deploy physical servers meant that IT was
often slow to respond to variable organizational needs. Virtualization helped that situation
immensely, and the arrival of cloud computing gives IT organizations even more of an ability
to easily (and cost-effectively) expand and reduce computing resources to meet fluctuating
demands.
Cost Savings and ROI: Cloud computing isn’t a panacea, but there are clear-cut cases where
moving part of your IT infrastructure to the cloud makes solid operational and financial
sense. Here at Penton Media we recently moved from a cumbersome legacy email newsletter
tool—developed in house—that required an ongoing (and expensive) commitment in terms of
user training and application maintenance to a new cloud-based email newsletter solution. If
you have legacy software applications in your own organization, are they really worth the
time, expense, and human capital needed to keep them running when superior cloud-based
alternatives are available?
Private Cloud vs. Public Cloud: The concept of the private cloud has gathered steam over
the past 12 months. Public cloud computing services generally rely on having your data on
someone else’s infrastructure. That can be a non-starter for many IT administrators,
especially if your organization operates under tricky auditing, compliance, or data location
requirements. That’s where the private cloud steps in: Leveraging virtualization and
commodity hardware, the private cloud can provide some of the elastic benefits of public
cloud computing without some of the inherent risks that public cloud computing still needs to
address.
Cloud-Savvy IT Staff: A new breed of IT professionals is stepping into leadership positions
in many organizations. Some fear that cloud computing could mean the end of their careers,
but savvy IT pros realize that someone in the organization has to take the lead in selecting
what IT platforms and services are moved to the cloud while simultaneously educating
management and the rest of the organization why other elements aren’t good candidates for
cloud computing treatment.
 
 
 
Grid Computing
1. Loosely coupled (decentralization)
2. Diversity and dynamism
3. Distributed job management & scheduling

Cloud Computing
1. Dynamic computing infrastructure
2. IT service-centric approach
3. Self-service-based usage model
4. Minimally or self-managed platform

Cluster Computing
1. Tightly coupled systems
2. Single system image
3. Centralized job management & scheduling system

Distributed Computing
Distributed computing solves a single large problem by breaking it down into several tasks,
where each task is computed on an individual computer of the distributed system.
 
 

 
GRID COMPUTING
Grid computing is a network-based computational model that has the ability to process large
volumes of data with the help of a group of networked computers that coordinate to solve a
problem together.

Basically, it’s a vast network of interconnected computers working towards a common
problem by dividing it into several small units called grids. It’s based on a distributed
architecture, which means tasks are managed and scheduled in a distributed way with no time
dependency.

The group of computers acts as a virtual supercomputer to provide scalable and seamless
access to wide-area computing resources that are geographically distributed and present them
as a single, unified resource to perform large-scale applications such as analyzing huge sets
of data.

Function of Grid Computing and Cloud Computing

The main function of grid computing is job scheduling using all kinds of computing resources:
a task is divided into several independent sub-tasks, and each machine on the grid is assigned
a sub-task. After all the sub-tasks are completed, their results are sent back to the main
machine, which handles and processes all of them.
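The divide-distribute-collect pattern described above can be sketched in plain Python using a process pool to stand in for grid nodes (a toy illustration of the idea, not real grid middleware):

```python
from multiprocessing import Pool

def subtask(chunk):
    # Each "grid node" computes its independent sub-task.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))              # the single large problem
    chunks = [data[i::4] for i in range(4)]    # divide into independent sub-tasks
    with Pool(processes=4) as pool:
        partials = pool.map(subtask, chunks)   # farm sub-tasks out to workers
    print(sum(partials))                       # main machine combines the results
```

Real grid middleware adds scheduling, fault tolerance and geographic distribution, but the divide-distribute-collect shape is the same.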

Cloud computing involves resource pooling: resources are grouped on an as-needed basis
from clusters of servers.

BIG DATA?
Big data is a term that describes the large volume of data – both structured and unstructured –
that inundates a business on a day-to-day basis.
But it’s not the amount of data that’s important. It’s what organizations do with the data that
matters.

Big data can be analyzed for insights that lead to better decisions and strategic business
moves.

Big data is often characterized by 3Vs:

–Volume

–Velocity

–Variety

Data can be classified as

–Social Data

–Machine data

–Transactional Data

VOLUME:  The amount of data matters. With big data, you’ll have to process high volumes
of low-density, unstructured data.

This can be data of unknown value, such as Twitter data feeds, clickstreams on a webpage or
a mobile app, or sensor-enabled equipment.

For some organizations, this might be tens of terabytes of data. For others, it may be
hundreds of petabytes.

VELOCITY: Velocity is the fast rate at which data is received and (perhaps) acted on.

Normally, the highest velocity of data streams directly into memory versus being written to
disk.

Some internet-enabled smart products operate in real time or near real time and will require
real-time evaluation and action.

VARIETY: Variety refers to the many types of data that are available.

Traditional data types were structured and fit neatly in a relational database.

With the rise of big data, data comes in new unstructured data types.

Unstructured and semi-structured data types, such as text, audio, and video, require additional
preprocessing to derive meaning and support metadata.
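As a small illustration of that preprocessing step, the Python snippet below flattens a semi-structured JSON event into the fixed columns a relational store expects (the field names are invented for the example):

```python
import json

# A semi-structured event: fields can vary from record to record.
raw = '{"user": "u42", "action": "click", "meta": {"page": "/home"}}'
event = json.loads(raw)

# Preprocessing: map the event onto fixed columns, tolerating missing fields.
row = (event.get("user"), event.get("action"), event.get("meta", {}).get("page"))
print(row)  # ('u42', 'click', '/home')
```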
IT as a Service
IT as a service (ITaaS) is an operational model where the IT service provider delivers an
information technology service to a business. The IT service provider can be an internal IT
organization or an external IT services company. The recipients of ITaaS can be a line-of-business
(LOB) organization within an enterprise or a small or medium-sized business (SMB).

The information technology is typically delivered as a managed service with a clear IT
services catalog and pricing associated with each of the catalog items.
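A minimal sketch of such a catalog as a data structure, in Python (the service names, units and prices are invented for illustration):

```python
# Hypothetical ITaaS catalog: each managed service has a unit and a transparent price.
catalog = {
    "managed-email":     {"unit": "mailbox/month", "price": 4.00},
    "backup-100gb":      {"unit": "month",         "price": 9.50},
    "helpdesk-standard": {"unit": "ticket",        "price": 12.00},
}

def quote(item, quantity):
    # Consumers pick catalog items and pay only for what they consume.
    return quantity * catalog[item]["price"]

print(quote("managed-email", 250))  # 1000.0 per month for 250 mailboxes
```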

At its core, ITaaS is a competitive business model where businesses have many options for IT
services and the internal IT organization has to compete against those other external options
in order to be the selected IT service provider to the business. Options for providers other
than the internal IT organization may include IT outsourcing companies and public cloud
providers.

IT as a Service (ITaaS) is a technology-delivery method that treats IT (information
technology) as a commodity, providing an enterprise with exactly the amount of hardware,
software, and support that it needs for an agreed-on monthly fee. In this context, IT
encompasses all of the technologies for creating, storing, exchanging, and using business data.

What is ITaaS?
IT as a Service (ITaaS) can be seen as a transformative operational model that allows a line
of business or user to consume Information Technology as a managed service. The services
are cataloged, allowing users to consume and pay only for the services they require. This
operating model can be adopted by internal IT departments or external vendors.

A variety of services, configuration settings, framework guidelines and technologies are
made available to address the unique demands of each user and line of business.

Consumers have a variety of choices and can employ IT solutions prepared to meet their
specific demands, which have already been assessed and prepared for by the ITaaS service
provider.

IT teams are no longer the tech folks focused on putting out the fire when an end-user fails to
employ a required IT service. Instead, they focus on addressing the unique requirements of
internal consumers, allowing them the options to choose the best available resources and
solutions, along with managing the entire experience lifecycle associated with it.
Business Focus: The operations and service architecture of an ITaaS vendor are
designed around the business requirements of the organization instead of the technical
projects and IT infrastructure running on-premises.

IT Management Framework: The leadership should support decisions on IT services with a
focus on business and end-user requirements instead of individual projects and technology
assets.

Financing: The pricing of an ITaaS offering should justify the promised cost savings
and value. As a broker and managed service provider, ITaaS providers may have a limited
range of profitability when customers can purchase a service directly from the vendor.

Cross-Functional Expertise: IT departments transitioning to the ITaaS model will require
expertise with business and sales knowledge to connect the right user with the right external
provider. Traditionally, IT departments comprise IT specialists with little contribution to
business decisions associated with the choice of new technology deployment.

Agile ITSM: Since ITaaS is focused on the process of serving user requests and
aligning IT with business, it must adopt an appropriate ITSM framework that enables agile
and effective business operations.

User Experience: ITaaS should be able to offer an improved user experience compared
to traditional IT departments. The catalog listing ITaaS offerings should be exhaustive, with
transparent pricing models. The managed services should ensure that end-user requirements
are addressed effectively across the service lifecycle.

BENEFITS OF ITaaS
–IT departments in large enterprises typically operate as a single point of contact and service
provider to a large internal user base.

– With the ITaaS model, organizations can employ the same services without operating an IT
department in-house. This model is particularly suitable for SMB firms operating on a limited
budget and resources.

–For large enterprises, internal IT departments can take the role of an ITaaS vendor to
internal users and lines of business. This requires decoupling of IT shops from the
organizational structure, introducing new roles and responsibilities, as well as a cultural
change in the way IT interacts and serves its internal users.

–The ITaaS model offers a simple concept: IT services are consumed effectively when end-
users are given sufficient choices between services—and charged on a consumption basis.
In this context, ITaaS vendors are responsible for maintaining a vast library of IT solutions
and services required by end-users.

–ITaaS vendors take the role of managed service providers and brokers. With the service-
oriented model, they manage and orchestrate the IT service lifecycle: from identifying a user
requirement to supporting the final outcome of an effectively delivered service. It includes
tasks such as finding available solutions, negotiating SLAs, and helping users make well-
informed decisions when selecting a service.

APPLICATIONS OF CLOUD COMPUTING:

Let’s start elaborating on the top 7 applications of cloud computing.
1. Online Data Storage
Cloud computing allows storage of and access to data such as files, images, audio, and
videos in cloud storage. In this age of big data, storing huge volumes of business data
locally requires more and more space and escalating costs. This is where cloud storage
comes into play: businesses can store and access data using multiple devices.
The interface provided is easy to use, convenient, and has the benefits of high speed,
scalability, and integrated security. 
2. Backup and Recovery
Cloud service providers offer a safe storage and backup facility for data and resources
on the cloud. In a traditional computing system, data backup is a complex problem,
and often, in case of a disaster, data can be permanently lost. With cloud computing,
data can be easily recovered with minimal damage in case of a disaster.
3. Big Data Analysis
One of the most important applications of cloud computing is its role in extensive
data analysis. The extremely large volume of big data makes it impossible to store
using traditional data management systems. Due to the unlimited storage capacity
of the cloud, businesses can now store and analyze big data to gain valuable
business insights. 
4. Testing and Development
Cloud computing applications provide the easiest approach for the testing and
development of products. In traditional methods, such an environment would be
time-consuming and expensive, due to the need to set up IT resources, infrastructure,
and manpower. With cloud computing, businesses get scalable and flexible cloud
services, which they can use for product development, testing, and deployment.
5. Antivirus Applications
With cloud computing comes cloud antivirus software, which is stored in the cloud,
from where it monitors viruses and malware in the organization’s systems and fixes
them. Earlier, organizations had to install antivirus software within their own systems
to detect security threats.
6. E-commerce Application
E-commerce applications in the cloud enable users and e-businesses to respond
quickly to emerging opportunities. They offer business leaders a new approach to
getting things done with minimal cost and in minimal time. Businesses use cloud
environments to manage customer data, product data, and other operational
systems.
 
7. Cloud Computing in Education  
E-learning, online distance learning programs, and student information portals are
some of the key changes brought about by applications of cloud computing in the
education sector. This new learning environment provides an attractive setting for
learning, teaching, and experimenting to students, teachers, and researchers, so
they can connect to the cloud of their establishment and access data and
information.
UNIT-2
VIRTUALIZATION?
 It is a method used to divide and present the resources of a physical machine (host) as
multiple execution virtual machines (guests), by adding a layer of abstraction between
the hardware and the applications.
 The layer of abstraction is usually implemented by software called a Virtual Machine
Monitor (VMM), which manages the physical hardware resources of the host machine
and makes them available to the virtual machines.

●Virtualization nowadays is the foundation of cloud computing and offers numerous
benefits, especially for the enterprise domain.
●These benefits are translated into cost-saving practices for companies.
●Virtualization brings several benefits to data center operators and service providers.
HISTORY OF VIRTUALIZATION:
●Virtualization technology was first developed during the 1960s, in an effort to fully
utilize expensive mainframes through time-sharing.

●IBM's VM/370, and its 1967 predecessor CP-67 for the System/360, set the
foundation of the virtual machine architecture as it is known today.

●At that time it was vital, in terms of efficiency, to be able to take full advantage of the
mainframes' power by allowing different entities to run multiple execution
environments which shared the underlying hardware.

Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.

1) Hardware Virtualization:
When the virtual machine software, or virtual machine manager (VMM), is installed directly
on the hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory and other
hardware resources.
After virtualization of the hardware system, we can install different operating systems on it
and run different applications on those OSes.
Usage:
Hardware virtualization is mainly done for server platforms, because controlling virtual
machines is much easier than controlling a physical server.
2) Operating System Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed on the
host operating system instead of directly on the hardware system, it is known as operating
system virtualization.
Usage: Operating system virtualization is mainly used for testing applications on different
OS platforms.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly
on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple
servers on an on-demand basis and for load balancing.
4) Storage Virtualization:
Storage virtualization is the process of grouping physical storage from multiple network
storage devices so that it looks like a single storage device.
Storage virtualization is also implemented by using software applications.
Usage:
Storage virtualization is mainly done for back-up and recovery purposes.
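The idea can be sketched in a few lines of Python: several independent "devices" (plain dictionaries here) sit behind a single read/write interface, and a simple placement policy decides where each object lands. This is a toy model of the concept, not a real storage product:

```python
class VirtualStore:
    """Presents several physical stores as one logical storage device."""

    def __init__(self, backends):
        self.backends = backends  # e.g. disks on different network storage devices

    def _pick(self, key):
        # Placement policy: hash the key to choose a backend (consistent per run).
        return self.backends[hash(key) % len(self.backends)]

    def write(self, key, data):
        self._pick(key)[key] = data

    def read(self, key):
        return self._pick(key)[key]

pool = VirtualStore([{}, {}, {}])  # three "devices", one namespace
pool.write("report.pdf", b"...")
print(pool.read("report.pdf"))
```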
 

Virtualization in Cloud Computing


Virtualization is the "creation of a virtual (rather than actual) version of something,
such as a server, a desktop, a storage device, an operating system or network resources".

In other words, virtualization is a technique which allows sharing a single physical
instance of a resource or an application among multiple customers and organizations.
It does so by assigning a logical name to a physical resource and providing a pointer
to that physical resource when demanded.

What is the concept behind Virtualization?

Creation of a virtual machine over an existing operating system and hardware is
known as hardware virtualization. A virtual machine provides an environment that
is logically separated from the underlying hardware.

The machine on which the virtual machine is created is known as the host machine,
and that virtual machine is referred to as a guest machine.


CHARACTERISTICS OF VIRTUALIZATION?
1. Increased Security – The ability to control the execution of a guest program in a
completely transparent manner opens new possibilities for delivering a secure, controlled
execution environment. All the operations of the guest programs are generally performed
against the virtual machine, which then translates and applies them to the host programs.
A virtual machine manager can control and filter the activity of the guest programs, thus
preventing some harmful operations from being performed. Resources exposed by the host
can then be hidden or simply protected from the guest. Increased security is a requirement
when dealing with untrusted code.
Example 1: Untrusted code can be analyzed in a Cuckoo sandbox environment. The term
sandbox identifies an isolated execution environment where instructions can be filtered and
blocked before being translated and executed in the real execution environment.
Example 2: The expression "sandboxed version of the Java Virtual Machine (JVM)" refers
to a particular configuration of the JVM where, by means of a security policy, instructions
that are considered potentially harmful can be blocked.

2. Managed Execution – In particular, sharing, aggregation, emulation, and isolation are
the most relevant features.

(Figure: functions enabled by a managed execution.)

3. Sharing – Virtualization allows the creation of a separate computing environment within
the same host. This basic feature is used to reduce the number of active servers and limit
power consumption.

4. Aggregation – It is possible to share physical resources among several guests, but
virtualization also allows aggregation, which is the opposite process. A group of separate
hosts can be tied together and represented to guests as a single virtual host. This
functionality is implemented with cluster management software, which harnesses the
physical resources of a homogeneous group of machines and represents them as a single
resource.

5. Emulation – Guest programs are executed within an environment that is controlled by
the virtualization layer, which ultimately is a program. A completely different environment
with respect to the host can also be emulated, thus allowing the execution of guest programs
requiring specific characteristics that are not present in the physical host.

6. Isolation – Virtualization allows providing guests—whether they are operating systems,
applications, or other entities—with a completely separate environment in which they are
executed. The guest program performs its activity by interacting with an abstraction layer,
which provides access to the underlying resources. The virtual machine can filter the
activity of the guest and prevent harmful operations against the host.

Besides these characteristics, another important capability enabled by virtualization is
performance tuning. This feature is a reality at present, given the considerable advances in
hardware and software supporting virtualization. It becomes easier to control the
performance of the guest by finely tuning the properties of the resources exposed through
the virtual environment. This capability provides a means to effectively implement a
quality-of-service (QoS) infrastructure.

7. Portability – The concept of portability applies in different ways according to the
specific type of virtualization considered.
In the case of a hardware virtualization solution, the guest is packaged into a virtual image
that, in most cases, can be safely moved and executed on top of different virtual machines.
In the case of programming-level virtualization, as implemented by the JVM or the .NET
runtime, the binary code representing application components (jars or assemblies) can run
without any recompilation on any implementation of the corresponding virtual machine.

Pros of Virtualization in Cloud Computing:

 Efficient hardware utilization –
With the help of virtualization, hardware is used efficiently by both the user and the
cloud service provider. The user's need for physical hardware decreases, which lowers
costs. From the service provider's point of view, virtualizing hardware reduces the
amount of hardware that must be procured to serve users. Before virtualization,
companies and organizations had to set up their own servers, requiring extra space,
engineers to monitor performance, and extra hardware cost; with virtualization, these
limitations are removed by cloud vendors, who provide the services without the
customer setting up any physical hardware.
 Availability increases with virtualization –
One of the main benefits of virtualization is that it provides advanced features which
allow virtual instances to be available at all times. It also has the capability to move a
virtual instance from one virtual server to another, which is a tedious and risky task
in a server-based system. During migration of data from one server to another, its
safety is ensured. Also, we can access information from any location, at any time,
from any device.
 Disaster recovery is efficient and easy –
With the help of virtualization, data recovery, backup and duplication become very
easy. In the traditional method, if a server system is damaged in a disaster, the
certainty of data recovery is very low. But with virtualization tools, real-time data
backup, recovery and mirroring become easy tasks and give assurance of near-zero
data loss.
 Virtualization saves energy –
Virtualization helps save energy because, when moving from physical servers to
virtual servers, the number of servers decreases, so monthly power and cooling costs
decrease, which saves money as well. As the cooling requirement is reduced, carbon
production by the devices also decreases, resulting in a cleaner, less polluted
environment.
 Quick and easy setup –
In traditional methods, setting up physical systems and servers is very time-consuming:
first purchase them in bulk, then wait for shipment, then set them up, and then spend
time installing the required software. With virtualization, the entire process takes much
less time, resulting in a productive setup.
 Cloud migration becomes easy –
Many companies that have already spent a lot on servers hesitate to shift to the cloud.
But it is more cost-effective to shift to cloud services, because all the data present on
their servers can be easily migrated to a cloud server, saving on maintenance charges,
power consumption, cooling costs, the cost of a server maintenance engineer, and so
on.

Cons of Virtualization:

 Data can be at risk –
Working on virtual instances on shared resources means that our data is hosted on
third-party infrastructure, which puts it in a vulnerable position. A hacker may attack
our data or attempt unauthorized access. Without a security solution, our data is under
threat.
 Learning new infrastructure –
As organizations shift from servers to the cloud, they require skilled staff who can
work with the cloud easily. They must either hire new IT staff with the relevant skills
or provide training in those skills, which increases the company's costs.
 High initial investment –
It is true that virtualization reduces companies' costs, but it is also true that the cloud
requires a high initial investment. It provides numerous services that may not be
required, and an inexperienced organization setting up in the cloud may purchase
unnecessary services.

Discuss the Term Hypervisor with all its Types

The Virtual Machine Monitor (VMM), also known as the hypervisor, is the core part of
every system virtualization solution. Implemented as software or hardware, it allows
multiple operating systems to run on a single physical machine. Essentially, the VMM can
be seen as a small and light operating system with basic functionality, responsible for
controlling the underlying hardware resources and making them available to each guest VM.

Hypervisors should be as minimal and light as possible in order to achieve efficiency and
optimal security. All the resources are provided uniformly to each VM, making it possible
for VMs to run on any kind of system regardless of its architecture or different subsystems.
VMMs have two main tasks to accomplish: enforce isolation between the VMs, and manage
the resources of the underlying hardware pool.

–  Isolation is one of the vital security capabilities that VMMs should offer. All interactions
between the VMs and the underlying hardware should go through the VMM. It is
responsible for mediating the communications and must be able to enforce isolation and
containment. A VM must be restricted from accessing parts of the memory that belong to
another VM and, similarly, a potential crash or failure in one VM should not affect the
operation of the others.

–  Resource Management: Normally, the task of managing and sharing the available
hardware resources is the operating system's responsibility. In the case of a virtualized
system, this responsibility becomes an integral part of the hypervisor's function. The
hypervisor should manage CPU load balancing, map physical to logical memory addresses,
trap the CPU's instructions, migrate VMs between physical systems and so on, while
protecting the integrity of each VM and the stability of the whole system. A substantial
number of lines of the hypervisor's code are written to cope with the managerial tasks it has
to deliver. At the same time, the code should be as minimal as possible to avoid security
vulnerabilities in the virtualization layer.

TYPE 1 Hypervisor

–  Type I hypervisors are also known as native or bare-metal hypervisors.

–  Type I hypervisors can be categorized further depending on their design, which can be
monolithic or microkernel.

–  Type I hypervisors run on top of the hardware. Hypervisors of this type are bootable
operating systems and, depending on their design, may incorporate the device drivers for
communication with the underlying hardware.

–  Type I hypervisors offer optimal efficiency and are usually preferred for server
virtualization. By placing them on top of the bare hardware, they allow for direct
communication with it. The security of the whole system is based on the security
capabilities of the hypervisor.

–  In a monolithic design, the device drivers are included in the hypervisor's code. This
approach offers better performance, since the communication between applications and
hardware takes place without any intermediation by another entity. However, this entails
that the hypervisor will have a large amount of code to accommodate the drivers, which
makes the attack surface bigger, and the security of the system could be compromised more
easily.

–  In the microkernel design, the device drivers are installed in the operating system of the
parent guest. The parent guest is one privileged VM used for creating, destroying and
managing the non-privileged child guest VMs residing in the system. Each one of the child
guest machines that needs to communicate with the hardware has to mediate through the
parent guest to get access to the hardware.

TYPE 2 Hypervisor

–  Type II hypervisors are installed on top of the host operating system and run as
applications (e.g. VMware Workstation).

–  These hypervisors allow for creating virtual machines that run on the host operating
system, which in turn provides the device drivers to be used by the VMs.

–  Hypervisors of this type are less efficient compared to Type I hypervisors, since one extra
layer of software is added to the system, making the communication between application
and hardware more complex.

–  The security of a system of this type relies essentially and completely on the security of
the host operating system.

–  Any breach of the host operating system could potentially result in complete control over
the virtualization layer.

Discuss the concept of Multitenant Architecture in Cloud Computing

Multitenancy is a fundamental technology that clouds use to share IT resources cost-
efficiently and securely. Just like in an apartment building, where many tenants cost-
efficiently share the common infrastructure of the building but have walls and doors that
give them privacy from other tenants, a cloud uses multitenancy technology to share IT
resources securely among multiple applications and tenants (businesses, organizations,
etc.) that use the cloud.

Some clouds use virtualization-based architectures to isolate tenants; others use custom
software architectures to get the job done. In a multi-tenant architecture, multiple instances
of an application operate in a shared environment. This architecture works because each
tenant is integrated physically but logically separated, meaning that a single instance of the
software will run on one server and then serve multiple tenants. In this way, a software
application in a multi-tenant architecture can share a dedicated instance of configurations,
data, user management and other properties.
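A common shared-schema sketch of this idea: one table serves every tenant, and a tenant_id column keeps each tenant's rows logically separated. The snippet below uses Python's built-in sqlite3 module, with invented tenant names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("acme", "laptop"), ("globex", "phone"), ("acme", "monitor")],
)

def orders_for(tenant_id):
    # Every query is scoped by tenant_id: physically shared, logically separated.
    return conn.execute(
        "SELECT item FROM orders WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(orders_for("acme"))    # [('laptop',), ('monitor',)]
print(orders_for("globex"))  # [('phone',)]
```

One software instance and one table serve both tenants, yet neither can see the other's rows as long as every query carries the tenant_id filter.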

Importance  of Multitenancy

–  Multi-tenant architectures are found in both public cloud and private cloud environments,
allowing each tenant's data to be separated from the others'. For example, in a multi-tenant
public cloud, the same servers will be used in a hosted environment to host multiple users.

–  Each user is given a separate and ideally secure space within those servers to store data.
The multi-tenant architecture can also aid in providing a better ROI for organizations, as
well as quickening the pace of maintenance and updates for tenants.

Multi-tenant vs. single-tenant

–  Multi-tenancy can be contrasted with single-tenancy, an architecture in which each
customer has their own software instance and may be given access to source code.

–  In single-tenant architectures, a tenant has a singular instance of a SaaS application
dedicated to them, unlike multi-tenancy, where services are shared. Because each tenant is
in a separate environment, they are not bound in the same way that users of shared
infrastructure would be, meaning single-tenant architectures are much more customizable.

–  Multi-tenancy is the more widely used option of the two, as most SaaS services operate
on multi-tenancy.

–  In comparison to single-tenancy, multi-tenancy is cheaper, has more efficient resource
usage and fewer maintenance costs, as well as a potential for larger computing capacity.

–  With a multi-tenant architecture, the provider only has to make updates once. With a
single-tenant architecture, the provider must touch multiple instances of the software in
order to make updates.

–  A potential customer would likely choose a single-tenant infrastructure over multi-tenancy
for the ability to have more control and flexibility in their environment, typically to address
specific requirements.

Advantages of Multitenant Architecture

–  It is less expensive when compared to other tenant hosting architectures.
–  It offers pay-for-what-you-need pricing models.
–  Tenants don't have to worry about updates, since they are pushed out by the host provider.
–  Tenants don't have to worry about the hardware their data is being hosted on.
–  Providers only have to monitor and administrate a single system.
–  The architecture is easily scalable.

Disadvantages of Multitenant Architecture

–  Multi-tenant apps tend to be less flexible than apps in other tenant architectures, such as
single-tenancy.
–  Multi-tenancy is, in general, more complex than single-tenancy.
–  Multi-tenant apps need stricter authentication and access controls for security.
–  Tenants have to worry about noisy neighbors, i.e. someone else on the same CPU
consuming a lot of cycles, which may slow response time.
–  Downtime may also be an issue, depending on the provider.

Discuss the concept of Cloud APIs with the help of an example

The ability to enhance the cloud experience and have cross-cloud compatibility has helped
form the Cloud API (Application Programming Interface) environment.

Administrators can integrate applications and other workloads into the cloud using these
APIs.

Understanding the cloud API model isn’t always easy. There are many ways to integrate
into an infrastructure, and each methodology has its own underlying components. To get a
better understanding of cloud computing and how APIs fit into the process, it’s important
to break down the conversation at a high level. There are four major areas where cloud
computing will need to integrate with another platform (or even another cloud provider).

PaaS APIs (Service-level):
Also known as Platform-as-a-Service APIs, these service APIs are designed to provide
access and functionality for a cloud environment. This means integration with databases,
messaging systems, portals, and even storage components.

SaaS APIs (Application-level):
These APIs are also referred to as Software-as-a-Service APIs. Their goal is to help connect
the application layer with the cloud and the underlying IT infrastructure. CRM and ERP
applications are examples of where application APIs can be used to create a cloud
application extension for your environment.

IaaS APIs (Infrastructure-level):
Commonly referred to as Infrastructure-as-a-Service APIs, these APIs help control specific
cloud resources and their distribution. For example, the rapid provisioning or de-provisioning
of cloud resources is something that an infrastructure API can help with. Furthermore,
network configuration and workload (VM) management are also areas where these APIs
are used.
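As a hedged illustration of an infrastructure-level API, the sketch below provisions and then terminates a VM with boto3, the AWS SDK for Python. It assumes boto3 is installed and AWS credentials are configured; the AMI ID is a placeholder, not a real image:

```python
import boto3

# Assumes AWS credentials are already configured (e.g. via environment variables).
ec2 = boto3.client("ec2", region_name="us-east-1")

# Rapid provisioning through the IaaS API: one call launches a VM.
# "ami-12345678" is a placeholder image ID for illustration only.
response = ec2.run_instances(
    ImageId="ami-12345678",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# De-provisioning is just as fast: release the resource when done.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The same pattern applies to other IaaS APIs: a single authenticated call provisions or releases infrastructure in seconds.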

Cloud provider and cross-platform APIs

●      Many environments today don’t use only one cloud provider or even one platform;
there is now a need for greater cross-platform compatibility. More providers are offering
generic HTTP and HTTPS API integration to allow their customers greater cloud versatility.

●      Furthermore, cross-platform APIs give cloud tenants the ability to access resources
not just from their primary cloud provider, but from others as well. This can save a lot of
time and development energy, since organizations can now access the resources and
workloads of different cloud providers and platforms.

Cloud API

●      Apache (Citrix) CloudStack

●      Amazon Web Services API and Eucalyptus

●      Google Compute Engine

●      Simple Cloud

●      OpenStack API

●      VMware vCloud API

Each solution and platform has its own benefits and challenges. However, many of them
have something in common: interoperability. For example, the CloudStack model (although
backed by Citrix) still integrates with any underlying hypervisor and supports other common
cloud API models, including the AWS API, the OpenStack API and even the VMware
vCloud API.

●      Other solutions, such as the Simple Cloud API, are developed and funded by a number
of organizations to create a true cross-platform cloud environment. Simple Cloud APIs are
able to integrate with services from Amazon and Microsoft. The solution that you choose to
work with will depend on the infrastructure that you are trying to deliver. If storage
connectivity is a concern, look for a platform that easily integrates with various storage
models across a WAN.

Discuss the concept of Billing and Metering of services in the context of Cloud Computing


Metered Billing

–  From power usage to network traffic, there are a lot of moving parts in the cloud,
necessitating tools that capture and measure this activity in order to record and report
various aspects of system performance. Organizations using cloud services typically receive
daily emails that provide alerts for spending data, usage spikes, sudden and unexpected
changes, and more. This is called "metering".

–  Metered billing is a pricing model in which you pay for a service based only on your level
of usage. For example, the cost of a service might depend on time used, volume of data
processed, or CPU cycles, depending on the type of service. You receive a monthly bill to
pay for your actual level of usage and nothing more.

–  Metered billing is an advancement made possible by the increasing number of
applications and services being delivered via the cloud.

–  Under a metered-billing pricing model, the cloud-based application must be able to track
your usage level and automatically calculate a price that matches your usage level.

–  Compared to other pricing models, such as multi-year licenses or even traditional
pay-as-you-go models, metered billing enables a much higher degree of agility and
flexibility in resource use, provisioning capacity on the fly without incurring excessive
costs.
 
Infrastructure as a service and billing and metering services

–  Historically, the high cost of provisioning servers and infrastructure limited the ability to
develop software as a service (SaaS) applications.

–  For example, it would take weeks if not months to plan, order, ship, and install new server
hardware in the data center. Today, new billing and metering models allow procurement of
hardware and operating systems — known as infrastructure as a service (IaaS) — in less
than a minute.

–  The primary concepts of IaaS include:
–  Servers per hour serving an on-demand model
–  Reserved servers for better planning
–  Higher and lower compute resource units based on application performance
–  Volume-based metering on the number of instances consumed
–  Prepaid and reserved infrastructure resources
–  Clustered server resources

Purchasing Options

–  On-Demand Instances – On-Demand Instances let you pay for compute capacity by the
hour with no long-term commitments.

–  Reserved Instances – Reserved Instances provide you with a significant discount (up to
75%) compared to On-Demand Instance pricing.

–  Spot Instances – Spot Instances allow customers to bid on unused Amazon EC2 capacity
and run those instances for as long as their bid exceeds the current Spot Price.
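The purchasing options differ mainly in price and in how capacity is obtained; the toy Python comparison below uses invented rates (not actual AWS prices) to show the trade-offs:

```python
HOURS_PER_YEAR = 24 * 365

# Invented hourly rates for illustration; Reserved is discounted vs On-Demand.
on_demand_rate, reserved_rate = 0.10, 0.03
print("on-demand / year:", HOURS_PER_YEAR * on_demand_rate)  # no commitment
print("reserved  / year:", HOURS_PER_YEAR * reserved_rate)   # long-term discount

# Spot: the instance runs only while the bid exceeds the current Spot Price.
bid = 0.05
spot_prices = [0.02, 0.04, 0.06, 0.03]  # hourly market prices (invented)
hours_obtained = sum(1 for price in spot_prices if bid > price)
print("spot hours obtained:", hours_obtained)  # 3 of the 4 hours
```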

Platform as a service and billing and metering services

–  Platform as a service (PaaS) billing and metering are determined by actual usage, as
platforms differ in aggregate and instance-level usage measures.

–  Actual-usage billing enables PaaS providers to run application code from multiple tenants
across the same set of hardware, depending on the granularity of usage monitoring.

–  For example, the network bandwidth, CPU utilization, and disk usage per transaction or
application can determine PaaS cost.

–  The primary concepts for PaaS metering and billing include:
–  Incoming and outgoing network bandwidth
–  CPU time per hour
–  Stored data
–  High availability
–  Monthly service charge

SaaS and billing and metering services

–  The traditional concept for billing and metering SaaS applications is a monthly fixed
cost; in some cases, depending on the amount of data or number of "seats", the billing and
pricing are optimized.

–  The number of users is determined by how many users the organization allows to access
the SaaS application, which increases the monthly fee; in some cases, if certain volumes are
met, there is a discount.

–  For instance, sales software provided as a service might cost US$50 per month per sales
agent for a company using the application.

–  The primary concepts for SaaS billing and metering include:
–  Monthly subscription fees
–  Per-user monthly fees

–  The monthly subscription fee is a fixed cost billed per month, often for a minimum
contracted length of agreement of one year.

–  The per-month billing model changes the high initial investment from a software capital
cost to a monthly operational expense.

–  This model is especially appealing to small and medium-sized organizations, helping
them get started with the software required for their business initiatives.
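A small sketch of the per-seat model described above, using the US$50-per-agent figure from the example and an invented volume-discount threshold:

```python
def monthly_fee(seats, per_seat=50.0, discount_threshold=100, discount=0.10):
    # Per-user monthly fee; "if certain volumes are met, there is a discount".
    fee = seats * per_seat
    if seats >= discount_threshold:
        fee *= 1 - discount
    return fee

print(monthly_fee(20))   # 1000.0 for 20 sales agents
print(monthly_fee(150))  # 6750.0 after the 10% volume discount
```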

Scalability and Elasticity in Cloud Computing?

Cloud Elasticity:
Elasticity refers to the ability of a cloud to automatically expand or compress
infrastructural resources on a sudden up or down in requirements, so that the workload
can be managed efficiently. This elasticity helps to minimize infrastructural cost. It is
not applicable to every kind of environment; it is helpful only in scenarios where
resource requirements fluctuate up and down suddenly for a specific time interval. It is
not practical where a persistent resource infrastructure is required to handle a heavy
workload.
It is most commonly used in pay-per-use public cloud services, where IT managers are
willing to pay only for the duration for which they consumed the resources.
Example:
Consider an online shopping site whose transaction workload increases during a festive
season like Christmas. For this specific period of time, the resources need to spike up.
In order to handle this kind of situation, we can go for a Cloud Elasticity service rather
than Cloud Scalability. As soon as the season ends, the deployed resources can be
requested for withdrawal.
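The scale-out and scale-in behaviour in this example can be sketched as a toy autoscaling loop in Python (the load figures and thresholds below are invented for illustration):

```python
# Toy autoscaling loop: expand capacity on a demand spike, release it after.
capacity = 2                                   # instances currently running
for load in [2, 3, 9, 10, 4, 2]:               # demand, in "instances worth" of work
    if load > capacity:                        # festive-season spike: scale out
        capacity = load
    elif load < capacity // 2:                 # season over: scale back in
        capacity = max(2, load)                # keep a small baseline running
    print(f"load={load} -> capacity={capacity}")
```

Real clouds drive the same decision from metrics such as CPU utilisation, adding and withdrawing instances automatically.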

Types of Scalability:
1. Vertical Scalability (Scale-up) –
In this type of scalability, we increase the power of the existing resources in the
working environment in an upward direction.

2. Horizontal Scalability (Scale-out) –
In this kind of scaling, resources are added in a horizontal row, i.e. more machines of
the same kind.

3. Diagonal Scalability –
This is a mixture of both horizontal and vertical scalability, where resources are added
both vertically and horizontally.
Difference Between Cloud Elasticity and Scalability:
– Cloud Elasticity automatically expands or compresses resources to match sudden,
short-term swings in demand; it suits pay-per-use public cloud services, where you pay
only for the duration consumed.
– Cloud Scalability adds resources to handle persistent growth in workload, and can be
achieved vertically (scale-up), horizontally (scale-out) or diagonally.