Cloud Computing
- Broad network access: Cloud capabilities are available over the network and accessed
through standard mechanisms that promote use by heterogeneous thin or thick client
platforms such as mobile phones, laptops and PDAs.
- Resource pooling: The provider’s computing resources are pooled to serve
multiple consumers using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand. The resources include,
among others, storage, processing, memory, network bandwidth, virtual machines and email
services. Pooling the resources together builds economies of scale (Gartner).
- Rapid elasticity: Cloud services can be rapidly and elastically provisioned, in some cases
automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer,
the capabilities available for provisioning often appear to be unlimited and can be purchased
in any quantity at any time.
- Measured service: Cloud computing resource usage can be measured, controlled, and
reported, providing transparency for both the provider and the consumer of the utilised service.
Cloud computing services use a metering capability that enables control and optimisation of
resource use. This implies that, just like air time, electricity or municipal water, IT services
are charged per usage metrics – pay per use. The more you utilise, the higher the bill. Just as
utility companies sell power to subscribers, and telephone companies sell voice and data
services, IT services such as network security management, data center hosting or even
departmental billing can now be easily delivered as a contractual service.
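The pay-per-use model described above can be sketched as a simple metering computation. The resource names and unit rates below are invented for illustration; a real provider's metering pipeline would be far more granular.

```python
# Sketch of pay-per-use metering: usage per resource is multiplied by
# a unit rate, like an electricity or water bill. All resource names
# and rates here are hypothetical.
RATES = {
    "storage_gb_month": 0.02,  # $ per GB stored per month
    "vm_hours": 0.05,          # $ per VM-hour
    "bandwidth_gb": 0.01,      # $ per GB transferred
}

def metered_bill(usage: dict) -> float:
    """Return the total charge for one period of metered usage."""
    return round(sum(RATES[resource] * amount
                     for resource, amount in usage.items()), 2)

bill = metered_bill({"storage_gb_month": 500, "vm_hours": 720, "bandwidth_gb": 100})
print(bill)  # 10.0 + 36.0 + 1.0 = 47.0
```

The more you utilise, the higher the bill: the total simply scales with each metered quantity.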
Why use Clouds?
Clouds can provide users with a number of different benefits. Many businesses large and
small use cloud computing today either directly (e.g. Google or Amazon) or indirectly (e.g.
Twitter) instead of traditional on-site alternatives. There are a number of reasons why cloud
computing is so widely used among businesses today.
- Reduction of costs – unlike on-site hosting, the price of deploying applications in the cloud
can be lower due to reduced hardware costs and more effective use of physical resources.
- Universal access - cloud computing can allow remotely located employees to access
applications and work via the internet.
- Up to date software - a cloud provider will also be able to upgrade software keeping in
mind feedback from previous software releases.
- Choice of applications – this allows flexibility for cloud users to experiment and choose the
best option for their needs. Cloud computing also allows a business to use, access and pay
only for what they use, with a fast implementation time.
- Potential to be greener and more economical – the average amount of energy needed for a
computational action carried out in the cloud is far less than the average amount for an on-site
deployment. This is because different organisations can share the same physical resources
securely, leading to more efficient use of the shared resources.
- Flexibility – cloud computing allows users to switch applications easily and rapidly, using
the one that suits their needs best. However, migrating data between applications can be an
issue.
GRID COMPUTING
Grid computing is a network-based computational model that has the ability to process large
volumes of data with the help of a group of networked computers that coordinate to solve a
problem together.
The group of computers acts as a virtual supercomputer to provide scalable and seamless
access to wide-area computing resources that are geographically distributed and present them
as a single, unified resource to perform large-scale applications such as analyzing huge sets
of data.
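The coordinate-to-solve-together idea can be sketched with a local process pool standing in for grid nodes. A real grid would use middleware (e.g. Globus) and wide-area scheduling, but the split-work/compute/combine pattern is the same.

```python
# Sketch of the grid pattern: split a large job into chunks, let a
# pool of workers (standing in for networked grid nodes) process them
# in parallel, then combine the partial results.
from multiprocessing import Pool

def analyze_chunk(chunk):
    # Placeholder "analysis": just sum the values in this chunk.
    return sum(chunk)

def run_on_grid(data, nodes=4):
    size = max(1, len(data) // nodes)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(nodes) as pool:
        partial_results = pool.map(analyze_chunk, chunks)  # scatter
    return sum(partial_results)                            # gather

if __name__ == "__main__":
    # The callers see a single "virtual supercomputer" interface.
    print(run_on_grid(list(range(1_000_000))))  # 499999500000
```

To the caller, `run_on_grid` presents the group of workers as a single, unified resource, which is exactly the grid abstraction.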
WHAT IS BIG DATA?
Big data is a term that describes the large volume of data – both structured and unstructured –
that inundates a business on a day-to-day basis.
But it’s not the amount of data that’s important. It’s what organizations do with the data that
matters.
Big data can be analyzed for insights that lead to better decisions and strategic business
moves.
The three Vs of big data are Volume, Velocity and Variety. Common sources of big data
include social data, machine data and transactional data.
VOLUME: The amount of data matters. With big data, you’ll have to process high volumes
of low-density, unstructured data.
This can be data of unknown value, such as Twitter data feeds, clickstreams on a webpage or
a mobile app, or sensor-enabled equipment.
For some organizations, this might be tens of terabytes of data. For others, it may be
hundreds of petabytes.
VELOCITY: Velocity is the fast rate at which data is received and (perhaps) acted on.
Normally, the highest velocity of data streams directly into memory versus being written to
disk.
Some internet-enabled smart products operate in real time or near real time and will require
real-time evaluation and action.
VARIETY: Variety refers to the many types of data that are available.
Traditional data types were structured and fit neatly in a relational database.
With the rise of big data, data comes in new unstructured data types.
Unstructured and semi-structured data types, such as text, audio, and video, require additional
preprocessing to derive meaning and support metadata.
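The extra preprocessing step that variety demands can be sketched as below: semi-structured records with varying fields are normalized into a fixed schema before they could be loaded into a relational table. The field names are illustrative.

```python
# Sketch of preprocessing semi-structured data: JSON-like records
# with missing or varying fields are fitted to a fixed relational
# schema. Column names here are invented for illustration.
import json

SCHEMA = ("user", "text", "lang")  # target relational columns

def normalize(raw: str) -> tuple:
    """Parse one semi-structured record and fit it to SCHEMA,
    filling missing fields with None."""
    record = json.loads(raw)
    return tuple(record.get(col) for col in SCHEMA)

rows = [normalize(r) for r in [
    '{"user": "alice", "text": "hello", "lang": "en"}',
    '{"user": "bob", "text": "hola"}',          # missing "lang" field
]]
print(rows)  # [('alice', 'hello', 'en'), ('bob', 'hola', None)]
```

Only after this normalization step can the varied input be stored and queried like traditional structured data.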
IT as a Service
IT as a service (ITaaS) is an operational model where the IT service provider delivers an
information technology service to a business. The IT service provider can be an internal IT
organization or an external IT services company. The recipients of ITaaS can be a line of
business (LOB) organization within an enterprise or a small and medium business (SMB).
At its core, ITaaS is a competitive business model where businesses have many options for IT
services and the internal IT organization has to compete against those other external options
in order to be the selected IT service provider to the business. Options for providers other
than the internal IT organization may include IT outsourcing companies and public cloud
providers.
What is ITaaS?
IT as a Service (ITaaS) can be seen as a transformative operational model that allows a line
of business or user to consume Information Technology as a managed service. The services
are cataloged, allowing users to consume and pay only for the services they require. This
operating model can be adopted by internal IT departments or external vendors.
Consumers have a variety of choices and can employ IT solutions prepared to meet their
specific demands, which have already been assessed and prepared for by the ITaaS service
provider.
ITaaS providers are no longer the tech folks focused on putting out the fire when an end-user
fails to employ a required IT service. Instead, they focus on addressing the unique requirements
of internal consumers, allowing them the option to choose the best available resources and
solutions, along with managing the entire experience lifecycle associated with them.
Business Focus: The operations and service architecture of an ITaaS vendor is
designed around the business requirements of the organization instead of the technical
projects and IT infrastructure running on-premise.
Financing: The pricing of an ITaaS offering should justify the promised cost savings
and value. As a broker and managed service provider, ITaaS providers may have a limited
range of profitability when customers can purchase a service directly from the vendor.
Agile ITSM: Since ITaaS is focused on the process of serving user requests and
aligning IT with business, it must adopt an appropriate ITSM framework that enables agile
and effective business operations.
User Experience: ITaaS should be able to offer an improved user experience as
compared to traditional IT departments. The catalog listing ITaaS offering should be
exhaustive with transparent pricing models. The managed services should ensure that end-
user requirements are addressed effectively across the service lifecycle.
BENEFITS OF ITaaS
IT departments in large enterprises typically operate as a single point of contact and service
provider to a large internal user base.
With the ITaaS model, organizations can employ the same services without operating an IT
department in-house. This model is particularly suitable for SMB firms operating on a limited
budget and resources.
For large enterprises, internal IT departments can take the role of an ITaaS vendor to
internal users and lines of business. This requires decoupling of IT shops from the
organizational structure, introducing new roles and responsibilities, as well as a cultural
change in the way IT interacts and serves its internal users.
The ITaaS model offers a simple concept: IT services are consumed effectively when end-
users are given sufficient choices between services—and charged on the consumption basis.
In this context, ITaaS vendors are responsible for maintaining a vast library of IT solutions
and services required by end-users.
ITaaS vendors take the role of managed service providers and brokers. With the service-
oriented model, they manage and orchestrate the IT service lifecycle: from identifying a user
requirement to supporting the final outcome of an effectively delivered service. It includes
tasks such as finding available solutions, negotiating SLAs, and helping users make well-
informed decisions when selecting a service.
●IBM's VM/370 system, and its predecessor CP-67 on the System/360 in 1967, set the
foundation of the virtual machine architecture as it is known today.
●At that time it was vital, in terms of efficiency, to be able to take full advantage of the
mainframes' power by allowing different entities to run multiple execution
environments which shared the underlying hardware.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly
on the hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory and other
hardware resources.
After virtualization of the hardware system, we can install different operating systems on it
and run different applications on those OSs.
Usage:
Hardware virtualization is mainly done for server platforms, because controlling virtual
machines is much easier than controlling a physical server.
2) Operating System Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed on the
host operating system instead of directly on the hardware system, it is known as operating
system virtualization.
Usage: Operating system virtualization is mainly used for testing applications on different
OS platforms.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly
on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple
servers on demand and for load balancing.
4) Storage Virtualization:
Storage virtualization is the process of grouping physical storage from multiple network
storage devices so that it looks like a single storage device.
Storage virtualization can also be implemented using software applications.
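The grouping idea can be sketched as a thin mapping layer: callers see one logical address space while blocks are actually spread across several physical devices. This is a toy model, not a real volume manager; the striping scheme is the simplest possible one.

```python
# Toy model of storage virtualization: one logical block address
# space striped round-robin across several physical "devices"
# (modeled here as plain dicts).
class StoragePool:
    def __init__(self, num_devices: int):
        self.devices = [dict() for _ in range(num_devices)]

    def _locate(self, logical_block: int):
        # Round-robin striping: block i lives on device i mod N.
        return self.devices[logical_block % len(self.devices)]

    def write(self, logical_block: int, data: bytes):
        self._locate(logical_block)[logical_block] = data

    def read(self, logical_block: int) -> bytes:
        return self._locate(logical_block)[logical_block]

pool = StoragePool(num_devices=3)  # looks like one device to callers
pool.write(0, b"alpha")
pool.write(4, b"beta")
print(pool.read(4))  # b'beta'
```

Callers only ever deal with logical block numbers; which physical device holds a block is hidden inside the pool, which is the essence of the abstraction.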
The machine on which the virtual machine is created is known as the host machine.
CHARACTERISTICS OF VIRTUALIZATION
1. Increased Security – The ability to control the execution of a guest program in a
completely transparent manner opens new possibilities for delivering a secure, controlled
execution environment. All the operations of the guest programs are generally performed
against the virtual machine, which then translates and applies them to the host programs.
A virtual machine manager can control and filter the activity of the guest programs, thus
preventing some harmful operations from being performed. Resources exposed by the host
can then be hidden or simply protected from the guest. Increased security is a requirement
when dealing with untrusted code.
Example-1: The term sandbox identifies an isolated execution environment where instructions
can be filtered and blocked before being translated and executed in the real execution
environment.
Example-2: The expression "sandboxed version of the Java Virtual Machine (JVM)" refers
to a JVM configuration in which, by means of security policies, instructions considered
potentially harmful can be filtered out.
2. Managed Execution – Functions enabled by a managed execution environment include:
- Sharing: Virtualization allows the creation of separate computing environments within
the same host. This basic feature is used to reduce the number of active servers and limit
power consumption.
- Aggregation: Virtualization also allows aggregation, which is the opposite process. A group
of separate hosts can be tied together and represented to guests as a single virtual
resource.
- Emulation: A completely different environment with respect to the host can be emulated,
thus allowing the execution of guest programs requiring specific characteristics that are not
present in the physical host.
- Isolation: Guests perform their activity against an abstraction layer, which provides access
to the underlying resources. The virtual machine can filter the activity of the guest and
prevent harmful operations against the host.
- Performance tuning: This feature is a reality at present, given the considerable advances in
hardware and software supporting virtualization. It becomes easier to control the
performance of the guest by finely tuning the properties of the resources exposed through
the virtual environment.
3. Portability – In the case of a hardware virtualization solution, the guest is packaged into a
virtual image that, in most cases, can be safely moved and executed on top of different virtual
machines. In the case of programming-level virtualization (e.g. the JVM or the .NET
runtime), the binary code representing application components (jars or assemblies) can run
without recompilation on any implementation of the corresponding virtual machine.
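The sandbox idea above, filtering instructions before they reach the real execution environment, can be sketched in miniature. The operation names and the allow-list below are invented for illustration; a real monitor would inspect actual instructions or system calls.

```python
# Miniature sandbox: a monitor filters guest "operations" before
# they reach the host, blocking anything not explicitly allowed.
# Operation names are hypothetical.
ALLOWED_OPS = {"read_tmp", "compute", "log"}

def execute_guest(operations):
    """Run a guest's operation list; disallowed ops are blocked,
    never translated to the host."""
    performed, blocked = [], []
    for op in operations:
        if op in ALLOWED_OPS:
            performed.append(op)   # would be applied to the host here
        else:
            blocked.append(op)     # filtered out, e.g. "write_bootsector"
    return performed, blocked

done, stopped = execute_guest(["compute", "write_bootsector", "log"])
print(done, stopped)  # ['compute', 'log'] ['write_bootsector']
```

The guest never observes the host directly; everything goes through the filtering layer, which is what makes the controlled execution environment secure.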
The Virtual Machine Monitor (VMM)
The Virtual Machine Monitor (VMM), also known as the hypervisor, is the core part of a
virtualization system, allowing multiple operating systems to run on a single physical
machine. Essentially, the VMM can be seen as a small and light operating system with basic
functionality, responsible for controlling the underlying hardware resources and making
them available to each VM with efficiency and optimal security. All the resources are
provided uniformly to each VM, making it possible for VMs to run on any kind of system
regardless of its underlying hardware. The main tasks of the VMM are to emulate the
hardware for the guests, enforce isolation between the VMs, and manage the resources of the
underlying hardware pool.
Isolation
Isolation is one of the vital security capabilities that VMMs should offer. All
interactions between the VMs and the underlying hardware should go through the
VMM. It is responsible for mediating these communications and must ensure that a VM
cannot access parts of the memory that belong to another VM; similarly, a potential crash or
failure in one VM should not affect the operation of the others.
Resource Management
Normally, the task of managing and sharing the available hardware resources is the
responsibility of the hypervisor. The hypervisor should manage CPU load balancing, map
physical to logical memory addresses, trap the CPU's instructions, migrate VMs between
physical systems, and so on, while protecting the integrity of each VM and the stability of the
whole system.
A substantial number of lines of the hypervisor's code are written to cope with the
managerial tasks it has to deliver. At the same time, the code should be as minimal as
possible, since a larger code base implies a larger attack surface.
TYPE 1 Hypervisor
Type I hypervisors run directly on top of the hardware. Hypervisors of this type are bootable
operating systems and, depending on their design, may incorporate the device drivers.
Type I hypervisors can be categorized further depending on their design, which can
be monolithic or microkernel.
Type I hypervisors offer optimal efficiency and are usually preferred for server
virtualization. By placing them on top of the bare hardware, they allow for direct
communication with it. The security of the whole system is based on the security of the
hypervisor itself.
In a monolithic design, the device drivers are included in the hypervisor's code. This means
communication between the drivers and the hardware takes place without any intermediation
by another entity. However, this entails that the hypervisor will have a large amount of code
to accommodate the drivers, which makes the attack surface bigger and the system more
vulnerable.
In the microkernel design, the device drivers are installed in the operating system of
the parent guest. The parent guest is one privileged VM used for creating,
destroying and managing the non-privileged child guest VMs residing in the
system.
Each one of the child guest machines that needs to communicate with the hardware
will have to mediate through the parent guest to get access to the hardware.
TYPE 2 Hypervisor
Type II hypervisors are installed on top of the host operating system and run as
applications. These hypervisors allow virtual machines to be created and run on the host
operating system.
This type of hypervisor is less efficient compared to Type I hypervisors, since one
extra layer of software is added to the system, making the communications between the VMs
and the hardware slower.
The security of a system of this type relies essentially on the security of the host operating
system. Any breach of the host operating system could potentially result in the complete
compromise of the virtual machines running on it.
MULTITENANCY
Just like in an apartment building, where many tenants cost-efficiently share the
common infrastructure of the building but have walls and doors that give them
privacy from one another, multitenancy lets multiple customers securely share one
computing infrastructure.
This architecture is able to work because each tenant is integrated physically but
logically separated; meaning that a single instance of the software will run on one
server and then serve multiple tenants. In this way, a software application in a
multi-tenant architecture can be shared by several customers while each customer's
data remains isolated.
Importance of Multitenancy
Multi-tenant architectures are found in both public cloud and private cloud
environments, allowing each tenant's data to be separated from the others'. For
example, in a multi-tenant public cloud, the same servers will be used in a hosted
environment to serve multiple customers. Each user is given a separate and, ideally,
secure space within those servers to store data. The multi-tenant architecture can
also aid in providing a better ROI for tenants.
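The "single instance, logically separated tenants" idea can be sketched as tenant-scoped data access: one application object serves all tenants, but every record carries a tenant id and every query is filtered by it. The schema and tenant names are illustrative.

```python
# Sketch of multitenancy: one shared application instance and one
# shared physical store, with logical separation enforced by scoping
# every operation to a tenant_id. Names are hypothetical.
class MultiTenantStore:
    def __init__(self):
        self.rows = []  # shared physical storage for all tenants

    def insert(self, tenant_id: str, record: dict):
        self.rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id: str):
        # A tenant can only ever see its own rows.
        return [r for r in self.rows if r["tenant_id"] == tenant_id]

store = MultiTenantStore()           # the single shared instance
store.insert("acme", {"invoice": 1})
store.insert("globex", {"invoice": 7})
print(store.query("acme"))  # [{'tenant_id': 'acme', 'invoice': 1}]
```

Physically everything lives in one list, but each tenant's view is logically isolated, which is exactly the apartment-building analogy in code.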
In a single-tenant architecture, by contrast, each customer has their own software instance
and may be given access to the source code. Each tenant has an infrastructure and
application dedicated to them, unlike multi-tenancy, where there are shared services.
Because each tenant is in a separate environment, they are not bound in the same way that
users of shared infrastructure would be, meaning single-tenant architectures are much more
customizable.
Multi-tenancy is the more widely used option of the two, as most SaaS services operate on
multi-tenancy. Its benefits include more efficient resource usage and lower maintenance
costs, as well as a potential for larger computing capacity.
With a multi-tenant architecture, the provider only has to make updates once; with a
single-tenant architecture, each instance must be updated separately. Some organizations
nevertheless prefer single-tenancy for the ability to have more control and flexibility in their
environment.
Benefits of multi-tenancy:
- Tenants don't have to worry about updates, since they are pushed out by the host
provider.
- Tenants don't have to worry about the hardware their data is being hosted on.
Drawbacks of multi-tenancy:
- Multi-tenant apps tend to be less flexible than apps in other tenant architectures, such as
single-tenancy.
- Multi-tenant apps need stricter authentication and access controls for security.
- Tenants have to worry about noisy neighbors, meaning someone else on the same
infrastructure consuming a lot of CPU cycles, which may slow response time.
CLOUD API
A cloud API (Application Programming Interface) is the interface through which software
provisions and manages resources in a cloud environment.
Understanding the cloud API model isn't always easy. There are many ways to interact with
a cloud platform and its underlying components.
To get a better understanding of cloud computing and how APIs fit into the
process, it's important to break down the conversation at a high level. There
are four major areas where cloud computing will need to integrate with
your environment.
● Many environments today don't use only one cloud provider or even one platform.
More providers are offering generic HTTP and HTTPS API integration, giving access to
resources not just from their primary cloud provider, but from others as well.
This can save a lot of time and development energy, since organizations can
now access the resources and workloads of different cloud providers and
platforms.
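The generic HTTP/HTTPS integration described above can be sketched as building provider-agnostic requests: the same client code targets different providers by swapping base URLs and auth headers. Every URL, path, and header value below is invented for illustration; the requests are built but never sent.

```python
# Sketch of provider-agnostic cloud API access over HTTPS.
# All endpoints, tokens, and header names here are hypothetical.
import json
import urllib.request

PROVIDERS = {
    "provider_a": {"base": "https://api.provider-a.example/v1",
                   "auth": {"Authorization": "Bearer TOKEN_A"}},
    "provider_b": {"base": "https://api.provider-b.example/v2",
                   "auth": {"X-Api-Key": "KEY_B"}},
}

def build_request(provider: str, path: str, payload: dict):
    """Build (but do not send) an HTTPS request for the given provider."""
    cfg = PROVIDERS[provider]
    return urllib.request.Request(
        cfg["base"] + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", **cfg["auth"]},
        method="POST",
    )

req = build_request("provider_a", "/instances", {"size": "small"})
print(req.full_url)  # https://api.provider-a.example/v1/instances
```

Because only the provider entry changes, the same workload-management code can drive resources on a primary provider and on others as well.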
Cloud API platforms include:
● CloudStack API
● Simple Cloud API
● OpenStack API
Each solution and platform has its own benefits and challenges. However, they are
increasingly interoperable:
● The CloudStack model (although backed by Citrix) still integrates with other APIs,
including the AWS API, OpenStack API and even the VMware vCloud API.
● Other solutions, such as the Simple Cloud API, are developed and funded by a group of
vendors; Simple Cloud APIs are able to integrate with services from Amazon and
Microsoft. The solution that you choose to work with will depend on your
requirements; if storage is a concern, look for a platform that easily integrates with
various storage providers.
From power usage to network traffic, there are a lot of moving parts in the
cloud, necessitating tools that capture and measure this activity in order to
provide alerts for spending data, usage spikes, and sudden, unexpected changes.
Metered billing is a pricing model in which you pay for a service based only
on the level of usage. For example, the cost of a service might depend on
the amount of resources consumed, according to the type of service. You receive a
monthly bill for your actual level of usage; the provider must be able to track your usage
level and automatically calculate a price that reflects it.
Infrastructure as a service and billing and metering services
Historically, the high cost of provisioning servers and infrastructure limited the ability of
organizations to deploy new services quickly. For example, it would take weeks if not
months to plan, order, ship, and install new hardware.
Today, new billing and metering models allow procurement of hardware and infrastructure
capacity by the hour or even by the minute.
Purchasing Options
With spot instances, for example, customers bid on unused EC2 capacity and run those
instances for as long as their bid exceeds the current Spot Price.
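The spot-pricing rule (instances keep running while the bid covers the current spot price) can be sketched as a small simulation. The prices and bid are invented for illustration.

```python
# Sketch of the spot-instance rule: an instance runs only while the
# customer's bid is at or above the current spot price; the first
# time it is outbid, the instance is interrupted.
def simulate_spot(bid: float, hourly_spot_prices: list) -> int:
    """Return how many hours the instance ran before interruption."""
    hours = 0
    for price in hourly_spot_prices:
        if bid < price:   # outbid: the instance is interrupted
            break
        hours += 1
    return hours

# Bid $0.10/hour against a fluctuating hourly spot price.
print(simulate_spot(0.10, [0.06, 0.08, 0.09, 0.12, 0.07]))  # 3
```

The instance survives the first three hours, is interrupted when the price spikes to $0.12, and never comes back in this simple model (real providers can restart interrupted instances when the price falls again).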
Actual usage billing enables PaaS providers to run application code from many customers on
shared infrastructure and charge for what is actually consumed. For example, the network
bandwidth, CPU utilization, and disk usage per application can be metered, along with
stored data and high-availability options.
The traditional concept for billing and metering SaaS applications is a monthly subscription
fee per user. Each user license allows access to the SaaS application, and each additional
user increases the price of the monthly fee; in some cases, if certain volumes are met, there
is a discount.
For instance, sales software provided as a service might cost US$50 per user per month.
The monthly subscription fee is a fixed cost billed per month, often for a minimum contract
period. The per-month billing model changes the high initial investment from a capital
expense into an operational expense, helping customers get started with the software
required for their business initiatives.
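The per-user monthly fee with a volume discount can be sketched as follows. The $50 rate echoes the example above; the discount threshold and rate are invented for illustration.

```python
# Sketch of SaaS subscription billing: a fixed fee per user per
# month, with a hypothetical 10% volume discount at 100+ users.
FEE_PER_USER = 50.0        # US$ per user per month
DISCOUNT_THRESHOLD = 100   # invented threshold for the discount
DISCOUNT_RATE = 0.10       # invented discount rate

def monthly_saas_bill(users: int) -> float:
    """Return the fixed monthly charge for the given user count."""
    total = users * FEE_PER_USER
    if users >= DISCOUNT_THRESHOLD:
        total *= 1 - DISCOUNT_RATE
    return round(total, 2)

print(monthly_saas_bill(20))   # 1000.0
print(monthly_saas_bill(120))  # 120 * 50 * 0.9 = 5400.0
```

Unlike metered billing, the amount is fixed for the month regardless of how heavily each user actually uses the application.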
Cloud Elasticity:
Elasticity refers to the ability of a cloud to automatically expand or shrink its
infrastructure resources in response to sudden rises and falls in demand, so that the
workload can be managed efficiently. This elasticity helps to minimize infrastructure
costs. It is not applicable to every kind of environment; it helps address only those
scenarios where resource requirements fluctuate up and down suddenly for a specific
time interval. It is not practical where persistent resource infrastructure is required
to handle a heavy workload.
It is most commonly used in pay-per-use public cloud services, where IT managers
are willing to pay only for the duration for which they consumed the resources.
Example:
Consider an online shopping site whose transaction workload increases during a
festive season like Christmas. For this specific period of time, the resources need to
spike up. To handle this kind of situation, we can use a Cloud Elasticity service
rather than Cloud Scalability. As soon as the season is over, the deployed resources
can be requested for withdrawal.
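The automatic expand/shrink behaviour can be sketched as a threshold-based autoscaler. The thresholds, step size, and load numbers are invented for illustration; real autoscalers also use cooldown periods and smoothed metrics.

```python
# Sketch of elastic scaling: add an instance when load per instance
# is high, release one when it is low (thresholds are hypothetical).
def autoscale(instances: int, load: float,
              high=80.0, low=30.0, min_instances=1) -> int:
    """Return the new instance count for the observed total load."""
    per_instance = load / instances
    if per_instance > high:            # demand spike: scale out
        return instances + 1
    if per_instance < low and instances > min_instances:
        return instances - 1           # demand gone: scale in, stop paying
    return instances

n = 2
for load in [50, 200, 300, 300, 60, 40]:   # workload over time
    n = autoscale(n, load)
print(n)  # 2
```

Capacity follows demand in both directions, which is exactly why elasticity minimizes cost in pay-per-use services: the festive-season spike adds instances, and the quiet period releases them again.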
Types of Scalability :
1. Vertical Scalability (Scale-up) –
In this type of scalability, we increase the power of existing resources in the
working environment in an upward direction.
2. Horizontal Scalability (Scale-out) –
In this kind of scaling, additional resources (such as servers) are added alongside the
existing ones.
3. Diagonal Scalability –
It is a mixture of both Horizontal and Vertical scalability where the resources
are added both vertically and horizontally.
Difference Between Cloud Elasticity and Scalability:
- Elasticity is used to meet sudden, short-term spikes in workload; scalability is used to
meet static, long-term growth in workload.
- Elasticity is commonly associated with pay-per-use public cloud services; scalability is
a matter of longer-term capacity planning.
- Elasticity works automatically in response to demand; scaling is typically a planned
increase (or decrease) in capacity.