
UNIT - 1

Introduction to Cloud Computing:


The term Cloud refers to a network or the Internet. In other words, the cloud is something that is present at a remote location. The cloud can provide services over public and private networks, i.e., WAN, LAN, or VPN. Applications such as e-mail, web conferencing, and customer relationship management (CRM) execute on the cloud.

Why the Name Cloud?


The term “Cloud” came from a network diagram convention used by network engineers to represent the location of various network devices and their interconnections. The shape of this diagram resembled a cloud.

The term Cloud Computing refers to manipulating, configuring, and accessing hardware and software resources remotely. It offers online data storage, infrastructure, and applications. Cloud computing provides a means of accessing applications as utilities over the Internet.
It allows us to create, configure, and customize applications online. Cloud computing offers platform independence, as the software is not required to be installed locally on the PC.
Example: AWS, Azure, Google Cloud.
History of Cloud computing:
Cloud computing has as its antecedents both client/server computing and peer-to-peer
distributed computing. It’s all a matter of how centralized storage facilitates collaboration and how
multiple computers work together to increase computing power.

1. Client/Server Computing: Centralized Applications and Storage


In the early days of computing everything operated on the client/server model. All the software
applications, all the data, and all the control resided on huge mainframe computers, otherwise known as
servers. If a user wanted to access specific data or run a program, he had to connect to the mainframe, gain
appropriate access, and then do his business while essentially “renting” the program or data from the
server.
Users connected to the server via a computer terminal, sometimes called a workstation or client. This computer was sometimes called a dumb terminal because it did not have much memory, storage space, or processing power. It was simply a device that connected the user to the mainframe computer and enabled him to use it.
Client/Server computing is a computing model in which client and server computers communicate
with each other over a network. In client/server computing, a server takes requests from client computers
and shares its resources, applications and/or data with one or more client computers on the network, and a
client is a computing device that initiates contact with a server in order to make use of a shareable resource.

2. Peer-to-Peer Computing (P2P): Sharing Resources


P2P computing defines a network architecture in which each computer has equivalent capabilities and
responsibilities. In the P2P environment, every computer is a client and a server; there are no masters and
slaves. By recognizing all computers on the network as peers, P2P enables direct exchange of resources
and services. There is no need for a central server, because any computer can function in that capacity
when called on to do so. P2P was also a decentralizing concept. Control is decentralized, with all
computers functioning as equals. Content is also dispersed among the various peer computers.

3. Distributed Computing: Providing More Computing Power


One of the most important subsets of the P2P model is that of distributed computing, where idle PCs
across a network or across the Internet are tapped to provide computing power for large, processor-
intensive projects. It’s a simple concept, all about cycle sharing between multiple computers. A personal
computer, running full-out 24 hours a day, 7 days a week, is capable of tremendous computing power.
Most people don’t use their computers 24/7, however, so a good portion of a computer’s resources go
unused. Distributed computing uses those resources.
4. Collaborative Computing: Working as a Group
The goal was to enable multiple users to collaborate on group projects online, in real time. To
collaborate on any project, users must first be able to talk to one another. In today’s environment, this
means instant messaging for text-based communication, with optional audio/telephony and video
capabilities for voice and picture communication. Most collaboration systems offer the complete range of
audio/video options, for full-featured multiple-user video conferencing. In addition, users must be able to
share files and have multiple users work on the same document simultaneously.

5. Cloud Computing: The Next Step in Collaboration


Users from multiple locations within a corporation, and from multiple organizations, desired to
collaborate on projects that crossed company and geographic boundaries. To do this, projects had to be
housed in the “cloud” of the Internet, and accessed from any Internet-enabled location. The concept of
cloud-based documents and services took wing with the development of large server farms, such as those run by Google and other search companies.
On the infrastructure side, IBM, Sun Microsystems, and other big-iron providers are offering the hardware necessary to build cloud networks. On the software side, dozens of companies are developing
cloud-based applications and storage services. Today, people are using cloud services and storage to
create, share, find, and organize information of all different types.

Essential Characteristics/ Characteristics of Cloud Computing:


Cloud computing has five essential characteristics. Essential means that if any of these
characteristics is missing, then it is not cloud computing.

1. On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically, without requiring human interaction with each service provider.

2. Broad network access: Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms.
3. Resource pooling: The provider’s computing resources are pooled to serve multiple consumers
using a multitenant model, with different physical and virtual resources dynamically assigned and
reassigned according to consumer demand. There is a sense of location independence in that the
customer generally has no control or knowledge over the exact location of the provided resources but
may be able to specify the location at a higher level of abstraction (e.g., country, state, or data center).
Examples of resources include storage, processing, memory, and network bandwidth.

4. Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically,
to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available
for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

5. Measured service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and the consumer of the utilized service (a small billing sketch follows).
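The sketch below is a minimal illustration of how metered usage could translate into a pay-per-use bill; the unit prices and usage figures are assumptions for illustration only, not real provider rates.

Example (Python):

# Illustrative only: the rates and usage below are assumed values.
RATES = {
    "compute_hours": 0.05,     # $ per VM-hour
    "storage_gb_month": 0.02,  # $ per GB-month
    "egress_gb": 0.09,         # $ per GB transferred out
}

usage = {"compute_hours": 720, "storage_gb_month": 50, "egress_gb": 10}

# Measured service: the bill is the sum of metered usage times the unit rate.
bill = sum(usage[resource] * RATES[resource] for resource in usage)
print(f"Monthly charge: ${bill:.2f}")  # 720*0.05 + 50*0.02 + 10*0.09 = 37.90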

Types of Cloud/ Cloud Deployment Models


There are certain services and models working behind the scenes that make cloud computing feasible and accessible to end users. Following are the working models for cloud computing:
 Deployment Models
 Service Models

 Deployment models describe the ways with which the cloud services can be deployed or made
available to its customers, depending on the organizational structure and the provisioning location.
 The classification of the cloud is based on several parameters such as the size of the cloud (number
of resources), type of service provider, location, type of users, security, and other issues. The
smallest in size is the private cloud.
 There are four different cloud models that you can subscribe to according to business needs:

1. Public cloud
The cloud infrastructure is provisioned for open use by the general public. It may be owned,
managed, and operated by a business, academic, or government organization, or some combination of
them. It exists on the premises of the cloud provider.
The public cloud consists of users from all over the world. A user can simply purchase resources on an hourly basis and work with those resources. There is no need for any prebuilt infrastructure to use the public cloud. Some of the well-known examples of the public cloud are Amazon AWS, Microsoft Azure, Google App Engine (GAE), IBM Blue Cloud, etc.

Characteristics of public cloud (Advantages)


 Highly scalable: The public cloud is highly scalable. The resources in the public cloud are large in
number and the service providers make sure that all the requests are granted. Hence, the public cloud
is considered to be scalable.
 Affordable: The public cloud is offered to the public on a pay-as-you-go basis; hence, the user has to pay only for what he or she is using (usually on a per-hour basis). Moreover, this does not involve any deployment cost.
 Highly available: The public cloud is highly available because anybody from any part of the world
can access the public cloud with proper permission, and this is not possible in other models as
geographical or other access restrictions might be there.
 Stringent SLAs: SLAs are terms and conditions that are negotiated between the service provider and
the user. SLA is very stringent in the case of the public cloud. As the service provider’s business
reputation and customer strength are totally dependent on the cloud services, they follow the SLA
strictly and violations are avoided. These SLAs are very competitive.
Several issues pertaining to the public cloud (Disadvantages):
 SLA: The public cloud has a large number of diverse users, and so the number of service agreements is also large. The service provider is answerable to all the users. The SLA must cover all the users from all parts of the world. The service provider has to guarantee all the users a fair share without any priority.
 Network: Each and every user gets the services of the cloud through the Internet. The services are accessed through the Internet by all the users, and hence, the service delivery wholly depends on the network. The service provider is not responsible for the network. The user will be charged even if he or she has problems due to the network.
 Multi-tenancy: The resources are shared, that is, multiple users share the resources, hence the term
multitenant. Due to this property, there is a high risk of data being leaked or a possible unprivileged
access.
 Location: The location of the public cloud is an issue. As the public cloud is fragmented and is located
in different regions, the access to these clouds involves a lot of data transfers through the Internet.
 Security and data privacy: Security and data privacy are the biggest challenges in the public cloud. As data are stored in different places around the globe, data security is a very big issue. A user storing data outside his or her country runs the risk of the data being viewed by other people, as the data does not come under the jurisdiction of the user’s country. Though this might not always be the case, it can happen.
 Laws and conflicts: The data are stored in different places of the world in different countries. Hence,
data centers are bound to laws of the country in which they are located. This creates many conflicts
and problems for the service providers and the users.
 Cloud management: As the number of users is large, management is difficult. The jobs here are time critical, and as the number of users increases, management becomes more difficult. Inefficient management of resources will lead to resource shortage, and user service might be affected. It has a direct impact on the SLA and may cause SLA violations.

2. Private cloud
The cloud infrastructure is provisioned for exclusive use by a single organization comprising
multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a
third party, or some combination of them, and it may exist on or off premises. In the on-premise model, the cloud is deployed on the organization's premises and is connected to the organizational network. In the off-premise or outsourced model, the cloud is deployed away from the organization's premises. A private cloud can be deployed using open-source tools such as OpenStack and Eucalyptus. An example of a private cloud deployment is at the U.S. National Aeronautics and Space Administration (NASA).

Characteristics of the private cloud:


 Secure: The private cloud is secure. This is because the private cloud is usually deployed and managed by the organization itself, and hence there is little chance of data being leaked out of the cloud.
 Central control: The organization mostly has full control over the cloud as usually the private cloud is managed
by the organization itself.
 Weak Service-Level Agreements (SLAs): Formal SLAs may or may not exist in a private cloud. Where they do exist, they are weak, as they are between the organization and the users of the same organization. Thus, high availability and good service may or may not be guaranteed.
 The cloud is small in size and is easy to maintain.
There are four types of private clouds.
1. Typical private cloud: Organization hosts the cloud in one of their data centers behind the corporate firewall
and limits the access to the content to internal employees only.
2. Managed private cloud: This type of private cloud is managed by a third party provider. The organization still
owns the infrastructure but management of the facility is with a third party.
3. Hosted private cloud / Leased private cloud: The cloud service provider provides dedicated servers to the
organization, thus eliminating the concerns of multi-tenancy. In this model the organization pays a slightly
higher cost for the use of dedicated servers.
4. Virtual Private Cloud (VPC): The VPC service is offered by a traditional cloud service provider in a multi-
tenant environment with VPN (Virtual Private Network) access to their customers. The organization accesses
the cloud using a VPN connection thereby assuring security. This service is much less expensive compared to
the other three types of private clouds but more expensive than the public cloud.
Disadvantages
 For the private cloud, budget is a constraint.
 The private clouds have loose SLAs.

Public cloud deployments are popular with small and medium-sized businesses, whereas large businesses tend to favor private cloud deployments because of the service control and security they offer.

3. Community Cloud
The community cloud is the cloud infrastructure that is provisioned for exclusive use by a specific
community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy,
and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in
the community, a third party, or some combination of them, and it may exist on or off premises. The community
cloud offers the benefits of public cloud computing but restricted to a particular industry segment and the security
features of a hosted private cloud. This model is very suitable for organizations that cannot afford a private cloud
and cannot rely on the public cloud either.
Characteristics of community cloud (Advantages)
 Collaborative and distributive maintenance: The community cloud is wholly collaborative, and usually no
single party has full control over the whole cloud (in some cases, it may be controlled by one party). This is
usually distributive, and hence, better cooperation gives better results.
 Partially secure: Partially secure refers to the property of the community cloud where a few organizations share the cloud, so there is a possibility that data can be leaked from one organization to another, though it is safe from the outside world. The community cloud is more secure than the public cloud, but not as secure as the private cloud.
 Cost effective: The community cloud is cost effective as the whole cloud is shared by several organizations or a community. Usually, not only the cost but every other sharable responsibility is also shared or divided among the groups.
Issues in community cloud (Disadvantages)
 Autonomy of an organization is lost.
 Security features are not as good as the private cloud.
 It is not suitable if there is no collaboration.
Two basic types of Community Cloud models
1. Federated model: Companies belonging to the same sector participate in the community cloud whereby any
unused computing resource in one organization is used by another member organization on demand.
2. Brokered model: A trusted third party serves as the broker and interfaces with the various community cloud
members. The broker is responsible for procuring the various services needed by the industry sector and makes
them available to all the members.
The community cloud is a closed system, available only to member organizations. The major benefit to the
organizations belonging to the community cloud is that they will have greater cost savings in using the applications
needed in that sector. The Brokered model supports this feature better than the Federated model.
The Federated community cloud model has the benefit of sharing the computing resources of member organizations when they are idle. However, the liability issues with regard to such processing are not settled, especially when there is a service outage in the middle of processing. Another open issue is the responsibility of member organizations for keeping the system up and how the shared time will be paid for.
The Brokered model is much better at handling the liability issues in the community cloud. The Broker is responsible for contract settlements with the providers, provides the necessary trust for the members to use the system, and resolves disputes. The Broker will also be able to provide the necessary data for the members to meet their compliance obligations. Moreover, the Broker can provide several additional value-added services.
4. Hybrid cloud
A hybrid cloud combines multiple clouds (private, community, or public) where those clouds retain their unique identities but are bound together as a unit. A hybrid cloud may offer standardized or proprietary access to data and applications, as well as application portability. For instance, an organization may utilize a public cloud for some aspects of its business, yet also have a private cloud on premises for data that is sensitive.

Advantages: By having the public cloud component in the architecture the hybrid cloud offers the cloud advantages
of scalability, availability, demand elasticity and pay-as-you-go model. Hybrid cloud is suitable for large businesses
or niche businesses with a compute intensive system that would experience demand fluctuation.
One of the key benefits of hybrid cloud is the ability of the organization to keep on premise the sensitive applications
and move to the public cloud the other applications. Also, this blend of private and public clouds opens up the
possibilities for application developers to come up with new methods that could be tested on a public cloud and
eventually get adopted by private clouds.
Disadvantages/ Issues:
 SLA: SLA is one of the important aspects of the hybrid cloud as both private and public are involved. The
private cloud does not have stringent agreements, whereas the public cloud has certain strict rules to be covered.
 Network: The network is usually a private network, and whenever there is a necessity, the public cloud is used
through the Internet. Unlike the public cloud, here there is a private network also. Thus, a considerable amount
of effort is required to maintain the network.
 Performance: The hybrid cloud is a special type of cloud in which the private environment is maintained with access to the public cloud whenever required. Thus, the feel of an infinite resource pool is retained. The cloud provider (private cloud) is responsible for providing the cloud.
 Multi-tenancy: This is an issue in the hybrid cloud as it involves the public cloud in addition to the private cloud. Thus, this property can be misused, and breaches will have adverse effects as some parts of the cloud go public.
 Location: Like a private cloud, the location of these clouds can be on premise or off premise and they can be
outsourced. They will have all the issues related to the private cloud; in addition to that, issues related to the
public cloud will also come into picture whenever there is intermittent access to the public cloud.
 Security and privacy: Whenever the user is provided services using the public cloud, security and privacy concerns become more serious. As it is the public cloud, the threat of data being lost is high.
 Laws and conflicts: Several laws of other countries come under the purview as the public cloud is involved,
and usually these public clouds are situated outside the country’s boundaries.
 Cloud management: Here, everything is managed by the private cloud service provider.
 Cloud maintenance: Cloud maintenance is of the same complexity as the private cloud; here, only the resources
under the purview of the private cloud need to be maintained. It involves a high cost of maintenance.

Cloud Service Models/Cloud Services:


As cloud computing has developed, different vendors offer clouds that have different services associated
with them. The collection of services offered adds another set of definitions called the service model. The
service model consists of the particular types of services that you can access on a cloud computing
platform. Three service types have been universally accepted:
1. Infrastructure as a Service: IaaS
2. Platform as a Service: PaaS
3. Software as a Service: SaaS

The three different service models taken together have come to be known as the SPI model of cloud
computing.
Infrastructure as a Service
IaaS is the ability given to the infrastructure architects to deploy or run any software on the
computing resources (hardware) provided by the service provider. That is, the hardware is virtualized in
the cloud. The underlying infrastructures (computing resources/hardware) such as compute, network, and
storage are managed by the service provider. Thus, the infrastructure architects are exempted from
maintaining the data center or underlying infrastructure. It is maintained by the service provider. Some
of the popular IaaS providers include Amazon Web Services (AWS), Google Compute Engine,
OpenStack, and Eucalyptus.
The following are the services provided by an IaaS provider (a short provisioning sketch follows the list):
 Compute: Computing as a Service includes virtual central processing units (CPUs) and virtual main
memory for the VMs that are provisioned to the end users.
 Storage: STaaS provides back-end storage for the VM images. Some of the IaaS providers also
provide the back end for storing files.
 Network: Network as a Service (NaaS) provides virtual networking components such as virtual router,
switch, and bridge for the VMs.
 Load balancers: Load Balancing as a Service may provide load balancing capability at the
infrastructure layer.
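As an illustration of how compute is provisioned on demand through an IaaS API, the following is a minimal sketch using the AWS SDK for Python (boto3). The region, machine image ID, and key pair name are placeholders, not real values, and credentials are assumed to be configured separately.

Example (Python):

import boto3

# Assumes AWS credentials are already configured; IDs below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t2.micro",          # virtual CPU/memory size rented per hour
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # placeholder SSH key pair
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned VM:", instance_id)

# Elasticity: the same API releases the resource when it is no longer needed.
# ec2.terminate_instances(InstanceIds=[instance_id])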
The following are the benefits provided by IaaS:
 Pay-as-you-use model: The IaaS services are provided to the customers on a pay-per-use basis. This
ensures that the customers are required to pay for what they have used. This model eliminates the
unnecessary spending on buying hardware.
 Reduced TCO: Since IaaS providers allow IT users to rent computing resources, they need not buy physical hardware for running their business. The IT users can rent the IT infrastructure rather than buy it by spending a large amount. IaaS reduces the need for buying hardware resources and thus reduces the total cost of ownership (TCO).
 Elastic resources: IaaS provides resources based on the current needs. IT users can scale up or scale
down the resources whenever they want. This dynamic scaling is done automatically using some load
balancers. This load balancer transfers the additional resource request to the new server and improves
application efficiency.
 Better resource utilization: Resource utilization is the most important criterion for succeeding in the IT business. The purchased infrastructure should be utilized properly to increase the ROI. IaaS ensures better resource utilization and provides high ROI for IaaS providers.
 Supports Green IT: In traditional IT infrastructure, dedicated servers are used for different business
needs. Since many servers are used, the power consumption will be high. This does not result in Green
IT. In IaaS, the need of buying dedicated servers is eliminated as single infrastructure is shared between
multiple customers, thus reducing the number of servers to be purchased and hence the power
consumption that results in Green IT.
The following are the drawbacks of IaaS:
 Security issues: Since IaaS uses virtualization as the enabling technology, hypervisors play an important role. There are many attacks that target hypervisors to compromise them. If a hypervisor gets compromised, then any of its VMs can be attacked easily. Most IaaS providers are not able to provide 100% security for the VMs and the data stored on them.
 Interoperability issues: There are no common standards followed among the different IaaS providers.
It is very difficult to migrate any VM from one IaaS provider to the other. Sometimes, the customers
might face the vendor lock-in problem.
 Performance issues: IaaS is essentially the consolidation of available resources from distributed cloud servers. Here, all the distributed servers are connected over the network. Latency of the network plays an important role in deciding the performance. Because of latency issues, the VMs sometimes suffer from performance problems.
Platform as a Service
PaaS is the ability given to developers to develop and deploy an application on the development
platform provided by the service provider. PaaS allows the developers to develop their application online
and also allows them to deploy immediately on the same platform. Here, the developers are responsible
for managing the deployed application and configuring the development environment. PaaS consumers or
developers can consume language runtimes, application frameworks, databases, message queues, testing
tools, and deployment tools as a service over the Internet. Thus, it reduces the complexity of buying and
maintaining different tools for developing an application. PaaS services are provided by the service
provider on an on-premise or dedicated or hosted cloud infrastructure. Some of the popular PaaS providers
include Google App Engine, Force.com, Red Hat OpenShift, Heroku, and Engine Yard.
The following are the services provided by a PaaS provider (a minimal application sketch follows the list):
 Programming languages: PaaS providers provide a wide variety of programming languages for the
developers to develop applications. Some of the popular programming languages provided by PaaS
vendors are Java, Perl, PHP, Python, Ruby, Scala, Clojure, and Go.
 Application frameworks: PaaS vendors provide application frameworks that simplify the application
development. Some of the popular application development frameworks provided by a PaaS provider
include Node.js, Rails, Drupal, Joomla, WordPress, Django, EE6, Spring, Play, Sinatra, Rack, and
Zend.
 Database: Since every application needs to communicate with the databases, it becomes a must-have
tool for every application. PaaS providers are providing databases also with their PaaS platforms. The
popular databases provided by the popular PaaS vendors are ClearDB, PostgreSQL, Cloudant,
Membase, MongoDB, and Redis.
 Other tools: PaaS providers provide all the tools that are required to develop, test, and deploy an
application.
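The following is a minimal sketch of the kind of web application a developer might hand to a PaaS platform. It uses the Flask framework as one example of a PaaS-supported application framework; the actual deployment step (a git push, a CLI command, or a configuration file) differs from one PaaS provider to another.

Example (Python):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # The PaaS platform supplies the runtime, web server, and scaling;
    # the developer supplies only the application logic.
    return "Hello from a PaaS-hosted application!"

if __name__ == "__main__":
    # Run locally for testing; in production the PaaS platform runs the app
    # behind its own managed web server and load balancer.
    app.run(host="0.0.0.0", port=8080)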
The following are the benefits provided by PaaS:
 Quick development and deployment: PaaS provides all the required development and testing tools to develop, test, and deploy the software in one place. Most PaaS services automate the testing and deployment process as soon as the developer completes the development. This speeds up application development and deployment compared to traditional development platforms.
 Reduces TCO: The developers need not buy licensed development and testing tools if PaaS services are selected. PaaS allows the developers to rent the software, development platforms, and testing tools to develop, build, and deploy the application. PaaS also does not require high-end infrastructure to develop the application, thus reducing the TCO of the development company.
 Different teams can work together: The traditional development platform does not have extensive
support for collaborative development. PaaS services support developers from different places to work
together on the same project. This is possible because of the online common development platform
provided by PaaS providers.
 Ease of use: PaaS provides a wide variety of client tools such as CLI, web CLI, web UI, APIs, and
IDEs. The developers are free to choose any client tools of their choice. Especially, the web UI–based
PaaS services increase the usability of the development platform for all types of developers.
 Less maintenance overhead: In on-premise applications, the development company or software
vendor is responsible for maintaining the underlying hardware. They need to recruit skilled
administrators to maintain the servers. This overhead is eliminated by the PaaS services as the
underlying infrastructure is maintained by the infrastructure providers. This gives freedom to
developers to work on the application development.
 Produces scalable applications: Most of the applications developed using PaaS services are web applications or SaaS applications. These applications require better scalability under extra load. To handle extra load, software vendors would otherwise need to maintain additional servers, which is very difficult for a new start-up company. PaaS services, however, provide built-in scalability to applications developed on the PaaS platform.
The following are the drawbacks of PaaS:
 Vendor lock-in: The major drawback with PaaS providers is vendor lock-in, which is due to a lack of standards. There are no common standards followed among the different PaaS providers. The other reason for vendor lock-in is the proprietary technologies used by PaaS providers. Most PaaS vendors use proprietary technologies that are not compatible with other PaaS providers. The vendor lock-in problem of PaaS services does not allow applications to be migrated from one PaaS provider to another.
 Security issues: Since data are stored on off-premise third-party servers, many developers are afraid to go for PaaS services. Many PaaS providers do provide mechanisms to protect user data, but this is often not sufficient to match the perceived safety of an on-premise deployment. When selecting a PaaS provider, the developer should review the regulatory, compliance, and security policies of the PaaS provider against their own security requirements.
 Less flexibility: PaaS providers do not give much freedom for the developers to define their own
application stack. Most of the PaaS providers provide many programming languages, databases, and
other development tools. But, it is not extensive and does not satisfy all developer needs. Only some
of the PaaS providers allow developers to extend the PaaS tools with the custom or new programming
languages.
 Depends on Internet connection: Since PaaS services are delivered over the Internet, the developers depend on Internet connectivity for developing the application. Even though some providers allow offline access, most PaaS providers do not. With a slow Internet connection, the usability and efficiency of the PaaS platform do not satisfy the developer requirements.

Software as a Service
SaaS is the ability given to the end users to access an application over the Internet that is hosted and managed by the service provider. Thus, the end users are exempted from managing or controlling the application, the development platform, and the underlying infrastructure. Some of the popular SaaS providers include Salesforce.com, Google Apps (Gmail), and Microsoft Office 365.
SaaS changes the way the software is delivered to the customers. In the traditional software model,
the software is delivered as a license-based product that needs to be installed in the end user device. Since
SaaS is delivered as an on-demand service over the Internet, there is no need to install the software to the
end user’s devices. SaaS services can be accessed or disconnected at any time based on the end user’s
needs.
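Because a SaaS application is consumed over the Internet, it is typically reached through a browser or a web API rather than installed locally. The sketch below calls a hypothetical SaaS REST endpoint using the requests library; the URL, token, and response fields are made-up placeholders for illustration.

Example (Python):

import requests

# Hypothetical SaaS endpoint and access token -- placeholders only.
API_URL = "https://api.example-saas.com/v1/contacts"
TOKEN = "user-api-token"

# No client-side installation: the application runs in the provider's data
# center, and the end user just sends authenticated requests over the Internet.
response = requests.get(API_URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
response.raise_for_status()

for contact in response.json():
    print(contact["name"], contact["email"])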
The following are the services provided by a SaaS provider:
 Business services: Most of the SaaS providers started providing a variety of business services that
attract start-up companies. The business SaaS services include ERP, CRM, billing, sales, and human
resources.
 Social networks: Since social networking sites are extensively used by the general public, many social
networking service providers adopted SaaS for their sustainability. Since the number of users of the
social networking sites is increasing exponentially, cloud computing is the perfect match for handling
the variable load.
 Document management: Since most of the enterprises extensively use electronic documents, most of
the SaaS providers started providing services that are used to create, manage, and track electronic
documents.
 Mail services: E-mail services are currently used by many people. The future growth in e-mail usage
is unpredictable. To handle the unpredictable number of users and the load on e-mail services, most of
the e-mail providers started offering their services as SaaS services.
The following are the benefits provided by SaaS:
 No client-side installation: SaaS services do not require client-side installation of the software. The
end users can access the services directly from the service provider data center without any installation.
There is no need of high-end hardware to consume SaaS services. It can be accessed from thin clients
or any handheld devices, thus reducing the initial expenditure on buying high-end hardware.
 Cost savings: Since SaaS services follow the utility-based billing or pay-as-you-go billing, it demands
the end users to pay for what they have used. Most of the SaaS providers offer different subscription
plans to benefit different customers. Sometimes, the generic SaaS services such as word processors
are given for free to the end users.
 Less maintenance: SaaS services eliminate the additional overhead of maintaining the software from
the client side. SaaS provider itself maintains the automatic updates, monitoring, and other
maintenance activities of the applications.
 Ease of access: SaaS services can be accessed from any device that is connected to the Internet. Accessibility of SaaS services is not restricted to any particular device.
 Dynamic scaling: SaaS services are popularly known for elastic dynamic scaling. It is very difficult
for on-premise software to provide dynamic scaling capability as it requires additional hardware. Since
the SaaS services leverage elastic resources provided by cloud computing, it can handle any type of
varying loads without disrupting the normal behavior of the application.
 Disaster recovery: With proper backup and recovery mechanisms, replicas are maintained for every SaaS service. The replicas are distributed across many servers. If any server fails, the end user can access the SaaS application from another server. This eliminates the problem of a single point of failure. It also ensures the high availability of the application.
 Multi-tenancy: Multi-tenancy is the ability given to the end users to share a single instance of the
application. Multi-tenancy increases resource utilization from the service provider side.
The following are the drawbacks of SaaS:
 Security: Security is the major concern in migrating to SaaS application. Since the SaaS application
is shared between many end users, there is a possibility of data leakage. Here, the data are stored in
the service provider data center. We cannot simply trust some third-party service provider to store our
company-sensitive and confidential data.
 Connectivity requirements: SaaS applications require Internet connectivity for accessing it.
Sometimes, the end user’s Internet connectivity might be very slow. In such situations, the user cannot
access the services with ease. The dependency on high-speed Internet connection is a major problem
in SaaS applications.
 Loss of control: Since the data are stored in a third-party and off-premise location, the end user does
not have any control over the data. The degree of control over the SaaS application and data is lesser
than the on-premise application.
A considerable amount of SaaS software is based on open source software. When open source
software is used in a SaaS, it is referred to as Open SaaS. The advantages of using open source software
are that systems are much cheaper to deploy because you don't have to purchase the operating system or
software, there is less vendor lock-in, and applications are more portable.
Other Cloud Service Models:
 Network as a Service (NaaS) is an ability given to the end users to access virtual network services that are
provided by the service provider. Like other cloud service models, NaaS is a business model for delivering
virtual network services over the Internet on a pay-per-use basis.
 Desktop as a Service (DEaaS) is an ability given to the end users to use desktop virtualization without
buying and managing their own infrastructure. DEaaS is a pay-per-use cloud service delivery model in
which the service provider manages the back-end responsibilities of data storage, backup, security, and
upgrades.
 Storage as a Service (STaaS) is an ability given to the end users to store data on the storage services provided by the service provider. STaaS allows the end users to access their files at any time from any place. Customers can rent the storage from the STaaS provider. STaaS is commonly used as backup storage for efficient disaster recovery.
 Database as a Service (DBaaS) is an ability given to the end users to access a database service without the need to install and maintain it. The service provider is responsible for installing and maintaining the databases. The end users can directly access the services and pay according to their usage (a connection sketch follows this list).
 Data as a Service (DaaS) is an ability given to the end users to access data that are provided by the service provider over the Internet. DaaS provides data on demand. The data may include text, images, sounds, videos, etc.
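To illustrate DBaaS, the sketch below connects to a managed PostgreSQL database using the psycopg2 driver. The host name and credentials are placeholders; in practice the DBaaS provider issues the real endpoint, and the provider (not the user) installs, patches, backs up, and scales the database.

Example (Python):

import psycopg2

# Placeholders: a DBaaS provider would supply the real host, user, and password.
conn = psycopg2.connect(
    host="mydb.example-dbaas.com",
    port=5432,
    dbname="appdb",
    user="appuser",
    password="secret",
)

with conn, conn.cursor() as cur:
    # The end user only issues queries and pays per usage;
    # database administration is the provider's responsibility.
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])

conn.close()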
Pros and Cons of Cloud Computing:
Advantages
1. On-demand self-service
2. Broad network access
3. Resource pooling
4. Rapid elasticity
5. Measured service
(Items 1–5 are the five essential characteristics discussed above.)

6. Lower costs: Because cloud networks operate at higher efficiencies and with greater utilization,
significant cost reductions are often encountered.
7. Ease of utilization: Depending upon the type of service being offered, you may find that you do not
require hardware or software licenses to implement your service.

8. Quality of Service: The Quality of Service (QoS) is something that you can obtain under contract
from your vendor for the services they are providing.

9. Reliability: The scale of cloud computing networks and their ability to provide load balancing and
failover makes them highly reliable, often much more reliable than what you can achieve in a single
organization.

10. Availability: Cloud computing enables businesses to have their computer systems available for business use all the time. This is known as service availability. Because of this feature, cloud services are able to provide 24 × 7 customer support.

11. Outsourced IT management: A cloud computing deployment lets someone else manage your
computing infrastructure while you manage your business. In most instances, you achieve considerable
reductions in IT staffing costs.

12. Simplified maintenance and upgrade: Because the system is centralized, you can easily apply
patches and upgrades. This means your users always have access to the latest software versions.

13. Low Barrier to Entry: In particular, upfront capital expenditures are dramatically reduced. In cloud
computing, anyone can be a giant at any time.

14. Globalization: The cloud accelerates globalization in two distinct ways: (1) by enabling inter-organizational collaborations across borders, and (2) through PaaS, which allows the spread of services and software without physical boundaries.

15. Business Continuity: Cloud services support business continuity aspects by storing customer data in
a location different from where the business is located. Since the business accesses their cloud service
using the Internet, they could be up and running from a different location even after a disaster.

16. Storage: The data growth has been exponential for many businesses. Even though the cost of storage
has come down dramatically, the associated cost of protecting and managing stored data is significant.
A cost effective alternative is the cloud storage. The cloud storage service provider is responsible for
data backup and recovery.
17. Pay for Service: Cloud computing supports the pay-for-what-you-use model. This is also known as
pay-as-you-go model. This model enables the customer to not invest in expensive computing hardware
that they use rarely.

18. Multi-Platform Availability: The PaaS service enables the customer to choose any platform that they
need to either offer or test their services. Thus, cloud service supports multi-platform availability.

Drawbacks
1. While cloud computing applications excel at large-scale processing tasks, if your application needs
large amounts of data transfer, cloud computing may not be the best model for you.

2. When you use an application or service in the cloud, you are using something that isn't as
customizable as you might want. Although many cloud computing applications are very capable,
applications deployed on-premises still have many more features than their cloud counterparts.

3. Cloud computing is a stateless system. Although it may seem that you are carrying on a conversation
between client and provider, there is an architectural disconnect between the two. That lack of state
allows messages to travel over different routes and for data to arrive out of sequence, etc. Therefore,
additional overhead in the form of service brokers, transaction managers, and other middleware must
be added to the system. This can introduce a very large performance hit into some applications.

4. Location of data storage: The service providers deploy enough redundancy to guarantee a high degree of service availability. In order to achieve this, the service provider chooses locations that are geographically far apart. This design feature makes it difficult for customers to know where their data is stored, even though the data is available whenever they need it.

5. Security and privacy controls: Cloud service providers do not consider that it is their responsibility
to protect and secure customer data. Encryption is one method available for this type of protection. In
that case the encryption key should be stored at the customer site.

6. Service management: Cloud service provider has to manage a number of privileged users who have
access to the servers in which the customer’s applications and data reside.

7. Sometimes, the cloud service provider is unable to provide the service due to financial reasons or
violation of some law that resulted in the service being shut down by law enforcement.

8. During the time the customer uses the service, the associated data gets stored in the proprietary format of the service provider. Even if the customer were to think of migrating to a different cloud service provider, moving the existing data in a readable format is time consuming and expensive. This is called vendor lock-in.

9. Multi-tenancy problem: Cloud service providers offer virtual servers to customers by providing
computing power using servers that host multiple clients. The concern is that because data belonging
to different customers reside on the same physical server there is a possibility that such data could be
accessed intentionally or accidentally by other customers.

10. Data that persists on systems after their use is known as data remanence. Potential hackers could
subscribe to a large amount of storage space on a virtual server with the idea of accessing data that
remains in the storage area after use by another customer. This violates the confidentiality aspect of an
information system for clients.

11. To a very great extent cloud service providers are able to provide 24 × 7 service availability but at
times the service experiences outage that extends beyond the acceptable limit for the service uptime
guarantee promised.

12. Cost savings to customers: Most of the customers use SaaS cloud service. Without the capital
investment expenses for IT systems, cloud customers benefit initially. Studies have shown that this
benefit extends for the first 2 years of a service contract for SaaS. Beyond that period customers pay
more for cloud service than what it would cost them to host the service internally.

13. The cloud service’s popularity is based on its availability over the Internet. But the Internet was not designed with security in mind. So, cloud customers use the Virtual Private Network (VPN) service offered by the telecommunications provider. However, this adds to the cost of the cloud service, and so small and medium-sized businesses may not be able to afford the VPN service.

Benefits of Cloud Computing:

• Lower IT infrastructure and computer costs for users


• Improved performance
• Fewer Maintenance issues
• Instant software updates
• Improved compatibility between Operating systems
• Backup and recovery
• Performance and Scalability
• Increased storage capacity
• Increased data safety.
Cloud Architecture:
 Cloud computing architecture consists of many loosely coupled cloud components. The architecture is mainly divided into two parts: the front end and the back end.
 Each end is connected to the other through a network, generally the Internet. The cloud technology architecture also consists of front-end platforms, called cloud clients, which comprise servers, thin and fat clients, tablets, and mobile devices. The interaction is done through middleware, via a web browser, or through virtual sessions. The cloud architecture is a combination of both service-oriented architecture and event-driven architecture. So cloud architecture encompasses all elements of the cloud environment.

Front End
 The front end is the computer user's or client's side.
 It involves the interfaces and the applications that are necessary to access the Cloud Computing
system.

Back End

 The back end is the cloud section of the system. It involves all the resources that are necessary to provide cloud computing services. It includes huge data storage, virtual machines, security mechanisms, services and deployment models, servers, etc.
 It is the responsibility of the back end to provide security of data for cloud users, along with a traffic control mechanism. The server also provides the middleware, which helps connected devices communicate with each other.

CLOUD ARCHITECTURE:
 The key to cloud computing is a massive network of servers or even individual PCs interconnected
in a grid. These computers run in parallel, combining the resources of each to generate
supercomputing-like power.
 The cloud is a collection of computers and servers that are publicly accessible via the Internet.
 The hardware is typically owned and operated by a third party on a consolidated basis in one or
more data center locations. The machines can run any combination of operating systems.

As shown in Figure 1.1:
 Individual users connect to the cloud from their own personal computers or portable devices, over the Internet. To these individual users, the cloud is seen as a single application, device, or document. The hardware in the cloud (and the operating system that manages it) is invisible to the user.
 Although this cloud architecture is simple, it requires intelligent management to connect all those computers together and assign task processing to multitudes of users.

As shown in Figure 1.2, it all starts with the front-end interface seen by individual users. This is how users select a task or service (either starting an application or opening a document). The user's request then gets passed to the system management, which finds the correct resources and then calls the system's appropriate provisioning services. These services carve out the necessary resources in the cloud, launch the appropriate web application, and either create or open the requested document. After the web application is launched, the system's monitoring and metering functions track the usage of the cloud so that resources are apportioned and attributed to the proper users.
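The request flow just described can be summarized in a purely illustrative sketch. All function names below are invented to mirror the stages in Figure 1.2 (front end, system management, provisioning, monitoring); they do not correspond to any real cloud API.

Example (Python):

# Illustrative pseudostructure of the Figure 1.2 flow; every name is hypothetical.

def provision_resources(request):
    # Carve out the necessary compute/storage in the cloud for this task.
    return {"cpu": 1, "storage_gb": 5, "task": request["task"]}

def launch_web_application(resources, request):
    # Launch the web app and create or open the requested document.
    return f"{request['task']} running on {resources}"

def monitor_and_meter(user, resources):
    # Track usage so resources are apportioned and attributed to the proper user.
    print(f"metering: {user} -> {resources}")

def system_management(request):
    # Find the correct resources and call the appropriate provisioning services.
    resources = provision_resources(request)
    app = launch_web_application(resources, request)
    monitor_and_meter(request["user"], resources)
    return app

# Front-end interface: the user selects a task, which is passed to system management.
print(system_management({"user": "alice", "task": "open-document"}))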
Cloud storage:
 One of the primary uses of cloud computing is for data storage. With cloud storage, data is stored
on multiple third-party servers, rather than on the dedicated servers used in traditional networked
data storage.
 When storing data, the user sees a virtual server (it appears as if the data is stored in a particular
place with a specific name).
 In reality, the user’s data could be stored on any one or more of the computers used to create the
cloud. The actual storage location may even differ from day to day or even minute to minute, as
the cloud dynamically manages available storage space.
 But even though the location is virtual, the user sees a “static” location for his data—and can
actually manage his storage space as if it were connected to his own PC.
Advantages of Cloud storage
 Financial: virtual resources in the cloud are cheaper than dedicated physical resources connected to a personal computer or network.
 Security: data stored in the cloud is secure from accidental erasure or hardware crashes, because it is duplicated across multiple physical machines; since multiple copies of the data are kept continually, the cloud continues to function as normal even if one or more machines go offline. If one machine crashes, the data is still available on other machines in the cloud.
 Cloud storage is a service that enables saving data on an off-site storage system managed by a third party and accessible via a web services API (see the sketch below).
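The following is a minimal sketch of using a cloud storage web services API, here the AWS S3 API via boto3; the bucket and object names are placeholders, and other storage providers expose similar but not identical APIs.

Example (Python):

import boto3

# Assumes AWS credentials are configured; bucket and key names are placeholders.
s3 = boto3.client("s3")
BUCKET = "my-backup-bucket"

# Upload a local file: the user sees one "static" location for the object,
# while the provider decides where the bytes physically live and replicates them.
s3.upload_file("report.pdf", BUCKET, "backups/report.pdf")

# Download it again later, from any Internet-connected machine.
s3.download_file(BUCKET, "backups/report.pdf", "report_copy.pdf")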

Storage Devices
Following are the categories of storage devices:
1) Block Storage Devices – These devices provide raw storage to clients. This raw storage is partitioned to create volumes. A volume is a recognizable unit of data storage (a volume-provisioning sketch follows this list).
2) File Storage Devices – The file storage devices are provided to the client in the form of files, for maintaining its own file system. Stored data is accessed using the Network File System (NFS) protocol.
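As an illustration of block storage, the sketch below provisions a raw volume through the AWS EC2 API (boto3). The availability zone, size, and volume type are placeholder values; what matters is that the client receives raw capacity and is then free to partition and format it.

Example (Python):

import boto3

# Placeholders: availability zone, size, and type depend on the deployment.
ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=20,              # raw capacity in GiB; the client later partitions/formats it
    VolumeType="gp3",
)
print("Created block volume:", volume["VolumeId"])

# The raw volume is then attached to a VM, where the client's operating system
# creates a file system on it -- in contrast to file storage, which is served
# ready-made over a protocol such as NFS.
# ec2.attach_volume(VolumeId=volume["VolumeId"], InstanceId="i-...", Device="/dev/sdf")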
Storage Classes of cloud
Following are the categories of storage classes:
1) Unmanaged Cloud Storage
The storage is preconfigured for the customer; this is known as unmanaged cloud storage. The customer cannot format the storage, install his own file system, or change drive properties.
2) Managed Cloud Storage
Managed cloud storage provides online storage space on demand. This system appears to the user as a raw disk that the user can partition and format.
CLOUD SERVICES
Any web-based application or service offered via cloud computing is called a cloud service. Cloud
services can include anything from calendar and contact applications to word processing and
presentations.
E.g., Google, Amazon, and Microsoft are developing various types of cloud services.
With a cloud service, the application itself is hosted in the cloud. An individual user runs the application
over the Internet within a web browser. The browser accesses the cloud service and an instance of the
application is opened within the browser window. Once launched, the web-based application operates
and behaves like a standard desktop application.
The only difference is that the application and the working documents remain on the host’s cloud servers.
Cloud services – advantages:
 If the user’s PC crashes, it doesn’t affect either the host application or the open document; both
remain unaffected in the cloud.
 An individual user can access his applications and documents from any location on any PC.
Companies in the Cloud: Cloud Computing Today
 Google offers a powerful collection of web-based applications, all served via its cloud architecture.
Whether you want cloud-based word processing (Google Docs), presentation software (Google
Presentations), email (Gmail), or calendar/scheduling functionality (Google Calendar), Google has
an offering.
 Microsoft offers its Windows Live suite of web-based applications, as well as the Live Mesh
initiative that promises to link together all types of devices, data, and applications in a common
cloud-based platform.
 Amazon has its Elastic Compute Cloud (EC2), a web service that provides cloud-based resizable
computing capacity for application developers.
 IBM has established a Cloud Computing Center to deliver cloud services and research to clients.

Who Benefits from Cloud Computing?


What types of users, then, are best suited for cloud computing?
1. Collaborators:
The ability to share and edit documents in real time between multiple users is one of the primary benefits
of web-based applications; it makes collaborating easy and even fun.
Example: Google Presentations: You and the department heads can access the main presentation
document at your leisure. The changes one person makes are automatically visible when the other
collaborators access the document. In fact, more than one of you can edit the document at the same time,
with each of your changes happening in real time. Collaborating with a web-based application is both
more convenient and faster than trying to assemble everyone’s pieces into a single document managed by
one member of the team.
2. Road Warriors:
When you work at one office today, at home the next day, and in another city the next, it’s tough to keep
track of all your documents and applications. You may end up with one version of a document on your
work PC, another on your laptop, and a third on your home PC—and that’s if you remember to copy that
document and take it with you from one location to the next. Far better, therefore, if you can access a
single version of your document from any location. When you’re in the office, you log in to your web-
based app and access your stored document. Go home and use your web browser to access the very same
app and document via the Internet. Travel to another city and the same application and document are still
available to you.
3. Cost-Conscious Users:
With cloud computing you can save money on both your hardware and software. Hardware-wise, there’s
no need to invest in large hard disks or super-fast CPUs. Because everything is stored and run from the
web, you can cut costs by buying a less fully featured PC. When it comes to software. Instead of laying
out big bucks for the latest version of Microsoft Office, you can use Google’s versions of these apps
(Google Docs, Spreadsheets, and Presentations) for zero expenditure.
4. Cost-Conscious IT Departments:
When users need more computing power, more servers need to be purchased. This need for more
computing power becomes less of an issue when the organization embraces cloud computing. Instead of
purchasing a new server, the IT staff just redirects the computing request out to the cloud. The servers
that comprise the cloud have plenty of capacity to handle the organization’s increased needs, without the
IT staff having to spend a single dime on new hardware.
5. Users with Increasing Needs:
Hardware-based cost savings also apply to individual computer users. Need more hard disk space to store all your digital photos and MP3 files? You could purchase a new external hard drive, or you could utilize lower-cost or free cloud storage instead.

Cloud Computing Applications:


Cloud computing is applied in almost all fields, such as business, entertainment, data storage, social networking, education, management, art, etc.

1) Business Applications
Cloud computing makes business more collaborative and easier with the help of different apps such as Mail Chimp, Chatter, Google Apps for Business, and QuickBooks.

Mail Chimp
 It provides an e-mail publishing platform. It is simple email marketing software.
 It provides various options to design, send and save templates for emails.
Chatter
Chatter app helps to share important information about organization in real time.

Google Apps for Business


 Google provides tools for creating text documents, spreadsheets, presentations, etc.
 Through Google Docs, it allows business users to share and edit these documents collaboratively.

Quickbooks
 It provides online accounting solutions for a business.
 It assists in monitoring cash flow, creating VAT returns and creating business reports.

2) Data Storage and Backup Service Applications


For data storage and backup services in cloud following applications are provided.

Box.com - It provides a drag-and-drop service for files. Users simply drop their files into Box and can
then access them from anywhere.

Mozy - It provides online backup service for files to prevent data loss.

Joukuu - It is a web-based interface that shows a single, combined list of the files stored in Google
Docs, Box.net and Dropbox.

3) Management Applications
For management tasks such as time tracking and organizing notes, the following apps are available.

Toggl - It helps track the time allocated to a particular project.

Evernote - It is designed to create, organize and store different pieces of media. It keeps everything
(text documents, photos, videos, audio, and even web pages) in the cloud.

Outright - It is an accounting app that helps for tracking income, expenses, profits and losses in real
time.

4) Art Applications
Art applications like Moo, etc.

Moo - It provides art services like designing and printing business cards, postcards and mini cards.
5) Entertainment Applications
Entertainment applications like Audio box.fm, etc.

Audio box.fm - It provides a music streaming service. The music files are stored online and played from
the cloud using the service's own media player.

6) Social Applications
Various websites, such as Facebook and Twitter, provide social networking services.

Facebook - It provides social networking services. On Facebook users can share photos, videos, files,
status and more.

Twitter - It helps in interacting with the public directly. Users can follow any organization, celebrity
or person on Twitter and get the latest updates about them.

DISCOVERING CLOUD SERVICES DEVELOPMENT SERVICES AND TOOLS:

 Cloud computing is at an early stage of its development. This can be seen by observing the large
number of small and start-up companies offering cloud development tools.
 In a more established industry, the smaller players eventually fall by the wayside as larger
companies take center stage.
 Cloud services development services and tools are offered by a variety of companies, both large
and small.
 The most basic offerings provide cloud-based hosting for applications developed from scratch.
 The more fully featured offerings include development tools and pre-built applications that
developers can use as the building blocks for their own unique web-based applications.

Amazon
 Amazon, one of the largest retailers on the Internet, is also one of the primary providers of cloud
development services.
 Amazon has spent a lot of time and money setting up a multitude of servers to service its popular
website, and is making those vast hardware resources available for all developers to use.
 The service in question is called the Elastic Compute Cloud, also known as EC2. This is a
commercial web service that allows developers and companies to rent capacity on Amazon’s
proprietary cloud of servers— which happens to be one of the biggest server farms in the world.
 EC2 enables scalable deployment of applications by letting customers request a set number of
virtual machines, onto which they can load any application of their choice.
 Thus, customers can create, launch, and terminate server instances on demand, creating a truly
"elastic" operation (a brief API sketch of this flow appears at the end of this section). Amazon's service lets customers choose from three sizes of virtual servers:

 Small, which offers the equivalent of a system with 1.7GB of memory, 160GB of storage, and one virtual 32-bit core processor.
 Large, which offers the equivalent of a system with 7.5GB of memory, 850GB of storage, and two 64-bit virtual core processors.
 Extra large, which offers the equivalent of a system with 15GB of memory, 1.7TB of storage, and four virtual 64-bit core processors.

(In other words, you pick the size and power you want for your virtual server, and Amazon does the rest.)

 EC2 is just part of Amazon’s Web Services (AWS) set of offerings, which provides developers
with direct access to Amazon’s software and machines.
 By tapping into the computing power that Amazon has already constructed, developers can build
reliable, powerful, and low-cost web-based applications.
 Amazon provides the cloud (and access to it), and developers provide the rest. They pay only for
the computing power that they use.
 AWS is perhaps the most popular cloud computing service to date. Amazon claims a market of
more than 330,000 customers—a combination of developers, start-ups, and established companies.
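
To make the create/launch/terminate flow concrete, here is a minimal sketch using the current AWS SDK for Python (boto3), which is not part of the original text; the AMI ID, region, and instance type are placeholders, and today's instance-type names differ from the Small/Large/Extra Large sizes described above.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a single virtual server ("instance") of the chosen size.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image ID
    InstanceType="t3.micro",           # placeholder size; pick the capacity you need
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# ...deploy and run your application on the instance...

# Terminate on demand -- the "elastic" part: you stop paying once the instance is gone.
ec2.terminate_instances(InstanceIds=[instance_id])
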
Google App Engine

 Google is a leader in web-based applications, so it’s not surprising that the company also offers
cloud development services.
 These services come in the form of the Google App Engine, which enables developers to build
their own web applications utilizing the same infrastructure that powers Google’s powerful
applications.
 The Google App Engine provides a fully integrated application environment. Using Google’s
development tools and computing cloud, App Engine applications are easy to build, easy to
maintain, and easy to scale.
 All you have to do is develop your application (using Google's APIs and the Python programming
language) and upload it to the App Engine cloud; from there, it's ready to serve your users (a minimal handler sketch appears at the end of this section).
 As you might suspect, Google offers a robust cloud development environment. It includes the
following features:
 Dynamic web serving
 Full support for all common web technologies
 Persistent storage with queries, sorting, and transactions
 Automatic scaling and load balancing
 APIs for authenticating users and sending email using Google Accounts
 In addition, Google provides a fully featured local development environment that simulates the
Google App Engine on any desktop computer.
 And here’s one of the best things about Google’s offering: Unlike most other cloud hosting
solutions, Google App Engine is completely free to use—at a basic level, anyway.
 A free App Engine account gets up to 500MB of storage and enough CPU strength and bandwidth
for about 5 million page views a month.
 If you need more storage, power, or capacity, Google intends to offer additional resources (for a
charge) in the near future.
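
As a rough illustration of the develop-and-upload model described above, here is a minimal request handler sketch for the classic Python runtime, assuming the webapp2 framework that early App Engine bundled; the handler name and route are illustrative, and the file would be paired with an app.yaml configuration and deployed with the App Engine SDK.

import webapp2

class MainPage(webapp2.RequestHandler):
    def get(self):
        # App Engine serves this response; scaling and load balancing are automatic.
        self.response.headers["Content-Type"] = "text/plain"
        self.response.write("Hello from the App Engine cloud!")

# The framework maps the "/" route to the handler above.
app = webapp2.WSGIApplication([("/", MainPage)], debug=True)
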
IBM

 It’s not surprising, given the company’s strength in enterprise-level computer hardware, that IBM
is offering a cloud computing solution.
 The company is targeting small- and medium-sized businesses with a suite of cloud-based
ondemand services via its Blue Cloud initiative.
 Blue Cloud is a series of cloud computing offerings that enables enterprises to distribute their
computing needs across a globally accessible resource grid.
 One such offering is the Express Advantage suite, which includes data backup and recovery, email
continuity and archiving, and data security functionality—some of the more data-intensive
processes handled by a typical IT department.
 To manage its cloud hardware, IBM provides open source workload-scheduling software called
Hadoop, which is based on the MapReduce software used by Google in its own offerings (a small
MapReduce-style sketch appears below). Also included are PowerVM and Xen virtualization tools,
along with IBM's Tivoli data center management software.
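
As a small illustration of the MapReduce style that Hadoop implements (this example is not from IBM's Blue Cloud material), the following pair of Hadoop Streaming scripts counts words: the mapper emits (word, 1) pairs and the reducer sums the counts per word; the file names and the word-count task itself are illustrative.

# mapper.py - emits one "word<TAB>1" line per word read from standard input.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word, 1))

# reducer.py - receives the mapper output sorted by key and sums the counts per word.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
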

Salesforce.com
 Salesforce.com is probably best known for its sales management SaaS, but it’s also a leader in
cloud computing development.
 The company’s cloud computing architecture is dubbed Force.com. The platform as a service is
entirely on-demand, running across the Internet.
 Salesforce provides its own Force.com API and developer's toolkit (a brief REST query sketch appears at
the end of this section). Pricing is on a per-login basis. Supplementing Force.com is AppExchange, a directory of web-based applications.
 Developers can use AppExchange applications uploaded by others, share their own applications in
the directory, or publish private applications accessible only by authorized companies or clients.
 Many applications in the AppExchange library are free, and others can be purchased or licensed
from the original developers.
 Most existing AppExchange applications are sales related—sales analysis tools, email marketing
systems, financial analysis apps, and so forth. But companies can use the Force.com platform to
develop any type of application.
 In fact, many small businesses have already jumped on the Force.com bandwagon. For example,
an April 2008 article in PC World magazine quoted Jonathan Snyder, CTO of Dreambuilder
Investments, a 10-person mortgage investment company in New York.
 “We’re a small company,” Snyder said, “we don’t have the resources to focus on buying servers
and developing from scratch. For us, Force.com was really a jump-start.”
 Salesforce.com positions itself as the Enterprise Cloud Computing Company: it provides CRM and
collaboration applications that customers access over the Internet and pay for as they go. Customers can
also build their own apps on the Force.com platform, all without the need to run and manage their own
data center and software. More than 87,200 companies have chosen salesforce.com to help run their
business.
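
To give a feel for what calling the Salesforce/Force.com API looks like, here is a hedged sketch that queries account names over the Salesforce REST API using the generic Python requests library; the instance URL, API version, and access token are placeholders, and a real application would obtain the token through Salesforce's OAuth flow.

import requests

INSTANCE_URL = "https://yourInstance.salesforce.com"  # placeholder instance URL
ACCESS_TOKEN = "<oauth-access-token>"                 # placeholder token

# Run a simple SOQL query against the REST API's query endpoint.
resp = requests.get(
    INSTANCE_URL + "/services/data/v52.0/query",
    params={"q": "SELECT Name FROM Account LIMIT 5"},
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
)
resp.raise_for_status()
for record in resp.json()["records"]:
    print(record["Name"])
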

PRODUCTS:

1.Accounts and contacts


Everything you need to know about your customers and prospects - all in one place.

2.Marketing and leads


Close that gap between marketing and sales with better quality leads—and more of them.

3.Opportunities and quotes


When you have critical deals in the works, don’t let anything slip through the cracks.
4.Jigsaw data services
Your CRM data just got a whole lot better with real-time contact info and automated data
hygiene.

5.Analytics and forecasting


Get the insight you need to keep your sales on track and moving efficiently.

6.Approvals and workflow


Nothing should impede the momentum of your sales efforts. Drag and drop to create automated
processes with these tools.

7.Email and productivity


Don't change the way you work. With the Sales Cloud, you can work seamlessly with the tools
you already use every day.

8.Content library
Stop searching aimlessly for that killer presentation… that new product datasheet… that updated
price sheet. It’s right at your fingertips.

9.Genius
Find sales insights when you need them most. Genius connects you with people and resources to
help you close deals.

10.Chatter
Collaborate instantly. Get real-time updates pushed to you on the people, data, and documents
that can help you close your deals.

11.Partners
Stop waiting for partner updates. Now you can have complete visibility into both direct and
indirect sales channels with one view.

12.Mobile
Having the latest information can improve customer relations and accelerate your deals. Stay on
top of your business from any location on any device.

13. AppExchange
Discover hundreds of apps that will expand your sales success. Want more solutions? Look no
further.

Salesforce.com Support also offers:

Feature | Basic Support | Premier Support | Premier Support with Administration
Case limit | Unlimited | Unlimited | Unlimited
Response time | 2 business days | 2 hours | 2 hours
Online customer portal | Included | Included | Included
Live phone support | 12/5¹ | 24/7 | 24/7
Assigned representative | - | Yes (50 users)³ | Yes (50 users)²
Health check (annual) | - | Yes (50 users) | Yes (50 users)
Developer Support⁴ | - | Yes | Yes
Force.com app extensions⁵ | Yes | Yes | Yes
Administration | - | - | Included

Other Cloud Services Development Tools

 Amazon, Google, IBM, and Salesforce.com aren’t the only companies offering tools for cloud
services developers.
 There are also a number of smaller companies working in this space that developers should
evaluate, and that end users may eventually become familiar with. These companies include the
following:
 3tera (www.3tera.com)
 10gen (www.10gen.com)
 Cohesive Flexible Technologies (www.cohesiveft.com)
 Joyent (www.joyent.com)
 Mosso (www.mosso.com)
 Nirvanix (www.nirvanix.com)
 Skytap (www.skytap.com).
 StrikeIron (www.strikeiron.com)
 Sun Microsystems has an R&D project, dubbed Project Caroline (www.projectcaroline.net)

VIRTUALIZATION

LEVELS:

When you use cloud computing, you are accessing pooled resources using a technique called virtualization.
Virtualization assigns a logical name to a physical resource and then provides a pointer to that physical resource
when a request is made. Given a computer system with a certain set of resources, you can set aside portions of those
resources to create a virtual machine (VM). From the standpoint of applications or users, a virtual machine has all
the attributes and characteristics of a physical system but is strictly software that emulates (imitates) a physical
machine. Not all CPUs support virtual machines natively; the classic x86 architecture, for example, originally
required software techniques or later hardware extensions (discussed below) to be fully virtualized.
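
As a toy illustration of this logical-name-to-physical-resource idea (purely pedagogical, not how a real hypervisor is built), the following sketch maps logical resource names to physical resources and resolves them on request; all names are made up.

class VirtualizationLayer:
    """Toy mapping of logical resource names onto physical resources."""

    def __init__(self):
        self._mapping = {}                     # logical name -> physical resource

    def assign(self, logical_name, physical_resource):
        self._mapping[logical_name] = physical_resource

    def request(self, logical_name):
        # Callers only ever use the logical name; the layer resolves it.
        return self._mapping[logical_name]

layer = VirtualizationLayer()
layer.assign("vm1-disk", "/dev/sda5")          # a slice of a physical disk
layer.assign("vm1-nic", "eth0 (VLAN 10)")      # a share of a physical NIC
print(layer.request("vm1-disk"))               # prints /dev/sda5
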

The purpose of a VM is to enhance resource sharing by many users and improve computer performance in terms of
resource utilization and application flexibility. Hardware resources (CPU, memory, I/O devices, etc.) or software
resources (operating system and software libraries) can be virtualized in various functional layers.

The resources in the virtual machine can be accessed by using software called a virtualization layer, also known
as hypervisor or virtual machine monitor (VMM). The OS running on the host machine is called the Host OS, and
the OS loaded into a virtual machine is referred to as the Guest OS.

Figure above shows a high-level view of a hypervisor where the host machine resources are shared between
a number of guests, each of which may be running applications and each has direct access to the underlying physical
resources. Multiple VMs can run simultaneously on the same physical machine, and each VM can have different
OS (Guest OS). Each of the Guest OS is isolated and protected from any others and is unaffected by problems or
instability occurring on other VMs running on the same physical machine.

Benefits of Virtualization

1. More flexible and efficient allocation of resources.
2. Enhanced development productivity.
3. Lower cost of IT infrastructure.
4. Remote access and rapid scalability.
5. High availability and disaster recovery.
6. Pay-per-use of the IT infrastructure on demand.
7. Ability to run multiple operating systems.

VIRTUALIZATION STRUCTURES

Hypervisor and Xen Architecture

The hypervisor supports hardware-level virtualization. Depending on the implementation, there are 2 types of
hypervisors: Type 1 and Type 2.

Type 1 hypervisor is also known as a bare-metal implementation. A bare-metal environment is a computer
system in which a VM is installed directly on hardware rather than within a host operating system (OS). The term
"bare metal" refers to a hard disk, the usual medium on which a computer's OS is installed. A Type 1 hypervisor
does not have any host OS because it is installed on the bare system.

In Type 1, the hypervisor runs directly on the hardware platform. The guest OS is not aware that it is not
running on real hardware and does not require any modification. It requires only resources from the host machine.
The hypervisor or VMM coordinates instructions between the guest and the host CPU.

Advantage of Type 1: If a single virtual machine crashes, it does not affect the rest of the guest operating
systems. Therefore, Type 1 hypervisors are considered more secure than Type 2. Since they can communicate
directly with hardware resources, they are also much faster than Type 2 hypervisors.

A Type 2 or hosted hypervisor resides on top of the host OS. The hypervisor simply runs as an application
on the host OS, and the host OS takes care of all the hardware.

Disadvantage of Type 2: Since Type 2 hypervisors cannot communicate directly with the hardware, they are
less efficient than Type 1. They also have more points of failure, since anything that affects the stability of the base
operating system can also affect the guest OSes and the virtual machines. When the base OS needs a reboot, all the
VMs will also be rebooted.
Depending on the functionality, a hypervisor can assume micro-kernel architecture or monolithic hypervisor
architecture.

A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical memory
management and processor scheduling). The device drivers and other changeable components are outside the
hypervisor. EX: Microsoft Hyper-V. A monolithic hypervisor implements all the aforementioned functions,
including those of the device drivers. EX: VMware ESX for server virtualization.

The Xen Architecture


Xen is an open source hypervisor program developed by Cambridge University. Xen is a Type 1
microkernel hypervisor. Xen does not include any device drivers natively. It just provides a mechanism by which
guest OS can have direct access to the physical devices.

The core components of a Xen system are the hypervisor, kernel, and applications. Like other virtualization
systems, many guest OSes can run on top of the hypervisor. However, not all guest OSes are created equal, and one
in particular controls the others. The guest OS, which has control ability, is called Domain 0, and the others are
called Domain U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots without any file
system drivers being available. Domain 0 is designed to access hardware directly and manage devices. Therefore,
one of the responsibilities of Domain 0 is to allocate and map hardware resources for the guest domains (the Domain
U domains). A number of vendors are in the process of developing commercial Xen hypervisors, among them are
Citrix XenServer and Oracle VM.
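
To illustrate Domain 0's management role, the short sketch below (an assumption of a typical Xen host with the xl toolstack installed, not something stated in the text) runs the standard xl list command from the privileged Domain 0 to list the guest domains.

import subprocess

# 'xl list' prints one line per domain: Name, ID, Mem, VCPUs, State, Time(s).
# Only Domain 0 (the privileged control domain) can manage the other domains this way.
output = subprocess.check_output(["xl", "list"], text=True)
print(output)
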

Full Virtualization
Virtualization can be classified into three categories depending on whether or not the guest operating
system kernel needs to be modified and how privileged and non-privileged instructions are executed: Full
Virtualization, para-virtualization and hardware assisted virtualization.

In a full virtualization scheme, the VMM is installed as a Type 1 hypervisor directly onto the hardware.
Here, the underlying hardware is completely simulated. Multiple different guest OSes can run simultaneously. The
guest OSes are managed by the hypervisor/VMM, which controls the flow of instructions between the guest OSes
and the physical hardware. The hypervisor provides each VM with all the services of the physical system, so the
guest OS is fully disengaged from the underlying hardware. The guest OSes do not know they are virtualized and
run unmodified on the hypervisor.
Full virtualization can also be implemented with a Type 2 hypervisor, in which case it is called hosted
virtualization. The hypervisor runs on top of the host OS, which can be any common operating system (e.g.
Windows, Linux, or MacOS).

Operating systems use protection rings to isolate the OS from untrusted user applications. The OS can be protected with
different privilege levels. In the protection ring architecture, the rings are arranged in hierarchical order from ring 0 to ring 3.
Ring 0 contains the programs that are most privileged, and ring 3 contains the programs that are least privileged. Normally,
the highly trusted OS instructions will run in ring 0, and it has unrestricted access to physical resources. Ring 3 contains the
untrusted user applications, and it has restricted access to physical resources. The other two rings (ring 1 and ring 2) are
allotted for device drivers. This protection ring architecture restricts the misuse of resources and malicious behavior of
untrusted user-level programs. For example, any user application from ring 3 cannot directly access any physical resources as
it is the least privileged level. But the kernel of the OS at ring 0 can directly access the physical resources as it is the most
privileged level.

In full virtualization, the hypervisor is placed in Ring 0 while the guest OS is lowered to a less privileged
ring, Ring 1. Hence, the guest OS cannot communicate with the physical infrastructure directly; it requires the help
of the hypervisor to communicate with the underlying infrastructure. The user applications reside at Ring 3.

The guest OSes and their applications consist of noncritical and critical instructions. Noncritical instructions
do not control hardware or threaten the security of the system, but critical instructions do. With full virtualization,
noncritical instructions from user applications run on the hardware directly while critical instructions are discovered
and replaced with traps into the hypervisor/VMM using binary translation at run time.

Privileged/critical instructions can only ever be executed in Ring 0. The VMM/hypervisor scans the
instruction stream and identifies the privileged instructions. When these instructions are identified, they are trapped
into the VMM, which emulates their behavior. The method used in this emulation is called binary translation. The
translated instruction is executed on the hardware. Therefore, full virtualization combines binary translation and
direct execution.
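
The following toy model (purely illustrative; the "instructions" are just strings, not real machine code) mimics the combination described above: noncritical instructions are executed directly, while critical instructions are trapped into the VMM and emulated.

# Stand-ins for privileged x86-style instructions; the real set is defined by the CPU.
CRITICAL = {"HLT", "LGDT", "OUT"}

def run_directly(instr):
    print("hardware executes:", instr)

def vmm_emulate(instr):
    # The VMM performs an equivalent, safe action on behalf of the guest.
    print("VMM traps and emulates:", instr)

guest_instruction_stream = ["ADD", "MOV", "OUT", "SUB", "HLT"]

for instr in guest_instruction_stream:
    if instr in CRITICAL:
        vmm_emulate(instr)      # critical: trapped into the hypervisor
    else:
        run_directly(instr)     # noncritical: direct execution on the CPU
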

Benefits:
 This approach provides the best isolation and security for the VMs.
 Different OSs can run simultaneously.
 The virtual guest OS can be easily migrated to work in native hardware.
 It is easy to install and use and does not require any change in the guest OS.
 Guest OSes using full virtualization are generally fast, since noncritical instructions run directly on the hardware.
Drawbacks: Binary translation is an additional overhead; it reduces the overall system performance
and is rather time-consuming. Binary translation employs a code cache to store translated hot instructions to improve
performance, but this increases the cost of memory usage.

Para-Virtualization

This approach is also known as partial virtualization or OS-assisted virtualization and provides partial
simulation of the underlying infrastructure (i.e., the underlying hardware is not completely simulated). The main
difference between full virtualization and para-virtualization is that in para-virtualization the guest OS knows it is
running in a virtualized environment, whereas in full virtualization this information is not known to the guest OS.
Another difference is that para-virtualization replaces the translation of non-virtualizable OS requests with
hypercalls. Hypercalls are similar to system calls and are used for direct communication between the guest OS and
the hypervisor. This direct communication improves performance and efficiency.
In full virtualization, the guest OS will be used without any modification. But in para-virtualization, the guest OS
needs to be modified to replace sensitive, critical and non-virtualizable instructions with the hypercalls during
compile time.

Figure below illustrates the concept of a para-virtualized VM architecture. They are assisted by an intelligent
compiler to replace the non-virtualizable OS instructions by hypercalls. The modified guest OS is at the
privileged Ring 0 and issues hypercalls (privileged instructions replaced as hypercalls). Hypervisor receives this
hypercall and accesses the hardware and returns the result. As in full virtualization, noncritical instructions from
user applications run on the hardware directly. Therefore, para-virtualization combines hypercalls and direct
execution.

Advantages:
 It eliminates the additional overhead of binary translation and hence improves the overall system efficiency
and performance.
 It is easier to implement than full virtualization as there is no need for special hardware.

Disadvantages:
 There is an overhead of guest OS kernel modification.
 As para-virtualization cannot support unmodified operating systems (e.g. Windows 2000/XP), its
compatibility and portability is poor.
 Para-virtualization can also introduce significant support and maintainability issues in production
environments as it requires deep OS kernel modifications.

The best example of para-virtualization is the KVM. KVM (Kernel-Based VM) is a Linux para-
virtualization system. Memory management and scheduling activities are carried out by the existing Linux kernel.
The KVM does the rest, which makes it simpler than the hypervisor that controls the entire machine. KVM is a
hardware-assisted para-virtualization tool, which improves performance and supports unmodified guest OSes such
as Windows, Linux, Solaris, and other UNIX variants.
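
As a brief sketch of managing KVM guests programmatically (assuming the libvirt-python bindings and a local qemu/KVM hypervisor, neither of which is mentioned in the text above), the following lists the defined domains and their state.

import libvirt

# qemu:///system is the conventional URI for the local system-wide KVM hypervisor.
conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(dom.name(), "-", state)
conn.close()
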

Hardware-Assisted Virtualization
In full and para-virtualization, there is an additional overhead of binary translation or modification of the guest
OS to achieve virtualization. In this approach, the hardware vendors themselves, such as Intel and AMD, offer support
for virtualization, which eliminates much of the overhead involved in binary translation and guest OS modification.
These vendors have added hardware extensions to their x86-based processors to support virtualization.

Intel and AMD add an additional privilege mode, often described as root mode (Ring -1), to x86 processors. Here,
the guest OS runs at Ring 0, the hypervisor runs at Ring -1, and the user applications run at Ring 3. All the privileged
and sensitive instructions are trapped into the hypervisor automatically because it holds the root privilege. This
technique removes the difficulty of implementing the binary translation of full virtualization. It also lets the guest
OS run in VMs without modification (avoiding the guest OS changes required by para-virtualization). As in other
virtualization approaches, the non-privileged user requests are executed directly without any translation.
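
A quick, Linux-specific way to check for these hardware extensions is to look for the CPU flags they advertise: Intel VT-x shows up as the 'vmx' flag and AMD-V as the 'svm' flag in /proc/cpuinfo. A small sketch:

def hardware_virtualization_supported():
    """Return True if the CPU advertises Intel VT-x ('vmx') or AMD-V ('svm')."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print("Hardware-assisted virtualization available:",
      hardware_virtualization_supported())
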

Benefits:
 It reduces the additional overhead of binary translation in full virtualization.
 It eliminates the guest OS modification in para-virtualization.

Drawbacks:

 Only new-generation processors have these capabilities; not all x86/x86_64 processors support
hardware-assisted virtualization features.
 A large number of VM traps results in high CPU overhead, limited scalability, and less efficiency in server
consolidation.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the
hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory and other hardware
resources.
After virtualizing the hardware system, we can install different operating systems on it and run different
applications on those OSes.
Usage:
Hardware virtualization is mainly done for the server platforms, because controlling virtual machines
is much easier than controlling a physical server.
2) Operating System Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed on the host
operating system instead of directly on the hardware system, it is known as operating system
virtualization.
Usage:
Operating System Virtualization is mainly used for testing the applications on different platforms of
OS.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the
server system, it is known as server virtualization.

Server virtualization is done because a single physical server can be divided into multiple servers on
demand and for balancing the load.

It is the division of a physical server into several virtual servers, and this division is mainly done to
improve the utilization of server resources. In other words, it is the masking of server resources, including
the number and identity of processors, physical servers and operating systems. This division of one
physical server into multiple isolated virtual servers is done by a server administrator using software.
The virtual environments are sometimes called virtual private servers.

In this process, the server resources are kept hidden from the user. This partitioning of a physical server
into several virtual environments results in dedicating each virtual server to a single application
or task.

The primary uses of server virtualization are:

 To centralize the server administration


 Improve the availability of server
 Helps in disaster recovery
 Ease in development & testing
 Make efficient use of server resources.

Approaches to Server Virtualization:

For Server Virtualization, there are three popular approaches.

These are:

1. Virtual Machine model


2. Para-virtual Machine model
3. Operating System (OS) layer Virtualization

Server virtualization can be viewed as part of an overall virtualization trend in IT companies that
includes network virtualization, storage virtualization and workload management. This trend contributes
to the development of autonomic computing. Server virtualization can also be used to eliminate server sprawl
(a situation in which many under-utilized servers take up more space or consume more resources than can
be justified by their workload) and to use server resources efficiently.
Virtual Machine model: This model is based on the host-guest paradigm, where each guest runs on a virtual
replica of the hardware layer. This technique of virtualization allows the guest OS to run without modification.
However, it requires real computing resources from the host, and a hypervisor or VMM is required
to coordinate instructions to the CPU.

Para-Virtual Machine model: This model is also based on the host-guest paradigm and uses a virtual machine
monitor too. In this model, the VMM modifies the guest operating system's code, which is called 'porting'. Like
the virtual machine model, the para-virtual machine model is also capable of executing multiple
operating systems. The para-virtual model is used by both Xen and UML.

Operating System Layer Virtualization: Virtualization at the OS level functions in a different way and
is not based on the host-guest paradigm. In this model, the host runs a single operating system kernel as its
core and shares its functionality with each of the guests. The guests must use the same operating
system as the host. The distributed nature of this architecture eliminates system calls between layers and
hence reduces the overhead of CPU usage. It is also essential that each partition remains strictly isolated from
its neighbors, so that any failure or security breach of one partition cannot affect the other
partitions.

Advantages of Server Virtualization

Cost Reduction: Server virtualization reduces cost because less hardware is required.

Independent Restart: Each server can be rebooted independently and that reboot won't affect the
working of other virtual servers.

4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple network storage
devices so that it looks like a single storage device.
Storage virtualization is also implemented by using software applications.
Usage: Storage virtualization is mainly done for back-up and recovery purposes.
Types
1.Application Virtualization.
2.Network Virtualization.
3.Desktop Virtualization.
4.Storage Virtualization.
Application Virtualization:
 Application virtualization gives a user remote access to an application from a server.
 The server stores all personal information and other characteristics of the application, but the application
can still run on a local workstation through the Internet.
 Example: a user who needs to run two different versions of the same software.
Network Virtualization:
 It is the ability to run multiple virtual networks, each with a separate control and data plane.
 Virtual networking is a technology that facilitates data communication between two or more
virtual machines (VMs).
 It is similar to traditional computer networking but provides interconnection between VMs,
virtual servers and other related components in a virtualized computing environment.
 It can be managed by individual parties whose networks remain confidential to each other.
 Network virtualization provides a facility to create and provision virtual networks (logical
switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security)
within days or even weeks.
Desktop Virtualization:
 Desktop virtualization allows the user's OS to be remotely stored on a server in the data center.
 It allows the user to access their desktop virtually, from any location and on a different machine.
 Users who want specific operating systems other than Windows Server will need a virtual
desktop.
 The main benefits of desktop virtualization are user mobility, portability, and easy management of
software installation and updates.
Storage Virtualization:
 Storage virtualization uses an array of servers (multiple servers providing the same service so that
the service remains available even if a server fails) managed by a virtual storage system.
 It allows storage from multiple sources to be managed and utilized as a single repository.
Software Virtualization
 Managing application installation and distribution is a typical task for IT departments.
 The installation mechanism differs from application to application.
 Some programs require certain helper applications or frameworks, and these may conflict with
existing applications.
 Software virtualization works like other forms of virtualization but abstracts the software installation
procedure and creates virtual software installations.
 Virtualized software is an application that is "installed" into its own self-contained unit.
 Examples of software virtualization are VMware software, VirtualBox, etc.
Advantages of Software Virtualization

1) Client Deployments Become Easier:

By copying a file to a workstation or linking a file on the network, we can easily install the virtual software.

2) Easy to manage:
Managing updates becomes a simpler task: you update in one place and deploy the updated
virtual application to all clients.

3) Software Migration:
Without software virtualization, moving from one software platform to another takes a long time to
deploy and impacts end-user systems. With the help of a virtualized software environment,
the migration becomes easier.
