
Name- Surya Pratap Singh

Enrolment No.- 0829IT181023


Subject – Cloud Computing
Subject Code- IT702

1. Explain the cloud components in detail.

Answer :

The basic components of cloud computing in a simple topology are divided into three parts, namely clients, the datacenter, and distributed servers. Each of these components has a specific goal and role in running cloud computing operations. The three components can be described as follows:

• Clients in a cloud computing architecture are the same things they are in a plain, old, everyday local area network (LAN). They are typically the computers that sit on your desk, but they might also be laptops, tablet computers, mobile phones, or PDAs, all big drivers for cloud computing because of their mobility. Clients are the devices users interact with to manage their information on the cloud.
• The datacenter is the collection of servers where the application to which you subscribe is housed. It could be a large room of servers in the basement of your building, or a room full of servers on the other side of the world that you access via the Internet. A growing trend in the IT world is virtualizing servers; that is, software can be installed that allows multiple instances of virtual servers to run. In this way, you can have half a dozen virtual servers running on one physical server.
• Distributed servers are servers placed in different locations. The servers don't have to be housed in the same place; often they are in geographically disparate locations. But to you, the cloud subscriber, these servers act as if they're humming away right next to each other.

Cloud computing can also be described in terms of its software architecture components. Cloud Applications let users run applications without installing them on their own computers. A Cloud Platform is a service in the form of a computing platform that contains hardware infrastructure and software; it usually hosts certain business applications and uses PaaS services as its business application infrastructure. Cloud Storage involves delivering data storage as a service, and Cloud Infrastructure is the delivery of computing infrastructure as a service.

Cloud computing services require several components, namely:

a. Cloud Clients, computers or software specifically designed for the use of cloud computing based services.

Example:

• Mobile - Windows Mobile, Symbian
• Thin Client - Windows Terminal Service, CherryPal
• Thick Client - Internet Explorer, Firefox, Chrome

b. Cloud Services, products, services, and solutions that are used and delivered in real time via the Internet.

Example :

• Identity - OpenID, OAuth, etc.
• Integration - Amazon Simple Queue Service (see the sketch below).
• Payments - PayPal, Google Checkout.
• Mapping - Google Maps, Yahoo! Maps.
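To make the Integration example concrete, here is a minimal, hedged sketch of sending and receiving a message with Amazon Simple Queue Service through the boto3 Python SDK. The queue name is a hypothetical placeholder, and valid AWS credentials and region configuration are assumed.

```python
import boto3

sqs = boto3.client("sqs")

# Create (or look up) a queue; "demo-queue" is a hypothetical name.
queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]

# One application component publishes a message to the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody="hello from the cloud")

# Another component can receive and then delete the message asynchronously.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url,
                       ReceiptHandle=message["ReceiptHandle"])
```

Because the queue sits between the sender and the receiver, the two components never need to be online at the same time, which is exactly the integration role the list above assigns to SQS.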

c. Cloud Applications, applications that use cloud computing in their software architecture, so that users don't need to install anything and can use the application from any computer.

Example :

• Peer-to-peer - BitTorrent, SETI@home, and others.
• Web Application - Facebook.
• SaaS - Google Apps, SalesForce.com, and others.

d. Cloud Platform, a service in the form of a computing platform consisting of hardware infrastructure and software. It usually hosts certain business applications and uses PaaS services as its business application infrastructure.

Example :

• Web Application Frameworks - Python Django, Ruby on Rails, .NET
• Web Hosting
• Proprietary

e. Cloud Storage, which involves the process of delivering data storage as a service.

Example :

• Database - Google BigTable, Amazon SimpleDB.
• Network Attached Storage - Nirvanix CloudNAS, MobileMe iDisk.

f. Cloud Infrastructure, delivery of computing infrastructure as a service.

Example:

• Grid Computing - Sun Grid.
• Full Virtualization - GoGrid, Skytap.
• Compute - Amazon Elastic Compute Cloud.

The 11 other main categories of cloud computing components are as follows:

• Storage-as-a-service (SaaS) - This refers to the disk space we use when we lack a storage platform and therefore request it as a service.
• Database-as-a-service - This component acts as a database served directly from a remote server, where its functionality and other features work as if a physical DB were present on the local machine.
• Information-as-a-service - Information that can be accessed remotely from anywhere is called Information-as-a-Service; this highlights the flexibility of accessing information remotely.
• Process-as-a-service - Unlike other components, this component combines
various resources such as data and services. This is mainly used for business
processes where various key services and information are combined to form a
process.
• Application-as-a-service (AaaS) - As the name suggests, this is a complete package for accessing and using applications. End users usually connect through a browser and the Internet to access this service, and this component is the main front end for end users.
• Platform-as-a-service (PaaS) - In this component, the entire application
development process takes place including creating, implementing, storing, and
testing the database.
• Integration-as-a-service - Mostly related to application components that have
been built but must be integrated with other applications. This helps in
mediating between remote servers and local machines.
• Security-as-a-service - Because security is what most people expect in the
cloud, this is one of the most needed components. There are three-dimensional
security principles found on cloud platforms.
• Management / governance-as-a-service (MaaS and GaaS) - This is related to
cloud management, such as resource utilization, virtualization, and server up
and downtime management.
• Testing-as-a-service (TaaS) - Using these components, remote-hosted
applications are tested in terms of design requirements, database functionality,
and security measures among other testing features.
• Infrastructure-as-a-service (IaaS) - This is a complete virtualization of networks, servers, software, and hardware on cloud platforms. Users cannot monitor the backend processes, but they are presented with a fully configured system with all processes set up for direct use.

2. Compare: Public cloud, Private cloud, and Hybrid cloud


Answer:
Public cloud:
Public clouds are managed by third parties that provide cloud services over the Internet to the public, available under a pay-as-you-go billing model. They offer solutions for minimizing IT infrastructure costs and are a good option for handling peak loads on the local infrastructure. They are a go-to option for small enterprises, which can start their businesses without large upfront investments by relying completely on public infrastructure for their IT needs.
A fundamental characteristic of public clouds is multitenancy. A public cloud is meant to serve multiple users, not a single customer. Each user requires a virtual computing environment that is separated, and most likely isolated, from other users.

Private cloud:
Private clouds are distributed systems that work on a private infrastructure and provide users with dynamic provisioning of computing resources. Instead of the pay-as-you-go model used in public clouds, there can be other schemes that take into account the usage of the cloud and proportionally bill the different departments or sections of an enterprise.
Hybrid cloud:
A hybrid cloud is a heterogeneous distributed system that results from combining facilities of a public cloud and a private cloud. For this reason hybrid clouds are also called heterogeneous clouds.
A major drawback of private deployments is the inability to scale on demand and to efficiently address peak loads; this is where public clouds are needed. Hence, a hybrid cloud takes advantage of both public and private clouds.

3. Explain cloud computing reference model.


Answer:
An important characteristic of cloud computing is the ability to deliver on demand a variety of services that can be quite diverse from each other. As a result, users' perceptions of cloud computing differ. Despite this lack of uniformity, cloud computing can be defined in terms of three major categories of service, which together are called the cloud computing reference model.

Type of Cloud Computing Reference Model

There are three types of cloud computing reference model: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Infrastructure as a Service (IaaS)

IaaS is the most basic category of cloud computing services. With IaaS, we can rent IT infrastructure (servers and virtual machines (VMs), storage, networks, and operating systems) from a cloud provider on a pay-as-you-go basis. It is an instant computing infrastructure, provisioned and managed over the Internet. Virtual hardware is provided on demand in the form of virtual machine instances, and pricing is often on an hourly basis. Virtual storage is either raw disk space or an object store, a higher level of abstraction that deals in entities (objects) rather than files. For example, data in Amazon S3 cannot be treated as files when accessed through the Java API, although the s3cmd ls command shows the contents of an S3 bucket as if they were files. Virtual networking is the collection of services that manage networking among virtual instances.
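The paragraph above mentions the Java API; the same point can be sketched in Python with the boto3 SDK, which exposes S3 entries as object keys rather than file handles. This is a minimal sketch: the bucket and key names are hypothetical placeholders, and AWS credentials are assumed.

```python
import boto3

s3 = boto3.client("s3")

# List the objects in a bucket; each entry is an object key, not a file path.
response = s3.list_objects_v2(Bucket="example-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Retrieve one object's bytes; there is no file handle, only an object body.
body = s3.get_object(Bucket="example-bucket",
                     Key="reports/2020.csv")["Body"].read()
print(len(body), "bytes")
```

Even though a key like "reports/2020.csv" looks like a file path, it is just an opaque name in a flat namespace, which is what distinguishes an object store from a filesystem.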

Platform as a Service (PaaS)

PaaS is another category of the cloud computing reference model. PaaS provides an environment for building, testing, and deploying software applications. The goal of PaaS is to help create an application as quickly as possible without having to focus on managing the underlying infrastructure. PaaS models deliver scalable and elastic runtime environments on demand and host the execution of applications. These services are backed by a core middleware platform that is responsible for creating an abstract environment where applications are deployed and executed. The responsibility of the service provider is to provide scalability and manage fault tolerance, whereas users focus on the logical part of application development, leveraging the APIs and libraries provided by the PaaS. For example, if we want to process data on the Spark engine and write our code in Scala, we do not have to install the Spark and Scala environments ourselves; the service provider has already done this for users. We simply use the cloud computing service as a platform.
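As a minimal illustration of this idea, the sketch below submits a small word-count job to a Spark runtime that a managed platform is assumed to have already provisioned; the input path is a hypothetical placeholder, and PySpark is used here instead of Scala purely for illustration.

```python
from pyspark.sql import SparkSession

# On a PaaS offering, the Spark runtime is already provisioned and managed;
# the user only submits application logic such as this.
spark = SparkSession.builder.appName("word-count-demo").getOrCreate()

# "data.txt" is a hypothetical input file made available to the cluster.
lines = spark.read.text("data.txt")

# Split each line into words and count occurrences of each word.
words = lines.selectExpr("explode(split(value, ' ')) AS word")
counts = words.groupBy("word").count()
counts.show()

spark.stop()
```

Notice that nothing in the code provisions machines, installs Spark, or handles node failures; those concerns belong to the platform provider, which is the division of responsibility the paragraph above describes.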

Software as a Service (SaaS)

SaaS is software that is centrally hosted and managed for the end customer. It allows users to connect to and use cloud-based apps over the Internet. Common examples are email, calendars, and office tools such as Microsoft Office 365. SaaS provides applications and services on demand. Most of the common functionalities of desktop applications (office automation, document management, photo editing, customer relationship management (CRM)) are provided via the web browser, which can make applications more scalable. Applications are shared by multiple users. For example, social networking sites like Facebook and Twitter are hosted on the cloud, and we just use them as software. Most social networking sites make use of cloud-based infrastructures.
4. Explain virtual desktop infrastructure
Answer:

Virtual desktop infrastructure (VDI) is a technology that refers to the use of virtual
machines to provide and manage virtual desktops. VDI hosts desktop environments on
a centralized server and deploys them to end-users on request.

In VDI, a hypervisor segments servers into virtual machines that in turn host virtual
desktops, which users access remotely from their devices. Users can access these
virtual desktops from any device or location, and all processing is done on the host
server. Users connect to their desktop instances through a connection broker, which is
a software-based gateway that acts as an intermediary between the user and the server.

VDI can be either persistent or nonpersistent. Each type offers different benefits:

• With persistent VDI, a user connects to the same desktop each time, and
users are able to personalize the desktop for their needs since changes are
saved even after the connection is reset. In other words, desktops in a
persistent VDI environment act exactly like a personal physical desktop.
• In contrast, nonpersistent VDI, where users connect to generic desktops and
no changes are saved, is usually simpler and cheaper, since there is no need to
maintain customized desktops between sessions. Nonpersistent VDI is often
used in organizations with a lot of task workers, or employees who perform a
limited set of repetitive tasks and don’t need a customized desktop.

VDI offers a number of advantages, such as user mobility, ease of access, flexibility
and greater security. In the past, its high-performance requirements made it costly and
challenging to deploy on legacy systems, which posed a barrier for many businesses.
However, the rise in enterprise adoption of hyperconverged infrastructure (HCI) offers
a solution that provides scalability and high performance at a lower cost.

Although VDI’s complexity means that it isn’t necessarily the right choice for every
organization, it offers a number of benefits for organizations that do use it. Some of
these benefits include:
• Remote access: VDI users can connect to their virtual desktop from any
location or device, making it easy for employees to access all their files and
applications and work remotely from anywhere in the world.

• Cost savings: Since processing is done on the server, the hardware requirements for end devices are much lower. Users can access their virtual desktops from older devices, thin clients, or even tablets, reducing the need for IT to purchase new and expensive hardware.
• Security: In a VDI environment, data lives on the server rather than the end
client device. This serves to protect data if an endpoint device is ever stolen or
compromised.
• Centralized management: VDI’s centralized format allows IT to easily patch,
update or configure all the virtual desktops in a system.

Although VDI can be used in all sorts of environments, there are a number of use
cases that are uniquely suited for VDI, including:

• Remote work: Since VDI makes virtual desktops easy to deploy and update
from a centralized location, an increasing number of companies are
implementing it for remote workers.
• Bring your own device (BYOD): VDI is an ideal solution for environments that
allow or require employees to use their own devices. Since processing is done
on a centralized server, VDI allows the use of a wider range of devices. It also
offers better security, since data lives on the server and is not retained on the
end client device.
• Task or shift work: Nonpersistent VDI is particularly well suited to
organizations such as call centers that have a large number of employees who
use the same software to perform limited tasks.

5. Explain virtualization and its types in detail.


Answer:

Virtualization is a technique that allows sharing a single physical instance of an application or resource among multiple organizations or tenants (customers). It does so by assigning a logical name to a physical resource and providing a pointer to that physical resource on demand.

Creating a virtual machine over an existing operating system and hardware, and running other operating systems or applications on top of it, is referred to as hardware virtualization. Virtual machines provide an environment that is logically separated from the underlying hardware.

The machine on which the virtual machine is created is known as the host machine, and the virtual machine is referred to as the guest machine. The virtual machine is managed by software or firmware known as a hypervisor.

Types of Virtualization:

1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization

1. Application Virtualization:
Application virtualization gives a user remote access to an application from a server. The server stores all personal information and other characteristics of the application, but the application can still run on a local workstation through the Internet. An example would be a user who needs to run two different versions of the same software. Technologies that use application virtualization include hosted applications and packaged applications.

2. Network Virtualization:
This is the ability to run multiple virtual networks, each with a separate control and data plane, coexisting on top of one physical network. The virtual networks can be managed by individual parties that remain potentially confidential to each other. Network virtualization provides the facility to create and provision virtual networks (logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security) within days or even weeks.
3. Desktop Virtualization:
Desktop virtualization allows a user's OS to be stored remotely on a server in the data centre. It allows the user to access their desktop virtually, from any location, on a different machine. Users who want a specific operating system other than Windows Server will need a virtual desktop. The main benefits of desktop virtualization are user mobility, portability, and easy management of software installation, updates, and patches.
4. Storage Virtualization:
Storage virtualization is an array of servers managed by a virtual storage system. The servers aren't aware of exactly where their data is stored and instead function more like worker bees in a hive. It allows storage from multiple sources to be managed and utilized as a single repository. Storage virtualization software maintains smooth operations, consistent performance, and a continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying equipment.
5. Server Virtualization:
This is a kind of virtualization in which masking of server resources takes place. Here, the central (physical) server is divided into multiple virtual servers by changing the identity number and processors, so each virtual server can run its own operating system in an isolated manner, while each sub-server knows the identity of the central server. It increases performance and reduces operating cost by dividing the main server's resources into sub-server resources. It is beneficial for virtual migration, reduced energy consumption, reduced infrastructure cost, etc.

6. Data Virtualization:
This is the kind of virtualization in which data is collected from various sources and managed in a single place, without users needing to know technical details such as how the data is collected, stored, and formatted. The data is then arranged logically so that its virtual view can be accessed remotely by interested people, stakeholders, and users through various cloud services. Many large companies provide data virtualization services, such as Oracle, IBM, AtScale, CData, etc.

6. Explain server virtualization.


Answer :

Server virtualization is the division of a physical server into several virtual servers, mainly done to improve the utilization of server resources. In other words, it is the masking of server resources, including the number and identity of processors, physical servers, and operating systems. This division of one physical server into multiple isolated virtual servers is done by a server administrator using software. The virtual environments are sometimes called virtual private servers.

In this process, the server resources are kept hidden from the user. This partitioning of the physical server into several virtual environments results in the dedication of one server to performing a single application or task.

This technique is mainly used for web servers, where it reduces the cost of web-hosting services: instead of having a separate system for each web server, multiple virtual servers can run on the same computer.

The primary uses of server virtualization are:

• To centralize the server administration
• To improve the availability of servers
• To help in disaster recovery
• To ease development & testing
• To make efficient use of server resources

For server virtualization, there are three popular approaches. These are:

• Virtual Machine model
• Para-virtual Machine model
• Operating System (OS) layer Virtualization

Server virtualization can be viewed as part of an overall virtualization trend in IT companies that includes network virtualization, storage virtualization, and workload management. This trend contributes to the development of autonomic computing. Server virtualization can also be used to eliminate server sprawl (a situation in which many under-utilized servers take up more space or consume more resources than can be justified by their workload) and to use server resources efficiently.

1. Virtual Machine model: This model is based on the host-guest paradigm, where each guest runs on a virtual replica of the hardware layer. This technique of virtualization allows the guest OS to run without modification. However, it requires real computing resources from the host, so a hypervisor is required to coordinate instructions to the CPU (a minimal sketch follows this list).
2. Para-Virtual Machine model: This model is also based on the host-guest paradigm and uses a virtual machine monitor (VMM). In this model the VMM modifies the guest operating system's code, which is called 'porting'. Like the virtual machine model, the para-virtual machine model is capable of executing multiple operating systems. The para-virtual model is used by both Xen and UML.
3. Operating System Layer Virtualization: Virtualization at the OS level functions in a different way and is not based on the host-guest paradigm. In this model the host runs a single operating system kernel as its core and transfers its functionality to each of the guests. The guests must use the same operating system as the host. This distributed nature of the architecture eliminates system calls between layers and hence reduces CPU overhead. Each partition must also remain strictly isolated from its neighbors, so that any failure or security breach in one partition cannot affect the other partitions.
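As a minimal sketch of the virtual machine model described in item 1, the following Python snippet uses the libvirt bindings to ask a hypervisor which guest virtual servers it is coordinating. It assumes the libvirt-python package, a local libvirt daemon, and a QEMU/KVM hypervisor; on other setups the connection URI would differ.

```python
import libvirt

# Connect to the hypervisor that coordinates guest instructions to the CPU.
# "qemu:///system" is the conventional URI for a local QEMU/KVM host.
conn = libvirt.open("qemu:///system")

# Enumerate the guest virtual servers defined on this physical host.
for domain in conn.listAllDomains():
    state = "running" if domain.isActive() else "stopped"
    print(f"{domain.name()}: {state}")

conn.close()
```

Each domain in the listing is one isolated virtual server carved out of the single physical host, which is exactly the partitioning this answer describes.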
Advantages of Server Virtualization

• Cost Reduction: Server virtualization reduces cost because less hardware is required.
• Independent Restart: Each server can be rebooted independently, and that reboot won't affect the working of the other virtual servers.

7. Define trusted cloud computing.


Answer :

Trusted Computing (TC) is a technology developed and promoted by the Trusted Computing Group. The term is taken from the field of trusted systems and has a specialized meaning. With Trusted Computing, the computer will consistently behave in expected ways, and those behaviors will be enforced by computer hardware and software. Enforcing this behavior is achieved by loading the hardware with a unique encryption key that is inaccessible to the rest of the system.

TC is controversial because the hardware is not only secured for its owner but also secured against its owner. Such controversy has led opponents of trusted computing, such as free software activist Richard Stallman, to refer to it instead as treacherous computing, even to the point where some scholarly articles have begun to place scare quotes around "trusted computing".

Trusted Computing proponents such as International Data Corporation, the Enterprise Strategy Group, and Endpoint Technologies Associates claim the technology will make computers safer, less prone to viruses and malware, and thus more reliable from an end-user perspective. They also claim that Trusted Computing will allow computers and servers to offer improved computer security over that which is currently available. Opponents often claim this technology will be used primarily to enforce digital rights management (DRM) policies and not to increase computer security.
8. Explain cloud computing security architecture
Answer :
Architecting appropriate security controls that protect the CIA (confidentiality, integrity, and availability) of information in the cloud can mitigate cloud security threats. Security controls can be delivered as a service (Security-as-a-Service) by the cloud provider, by the enterprise, or by a third-party provider. Security architectural patterns are typically expressed from the point of view of security controls (safeguards), i.e. technology and processes. These security controls and the service location (enterprise, cloud provider, third party) should be highlighted in the security patterns.

Security architecture patterns serve as the North Star and can accelerate application migration to clouds while managing the security risks. In addition, cloud security architecture patterns should highlight the trust boundary between the various services and components deployed at cloud services. These patterns should also point out standard interfaces, security protocols (SSL, TLS, IPsec, LDAPS, SFTP, SSH, SCP, SAML, OAuth, TACACS, OCSP, etc.) and mechanisms available for authentication, token management, authorization, encryption methods (hash, symmetric, asymmetric), encryption algorithms (Triple DES, 128-bit AES, Blowfish, RSA, etc.), security event logging, the source of truth for policies and user attributes, and coupling models (tight or loose). Finally, the patterns should be leveraged to create security checklists that can be automated by configuration management tools like Puppet.

In general, patterns should highlight the following attributes (but not limited to) for each of the security services consumed by the cloud application (see Figure 4.1); a small sketch after the list captures one such pattern as data:

Logical location – Native to the cloud service, in-house, or a third-party cloud. The location may have implications for the performance, availability, and firewall policy as well as governance of the service.

Protocol – What protocol(s) are used to invoke the service? For example, REST with X.509 certificates for service requests.

Service function – What is the function of the service? For example, encryption of the artifact, logging, authentication, and machine fingerprinting.

Input/Output – What are the inputs, including methods to the controls, and outputs from the security service? For example, Input = XML doc and Output = XML doc with encrypted attributes.

Control description – What security control does the security service offer? For example, protection of information confidentiality at rest, authentication of users, and authentication of applications.

Actor – Who are the users of this service? For example, end point, end user, enterprise administrator, IT auditor, and architect.
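As a hedged illustration only, one such pattern entry can be captured as plain data. The Python sketch below encodes the attributes above for a hypothetical artifact-encryption service, the kind of record a checklist tool such as Puppet could be fed from; all values are invented for the example, not taken from a real pattern catalog.

```python
# One security architecture pattern entry, expressed as plain data.
# Every value here is a hypothetical example, not a real service.
encryption_service_pattern = {
    "logical_location": "third-party cloud",   # native, in-house, or 3rd party
    "protocol": "REST over TLS with X.509 client certificates",
    "service_function": "encryption of artifacts at rest",
    "input_output": {
        "input": "XML document",
        "output": "XML document with encrypted attributes",
    },
    "control_description": "protects confidentiality of information at rest",
    "actors": ["end user", "enterprise administrator", "IT auditor"],
}

# A checklist generator could iterate over many such entries.
for attribute, value in encryption_service_pattern.items():
    print(f"{attribute}: {value}")
```

Keeping each pattern as structured data makes it straightforward to generate the automated security checklists the answer mentions.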
9. Define OLAP.
Answer :

OLAP (online analytical processing) is software for performing multidimensional analysis at high speeds on large volumes of data from a data warehouse, data mart, or some other unified, centralized data store.

Most business data have multiple dimensions—multiple categories into which the data
are broken down for presentation, tracking, or analysis. For example, sales figures
might have several dimensions related to location (region, country, state/province,
store), time (year, month, week, day), product (clothing, men/women/children, brand,
type), and more.

But in a data warehouse, data sets are stored in tables, each of which can organize data
into just two of these dimensions at a time. OLAP extracts data from multiple
relational data sets and reorganizes it into a multidimensional format that enables very
fast processing and very insightful analysis.
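As a small illustration of this reorganization, the following Python sketch uses the pandas pivot_table function to fold a flat sales table into a two-dimensional cross-tab, roughly what an OLAP tool does across many more dimensions and at much larger scale. The data values are invented for the example.

```python
import pandas as pd

# A tiny flat sales data set with three dimensions: region, product, year.
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "product": ["clothing", "shoes", "clothing", "shoes"],
    "year":    [2020, 2020, 2020, 2020],
    "revenue": [120, 80, 150, 95],
})

# Reorganize two of the dimensions into a cross-tab: regions as rows,
# products as columns, with revenue summed in each cell.
cube = sales.pivot_table(index="region", columns="product",
                         values="revenue", aggfunc="sum")
print(cube)
```

Slicing the same table by year instead of product, or by all three dimensions at once, is the kind of "rotation" of the data cube that gives OLAP analysis its speed and flexibility.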

10. Explain inter cloud issues.


Answer :

The idea behind an intercloud is that a single common functionality would combine
many different individual clouds into one seamless mass in terms of on-demand
operations. To understand how this works, it’s helpful to think about how existing
cloud computing setups are designed.

Cloud hosting is largely intended to deliver on-demand services. Through careful use
of scalable and highly engineered technologies, cloud providers are able to offer
customers the ability to change their levels of service in many ways without waiting
for physical changes to occur. Terms like rapid elasticity, resource pooling, and on-demand self-service are already part of cloud hosting service designs that are set up to
make sure the customer or client never has to deal with limitations or disruptions.
Building on all of these ideas, the intercloud would simply make sure that a cloud
could use resources beyond its reach by taking advantage of pre-existing contracts
with other cloud providers.

Although these setups are theoretical as they apply to cloud services, telecom
providers already have these kinds of agreements. Most of the national telecom
companies are able to reach out and use parts of another company’s operations where
they lack a regional or local footprint, because of carefully designed business
agreements between the companies. If cloud providers develop these kinds of
relationships, the intercloud could become reality.

As a means toward allowing this kind of functionality, the Institute of Electrical and Electronics Engineers (IEEE) developed the Intercloud Testbed in 2013, a set of technical standards that would go a long way towards helping cloud provider companies federate and interoperate in the ways theorized in intercloud design principles.
