
Department of Information Technology

GC University Faisalabad

Quiz
Class: BS-IT Semester: 7th
Course Title: Cloud Computing Course Code: CIT-607

Question No 1: (04 Marks)

Explain the properties supported by the virtualization technique. Also describe the types of
monitoring metrics.

What is Virtualization?
Virtualization is the process of creating a software-based, or virtual, representation of something,
such as virtual applications, servers, storage and networks. It is the single most effective way to
reduce IT expenses while boosting efficiency and agility for businesses of all sizes.
Benefits of Virtualization
 Virtualization can increase IT agility, flexibility and scalability while creating
significant cost savings. Greater workload mobility, increased performance and
availability of resources, automated operations – they’re all benefits of
virtualization that make IT simpler to manage and less costly to own and operate.
Additional benefits include:
 Reduced capital and operating costs.
 Minimized or eliminated downtime.
 Increased IT productivity, efficiency, agility and responsiveness.
 Faster provisioning of applications and resources.
 Greater business continuity and disaster recovery.
 Simplified data center management.
 Availability of a true Software-Defined Data Center.
Properties:
VMs have the following characteristics, which offer several benefits.
Partitioning
 Run multiple operating systems on one physical machine.
 Divide system resources between virtual machines.
Isolation
 Provide fault and security isolation at the hardware level.
 Preserve performance with advanced resource controls.
Encapsulation
 Save the entire state of a virtual machine to files.
 Move and copy virtual machines as easily as moving and copying files.
Hardware Independence
 Provision or migrate any virtual machine to any physical server.
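
Taken together, encapsulation and hardware independence mean a VM is effectively just data. The following minimal Python sketch illustrates the idea (the VMState fields and function names are my own simplification, not a real hypervisor API; a real state file would be a disk image plus device state):

    import json
    import shutil
    from dataclasses import dataclass, asdict

    @dataclass
    class VMState:
        # Hypothetical fields standing in for a full machine state
        name: str
        guest_os: str
        cpu_cores: int
        memory_mb: int

    def save_vm(vm: VMState, path: str) -> None:
        # Encapsulation: the entire (simplified) VM state becomes an ordinary file
        with open(path, "w") as f:
            json.dump(asdict(vm), f)

    def clone_vm(src: str, dst: str) -> None:
        # Copying or moving a VM reduces to copying or moving its state file
        shutil.copyfile(src, dst)

    vm = VMState(name="web01", guest_os="Linux", cpu_cores=2, memory_mb=4096)
    save_vm(vm, "web01.vmstate")
    clone_vm("web01.vmstate", "web02.vmstate")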
Types of Monitoring Metrics:

Server Availability and Uptime

Server uptime reflects the reliability and availability of your servers, underscoring the need to
keep them up and running at all times. You do not need to spend every minute checking your
uptime report, but it is essential to know when a server is down. For production servers, uptime
below 99% calls for attention, and below 95% signals trouble.
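
As a quick illustration of those thresholds, the Python sketch below computes an uptime percentage from downtime minutes over an assumed 30-day month and flags it against the 99% and 95% marks (the 500-minute figure is a made-up sample):

    # Minutes in an assumed 30-day month
    TOTAL_MINUTES = 30 * 24 * 60

    def uptime_percent(downtime_minutes: float) -> float:
        return 100.0 * (TOTAL_MINUTES - downtime_minutes) / TOTAL_MINUTES

    def classify(uptime: float) -> str:
        # Thresholds from the text: below 99% needs attention, below 95% is trouble
        if uptime < 95.0:
            return "trouble"
        if uptime < 99.0:
            return "needs attention"
        return "ok"

    downtime = 500  # assumed: 500 minutes of downtime this month
    pct = uptime_percent(downtime)
    print(f"uptime: {pct:.2f}% -> {classify(pct)}")  # uptime: 98.84% -> needs attention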

System-level Performance Metrics

CPU, memory, disk usage, and network activity are usually the immediate suspects when you
identify a server performance degradation issue in your data center. Checking these metrics
helps detect servers with insufficient RAM, limited hard drive space, high CPU utilization, or
bandwidth bottlenecks, making it much easier to troubleshoot and act before problems with
your servers escalate.
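
One common way to collect these four metrics from a script is the third-party psutil library; a minimal sketch follows (the alert thresholds are assumptions, not standard values):

    import psutil  # third-party library: pip install psutil

    cpu = psutil.cpu_percent(interval=1)      # CPU utilization sampled over 1 second
    mem = psutil.virtual_memory().percent     # RAM in use
    disk = psutil.disk_usage("/").percent     # root filesystem usage
    net = psutil.net_io_counters()            # cumulative network I/O counters

    print(f"CPU {cpu}%  MEM {mem}%  DISK {disk}%")
    print(f"NET sent={net.bytes_sent} recv={net.bytes_recv} bytes")

    # Example alert rule; the thresholds are assumptions to tune per server
    if cpu > 90 or mem > 90 or disk > 85:
        print("WARNING: resource pressure detected")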

Application-level Performance Metrics

The applications running on your servers are composed of multiple services, and understanding
intra-service dependencies and connection patterns can be difficult. Monitoring every service
and process running on a server reveals which service or process is impacting server
performance, and helps you analyze server load and manage system resources.
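
Using the same psutil library, a short sketch that lists the heaviest processes is one way to see which service is impacting server performance (the top-5 cutoff is arbitrary):

    import psutil  # third-party library: pip install psutil

    # Snapshot basic metrics for every running process
    procs = [p.info for p in psutil.process_iter(
        ["pid", "name", "cpu_percent", "memory_percent"])]

    # Show the five heaviest processes by CPU usage
    for info in sorted(procs, key=lambda i: i["cpu_percent"] or 0, reverse=True)[:5]:
        print(f"{info['pid']:>6}  {str(info['name']):<25} "
              f"cpu={info['cpu_percent']}%  mem={(info['memory_percent'] or 0):.1f}%")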

Security-level Performance Metrics

With so many background tasks running on your servers, it can be quite difficult to know what is
being written to or modified in your files. A monitoring eye that notifies you of such changes is a
real time-saver: it keeps you aware of unauthorized access that could result in the loss of
sensitive data, and of improper changes that can cause data breaches and compliance failures.
Knowing when files are modified, when content changes, or even when specific resources are
accessed can act as an intrusion detection system and secure your infrastructure. Another
important source to watch for security issues is the logs generated by servers, applications, and
security devices. Monitoring these logs lets system administrators scan and search for errors,
problems, specific text patterns, and rules indicating important events in the log files.
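
As a small example of the file-change side, here is a minimal file-integrity sketch in Python (the watched paths are assumptions); a similar loop over log lines with regular expressions would cover the log-monitoring side:

    import hashlib

    def sha256_of(path: str) -> str:
        # Hash a file so later modifications can be detected
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Assumed watch list of sensitive files
    WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]

    baseline = {p: sha256_of(p) for p in WATCHED}  # record once

    # ... later, on a schedule, compare against the baseline ...
    for path, old_digest in baseline.items():
        if sha256_of(path) != old_digest:
            print(f"ALERT: {path} was modified")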

Question No 2: (06 Marks)

1. Describe the Cloud Cube Model. Also explain the dimensions of the Cloud Cube Model with
the help of a diagram.
Ans:

The Cloud Cube

Each form of cloud computing offers different characteristics, varying degrees of flexibility,
different collaborative opportunities and different risks. It can be a challenge for organizations
to determine how to choose the cloud form that best suits their needs.

The Jericho Forum has identified four criteria to differentiate cloud formations from each
other and the manner of their provision. The Cloud Cube Model effectively summarizes
these four dimensions:

1. Internal/External
2. Proprietary/Open
3. Perimeterised/De-perimeterised Architectures
4. Insourced/Outsourced

Dimension 1: Internal/External

This dimension defines the physical location of the data: does the cloud form exist inside or
outside the organization's boundaries? If the cloud form is within the organization's physical
boundaries, it is internal; if it is outside them, it is external. It is important to note that the
assumption that internal is necessarily more secure than external is false. The most secure
usage model is the effective use of both internal and external cloud forms.
Dimension 2: Proprietary/Open

This dimension defines the state of ownership of the cloud technology, services, interfaces, etc.
It indicates the degree of interoperability, the transportability of data and applications between
an organization's own systems and other cloud forms, and the ability to withdraw your data
from a cloud form or move it to another without constraint. It also indicates any constraints on
the ability to share applications.

“Proprietary” suggests that the organization providing the service is keeping the means
of provision under its ownership. By contrast, “open” clouds use technology that is not
proprietary, which means that there are likely to be more suppliers, and the organization
is not as constrained in terms of ability to share data and collaborate with selected
parties. Experts suggest that open clouds most effectively enhance collaboration
between multiple organizations.

Dimension 3: Perimeterised/De-perimeterised Architectures

This dimension represents the architectural mindset of the organization. It asks if the
organization is operating within its traditional IT perimeter or outside it. De-
perimeterisation relates to the gradual failure, removal, shrinking or collapse of the
traditional silo-based IT perimeter.
“Perimeterised” suggests a system that continues to operate within the traditional IT
perimeter, often characterized by “network firewalls.” This approach is known to inhibit
collaboration. Operating within such areas means extending an organization’s perimeter
into the external cloud computing domain via a VPN and operating the virtual server in
its own IP domain. The organization uses its own directory services to control access.
Once the computing task is complete, the perimeter is withdrawn to its original,
traditional position.

“De-perimeterised” suggests that the system perimeter is designed following the principles
outlined in the Jericho Forum's Commandments and Collaboration Oriented Architectures
Framework. De-perimeterised areas in the Cloud Cube Model use both internal and external
domains, but the collaboration or sharing of data should not be seen as internal or external.
Rather, it is controlled by and limited to the parties that the using organizations select.

Dimension 4: Insourced/Outsourced

This dimension has two states in each of the eight cloud forms. It answers the question: who do
you want running your clouds? “Outsourced” means that the service is provided by a third
party; “insourced” means that the service is provided by your own staff under your control.
These states describe the party managing the delivery of the cloud service(s) used by the
organization.
It’s important to note that few organizations that are traditionally bandwidth, software or
hardware providers will be able to smoothly transition to becoming cloud service
providers.  Organizations looking to procure cloud services must develop the ability to
rapidly set up legally binding collaboration agreements, and to close them just as
quickly once they become unnecessary.  When terminating an agreement with a
provider, an organization should ensure that the data is appropriately deleted from the
service provider’s infrastructure (including backups), to avoid risk of a data breach or
leak.
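
Because each dimension is a binary choice, the first three dimensions yield the eight cloud forms and the fourth adds the sourcing state for each. A minimal Python sketch of the model (my own illustration, not a Jericho Forum artifact):

    from dataclasses import dataclass
    from enum import Enum

    class Location(Enum):         # Dimension 1
        INTERNAL = "internal"
        EXTERNAL = "external"

    class Ownership(Enum):        # Dimension 2
        PROPRIETARY = "proprietary"
        OPEN = "open"

    class Boundary(Enum):         # Dimension 3
        PERIMETERISED = "perimeterised"
        DE_PERIMETERISED = "de-perimeterised"

    class Sourcing(Enum):         # Dimension 4
        INSOURCED = "insourced"
        OUTSOURCED = "outsourced"

    @dataclass(frozen=True)
    class CloudForm:
        location: Location
        ownership: Ownership
        boundary: Boundary
        sourcing: Sourcing

    # Example: an open, de-perimeterised external cloud run by a third party
    form = CloudForm(Location.EXTERNAL, Ownership.OPEN,
                     Boundary.DE_PERIMETERISED, Sourcing.OUTSOURCED)
    print(form)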

2. Evaluate the concept of cloud computing and analyze the concepts of Virtualization

Institution: US National Institute of Standards and Technology (NIST)
Definition: “Cloud computing is a model for enabling ubiquitous, convenient, on-demand
network access to a shared pool of configurable computing resources (e.g. networks, servers,
storage, applications, and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction.”

Institution: Berkeley RAD Lab
Definition: “Cloud computing refers to both the applications delivered as services over the
Internet and the hardware and systems software in the datacenters that provide those services.
The services themselves have long been referred to as Software as a Service (SaaS), so we use
that term. The datacenter hardware and software is what we will call the cloud.”

The first definition focuses more on the purpose of cloud computing, while the latter
concentrates more on its components. Together they show that cloud computing can be
understood as a service model for computing services based on a set of computing resources
that can be accessed in a flexible, elastic, on-demand way with low management effort.

The following characteristics, which are generally inherent to cloud computing but not
necessarily a feature of every cloud computing solution, shed further light on the concept.
According to NIST (2011), Armbrust et al. (2009), and Schubert et al. (2010), they include:

On-demand self-service: Users of clouds can “unilaterally provision computing capabilities,
such as server time and network storage, as needed automatically without requiring human
interaction with each service provider” (NIST, 2011).

Availability of “infinite” computing resources: Cloud users do not have to plan the provision of
their computing resources in advance, as they have the potential to access computing resources
on demand.

Rapid elasticity and adaptability: Elasticity is one of the key features of cloud
computing. Computing resources can be provisioned in an elastic and rapid
way that allows adaptation to changing requirements such as the amount of
data supported by a service or the number of parallel users. Users can buy
computing services at any time at various granularities. They are able to up-
and downscale those services according to their needs.
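
A toy autoscaling loop makes the elasticity idea concrete; everything below (thresholds, load samples) is an assumed illustration rather than a real cloud provider API:

    def desired_instances(current: int, load_per_instance: float) -> int:
        # Scale out when instances run hot, scale in when they sit idle
        if load_per_instance > 0.80:                  # assumed scale-out threshold
            return current + 1
        if load_per_instance < 0.30 and current > 1:  # assumed scale-in threshold
            return current - 1
        return current

    instances = 2
    for load in [0.90, 0.95, 0.50, 0.20, 0.10]:  # assumed load samples
        instances = desired_instances(instances, load)
        print(f"load={load:.2f} -> {instances} instance(s)")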

Elimination of up-front commitment: Users of cloud services do not have to make heavy,
up-front IT investments, allowing companies to start small and successively increase hardware
and software resources only when needed. In addition, small and medium enterprises gain much
easier and more affordable access to state-of-the-art applications and platforms that were
previously available only to larger companies.

Short-term pay for use: Users are able to pay for their use of cloud services on a short-term
basis, paying only for the time they use the computing resources and releasing them when they
are no longer needed. Companies are thus able to reduce capital expenses (capex) and convert
them into operating expenses (opex).
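
A back-of-the-envelope comparison shows the capex-to-opex shift; all prices and hours below are made-up example figures:

    # All figures are assumed example numbers
    server_capex = 10_000.00     # up-front purchase price of a physical server
    cloud_rate = 0.40            # assumed hourly price of a comparable instance
    hours_used = 8 * 30 * 6      # 8 hours/day, 30 days/month, for 6 months

    cloud_opex = cloud_rate * hours_used
    print(f"pay-per-use cost: ${cloud_opex:,.2f} vs up-front capex: ${server_capex:,.2f}")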

Network access: Computing services are accessed over the network through standard
mechanisms that allow users to connect a variety of devices to the cloud (e.g. laptops, mobile
phones).

Pooling of resources: Providers' cloud services “are pooled to serve multiple customers using a
multi-tenant model, with different physical and virtual resources dynamically assigned and
reassigned according to consumer demand” (NIST, 2011). As a consequence, users of cloud
computing usually do not know, and cannot control, where the provided cloud computing
resources are exactly located. In certain cases, however, it is possible to specify the location at a
more abstract level (e.g. continent, country).

Measured service and adaptability: Cloud computing systems are able to react promptly to
changes in the amount of computing resources requested and thus automatically control and
optimize resource use.

Analyze the concepts of Virtualization:

In the mid-1990s, organizations began to understand that not all the physical hardware they
used had its full potential exploited. Some applications, for example, could only run on a
particular type of hardware or operating system, forcing companies to invest in specific server
structures to meet each demand.

In this way, virtualization emerged as a solution to these problems: companies could partition a
particular piece of hardware to run two or more different applications or operating systems, and
thus achieve greater efficiency by reducing the costs associated with acquiring equipment and
maintaining the entire datacenter infrastructure.

In simple terms: suppose your company has payroll management software that requires the
Windows 7 operating system and inventory management software that needs a Linux
distribution. Without a virtualization solution, a separate server would be needed for each OS.
In a virtualized environment, both the Linux and Windows operating systems can be installed
on the same hardware together with their respective applications. Virtualization, then, aims to
simulate hardware through software.
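
The scenario above can be pictured as partitioning one host's resources between two guests; a schematic Python sketch (all capacities assumed):

    # One physical host partitioned between two guest VMs (all numbers assumed)
    host = {"cpu_cores": 16, "memory_gb": 64}

    guests = [
        {"name": "payroll",   "os": "Windows 7", "cpu_cores": 4, "memory_gb": 16},
        {"name": "inventory", "os": "Linux",     "cpu_cores": 4, "memory_gb": 16},
    ]

    # The hypervisor must fit every guest inside the host's physical resources
    assert sum(g["cpu_cores"] for g in guests) <= host["cpu_cores"]
    assert sum(g["memory_gb"] for g in guests) <= host["memory_gb"]

    for g in guests:
        print(f"{g['name']} runs {g['os']} with {g['cpu_cores']} cores and "
              f"{g['memory_gb']} GB on the same physical server")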
