
Abstract

The field of cloud computing is still in its infancy as far as
implementation and usage are concerned, partly because it is driven by
rapid technological advancement and is so resource intensive that
researchers in academic institutions have not had many opportunities
to analyze and experiment with it. However, cloud computing arises
from IT practitioners' desire to add another layer of separation in
processing information. At the moment, a general understanding of
cloud computing covers the following concepts: grid computing, utility
computing, software as a service, storage in the cloud, and
virtualization. These all refer to a client using a provider's
services remotely, also known as "in the cloud." Even though there is
an ongoing debate on whether those concepts should be separated and
dealt with individually, the general consensus is that all of them can
be grouped under the cloud computing umbrella. Given the field's
recent development and the scarcity of published academic work, many
discussions on the topic of cloud security have come from engineers at
companies that provide the aforementioned services. Nevertheless,
academia is building a significant presence and is beginning to
address numerous issues.

Table of Contents
Introduction
History
Key features
Layers
Architecture
Pros & Cons of Cloud Computing
Security Challenges
Hardness of Implementation
Elements of Cloud Computing
Overview
Conclusion

Cloud Computing

What is Cloud Computing: Cloud computing is a term that is often
discussed around the web these days and attributed to different things
that, on the surface, don't seem to have much in common. So just what
is cloud computing? I've heard it called a service, a platform, and
even an operating system. Some even link it to concepts such as grid
computing, which is a way of taking many different computers and
linking them together to form one very big computer.

A basic definition of cloud computing is the use of the Internet


for the tasks you perform on your computer. The "cloud"
represents the Internet.

“Cloud computing is the provision of dynamically scalable and often virtualized


resources as a service over the Internet. Users need not have knowledge of,
expertise in, or control over the technology infrastructure in the “cloud” that
supports them. Cloud computing services often provide common business
applications online that are accessed from a web browser, while the software and
data are stored on the servers.”

When someone talks about a cloud computing system, it is helpful to
divide it into two sections: the front end and the back end. They are
connected to each other via a network, usually the Internet. The front
end is the interface for the user, and the back end is the cloud
section of the system.

Front end
The front end of a cloud computing system comprises the client's
device (or computer network) and the applications needed to access the
cloud computing system. Not all cloud computing systems give users the
same interface. Web services such as e-mail programs leverage existing
web browsers like Firefox, Microsoft's Internet Explorer, or Safari.
Other systems have unique applications that provide network access to
their clients. "Front end" is simply a technical term for the
interface through which a user consumes a service.

Back end
The back end refers to the physical machinery. In cloud computing, the
back end is the cloud itself, which may encompass various computers,
data storage systems, and servers. Together these make up the whole
cloud computing system. In theory, a cloud computing system can
include practically any computer program imaginable, from video games
to data processing, and from software development to entertainment.
Usually, each application has its own dedicated server.

A central server administers the whole system. It monitors client
demand as well as traffic to ensure that everything runs smoothly.
This server follows a set of rules, generally called protocols, and
uses a special type of software termed middleware. Middleware allows
computers connected over a network to communicate with each other.
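Middleware's role can be sketched with a toy example: two processes agree on a simple framing protocol (newline-delimited JSON, an assumption chosen here for illustration) and exchange messages over a socket. This is a minimal sketch of the idea, not any particular middleware product.

```python
import json
import socket

def send_message(sock, payload):
    """Serialize a dict and send it with a newline delimiter (our toy protocol)."""
    sock.sendall((json.dumps(payload) + "\n").encode("utf-8"))

def recv_message(sock):
    """Read one newline-delimited JSON message."""
    buf = b""
    while not buf.endswith(b"\n"):
        buf += sock.recv(1024)
    return json.loads(buf.decode("utf-8"))

# Two "machines" joined by a socket; the agreed framing plays the role
# of the protocol that the middleware enforces between them.
client, server = socket.socketpair()
send_message(client, {"request": "provision", "cpus": 2})
request = recv_message(server)
send_message(server, {"status": "ok", "granted": request["cpus"]})
reply = recv_message(client)
print(reply["status"])  # prints "ok"
```

Real middleware adds discovery, retries, and authentication on top, but the essential job is the same: a shared message format that lets otherwise independent machines cooperate.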

If a cloud computing service provider has many customers, there is
likely to be very high demand for storage space. Many service
providers need hundreds of digital storage devices. A cloud computing
system requires at least twice the number of storage devices its
clients' data alone would fill, because devices such as computers
often break down. The system must keep a copy of all of its clients'
data; making such copies is called redundancy.
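The redundancy idea can be sketched as a toy store that writes every record to two devices, so the failure of any single device loses nothing. The class and method names here are invented for illustration.

```python
class RedundantStore:
    """Keep every client record on at least two independent devices."""

    def __init__(self, n_devices=2):
        self.devices = [dict() for _ in range(n_devices)]

    def put(self, key, value):
        for device in self.devices:            # write to every replica
            device[key] = value

    def get(self, key):
        for device in self.devices:            # first healthy replica wins
            if device is not None and key in device:
                return device[key]
        raise KeyError(key)

    def fail_device(self, i):
        self.devices[i] = None                 # simulate a broken disk

store = RedundantStore()
store.put("client-42", "invoice data")
store.fail_device(0)                           # the primary device breaks down
print(store.get("client-42"))                  # still readable from the copy
```

Production systems spread replicas across racks and data centers for the same reason: a copy is only useful if it does not share a failure mode with the original.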

History: The underlying concept of cloud computing dates back to the
1960s, when John McCarthy opined that "computation may someday be
organized as a public utility." Almost all the modern-day
characteristics of cloud computing (elastic provision, delivery as a
utility, online access, the illusion of infinite supply), the
comparison to the electricity industry, and the use of public,
private, government, and community forms were thoroughly explored in
Douglas Parkhill's 1966 book, The Challenge of the Computer Utility.

The term "cloud" itself borrows from telephony: telecommunications
companies, which until the 1990s primarily offered dedicated
point-to-point data circuits, began offering Virtual Private Network
(VPN) services with comparable quality of service at a much lower
cost. By switching traffic to balance utilization as they saw fit,
they were able to use their overall network bandwidth more
effectively. The cloud symbol denoted the demarcation point between
what was the responsibility of the provider and what was the
responsibility of the user. Cloud computing extends this boundary to
cover servers as well as the network infrastructure. The first
scholarly use of the term "cloud computing" was in a 1997 lecture by
Ramnath Chellappa.

Key features:
 Agility improves with users' ability to rapidly and inexpensively re-
provision technological infrastructure resources.
 Application Programming Interface (API) accessibility to software that
enables machines to interact with cloud software in the same way the
user interface facilitates interaction between humans and computers.
Cloud Computing systems typically use REST-based APIs.
 Cost is claimed to be greatly reduced and capital expenditure is
converted to operational expenditure. This ostensibly lowers barriers to
entry, as infrastructure is typically provided by a third-party and does
not need to be purchased for one-time or infrequent intensive
computing tasks. Pricing on a utility computing basis is fine-grained
with usage-based options and fewer IT skills are required for
implementation (in-house).

 Device and location independence enable users to access systems using
a web browser regardless of their location or what device they are using
(e.g., PC, mobile). As infrastructure is off-site (typically provided by a
third-party) and accessed via the Internet, users can connect from
anywhere.
 Multi-tenancy enables sharing of resources and costs across a large pool
of users thus allowing for:
o Centralization of infrastructure in locations with lower costs
(such as real estate, electricity, etc.)
o Peak-load capacity increases (users need not engineer for highest
possible load-levels)
o Utilization and efficiency improvements for systems that are often
only 10–20% utilized.
 Reliability is improved if multiple redundant sites are used, which
makes well designed cloud computing suitable for business continuity
and disaster recovery. Nonetheless, many major cloud computing
services have suffered outages, and IT and business managers can at
times do little when they are affected.
 Scalability via dynamic ("on-demand") provisioning of resources on a
fine-grained, self-service basis near real-time, without users having to
engineer for peak loads. Performance is monitored, and consistent and
loosely coupled architectures are constructed using web services as the
system interface. One of the most important new methods for
overcoming performance bottlenecks for a large class of applications is
data parallel programming on a distributed data grid.[38]
 Security could improve due to centralization of data,[39] increased
security-focused resources, etc., but concerns can persist about loss of
control over certain sensitive data, and the lack of security for stored
kernels.[40] Security is often as good as or better than under traditional
systems, in part because providers are able to devote resources to
solving security issues that many customers cannot afford.[41] Providers
typically log accesses, but accessing the audit logs themselves can be
difficult or impossible. Furthermore, the complexity of security is
greatly increased when data is distributed over a wider area and / or
number of devices.
 Maintenance of cloud computing applications is easier, since they don't
have to be installed on each user's computer. They are easier to support
and to improve since the changes reach the clients instantly.

 Metering means that cloud computing resources usage should be
measurable and should be metered per client and application on a daily,
weekly, monthly, and yearly basis.
 Electronic recycling: the costs of electronic recycling are shifted
to the cloud provider.
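The metering and utility-pricing features above can be sketched as a small billing calculation: multiply each metered resource count by its rate. The resource names and rates here are hypothetical, not any provider's actual pricing.

```python
# Hypothetical rates; real providers publish their own fine-grained pricing.
RATES = {"cpu_hours": 0.05, "gb_stored": 0.02, "gb_transferred": 0.09}

def bill(usage, rates=RATES):
    """Compute a usage-based charge from metered resource counts."""
    return round(sum(rates[resource] * amount
                     for resource, amount in usage.items()), 2)

# One month of metered usage for a hypothetical client.
april = {"cpu_hours": 300, "gb_stored": 120, "gb_transferred": 50}
print(bill(april))  # 300*0.05 + 120*0.02 + 50*0.09 = 21.9
```

Because the charge is a pure function of metered usage, the same computation can be run per client and per application on a daily, weekly, monthly, or yearly window.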

Layers:
The Internet functions through a series of network protocols that form
a stack of layers (described in more detail in the OSI model). Once an
Internet connection is established among several computers, it is
possible to share services within any one of the following layers.

Client
A cloud client consists of computer hardware and/or
computer software that relies on cloud computing for application delivery, or
that is specifically designed for delivery of cloud services and that, in either
case, is essentially useless without it. Examples include some computers,
phones and other devices, operating systems and browsers.

Application
Cloud application services or "Software as a Service (SaaS)" deliver software
as a service over the Internet, eliminating the need to install and run the
application on the customer's own computers and simplifying maintenance
and support. People tend to use the terms ‘SaaS’ and ‘cloud’ interchangeably,
when in fact they are two different things. Key characteristics include:

 Network-based access to, and management of, commercially available
(i.e., not custom) software

 Activities that are managed from central locations rather than at each
customer's site, enabling customers to access applications remotely via
the Web
 Application delivery that typically is closer to a one-to-many model
(single instance, multi-tenant architecture) than to a one-to-one model,
including architecture, pricing, partnering, and management
characteristics
 Centralized feature updating, which obviates the need for downloadable
patches and upgrades.

Platform
Cloud platform services or "Platform as a Service (PaaS)" deliver a computing
platform and/or solution stack as a service, often consuming cloud
infrastructure and sustaining cloud applications. It facilitates deployment of
applications without the cost and complexity of buying and managing the
underlying hardware and software layers.

Infrastructure
Cloud infrastructure services, also known as "Infrastructure as a
Service (IaaS)", deliver computer infrastructure, typically a platform
virtualization environment, as a service. Rather than purchasing
servers, software, data-center space, or network equipment, clients
buy those resources as a fully outsourced service. Suppliers typically
bill such services on a utility computing basis, and the amount of
resources consumed (and therefore the cost) will typically reflect the
level of activity. IaaS evolved from virtual private server offerings.

Cloud infrastructure often takes the form of a Tier 3 data center with
many Tier 4 attributes, assembled from hundreds of virtual machines.

Server
The servers layer consists of computer hardware and/or computer
software products that are specifically designed for the delivery of
cloud services, including multi-core processors, cloud-specific
operating systems, and combined offerings.

Pros & Cons of Cloud Computing
Pros:
Simplicity – Entrepreneurs have enough on their plates as it is. Solutions that
can simplify any part of their business operations are a welcome addition.
Hosting in the cloud can streamline and simplify actions such as “pass thru”
billing to end-users. In some cases, cloud hosting providers can even bill your
customers directly.

Cost Effectiveness – Cloud hosting has a low cost of entry. There are
no capital expenses to bear, and it doesn't require "IT-like"
personnel to join your staff. Again, for a startup that isn't
depending on its site as a main business conduit, this is a very
inexpensive way to get going.

Moves as quickly as your business – Cloud hosting is extremely fast to
implement in most cases and claims to be infinitely scalable. It also
supports multi-platform development environments.

Doesn’t have what you don’t need — If you’re a startup with no critical data
on your Web site or applications, the security level of cloud hosting may be
plenty.

Cons:
Performance — In a cloud environment, all sites compete for the same
hardware resources. If multiple Web sites spike at the same time,
everyone can slow down. Additionally, with cloud you never really know
how much performance is available to you. The claim is that you get
unlimited scalability; however, many of the cloud's early adopters are
finding that is not the case as their Web site resource requirements
grow beyond this elusive capacity.

Security – Cloud hosting is simply not the most secure environment at


present. If you’re looking to achieve and maintain data privacy requirements
for PCI compliance, HIPAA compliance, SOX, E-commerce, and so on, then
cloud hosting is not the solution for you.

Redundancy – One of the misconceptions about cloud hosting is that
it's hosted "in the sky and not in a datacenter," which is not true. A
cloud hosting deployment typically resides in a single datacenter.
Recently a large hosting provider's data center went down, leaving a
lot of cloud-hosted Web sites in the dark. The site owners had a huge
reality check and quickly learned of the single points of failure
within a cloud environment.

Cost —The cloud gives businesses a hands-free method to scale their hosting,
however some problems can arise that are financially surprising. For starters,
automatic scaling can make people extremely lazy. If you’re not paying
attention to your usage, you just might get a huge surprise on your next bill.
One thing that’s a rising concern is hackers running up their victims’ hosting
bills. One method that’s being used is a simple low-level DDoS attack
(Distributed Denial of Service), which won’t take your site down but will keep
your server very busy. Since you pay for usage with cloud hosting, your costs
can spin wildly out of control. So if you’re using cloud hosting, make sure to
pay daily attention to your usage.

Ultimately, while cloud hosting may provide some distinct benefits and cost
advantages for start-ups or non-critical Web sites, it isn’t well suited for
mission-critical sites and SaaS applications. In particular, you cannot achieve
compliance mandates of HIPAA, PCI, SOX, etc. when storing data in and
serving applications from “the cloud.”

Security Challenges
Start-up companies often lack the protective measures to weather an
attack on their servers due to scarcity of resources: poor programming
that exposes software vulnerabilities (PHP, JavaScript, etc.), ports
left open through firewalls, or nonexistent load-balancing algorithms
susceptible to denial-of-service attacks. For this reason, new
companies are encouraged to pursue cloud computing as an alternative
to supporting their own hardware backbone. However, cloud computing
does not come without its pitfalls. For starters, a cloud is a single
point of failure for multiple resources. Even though network carriers
such as AT&T believe a distributed cloud structure is the right
implementation, it faces major challenges in finding the optimal
approach for low-power transmission and high network availability;
some people believe that major corporations will shy away from
implementing cloud solutions in

the near future due to ineffective security policies. One problem comes from
the fact that different cloud providers have different ways to store data, so
creating a distributed cloud implies more challenges to be solved between
vendors.
Data Security
Security refers to confidentiality, integrity and availability, which pose major
issues for cloud vendors.
Confidentiality refers to who stores the encryption keys - data from company
A, stored in an encrypted format at company B must be kept secure from
employees of B; thus, the client company should own the encryption keys.
Integrity refers to the fact that no common policies exist for
approved data exchanges; the industry has various protocols used to
push different software images or jobs. One way to maintain data
security on the client side is to use thin clients that run with as
few resources as possible and do not store any user data, so passwords
cannot be stolen. The concept seems impervious to attacks based on
capturing this data. However, companies have implemented systems with
unpublished APIs, claiming that this improves security; unfortunately,
such APIs can be reverse engineered. Also, using DHCP and FTP to
perform tasks such as firmware upgrades has long been considered
insecure; nevertheless, products from Wyse are marketed with their
thin client as one of the safest while using those exact features.
Lastly, the most problematic issue is availability, as several
companies using cloud computing have already experienced downtime
(Amazon servers were subject to what appeared to be a
denial-of-service attack). Other things to keep in mind are contract
policies between clients and vendors, so that data belongs only to the
client at all times, preventing third parties from being involved at
any point. Also, authentication should be backed by several methods,
such as a password plus a flash card, a password plus a fingerprint,
or some other combination of external hardware and password. One
benefit of cloud computing is that client software security does not
need to be enforced as strictly as before. This concerns the view of
cloud computing as software as a service, as it becomes more important
to ensure the security of data transfer than to follow a traditional
secure application life cycle.

Cloud Computing Security Issues
"Cloud Computing Challenges and Related Security Issues" identified
seven issues that need to be addressed before enterprises consider
switching to the cloud computing model. They are as follows:
 Privileged user access - information transmitted from the client
over the Internet poses a certain degree of risk because of data
ownership issues; enterprises should get to know their providers and
their regulations as much as possible beforehand, perhaps assigning
some trivial applications first to test the water.
 Regulatory compliance - clients are accountable for the security of
their solution, and can choose between providers that allow themselves
to be audited by third-party organizations that check levels of
security and providers that don't.
 Data location - depending on contracts, some clients might never
know in what country or jurisdiction their data is located.
 Data segregation - encrypted information from multiple companies may
be stored on the same hard disk, so the provider should deploy a
mechanism to separate data.
 Recovery - every provider should have a disaster recovery protocol
to protect user data.
 Investigative support - if a client suspects faulty activity from
the provider, it may not have many legal avenues to pursue an
investigation.
 Long-term viability - the ability to retract a contract and all data
if the current provider is bought out by another firm.
Not all of the above need improvement for every application, but it
remains paramount that consensus be reached on the issues regarding
standardization.

Security Benefits

There are plenty of concerns about trusting cloud computing given its
security issues. However, cloud computing also comes with several
benefits that improve data security. The following paragraphs address
concepts such as centralized data, incident response, and logging.
Centralized Data refers to the approach of placing all eggs in one
basket. It might seem dangerous: if the cloud goes down, so does the
service it provides; but at the same time, it is easier to monitor.
Storing data in the cloud avoids many issues related to lost laptops
or flash drives, which have been the most common way of losing data
for large enterprises and government organizations. The laptop stores
only a small cache to interface with the thin client, and
authentication is done over the network, in the cloud.

In addition, when a laptop is known to be stolen, administrators can
block its attempted access based on its identifier or MAC address.
Moreover, it is easier and cheaper to store data encrypted in the
cloud than to perform disk encryption on every piece of hardware or
backup tape.
Incident Response refers to the ability to procure a resource such as
a database server or supercomputing power, or to use a testing
environment, whenever needed. This bypasses the red tape associated
with the traditional process of requesting resources in the corporate
world. Also, if a server is down for re-imaging or disk clean-up, the
client can easily create similar instances of its environment on other
machines, improving acquisition time. From a security standpoint,
cloud providers already provide algorithms for generating hashes or
checksums whenever a file is stored in the cloud, which removes the
need to compute them locally. This does not imply that clients should
not encrypt data before sending it, but merely that the service is
already in place for them.
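Such integrity checks can be sketched with a standard hash: compute a digest when the file is stored, then compare it on retrieval. SHA-256 is used here as an illustrative choice, not a claim about what any specific provider runs.

```python
import hashlib

def checksum(data):
    """SHA-256 digest of a file's bytes, as a provider might compute on upload."""
    return hashlib.sha256(data).hexdigest()

original = b"quarterly-report contents"
stored_digest = checksum(original)           # recorded when the file is stored

retrieved = b"quarterly-report contents"
print(checksum(retrieved) == stored_digest)  # prints True: integrity intact

tampered = b"quarterly-report contents!"
print(checksum(tampered) == stored_digest)   # prints False: data was altered
```

A mismatched digest tells the client that the stored copy no longer matches what was uploaded, whether through corruption or tampering; it does not by itself say which.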
Password Assurance Testing is a service that harnesses the
computational power of the cloud to attempt to break into a company's
own systems by guessing passwords. This approach minimizes the
resources and time spent on the client side. Logging benefits come
from the idea that the client need not worry about storage space for
log files and enjoys a faster way of searching through them. Moreover,
it allows a convenient way to observe which user accessed which
resources at any given time.
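A password-assurance check of this kind might look like the sketch below: derive a key from each candidate password and compare it against the stored credential. The salt, wordlist, and key-derivation parameters are all hypothetical; a real audit would fan the candidate list out across many cloud machines.

```python
import hashlib

def derive(password, salt):
    """PBKDF2-HMAC-SHA256 key derivation, as a system might store credentials."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

# Stored credential from the system under audit (salt + derived key).
salt = b"per-user-salt"
stored = derive("sunshine", salt)

# Each candidate check is independent, so the work parallelizes trivially
# across cloud nodes; here we simply loop over a tiny wordlist.
wordlist = ["123456", "password", "sunshine", "qwerty"]
cracked = next((w for w in wordlist if derive(w, salt) == stored), None)
print(cracked)  # prints "sunshine"
```

If the audit recovers a password quickly, the account's password policy is too weak; the point of the service is to find that out before an attacker does.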
Improvement of Secure Software refers to several aspects of a
product's development lifecycle. First, a company thinking of placing
its application in the cloud knows that the cost of running the
application is directly proportional to the number of processing
cycles, creating an incentive for an optimal implementation. Second,
it becomes easier to monitor the effects of various security policies
implemented in the software, without the traditional overhead of
switching environments from development to production or testing;
creating a new environment simply means cloning an existing one.
Third, software runs behind an architecture built for secure
transactions at the physical, data link, network, and transport
layers, making it easier to design the application without the
explicit need for a security software engineer. Moreover, some cloud
providers may use code scanning to detect vulnerabilities in
application code.

So why is it hard?
Several factors make Incident Management one of the most difficult and
expensive of all the ITIL processes. By no means is this an exhaustive
list; please feel free to add to it.

Complex System Architecture


Over the last 60 years, the IT industry has seen breakneck growth. IT
services have evolved to meet increasingly sophisticated and complex
business demands. A typical IT service today includes the following:

Hardware

 One or more servers or virtual machines


 SAN storage
 Network components
 Backup servers

Software

 Hypervisor (if virtualized)


 Operating system
 One or more databases
 One or more web servers
 One or more application servers
 Load balancing servers
 Monitoring software
 Interfaces to internal and external services

In the above I am not even talking about Business Continuity, which
adds its own layers. The result is a complex architecture that is
difficult to understand and manage. What's more, the architecture is
often not documented adequately and is not kept up to date.

Poorly architected or missing processes

In addition to inadequate documentation, many IT departments do not
have processes to manage their IT services. This results in ad hoc and
sometimes unauthorized changes with cascading effects.

Silo effect caused by super specialization among IT


professionals
As a result of complex architectures, super specialists are becoming
necessary to manage them. This creates silos in which super
specialists operate with jargon that is comprehensible within the silo
but not elsewhere. When serious incidents are reported, it is not
uncommon to find half a dozen domain experts spending valuable time on
swat calls.

Incomplete monitoring of processes and systems


For a variety of reasons, not all of the processes and systems that belong to an
IT Service are monitored. While there seems to be no alternative to this
because of cost and resource issues, it results in blind spots. An unmonitored
Incident in one stack may result in an unpredictable Incident in another, but
may take a long time to diagnose because no one is aware of the original
Incident.
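A first step toward finding such blind spots is simply to inventory which components of a service emit monitoring data. A minimal sketch, with an invented component list:

```python
# Toy inventory: which components make up the service, and which are watched.
components = {"web": True, "app": True, "db": True,
              "backup": False, "san": False}

def blind_spots(inventory):
    """Components that belong to the service but emit no monitoring data."""
    return sorted(name for name, monitored in inventory.items() if not monitored)

print(blind_spots(components))  # prints ['backup', 'san']
```

Even this trivial report makes the trade-off explicit: the unmonitored components are exactly where an originating Incident can hide while its downstream effects are being diagnosed elsewhere.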

Lessons learned do not propagate


Even though domain experts may have excellent troubleshooting skills,
once a difficult Incident has been resolved they often do not have the
tools to spread the knowledge. Search engines have reduced this
problem somewhat by providing tag-based searches, but complex
Incidents with multiple or cascading root causes cannot easily be
captured in a community knowledge base. This results in frequent
reinventing of the wheel.

Missing or unclear context in exception handling


IT hardware and software are often developed in an environment that is far
removed from the ecosystems where they eventually end up. When
exceptions do occur, the exception handlers usually do not understand the
context and therefore do not provide a comprehensible explanation.

There are many other reasons why Incident Management remains hard.
There is a tendency to throw resources at Incidents when the
underlying cause is poorly architected software, infrastructure, or
business processes. Insufficient attention is paid to training IT
professionals in troubleshooting, which remains an art form. Finally,
it is getting more and more expensive to hire trained professionals
while IT budgets are shrinking.

Better automation and autonomics provide some relief for Incident
Management, but that is a topic for another blog.

Overview
The cloud is a resource that incorporates routers, firewalls,
gateways, proxies, and storage servers. The interaction among these
entities needs to occur in a secure fashion. For this reason, the
cloud, just like any data center, implements boundary protection, also
known as a demilitarized zone (DMZ). The most sensitive information is
stored behind the DMZ. Other policies that run in the cloud are
resource priority and application partitioning. Resource priority
allows processes or hardware requests in a

higher-priority queue to be serviced first. Application partitioning
refers to the use of one server or storage device by various clients
whose data may be encrypted differently. The cloud should have
policies that separate the users' view of an application from the
backend information storage. This may be achieved using
virtualization, multiple processors, or network adapters.
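The resource-priority policy described above is essentially a priority queue: requests carry a priority level, and the scheduler always serves the highest level first. A minimal sketch, with invented request names (lower number means higher priority):

```python
import heapq

class PriorityScheduler:
    """Serve requests in the higher-priority queue first."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within a priority level

    def submit(self, priority, request):
        heapq.heappush(self._heap, (priority, self._seq, request))
        self._seq += 1

    def next_request(self):
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.submit(2, "batch report")
sched.submit(0, "interactive login")   # higher priority, arrives later
sched.submit(1, "backup job")
print(sched.next_request())            # prints "interactive login"
```

The sequence counter matters: without it, two requests at the same priority would be compared by their payloads, and arrival order would be lost.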

Conclusion
Cloud computing is still in its infancy, with both positive and
negative comments on its possible implementation for a large
enterprise. IT practitioners are spearheading the challenge, while
academia is a bit slower to react. Several groups have recently been
formed, such as the Cloud Security Alliance and the Open Cloud
Consortium, with the goal of exploring the possibilities offered by
cloud computing and establishing a common language among providers. In
this boiling pot, cloud computing is facing several issues in gaining
recognition for its merits. Its security deficiencies and benefits
need to be carefully weighed before deciding to implement it. However,
the future looks less cloudy, with more people attracted to the topic
and pursuing research to improve on its drawbacks.


