
Security in Cloud Computing

Presented By: Prof. Mukesh Parmar


Flow of presentation:
Introduction
Security issues
Data issues
Performance issues
Energy related issues
Bandwidth related issues
Fault tolerance
Conclusion


WHAT IS CLOUD COMPUTING?

Cloud computing means using multiple server computers via a
digital network as though they were one computer.

We can say it is a new computing paradigm, involving data and/or
computation outsourcing, with:

– Infinite and elastic resource scalability
– On-demand "just-in-time" provisioning
– No upfront cost … pay-as-you-go

The name cloud computing was inspired by the cloud symbol that's often
used to represent the Internet in flowcharts and diagrams.
BENEFITS

Traditionally, without a cloud, a web server runs as a single
computer or a group of privately owned computers.

If the computer's website or web application suddenly becomes more
popular, and the number of requests far exceeds what the web server can
handle, the response time of the requested pages increases due to
overloading. On the other hand, in times of low load much of the
capacity goes unused.

If the website, service, or web application is hosted in a cloud,
however, additional processing and compute power is available from
the cloud provider.

If the website suddenly becomes more popular, the cloud can
automatically direct more individual computers to serve pages
for the site, and more money is paid for the extra usage. If it becomes
unpopular, the amount of money due will be less. Cloud
computing is popular for its pay-as-you-go pricing model.

In the past, computing tasks were not possible without the installation
of application software on a user's computer. A user bought a license for
each application from a software vendor and obtained the right to install
the application on one computer system.

• With the development of local area networks (LAN) and more
networking capabilities, the client-server model of computing was
born, where server computers with enhanced capabilities and large
storage devices could be used to host application services and data
for a large workgroup.
Architecture

The two most significant components of cloud computing architecture
are known as the front end and the back end.

The front end is the part seen by the client, i.e., the computer user. This
includes the client's network (or computer) and the applications used to
access the cloud via a user interface such as a web browser.

The back end of the cloud computing architecture is the cloud itself,
comprising various computers, servers and data storage devices.

Cloud architecture, the systems architecture of the software systems
involved in the delivery of cloud computing, typically involves multiple
cloud components communicating with each other over a loose-coupling
mechanism such as a messaging queue.
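The loose coupling above can be sketched with a simple in-process queue acting as a stand-in for a real hosted messaging service (component names here are illustrative):

```python
import queue
import threading

# A front-end component publishes requests to a queue; a back-end worker
# consumes them. Neither side needs to know the other directly - they
# only share the queue, which is the loose-coupling mechanism.
requests = queue.Queue()

def backend_worker(results):
    while True:
        msg = requests.get()
        if msg is None:          # sentinel value: shut the worker down
            break
        results.append(f"processed:{msg}")

results = []
worker = threading.Thread(target=backend_worker, args=(results,))
worker.start()

# The front end only needs a handle to the queue, not to the worker.
for page in ["home", "login", "dashboard"]:
    requests.put(page)
requests.put(None)               # signal shutdown
worker.join()
print(results)                   # ['processed:home', 'processed:login', 'processed:dashboard']
```

In a real cloud the queue would be a network service, letting components scale and fail independently.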
Data storage

Cloud storage is a model of networked computer data
storage where data is stored on multiple virtual servers, in general
hosted by third parties, rather than being hosted on dedicated servers.

Hosting companies operate large data centers; people who require
their data to be hosted buy or lease storage capacity from them and use
it for their storage needs.

The data center operators, in the background, virtualize the resources
according to the requirements of the customer and expose them as virtual
servers, which the customers can themselves manage. In the physical
sense, the resource may span multiple servers.
Cloud computing services are broadly
divided into three categories:
Infrastructure as a Service (IaaS):

This is the base layer of the cloud stack. It serves as a foundation for the
other two layers. The keyword behind this stack is
virtualization.

Your application will be executed on a virtual computer (instance). You
have the choice of virtual computer, where you can select a
configuration of CPU, memory and storage that is optimal for your
application.

The whole cloud infrastructure, viz. servers, routers, hardware-based load
balancing, firewalls, storage and other network equipment, is provided by
the IaaS provider.

Some common examples are Amazon, GoGrid, 3Tera, etc.
Platform as a Service (PaaS)

• Here, a layer of software, or development environment, is
encapsulated and offered as a service, upon which other higher levels
of service can be built.

• The customer has the freedom to build his own applications,
which run on the provider's infrastructure.

• To meet manageability and scalability requirements of the
applications, PaaS providers offer a predefined combination of OS
and application servers, such as the LAMP platform (Linux, Apache,
MySQL and PHP).

• Google's App Engine, Force.com, etc. are some of the popular PaaS
examples.
Software as a Service (SaaS)

In this model, a complete application is offered to the customer as a
service on demand. A single instance of the service runs on the cloud and
multiple end users are serviced.

On the customers' side, there is no need for upfront investment in servers
or software licenses, while for the provider, the costs are lowered,
since only a single application needs to be hosted and maintained.

Today SaaS is offered by companies such as Google, Salesforce,
Microsoft, etc.
DEPLOYMENT MODELS
Private cloud:
The cloud infrastructure is owned or leased by a single
organization and is operated solely for that organization.

Community cloud:
The cloud infrastructure is shared by several
organizations and supports a specific community that has shared
concerns (e.g., mission, security requirements, policy).

Public cloud:
The cloud infrastructure is owned by an organization selling
cloud services to the general public or to a large industry group.
Hybrid cloud:
The cloud infrastructure is a composition of two or more
clouds that remain unique entities but are bound together by
standardized or proprietary technology.
ISSUES IN CLOUD COMPUTING

Security issues
- Physical security
- Operational security
- Programmatic security

Data issues
- Data backup
- Data usage
- Data loss
- Data integrity
- Data theft
Performance issue

Design issues
- Energy management
- Novel cloud architectures
- Software Licensing

Reliability

Legal issues
- The physical location of your data
- Responsibility for your data
- Intellectual property
SECURITY ISSUES

Security is generally perceived as a huge issue for the cloud.

The survey found that while 58 percent of the general population
and 86 percent of senior business leaders are excited about the
potential of cloud computing, more than
90 percent of these same people are concerned about the
security, access and privacy of their own data in the cloud.

There is a possibility that a malicious user can penetrate the cloud
by impersonating a legitimate user, thereby infecting the entire cloud and
thus affecting many customers who are sharing the infected cloud.
Security Is the Major Challenge
 Some of the security problems faced by cloud computing:

 Data Integrity
When data is on a cloud, anyone from any location can access that data
from the cloud. The cloud does not differentiate sensitive data from
common data, thus enabling anyone to access sensitive data. Thus there is a
lack of data integrity in cloud computing.

 Data Theft
Most cloud vendors, instead of acquiring a server, try to lease a server
from other service providers, because leased servers are cost-effective and
flexible to operate.

The customer doesn't know about those things, so there is a high possibility that
the data can be stolen from the external server by a malicious user.
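One standard control against the tampering risk above is for the data owner to keep an integrity tag the cloud cannot forge, so modified data is detected on read-back. A minimal standard-library sketch (key and record names are illustrative):

```python
import hmac
import hashlib

# The owner keeps a secret key and stores an HMAC tag alongside each
# object; anyone who tampers with the stored data cannot produce a
# matching tag without the key.
def tag(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, stored_tag: bytes) -> bool:
    return hmac.compare_digest(tag(key, data), stored_tag)

key = b"owner-secret-key"
record = b"balance=100"
stored = tag(key, record)

print(verify(key, record, stored))            # True: data is untampered
print(verify(key, b"balance=999", stored))    # False: tampering detected
```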
Security on Vendor level
Vendor should make sure that the server is well secured from all the external
threats it may come across. A Cloud is good only when there is a good security
provided by the vendor to the customers.

Security on User level

Even though the vendor has provided a good security layer for the
customer, the customer should make sure that, through its own actions, there
is no loss or tampering of data for other users who are
using the same cloud.

Information Security
Security related to the information exchanged between different
hosts or between hosts and users. These issues pertain to secure
communication, authentication, and concerns about single sign-on
and delegation.
THERE MAY BE

Physical security:
- Physical location of data centers; protection of data centers
against disaster and intrusion.

 How safe is data from natural disasters?

- Data can be redundantly stored in multiple physical locations.
- Physical locations should be distributed across the world.

 Data Location
- When users use the cloud, they probably won't know exactly where
their data is hosted, or what country it will be stored in.
• Traditional Security

- These concerns involve computer and network intrusions or attacks that
will be made possible, or at least easier, by moving to the cloud.

Concerns in this category include:

Authentication and Authorization:
- The enterprise authentication and authorization framework does
not naturally extend into the cloud. How does a company meld its existing
framework to include cloud resources? Furthermore, how does an
enterprise merge cloud security data (if even available) with its own
security metrics and policies?

VM-level attacks.
- Potential vulnerabilities in the VM technology used by cloud
vendors are a potential problem in multi-tenant architectures.
• Third-party data control

Cloud computing facilitates storage of data at a remote site to maximize
resource utilization. As a result, it is critical that this data be protected and
only given to authorized individuals.

This essentially amounts to secure third-party publication of data that is
necessary for data outsourcing, as well as external publications.

The legal implications of data and applications being held by a third party
are complex and not well understood. There is also a potential lack of
control and transparency when a third party holds the data.

All this is prompting some companies to build private clouds to avoid these
issues and yet retain some of the advantages of cloud computing.
Operational security

Who has access?
- Access control is a key concern, because insider attacks
are a huge risk. A potential hacker is someone who has been
entrusted with approved access to the cloud.
- Anyone considering using the cloud needs to look at who is
managing their data and what types of controls are applied to these
individuals.

What type of training does the provider offer their customers?
- This is actually a rather important item, because people will
always be the weakest link in security. Knowing how your provider
trains their customers is an important item to review.
What is the long-term viability of the provider?
- How long has the cloud provider been in business, and
what is their track record? If they go out of business, what happens to
your data? Will your data be returned, and if so, in what format?

What is the disaster recovery/business continuity plan?
- While you may not know the physical location of your
services, they are physically located somewhere. All physical locations face
threats such as storms, natural disasters, and loss of power.
- In case of any of these events, how will the cloud
provider respond, and what guarantee of continued services are they
promising?
Cloud Computing Attacks

 As more companies move to cloud computing, look for hackers to follow.
Some of the potential attack vectors criminals may attempt include:

Denial of Service (DoS) attacks
- Some security professionals have argued that the cloud is more
vulnerable to DoS attacks, because it is shared by many users, which
makes DoS attacks much more damaging.
- Twitter suffered a devastating DoS attack during 2009.

Side Channel attacks
– An attacker could attempt to compromise the cloud by placing a
malicious virtual machine in close proximity to a target cloud server and
then launching a side channel attack.
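A common mitigation for the DoS exposure described above is per-client rate limiting, so one tenant's flood cannot exhaust the shared front end. A minimal token-bucket sketch (the rates and capacities are illustrative):

```python
import time

# Each client gets a budget of requests per second; requests beyond the
# budget are rejected instead of consuming shared cloud resources.
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=5)        # 5 requests/second per client
results = [bucket.allow() for _ in range(10)]   # a sudden burst of 10 requests
print(results.count(True))                      # 5: the rest are rejected
```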
Authentication attacks
Authentication is a weak point in hosted and virtual services and is
frequently targeted. There are many different ways to authenticate
users; for example, based on what a person knows, has, or is.
The mechanisms used to secure the authentication process and
the methods used are a frequent target of attackers.
• Man-in-the-middle cryptographic attacks
This attack is carried out when an attacker places himself between
two users. Anytime attackers can place themselves in the
communication’s path, there is the possibility that they can
intercept and modify communications.
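The standard defense against an attacker in the communication path is to verify the server's certificate chain and hostname before exchanging any data. A sketch using Python's standard `ssl` module (the strict settings shown are in fact the library defaults, made explicit for emphasis):

```python
import socket
import ssl

# An interceptor cannot present a certificate that both chains to a
# trusted CA and matches the target hostname, so a verified handshake
# fails instead of silently talking to the man in the middle.
def strict_tls_context() -> ssl.SSLContext:
    context = ssl.create_default_context()   # loads the system CA store
    context.check_hostname = True            # these are the defaults,
    context.verify_mode = ssl.CERT_REQUIRED  # shown here for emphasis
    return context

def connect_verified(host: str, port: int = 443) -> ssl.SSLSocket:
    sock = socket.create_connection((host, port), timeout=10)
    return strict_tls_context().wrap_socket(sock, server_hostname=host)
```

Disabling either check (as some applications do to "fix" certificate errors) reopens exactly the interception risk described above.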
AUTHENTICATION
In the cloud environment, authentication and access control are
more important than ever, since the cloud and all of its data are accessible
to anyone over the Internet. The TPM (see note) can easily provide
stronger authentication than usernames and passwords.

When a user is fired or reassigned, the customer's identity
management system can notify the cloud provider in real time so that
the user's cloud access can be modified or revoked within seconds.

If the fired user is logged into the cloud, they can be
immediately disconnected. Trusted Computing enables authentication of
client PCs and other devices, which is also critical to ensuring security in
cloud computing.
Key guidelines:

• Carefully plan the security and privacy aspects of cloud
computing solutions before engaging them.

• Planning helps to ensure that the computing environment is as
secure as possible, is in compliance with all relevant
organizational policies, and that data privacy is maintained.

• To maximize effectiveness and minimize costs, security and privacy must
be considered from the initial planning stage at the start of the systems
development life cycle.

• Attempting to address security after implementation and deployment is
not only much more difficult and expensive, but also more risky.
 Understand the cloud computing environment offered by the
cloud provider and ensure that a cloud computing solution
satisfies organizational security and privacy requirements.

Cloud providers are generally not aware of a specific organization's security
and privacy needs.

Organizations should require that any selected public cloud computing solution
is configured, deployed, and managed to meet their security, privacy, and other
requirements.

Critical data and applications may require an agency to undertake a
negotiated service agreement in order to use a public cloud.

Other alternatives include cloud computing environments with a more suitable
deployment model, such as a private cloud, which offers an organization
greater oversight and control over security and privacy.
 Ensure that the client-side computing environment meets
organizational security and privacy requirements for cloud
computing.

Cloud computing encompasses both a server and a client side. Maintaining
physical and logical security over clients can be
troublesome, especially with embedded mobile devices such as smartphones.
Built-in security mechanisms often go unused or can be overcome or
circumvented without difficulty by a knowledgeable party to gain control over
the device.

Because of their ubiquity, Web browsers are a key element for client-side
access to cloud computing services. Clients may also use small lightweight
applications that run on desktop and mobile devices to access services.

The various available plug-ins and extensions for Web browsers are notorious for
their security problems. Many browser add-ons also do not provide automatic
updates, increasing the persistence of any existing vulnerabilities.
Maintain accountability over the privacy and security of
data and applications implemented and deployed in public
cloud computing environments.

Organizations should employ appropriate security management practices and
controls over cloud computing. Strong management practices are essential for
operating and maintaining a secure cloud computing solution.

Establishing a level of confidence about a cloud service environment
depends on the ability of the cloud provider to provision the security
controls necessary to protect the organization's data and applications.
Server-Side Protection
Virtual servers and applications need to be secured both physically and
logically.

In line with organizational policies and procedures, hardening of the operating
system and applications should occur to produce virtual machine images
for deployment.

Care must also be taken to provision security for the virtualized
environments in which the images run.

Virtual firewalls can be used to isolate groups of virtual machines from
other hosted groups, such as production systems from development systems
or development systems from other cloud-resident systems.

Carefully managing virtual machine images is also important to avoid
accidentally deploying images under development or containing
vulnerabilities.
The following issues and precautions apply as a set of
recommendations for organizations to follow when planning,
reviewing, negotiating, or initiating a public cloud service
outsourcing arrangement.

Governance :
Extend organizational practices pertaining to the policies,
procedures, and standards used for application development and service
provisioning in the cloud, as well as the design, implementation, testing, and
monitoring of deployed or engaged services.
Put in place audit mechanisms and tools to ensure
organizational practices are followed throughout the system lifecycle.

Compliance :
Understand the various types of laws and regulations that impose
security and privacy obligations on the organization.
Review and assess the cloud provider’s offerings with respect to the
organizational requirements to be met and ensure that the contract terms
adequately meet the requirements.
Data Protection :
Evaluate the suitability of the cloud provider’s data
management solutions for the organizational data concerned.
Availability :
Ensure that during an intermediate or prolonged disruption or a
serious disaster, critical operations can be immediately resumed and that all
operations can be eventually reinstituted in a timely and organized manner.

Trust :
Incorporate mechanisms into the contract that allow visibility into
the security and privacy controls and processes employed by the cloud
provider, and their performance over time.
Institute a risk management program that is flexible enough to
adapt to the continuously evolving and shifting risk landscape.
Identity and Access Management:
Ensure that adequate safeguards are in place to
secure authentication, authorization, and other identity and access
management functions.
DATA ISSUES

Data Loss:
Data loss is a very serious problem in cloud computing. If the
vendor closes due to financial or legal problems, there will be a loss of
data for the customers. The customers won't be able to access that data,
because it is no longer available once the vendor shuts down.

Data Location:
When it comes to the location of the data, nothing is transparent;
even the customer doesn't know where his own data is located. The
vendor does not reveal where all the data is stored. The data won't
even necessarily be in the same country as the customer; it might be located
anywhere in the world.
• Data Lock-In :-
Software stacks have improved interoperability among
platforms, but the APIs for Cloud Computing itself are still essentially
proprietary, or at least have not been the subject of active standardization.
Thus, customers cannot easily extract their data and
programs from one site to run on another.
For example, an online storage service called The Linkup shut
down on August 8, 2008 after losing access to as much as 45% of customer
data [12]. The Linkup, in turn, had relied on the online storage service
Nirvanix to store customer data, and there was finger-pointing between
the two organizations as to why customer data was lost.
The obvious solution is to standardize the APIs so that a
SaaS developer could deploy services and data across multiple Cloud
Computing providers so that the failure of a single company would not
take all copies of customer data with it.
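The multi-provider deployment described above can be sketched as replicated writes behind one common interface; the `Provider` class is a stand-in for real, provider-specific SDK calls:

```python
# Write each object to several storage providers, so the failure of a
# single company does not take all copies of customer data with it.
class Provider:
    def __init__(self, name):
        self.name, self.store, self.alive = name, {}, True

    def put(self, key, value):
        if not self.alive:
            raise ConnectionError(self.name)
        self.store[key] = value

def replicated_put(providers, key, value, min_copies=2):
    written = 0
    for p in providers:
        try:
            p.put(key, value)
            written += 1
        except ConnectionError:
            continue                 # tolerate a failed provider
    if written < min_copies:
        raise RuntimeError("insufficient replicas written")
    return written

a, b, c = Provider("A"), Provider("B"), Provider("C")
b.alive = False                      # one provider goes out of business
print(replicated_put([a, b, c], "doc1", b"data"))   # 2: the data survives
```

This is exactly why standardized APIs matter: without a common interface, each `put` above would be a different proprietary call.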
Data segregation:
Data in the cloud is typically stored in a shared environment
whereby one customer's data is stored alongside another customer's data;
hence it is difficult to assure data segregation.

Customers should review the cloud vendor's architecture to
ensure proper data segregation is available and that data leak prevention
(DLP) measures are in place.

Nearly all service providers now support SSL (Secure
Sockets Layer) connections to ensure that the provider is
encrypting the data traversing the network.

When the service provider provides encryption for the
consumer's data, the consumer should be concerned with the protocols
and implementation of the encryption system, as these two factors dictate
the effectiveness of the encryption system.
Data Confidentiality and Auditability:
Current cloud offerings are essentially public (rather
than private) networks, exposing the system to more attacks.

Auditability could be added as an additional layer beyond the
reach of the virtualized guest OS (or virtualized application environment),
providing facilities arguably more secure than those built into the
applications themselves and centralizing the software responsibilities
related to confidentiality and auditability into a single logical layer.

Data integrity and data theft:
(covered earlier as part of the security issues)
Deletion of data:

• An essential point is that data that has to be deleted by
the user, because he or she no longer needs it or may no longer
process it for another reason, is also deleted by the provider, with no
more copies of the data remaining available.

• This can lead to problems, in particular in connection
with backups that are created by the provider, if these contain data
belonging to a number of customers and targeted deletion of
individual data items proves financially unreasonable or technically
infeasible.

• Data deletion is also of prime importance
when terminating the contract with the provider.
Restitution of data :-
Upon termination of the contract, the orderly return of
data to the user has to be ensured. This requires sufficiently long periods
of notice for the user to be able to take the necessary measures to ensure
the availability and constant further processing of data after termination
of the contract. The form in which the data is to be delivered to the user
by the provider must also be ascertained.

Service level agreements:

According to the purpose for which the data is
processed, it is important to agree on binding service levels for
availability and data recovery, if necessary safeguarded by
fixed penalties in the event of non-compliance with the
agreed service levels.
Topics covered till now…

Introduction
Types of services and architecture
Security issues
Data related issues

Topics to be covered…

Performance issues
Bandwidth related issues
Cloud interoperability
Energy related issues
Fault tolerance
Conclusion
WHY PERFORMANCE?
PERFORMANCE ISSUES

Poor application performance causes companies to lose customers,
reduces employee productivity, and reduces bottom-line revenue.

• Application crashes due to poor performance cost money and impact
morale. If applications cannot adequately perform during an increase in
traffic, businesses lose customers and revenue.

• Sluggish access to data, applications, and Web pages frustrates
employees and customers alike, and some performance problems and
bottlenecks can even cause application crashes and data losses.

• Positive employee productivity relies on solid and reliable application
performance to complete work accurately and quickly.
 In general the issues may be…

• Poor application performance or application hang-ups:
Usually the application is starved for RAM or CPU cycles, and
faster processors or more RAM are added.

• Slow access to applications and data:
Bandwidth is usually the cause, and the most common solution is to add
faster network connections.

• When companies or cloud vendors take the simplistic "more hardware solves
the problem" approach to cloud performance, they waste money.

• Hence, adding virtual machines may be a short-term solution to the problem, but
adding machines is a manual task. If a company experiences a sudden spike in
traffic, how quickly will the vendor notice the spike and assign a technician to
provision more resources to the account?
• Storage, CPU, memory, and network bandwidth all come into play at
various times during typical application use.

• For example, application switching places demands on the CPU as one
application is closed and flushed from the registers, and another application is
loaded. If these applications are large and complex, they put a greater demand
on the CPU.

• Serving files from the cloud to connected users stresses a number of
resources, including disk drives, drive controllers, and network
connections when transferring the data from the cloud to the user.

• Therefore, one of the most common and costly responses to scaling issues
by vendors is to over-provision customer installations to accommodate a
wide range of performance issues.
Any gain in system performance through hardware
and software throughput is defeated
when a system is swamped by multiple,
simultaneous demands.

• That 10 gigabit pipe slows considerably when it serves hundreds of requests
rather than a dozen. The only way to restore higher effective throughput and
performance in such a "swamped resources" scenario is
to scale – add more of the resource that is overloaded.
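The "swamped resources" effect above is easy to see with a fair-sharing back-of-the-envelope calculation:

```python
# The same 10-gigabit link, shared fairly, gives each request far less
# effective throughput as the number of simultaneous requests grows.
def per_request_gbps(link_gbps: float, concurrent_requests: int) -> float:
    return link_gbps / concurrent_requests

print(per_request_gbps(10, 12))    # ~0.83 Gbps each with a dozen requests
print(per_request_gbps(10, 500))   # 0.02 Gbps each with hundreds
```

Restoring per-request throughput means adding more of the overloaded resource, which is where scaling comes in.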

Horizontal and Vertical Scalability:

When increasing resources on the cloud to restore or
improve application performance, administrators can scale either
horizontally (out) or vertically (up), depending on the nature of the
resource constraint.

 VERTICAL SCALING:

Vertical scaling (up) entails adding more resources to the
same computing pool – for example, adding more RAM or disk to handle
an increased application load.
Vertical scaling can handle most sudden, temporary peaks
in application demand on cloud infrastructures, since these are not typically
CPU-intensive tasks.

 HORIZONTAL SCALING:

Horizontal scaling (out) requires the addition of more machines or
devices to the computing platform to handle the increased demand.
Sustained increases in demand, however, require horizontal scaling
and load balancing to restore and maintain peak performance.
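The two strategies above can be combined into a simple, illustrative (not provider-specific) autoscaling rule: scale up for momentary spikes, scale out when high load is sustained:

```python
# cpu_samples is a recent history of utilization readings (0.0 - 1.0).
def scaling_decision(cpu_samples, spike_threshold=0.8, sustained_len=3):
    recent = cpu_samples[-sustained_len:]
    if all(s > spike_threshold for s in recent):
        # Load has stayed high: add machines and rebalance (horizontal).
        return "scale out"
    if cpu_samples[-1] > spike_threshold:
        # Momentary peak: add RAM/CPU to existing instances (vertical).
        return "scale up"
    return "no action"

print(scaling_decision([0.3, 0.4, 0.9]))        # scale up
print(scaling_decision([0.85, 0.9, 0.95]))      # scale out
print(scaling_decision([0.3, 0.4, 0.5]))        # no action
```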
Administrative and Geographical Scalability

• While adding computing components or virtual resources is a logical
means to scale and improve performance, few companies realize that
the increase in resources may also necessitate an increase in
administration.

• Hence, companies with critical cloud applications may also consider
geographical scaling as a means to more widely distribute application load
demands, or as a way to move application access closer to dispersed
communities of users or customers.

• Geographical scaling may also be necessary in environments where it is
impractical to host all data or applications in one central location.
Bandwidth requirement

Security concerns have long dominated much of the cloud
conversation and caused many companies to deliberate about
getting started in the cloud.

But while the focus has been on cloud security, another potential
bottleneck is on the way: bandwidth requirements.

Bandwidth is rarely a problem for companies exploring the
cloud in a small way. But as they start expanding their cloud
footprint and running production-oriented applications, data
movement takes on a completely different scale.

As enterprises start to move real workloads out to the cloud, look for
bandwidth to become top of mind.
The problem arises when…
you have dozens of developers all trying to use cloud
resources;

when you put high-transaction processes in the cloud that need to
"talk back" to your data center;

when you are trying to move a lot of video or graphics between your
business users and the cloud.

Hence, network usage is about to get much more
demanding, and the traffic will need to flow without bottlenecks (or
saturating the network) for an organization's cloud strategy to work.
The scenario in most cloud is, at low load, App Engine will not
dedicate much server resource to an application, letting a single
server monitor the application.

When this server is subjected to an extremely heavy load, the single App
Engine server appears to connect to and at least partially service every
request that arrives for the application, regardless of number and size.

In the meantime, it appears to call for assistance from the other
servers in the cluster in order to distribute the load efficiently.
This would probably result in a delay in servicing a request for the
client.

According to the Network Performance Frustration Research Report by
Dimension Data, the Internet traffic that includes cloud services in 2015
will be at least 50 times larger than it was in 2006.
Thus network growth at these levels will require a dramatic
expansion of bandwidth, storage, and traffic management.
The proposed solutions are…

With the increase of cloud traffic, some cloud service providers


direct their client’s traffic to the geographically closest available
servers.
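Routing each client to the geographically closest available server can be sketched as a latency table lookup (the regions and round-trip times below are illustrative, not real measurements):

```python
# Pick the available region with the lowest measured round-trip time
# for this client; skip regions that are currently unavailable.
def closest_region(client_rtts_ms: dict, available: set) -> str:
    candidates = {r: rtt for r, rtt in client_rtts_ms.items() if r in available}
    return min(candidates, key=candidates.get)

rtts = {"us-east": 120, "eu-west": 25, "ap-south": 210}
print(closest_region(rtts, {"us-east", "eu-west", "ap-south"}))  # eu-west
print(closest_region(rtts, {"us-east", "ap-south"}))             # us-east (eu-west down)
```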

Use of High Speed Edge Routers:

Another requirement for traffic problem elimination is
installing high-performance, intelligent routers at the edge of the
network, through which operators can efficiently manage bandwidth
while delivering cloud services over cable infrastructure.

Edge routers focus on processing large numbers of cloud packets with
simplified per-packet logic.
To be effective, edge routers also need to
support advanced load balancing to
guarantee the optimization of network
infrastructure assets.
• There is also a proposed solution to use optical fiber to connect
all the nodes to improve bandwidth. But the problem is the increase
in cost.

• Another problem is that this is not going to happen globally
in the near future, since replacement of these technologies will cost
a lot and cannot be employed globally in one day.

• So, some cloud vendors have applied this technology only for
connecting cloud servers, and it has improved bandwidth to some extent.
Cloud interoperability

There may be situations where an organization or enterprise needs to be
able to work with multiple cloud providers.

Cloud interoperability and the ability to share various types of
information between clouds become important in such scenarios.

This broad area of cloud interoperability is sometimes known as
cloud federation.

"Cloud federation manages consistency and access controls
when two or more independent geographically distributed
clouds share either authentication, files, computing
resources, command and control, or access to storage
resources."
The following are some of the considerations in cloud federation:

1. An enterprise user wishing to access multiple cloud services would be
better served if there were just a single sign-on scheme. This scheme
may be implemented through a central trusted authentication server to
which all the cloud services interface.

2. An often-ignored concern for cloud federation is charging, or billing and reconciliation. Management and billing systems need to work together for cloud federation to be a viable option. This reality is underlined by the fact that clouds rely on per-use billing.
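The single sign-on scheme from consideration 1 can be sketched with signed tokens: a central authentication server signs the user's identity once, and any federated cloud service can verify the token without re-authenticating the user. This is only a minimal sketch; the secret, token format, and user names are illustrative, and real deployments use standards such as SAML or OpenID Connect.

```python
import hashlib
import hmac

# Illustrative shared secret; real systems would use asymmetric keys
# so services can verify tokens without being able to forge them.
SECRET = b"secret-known-to-the-central-auth-server"

def issue_token(user):
    """Central trusted authentication server signs the user identity."""
    sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def verify_token(token):
    """Any federated cloud service checks the signature instead of
    asking the user to log in again."""
    user, sig = token.split(":")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```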

Cloud federation is a relatively new area in cloud computing. It is


likely that standards bodies will first need to agree upon a set of
requirements before the service interfaces can be defined and
subsequently realized.
ENERGY RELATED ISSUES

Cloud computing is rapidly growing in importance as increasing numbers


of enterprises and individuals are shifting their workloads to cloud service
providers. Services offered by cloud providers such as Amazon,
Microsoft, IBM, and Google are implemented on thousands of servers
spread across multiple geographically distributed data centers.

The electricity costs involved in operating a large cloud


infrastructure of multiple data centers can be enormous. In fact,
cloud service providers often must pay for the peak power they draw,
as well as the energy they consume.
Lowering these high operating costs is one of the challenges facing cloud
service providers.

Moreover, there are other crucial problems that arise from high power
consumption. Insufficient or malfunctioning cooling system can lead to
overheating of the resources reducing system reliability and devices
lifetime.

In addition, high power consumption by the infrastructure leads to


substantial carbon dioxide (Co2) emissions contributing to the
greenhouse effect.
Solutions :-
Geographical distribution of the data centers exposes many opportunities for cost savings, since energy consumption and energy prices differ from site to site.

First, the data centers are often exposed to different electricity


markets, meaning that they pay different energy and peak power
prices.

Finally, the data centers may be located in areas with widely different
outside temperatures, which have an impact on the amount of cooling
energy used.
Given the different characteristics of the data centers’ energy
consumptions, energy prices, and peak power prices, it becomes clear
that we can lower operating costs by intelligently placing (distributing)
the computational load across the wide area.

A load-distribution policy can be used to distribute client load across multiple data centers so as to minimize electricity cost.

To reduce energy consumption and cost, each data center only keeps as
many servers active as necessary to service the current workload.
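A minimal sketch of such a load-distribution policy, under the assumption of hypothetical per-unit electricity prices and capacities, greedily sends each unit of load to the cheapest data center that still has spare capacity:

```python
# Hypothetical data centers: name -> (electricity price per load unit,
#                                     remaining server capacity in units).
CENTERS = {
    "dc-cheap": (0.05, 100),
    "dc-mid": (0.08, 100),
    "dc-expensive": (0.12, 100),
}

def place_load(units):
    """Assign load units to the cheapest data centers first,
    returning a placement plan {data_center: units_assigned}."""
    plan = {}
    for name, (price, cap) in sorted(CENTERS.items(), key=lambda kv: kv[1][0]):
        take = min(units, cap)
        if take:
            plan[name] = take
            units -= take
        if units == 0:
            break
    if units:
        raise RuntimeError("insufficient total capacity")
    return plan
```

A real policy would also weigh network latency to clients, peak-power charges, and cooling costs, but the greedy price-first placement captures the core idea of the slide.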
FAULT TOLERANCE

Fault Tolerance is one of the key issues of cloud computing. Fault


tolerance is concerned with all the techniques necessary to enable a system
to tolerate software faults.

These software faults may or may not manifest themselves during system operation, but when they do, software fault tolerance techniques should provide the software system with the mechanisms necessary to prevent system failures from occurring.

Fault tolerance techniques are employed during the procurement, or development, of the software. When a fault occurs, these techniques provide mechanisms to the software system to prevent system failure from occurring.
FAULT TOLERANCE POLICIES :

Fault tolerance (FT) policies can typically be divided into two categories: reactive fault tolerance policies and proactive fault tolerance policies.

While reactive fault tolerance policies reduce the effect of failures on application execution when a failure actually occurs, proactive fault tolerance policies keep applications alive by avoiding failures through preventative measures.

The principle of proactive action is to keep clouds free from faults, errors, and failures by predicting them and proactively replacing the suspected components with other correctly working components that provide the same function.
Some approaches include:

- Micro-reboot techniques
- Filtering malicious input

Another approach is HAProxy.

HAProxy stands for High Availability Proxy and is used by companies for load balancing and server failover in the cloud. Companies do not want their website to go down, or worse, for users to notice the site is down.

With HAProxy there is typically a load balancer that distributes the load among a pool of web servers.
Whenever a server goes down it is taken out of the pool until it is once
again ready to handle requests.

HAProxy has the ability to perform this task by doing periodic health checks on all the servers in a cluster. Even if one of the application servers is not working, users will still have access to the application.

HAProxy will properly handle requests from users by redirecting them to another server, giving the impression that all is well.

It monitors all traffic on the network as well as the health of the different servers; whenever any server fails, it redirects user requests to another server and informs the administrator about the fault.
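The failover behaviour described above, i.e. periodic health checks plus skipping failed servers during round-robin balancing, can be simulated in miniature. This is a sketch of the technique, not HAProxy itself; the server names and the `is_healthy` probe are stand-ins for real HTTP health checks.

```python
import itertools

SERVERS = ["web-1", "web-2", "web-3"]
FAILED = {"web-2"}  # pretend the last health checks marked web-2 as down

def is_healthy(server):
    """Stand-in for a periodic health check (e.g. an HTTP probe)."""
    return server not in FAILED

_rotation = itertools.cycle(SERVERS)

def route_request():
    """Round-robin over the pool, skipping servers that failed their
    last health check, as a load balancer would."""
    for _ in range(len(SERVERS)):
        server = next(_rotation)
        if is_healthy(server):
            return server
    raise RuntimeError("no healthy servers available")
```

Because failed servers are skipped transparently, users keep reaching a working server and never notice that part of the pool is down, which is exactly the property the slide describes.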
Conclusion :

• Cloud computing is a technology which enables the user to access resources using front-end machines; there is no need to install any software.

• It helps convert CapEx into OpEx, and it provides many services such as PaaS, IaaS, and SaaS. But every technology has pros and cons, and cloud computing also has various issues associated with it.

• Many issues and their solutions are highlighted in this topic, such as security issues, privacy issues, data-related issues, and energy-related issues. We use services like Google Docs and Gmail without noticing such issues. Hence I conclude that these issues mainly come into consideration for large companies; they do not affect a single user nearly as much.

Some of the issues, like bandwidth problems, will not last long, since technology is improving and speed will no longer be a limitation. So there is good scope in this field.
