

Chapter 1

Technological advancement usually makes life easier and more convenient for mankind, but it has not always been positive for Mother Nature, our planet. The last few decades have seen enormous technological development along with a leapfrog increase in consumption as the two most populous countries of the world join the bandwagon. Earlier, with consumption confined to a few select countries, emissions and other side effects, though harmful, amounted to slow poisoning. But the needs of this new club of users from China, India, Brazil and many other so-called emerging economies are overwhelming Mother Nature, particularly her capacity to absorb the overall Greenhouse Gas (GHG) emissions. This has in essence accelerated the obvious side effects: an increase in average global temperature, irregular weather patterns, changing wind patterns, rising sea levels, and impacts on the plant and animal kingdoms. The impact on our planet has brought us to a junction where the future needs to be considered carefully, to sustain and make our planet green again and so prevent a catastrophe.

While Information and Communication Technology (ICT) is one of the greenest technologies, at a global scale of penetration and perpetual usage its side effects are not negligible. The environmental impact starts with the choice of materials for a given technology or product, continues through its usage, and ends with the decommissioning of the product at the end of its useful lifetime. Nature is further affected by the policies, rules and regulations in place, and by their enforcement, to check the harm done by a given product at all stages of its lifecycle.

1.1 Impact of ICT on GHG Emissions

In this section we present a brief overview of GHG emissions from ICT. We first present an overall picture of emissions and in subsequent sub-sections give further details regarding emissions from the telecom sector, data centers and end-user devices respectively.

1.1.1 Overall Impact :-



The ICT sector has grown at an extraordinary pace over the last two decades, transforming society and the economy. ICT impacts business, lifestyle and family relationships like never before. As the ICT sector grows, GHG emissions from it will continue to grow. Below are a few examples of the explosive growth of ICT:

The number of computers connected to the Internet was expected to cross 3 billion by 2011. According to some projections, by 2020 the number of devices connected to the Internet will be around 50 billion. Today there are more than 1.5 billion Internet users. As more and more users from developing nations come online, this number will increase significantly over the years. Many of these users will access the Internet via their mobile phones.

Global mobile phone penetration is already approaching 50%, while the number of mobile phone users in India as of May 2010 had already crossed 617 million, with annual growth of close to 50%.

For most economies, the share of Gross Domestic Product (GDP) attributable to the ICT sector is already quite significant and is increasing each year. In India, the ICT sector contributed about 5.8% of national GDP in Fiscal Year 2009. The share of GDP attributable to the ICT sector in developed economies such as the United Kingdom is close to 7%. As of 2007, the ICT sector was responsible for about 2% of total carbon emissions, at over 0.8 billion tonnes of CO2 equivalent. With the kind of growth happening in the ICT sector, total emissions from this sector are estimated to rise to about 1.4 billion tonnes by 2020. The segment-wise contribution towards the total carbon footprint of the ICT sector is shown in Figure 1.



Fig 1 Global ICT footprint by sector
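The growth implied by the figures above (roughly 0.8 billion tonnes of CO2e in 2007 rising to about 1.4 billion tonnes by 2020) can be expressed as a compound annual growth rate. A minimal sketch of that arithmetic:

```python
# Implied compound annual growth rate (CAGR) of ICT-sector emissions, using
# the figures quoted above: ~0.8 Gt CO2e in 2007 rising to ~1.4 Gt by 2020.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over the given span."""
    return (end / start) ** (1 / years) - 1

growth = cagr(0.8, 1.4, 2020 - 2007)
print(f"Implied CAGR of ICT emissions: {growth:.1%}")  # roughly 4.4% per year
```

So the projection corresponds to emissions growing at roughly 4–5% per year, faster than most other sectors over the same period.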

1.1.2 Impact of Telecom Infrastructure and Devices :-

As of 2007, 37% of all ICT emissions were due to telecom infrastructure and devices. This includes emissions caused by mobile network infrastructure, mobile devices, and fixed broadband and narrowband devices. As more and more people get access to mobile telephony, the total emissions from mobile devices and infrastructure will increase correspondingly. By the year 2020, as much as 25% of the total ICT carbon footprint will be from telecom devices and infrastructure, amounting to almost 349 million tonnes of CO2 equivalent. Figure 2 shows the breakup of this figure into various segments. We can see that over half of the contribution would come from mobile networks alone. Mobile network equipment is operated round the clock, 365 days a year. As the number of mobile subscribers increases, more cell sites are added to the network and the energy bill for maintaining the network continues to soar. Almost 80% of a mobile operator's energy consumption is due to radio base station equipment; the remaining energy is consumed in core networks. Sub-optimal network design leads to significant inefficiencies in energy consumption and therefore in carbon footprint. Within radio base station sites and equipment, significant opportunities exist to improve the energy efficiency of radio equipment, signal processing and associated circuitry, power amplifiers, power supply and air conditioning.

Fig 2 CO2 footprint of telecom devices and infrastructure by 2020 (Mt: Million tonnes).
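The 80%/20% split between radio access and core network described above can be illustrated with a back-of-the-envelope calculation. The 100 GWh annual total and the 20,000-site count below are purely hypothetical figures, chosen only to show the arithmetic:

```python
# Back-of-the-envelope split of a mobile operator's energy use, following the
# 80% radio-access / 20% core-network figures quoted above. The 100 GWh
# annual total and 20,000-site count are hypothetical, for illustration only.
TOTAL_GWH = 100.0
RAN_SHARE = 0.80
SITES = 20_000

ran_gwh = TOTAL_GWH * RAN_SHARE
core_gwh = TOTAL_GWH - ran_gwh
avg_site_kw = ran_gwh * 1e6 / (SITES * 8760)   # GWh -> kWh, spread over a year

print(f"Radio access: {ran_gwh:.0f} GWh, core: {core_gwh:.0f} GWh")
print(f"Average continuous draw per cell site: {avg_site_kw:.2f} kW")
```

Even under these assumed numbers, the point stands: almost all of the efficiency leverage lies in the base station sites, which run continuously.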

1.1.3 Impact of Data Centers :-

Data centers are the fastest growing segment of ICT and are major contributors to carbon emissions. Rapid growth in the use of the Internet, web applications, online services, Voice over Internet Protocol (VoIP), IP Television (IPTV) and enterprise Information Technology (IT) has resulted in a proliferation of data centers. Web service providers are building cavernous warehouse-scale data centers to meet their growing needs. As of 2007, 14% of all ICT emissions were caused by data centers. This includes both corporate data centers and the Internet data centers where large-scale consumer-facing web applications, such as search engines and social networking sites, are hosted. Roughly 50% of the emissions due to data centers arise from power system losses and cooling loads. Of the remainder, the bulk of the emissions are caused by the energy consumed to power the low-cost commodity servers that now dominate most data centers.



Fig 3 Power consumption by equipment in a data center
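The observation that roughly half of a data center's consumption goes to power losses and cooling corresponds to a Power Usage Effectiveness (PUE) of about 2.0. A minimal sketch of that metric, with illustrative energy figures:

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# If ~50% of a data center's consumption goes to cooling and power-system
# losses (as stated above), only half reaches the IT load, giving a PUE of ~2.

def pue(it_energy_kwh: float, overhead_kwh: float) -> float:
    """PUE from IT load and facility overhead (cooling, UPS losses, etc.)."""
    return (it_energy_kwh + overhead_kwh) / it_energy_kwh

# Illustrative facility: 500 MWh to servers, 500 MWh to cooling and losses.
print(f"PUE = {pue(500.0, 500.0):.1f}")   # -> PUE = 2.0
```

Driving the PUE from 2.0 towards 1.0 therefore attacks the half of data-center emissions that does no useful computing at all.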

1.1.4 Impact of End User Devices :-

By far the largest contribution to CO2 emissions in the ICT sector comes from end-user devices such as Personal Computers (PCs) and peripherals [32]. This is because there are already more than a billion PCs worldwide, and the number is expected to touch 4 billion by 2020. PCs alone will be the single largest contributor to ICT emissions, responsible for almost 42% of all ICT emissions. By 2020, two major technological changes are expected to take place:

(1) Desktop PCs will be largely replaced by the more energy-efficient laptops;

(2) Almost all Cathode Ray Tube displays will be replaced by energy-efficient Liquid Crystal Displays.

Both will bring significant efficiencies; however, the increase in the number of PCs means that the total CO2 footprint in 2020 will be three times that of 2002.

Fig 4 Power consumption by end-user devices

The Internet is often represented as a cloud, and the term "cloud computing" arises from that analogy. Accenture defines cloud computing as the dynamic provisioning of IT capabilities (hardware, software, or services) from third parties over a network. McKinsey says that clouds are hardware-based services offering compute, network and storage capacity, where hardware management is highly abstracted from the buyer, buyers incur infrastructure costs as variable OPEX (operating expenditures), and infrastructure capacity is highly elastic (up or down). The cloud model differs from traditional outsourcing in that customers do not hand over their own IT resources to be managed. Instead they plug into the cloud, treating it as they would an internal data center or computer providing the same functions.

The increasing availability of high-speed Internet and corporate IP connections is enabling the delivery of new network-based services. While Internet-based mail services have been operating for many years, service offerings have recently expanded to include network-based storage and network-based computing. These new services are being offered to both corporate and individual end users. Services of this type have been generically called cloud computing services. Cloud computing involves the provision, by a service provider, of large pools of high-performance computing resources and high-capacity storage devices that are shared among end users as required. There are many cloud service models but, generally, end users subscribing to the service have their data hosted by the service provider and have computing resources allocated on demand from the pool. The service provider's offering may also extend to the software applications required by the end user. To be successful, the cloud service model also requires a high-speed network connecting the end user to the service provider's infrastructure.
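The on-demand allocation from a shared pool described above can be sketched as a toy resource pool. The class and method names below are illustrative, not any real provider's API:

```python
# Minimal sketch of on-demand allocation from a shared resource pool, as
# described above. Names are illustrative, not any real provider's API.

class ResourcePool:
    def __init__(self, total_cpus: int):
        self.total = total_cpus
        self.allocations: dict[str, int] = {}   # user -> CPUs currently held

    def available(self) -> int:
        return self.total - sum(self.allocations.values())

    def allocate(self, user: str, cpus: int) -> bool:
        """Grant CPUs on demand if the shared pool can satisfy the request."""
        if cpus > self.available():
            return False                        # pool exhausted; request denied
        self.allocations[user] = self.allocations.get(user, 0) + cpus
        return True

    def release(self, user: str) -> None:
        """Return a user's resources to the pool with no provider interaction."""
        self.allocations.pop(user, None)

pool = ResourcePool(total_cpus=64)
assert pool.allocate("alice", 16) and pool.allocate("bob", 40)
print("free CPUs:", pool.available())           # -> free CPUs: 8
pool.release("alice")
print("free CPUs:", pool.available())           # -> free CPUs: 24
```

The key property is that capacity freed by one subscriber is immediately available to another, which is what allows a shared pool to be smaller than the sum of every user's peak demand.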

In cloud computing, end users share a large, centrally managed pool of storage and computing resources rather than owning and managing their own systems [5]. There are many definitions of cloud computing, and discussion within the IT industry continues over the possible services that will be offered in the future. The broad scope of cloud computing is succinctly summarized as:

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.



Often using existing data centers as a basis, cloud service providers invest in the necessary infrastructure and management systems and, in return, receive a time-based or usage-based fee from end users. The end user in turn sees convenience benefits from having data and services available from any location, from having data backups centrally managed, and from the availability of increased capacity when needed. One of the most important points is that, for many users, it averts the need for a large one-off investment in hardware sized to suit maximum demand and requiring upgrading every few years. Further benefits flow from the centralized maintenance of software packages, data backups, and balancing the volume of user demand across multiple servers or multiple data center sites. A number of organizations are already hosting and/or offering cloud computing services.

But while its financial benefits have been widely discussed, the shift in energy usage under a cloud computing model has received little attention. Through the use of large shared servers and storage units, cloud computing can offer energy savings in the provision of computing and storage services, particularly if the end user migrates to a computer or terminal of lower capability and lower energy consumption. At the same time, cloud computing increases network traffic and the associated network energy consumption. Here, therefore, we explore the balance between server energy consumption, network energy consumption, and end-user energy consumption, to present a fuller assessment of the benefits of cloud computing. The issue of energy consumption in information technology equipment has been receiving increasing attention in recent years, and there is growing recognition of the need to manage energy consumption across the entire ICT sector. That is why we need to discuss Green Cloud Computing, in order to make cloud computing more eco-efficient and green.
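The balance described above between server, network and client energy can be framed as a simple per-task comparison. All the power figures below are assumed, illustrative values chosen only to show how the three components trade off; they are not measurements:

```python
# Per-task energy comparison: local desktop vs. thin client + cloud.
# All power figures are assumed, illustrative values (watts); they are not
# measurements, only a sketch of how the three components trade off.

def task_energy_wh(client_w: float, network_w: float, server_w: float,
                   hours: float) -> float:
    """Total energy (Wh) consumed across client, network and server shares."""
    return (client_w + network_w + server_w) * hours

local = task_energy_wh(client_w=150, network_w=0, server_w=0, hours=1.0)
cloud = task_energy_wh(client_w=30, network_w=15, server_w=40, hours=1.0)
print(f"local: {local:.0f} Wh, cloud: {cloud:.0f} Wh")
# Cloud wins under these assumptions, but a larger network share (heavy
# traffic, long routes) can erase or reverse the saving.
```

This is exactly the trade-off green cloud computing must manage: the saving from shared servers and thin clients only holds while the added network energy stays small.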



Fig 5 Power consumption by some network devices used in the cloud



Chapter 2
Literature Survey

Cloud computing has been defined by the National Institute of Standards and Technology as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or cloud provider interaction. Cloud computing can be considered a new computing paradigm insofar as it allows the utilization of a computing infrastructure at one or more levels of abstraction, as an on-demand service made available over the Internet or another computer network. Because of the implications of greater flexibility and availability at lower cost, cloud computing has been receiving a good deal of attention lately.

Cloud computing services benefit from economies of scale achieved through versatile
use of resources, specialization, and other practicable efficiencies. However, cloud
computing is an emerging form of distributed computing that is still in its infancy. The term
itself is often used today with a range of meanings and interpretations. Much of what has
been written about cloud computing is definitional, aimed at identifying important paradigms
of use and providing a general taxonomy for conceptualizing important facets of service.

There are three main types (deployment models) of cloud computing:

1. Public Cloud Computing

2. Private Cloud Computing

3. Community/Hybrid Cloud Computing

2.1 Public Cloud Computing:

Public cloud computing is one of several deployment models that have been defined. A public cloud is one in which the infrastructure and other computational resources that it comprises are made available to the general public over the Internet. It is owned by a cloud provider selling cloud services and, by definition, is external to an organization. In the case of a public cloud, end users need not worry about any maintenance jobs; they simply have to put the data they wish to access on the centralized server provided by the service provider. Public cloud computing is extremely suitable for end users on a low budget and for those who wish to have access to their data from anywhere in the world.

2.2 Private Cloud Computing:

The second type of cloud computing service is private cloud computing. A private cloud is one in which the computing environment is operated exclusively for an organization. It may be managed either by the organization or by a third party, and may be hosted within the organization's data center or outside of it. A private cloud gives the organization greater control over the infrastructure and computational resources than does a public cloud. A private cloud is hosted within an enterprise, behind its firewall, and is intended to be used only by that enterprise. In such cases, the enterprise invests in and manages its own cloud infrastructure but gains benefits from pooling a smaller number of centrally maintained high-performance computing and storage resources, instead of deploying large numbers of lower-performance systems. Further benefits flow from the centralized maintenance of software packages, data backups, and balancing the volume of user demand across multiple servers or multiple data center sites.

A private cloud computing model is suitable for a company which has to maintain a big network or serve many users. By adopting a private cloud model, a company can greatly reduce the expense of maintaining its network: the cost of maintaining databases, software packages, different log files, etc. can be drastically reduced. For further savings, the company can opt to buy systems with low configurations without compromising work efficiency, because all the storage and computational jobs are done on the centralized server rather than on individual systems.

2.3 Community/Hybrid Cloud Computing:

The other deployment models, which fall between public and private clouds, are community clouds and hybrid clouds. A community cloud is somewhat similar to a private cloud, but the infrastructure and computational resources are shared by several organizations that have common privacy, security, and regulatory considerations, rather than serving a single organization exclusively. A hybrid cloud is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables interoperability.
Large companies can afford to build and expand their own data centers, but small- to medium-sized enterprises often choose to house their IT infrastructure in someone else's facility. A colocation center is a type of data center where multiple customers locate network, server and storage assets, and interconnect to a variety of telecommunications and other network service providers, with a minimum of cost and complexity.

Taking into consideration the opportunities, benefits and developments in cloud computing, many big names in the IT industry have started working on finding and developing applications and software to support cloud computing. The list consists of big players like Amazon, Microsoft, Apple and Google; at present Amazon is heading the list. Microsoft has developed Windows Azure for cloud computing, Google is working on Chromium OS, which is likely to be released by the end of 2011, and Apple is expected to come up with a cloud OS by next year. There is thus a lot of revolution going on in the field of cloud computing, which makes it one of the hottest topics in the IT industry.



Chapter 3
Cloud Service Models and Security Issues in the Cloud

Cloud Computing is a broad and ill-defined term, but in essence it amounts to virtualised
third-party hosting. That is, rather than renting part or all of an actual physical server from a
hosting company, you rent a certain amount of server resources. Your server runs inside a
virtual container which can be moved from one physical server to another without
interruption of service. Such a container is also capable of spanning multiple physical
machines, giving it potentially limitless resources.

A web server typically has three tiers to it:

1. The physical infrastructure

2. The operating system platform

3. The web application software being run

Depending on which of these we choose to manage ourselves, we have different cloud computing models. Let us discuss them one by one.

3.1 Models of Cloud Computing:-

3.1.1 Software as a Service(SaaS) Model:-
Consumer software is traditionally purchased with a fixed upfront payment for a license and a copy of the software on appropriate media. This software license typically only permits the user to install the software on one computer. When a major update is applied to the software and a new version is released, users are required to make a further payment to use the new version. Users can continue to use an older version, but once a new version has been released, support for older versions is often significantly reduced and updates are infrequent.

With the ubiquitous availability of broadband Internet, software developers are increasingly moving towards providing software as a service. In this model, clients are charged a monthly or yearly fee for access to the latest version of the software. Additionally, the software is hosted in the cloud and all computation is performed in the cloud; the client's PC is only used to transmit commands and receive results. Typically, users are free to use any computer connected to the Internet. However, at any time, only a fixed number of instances of the software are permitted to run per user. One example of software as a service is Google Docs. When a user exclusively uses network- or Internet-based software services, the concept is similar to a thin client model, where each user's client computer functions primarily as a network terminal, performing input, output, and display tasks, while data are stored and processed on a central server. Thin clients were popular in office environments prior to the widespread use of PCs. In this scenario, data storage and processing are always performed in the cloud, and we are thus able to significantly reduce the functionality, and consequently the power consumption, of the client's PC.

“Software as a Service” has of late become something of a marketing buzzword and is inevitably misused, but there seem to be two types of offering that fall into this category. The first is virtualisation of internal systems: a large company's CRM or ERP is moved from intranet servers into the cloud and hosted on shared servers. This applies both to custom-written systems and to commercial applications like Microsoft SharePoint. A number of companies specialise in single-application hosting of this type, and online directories of such providers are available.

The second type is the simplest form of cloud computing: effectively virtualised shared-server space into which you put anything you want. This may be an internal business application, but it could equally be a public-facing website. This type works almost exactly the same way as regular shared-server hosting. The host gives you access to a virtual directory to which you can upload a single website's files, but any changes to the platform or environment have to be handled by the hosting company. For multiple web applications, you need multiple SaaS virtual sites, which are charged separately but can all be managed through a single web console.

So how is SaaS better than an ordinary shared server? With SaaS, the website exists in a virtual container which can be moved from physical server to physical server without interruption of service. This makes it much easier to transfer the site to more powerful hardware as the need arises, but all of the other limitations of shared servers still apply: very limited configurability, no third-party software or background services, and so on.

3.1.2 Storage as a Service Model:-

Through storage as a service, users can outsource their data storage requirements to the cloud. All processing is performed on the user's PC, which may have only a small solid-state drive (e.g., flash-based solid-state storage), and the user's primary data storage is in the cloud. Data files may include documents, photographs, or videos. Files stored in the cloud can be accessed from any computer with an Internet connection at any time. However, to modify a file, the user must first download it, edit it on the local PC, and then upload the modified file back to the cloud. The cloud service provider ensures there is sufficient free space in the cloud and also manages the backup of data. In addition, after a user uploads a file to the cloud, the user can grant read and/or modification privileges to other users. One example of storage as a service is the Amazon Simple Storage Service.
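The download-edit-upload cycle and the granting of read access described above can be sketched with a toy in-memory storage service. The `CloudStore` class and its methods are hypothetical, not the API of Amazon S3 or any real provider:

```python
# Toy in-memory sketch of the storage-as-a-service workflow described above:
# download a file, edit it locally, upload the result, and grant read access.
# The CloudStore class is hypothetical, not any real provider's API.

class CloudStore:
    def __init__(self):
        self._files: dict[str, bytes] = {}
        self._readers: dict[str, set[str]] = {}   # path -> users with read access

    def upload(self, owner: str, path: str, data: bytes) -> None:
        self._files[path] = data
        self._readers.setdefault(path, set()).add(owner)

    def download(self, user: str, path: str) -> bytes:
        if user not in self._readers.get(path, set()):
            raise PermissionError(f"{user} may not read {path}")
        return self._files[path]

    def grant_read(self, path: str, user: str) -> None:
        self._readers.setdefault(path, set()).add(user)

store = CloudStore()
store.upload("alice", "notes.txt", b"draft v1")
text = store.download("alice", "notes.txt")      # 1. download the file
text = text.replace(b"v1", b"v2")                # 2. edit it on the local PC
store.upload("alice", "notes.txt", text)         # 3. upload the modified file
store.grant_read("notes.txt", "bob")             # 4. share it with another user
print(store.download("bob", "notes.txt"))        # -> b'draft v2'
```

Note that the whole file travels over the network twice per edit, which is why storage services add more network traffic (and network energy) than a purely local disk.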

3.1.3 Processing as a Service Model:-

Processing as a service provides users with the resources of a powerful server for specific large computational tasks. The majority of tasks, which are not computationally demanding, are carried out on the user's PC. More demanding computing tasks are uploaded to the cloud, processed there, and the results are returned to the user. As with the storage service, the processing service can be accessed from any computer connected to the Internet. One example of processing as a service is the Amazon Elastic Compute Cloud service. When utilizing a processing service, the user's PC still performs many small tasks and is consequently required to be more powerful than the thin client considered under the software service. However, the user's computer is not used for large, computationally intensive tasks, so there is scope to reduce its cost and energy consumption, relative to a standard consumer PC, by using a less powerful machine.



Fig 6 Comparison of cloud service models

3.1.4 Infrastructure as a Service (IaaS) Model:-

For all the convenience of hosted web applications, there is a lot to be said for the control that comes with having access to the entire server, from the operating system on up. In cloud computing, this is possible through Virtual Private Servers (VPS). With a Virtual Private Server, all that is abstracted away is the hardware. Logging in to a VPS is practically indistinguishable from logging in to a remote server via Terminal Services. If you have a dedicated server in a hosting centre somewhere, moving to a VPS should leave you with an almost identical experience. Virtual server technology is not new. Developers have been using it for years to set up test platforms for new applications, or to subdivide physical machines into multiple logical servers. Using a virtual machine to host a web application was uncommon, however, because the virtual server would by definition have to be less powerful than the physical server on which it was running. What makes IaaS interesting is that this limit no longer applies: a VPS can now be shared across multiple physical servers, limited only by network latency.

VPS hosting is available from a number of companies, including WebFusion, CloudNine, Joyent, Nirvanix and ParaScale. The biggest player in the market, though, is Amazon.

VPS machine images can be saved as files which can be deployed as virtual servers later. This means that one effectively has a "clean install" of the required environment available at all times as a rollback position in the event of a catastrophic failure. Moreover, the state of the running virtual server can be "saved" at any time, providing a very convenient backup procedure, albeit one that would be difficult to test without incurring charges from the provider (the server image would be in a proprietary format, so the only way to test it would be to pay the provider to launch an instance of it). The basic cost of a VPS appears to be slightly lower than that of an equivalent dedicated platform. As with all cloud solutions, cost scales according to use, so costs come down if traffic levels fall, which is very useful for anyone whose revenue stream depends on traffic.




One potential problem is that Virtual Private Servers are typically incapable of passing a PCI security audit, making them unsuitable for high-security functions like handling credit card information. One could not, therefore, use a VPS to host an e-commerce site. In addition, there is the issue of loss of control. Providers like Amazon reserve the right to shut off the server without prior notice if it is behaving in a way that leads them to believe it has been compromised by hackers, or if they think it is being used for unethical activities like spamming. This means that if you were to end up on a blacklist by mistake, the consequences would be worse than with a non-cloud server.

3.1.5 Platform as a Service (PaaS) Model:-

Platform as a Service is a compromise between SaaS and IaaS. The real difference is that instead of having space on a Windows-based server, you use a specifically written cloud platform, which you access through a web interface, and you build your site using tools specific to that platform. This platform is considerably more configurable than a SaaS site but, at the same time, doesn't require all of the maintenance and administration of a full IaaS VPS. The major PaaS platforms also compare favourably to SaaS in terms of the extra services they offer: a variety of vendor-supplied tools and SDKs to assist in development and maintenance. PaaS is the newest category of cloud hosting options, and is likely to be the most fiercely competitive market, with Microsoft and Google going head-to-head.

Any PaaS solution comes with the advantage of minimising the developer's maintenance time while still providing a considerable amount of customisation and configuration. There is also an argument to be made that, since the two biggest players in the industry, Microsoft and Google, are investing so heavily in PaaS cloud computing, there is a certain inevitability to its emergence as a standard. This means that the odds of useful third-party tools being developed for PaaS systems are very high.

Vendor lock-in is always a concern when it comes to PaaS. One would have to write applications tailored to the chosen platform, and migrating an application off that platform onto a standard dedicated server would be a problem. As with IaaS, full compliance with security standards like PCI would be a problem. All PaaS platforms share the disadvantage that there is a limit to the options available in terms of third-party applications. This is mainly caused by the relative novelty of the platforms, which means very few developers have released compatible versions of their software at this stage. Both Microsoft Azure and the Google App Engine require developers to port their software to the new platform. It will take some time before the full range of software currently available on a dedicated Windows server becomes available on PaaS.

These are a few different models of cloud computing that come under green cloud computing. Basically, every green cloud model is a cloud computing model, but not every cloud computing model is a green cloud computing model.

3.2 Security Issues in Cloud Computing :-

Cloud computing can and does mean different things to different people. The common characteristics most definitions share are on-demand scalability of highly available and reliable pooled computing resources, secure access to metered services from nearly anywhere, and the dislocation of data from inside to outside the organization. While aspects of these characteristics have been realized to a certain extent, cloud computing remains a work in progress. The emergence of cloud computing promises to have far-reaching effects on the systems and networks of federal agencies and other organizations. Many of the features that make cloud computing attractive, however, can also be at odds with traditional security models and controls.

While discussing cloud computing further, we therefore need to consider very important and crucial aspects such as its security issues. A few of these issues are discussed in this report as follows:

3.2.1 Trust
Under the cloud computing paradigm, an organization relinquishes direct control over many aspects of security and, in doing so, confers an unprecedented level of trust onto the cloud provider.



A) Insider Access.
Data processed or stored outside the confines of an organization, its firewall, and its other security controls brings with it an inherent level of risk. The insider security threat is a well-known issue for most organizations and, despite the name, applies equally to outsourced cloud services. Insider threats go beyond those posed by current or former employees to include contractors, organizational affiliates, and other parties that have received access to an organization's networks, systems, and data to carry out or facilitate operations. Incidents may involve various types of fraud, sabotage of information resources, and theft of confidential information. Incidents may also be caused unintentionally, for instance by a bank employee sending sensitive customer information to the wrong Google mail account. Moving data and applications to a cloud computing environment operated by a cloud provider expands the insider security risk not only to the cloud provider's staff, but also potentially to other customers using the service. For example, a denial-of-service attack launched by a malicious insider was demonstrated against a well-known IaaS cloud. The attack involved a cloud subscriber creating an initial 20 accounts and launching virtual machine instances for each, then using those accounts to create an additional 20 accounts and machine instances each, in an iterative fashion, exponentially growing and consuming resources beyond set limits.
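The iterative account-creation attack described above grows geometrically: if every account spawns 20 new accounts per round, the count is 20 to the power of the round number. A quick sketch of how fast that escalates:

```python
# Growth of the insider denial-of-service attack described above: each round,
# every account creates 20 new accounts, so after k rounds there are 20**k
# accounts (starting from the initial 20 at round 1).

def accounts_after(rounds: int, fanout: int = 20) -> int:
    """Number of accounts (each running a VM instance) after k rounds."""
    return fanout ** rounds

for k in range(1, 5):
    print(f"round {k}: {accounts_after(k):,} accounts")
# Grows from 20 at round 1 to 160,000 at round 4.
```

Only four rounds are needed to go from 20 accounts to 160,000 VM instances, which is why per-customer resource quotas alone are not enough to contain such an attack.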

B) Composite Services.
Cloud services themselves can be composed through nesting and layering with other
cloud services. For example, a SaaS provider could build its services upon the services of a
PaaS or IaaS cloud. The level of availability of the SaaS cloud would then depend on the
availability of those services. Cloud services that use third party cloud providers to outsource
or subcontract some of their services should raise concerns, including the scope of control
over the third party, the responsibilities involved, and the remedies and recourse available
should problems occur. Trust is often not transitive, requiring that third-party arrangements
be disclosed in advance of reaching an agreement with the cloud provider, and that the terms
of these arrangements are maintained throughout the agreement or until sufficient
notification can be given of any anticipated changes. Liability and performance guarantees
can become a serious issue with composite cloud services. For example, a consumer storage-
based social networking service closed down after losing access to a significant amount of



data from 20,000 of its subscribers. Because it relied on another cloud provider to host
historical data, and on yet another cloud provider to host its newly launched application and
database, direct responsibility for the cause of the failure was unclear and never resolved.

C) Visibility.
Migration to public cloud services relinquishes control to the cloud provider for
securing the systems on which the organization’s data and applications operate.
Management, procedural, and technical controls used in the cloud must be commensurate
with those used for internal organizational systems or surpass them, to avoid creating gaps in
security. Since metrics for comparing two computer systems are an ongoing area of research,
making such comparisons can be a formidable task. Cloud providers are typically reluctant
to provide details of their security and privacy, since such information might be used to
devise an avenue of attack. Moreover, detailed network and system level monitoring by a
cloud subscriber is generally not part of most service arrangements, limiting visibility and the
means to audit operations directly. Transparency in the way the cloud provider operates is a
vital ingredient for effective oversight over system security and privacy by an organization.
To ensure that policy and procedures are being enforced throughout the system lifecycle,
service arrangements should include some means for gaining visibility into the security
controls and processes employed by the cloud provider and their performance over time.
Ideally, the organization would have control over aspects of the means of visibility, such as
the threshold for alerts and notifications or the level of detail and schedule for reports, to
accommodate its needs.

3.2.2 Architecture
The architecture of the software systems used to deliver cloud services comprises hardware
and software residing in the cloud. The physical location of the infrastructure is determined
by the cloud provider as is the implementation of the reliability and scalability logic of the
underlying support framework. Virtual machines often serve as the abstract unit of
deployment and are loosely coupled with the cloud storage architecture. Applications are
built on the programming interfaces of Internet-accessible services, which typically involve
multiple cloud components communicating with each other over application programming



interfaces. Many of the simplified interfaces and service abstractions belie the inherent
complexity that affects security.

Attack Surface.
The hypervisor or virtual machine monitor is an additional layer of software between an
operating system and hardware platform that is used to operate multi-tenant virtual machines.
Besides virtualized resources, the hypervisor normally supports other application
programming interfaces to conduct administrative operations, such as launching, migrating,
and terminating virtual machine instances. Compared with a traditional non-virtualized
implementation, the addition of a hypervisor causes an increase in the attack surface. The
complexity in virtual machine environments can also be more challenging than their
traditional counterparts, giving rise to conditions that undermine security. For example,
paging, checkpointing, and migration of virtual machines can leak sensitive data to persistent
storage, subverting protection mechanisms in the hosted operating system intended to prevent
such occurrences. Moreover, the hypervisor itself can potentially be compromised. For
instance, a vulnerability that allowed specially crafted File Transfer Protocol (FTP) requests
to corrupt a heap buffer in the hypervisor, which could allow the execution of arbitrary code
at the host, was discovered in a widely used virtualization software product, in a routine for
Network Address Translation (NAT).

A) Virtual Network Protection.

Most virtualization platforms have the ability to create software-based switches and
network configurations as part of the virtual environment to allow virtual machines on the
same host to communicate more directly and efficiently. For example, for virtual machines
requiring no external network access, the virtual networking architectures of most
virtualization software products support same-host networking, in which a private subnet is
created for intra-host communications. Traffic over virtual networks may not be visible to
security protection devices on the physical network, such as network-based intrusion
detection and prevention systems. To avoid a loss of visibility and protection against intra-
host attacks, duplication of the physical network protection capabilities may be required on
the virtual network.



B) Client-Side Protection.
A successful defense against attacks requires securing both the client and server side
of cloud computing. With emphasis typically placed on the latter, the former can be easily
overlooked. Web browsers, a key element for many cloud computing services, and the
various available plug-ins and extensions for them are notorious for their security problems.
Moreover, many browser add-ons do not provide automatic updates, increasing the
persistence of any existing vulnerabilities. Maintaining physical and logical security over
clients can be troublesome, especially with embedded mobile devices such as smart phones.
Their size and portability can result in the loss of physical control. Built-in security
mechanisms often go unused or can be overcome or circumvented without difficulty by a
knowledgeable party to gain control over the device. Smart phones are also treated more as
fixed appliances with a limited set of functions than as general-purpose systems. No single
operating system dominates and security patches and updates for system components and
add-ons are not as frequent as for desktop clients, making vulnerabilities more persistent with
a larger window of opportunity for exploitation.

The increased availability and use of social media, personal Webmail, and other
publicly available sites also have associated risks that are a concern, since they can
negatively impact the security of the browser, its underlying platform, and cloud services
accessed, through social engineering attacks. For example, spyware was reportedly installed
in a hospital system via an employee’s personal Webmail account and sent the attacker more
than 1,000 screen captures, containing financial and other confidential information, before
being discovered. Having a backdoor Trojan, keystroke logger, or other type of malware
running on a client does not bode well for the security of cloud or other Web-based services
it accesses. As part of the overall security architecture for cloud computing, organizations
need to review existing measures and employ additional ones, if necessary, to secure the
client side. Banks are beginning to take the lead in deploying hardened browser environments
that encrypt network exchanges and protect against keystroke logging.

C) Server-Side Protection.



Virtual servers and applications, much like their non-virtual counterparts, need to be
secured in IaaS clouds, both physically and logically. Following organizational policies and
procedures, hardening of the operating system and applications should occur to produce
virtual machine images for deployment. Care must also be taken to provision security for the
virtualized environments in which the images run. For example, virtual firewalls can be used
to isolate groups of virtual machines from other hosted groups, such as production systems
from development systems or development systems from other cloud-resident systems.
Carefully managing virtual machine images is also important to avoid accidentally deploying
images under development or containing vulnerabilities. Hybrid clouds are a type of
composite cloud with similar protection issues. In a hybrid cloud the infrastructure consists
of a private cloud composed with either a public cloud or another organization’s private
cloud. The clouds themselves remain unique entities, bound together by standardized or
proprietary technology that enables unified service delivery, but also creates
interdependency. For example, identification and authentication might be performed through
an organization’s private cloud infrastructure, as a means for its users to gain access to
services provisioned in a public cloud. Preventing holes or leaks between the composed
infrastructures is a major concern with hybrid clouds, because of increases in complexity and
diffusion of responsibilities. The availability of the hybrid cloud, computed as the product of
the availability levels for the component clouds, can also be a concern; if the percent
availability of any one component drops, the overall availability suffers proportionately.
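The availability estimate described above can be made concrete with a short sketch (the component values are illustrative, not measurements):

```python
# A hybrid (composite) cloud is only available when every component cloud
# is available, so overall availability is the product of the components'
# individual availability levels.

def composite_availability(components: list[float]) -> float:
    result = 1.0
    for availability in components:
        result *= availability
    return result
```

Two components at 99.9% each give only about 99.8% overall; the composite is always at least as weak as its weakest component.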

3.2.3 Data Protection

Data stored in the cloud typically resides in a shared environment collocated with data from
other customers. Organizations moving sensitive and regulated data into the cloud, therefore,
must account for the means by which access to the data is controlled and the data is kept secure.

A) Data Isolation.
Data can take many forms. For example, for cloud-based application development, it
includes the application programs, scripts, and configuration settings, along with the
development tools. For deployed applications, it includes records and other content created or
used by the applications, as well as account information about the users of the applications.



Access controls are one means to keep data away from unauthorized users; encryption is
another. Access controls are typically identity-based, which makes authentication of the
user’s identity an important issue in cloud computing. Database environments used in cloud
computing can vary significantly. For example, some environments support a multi-instance
model, while others support a multi-tenant model. The former provide a unique database
management system running on a virtual machine instance for each cloud subscriber, giving
the subscriber complete control over role definition, user authorization, and other
administrative tasks related to security. The latter provide a predefined environment for the
cloud subscriber that is shared with other tenants, typically through tagging data with a
subscriber identifier. Tagging gives the appearance of exclusive use of the instance, but relies
on the cloud provider to establish and maintain a sound secure database environment.
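A minimal sketch (illustrative data, not any provider's actual schema) of the tagging model just described: every record carries a subscriber identifier, and the provider's query layer must filter on it consistently.

```python
# In a multi-tenant database, rows from different subscribers share one
# store and are distinguished only by a tenant tag; the appearance of an
# exclusive instance depends entirely on this filter being applied.

rows = [
    {"tenant": "sub-a", "record": "invoice-1"},
    {"tenant": "sub-b", "record": "invoice-2"},
    {"tenant": "sub-a", "record": "invoice-3"},
]

def query(tenant_id: str) -> list:
    """Return only the records tagged with the caller's subscriber id."""
    return [r["record"] for r in rows if r["tenant"] == tenant_id]
```

Forgetting the filter in even one code path exposes every tenant's rows, which is why the text stresses that the subscriber must rely on the provider to establish and maintain a sound database environment.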

Various types of multi-tenant arrangements exist for databases. Each arrangement pools
resources differently, offering different degrees of isolation and resource efficiency.
Other considerations also apply. For example, certain features like data encryption are only
viable with arrangements that use separate rather than shared databases. These sorts of
tradeoffs require careful evaluation of the suitability of the data management solution for the
data involved. Requirements in certain fields, such as healthcare, would likely influence the
choice of database and data organization used in an application. Privacy sensitive
information, in general, is a serious concern. Data must be secured while at rest, in transit,
and in use, and access to the data must be controlled. Standards for communications
protocols and public key certificates allow data transfers to be protected using cryptography.
Procedures for protecting data at rest are not as well standardized, however, making
interoperability an issue due to the predominance of proprietary systems. The lack of
interoperability affects the availability of data and complicates the portability of applications
and data between cloud providers. Currently, the responsibility for cryptographic key
management falls mainly on the cloud service subscriber. Key generation and storage is
usually performed outside the cloud using hardware security modules, which do not scale
well to the cloud paradigm. NIST’s Cryptographic Key Management Project is identifying
scalable and usable cryptographic key management and exchange strategies for use by
government, which could help to alleviate the problem eventually. Protecting data in use is



an emerging area of cryptography with few practical results to offer, leaving trust
mechanisms as the main safeguard.

B) Data Sanitization.
The data sanitization practices that a cloud provider implements have obvious
implications for security. Sanitization is the removal of sensitive data from a storage device
in various situations, such as when a storage device is removed from service or moved
elsewhere to be stored. Data sanitization also applies to backup copies made for recovery and
restoration of service, and also residual data remaining upon termination of service. In a
cloud computing environment, data from one subscriber is physically commingled with the
data of other subscribers, which can complicate matters. For instance, many examples exist
of researchers obtaining used drives from online auctions and other sources and recovering
large amounts of sensitive information from them. With the proper skills and equipment, it is
also possible to recover data from failed drives that are not disposed of properly by cloud providers.

3.2.4 Availability
In simple terms, availability is the extent to which an organization’s full set of computational
resources is accessible and usable. Availability can be affected temporarily or permanently,
and a loss can be partial or complete. Denial of service attacks, equipment outages, and
natural disasters are all threats to availability. The concern is that most downtime is
unplanned and can impact the mission of the organization.

A) Temporary Outages.
Despite employing architectures designed for high service reliability and availability,
cloud computing services can and do experience outages and performance slowdowns. A
number of examples illustrate this point. In February 2008, a popular storage cloud service
suffered a three-hour outage that affected its subscribers, including Twitter and other startup
companies. In June 2009, a lightning storm caused a partial outage of an IaaS cloud that
affected some users for four hours. Similarly, in February 2008, a database cluster failure at a



SaaS cloud caused an outage for several hours, and in January 2009, another brief outage
occurred due to a network device failure. In March 2009, a PaaS cloud experienced severe
degradation for about 22 hours due to networking issues related to an upgrade.

At a level of 99.95% reliability, 4.38 hours of downtime are to be expected in a year.
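The figure follows directly from the availability level:

```python
# Annual downtime implied by a service-level availability figure:
# downtime = (1 - availability) * hours in a year.

HOURS_PER_YEAR = 365 * 24  # 8760

def annual_downtime_hours(availability: float) -> float:
    return (1.0 - availability) * HOURS_PER_YEAR

# 99.95% availability leaves 0.05% of 8760 hours, i.e. 4.38 hours a year.
```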

Periods of scheduled maintenance are also usually excluded as a source of downtime in SLAs
and may be scheduled by the cloud provider on short notice. The level of reliability of a
cloud service and its capabilities for backup and recovery need to be addressed in the
organization’s contingency planning to ensure the recovery and restoration of disrupted cloud
services and operations, using alternate services, equipment, and locations, if required. Cloud
storage services may represent a single point of failure for the applications hosted there. In
such situations, the services of a second cloud provider could be used to back up data
processed by the primary provider to ensure that during a prolonged disruption or serious
disaster at the primary, the data remains available for immediate resumption of critical operations.

B) Denial of Service.
A denial of service attack involves saturating the target with bogus requests to
prevent it from responding to legitimate requests in a timely manner. An attacker typically
uses multiple computers or a botnet to launch an assault. Even an unsuccessful distributed
denial of service attack can quickly consume large amounts of resources to defend against
and cause charges to soar. The dynamic provisioning of a cloud in some ways simplifies the
work of an attacker to cause harm. While the resources of a cloud are significant, with
enough attacking computers they can become saturated [Jen09]. For example, a denial of
service attack against a code hosting site operating over an IaaS cloud resulted in more than
19 hours of downtime. Besides attacks against publicly accessible services, denial of service
attacks can occur against internally accessible services, such as those used in cloud
management. Internally assigned non-routable addresses, used to manage resources within a
cloud provider’s network, may also be used as an attack vector. A worst-case possibility that
exists is for elements of one cloud to attack those of another or to attack some of its own components.



3.2.5 Identity and Access Management

Data sensitivity and privacy of information have become increasingly an area of concern for
organizations and unauthorized access to information resources in the cloud is a major
concern. One recurring issue is that the organizational identification and authentication
framework may not naturally extend into the cloud and extending or changing the existing
framework to support cloud services may be difficult. The alternative of employing two
different authentication systems, one for the internal organizational systems and another for
external cloud-based systems, is a complication that can become unworkable over time.
Identity federation, popularized with the introduction of service oriented architectures, is one
solution that can be accomplished in a number of ways, such as with the Security Assertion
Markup Language (SAML) standard or the OpenID standard.

A) Authentication.
A growing number of cloud providers support the SAML standard and use it to
administer users and authenticate them before providing access to applications and data.
SAML provides a means to exchange information, such as assertions related to a subject or
authentication information, between cooperating domains. SAML request and response
messages are typically mapped over the Simple Object Access Protocol (SOAP), which relies
on the eXtensible Markup Language (XML) for its format. SOAP messages are digitally
signed. For example, once a user has established a public key certificate for a public cloud,
the private key can be used to sign SOAP requests. SOAP message security validation is
complicated and must be carried out carefully to prevent attacks. For example, XML
wrapping attacks have been successfully demonstrated against a public IaaS cloud. XML
wrapping involves manipulation of SOAP messages. A new element (i.e., the wrapper) is
introduced into the SOAP Security header; the original message body is then moved under
the wrapper and replaced by a bogus body containing an operation defined by the attacker.
The original body can still be referenced and its signature verified, but the operation in the
replacement body is executed instead.
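The shape of a wrapped message can be illustrated schematically. This is a heavy simplification for illustration only: real SOAP namespaces and the signature element itself are omitted, and the element names are assumed.

```python
# Schematic XML signature wrapping: the signed original body is moved
# under a wrapper inside the Security header, and a bogus body with an
# attacker-chosen operation takes its place at the envelope level.
import xml.etree.ElementTree as ET

envelope = ET.Element("Envelope")
header = ET.SubElement(envelope, "Header")
security = ET.SubElement(header, "Security")

# The wrapper hides the original, signed body inside the header, where a
# reference-based signature check can still locate and verify it by Id.
wrapper = ET.SubElement(security, "Wrapper")
original_body = ET.SubElement(wrapper, "Body", {"Id": "signed-body"})
ET.SubElement(original_body, "GetStatus")

# The replacement body carries the operation the attacker wants executed.
bogus_body = ET.SubElement(envelope, "Body")
ET.SubElement(bogus_body, "DeleteAllInstances")
```

A verifier that resolves the signature by Id reference still finds and validates the original body, while a service that simply processes the envelope's Body element executes the attacker's operation; that mismatch is exactly what the attack exploits.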

B) Access Control.



SAML alone is not sufficient to provide cloud-based identity and access

management services. The capability to adapt cloud subscriber privileges and maintain
control over access to resources is also needed. As part of identity management, standards
like the eXtensible Access Control Markup Language (XACML) can be used by a cloud
provider to control access to cloud resources, instead of using a proprietary interface.
XACML focuses on the mechanism for arriving at authorization decisions, which
complements SAML’s focus on the means for transferring authentication and authorization
decisions between cooperating entities. XACML is capable of controlling the proprietary
service interfaces of most providers, and some cloud providers already have it in place.
Messages transmitted between XACML entities are susceptible to attack by malicious third
parties, making it important to have safeguards in place to protect decision requests and
authorization decisions from possible attacks, including unauthorized disclosure, replay,
deletion and modification.
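The division of labor between the two standards can be sketched as follows. The rule format here is an invented simplification, not actual XACML syntax, and the subjects and resources are hypothetical:

```python
# A toy policy decision point: SAML conveys who the subject is, while an
# XACML-style component decides whether that subject may perform a given
# action on a resource. Rules below are illustrative only.

POLICIES = [
    {"subject": "alice", "action": "read", "resource": "vm-42", "effect": "Permit"},
    {"subject": "alice", "action": "stop", "resource": "vm-42", "effect": "Deny"},
]

def decide(subject: str, action: str, resource: str) -> str:
    """Return the effect of the first matching rule, defaulting to Deny."""
    for rule in POLICIES:
        if (rule["subject"], rule["action"], rule["resource"]) == (subject, action, resource):
            return rule["effect"]
    return "Deny"  # default-deny when no rule applies
```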

3.2.6 Software Isolation

High degrees of multi-tenancy over large numbers of platforms are needed for cloud
computing to achieve the envisioned flexibility of on-demand provisioning of reliable
services and the cost benefits and efficiencies due to economies of scale. To reach the high
scales of consumption desired, cloud providers have to ensure dynamic flexible delivery of
service and isolation of subscriber resources. Multi-tenancy in cloud computing is typically
done by multiplexing the execution of virtual machines from potentially different users on the
same physical server. It is important to note that applications deployed on guest virtual
machines remain susceptible to attack and compromise, much the same as their non-
virtualized counterparts. This was dramatically exemplified by a botnet found operating out
of an IaaS cloud computing environment.

A) Hypervisor Complexity.
The security of a computer system depends on the quality of the underlying software
kernel that controls the confinement and execution of processes. A virtual machine monitor or
hypervisor is designed to run multiple virtual machines, each hosting an operating system
and applications, concurrently on a single host computer, and to provide isolation between
the different guest virtual machines. A virtual machine monitor can, in theory, be smaller and



less complex than an operating system. These characteristics generally make it easier to
analyze and improve the quality of security, giving a virtual machine monitor the potential to
be better suited for maintaining strong isolation between guest virtual machines than an
operating system is for isolating processes. In practice, however, modern hypervisors can be
large and complex, comparable to an operating system, which negates this advantage. For
example, Xen, an open source x86 virtual machine monitor, incorporates a modified Linux
kernel to implement a privileged partition for input/output operations, and KVM, another
open source effort, transforms a Linux kernel into a virtual machine monitor. Understanding
the use of virtualization by a cloud provider is a prerequisite to understanding the security
risk involved.

B) Attack Vectors.
Multi-tenancy in virtual machine-based cloud infrastructures, together with the
subtleties in the way physical resources are shared between guest virtual machines, can give
rise to new sources of threat. The most serious threat is that malicious code can escape the
confines of its virtual machine and interfere with the hypervisor or other guest virtual
machines. Live migration, the ability to transition a virtual machine between hypervisors on
different host computers without halting the guest operating system, and other features
provided by virtual machine monitor environments to facilitate systems management, also
increase software size and complexity and potentially add other areas to target in an attack.
Several examples illustrate the types of attack vectors possible. The first is mapping the
cloud infrastructure. While seemingly a daunting task to perform, researchers have
demonstrated an approach with a popular IaaS cloud. By launching multiple virtual machine
instances from multiple cloud subscriber accounts and using network probes, assigned IP
addresses and domain names were analyzed to identify service location patterns. Building on
that information and general technique, the plausible location of a specific target virtual
machine could be identified and new virtual machines instantiated to be eventually co-
resident with the target.

Once a suitable target location is found, the next step for the guest virtual machine is
to bypass or overcome containment by the hypervisor or to take down the hypervisor and



system entirely. Weaknesses in the provided programming interfaces and the processing of
instructions are common targets for uncovering vulnerabilities to exploit. For example, a
serious flaw that allowed an attacker to write to an arbitrary out-of-bounds memory location
was discovered in the power management code of a hypervisor by fuzzing emulated I/O
ports. A denial of service vulnerability, which could allow a guest virtual machine to crash
the host computer along with the other virtual machines being hosted, was also uncovered in
a virtual device driver of a popular virtualization software product. More indirect attack
avenues may also be possible. For example, researchers developed a way for an attacker to
gain administrative control of guest virtual machines during a live migration, employing a
man-in-the-middle attack to modify the code used for authentication. Memory modification
during migration presents other possibilities, such as the potential to insert a virtual machine-
based rootkit layer below the operating system. A zero-day exploit in HyperVM, an open
source application for managing virtual private servers, purportedly led to the destruction of
approximately 100,000 virtual server-based Websites hosted by a service provider. Another
example of an indirect attack involves monitoring resource utilization on a shared server to
gain information and perhaps perform a side-channel attack, similar to attacks used in other computing environments.



Chapter 4
Green Cloud Architecture

Green Computing enables companies to meet business demands for cost-effective, energy-
efficient, flexible, secure and stable solutions while being environmentally responsible. There is
no denying it: the cost of energy is out of control and it affects every industry in the world,
including information technology. As the old adage goes, "out-of-sight, out-of-mind." With
neither drainage pipes nor chimneys, it's easy to forget that our clean, cool data centers can
have significant impact on both the corporate budget and the environment. Every data center
transaction requires power. Every IT asset purchased must eventually be disposed of, one
way or another. Efficiency, equipment disposal and recycling, and energy consumption,
including power and cooling costs, have become priority for those who manage the data
centers that make businesses run.

“Green Computing” is defined as the study and practice of using computing resources
efficiently through a methodology that combines reducing hazardous materials, maximizing
energy efficiency during the product’s lifetime, and recycling older technologies and defunct products.

Most data centers built before 2001 were designed according to traditional capacity
models and technology limitations, which forced system architects to expand capacity by
attaching new assets. In essence, one server per workload, with every asset requiring
dedicated floor space, management, power and cooling. These silo infrastructures are
inherently inefficient, leading to asset underutilization, greater hardware expenditure and
higher total energy consumption. In a 2006 study, the respected research firm IDC found that
the expense to power and cool a company’s existing install-base of servers equated to 45.8%
of new IT spending. The analyst group forecast that server power and cooling expense
could amount to 65.8% of new server spending by 2011. Also according to the IDC:

1. Right now, 50 cents of every dollar spent on IT equipment is devoted to powering and
cooling; by 2011 that per unit cost might well approach 70 cents of every dollar.



2. Experience has shown that growing companies typically add more servers, rather than
implementing a consolidation or virtualization solution. More servers mean larger
utility bills and potentially greater environmental issues.

3. Between 2000 and 2010 server installations will grow by 6 times and storage by 69
times. (IBM/Consultant Studies)

4. U.S. energy consumption by data centers is expected to almost double in the next five
years (U.S. EPA, August 2007)

5. U.S. commercial electrical costs increased by 10% from 2005 to 2006 (EPA Monthly
Forecast, 2007)

6. Data center power and cooling costs have increased 800% since 1996.
(IBM/Consultant Studies)

7. Over the next five years, it is expected that most U.S. data centers will spend as much
on energy costs as on hardware, and twice as much as they currently do on server
management and administration costs. (IBM/Consultant Studies)

People in the IT industry are reassessing data center strategies to determine if energy
efficiency should be added to the list of critical operating parameters. Issues of concern include:

1. Reducing data center energy consumption, as well as power and cooling costs

2. Security and data access are critical and must be more easily and efficiently managed

3. Critical business processes must remain up and running in a time of power drain or outage.

These issues are leading more companies to adopt a Green Computing plan for
business operations, energy efficiency and IT budget management. Green Computing is
becoming recognized as a prime way to optimize the IT environment for the benefit of the
corporate bottom line – as well as the preservation of the planet. It is about efficiency, power
consumption and the application of such issues in business decision-making. Simply stated,



Green Computing benefits the environment and a company’s bottom line. It can be a win/win
situation, meeting business demands for cost-effective, energy-efficient, flexible, secure and
stable solutions, while demonstrating new levels of environmental responsibility.

4.1 Green Cloud Architecture

Fig 7 Green Cloud Architecture

As discussed above, the cloud computing platform as the next-generation IT infrastructure
enables enterprises to consolidate computing resources, reduce management complexity and
speed the response to business dynamics. Improving resource utilization and reducing
power consumption are key challenges to the success of operating a cloud computing environment.

To address such challenges, we design the GreenCloud architecture and the

corresponding Green Cloud exploratory system. The exploratory system monitors a variety
of system factors and performance measures including application workload, resource
utilization and power consumption, hence the system is able to dynamically adapt workload
and resource utilization through VM live migration. Therefore, the GreenCloud architecture
reduces unnecessary power consumption in a cloud computing environment. Figure 7



demonstrates the GreenCloud architecture and shows the functions of components and their
relations in the architecture.

Monitoring Service monitors and collects comprehensive factors such as application

workload, resource utilization and power consumption, etc. The Monitoring Service is built
on top of IBM Tivoli framework and Xen, where the IBM Tivoli framework is a CORBA-
based system management platform managing a large number of remote locations and
devices; Xen is a virtual machine monitor (VMM). The Monitoring Service serves as the
global information provider and provides on-demand reports by aggregating and pruning
the historical raw monitoring data to support intelligent actions taken by the Migration
Manager.

Migration Manager triggers live migration and makes decisions on the placement of
virtual machines on physical servers based on knowledge or information provided by the
Monitoring Service. The migration scheduling engine searches the optimal placement by a
heuristic algorithm, and sends instructions to execute the VM migration and turn on or off a
server. A heuristic algorithm to search an optimal VM placement and the implementation
details of Migration Manager will be discussed in Section IV. The output of the algorithm is
an action list in terms of migrate actions (e.g. Migrate VM1 from PM2 to PM4) and local
adjustment actions(e.g. Set VM2 CPU to 1500MHz).
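
The paper's heuristic itself is not reproduced here, but a simple first-fit-decreasing
packing gives the flavour of how such an action list can be produced. Everything below is
a sketch under our own assumptions: host names like `PM1`, a single normalized CPU
`capacity` per host, and the output format of the actions.

```python
def plan_migrations(current, demand, hosts, capacity=1.0):
    """First-fit-decreasing stand-in for a placement heuristic.
    current: vm -> host it runs on now
    demand:  vm -> CPU share it needs (0..capacity)
    hosts:   ordered list of physical machines
    Returns (migrate actions, hosts that can be powered off)."""
    used = {h: 0.0 for h in hosts}
    target = {}
    # Pack the largest VMs first so fewer hosts stay powered on.
    for vm in sorted(demand, key=demand.get, reverse=True):
        for h in hosts:
            if used[h] + demand[vm] <= capacity:
                used[h] += demand[vm]
                target[vm] = h
                break
        else:
            raise RuntimeError(f"no host can fit {vm}")
    actions = [f"Migrate {vm} from {current[vm]} to {target[vm]}"
               for vm in demand if current[vm] != target[vm]]
    power_off = [h for h in hosts if used[h] == 0.0]
    return actions, power_off
```

For example, two VMs with demands 0.6 and 0.3 on separate hosts consolidate onto one host,
freeing the other to be turned off, which is exactly the kind of action list the Migration
Manager emits.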

The Managed Environment includes virtual machines, physical machines, resources,
devices, remote commands on VMs, and applications with adaptive workload.

E-Map is a web-based service with a Flash front-end. It provides a user interface (UI)
showing a real-time view of present and past system on/off status, resource consumption,
workload status, temperature and energy consumption in the system at multiple scales, from
a high-level overview down to individual IT devices (e.g. servers and storage devices) and
other equipment (e.g. water- or air-cooling devices). E-Map is connected to the Workload
Simulator, which predicts the consequences of a given action adopted by the Migration
Manager through simulation in a real environment.



The Workload Simulator accepts user instructions to adapt the workload (e.g. CPU
utilization) on servers, and enables the control of the Migration Manager under various
workloads. E-Map then collects the corresponding real-time measurements and demonstrates
the performance of the system to users. Users and system designers can thus verify the
effectiveness of a given algorithm or adjust its parameters to achieve better performance.

The Asset Repository is a database storing static server information, such as IP
address, type, CPU configuration, memory setting, and the topology of the servers. The
GreenCloud IDC management framework is running and accessible to IBM internal staff
and customers, who can view the up-to-date status of resources, configure their
applications, allocate resources, and experience the live management system.
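
A minimal sketch of such an asset repository, using SQLite and an invented schema (the real
repository's layout is not given in the source), could look like this:

```python
import sqlite3

# Illustrative schema only: static per-server facts of the kind the
# text lists (IP address, type, CPU and memory configuration, location).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE assets (
    ip        TEXT PRIMARY KEY,
    type      TEXT,
    cpu_mhz   INTEGER,
    memory_mb INTEGER,
    rack      TEXT)""")

# Register one physical machine (values are made up).
conn.execute("INSERT INTO assets VALUES (?,?,?,?,?)",
             ("10.0.0.4", "blade", 2400, 8192, "rack-7"))

# The Migration Manager could query static facts like this:
row = conn.execute("SELECT type, memory_mb FROM assets WHERE ip=?",
                   ("10.0.0.4",)).fetchone()
```

Because the information is static, a plain relational table queried by IP address is
sufficient; dynamic measurements stay with the Monitoring Service.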

4.2 Guidelines for Successful Green Cloud Computing:-

Green Computing involves a range of services and technologies based on best practices for
reducing energy usage. As noted above, IBM recommends a comprehensive five-step plan in
developing energy-efficient, cost-effective, environmentally responsible information
technology operations. Analyses of the five steps follow.

1. Diagnose :-
It is difficult to manage what cannot be measured, particularly when it comes to
energy efficiency. It has been estimated that 40% of small and mid-size businesses in the
United States do not know how much they spend on overall energy costs for their IT systems.
It is important for a company to collect accurate, detailed information on its energy
efficiency as a first step in pinpointing areas for potential improvement and to identify
existing systems ready for retirement. Mainline and IBM provide Energy Efficiency
Assessments, which are proven tools for diagnosing the energy demands of physical
infrastructure and IT equipment.

2. Build :–
After identifying needs and solution requirements, and reviewing the Energy
Efficiency Assessments, the second step covers planning and designing the new solution,
including building or preparing facilities for replacements, migrations or upgrades.
Implementing best practices, innovative technologies and solution expertise results in
improved operations at reduced cost.

3. Virtualize :–
Virtualization can produce the fastest and greatest impact on energy efficiency in an
information technology center. Consolidating an IT infrastructure can increase utilization and
lower annual power costs. Reducing the number of servers and storage devices through
virtualization strategies can create a leaner data center without sacrificing performance. Less
complexity, reduced cost, better utilization and improved management are all benefits of
server, storage and desktop virtualization, and all help achieve Green Computing.

4. Manage :–
Data center energy consumption is managed through provisioning and virtualization
management software, providing important power alerts, as well as trending, capping and
heat measurements. Such software can reduce power consumption by 80% annually.
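
The actual policy of such management software is not specified in the source; the toy
power-capping rule below only illustrates the idea of trading CPU frequency against a power
cap. The thresholds, step size and frequency floor are our own assumptions.

```python
def apply_power_cap(power_w, freq_mhz, cap_w, step=100, floor=800):
    """Toy capping rule: step the CPU frequency down while measured
    power exceeds the cap, and step it back up when there is clear
    headroom (below 90% of the cap). All parameters are illustrative."""
    if power_w > cap_w:
        return max(floor, freq_mhz - step)   # over cap: throttle down
    if power_w < 0.9 * cap_w:
        return freq_mhz + step               # headroom: restore speed
    return freq_mhz                          # near the cap: hold steady
```

Run inside a control loop against live power readings, a rule of this shape keeps a server
under its power budget while giving performance back when load drops.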

5. Cool :–
Excessive heat threatens equipment performance and operating stability. Innovative
IBM cooling solutions for inside and outside the data center minimize hotspots and reduce
energy consumption. IBM's patented Rear Door Heat eXchanger "cooling doors" are now
available across most IBM Systems offerings. While requiring no additional fans or
electricity, they reduce server heat output in data centers up to 60% by utilizing chilled water
to dissipate heat generated by computer systems.

4.3 Critical Components of a Solid Green Computing Solution

Modern information technology systems rely on a complicated combination of
people, networks, hardware and application solutions. For that reason, Green Computing
must address increasingly sophisticated issues. However, most analysts agree that any Green
Computing solution must include these key components:
• Server, Storage and Desktop Virtualization on Solid Hardware Platforms

• Proven, Reliable and Flexible Software



4.4 Pros and Cons of Cloud Computing :-

Each and every technology comes with some advantages and some disadvantages, and cloud
computing is no exception. It does have advantages and corresponding disadvantages. Let us
discuss each of them briefly.

Advantages of Cloud Computing :-

1. The great advantage of cloud computing is “elasticity”: the ability to add capacity or
applications almost at a moment’s notice. Companies buy exactly the amount of
storage, computing power, security and other IT functions that they need from
specialists in data-center computing. They get sophisticated data center services on
demand, in only the amount they need and can pay for, at service levels set with the
vendor, with capabilities that can be added or subtracted at will.

2. The metered-cost, pay-as-you-go approach appeals to small and medium-sized
enterprises; little or no capital investment and maintenance cost is needed. IT is
remotely managed and maintained, typically for a monthly fee, and the company can
let go of “plumbing concerns”. Since the vendor has many customers, it can lower the
per-unit cost to each customer. Larger companies may find it easier to manage
collaborations in the cloud, rather than having to make holes in their firewalls for
contract research organizations. SaaS deployments usually take less time than in-
house ones, upgrades are easier, and users are always using the most recent version of
the application. There may be fewer bugs because having only one version of the
software reduces complexity.

3. If a company opts for cloud storage, it does not need to purchase the hardware
required to implement and build a network of its own. This also cuts costs, as there is
no need to appoint staff to manage and maintain the network. All the company needs to
do is outsource its requirements to a cloud managing company.

4. This technique is also quite eco-friendly and economical, because instead of
purchasing a new server and using it below its capacity, we use only the amount of
server space we require. It also helps off-load the power requirement for processing
during peak hours.

5. Since in this technique we use the server according to our need, the space required to
store, manage and cool the server also reduces drastically. This not only helps save
energy but also proves to be economically viable in all respects.
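
The pay-as-you-go argument in points 2 and 3 can be made concrete with a back-of-envelope
comparison of owning hardware versus renting metered capacity. All figures below are made
up purely for illustration; real prices vary widely by provider and workload.

```python
def cheaper_option(capex, opex_per_month, metered_per_month, months):
    """Compare total cost of ownership over a horizon:
    - owning: one-time capital expense plus monthly upkeep (staff,
      power, maintenance)
    - cloud:  a flat metered monthly fee, no upfront investment.
    Returns which option is cheaper and both totals."""
    own = capex + opex_per_month * months
    cloud = metered_per_month * months
    return ("cloud" if cloud < own else "own"), own, cloud
```

With, say, a $50,000 server purchase plus $500/month upkeep against a $1,500/month metered
fee, the cloud wins over a 3-year horizon but ownership wins over 5 years, which is exactly
why the metered model appeals most to smaller enterprises with shorter planning horizons.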

Disadvantages of Cloud Computing :-

1. In the cloud you may not have the kind of control over your data or the performance
of your applications that you need, or the ability to audit or change the processes and
policies under which users must work.

2. Different parts of an application might be in many places in the cloud. Monitoring
and maintenance tools are immature; it is hard to get metrics out of the cloud, and
general management of the work is not simple.

3. There are systems management tools for the cloud environment but they may not
integrate with existing system management tools, so you are likely to need two
systems. Nevertheless, cloud computing may provide enough benefits to compensate
for the inconvenience of two tools.

4. Cloud customers may risk losing data by having them locked into proprietary formats
and may lose control of data because tools to see who is using them or who can view
them are inadequate.



5. Data loss is a real risk. In October 2009 1 million US users of the T-Mobile Sidekick
mobile phone and emailing device lost data as a result of server failure at Danger, a
company recently acquired by Microsoft.

6. It may not be easy to tailor service-level agreements (SLAs) to the specific needs of a
business. Compensation for downtime may be inadequate and SLAs are unlikely to
cover concomitant damages, but not all applications have stringent uptime
requirements. It is sensible to balance the cost of guaranteeing internal uptime against
the advantages of opting for the cloud. It could be that your own IT organization is
not as sophisticated as it might seem.

7. Standards are immature and things change very rapidly in the cloud. All IaaS and
SaaS providers use different technologies and different standards. The storage
infrastructure behind Amazon is different from that of the typical data center (e.g., big
Unix file systems). The Azure storage engine does not use a standard relational
database; Google’s App Engine does not support an SQL database. So you cannot just
move applications to the cloud and expect them to run. At least as much work is
involved in moving an application to the cloud as is involved in moving it from an
existing server to a new one. There is also the issue of employee skills: staff may
need retraining and they may resent a change to the cloud and fear job losses.

Bear in mind, though, that it is easy to underestimate risks associated with the current
environment while overestimating the risk of a new one. Cloud computing is not risky for
every system. Potential users need to evaluate security measures such as firewalls and
encryption techniques, and make sure that they will have access to their data and to the
software or
source code if the service provider goes out of business.



Chapter 5
Future Scope

Up to this point we have discussed what cloud computing is, why it has become such a
buzzword in the IT industry, and what its advantages and disadvantages are. We then studied
the Green Cloud architecture. From all this we can conclude that there is still considerable
scope for improving the cloud architecture so as to make it more economical, reliable,
energy-efficient and eco-friendly.

In short, the working of cloud computing can be summarized as follows:-

1. First, the end user has to register with a cloud service provider, such as Amazon, for
cloud service. According to the customer's needs, he is then provided service under
some type of cloud service model.

2. Whatever data the customer needs is stored at data centres built and maintained by the
cloud service provider. The user can then access that data from any place in the world,
for which, of course, he needs an active internet connection.

All this is pretty much similar to signing up for a mail service such as Gmail, wherein we
create a user account and can then read and send mail to anyone by means of that account.

Here the actual entities we need to focus on come into the picture. As mentioned above,
whenever a user signs up with a cloud service provider, he puts his entire data, which he
wants to access from any place, onto the centralized server of the service provider. In
order to store this enormous amount of data, cloud service providers need big data centres
with server machines and many other expensive networking gadgets. In fact, some service
providers may end up having a grid architecture in place, onto which they store these
massive amounts of data.

Now, as we all know, a grid architecture is huge, complicated and expensive to maintain,
and it carries the additional overhead of cooling. So the cloud service provider not only
has to make a huge one-time investment in the expensive networking components used to
construct the grid, but also has to invest heavily in cooling and maintaining the data
centre. Moreover, the life of these architectures is very short compared to the investment
they demand. For instance, a supercomputer like Param Yuva, ranked 68th among the best
supercomputers worldwide, is expected to have a life span of about 6 to 7 years. This is
very little when we have to spend about $1 million to purchase it and then about $1 million
annually to maintain and cool it.

We now suggest some modifications that can improve the efficiency of the cloud to a great
extent, which will in turn make it much more economical and viable, while at the same time
addressing some of the key issues in cloud computing discussed in Chapter 4.

In the current cloud architecture, whatever data we want to put into the cloud is stored
with the cloud service provider. What we suggest instead is to keep this data in a shared
directory on the user's hard disk, and to store with the provider not the actual user data
but only the location from which the user subscribes to the service. Whenever a user signs
up for the service, all the service provider needs to maintain are attributes such as the
MAC address of the customer's machine, together with parameters like the DNS address and
router address, from which the user's system can be tracked even if he logs into the
service via different networks. Whatever data the user wants to have accessible, he places
in a local directory stored on his own system, say cloudshare.
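
A minimal sketch of the proposed provider-side registry follows. The field names and the
in-memory dictionary are our own invention; the point is only that the provider stores
credentials and network attributes, never the user's files.

```python
# Provider-side registry: username -> credentials and the network
# attributes needed to locate the subscriber's machine.
registry = {}

def sign_up(username, password, mac, dns, router):
    """Record only what is needed to authenticate and to find the
    user's system on any network; the data itself stays on the
    user's disk (e.g. in the cloudshare directory)."""
    registry[username] = {"password": password, "mac": mac,
                          "dns": dns, "router": router}

def locate(username, password):
    """First authentication layer: return the machine attributes
    only if the provider-side credentials check out."""
    rec = registry.get(username)
    if rec is None or rec["password"] != password:
        return None
    return {k: rec[k] for k in ("mac", "dns", "router")}
```

A registry entry of this size is a few hundred bytes per subscriber, which is what allows
the centralized storage requirement to shrink to a single well-configured PC, as argued
below.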



At the service provider's database, all we then have is the username, password and the
attributes of the client machine by which it can be found on any network throughout the
world. This drastically reduces the amount of data to be stored on the centralized servers.
If the provider's storage requirement is small, all it needs to purchase and maintain is a
PC with a good configuration, say 4GB RAM and about 1TB HDD, to store the client details.
This not only saves the money that would have been spent on networking components, but, as
the amount of data stored on the centralized server decreases significantly, it also
reduces the electricity consumption and cooling requirements of the data centre.

No doubt this system seems very attractive in terms of monetary gains, but in the IT
industry we cannot ignore security for the sake of monetary gains. Keeping this in mind, we
have designed a security system for our architecture. As mentioned earlier, the service
provider's database will contain the username and password of the customer. For more secure
access to data, customers should create user accounts on their own systems too. Now
whenever a person tries to gain access to data by means of the cloud, he first needs to
know the username and password with which the authentic user signed up with the service
provider, and in addition he needs to know the username and password that give access to
the authentic user's machine. It is thus a two-layer authentication system, which makes the
cloud much more secure.
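
The two-layer check described above can be sketched as follows. The credential stores are
plain dictionaries here for illustration; a real deployment would hash passwords and use
proper protocols.

```python
def access_data(provider_creds, machine_creds, provider_db, machine_db):
    """Two-layer authentication sketch: access is granted only if
    BOTH the cloud account (layer 1) and the account on the data
    owner's machine (layer 2) authenticate."""
    u1, p1 = provider_creds
    u2, p2 = machine_creds
    if provider_db.get(u1) != p1:
        return False        # layer 1: provider sign-in fails
    if machine_db.get(u2) != p2:
        return False        # layer 2: owner's machine account fails
    return True
```

An attacker must now compromise two independent credential sets, one held by the provider
and one held on the user's own system, which is the source of the added security.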

Another key security issue discussed in the earlier section was secure deletion of data
from the cloud. This architecture may also be able to address that concern. Whenever data
is erased from any storage medium, be it the cloud or a local storage device, various data
recovery tools can recover the deleted data. At times this can be useful, but in some cases
it is a serious issue, especially if the concerned parties are agencies like the CBI, FBI
or RAW. So secure deletion of data from the cloud is important not only from a security
point of view but also from the user-privacy perspective. If cloud services are implemented
using the technique mentioned above, we will be able to tackle this issue. To remove data
from the cloud, all a user needs to do is remove the respective files from the cloud shared
folder and either store them in some other part of the disk or delete them from the disk
entirely. Other users will then no longer be able to access the deleted data.

With the above architecture we may be able to solve a number of issues in cloud computing,
but a couple of issues, such as platform dependencies and differences in operating systems,
still persist, and that is where the future scope in the field of cloud computing lies.

It is not at all the case that the above technique can completely replace the existing
cloud computing architecture; it is best suited to the storage-as-a-service model. But even
if it is implemented in other models such as IaaS or PaaS, the technique can still prove
very effective, as it takes advantage of the reduced storage requirement.



Chapter 6
Conclusion

We considered both public and private clouds and included energy consumption in switching
and transmission as well as in data processing and data storage. We discussed different
models of cloud computing, such as SaaS, IaaS and PaaS. Any future service is likely to
include some
combination of each of these service models. Power consumption in transport represents a
significant proportion of total power consumption for cloud storage services at medium and
high usage rates. For typical networks used to deliver cloud services today, public cloud
storage can consume of the order of three to four times more power than private cloud
storage due to the increased energy consumption in transport. Nevertheless, private and
public cloud storage services are more energy efficient than storage on local hard disk drives
when files are only occasionally accessed. However, as the number of file downloads per
hour increases, the energy consumption in transport grows and storage as a service consumes
more power than storage on local hard disk drives. The number of users per server is the
most significant determinant of the energy efficiency of a cloud software service. Cloud
software as a service is ideal for applications that require average frame rates lower than
the equivalent of 0.1 screen refresh frames per second. Significant energy savings are
achieved by using low-end laptops for routine tasks and cloud processing services for
computationally intensive tasks, instead of a mid-range or high-end PC, provided the number
of
computationally intensive tasks is small. Energy consumption in transport with a private
cloud processing service is negligibly small. Our broad conclusion is that the energy
consumption of cloud computing needs to be considered as an integrated supply chain
logistics problem, in which processing, storage, and transport are all considered together.
Using this approach, we have shown that cloud computing can enable more energy-efficient



use of computing power, especially when the users’ predominant computing tasks are of low
intensity or infrequent. However, under some circumstances, cloud computing can consume
more energy than conventional computing where each user performs all computing on their
own PC. Even with energy-saving techniques such as server virtualization and advanced
cooling systems, cloud computing is not always the greenest computing technology.
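
The crossover described above, where transport energy overtakes the savings as the download
rate grows, can be expressed as a toy per-user model. All figures are illustrative; none
are taken from the cited study.

```python
def cloud_storage_power(downloads_per_hour, transport_j_per_file,
                        storage_w_per_user):
    """Toy per-user power model for cloud storage: a fixed shared-
    storage share plus a transport term that grows linearly with
    the download rate (joules per file converted to watts)."""
    return (storage_w_per_user
            + downloads_per_hour * transport_j_per_file / 3600.0)
```

Comparing this against the roughly constant power of a local hard disk shows the behaviour
the text describes: at low access rates the cloud term is smaller, but as downloads per
hour increase the transport term eventually makes the cloud service consume more power than
local storage.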

References :-
[1] Green Cloud Computing: Balancing Energy in Processing, Storage and Transport
by Jayant Baliga, Robert W. A. Ayre, Kerry Hinton, and Rodney S. Tucker.

[2] Back to Green by Anand R. Prasad, Subir Saha, Prateep Misra, Basavaraj Hooli and
Masahide Murakami

[3] Cloud Computing by Prasanna Pachwadkar Sunil Joglekar

[4] GreenCloud: A New Architecture for Green Data Center by Liang Liu, Hao Wang,
Xue Liu, Xing Jin, WenBo He, QingBo Wang, Ying Chen

[5] Guidelines on Security and Privacy in Public Cloud Computing by Wayne Jansen
and Timothy Grance.

[6] Cloud Computing: A Brief Summary by Neil Turner.

[7] White Paper: 5 Steps to a Successful Green Computing Solution by Mainline and IBM.