
IEEE Cloud Computing
Prototype

A digital magazine in support of the IEEE Cloud Computing Initiative

What's Special?
Fraudulent Resource Consumption
Securing the Cloud

May/June 2013


Securing the Cloud


One of the largest problems to be addressed in cloud computing is that of security: identifying threats, understanding responses, and evaluating the tradeoffs involved in making this emerging technology secure for common use. This prototype issue collects articles on this topic as a sample of what to expect from this new strategic publication, whose first issue will be published in early 2014.

Guest Editor's Introduction
Jon Rokne

What's Special About Cloud Security?
Peter Mell

Toward Accountability in the Cloud
Siani Pearson

Public Sector Clouds Beginning to Blossom: Efficiency, New Culture Trumping Security Fears
Greg Goth

The Insecurity of Cloud Utility Models
Joseph Idziorek, Mark F. Tannian, and Doug Jacobson

The Threat in the Cloud
Matthew Green

Implementing Effective Controls in a Mobile, Agile, Cloud-Enabled Enterprise
Dave Martin

Cloud Computing Initiative Steering Committee


Steve Diamond (Chair)
David Bernstein
Nim Cheung
Mark Davis

Kathy Grise (Program Manager)


Michael Lightner
Mary Lynne Nielsen
Jon Rokne

Jennifer Schopf
Doug Zuckerman

IEEE Computer Society Staff

Angela Burgess, Executive Director
Robin Baldwin, Manager, Editorial Services
Evan Butterfield, Director, Products & Services
Sandra Brown, Senior Business Development Manager
Marian Anderson, Senior Advertising Coordinator

cloud@computer.org

Guest Editor's Introduction

Cloud Computing: Transforming Information Technology
Jon Rokne, University of Calgary

The migration of information and processes to the cloud is transforming not only where computing is done but, fundamentally, how it is done. Cloud computing solves many conventional computing problems, including handling peak loads, installing software updates, and utilizing excess computing cycles, but the new technology has also created new challenges in data security, data ownership, transborder data storage, and the training of highly skilled cloud computing professionals. As more in the corporate and academic worlds invest in this technology, IT professionals' working environments are also changing dramatically.

Taking Initiative

Recognizing that cloud computing is poised to be the dominant form of computing in the future, IEEE has funded a Cloud
Computing Initiative (CCI) to coordinate
its cloud-related activities. To that end, the
IEEE CCI has established tracks for cloud
computing standards, conferences, publications, and educational materials. The
Cloud Computing Initiative portal site
(http://cloudcomputing.ieee.org) presents information on all these topics.
The CCI publications track is tasked
with developing a slate of cloud computing-related periodicals. To date, it
has provided seed funding for two publications: IEEE Transactions on Cloud
Computing, launched in 2013, and IEEE
Cloud Computing magazine, which will
be available in early 2014. These publications aim to provide a focused home
for cloud-related research and feature
articles so that cloud researchers can
publish their most important work,
informing other professionals of new
developments in the field.

An Invitation

Consider this a personal invitation from me, Chris Miyachi, to join the Computer Society's Special Technical Community on Cloud Computing (CS STC CC) that I'm chairing. Special Technical Communities are put together by the Computer Society to establish nimble groups to address emerging interests. The CS STC CC is for members and run by members. Our charter is to provide members with accurate, vendor-neutral information that will demystify IT's top cloud-related concerns (such as ensuring adequate security, framing service-level agreements, impacts on staffing, and enabling rapid scaling up and down). We welcome people new to cloud computing, and we seek out experts in our community from both industry and academia.

What makes us different from other blogs and forums on cloud computing? Two things: our members and the power of the IEEE and the IEEE Computer Society. We work closely with the IEEE Cloud Computing Initiative, which produces content on cloud computing. We contribute to the CCI social networking sites (Facebook, LinkedIn, and Twitter) and we will provide content for the IEEE Cloud Computing web site (http://cloudcomputing.ieee.org).

But we need you. We need you to not only join the CS STC CC, which is free to all IEEE Computer Society members, but to volunteer for one of our open positions. We need you to join the conversation on our social networking sites.

And we, as a community, need to continue to grow together to understand the rapidly changing world of cloud computing.

When I first started to work on projects related to cloud computing, I wondered why the concept was taking off now. After all, I can remember 25 years ago when thin clients were going to take over the world. And then they didn't. One of the reasons they didn't was that the price of hard drives dropped dramatically, changing the economics of putting data on your own computer. But with the rise of the large server farms required to power Amazon and Google, the model of buying virtual servers and storage began to make more economic sense.

And here we are today. Prices continue to decrease as service increases and competition keeps everyone on their toes. Is cloud computing here to stay, or will we be back to personal systems in the future? Open source advocates (http://www.guardian.co.uk/technology/2008/sep/29/cloud.computing.richard.stallman) believe that cloud computing will force people to buy into proprietary systems. We as consumers of cloud systems will determine what the future of the cloud will look like. Do we want a common interface to the cloud that will allow us to move freely from one cloud provider to the next? If so, we will need to push for that with cloud providers.

This kind of conversation is exactly the kind we will be having at the CS STC CC, so join the discussion today at www.computer.org/cc.

In this Issue

To highlight the IEEE CCI's activities and serve as a preliminary announcement of the cloud publications that will appear later this year and next, the IEEE Computer Society publications team has created this IEEE Cloud Computing supplement, reprinting cloud computing articles from other IEEE Computer Society magazines. The six articles in this supplement cover a wide range of cloud-related issues, focused particularly on security topics.

We open with IT Professional's "What's Special About Cloud Security?" in which author Peter Mell claims that developers can address cloud security issues by creatively applying techniques developed for other technologies. In "Toward Accountability in the Cloud," Siani Pearson explores cloud computing consumers' concerns about data and privacy protection, calling for taking context into account and avoiding one-size-fits-all approaches. "Public Sector Clouds Beginning to Blossom" is a news feature in which author Greg Goth explores cloud computing's attraction for the financially strapped public sector. In "The Insecurity of Cloud Utility Models," Joseph Idziorek, Mark Tannian, and Doug Jacobson examine an issue that isn't immediately obvious: in the pay-as-you-go cloud billing process, fraudulent consumption (by a botnet, for example) can lead to significant financial harm for legitimate users. From there, we move on to Matthew Green's discussion of the security risks associated with running cryptographic services in cloud-based virtual machines in "The Threat in the Cloud." In our last article, "Implementing Effective Controls in a Mobile, Agile, Cloud-Enabled Enterprise," Dave Martin dissects the technical and cultural changes required of IT security teams as businesses increasingly rely on mobile and cloud-based activities.

This prototype illustrates the type of articles the IEEE Computer Society is already publishing on cloud computing, and it hints at the reliable, insightful content you can expect to find in IEEE Cloud Computing. For subscription information, be sure to visit www.computer.org/cloudcomputing.

The CCI Publications track would like to have broad representation from IEEE societies with interests in cloud computing. If you wish to participate in the ongoing discussion of the publications initiatives, please contact me via email.

Jon Rokne is the IEEE CCI Publications track chair. He is a professor and former head of the computer science department at the University of Calgary and the past vice president of publications for IEEE. Contact him at rokne@ucalgary.ca.

Securing the Cloud

What's Special About Cloud Security?
Peter Mell, US National Institute of Standards and Technology

Although cloud security concerns have consistently ranked as one of the top challenges to cloud adoption,1 it's not clear what security issues are particular to cloud computing. To approach this question, I attempt to derive cloud security issues from various cloud definitions and a reference architecture.

Defining Cloud Computing


The European Network and Information Security Agency (ENISA) defines cloud computing as "an on-demand service model for IT provision, often based on virtualization and distributed computing technologies."2 It
says that cloud computing architectures have
highly abstracted resources, near-instant scalability and flexibility, nearly instantaneous
provisioning, shared resources, service on
demand, and programmatic management.

The US National Institute of Standards and Technology (NIST) has also published a cloud definition, which it has submitted as the US contribution for an international standard.3 According to NIST,

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

The NIST definition lists five essential characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity or expansion, and measured service. It also lists three service models (software as a service [SaaS], platform as a service [PaaS], and infrastructure as a service [IaaS]) and four deployment models (private, community, public, and hybrid) that, together, categorize ways to deliver cloud services.
NIST has also published a cloud computing reference architecture.4 As Figure
1 shows, this architecture outlines the five
major roles of cloud consumer, provider,
broker, auditor, and carrier.
These definitions and reference architecture provide a foundation from which we can
begin to analyze cloud security issues.
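For readers who want a concrete handle on this vocabulary, the short Python sketch below encodes the NIST characteristics, service models, deployment models, and roles as simple data structures. The class names and the sample offering are illustrative inventions for this supplement, not part of the NIST documents.

```python
from dataclasses import dataclass
from enum import Enum

class ServiceModel(Enum):
    SAAS = "software as a service"
    PAAS = "platform as a service"
    IAAS = "infrastructure as a service"

class DeploymentModel(Enum):
    PRIVATE = "private"
    COMMUNITY = "community"
    PUBLIC = "public"
    HYBRID = "hybrid"

# The five essential characteristics from the NIST definition.
ESSENTIAL_CHARACTERISTICS = (
    "on-demand self-service",
    "broad network access",
    "resource pooling",
    "rapid elasticity or expansion",
    "measured service",
)

# The five major roles from the NIST reference architecture (Figure 1).
ROLES = ("consumer", "provider", "broker", "auditor", "carrier")

@dataclass
class CloudOffering:
    """A hypothetical offering classified against the NIST vocabulary."""
    name: str
    service_model: ServiceModel
    deployment_model: DeploymentModel

if __name__ == "__main__":
    # Illustrative only: a community IaaS offering, such as the government
    # community clouds discussed later in this issue.
    offering = CloudOffering("example-gov-cloud", ServiceModel.IAAS,
                             DeploymentModel.COMMUNITY)
    print(offering)
    print("Essential characteristics:", ", ".join(ESSENTIAL_CHARACTERISTICS))
```

Classifying an offering this way is exactly the categorization exercise the service and deployment models are meant to support.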

Cloud Security Controls


Lets first look at cloud security controls documented within the Cloud Security Alliance
(CSA) security control framework, which was
informed by both the ENISA and NIST definitions. The CSA guidance, version 2.1, contains 98 different cloud security controls from
13 domains, which aim to help evaluate initial
cloud risks and inform security decisions.5
This body of work would seem to indicate
that, based on published cloud definitions,
we can identify 98 cloud-specific security controls. However, all 98 controls have
been mapped to existing implementationindependent security control frameworks.6
This includes NIST Special Publication
800-53 and the International Organization
for Standardization 27001-2005. Based on
this evaluation, these security controls dont
seem unique to cloud computingUS government and internationally standardized
general-purpose security controls cover all
known CSA cloud security controls.
The US governments Federal Risk
and Authorization Management Program
(FedRAMP, www.fedramp.gov) for cloud
computing also uses the NIST cloud definition.7 Instead of creating new cloud security
controls, FedRAMP published a selection
of existing general-purpose controls from
the NIST Special Publication 800-53 security control catalog (www.gsa.gov/graphics/
staffoffices/FedRAMP_Security_Controls
_Final.zip). Thus, the FedRAMP controls
are also generically applicable.
[Figure 1. NIST cloud computing reference architecture. It outlines five major roles: cloud consumer, cloud provider, cloud broker, cloud auditor, and cloud carrier. The provider spans cloud orchestration (SaaS, PaaS, and IaaS service layers over a resource abstraction and control layer and a physical resource layer of hardware and facility), cloud service management (business support, provisioning/configuration, and portability/interoperability), security, and privacy. The broker offers service intermediation, aggregation, and arbitrage; the auditor performs security, privacy impact, and performance audits.]

This lack of novel security controls for the cloud might arise from the fact that cloud computing is the convergence of many different technology areas, including broadband networks, virtualization, grid computing, service orientation, autonomic systems, and Web 2.0. Each of these underlying technology areas has been independently addressed by existing general-purpose security controls, so it seems logical to assume we can address the composition of these technology areas using these same general-purpose security controls.
However, the cloud paradigm might still
present security issues that require a novel
application of the set of existing general-purpose security controls. Evidence for
this argument lies in the fact that each CSA
cloud security control was mapped to multiple controls from the general-purpose control frameworks.
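The shape of that evidence can be pictured as a small many-to-many mapping. The sketch below uses invented placeholder identifiers rather than real CSA, NIST SP 800-53, or ISO clause numbers; the published Cloud Controls Matrix (reference 6) contains the actual mappings.

```python
# Placeholder identifiers only; see the CSA Cloud Controls Matrix for the
# real CSA-to-framework mappings.
csa_to_general_purpose = {
    "CSA-EXAMPLE-01": ["SP800-53-EXAMPLE-A", "SP800-53-EXAMPLE-B", "ISO-EXAMPLE-1"],
    "CSA-EXAMPLE-02": ["SP800-53-EXAMPLE-C", "ISO-EXAMPLE-2"],
}

# Every cloud control is covered by at least one existing control...
all_covered = all(len(targets) >= 1 for targets in csa_to_general_purpose.values())

# ...and each one maps to several general-purpose controls, which is the
# evidence that the cloud calls for a novel *application* of existing controls.
multi_mapped = [c for c, targets in csa_to_general_purpose.items()
                if len(targets) > 1]

print(all_covered, multi_mapped)
```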

Derivation of
Cloud Security Issues
To show the existence of these security issues, I list a sampling derived from the initial cloud definitions and reference architecture. Many of the essential cloud characteristics, definitional models, and architectural components suggest cloud security issues.

Cloud Brokers
This reference architecture actor implies
security composition challenges within composed clouds, such as a SaaS built on an IaaS.

On-Demand Delivery
This cloud characteristic suggests security
challenges associated with the business user
being able to easily and instantly obtain
new computing resources that must be presecured on delivery.

Resource Pooling
This cloud characteristic guides customers toward a "put all your eggs in one basket" approach that might let users concentrate security resources on a single basket but that also heightens the need for backup and resiliency solutions. From a cloud customer perspective, this characteristic reveals the possibility that attacks against one customer could inadvertently affect another customer using the same shared resources.

Service Models
The cloud definition service models reveal
challenges with multitenancy in a resource
pooled environment. All service models
have data multitenancy, while PaaS and IaaS
additionally have processing multitenancy in
which user processes might attack each other
and the cloud itself.

Infrastructure as a Service
This service model reveals challenges with
using virtualization as a frontline security
defense perimeter to protect against malicious cloud users.

Broad Network Access
This cloud characteristic shifts the security model to account for possibly untrustworthy client devices that are fully reliant on the network for service.

Measured Service
This cloud characteristic reveals the need to
measure cloud usage to promote overall cloud
availability.

The cloud computing paradigm appears to present special security issues that will require research and careful consideration. At this point, however, these issues don't appear to require completely new security controls but instead the creative application of existing security techniques.

Acknowledgments
Certain products or organizations are identified
in this document, but such identification does
not imply recommendation by the US National
Institute of Standards and Technology (NIST)
or other agencies of the US government, nor
does it imply that the products or organizations
identified are necessarily the best available for
the purpose. This article reflects the author's personal opinions, not the opinions of the US Department of Commerce or NIST.

References
1. IT Cloud Services User Survey, Part 2, IDC Enterprise Panel, Aug. 2008; www.clavister.com/documents/resources/white-papers/clavister-whp-security-in-the-cloud-gb.pdf.
2. Cloud Computing: Benefits, Risks, and Recommendations for Information Security, European Network and Information Security Agency, Nov. 2009; www.enisa.europa.eu/act/rm/files/deliverables/cloud-computing-risk-assessment/at_download/fullReport.
3. Final Version of NIST Cloud Computing Definition Published, NIST Tech Beat, 25 Oct. 2011; www.nist.gov/itl/csd/cloud-102511.cfm.
4. F. Liu et al., NIST Cloud Computing Reference Architecture, NIST recommendation, Sept. 2011; http://collaborate.nist.gov/twiki-cloud-computing/pub/CloudComputing/ReferenceArchitectureTaxonomy/NIST_SP_500-292_-_090611.pdf.
5. Security Guidance for Critical Areas of Focus in Cloud Computing V2.1, Cloud Security Alliance, Dec. 2009; https://cloudsecurityalliance.org/wp-content/uploads/2011/07/csaguide.v2.1.pdf.
6. Cloud Controls Matrix, Version 1.2, Cloud Security Alliance, Aug. 2011; https://cloudsecurityalliance.org/research/initiatives/ccm.
7. S. VanRoekel, Memorandum for Chief Information Officers, Executive Office of the President, 8 Dec. 2011, footnotes 5 and 6; www.cio.gov/fedrampmemo.pdf.
Peter Mell is a computer scientist at the US
National Institute of Standards and Technology. His research interests include big data
technology, cloud computing, vulnerability
databases, and intrusion detection. Contact
him at mell@nist.gov.

This article originally appeared in IT Professional, July/August 2012; http://doi.ieeecomputersociety.org/10.1109/MITP.2012.84.

Get Involved with the IEEE Cloud Computing Initiative

Cloud Computing has widespread impact across how we access today's applications, resources, and data. The IEEE Cloud Computing Initiative (CCI) intends to lead the way by collaborating across the interested IEEE societies and groups for a well-coordinated and cohesive plan in the areas of big data, conferences, education, publications, standards, testbed, and dedicated web portal.

Follow us on
@ieeecloud

IEEE Cloud
Computing

IEEECloudComputing

Get involved
The CCI offers many opportunities to
participate, influence, and contribute
to this technology.
Contact us
cloudcomputing@ieee.org

Current opportunities
Submit a paper or help organize at one of our conferences. Contribute an article to our new Transactions on Cloud Computing publication. Be a part of the P2302 standards working group for intercloud interoperability and federation.

Save the date
Cloud Computing for Emerging Markets (CCEM), 16-18 October 2013, Bangalore, India (cloudcomputing.ieee.org/ccem)

Check out the Cloud Web Portal for the latest information on the CCI's activities: cloudcomputing.ieee.org

Securing the Cloud

Toward Accountability in the Cloud
Siani Pearson, HP Labs

Accountability is likely to become a core concept in both the cloud and in new mechanisms that help increase trust in cloud computing. These mechanisms must be applied in an intelligent way, taking context into account and avoiding a one-size-fits-all approach.

The US National Institute of Standards and Technology defines cloud computing as "a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." In short, the cloud offers a huge potential both for efficiency and new business opportunities (especially in service composition), and is almost certain to deeply transform our IT. Not only will cost savings occur due to economies of scale on the service provider side and pay-as-you-go models, but business risk also decreases because organizations have less need to borrow money for upfront investment in infrastructure.
However, to help realize these benefits, we must address two primary barriers: lack of consumer trust and the complexity of compliance. Here, I argue that the concept of accountability is key to addressing these issues.

Barriers to Cloud Adoption


Lack of consumer trust is commonly recognized as a key inhibitor to moving to
software-as-a-service (SaaS) cloud models.
People have increasing expectations that
companies with which they share their data
will protect it and handle it responsibly. Furthermore, compared to traditional server
architectures, cloud consumers are more
concerned about their data's integrity, security, and privacy as focus shifts from server
health to data protection. However, current
terms of service push risk back on consumers and offer little remediation or assurance.
Potential cloud customers perceive a lack of transparency and relatively less control than with traditional models, which is of particular concern in the context of sensitive information. Some cases have arisen in which
cloud service providers (CSPs) have been
forced by subpoena to hand over data stored
in the cloud, and a fear persists that governments might get access to information stored
in servers within their countries. Moreover,
it isn't clear what would happen if things went wrong. Would providers notify users if a privacy breach occurred? Who would be at fault in such cases? Working out how victims could obtain redress is complex and hard to ascertain. It's also difficult to determine whether data has been properly destroyed (as it should be, for example, in the case of a CSP's bankruptcy or if a customer wishes to switch to a different CSP). So, people are concerned about weak trust relationships along the chain of service provision, especially as regards on-demand models in which users might have to find CSPs quickly; in such cases, trust won't necessarily be transitive along the chain.
A second barrier to cloud migration is the
difficulty CSPs can have with compliance
across geographic boundaries. Dataflows
tend to be global and dynamic. Location matters from a legal viewpoint, leading to regulatory complexity. Complying with legislation
can be difficult with regard to transborder
dataflow requirements and determining
which laws apply and which courts should
preside. Issues such as unauthorized secondary data usage and inappropriate data retention are also difficult to address.
These two issues, trust and the complexity of compliance, are closely linked. CSPs have both legal and ethical obligations to ensure privacy and protect data and thereby demonstrate their services' trustworthy nature.

This higher risk to privacy and security in cloud computing is a magnification of issues faced in subcontracting and offshoring. Consumers aren't the only ones worried about privacy and security concerns in the cloud.1 The European Network and Information Security Agency (ENISA)'s cloud computing risk assessment report states "loss of governance" as a top risk of cloud computing, especially for infrastructure as a service (IaaS).2 "Data loss or leakages" is also one


of the top seven threats the Cloud Security Alliance (CSA) lists in its Top Threats to Cloud Computing report.3 The cloud's autonomic and virtualized aspects can bring new threats, such as cross-VM (virtual machine)
threats, such as cross-VM (virtual machine)
side-channel attacks, or vulnerabilities due
to data proliferation, dynamic provisioning, the difficulty in identifying physical
servers' location, or a lack of standardization. Although service composition is easier
in cloud computing, some services might
have a malicious source. All these privacy
and security risks might actually decrease,
however, if users move from a traditional IT
model to a cloud model with CSPs who have
expertise in privacy and security.
Accountability can help us tackle these
challenges in trust and complexity. It's especially helpful for protecting sensitive or
confidential information, enhancing consumer trust, clarifying the legal situation
in cloud computing, and facilitating cross-border data transfers. My focus here is on
data-protection issues in the cloud. The term
data protection has more of a privacy focus
in Europe but a broader data security context
in the US. I focus primarily on privacy, but
some of these issues transcend personal data
handling and generalize to other types of
data, beyond privacy concerns.

What Is Accountability?
For several years, computer science has used
the term accountability to refer to a narrow
and imprecise requirement that's met by
reporting and auditing mechanisms. Here,
however, I use the term in the context of corporate data governance. Accountability (for
complying with measures that give effect to
practices articulated in given guidelines) has
been present in many core frameworks for
privacy protection, most notably the Organization for Economic Cooperation and
Development (OECD)'s privacy guidelines (1980),4 Canada's Personal Information Protection and Electronic Documents Act (2000),5 and Asia Pacific Economic Cooperation (APEC)'s Privacy Framework (2005).6
More recently, regional block governance models are evolving to incorporate accountability and responsible information use, and regulators are increasingly requiring that companies prove they're accountable. In particular, legislative


authorities are developing frameworks


such as the EU's Binding Corporate Rules (BCRs) and APEC's Cross Border Privacy
Rules to provide a cohesive and more practical approach to data protection across disparate regulatory systems. For example, BCRs
require that organizations demonstrate that
they are, and will be, compliant with
requirements that EU Data Protection
Authorities (DPAs) have defined for transferring data outside the EU. More recently,
several groups have highlighted accountability's significance and utility in introducing innovations to the current legal framework in response to globalization and new technologies (see "The Future of Privacy" from the Article 29 Working Party,7 its opinion of July 2010,8 and the Madrid Resolution's global data protection standards, which the International Conference of Data Protection and Privacy Commissioners adopted in October 2009).
The Galway project started by privacy
regulators and privacy professionals defines
accountability in the context of these latest
regulations:
Accountability is the obligation to act
as a responsible steward of the personal
information of others, to take responsibility for the protection and appropriate
use of that information beyond mere
legal requirements, and to be accountable for any misuse of that information.9

Central components of this notion are transparency, responsibility, assurance, and remediation. With regard to responsibility, organizations must demonstrate that they've acknowledged and assumed responsibility, in terms of both having appropriate policies and procedures in place and promoting good practices that include correction and remediation for failure and misconduct. Such organizations must employ responsible decision making and, in particular, report, explain, and be answerable for the consequences of decisions they've made with regard to data protection.

Retrospective vs. Prospective Accountability
Some have argued that to provide accountability, we must shift from hiding information to ensuring that only appropriate uses occur.10 Information usage should be transparent so that we can determine whether a use is appropriate under a given set of rules. CSPs can maintain a history of data manipulation and inferences (providing transparency) that can then be checked against the policies that govern them. This provides retrospective accountability: that is, if actor A performs action B, then we can review B against a predetermined policy to decide if A has done something wrong and so hold A accountable.
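A minimal sketch of that retrospective check, with invented actors, data categories, and policy entries purely for illustration, might look like this; real deployments would audit rich, policy-aware transaction logs rather than a Python list.

```python
# Retrospective accountability: review recorded actions against a
# predetermined policy and flag the actors that violated it.
allowed_uses = {
    # hypothetical policy: per data category, the purposes that are permitted
    "customer-emails": {"billing", "support"},
    "location-history": {"service-improvement"},
}

transaction_log = [
    # (actor, data category, declared purpose) -- illustrative entries only
    ("analytics-service", "customer-emails", "support"),
    ("ad-partner", "location-history", "targeted-advertising"),
]

def audit(log, policy):
    """Return the log entries whose declared purpose the policy does not allow."""
    return [(actor, data, purpose)
            for actor, data, purpose in log
            if purpose not in policy.get(data, set())]

for actor, data, purpose in audit(transaction_log, allowed_uses):
    print(f"hold {actor} accountable: used {data} for {purpose}")
```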
We must extend this approach to include prospective effects because the environment might change; for instance, new risks might arise for data subjects because the service provisioning chain alters, the location of the physical servers storing or processing data changes, a CSP has new ownership, or a new type of attack occurs. Reducing the risk of disproportionate harm to data subjects thereby reduces negative consequences for data controllers. To do this, we must build in processes and reinforce good practices such that liability doesn't arise in the first place.11 This is a reflexive privacy process that isn't static, and in which the data controller must conduct an ongoing assessment of harm and a privacy review process throughout the contractual or service provision chain.
Broadly speaking, an accountability approach in accordance with current regulatory thinking requires organizations to

• commit to accountability and establish policies consistent with recognized external criteria;
• provide transparency and mechanisms for individual participation, including sharing these policies with stakeholders and soliciting feedback;
• use mechanisms to implement these policies, including clear documentation and communication (encompassing an organization's ethical code), support from all levels within the organizational structure, tools, training, education, ongoing analysis, and updating;
• allow validation, that is, provide means for external enforcement, monitoring, and auditing; and
• provide mechanisms for remediation, which should include event management (such as dealing with data breaches) and complaint handling.

uses personal information to ensure that the


contracted partners to whom it supplies this
data are compliant, wherever they might
reside worldwide. So, the communities responsible for data stewardship (who are typically
organizational IT security, legal, operations,
and compliance staff) place responsibilities
and constraints on other individuals or on
how systems operate, and these constraints
are met along the chain of provision.

and controls that make the most sense for


their business situation, rather than a onesize-fits-all prescriptive set of rules;
employing various degrees of accountI argue that we can extend the third item
ability; it might be that more stringent
in this list to encompass both preemptive
standards and tests for accountability
approaches (to assess risk and avoid privacy
could facilitate proof of CSPs readiness
harm) and reactive approaches that provide
to engage in certain activities (such as
transparency and auditing. These privacy
those that involve processing highly sensipolicies and mechanisms must take into
tive data) or even relieve them of certain
account the entire life cycle of personal data
administrative burdens (such as renotificausage, including deletion. Companies must Intelligent Accountability
tion of minor changes in processing); and
think about not only what data theyll col- Baroness ONeill first proposed the idea of developing clever, automated analysis,
lect and how they plan to use it but also what intelligent accountability as a means
automated internal policy enforcement,
potential harm the proposed use of
and other technologies to enhance
that data could cause to individuals.
enforcement and avoid increasing the
Organizations must employ responsible
Without going into the intricacies of
human burden.
legal ownership, the data subject is
decision making and report, explain, and
normally, in a fundamental sense, the
As an integral part of an intellibe answerable for decisions theyve made. gent accountability approach, organireal owner of his or her data and is
ultimately the person harmed in the
zations will need to spend time and
event of a privacy breach; this person
resources analyzing what it means
should be empowered and supported. For to provide greater accountability with- to them and gaining management support
example, if youre tracking someones behav- out damaging professional performance for implementing necessary changes.
ior online, under an accountability approach in her 2002 Reith Lectures on A Quesyou might provide clear notice that track- tion of Trust (www.bbc.co.uk/radio4/ How to Provide
ing is happening, an explanation of how you reith2002/). She argued that much of what Accountability in the Cloud
plan to use the data, and a mechanism for individuals and organizations must account Accountability promotes the implementaindividuals to opt out of tracking and request for isnt easily measured and cant be reduced tion of practical mechanisms whereby legal
that you delete previous tracking data about to a set of stock performance indicators. requirements and guidance are translated
them.
ONeill said that intelligent accountability into effective data protection. Legislation
requires more attention to good gover- and policies tend to apply at the data level,
nance and fewer fantasies about total con- but mechanisms for accountability can exist
Data Stewardship
A closely related notion to accountability is trol and that good governance is possible at various levels, including system and data
data stewardship.12 In a cloud model, many only if institutions are allowed some margin levels. Solution builders could provide data
different cloud providers in an ecosystem for self-governance of a form appropriate to controllers with a toolbox of measures to
enable the construction of custom-built
consume IT. Understanding such ecosys- their particular tasks.
We must introduce accountability in solutions whereby controllers could taitems can be challenging, and we must make
a paradigm shift in our thinking. Security and an intelligent way, or trust wont increase, lor measures to their context (taking into
privacy management evolves into an infor- and the overall effect could be quite nega- account the systems involved, the type of
mation stewardship problemthat is, how tive with regard to the increased administra- data, dataflows, and so on).
We can codesign legal mechanisms, proorganizations can properly look after and pro- tive burden. As relates to the cloud, intelligent
cedures, and technical measures to support
tect information (in a broader sense than just accountability could involve
this approach. We might integrate design elepersonal data) on behalf of the data owners,
subjects and third parties. In the cloud, estab- moving away from box checking and ments to support
lishing risks and obligations, implementing
static privacy mechanisms;
appropriate operational responses, and assessing potential harms to data subjects prospective (and proactive) accountdealing with regulatory requirements will
before exposing data to risks; this would
ability, using preventive controls and
be more difficult than with traditional server
be part of ongoing risk assessment and miti- retrospective (and reactive) accountarchitectures. The notions of transparency
gation, for which privacy impact assessments
ability, using detective controls.
and assurance are more relevant and data
(PIAs) are one important tool;
controllers and CSPs must ensure chains allowing organizations more flexibility
Preventive controls can help miti
gate
of accountability. Accountability places
in how they provide data protection so whether an action continues or takes places
a legal responsibility on an organization that
that they can use internal mechanisms at all (for example, an access list that governs
8

IEEE Cloud Computing

May/June 2013

Another mechanism were researching is and auditing. By these means, the accountwho can read or modify a file or database,
or network and host firewalls that block all the use of sticky policies, in which machine- able organizations can ensure that all who
but allowable activity). The cloud is a spe- readable policies (defining allowed usage and process data observe their obligations to procial example of how businesses must assess associated obligations) are attached to data tect it, irrespective of where that processing
and manage risk better.13 Preventive controls within the cloud and travel with it. Other occurs.
for the cloud include risk analysis and deci- mechanisms include risk assessment, decision support tools, policy enforcement (for sion support, obfuscation in the cloud, and Moving Forward
example, machine-readable policies, privacy- policy translation from higher-level policies Current regulatory structure places too much
enhanced access control, and obligations), to machine-readable ones that are enforced emphasis on recovering and not enough on
trust assessment, obfuscation techniques, and audited. We dont have the space here to trying to get organizations to proactively
describe all this work, so Ill just briefly out- reduce privacy and security risks. New data
and identity management.
Organizations can use detective controls line three examples of our research.
governance models for accountability can
First, weve worked with the HP Privacy provide a basis for providing data protection
to identify privacy or security risks that go
against policies and procedures (for example, Office to develop and deploy a tool called when people use cloud computing. Accountintrusion-detection systems, polability is becoming more integrated
icy-aware transaction logs, language Accountability places a legal responsibility into our self-regulatory programs as
frameworks, and reasoning tools).
well as future privacy and data proon an organization to ensure that the
Detective controls for the cloud
tection frameworks globally. If CSPs
include auditing, tracking, reporting,
contracted partners to whom it supplies dont think beyond mere compliand monitoring. In addition, correcance and demonstrate a capacity for
data are compliant.
tive controls are necessary (such as
accountability, regulations will likely
an incident management plan or disdevelop that could be difficult to folpute resolution) that can help fix an
low and might stifle innovation; a
undesired outcome thats already occurred. the HP Privacy Advisor that takes employees backlash might also arise from data subjects.
These controls complement each other: a through a series of dynamically generated
Strengthening
an
accountability
combination would ideally be required for contextual questions and outputs the risk approach and making it more workable
for privacy compliance in any new product, by developing intelligent ways to apply
accountability.
Provision of accountability wouldnt occur service, or program. It encodes HPs privacy accountability and information stewardonly via procedural means, especially for the rulebook and other sources and provides ship is a growing challenge. It goes beyond
cloud, which is an automated and dynamic privacy by design guidance. An associated traditional approaches to protect data (such
environment: technology can play an impor- workflow with privacy managers ensures as security and the avoidance of liability) in
tant role in enhancing solutions by enforcing that employees address the suggested actions that it includes complying with and upholdpolicies and providing decision support, assur- mitigating these risks.
ing values and obligations, and enhancing
The Cloud Stewardship Economics proj- trust. Hewlett-Packard is actively working in
ance, security, and so on.
Procedural measures for accountabil- ect is defining mathematical and economic this area to produce practical solutions, both
ity include determining CSPs capabilities models of the cloud ecosystem and the dif- on the policy (HP Privacy Office) and techbefore selecting one, negotiating con- ferent choices cloud stakeholders face. The nical fronts (HP Labs).
tracts and service-level agreements (SLAs), goal is to help cloud consumers, providers,
At present were just starting to see some
restricting the transfer of confidential data to regulators, and other stakeholders explore technical work emerging from other parties
CSPs, and buying insurance. Organizations and predict the consequences of different in this area. The CSAa non-profit orgashould also appoint a data-protection officer, policies, assurance mechanisms, or even ways nization formed to promote the use of best
regularly perform privacy impact assessments of regulating accountability. This can facili- practices for providing security assurance
on new products and services, and put tate consumer choice; as chains of providers within cloud computinghas a Govermechanisms in place to allow quick response become more complex, the models can high- nance, Risk Management, and Compliance
light how and why evidence sharing is likely (GRC) stack that includes two very relto data subject access and deletion requests.
Technical measures for accountability can to provide necessary assurance.
evant activities: CloudAudit, which aims
Finally, were working to achieve account- to provide a technical foundation to enable
include encryption for data security mitigation, privacy infomediaries, and agents to help ability using contractual assurances along transparency and trust in private and pubincrease trust. We must also be able to rely on the service provision chain from CSPs to lic cloud systems, and the Trusted Cloud
infrastructure to maintain appropriate separa- accountable organizations, enhanced on Initiative, which is working toward certifytions, enforce policies, and report information the technical side by enforcement of corre- ing trusted clouds. HyTrust Appliance is a
accurately. At HP Labs, were investigating sponding machine-readable policies propa- hypervisor consolidated log report and polhow to build and exploit trusted virtualized gated with (references to) data through the icy-enforcement tool that logs from a system
cloud, integrated risk assessment, assurance, perspective. The Commonwealth Scientific
platforms with precisely these properties.


Securing the Cloud

and Industrial Research Organization


(CSIRO) has produced a prototype in which
CSPs are accountable for faulty services. The
Computer Sciences Corporation (CSC) is
developing a CloudTrust protocol that will
promote CSP transparency.
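As a deliberately simplified illustration of the sticky-policy mechanism described above, the following sketch binds a machine-readable policy to a data record and has a preventive control consult it before any use. The field names and obligations are hypothetical and are not HP's actual policy format.

```python
from dataclasses import dataclass, field

@dataclass
class StickyPolicy:
    """Machine-readable policy that travels with the data it governs."""
    allowed_purposes: set
    allowed_regions: set
    obligations: list = field(default_factory=list)  # e.g., "delete after 90 days"

@dataclass
class GovernedData:
    payload: bytes
    policy: StickyPolicy  # the policy is bound to, and moves with, the data

def permit_use(data: GovernedData, purpose: str, region: str) -> bool:
    """Preventive control: allow processing only if the sticky policy permits it."""
    return (purpose in data.policy.allowed_purposes
            and region in data.policy.allowed_regions)

record = GovernedData(
    payload=b"...",
    policy=StickyPolicy(allowed_purposes={"billing"},
                        allowed_regions={"EU"},
                        obligations=["notify data subject on breach"]),
)

print(permit_use(record, purpose="billing", region="EU"))    # True
print(permit_use(record, purpose="marketing", region="US"))  # False
```

In practice, sticky-policy schemes typically also encrypt the payload and release keys only when a receiving service agrees to the attached obligations; this sketch keeps only the policy-checking step.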

At HP Labs, our broader vision is to deliver seamless, secure, context-aware experiences for a connected world. The richness, choice, and convenience of how we interact with our devices and a pervasive computing environment will be enhanced. At the same time, we want this to be safe and ultimately controlled by end users. We've been introducing, and will continue to research, new innovative techniques to uphold HP's ethics and values internally and demonstrate this to our stakeholders and customers.

References
1. R. Gellman, Privacy in the Clouds: Risks to Privacy and Confidentiality from Cloud Computing, World Privacy Forum, 2009; www.worldprivacyforum.org/pdf/WPF_Cloud_Privacy_Report.pdf.
2. Cloud Computing: Benefits, Risks and Recommendations for Information Security, D. Catteddu and G. Hogben, eds., ENISA, Nov. 2009; www.enisa.europa.eu/act/rm/files/deliverables/cloud-computing-risk-assessment/at_download/fullReport.
3. Top Threats to Cloud Computing, version 1.0, tech. report, Cloud Security Alliance, Mar. 2010; https://cloudsecurityalliance.org/topthreats/csathreats.v1.0.pdf.
4. Guidelines Governing the Protection of Privacy and Transborder Flow of Personal Data, Organization for Economic Cooperation and Development (OECD), 1980.
5. Personal Information Protection and Electronic Documents Act (PIPEDA), Canada, schedule 1, principle 1, 2000.
6. APEC Privacy Framework, Asia-Pacific Economic Cooperation, 2005; www.apec.org/Groups/Committee-on-Trade-and-Investment/~/media/Files/Groups/ECSG/05_ecsg_privacyframewk.ashx.
IEEE Computer Society Offers Cloud Computing Course Series

As part of its mission to support the needs of those in the computing industry, the IEEE Computer Society has developed a series of professional development courses on Cloud Computing. These products have been developed by IEEE-CS staff as well as a large number of subject matter experts chosen from Society membership and other authoritative sources. These courses are part of the Computer Society's Specialty Course Series, and will include an overview and concept course, in-depth courses, and various other products, addressing the essential concepts and elements of the Cloud from the perspective of the business and IT decision-maker.

Managers are often faced with having to decide if, and how, to upgrade their IT infrastructure, and how to pay for it. In an environment of tight budgets and soaring hardware and software costs, they are also looking for alternatives to making huge investments that will have to be upgraded again and again. The Cloud can be that solution. Managers need information to make intelligent decisions, however.

Questions pertaining to Cloud economics, security, regulation and governance, metrics, and migration are introduced and discussed in the Cloud Computing course series. In the final analysis, managers must be able to answer key questions: Is the Cloud the right place for my IT infrastructure and data? Is it a good business decision? How do I migrate to the Cloud?

This course series examines these and other key concepts. Learn more about this exciting new Cloud course series: contact Dorian McClenahan at the Certification and Professional Education Group at dmcclenahan@computer.org.

7. The Future of Privacy: Joint Contribution to the Consultation of the European Commission on the Legal Framework for the Fundamental Right to Protection of Personal Data, EU Article 29 Working Party, WP168, Dec. 2009; http://ec.europa.eu/justice/policies/privacy/docs/wpdocs/2009/wp168_en.pdf.
8. Opinion 3/2010 on the Principle of Accountability, EU Article 29 Working Party, WP173, July 2010; http://ec.europa.eu/justice/policies/privacy/docs/wpdocs/2010/wp173_en.pdf.
9. Galway Project Plenary Session Introduction, Galway Project, 28 Apr. 2009, p. 5.
10. D. Weitzner et al., Information Accountability, Comm. ACM, vol. 51, no. 6, 2008, pp. 82-87.
11. S. Pearson and A. Charlesworth, Accountability as a Way Forward for Privacy Protection in the Cloud, Proc. 1st Int'l Conf. Cloud Computing, LNCS 5931, M.G. Jaatun, G. Zhao, and C. Rong, eds., 2009, pp. 131-144.
12. D. Pym and M. Sadler, Information Stewardship in Cloud Computing, Int'l J. Service Science, Management, Engineering and Technology, vol. 1, no. 1, 2010, pp. 50-67.
13. A. Baldwin and S. Shiu, Managing Digital Risk: Trends, Issues, and Implications for Business, tech. report, Lloyd's 360 Risk Insight, 2010.
Siani Pearson is a senior researcher in the
Cloud and Security Research Lab at HP
Labs Bristol. Her current research focus is
on privacy-enhancing technologies, accountability, and the cloud. Pearson has a PhD in
artificial intelligence from the University of
Edinburgh. She's a technical lead on regulatory compliance projects with the HP Privacy Office and HP Enterprise Services and
on the collaborative TSB-funded Ensuring
Consent and Revocation project. Contact
her at siani.pearson@hp.com.
This article originally appeared in
IEEE Internet Computing, July/August
2011; http://doi.ieeecomputersociety.
org/10.1109/MIC.2011.98.


SUBMIT NOW

IEEE Transactions on Cloud Computing

The IEEE Transactions on Cloud Computing will publish peer reviewed articles that provide
innovative research ideas and applications results in all areas relating to cloud computing.
Topics relating to novel theory, algorithms, performance analyses and applications of
techniques relating to all areas of cloud computing will be considered for the transactions.
The transactions will consider submissions specifically in the areas of cloud security, trade-offs
between privacy and utility of cloud, cloud standards, the architecture of cloud computing,
cloud development tools, cloud software, cloud backup and recovery, cloud interoperability,
cloud applications management, cloud data analytics, cloud communications protocols,
mobile cloud, liability issues for data loss on clouds, data integration on clouds, big data on
clouds, cloud education, cloud skill sets, cloud energy consumption, cloud applications in
commerce, education and industry. This title will also consider submissions on Infrastructure
as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), and Business
Process as a Service (BPaaS).

TCC Editor-in-Chief
Rajkumar Buyya
Director, Cloud Computing and Distributed Systems (CLOUDS) Lab, The University of Melbourne

TCC Steering Committee Members

IEEE Computer Society
Jon Rokne (SC Chair)
Tom Conte
Irena Bojanova
Dejan Milojicic

IEEE Communications Society
Vijay Bhargava
Vincent Chan

IEEE Systems Council
Paolo Carbone

IEEE Power & Energy Society
Jie Li
Badrul Chowdhury

IEEE Consumer Electronics Society
Stu Lipoff

For more information please visit: http://www.computer.org/tcc


Securing the Cloud

Public Sector Clouds Beginning to Blossom
Efficiency, New Culture Trumping Security Fears
Greg Goth

As governments around the world continue to grapple with sluggish economies, cloud computing is emerging as a possible answer to demands for reducing public sector spending.

"Cloud computing is starting to emerge as a technology that's proven," says Neil McEvoy, president of the Toronto-based Cloud Best Practices Network. Government, McEvoy says, is an ideal context for this new technology and the value it can bring. "It's designed to allow multiple organizations to consolidate different levels of the technology stack in a manner that helps them drive much more efficient use of infrastructure. If you look at the utilization of IT traditionally, huge amounts of servers are racked up for peak usage. For all other times, they are horribly underused. That's simply not an effective use of taxpayers' money in a time of economic crunch."
A bird's eye view of public sector cloud computing might actually lead to an incomplete conclusion: taken as government-wide initiatives, many cloud strategies seem to be stalled in political maneuvering or concerns about intruders (whether they're agents of other governments or independent hackers) gaining access to sensitive areas of government networks.
However, numerous agency-by-agency cloud
solutions have either already been implemented or are about to launch. And a wide
array of supporting organizations, including governmental agencies such as the US
National Institute of Standards and Technology (NIST) and private sector organizations such as the TechAmerica Foundation,
are creating an ecosystem of public sector
cloud architectural requirements and best
practices that correlate with new cloud grid
installations.

Setting the Standards


NIST issued two documents in February 2011 that are widely considered the keystone documents for cloud architecture definitions (http://tinyurl.com/4dmorxe) and cloud security (http://tinyurl.com/4juldru). These documents, McEvoy says, are becoming the de facto standards for governments worldwide.

"NIST has grown to become the root authority for the cloud computing industry globally," he says. "The NIST document defines cloud at a high level, and then more specifically, the detailed recommendations in areas like information security are very actively followed. In fact, the core cloud computing initiative for the government of Canada is what they call the Government Community Cloud, and that's based on the NIST model of the same name. Then, on top of the NIST-compatible architecture, they are layering their expertise on more Canadian-specific requirements."
Jennifer Kerber, vice president for homeland security and federal civilian policy at the TechAmerica Foundation, says network architects and administrators in government agencies can see incentives on two fronts encouraging more cloud adoption.

In the initial big push for cloud computing within the federal government, Kerber says, many network administrators and managers were unsure about the cloud's benefits. "But when you look at the US government and private sector markets and see the difference in efficiency gains in the private sector over government in a 10-, 20-, and 30-year period, and realize a lot of that is through embracing technological innovation, in the current fiscal environment, it's a natural [direction] for the federal government to look."
Former federal CIO Vivek Kundra, who spearheaded the Obama administration's "cloud first" policy, recently answered a New York Times news story that quoted officials in the Defense and State departments who were wary of security issues in the cloud with an opinion piece in the publication at the end of August.

"Some agencies, like the General Services Administration, have embraced cloud computing; the agency has cut the IT costs on things as simple as its email system by over 50 percent," Kundra wrote. "But other agencies have balked. The State Department, for instance, has raised concerns about whether the cloud approach introduces security risks, since data is stored off site by private contractors."


But cloud computing is often far more


secure than traditional computing, because
companies like Google and Amazon can
attract and retain cybersecurity personnel of a
higher quality than many governmental agencies. Government employees are so accustomed to using cloud services like Dropbox
and Gmail in their personal lives that, even
if their agencies don't formally permit cloud computing, they use it for work purposes anyway, creating a "shadow IT" that leads to
a more vulnerable organization than would a
properly overseen cloud computing system.
Federal research agencies are initiating cloud security programs, such as DARPA's CRASH (Clean-slate Redesign of Resilient, Adaptive, Secure Hosts) and Mission-Oriented Resilient Clouds, but one cloud computing executive thinks these efforts are a long way from bearing fruit on a wide basis.
"I see that as a pretty long-term theoretical thing, several years and several millions of dollars away," says Michael Sutton, vice president of security for cloud security vendor Zscaler. "I don't think agencies should be waiting for any silver bullet that will let them know, 'OK, the cloud's now secure enough for me.' The approach should be no different than it always has been. You'll always have data of varying levels of classification and risk, and you have to look at those, decide what is appropriate for the cloud today and what is not, and move in an appropriate fashion."

A Common Path

The latest trends on government clouds seem to be following the agency-by-agency scenario: while the British government's top-level G-Cloud initiative seems to have stalled out in a change of government, the UK's National Health Service quietly signed an agreement with Zscaler to provide the NHS with its product.

"The thing that really helped us was, it was just a massive environment, and very disparate," Sutton says. "Different hospitals would have their own IT departments, and it was spread like that through the entire country. An offering like Zscaler was very desirable to them because it didn't require deploying new hardware, and they didn't have to deal with certain pieces of hardware and software not working with everything. You could just float everything through the cloud and it would work in all these environments."
Sutton says a key factor in governmental cloud adoption will be the method by
which private sector cloud vendors and their
public-agency counterparts demark their
respective responsibilities.
"There are certainly plenty of existing guidelines as to how things have to operate (they have to comply with FISMA [Federal Information Security Management Act], for instance) and none of that is going to go away, but now it's harder to define boundaries," he says. "Certainly, vendors and the private sector will have to help with that."
For example, Sutton says Amazon's discrete federal cloud approach (which gained FISMA approval in September) illustrates the private sector's recognition that government entities could require separate platforms.

"We have a public cloud, and we recognize that not all agencies are going to be able to adopt that, either because they have unique requirements or are more conservative," he says. "So we realize we're going to have to build some private clouds for government, or we're going to have to have hybrid clouds with certain components under their control."
In nations with underdeveloped cloud
resources, such as Canada, McEvoy
believes moving to the cloud can create a
fantastic opportunity for the government to
enjoy a double whammy of a benefit.
If the ministries responsible for economic
development and IT could more closely
coordinate their cloud strategies, McEvoy says,
they could outsource the public sector infrastructure they need while simultaneously
bootstrapping several Canadian companies
who could go on to expand internationally.

Creating New Culture


Ideally, governments at national, regional, and local levels should be able to share cloud computing resources, whether they're infrastructure and platforms or, as McEvoy believes, applications developed in one jurisdiction that can then be ported to others with similar tasks.
However, a pioneering cloud approach coordinated by the US Centers for Disease Control and Prevention is demonstrating that cloud computing can also lead to cultural changes in how public agencies interact with vital public data.
The CDC's BioSense program, launched in 2003 as a federal government-housed and controlled platform meant to address bioterrorism concerns, is about to launch a comprehensive redesign: a cloud-based data reporting and analysis platform in which multiple public health agencies will share both data and governance. The new BioSense platform, hosted by Amazon Web Services, will be governed by the Association of State and Territorial Health Officials (ASTHO), in coordination with the Council of State and Territorial Epidemiologists (CSTE), the US National Association of County and City Health Officials (NACCHO), and the International Society for Disease Surveillance (ISDS).

Taha Kass-Hout, the CDC's program manager for the BioSense program, says the new collaborative approach will better mirror how local and regional public health agencies deal with possible outbreaks of disease or attack.
"Biosurveillance is really about the local context anyway," Kass-Hout says, "and in redesigning BioSense, we had to be cognizant not just of legal issues such as data use agreements but also of respecting business logic at the various levels and the best practice procedures they've instituted. Data should flow from providers to local departments and upward, but it should also flow horizontally. Local and state health departments have the best relationship with providers, they understand the context in which an event has happened, and they understand their population more than anybody else. If we can make sure they have ownership of that data and the initial vetting of it is there, that would be the basis to truly start stitching a regional and national picture."

Greg Goth is a freelance technology writer based in Connecticut.

This article originally appeared in IEEE Internet Computing, November/December 2011; http://doi.ieeecomputersociety.org/10.1109/MIC.2011.155.

Securing the Cloud

The Insecurity
of Cloud Utility
Models
Joseph Idziorek, Mark F. Tannian, and Doug Jacobson Iowa State
University

Cloud-based services are vulnerable to attacks that seek to exploit the pay-as-you-go pricing model. A botnet could perform fraudulent resource consumption (FRC) by consuming the bandwidth of Web-based services, thereby increasing the cloud consumer's financial burden.

A key feature that has led to the early adoption of public cloud computing is the utility pricing model, which governs the cost of computing resources consumed. Similar to public utilities, such as gas and electricity, cloud consumers only pay for the resources (storage, bandwidth, and computer hours) they consume and for the time they use such resources. In accordance with the terms of agreement of the cloud service provider (CSP), cloud consumers are responsible for all computational costs incurred in their leased compute environments, regardless of whether the resources were consumed in good faith.
Common use cases for corporations that have adopted public cloud computing include website and Web application hosting and e-commerce. Like any Internet-facing presence, these cloud-based services are vulnerable to distributed denial-of-service (DDoS) attacks. Such attacks are well known, and the associated risks have been well researched. Here, we explore a more subtle attack on Web-based services hosted in the cloud. Given the pay-as-you-go pricing, cloud-hosted Web services are vulnerable to attacks that seek to exploit this model. An attacker (for example, a botnet) can perform a fraudulent resource consumption (FRC) attack by consuming the metered bandwidth of Web-based services, increasing the cloud consumer's financial burden.1,2
In the scenario in Figure 1, a botnet, comprising potentially thousands of bot clients, is consuming Web resources hosted in the cloud by mimicking legitimate client behavior. To the cloud-based Web application, the intention of incoming requests is either unknown or not considered, so each request is serviced with a reply, resulting in a fractional cost for the cloud consumer. Because this vulnerability, up until now, hasn't been largely discussed, determining this threat's overall effect on the cloud community is difficult. Rather, we focus here on describing the vulnerability to increase awareness, analyzing the risk for an individual cloud consumer and discussing methods for FRC prevention, detection, attribution, and mitigation.

The Utility Model


The utility model is attractive to a cloud consumer because the low entry cost removes the burden of major capital expenses. However, although convenient, the utility model isn't without its risks: the financial liability for resources consumed is unlimited. CSPs, such as Amazon EC2 and Rackspace, charge US$0.12 per Gbyte (up to 40 Tbytes) and $0.18 per Gbyte, respectively, for outbound data transfers.3,4
As Figure 1 shows, the cloud consumer (the victim) incurs a cost each time a cloud application (the attack target) services a reply. A high volume of requests can be costly. Malicious use is even more burdensome, because the additional run-up in expenses has no associated business value. As it stands today, CSPs don't monitor cloud consumers' applications, so it's up to the cloud consumer to prevent, monitor, and respond to such fraudulent behavior.

Fraudulent Resource
Consumption
To better understand the FRC attack, consider the time-series visualization of a Web server log shown in Figure 2.1,2 The y-axis depicts the number of requests per second, and as the x-axis shows, the time series covers a two-week period. As is common, the modeled Web server capacity is sufficiently over-provisioned; this represents a conservative estimate, given the capacity of CSP Web servers. Superimposed on top of normal Web activity are serviced requests from an FRC attack.
As Figure 2 shows, initial attack intensity beyond normal activity is in the nuisance activity region, because the resultant costs are insignificant to the cloud consumer.

Figure 1. A cloud network-attack diagram. Botnets can exploit the cloud utility model to perform fraudulent resource consumption (FRC), making consumers incur unexpected costs from dishonest use. (The figure shows a botmaster directing attack clients that, alongside legitimate clients, send requests across the public Internet through the CSP access point to the cloud-based Web application; each serviced request adds to the cloud consumer's bill.)

However, as malicious activity intensifies beyond this region, the malicious costs to the cloud consumer start to become a matter of concern; this transition point is labeled J1. Malicious activity that exceeds J1 enters into the FRC attack region. Within this region, bounded by J1 and J2, an FRC attack doesn't significantly degrade the Web server's quality of service (QoS).
If the attack intensity increases above
J2, the request volume will reach a point at
which the Web server QoS starts to significantly degrade. At this point, current application-layer DDoS detection and mitigation
schemes are effective.5 An objective of FRC
attack mitigation research is to improve
detection sensitivity that will push J2 closer
to J1, thus narrowing the FRC attack region
by detecting attacks that are legitimate transactions but differ in the requestor's intent.
As shown on the right side of Figure 2,
the probability of detecting an FRC attack
increases as the attack intensity increases.
Although nothing prevents an attacker from
exploiting the utility model with an attack
intensity in the DDoS attack region, such a
blatant action carries a higher risk of detection and ultimately mitigation. Depending
on the attack objectives and the cost to the
attacker, a modest request intensity within
the FRC attack region over an extended
duration of time has a higher chance of success, because this is considerably more difficult for a victim to mitigate.
Faced with such an attack, current DDoS mitigation schemes, firewalls, and intrusion prevention and detection systems would be rendered ineffective, because individual fraudulent requests are protocol compliant and attack rates don't degrade the Web server QoS. As a result, and given the utility pricing model, the potential for an FRC attack fundamentally changes requirements of Web-based anomaly detection for the cloud.
Figure 3 depicts an FRC attack as a slow-and-low assault or "death by a thousand requests." Unlike short-lived DDoS attacks, the duration of an FRC attack could last weeks or months if not detected. Because resources maliciously consumed are additive to that of normal traffic, the aggregate of legitimate and malicious resource use is reflected in a cloud consumer's monthly bill.
Availability in the context of this discussion isn't a binary measure in which the system is nearly incapacitated at the time of the attack. The technical infrastructure of a website hosted in a CSP environment will have no trouble functioning while an FRC attack is underway. Instead, availability is a long-term consideration defined as the cloud consumer's ability to withstand the financial consequences of an FRC attack over a prolonged time period.

FRC Risk
Adopting the public cloud model brings
with it new and old security risks. Here, we
focus on the risk introduced by the utility
pricing model by discussing the likelihood
and effects of an FRC attack.
The likelihood of a cloud consumer falling victim to an FRC attack depends on the attacker's skill level, computing capacity, and motivation as well as his or her ability to exploit the utility pricing model. This pricing vulnerability is literally hiding in plain sight, because CSPs openly publish their pricing metrics. From a technical standpoint, all that's necessary for an attacker to exploit this vulnerability is to make standard requests for Web content that the cloud consumer makes publicly available. Although a large botnet is the worst-case threat source, conceivably any Internet-connected device could perform an FRC attack with a Perl script making HTTP GET requests or using the Low-Orbit Ion Cannon, an open-source tool that has fueled recent DDoS attacks.6
As evidenced by the growing number, capacity, and sophistication of both botnets and DDoS attacks, the worst-case threat sources undoubtedly possess the skills and resources to mount a sustained and effective FRC attack. The only real factor preventing an FRC attack is a lack of motivation. Yet similar to those who orchestrate DDoS attacks, the motive of an FRC attacker could range from ego and hacktivism to monetary gain, extortion, revenge, competitive advantage, or economic espionage.7 If recent history is any guide, those who control botnets could perform an FRC attack to promote a political agenda or support an ideological viewpoint.
Figure 2. Malicious-requests behavior. The initial attack intensity (labeled J1) results in insignificant costs for the cloud consumer. However, as malicious activity intensifies beyond this nuisance activity region, the cost to the consumer starts to become a matter of concern. Yet distributed denial-of-service detection schemes aren't effective at this lower intensity level (below J2). (The plot shows requests per second over a two-week period, marking the normal, nuisance, FRC attack, and DDoS attack regions, along with the rising probability of detection.)

For the victim, the direct monetary effect of an FRC attack is a function of the average request intensity and attack duration. To enumerate one end of the extreme, a week-long DDoS attack launched from a 250,000-node botnet in 2011 peaked at 45 Gbps.8 If the aforementioned attack peak was sustained on a cloud instance at $0.12/Gbyte, the resultant costs would have been $0.675 per second, which adds up to $411,264 per week.
On the other end of the FRC attack region, consider the website modeled in Figure 2. At an average normal request rate of three requests per second, a 250,000-node botnet could double the data usage costs if each bot client generated just two requests per day. Clearly, given the capacity of modern-day networks and computers, the bot clients in this example could significantly increase their daily request quota and multiply the attack cost by orders of magnitude. However, once a bot client's usage footprint eclipses the expected behavior of legitimate clients, the risk of being identified as malicious greatly increases.
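To make the arithmetic behind these two estimates concrete, the short Python sketch below (ours, for illustration; it is not part of the original article) recomputes them from the $0.12/Gbyte outbound rate quoted earlier, assuming every request draws a similarly sized reply. The article's $411,264 figure corresponds to rounding the per-second cost to $0.68.

# Back-of-the-envelope FRC cost estimates (illustrative only).
PRICE_PER_GBYTE = 0.12            # US$ per Gbyte of outbound transfer
SECONDS_PER_WEEK = 7 * 24 * 3600

def sustained_attack_cost(peak_gbps, price=PRICE_PER_GBYTE):
    """Cost per second and per week of traffic sustained at peak_gbps."""
    gbytes_per_second = peak_gbps / 8          # bits to bytes
    per_second = gbytes_per_second * price
    return per_second, per_second * SECONDS_PER_WEEK

def malicious_to_legitimate_ratio(bots, requests_per_bot_per_day, normal_rps):
    """Ratio of malicious to legitimate request volume, a rough proxy for the
    added cost when every request draws a similarly sized reply."""
    return (bots * requests_per_bot_per_day) / (normal_rps * 24 * 3600)

per_sec, per_week = sustained_attack_cost(45)               # the 45-Gbps example
print(f"${per_sec:.3f}/s, about ${per_week:,.0f}/week")     # ~$0.675/s, ~$408,240/week
print(malicious_to_legitimate_ratio(250_000, 2, 3))         # ~1.93: malicious volume is nearly twice the legitimate daily volume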

Defending Against
an FRC Attack
Defending against an FRC attack is a significant challenge to the cloud consumer,
owing to the atypical and unassuming
nature of the attack. As is the case with most
attack risks, the cloud consumer has four
primary objectives: prevention, detection,
attribution, and mitigation.

Prevention
A common way to prevent the exploitation
of a vulnerability is to download and apply
a patch for it. However, in the context of this discussion, the bug isn't a software defect but a common business model deployed by CSPs. Until this vulnerability is actually exploited, the cloud business model isn't likely to change. So in lieu of a patch for this vulnerability, there are several, albeit limited, prevention options.
The use of authentication on a target website would significantly reduce the
amount of exploitable resources, but we
don't consider it here because we assume the
cloud consumer wants to host public content. Similarly, graphical puzzles (Captcha
tests) could be used as a preemptive solution to differentiate humans and zombie
computers. However, the use of such a test
could be detrimental to the overall goals of
a public-facing website, because these types
of tests will result in a certain percentage of
legitimate clients being unable or unwilling
to solve such puzzles.
Another option would be for the cloud
consumer to work with application and content developers to minimize the resource footprint of common or average requests. Limiting
the impact of client requests increases FRC
attacker costs and risk of detection. Unfortunately, without a utility model patch, these
controls won't thwart a motivated attacker. So
with limited prevention capability, the next
line of defense is detection.

Detection
FRC detection aims to identify malicious traffic consumption. Because an FRC attack is subtle, previous application-layer DDoS solutions that focus on high request intensities aren't suitable.9 Instead, initial FRC-detection approaches focus on behavioral metrics derived from Web server log files that seek to profile the aggregate webpage request choices of a website's client base.2 Three measures (the Spearman, Overlap, and Zipf metrics) respectively characterize the accuracy, completeness, and relative proportionality of ranked requests between two adjacent windows of observed logs (for example, two three-day windows).2 Together, these three metrics provide consistent measures with which to describe normal behavior and perform anomaly detection.
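The cited work defines these metrics precisely; the sketch below is only a rough Python illustration of the underlying idea of comparing ranked document popularity across two adjacent log windows. The two helpers shown, a Spearman rank correlation and a top-k overlap fraction, are standard statistics rather than the authors' exact formulations, and the toy data is invented.

# Illustrative comparison of document popularity across two adjacent log
# windows; a shift in the request mix lowers both scores.
from collections import Counter

def spearman(window_a, window_b):
    """Spearman rank correlation of per-URL popularity, computed over the URLs
    common to both windows (ties ignored for brevity)."""
    ca, cb = Counter(window_a), Counter(window_b)
    common = set(ca) & set(cb)
    n = len(common)
    if n < 2:
        return 1.0
    def ranks(counts):
        ordered = sorted(common, key=lambda url: (-counts[url], url))
        return {url: rank for rank, url in enumerate(ordered, start=1)}
    ra, rb = ranks(ca), ranks(cb)
    d2 = sum((ra[url] - rb[url]) ** 2 for url in common)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def overlap(window_a, window_b, k=10):
    """Fraction of the top-k most requested documents shared by both windows."""
    top_a = {url for url, _ in Counter(window_a).most_common(k)}
    top_b = {url for url, _ in Counter(window_b).most_common(k)}
    return len(top_a & top_b) / min(k, len(top_a), len(top_b))

w1 = ["/index", "/index", "/about", "/img/logo", "/index"]   # toy three-day window
w2 = ["/index", "/about", "/about", "/img/logo", "/rare"]    # next three-day window
print(spearman(w1, w2), overlap(w1, w2, k=3))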
However, for the sake of brevity, we don't
present empirical results here. The conclusion stemming from this work is that an
attacker, without knowledge of the training
dataset (historical Web server log), has a difficult time requesting an impactful volume of
Web documents while adhering to the structure of normal traffic. Thus our proposed
methodology,2 which focuses on characterizing aggregate Web traffic, is effective for
detecting even minor increases in fraudulent
Web activity, well before the resultant costs
are harmful.
The most practical detection approach is the classic "review your bills" approach. Reviewing bills over time to determine if they're within an expected range can help expose an FRC attack. Log analyzers might also help identify outlier application usage, triggering an investigation of suspicious clients. A casual inspection, however, won't catch a savvy FRC attacker.

Attribution

Attribution in this context is the ability to accurately differentiate legitimate clients
from FRC attack clients. Like the previously
discussed DDoS detection solutions, current attribution solutions are geared toward
detecting malicious clients that consume
a significant volume of requests in a very
short time. Previous work has focused on
scrutinizing the increased inter-request (the
time between successive Web document
requests) or intersession (the time between
Web browsing sessions) arrival request rates
of malicious clients in comparison to the rate
profile for normal users.10 Again, it's contrary
to FRC attack objectives for a single attack
client to behave in a fashion similar to one
participating in a DDoS attack.
The challenge in this research area will
be to minimize the number of falsely identified legitimate clients while decreasing the
impact of fraudulent clients. Recent research
indicates that normal client behavior can
be characterized by client actions such as
request volume per client, Web documents
requested, and Web session parameters (for
example, requests per session and number of
sessions).11 If attack clients that aren't privy to normal usage activity exceed a set threshold on these characteristics, they're flagged as malicious.
This attribution methodology aims to be
transparent to clients, and it operates under
the condition that all clients are innocent
until their usage footprint proves otherwise.
Limiting the impact of individual clients
reduces the overall risk of an FRC attack.
It's important to note that this methodology is not rate-based; rather, it's sensitive to the
accumulated requests an attacker invokes.
Therefore, the choices an attacker makes
could allow a malicious client to be deemed
anomalous after it invokes a minimal number
of requests.
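As a rough illustration of this accumulation-based (rather than rate-based) idea, the sketch below flags a client once its cumulative footprint crosses per-client thresholds on request volume, distinct documents, or sessions. The threshold values and names are invented for the example; the actual characteristics and thresholds are those studied in the cited work.11

# Illustrative accumulation-based attribution; thresholds are placeholders,
# not values derived from real traffic.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Footprint:
    requests: int = 0
    documents: set = field(default_factory=set)
    sessions: int = 0

THRESHOLDS = {"requests": 500, "documents": 120, "sessions": 40}   # hypothetical
footprints = defaultdict(Footprint)

def record(client_ip, url, new_session=False):
    """Accumulate a client's footprint; return True once it looks anomalous."""
    fp = footprints[client_ip]
    fp.requests += 1
    fp.documents.add(url)
    fp.sessions += int(new_session)
    return (fp.requests > THRESHOLDS["requests"]
            or len(fp.documents) > THRESHOLDS["documents"]
            or fp.sessions > THRESHOLDS["sessions"])

# Clients are treated as innocent until their accumulated requests say otherwise.
for i in range(501):
    flagged = record("198.51.100.7", f"/page/{i % 50}", new_session=(i % 25 == 0))
print(flagged)   # True: the 501st request crosses the request-volume threshold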

Figure 3. Aggregation of an FRC attack, a slow-and-low assault. Unlike short-lived DDoS attacks, an FRC attack could last weeks or months if not detected. (The plot shows legitimate and malicious resource use accumulating over a billing period, with the aggregate FRC attack cost curve rising above the legitimate-use baseline.)

Mitigation
Reactive solutions rely on accurate detection and attribution. We must consider the potential for legitimate clients being errantly classified as malicious. As a result, approaches like blacklisting first-time offenders might prove heavy-handed. Less absolute mitigation strategies include imposing a back-off timeout on anomalous clients in which requests from an IP address aren't all serviced. Similarly, suspicious clients could also be served a graphical puzzle to prove that the client is indeed a human.
These reactive approaches are available today, and each has its own tradeoffs. However, with limited detection and attribution solutions available, the deployment and maintenance of such solutions will be challenging.
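To make the back-off idea concrete, the following sketch (our own illustration, not a prescribed design) services only one request per back-off window from a client already flagged as anomalous and doubles the window on each further attempt, which is gentler than outright blacklisting.

# Minimal back-off gate for clients flagged as anomalous (illustrative only).
import time

class BackoffGate:
    def __init__(self, base_seconds=1.0, max_seconds=300.0):
        self.base, self.max = base_seconds, max_seconds
        self.state = {}   # ip -> (time when the next request may be serviced, current delay)

    def allow(self, ip, now=None):
        """Return True if this request should be serviced, False to defer it."""
        now = time.monotonic() if now is None else now
        next_allowed, delay = self.state.get(ip, (0.0, self.base))
        if now >= next_allowed:
            # Service the request, then push the next allowance out and grow the delay.
            self.state[ip] = (now + delay, min(delay * 2, self.max))
            return True
        return False

gate = BackoffGate()
print(gate.allow("203.0.113.9", now=0.0))   # True: first request is serviced
print(gate.allow("203.0.113.9", now=0.5))   # False: still inside the 1-second back-off window
print(gate.allow("203.0.113.9", now=1.5))   # True: window expired; the delay doubles to 2 seconds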

Letting any client with access to the Internet consume resources that are in turn metered and billed exposes the cloud consumer to a risk that's only mitigated by time, detection, and accountability. Until recently, this vulnerability has been neglected. Unless utility models are restructured to remove the vulnerability of an FRC attack, research in detection and attribution is necessary to ensure the long-term sustainability of cloud consumers and remove one more impediment that could dissuade organizations from adopting public cloud computing.
To the best of our knowledge, there have been no known public acknowledgements of an FRC attack occurring on the public cloud. However, the absence of such knowledge doesn't confirm that the utility model vulnerability hasn't or won't be exploited. Back in the early 1990s, Internet-facing firewalls were new and thought to be sufficient to secure a connected enterprise. In reality, attacks were occurring, as intrusion-detection systems soon pointed out. Perhaps the utility model has been exploited and, as an IT community, we're presently ill-equipped to detect its presence or identify its culprits.

References

1. J. Idziorek and M. Tannian, "Exploiting Cloud Utility Models for Profit and Ruin," Proc. 2011 IEEE 4th Int'l Conf. Cloud Computing (Cloud 11), IEEE, 2011, pp. 33-40.
2. J. Idziorek, M. Tannian, and D. Jacobson, "Detecting Fraudulent Use of Cloud Resources," Proc. 3rd ACM Workshop on Cloud Computing Security Workshop (CCSW 11), ACM, 2011, pp. 61-72.
3. "Amazon EC2 Pricing," Amazon Web Services, 2012; http://aws.amazon.com/ec2/pricing.
4. "Cloud Servers Pricing," Rackspace Cloud Servers, 2012; www.rackspace.com/cloud/cloud_hosting_products/servers/pricing.
5. S. Kandula et al., "Botz-4-Sale: Surviving Organized DDoS Attacks that Mimic Flash Crowds," Proc. 2nd Conf. Symp. Networked Systems Design & Implementation, Usenix, 2005, pp. 287-300.
6. L. Page, "Join in the Wikileaks DDoS War from your iPhone or iPad," The Register, 10 Dec. 2010; www.theregister.co.uk/2010/12/10/loic_for_iphone.
7. G. Stoneburner, A. Goguen, and A. Feringa, Risk Management Guide for Information Technology Systems, NIST Special Publication 800-30, July 2002.
8. L. Constantin, "Denial-of-Service Attacks Are on the Rise, Anti-DDoS Vendors Report," IDG News Service, 7 Feb. 2012; www.pcworld.com/businesscenter/article/249438/denialofservice_attacks_are_on_the_rise_antiddos_vendors_report.html.
9. S. Wen et al., "CALD: Surviving Various Application-Layer DDoS Attacks that Mimic Flash Crowd," Proc. 2010 4th Int'l Conf. Network and System Security (NSS 10), IEEE, 2010, pp. 247-254.
10. S. Ranjan et al., "DDoS-Shield: DDoS-Resilient Scheduling to Counter Application Layer Attacks," IEEE/ACM Trans. Networking, Feb. 2009, pp. 26-39.
11. J. Idziorek, M. Tannian, and D. Jacobson, "Attribution of Fraudulent Resource Consumption in the Cloud," Proc. 2012 IEEE 5th Int'l Conf. Cloud Computing (Cloud 12), IEEE, 2012, pp. 99-106.
Joseph Idziorek is a PhD candidate in the
Department of Computer and Electrical Engineering at Iowa State University. His research
interests broadly include anomaly detection
and more specifically the detection and attribution of FRC attacks on the cloud utility
model. Idziorek received his BS in computer
engineering from St. Cloud State University.
Contact him at idziorek@iastate.edu.
Mark F. Tannian is a PhD candidate in the
Department of Computer and Electrical
Engineering at Iowa State University. His
research interests include user-centered
design and information security visualization

in addition to cloud computing security. Tannian received his MS in electrical engineering from George Washington University. Contact him at mtannian@iastate.edu.
Doug Jacobson is a University Professor in
the Department of Computer and Electrical Engineering at Iowa State University,
where he serves as the director of the Information Assurance Center. His research
interests include Internet-scale event and
attack generation environments. Jacobson
received his PhD in computer engineering
from Iowa State University. Contact him at
dougj@iastate.edu.

This article originally appeared in IT Professional, March/April 2013; http://doi.ieeecomputersociety.org/10.1109/MITP.2012.43.

IEEE CLOUD 2013

IEEE 6th International Conference on Cloud Computing

June 27July 2, 2013

Santa Clara Marriott, CA, USA

Change we are leading is the theme of CLOUD 2013. Cloud computing has become a scalable
services consumption and delivery platform in the field of services computing. The technical
foundations of cloud computing include service-oriented architecture (SOA) and virtualizations
of hardware and software. The goal of cloud computing is to share resources among the cloud
service consumers, cloud partners, and cloud vendors in the cloud value chain.

Register today!

http://www.thecloudcomputing.org/2013


Focus on
Your Job Search
IEEE Computer Society Jobs helps you easily find
a new job in IT, software development, computer engineering, research, programming, architecture, cloud
computing, consulting, databases, and many other
computer-related areas.
New feature: Find jobs recommending or requiring the
IEEE CS CSDA or CSDP certifications!
Visit www.computer.org/jobs to search technical job
openings, plus internships, from employers worldwide.

http://www.computer.org/jobs

The IEEE Computer Society is a partner in the AIP Career Network, a collection of online job sites for scientists, engineers, and computing professionals. Other partners include Physics Today, the American Association of Physicists in Medicine (AAPM), American
Association of Physics Teachers (AAPT), American Physical Society (APS), AVS Science and Technology, and the Society of Physics
Students (SPS) and Sigma Pi Sigma.

Securing the Cloud

The Threat in the Cloud

Matthew Green Johns Hopkins University

People like to tell us that the cloud is the future. I'd love to write this off as hype, but this time the hype is probably accurate. Though a few traditionalists might still choose to run their own balky hardware, the next generation of online services will almost certainly run on somebody else's servers, using somebody else's software. Needless to say, this has major implications for data security.
Take the popular photo-sharing site Instagram, for instance. Rather than purchasing or renting servers, Instagram's developers deployed the entire service using rented instances on Amazon's popular EC2 cloud-computing service (EC2 is short for Elastic Compute Cloud; http://aws.amazon.com/ec2).1 Although Instagram is hardly a security product, it does manage private user data with cryptographic services
such as Secure Sockets Layer (SSL) and Secure Shell. This implies the use of public-key cryptography and the corresponding presence of secret keys, all stored on hardware that Instagram doesn't control.
This might not be a problem in a traditional datacenter environment. However, cloud computing platforms often mingle user tasks across shared physical hardware. Most users are blissfully unaware of this mingling because cloud providers carefully isolate individual customers into separate virtual machines (VMs), much the way hotels isolate guests in separate rooms. In theory, a VM should keep nosy users from stealing their neighbors' sensitive data.
But when it comes to VMs that perform cryptography, some new research tells us that the existing protections might not be sufficient. To this end, a team of researchers from the University of North Carolina, RSA Laboratories, and the University of Wisconsin demonstrated that you can extract cryptographic keys from one VM to another, even when all the standard cloud security features are in place.2 This is made possible thanks to side channels: pathways that leak sensitive data much the same way a hotel wall leaks sound.

Side Channels
Side-channel attacks have played a major role in the history of cryptography. Usually, these attacks occur when a machine leaks details of its internal operation through some unexpected vector (for example, computation time or electromagnetic emissions). The cloud environment offers a bonanza of potential side channels because different VMs share physical resources (for example, processor, instruction cache, or disk) on a single computer. If an attacking program can carefully monitor those resources' behavior, it can theoretically determine what another program is doing with them.
This threat has long been discussed by
cloud security experts but has largely been
dismissed by providers. This is because in
this area, turning theory into practice turns
out to be surprisingly difficult. There are
many reasons for this. For one thing, cloud
providers often run many different VMs on
the same server, which tends to add noise
and foil an attacker's careful measurements.
The Virtual Machine Manager (VMM) software itself adds more noise, and places a barrier between the attacking user and the bare
metal of the server. Moreover, individual
VMs are routinely swapped between different cores of a multicore server, which makes
it difficult to know what you're actually measuring. All of these factors combine to make
side-channel attacks extremely challenging.

The New Attack


The new research focuses on the Xen VMM, which is the software Amazon uses to run its EC2 service. Although the attack isn't implemented in EC2 itself, it focuses on similar hardware: multicore servers with simultaneous multithreading (SMT) turned off. The threat model assumes that the attacker and victim VM are coresident on the machine and that the victim is decrypting an Elgamal ciphertext using libgcrypt v.1.5.0 (www.gnu.org/software/libgcrypt).


Elgamal encryption is a great case for side-channel attacks because you implement it by taking a portion of the ciphertext, which we'll call x, and computing x^e mod N, where e is the secret key and N is (typically) a prime number. This exponentiation is implemented by a square-and-multiply algorithm that depends fundamentally on the secret key's bits (see Figure 1). If the ith bit of e is 1, steps M (multiply) and R (modular reduce) execute. If that bit is 0, they don't. The key's bits result in a distinctive set of computations that can be detected if the attacking VM can precisely monitor the hardware state.
Side-channel attacks employing square-and-multiply have been around for a while. They date back at least to the mid-to-late 1990s,3 using power and operating time as a channel, and they've been repeatedly optimized as technology has progressed. More recent attacks have exploited cache misses in a shared instruction cache (typical in hyperthreading environments) as a way for one process to monitor another.4
However, no one had applied these attacks to the full Xen VM setting. Such an application is challenging for various reasons, including

• the difficulty of getting the attacking process to run frequently enough to take precise measurements,
• the problem that virtual CPUs (VCPUs) can be assigned to different cores or that irrelevant VCPUs can be assigned to the same core, and
• noisy measurements that give only probabilistic answers about which operations occurred on the target process.

The task facing these researchers was therefore to overcome all these noise sources and still recover useful information from the attacked VM.

Exploiting Cache Misses


At a fundamental level, this new attack is similar to previous attacks that worked by measuring the behavior of the shared instruction
cache.4 The attacking VM first primes the L1
instruction cache by allocating continuous
memory pages. It then executes a series of
instructions to load the cache with cacheline-sized blocks it controls.


SquareMult(x, e, N):
  let e_n, ..., e_1 be the bits of e
  y ← 1
  for i = n down to 1 {
    y ← Square(y)            (S)
    y ← ModReduce(y, N)      (R)
    if e_i = 1 then {
      y ← Mult(y, x)         (M)
      y ← ModReduce(y, N)    (R)
    }
  }
  return y

Figure 1. The square-and-multiply algorithm.2 Its operation depends fundamentally on the secret key's bits.
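For readers who prefer runnable code, here is a direct Python transcription of Figure 1 (ours, for illustration). The point to notice is that the multiply-and-reduce steps execute only for the 1 bits of the exponent, which is exactly what the side channel observes.

def square_mult(x, e, n):
    """Textbook square-and-multiply, mirroring Figure 1. The (M)(R) steps run
    only when the current exponent bit is 1, so the operation sequence leaks
    the bits of e to anyone who can observe it."""
    y = 1
    for bit in format(e, "b"):        # bits of e, most significant first
        y = (y * y) % n               # (S) then (R)
        if bit == "1":
            y = (y * x) % n           # (M) then (R)
    return y

assert square_mult(7, 560, 561) == pow(7, 560, 561)   # agrees with Python's built-in modular exponentiation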

Next, the attacker gives up execution and hopes that the target VM will run next on the same core and, moreover, that the target is running the square-and-multiply algorithm. If it is, the target will cause a few cache-line-sized blocks of the attacker's instructions to be evicted from the cache. The key to the attack is that the choice of which blocks are evicted depends highly on the operations the target conducts.
To see what happened, the attacking VM
must recover control as quickly as possible.
It then probes to see which blocks have been
evicted, by executing the same instructions
and timing the results. If a given block has
been evicted, execution will result in a cache
miss and a measurable delay. By compiling a
list of the missing blocks, the attacker gains
insight into which instructions might have
executed while the target VM was running.
A big challenge for the attacker is to
regain control quickly. Wait too long, and all
kinds of things will happen; the state of the
cache won't give any useful information.
Normally, Xen doesn't allow VCPUs to rapidly regain control, but exceptions exist: Xen gives high priority to VCPUs that receive an interrupt. The researchers exploited this by running a 2-VCPU VM, in which the second VCPU's only job was to issue interprocessor interrupts to get the first VCPU back in control as quickly as possible. Using this approach, they could get back in the saddle within about 16 microseconds. This is an eternity in processing time, but it's short enough to give useful information.

Making Order out of Chaos


The problem with the attack described above
is that the attacking VM has no control over
where in the computation it will jump in. It
could get just a small fragment of the square-and-multiply algorithm (which comprises
hundreds or thousands of operations). It
could jump into the OS kernel. It could even
get the wrong VM because VMs can run on
any core. Moreover, the data can be pretty
noisy.
The solution to these problems is what
makes the new research so fascinating. First,
the researchers didn't just monitor one single execution; they assumed that the device
was constantly decrypting different ciphertexts, all with the same key. Indeed, this sort
of repeated decryption is precisely what happens inside an SSL webserver.
Next, the researchers applied machine-learning techniques to identify which of
the many possible instruction sequences
were associated with particular cache measurements. This required them to train the
algorithm on the target hardware, with the
target VCPU conducting square, multiply,
and modular-reduce calls to build a training model. During the attack, they further
processed the data using a hidden Markov
model to eliminate errors and bogus measurements that cropped up from noncryptographic processes.
Even after all this work, an attacker winds
up with thousands of fragments, some containing errors or low-confidence results.
These can be compared against each other
to reduce errors, then stitched together to
S1: SRSRMRSMRSRSRSMR
S2: MRSRSRSRMR**SRMRSR
S3: SRMRSRSR
S4: MRSRSRSR**SRMRSR
S5: MR*RSRMRSRMRSR
S6: MRSRSRMRSRSRSRMR
------------------------------------------------
SRSRMRSRMRSRSRSMRSRSRMRSRSRSRMRSRMRSRSRMRSRMRSR

Figure 2. Reconstructing six fragments to form a single spanning sequence. This process
can recover the private key. In the fragments and recreated sequence, M, R, and S stand for
multiplication, modular-reduce, and square calls. Bold letters indicate overlapping instruction
sequences.

recover the secret key itself. This problem has been solved in many other domains (most famously, DNA sequencing); the techniques used here are quite similar.
Figure 2 illustrates this process using an invented example that reconstructs six fragments to form a single spanning sequence. This is a huge simplification of a very neat (and complex) process that's well described in the research paper.
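As a toy illustration of the stitching step, the sketch below greedily merges fragments by their longest suffix-prefix overlap, much as an introductory DNA-assembly exercise would. It ignores the error correction and confidence weighting the real attack needs, and the input strings are shortened, error-free stand-ins rather than real measurements.

# Toy greedy assembler for operation-sequence fragments (illustration only).
def overlap_len(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(fragments):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        i, j, k = max(
            ((i, j, overlap_len(frags[i], frags[j]))
             for i in range(len(frags)) for j in range(len(frags)) if i != j),
            key=lambda t: t[2],
        )
        merged = frags[i] + frags[j][k:]
        frags = [f for idx, f in enumerate(frags) if idx not in (i, j)] + [merged]
    return frags[0]

fragments = ["SRSRMRSMRSRSRSMR", "SRMRSRSR", "MRSRSRMRSRSRSRMR"]
print(assemble(fragments))   # a single sequence containing each fragment as a substring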

The Outcome
With everything in place, the researchers
attacked a 4,096-bit Elgamal public key, which
(owing to an optimization in libgcrypt) had
a 457-bit private key e. After several hours of
data collection, they obtained about 1,000
key-related fragments, of which 330 were long
enough to be useful for key reconstruction.
These let the attackers reconstruct the full key
with only a few missing bits, which they could
guess using brute force.
And that, as they say, is the ballgame.

What Does This Mean for Cloud Cryptography?
Before you start pulling down your cloud VMs, a few points of order.
First, there's a reason these researchers conducted their attack with libgcrypt and Elgamal and not, say, OpenSSL and RSA (which would be a whole lot more useful). That's because libgcrypt's Elgamal implementation is the cryptographic equivalent of a 1984 Stanley lawnmower engine. It uses
textbook square-and-multiply with no ugly optimizations to get in the way. OpenSSL RSA decryption, on the other hand, is more like a 2012 Audi turbodiesel. It uses windowing, Chinese remainder theorem, blinding, and two types of multiplication, all of which make these attacks much more challenging.
Second, this attack requires perfect conditions. As proposed, it works only with two VMs and, as we mentioned, requires training on the target hardware. This isn't a fundamental objection, especially because real cloud services do use much identical hardware. However, it does mean that messiness, the kind you get in real cloud deployments, will be more of an obstacle than it was in the research setting.
Finally, before you can target a VM, you
must get your attack code onto the same
hardware as your target. This seems like a
pretty big challenge. Unfortunately, some
slightly older research indicates that this is
feasible in existing cloud deployments.5 In
fact, for only a few dollars, researchers were
able to colocate themselves with a given target VM with about 40 percent probability.6

In the short term, you certainly shouldn't panic about this, especially given how elaborate the attack is. But this new research does indicate that we should be thinking hard about side-channel attacks and how to harden our cloud platforms to deal with them.

References
1. "What Powers Instagram: Hundreds of Instances, Dozens of Technologies," Instagram, 2012; http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances-dozens-of.
2. Y. Zhang et al., "Cross-VM Side Channels and Their Use to Extract Private Keys," Proc. 19th ACM Conf. Computer and Communications Security (CCS 12), ACM, 2012, pp. 305-316.
3. P.C. Kocher, "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems," Proc. 16th Ann. Int'l Cryptology Conf. Advances in Cryptology (Crypto 96), Springer, 1996, pp. 104-113; www.cryptography.com/public/pdf/TimingAttacks.pdf.
4. C. Percival, "Cache Missing for Fun and Profit," 2005; http://css.csail.mit.edu/6.858/2012/readings/ht-cache.pdf.
5. T. Ristenpart et al., "Cross-VM Vulnerabilities in Cloud Computing," presentation at 29th Int'l Cryptology Conf. (Crypto 09) rump session, 2009; http://rump2009.cr.yp.to/8d9cebc9ad358331fcde611bf45f735d.pdf.
6. T. Ristenpart et al., "Hey, You, Get off of My Cloud: Exploring Information Leakage in Third-Party Compute Clouds," Proc. 16th ACM Conf. Computer and Communications Security (CCS 09), ACM, 2009, pp. 199-212.

Matthew Green is a cryptographer and research professor at Johns Hopkins University's Information Security Institute. Contact him at matthewdgreen@gmail.com.

Illustration by Robert Stack.

This article originally appeared in IEEE Security & Privacy, January/February 2013; http://doi.ieeecomputersociety.org/10.1109/MSP.2013.20.


Securing the Cloud

Implementing
Effective Controls
in a Mobile, Agile,
Cloud-Enabled
Enterprise
Dave Martin EMC

Several approaches change security clichés into reality by removing the barriers of culture and trust that often prevent the implementation of effective controls.

As security professionals, we attend meetings and pronounce that security is everyone's responsibility, and everyone nods in agreement. But in reality, everyone still believes that the security team has the ball. Another favorite cliché is that security should be built in, not bolted on; however, finding tangible examples of fully integrated or built-in security is difficult.
As security practitioners, it's hard not to blame ourselves for these realities: we've done little to really push these agendas forward. After all, we're all paid paranoids with trust issues that lead us to implement solutions with added layers of controls that don't address root cause, enabling ongoing bad behaviors from the people we don't trust.
In this article, I assert that we must break this cycle, build partnerships, implement effective security controls, and improve the long-term effectiveness of our control environments in an ever-complex, agile, cloud- and mobile-enabled world.

A Vision of a Better Future


Using layers of security is likely a good idea, but creating layers to mask the true root cause is not. For example, although there are legitimate use cases for Web application firewalls, they're often used to address unknown vulnerabilities in underlying infrastructure and application layers or to provide security log visibility. Other network-based controls, such as network data loss prevention, can be effective, but they add complexity and cost, reduce agility, and introduce additional points of failure to critical operating environments. Vulnerability-scanning services provide critical health data about our environments to address weaknesses in our asset and configuration management systems, but having the application platform and hosts report on the operating environment's configuration and patch levels would be more useful. We continue to leverage controls ineffectively, partly by habit and partly because we've failed to address the fundamental issues by often assuming that processes and infrastructure cannot be made inherently more secure and that they require complex bolted-on layers of security.
Given the current environment's forcing functions (agility, mobility, virtualization, evolving threats, and cost), we must look for new ways to build our infrastructure and systems. Maintaining existing layers of complexity will involve large amounts of automation, configuration, and change management. Trying to ensure that an application stack remains protected using bolted-on layers of security as it moves from datacenter to datacenter will become a huge challenge. We must plan now, not by eliminating controls (although we should take this opportunity to review them), but by examining the controls we need and how they're applied.
We must foster a stronger relationship
with the application development teams to
ensure they have adequate training on security and threats and follow a solid software
development life cycle (SDLC). These teams
are vital in integrating our controls directly
into the application stack. We must empower
them to use integration APIs to many of our
common controls and fully embed them into
the protected application. For example, where
better to deliver data loss prevention than in
the application itself? An API call can validate when its acceptable for the application
to transmit data given the context of user, role,
device, and so forth. The application is better
able to make this decision than, say, a bump
IEEE Cloud Computing

23

Securing the Cloud

in the wire struggling to deal with encrypted


data with little or no context. These types of
security and risk decisions should be applied
in the application business logic layer.
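As a sketch of what such an in-application check might look like, the snippet below asks a policy helper whether a transmission is allowed given the user, role, device, and data classification. The function and field names are invented, since the article argues for the pattern rather than prescribing a particular API.

# Hypothetical in-application data loss prevention check; names are invented.
from dataclasses import dataclass

@dataclass
class TransmitContext:
    user: str
    role: str
    device_managed: bool
    data_classification: str   # e.g., "public", "internal", "restricted"
    destination: str

def policy_allows(ctx: TransmitContext) -> bool:
    """Stand-in for a call to a central policy or DLP service."""
    if ctx.data_classification == "restricted":
        # Restricted data may leave only from managed devices to approved destinations.
        return ctx.device_managed and ctx.destination.endswith(".example.com")
    return True

def send_report(ctx: TransmitContext, payload: bytes) -> None:
    # The application, which knows the user, role, and data value, makes the
    # decision before transmitting, rather than relying on a network box to guess.
    if not policy_allows(ctx):
        raise PermissionError("transmission blocked by data-handling policy")
    ...   # hand the payload to the transport layer

ctx = TransmitContext("alice", "analyst", device_managed=True,
                      data_classification="restricted",
                      destination="reports.example.com")
send_report(ctx, b"quarterly figures")   # allowed under the toy policy above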
Application development teams must also produce intelligent log streams. Traditionally, these logs are used for troubleshooting and debugging, but an external system might be necessary to combine several log events to produce a security event. A better log stream would include more security-relevant details on traditional events and those produced by embedding security controls. The application, with its context of business rules, user activity, and a unique sense of data value, should produce security-targeted logs as well as highlight risky transactions. These logs must still be parsed and aggregated to give a full enterprise picture, but they will enable security incident and monitoring teams to detect and respond to detailed application incidents more effectively. In addition, because the control layer is integrated, there are fewer controls to reconfigure if the application moves.
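A security-targeted log event of the kind described here might look like the illustrative entry below. The field names are ours, chosen to show the application-level context (user, role, business rule, data value, risk flag) that a generic troubleshooting log usually lacks.

# Illustrative "intelligent" log event; the schema is invented, not mandated.
import json
from datetime import datetime, timezone

event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "app": "claims-portal",
    "event_type": "bulk_export",       # a business-level action, not just an HTTP 200
    "user": "alice",
    "role": "analyst",
    "business_rule": "export_requires_manager_approval",
    "data_classification": "restricted",
    "record_count": 12000,
    "approval_id": None,               # the missing approval is what makes this risky
    "risk_flag": True,                 # the application itself highlights the transaction
}
print(json.dumps(event))   # shipped to the monitoring pipeline as one already-contextualized event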

Moving to the New Paradigm


Moving to this new paradigm will be a journey. We have too many traditional implementations, and beyond that, we need to
retrain our IT departments, update our control toolboxes, and address the question of
who is responsible for security.
First, we must address the foundational
components of the enterprise security program, ensuring the SDLC has solid metrics to
verify that controls are effective and correctly
implemented along with strong quality assurance and testing processes and tools. Development teams should perform these tasks
with governance by the information security or risk functions. We should also check
that configuration and change management
is well executed in the target environment.
Vulnerability and misconfiguration should
be well managed, and configuration management systems should be monitored in real
time, addressing control gaps and vulnerability in a timely manner.
With this solid foundation, we should next
look to our collection of controls. Many might
already have the hooks to be implemented
through APIs; this is a good place to begin the
transition. Control implementations might require reevaluating other technologies or methods. In addition, we should update log standards to ensure that intelligent logs are being produced and that they're reaching the incident monitoring team with the correct context and response procedures.
These technical and process changes are
the easy part of the transition. The long-term
culture will be harder to address. Security
teams are thought of as the implementation
point of control processes, and often, these
teams don't believe anyone else will implement controls. It's time that we, as security
professionals, start challenging ourselves on
these assumptions. This will take time and
require directed effort: a combination of
training IT practitioners on the real threats
and the controls that combat them; reimagining controls and how we use them; and
improving measurement, governance, and
accountability processes.

As with any modifications in environments with legacy technology, processes and people won't change overnight. We must act with sponsorship across IT leadership, picking targets to demonstrate the benefits of this approach. By measuring benefits over time and applying these concepts when applications are re-platformed, we can complete the transition, creating an infrastructure that is simpler, more agile, and cheaper and that has more effective integrated controls.

Dave Martin is the chief security officer at EMC. His research interests include adaptive controls, cloud risk management, and incident detection and response. Martin received a BEng in manufacturing systems engineering from the University of Hertfordshire in England. He's a Certified Information Systems Security Professional. Contact him at dave.martin@emc.com.

Illustration by Peter Bollinger.


This article originally appeared in IEEE Security & Privacy, January/February 2013; http://doi.ieeecomputersociety.org/10.1109/MSP.2013.1.

PURPOSE: The IEEE Computer Society is the


world's largest association of computing
professionals and is the leading provider of
technical information in the field. Visit our
website at www.computer.org.
OMBUDSMAN: Email help@computer.org.
Next Board Meeting: 13-14 June 2013,
Seattle, WA, USA
EXECUTIVE COMMITTEE
President: David Alan Grier
President-Elect: Dejan S. Milojicic; Past President:
John W. Walz; VP, Standards Activities: Charlene
(Chuck) J. Walrad; Secretary: David S. Ebert;
Treasurer: Paul K. Joannou; VP, Educational
Activities: Jean-Luc Gaudiot; VP, Member &
Geographic Activities: Elizabeth L. Burd (2nd
VP); VP, Publications: Tom M. Conte (1st VP);
VP, Professional Activities: Donald F. Shafer; VP,
Technical & Conference Activities: Paul R. Croll;
2013 IEEE Director & Delegate Division VIII: Roger
U. Fujii; 2013 IEEE Director & Delegate Division
V: James W. Moore; 2013 IEEE Director-Elect &
Delegate Division V: Susan K. (Kathy) Land

BOARD OF GOVERNORS
Term Expiring 2013: Pierre Bourque, Dennis J.
Frailey, Atsuhiro Goto, André Ivanov, Dejan S.
Milojicic, Paolo Montuschi, Jane Chu Prey, Charlene
(Chuck) J. Walrad
Term Expiring 2014: Jose Ignacio Castillo
Velazquez, David. S. Ebert, Hakan Erdogmus, Gargi
Keeni, Fabrizio Lombardi, Hironori Kasahara, Arnold
N. Pears
Term Expiring 2015: Ann DeMarle, Cecilia Metra,
Nita Patel, Diomidis Spinellis, Phillip Laplante, Jean-Luc Gaudiot, Stefano Zanero

EXECUTIVE STAFF
Executive Director: Angela R. Burgess; Associate
Executive Director & Director, Governance:
Anne Marie Kelly; Director, Finance &
Accounting: John Miller; Director, Information
Technology & Services: Ray Kahn; Director,
Membership Development: Violet S. Doan;
Director, Products & Services: Evan Butterfield;
Director, Sales & Marketing: Chris Jensen

COMPUTER SOCIETY OFFICES


Washington, D.C.: 2001 L St., Ste. 700,
Washington, D.C. 20036-4928
Phone: +1 202 371 0101 Fax: +1 202 728 9614
Email: hq.ofc@computer.org
Los Alamitos: 10662 Los Vaqueros Circle, Los
Alamitos, CA 90720 Phone: +1 714 821 8380
Email: help@computer.org
Membership & Publication Orders
Phone: +1 800 272 6657 Fax: +1 714 821 4641
Email: help@computer.org
Asia/Pacific: Watanabe Building, 1-4-2 Minami-Aoyama, Minato-ku, Tokyo 107-0062, Japan
Phone: +81 3 3408 3118 Fax: +81 3 3408 3553
Email: tokyo.ofc@computer.org

IEEE BOARD OF DIRECTORS


President: Peter W. Staecker; President-Elect:
Roberto de Marca; Past President: Gordon
W. Day; Secretary: Marko Delimar; Treasurer:
John T. Barr; Director & President, IEEE-USA:
Marc T. Apter; Director & President, Standards
Association: Karen Bartleson; Director & VP,
Educational Activities: Michael R. Lightner; Director
& VP, Membership and Geographic Activities:
Ralph M. Ford; Director & VP, Publication Services
and Products: Gianluca Setti; Director & VP,
Technical Activities: Robert E. Hebner; Director &
Delegate Division V: James W. Moore; Director &
Delegate Division VIII: Roger U. Fujii

revised 22 Jan. 2013


IEEE CloudCom 2013


5th IEEE International Conference on Cloud Computing Technology and Science

2-5 December 2013

Bristol, United Kingdom


The Cloud is a natural evolution of distributed computing
and of the widespread adoption of virtualization and service-oriented architecture (SOA). In cloud computing, IT-related
capabilities and resources are provided as services, via
the Internet and on-demand, accessible without requiring
detailed knowledge of the underlying technology. The
IEEE International Conference and Workshops on Cloud
Computing Technology and Science, steered by the Cloud
Computing Association, aim to bring together researchers
who work on cloud computing and related technologies.

Register today!

http://2013.cloudcom.org

The Community for Technology Leaders

Focused on Your Future


Now when you join or renew your IEEE Computer
Society membership, you can choose a membership
package focused specifically on advancing your
career:

• Software and Systems: includes IEEE Software Digital Edition
• Information and Communication Technologies (ICT): includes IT Professional Digital Edition
• Security and Privacy: includes IEEE Security & Privacy Digital Edition
• Computer Engineering: includes IEEE Micro Digital Edition

In addition to receiving your monthly issues of


Computer magazine, hundreds of online courses and
books, and savings on publications and conferences,
each package includes never-before-offered benefits:

• A digital edition of the most-requested leading publication specific to your interest
• A monthly digital newsletter developed exclusively for your focus area
• Your choice of three FREE webinars from the extensive IEEE Computer Society collection
• Downloads of 12 free articles of your choice from the IEEE Computer Society Digital Library (CSDL)
• Discounts on training courses specific to your focus area

Join or renew today at www.computer.org/membership
