Cloud Computing
What's Inside: Key insights, strategies and best practices for cloud computing, including:
Data storage considerations
Tips to negotiate cloud contracts
Statistics on cloud adoption
Security services put to the test
And much more.
2011
Introduction
By Lenny Heymann, Executive Vice President, Interop
Interop has compiled a strategic guide delivering key insights, important strategies and best practices
for cloud computing.
The days of cloud computing services being used only for seasonal capacity are gone. Organizations are
incorporating cloud services into their IT practices on a permanent basis. The capital and operational costs are
compelling. Ensuring adequate application performance and having redress defined before you commit to a
service will set expectations and spell out everyone's responsibilities.
Successful cloud computing programs are integrated with your IT initiatives and your business, and like any outsourcing arrangement,
you need to determine if and when you can safely put sensitive information into the cloud without exposing yourself to more risk.
Partitioning strategies can help but do require forethought.
Luckily, standards such as CloudAudit are being developed and adopted that define how cloud providers can describe their
security practices to customers.
Cloud computing is an evolving space, and successfully incorporating cloud services into your organization means addressing many needs. A
thoughtful approach to selecting a cloud service and designing applications to take advantage of cloud services is key.
You can't. So we were surprised when a recent InformationWeek Analytics APM survey revealed scant use of monitoring when it
comes to software as a service or apps on public cloud services. Just 28% of respondents use APM tools to monitor most of the
cloud applications that they use, while 70% monitor only a few or none at all.
The problem is, the application architecture transformation brought about by the adoption of cloud services requires an equally
transformational approach to performance monitoring. One example: Some APM vendors are embracing the cloud as part of the
solution via the concept of monitoring and management as a service, or MaaS. MaaS platforms from companies like AppDynamics,
BlueStripe, and Coradiant can automate tasks typically involved in setting up APM software, including agent installation and
component relationship mapping. The monitoring service can be used for both on-premises applications and those in a public cloud
environment.
In other cases, when it comes to SaaS and infrastructure-as-a-service platforms, IT teams are using synthetic transaction tools that
simulate real application traffic and data payloads; they help test the user experience and discover bottlenecks and other problems
that affect speed, transaction completeness, and
availability. Whether you adopt APM as a service or adapt your in-house techniques, metrics will need to change to suit evolving
application and business service priorities with regard to transaction processing, Web page load times, and so on. For example,
companies may need to tweak data collection and analysis steps to give managers a more coherent and up-to-date picture of
application performance. Rather than being concerned about each component individually, the collection perspective must shift
to the overall user experience, which in most cases means the transaction view.
[Chart: Do You Use APM Tools to Monitor SaaS or Public Cloud Apps? Yes, most of them: 28%; Yes, to monitor a few of them: 32%;
No: 38%; We don't use SaaS or public cloud applications: 2%. Data: InformationWeek Analytics Application Performance
Management Survey of 100 business technology professionals using APM, August 2010.]
Going to the cloud won't save money if you lose business because of poor performance, so make the APM transition a priority,
not an afterthought. Decide whether your current toolsets can extend visibility over both on-premises and cloud or virtualized
applications, so that you can have confidence and certainty about performance.
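A synthetic transaction tool of the kind mentioned above boils down to timing a scripted request and checking the response for completeness. Here is a minimal sketch; the URL and content marker callers would use are hypothetical, and the fetch function is injectable so the logic can run without a live site.

```python
# Minimal synthetic-transaction probe. The URL and content marker used by
# callers are hypothetical; `fetch` is injectable so the timing/completeness
# logic can be exercised without a live site.
import time
import urllib.request

def probe(url, expected_marker, fetch=None, timeout=10):
    """Time one scripted request and check the response for completeness.

    Returns (latency_seconds, transaction_ok).
    """
    if fetch is None:
        def fetch(u):
            with urllib.request.urlopen(u, timeout=timeout) as resp:
                return resp.read().decode("utf-8", errors="replace")
    start = time.monotonic()
    body = fetch(url)
    latency = time.monotonic() - start
    # The transaction is "complete" only if the expected content came back;
    # a fast response carrying an error page still counts as a failure.
    return latency, expected_marker in body
```

Scheduling such probes from several geographic locations and trending the results is, in essence, what commercial synthetic monitoring tools automate.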
David Stodder is chief analyst at Perceptive Information Strategies. Write to us at iwletters@techweb.com.
The most popular cloud computing option is public cloud computing. A common example is Web-based e-mail like Google's Gmail.
In the public cloud scenario, the customer generally has no control over or knowledge of the exact location of the provided resources.
Usually the customer is presented with a standard service-level agreement with limited or no ability to tailor the terms of use.
Without the ability to tailor the service parameters to a company's business, it is likely that public cloud solutions will not meet export
compliance standards, if such needs exist.
Recently, some cloud service providers have been marketing their services as export control compliant. Knowing the basic U.S.
export control rules governing technical data should help companies decide whether cloud computing services being offered to them
meet their export compliance needs for all their systems and applications.
IT departments must determine whether export-controlled data may be contained on their systems and work with their legal
department to formulate a plan for handling such data inside or outside of the cloud. For the purposes of this discussion, controlled
technical data is data controlled under the International Traffic in Arms Regulations (ITAR) or the Export Administration Regulations
(EAR). Typically, this information is in the form of blueprints, drawings, models, formulae, specifications, photographs, plans,
instructions, or documentation regarding an export-controlled item or service.
U.S. companies are prohibited from exporting controlled technical data to certain foreign countries without an export license. For
example, sending an e-mail with export-controlled technical data to a customer in India would be an export of the data to India and
could require export authorization.
The rules also restrict the release of export-controlled technical data to certain foreign nationals, inside or outside the U.S., without an
export authorization. (To do so would be considered an export to that person's country of citizenship.) Companies are often surprised
by this rule. For example, if an American engineer in the U.S. walks blueprints for the manufacture of an export-controlled item down
the hall to his colleague who happens to be an Indian citizen, or e-mails them to him, this would be considered an export to India and
could require export authorization.
Companies in the defense industry should also be aware that, under ITAR, merely giving foreign nationals access to defense technical
data, whether or not the foreign national actually views it, is considered an export that requires authorization.
In the public cloud scenario, the customer generally has no control over or knowledge of the exact location of its data, and in fact,
there could be multiple copies of its data in multiple locations. Providing export-controlled data to a data center located outside the
U.S. could be considered an export to the data center location, which could require export authorization.
Additionally, once the company hands over its data to the service provider, the customer has limited control over who has access to
the data. From a security perspective, that is no doubt of great concern. In addition to requiring strong security controls, companies
with export-controlled data must implement measures to prohibit foreign nationals from having access to their export-controlled data.
Companies wary of turning over their data to public clouds have been considering private cloud models, in which the cloud service
provider constructs a cloud solely for one organization, or hybrid clouds, which enable data and application portability between a
private cloud and a public cloud (so more sensitive data can be kept in the private environment).
Any scenario in which a third-party service provider has access to your company's export-controlled data introduces risk of improper
disclosure to that third party, for which your company could be liable.
To minimize the risk of improper disclosure of your export-controlled data, following are some key questions to ask the cloud
provider:
How is the cloud service set up to comply with U.S. export controls?
Where in the world will your data be stored?
How is sensitive data segregated and controlled?
Would any foreign nationals have access to your data?
Does an auditable trail exist?
It's important for IT departments to have answers to these questions as they evaluate cloud services.
Marsha McIntyre is an attorney at Hughes Hubbard & Reed LLP who focuses on export controls and sanctions. Prior to joining Hughes Hubbard & Reed,
McIntyre worked at the U.S. Department of State, Office of the Legal Adviser, providing guidance on international trade issues.
This combination of keeping inventories scaled back, relying more than ever on just-in-time (JIT) inventory ordering and then having
to blend ordering and production systems with systems of thousands of suppliers around the world finally created enough critical
mass to overturn traditional industry wisdom about keeping supply chain systems in-house and protecting against leakages of
production information and intellectual property.
Outsourcing to suppliers around the world for least-cost manufacturing added the pressure of certifying all of those suppliers
to communicate in secure environments with corporate IT systems. Supplier certification is a painful, iterative process capable of
overwhelming an entire IT staff. The fix was obvious: Why not go to a supply chain cloud service provider that already has 80 percent
of your supplier base certified and ready to plug into your supply chain?
"There are two aspects to the benefits of cloud computing models for manufacturing and logistics organizations," said John Brand,
Research Director at Hydrasight, an IT research and analysis firm. "One is to remove the internal costs associated with running your
own IT infrastructure. The second is the benefit of increased visibility across organizational boundaries, particularly if a third party
is involved. In fact, when you consider what cloud-based e-mail services can do for the control and removal of spam and viruses,
cloud-based supply systems can similarly reduce the noise within the supply chain to simplify and speed up data exchange."
Perhaps the most pivotal question for companies seeking the cloud, however, is the degree of integration their businesses and
systems require with their supplier bases. There are two fundamental cloud computing approaches to consider: either a
Web portal that provides real-time communications and collaboration capabilities between companies and their suppliers, or a fully
integrated business-to-business (B2B) solution that not only provides real-time communications and collaboration between all
parties, but also performs transaction processing and database updates in real time.
"The reality is that organizations are often better served by data intermediaries that aggregate and add value to the data that passes
through the supply chain. Most often this data is made anonymous or heavily obscured to ensure privacy and integrity, while giving
organizations greater insight and intelligence into data which can be reasonably shared between parties, with the right security
policies and protocols in place. These data hubs can provide very rich services beyond simple data aggregation, reporting and
analytics," Brand said.
Many companies adopt the Web portal approach to a cloud-based supply chain and achieve fast results, getting their entire
supplier bases online in a matter of weeks. Other companies, mostly large enterprises with robust supply chain integration
requirements, need considerable B2B integration with suppliers and with cloud-based supply chain solutions, just as they would
with an internal supply chain system. In these cases, integration can be tricky, and is a project measured in months rather than
weeks.
"The biggest obstacles are usually inflated expectations from business users, obstructionist IT departments, and poor alignment in IT
infrastructure," said Brand. "Getting the balance right between the level of investment required to move into cloud-based services
and the execution of a migration and management plan is still a significant challenge."
in the vendor's network. This process may remove metadata from the file, which can result in many issues in the event of litigation.
Companies have found themselves subject to sanctions in litigation because metadata was missing from data relevant to the litigation.
Accordingly, you need to consider requiring the vendor to keep a full copy of the data, with all metadata, or you need to retain full
copies.
Also, don't overlook your ability to help self-insure against risks associated with a cloud agreement. While the vendor should have
technology errors & omissions insurance, consider getting a cyber-liability policy for your business. Cyber-liability insurance can
protect you against unauthorized access to a computer system, theft or destruction of data, hacker attacks, denial-of-service attacks,
malicious code and violations of privacy regulations. To avoid sticker shock from escalating prices, you should attempt to lock in
any recurring fees for a period of time (one to three years); thereafter, an escalator based on CPI or another third-party index should
apply.
If you are considering moving some business functions into the cloud, keep in mind the difference between the cloud and traditional
software and protect your business accordingly.
Christopher C. Cain is a partner with the law firm of Foley & Lardner LLP, practicing in the firm's Information Technology & Outsourcing and Transactional &
Securities practices. He routinely counsels clients on the legal, technical and transactional issues arising in technology transactions.
Not being able to assess and validate compliance and security efforts within various
cloud computing models is one of the biggest challenges cloud computing now faces. First, when a business queries a cloud
provider, there may be a lot of misunderstanding about what is really being asked. For instance, when a business asks if the
provider conducts periodic vulnerability assessments and the provider responds affirmatively, it could be acknowledging an annual
review, a quarterly review, or a daily vulnerability assessment. Perhaps it checks "yes" when really all it performs is an annual
penetration test. Too much ambiguity.
Additionally, cloud providers can't spend all of their time fielding questions about how they manage their infrastructure. And,
regrettably, not many public cloud providers offer much transparency into their controls. And no, SAS 70 audits don't really account
for much of anything when it comes to security.
To help clear the fog, CloudAudit.org, an organization that formed just this year and is moving fast in the area of cloud management,
has emerged with what it hopes will be part of the solution. The group is developing a common way for Infrastructure-, Platform-,
and Software-as-a-Service providers to automate how their services are audited and assessed, and how assertions about their
environments are provided. Consumers of these services would also have an open, secure, and extensible way to use
CloudAudit with their service providers.
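To give a feel for the automation goal, here is a hypothetical sketch of how consumer tooling might construct the URL at which a provider publishes one control's assertion. The namespace layout and file name below are simplified assumptions for illustration, not the exact paths the draft specification defines.

```python
# Hypothetical CloudAudit-style namespace: assertions live at well-known
# HTTP paths so audit tooling can fetch them mechanically. This layout is
# an assumption for illustration, not the draft specification's exact paths.
CLOUDAUDIT_ROOT = ".well-known/cloudaudit"

def assertion_url(provider_host, compliance_pack, control_id):
    """Build the URL where a provider would publish one control's assertion."""
    return (f"https://{provider_host}/{CLOUDAUDIT_ROOT}/"
            f"{compliance_pack.lower()}/{control_id}/manifest.xml")
```

The point of such a convention is that an auditor's tool can walk the namespace for each CompliancePack instead of e-mailing the provider a questionnaire.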
The group currently counts about 250 participants in the effort, including end users, auditors, system integrators, and cloud providers
representing companies such as Akamai, Amazon Web Services, enStratus, Google, Microsoft, Rackspace, VMware, and many
others.
Last week the group released its first specification to the IETF as a draft, as well as CompliancePacks that map control objectives to
common regulatory mandates, such as HIPAA and PCI DSS, and to the ISO 27002 and COBIT compliance frameworks.
As (if) CloudAudit is embraced by cloud providers, businesses should be able to shop and compare services much more intelligently.
Also, it could help some cloud business users feel more comfortable moving regulated data (where it's permitted) to a public provider.
For cloud service providers, CloudAudit can help them to more cost-effectively handle the number of audit requests each year. And,
who knows, such transparency may even be a boost to business.
Building a standard is one thing; getting it adopted, working, and embraced by industry is quite another. In my next post I'll bring you a
discussion with a cloud management provider that has already begun putting CloudAudit to use.
Both OpenStack and Apache Deltacloud are projects with similar goals. They seek to build
out a set of lightweight Representational State Transfer (REST) APIs that allow outsiders
to tap into the services of a cloud provider over an HTTP network. Red Hat CTO Brian Stevens, not wishing to sound unrealistic,
conceded that, yes, two sets of open source APIs coming into existence at roughly the same time will compete with each other. In fact, they will
be a little different and used for different purposes.
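A lightweight cloud REST API of the kind both projects aim for might be consumed like this. The /instances path, X-Auth-Token header, and JSON response shape are illustrative assumptions, not quotes from either project's actual API.

```python
# Sketch of consuming a lightweight cloud REST API. The endpoint path,
# auth header, and response shape are assumptions for illustration only.
import json
import urllib.request

def list_instances(api_base, token, opener=urllib.request.urlopen):
    """GET the provider's instance collection and return the instance ids.

    `opener` is injectable so the call can be exercised without a live cloud.
    """
    req = urllib.request.Request(
        f"{api_base}/instances",
        headers={"X-Auth-Token": token, "Accept": "application/json"},
    )
    with opener(req) as resp:
        payload = json.loads(resp.read())
    return [inst["id"] for inst in payload.get("instances", [])]
```

The appeal of the REST style is exactly this thinness: any HTTP client in any language can drive the provider, with no vendor SDK required.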
The DMTF already has a set of suggested APIs in hand from Oracle, Fujitsu, VMware and Telefonica, and its goal is to produce a set
of cloud APIs that will work for those vendors and the customers they seek to supply.
The Rackspace/NASA Nebula-based OpenStack backers are looking to provide systems that could be used by cloud services
suppliers who want to manage up to a million servers. Its software will be aimed at the service provider and allow many service
providers to look the same and be dealt with the same way by their customers. RightScale CTO Thorsten von Eicken went
somewhat out of his way to say that this is a "true open source project," meaning it will include a variety of vendor participants and
form a community around the resulting code.
But he could have just said "an open source project." The use of "true open source project" tells me that this group is a little nervous
about its open source standing. It is, after all, a group of vendors who each have a direct commercial interest in the outcome. Looked
at from that perspective, the Deltacloud project in the Apache incubator looks like an even more true open source project, open to
developers from around the world, each of whom will have a minimal direct commercial interest in the outcome.
I don't really care about the hair-splitting. The OpenStack project reminds me of XenSource, the company formed behind the open
source hypervisor that was backed by IBM, Sun, Oracle, HP and others. But I view XenSource as less successful in attracting
multitudes of developers to its cause than some other projects because of that vendor domination. If the agenda is being set by
Oracle and IBM, how many independent developers, working for nothing, are going to spend time on the project or choose its output
for their next project? That worry doesn't affect Oracle et al. too much, because they have thousands of existing customers ready to
work with the alternative they provide.
The Apache Deltacloud project is more of a grassroots project, possibly more likely to be picked up and used by a variety of
grassroots developers and enterprise developers seeking to build an internal cloud. If enough of these implementations come into
being, then the cloud suppliers will take notice. Perhaps they've already implemented OpenStack as the means to get to a rapidly
scalable infrastructure quickly. There's no reason why they couldn't dedicate part of that infrastructure to being activated by a set of
APIs already in use inside the enterprise.
In effect, we need all three of these open source API efforts to accomplish different goals and allow the cloud to become a form of
computing that connects to many different customers and implements varied styles of computing. We are well on our way.
You can find the full report, complete with a detailed analysis of each service reviewed,
here. Generally speaking, SaaS-based Web security works like this: Proxy your outbound Internet traffic through the closest point of
presence that your Web security vendor provides, and the provider skims out the malware mixed in with legitimate Web traffic.
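Mechanically, this arrangement is an ordinary forward-proxy configuration: every outbound request is sent to the vendor's nearest point of presence, which scans it before forwarding. A minimal sketch; the PoP hostname and port below are placeholders for whatever the vendor assigns.

```python
# Routing outbound HTTP(S) traffic through a SaaS Web-security point of
# presence is just forward-proxy configuration. The PoP hostname and port
# are placeholders for whatever a real vendor would assign.
import urllib.request

def proxied_opener(pop_host, pop_port):
    """Build a urllib opener that routes all requests through the vendor PoP."""
    proxy = f"http://{pop_host}:{pop_port}"
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    return urllib.request.build_opener(handler)
```

In practice the same routing is usually pushed fleet-wide via browser PAC files or gateway rules rather than per-application code, but the effect is identical.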
As we prepared to launch this review, we were skeptical about the whole concept of Web security in the cloud. The idea of routing
your outbound Web traffic through a third-party proxy seemed like mayonnaise on a hamburger: just not appealing. We can now say
that after completing this review, we'll eat that mayo burger.
But that doesn't mean we're giving up ketchup. The fact is, premises-based Web malware products from the likes of Bluecoat,
Finjan, Websense, McAfee and others are still our first choice for protecting users in corporate offices. They have proven themselves
to be scalable, reliable and effective. We can't yet say the same for SaaS Web security products; the market is simply too immature.
We need more assurance that these services will scale sufficiently before we're ready to recommend wholesale adoption.
However, there's never been an efficient way to extend on-premises protection to remote users. Bluecoat and others offer halfhearted
products via proxy clients, but routing traffic from an employee on the West Coast to a Web gateway filter on the East Coast
isn't efficient. This is where a Web security service can make a difference. We think that at present, midsize and large enterprises
that need to protect road warriors and small branch and remote offices should consider supplementing an on-premises product with
a SaaS-based offering. A service is easy to deploy, particularly for a subset of your entire employee base, and has low capital and
operational costs. In addition, user groups will get a similar level of protection as with an on-premises product.
Without fail, everyone we speak with about Web security in the cloud asks about the overhead added to the browsing experience,
so latency testing was a key element of our reviews. As we report our latency findings, keep these points in mind: First, many factors
go into the latency equation, so your mileage may vary from ours. Second, each provider's cloud may have served us content out of
cache, potentially making one provider's latency look better than another's. In addition, cloud providers have different geographical points
of presence; those with sites closer to our Boston-based lab had an advantage. Third, latency measurements should not be used
to rate one vendor over another, but rather to broadly illustrate the impact of routing traffic through any provider's
service.
The key takeaway for us in the labs is that the average user will probably never know that he or she is using a third party for Web
security. We generally found that most sites added only 0.1 or 0.2 seconds of latency. For sites that had many objects to fetch, we
saw a few extra seconds of latency, but that was at the absolute highest end of the spectrum.
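The overhead measurement itself reduces to simple arithmetic: median fetch time through the security proxy minus median fetch time fetching directly. A sketch of that methodology, with injectable fetch callables since the reviewed PoPs and test sites are not reproducible here:

```python
# Latency-overhead arithmetic: median proxied fetch time minus median
# direct fetch time. The fetch callables are injectable stand-ins for
# real direct and proxied HTTP clients.
import statistics
import time

def median_latency(fetch, url, runs=5):
    """Median wall-clock time for `runs` fetches of `url`."""
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        fetch(url)
        samples.append(time.monotonic() - start)
    return statistics.median(samples)

def added_latency(direct_fetch, proxied_fetch, url, runs=5):
    """Seconds of overhead the proxy adds for this URL."""
    return (median_latency(proxied_fetch, url, runs)
            - median_latency(direct_fetch, url, runs))
```

Using the median rather than the mean keeps one cache miss or slow DNS lookup from skewing the comparison, which matters given how noisy the factors listed above are.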
The accuracy of each provider's URL and malware filter was another key component of our testing. There were no curveballs in this
section of the testing: The goal was simply to see how well each vendor appropriately categorized and blocked our attempts to
access a set of well-known spyware/malware domains.
We randomly selected 10 URLs from a huge database of known malware domains maintained at malwaredomains.com, and we
ran each vendor through the testing simultaneously. Because of the random nature of our domain selection, it's fair to argue that
our results may not tell the entire story in terms of just how accurately each vendor can filter malware. For the most part, we were
extremely pleased with how well each malware filter worked. With so many sites compromised on the global Internet, it's impossible
to expect any URL filter to stop each and every potential threat. However, if nine out of 10 sites that are known to host malware can
be filtered right from the get-go, IT managers have a tiered security capability when the service is combined with endpoint antivirus
and malware protection.
We also set out to see how well each vendor implemented Web security above and beyond simple URL filtering. We stuffed the EICAR
test virus, which antivirus vendors support for testing signature-based detection systems, into an encrypted zip file, a self-extracting
zip, and a zip file that was recursively zipped multiple times. All the providers in our roundup were able to detect the virus stuffed
inside the self-extracting and recursively zipped files.
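The recursively zipped payload is easy to reproduce with the standard EICAR string (a harmless, industry-standard antivirus test file). A sketch using only the standard library; note that stdlib zipfile cannot create encrypted archives, so that variant of the test is omitted here.

```python
# Build the recursively zipped EICAR payload used in this kind of testing.
# The EICAR string is the industry-standard, harmless antivirus test file.
# (Stdlib zipfile cannot create encrypted archives, so that case is omitted.)
import io
import zipfile

EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
         r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

def nested_zip(payload: bytes, depth: int) -> bytes:
    """Zip `payload` as eicar.com, then re-zip the archive `depth - 1` more times."""
    data, name = payload, "eicar.com"
    for level in range(depth):
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            zf.writestr(name, data)
        data, name = buf.getvalue(), f"layer{level}.zip"
    return data
```

A scanner that only inspects the outermost archive will miss the payload, which is exactly why recursive unpacking depth is worth probing in a filter.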
We were initially able to slip the encrypted zip file past everyone. However, Barracuda, Zscaler, and Webroot have administrative
controls that can be configured to disallow password-protected and encrypted zip files. Symantec did not natively provide this
functionality, and McAfee had no administratively configurable attachment policy at all.
Next, we tried to slip a booby-trapped PDF through each provider's security scanners. Our infected PDF contained active code that
could be used to install a downloader for distributing malware to infected clients. Every vendor detected and filtered the exploit.
We also threw a booby-trapped JPG and SWF file at each provider, with the same results. While not every service was extremely
configurable from a management perspective, all of the providers performed well at the core function of detecting nefarious
attachments and filtering them in the cloud.
For other attacks, we felt Zscaler had the most robust security capability. For example, Zscaler was the only provider able to
prevent cookie theft via cross-site scripting attacks. Zscaler accomplished this by applying a watermark to each cookie dropped onto
the computer. If those cookie contents were accessed by a third-party site, the Zscaler cloud assumed a cookie theft was underway
and blocked the transaction. Zscaler can also enforce policy at the cloud level against other XSS attacks. The Zscaler service offers
application, P2P, and file control features, bringing the most impressive array of security features to the cloud.
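The watermarking idea can be sketched conceptually: bind each cookie to the client it was issued to with an HMAC, so a cookie replayed from a different client (for instance after XSS theft) fails verification at the proxy. Zscaler's actual mechanism is proprietary; this only illustrates the general technique.

```python
# Conceptual cookie-watermark sketch: an HMAC binds the cookie to the client
# it was issued to, so a stolen cookie replayed elsewhere fails verification.
# Zscaler's real mechanism is proprietary; this illustrates the idea only.
import hashlib
import hmac

SECRET = b"proxy-side secret"  # held by the security cloud, never the client

def watermark(cookie_value: str, client_id: str) -> str:
    """Append an HMAC tying the cookie value to this client's identity."""
    mac = hmac.new(SECRET, f"{cookie_value}|{client_id}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{cookie_value}|{mac}"

def verify(marked_cookie: str, client_id: str) -> bool:
    """Check that the cookie is being presented by the client it was issued to."""
    value, _, mac = marked_cookie.rpartition("|")
    expected = hmac.new(SECRET, f"{value}|{client_id}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)
```

Because the secret lives only in the security cloud, a script that exfiltrates the cookie gains nothing: presenting it from another client identity breaks the HMAC and the proxy can block the transaction.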
The SaaS Web security providers offered good out-of-the-box reporting capabilities. In fact, there are some big name on-premises
appliances that could learn a lesson from how the cloud vendors are doing it. Most of the vendors in our lineup have very useful
dashboards that quickly display a range of pertinent information. You can get snapshots of top URLs, bandwidth consumed by
user and spyware/virus activity from all the players in our roundup. On the whole, Barracuda and Zscaler offered the most robust and
elegant reporting engines in the group. Both services offer a good selection of pre-canned reports, and you can drill down for more
detailed usage. All of the vendors, except for McAfee, offered the ability to download PDF-based reports, as well as the ability to
schedule the creation of new reports for quick access when needed.
In terms of value-added features, Zscaler includes basic data loss prevention functions as part of its core offering. Customers can
turn on dictionaries and enforce DLP policies for the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley
Act (GLBA), the Payment Card Industry (PCI) Data Security Standard and others through the management interface. Zscaler can
also apply those DLP policies to attachments and instant messages, so long as the transaction traverses the Zscaler proxy.
Webroot impressed us with its ability to execute proactive vulnerability scans against any system protected by its cloud. The
vulnerability assessment results were linked to explanations of each vulnerability, along with remediation instructions.
Compared with the cost of on-premises protection, cloud-based Web security is a relatively reasonable proposition. And, if cash is
tight, the Web services model may be a much more appealing option for smaller organizations. Prices are relatively similar across the
board, ranging from $1.50 to $5.00 per user per month.
If you're not ready to do Web security in the cloud, you're not alone. But based on our experience in the labs, the protection
technology is robust. Organizations with no on-premises Web security should consider adopting a provider. The sell will be tougher
for organizations with significant investments in on-premises equipment. As mentioned before, we see remote access and protection
of small branch/home offices as the best use of these services for midsize and large organizations that already have Web security
gateways in place at the main office. Meanwhile, as the market and the providers mature, we expect Web security services to grow
into a viable option across the board.