
Executive's guide to the 21st century data center



Copyright 2013 by CBS Interactive Inc. All rights reserved. TechRepublic and its logo are trademarks of CBS Interactive Inc. All other product names or services identified throughout this book are trademarks or registered trademarks of their respective companies. Reproduction of this publication in any form without prior written permission is forbidden.

Published by TechRepublic, April 2013

Disclaimer: The information contained herein has been obtained from sources believed to be reliable. CBS Interactive Inc. disclaims all warranties as to the accuracy, completeness, or adequacy of such information. CBS Interactive Inc. shall have no liability for errors, omissions, or inadequacies in the information contained herein or for the interpretations thereof. The reader assumes sole responsibility for the selection of these materials to achieve its intended results. The opinions expressed herein are subject to change without notice.

TechRepublic
1630 Lyndon Farm Court, Suite 200
Louisville, KY 40223
Online Customer Support: http://techrepublic.custhelp.com/

Credits
Editor in Chief
Jason Hiner

Head Technology Editor
Bill Detwiler

Head Blogs Editor
Toni Bowers

Senior Editors
Mark Kaelin, Jody Gilbert, Selena Frye, Mary Weilage, Sonja Thompson, Teena Hammond

Graphic Designer
Kimberly Smith


Contents
Introduction
Make your new data center a star
Don't let old data center concepts hold your business back
Can a data center really drive economic recovery?
Choosing a DCIM provider
The 21st century data center: Three game-changers
Determining the worth of an IaaS VM
IaaS provider comparison reveals market trends for the cloud
The role of UPS systems in energy-efficient data centers
Inside the Backblaze data center and its Pod 3.0 architecture
Rackspace: An overview of a major cloud player
Is data sovereignty a real concern?
Google's new Asia data centers will boost speed of services
Hong Kong is planning underground data centers
India's IaaS market to grow 40 percent until 2015


Introduction
Today's data centers are more critical than ever as the nerve centers of modern business. However, if their resources aren't managed diligently, the costs can easily spiral out of control. New technologies can make data centers more flexible, more powerful, and far more efficient. But most companies aren't starting from scratch, so migration projects have to be carefully managed and need to show a clear return on investment. At the same time, cloud and service providers like Amazon AWS and Microsoft Azure are wooing companies to stop building their own data centers and instead purchase data center capacity on demand, scaling it up and down as needed.

To help you sort out the options and develop the best data center strategies, ZDNet and TechRepublic pulled together this collection of information, analysis, perspective, and advice. We hope these insights on data center trends and technologies will help you make smarter decisions and steer your organization in the right direction.

Sincerely,
Jason Hiner
Editor in Chief


Make your new data center a star


By David Chernicoff

There was a time, in the not-too-distant past, when your data centers were very much behind-the-scenes operations. A few people worked inside, and the outward presentation was whatever applications your users had that made use of the IT workload of the data center. But data centers have, over the last few years, developed a much higher profile. It's not so much that their role has changed, though roles have certainly adapted to modern technology trends. It's that data centers have become the high-profile representation of a business's approach to the world (and hence, to the customers of the business).

The already outdated New York Times article a few months back that painted data centers as the Montgomery Burns of the business world, gleefully wasting power without a care in the world, highlighted to many people who don't have a clue what a data center actually is or does that data centers were out there and, with all the power being consumed, must be doing something. This type of media coverage has added another aspect to your data center management: what is now considered a social aspect.

While it seems odd to think about the impact of social media on your data center, consider this: The current drive toward more efficient computing platforms, using sustainable or renewable resources, and being a better corporate citizen of the world isn't happening in a vacuum. Corporations are spending money to make existing data centers more efficient, to reduce power consumption, and to find ways to reuse waste energy. And new data center designs tout their ground-up efficiencies and how they will provide better ROI and improve CAPEX and OPEX as they apply to the data center.

But many companies are missing an opportunity to get even more return on their data center expenditures. While businesses that build data centers or that are completely data center focused (such as Google and Facebook) make a concerted effort to play up their data centers and their efforts to be more environmentally friendly, it's not something you hear from more mundane data center operators. But perhaps it should be. If your company is spending the money to implement more efficient operations, develop business processes with a smaller environmental footprint, or simply save significant costs through improved energy efficiency or operation flow, those are all feel-good marketing opportunities for the business. And turning expense into opportunity just makes good business sense.


Don't let old data center concepts hold your business back
By David Chernicoff

The traditional data center, that dedicated facility housing the equipment necessary to support an IT workload, is clearly going the way of the dodo. Regardless of how much your business wants to keep direct control of its IT assets, the concept of the data center simply can't be what it used to be if you want a successful business.

Consider this: The planning for data center construction has almost always presumed that the facility would have a 20-year life span. Ignoring the IT part of the equation for the moment, this meant that the supporting infrastructure was also designed with that 20-year number in mind. Disregarding the items that could be considered consumables, such as the batteries in your backup systems, no one building data centers expected to make changes to the cooling, power distribution, backup generators, and UPS systems during that 20-year life span. And it can be conceded that no one was putting up a building that wouldn't last much longer than the 20-year life span of the data center itself.

Conversely, the components that actually performed the IT workload (servers, storage, networks, and so on) rarely had an expected life span of more than five years. This meant that in your average data center, the IT equipment would be completely replaced four times over the expected data center lifetime. It also meant that large-scale app dev projects had an underlying infrastructure issue; there was always the specter of the dreaded hardware refresh cycle to worry about, plus the potential refresh of operating system software done in parallel.

The rate of technology change has accelerated significantly, for both the basic infrastructure of the data center and the applications running within it. A data center built only five years ago, to the standard of the day, is likely to be hopelessly archaic when it comes to energy-efficient cooling and power, especially given that its design may be a few years older still. Where the focus was once solely on the ability of the data center to deliver the necessary IT services, there is now a much higher level of concern for what it actually costs to deliver those services over and above the direct costs of the IT workload. Where facilities and IT once worked without ongoing interaction, each unit carefully hoarding its perceived prerogatives, the reality has become that a tight integration of the two delivers the greatest benefit to the business. The ability to respond to changing business climates by modifying the way IT works has to be matched by the ability of the facilities organization to deliver that same sort of rapid response.


Both IT and facilities have to learn a valuable lesson about when to let go. IT groups threatened by the cloud are learning to integrate cloud-based development and services into their planning. Facilities groups that support IT can no longer focus strictly on building and infrastructure maintenance and development; they need to look outside the box (literally) for things that will improve their response time to business changes. Flexibility in delivering data-center-level services to the business is a requirement for future growth; being bound to anything, be it a technology, a facility, or a way of doing business, prevents all parts of your organization from seeing the best possible growth path for the company.


Can a data center really drive economic recovery?


By David Chernicoff

ViaWest, which recently opened its latest facility in North Las Vegas, has certainly created a data center that stands out in a colo market filled with competitors searching for a competitive edge. Its new Lone Mountain Data Center is an Uptime Institute Tier IV certified design, a level that has not previously been achieved by a colo facility and something that is certain to be a differentiator in the industry. With 74,000 square feet of raised floor space and LEED, Energy Star, and Green Globe certifications, ViaWest has pulled out all the design stops in building its flagship facility.

From the perspective of the data center industry, the new facility is an excellent example of the data center state of the art and an impressive accomplishment. But is that a reason for the City of North Las Vegas to see it as a turning point for a community that has been mired in the economic doldrums for a very long time? North Las Vegas has seen a few other businesses over the last few years that opened to major fanfare, then flared out and died quickly, leaving the city in no better economic shape. The highest profile of these, the ill-fated Amonix solar panel manufacturer, brought a significant number of jobs and attention; attention that proved to be detrimental when the operation collapsed in 2011, taking the jobs and area investment with it.

The ViaWest data center is certainly a low-risk operation in terms of concerns over the viability of the project. ViaWest is a well-established data center provider, and its decision to build a flagship facility means the data center isn't going anywhere anytime soon. But the nature of the data center business is that it doesn't bring long-term, large-scale economic prosperity to areas where one-off data centers are built. Even if other data center providers choose to build in the same area, as we are seeing in Oregon and North Carolina, the overall impact on the local economy in terms of jobs and related business, once the facilities are completed and operational, is usually pretty minimal.

North Las Vegas will be able to point to the ViaWest facility as a business that has chosen to be in its city. But to expect ongoing economic benefit seems unrealistic. While it is true that some number of people will relocate to the area to work in the facility, there is no compelling reason for them to choose to live in the same city. The nature of the valley is such that people are more likely to put quality-of-life choices ahead of proximity to the office when relocating themselves and their families. Hopefully, North Las Vegas will be realistic about what the ViaWest facility means to its city.


Choosing a DCIM provider


By David Chernicoff

Data center infrastructure management is an interesting topic. From the 20,000-foot view, it seems like a completely obvious idea for overall data center management. But even with decades of data center experience behind us, DCIM is only now bubbling to the top as a key component in state-of-the-art data center design.

Intelligent systems management, embedded in devices, is a relatively recent design feature when you look at the standards commonly applied to data center hardware. It wasn't that long ago that intelligent server management required add-in cards and out-of-band solutions to implement. But over the last few years, instrumenting just about every component found in the data center has become commonplace, along with inexpensive solutions for retrofitting devices such as server racks with monitoring and management equipment.

For years, the companies that built data center infrastructure equipment (as opposed to the IT load devices) have been developing and deploying management solutions for their hardware. In recent years, they have been building entire suites of management software designed to go far beyond simple management, adding everything from on-demand provisioning to goal-seeking analysis tools, and tagging these software management tools for their hardware with the moniker "data center infrastructure management." While initially resistant, data center operators are beginning to embrace the overall concept of DCIM and are looking for ways to integrate it with their general operations.

The major hardware infrastructure vendors, led primarily by the manufacturers of the big-ticket electrical equipment found in every data center, offer everything from stand-alone applications to full suites of DCIM-focused products. There's no question that DCIM is on the minds of IT buyers these days, and that is clear in the relatively recent appearance of DCIM features and DCIM-labeled management tools from traditional IT hardware and software vendors. With vendors of all stripes pushing unified and converged infrastructures, the selection of management tools across the board will have a far-reaching impact on the future of your data center.


The 21st century data center: Three game-changers


By Scott Lowe

As we approach the midpoint of the decade, a number of forces are coming together that could fundamentally reshape the data centers that have been carefully crafted over the previous decade. Let's consider three potential contributing factors and how they might affect data center plans in the next few years.

Cloud
Perhaps the biggest potential game-changer is the cloud. As cloud-based services continue their expansion, the march to the enterprise is inevitable. At first, companies will find themselves testing the waters with a small workload just to see how everything works. And then, as the cloud providers prove themselves capable of handling bigger workloads, and if the economics make sense, organizations will slowly move those tough-to-support workloads to the cloud, too. If this pattern sounds familiar, you're not going crazy. It's pretty much the same way that virtualization revolutionized the data center in the previous decade.

Organizations will ultimately begin to move to the cloud for the same reasons that they turned to virtualization. However, the cloud offers even more possibilities:

Pay for what you use. With virtualization, organizations were able to reduce their hardware costs by consolidating inefficient servers. But even with virtualization, there is still waste in the environment in the form of overhead and unused capacity. With a move to the cloud, there exists the potential to simply pay as you go, which translates into paying for what you use and no more.

Capital vs. operating costs. Data centers mean big capital expenditures; cloud services are operating expenses.

On-demand capacity expansion. Data centers aren't always the most flexible objects in the world. With the right cloud provider, as your needs fluctuate, you can add and reduce capacity at will.

Less time on IT, more time on business. As you spend less time on data center needs, there is more time for business-facing activities, at least in theory!

Over the next few years, I do expect to see many companies move at least some workloads to the cloud.

Converged infrastructure
As you probably know, the cloud is not for everyone. So the data center will continue to be integral for a lot of companies. Further, even for those companies that do embrace the cloud, it's unlikely that they will move everything. Some items will remain local.


However, for many, there is a need to simplify the data center, especially when it comes to supporting certain new initiatives, such as VDI. Building data center environments is a lot of work and requires significant planning to ensure there is capacity to meet ongoing needs, plus testing to make sure everything works together. Stop the madness! This is where the emerging hyperconverged market can be of great assistance.

Currently, there are three major players in this space: Nutanix, Simplivity, and Pivot3. Each company takes a building-block approach to data center architecture. Each block includes storage, compute, memory, and a hypervisor. As you begin to run out of resources in an existing block, you simply add another one. As time goes on, these solutions are becoming even easier to manage. For example, with Nutanix, as you add new blocks, they are automatically detected and added to the resource pool without an administrator needing to go through a discovery process. You can't get much easier than that.

Further, these solutions are often specifically targeted at certain kinds of workloads, such as VDI. They generally include hard disk-based storage and enough solid-state storage to support things like boot storms and login storms, as well as enough capacity to store hundreds or thousands of desktops. Obviously, these kinds of solutions aren't for everyone. But for IT shops that want to simplify the data center and redirect scarce resources toward the business, they could be a perfect fit. I fully expect to see these kinds of solutions enjoy great success in the coming years.

For those who don't fit the hyperconverged space, there are always solutions such as Dell's vStart and EMC's Vblock, which can meet data center integration needs. These solutions scale to meet the needs of even the largest organizations and also simplify the data center by providing a one-stop shop for the entire environment, along with a single phone number for support.

The economy
As time goes on, IT departments are continually being directed more toward the business and less toward the technology; hence the need and desire to turn to solutions like cloud and hyperconverged infrastructure. Even as the economy improves, I see IT as permanently changed and needing to continue refocusing attention on bottom-line-driven solutions. This will necessarily affect the data center in the years to come. CIOs will be forced to make different kinds of decisions than they may have made in the past, including simplifying the data center and redistributing workloads in new ways.

Summary
Of course, there are other factors at play when it comes to the next-decade data center. But I believe that the continuing rise of the cloud and converged/hyperconverged architecture, coupled with ongoing economic needs, will be significant drivers of these trends.


Determining the worth of an IaaS VM


By John Joyner

A virtual machine (VM) that runs in a public cloud, augmenting or extending traditional on-premise VM computing roles, is known as an infrastructure-as-a-service (IaaS) class VM. Individual business cases are the economic drivers that move IaaS class VMs into the public cloud. There are many motivations, such as seeking high availability, global access, and elasticity, where large public cloud providers can uniquely deliver value. For example, a public cloud provider's content delivery network (CDN) might be able to synchronize IaaS document libraries across regions more simply and cheaply than was previously possible. Other IaaS implementations may not have as clear a payback or return on investment (ROI), so the question arises: How much is an IaaS VM worth?


Apples to apples, then some math


Various public cloud and application providers offer business case calculators, but each is probably somewhat slanted toward the technology of its vendor. A good place to start when calculating ROI on cloud IaaS migrations is to know what it would cost to refresh existing infrastructure on new but similarly configured hardware and software platforms.

Let's say you have an application that features two load-balanced servers that, if deployed on two industry-standard servers (ISS) with 2 GB RAM, a dual-core CPU, a local hard disk, and a three-year warranty, represent at least a $5,000 capital expenditure. Operating system and management licenses can increase this by more than $3,000. A conservative $500 annual per-server power and facilities charge means it would cost about $11,000 to host the two-server application for three years on conventional infrastructure (excluding networking). So when looking to migrate that workload, anything less than an $11,000 total three-year expenditure, for equivalent or superior service, will be attractive. This amounts to about $306 per month using conventional infrastructure, or about $153 per month per server. You can go shopping for public cloud IaaS VM providers knowing what the relative costs are of doing it yourself.
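To make that arithmetic explicit, here is a minimal sketch in Python of the do-it-yourself baseline; the dollar figures are the illustrative ones from this example, not universal constants.

hardware = 5000          # two industry-standard servers with a three-year warranty
licenses = 3000          # operating system and management licenses
power_per_server = 500   # conservative annual power and facilities charge, per server
servers = 2
years = 3

total = hardware + licenses + power_per_server * servers * years
per_month = total / (years * 12)

print(f"Three-year total: ${total:,}")                       # $11,000
print(f"Per month: ${per_month:,.2f}")                       # ~$305.56, i.e. about $306
print(f"Per server per month: ${per_month / servers:,.2f}")  # ~$152.78, i.e. about $153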


Compare rates, but consider the big picture


At Amazon Web Services (AWS), a Medium Utilization Reserved Small Instance VM on a three-year term has a total three-year cost of $1,261, or about $36 per month per server. Microsoft's Windows Azure Virtual Machines (Preview) service Small Instance VM is $57 per month with no multi-year obligation. Rackspace Cloud Servers lists a similar computer at $117 per month. Obviously, all three public cloud options are cheaper ($36, $57, and $117 per month) over the three-year period than doing it yourself with traditional on-premise servers ($153 per month).

At first glance, AWS has the cheapest option, but here the comparisons across public cloud providers become more nuanced. For example, the AWS price requires a three-year commitment, and there are higher prices for shorter terms. The Azure offering uniquely includes geo-replicated hard disks in the base price. Another difference is that the Rackspace offering includes bandwidth, while the other providers charge extra for networking. Each provider's blend of services adds complexity to the decision making, but provider diversity can be leveraged to use the highest-value features of each public cloud provider. Rarely will the base monthly run rate of the IaaS VM be the decision maker. Do the math to calculate what it would cost to do it in-house; then verify that the public cloud solutions cost less out of the gate. If your idea passes the first test, move to analyzing the details of the cloud provider offerings (a quick cost sketch follows this list).

Here are some valid considerations for selecting a public cloud provider, after quantifying the ROI in your particular business case:

Your near-term public cloud plans are part of an overarching cloud strategy, which should be the primary decision maker when it comes to cloud providers.

A particular provider may or may not permit virtual private network (VPN) connections between the public cloud and either an on-premise network or another public or private cloud.

Your operating system or vendor application stack has tools that permit easy movement of workloads into a particular cloud provider.

Some other factor, such as the cloud provider's CDN or networking charges, is particularly suited to the business needs of the workloads.

You are staging workloads across public cloud providers on purpose, effectively removing a particular public cloud as a single point of failure for your enterprise application.
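Extending the earlier sketch, here is how those quoted 2013 list prices stack up over a common three-year window; the comparison deliberately ignores the bandwidth, storage, and geo-replication differences noted above, which can swing the real answer.

# Per-server, three-year totals for the quoted list prices.
providers = {
    "AWS reserved Small Instance (3-year term)": 1261,  # quoted three-year total
    "Azure VM (Preview) Small Instance": 57 * 36,       # $57/month, no term commitment
    "Rackspace Cloud Servers": 117 * 36,                # $117/month, bandwidth included
    "DIY conventional infrastructure": 153 * 36,        # from the earlier baseline math
}
for name, total in providers.items():
    print(f"{name}: ${total:,} over three years (${total / 36:,.2f}/month)")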

The example in this article was for two modest-capacity application servers. Your situation might involve complex workloads, such as database and transaction processing, that require more robust, more expensive IaaS class VMs. Regardless of the VM category and selected public cloud service provider, small-scale pilot projects to gather precise cost/performance metrics are a must to validate cost and ROI models in public cloud projects.


IaaS provider comparison reveals market trends for the cloud


By Thoran Rodrigues

For me, there are two main uses for any kind of comparison data. The first is to pit providers against each other, checking to see who is going to offer cheaper servers, better support, and so on. This is useful for newcomers to the cloud, people who are looking to change providers, and similar cases. The second, less obvious application of any comparison data is market exploration: looking at how the companies position themselves on the dimensions specified and trying to find market trends from this positioning.

My latest cloud IaaS vendor comparison includes 16 companies, broken down into Top-of-Mind companies (the leaders, the ones we immediately think about when talking about the cloud) and the Upstarts (companies that are less well-known but still provide good service). I created two subsets of dimensions: one of Cloud Promises, covering the main promises of cloud computing (cost optimization, scalability and automation, and flexibility), and one of User Concerns, dealing with the greatest concerns users have when moving to the cloud (security, vendor lock-in, and reliability).

There are, in fact, several interesting studies we can extract from the data, but some care must be taken. Our first step for analysis is data normalization (not to be confused with database normalization), so that we can compare the companies without a single dimension distorting our analysis. Normalization was done using a simple standard score; discrete dimensions were placed on a 0-2 scale, where 0 is the lowest (worst) grade and 2 is the highest (best) grade. From here, two simple charts compare the average scores of the Top-of-Mind companies with the Upstarts. (If you're wondering about the scale, it's in z-scores.)
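For readers who want to reproduce the normalization step, here is a minimal sketch of the standard (z) score calculation in Python; the provider names and raw values are hypothetical stand-ins for any one dimension in the spreadsheet.

from statistics import mean, pstdev

# Hypothetical raw values for one dimension (say, number of data centers).
raw = {"ProviderA": 26, "ProviderB": 8, "ProviderC": 3, "ProviderD": 11}

# A z-score expresses each value as standard deviations from the group mean,
# so no single dimension's units can distort the comparison.
mu, sigma = mean(raw.values()), pstdev(raw.values())
z_scores = {name: (value - mu) / sigma for name, value in raw.items()}

for name, score in sorted(z_scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.2f}")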


From this first chart, we can see that the Upstarts (the smaller companies fighting against the leaders in this market) are, on average, charging lower prices and offering a higher degree of customization of virtual instances (more Instance Types); the larger providers, as expected, have more robust offerings, bringing to the table more data centers, better monitoring tools and APIs, and a greater variety of pricing plans. On the User Concerns, we can also see some interesting polarization: The smaller providers seem to be focusing more on offering premium service (through extended service hours and more contact channels), more aggressive SLAs, and the possibility of uploading your own VMs to the services. However, the larger ones have much better security ratings.

These charts look only at the averaged data, so let's examine the details: In the Cost Reductions / Optimizations section, Lunacloud comes out in front. It has the cheapest servers I've found, at roughly US$46.00 per month, as well as the cheapest storage cost and the second cheapest outbound data cost (no charge for inbound). In the Scalability and Automation section, Amazon and SoftLayer come out on top, offering rich APIs, full scale-out and scale-up functionality, and good monitoring tools to round it out.


In the Choice and Flexibility section, AT&T wins by sheer numbers: 26 global data centers to choose from, with fully configurable instances. And even though it supports only two operating systems out of the box, you can upload your own VM images with whatever software you want. In the Security Features section, many companies are mostly tied, offering data centers with all the required certifications and some security features, though these are far from complete. The same thing happens with Ease of Migration. Most companies, especially those employing VMware technology, allow for easy VM upload and download, simplifying the life of IT departments.

In the Reliability section, Rackspace, OpSource, SoftLayer, and Hosting.com take the top spot, with services that have been running for more than five years, aggressive SLAs, and extensive support.

The rise of VMware is an especially interesting trend. While some of the large providers are spending time and effort trying to create open source cloud standards, such as OpenStack and CloudStack, VMware is quietly taking over the market. In the group of providers checked, there are more companies using VMware technology than any other cloud standard. This trend could end with VMware becoming the true cloud standard, and it does, in fact, make sense: Many midsize and large companies already use VMware internally, so the migration from internal data centers to the cloud becomes much easier if the cloud provider offers technology the company is familiar with. All providers that work with VMware also offer the possibility of clients uploading their own VMs, making any transition even simpler.

Finally, we can look at who stands out from the pack. For this, a simple analysis is to sum the normalized scores for each provider, assigning an equal weight to all dimensions. In this case, the top providers with respect to the Cloud Promises are SoftLayer, OpSource, Rackspace, Amazon, and Lunacloud. These providers all have low prices, good APIs and monitoring tools, and many instance types and data centers to choose from. Looking at the User Concerns, the top providers are SoftLayer, OpSource, Hosting.com, and Tier 3. Rackspace and Amazon are solid but don't stand out as much here. The top providers in this category all offer excellent customer service and aggressive SLAs and have all the security certifications on their data centers. They are also services that have been online for five years or more, so they have a longer track record.

Many other interesting trends can be extracted from the data. I'm providing the full Excel spreadsheet, already with the numeric transformations and the normalization, so that anyone who wants to can work with the data. Rank the providers, add (or remove) dimensions, add other providers, place weights on the dimensions to change the scores, or simply change the data around.
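The equal-weight ranking described above is just as simple to reproduce; here is a sketch with hypothetical z-scores, where adjusting the weights list is how you would emphasize the dimensions that matter most to you.

# Hypothetical normalized scores per provider across three dimensions.
scores = {
    "ProviderA": [1.2, -0.3, 0.8],
    "ProviderB": [0.4, 1.1, -0.2],
    "ProviderC": [-0.9, 0.5, 0.6],
}
weights = [1.0, 1.0, 1.0]  # equal weights, as in the analysis above

totals = {name: sum(w * s for w, s in zip(weights, vals)) for name, vals in scores.items()}
for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total:+.2f}")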


The role of UPS systems in energy-efficient data centers


By Rick Vanover

Too many times when we design data centers, we monitor power consumption at the device only. This could be a storage array or a server in many cases. Sometimes, we might also check in on the power distribution units (PDUs) that may be in use within a rack. But do we look closely enough at our uninterruptible power supply (UPS) systems? Chances are we may find efficiency gains there, which matters when we have to be constantly mindful of data center power consumption.

Recently I read a blog by Troy Miller that got me thinking about this very topic. The logic here is that if the UPS unit is very efficient, the overhead associated with power protection can be reduced. And that might be the extra few percent of improvement needed in the power- (and space-) constrained data centers of today.

Now, to set the record straight, all UPS devices are not created equal. There are three main types: VFD (standby), VI (line-interactive), and VFI (online double conversion). These types were enumerated recently by APC, a leader in the UPS space. There is a certain amount of competitive tension going around on this topic, and the fact that all devices are not created equal is important. The VFI type is preferred by many: AC power is received, it charges the battery as DC, and then the battery power is converted back to AC for the devices to consume. In this way, there is no interruption if there is a power loss, because the second stage draws from the battery regardless.

Some electronic devices in the data center may require a certain type, such as the VFI. In my experience (albeit quite a while ago), I had a telecom system that required that type of power conversion. The system provided voicemail for my office, and for some reason it was sensitive enough to detect normal power fluctuations if a regular UPS (VFD type) switched from AC to battery. Other sensitive systems may require power conditioning, which I also have experience with. In fact, I've even put power conditioning in front of a UPS unit. That gives very clean and reliable power, but it requires a high power overhead.

The takeaway is to know the devices in your data center, big or small. It also may be worth implementing high-efficiency UPS devices that do VFD for one range of equipment and another group that does VFI for the most critical areas. You can get additional power efficiency via the UPS if you need it.
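To put rough numbers on that "extra few percent," here is a small sketch; the load, efficiency, and electricity figures are hypothetical, not vendor specifications, since real UPS efficiency varies by type and by load level.

it_load_kw = 500         # hypothetical IT load
hours_per_year = 8760
price_per_kwh = 0.10     # hypothetical electricity rate

# Compare annual conversion losses at two hypothetical efficiency levels.
for efficiency in (0.94, 0.97):
    loss_kw = it_load_kw / efficiency - it_load_kw
    annual_cost = loss_kw * hours_per_year * price_per_kwh
    print(f"{efficiency:.0%} efficient: {loss_kw:.1f} kW lost, ~${annual_cost:,.0f}/year")

# At these assumptions, the three-point efficiency gain saves roughly $14,000 a year.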


Inside the Backblaze data center and its Pod 3.0 architecture
By Scott Lowe

Have you ever wondered how some online backup companies can afford to provide their services at what seem to be insanely low prices? One company, Backblaze, has for quite some time made its hardware specifications freely available. This has allowed those requiring mass storage to rely on a proven architecture that's used every day to support a profitable business. Backblaze recently released version 3.0 of its Pod architecture, which sports a whopping 180 TB of capacity in just a few rack units of space and at a cost of less than 6 cents per gigabyte. That figure includes all costs for the storage chassis, 45 x 4 TB hard drives, and all of the components and electronics that make the solution work its magic.

How do they do it? First of all, Backblaze has spent an incredible amount of time testing and validating every part that goes into its Pod architecture. It does so in both lab and real-world, mission-critical, customer-facing environments. As a result, the company is an authority on which specific parts work and which don't. For example, in Pod 3.0, Backblaze recommends the use of a specific brand of SATA cable. For many, a SATA cable is a SATA cable, but for a company like Backblaze, a faulty SATA cable can mean a really bad day for a customer, so everything is tested.

Perhaps the most important part of the Pod, the chassis, has undergone some refinements in Pod 3.0. It's a known fact that physical vibration can be a performance killer. Any unexpected movement of a hard drive read or write head requires that drive to spin back around to correct the issue. With Pod 3.0, Backblaze has made improvements to the chassis with an eye toward reducing vibration. Each row of 15 drives now includes its own anti-vibration assembly intended to address this issue. In addition to helping keep performance steady, reducing drive vibration can improve disk failure rates.

All told, this 180 TB behemoth costs just under $11,000 to build with 4 TB hard drives. A 3 TB hard drive version yielding 135 TB of raw capacity can be built for $7,567. Backblaze explains that hard drive prices have remained relatively high over the past couple of years, so storage costs are not dropping as they once were. But with the introduction of 4 TB drives, the company is able to squeeze more capacity into a single unit, which provides more storage for less power. In that way, the company can somewhat reduce costs with Pod 3.0. In fact, Backblaze even provides these figures: With 3 TB drives, the company's cost for the storage is 63 cents per terabyte per month on a full-rack basis. With 4 TB drives, this cost plummets to 47 cents per terabyte per month.


If those numbers appear too low, they aren't. That's the amortized cost of storage for Backblaze on a monthly basis. The company estimates that the lower ongoing cost of the 4 TB drives means the more expensive upfront costs are recovered in around five months.

For me, one of the most impressive aspects of the company is its willingness to openly share its hardware specifications and cost figures. If you jump to Appendix A in this Backblaze blog post, you will find a comprehensive list of parts that make up the Backblaze Pod solution. Bear in mind that Backblaze enjoys some significant economies of scale with regard to procurement, so your own cost may be a bit higher if you decide to try to build one on your own.

From a performance perspective, Backblaze's services are certainly designed to maximize capacity. Data trickles into Backblaze's data centers and traverses the storage infrastructure. As you may imagine, there is significant write traffic, and read traffic increases when a customer needs to recover a system. Backblaze is focused on providing inexpensive but reliable backup space for its customers, and it does that well with Pod 3.0.
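A quick back-of-the-envelope check on the cost-per-gigabyte figures, using the published build prices (the $11,000 value rounds the article's "just under $11,000," so the result lands at roughly 6 cents):

# Published Pod 3.0 build costs and raw capacities.
builds = {
    "4 TB drives": (11_000, 180_000),  # approximate cost in USD, raw capacity in GB
    "3 TB drives": (7_567, 135_000),
}
for name, (cost, gigabytes) in builds.items():
    print(f"{name}: ${cost / gigabytes:.3f}/GB")  # ~$0.061/GB and ~$0.056/GB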


Rackspace: An overview of a major cloud player


By Nick Hardiman

Let's say I am part of an organization that wants to test drive a cloud computing service now, with a view toward entering into a managed service partnership later. Should I try Rackspace? Many companies seem to be doing so. Is Rackspace the best place for my organization's people to play around with off-premise computing?

Rackspace is a large and successful hosting provider. It has had its share of data center problems in the past and has plenty of happy customers now. Rackspace is one of the hyperscale providers, along with Google, Facebook, and Microsoft. But Rackspace isn't the biggest; AWS is. Many data centers, managed by an army of engineers, sysadmins, and operators, host a zillion servers for customers around the globe. They've got a great SLA, but so has HP. Rackspace had the cojones to kick off the OpenStack project, but I don't know what difference that makes to me. There is no doubt they are trying hard to be the best in the business: the most reliable, the most supportive, and even the most open. Have they got what I want?


The Rackspace edge


In the world of cloud computing, all the big players provide instant access, customer self-service, and utility-style bills. Those are qualifiers for membership in the cloud club, not differentiators. What benefits does Rackspace offer that are different from the other big players? The Rackspace people are clear about what benefits they offer, and they have a strong marketing department that makes sure the message comes across loud and clear. Who hasn't looked at a trade magazine (paper or Internet version) and read about Rackspace's Fanatical Support, OpenStack, Interoperability, and so on? But what does that mean? Why are these words important? And what's that capitalization telling me?

Fanatical Support
Rackspace pushes the Fanatical Support angle. You can't move your mouse on its website without a chat window popping up prompting you to talk to its support staff. Computer systems are the most complicated machines ever built by humans.


Nothing spooks an organization like the fear of being left alone in the dark when systems fall over. Rackspace has gone as far as registering Fanatical Support and a few variants as trademarks, making it a central pillar of its entire business.

OpenStack
Rackspace kicked off the OpenStack cloud management platform project. Big players, including Rackspace, HP, and NASA, use OpenStack apps within their data centers. The development of OpenStack as an open source project is leading to a new business ecosystem forming around it. In the cloud ecosystem world championships, OpenStack is not a finalist (AWS wins and VMware is runner-up). But these are early days. OpenStack is a young project, and it has a lot of potential to develop. The open source applications are timesavers for coders.

Interoperability
Interoperability reassures early adopters that they are not entering a relationship they can't escape later. Interoperability is the ability to chop and change between cloud providers. The FUD (fear, uncertainty, and doubt) of vendor lock-in has always been useful in marketing. In the 1990s, IBM was advertising the exclusive power of the AIX OS, while Sun marketed Java as write once, run anywhere freedom of choice.

In truth, cloud interoperability is patchy at best. Cloud computing is just too young a segment to provide stable interfaces to build on; there is too much change happening. OpenStack is more about making interoperability possible than delivering it today. At least the information needed to get the job done lies in the public domain, rather than being hidden as company secrets.

Interoperability will, eventually, allow an organization to do things like this:

Automatically and seamlessly transfer data between cloud storage providers.

Spread a workload across cloud computing providers.

Take custom code from one platform and run it on another without the need for an expensive migration project or support contract.

Infrastructure test drive


It's usually impossible to say up front whether a certain cloud provider has what my organization needs. There just isn't enough knowledge to make an informed decision. There is one way to find out for sure: Sign up with Rackspace, run up a couple of test services, and find out how it feels.


Test-driving infrastructure was not possible in the old on-premise IT world. It's not possible to convince IBM to drive a truck full of computers and software to the office, unload it, and install it just so a prospective customer can play with it. How does an enterprise customer convince IBM to dump a lab's worth of computing power one month, HP's truckload to arrive the next, and Dell's the month after that? In the new cloud world, an enterprise can play with the large computing installations of half a dozen vendors and see which one fits best with its business.


Is data sovereignty a real concern?


By Spandas Lui

Data sovereignty has been a persistent concern for Australian companies that think their data has less protection when it's hosted overseas. But data center provider Digital Realty, among other global companies, doesn't believe that concern is legitimate.

The US Patriot Act has become the bogeyman when it comes to why Australian companies should be keeping their data onshore. There is a perception, which some claim is a misconception, that the Act gives the US government unfettered access to data hosted in the US or by a US company, a view that Australian cloud providers have used to market their own services. US-based provider Microsoft is dismissive of this perception, even claiming that Australia's data-protection laws are just as bad, if not worse, compared with those of the US. The Commonwealth Bank, which is a client of Amazon Web Services (AWS), has come out to rubbish the data sovereignty issue as well.

In terms of the Patriot Act fears, if a company isn't doing anything nefarious, it shouldn't worry about the US government gaining access to its information, said Digital Realty's senior vice-president and regional head for Asia Pacific, Kris Kumar. "Unless you're doing something wrong or against the law, you have nothing to worry about," he said. "Any act that exists needs warrants to access that data; the government can't clamp down and access the data willy-nilly."

According to Kumar, data sovereignty shouldn't be much of a concern for Australian businesses. Digital Realty has a network of data centers across the globe, and in the last 14 months, it has made a big push into Australia, building and acquiring several data centers in the country. "I think data sovereignty is a partially made-up issue," Kumar said. "There is no regulatory impact to hosting data offshore other than taxation issues, and tax offices globally are working to pin down the tax provisions under some of the business offerings from the global cloud providers; but that's a whole separate issue."

Cloud providers that host data in data centers overseas are aware of the importance of keeping customer data safe and are smart enough to know they can't compromise on this matter, Kumar said. He fears that the data sovereignty concerns in Australia will stifle the growth of the country's cloud industry. "There is a lot of hype around this issue, and some protectionism going on in terms of the current domestic environment in Australia," he said. "Cloud is pretty much an organism that multiplies, grows, connects, and does various things. If you start to create a situation where you believe everything has to be hosted onshore, you will create a data desert rather than a data oasis."


Having your data disconnected from the rest of the world will do no good for anybody from a business perspective. It pays to be part of a globally connected cloud, according to Kumar, though he acknowledged that it makes sense to host business-critical data and applications that are latency- and time-sensitive locally. "But the productivity applications such as Microsoft 365 and AWS services for computing are not customer-data sensitive and can be done from anywhere," he said. "There is a place in the world for both onshore and offshore hosting."


Google's new Asia data centers will boost speed of services


By Jamie Yap

With Google's new data centers in Asia scheduled to go operational this year, users in the region, such as those in India, can enjoy the Internet giant's services, including Web search and YouTube videos, at faster speeds.

In a recent report by The Economic Times, Lalitesh Katragadda, country head for India products at Google, said Internet connectivity speed in India is not very high, and the new data centers will be crucial to this market due to their proximity. Google started construction of the new data centers in Hong Kong, Singapore, and Taiwan in late 2011. The Singapore facility is expected to be complete early this year, while the one in Taiwan will be complete by the second half of this year. Google has not yet given a clear timeframe for completion of its Hong Kong data center, the report said. Outside Asia, Google has seven data centers in the US and one each in Finland, Belgium, and Ireland. Katragadda said India was not chosen as a location due to its hot weather.

He also said that some Google services, such as YouTube and Google Hangout, can't currently be accessed in India at optimal speeds. With the new centers, the time taken to access such services will drop dramatically, which will help boost user adoption. In the report, Raj Gopal AS, managing director at Bangalore-based NxtGen Datacenter and Cloud Services, said that the proximity of the Asian data centers could make Google services 30 percent faster. Typically, countries located close to data centers enjoy faster access to data on the Internet. The report said that India is ranked 112th globally in Internet speed, according to Akamai Technologies.

Citing figures from StatCounter, the report also said that India is one of the largest markets for Google, with more than 100 million users and a 95 percent share of India's search market. Over the next three years, Google expects 500 million online users from emerging markets, as compared to 15 million from the United States. "More people from India are coming online every day and this is an important market as Google looks to bring the next one billion online," Katragadda said. "We plan to invest disproportionately in India in the coming months and years."


Hong Kong is planning underground data centers


By Ellyne Phneah

Hong Kong's government is looking to create new space by building data centers in underground caves in the special administrative region. Speaking at the recent Datacenter Space Asia Conference, Hilary Cordells, partner at local real estate law firm Cordells, revealed that plans for subterranean data centers were in progress, The Register news site reported. "Rock cavern development can be done, and data center use is a particularly good one. It's on the government's radar screen and it's taking active steps," Cordells said, adding that some sites will be more suitable than others. It is also possible for ownership to be divided according to different strata, since legally, the person who owns the ground owns everything beneath it to the center of the earth and everything above it to the heavens.

While the theory is sound, the environmental impact of construction could be high, as water tables would need to be lowered and toxic material removed. Engineering consultancy firm Arup released an initial feasibility report last year, which said that underground facilities in Hong Kong will benefit data center owners by increasing security, as they reduce the risk of accidental impact, blasts, and acts of terrorism. The report also found that two-thirds of Hong Kong has land of high to medium suitability for digging rock caverns. Within the territory, five areas of more than 20 hectares apiece have been selected in the report as strategic cavern areas, which can technically accommodate multiple cavern sites. The areas include Hong Kong Island's Mount Davis, Kowloon's Lion Rock, the New Territories' Tuen Mun and Sha Tin, and Lantau Island's Siu Ho Wan.


India's IaaS market to grow 40 percent until 2015


By Swati Prasad

India is still in the very early stages of cloud adoption, driven primarily by infrastructure-as-a-service (IaaS) deployments. A study by Knowledgefaber, a consultancy and research firm, on cloud computing adoption among India's enterprises indicates that IaaS is the cloud delivery model that will see higher adoption rates among these companies. The IaaS market in the country is set to grow at a 40 percent CAGR (compound annual growth rate) between 2011 and 2015.

Currently, about 40 percent of total IT spending in India comes from large enterprises, and substantial investment is expected from this segment in cloud computing as well. With legacy infrastructure in place and existing investments in enterprise software licenses, the migration path in terms of cloud delivery and deployment models for large enterprises will be different from that of small and midsize businesses (SMBs). Enterprises in the manufacturing, BFSI (banking, financial services, and insurance), healthcare, information and communication technology (ICT), government, and education sectors are expected to be major cloud adopters, driven by higher IT spending and an increasing need for cloud-based services, the study said. SMBs, however, are leveraging cloud-based applications such as CRM (customer relationship management), ERP (enterprise resource planning), SCM (supply chain management), and other collaboration and communication applications to boost organizational efficiency and productivity and effectively compete with larger enterprises.

But while India forms 17 percent of the total cloud market in Asia-Pacific, domestic firms are still in the phase of understanding cloud benefits as they look to cut IT costs and achieve higher IT services availability. Current IT architecture has led organizations worldwide to allocate nearly 70 percent of their IT budgets to keeping existing applications running, leaving only 30 percent to create new value. With increased business complexity and declining profit margins, companies are now looking at cloud computing as a new IT business model to create more value and achieve cost benefits and business agility, Knowledgefaber said. The total cloud computing market in India was estimated to be worth US$458 million in 2011. By comparison, the global cloud computing market was worth US$34.7 billion in the same period.
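For context on what a 40 percent CAGR implies, here is a small sketch of the compounding; the report does not give a 2011 IaaS base figure, so the starting value below is purely illustrative.

# Compound annual growth: value_n = value_0 * (1 + rate) ** years.
rate, years = 0.40, 4  # 40% CAGR, 2011 to 2015 (four compounding years)
multiplier = (1 + rate) ** years
print(f"Growth multiplier over {years} years: {multiplier:.2f}x")  # ~3.84x

base_2011 = 100  # illustrative 2011 base in US$ millions (not from the report)
print(f"Illustrative 2015 value: ${base_2011 * multiplier:,.0f}M")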



Asia leads cloud growth


Cloud computing is now witnessing robust adoption across the world, especially among developing markets in Asia-Pacific and Latin America. India, China, Australia, and Singapore are leading the way for the Asia-Pacific region. Robust uptake of mobile devices such as smartphones and tablets, the need for online storage, and improving Internet and broadband availability will provide the impetus for cloud adoption in the region, according to the report. Conversely, security concerns, government regulations, and a lack of clarity on business benefits from the cloud remain major hurdles for adoption in the region.
