

Future-Proofing

Colo Design
Supplement
Sponsored by
INSIDE

Racks of the future
We're used to the 19in rack. Some of us are comfortable with the 21in rack. But why stop there?

Designing for chaos
We're still designing for a world that is not on fire. It's time to accept climate change, and build for it

Safe and secure
It might be mostly security theater, but protecting your colo from intruders matters
First lithium-ion battery cabinet designed by data center experts for data center users.

Cut your total cost of ownership with the Vertiv™ HPL lithium-ion battery system, a high-power energy solution with best-in-class footprint, serviceability and user experience.

Smaller, lighter and lasting up to four times longer than its VRLA counterparts, the newest generation of lithium-ion batteries pays for itself in a few years. The Vertiv HPL battery cabinet features safe and reliable lithium-ion battery modules and a redundant battery management system architecture with internal power supply. These features, and the cabinet's seamless integration with Vertiv UPS units and monitoring systems, make it ideal for new deployments or as a replacement for lead-acid alternatives. Plus, its user-friendly display delivers a best-in-class user experience.

Why choose the Vertiv HPL Energy Storage System?

• Safe, reliable backup power that offers a lower total cost of ownership
• Design that is optimized for use with Liebert® UPS units and for control integration
• Warrantied 10-year battery module runtime at temperatures up to 86°F/30°C for improved power usage effectiveness (PUE)
• Superior serviceability

Visit Vertiv.com/HPL

© 2019 Vertiv Group Corp. All rights reserved. Vertiv™ and the Vertiv logo are trademarks
or registered trademarks of Vertiv Group Corp. While every precaution has been taken to
ensure accuracy and completeness here, Vertiv Group Corp. assumes no responsibility, and
disclaims all liability, for damages resulting from use of this information or for any errors or
omissions. Specifications are subject to change at Vertiv’s sole discretion upon notice.
Colo Design Supplement

Sponsored by

Contents

4  The future of hardware: Why racks and cooling aren't a done deal
8  Advertorial: Vertiv lives the dream
10 Resilient design: Can your facility go off-grid and survive a catastrophe?
13 Post human design: Facilities still need people
14 Colo security: Let the right one in

Designs for the future

After many years of building data centers, you might think that the innovation was done, but that isn't the case. This supplement looks at how design is changing in the most important areas.

Racks and cooling are the most basic elements of any facility, but they are up for some of the biggest potential changes (p4). Niche areas like HPC have the biggest leeway to rock the boat. They have higher margins and higher demands for performance. Cloud providers with complete control of their hardware are also leading the charge. At one French provider we saw racks shifted to a horizontal position, and a custom cooling system designed and built for one provider alone.

That doesn't seem to make sense in a world where commoditization normally drives out diversity. But what's happening is that changes in one part of the ecosystem are driving developments elsewhere.

Living through disaster has risen through the priorities of data center operators, as the likelihood of freak weather increases, and political tensions fluctuate and roil. Apocalyptic as this sounds, it turns out that a lot of the preparation for this is emerging right now (p10). Data centers already have to plan for power outages, with battery power and UPS. Now they need to add floods and storms.

If they can also arrange independence from the grid, then so much the better. A data center which can support itself this way is also helping reduce overall environmental impact. And if it can share some of that strength with the surrounding grid, then it is very much part of the solution.

Goodbye to humans? It looked like lights-out operation might be coming, allowing IT equipment to be operated in sealed environments suited to electronics, not breathing organisms (p13). But the customer is still human. Even for vanity alone, data centers still need life support in case the masters should drop by, either to fix the hardware or impress a client. For now at least. If Microsoft succeeds in moving data to the sea bed, server huggers may finally decide to stay ashore.

Finally, security. You might think this remains the same: it's all about keeping data private and avoiding breaches which can harm your organization. But if there is a radical change to architectures, then you need to rethink how you secure the data and the premises that hold it (p14). In a nutshell, when virtual and physical spaces are increasingly shared, this increases the necessity to secure physical access. But ironically, as outsourcers hold other people's data, the same process might actually decrease the likelihood that they will be properly secured.

Security is never complete, and neither is the overall design issue. This supplement updates the story, but it will not be the last word. This year's design ideas could be obsolete in a couple of years' time, and so could some of the business models driving them. The common factor will be the relentless pressure to find creative solutions to the shifting requirements of the field.

Supplement designed by Dorothy McHugh



Colo Design Supplement | Hardware of the future

FUTURE OF
HARDWARE
You might expect the physical design
of data center hardware to be pretty
standard by now. Don’t be so sure…

"REITs are not the


perfect answer. We are
a large option, not as
constrained a REIr. We
are a large option, not as
constrained a REIT"

Peter Judge
Global Editor

Servers have been installed in 19in
racks since before data centers
existed. The technology to cool
the air in buildings has been
developed to a high standard, and
electrical power distribution is a
very mature technology, which has made only
incremental changes in the last few years.
Given all that, you might be forgiven
for thinking that the design of data center
hardware is approaching a standard, and
future changes will be mere tweaks. However
you would be wrong. There are plenty of
radical approaches to racks, cooling and
power distribution around. Some have been
proclaimed for years, others have come
from out of the blue. Not all of them will gain
traction.

Rack revolution
For someone used to racks in rows, a visit
to one of French provider OVH’s cloud
data centers is a disorienting experience.
OVH wants to be the major European cloud
provider - it combines VMware and OpenStack based platform as a service (PaaS) public clouds and hosts corporate private clouds for customers - but goes against the standard fare of the industry. Instead of standing vertically, OVH's racks lie horizontally.

Close to its French facilities in Roubaix, it has a small factory, which makes its own rack frames. These "Hori-Racks" are the same size as normal 48U racks, but configured completely differently. Inside them, three small 16U racks sit side-by-side. The factory pre-populates these "Hori-Racks" with servers before they are shipped to OVH facilities, mostly in France, but some in farther corners of the world.

The reason for the horizontal approach seems to be speed of manufacturing and ease of distribution: "It's quicker to deploy them in the data centers with a forklift, and stack them," said OVH chief industrial officer François Stérin.

Racks are built and tested speedily, with a just-in-time approach that minimizes inventory. Three staff can work side by side to load and test the hardware, and then a forklift, truck or trailer (or ship) can move the rack to its destination, in Gravelines, Strasbourg or Singapore. In the building, up to three racks are stacked on top of each other, giving the same density of servers as conventional racking.

OVH can indulge its passion for exotic hardware because it sells services at the PaaS level - it doesn't colocate customers' hardware. Also, it takes a novel approach to cooling (see later) which allows it flexibility with the building itself.

You don't have to go as far as OVH in changing racks: there are plenty of other people suggesting new ways to build them. The most obvious examples are the open source hardware groups like Open Compute Project (OCP), started by Facebook, and Open19, started by LinkedIn.

Both operate as "buyers' clubs," sharing custom designs for hardware so that multiple customers can get the benefits of a big order for those tweaks - generally aimed at simplifying the kit, and reducing the amount of wasted material and energy in the end product. Traditional racks, and IT hardware, it turns out, include a lot of wasted material, from unnecessary power equipment down to manufacturers' brand labels.

OCP was launched by Facebook in 2011 to develop and share standardized OEM designs for racks and other hardware. The group's raison d'etre was that webscale companies could demand their own customized hardware designs from suppliers, because of their large scale. By sharing these designs more widely it would be possible to spread the benefits to smaller players - while gathering suggestions for improved designs from them.

While the OCP's founders all addressed giant webscale players, there are signs that the ideas have spread further, into colocation spaces. Here, the provider doesn't have ultimate control of the hardware in the space, so it can't deliver the monolithic data center architectures envisaged by OCP, but some customers are picking up on the idea, and the OCP has issued facility guidelines on making a place "OCP ready," meaning that OCP racks and OCP hardware are welcome and supported there.

OCP put forward a new design of racks, which packs more hardware into the same space as a conventional rack. By using more of the space within the rack, it allows equipment that is 21in wide, instead of the usual 19in. It also allows deeper kit, with an OpenU measuring 48mm, compared to a regular rack unit of 44.5mm.

The design also uses DC power, distributed via a bus bar on the back of the rack. This approach appeals to monolithic webscale users such as Facebook, as it allows the data center to do away with the multiple power supplies within the IT kit. Instead of distributing AC power, and rectifying it to DC in each individual box, it's done in one place.

Open Rack version 1 used 12V power, and 48V power was also allowed in version 2, which also added the option for lithium-ion batteries within the racks, as a kind of distributed UPS system.

That was too radical for some, and in 2016 LinkedIn launched the Open19 group, which proposed mass market simplifications without breaking the 19in paradigm. Open19 racks are divided into cages with a simplified power distribution system, similar to the proprietary blade servers offered by hardware vendors. The foundation also shared specifications for network switches developed by LinkedIn.

"We looked at the 21 inch Open Rack, and we said no, this needs to fit any 19 inch rack," said Open19 founder Yuval Bachar, launching the group at DCD Webscale in 2016. "We wanted to create a cost reduction of 50 percent in common elements like the PDU, power and the rack itself. We actually achieved 65 percent."

Just as it launched Open19, LinkedIn was bought by Microsoft, which was a major backer of OCP, and a big user of OCP-standard equipment in data centers for its Azure cloud. Microsoft offered webscale technology to OCP, such as in-rack lithium-ion batteries to provide local power continuity for the IT kit, potentially replacing the facility UPS.

While the LinkedIn purchase went through, OCP and Open19 have continued in parallel, with OCP catering for giant data centers, and Open19 aiming at smaller facilities used by smaller companies that nevertheless, like LinkedIn, are running their own data centers. Latterly, Open19 has been focusing on Edge deployments.

In July 2019, however, LinkedIn announced it no longer planned to run its own data centers, and would move all its workloads over to the public cloud - obviously enough using its parent Microsoft's Azure cloud. Also this year, LinkedIn announced that its Open19 technology specifications will be contributed to OCP. It's possible that OCP and Open19 specifications may merge in future, but it is too early to say. Even if LinkedIn no longer needs it, the group has more than 25 other members.

For webscale facilities, OCP is pushing ahead with a third version of the OCP Rack, backed by Microsoft and Facebook, and it seems to be driven by the increasing power density demanded by AI and machine learning. "At the component level, we are seeing power densities of a variety of processors and networking chips that will be beyond the ability of air to cool in the near future," said the Facebook blog announcing OCP Rack v3. "At the system level, AI hardware solutions will continue to drive higher power densities."

The new version aims to standardize manifolds for circulating liquid coolant within the racks, and also heat exchangers for cabinet doors, and includes the option of fully immersed systems. It's not clear what the detailed specifications are, but they will emerge from OCP's Rack and Power project, and its Advanced Cooling Solutions sub-project.
Liquid cooling

For the last couple of decades, liquid cooling has shown huge promise. Liquid has a much higher ability to capture and remove heat than air, but moving liquid through the hardware in a rack is a big change to existing practices. So liquid has remained resolutely on the list of exotic technologies which aren't actually worth the extra cost and headache.

Below 20kW per rack, air does the job cost effectively, and there is no need to bring liquid into the rack. Power densities are still generally well below that figure, so most data centers can easily be built without liquid cooling. However, there are two possibilities pushing liquid cooling to the fore.

Firstly, GPUs and other specialized hardware for AI and the like could drive the power density upwards.

Secondly, for those who implement liquid cooling there are other benefits. Once it's implemented, liquid cooling opens up a lot of flexibility for the facility. Air-cooled racks are part of a system which necessarily includes air conditioning, air handling and containment - up to and including the walls and floors of the whole building. Liquid cooled racks just need an umbilical connection, and can be stood separately, on a cement floor, in a carpeted space, or in a small cabinet.



This can be difficult to present in a retail colocation space, because it affects the IT equipment. So unless the end customer specifically needs liquid cooling, it's not going to happen there. But it does play well to the increasing flexibility of data centers, where the provider has control of the hardware, and there isn't a building-level containment system.

Small Edge facilities are often micro data centers without the resources of a full data center behind them. Other data centers are being built inside repurposed buildings, often in small increments. A liquid cooling system can fit these requirements well.

Early mainframes were water-cooled, but in the modern era, liquid cooling has emerged from various sources. Some firms, including Asperitas, Submer and GRC, completely immerse the racks in a tub of inert liquid. No energy is required for cooling, but maintenance is complicated because the rack design is completely changed, and servers and switches must be lifted from the tub and drained before any hardware modifications. Iceotope, which now has backing from Schneider, has a system which immerses components in trays within the racks.

Others offer direct circulation, piping liquid through the heatsinks of energy-hungry components. This was pioneered by gamers who wanted to overclock their computers, and developed rigs to remove the extra heat produced. Firms like CoolIT developed circulation systems for business equipment in racks, but they have been niche products aimed particularly at supercomputers. They require changes to the racks, and introduce a circulatory system, which flows cool water into the racks and warm water out.
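The physics behind liquid's advantage is easy to sketch. Assuming round textbook values for density and specific heat, the snippet below estimates how much air and how much water must flow through a rack to carry away 20kW at a 10°C coolant temperature rise; it illustrates scale, and is not a cooling design.

```python
# How much air vs water must flow through a rack to carry away its
# heat, from Q = rho * flow * cp * dT. Round textbook properties at
# roughly room temperature.

HEAT_LOAD_W = 20_000    # a 20kW rack, the air-cooling threshold mentioned above
DELTA_T_K   = 10.0      # coolant temperature rise across the rack

fluids = {
    # name: (density in kg/m^3, specific heat in J/(kg*K))
    "air":   (1.2,   1005.0),
    "water": (998.0, 4180.0),
}

for name, (rho, cp) in fluids.items():
    flow_m3s = HEAT_LOAD_W / (rho * cp * DELTA_T_K)
    print(f"{name:5s}: {flow_m3s:.4f} m^3/s ({flow_m3s * 1000:.2f} L/s)")
```

Water moves the same heat with roughly one three-thousandth of the volumetric flow, which is why a thin umbilical can stand in for an entire air-handling system.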
In Northern France, OVH has its own take on liquid cooling, as well as the racks we saw earlier. It builds data centers in repurposed factories which formerly made tapestry, soft drinks and medical supplies, and liquid cooling allows it to treat these facilities as shells: instead of building in one go, with a raised floor and building-level air conditioning, OVH stacks in racks as needed, connecting them to a liquid cooling system.

"Our model is that we buy existing buildings and retrofit them to use our technology," OVH chief industrial officer François Stérin explained. "We can do that because we make our own racks with a water cooling system that is quite autonomous - and we also use a heat exchanger door on the back of the racks. This makes our racks pretty agnostic to the rest of the building."

The flexibility helps with changing markets, said Stérin: "To test the market we don't need to build a giant 100MW mega data center, we can start with like a 1MW data center and see how the market works for us."

With a lot of identical racks, and a factory at its disposal, OVH has pushed the technology forward, showing DCD multiple versions of the concept. OVH technicians demonstrated a maintenance procedure on the current version of the cooling technology. It seemed a little like surgery. First the tubes carrying the cooling fluid were sealed with surgical clamps, then the board was disconnected from the pipes and removed. Then a hard drive was replaced with an SSD. Already this design is being superseded by another one, which uses bayonet joints, so the board can be unplugged without the need to clamp the tubes.

Less radical liquid cooling systems are available, including heat exchangers in the cabinet doors, which can be effective at levels where air cooling would still be a viable option. OVH combines this with its circulation system. The direct liquid cooling removes 70 percent of the heat from its IT equipment, but the other 30 percent still has to go somewhere - and is removed by rear-door heat exchangers. This is a closed loop system, with the heat rejected externally.

Liquid cooling isn't necessary for a system designed to be installed in a shell. It's now quite common to see data centers where a contained row has been constructed on its own, on a cement floor, connected to a conventional cooling system. Mainstream vendors such as Vertiv offer modular builds that can be placed on cement floors, while others offer their own take.

One interesting vendor is Giga Data Centers, which claims its WindChill enclosure can achieve a PUE of 1.15, even when it is implemented in small increments within a shell, such as the Mooresville, North Carolina building where it recently opened a facility. The approach here is to build an air cooling system alongside the rack, so a large volume of air can be drawn in and circulated.
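PUE, the figure Giga Data Centers quotes, is simply total facility power divided by the power that reaches the IT equipment. A minimal sketch, with hypothetical example loads:

```python
# Power Usage Effectiveness: total facility power / IT power.
# A PUE of 1.15 means only 0.15W of overhead (cooling, power
# conversion, lighting) for every watt delivered to the IT load.

def pue(it_kw: float, overhead_kw: float) -> float:
    return (it_kw + overhead_kw) / it_kw

# Hypothetical 1MW IT load carrying 150kW of cooling/power overhead:
print(pue(1000, 150))   # 1.15
```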
Don't be lulled into a false sense of security. The design of hardware is changing just as fast as ever, and those building colocation data centers need to keep track of developments.



Advertorial: Vertiv

Why the Internet of Everything Needs the Internet of Everywhere

A fundamental realignment of data center infrastructure is underway from the core to the edge. But what does that mean in practical terms? Ubiquity, speed and power will be critical, according to Vertiv and EdgeConneX.

It's no secret that the number and types of Internet-connected devices are undergoing a seismic shift. The expectation is that data from billions of newly-connected Internet of Things (IoT) devices – from mobile handsets to industrial equipment – will drive a major change in Internet architecture: from the core to the edge.

According to a 2018 451 Research survey of more than 700 data center operators, 73 percent of respondents expect that up to 75 percent of data center/cloud capacity will be used to support IoT initiatives by the end of 2020. The need to effectively remake the Internet for a new era of connected devices and associated two-way data traffic will create challenges and opportunities for data center operators (including colocation providers) and technology suppliers alike.

Two organizations that are deeply immersed in helping to deliver that architectural change are EdgeConneX and Vertiv. Vertiv, a specialist in critical infrastructure, and EdgeConneX, a pioneer in defining and building the edge of the network, have partnered on dozens of projects across three continents (North America, South America and Europe) since 2014. Those projects range from supporting hyperscale cloud deployments in Chicago, Dublin and Amsterdam, to multiple network edge sites extending east to west from Miami to Seattle, and north to south from Toronto to Buenos Aires.

Vertiv and EdgeConneX also have invested significant time, effort and resources into understanding what future data center demand will look like – especially at the edge. For example, Vertiv recently released an in-depth report on the future of the data center industry: Data Center 2025: Closer to the Edge included results from a global survey of more than 800 data center professionals. According to the report, 20 percent of survey respondents predicted a 400 percent increase in the number of edge sites by 2025.

Similarly, EdgeConneX has been involved in impactful research initiatives. For example, it contributed to a recently published Telecommunications Industry Association (TIA) Position Paper that provides an important overview for defining development, implementation, and operational aspects of Edge Data Centers (EDCs).

Vertiv and EdgeConneX recently brought their unique perspectives together. Matt Weil, director of offering management for Vertiv, joined Philip Marangella, chief marketing officer for EdgeConneX, for an in-depth DCD Discussion on how data center infrastructure is evolving, and the practical considerations for owners and operators.

Beyond the Data Tsunami

The DCD Discussion came at the issue of future edge and data center demand from several angles, but the initial focus was on trying to understand the size of the demand and the key drivers. "A lot of people in the industry point to the volume of data generated and the volume consumed. We have referred to those sexy statistics too: the number of zettabytes per year, what that is going to be five years from now, 10 years from now, using phrases like exploding demand and tsunami of data," said Vertiv's Weil. "However, what I think is interesting is to start to sub-segment that and not focus on the mega-trend but think more about the use cases and the things that are driving that data."

Weil referenced another recent research effort from Vertiv in which the company defined four key edge archetypes it believes could help the industry develop a more nuanced view of different edge scenarios and when they may materialize. "We need to think about which of these are happening now, which are happening soon and which are a little bit more down the road," Weil said. "For example, people point to driverless cars as an edge use case, but from a mass adoption perspective it is probably at least five years off, if not more."

Ubiquity is key

There was agreement from both Weil and Marangella that supporting those new use cases and archetypes from a latency and bandwidth perspective will require Internet architecture that is ubiquitous and not just centered around major data center hubs. "I like the phrase from EdgeConneX that the 'Internet of Everything needs the Internet of Everywhere' – you need those points of presence to be ubiquitous," said Weil. "And that really requires us, which is the phase we are going through now, to rearchitect how that Internet is constructed. The Internet of the last decade was not architected in the way it needs to be architected for the future."

The need for speed

Another key issue in the Vertiv and EdgeConneX discussion was the importance that speed of deployment has, and will continue to have, in meeting future data center demand. Marangella explained that build time is a major competitive differentiator for EdgeConneX. "Speed is absolutely critical for us. We sell to the most demanding cloud, content and networking service providers in the world and they have extremely high expectations," he said. "Also, consider that we don't speculatively build, 'Here is our site, come to us.' We work with the customer to find the optimal location for them, whether it be diverse power feed, diverse network or what have you."

Marangella added that speed is such a critical issue for EdgeConneX because the company often competes against colocation providers that may already be established in a specific territory. "A competitor might not have the best capacity in terms of location, but the availability of that capacity is here and now," he explained. "To solve for that we need to be very fast in our process in terms of how we select and build or convert a particular site and make it live for that customer."

EdgeConneX may need to complete new builds in as little as six months to remain competitive, but often beats even that compressed timeframe. "I think our record is three and a half months where we were able to turn around a data center for a particular customer, and that is where having Vertiv as a partner comes in, where that consistent design can be replicated in new markets."

Power availability

While innovations in data center design are helping EdgeConneX and other operators to compete more effectively, there are other issues that still need to be overcome when it comes to building out future capacity. Specifically, Marangella sees power availability as a long-term issue for the whole industry.

"Power is a big mitigating factor for data centers going forward, more than the design or what is going inside the building," he said. "The issue is especially pronounced when you look at markets like Europe, where available grid power is insufficient to support the growth of data centers."

The response from the industry needs to be investment in new approaches to on-site power generation, according to Marangella. "That is where you need to get creative with some power requirements," he said. "We did that in Dublin where we built our own natural gas power generation plant on-site to be able to augment the grid power to have adequate MW to be able to support customer requirements."

Ultimately, it is important to understand that keeping up with change is not just about the edge or just the enterprise, and growth of the edge is not some sort of solution to a data center problem. It is a natural evolution of the network to meet changing consumer demand. And more importantly, it is one piece of an increasingly diverse network quilt that also includes enterprise, colocation and cloud data centers.

The next-gen network is both hyperscale and hyper-local, and the businesses who will succeed in this environment will work with partners who can provide end-to-end expertise and solutions. Anything less will prove insufficient in the face of the growing data tsunami.

Vertiv Global Solutions
Matt Weil, Director Offering Management
matthew.weil@vertiv.com
(+1) 630 579 5052
Colo Design Supplement | Resiliency amid chaos

Designing for
RESILIENCY
in a chaotic age
We live in interesting times, and that requires data centers to be designed with extra care, Sebastian Moss reports

Sebastian Moss
Deputy Editor

In a year where arctic forests burn, cities experience their first major floods, and record hurricanes batter unprotected towns, belief in a stable world is perhaps a sign of an unstable mind.

Our planet is becoming an increasingly dangerous place, threatening not just people's lives, but the buildings that support societies, and the grid that powers it all. And yet, despite copious evidence showing these rising risks, many are continuing to operate with a business as usual mentality.

"It's incredible to me to see that even data center operators are not considering that at all," Ibbi Almufti, head of the risk and resilience practice at engineering and design consultancy Arup, told DCD. "They're just building like everyone else builds - it's crazy."

Data centers are often designed simply to comply with modern building code standards, which on the face of it seem to be written to ensure the construction can withstand a reasonable amount of disaster. "But the only thing the [building codes] care about is life safety. And even when you're designing to a higher level of code performance, it's just lower life safety risk. It's never looking at functionality-level performance or anything like that," Almufti said. Considering the rising perils, and the need to maintain data center uptime, data center design needs to "move the goalposts from life safety to beyond life safety and look at limiting downtime and repair costs and protecting your investment."

He added: "I will venture to say that if you're building data centers right now, in higher natural hazard zones, you most likely will not be meeting your availability targets even if you're designing to modern building codes."

It's a question of weighing the cost of making a data center more resilient versus the benefit of it not suffering downtime - or, at least, returning to service faster. "With data centers, if you're out for a day or two, you're out millions of dollars," Almufti said. "You could just take those millions of dollars and invest it up front to prevent that from happening in the first place. Kind of a no brainer."
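Almufti's "no brainer" is, at bottom, expected-value arithmetic. The sketch below compares a one-off hardening spend against the outage losses it can be expected to avoid over a facility's life; every input is a hypothetical placeholder, not a figure from Arup.

```python
# Illustrative resilience cost-benefit: compare upfront hardening
# cost against expected outage losses avoided over the asset life.
# All inputs are hypothetical placeholders.

ANNUAL_EVENT_PROB  = 0.05        # chance per year of a damaging event
OUTAGE_COST_USD    = 5_000_000   # revenue/SLA loss per multi-day outage
HARDENING_COST_USD = 1_500_000   # one-off spend (flood barriers, bracing)
RISK_REDUCTION     = 0.80        # fraction of outage risk removed
LIFETIME_YEARS     = 15

expected_loss_baseline = ANNUAL_EVENT_PROB * OUTAGE_COST_USD * LIFETIME_YEARS
expected_loss_hardened = expected_loss_baseline * (1 - RISK_REDUCTION)
net_benefit = expected_loss_baseline - expected_loss_hardened - HARDENING_COST_USD

print(f"Expected loss, unhardened: ${expected_loss_baseline:,.0f}")
print(f"Expected loss, hardened:   ${expected_loss_hardened:,.0f}")
print(f"Net benefit of hardening:  ${net_benefit:,.0f}")
```

With these placeholder numbers the hardening pays for itself twice over; the same arithmetic can just as easily show a hazard is not worth designing for, a point the article returns to later.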



Anthropogenic climate change has made the weather more savage and unpredictable, and will soon make it much worse. But it's not the only problem.

"Forget about global warming for a second, if you have a site you built 10 years ago, and now you've got land being developed [nearby], then all of a sudden, you've got water run-on - that's increasing your flood hazard," Almufti said.

In the Midwest of the US, as well as elsewhere around the world, "there's new seismicity from the deep well wastewater injection," also known as fracking. "This is man-made induced seismicity," Almufti said. "And scientists discover new faults all the time - I'm a seismic engineer - you will never see the seismic hazard go down in a location because of new science, it always goes up."

Those looking to design for seismic events could learn from Japan, a nation that has wrestled with earthquake after earthquake. After opening a new data center in the country in 2017, Colt DCS' VP of operations, Ian Dixon, told DCD: "The tried and trusted way to make a structure more resistant to the lateral forces of an earthquake is to tie the walls, floor and roof together to form a super-structure. Then have an isolated foundation that becomes the sub-structure.

"That is the basic premise for our latest data center. However, in order to nullify the vibrations and keep the equipment as safe as the human inhabitants, we also needed to look at what the building sits on."

Colt's Inzai 2 facility sits on a series of seismic isolators and teflon sliders. "These isolators are sometimes called a Shake Table - capable of holding 125 tons per m2, isolating the whole building from any seismic forces and allowing the sub-structure and super-structure to move independently," Dixon said. "Thus, permitting the bulk of the seismic shock/energy to be dissipated by the sub-structure and reducing the impact on the super-structure."

That might be excessive for most US data centers, but the problem is that the majority of data center operators may no longer be aware of the risks surrounding their facility, and what design changes they need to make. Worsening weather, flood runoff risks, seismic changes - "a lot of these things just slip through the cracks," Almufti said. "You might have assessed the risk at one point - probably not - but even if you have, it's changed a lot in the last 5, 10 or so years."

Even when a facility itself is protected from these changes, it may be at the mercy of surrounding infrastructural weaknesses. "Our analysis shows that relatively low wind speeds can cause power outages. This is primarily due to trees falling on power lines and disrupting the services - they're typically down for one, two days at a time. It's not an infrequent event."

Another problem "is that multiple data centers may be relying on the same utility substation, and that substation is essentially a single point of failure," Almufti said. "A lot of the substations that we've seen are in flood zones, for example."

That's why some are turning to microgrids to give data centers more control over the power used to keep their data centers online. "We know that gas can go down, we know that power can go down; it's about making the site secure," Arup associate Russell Carr told DCD.

Microgrids incorporate three key components: "You've got some generation, you've got some storage, and you've got a demand or load," Carr said. "Generation can be provided by multiple sources - it can be variable renewables, such as photovoltaics, wind, biogas fuel cells, natural gas fuel cells, natural gas generators, regular generators - it just needs a system to provide power but it has to have storage as well, that can be lithium-ion batteries, flow batteries, various technologies."

The decision over which generation and storage technologies to use is dictated by the hazards found in the data center's location, as well as local regulatory stipulations.


"Let's say I'm on the East Coast [of the US]," Carr said. "There's a lot of overhead power lines, I might have my backup generation in gas fuel cells because I know in Superstorm Sandy the gas system stayed up, while the electricity system stayed down.

"Whereas, on the West Coast, the disaster here is an earthquake, gas lines can go out for a long time, but electricity would typically be restored back in a week."

Regulations, such as air quality laws in California, dictate design: "Diesel generators can't operate all the time, gas turbines go through a lot of permitting because California is trying to phase out gas… these are things you have to consider."

Arup recently completed a microgrid project in California for a confidential client who wanted to build a multipurpose campus with several Edge data centers. "Resiliency was a big driver for this site," Carr said. "The client's aim was to be able to operate indefinitely independent of the grid for their key loads," islanding the site off from the wider grid.

"This client wasn't stupid, they didn't want to put this in for free - they also wanted the system to make money, so we looked at how can we make money [from] the battery capacity of this site."

For the campus, the client had 4MW of Edge data center load in each of the four buildings, along with around 20MW of general campus load. "We had 16MW of solar on site, 6MW of diesel generation, 4MW of fuel cell generation, and 5MW of batteries at this site," Carr said.
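Carr's numbers can be sanity-checked with a simple capacity balance, sketched below using the megawatt figures he quotes. It deliberately ignores solar variability and battery duration, which a real islanding study would model in detail.

```python
# Simple capacity balance for the campus microgrid described above,
# using the MW figures quoted by Carr. Treating nameplate capacity as
# firm supply is a simplification: solar output varies, and the
# battery's energy capacity (MWh) is not quoted in the article.

edge_load_mw   = 4 * 4      # 4MW of Edge data center load in each of four buildings
campus_load_mw = 20         # general campus load

generation_mw = {"solar": 16, "diesel": 6, "fuel cells": 4}
battery_mw    = 5           # power rating only

total_load = edge_load_mw + campus_load_mw
total_gen  = sum(generation_mw.values())
firm_gen   = generation_mw["diesel"] + generation_mw["fuel cells"]

print(f"Total load:       {total_load} MW")
print(f"Total generation: {total_gen} MW (+{battery_mw} MW of battery)")
# Firm (non-solar) generation alone cannot carry even the critical
# Edge load, which is why the control system sheds non-critical
# campus load when cloud cover threatens the solar output.
print(f"Firm generation:  {firm_gen} MW vs {edge_load_mw} MW critical load")
```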
Should clouds gather over its solar panels, the system proactively drops non-critical loads before the shade cuts photovoltaic output. Whatever happens to the outside world, the campus can continue on, an island unto itself.

But, Carr conceded, such luxurious grid resilience is only really for smaller data center deployments: "If we're looking at a big hyperscale data center, it's harder to put renewable generation on that data center - even if you covered the roof with solar, it's a 100MW data center. You're gonna power the lights. You've got to look at the load of that particular building and the size of that building, and how to match up generation and storage to that building."

Microgrids for larger facilities can still provide financial returns, and help improve grid resilience, but full islanding is unlikely. Instead, with such sites, it helps that the companies that operate those data centers usually manage numerous facilities.

"It's about basically combining the interdependencies of those data centers," Almufti said. "So, for example, the parent-child relationships between individual data centers and comparing that against the spatial extent of a given hazard scenario to understand the likelihood of simultaneous outage and downtime at multiple data centers." Such an approach can "inform ideal separation distances between existing data centers and new data centers."

Despite such precautions, no matter how much one prepares, no matter how strong one builds, there are still the unknown, unpredictable dangers. "A major solar flare could wipe out the entire eastern seaboard," Almufti said. "Things would be down for months - everything, not just the individual data centers. Solar flares are kind of scary, I don't think we can quantify the risk to that. That's like a black swan event, but it happened 250 years ago, so it's possible."

There are several such hazards where "the cost benefit would show that you shouldn't design for these events, if you're not impacted for hundreds and hundreds of years.

"One thing that's lurking out there is the New Madrid Fault Line in the Memphis area. That fault ruptures every multiple thousand years, but when it does, it's huge. So how do you design for that?"

And, with some risks potentially wiping out the region's population, is there even any point designing for that? Such questions can weigh on someone, and Almufti admits that immersing himself in a world of risks has changed him.

"I'm super risk averse," he said. "I live in San Francisco, and when I walk through downtown I'm always looking up like 'Oh my god, please don't happen now,' or if I go in an older building, it's like 'get me out of here right now.'

"It's kind of like living in The Matrix, no one else sees it, but you see it because you do it every day. The chances that you're in a building when the disaster strikes is really low, but it happens. It happens."

DCD>Resources

When can we afford to deploy solar for on-grid solutions? Now

With the reduction of solar panel prices and infrastructure, and the slow rise of utility rates, this whitepaper analyses whether solar can serve as a power device and share duties to provide power to the load in on-grid solutions with reasonable financial returns.

Read: bit.ly/DCDSolarStudies

Colo Design Supplement | Post-people?

The post-human data center will have to wait

Despite the predictions, it seems that data centers aren't yet ready to do without human kindness

Peter Judge
Global Editor

Three years ago, we heard a lot about "lights-out" facilities. Data centers were going to become fully autonomous and run without intervention from people.

There were a whole lot of reasons this was going to happen. Staff are expensive, and equipment is reliable. One leader in promising this was EdgeConneX, a colo provider specializing in smaller facilities outside the major data center hubs. "When you try to operate this small footprint, like a two-megawatt facility, it's difficult to man on-site. It's just not economically feasible," EdgeConneX's King told a data center event.

At this point, we had had a decade of huge claims for management and monitoring systems like DCIM, which could handle the M&E equipment, while every IT systems vendor was promising fully virtualized private cloud systems made up of pools of storage and servers which could be built and changed in software, on demand - HPE's "composable" data center for instance.

Surely, with all this, it should be possible to operate a data center remotely. Failed hardware could be sidelined, and replaced en masse during an occasional site visit by an engineer.

Going lights-out promised other benefits besides staff savings. Automatic processes could be more reliable, we were told: experts in failure, like the Ponemon Institute, regularly list human error as the top cause of outages.

Meanwhile, conditions in an unstaffed data center could be optimized for the efficient running of the IT equipment, not the comfort of the staff. Modern equipment operates at a greater range of temperatures. The so-called cold aisle can now be as warm as 80°F, and the hot aisle can be more than 100°F. Stop cooling data centers for comfort, and a lot less energy is wasted.

And in these temperatures, if you get the physical IT work done by robots, then you can eliminate unused space too. Robots can be designed to operate in narrower spaces, leaving more room for the IT equipment.

Back then, companies like web hoster PayPerHost were promising robotic data centers, while LitBit was going to use AI to automate the maintenance procedures. And EdgeConneX had edgeOS, a platform for remote management.

A peak moment for the lights-out data center movement came in 2016, when Microsoft announced that it had operated a rack of Azure Cloud servers on the sea bed for three months. Project Natick was sealed in a pressurized vessel, and operated in a lights-out manner, because the only way to physically access it was to haul it up to the surface. Natick added another item to the list of benefits delivered by post-human facilities: it was filled with a nitrogen atmosphere, which was unbreathable, but guaranteed there would be no fires.

After all that excitement, the world of lights-out facilities has gone rather quiet. Which is a shame, because we now finally have a use case which absolutely needs autonomous IT resources: the Edge.

The over-familiar pitch for the Edge is that, because new applications like the Internet of Things require low latency, data center resources must be available close to the sources of data, at the "edge" of the network. Managing a multitude of tiny Edge resources manually would add so much overhead, those applications would become financially impossible, so now is the time for the lights-out facility to finally arrive.

The trouble is, things have gone quiet. DCIM products continue to potter along, making the same promises and not perceptibly changing. LitBit disappeared without a trace, and the robot arms are folded away: PayPerHost is sticking to its knitting and, as far as we can tell, has never said anything more about robots.

EdgeConneX's King says that small users don't mind the idea of autonomous IT, but large customers don't trust it: "Fortune 500 companies say 'okay that's great, but we're going to add staff.'"

And, perhaps ironically, those considering lights-out operation can be dissuaded by industry efforts to increase reliability. If a business wants Uptime Institute's Tier III or IV levels of reliability, Uptime recommends a minimum of one to two qualified operators on site 24 hours per day, 7 days per week, 365 days per year. And it goes without saying, those staff need oxygen and room to move.
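That staffing floor is larger than it sounds. As rough illustrative arithmetic (assuming ordinary working hours and leave, not any Uptime-specified formula), keeping even one seat filled around the clock takes between four and five full-time employees:

```python
# Rough headcount to keep N qualified operators on site 24x7x365,
# assuming 8-hour shifts and about 25 days of leave and training per
# person per year. Illustrative only - not an Uptime Institute formula.

HOURS_PER_SEAT_YEAR = 24 * 365            # 8,760 staffed hours per seat
WORK_HOURS_PER_FTE  = 52 * 40 - 25 * 8    # ~1,880 productive hours per person

def fte_required(seats: int) -> float:
    return seats * HOURS_PER_SEAT_YEAR / WORK_HOURS_PER_FTE

for seats in (1, 2):
    print(f"{seats} seat(s) staffed 24x7 -> {fte_required(seats):.1f} FTEs")
```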
Project Natick is still going, and has expanded from one rack to twelve racks, in a sunken box the size of a shipping container, under the sea near Orkney. But that was a year ago. Microsoft promised to reveal results about now, but all it's saying is that the test continues, with Natick in use by development teams.

"The test is ongoing," Ben Cutler of Microsoft Research told DCD. "We're extremely pleased with the reliability and other operational metrics we've seen."

That's fine, but anyone thinking about lights-out operation will need a little more than that.

Colo Design Supplement | Security focus

Data is private, but data centers are often shared. That's the issue colocation security methods must contend with, now and in the future

Dan Swing
Contributor

Enterprises may want to move to the cloud, but they still hold plenty of IT resources. To make this easier to handle, they are often moving them to colocation spaces. A 2017 survey by Vertiv found that 57 percent of businesses were planning to increase their data center outsourcing, while research from Technavio predicts that the colocation market will grow at around nine percent through to 2022. Meanwhile, much of the cloud is actually held in colocation space, where giants like AWS use sizable chunks of shared facilities to fill gaps in their coverage.

For both these customers, the sector offers a reduction in capital costs, the ability to scale, and a useful geographical spread. But there's a downside: you have to trust the provider that is looking after your hardware and your data. You have to know it is kept safe from potential attackers.

"Data center users really value security," says Russell Poole, managing director of Equinix UK and the Nordics. "It's the top requirement of all our customers when they are looking to deploy with us."

A high priority for colo providers

A security incident at a colo site is just as harmful to the customer as if it happened on a site they owned and operated. But security failures can be even more embarrassing for the affected colo provider - as they represent a failure in its core business.

A Chicago-based colo site hosted by C I Host was broken into four times in a two-year period between 2005 and 2007, with thieves making off with tens of thousands of dollars' worth of servers. In December 2018, Australian telecommunications provider Vocus came under fire when a customer complained that a door to the facility had been left wide open and unlocked for months. As well as theft of the physical infrastructure, unauthorized access to servers could allow intruders to steal data or make changes to the data and processes running on that hardware.

While both of the above firms are still in business, failing to keep a location secure could potentially be ruinous. Even if your terms of service are watertight when it comes to liability for security incidents, the loss of trust could easily lead to an exodus of customers, especially in a highly competitive landscape with so many alternative providers.

"In a world where data breaches can see a global business go under overnight, data centers have a critical role to play in protecting against this," says Poole. "The implications of a security breach are catastrophic for the reputation of not just the data center company, but any company that hosts its data within the premises."

Unique challenges

Colocation sites have all the security concerns of in-house facilities which are owned and operated by one organization. But they have another big set of challenges because they have multiple users, and potentially a revolving door of tenants coming and going to the site at all times.

Externally, minimal signage and promotion around the location of a site can reduce the chance of unexpected - or unwanted - visitors. Perimeter fences, generic warning signs, and minimal external entry/exit points will help deter opportunistic attempts at entry. Guards, barriers, monitoring such as CCTV and potentially access controls such as key cards will reduce the number of people that even make it to the front doors of the facility.

However, while securing the external perimeter matters, internally is where much of the focus should be. The extra footfall compared to an owned and operated data center means that staff should remain extra vigilant and more stringent controls should be in place; employees may become used to seeing unfamiliar faces in various parts of the facility performing seemingly innocent work, when in reality one of them could be an attacker targeting a customer.

When asked about examples of potential attack methods on customers renting space within colocation sites, Holly Grace Williams, technical director at penetration testing firm Secarma, says one way is to simply rent space within the same facility.

"If you're targeting a business that hosts at a colo, you can take some space there and gain access to the premises. Then you can try and target other people within that space; if there are exposed ports in cages and you have a window of time you can probably insert cables into those ports."

Because of this, she says, proper segmentation between customer areas, along with proper monitoring and trained staff, is key.

"Colocation providers should have man traps, which only allow one person through at a time, and in the hosting space they should have rack segmentation, room segmentation and solid cages with narrow mesh.

"You should have anti-tamper mechanisms so you can detect when a customer's rack has been opened, and integrate it with a system that tells you instantly whether that customer's staff are present. Lock picking and the forcing open of cages can happen.

"The colo provider's team needs to keep an eye on anyone working in rooms to ensure they're only accessing their own kit, and be there to take action if a rack has been opened when staff from the tenant company are not in the building."
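The anti-tamper control Williams describes reduces to correlating two logs: rack-door events and the badge records showing which tenant staff are on site. A minimal sketch, with hypothetical event and log formats:

```python
# Sketch of the control Williams describes: raise an alert when a
# customer's rack is opened while none of that tenant's staff are
# badged into the building. Event and log formats are hypothetical.

from dataclasses import dataclass

@dataclass
class RackOpenEvent:
    rack_id: str
    tenant: str
    timestamp: str  # ISO 8601

def tenant_on_site(badges_inside: dict[str, set[str]], tenant: str) -> bool:
    """badges_inside maps tenant name -> badge IDs currently in the building."""
    return bool(badges_inside.get(tenant))

def check(event: RackOpenEvent, badges_inside: dict[str, set[str]]) -> None:
    if not tenant_on_site(badges_inside, event.tenant):
        # Nobody from the rack's tenant is in the building: treat the
        # open door as potential tampering and dispatch security.
        print(f"ALERT: {event.rack_id} opened at {event.timestamp} "
              f"with no {event.tenant} staff on site")

badges_inside = {"acme": set(), "globex": {"badge-0042"}}
check(RackOpenEvent("rack-17", "acme", "2019-06-01T02:14:00Z"), badges_inside)
```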
Within the building itself, guests should only be accepted if scheduled and confirmed with the company, with relevant documentation and ID in hand. Access controls such as biometrics and key cards should be in use across the facility, with logs to keep a history of who went where and when. Internal monitoring such as CCTV should also be deployed across the facility, and manned 24/7.

"When prospective tenants visit a data center to review its suitability," advises Equinix's Poole, "they should ask themselves 'how difficult would it be for me to get in here if I had forgotten my fob or pass?' The answer to that should be, 'impossible'."

At Equinix sites, he explains, access is by appointment only and entry is gained via a series of fob and biometric hand geometry readers that recognize handprints and check authority against an encrypted database. "Once inside, a sign-in procedure and visual confirmation by trained security guards ensures that entry is only given to authorised visitors. Hundreds of security cameras and hand readers are stationed throughout the International Business Exchanges to provide detailed surveillance and archiving of critical infrastructure areas and all customer cages."

All on-site guards and staff should be well-trained and aware of potential social engineering attempts. All the controls and defenses in the world will fail if staff on-site are willing to hold doors open or flout regular processes for a smart and determined attacker. Ensuring that staff are confident enough to stick to protocol even in the face of pressure, are unafraid to ask questions or double-check anything they are unsure of, and are wary of attempts at manipulation is key to making social engineering less likely to succeed.

Regular penetration testing, conducted by both the colo provider and the tenants, helps to ensure not only that security controls are being enacted properly, but also that they are effective, as well as finding potential gaps or shortfalls so improvements can be made. Likewise, customers should be encouraged to make their own inspections and ensure the site is up to the standards they expect or require.

"There is a difference between a secure colo and a very secure colo," explains Secarma's Williams, "but most people wouldn't make that distinction on gut feeling, they'd make it based on some compliance or regulatory requirement."

When speed of deployment is critical,
Vertiv has you covered.
Hot-scalable, isolated, power-dense critical infrastructure from the experts in digital
continuity — just in time to meet your unique needs.

With the Vertiv™ Power Module 1000/1200, you can rapidly construct redundant blocks of 1000 or 1200 kVA/kW
critical power infrastructure for your new or existing facility, giving you added capacity without overburdening IT
resources. Plus, the hot-scalable infrastructure ensures business continuity by allowing you to deploy additional
units without taking critical loads offline.

Why choose the Vertiv Power Module?

• High power density built around market-leading Liebert® UPS technology
• Energy-efficient operation with airflow containment to ensure optimal equipment conditions
• Close to "plug and play" functionality minimizes site work and speeds deployment
• Hot-scalable units eliminate the need for downtime when boosting power capacity

Vertiv.com/PowerModule

© 2019 Vertiv Group Corp. All rights reserved.
