
ASHRAE’s new energy standard for data centers

By Bill Kosik, PE, CEM, LEED AP, BEMP; exp, Chicago


ASHRAE Standard 90.4 is a flexible, performance-based energy standard that
goes beyond current ASHRAE 90.1 methodology.
Learning objectives:
• Explain ASHRAE Standard 90.1.
• Understand the fundamentals of ASHRAE Standard 90.4.
• Explore how ASHRAE 90.4 will impact data center mechanical/electrical system design.

The data center industry is fortunate to have many dedicated professionals volunteering
their time to provide expertise and experience in the development of new guidelines, codes,
and standards. ASHRAE, U.S. Green Building Council, and The Green Grid, among others,
routinely call on these subject matter experts to participate in working committees with the
purpose of advancing the technical underpinnings and long-term viability of the
organizations' missions. For the most part, the end goal of these working groups is to
establish consistent, repeatable processes that will be applicable to a wide range of project
sizes, types, and locations. For ASHRAE, this was certainly the case when it came time to
address the future of the ASHRAE 90.1: Energy Standard for Buildings Except Low-Rise
Residential Buildings vis-à-vis how it applies to data centers.
ASHRAE Standard 90.1 and data centers
ASHRAE 90.1 has become the de facto energy standard for U.S. states and cities as well as
many countries around the world. Data centers are considered commercial buildings, so the
use of ASHRAE 90.1 is compulsory to demonstrate minimum energy conformance for
jurisdictions requiring such. Specific to computer rooms, ASHRAE 90.1 has evolved over the
last decade and a half, albeit in a nonlinear fashion. The 2001, 2004, and 2007 editions of
ASHRAE 90.1 all have very similar language for computer rooms, except for humidity
control, economizers, and how the baseline HVAC systems are to be developed. It is not until the ASHRAE 90.1-2010 edition that more in-depth requirements for computer rooms appear. For example, ASHRAE 90.1-2010 contains a new term, "sensible coefficient of performance" (SCOP), an energy benchmark used for computer and data processing room (CDPR) air conditioning units. SCOP is calculated by dividing the net sensible cooling capacity (in watts) by the input power (in watts). The definition of SCOP and the
detail on how the units are to be tested comes from the Air Conditioning, Heating, and
Refrigeration Institute (AHRI) in conjunction with the American National Standards Institute
(ANSI) and was published in AHRI/ANSI Standard 1360: Performance Rating of Computer
and Data Processing Room Air Conditioners.
With the release of ASHRAE 90.1-2013, additional clarifications and requirements related to
data centers including information for sizing water economizers and an introduction of a new
alternative compliance path using power-usage effectiveness (PUE) were included. As a part
of the PUE alternate compliance path, cooling, lighting, power distribution losses, and
information technology (IT) equipment energy are to be documented individually. But since
the requisites related to IT equipment (ITE) listed in ASHRAE 90.1 were originally meant for server closets or computer rooms that consume only a fraction of the total building's energy, there were still difficulties in demonstrating compliance. Yet there was no slowdown in technology growth; projects began to include full-sized data centers with an annual energy usage greater than that of the building in which they are housed. Even with all
the revisions and additions to ASHRAE 90.1 relating to data centers, there were still
instances that proved difficult in applying ASHRAE 90.1 for energy-use compliance.

Fortunately, as the data center community continued to evolve in terms of sophistication in designing and operating highly energy-efficient facilities, so did ASHRAE 90.1 with the
release of the 2013 edition. But even before ASHRAE 90.1-2013 was released, the data
center community was pushing for clearer criteria for energy-use compliance. It was crucial
that these criteria would not stifle innovation, but at the same time provide logic and
consistency on how to comply with ASHRAE 90.1. Many in the data center engineering
community (including ASHRAE) knew something needed to change.
ASHRAE Standard 90.4-2016
Given the long history of ASHRAE 90.1 (dating back to 1976) and its demonstrated
effectiveness in reducing energy use in buildings, several questions needed to be addressed
before new criteria could be developed. What would be the best way to develop new
language for data center facility energy use? Should it be an overlay to the existing
standard? Should it be a stand-alone document? Should it be a stand-alone document and
duplicate all the language in ASHRAE 90.1? How should the technical processes developed
by The Green Grid and U.S. Green Building Council be folded into the standard? Would it be
able to keep up with the fast-paced technology developments that are truly unique to data
centers?
Fast-forward a few years: in mid-2016, ASHRAE published ASHRAE 90.4-2016: Energy Standard for Data Centers. Coming in at just 68 pages, ASHRAE 90.4 doesn't seem as detailed as other standards released by ASHRAE (ASHRAE 90.1 weighs in at just over 300 pages). But this is by design—instead of trying to weave data center-specific language into the existing standard, ASHRAE wisely chose to create a (mostly)
stand-alone standard that is only applicable to data centers and contains references to
ASHRAE 90.1. These references mainly are for building envelope, service-water heating,
lighting, and other requirements. Using this approach avoids doubling up on future revisions
to the standard, minimizes any unintended redundancies, and ensures that the focus of
ASHRAE 90.4 is exclusive to data center facilities. Also, issuing updates to ASHRAE 90.1 will
automatically update ASHRAE 90.4 for the referenced sections. In the same way, updates to
ASHRAE 90.4 will not affect the language in ASHRAE 90.1. Using ASHRAE 90.1 will not
automatically require the use of ASHRAE 90.4. In fact, since many local jurisdictions operate
on a 3-year cycle for updating their building codes, many are still using ASHRAE 90.1-2013 or earlier. The normative reference in ASHRAE 90.4 is ASHRAE 90.1-2016; however,
the final say on an administrative matter like this will always fall to the authority having
jurisdiction (AHJ).
Fundamentals of ASHRAE 90.4

ASHRAE 90.4 gives the engineer a completely new method for determining compliance.
ASHRAE introduces new terminology for demonstrating compliance: design and annual
mechanical load component (MLC) and electrical-loss components (ELC). ASHRAE is careful
to note that these values are not comparable to PUE and are to be used only in the context
of ASHRAE 90.4. The standard includes compliance tables consisting of the maximum load
components for each of the 19 ASHRAE climate zones. Assigning an energy efficiency target,
either in the form of a design or an annualized MLC for a specific climate zone, will certainly raise awareness of the inextricable link between climate and data center energy
performance (see figures 1 and 2). Since strategies like using elevated temperatures in the
data center and employing different forms of economization are heavily dependent on the
climate, an important goal is to increase the appreciation and understanding of these
connections throughout the data center design community.
Design mechanical-load component
MLC can be calculated in one of two ways to determine compliance. The first is a summation
of the peak power of the mechanical components in kilowatts, as well as establishing the
design load of the IT equipment, also in kilowatts. ASHRAE 90.4 has a table of climate zones
with the respective design dry-bulb and wet-bulb temperatures that are to be used when
determining the peak mechanical system load. The calculation procedure is shown below. It
must be noted that when comparing the calculated values of design MLC, the analysis must
be done at both 100% and 50% ITE load; both values must be less than or equal to the
values listed in Table 6.2.1 (design MLC) in ASHRAE 90.4.
Design MLC = [cooling design power (kW) + pump design power (kW) + heat rejection design fan power (kW) + air handler unit design fan power (kW)] ÷ data center design ITE power (kW)
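To make the check concrete, the short Python sketch below applies the design MLC formula at the two required ITE load points and compares the results with a table maximum. The equipment powers and the 0.30 limit are placeholder assumptions for illustration, not values taken from ASHRAE 90.4 Table 6.2.1.

# Illustrative design MLC check; all numbers are hypothetical placeholders.
def design_mlc(cooling_kw, pump_kw, heat_rejection_fan_kw, ahu_fan_kw, ite_kw):
    """Design MLC = total mechanical design power (kW) / design ITE power (kW)."""
    return (cooling_kw + pump_kw + heat_rejection_fan_kw + ahu_fan_kw) / ite_kw

cases = {
    "100% ITE load": dict(cooling_kw=170.0, pump_kw=25.0,
                          heat_rejection_fan_kw=30.0, ahu_fan_kw=60.0, ite_kw=1000.0),
    "50% ITE load": dict(cooling_kw=80.0, pump_kw=12.0,
                         heat_rejection_fan_kw=14.0, ahu_fan_kw=34.0, ite_kw=500.0),
}
TABLE_MAX_MLC = 0.30  # placeholder, not the actual Table 6.2.1 value

for label, powers in cases.items():
    mlc = design_mlc(**powers)
    print(f"{label}: design MLC = {mlc:.3f} "
          f"({'complies' if mlc <= TABLE_MAX_MLC else 'does not comply'})")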
Annualized mechanical-load component
The concepts used for the annualized MLC path are similar to those for the design MLC, except that an hourly energy analysis is required when using the annualized MLC path.
This energy analysis must be done using software specifically designed for calculating energy consumption in buildings and must be accepted by the rating authority. The software must capture the dynamic characteristics of the data center, both inside and outside. The following are some of the requirements for the modeling software (a simplified hourly-loop sketch appears after the weather-data discussion below):
• Tested in accordance with ASHRAE Standard 140: Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs.
• Able to evaluate energy use for all 8,760 hours/year.
• Accounts for hourly variations in IT load, which cascade down to electrical system efficiency, cooling system operation, and miscellaneous equipment power.
• Includes provisions for daily, weekly, monthly, and seasonal building-use schedules.
• Uses performance curves for cooling equipment, adjusting power use based on outdoor conditions as well as evaporator and condenser temperatures.
• Calculates energy savings based on economization strategies for air- and water-based systems.
• Produces hourly reports that compare the baseline HVAC system to a proposed system to determine compliance with the standard.
• Calculates required HVAC equipment capacities and water and airflow rates.
Since ASHRAE 90.4 categorizes compliance metrics based on climate zone, it is imperative
that the techniques used in simulating the data center's energy use are accurate based on
the specific location of the facility. As such, the simulation software must perform the
analysis using climatic data including hourly atmospheric pressure, dry-bulb and dew point
temperatures as well as wet-bulb temperature, relative humidity, and moisture content. This
data is available from different sources in the form of typical meteorological year (TMY2, TMY3) and EnergyPlus Weather (EPW) files that are used as inputs to the main
simulation program.
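As a rough illustration of what such an hourly model does with this kind of weather input, the sketch below steps through all 8,760 hours with an invented dry-bulb series, a flat ITE load, an assumed economizer threshold, and a crude part-load penalty, then reports an annualized MLC. None of these curves or thresholds come from ASHRAE 90.4 or from any accepted simulation tool; they are stand-ins for the behavior the real software must capture.

import math

HOURS = 8760
# Invented hourly dry-bulb profile (deg C) and a flat 1,000 kW ITE load
dry_bulb = [10.0 + 12.0 * math.sin(2 * math.pi * h / HOURS) for h in range(HOURS)]
ite_kw = [1000.0] * HOURS

mech_kwh = 0.0
for h in range(HOURS):
    if dry_bulb[h] <= 15.0:                               # assumed economizer threshold
        mech_kw = 0.06 * ite_kw[h]                        # fans only during free cooling
    else:
        lift_penalty = 1.0 + 0.02 * (dry_bulb[h] - 15.0)  # crude compressor-lift penalty
        mech_kw = (0.06 + 0.18 * lift_penalty) * ite_kw[h]
    mech_kwh += mech_kw                                   # 1-hour steps, so kW equals kWh

annualized_mlc = mech_kwh / sum(ite_kw)
print(f"Annualized MLC from this toy model: {annualized_mlc:.3f}")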
This compulsory hourly energy-use simulation considers fluctuations in mechanical system
energy consumption, particularly in cases where the equipment is designed for some type of
economizer mode, as well as energy reductions in vapor-compression equipment from
reduced lift due to outdoor temperature and moisture levels. This approach seems to be the
most representative way of determining the energy performance of the data center, and since it
is based on already established means of determining building energy use (i.e., hourly
energy-use simulation techniques), it also will be the most understandable. Again, it must be
noted that when comparing the calculated values of annualized MLC, the analysis must be
done at both 100% and 50% ITE load; both values must be less than or equal to the values
listed in Table 6.2.1.2 (annualized MLC) in the ASHRAE standard. It also is important to note
that both the design and annualized MLC values are tied to the ASHRAE climate zones.
When energy use is calculated using simulation techniques, it becomes obvious that the
energy used has a direct correlation to the climate zone, primarily due to the ability to
extend economization strategies for longer periods of time throughout the year. If we
compare calculated annualized MLC values for data centers with the MLC values in ASHRAE
90.4, the ASHRAE requirements are relatively flat when plotted across the climate zones.
This means the calculated MLC values in this example have energy-use efficiencies that are
in excess of the minimum required by the standard (see Figure 7).
Annual MLC = [cooling design energy (kWh) + pump design energy (kWh) + heat rejection design fan energy (kWh) + air handler unit design fan energy (kWh)] ÷ data center design ITE energy (kWh)
Design electrical-loss component
The ASHRAE 90.4 approach to calculating the ELC defines the electrical system efficiencies and losses. For the purposes of ASHRAE 90.4, the ELC consists of three parts of
the electrical system architecture:
1. Incoming electrical service segment
2. Uninterruptible power supply (UPS) segment
3. ITE distribution segment.
The segment for electrical distribution for mechanical equipment is stipulated to have losses
that do not exceed 2%, but is not included in the ELC calculations. All the values for
equipment efficiency must be documented using the manufacturer's data, which must be
based on standardized testing using the design ITE load. The final submittal to the rating
authority (the organization or agency that adopts or sanctions the results of the analysis)
must consist of an electrical single-line diagram and plans showing areas served by electrical
systems, all conditions and modes of operation used in determining the operating states of
the electrical system, and the design ELC calculations demonstrating compliance. Tables
8.2.1.1 and 8.2.1.2 in ASHRAE 90.4 list the maximum ELC values for ITE loads less than 100
kW and greater than or equal to 100 kW, respectively. The tables show the maximum ELC
for the three segments individually as well as the total.
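A minimal sketch of that submittal math is shown below: the three segment losses are summed into a design ELC and checked against per-segment and total maxima. The loss fractions and limits are invented placeholders, not the figures in Tables 8.2.1.1 and 8.2.1.2.

SEGMENT_LOSSES = {                  # fraction of power lost in each segment (assumed)
    "incoming service": 0.010,
    "UPS": 0.060,
    "ITE distribution": 0.015,
}
MAX_ELC = {                         # placeholder limits for an ITE load >= 100 kW
    "incoming service": 0.015,
    "UPS": 0.080,
    "ITE distribution": 0.020,
    "total": 0.110,
}

total_elc = sum(SEGMENT_LOSSES.values())
for segment, loss in SEGMENT_LOSSES.items():
    print(f"{segment}: loss {loss:.3f} vs limit {MAX_ELC[segment]:.3f} "
          f"-> {'ok' if loss <= MAX_ELC[segment] else 'exceeds limit'}")
print(f"total design ELC {total_elc:.3f} vs limit {MAX_ELC['total']:.3f} "
      f"-> {'complies' if total_elc <= MAX_ELC['total'] else 'does not comply'}")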

The electrical distribution system's efficiency impacts the data center's overall energy efficiency in two ways. First, the lower the efficiency, the more incoming power is needed to serve the IT load. Second, more air conditioning energy is required to cool the electrical energy dissipated as heat. ASHRAE 90.4, Section 6.2.1.2.1.1, is explicit on how this should be handled: "The system's UPS and transformer cooling loads must also be included in [the MLC], evaluated at their corresponding part-load efficiencies." The standard includes an
approach on how to evaluate single-feed UPS systems (e.g., N, N+1, etc.) and active dual-
feed UPS systems (2N, 2N+1, etc.). The single-feed systems must be evaluated at 100%
and 50% ITE load. The dual active-feed systems must be evaluated at 50% and 25% ITE
load, as these types of systems will not normally operate at a load greater than 50%.
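The sketch below shows one way to express that rule: the evaluation load points are picked from the UPS topology, and the loss at each point, computed from an assumed part-load efficiency curve, is flagged as additional cooling load for the MLC. The efficiency values are placeholders, not manufacturer or standard data.

UPS_EFFICIENCY = {1.00: 0.96, 0.50: 0.95, 0.25: 0.93}   # assumed part-load curve

def evaluation_points(topology):
    """Single-feed (N, N+1): 100% and 50% ITE load; dual active-feed (2N, 2N+1): 50% and 25%."""
    return (1.00, 0.50) if topology == "single-feed" else (0.50, 0.25)

def ups_loss_kw(ite_design_kw, load_fraction):
    """Loss = UPS input power minus output power at the given ITE load fraction."""
    output_kw = ite_design_kw * load_fraction
    eff = UPS_EFFICIENCY[load_fraction]
    return output_kw * (1.0 - eff) / eff

ite_design_kw = 1000.0
for topology in ("single-feed", "dual active-feed"):
    for fraction in evaluation_points(topology):
        loss = ups_loss_kw(ite_design_kw, fraction)
        # Per Section 6.2.1.2.1.1, this heat also shows up as cooling load in the MLC.
        print(f"{topology} at {fraction:.0%} ITE load: UPS loss of about {loss:.1f} kW")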
Addressing reliability of systems and equipment
One of the distinctive design requirements of data centers is the high degree of reliability.
One manifestation of this is the use of redundant mechanical equipment. The redundant
equipment will come online when a failure occurs or when maintenance is required without
compromising the original level of redundancy. Different engineers use different approaches
based on their clients' needs. Some will design in extra cooling units, pumps, chillers, etc.
and have these pieces of equipment running all the time, cycling units on and off as
necessary. Other designs might have equipment to handle more stringent design conditions,
such as ASHRAE 0.4% climate data (dry-bulb temperatures corresponding to the 0.4%
annual cumulative frequency of occurrence).
And yet others will use variable-speed motors to vary water and airflow, delivering the
required cooling based on a changing ITE load. Since these design approaches are quite
different from one another, Table 6.2.1.2.1.2 in ASHRAE 90.4 provides methods for
calculating MLC compliance under these scenarios.
Performance-based approach
ASHRAE 90.4 uses a performance-based approach rather than a prescriptive one to
accommodate the rapid change in data center technology and to allow for innovation in
developing energy-efficient cooling solutions. Some of the provisions seem to especially encourage innovative solutions, including:
• Onsite renewables or recovered energy. The standard allows a credit to the annual energy use if onsite renewable energy generation is used or waste heat is recovered for other uses. Data centers are ideal candidates for renewable energy generation, as the load can be constant through the daytime and nighttime hours. Also, when water-cooled computers are used with high discharge-water temperatures, the water can be used for building heating, boiler-water preheating, snow melting, or other thermal uses.
• Derivation of MLC values. The MLC values in the tables in ASHRAE 90.4 are considered generic to allow multiple systems to qualify for the path. They are based on systems and equipment currently available in the marketplace from multiple manufacturers. This is the benchmark for minimum compliance that must be met, but ideally the project would go beyond the minimum and demonstrate even greater energy-reduction potential.
• Design conditions. The annualized MLC values for air systems are based on a delta T (temperature rise of the supply air) of 20°F and a return-air temperature of 85°F. However, the proposed design is not bound to these values if the design temperatures are in agreement with the performance characteristics of the coils, pumps, fan capacities, etc. This provision gives the engineer a lot of room to innovate and propose nontraditional designs, such as water cooling of the ITE.
• Trade-off method. Sometimes mechanical and electrical systems have constraints that may disqualify them from meeting the MLC or ELC values on their own merit. The standard allows, for example, a less efficient mechanical system to be offset by a more efficient electrical system, and vice versa (a simple sketch of this trade-off check follows this list). Another benefit of this approach is that the mechanical and electrical engineers must collaborate through an iterative, synergistic design process.
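Here is the simple trade-off sketch referenced in the last bullet: a mechanical system that overshoots its MLC limit can still comply if the electrical system undershoots its ELC limit by at least as much. The numbers and the combined test are placeholders used only to illustrate the idea; the actual trade-off procedure is the one defined in ASHRAE 90.4.

mlc_calculated, mlc_max = 0.34, 0.30   # mechanical side misses its own limit
elc_calculated, elc_max = 0.06, 0.11   # electrical side beats its own limit

individually_compliant = mlc_calculated <= mlc_max and elc_calculated <= elc_max
traded_off_compliant = (mlc_calculated + elc_calculated) <= (mlc_max + elc_max)

print(f"individual compliance: {individually_compliant}")   # False
print(f"trade-off compliance:  {traded_off_compliant}")     # True: 0.40 <= 0.41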
Publishing ASHRAE 90.4-2016 is a watershed moment—to date, there has not been a code-
ready, technically robust approach to characterize mechanical and electrical system designs
to judge conformance to an energy standard. This is no small feat, considering that data
center mechanical/electrical systems can have a wide variety of design approaches,
especially as the data center industry continues to develop more efficient ITE requiring novel means of power and cooling. And because ASHRAE 90.4 is a separate document from ASHRAE 90.1, the process to augment and revise it as computer technology changes should be less difficult. While certainly not perfect, ASHRAE 90.4 is a major step along the path of
ensuring energy efficiency in data centers.

Bill Kosik is a senior mechanical engineer at exp in Chicago. Kosik is a member of the
Consulting-Specifying Engineer editorial advisory board.
ASHRAE 90.4: Why This Data Center Standard Matters
Nicolas Sagnes | May 15, 2017
In September of 2016, the American Society of Heating, Refrigerating, and Air-Conditioning
Engineers (ASHRAE) published a new and improved standard that establishes the minimum
energy efficiency requirements for data centers.
ASHRAE 90.4-2016 has been in development for several years. Overall, this new standard
contains recommendations for the design, construction, operation, and maintenance of data
centers. Additionally, this standard focuses on the use of both on-site and off-site renewable
energy.
This standard explicitly addresses the unique energy requirements of data centers, as opposed to standard buildings, thus integrating the more critical aspects and risks surrounding the operation of data centers.
What is New?
Standard 90.4 is a performance-based design standard built around two design components: the mechanical load component (MLC) and the electrical loss component (ELC). Once both the MLC and ELC are calculated, the results are compared to the maximum allowable values based on climate zones. Compliance with Standard 90.4 is achieved when
the calculated values do not exceed the values contained in the standard. An alternative
compliance path is provided to allow tradeoffs between the MLC and ELC.
The absence of PUE in 90.4 allows the primary focus to be on energy consumption, rather
than efficiency.
PUE, as a simpler metric, represents efficiency. It allows data center operators to measure
the effectiveness of the power and cooling systems over time. However, PUE is quite limited,
as it measures only the relationship between the energy consumed by IT equipment and the energy consumed by IT and infrastructure combined. PUE isn't a useful tool for determining whether or not overall energy consumption has increased at the facility level.
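The point is easy to see from the definition itself (total annual facility energy divided by annual IT energy, per The Green Grid's Version 2 of the metric). In the hypothetical comparison below, two sites share the same PUE even though one consumes twice the energy.

def pue(total_facility_kwh, it_kwh):
    """PUE = total facility energy / IT equipment energy (annualized)."""
    return total_facility_kwh / it_kwh

site_a = pue(total_facility_kwh=15_000_000, it_kwh=10_000_000)
site_b = pue(total_facility_kwh=30_000_000, it_kwh=20_000_000)

print(f"Site A PUE: {site_a:.2f}")                          # 1.50
print(f"Site B PUE: {site_b:.2f} (twice the total energy)") # also 1.50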
ASHRAE 90.4 intends to tackle and regulate lower performers, while being mindful of
geographic areas. The new standard aims to impact power utilization as a whole throughout
a data center facility, highlighting the impact of raising the temperature in the white space
on overall energy consumption.
Another key part of this standard is containment. Containment looks closely at the
homogeneity of a given air volume across a data center, thereby limiting the power loads necessary to overcome hotspots.
How Raritan Helps
As a key player in the most advanced data center efficiency and management practices,
Raritan is allowing end users across the largest facilities to leverage the granular capabilities
of our PX Intelligent Power Distribution Units to centralize key environmental metrics at the
rack and device levels.
Outlet-level metering in conjunction with temperature and humidity sensors is useful in determining whether or not the IT equipment is drawing more power. This is typically caused by a fan accelerating due to a rise in temperature.
Creating links between causes and effects across the data center allows Raritan PX users to
comply with ASHRAE 90.4. Users get a clear picture of the containment (or lack thereof), as
well as the effect airflow has on the power chain at a granular level.
Leveraging this data ultimately gives PX users the ability to make insightful decisions about
implementing more efficient containment solutions with Legrand, and urges users to take
action with more effective cooling policies and load balancing across facilities.
Learn more about the PX series of intelligent PDUs for your data center, and let us know if
you have any questions or comments.
How to Design a Data Center Cooling System for ASHRAE 90.4

WRITTEN BY Anastasia Churazova


Data centers are critical, energy-hungry infrastructures that operate around the clock. They
provide computing functions that are vital to the daily operations of top economic, scientific,
and technological organizations around the world. The amount of energy consumed by
these centers is estimated at 3% of the total worldwide electricity use, with an annual
growth rate of 4.4%. Naturally, this has a tremendous economic, environmental, and
performance impact that makes the energy efficiency of cooling systems one of the primary
concerns for data center designers, ahead of the traditional considerations of availability and
security. [1]
Studies also show that the largest energy consumer in a typical data center is the cooling
infrastructure (50%), followed by servers and storage devices (26%) [2]. Thus, in order to
control costs while meeting the increasing demand for data center facilities, designers must
make the cooling infrastructure and its energy efficiency their primary focus.
Data Center Cooling: Which Standards to Follow?
Until recently, this was a challenging task due to the fact that the industry standards used to
assess the energy efficiency of data centers and server facilities were inconsistent. To
establish a governing rule for data center energy efficiency measurements, power usage
effectiveness (PUE) was introduced in 2007. However, it served as a performance
metric rather than a design standard and still failed to address relevant design components,
so the problem remained.
New Energy Efficiency Standard ASHRAE 90.4
This led the American Society of Heating, Refrigerating, and Air-Conditioning Engineers
(ASHRAE), one of the main organizations responsible for developing guidelines for various
aspects of building design, to develop a new standard that would be more practical for the
data center industry. ASHRAE 90.4 has been in development for several years and was
published in September of 2016, bringing a much-needed standard to the data center
community. According to ASHRAE, this new standard, among other things, "establishes the minimum energy efficiency requirements of data centers for design and construction, for the creation of a plan for operation and maintenance and for utilization of on-site or off-site renewable energy resources." [3]
Data Center Model
Overall, this new ASHRAE 90.4 standard contains recommendations for the design,
construction, operation, and maintenance of data centers. This standard explicitly addresses
the unique energy requirements of data centers as opposed to standard buildings, thus
integrating the more critical aspects and risks surrounding the operation of data centers.
And unlike the PUE energy efficiency metric, the calculations in ASHRAE 90.4 are based on
representative components related to design. Organizations need to calculate efficiencies
and losses for different elements of the systems and combine them into a single number,
which must be equal to or less than the published maximum figures for each climate zone.
How Computational Fluid Dynamics Can Help You Comply with ASHRAE 90.4
Any number of different cooling system design strategies or floor layout variations can affect
the results, thereby changing efficiency, creating hotspots or altering the amount of
infrastructure required for the design. Computational fluid dynamics (CFD) offers a method
of evaluating new designs or alterations to existing designs before they are implemented.
To learn how CFD can help you curb excessive energy consumption of your data
center systems and comply with ASHRAE 90.4, watch the webinar recording.


Design Strategies to Reduce Data Center Energy Consumption


Designing a new data center facility or changing an existing one to maximize cooling efficiency can be a challenging task. Design strategies to reduce the energy consumption of a data center include:
• Positioning data centers based on environmental conditions (geographical location, climate, etc.)
• Making design decisions based on infrastructure topology (IT infrastructure and tier standards)
• Adopting the best cooling system strategies
Improving the data center cooling system configuration is a key opportunity for the HVAC
design engineer to reduce energy consumption. Some of the different cooling strategies that
designers and engineers follow to conserve energy include:
• Air conditioners and air handlers. The most common types are air conditioner (AC) or computer room air handler (CRAH) units that blow cold air in the required direction to remove hot air from the surrounding area.
• Hot aisle/cold aisle. Cold air is supplied to the front of the server racks, and hot air exits from the rear of the racks. The main goal is to manage the airflow in order to conserve energy and reduce cooling cost. The image below shows the cold- and hot-aisle airflow movements in a data center.
• Hot aisle/cold aisle containment. Containment of the hot/cold aisles is done mainly to separate the cold and hot air within the room and remove hot air from the cabinets. The image below shows the detailed airflow movement of cold and hot containment individually.
• Liquid cooling. Liquid cooling systems provide an alternative way to dissipate heat from the system. This approach brings chilled water or refrigerant close to the heat source.
• Green cooling. Green cooling (or free cooling) is one of the sustainable technologies used in data centers. This could involve simply opening data center windows covered with filters and louvers to allow natural cooling. This approach saves a tremendous amount of money and energy.
Identifying the right combination of these cooling techniques can be challenging. Here's how
CFD simulation can make this task easier.
Case Study: Improving A Data Center Cooling System
Computational fluid dynamics (CFD) can help HVAC engineers and data center designers to
model a virtual data center and investigate the temperature, airflow velocity and pressure
fields in a fast and efficient way. The numerical analysis presents both 3D visual contouring
and quantitative data that is highly detailed yet easy to comprehend. Areas of complex
recirculating flow and hotspots are easily visualized to help identify potential design flaws.
Implementing different design decisions and strategies into the virtual model is relatively
simple and can be simulated in parallel.
Project Overview
For the purpose of this study, we used a simulation project from the SimScale Public
Projects Library that investigates two different data center cooling system designs, their
cooling efficiency, and energy consumption. It can be freely copied and used by any
SimScale user.
The two design scenarios are shown in the figure below:

The first design that we will consider uses a raised floor configuration, a cooling system that
is frequently implemented in data centers. When this technique is used, cold air enters the
room through the perforated tiles in the floor and in turn cools the server racks. Additionally,
the second model uses hot aisle containment and lowered ceiling configuration to improve
the cooling efficiency. We will use CFD to predict and compare the performance of the two
designs and determine the best cooling strategy.
CFD Simulation Results
Baseline Design of the Data Center Cooling System
We investigated the temperature distribution and the velocity field inside of the server room
for both design configurations. The post-processing images below show the velocity and
temperature fields for the midsection of the baseline design. It can be observed that the hot
air is present in the region of the cold aisle. This is due to the mixing of the cold- and hot-aisle air within the data center room. The maximum velocity for this baseline design
is at 0.44 m/s with a temperature range of 28.6 to 49.7 degrees Celsius.
The temperature contour shows that the zones between the two server racks are much better cooled than the others. The reasons for this can be understood by
looking at the flow patterns.
It is evident in the above image that in the server rows where inlets are present, the top of
the racks sees a descending flow direction, instead of the desirable ascending flow from the
inlets themselves. This is due to the strong recirculation currents driven by thermal
buoyancy forces. This effect is very undesirable, as it reduces the cooling effectiveness
specifically for the top shelves of the server racks. This effect could be minimized by
allowing for proper airflow above the racks, either by increasing the ceiling height, placing
more distributed outlets on the ceiling, or using some kind of active flow control system
(fans) to direct the flow above the server racks. Or more simply, by preventing the hot air
coming from the racks from freely circulating.
The temperature plot shows a significant temperature stratification which is to be expected
given the large recirculation currents. We can observe that only the lowermost servers are
receiving appropriate cooling.
Improved Design of the Data Center Cooling System
The next two pictures show the velocity and temperature distribution for the improved
design scenario, middle section pane.
The velocity field shows that the flow is now driven to the outlets. This is due to the
presence of containment on top of the racks. This also results in a better temperature
distribution. The cold zones between the server racks are noticeably more extensive.

The above image shows how the hot containment prevents the ascending flow from
recirculating back to the inlet rows. This results in a cleaner overall flow pattern compared
to what was seen in the previous design. It is also evident that the new design reduces
temperature stratification, particularly in the contained regions between the servers.
The average temperature calculated for each rack is lower for the improved design by
about 23%.

This is also reflected in the decrease in the amount of power that has to be supplied to the
server to prevent overheating. On average, energy savings of 63% for the data center
cooling system have been achieved.

Conclusions
A typical data center may consume as much energy as 25,000 households. With the rising
cost of electricity, increasing energy efficiency has become the primary concern for today‘s
data center designers.
This case study was just a small illustration of how CFD simulation can help designers and
engineers validate their design decisions and accurately predict the performance of their
data center cooling systems to ensure no energy is wasted. The whole analysis was done in
a web browser and took only a few hours of manual and computing time. To learn more,
watch the recording of the webinar.


If you want to read more about how CFD simulation helps engineers and architects improve
building performance, download this free white paper.
References
[1] W. V. Heddeghem et al., "Trends in worldwide ICT electricity consumption from 2007 to 2012," Comput. Commun., vol. 50, pp. 64–76, Sep. 2014.
[2] "Top 10 energy-saving tips for a greener data center," Info-Tech Research Group, London, ON, Canada, Apr. 2010, http://static.infotech.com/downloads/samples/070411_premium_oo_greendc_top_10.pdf
[3] ASHRAE Standard 90.4-2016, Energy Standard for Data Centers, https://www.techstreet.com/ashrae/standards/ashrae-90-4-2016
Data Center PUE vs. ASHRAE 90.4
April 8, 2019 Julius Neudorfer
It has been almost three years since the
ASHRAE 90.4 Energy Standard for Data
Centers was finalized and went into effect in
2016, yet even today, many in the data
center industry are not fully aware of its
existence or its implications. Far more
people are familiar with The Green Grid
power usage effectiveness (PUE) metric,
first introduced in 2007, which started the
data center industry thinking about the
energy efficiency of the physical facility. Originally, PUE was based on snapshot power measurements (kW), which was one of the loopholes of the first version of the metric. In 2011, The Green Grid (TGG) updated the PUE metric to Version 2 (https://bit.ly/2usqKPj), basing it on annualized energy usage (kWh), which reflects a more meaningful efficiency picture under various operating conditions.
Its purpose was to help data center operators baseline and improve their own facilities. PUE has been criticized by some since it only covers the energy efficiency of the facility (not the IT systems); however, that was clearly its stated purpose. Nonetheless, its underlying simplicity allowed managers to easily calculate (or guess) a facility's PUE, which drove its widespread adoption. In addition, the PUE metric helped prompt the U.S. EPA to create the
Energy Star program for Data Centers which became effective in 2010. This program
continues to be a voluntary award program and currently there are 152 Energy Star Certified
Data Centers listed at https://bit.ly/1V5Vafa.
PUE VS ASHRAE 90.4
PUE is considered the de facto metric by the data center industry, and in 2016 it became an ISO standard (ISO/IEC 30134). Yet, despite this, most building departments do not know much about data centers and have never heard of The Green Grid or the PUE metric.
Nonetheless, while data centers may be different from an office building, like other buildings they still need to comply with any local and state building codes for safety and, more recently, for energy efficiency. In many areas of the U.S., "ASHRAE 90.1 Energy Standard for Buildings Except Low-Rise Residential Buildings" is referenced and incorporated as part of some state or local building codes. Data centers were previously more or less exempted in the 90.1 standard; however, as of 2010 they were included. The data center industry complained that the standard was too prescriptive, and it was updated in 2012 to try to address this issue. In 2016, "90.4 Energy Standard for Data Centers" was introduced, and 90.1-2016 subsequently transferred references to energy performance requirements for data centers to the newly issued 90.4 standard.
The other aspect of PUE is that, technically speaking, it is not a design metric. It is meant to measure, baseline, and continuously improve and optimize operating energy efficiency. Nonetheless, PUE has been used as a reference for building design goals before construction. It
is also sometimes referenced in colocation contractual SLA performance or in energy cost
schedules. In contrast, the ASHRAE 90.4 standard is primarily a design standard meant to
be used when submitting plans for approval to build a new data center facility. It also covers
facility capacity upgrades of 10% or greater, which could complicate some facility upgrades.
Moreover, the energy calculation methodology for the 90.4-2016 standard is far more complex than the PUE metric. However, one of the issues with the PUE metric is that it does
not have a geographic adjustment factor. This means that since cooling system energy
typically represents a significant percentage of the facility energy usage, identically
constructed data centers would each have a different PUE if one were located in Miami,
while the other was in Montana. But PUE has always provided a simple, uniform number that is easy to understand and use to monitor efficiency, regardless of location.
The 90.4-2016 standard separated the electrical power chain losses from the cooling system
energy efficiency calculations. While primarily focused on cooling performance, the 90.4-
2016 standard also details and limits the total maximum electrical losses through the entire
power chain, from the utility handoff, through the UPS and distribution system, and ending
at the cabinet power strips feeding the IT equipment. Moreover, this is very strictly
prescribed by the "Electrical Efficiency Compliance Paths," along with calculations detailed in
a table with specific limits on energy losses for varying levels of redundancy: N, N+1, 2N,
and 2(N+1) for the UPS and distribution losses at various operating load levels.
ASHRAE typically revises and updates its standard every three to four years, and publishes
proposed revisions for public comments, which can be found at http://osr.ashrae.org/. In
March, there were three proposed Addendums: f, g, and h, released concurrently and
posted for a 30-day public review. The first, addendum "f," is focused on UPS efficiency and is described as intended "to better align with current vintages of UPS technology in terms of performance and industry evolution." The original and proposed Maximum Design Electrical Loss
Component (Design ELC) table had UPS efficiency/loss listings at 100%-50%-25% loads
(per system or module, depending on the system design: N, N+1, 2N, etc.). For systems with
ITE loads greater than 100 kW, the proposed revision substantially decreases the maximum
allowable UPS losses from 9% to 6.5% (at 100% load), from 10% to 8% (at 50% load),
and from 15% down to 11% (at 25% load). The newer UPS units are more efficient across a
wider range of load levels, and many may achieve 93.5% efficiency (6.5% loss factor) at full
load. However, it is more difficult to deliver 89% efficiency (11% loss factor) when
operating at only 25% load.
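To see what the tighter limits mean in practice, the sketch below compares an assumed UPS part-load efficiency curve against the original and proposed maximum loss factors quoted above (for ITE loads greater than 100 kW). The efficiency curve is invented; only the loss limits come from the figures cited in this article.

ORIGINAL_MAX_LOSS = {1.00: 0.090, 0.50: 0.100, 0.25: 0.150}
PROPOSED_MAX_LOSS = {1.00: 0.065, 0.50: 0.080, 0.25: 0.110}
ups_efficiency = {1.00: 0.935, 0.50: 0.925, 0.25: 0.890}   # assumed vendor curve

for load, eff in ups_efficiency.items():
    loss = 1.0 - eff                                        # loss factor = 1 - efficiency
    for label, limits in (("original", ORIGINAL_MAX_LOSS), ("proposed", PROPOSED_MAX_LOSS)):
        ok = loss <= limits[load]
        print(f"{load:.0%} load: loss {loss:.3f} vs {label} limit "
              f"{limits[load]:.3f} -> {'pass' if ok else 'fail'}")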
It is the cooling system calculation section known as mechanical load component (MLC) that
includes location as a factor for meeting the cooling system energy compliance requirements. It
incorporates a table with 18 U.S. climate zones listed in ASHRAE Standard 169, each with an
individual Maximum Annualized MLC compliance factor. In the proposed addendum "g" revision, for data centers with greater than 300 kW of ITE load, the maximum MLC compliance factor is substantially decreased for each climate zone (requiring less cooling system energy), which really kicks up the requirements a notch. In some zones, such as 4B, 5B, and 6B, the new maximum MLC would be reduced by as much as 50% to 60%.
The last item, addendum "h," has less impact on data centers since it focuses on wiring
closets. However, it is an additional cooling efficiency factor that could be subject to scrutiny
and would need to meet mandatory compliance requirements.
Ironically, the ASHRAE Thermal Guidelines for Data Processing Environments, which is
widely considered an industry bible by the majority of data center operators, is not legally
recognized by governmental agencies responsible for overseeing and enforcing building
codes related to the design and construction of buildings. It has undergone four revisions since its inception in 2004, when it had a very tight recommended environmental envelope for ITE. It was the third and fourth editions that promoted and endorsed cooling energy efficiency by introducing expanded allowable IT intake temperature ranges and broader humidity ranges, which effectively negated the need for energy-intensive tight humidity control.
The Bottom Line
I have been a longstanding advocate of data center energy efficiency and have written and
spoken about it well before PUE was introduced. When 90.4 originally came out in 2016, I
wrote that it was about to "move your cheese." The proposed addendums, which tighten the efficiency requirements and will take effect in 2020, may move it a bit further. But is this
really necessary? Clearly the designers of the newest facilities, especially the colocation providers and hyperscalers, are highly self-motivated to focus on energy efficiency. The massive shift toward colocation and cloud service providers has directly or indirectly made energy efficiency a competitive mandate and part of the justification for lowering TCO. Nonetheless, many older data centers were designed with availability as the highest priority; efficiency was not given the same consideration as in modern designs. Moreover, it is also
true that prior to the PUE metric many organizations that owned their own data centers
were not very aware of their facility energy efficiency. In some cases, the managers never
saw or were not responsible for energy costs. However, while fewer enterprise organizations
are building their own new sites, many older sites are still operational. As
a consultant, I perform data center energy efficiency assessments and have seen older
facilities that are still in good condition but unfortunately may have a PUE of 2 to 3, primarily due to the age of their electrical and cooling infrastructure.
While it is easy to simply recommend equipment upgrades, these are costly and payback
can be hard to economically justify. In addition, it is very difficult or impossible to replace
key, but inefficient, components without shutting down or disrupting the facility. Critical
elements, such as large chillers, cannot be upgraded, especially if there is limited or no
redundancy. Even today I have found that in most instances, a significant amount of cooling
system energy can be saved in data centers by low-cost or no-cost fixes to basic airflow
issues.
Fixing the basic low-hanging fruit, such as installing blanking plates and adjusting or relocating floor grilles, is nondisruptive and is within the capabilities of most in-house staff. This can
solve most of their cooling issues, which allows raising temperatures to save energy. More
importantly, these older sites typically have little or no visibility into how the facility
infrastructure energy is consumed, other than seeing the total on their monthly utility bills. So if
you are considering major equipment upgrades to an existing facility, now would be a good
time to consider reviewing ASHRAE 90.4-2016, as well as the pending addendums to see if
they apply to your project. Consider purchasing a DCIM system or granular energy metering
and monitoring system to continuously optimize cooling system efficiency before investing in
expensive forklift upgrades.
Examining the Proposed ASHRAE 90.4 Standard
BY JULIUS NEUDORFER - APRIL 4, 2016

The potential impact: 90.4 will most likely increase the cost of preparing documentation during the design stage to submit plans for building department approvals.
The stated purpose of the proposed ASHRAE 90.4P standard is "to establish the minimum energy efficiency requirements of Data Centers for: the design, construction, and a plan for operation and maintenance, and utilization of on-site or off-site renewable energy resources." The scope covers "a) New Data Centers or portions thereof and their systems, b) new additions to Data Centers or portions thereof and their systems, and c) modifications to systems and equipment in existing Data Centers or portions thereof." It also states that the provisions of this standard do not apply to: a. telephone exchange(s), b. essential facility(ies), c. information technology equipment (ITE).

This article is the fourth in a series on data center cooling taken from the Data Center Frontier Special Report on Data Center Cooling Standards (Getting Ready for Revisions to the ASHRAE Standard).
Mandatory Compliance Through Legislation
The proposed 90.4 standard states that "Compliance with this standard is voluntary until and unless a legal jurisdiction makes compliance mandatory through legislation." As previously stated, one of the many data center industry concerns involves the "authorities having jurisdiction" (AHJ). This group encompasses many state and local building departments, as well as the federal government, which use 90.1 as a basis for many types of new commercial and industrial buildings or for requirements to upgrade existing buildings. Therefore, they will also use 90.4 as their new data center standard, since it is referred to in the revision to the 90.1 standard. Moreover, 90.4 also acts as a more detailed supplement to 90.1 and in fact specifically requires compliance with all other 90.1 building provisions. Beyond any design compromises and increased costs, it seems unlikely that many AHJs' local building inspectors will fully understand complex data center-specific issues, such as redundancy levels, or even be
familiar with TC9.9 or The Green Grid PUE metric. This could delay approvals or force
unnecessary design changes, simply based on how the local building inspector interprets the
90.1 and 90.4 standards.
Mandatory Electrical and Mechanical Energy Compliance (90.4)
Electrical Loss Component (ELC):
The designers and builders of new data centers will need to demonstrate how their design
will comply with the highly specific and intertwining mandatory compliance paths. This is
defined by the "design electrical loss component (design ELC)" sections. These involve a multiplicity of tables of minimum electrical system energy efficiency requirements related to IT design load capacity at both 50 and 100 percent. In addition, the standard requires evaluation of electrical system path losses and UPS efficiencies at 25, 50, and 100 percent loads at various levels of redundancy (N, N+1, 2N, and 2(N+1)). Moreover, it delineates three distinct sections of the power chain losses: the "incoming service segment, UPS segment, and ITE distribution segment" (which extends down through cable losses to the IT cabinets). Furthermore, it states that the design ELC "shall be calculated using the worst case parts of each segment of the power chain in order to demonstrate a minimum level of electrically efficient design."
Mandatory PUE
The second revision of the proposed 90.4 standard explicitly listed mandated maximum PUE values ranging from 1.30 to 1.61, each specifically tied to a geographic area listed in each of the 18 ASHRAE climate zones. However, these PUE values seemed prohibitively low; many felt they would increase initial design and build costs substantially, and they generated a lot of industry concerns, comments, and protests. Moreover, the proposal did not take into account the local cost of energy, its availability and fuel sources, or water usage and any potential restrictions or shortages of water, which was recently an issue during California's ongoing drought. The PUE reference was removed in the next revision; however, the issue of local resources remained unaddressed.
Mechanical Load Component (MLC):
The 3rd revision removed all references to The Green Grid PUE requirements; however, it
contained highly detailed specific compliance requirements for minimum energy efficiency
for the mechanical cooling systems, again specifically listed for each of the 18 climate zones.
The 3rd revision has another cooling performance table (again for each of the 18 climate
zones), called the "design mechanical load component (design MLC)," defined as the sum of all
cooling, fan, pump, and heat rejection design power divided by the data center ITE design
power (at 100% and 50%).
Another, perhaps significant, issue is that the cooling efficiency calculations would seem to preclude the effective use of the "allowable" ranges to meet the mandatory and prescriptive requirements: "The calculated rack inlet temperature and dew point must be within Thermal Guidelines for Data Processing Environments recommended thermal envelope for more than 8460 of the hours per year." So even if the data center design is intended for newer, higher-temperature IT equipment (such as A2, A3, or A4), it would
unnecessarily need to be designed and constructed for the lower recommended range,
which could substantially increase cooling system costs.
A3 and A4 Servers
While A3 and A4 servers did not exist in 2011 when the expanded ranges were originally
introduced, as of 2015 there were several A4-rated servers on the market whose manufacturers' specifications state that those models can "run continuously at 113°F (45°C) with no impact on reliability." This new generation of A3 and A4 hardware overcomes the early restrictions imposed by some manufacturers limiting the exposure
time to higher temperatures.
Concurrently with the release of the 3rd draft of 90.4, ASHRAE also released the new Proposed Addendum "cz" to 90.1-2013 for public review, which removes all references to mandatory PUE compliance. The addendum provides a clear-cut reference transferring all data center energy efficiency requirements to 90.4, which should reduce potential conflict and confusion (other aspects of the building would still need to comply with local building codes). The goal is to publish the final version of 90.4 in the fall of 2016.
Nonetheless, while this was a significant issue, why should a data center still be limited to the "recommended" temperature and dew point by designing a system to meet the mandatory cooling system energy efficiency requirements? It should be up to the operators to decide if, and for how long, they intend to operate in the expanded "allowable" ranges. This is especially true now that virtually all commodity IT servers can operate within the A2 range (50-95°F). Moreover, solid-state drives (SSDs) now have a much wider temperature range of 32°F to 170°F. While originally expensive, SSDs continue to come down in price while matching spinning-disk storage capacity, and they are substantially faster, delivering increased system throughput and improving overall server performance and energy efficiency. They have become more common in high-performance servers and, more recently, as a cost-effective option in commodity servers, which will also eventually result in greater thermal tolerance
as servers are refreshed.
So with all these additional factors defined in the ASHRAE Thermal Guidelines and the proposed 90.4 standard, many of which overlap and potentially conflict with each other, how should facility designers and managers decide on the "optimum" or "compliant" operating
environmental conditions in the data center?
Next week we will wrap up this Special Report and share the bottom line on this evolution of
data center cooling. If you prefer, you can download the Data Center Frontier Special Report on Data Center Cooling Standards in PDF format from the Data Center Frontier White Paper Library, courtesy of Compass Data Centers.
Understanding the New ASHRAE 90.4 Standard
Posted on December 26, 2016; updated October 2, 2017

The American Society of Heating, Refrigerating, and Air-conditioning Engineers, more commonly known as ASHRAE, has finally put forth the finished version of the data center
efficiency and energy usage standards. According to ASHRAE, the purpose of the 90.4
Standard is "to establish the minimum energy efficiency requirements of Data Centers for: the design, construction, and a plan for operation and maintenance, and utilization of on-site or off-site renewable energy resources." The ASHRAE Standard takes into consideration
that most existing data centers pass the requirements set under the standard, going with an
80/20 approach. This means that only the data centers that are the most energy inefficient
are the ones that will fail to comply. The standard also takes into consideration the constant
innovation and progress that takes place in the IT world and subsequently affects data
centers.
Part of Standard 90.4 focuses on the efficient use of on-site and off-site renewable energy resources. Instead of opting for the PUE (power usage effectiveness) metric to measure the efficiency of a data center, 90.4 divides the assessment into two parts: the mechanical load component (MLC), a metric for the minimum energy efficiency of all mechanical cooling systems specified for a variety of different climate zones, and the electrical loss component (ELC). The values are calculated and compared to the specified limits for each climate zone. The Standard requires that the calculated values be lower than the set limits in order for the goal of the standard to be achieved.
An alternative path also allows data centers to perform tradeoffs between the two, MLC and ELC, in cases where one of the systems compensates for the inefficiency of the other.
The older 90.1 Standard was implemented for almost all kinds of buildings, but the 90.4
Standard is built to ensure that it covers the design and structure of data centers
specifically. ASHRAE now looks to shift compliance from the previous version of the Standard to the new one for the efficient functioning of data center sites as well as efficient energy usage.
Jeffrey Dorf is the Editor of The Data Center Blog, President of the Global Data Center
Alliance, and oversees the Mission Critical Practice for the M+W Group
Raising Chilled Water Set Point
March 6, 2018 | Ernest Domingue
Raising your chilled water supply temperature, while still meeting the cooling demands of
your Data Center, can reward you with energy savings by reducing the work of the chiller's compressor. The compressor, after all, is the largest user of electrical energy when compared
to the pumps and cooling tower fans that make up the rest of the chiller plant.
ASHRAE Standard 90.1-2016 includes a requirement for resetting CHWST (chilled water supply temperature). Part 6.5.4.4 of the Standard requires systems larger than 25 tons (300,000 Btuh) to include controls that automatically reset supply temperature based on representative building loads. Automatic reset is a very important component in evaluating the part-load performance of a system across a wide range of annual weather data for a given location. For a Data Center, when evaluating the Mechanical Load Component (MLC) using the calculation described in ASHRAE Standard 90.4-2016, the Energy Standard for Data Centers, this systems-evaluation approach is an ideal way to compare systems that are being proposed for a new or renovated Data Center.
Even Data Center loads, which are typically constant year-round and affected very little (if at all) by outdoor seasonal changes, can benefit from automatic reset of CHWST. For instance,
since the cooling system is typically sized for the final ITE load, it will be oversized until the
ITE load is fully populated. This sometimes takes several years. Until that time, the cooling
plant will operate at part load. A good plant design will take advantage of this and will have
the components designed to operate efficiently at these partial loads while designing the
entire HVAC infrastructure with the final load in mind or as modular components with
redundant modules. The 90.4 Energy Standard allows energy modeling to include operation
of the redundant components as long as part load efficiencies of this equipment are used.
Apart from the chiller plant, additional cooling terminals (CRAH units) will typically be incorporated into the Data Center design to provide redundancy in the white space. It is common to operate all of the CRAH units simultaneously, including the redundant ones, with variable speed fans to provide good underfloor air distribution as well as part-load efficiency.
Keep in mind that when incorporating automatic chilled water reset, the CRAH unit chilled water coils must be sized to cool the final load with the warmer chilled water supply temperature. Taking advantage of chiller energy savings by resetting CHWST will not work unless the terminal coils are designed with sufficient surface area to handle the design load at the warmer, reset temperature. This all needs to be determined during the design phase, with upper limits set, based on coil sizing, on how high the chilled water temperature can be raised.
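As a first-order check of that coil-sizing question, the sketch below compares the counterflow log-mean temperature difference (LMTD) available to a CRAH coil at the design CHWST versus a reset CHWST; with the coil‘s UA held constant, capacity scales roughly with LMTD, so the ratio hints at how much additional surface area or airflow the selection needs. The temperatures are illustrative, and a real selection should come from the coil manufacturer‘s software, accounting for rows, fin geometry, airflow, and any latent load.

from math import log

def lmtd(t_air_in, t_air_out, t_chw_in, t_chw_out):
    """Counterflow log-mean temperature difference, degrees F."""
    dt1 = t_air_in - t_chw_out
    dt2 = t_air_out - t_chw_in
    return dt1 if abs(dt1 - dt2) < 1e-9 else (dt1 - dt2) / log(dt1 / dt2)

# Design case: 80F entering air cooled to 62F with 45F supply / 55F return water.
design = lmtd(80.0, 62.0, 45.0, 55.0)
# Reset case: same air conditions and water temperature rise, 52F supply water.
reset = lmtd(80.0, 62.0, 52.0, 62.0)

# With UA held constant, capacity scales roughly with LMTD, so the same duty at
# the warmer supply temperature needs roughly design/reset times more coil.
print(round(design, 1), round(reset, 1), round(design / reset, 2))   # 20.7 13.6 1.52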
High Humidity Issues
Problems with high humidity can result during portions of the year if the CHWST is raised too high. High humidity issues are usually caused by humidity migration into the Data Center from an outside source and are not directly related to warmer chilled water supply temperatures. Humidity migrates from areas of higher vapor pressure to areas of lower vapor pressure, much as heat conducts through an insulated wall from the hot outdoors into a cool, air conditioned building on a summer day. Insulation slows the conduction of heat just as a vapor barrier slows the migration of vapor, but no vapor barrier is perfectly sealed, and gaps let moisture into the building. This should not discourage you from raising your chilled water set point. Even in most of the Northeast United States, where the climate is considered cool/humid to cold/humid, there is still a large portion of the year when you will be able to cool the Data Center with elevated chilled water temperatures. Problems with high space humidity may only present themselves during the summer months.
Automatic Reset Control
Resetting the CHWST should be based on feedback from the actual cooling load, or demand for cooling. This can be accomplished by monitoring the room temperature with sensors located at the rack inlets. Chilled water return temperature can also be used, since it represents the demand for cooling. The preferred method is monitoring the real-time position of the chilled water control valves at the CRAH units; this method can only be performed through DDC-based building automation controls. Space relative humidity and dew point temperature can be monitored and used to override the reset schedule and lower the CHWST when space humidity becomes an issue.
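A simplified sketch of that valve-position reset sequence, including the humidity override, is shown below. The setpoint range, thresholds, and step size are assumptions for illustration; the actual sequence of operations belongs in the DDC/building automation programming and must respect the coil-sizing limits established during design.

CHWST_MIN_F = 45.0          # design supply temperature (assumed)
CHWST_MAX_F = 55.0          # upper limit established by coil sizing during design
STEP_F = 0.5
DEW_POINT_LIMIT_F = 59.0

def next_chwst(current_chwst, crah_valve_positions, space_dew_point_f):
    """Return the next CHWST setpoint from CRAH valve demand, with humidity override."""
    if space_dew_point_f > DEW_POINT_LIMIT_F:
        # Space getting too moist: back the supply temperature down.
        return max(CHWST_MIN_F, current_chwst - STEP_F)
    worst_valve = max(crah_valve_positions)      # percent open, 0-100
    if worst_valve > 90.0:
        # A nearly wide-open valve means a starved coil: lower the CHWST.
        return max(CHWST_MIN_F, current_chwst - STEP_F)
    if worst_valve < 70.0:
        # Plenty of valve authority in reserve: try a warmer supply temperature.
        return min(CHWST_MAX_F, current_chwst + STEP_F)
    return current_chwst

# Example poll: all valves below 70% open and a dry space, so the setpoint rises.
print(next_chwst(48.0, [55.0, 62.0, 48.0], 52.0))   # 48.5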
Many factors need to be considered when deciding to raise the chilled water temperature setpoint. It is not as easy as a couple of clicks in the Building Automation System. Even slight changes in the chilled water supply temperature can have a ripple effect on the system and result in a number of space condition issues and potential failures. If there is a desire to operate your system at an elevated chilled water setpoint, then it needs to be designed to do so right from the beginning. An existing system would need to be properly analyzed to determine 1) whether it is even possible with the existing equipment‘s coils, 2) how high a chilled water temperature the Data Center can tolerate, and 3) whether the control system can safely accommodate such a change without any reduction in reliability.
The control system is the brains of the operation and would need to be evaluated to make certain it has the correct logic to automatically control a resettable system. Sure, it is not a problem to manually change the chilled water setpoint from 45F to 50F, but what if there is an issue in the Data Center? Are there enough ‗smarts‘ in the control system to recognize an adverse condition and automatically make the necessary, ―safe‖ modifications to bring the system back under control?
There is a continued design effort among engineers, and now a gentle push (from ASHRAE Standards 90.1 and 90.4-2016), to be better stewards of our electrical consumption, but the entire system needs to be reviewed to make sure that saving energy does not come at the cost of reduced reliability.
Difference between white space and gray space in data centers

By Jasmie Russello on January 07, 2016

Companies are becoming more and more technology oriented and are dependent on IT for almost all of their operations. This has led to growth in demand for data center services. As a result, data center managers are faced with the challenge of making optimum use of available data center space to improve efficiency. One of the best approaches to dealing with such challenges is to consolidate the white space and gray space in the data center to improve operational capabilities.

First, let us understand the difference between white space and gray space in data centers:

White space in a data center refers to the area where IT equipment is placed, whereas gray space is the area where the back-end infrastructure is located.

White space includes housing for:        Gray space includes space for:
Servers                                  Switchgear
Storage                                  UPS
Network gear                             Transformers
Racks                                    Chillers
Air conditioning units                   Generators
Power distribution systems
Below are a few strategies to consolidate white space and gray space in order to improve data center operational efficiency and availability. These may relieve the burden on data center managers in an effective way.

White space efficiency techniques: To save usable space in data centers

 Virtualization is the key to efficiency: Virtualization replaces a large number of less-efficient devices with a small number of highly efficient ones, saving a lot of space in data centers. VMware and other virtualization management systems can be deployed to create a fully virtualized environment (a quick sizing sketch follows this list).
 Leveraging cloud computing resources: Cloud is gaining popularity today because organizations can reduce the number of servers they run by using public cloud data centers, which use the public internet to transfer data. Fewer servers means less space consumed.
 Data center capacity planning: Data center asset management tools help estimate current and near-future power and server requirements. This helps in optimally sizing data centers and avoids space wasted due to lack of planning.
 Switch to SSDs: SSDs are faster and more energy-efficient, and they have no mechanical components. They are much more compact than disk-based storage systems.
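The back-of-the-envelope sketch below shows why the virtualization math works out; the server counts, utilization figures, and rack density are assumptions chosen only for illustration.

import math

physical_servers = 400        # lightly loaded machines before virtualization
avg_utilization = 0.10        # each running at roughly 10% of capacity
target_utilization = 0.60     # planned utilization per virtualization host
servers_per_rack = 20

work = physical_servers * avg_utilization            # 40 "full-server" equivalents
hosts = math.ceil(work / target_utilization)         # 67 virtualization hosts

racks_before = math.ceil(physical_servers / servers_per_rack)    # 20 racks
racks_after = math.ceil(hosts / servers_per_rack)                # 4 racks
print("Racks freed for future growth:", racks_before - racks_after)   # 16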

Gray Space efficiency techniques: to save space for storing infrastructure

Latest energy storage technology: Technologies like flywheels store energy in the momentum of a spinning disk. During power fluctuations, the disk continues spinning because of this momentum, producing power that the UPS uses as brief emergency standby power. In this way, it reduces the number of batteries required for power supply, which in turn saves the space needed to store them.
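As a rough sense of scale, the sketch below estimates flywheel ride-through time from the kinetic energy of the spinning rotor; the rotor inertia, speed, load, and usable-energy fraction are placeholder values, not the specifications of any particular product.

from math import pi

# E = 1/2 * I * w^2 for the spinning rotor; placeholder figures throughout.
inertia_kg_m2 = 7.5                    # rotor moment of inertia
speed_rpm = 10000.0                    # operating speed
load_kw = 200.0                        # critical load carried during the power dip

omega = speed_rpm * 2.0 * pi / 60.0    # rad/s
energy_j = 0.5 * inertia_kg_m2 * omega ** 2

# Assume the rotor is useful down to half speed, leaving 75% of the energy usable.
usable_j = 0.75 * energy_j
ride_through_s = usable_j / (load_kw * 1000.0)
print(round(ride_through_s, 1), "seconds of ride-through")   # roughly 15 s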
White Space 69: New and improved
February 28, 2017 By: Max Smolaks
Now with 33 percent female presence
This week on White Space, we introduce a new member of the editorial team – please give
a warm welcome to Tanwen Dawn-Hiscox!
We start with the observation that liquid cooling is like a bus: you wait ages for news, and
then three announcements come at once. Swedish company Asetek has signed up a
mysterious data center partner while in the UK, Iceotope has partnered with 2bm. And in
Holland, Asperitas has unveiled an ‗immersed computing‘ system that essentially sinks
servers in giant tubs of Vaseline - cue the lubricant jokes.
There‘s an update on the lawsuit filed by British modular data center specialist Bladeroom
against Facebook - the company claims that the social network stole its designs and
deployed them in Lulea, Sweden. It is now obvious that the case will go to court.
Google has finally made Nvidia GPUs available to Google Cloud Platform customers,
becoming the last of the major cloud vendors to do so: the same GPUs were offered by
SoftLayer in 2015, and both AWS and Microsoft jumped on the bandwagon in 2016. What
makes Google‘s approach different is the intent to also deploy graphics chips from AMD, the
eternal underdog.
Meanwhile, the Dutch Data Center Association has published a report that claims Dutch colocation facilities contributed more than $1 billion to the country‘s GDP in 2016.
There‘s plenty of other content in the show: anecdotes, observations and travel notes. You can also expect more news on Schneider Electric and AMD in the near future.

The Technoaesthetics of Data Centre "White Space"


SEPTEMBER 6, 2017

The Technoaesthetics of Data Centre ―White Space‖


Abstract | Why are the walls, floors, and ceilings of data centres always painted white?
Photographs of data centre interiors tend to focus on the advanced technologies contained within
them, while the surrounding white surfaces disappear into the background. Bringing this
overlooked design feature to the foreground, this essay explores the technical functions,
temporalities, and transparencies of white space within data centres.
La technoestétique de « l‘espace blanc » du centre de données
Résumé | Pourquoi les murs, les planchers et les plafonds des centres de données sont-ils toujours
peints en blanc? Les photographies de l‘intérieur des centres de données ont tendance à se
concentrer sur les technologies de pointe qu'elles contiennent, tandis que les surfaces blanches
environnantes disparaissent en arrière-plan. En mettant en lumière cet élément de conception
souvent négligé, cette étude explore les fonctions techniques, les temporalités et les transparences
de l'espace blanc dans les centres de données.

A.R.E. Taylor | University of Cambridge


The Technoaesthetics of Data Centre ―White Space‖
And ―white‖ appears. Absolute white. White beyond all whiteness. White of the coming of
white. White without compromise, through exclusion, through total eradication of non-
white. Insane, enraged white, screaming with whiteness.
—Henri Michaux (198)
What then is the essential nature of cloudiness?
—Ludwig Wittgenstein (15)
Chromopolitics
―White space‖ is a term used in the data centre industry to describe the space allocated for IT
equipment. It is the space occupied by server cabinets, storage, network gear, racks, air-
conditioning units, and power-distribution systems. The phrase also refers to the empty, usable
square footage that is available for the deployment of future IT equipment. Optimising white
space is a key part of data centre design and management. Generally speaking, the more white
space the better, as the ability to expand computing capacity is essential to ensuring long-term
business growth. White space management (WSM) is an increasingly valuable skill for data centre
managers who should be able to maximise usage of white space by strategically deploying IT
equipment to increase facility efficiency and save space.
Woven around three photographs of the interior whitescape of a data centre managed by Secura
Data Centres[1] in the north-east of England, this experimental essay blends insider (emic) and
outsider (etic) voices together to explore the technical and aesthetic - ―technoaesthetic‖ -
operations of white space. The term ―technoaesthetics‖ aims to capture the fusion ―of appearance
and utility‖ (Masco 368) that white space encompasses by addressing the ways in which the
whitewashed surfaces of the data centre have a technical function but also an aesthetic (and
therefore social, political, and historical) dimension.
A variety of commentators have recently begun to grapple with cloud computing as both
a metaphor and a material infrastructure. It is typically argued that the metaphorical conceit of
―the cloud‖ evokes images of ethereality and immateriality that actively erases the physicality of
Internet infrastructure and rhetorically conceals the political realities of its practices and processes.
Critically and creatively exploring the gap between the metaphor of the cloud and its material
components, this nascent body of work has attempted to draw attention to the fibre-optic cables,
pipes, wires, and satellites that are seemingly removed by the misleading cloud metaphor (Parks;
Blum; Starosielski). Across these diverse projects, perhaps the most persistently examined object
of cloud infrastructure has been the data centre (Arnall; Bridle; Graham; Holt and Vonderau; Hu;
Jones; Levin and Jeffery). Yet, while data centres and the technical equipment contained within
them have been subject to growing critical reflection, the white floors, ceilings, walls, and surfaces
have been left largely ignored, appearing only as a passive backdrop for the action of other
sociotechnical arrangements.
Histories of architecture have long-recognised the structural role of architectural features that may
at first appear ―superficial‖, such as the white wall (Wigley; el-Khoury). Space, too, is never
a neutral backdrop but a product of people‘s interactions with the material world (Lefebvre).
Building on insights from the history of modern architecture and bringing the white space of data
centres into relation with other white spaces from popular culture, this essay will explore what this
overlooked design feature may tell us about the cloud and its supporting infrastructure.
Public Images
The last decade has seen a slow but steady increase in images of data centres circulated within
the global mediasphere, with visual technologies facilitating the aestheticisation of the white data
centrescape. The fictional action of a growing number of films and TV shows has occurred within
the white space of data centres (Smolaks), while photographs of these ―architectural curiosities‖
(Stasch 77) frequently garnish articles, exposés, and essays in the popular press. Chanel even
adopted a data centre theme for their SS17 event for Paris Fashion week, with the catwalk
transformed into a white floor flanked by white server racks (Moss). It is the aesthetically-pleasing
white spaces of the data centre that are most often photographed or filmed in advertising
campaigns and other media products. Rarely do we see what some practitioners refer to as the
―grey space‖: the unphotogenic backstage areas of the data centre where back-end equipment
like switch gear, uninterruptible power supplies, transformers, chillers, and generators are located.
White space plays an important role in mediating and transforming popular imaginaries of the
cloud. We might say, then, that data centres are steadily coming out from behind the screens of
the digital world and are increasingly infiltrating the popular imagination through a number of
visual channels and media forms.
The data centre industry‘s increasing visibility is heavily entangled with several major political
developments. Perhaps the most significant of these was the leaking of top secret documents by
ex-NSA contractor Edward Snowden in 2013, which revealed extensive partnerships between
government mass surveillance projects and tech companies like Microsoft, Google, Facebook, and
Apple, who granted organisations permission to access data stored in their data centres as part of
various surveillance programs (such as the NSA‘s PRISM and GCHQ‘s Tempora programs). Tech
companies have released extensive visual footage of their data centres as part of a larger effort to
restore public trust in the post-Snowden environment. The regular and highly publicised hacking
of global corporations such as Sony, TalkTalk, and Yahoo! has further drawn attention to the
ethics and (in)security of practices and processes of data storage. Increasingly stringent
international regulations on data sovereignty, in which data is subject to the laws of the country in
which it is stored, have radically reinforced the significance of geographic space within cloud culture
and resulted in a widely-publicised boom in data centre construction in ―information friendly‖
countries like Luxembourg (Dawn-Hiscox) and Iceland (Johnson). Heavy criticism from
environmental policy makers over the industry‘s excessive (fossil fuel) energy consumption to
power and cool IT equipment has also played a central role in many companies relocating their
data centres to Nordic countries, where they can take advantage of the naturally cool climate.
These sociopolitical and geopolitical developments have steadily brought data centres into the
media spotlight, mobilising popular opinion as well as academic reflection. Indeed, a gradual
cultural awakening to the political realities of data storage has occurred: data in ―the cloud‖ is increasingly figured less as some ethereal evaporation in a kind of Internet water cycle and more as something stored – held hostage, even – on corporate hard drives hidden in sinister server farms.
Public debates and discussions about data centres and cloud computing within the popular press
and academia thus tend to revolve around questions of privacy, trust, and transparency. It is
largely in response to accusations of non-transparency that tech companies like Facebook,
Amazon, Apple, Microsoft, and Google have engaged in rigorous publicity campaigns visualising
their data centres (or ―fulfilment centres‖ in Amazon‘s case) in an attempt to improve their public
image.
As data centres begin to reposition themselves more visibly within a variety of media landscapes,
a flood of data centre imagery has been unleashed showcasing the physical insides of the
Internet. Google has released photo galleries, panoramic tours and video footage of their data
centres, featuring the friendly (and ethnically and gender diverse) faces of the staff who work in
these buildings. Amazon has employed a similar strategy, releasing high-definition footage of their
machine-warehouses interspersed with brief talking-head interviews of their warehouse packers
(or ―fulfilment centre associates‖ in Amazon parlance). Extensive photo and news coverage
accompanied the 2013 opening of Facebook‘s ―green‖ data centre in Luleå, Sweden (Jones;
Vonderau). Facebook also made the blueprints public for their recently constructed facility in
Prineville, Oregon (Quirk). Microsoft has similarly jumped on the bandwagon, making free
QuickTime video tours of their data centres available for download from their corporate
website.[2] In fact, today, the majority of data centres - from the corporate behemoths to the
independent colos - have some form of image gallery or 3D virtual tour on their websites where
you can scroll through various photographs or view video footage of the facility.
Data centre interiors tend to be represented as specialised technological environments full of
colourful cables, complex wiring and futuristic-looking IT equipment. Yet, while the specific
machinery pictured in these publicity shots always varies, the white ceilings, walls, and floors that
form the background against which the equipment is displayed rarely, if ever, change [Figure 1].
Figure 1: A world of white space.
Virtual Spaces
Journeying through data centres – whether in person, by browsing online images, or through a 3D
virtual tour of a facility – is perhaps the closest a human can get to being sucked into the Internet.
On the server hard drives locked behind the perforated doors of the white metal cabinets, data is
stored and accessed by Internet users from all over the world. The whirring machinery and the
giant cables and wires are the organs and intestines of the Internet. Seemingly endless air-
conditioned corridors of identical server cabinets surround you as you venture through this strange
data space. But this is not the 8-bit ―electronic world‖ of ―cyberspace‖ as imaged and imagined in
the vintage visions of the film Tron (1982), William Gibson‘s Neuromancer (1984), or any other
―human-sucked-into-a-computer‖ narrative from cyberculture films and literature - the visual
strategies of which were predominantly informed by the aesthetics of early graphics programs and
the computer-generated neon-grid geographies of early arcade video games like Pong.[3] Instead
of black backdrops vectorised by space-time neon gridlines we have a vast expanse of whiteness.
The only neon here comes from the flickering server-lights and their reflections in the fluorescent
white of the floor-space when the lights are turned off.
This is not to say that the visual strategies of data centre advertising campaigns do not play with
the semiotic remnants of these retro cyber-visions. ―Data centres often turn their lights off for
photo shoots so you can properly see the neon,‖ Stuart Hartley, the Chief Technology Officer at
Secura explained to me, ―simply because it makes them look more virtualistic‖ (Hartley). The
image of the server-cabineted corridor [Figure 2], typically bathed in blue neon, has become the
canonical icon of the data centre. Produced and reproduced by and through ―patterns of imaginal
repetition‖ (Frankland 103) and mass media circulation, it is the shot of a neon-soaked aisle
flanked by racks of encaged servers that is no doubt the most frequently encountered
representation of data centres today (returned by any basic Google search for ―data centre‖). In
advertising images, the symmetrical geometries of the cabinets are typically combined with a low-
angle shot, cinematically transforming the server cabinets into sublime, giant monoliths
reminiscent of the extra-terrestrial machine-monoliths in Stanley Kubrick‘s 2001: A Space Odyssey
(1968).
In keeping with William Gibson‘s (60) various descriptions of cyberspace as ―distanceless‖ and
―extending to infinity‖, the whitewashed interior makes this data centre appear sublimely vast.
Painting surfaces white is a technique commonly deployed by interior designers to make spaces
appear larger than they are. The hyper-illuminated, uniform whiteness of this data centre obscures
the points where the walls, ceiling, and flooring join together, creating the illusion of an almost
dimensionless space that appears ―seamless, continuous, empty, uninterrupted‖ (Batchelor,
Chromophobia 9). In this respect, the white innards of the data centre perhaps have more in
common with the contemporary visualisations of virtual spaces that we find in multi-dimensional
modelling software programs, virtual world editors (sometimes referred to as ―sandboxes‖) and
the famous virtualistic white spaces that featured in The Matrix franchise (1999-2003).
Yet data centre white space not only emits signs of virtuality through its association with the
imaginal renderings of ―infinite datascapes‖ (Gibson 288) from popular culture, but is also the
direct product of virtualisation technologies. In data centre parlance, virtualisation describes an
approach to pooling and sharing technology resources between clients and has been widely
adopted throughout the industry in recent years.[4] Ian Cardy, the Head of Disaster Recovery at
Secura explained the logic behind virtualisation to me as follows:
Before the recent surge in virtualisation, data centres were rapidly running out of white
space. Servers and storage devices were only running at 10% or less of total capacity,
meaning floor-space was filling up with hardware. Virtualisation basically enables us to
unlock the unused 90% of a device‘s capacity… Think of a server or hard drive as a tower
block without any floors in it; you can‘t access all the unused space above your head, which
just goes to waste. Virtualisation software divides that space into multiple floors and rooms
and puts in stairways and doors to access them. This means multiple clients can then
experience fast and seamless server or storage access without realising they are all living
next door to each other in the same device or distributed across multiple devices. In
a virtualised environment, multiple physical machines can be consolidated into fewer
machines, meaning less physical hardware is needed, which greatly increases the availability
of white space in the facility. (Cardy)
Virtualisation enables data centres to maximise the utilisation of their hardware and ―has allowed
for thousands if not millions of users to share a data centre in the cloud‖ (Hu 61).[5] By pooling IT
resources together in this way, facility operators are able to reduce the number of physical devices
in the data centre and reclaim vital square footage. ―Virtualisation‖, Cardy summarised, ―means
less hardware [and] more white space‖.
While the term ―virtualisation‖ conjures imaginaries of dematerialisation or non-physicality, there is
always an underlying physical machine doing the work. At the same time, however, virtualisation
can be seen as a dematerialising process to the extent that its implementation enables data
centres to eliminate excess physicality in the form of surplus hardware and free-up white space.
For data centre practitioners, then, the white, dimensionless spaces of the data centre not only
look like the computer-generated spaces that feature in popular imaginaries of ―the virtual‖, but
are the product of virtualisation itself, symbolic of the moment when the virtual has become
infrastructured.
Figure 2: The iconic image of the server-cabineted corridor.
Future-Proof Spaces
Whiteness has long-featured in cultural images and imaginations of the future. The white wall was
the icon of the modernist architectural movement pioneered by Le Corbusier – and later
associated with minimalism.[6] It reached the height of its popularity during the interwar period
and was central to the modernist project‘s desire to whitewash the past and build a new future
after the First World War (Wigley; Ballard). White space has also been a recurring motif in science
fiction films, from George Lucas‘ THX 1138 (1971) to the Wachowski‘s The Matrix (1999). White
surfaces are a central design feature of Hollywood spaceship interiors – from the Space Age
cinema of the 1960s and 70s to present-day blockbusters and video games. When combined with
the advanced computer technology and the hermetically-sealed metallic surfaces,[7] the white
ceilings, floors and walls give data centres an unmistakably spaceship-like appearance.[8] ―White
surfaces just have that futuristic feel about them‖, Hartley explained to me. ―It‘s important for
a data centre to look futuristic; you don‘t want them to look old or dated, as this doesn‘t inspire
confidence in the client… white can make a facility look like it‘s going to last well into the future‖
(Hartley).
Yet, while the white spaces of science fiction and modernist architecture imaged and imagined
new possible futures, the white spaces of the data centre do not attain their futurity by virtue of
their active role in bringing new futures into being but rather seem to achieve their futuristic effect
by virtue of their association with these future-making projects of the past. In this way, the
whitewashed technoaesthetics of data centre interiors creates more of a ―retroactive sense of futurity‖ (Jakobsson and Stiernstedt 112). They do not so much participate in the production
of new possible futures but rather, participate in the visuo-nostalgic reproduction of futures past.
The technical function of white space as part of a broader anticipatory practice known as ―future-
proofing‖ further complicates the data centre industry‘s relation to the future. ―Future-proofing‖ is
an emic term that describes an approach or design philosophy to data centre resources that aims
to save infrastructure and IT from rapid obsolescence. In the fast-changing world of the data
centre, where new (and extremely expensive) equipment is constantly being developed and
deployed, it is important that this equipment will last well into the distant future and not be
―outdated before it‘s even installed‖, as Cardy put it. Maximising the availability and production of
white space (through techniques like virtualisation as well as strategically organising the
arrangement of IT equipment) is a future-proofing measure that aims to ensure the continued
growth and future of the data centre.
The practice of maximising the availability and production of white space so that data centres do
not get too full too quickly is guided by logics of preparedness, contingency, redundancy, and
resilience rather than ideals of renewal or regeneration. While the presence of white space
represents the possibility of future expansion and growth, the future, as embodied in the concept
of future-proofing, appears not so much as something to be embraced but something to protect
the data centre from, to stop the future from getting in and outdating the technology. White space
not only regurgitates the mise-en-scène of futures past, but, as a future-proofing technique,
becomes a barricade against future futures, blocking them out. The aesthetic strategy of
whitewashing, which produces the illusion of seamlessness, further reinforces this hermetic
imaginary. While discourse on Big Data is dominated by optimistic hypes and hopes for a better
future, the anticipative white spaces of the buildings in which this proleptic data is stored reflect an
inability to imagine that future as anything other than threatening.
Sterile Spaces
The startling whiteness of their architectural surfaces presents data centres as sterile spaces. Data
centres are highly controlled environments. A variety of contaminants can cause lasting damage to
the expensive equipment housed within these infrastructures. Organic and inorganic particulate
matter (PM), such as dust, plant pollens, human hair, liquid droplets and smoke from cigarettes
and nearby traffic can interfere with the drive mechanisms of magnetic media (such as the
read/write actuator arms in hard disk drives), causing corrosion, oxide flake-off, wasted energy,
and permanent equipment failure. The white surfaces and well-lit rooms serve to make visible any
foreign matter that may have entered the facility. Here the data centre whitescape joins ―the
doctors white coat, the white tiles of the bathroom [and] the white walls of the hospital‖ (Wigley
5). Furthermore, for security reasons, most data centre construction standards prohibit windows
that provide ―visual access‖ to the data centre, particularly the computer rooms, data floors, and
other secured areas. The reflective properties of the white surfaces therefore enable facility
operators to get more mileage out of their artificial lighting, reducing electricity costs. For this
reason, often the IT equipment itself is painted white [Figure 2].
The brightness of white surfaces, where light is reflected, is a frequently deployed trope in the
domain of public health, where the white wall has long-played a prominent role not only in the
exercise and display of cleanliness, but also ―in the construction of the concept of cleanliness‖ (el-
Khoury 8; Berthold). In an analysis of urban sanitisation projects in late-18th-century France,
Rodolphe el-Khoury suggests that the rhetorical power of the white surface stemmed primarily
from its visual properties, or, more precisely, from its capacity to translate the condition of
cleanliness into an image. El-Khoury argues ―The norms of cleanliness were moral rather than
functional‖ and ―had more to do with ‗propriety‘ than with health‖ (8). In this way, from the 1780s
whiteness came to function as an evident index of cleanliness in the domain of public health and
hygiene – a symbolic code that continues to be deployed in diverse arenas today - from sanitation
photography to data centre security.
This ―evidentiary‖ relationality of whiteness and cleanliness is, of course, social and historical and
tightly tied to long-standing racial associations that emerged from the classificatory projects of the
Enlightenment era. During this period, whiteness was brought into relation with concepts of
cleanliness, purity, and civilisation, while blackness was aligned with dirt, impurity, and
backwardness (Garner 175; Cretton; Dyer). Whiteness-as-cleanliness is thus a social cleanliness.
Jonathan Shore, the Data Hall Manager at Secura, suggested that ―white looks virtuous, innocent‖.
He elaborated by explaining that, in the hyper-luminous data halls, ―there are no dark corners for
dirt to accumulate or secrets to hide‖ (Shore 2016b). Here whiteness eliminates darkness,
transforming data centres into clean and moral spaces in the process. In our conversations, data
centre professionals would often align the internal whiteness of the data centre with a social or
moral cleanliness, reflective of the data centre‘s responsibility, security, and, perhaps somewhat
paradoxically, transparency.
Transparent Spaces
While the opacity of the white surface might at first appear diametrically opposed to established
conceptions of transparency (Wittgenstein 24), for Shore it actually makes Secura‘s data centres
more transparent: ―A white wall obviously isn‘t see-through like a glass wall,‖ he highlighted, ―but
we‘re not trying to hide that we‘re hiding something‖ (Shore 2016b). For Shore, transparency was
a suspect term, the invocation of which was always an act of concealment: ―It‘s the suspiciously
over-transparent buildings that we should be cautious of,‖ he argued, ―the see-through office
workspaces and BBC newsrooms… that pass under the radar because they look like they‘re hiding
nothing.‖[9]
The white interiors of data centres stand in stark opposition to other architectures of the so-called
―Information Age‖, in which transparency of information has arguably led to transparent interiors
and exteriors (Shoked 101). Contemporary architectures are typically defined by see-through
surfaces, open spaces, and spherical shapes that eliminate angles and corners. Such architectures
reflect non-hierarchical, holacratic management forms and aspire to ideals of openness and
transparency. Walls are not white but glass, while ceilings and floors are stripped back to expose
the bare pipes, concrete, or metal foundations as if there is nothing to hide.
In contrast, within the rectangular geometries of data centres, corners proliferate and transparent
materials like glass, Perspex, and translucent polycarbonate are actively avoided. The ubiquitous
white surfaces in data centres are overtly opaque yet provide Shore with a claim to transparency
by virtue of that very opacity. Indeed, the extreme opacity of Secura‘s white spaces highlights –
rather than hides - the act of concealment that is usually elided within images of buildings or
spaces claiming to be transparent. Shore suggests ―The opacity of the white basically draws your
attention to what you can‘t see‖ (Shore 2016b). Here, white space does not just symbolise
cleanliness but cleanses itself from the ideology – or ‗tyranny‘ - of transparency (Strathern).
As white walls gradually became a staple of modern architecture, whiteness as a signifier of
cleanliness also came to signify absence – not only of dirt, but also of visual stimuli (el-Khoury).
The colour white became a neutral or blank background that was supposedly colourless in the
same way white people were ―raceless‖ (Cretton; Dyer). As such, white walls did not need to be
see-through to be transparent, because they ―are just what they are‖ with ―no possibility of lying‖
(Batchelor, Chromophobia 10). Yet with the vast whited rooms of the Secura data centre,
a different logic is at work. While Le Corbusier‘s white wall supposedly rendered architecture
transparent by liberating the walls from visual decoration, Secura‘s white wall problematizes the
relation between transparency and visibility (Lefebvre 76; Kuchinskaya). In Secura‘s data centre
facilities, it is not so much about what can be seen, but whether what can‘t be seen is shown. By
showing what can‘t be seen (e.g. photographing the white cages of the servers but not the
servers themselves) the luminous whitescapes of this data centre make visible the limits of
transparency-as-visibility.
At the same time, we must not forget the ―grey spaces‖ that exist in the often unseen background
of the data centre (though the ceilings, floors and walls of these spaces are still often painted
white). Indeed, as David Batchelor reminds us, ―the luminous is almost always accompanied by
the grey‖ (60). While the emic division between ―white‖ and ―grey‖ space may at first appear to
correspond to ―visible areas‖ and ―invisible areas‖ respectively, rather, these represent two
different regimes of in/visibility. As we have seen, the aesthetically-pleasing white areas, though
frequently-visualised, are composed of registers of visibility and invisibility and the same goes for
grey space. At the same time, Shore‘s assertion that whiteness eliminates dark corners suggests
that, though the white spaces of the Secura data centre may aspire to operate beyond
transparency, they are still entangled within Enlightenment associations of light, whiteness, truth,
and morality that underpin contemporary regimes of transparency (Mehrpouya and Djelic).
Images of white space, then, may be seen more as a kind of performative playing with the signs
of transparency, allowing data centres to hover somewhere between revelation and concealment.
Revealed Spaces
Another white space that hovers between revelation and concealment is the famous virtual
environment from The Matrix (1999) known as the Construct. This cinematic white space is
a productive tool for thinking through some of the ways in which white space may operate in data
centre imagery. The Matrix is set in a dystopian future where intelligent machines keep humans in
a state of suspended animation, harvesting their energy to power their machine world. Humans
do not experience this machinic reality. Rather, they are plugged into a neural-interactive virtual
reality known as the Matrix. The protagonist is the computer hacker Neo, who learns the truth
about the nature of reality from a mysterious figure named Morpheus. The film follows Neo as he
goes through the process of discovering the true conditions of his existence, waking up from the
ideology of the simulated Matrix and fighting in the war against the sentient machines. In perhaps
one of the most memorable scenes, a data probe is inserted into the headjack at the base of
Neo‘s skull which plugs him into the Construct. He is immediately transported into a completely
white and dimensionless space. Morpheus is standing in the seemingly endless whitescape, along
with two leather armchairs, a 1950‘s television set, and a small circular table. Morpheus explains
that this is the Construct, a ―loading program‖ that provides users with a virtualised space in which
training simulations are run.
It is in the Construct that Neo begins what we might call – following the psychotropic writings of
Henri Michaux (xiii) and Timothy Leary - the process of ―deconditioning‖. For these figures,
psychedelic drugs had a demystifying effect on human consciousness, enabling users to free
themselves from the ideological shackles of their social conditioning and thus ―awaken from a long
ontological sleep‖ (Leary 76). Whiteness, in particular, played an important part in Michaux‘s
entheogenic experience of mescaline. Similarly, the white space in The Matrix does not just serve
as a background against which Neo‘s ontological transformation takes place, but is itself
a constituent part of this transformation, an active mechanism of deconditioning: it is in and
through this space that Neo awakens to a newly-deconditioned plane of existence.
Could we see the images of data centres released by technology companies under the guise of
―transparency‖ as similarly operationalising white space? Within tech companies‘ representational
strategies, the hyper-illuminated white interiors of data centres are analogously positioned as the
material, machinic ―reality‖ behind the illusory ideology of the cloud in a way almost identical to
the revelation narrative that structures The Matrix. In this way, the use of photographic and video
imaging technologies to reveal the physical foundations of the cloud may be seen not simply as
practices of visualisation and visibilisation, but also as belonging to rituals of illumination and
revelation associated more with the mystic tradition.
A crucial similarity between the whitescapes of the data centre and The Matrix is the way in which
they reverse the revelation narrative of the mystic tradition (most commonly associated with
occultism of secret societies like the Illuminati and Freemasons, but also tied to the psychedelic
mysticism of the 1960s). Mysticism is typically underpinned by a dualistic ontology, that is, the
postulation of a spiritual and a material realm (Surette 13). In the classic mystical revelation
narrative, a transcendental revelation occurs whereby the material or hylic world gives way to the
substanceless, immaterial or spiritual plane of noumenal reality (Yates; Surette 13). In The Matrix
and the cloud revelation narrative, however, rather than achieving some sort of transcendental
access to an immaterial or ethereal realm beyond the cloaked world of matter, we experience the
violent return of materiality: Neo is a slave in a machine world; the cloud is concrete. Data centre
visual revelations enact a mythic reversal (in Lévi-Straussian terms) of the traditional narrative of
mystical illumination. Like the deconditioning process that occurs in the Construct, the aim of the
cloud revelation narrative is not so much to open viewers‘ eyes and minds to the spiritual,
psychical, or divine world hidden behind the material world, but rather to reveal the material
reality behind the illusory myth of the cloud.
We must, however, not overlook the slight but significant way in which the cloud revelation
narrative departs from that of The Matrix. Whereas in The Matrix the white Construct space
operates outside of both the virtualised world of the Matrix and the ruins of the present-day world,
in the case of the cloud revelation narrative, the white space and ―reality‖ collapse into each other:
white space is represented as the reality of the cloud. It is in this functioning that we find the
ideological operation of the data centre imagery released by tech companies and it is precisely
here, in this sl(e)ight difference that a number of critics of the data centre industry have positioned
their analyses.
While these revelatory images of white space were released by tech companies predominantly to
promote transparency and improve the public profiles of data centres, critics of the industry have
noted that these images offer very little in terms of providing any meaningful insights into the
political realities of data storage practices. Glossy close-ups of wires, pipes, and servers ―drag‖ this
equipment out of its context (Rojek), concealing the ―real‖ facts about how these buildings
operate (Strathern 314). Holt and Vonderau (75) have noted how these images transform cloud
infrastructure into a kind of abstract art [Figure 3], obliterating the trace of any relationship
between the imaged equipment and larger environmental, political and social processes in which
they participate.[10] Indeed, non-specialists will have little clue as to what is being pictured.
Critical commentators have thus argued that media economies provide the data centre industry
with a circuity in which to mediate its own visibility, ―dictating the terms of its own representation‖
(Levin and Jeffery 8). These coded representational strategies have the inverse effect of rendering
these buildings and their social relations more invisible and opaque. Through carefully exposed,
angled, and framed photographic and filmic image-products data centres are transformed into
beautiful, aestheticised whitescapes, the sociopolitical structures of knowledge and the vested
interests that configure the conditions of data centre visibility are effectively whitewashed.
Something of a stalemate has thus ensued. While the data centre industry continues to pump out
―behind-the-screens‖ footage of cloud infrastructure, that footage is always suspect, insufficient
and concealing something else.
Figure 3: The abstract art of cloud infrastructure.
Conclusion: Cloudy Spaces
Despite the heavy criticism of ―the cloud‖ as a misleading metaphor, in an accidental way, the idea
of cloudiness accurately captures the obscurities, opacities, and contradictions of data centres and
openly points to the obfuscatory operations of ―clouding‖ or concealing that occur in these spaces.
In his Remarks on Colour, Ludwig Wittgenstein focuses his attention on the seemingly trivial fact
of white‘s opacity and its relation to transparency and cloudiness. ―Is cloudy that which conceals
forms, and conceals forms because it obliterates light and shadow?‖ (15). He follows this question
with another: ―Isn‘t white that which does away with darkness?‖ (Ibid). Whiteness lightens, so the
puzzle proposition goes, but clouds, though white, obscure light. The white spaces of cloud
infrastructure operate in similarly contradictory ways. While data centres are essential to storing
and managing the information that plays such an important part in public imaginaries of the
―transparent‖ information society, their architectures do not readily reflect the logic of
transparency associated with the data they store (Shoked). Their hyper-illuminated interiors are
completely opaque, but, in a ―post-truth‖ (or, to perhaps be more accurate, ―post-transparent‖)
kind of way, they are arguably more transparent than the suspiciously ―over-transparent‖
architectures of the Information Age. Transparency and opacity cease to be perceived as
contradictory. As a future-proofing measure the white spaces of cloud infrastructure are at once
orientated toward the future, but also serve to block the threatening future out, steeped in
mediatised memories of futures past. Spanning the real and the fictive, these cinematic spaces
engage with cultural manifestations of futurity in film, cyberpunk, and interior design, and play
with the signs of transparency and visibility to such an extent that physicality, fantasy, and
technical function are inseparably entangled.
These fantastic buildings, epic in volume, sit at the centre of the ongoing industrialisation of
computing, and whiteness thus plays an important part in their functioning, staging, and
imag(in)ing (Larkin). Reversing the revelation narratives of mystical experience, the white spaces
of data centres perhaps reveal less about ―the cloud‖ and more about cultural fetishisations of the
physical sites of production and the inability of analytical dualisms like transparency/opacity,
materiality/immateriality and visibility/invisibility to adequately capture and conceptualise the
contradictions of digital-industrial infrastructures. Data centre technoaesthetics performatively play
with and collapse distinctions between these categories. Questions, accusations, and analytics that
fall along these lines appear hopelessly ill-adapted to the post-transparent logics of these
architectures and the terrain they traverse.
Works Cited
2001: A Space Odyssey. Directed by Stanley Kubrick. Distributed by Metro-Goldwyn-Mayer. 1968.
Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalisation. University of
Minnesota Press. 1996.
Arnall, Timo. ―Internet Machine.‖ Installation at Big Bang Exhibition, London. 2015.
http://bit.ly/2wHeXzW
Ballard, J.G. ―A Handful of Dust.‖ The Guardian. 2006. http://bit.ly/2wHnLpv
Batchelor, David. Chromophobia. Reaktion. 2000.
---. The Luminous and the Grey. Reaktion. 2014.
Berthold, Dana. ―Tidy Whiteness: A Genealogy of Race, Purity and Hygiene.‖ Ethics and the
Environment. Vol. 15, No. 1, 2010, pp. 1-26.
Blanchette, Jean-François. ―Introduction: Computing‘s Infrastructural Moment.‖ Regulating the
Cloud: Policy for Cloud Infrastructure. Edited by Jean-François Blanchette and Christopher S. Yoo.
MIT Press. 2015.
Blum, Andrew. Tubes: Behind the Scenes of the Internet. Viking. 2012.
Bridle, James. ―Secret Servers.‖ Booktwo. 2011. http://bit.ly/2wdEFZE
---. ―The Cloud Index.‖ Cloudindx. Digital project commissioned by the Serpentine Galleries,
London. 2016. http://cloudindx.com
Cardy, Ian. Head of Disaster Recovery at Secura. Unstructured Interview Transcription. 29 March
2016.
Cretton, Viviane. ―Performing Whiteness: Racism, Skin Colour and Identity in Western
Switzerland.‖ Ethnic and Racial Studies. 2017.
Dawn-Hiscox, Tanwen. ―Estonia to Create ‗Data Embassy‘ in Luxembourg.‖ Data Center Dynamics.
2017. http://bit.ly/2wHopTE
Dyer, Richard. White. Routledge. 1997.
El-Khoury, Rodolphe. ―Polish and Deodorise: Paving the City in Late-Eighteenth-Century France‖.
Assemblage, No. 31, 1996, pp. 6-15.
Frankland, Stan. ―The Bulimic Consumption of Pygmies: Regurgitating and Image of Otherness.‖
The Framed World: Tourism, Tourists and Photography. Edited by Mike Robinson and David
Picard. Ashgate. 2009.
Garner, Steve. Whiteness: An Introduction. Routledge. 2007.
Gibson, William. Neuromancer. Harper Voyager. 1984.
Graham, Stephen. ―Data Archipelagos‖. CLOG: Data Space. Edited by Kyle May. 2012, pp. 20-22.
Hartley, Stuart. Chief Technology Officer at Secura. Unstructured Interview Transcription. 22
March, 2016.
Holt, Jennifer and Patrick Vonderau. ―Where the Internet Lives: Data Centres as Cloud
Infrastructure‖. Signal Traffic: Critical Studies of Media Infrastructure. Edited by Lisa Parks and
Nicole Starosielski. University of Illinois Press. 2015, pp. 71-93.
Hu, Tung-Hui. A Prehistory of the Cloud. MIT Press. 2015.
Jakobsson, Peter and Fredrik Stiernstedt. ―Time, Space and Clouds of Information: Data Centre
Discourse and the Meaning of Durability.‖ Cultural Technologies: The Shaping of Culture in Media
and Society. Edited by Göran Bolin. Routledge. 2012.
Johnson, Alix. ―Data Havens of Iceland.‖ Savage Minds. 2014. http://bit.ly/2wPL67L
Jones, Jonathan. ―Where the Internet Lives.‖ The Guardian. 2015. http://bit.ly/2vy2Gc1
Kuchinskaya, Olga. The Politics of Invisibility: Public Knowledge about Radiation Health Effects
After Chernobyl. MIT Press. 2014.
Larkin, Brian. ―The Politics and Poetics of Infrastructure.‖ Annual Review of Anthropology. No. 42,
2013, pp. 327-43.
Le Corbusier. The Decorative Art of Today. 1925. Translated by James I Dunnett. Architectural
Press. 1987.
Leary, Timothy. ―The Religious Experience.‖ Journal of Psychedelic Drugs. Vol. 3, No. 1, 1970, pp.
76-86.
Lefebvre, Henri. The Production of Space. Translated by David Nicholson-Smith. Blackwell. 1974.
Levin, Boaz and Ryan S. Jeffery. ―Lost in the Cloud: The Representation of Networked
Infrastructure and its Discontents.‖ Spheres Journal for Digital Culture, No. 3,
2016. http://bit.ly/2w6OI4u
Masco, Joseph. ―Nuclear Technoaesthetics: Sensory Politics from Trinity to the Virtual Bomb in Los
Alamos.‖ American Ethnologist. Vol. 31, No. 3, 2004, pp. 349-373.
Ghost Rider 2099. Written by Len Kaminski. Marvel Comics. 1994-1996.
Mehrpouya, Afshin and Marie-Laure Djelic. ―Transparency: From Enlightenment to Neoliberalism
or When a Norm of Liberation becomes a Tool of Governing.‖ HEC Paris Research Paper, No. ACC-
2014-1059, 2014.
Michaux, Henri. ―With Mescaline.‖ Darkness Moves: An Henri Michaux Anthology 1927-1984.
Translated by David Ball. University of California Press. 1994.
Miller, Rich. ―The Space Station Data Centre.‖ Data Centre Knowledge. 2013. http://bit.ly/2vy4yl7
Moss, Sebastian. ―Chanel uses Data Centre Theme at Paris Show.‖ Data Centre Dynamics. 2016.
http://bit.ly/2d8S5OM
Parks, Lisa. ―Obscure Objects of Media Studies: Echo, Hotbird and Ikonos.‖ Strange Spaces:
Explorations into Mediated Obscurity. Edited by Andre Jansson and Amanda Langerkvist.
Routledge. 2009.
Quirk, Vanessa. ―Data Centres: Anti-Monuments of the Digital Age.‖ ArchDaily. 2012.
http://bit.ly/2wdhqPf
Rojek, Chris. ―Indexing, Dragging and the Social Construction of Tourist Sights.‖ Touring Cultures:
Transformations of Travel and Theory. Edited by Chris Rojek and John Urry. Routledge. 2003.
Shoked, Noam. ―Transparent Data, Opaque Architecture.‖ CLOG: Data Space. Edited by Kyle May.
2012, pp. 100-101.
Shore, Jonathan. Data Hall Manager at Secura. Unstructured Interview Transcription. 7 April
2016a.
---. Data Hall Manager at Secura. Unstructured Interview Transcription. 25 August 2016b.
Smolaks, Max. ―Let‘s Make Data Centres Cool!‖ Data Centre Dynamics. 2016.
http://bit.ly/2wQmxqW
Space 1999. Created by Gerry and Sylvia Anderson. ITV Network. 1975-1977.
Stasch, Rupert. ―The Camera and the House: The Semiotics of New Guinea ‗Treehouses‘ in Global
Visual Culture.‖ Comparative Studies in Society and History. 53(1), 2011, pp. 75–112.
Star Trek. Created by Gene Roddenberry. Paramount Television, 1966-1969
Starosielski, Nicole. ―Fixed Flow: Undersea Cables as Media Infrastructure.‖ Signal Traffic: Critical
Studies of Media Infrastructure. Edited by Lisa Parks and Nicole Starosielski, University of Illinois
Press, 2015, pp. 53-70.
Starosielski, Nicole. Surfacing. Online digital resource. 2016. http://surfacing.in
Strathern, Marilyn. ―The Tyranny of Transparency.‖ British Educational Research Journal. Vol. 26,
No. 3, 2000, pp. 309-321.
Surette, Leon. The Birth of Modernism: Ezra Pound, T.S. Eliot, W.B. Yeats and the Occult. McGill-
Queen‘s University Press. 1993.
The Matrix. Directed by The Wachowski Brothers. Distributed by Warner Bros. 1999.
―Homer3, Treehouse of Horror VI.‖ The Simpsons. Created by Matt Groening. Fox. 1995
THX 1138. Directed by George Lucas. Distributed by Warner Bros. 1971.
Vonderau, Asta. ―On the Poetics of Infrastructure: Technologies of Cooling and Imagination in Digital Capitalism.‖ Zeitschrift für Volkskunde. Beiträge zur Kulturforschung. Vol. 113/1, 2017, pp. 24-40.
Wigley, Mark. White Walls, Designer Dresses: The Fashioning of Modern Architecture. MIT Press.
1995.
Wittgenstein, Ludwig. Remarks on Colour. Blackwell Publishing. 1979.
Yates, Frances A. The Rosicrucian Enlightenment. Routledge. 2002.
Endnotes
[1] All names and identifying details have been changed to protect the privacy of individuals. The
material presented in this article is drawn from fieldwork and interviews with data centre
practitioners over a 15-month period.
[2] See Holt and Vonderau for a more comprehensive overview of the ―technopolitics‖ of data
centre ―hypervisibility‖.
[3] The Simpsons‘ ―Homer3‖ episode from Treehouse of Horror VI (1995) and Marvel‘s Ghost Rider
2099 comic book series (1994-1996) are also clear examples of this grid aesthetic.
[4] Jean-François Blanchette has provided a detailed historical contextualisation of virtualisation
(7).
[5] For a nuanced analysis of virtualisation as a political ideology productive of neoliberal
subjectivities, see Tung-Hui Hu (2015).
[6] Le Corbusier introduced his theory of the white wall in his book, The Decorative Art of Today
(1925).
[7] While the focus of this essay is colour, the politics of texture play an important role in the
production (and selling) of data centre space. A significant function of the hard surfaces of the
data centre is their role as security shields against electromagnetic radiation. These surfaces are
usually made from specially reinforced metallic panels designed to block the various frequencies,
fields, signals, waves and rays from threatening sources of electromagnetic radiation such as
lightning, electromagnetic pulses, space weather, and radio frequency interception devices.
[8] The influence of the cinescape of the spaceship in data centre design is perhaps best
illustrated by the space station-themed data centre developed in 2013 by Bahnhof, a Swedish
Internet service provider (Miller). Influenced by sci-fi TV shows like Star Trek (1966-1969) and
Space 1999 (1975-1977), the outer shell of the data centre is an inflatable dome reminiscent of
Buckminster Fuller‘s geodesic domes and was built by Lindstrand Technologies, who previously
built the parachute that deposited the Beagle 2 space probe on the surface of Mars. In
a promotional video, Jon Karlung, the CEO and visionary behind the science fictional data centre
describes how the automated pneumatic entry doors make a Star Trek-style whooshing noise as
they open and close (Miller). Jakobsson and Stiernstedt have an insightful analysis of Pionen, an
older data centre owned by Bahnhof that is equally science fictional.
[9] As Marilyn Strathern asks, "what does visibility conceal?" (310).
[10] By presenting data centres as a stand-in for the "globally dispersed forces that actually drive
the production process", corporate images of data centres enact what Appadurai has called
"production fetishism" (41). Building upon Marx's theory of commodity fetishism, Appadurai's
concept of the production fetish addresses the way in which locality (the local factory or other
site of production, in this case the data centre) "becomes a fetish" to the extent that it conceals
the geopolitical and transnational relations that are vital to the production of the cloud.
Data Center Definition and Solutions
Data Center topics covering definition, objectives, systems and solutions.
By Michael Bullock
CIO | 14 August 2009 18:00 AST
 What is a data center?
 How are data centers managed?
 What is a green data center?
 What are some top stakeholder concerns about data centers?
 What options are available when I'm running out of power, space or cooling?
 What are some data center measurements and benchmarks and where can I find
them?
 Is the federal government involved in data centers?
 What should I consider when moving my data center?
 What data center technologies should I be aware of?
What is a data center?
Known as the server farm or the computer room, the data center is where the majority of an
enterprise's servers and storage are located, operated and managed. There are four primary
components to a data center:
White space: This typically refers to the usable raised-floor environment measured in
square feet (anywhere from a few hundred to a hundred thousand square feet). For data
centers that don't use a raised floor, the term "white space" may still be used to denote the
usable square footage.
Support infrastructure: This refers to the additional space and equipment required to
support data center operations, including power transformers, uninterruptible power
supplies (UPS), generators, computer room air conditioners (CRACs), remote transmission
units (RTUs), chillers, air distribution systems, etc. In a high-density, Tier 3 class data center
(i.e., a concurrently maintainable facility), this support infrastructure can consume four to six
times more space than the white space and must be accounted for in data center planning.
How are data centers managed?
Operating a data center at peak efficiency and reliability requires the combined efforts of
facilities and IT.
IT systems: Servers, storage and network devices must be properly maintained and
upgraded. This includes things like operating systems, security patches, applications and
system resources (memory, storage and CPU).
Facilities infrastructure: All the supporting systems in a data center face heavy loads and
must be properly maintained to continue operating satisfactorily. These systems include
cooling, humidification, air handling, power distribution, backup power generation and much
more.
Monitoring: When a device, connection or application fails, it can take down mission
critical operations. Sometimes, one system's failure will cascade to applications on other
systems that rely on the data or services from the failed unit. For example, a complex
process such as eCommerce checkout involves multiple systems, including inventory control,
credit card processing and accounting; a failure in one will compromise all the others.
Additionally, modern applications typically have a high degree of device and connection
interdependence. Ensuring maximum uptime requires 24/7 monitoring of the applications,
systems and key connections involved in all of an enterprise's
various workflows.
Building Management System: For larger data centers, the building management system
(BMS) will allow for constant and centralized monitoring of the facility, including
temperature, humidity, power and cooling.
The management of IT and data center facilities is often outsourced to third-party
companies that specialize in the monitoring, maintenance and remediation of systems and
facilities on a shared-services basis.
What is a green data center?
A green data center is one that can operate with maximum energy efficiency and minimum
environmental impact. This includes the mechanical, lighting, electrical and IT equipment
(servers, storage, network, etc.). Within corporations, the focus on green data centers is
driven primarily by a desire to reduce the tremendous electricity costs associated with
operating a data center. That is, going green is recognized as a way to reduce operating
expense significantly for the IT infrastructure.
The interest in green data centers is also being driven by the federal government. In 2006,
Congress passed public law 109-431 asking the EPA to: "analyze the rapid growth and
energy consumption of computer data centers by the Federal Government and private
enterprise."
In response, the EPA developed a comprehensive report analyzing current trends in the use
of energy and the energy costs of data centers and servers in the U.S. and outlined existing
and emerging opportunities for improving energy efficiency. It also made recommendations
for pursuing these energy-efficiency opportunities broadly across the country through the
use of information and incentive-based programs.
According to the EPA report, the two largest consumers of electricity in the data center are:
• Support infrastructure — 50% of total
• General servers — 34% of total
Since then, significant strides have been made to improve the efficiency of servers. High-
density blade servers and storage now offer much more compute capacity per watt of
energy, server virtualization is allowing organizations to reduce the total number of servers
they support, and the introduction of ENERGY STAR servers gives both the public and
private sectors many options for reducing that 34% of electricity being spent on general
servers.
Of course, the greatest opportunity for further savings is in the support infrastructure of the
data center facility itself. According to the EPA, most data centers consume 100% to 300%
more power for their support systems than they use for their core IT operations.
Through a combination of best practices and migration to fast-payback facility improvements
(like ultrasonic humidification and tuning of airflow), this overhead can be reduced to about
30% of the IT load.
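To make those percentages concrete, the short sketch below converts a support-infrastructure
overhead fraction into total facility power; it is not from the EPA report, and the 500 kW IT load
is an illustrative assumption.

def total_facility_power(it_load_kw: float, overhead_fraction: float) -> float:
    """Total facility power = IT load plus support-infrastructure overhead.

    An overhead_fraction of 1.0 means the support systems draw 100% as much
    power as the IT equipment (the low end of the EPA's 100%-300% range);
    0.3 corresponds to the roughly 30% overhead cited as achievable.
    """
    return it_load_kw * (1 + overhead_fraction)

it_load = 500  # kW of IT equipment; illustrative assumption
for overhead in (3.0, 1.0, 0.3):
    total = total_facility_power(it_load, overhead)
    print(f"{overhead:.0%} overhead -> {total:,.0f} kW total facility power")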
What are some top stakeholder concerns about data centers?
While the data center must provide the resources necessary for the end users and the
enterprise's applications, the provisioning and operation of a data center is divided
(sometimes uncomfortably) between IT, facilities and finance, each with its own unique
perspective and responsibilities.
IT: It is the responsibility of the business's IT group to make decisions regarding what
systems and applications are required to support the business's operations. IT will directly
manage those aspects of the data center that relate directly to the IT systems while relying
on facilities to provide for the data center's power, cooling, access and physical space.
Facilities: The facilities group is generally responsible for the physical space — for
provisioning, operations and maintenance, along with other building assets owned by the
company. The facilities group will generally have a good idea of overall data center
efficiency and will have an understanding of and access to IT load information and total
power consumption.
Finance: The finance group is responsible for aligning near-term and long-term capital
expenditures (CAPEX) to acquire or upgrade physical assets, and the operating expenses
(OPEX) to run them, with overall corporate financial operations (balance sheet and cash flow).
Perhaps the biggest challenge confronting these three groups is that, by its very nature, a
data center rarely operates at or even close to its optimal design range. With a typical life
cycle of 10 years or longer, it is essential that the data center's design remain sufficiently
flexible to support increasing power densities and varying degrees of occupancy over that
period. This in-built flexibility should apply to
power, cooling, space and network connectivity. When a facility is approaching its limits of
power, cooling and space, the organization will be confronted by the need to optimize its
existing facilities, expand them or establish new ones.
What options are available when I'm running out of power, space or cooling?
Optimize: The quickest way to address this problem and increase available power, space
and cooling is to optimize an existing facility. The biggest gains in optimization can be
achieved by reducing overall server power load (through virtualization) and by improving the
efficiency of the facility. For example, up to 70% of the power required to cool and humidify
the data center environment can be conserved with currently available technologies such as
outside-air economizers, ultrasonic humidification, high-efficiency transformers and variable
frequency drives (VFDs). Combining these techniques with new, higher-density IT systems
will allow many facilities to increase IT capacity while simultaneously decreasing facility
overhead.
Move: If your existing data center can no longer be upgraded to support today's more
efficient (but hotter running and more energy-thirsty) higher-density deployments, there
may be nothing you can do except to move to a new space. This move will likely begin with
a needs assessment/site selection process and will conclude with an eventual build-out of
your existing facility or a move to a new building and site.
Outsource: Besides moving forward with your own new facility, there are two other options
worth considering:
• Colocation: This means moving your data center into space in a shared facility managed
by an appropriate service provider. As there is a broad range of business models for how
these services can be provided (including business liability), it is important to make sure the
specific agreement terms match your short-and-long term needs and (always) take into
account the flexibility you require so that your data center can evolve over its lifespan.
• Cloud computing: The practice of leveraging shared computing and storage resources
— and not just the physical infrastructure of a colocation provider — has been growing
rapidly for certain niche-based applications. While cloud computing has significant quality-of-
service, security and compliance concerns that to date have delayed full enterprise-wide
deployment, it can offer compelling advantages in reducing startup costs, expenses and
complexity.
What are some data center measurements and benchmarks and where can I find
them?
PUE (Power Usage Effectiveness): Created by members of The Green Grid, PUE is a
metric used to determine a data center's energy efficiency. A data center's PUE is calculated
by dividing the total amount of power entering the data center by the power used to run the
computer infrastructure within it. Expressed as a ratio, with efficiency improving as the value
approaches 1, data center PUEs typically range from about 1.3 (good) to 3.0 (bad), with an
average of around 2.5 (not so good).
DCiE (Data Center Infrastructure Efficiency): Created by members of the Green Grid,
DCiE is another metric used to determine the energy efficiency of a data center, and it is the
reciprocal of PUE. It is expressed as a percentage and is calculated by dividing IT equipment
power by total facility power. Efficiency improves as the DCiE approaches 100%. A data
center's DCiE typically ranges from about 33% (bad) to 77% (good), with an average DCiE
of 40% (not so good).
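A minimal sketch of both calculations; the 1,250 kW and 500 kW readings are illustrative
assumptions, not figures from the article.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE, as a percentage."""
    return 100 * it_equipment_kw / total_facility_kw

# Illustrative reading: 1,250 kW entering the facility, 500 kW reaching the IT gear.
total_kw, it_kw = 1250.0, 500.0
print(f"PUE = {pue(total_kw, it_kw):.2f}")     # 2.50, roughly the "average" cited above
print(f"DCiE = {dcie(total_kw, it_kw):.0f}%")  # 40%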
LEED Certified: Developed by the U.S. Green Building Council (USGBC), LEED is an
internationally recognized green building certification system. It provides third-party
verification that a building or community was designed and built using strategies aimed at
improving performance across all the metrics that matter most: energy savings, water
efficiency, CO2 emission reduction, the quality of the indoor environment, the stewardship
of resources and the sensitivity to their impact on the general environment. For more
information on LEED, go to www.usgbc.org.
The Green Grid: A not-for-profit global consortium of companies, government agencies
and educational institutions dedicated to advancing energy efficiency in data centers and
business computing ecosystems. The Green Grid does not endorse vendor-specific products
or solutions, and instead seeks to provide industry-wide recommendations on best practices,
metrics and technologies that will improve overall data center energy efficiencies. For more
on the Green Grid, go to www.thegreengrid.org.
Telecommunications Industry Association (TIA): TIA is the leading trade association
representing the global information and communications technology (ICT) industries. It
helps develop standards, gives ICT a voice in government, provides market intelligence and
certification, and promotes business opportunities and worldwide environmental regulatory
compliance. With support from its 600 members, TIA enhances the business environment
for companies involved in telecommunications, broadband, mobile wireless, information
technology, networks, cable, satellite, unified communications, emergency communications
and the greening of technology. TIA is accredited by ANSI.
TIA-942: Published in 2005, the Telecommunications Infrastructure Standards for Data
Centers was the first standard to specifically address data center infrastructure and was
intended to be used by data center designers early in the building development process.
TIA-942 covers:
• Site space and layout
• Cabling infrastructure
• Tiered reliability
• Environmental considerations
Tiered Reliability — The TIA-942 standard for tiered reliability has been adopted by ANSI
based on its usefulness in evaluating the general redundancy and availability of a data
center design.
Tier 1 Basic — no redundant components (N): 99.671% availability
• Susceptible to disruptions from planned and unplanned activity
• Single path for power and cooling
• Must be shut down completely to perform preventive maintenance
• Annual downtime of 28.8 hours
Tier 2 — Redundant Components (limited N+1): 99.741% availability
• Less susceptible to disruptions from planned and unplanned activity
• Single path for power and cooling includes redundant components (N+1)
• Includes raised floor, UPS and generator
• Annual downtime of 22.0 hours
Tier 3 — Concurrently Maintainable (N+1): 99.982% availability
• Enables planned activity (such as scheduled preventative maintenance) without disrupting
computer hardware operation (unplanned events can still cause disruption)
• Multiple power and cooling paths (one active path), redundant components (N+1)
• Annual downtime of 1.6 hours
Tier 4 — Fault Tolerant (2N+1): 99.995% availability
• Planned activity will not disrupt critical operations and can sustain at least one worst-case
unplanned event with no critical load impact
• Multiple active power and cooling paths
• Annual downtime of 0.4 hours
Due to the doubling of infrastructure (and space) over Tier 3 facilities, a Tier 4 facility will
cost significantly more to build and operate. Consequently, many organizations prefer to
operate at the more economical Tier 3 level as it strikes a reasonable balance between
CAPEX, OPEX and availability.
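The annual downtime figures quoted for each tier follow directly from the availability
percentages; a minimal sketch of the conversion (Tier 2 computes to roughly 22.7 hours,
slightly above the commonly quoted 22.0):

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_downtime_hours(availability_pct: float) -> float:
    """Convert an availability percentage into expected downtime per year."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

tiers = (("Tier 1", 99.671), ("Tier 2", 99.741), ("Tier 3", 99.982), ("Tier 4", 99.995))
for name, availability in tiers:
    print(f"{name}: about {annual_downtime_hours(availability):.1f} hours of downtime per year")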
Uptime Institute: This is a for-profit organization formed to achieve consistency in the
data center industry. The Uptime Institute provides education, publications, consulting,
research, and stages conferences for the enterprise data center industry. The Uptime
Institute is one example of a company that has adopted the TIA-942 tier rating standard as
a framework for formal data center certification. However, it is important to remember that
a data center does not need to be certified by the Uptime Institute in order to be compliant
with TIA-942.
Is the federal government involved in data centers?
Since data centers consume a large and rapidly growing share of the power grid, they have
attracted the attention of the federal government and global regulatory agencies.
Cap and Trade: Sometimes called emissions trading, this is an administrative approach to
controlling pollution by providing economic incentives for achieving reductions in polluting
emissions. In concept, the government sets a limit ("a cap") on the amount of pollutants an
enterprise can release into the environment. Companies that need to increase their
emissions must buy (or trade) credits from those who pollute less. The entire system is
designed to impose higher costs (essentially, taxes) on companies that don't use clean
energy sources. The Obama administration is proposing Cap and Trade legislation that is
expected to affect U.S. energy prices and data center economics in the near future.
DOE (Department of Energy): The U.S. Department of Energy's overarching mission is to
advance the national, economic, and energy security of the United States. The EPA and the
DOE have initiated a joint national data center energy efficiency information program. The
program is engaging numerous industry stakeholders who are developing and deploying a
variety of tools and informational resources to assist data center operators in their efforts to
reduce energy consumption in their facilities.
EPA (Environmental Protection Agency): The EPA is responsible for establishing and
enforcing environmental standards in order to safeguard the environment and thereby
improve the general state of America's health. In May 2009 the EPA released Version 1 of
the ENERGY STAR® Computer Server specification detailing the energy efficiency standards
servers must meet in order to carry the ENERGY STAR label.
PL 109-431: Passed in December 2006, the law instructs the EPA to report to Congress the
status of IT data center energy consumption along with recommendations to promote the
use of energy efficient computer servers in the US. It resulted in a "Report to Congress on
Server and Data Center Energy Efficiency" delivered in August 2007 by the EPA ENERGY
STAR Program. This report assesses current trends in the energy use and energy costs of
data centers and servers in the US and outlines existing and emerging opportunities for
improved energy efficiency. It provides particular information on the costs of data centers
and servers to the federal government and opportunities for reducing those costs through
improved efficiency. It also makes recommendations for pursuing these energy-efficiency
opportunities broadly across the country through the use of information and incentive-based
programs.
What should I consider when moving my data center?
When a facility can no longer be optimized to provide sufficient power and cooling — or it
can't be modified to meet evolving space and reliability requirements — then you're going to
have to move. Successful data center relocation requires careful end-to-end planning.
Site selection: A site suitability analysis should be conducted prior to leasing or building a
new data center. There are many factors to consider when choosing a site. For example, the
data center should be located far from anyplace where a natural disaster — floods,
earthquakes and hurricanes — could occur. As part of risk mitigation, locations near major
highways and aircraft flight corridors should be avoided. The site should be on high ground,
and it should be protected. It should have multiple, fully diverse fiber connections to
network service providers. There should be redundant, ample power for long term needs.
The list can go on and on.
Move execution: Substantial planning is required at both the old and the new facility
before the actual data center relocation can begin. Rack planning, application dependency
mapping, service provisioning, asset verification, transition plans, test plans and vendor
coordination are just some of the factors that go into data center transition planning.
If you are moving several hundred servers, the relocation may be spread over many days. If
this is the case, you will need to define logical move bundles so that interdependent
applications and services are moved together and you can stay in operation until the day on
which the move is completed.
On move day, everything must go like clockwork to avoid down time. Real time visibility into
move execution through a war room or a web-based dashboard will allow you to monitor
the progress of the move and be alerted to potential delays that require immediate action or
remediation.
What data center technologies should I be aware of?
Alternative Energy: Solar, wind and hydro show great potential for generating electricity
in an eco-friendly manner, and nuclear and hydro are promising sources of grid-based, green
power. However, the biggest challenge when it comes to using alternative energy for your
data center is the need for a constant supply at high service levels. If you use
alternative energy but still need to buy from the local power company when hit with peak
loads, many of the economic benefits you're reaping from the alternative energy source will
disappear quickly. As new storage mechanisms are developed that capture and store
excess capacity so it can be accessed when needed, alternative energy sources will
play a much greater role in the data center than they do today. Water- and air-based storage
systems are promising eco-friendly energy storage options.
Ambient Return: This is a system whereby air returns to the air conditioner unit naturally
and unguided. This method is inefficient in some applications because it is prone to mixing
hot and cold air, and to stagnation caused by static pressure, among other problems.
Chiller based cooling: A type of cooling where chilled water is used to dissipate heat in
the CRAC unit (rather than glycol or refrigerant). The heat exchanger in a chiller-based
system can be air- or water-cooled. Chiller-based systems provide CRAC units with greater
cooling capacity than DX-based systems. Besides removing the DX limitation of a 24°F
spread between output and input, a chiller system can adjust dynamically based on load.
Chimney effect: Just as your home chimney leverages air pressure differences to drive
exhaust, the same principle can be used in the data center. This has led to a common
design with cool air being fed below a raised floor and pulled into the data center as hot air
escapes above through the chimney. This design creates a very efficient circulation of cool
air while minimizing air mixing.
Cloud computing: This is a style of computing that is dynamically scalable through
virtualized resources provided as a service over the Internet. In this model the customer
need not be concerned with the technical details of the remote resources. (That's why it is
often depicted as a cloud in system diagrams.) There are many different types of cloud
computing options with variations in security, backup, control, compliance and quality of
service that must be thoroughly vetted to assure their use does not put the organization at
risk.
Cogeneration: This is the use of an engine (typically diesel or natural gas based) to
generate electricity and useful heat simultaneously. The heat emitted by the engine in a
data center application can be used by an "absorption chiller" (a type of chiller that converts
heat energy into cooling) providing cooling benefits in addition to electric power. In addition,
excess electricity generated by the system can be sold back to the power grid to defray
costs. In practice, the effective ROI of cogeneration is heavily dependent on the spread
between the cost of electricity and fuel. The cogeneration alternative will also contribute to a
substantial increase in CO2 emissions for the facility. This runs counter to the trend toward
eco-friendly solutions and will create a liability in Cap and Trade carbon trading.
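As a rough, hypothetical illustration of that spread, the fuel cost of a self-generated
kilowatt-hour can be compared with the grid tariff; none of the prices or the heat rate below
come from the article.

def self_generation_cost(gas_price_per_mmbtu: float, heat_rate_btu_per_kwh: float) -> float:
    """Fuel cost, in dollars per kWh, of generating electricity on site."""
    return gas_price_per_mmbtu * heat_rate_btu_per_kwh / 1_000_000

grid_price = 0.12   # $/kWh, assumed utility tariff
gas_price = 8.0     # $/MMBtu, assumed natural gas price
heat_rate = 9000    # Btu/kWh, assumed engine heat rate

gen_cost = self_generation_cost(gas_price, heat_rate)
spread = grid_price - gen_cost  # a positive spread favors cogeneration (before O&M and capital costs)
print(f"Self-generated fuel cost: ${gen_cost:.3f}/kWh; spread vs. grid: ${spread:.3f}/kWh")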
Colocation: Colocation is one of several business models where your data center facilities
are provided by another company. In the colocation option, data centers for multiple
organizations can be housed in the same facility sharing common power and cooling
infrastructure and facilities management. Colocation differs from a dedicated hosting
provider in that the client owns its own IT systems and has greater flexibility in what
systems and applications reside in their data center. The lines are blurred between the
various outsourcing models with variations in rights, responsibilities and risks. For this
reason, when evaluating new facilities it is important to make sure the business terms align
properly with your long term needs for the space.
Containers: The idea of a data center in a container is that all the power, cooling, space
and connectivity can be provisioned incrementally through self contained building blocks, or
standard sized shipping containers. These containers can be placed outside your place of
business to expand data center capacity or may be deployed in a warehouse type
environment. The primary benefits data center containers provide are that they support rapid
deployment and are integrated and tuned to support very high power densities. Containers
have been embraced for use in cloud-type services by Google and Microsoft. The potential
downsides of containers are several: they are expensive (more per usable SF than custom-
built facilities), tend to be homogeneous (designed for specific brands/models of systems)
and are intended for autonomous operation (the container must remain sealed to operate
within specifications).
CRAC (Computer Room Air Conditioner): A CRAC is a specialized air conditioner for
data center applications that can add moisture back into the air to maintain the proper
humidity level required by the electronic systems.
DX cooling (direct expansion): A compressor and glycol/refrigerant based system that
uses airflow to dissipate heat. The evaporator is in direct contact with the air stream, so the
cooling coil of the airside loop is also the evaporator of the refrigeration loop. The term
"direct" refers to the position of the evaporator with respect to the airside loop. Because a
DX-based system can reduce the air temperature by a maximum of about 23°F, it is much
more limited in application when compared to more flexible chiller-based systems.
Economizer: As part of a data center cooling system, air economizers expel the hot air
generated by the servers/devices outdoors and draw in the relatively cooler outside air
(instead of cooling and recirculating the hot air from the servers). Depending on the outdoor
temperature, the air conditioning chiller can either be partially or completely bypassed,
thereby providing what is referred to as free cooling. Naturally, this method of cooling is
most effective in cooler climates.
Fan tile: A raised floor data center tile with powered fans that improve airflow in a specific
area. Fan tiles are often used to help remediate hot spots. Hot spots are often the result of
a haphazard rack and server layout, or an overburdened or inadequate cooling system. The
use of fan tiles may alleviate a hot spot for a period of time, but improved airflow and
cooling systems that reduce electricity demands generally are a better option for most
facilities.
Floor to Ceiling Height: In modern, high-density data centers, the floor to ceiling height
has taken on greater importance in site selection. In order to build a modern, efficient
facility, best practices now call for a 36-inch (or deeper) raised floor plenum to distribute cool
air efficiently throughout the facility (with overhead power and cabling). In addition, by
leveraging the chimney effect and hot air return, the system can efficiently reject the hot air
while introducing a constant flow of cool air to the IT systems. To build a facility
upgradeable to 400 watts/SF, you should plan on a floor to ceiling height of at least 18 feet.
Some data center designs forego a raised floor and utilize custom airflow ducting and
vertical isolation. Since this is a fairly labor intensive process and is tuned to a specific rack
layout, it may not be suitable for installations where the floor plan is likely to evolve over the
life of the data center.
Flywheel UPS system: A low-friction spinning cylinder that generates power from kinetic
energy, and continues to spin when grid power is interrupted. The flywheel provides ride-
through electricity to keep servers online until the generators can start up and begin
providing power. Flywheels are gaining attention as an eco-friendly and space saving
alternative to traditional battery based UPS systems. The downside to flywheel power
backup is that the reserve power lasts only 15-45 seconds as compared to a 20 minute
window often built into battery backups.
Hot Aisle/Cold Aisle: Mixing hot air (from servers) and cold air (from air conditioning) is
one of the biggest contributors to inefficiencies in the data center. It creates hot spots,
inconsistent cooling and unnecessary wear and tear on the cooling equipment. A best
practice to minimize air mixing is to align the racks so that all equipment exhausts in the
same direction. This is achieved simply by designating the aisles between racks as either
exclusively hot-air outlets or exclusively cool-air intakes. With this type of deployment, cold
air is fed to the front of the racks by the raised floor and then exhausted from the hot aisles
overhead.
NOC (Network Operations Center): A service responsible for monitoring a computer
network for conditions that may require special attention to avoid a negative impact on
performance. Services may include emergency support to remediate Denial-of-Service
attacks, loss of connectivity, security issues, etc.
Rack Unit: A rack unit or U (less commonly, RU) is a unit of measure describing the height
of equipment intended for mounting in a computer equipment mounting rack. One rack unit
is 1.75 inches (44.45 mm) high.
RTU (Rooftop Unit): RTUs allow facilities operators to place data center air conditioning
components on the building's roof, thereby conserving raised white space while improving
efficiency. In addition, as higher performance systems become available, RTUs can be easily
upgraded without affecting IT operations.
Power-density: As servers and storage systems evolve to become ever more powerful and
compact, they place a greater strain on the facility to deliver more power, reject more heat
and maintain adequate backup power reserves (both battery backup and onsite power
generation). When analyzing power-density, it is best to think in terms of kW/rack and total
power, not just watts per square foot (which is a measure of facility capacity). Note: See
watts per square foot.
Power Density Paradox: Organizations with limited data center space often turn to
denser equipment to make better use of the space available to them. However, due to the
need for additional power, cooling and backup to drive and maintain this denser equipment,
an inversion point is reached where the total need for data center space increases rather
than falls. This is the power density paradox. The challenge is to balance the density of
servers and other equipment with the availability of power, cooling and space in order to
gain operating efficiencies and lower net costs.
Raised-floor plenum: This is the area between the data center sub floor and the raised
floor tiles. It is typically used to channel pressurized cold air up through floor panels to cool
equipment. It has also been used to route network and power cables, but this is not
generally recommended for new data center design.
Remote hands: In a hosted or colocation data center environment, remote hands refers to
the vendor-supplied, on-site support services for engineering assistance, including the power
cycling of IT equipment, visual inspection, cabling and sometimes even the swap-out of systems.
Steam Humidification: Through the natural cooling process of air conditioning, the
humidity levels of a data center are reduced, just as you would find in a home or office air
conditioning environment. However, due to the constant load of these AC systems, too
much moisture is removed from most IT environments and must be reintroduced to
maintain proper operating humidity levels for IT equipment. Most CRAC units use a relatively
expensive heat/steam generation process to increase humidity. These steam-based systems
also increase the outflow temperature from the CRAC unit and decrease its overall cooling
effectiveness. See: Ultrasonic humidification
Ultrasonic Humidification: Ultrasonic humidification uses a metal diaphragm vibrating at
ultrasonic frequencies and a water source to introduce humidity into the air. Because they do
not use heat and steam to create humidity, ultrasonic systems are 95% more energy
efficient than the traditional steam-based systems found in most CRAC units. Most
environments can easily be converted from steam based to ultrasonic humidification.
UPS (Uninterruptible Power Supply): This is a system that provides backup electricity
to IT systems in the event of a power failure until backup generation can kick in. UPS
systems are traditionally battery and inverter based systems, with some installations taking
advantage of flywheel-based technology.
VFD (Variable Frequency Drive): A system for controlling the rotational speed of an
alternating current (AC) electric motor by controlling the frequency of the electrical power
supplied to the motor. VFDs save energy by allowing the volume of fluid/air to adjust to
match the system's demands rather than having the motor operate at full capacity at all times.
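The savings come largely from the standard fan/pump affinity relationship, a general
engineering rule rather than something stated in the article, in which shaft power falls
roughly with the cube of speed; a minimal sketch:

def fan_power_fraction(speed_fraction: float) -> float:
    """Affinity-law approximation: fan/pump power scales with the cube of speed."""
    return speed_fraction ** 3

# Running a fan at 80% speed needs only about half of full power.
print(f"Power at 80% speed: {fan_power_fraction(0.8):.0%} of full power")  # ~51%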
Virtualization: As servers have become more and more powerful, they have also (in
general) become underutilized. The challenge to IT organizations has been to
compartmentalize applications so they can be self contained and autonomous while at the
same time sharing compute capacity with other applications on the same device. This is the
challenge addressed by virtualization. Virtualization is the creation of a virtual (rather than
actual) version of something, such as an operating system, a server, a storage device or
network resources. Through virtualization, multiple resources can reside on a single device
(thereby addressing the problem of underutilization) and many systems can be managed on
an enterprise-wide basis.
Watts per Square Foot: When describing a data center's capacity, watts per square foot
is one way to express the facility's aggregate capacity. For example, a 10,000 square foot
facility with 1 MW of power and cooling capacity will support an average deployment of 100
watts per square foot across its raised floor. Since some of this space may be taken up by
CRAC units and hallways, the effective power density supported in the remaining space may
be much greater (up to the 1 MW total capacity). Facilities designed for 60 W/SF deployments
just a few years ago cannot be upgraded to support the 400 W/SF loads demanded by modern,
high-density servers. Note: See power-density.
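A minimal sketch of the two views of capacity described above; the 25 SF-per-rack footprint
is an illustrative assumption, not a figure from the article.

def watts_per_sf(total_capacity_w: float, raised_floor_sf: float) -> float:
    """Average facility capacity expressed as watts per square foot of white space."""
    return total_capacity_w / raised_floor_sf

def kw_per_rack(density_w_per_sf: float, sf_per_rack: float) -> float:
    """The same capacity viewed per rack; sf_per_rack is an assumed footprint including aisle share."""
    return density_w_per_sf * sf_per_rack / 1000

# The facility from the example above: 1 MW across 10,000 SF of raised floor.
density = watts_per_sf(1_000_000, 10_000)
print(f"{density:.0f} W/SF, about {kw_per_rack(density, 25):.1f} kW per rack at 25 SF per rack")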
Michael Bullock is the founder and CEO of Transitional Data Services (TDS), a Boston, MA-
based consulting firm focusing on green data centers, data center consolidation / relocation,
enterprise applications and technical operations. Prior to founding TDS, Bullock held
executive leadership positions at Student Advantage, CMGI and Renaissance Worldwide.