Intel IT
IT Best Practices
Facility and IT Innovation
January 2012
…computing and lowered facility costs by adopting a proactive server refresh strategy and investing in the latest generation Intel® Xeon® processors. Other contributors to data center density include building an enterprise private cloud and increasing cabinet sizes, including 15 kilowatts (kW), 22.5 kW, and 30 kW. We anticipate that our data center density will continue to increase.
The ever-increasing capabilities predicted by Moore’s law—basically, doing more in less space—make it imperative that we optimize data center efficiency and reliability. To do this, we focus on several areas, some of which are industry-standard and simply good room management practices. Others, such as our automated temperature, humidity, and static pressure controls, are proven elements of air-cooled designs developed by Intel.

• Air management. Air segregation and automated control sensors enable us to efficiently manage capacity and cooling costs in our data centers.
• Thermal management. Excess heat is a major contributor to IT equipment downtime due to internal thermal controls that are programmed to shut down the device. For our high-density data centers, we concentrate on maximizing thermal ride-through.
• Architectural considerations. To support and optimize our air and thermal management techniques, we attend to architectural details such as vapor sealing walls, floors, and ceilings; removing raised metal floors (RMF); increasing floor structural loading limits; and using overhead structured cabling paths.
• Electrical considerations. Our high-density data centers use flexible redundant busway circuit designs and 415/240 Vac distribution to reduce electrical losses, helping us design an efficient, fault-tolerant room.
• Stranded capacity. We proactively identify stranded capacity in all areas of the data center, such as power and cooling capacity, to increase efficiency and avoid having to make expensive changes to the existing infrastructure.

By focusing on these facility design areas, we can increase room infrastructure capacity, which enables higher densities while reducing operating costs and maintaining the levels of redundancy required by our business models.

John Musilli
Senior Data Center Architect, Intel IT

Brad Ellison
Senior Data Center Architect, Intel IT
IT@Intel White Paper Facilities Design for High‑density Data Centers
IT@Intel
The IT@Intel program connects IT professionals around the world with their peers inside our organization – sharing lessons learned, methods, and strategies. Our goal is simple: Share Intel IT best practices that create business value and make IT a competitive advantage. Visit us today at www.intel.com/IT or contact your local Intel representative if you’d like to learn more.

Although high-density data centers can be more efficient in terms of energy consumption and use of space, they also pose a potentially greater business continuity risk. A low-density data center with less than 5 kW per cabinet might have 2,500 servers in 20,000 square feet. An average incident, such as a rack- or row-level event, might result in the loss of only 10 percent of those servers. In higher density data centers, however, that same compute or storage capacity may be located in only 500 square feet. This higher concentration of servers makes it more likely that an incident of the same scope will affect a much greater percentage of the installed capacity.

We focus our design efforts on five main areas: air management, thermal management, electrical design, architectural design, and harvesting stranded capacity. Focusing on these areas helps us to reach our goal of consolidating our data centers to reduce operating cost while still maintaining the levels of redundancy required by our business models.

Air Management
Airflow management is the key to increasing the cooling capacity of facilities. Airflow design for new construction can influence the size and shape of rooms or buildings that
house data centers and can determine the maximum efficiency and density of an air-cooled data center. Efficient airflow design in existing data centers increases energy efficiency, reduces operations cost, avoids hot spots, and, most importantly, supports high-density computing by enabling us to use larger per-cabinet loads, such as 15–30 kW.

We use computational fluid dynamics (CFD) algorithms, developed for Intel’s manufacturing facilities, to model and design air segregation configurations for our data centers. The right configuration enables our servers to operate at optimum temperature without wasting energy.

Our air management goals include the following:
• Supply the necessary amount of cool air to each cabinet
• Remove the same amount of hot exhaust air from each cabinet
• Keep the hot exhaust air away from the equipment air intake
• Maximize the temperature difference between cold and hot air (Delta T) to improve cooling efficiency

These goals must be achieved in a redundant and uninterrupted manner, according to business requirements.

Although the American Society of Heating, Refrigerating, and Air-Conditioning Engineers guidelines allow a supply air temperature of 80.6° F (27° C), we keep our supply air at 78° F (25.5° C) in the cold aisle, measured between six and seven feet above the finished floor. We set the supply air temperature just below the thermal boundary for the first server fan speed increase (80° F (26° C)). This prevents speed-up of the internal fans, which would use 2 to 10 percent additional uninterruptible power supply (UPS) server power, depending on the number of fans and speed steps programmed into the device’s BIOS.

…all of our data centers—even those with 30-kW cabinets—are air cooled, and our design roadmap supports 50 kW per rack for new construction.

Intel’s data center designs use both hot aisle enclosures (HAEs) and passive ducted cabinet air segregation, also known as “chimney cabinets.” Although our CFD modeling shows that hot aisle and cold aisle enclosures provide the same thermal solution, we have experienced greater benefits with the HAE configuration with a return air temperature design range from 95° F (35° C) to 105° F (41° C), such as:
• Greater heat transfer efficiency across the cooling coils due to elevated return air temperatures and a greater Delta T.
• Higher return temperatures are practical because, with carefully developed work procedures, data center staff enter the enclosure only for new landings or maintenance, so our staff works in an ambient temperature environment most of the time.
• During a mechanical or electrical failure that causes a loss of cooling to the data center, the walls, floor, and cabinets serve as a “thermal battery,” releasing stored energy and contributing to thermal ride-through (discussed in the Thermal Management section later in this paper).

Air Segregation
In our experience, with segregated air paths and flooded room designs we can support densities up to 30 kW per server rack anywhere in the data center. A key design requirement for air segregation is to maintain a constant pressure differential between the supply and return air paths, so that enough conditioned air is provided to meet the actual server demand.
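The relationship between cabinet load, supply airflow, and Delta T can be sketched with the standard sensible-heat rule of thumb for air (a textbook HVAC formula, not something stated in the paper); the 78° F supply and 105° F return temperatures are the figures cited above, and the cabinet loads are the paper’s 15/22.5/30 kW densities:

```python
# Sensible-heat airflow sizing: Q (BTU/hr) = 1.08 * CFM * Delta T (deg F),
# so CFM = kW * 3412 / (1.08 * Delta T). The 1.08 factor assumes
# standard-density air; it is a rule of thumb, not an Intel design value.

BTU_PER_KWH = 3412.14    # 1 kW of IT load = 3412.14 BTU/hr of heat
SENSIBLE_FACTOR = 1.08   # BTU/hr per CFM per degree F for standard air

def required_cfm(cabinet_kw: float, supply_f: float, return_f: float) -> float:
    """Supply airflow (CFM) needed to absorb cabinet_kw at the given Delta T."""
    delta_t = return_f - supply_f
    return cabinet_kw * BTU_PER_KWH / (SENSIBLE_FACTOR * delta_t)

for kw in (15, 22.5, 30):
    cfm = required_cfm(kw, supply_f=78.0, return_f=105.0)
    print(f"{kw:>5} kW cabinet: {cfm:,.0f} CFM ({cfm / kw:.0f} CFM per kW)")
```

At the paper’s 27° F Delta T this works out to roughly 117 CFM per kW, consistent with the 80 to 120 CFM per kW band listed later in Table 2.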
• In our data centers that use a raised metal floor (RMF) for supply air delivery, all of the leaked air—0.5 to 2 percent—is available to cool the IT equipment in areas outside the HAE.
• The common building electrical components outside of the HAE, such as panels, breakers, and conductors, are not subjected to the elevated temperatures found in cold aisle enclosure designs and therefore do not need to be evaluated for thermal de-rating.

Table 1. Rack density and watts consumed per square foot^

Year      Rack Density        Watts Per Square Foot
2004      15 kilowatts (kW)   250
2006      22.5 kW             375
Current   30 kW               500

^ Data is based on a 5,000-square-foot room with cooling units and power distribution units (PDUs) in the same space.

15-kW Cabinets
For low-density cabinets up to 15 kW, we use traditional air cooling with the computer room air conditioning (CRAC) or computer room air handler (CRAH) units in the return airstream without any air segregation. This configuration is illustrated in Figure 1.

22.5-kW Cabinets
For 22.5-kW cabinets, we use HAEs with the CRAC unit in the return airstream, as illustrated in Figure 2.
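Table 1’s watts-per-square-foot figures can be cross-checked against its rack densities; the cabinet count below is back-solved from the table and the footnote’s 5,000-square-foot room, not stated in the paper:

```python
# Cross-check of Table 1: watts per square foot implied by rack density.
# cabinets = (W/sq ft * room area) / (rack kW * 1000). If the table is
# internally consistent, every row implies the same cabinet count.

ROOM_SQFT = 5_000  # from the Table 1 footnote

table1 = {"2004": (15.0, 250), "2006": (22.5, 375), "Current": (30.0, 500)}

for year, (rack_kw, w_per_sqft) in table1.items():
    cabinets = w_per_sqft * ROOM_SQFT / (rack_kw * 1000)
    print(f"{year}: {rack_kw} kW racks at {w_per_sqft} W/sq ft "
          f"-> ~{cabinets:.1f} cabinets")
```

Every row back-solves to the same roughly 83-cabinet room, so the density growth in the table comes entirely from per-rack load, not from packing in more cabinets.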
Figure 1. Simply moving the computer room air conditioning, or computer room air handler, units into the return air stream without air segregation provides sufficient cooling for 15-kilowatt cabinets.

Figure 2. To accommodate the greater cooling needs of 22.5-kilowatt (kW) cabinets, we bolster the standard 15-kW design with walls, forming a hot aisle enclosure.
Flooded Supply Air Design
A flooded supply air design, used in conjunction with chimney cabinets or HAEs, eliminates the need for a raised metal floor (RMF) and enables conditioned air to be introduced into the room from any direction at the required volume. An optimum design for a data center without an RMF can use air segregation, an air-side economizer, or a fan-wall configuration.

In this type of design, we keep the supply-air velocity in front of the server and storage devices at or below what we call “terminal velocity,” targeted at less than 400 feet per minute (fpm). A supply air velocity greater than 400 fpm can exceed the ability of some IT equipment to draw the air into the device.

Some facilities use an RMF as an access floor for power, cooling, and network infrastructure. This does not prohibit a flooded air design—we simply consider the RMF elevation as the slab height, and raise the room air handlers’ supply path above the RMF, as shown in Figure 4.

Automated Control Sensors
In the past, our room air handler designs used uncontrolled direct-drive fans to meet the air moving and cooling requirements, with return air temperature control sensors. However, this was inefficient because Intel data centers rarely operate at 100-percent design load. Typically, data centers operate at about 60 percent of peak load; however, a particular job might push the workload to 90 percent for a few hours, which then requires maximum design cooling during that period.

To improve the energy efficiency of our data centers, we have adopted manufacturing clean room management technology that uses real-time data to supply the appropriate temperature and volume of air for the data center load at any particular time. We have also replaced single-speed, direct-drive fan motors in the air handling systems with variable frequency drives (VFDs) to allow the throttling of cold air during fluctuations in server load and heat generation.

Sensor Case Study
To gather real-world data, we installed wired and wireless automated temperature, static pressure, humidity, and power monitoring sensors at a 1.2-megawatt production data center that did not have air segregation. These sensors sample the environment every 15 seconds and average the readings over 1 to 6 minutes.

The sensor array includes:
• CRAC and air handling unit (AHU) flow and temperature sensors. We placed flow meters in the chilled water supply and return pipes to determine the actual real-time work done by each unit. We placed two additional temperature sensors in the supply air duct and six sensors in the return air path to identify short cycling and the Delta T across the cooling coil.
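The sampling scheme in the case study (15-second samples, smoothed over 1 to 6 minutes) can be sketched as a simple rolling average; the class name and example readings are illustrative, not Intel’s implementation:

```python
# Rolling-average smoothing of 15-second sensor samples, as described in
# the sensor case study. A fixed-length deque keeps exactly one averaging
# window of samples; the control loop would act on the smoothed value.
from collections import deque

SAMPLE_PERIOD_S = 15  # sampling interval from the case study

class SmoothedSensor:
    def __init__(self, window_minutes: int):
        if not 1 <= window_minutes <= 6:
            raise ValueError("paper describes 1- to 6-minute averaging windows")
        samples_per_window = window_minutes * 60 // SAMPLE_PERIOD_S
        self.window = deque(maxlen=samples_per_window)

    def add_sample(self, reading: float) -> float:
        """Record one 15-second reading; return the current rolling average."""
        self.window.append(reading)
        return sum(self.window) / len(self.window)

# One minute of hypothetical supply-air temperatures (4 samples at 15 s).
supply_temp = SmoothedSensor(window_minutes=1)
for reading in (77.8, 78.4, 78.1, 77.9):
    smoothed = supply_temp.add_sample(reading)
print(f"smoothed supply air: {smoothed:.2f} F")
```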
Figure 3. Passive ducted equipment cabinets are very effective at delivering server exhaust to a return air plenum in the ceiling, allowing us to air-cool 30-kW cabinets.

Figure 4. For rooms with a raised metal floor (RMF), we raise the room air handlers’ supply path above the RMF to achieve a flooded supply air design used in conjunction with chimney cabinets or hot aisle enclosures.
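The terminal-velocity limit described under the flooded supply air design reduces to a one-line check (face velocity is airflow divided by free delivery area); the airflow and area figures below are illustrative assumptions, not Intel design values:

```python
# Terminal-velocity check for a flooded supply air design: keep the face
# velocity in front of the IT intake below the ~400 fpm target cited in
# the paper. velocity (fpm) = airflow (CFM) / free delivery area (sq ft).

TERMINAL_VELOCITY_FPM = 400

def face_velocity_fpm(airflow_cfm: float, free_area_sqft: float) -> float:
    """Supply air face velocity in feet per minute."""
    return airflow_cfm / free_area_sqft

def min_delivery_area_sqft(airflow_cfm: float) -> float:
    """Smallest free area that keeps the velocity under the 400 fpm target."""
    return airflow_cfm / TERMINAL_VELOCITY_FPM

airflow = 3_500  # illustrative: roughly a 30-kW cabinet at a 27 F Delta T
print(f"{airflow} CFM needs at least {min_delivery_area_sqft(airflow):.2f} sq ft "
      f"of free area to stay under {TERMINAL_VELOCITY_FPM} fpm")
```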
In these data centers, we place room air handler fans and a chilled water pump on a separate mechanical load UPS to move existing air and water through the room air handler coils. The amount of total heat removed is proportional to the temperature and volume of chilled water available. For direct expansion systems, we install the mechanical load UPS to support the minimum cooling requirements before the UPS batteries are drained or before the start of the generator and compressor cycle times.

In some designs, we use the outside air environment as a supply or exhaust solution during emergency thermal events to increase the total usable room cooling capacity. We exhaust air out of the room, either to a hallway or outside the building through wall or roof vents.

At times, we mix data center and facility motor loads on the UPS systems because we have found this approach can reduce harmonic current content and improve the overall output power factor (PF) of the UPS units. We use VFDs to feed larger motors because they provide in-rush isolation from the other UPS loads. Some motors are served from UPS units without VFDs but are usually smaller in horsepower compared to the UPS capacity. Depending on the size of the motors to be used and the UPS manufacturer, we may consult with the UPS manufacturer to have the UPS properly sized for the motor in-rush current.

Room Environmentals
Whenever possible, we use room-level global integrated control logic instead of standalone device control logic. This allows all the mechanical devices to work together to efficiently maintain the environmentals within the zone of influence for a particular cooling or airflow device. We carefully control the humidity, static pressure, and temperature in our data centers, as shown in Table 2.

Table 2. Room environmental settings used at Intel

Environmental Factor                                Setting
Humidity                                            20% to 80% for servers
Supply Air Temperature                              78° F (25.5° C), top of equipment, cold aisle
Pressure Differential between Supply Air and RAP    0.015 WC positive supply air (equivalent to 3.736 Pa)
CFM per kW                                          80 to 120 CFM

CFM – cubic feet per minute; kW – kilowatt; Pa – Pascal; RAP – return air plenum; WC – water column

For example, typically we keep the relative humidity between 20 and 80 percent for server environments. However, tape drives and tape storage may have different environmental specifications and may require a separate conditioned space.

Architectural Considerations
To support and optimize our air and thermal management efforts, our data centers feature several architectural modifications that may not be typical in many data centers.
• Sealed walls, ceiling, and floors. Sealing these surfaces with a vapor barrier enables better humidity control. Concrete and drywall are porous and allow moisture to enter the data center environment.
• Removal of RMFs. We began removing RMFs from our data centers 10 years ago. In addition to enabling better air management, removing RMFs offers the following benefits:
– More available floor space from no ramps or stairs
– Floor installation and maintenance cost savings
– Reduced risk from seismic events and simplified seismic bracing
– Reduced dead weight floor loading for above-grade rooms
– Reduced overall room slab-to-deck height requirements
– Opportunity for a ceiling RAP
– No need for under-floor leak detection and fire suppression
– No floor safety issues
– No electrical bonding of flooring components to the earth ground
Because of the benefits of a no-RMF design, we made this part of our standard data center design three years ago.
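Table 2’s pressure setting can be cross-checked with the standard water-column-to-Pascal conversion (1 inch WC ≈ 249.089 Pa, a textbook constant rather than a figure from the paper):

```python
# Unit cross-check for Table 2: 0.015 inches of water column (WC) of
# positive pressure between the supply air and the return air plenum,
# quoted in the table as 3.736 Pa.

PA_PER_INCH_WC = 249.089  # standard conversion at 4 C water density

def wc_to_pascal(inches_wc: float) -> float:
    """Convert a pressure in inches of water column to Pascals."""
    return inches_wc * PA_PER_INCH_WC

print(f"0.015 in WC = {wc_to_pascal(0.015):.3f} Pa")
```

The result matches the 3.736 Pa equivalence given in the table.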
• Increased floor loading specifications. Our older data centers are limited to 125 pounds per square foot; the current design standard is now 350 pounds per square foot. The higher floor loading limit enables the floors to support the additional weight of more fully loaded blade server racks and storage arrays—each weighing between 2,000 and 2,500 pounds.
• Integrated performance “shakedown” testing. We test all systems in a way that resembles actual data center events, including using controlled overload conditions, to determine if controls and procedures are established and can be used to prevent or mitigate loss of data center services. All systems are integrated and tested to the maximum fault capacity and tolerances, per engineered or factory specifications.

Structured Cabling
Departing from industry-standard data center design, for the past 10 years we have used custom-length power and network cables and have eliminated the use of cable management arms on server racks and patch panels in our row-end network access distribution between the switch and the server. As a standard practice, we replace the layer-one networking cabling with each server refresh cycle, allowing for a cable plant upgrade every three to five years. While patch panels and patch cables offer flexibility, they also add four more possible points of failure, along with an approximate doubling of the cost of installation. We have found that network paths rarely change, so the benefits of lower cost and having only two points of failure instead of six far outweigh the flexibility benefit.

Other layer-one network practices we use to minimize risk and total loss of connectivity include the following:
• Network switch physical separation. We add this to the logical network design to prevent loss of redundant switches or routers caused by a local catastrophic event, including accidental fire sprinkler discharge, overhead water from plumbing or building leaks, damage from falling objects, or local rack electrical failure. We use a preferred distance of at least 40 lineal feet (12 meters) between redundant devices.
• Overhead cabling trays. These provide reasonable access and flexibility to run copper and fiber throughout the data center without restricting airflow in older data centers or creating a need for an RMF.
• Armor-covered fiber cable for multiple-strand conductors. We use this arrangement in paths going into or out of the data center room or building to prevent damage from activities that are not controlled by data center operators.
• Cable management. We are careful to dress cables away from server chassis fan outlets.

Electrical Considerations
The data center electrical distribution is critical for an efficient, fault-tolerant room. Our high-density data centers use flexible branch circuit designs, which reduce electrical losses. Some of the ways we make our data centers’ electrical systems reliable and cost-efficient include the following:
• Overhead busway power distribution. In 2000, our first installed overhead bus was only 100 amps, and our rack power demand was 3 kW to 5 kW, supporting eight server racks in a redundant configuration. In 2012 designs, the amperage has increased to 800 amps per bus-duct to support higher density server racks.
• UPS derating. The large-scale deployment of refreshed servers that use switching power supply units (PSUs) can trend toward a leading PF. This in turn can cause higher temperatures that negatively affect the UPS output transformer and, depending on the design of the UPS electronics, can create other internal UPS problems. Because of this, we have had to derate some older UPS modules and specific UPS manufacturers’ designs. We have seen a leading PF with new servers in our data centers, accounting for a greater percentage of the attached UPS load. Intel engineers have consulted with UPS manufacturers, determined the correct rating for specific UPS equipment, and updated Intel’s design construction standards to reflect these changes. Newer UPS units are designed to handle leading PFs and have resolved the issues generated by the newer PSUs.
• Variable-frequency drives (VFDs). All new fan and pump equipment installations use VFDs, and we retrofit them to existing systems in the field. The building management systems control the VFDs on cooling fans, with inputs from pressure differential sensors.
• Power diversity. Depending on the power configuration, some UPSs can be fully loaded to the manufacturers’ design rating as power source one. If the UPS fails, a three-way “break before make” static transfer switch (STS) transfers the power source to utility power as one of our redundant sources of equipment power. Then, when the generator starts, the STS transfers the power source to the generator. When the UPS is restored, the STS transfers back to utility power, then back to the UPS, without having to drop the load to the data center.
• High voltage. We use the highest voltage alternating current (Vac) the equipment accepts. This allows for higher efficiency and greater kW per circuit.
• 415/240 Vac power distribution. In 60-Hz countries, we can provide twice as many kW per electrical path—one-half the installation cost per kW compared to conventional 208/120 Vac installations—at
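The claim that 415/240 Vac delivers roughly twice the kW per path of 208/120 Vac follows directly from the three-phase power equation; the 100-amp rating below is an illustrative assumption, not an Intel circuit specification:

```python
# Three-phase power comparison behind the 415/240 Vac claim:
# P (kW) = sqrt(3) * V_line-to-line * I * PF / 1000.
# At the same ampacity and power factor, power scales linearly with
# line-to-line voltage, so 415 V vs. 208 V is almost exactly a 2x ratio.
import math

def three_phase_kw(v_line_to_line: float, amps: float,
                   power_factor: float = 1.0) -> float:
    """Three-phase power in kW for a given line-to-line voltage and current."""
    return math.sqrt(3) * v_line_to_line * amps * power_factor / 1000

kw_415 = three_phase_kw(415, 100)
kw_208 = three_phase_kw(208, 100)
print(f"415 Vac path: {kw_415:.1f} kW, 208 Vac path: {kw_208:.1f} kW "
      f"(ratio {kw_415 / kw_208:.2f}x)")
```

The ratio is 415/208 ≈ 2.0 regardless of the amperage chosen, which is why the same conductors and breakers deliver about twice the kW at the higher voltage.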
Harvesting Stranded UPS Capacity
Electrical load phase imbalance at several components can limit or strand total conditioned electrical capacity available to data center devices. These components include the UPS, distribution breakers, room PDUs, and rack-level circuit breakers. We attempt to maintain a phase value difference of 5 to 15 percent between phases, as our experience has shown that this range provides an efficient use of conditioned power and prevents circuit breaker trips or UPS by-pass events.

These same components, if undersized for the load, can also limit or strand downstream electrical capacity to the data center devices. We carefully size these components to provide all the load the system was intended to deliver, with additional capacity for temporary downstream load changes.

We also carefully construct our redundancy topology so that it is in balance with the UPS, distribution board, room PDUs, branch circuits, and rack power strips. The goal is that no single component or circuit limits the capacity of the overall electrical design during a failover event. This is important because failover events typically put additional load on the system during a power transfer.

Conclusion
As we continue to consolidate our data centers, proactively refresh servers and storage with newer systems based on the latest generation of Intel Xeon processors, and further increase the virtualization of our Office and Enterprise applications for our enterprise private cloud, the density of our data centers is steadily increasing.

We focus on several areas to optimize the efficiency and reliability of Intel’s high-density data centers. These areas include air and thermal management, architectural and electrical considerations, and the harvesting of stranded cooling and power capacity.

Focusing on these design areas enables us to increase the room infrastructure capacity—thereby enabling higher densities and reducing operating cost while still maintaining the levels of redundancy required by our business models.

Contributors
Ajay Chandramouly, Industry Engagement Manager, Intel IT
ST Koo, Data Center Architect, Intel IT
Ravindranath D. Madras, Data Center Enterprise Architect, Intel IT
Marissa Mena, Data Center Architect, Intel IT
Isaac Priestly, Strategic Financial Analyst, Intel IT

Acronyms
AHU      air handling unit
CFD      computational fluid dynamics
CFM      cubic feet per minute
CRAC     computer room air conditioning
CRAH     computer room air handler
Delta T  change in temperature
fpm      feet per minute
HAE      hot aisle enclosure
HRS      heat recycling system
kW       kilowatt
PDU      power distribution unit
PF       power factor
PSU      power supply unit
RAP      return air plenum
RMF      raised metal floor
STS      static transfer switch
UPS      uninterruptible power supply
Vac      voltage alternating current
VFD      variable frequency drive

For more information on Intel IT best practices, visit www.intel.com/it.
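The 5 to 15 percent phase-balance target described under Harvesting Stranded UPS Capacity can be expressed as a simple check; the per-phase amp readings and the highest-phase-referenced imbalance metric are illustrative assumptions (other definitions, such as deviation from the three-phase average, are also common):

```python
# Sketch of the phase-balance target from the Harvesting Stranded UPS
# Capacity section: keep the spread between the most- and least-loaded
# phases within 5 to 15 percent. The per-phase amp readings are made up.

def phase_imbalance_pct(a: float, b: float, c: float) -> float:
    """Spread between highest- and lowest-loaded phase, as % of the highest."""
    return (max(a, b, c) - min(a, b, c)) / max(a, b, c) * 100

def in_target_band(a: float, b: float, c: float) -> bool:
    """True if the imbalance falls in the paper's stated 5-15 percent range."""
    return 5.0 <= phase_imbalance_pct(a, b, c) <= 15.0

readings = (142.0, 150.0, 136.0)  # hypothetical amps per phase on one PDU panel
pct = phase_imbalance_pct(*readings)
print(f"imbalance: {pct:.1f}% -> "
      f"{'within target band' if in_target_band(*readings) else 'rebalance'}")
```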
This paper is for informational purposes only. THIS DOCUMENT IS PROVIDED “AS IS” WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF
MERCHANTABILITY, NONINFRINGEMENT, FITNESS FOR ANY PARTICULAR PURPOSE, OR ANY WARRANTY OTHERWISE ARISING OUT OF ANY PROPOSAL,
SPECIFICATION OR SAMPLE. Intel disclaims all liability, including liability for infringement of any patent, copyright, or other intellectual property rights, relating to use of
information in this specification. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted herein.
Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. * Other names and brands may be claimed as the property of others.
Copyright © 2012 Intel Corporation. All rights reserved. Printed in USA Please Recycle 0112/ACHA/KC/PDF 326180-001US