
IT@Intel White Paper

Intel IT
IT Best Practices
Facility and IT Innovation
January 2012

Facilities Design for High‑density Data Centers
Executive Overview
At Intel IT, we have enabled high-density computing and lowered facility costs by adopting a proactive server refresh strategy and investing in the latest generation of Intel® Xeon® processors. Other contributors to data center density include building an enterprise private cloud and increasing cabinet power densities, including 15 kilowatts (kW), 22.5 kW, and 30 kW. We anticipate that our data center density will continue to increase.
The ever-increasing capabilities predicted by Moore's law—basically, doing more in less space—make it imperative that we optimize data center efficiency and reliability. To do this, we focus on several areas, some of which are industry-standard and simply good room management practices. Others, such as our automated temperature, humidity, and static pressure controls, are proven elements of air-cooled designs developed by Intel.

• Air management. Air segregation and automated control sensors enable us to efficiently manage capacity and cooling costs in our data centers.

• Thermal management. Excess heat is a major contributor to IT equipment downtime due to internal thermal controls that are programmed to shut down the device. For our high-density data centers, we concentrate on maximizing thermal ride-through.

• Architectural considerations. To support and optimize our air and thermal management techniques, we attend to architectural details such as vapor sealing walls, floors, and ceilings; removing raised metal floors (RMF); increasing floor structural loading limits; and using overhead structured cabling paths.

• Electrical considerations. Our high-density data centers use flexible redundant busway circuit designs and 415/240 Vac distribution to reduce electrical losses, helping us design an efficient, fault-tolerant room.

• Stranded capacity. We proactively identify stranded capacity in all areas of the data center, such as power and cooling capacity, to increase efficiency and avoid having to make expensive changes to the existing infrastructure.

By focusing on these facility design areas, we can increase room infrastructure capacity, which enables higher densities while reducing operating costs and maintaining the levels of redundancy required by our business models.

John Musilli
Senior Data Center Architect, Intel IT

Brad Ellison
Senior Data Center Architect, Intel IT

Contents

Executive Overview
Background
High-Density Facility Design at Intel
  Air Management
  Thermal Management
  Architectural Considerations
  Electrical Considerations
  Stranded Capacity
Conclusion
Acronyms

IT@Intel
The IT@Intel program connects IT professionals around the world with their peers inside our organization – sharing lessons learned, methods, and strategies. Our goal is simple: share Intel IT best practices that create business value and make IT a competitive advantage. Visit us today at www.intel.com/IT or contact your local Intel representative if you'd like to learn more.

Background
Intel's business strategy includes consolidating our data centers to increase utilization. To reach this goal, we have increased the density of our data centers over the last decade by putting more powerful servers into less space. Each new generation of Intel® Xeon® processor features more compute power and more cores. Therefore, as we refresh our servers on a proactive three- to four-year schedule, a single new server can take the place of several older ones—accomplishing more work in a smaller footprint. In addition, building Intel's private enterprise cloud, by virtualizing more than 60 percent of our Office and Enterprise applications, has put more demands on our data centers, thereby increasing their density. We anticipate that the density of our data centers will continue to increase, with current designs including 15-kilowatt (kW), 22.5-kW, and even 30-kW cabinets.

Although high-density data centers can be more efficient in terms of energy consumption and use of space, they also pose a potentially greater business continuity risk. A low-density data center with less than 5 kW per cabinet might have 2,500 servers in 20,000 square feet. An average incident, such as a rack- or row-level event, might result in the loss of only 10 percent of those servers. In higher density data centers, however, that same compute or storage capacity may be located in only 500 square feet. This higher concentration of servers makes it more likely that an incident will result in the failure of at least 50 percent of the servers and network devices, depending on the switch topology.

Mitigating these risks and challenges, while minimizing costs, affects every aspect—from the slab to the roof—of data center design. The savings, risks, and the resulting design must be carefully balanced to meet customer business requirements based on the intended use of each data center.

High-Density Facility Design at Intel
Recognizing both the benefits and the risks of high-density computing and storage, Intel IT focuses on several design areas, some of which are industry-standard and simply good room management practices. Others, such as our automated temperature, humidity, and static pressure controls, are proven elements of higher density air-cooled designs developed by Intel.

We focus our design efforts on five main areas: air management, thermal management, electrical design, architectural design, and harvesting stranded capacity. Focusing on these areas helps us to reach our goal of consolidating our data centers to reduce operating cost while still maintaining the levels of redundancy required by our business models.

Air Management


Airflow management is the key to increasing the cooling capacity of facilities. Airflow design for new construction can influence the size and shape of rooms or buildings that house data centers and can determine the maximum efficiency and density of an air-cooled data center. Efficient airflow design in existing data centers increases energy efficiency, reduces operations cost, avoids hot spots, and, most importantly, supports high-density computing by enabling us to use larger per-cabinet loads, such as 15–30 kW.

We use computational fluid dynamics (CFD) algorithms, developed for Intel's manufacturing facilities, to model and design air segregation configurations for our data centers. The right configuration enables our servers to operate at optimum temperature without wasting energy.

Our air management goals include the following:
• Supply the necessary amount of cool air to each cabinet
• Remove the same amount of hot exhaust air from each cabinet
• Keep the hot exhaust air away from the equipment air intake
• Maximize the temperature difference between cold and hot air (Delta T) to improve cooling efficiency

These goals must be achieved in a redundant and uninterrupted manner, according to business requirements.

Air segregation
In our experience, with segregated air paths and flooded room designs we can support densities up to 30 kW per server rack anywhere in the data center. A key design requirement for air segregation is to maintain a constant pressure differential between the supply and return air paths, so that enough conditioned air is provided to meet the actual server demand.

We set the supply air temperature just below the thermal boundary for the first server fan speed increase (80° F (26° C)). This prevents speed-up of the internal fans, which would use 2 to 10 percent additional uninterruptible power supply (UPS) server power, depending on the number of fans and speed steps programmed into the device's BIOS.

Although the American Society of Heating, Refrigerating, and Air-Conditioning Engineers guidelines allow a supply air temperature of 80.6° F (27° C), we keep our supply air at 78° F (25.5° C) in the cold aisle, measured between six and seven feet above the finished floor.

Intel's data center designs use both hot aisle enclosures (HAEs) and passive ducted cabinet air segregation, also known as "chimney cabinets." Although our CFD modeling shows that hot aisle and cold aisle enclosures provide the same thermal solution, we have experienced greater benefits with the HAE configuration, with a return air temperature design range from 95° F (35° C) to 105° F (41° C), such as:

• Greater heat transfer efficiency across the cooling coils, due to elevated return air temperatures and a greater Delta T.

• Because we have carefully developed our work procedures, these higher temperatures are practical: data center staff enter this space only for new landings or maintenance, so our staff works in an ambient temperature environment most of the time.

• During a mechanical or electrical failure that causes a loss of cooling to the data center, the walls, floor, and cabinets serve as a "thermal battery," releasing stored energy and contributing to thermal ride-through (discussed in the Thermal Management section later in this paper).


• In our data centers that use a raised metal floor (RMF) for supply air delivery, all of the leaked air—0.5 to 2 percent—is available to cool the IT equipment in areas outside the HAE.

• The common building electrical components, such as panels, breakers, and conductors, outside of the HAE are not subjected to the elevated temperatures found in cold aisle enclosure designs and therefore do not need to be evaluated for thermal de-rating.

Air cooling
The density of server racks—and therefore the power used per square foot—has been steadily increasing, as shown in Table 1. Although there is no consensus in the industry that high-density racks can be air cooled effectively, all of our data centers—even those with 30-kW cabinets—are air cooled, and our design roadmap supports 50 kW per rack for new construction.

Table 1. Rack density and watts consumed per square foot^
Year      Rack Density         Watts Per Square Foot
2004      15 kilowatts (kW)    250
2006      22.5 kW              375
Current   30 kW                500
^ Data is based on a 5,000-square-foot room with cooling units and power distribution units (PDUs) in the same space.

15-kW Cabinets
For low-density cabinets up to 15 kW, we use traditional air cooling with the computer room air conditioning (CRAC) or computer room air handler (CRAH) units in the return airstream without any air segregation. This configuration is illustrated in Figure 1.

22.5-kW Cabinets
For 22.5-kW cabinets, we use HAEs with the CRAC unit in the return airstream, as illustrated in Figure 2.

30-kW Cabinets
For 30-kW cabinets, we use a flooded supply air design and passive chimneys—ducted server cabinets without fan assist—as well as HAEs. We have found that passive ducted equipment cabinets are very effective at delivering server air exhaust to a return air plenum (RAP). Passive ducting eliminates the need for fans, which are problematic from a monitoring, maintenance, and replacement perspective. Our design for 30-kW cabinets is shown in Figure 3.
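The watts-per-square-foot values in Table 1 follow directly from the footnote's 5,000-square-foot room. As a quick consistency check (our own arithmetic, not a calculation given in the paper), each row implies roughly the same number of cabinets:

```latex
% Room IT load implied by each Table 1 row (W/ft^2 x 5,000 ft^2), divided by cabinet size:
\frac{250 \times 5{,}000\ \mathrm{W}}{15\ \mathrm{kW}}
= \frac{1{,}250\ \mathrm{kW}}{15\ \mathrm{kW}}
\approx \frac{1{,}875\ \mathrm{kW}}{22.5\ \mathrm{kW}}
\approx \frac{2{,}500\ \mathrm{kW}}{30\ \mathrm{kW}}
\approx 83\ \text{cabinets per room}
```

In other words, the rising watts per square foot reflect more power per cabinet over time rather than more cabinets packed into the room.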

Figure 1. Simply moving the computer room air conditioning, or computer room air handler, units into the return airstream without air segregation provides sufficient cooling for 15-kilowatt cabinets.

Figure 2. To accommodate the greater cooling needs of 22.5-kilowatt (kW) cabinets, we bolster the standard 15-kW design with walls, forming a hot aisle enclosure.
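Figures 1 through 3 correspond to the three cabinet density classes described above. As a compact summary of that guidance only, here is a minimal sketch of the selection logic; the thresholds come from the text, while the function name and return strings are our own illustration, not an Intel design tool:

```python
def cooling_configuration(cabinet_kw: float) -> str:
    """Map a cabinet's power density (kW) to the air-cooling approach
    described in this paper. Illustrative summary only."""
    if cabinet_kw <= 15:
        # Figure 1: CRAC/CRAH units in the return airstream, no air segregation.
        return "traditional air cooling, CRAC/CRAH in the return airstream"
    if cabinet_kw <= 22.5:
        # Figure 2: the 15-kW design plus walls forming a hot aisle enclosure.
        return "hot aisle enclosure (HAE) with CRAC in the return airstream"
    if cabinet_kw <= 30:
        # Figure 3: flooded supply air with passive chimney cabinets or HAEs.
        return "flooded supply air with passive ducted (chimney) cabinets or HAEs"
    # The design roadmap targets up to 50 kW per rack for new construction.
    return "beyond current standard designs; handled under the new-construction roadmap"


if __name__ == "__main__":
    for kw in (12, 22.5, 30, 45):
        print(f"{kw:>5} kW cabinet -> {cooling_configuration(kw)}")
```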


Flooded supply air design
A flooded supply air design, used in conjunction with chimney cabinets or HAEs, eliminates the need for a raised metal floor (RMF) and enables conditioned air to be introduced into the room from any direction at the required volume. An optimum design for a data center without an RMF can use air segregation, an air-side economizer, or a fan-wall configuration.

In this type of design, we keep the supply-air velocity in front of the server and storage devices at or below what we call "terminal velocity," targeted at less than 400 feet per minute (fpm). A supply air velocity greater than 400 fpm can exceed the ability of some IT equipment to draw the air into the device.

Some facilities use an RMF as an access floor for power, cooling, and network infrastructure. This does not prohibit a flooded air design—we simply consider the RMF elevation as the slab height and raise the room air handlers' supply path above the RMF, as shown in Figure 4.

Figure 3. Passive ducted equipment cabinets are very effective at delivering server exhaust to a return air plenum in the ceiling, allowing us to air-cool 30-kW cabinets.

Figure 4. For rooms with a raised metal floor (RMF), we raise the room air handlers' supply path above the RMF to achieve a flooded supply air design used in conjunction with chimney cabinets or hot aisle enclosures.

CRAC – computer room air conditioning; PDU – power distribution unit; RAP – return air plenum
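To make the 400-fpm terminal velocity target concrete, here is a rough sizing calculation of our own (it assumes the 80 to 120 CFM-per-kW range listed in Table 2 later in this paper, using 100 CFM per kW as a round number): a 30-kW cabinet needs on the order of 3,000 CFM, so keeping the approach velocity below 400 fpm implies roughly 7.5 square feet of unobstructed supply path in front of the cabinet.

```latex
Q \approx 30\ \mathrm{kW} \times 100\ \tfrac{\mathrm{CFM}}{\mathrm{kW}} = 3{,}000\ \mathrm{CFM},
\qquad
A \gtrsim \frac{Q}{v_{\mathrm{max}}} = \frac{3{,}000\ \mathrm{CFM}}{400\ \mathrm{fpm}} = 7.5\ \mathrm{ft^2}
```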


Automated Control Sensors
In the past, our room air handler designs used uncontrolled direct-drive fans to meet the air moving and cooling requirements, with control based on return air temperature sensors. However, this was inefficient because Intel data centers rarely operate at 100-percent design load. Typically, data centers operate at about 60 percent of peak load; however, a particular job might push the workload to 90 percent for a few hours, which then requires maximum design cooling during that period.

To improve the energy efficiency of our data centers, we have adopted manufacturing clean room management technology that uses real-time data to supply the appropriate temperature and volume of air for the data center load at any particular time. We have also replaced single-speed, direct-drive fan motors in the air handling systems with variable frequency drives (VFDs) to allow the throttling of cold air during fluctuations in server load and heat generation.

Sensor case study
To gather real-world data, we installed wired and wireless automated temperature, static pressure, humidity, and power monitoring sensors at a 1.2-megawatt production data center that did not have air segregation. These sensors sample the environment every 15 seconds and average the readings over 1 to 6 minutes.

The sensor array includes:
• CRAC and air handling unit (AHU) flow and temperature sensors. We placed flow meters in the chilled water supply and return pipes to determine the actual real-time work done by each unit. We placed two additional temperature sensors in the supply air duct and six sensors in the return air path to identify short cycling and the Delta T across the cooling coil.
• Rack-level temperature sensors. We established rack-level temperature monitoring through multi-elevation supply air sensor readings and from on-board server sensors. The latter provide a more accurate picture of the data center's cooling air distribution. We placed an additional sensor at the top rear of the rack to identify air recirculation from the front or rear of the rack. We used this information to identify uneven distribution, recirculation, or mixing of supply and return airstreams.
• Electrical power sensors. For monitoring and efficiency metrics, we used substation, UPS, and busway meters to identify data center loads and losses. We deployed electrical power sensors throughout the chilled water plant electrical distribution system and individually on the CRAC or AHU units. We also set up a wireless weather station in the chiller equipment yard to record the local conditions and correlate these to chiller efficiency.
• Rack-level power monitoring. These sensors identified what influence the rack density heat load had on cooling in a specific area of the room.

As a result of the case study, we made some design changes. We installed cascade temperature and static pressure controls. We programmed these controls to reset the supply air temperature at the IT device to the highest possible design temperature (78° F (25.5° C)) and to deliver the needed air volume to the room. The controls used direct supply air temperature feedback at the top of the rack, between six and seven feet from the finished floor, and averaged pressure differential feedback from multiple sensors in the supply and return air paths.

Using data from the sensors, the sensor controls adjust the chilled water supply valve and the supply air fans on the room cooling units as needed to meet demand.
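The cascade control behavior described above can be summarized in a short sketch. The following is a minimal, illustrative control loop, not Intel's building-management implementation: the setpoints reflect figures in this paper (78° F supply air at the IT device and the 0.015-inch WC supply-to-RAP differential listed in Table 2 later in this paper), while the gains, limits, and function names are assumptions made for the example.

```python
from statistics import mean

# Illustrative sketch only: rack-top supply air temperature drives the chilled
# water valve, and the averaged supply/return pressure differential drives the
# cooling-unit fan speed (VFD). Gains and limits are invented example values.

TEMP_SETPOINT_F = 78.0     # highest allowed supply air temperature at the IT device
DP_SETPOINT_INWC = 0.015   # positive supply-to-RAP static pressure differential


def control_step(rack_top_temps_f, supply_return_dp_inwc, valve_pos, fan_speed,
                 kp_valve=0.05, kp_fan=20.0):
    """One control iteration; returns the new (valve_pos, fan_speed), each 0..1."""
    temp_error = mean(rack_top_temps_f) - TEMP_SETPOINT_F       # positive: air too warm
    dp_error = DP_SETPOINT_INWC - mean(supply_return_dp_inwc)   # positive: under-supplying air

    # Warmer-than-setpoint supply air opens the chilled water valve further.
    valve_pos = min(1.0, max(0.0, valve_pos + kp_valve * temp_error))
    # A falling pressure differential (not enough supply air) speeds up the fans.
    fan_speed = min(1.0, max(0.2, fan_speed + kp_fan * dp_error))
    return valve_pos, fan_speed


if __name__ == "__main__":
    valve, fan = control_step(
        rack_top_temps_f=[79.2, 78.8, 77.9],     # top-of-rack sensors, degrees F
        supply_return_dp_inwc=[0.011, 0.013],    # multiple differential sensors, inches WC
        valve_pos=0.5, fan_speed=0.6)
    print(f"chilled water valve: {valve:.2f}, supply fan speed: {fan:.2f}")
```

A proportional-only loop like this is deliberately simplified; production building-management controls typically add integral action, rate limiting, and failure interlocks.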
Business Value of Automated Control Sensors
In one five-year-old room with traditional hot and cold aisles and no air segregation, we were oversupplying 164,000 cubic feet per minute (CFM) of air and 120 more tons of cooling than necessary to maintain the room environmentals within the upper operating thermal boundary of 78º F (25º C). With a USD 205,000 investment in HAEs, VFDs, and cascade supply air temperature and pressure differential sensors, we were able to power off six of sixteen 30-ton CRAC units and operate the remaining units at 60-percent fan speed. The AHUs currently provide supply air at 75º F (24º C). This generated an annual power savings of USD 88,000 and captured a stranded cooling capacity of 490 kW, providing a return on investment of less than 2.5 years. Based on the success of this project, we modified seven additional rooms globally in a similar manner.

Thermal Management
After power loss to IT equipment, excess heat is a major contributor to IT equipment downtime because internal thermal controls are programmed to shut down the device. For our high-density data centers, we concentrate on maximizing thermal ride-through and controlling room environmental factors.

Thermal Ride-through
Thermal ride-through is the amount of time available to absorb the heat generated in the data center environment and maintain acceptable equipment temperatures after a loss of cooling. Thermal ride-through is important because as long as devices are operating, they are generating heat without an active means to neutralize it.

It is possible that a data center will continue running for a period of time—using a UPS, batteries, a flywheel, or another power-storage system—even if power to the cooling system fails due to a failure of utility power, a generator, or an automatic transfer switch. To prevent extreme heat buildup, we use various approaches to supply residual cooling. Our thermal ride-through designs extend the cooling capacity beyond the power storage capacity time by five to seven minutes. This removes any residual heat from critical devices before temperatures exceed the physical, logical, or predetermined thermal boundary of any data center equipment.

There is no exact formula for how quickly an environment will reach a given temperature. However, we consider the number of air changes through the equipment, the inlet air temperature to the equipment, and the Delta T across the equipment in determining how fast a room will overheat.
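Although there is no exact formula, a crude lumped estimate shows why ride-through design matters. The sketch below is our own illustration (the load, room size, temperature limit, and the "effective mass" factor standing in for the thermal-battery effect are all invented example values, not Intel figures); it ignores the air-change and equipment Delta T details that a real analysis would include.

```python
# Crude first-order estimate of how quickly a room warms after loss of cooling.
# Illustrative only; the inputs in the example are invented.

AIR_DENSITY_LB_PER_FT3 = 0.075   # standard air
AIR_CP_BTU_PER_LB_F = 0.24       # specific heat of air
BTU_PER_HR_PER_KW = 3412


def minutes_to_limit(it_load_kw, room_volume_ft3, start_temp_f, limit_temp_f,
                     effective_mass_factor=1.0):
    """Minutes until room air reaches limit_temp_f with no heat removal.
    effective_mass_factor > 1 roughly credits passive thermal mass (walls,
    slab, cabinets) beyond the air itself."""
    heat_btu_per_min = it_load_kw * BTU_PER_HR_PER_KW / 60
    capacity_btu_per_f = (room_volume_ft3 * AIR_DENSITY_LB_PER_FT3
                          * AIR_CP_BTU_PER_LB_F * effective_mass_factor)
    rise_f_per_min = heat_btu_per_min / capacity_btu_per_f
    return (limit_temp_f - start_temp_f) / rise_f_per_min


if __name__ == "__main__":
    # Example only: 500 kW of IT load in a 5,000 ft^2 room with a 12 ft ceiling.
    air_only = minutes_to_limit(500, 5000 * 12, 78, 95, effective_mass_factor=1)
    with_mass = minutes_to_limit(500, 5000 * 12, 78, 95, effective_mass_factor=20)
    print(f"air alone:            {air_only:.1f} min")
    print(f"with thermal battery: {with_mass:.1f} min")
```

Room air alone buys less than a minute at this density; the useful minutes of ride-through come from passive thermal mass, stored chilled water, and keeping air and water moving on the mechanical load UPS described below.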
Intel's chilled water thermal ride-through solutions use the existing thermal storage capacity in the form of existing pipe fluid volume and the addition of chilled water storage tanks, if needed, to meet the minimum cooling demand.1

1 See "Thermal Storage System Provides Emergency Data Center Cooling," Intel Corp., September 2007.
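The stored chilled water volume needed for a given ride-through window follows from the heat capacity of water (about 8.34 BTU per gallon per degree Fahrenheit). The example below is a back-of-the-envelope illustration with assumed numbers (a 500-kW heat load, a 12° F usable water temperature rise, and the seven-minute extension mentioned above); it is not a figure from this paper.

```latex
V_{\mathrm{gal}} \;\approx\; \frac{Q_{\mathrm{BTU/min}}\; t_{\mathrm{min}}}{8.34 \times \Delta T_{\mathrm{F}}}
\;=\;
\frac{(500 \times 3{,}412 / 60)\times 7}{8.34\times 12}
\;\approx\; 2{,}000\ \mathrm{gallons}
```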


In these data centers, we place room air handler fans and a chilled water pump on a separate mechanical load UPS to move existing air and water through the room air handler coils. The amount of total heat removed is proportional to the temperature and volume of chilled water available. For direct expansion systems, we install the mechanical load UPS to support the minimum cooling requirements before the UPS batteries are drained or before the start of the generator and compressor cycle times.

In some designs, we use the outside air environment as a supply or exhaust solution during emergency thermal events to increase the total usable room cooling capacity. We exhaust air out of the room, either to a hallway or outside the building through wall or roof vents.

At times, we mix data center and facility motor loads on the UPS systems because we have found this approach can reduce harmonic current content and improve the overall output power factor (PF) of the UPS units. We use VFDs to feed larger motors because they provide in-rush isolation from the other UPS loads. Some motors are served from UPS units without VFDs, but these are usually small in horsepower compared to the UPS capacity. Depending on the size of the motors to be used and the UPS manufacturer, we may consult with the manufacturer to have the UPS properly sized for the motor in-rush current.

Room Environmentals
Whenever possible, we use room-level global integrated control logic instead of standalone device control logic. This allows all the mechanical devices to work together to efficiently maintain the environmentals within the zone of influence of a particular cooling or airflow device. We carefully control the humidity, static pressure, and temperature in our data centers, as shown in Table 2.

Table 2. Room environmental settings used at Intel
Environmental Factor                                    Setting
Humidity                                                20% to 80% for servers
Supply Air Temperature                                  78º F (25º C) at the top of the equipment, cold aisle
Pressure Differential between Supply Air and RAP        0.015 WC positive supply air (equivalent to 3.736 Pa)
CFM per kW                                              80 to 120 CFM
CFM – cubic feet per minute; kW – kilowatt; Pa – Pascal; RAP – return air plenum; WC – water column

For example, typically we keep the relative humidity between 20 and 80 percent for server environments. However, tape drives and tape storage may have different environmental specifications and may require a separate conditioned space.
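The 80 to 120 CFM-per-kW figure in Table 2 can be tied back to the equipment Delta T through the standard sensible-heat approximation for air (a textbook relation, not one stated in this paper); it corresponds to an equipment Delta T of roughly 26° F to 40° F. The pressure entry checks out the same way: 0.015 inches of water column is 0.015 × 249.1 ≈ 3.74 Pa.

```latex
Q_{\mathrm{BTU/hr}} \approx 1.08 \times \mathrm{CFM} \times \Delta T_{\mathrm{F}}
\;\;\Longrightarrow\;\;
\frac{\mathrm{CFM}}{\mathrm{kW}} \approx \frac{3{,}412}{1.08\,\Delta T_{\mathrm{F}}},
\qquad
\Delta T_{\mathrm{F}} = 26\ \text{to}\ 40 \;\Rightarrow\; 80\ \text{to}\ 120\ \tfrac{\mathrm{CFM}}{\mathrm{kW}}
```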
Architectural Considerations
To support and optimize our air and thermal management efforts, our data centers feature several architectural modifications that may not be typical in many data centers.
• Sealed walls, ceiling, and floors. Sealing these surfaces with a vapor barrier enables better humidity control. Concrete and drywall are porous and allow moisture to enter the data center environment.
• Removal of RMFs. We began removing RMFs from our data centers 10 years ago. In addition to enabling better air management, removing RMFs offers the following benefits:
  – More available floor space because there are no ramps or stairs
  – Floor installation and maintenance cost savings
  – Reduced risk from seismic events and simplified seismic bracing
  – Reduced dead-weight floor loading for above-grade rooms
  – Reduced overall room slab-to-deck height requirements
  – Opportunity for a ceiling RAP
  – No need for under-floor leak detection and fire suppression
  – No floor safety issues
  – No electrical bonding of flooring components to the earth ground
Because of the benefits of a no-RMF design, we made this part of our standard data center design three years ago.


• Increased floor loading specifications. Our older data centers are limited to 125 pounds per square foot; the current design standard is now 350 pounds per square foot. The higher floor loading limit enables the floors to support the additional weight of more fully loaded blade server racks and storage arrays—each weighing between 2,000 and 2,500 pounds.
• Integrated performance "shakedown" testing. We test all systems in a way that resembles actual data center events, including using controlled overload conditions, to determine whether controls and procedures are established and can be used to prevent or mitigate loss of data center services. All systems are integrated and tested to the maximum fault capacity and tolerances, per engineered or factory specifications.

Structured Cabling
Departing from industry-standard data center design, for the past 10 years we have used custom-length power and network cables and have eliminated the use of cable management arms on server racks and patch panels in our row-end network access distribution between the switch and the server. As a standard practice, we replace the layer-one networking cabling with each server refresh cycle, allowing for a cable plant upgrade every three to five years. While patch panels and patch cables offer flexibility, they also add four more possible points of failure, along with an approximate doubling of the cost of installation. We have found that network paths rarely change, so the benefits of lower cost and having only two points of failure instead of six far outweigh the flexibility benefit.

Other layer-one network practices we use to minimize risk and total loss of connectivity include the following:
• Network switch physical separation. We add this to the logical network design to prevent loss of redundant switches or routers caused by a local catastrophic event, including accidental fire sprinkler discharge, overhead water from plumbing or building leaks, damage from falling objects, or local rack electrical failure. We use a preferred distance of at least 40 lineal feet (12 meters) between redundant devices.
• Overhead cabling trays. These provide reasonable access and flexibility to run copper and fiber throughout the data center without restricting airflow in older data centers or creating a need for an RMF.
• Armor-covered fiber cable for multiple-strand conductors. We use this arrangement in paths going into or out of the data center room or building to prevent damage from activities that are not controlled by data center operators.
• Cable management. We are careful to dress cables away from server chassis fan outlets.

Electrical Considerations
The data center electrical distribution is critical for an efficient, fault-tolerant room. Our high-density data centers use flexible branch circuit designs, which reduce electrical losses. Some of the ways we make our data centers' electrical systems reliable and cost-efficient include the following:
• Overhead busway power distribution. In 2000, our first installed overhead bus was only 100 amps, and our rack power demand was 3 kW to 5 kW, supporting eight server racks in a redundant configuration. In 2012 designs, the amperage has increased to 800 amps per bus-duct to support higher density server racks.
• UPS derating. The large-scale deployment of refreshed servers that use switching power supply units (PSUs) can trend toward a leading PF. This in turn can cause higher temperatures that negatively affect the UPS output transformer and, depending on the design of the UPS electronics, can create other internal UPS problems. Because of this, we have had to derate some older UPS modules and specific UPS manufacturers' designs. We have seen a leading PF greater than one with new servers in our data centers, accounting for a greater percentage of the attached UPS load. Intel engineers have consulted with UPS manufacturers, determined the correct rating for specific UPS equipment, and updated Intel's design construction standards to reflect these changes. Newer UPS units are designed to handle leading PFs and have resolved the issues generated by the newer PSUs.
• Variable-frequency drives (VFDs). All new fan and pump equipment installations use VFDs, and we retrofit them to existing systems in the field. The building management systems control the VFDs on cooling fans, with inputs from pressure differential sensors.
• Power diversity. Depending on the power configuration, some UPSs can be fully loaded to the manufacturers' design rating as power source one. If the UPS fails, a three-way "break before make" static transfer switch (STS) transfers the power source to utility power as one of our redundant sources of equipment power. Then, when the generator starts, the STS transfers the power source to the generator. When the UPS is restored, the STS transfers back to utility power, then back to the UPS, without having to drop the load to the data center.
• High voltage. We use the highest voltage alternating current (Vac) the equipment accepts. This allows for higher efficiency and more kW per circuit.
• 415/240 Vac power distribution. In 60-Hz countries, we can provide twice as many kW per electrical path—one-half the installation cost per kW compared to conventional 208/120 Vac installations—at the same current (see the worked example at the end of this list). This configuration is compatible with major suppliers of PSUs, which are rated 90 to 250 Vac or 200 to 250 Vac. For existing electrical equipment, we place the 480/415 Vac transformer downstream of the UPS module, generator emergency panel, and UPS bypass breaker. This keeps the different power sources at the same voltage. We achieve an additional benefit by using common worldwide power distribution solutions for the racks in the rooms, allowing 400/230 Vac to be used in Europe and 380/220 Vac in Asia.


• Fully rated electrical components and conductor sizing. For continuous duty, we size and rate our branch circuit breakers, busways, distribution breakers, disconnects, panels, conductors, and rack power strips at the stated value on the component. This provides a significant capacity capability within the distribution design, in addition to providing a safety margin for peak loads. We follow all applicable local electrical codes and rules when specifying and installing electrical components.
Stranded Capacity
Stranded capacity is the available design capacity of the derated installed space, power, or cooling infrastructure minus the actual use, or the capacity of one of these data center elements that is unavailable because of the limitations of another interdependent element. We proactively identify stranded capacity, such as power and cooling capacities, in all areas of the data center. Harvesting stranded capacity can help us avoid having to make expensive changes to the existing infrastructure.
codes and rules when specifying and Harvesting Stranded
proof of concept was successful. In
installing electrical components. chilled water capacity
its first phase, the HRS in Russia is
The chiller plant may have the required
helping Intel save about USD 50,000
Stranded Capacity capacity for the data center’s cooling
annually. The IT data center team is
Stranded capacity is the available design requirements but limiting factors, such as
now working on the second phase of
capacity of the derated installed space, power, an undersized pipe or pump, may contribute
the HRS project, which will use the full
or cooling infrastructure minus the actual use to stranded chilled water capacity, or the
amount of heat generated by the data
or the unavailable capacity of one of these installed CRAC capacity may be less than the
center servers. We anticipate this will
data center elements due to the limitations capacity of the chiller or line size. If the supply
save Intel about USD 300,000 per year,
of another interdependent element. We and return water temperature Delta T is less
or up to USD 1 million over a three-year
proactively identify stranded capacity, such than the chiller equipment design, the chillers
period. In a larger data center, such a
as power and cooling capacities, in all areas of true capacity is not available. We look at these
system has the potential to generate
the data center. Harvesting stranded capacity types of factors to determine whether any
even more savings.
can help us avoid having to make expensive chilled water capacity is being wasted and
changes to the existing infrastructure. then modify installations accordingly.

Harvesting Stranded UPS Capacity
Electrical load phase imbalance at several components can limit or strand the total conditioned electrical capacity available to data center devices. These components include the UPS, distribution breakers, room PDUs, and rack-level circuit breakers. We attempt to maintain a phase value difference of 5 to 15 percent between phases, as our experience has shown that this range provides an efficient use of conditioned power and prevents circuit breaker trips or UPS by-pass events.
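The 5 to 15 percent phase-balance target can be tracked with a simple calculation. Several imbalance definitions exist; the sketch below uses the maximum deviation from the three-phase average, which is our illustrative choice rather than a metric specified in this paper, and the example currents are invented.

```python
# Illustrative phase-imbalance check: maximum deviation of any phase from the
# three-phase average, as a percentage of that average. The 15 percent flag
# reflects the upper end of the range discussed in the text.

def phase_imbalance_pct(amps_a: float, amps_b: float, amps_c: float) -> float:
    phases = (amps_a, amps_b, amps_c)
    average = sum(phases) / 3
    return max(abs(p - average) for p in phases) / average * 100


if __name__ == "__main__":
    for loads in [(120, 108, 130), (140, 95, 110)]:
        pct = phase_imbalance_pct(*loads)
        action = "rebalance branch circuits" if pct > 15 else "within the working range"
        print(f"phase currents {loads}: imbalance {pct:4.1f}% -> {action}")
```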
These same components, if undersized for the load, can also limit or strand downstream electrical capacity to the data center devices. We carefully size these components to provide all the load the system was intended to deliver, with additional capacity for temporary downstream load changes.

We also carefully construct our redundancy topology so that it is in balance with the UPS, distribution board, room PDUs, branch circuits, and rack power strips. The goal is that no single component or circuit limits the capacity of the overall electrical design during a failover event. This is important because failover events typically put additional load on the system during a power transfer.

Conclusion
As we continue to consolidate our data centers, proactively refresh servers and storage with newer systems based on the latest generation of Intel Xeon processors, and further increase the virtualization of our Office and Enterprise applications for our enterprise private cloud, the density of our data centers is steadily increasing.

We focus on several areas to optimize the efficiency and reliability of Intel's high-density data centers. These areas include air and thermal management, architectural and electrical considerations, and the harvesting of stranded cooling and power capacity.

Focusing on these design areas enables us to increase the room infrastructure capacity—thereby enabling higher densities and reducing operating cost while still maintaining the levels of redundancy required by our business models.

For more information on Intel IT best practices, visit www.intel.com/it.

Contributors
Ajay Chandramouly, Industry Engagement Manager, Intel IT
ST Koo, Data Center Architect, Intel IT
Ravindranath D. Madras, Data Center Enterprise Architect, Intel IT
Marissa Mena, Data Center Architect, Intel IT
Isaac Priestly, Strategic Financial Analyst, Intel IT

Acronyms
AHU      air handling unit
CFD      computational fluid dynamics
CFM      cubic feet per minute
CRAC     computer room air conditioning
CRAH     computer room air handler
Delta T  change in temperature
fpm      feet per minute
HAE      hot aisle enclosure
HRS      heat recycling system
kW       kilowatt
PDU      power distribution unit
PF       power factor
PSU      power supply unit
RAP      return air plenum
RMF      raised metal floor
STS      static transfer switch
UPS      uninterruptible power supply
Vac      voltage alternating current
VFD      variable frequency drive

This paper is for informational purposes only. THIS DOCUMENT IS PROVIDED “AS IS” WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY OF
MERCHANTABILITY, NONINFRINGEMENT, FITNESS FOR ANY PARTICULAR PURPOSE, OR ANY WARRANTY OTHERWISE ARISING OUT OF ANY PROPOSAL,
SPECIFICATION OR SAMPLE. Intel disclaims all liability, including liability for infringement of any patent, copyright, or other intellectual property rights, relating to use of
information in this specification. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted herein.
Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. * Other names and brands may be claimed as the property of others.
Copyright © 2012 Intel Corporation. All rights reserved. Printed in USA Please Recycle 0112/ACHA/KC/PDF 326180-001US
