
FEDERAL ENERGY MANAGEMENT PROGRAM

Data Center Rack Cooling with Rear-door Heat Exchanger Technology
Case-Study Bulletin

Figure 1: Passive Rear Door Heat Exchanger devices at LBNL

As data center energy densities, in power use per square foot, increase, energy savings for cooling can be realized by incorporating liquid-cooling devices instead of increasing airflow volume. This is especially important in a data center with a typical under-floor cooling system: an airflow-capacity limit will eventually be reached that is constrained, in part, by under-floor dimensions and obstructions.

1 Introduction
Liquid-cooling devices were installed on server racks in a data center at Lawrence Berkeley National Laboratory (LBNL), as shown in Figure 1. The passive-technology device removes heat generated by the servers from the airflow leaving the server rack. This heat is usually transferred to cooling water circulated from a central chiller plant. However, at LBNL, the devices are connected to a treated-water system that rejects the heat directly to a cooling tower through a plate-and-frame heat exchanger, thus nearly eliminating chiller energy use to cool the associated servers. In addition to cooling with passive heat exchangers, similar results can be achieved with fan-assisted rear-door heat exchangers and refrigerant-cooled rear-door exchangers. Server racks can also be cooled with competing technologies such as modular overhead coolers, in-row coolers, and close-coupled coolers with dedicated containment enclosures.

2 Technology Overview
The rear-door heat exchanger (RDHx) devices reviewed in this case study are referred to as passive devices because they have no moving parts; however, they do require cooling water flow. A passive-style RDHx contributes to optimizing energy efficiency in a data center facility in several ways. First, once the device is installed, it does not directly require infrastructure electrical energy to operate. Second, RDHx devices can use less chiller energy since they perform well at warmer (higher) chilled-water set-points. Third, depending on climate and piping arrangements, RDHx devices can eliminate chiller energy entirely because they can use treated water from a plate-and-frame heat exchanger connected to a cooling tower. These inherent features of a RDHx help reduce energy use while minimizing maintenance costs.

2.1 Basic operation
The RDHx device, which resembles an automobile radiator, is placed in the airflow outlet of a server rack. During operation, hot server-rack airflow is forced through the RDHx device by the server fans. Heat is exchanged from the hot air to circulating water from a chiller or cooling tower. Thus, server-rack outlet air temperature is reduced before it is discharged into the data center (a simple model of this exchange is sketched after Section 2.2).

2.2 Technology Benefits
RDHx cooling devices can save energy and increase operational reliability in data centers because of straightforward installation, simple operation, and low maintenance. These features, combined with compressorless, indirect evaporative cooling, make RDHx a viable technology in both new and retrofit data center designs. It may also help eliminate the complexity and cost of under-floor air distribution.

Reduce Maintenance
Because passive RDHx devices have no moving parts, they require less maintenance compared to computer room air conditioning (CRAC) units. RDHx devices will require occasional cleaning of dust and lint from the air side of the coils. RDHx performance also depends on proper water-side maintenance.

Reduce or Eliminate Chiller Operation
RDHx devices present an opportunity to save energy by either reducing or eliminating chiller operation. Because RDHx devices perform well at warmer chilled-water set-points, they are typically more energy efficient than CRAC units. Potentially, a data center could eliminate chiller use completely by having the RDHx device reject heat using indirect evaporative cooling in a cooling tower.
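As a rough illustration of the air-to-water exchange described in Section 2.1, the following sketch models the door as a simple heat exchanger with a fixed effectiveness. The 0.8 effectiveness value is an illustrative assumption, not a measured property of the LBNL devices.

```python
# Simple effectiveness model of a passive RDHx: the door can cool the
# hot server exhaust at most down to the entering water temperature,
# and a real coil achieves some fraction (the "effectiveness") of that.
# EFFECTIVENESS = 0.8 is an assumed value for illustration only.

EFFECTIVENESS = 0.8

def rdhx_leaving_air_f(air_in_f: float, water_in_f: float) -> float:
    """Leaving-air temperature for given entering air and water temps (in deg F)."""
    return air_in_f - EFFECTIVENESS * (air_in_f - water_in_f)

# With ~110 F server exhaust and the 72 F tower-side water used at LBNL,
# the model lands near the ~80 F leaving-air temperature reported in the
# "Cooling Capacity at LBNL" sidebar later in this bulletin.
print(rdhx_leaving_air_f(110.0, 72.0))  # -> 79.6
```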


Figure 2: RDHx hose connections

Figure 3: Inside rack RDHx, open 90°

2.3 Infrastructure requirements
At LBNL, the RDHx devices were connected to a treated-water system, in turn connected to a cooling tower, by routing new pipes through the existing under-floor space. Overhead connections to an RDHx are also available from the manufacturer. The RDHx devices were plumbed with flexible hoses that included quick-disconnect fittings. These fittings allow the devices to swing open, or be removed, during server maintenance and upgrades; see Figures 2 and 3.

2.4 Capacity sizing
Cooling capacity is achieved by setting a flow rate with respect to the available coolant temperatures. Coolant flow rates can range from 4 gallons per minute (GPM) per door to over 15 GPM per door. Server outlet air temperatures can be reduced by anywhere from 10°F (5.5°C) to 35°F (19.4°C), depending on coolant flow rate, coolant temperature, and server outlet temperature. Discharge temperature from each RDHx to the data center also depends on the servers' workload, which can vary continuously. Consequently, discharge air temperatures can be higher, or lower, than the desired server inlet air temperature for the data center. Therefore, the RDHx system should be commissioned to accommodate this variability by modulating cooling water flow rate or temperature. In the case of higher RDHx discharge temperatures, a central cooling device, such as a CRAC unit, can compensate. Importantly, the case study demonstrated the necessity of a comprehensive energy monitoring system to optimize performance and energy savings from the RDHx system. LBNL employed a wireless monitoring system to maximize energy savings.

3 Implementation
The installation of a RDHx is less complicated than installing a CRAC unit. However, prior to installing any additional cooling device, operators of data centers using under-floor air distribution should consider and implement appropriate energy-efficiency measures, including:
➢ Scrutinizing floor tile arrangements and server blanking.
➢ Increasing the data center setpoint temperature.
➢ Optimizing control coordination by installing an energy monitoring and control system (EMCS).
➢ Installing hot-aisle or cold-aisle isolation systems.

All of these basic measures will contribute to increasing the cooling capacity of existing data center systems and may help avoid the complexity of installing new, additional cooling capacity.

Cooling Capacity at LBNL
➢ Inlet server air temperature = 70°F (21°C); outlet server air temperature = 100°F to 120°F (37.8°C to 48.9°C)
➢ Leaving air temperature from RDHx = ~80°F (26.7°C)
➢ Supply water temperature to RDHx = 72°F (22°C); outlet RDHx water temperature = 76°F (24.4°C)
➢ RDHx water flow rate per door = 9 GPM (34 LPM); total flow for six doors = 54 GPM (204 LPM)
➢ Heat removed: Q_RDHx = 500 × 54 GPM × (76°F − 72°F) = 108,000 BTUH, or 9 tons
➢ Server load = 10 to 11 kW/rack × 6 racks = 66 kW
➢ Percent cooling load provided by RDHx: (9 tons × 3.51 kW/ton [conversion constant]) / 66 kW = 31.6 kW / 66 kW = ~48% of server waste heat removed by the RDHx system

Cost at LBNL
The RDHx devices cost $6,000 per device, plus installation and infrastructure additions.
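The sidebar arithmetic above can be checked with a few lines of Python. The 500 BTUH per GPM per °F water-side constant and the 12,000 BTUH per ton conversion are the standard values the sidebar already uses.

```python
# Reproduce the "Cooling Capacity at LBNL" sidebar calculation.
GPM_TOTAL = 54           # total coolant flow for six doors, GPM
DT_WATER_F = 76 - 72     # water temperature rise across the doors, deg F
SERVER_LOAD_KW = 66      # six racks at 10-11 kW each

q_btuh = 500 * GPM_TOTAL * DT_WATER_F   # 500 BTUH per GPM per deg F
q_tons = q_btuh / 12_000                # 12,000 BTUH per ton
q_kw = q_tons * 3.51                    # sidebar's kW-per-ton constant
fraction = q_kw / SERVER_LOAD_KW

print(f"{q_btuh:,} BTUH = {q_tons:.0f} tons = {q_kw:.1f} kW")
print(f"RDHx removes ~{fraction:.0%} of the server load")
# -> 108,000 BTUH = 9 tons = 31.6 kW; ~48% of the server load
```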

3.1 Site preparation and installation
• Determine chiller and cooling tower system capacity.
• Locate and determine access and connections to existing chilled-water, treated-water, and tower-water systems.
• Examine server racks for missing blanking and side plates.
• Remove or relocate obstructions to routing new piping under the floor.
• Install new circulating pump(s), flow-balancing valves, and fittings, as necessary.
• Route piping and flexible hose.
• Install isolation and balancing valves.
• Prepare servers for possible down-time.
• Add new temperature sensors.
• Install the heat exchanger door; check flexible pipe clearances.
• Purge and pressure test the system for leaks.

3.2 Commissioning
LBNL encountered a variety of minor initial start-up issues related to the fit and finish of the RDHx devices, such as air leaking around the RDHx devices and air short-circuiting within the racks. During startup, it is essential to monitor all liquid and air temperatures and coolant flow rates to optimize RDHx performance. It is recommended that the following be commissioned as part of any RDHx installation:
➢ Confirm point-to-point connections of temperature sensors.
➢ Ensure airflow leakage around the RDHx and recirculation within the rack are minimized.
➢ Verify server airflow temperature at inlet and outlet (before the RDHx).
➢ Check air temperature at the RDHx outlet.
➢ Measure RDHx coolant flow and inlet and outlet temperatures.
➢ Confirm the inlet coolant temperature at each RDHx is above the dew-point temperature (see the sketch after this list).
➢ Check for leaks at the pump and in piping arrangements.
➢ Review and test new control sequences.
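The dew-point item above is straightforward to automate. This sketch uses the Magnus approximation for dew point; the 70°F / 50% RH room condition and the 2°F safety margin are illustrative assumptions, not values from the bulletin.

```python
import math

def dew_point_f(dry_bulb_f: float, rel_humidity_pct: float) -> float:
    """Dew point via the Magnus approximation (good to roughly 0.5 deg F here)."""
    t_c = (dry_bulb_f - 32.0) / 1.8
    a, b = 17.62, 243.12  # Magnus coefficients over liquid water
    gamma = math.log(rel_humidity_pct / 100.0) + a * t_c / (b + t_c)
    dp_c = b * gamma / (a - gamma)
    return dp_c * 1.8 + 32.0

MARGIN_F = 2.0                 # assumed safety margin above dew point
supply_water_f = 72.0          # LBNL tower-side supply water
dp = dew_point_f(70.0, 50.0)   # assumed room condition
assert supply_water_f > dp + MARGIN_F, "condensation risk on RDHx coil"
print(f"dew point = {dp:.1f} F; {supply_water_f:.0f} F supply water stays dry")
```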
4 Lessons Learned
This demonstration project at LBNL provided lessons learned that may be relevant to other RDHx projects. One unexpected result was the amount of excess cooling capacity created within the data center. The newly found capacity required coordinating RDHx cooling capacity with the existing CRAC units. Standard CRAC return-air control methods, which are beyond the scope of this bulletin, can present a challenging situation.

4.1 Airflow management
Airflow from under-floor outlets needed to be directed correctly. The location and adjustment of the supply outlets, such as adjustable perforated tiles, were optimized prior to fitting the RDHx devices. Air leaks around the rear-door heat exchangers required sealing, since the doors did not always fit tightly. Additionally, LBNL found that RDHx devices were not well suited to all rack designs, especially racks older than 10 years. The potential for hot-air short-circuiting within the racks is widespread. To limit this situation, LBNL installed server rack blanking-plates and side-panels. In addition, LBNL used brush-type seals around the RDHx hoses to mitigate this air-leak pathway.

4.2 Monitoring performance: a necessity
An EMCS, building automation system (BAS), or other monitoring system should be used to gather air temperatures and to develop trend information prior to installing a RDHx system. Generally, in a data center, an EMCS with centralized, automated, direct digital control can coordinate energy use of cooling systems such as CRAC units, thus maximizing performance of the rack-mounted RDHx devices.

4.3 Metering for energy management
The LBNL demonstration project clearly validated the old energy-use axiom that you cannot manage energy without monitoring energy. Adequate metering and monitoring are essential to provide reliable energy-use data to sustain the performance of the RDHx installation.
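As one way to act on Sections 4.2 and 4.3, the sketch below trends water-side heat removal and flags degradation. The sample interval, the 31.6 kW baseline (taken from the sidebar), and the 20% alarm threshold are illustrative choices; a real implementation would read its inputs from the site's EMCS or BAS.

```python
from collections import deque

WINDOW = deque(maxlen=96)   # e.g., 24 hours of 15-minute samples
BASELINE_KW = 31.6          # commissioning benchmark from the sidebar

def log_sample(gpm: float, water_in_f: float, water_out_f: float) -> None:
    """Log one water-side measurement and alarm on sustained degradation."""
    kw = 500.0 * gpm * (water_out_f - water_in_f) / 3412.0  # BTUH -> kW
    WINDOW.append(kw)
    avg = sum(WINDOW) / len(WINDOW)
    if avg < 0.8 * BASELINE_KW:  # assumed 20% degradation threshold
        print(f"ALERT: trend average {avg:.1f} kW is below baseline")

log_sample(54.0, 72.0, 76.0)  # six-door totals from the sidebar -> ~31.7 kW
```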
4.4 Rethinking rack arrangements
RDHx technology may make creating what is usually referred to as "hot-aisle isolation" less important. Using a RDHx can sufficiently reduce server outlet temperatures to the point where hot and cold aisles are no longer relevant. In a new data center design, adjoining server racks may not need to have their outlet airflows facing each other. A series airflow arrangement, where air from a rack outlet supplies an adjoining rack's inlet, can be implemented. Depending on the rack arrangement in an existing data center, an RDHx system can simplify air management by eliminating the need for hot- and cold-aisle isolation.

5 References
Hydeman, M. "Lawrence Berkeley National Laboratory (LBNL) Case Study of Building 50B, Room 1275 Data Center from 2007 to 2009." Taylor Engineering, LLC. December 2009.

6 Acknowledgements
Primary Author:
Geoffrey C. Bell, P.E.
Lawrence Berkeley National Laboratory
One Cyclotron Road
M.S. 90-3111
Berkeley, CA 94720
Voice: 510.486.4626
e-mail: gcbell@lbl.gov

Reviewers and Contributors:

Paul Mathew, Ph.D.
Lawrence Berkeley National Laboratory
One Cyclotron Road
M.S. 90-3111
Berkeley, CA 94720
Voice: 510.486.5116
e-mail: pamathew@lbl.gov

William Tschudi, P.E.
Lawrence Berkeley National Laboratory
One Cyclotron Road
M.S. 90-3111
Berkeley, CA 94720
Voice: 510.495.2417
e-mail: wftschudi@lbl.gov

Steve Greenberg, P.E.
Lawrence Berkeley National Laboratory
One Cyclotron Road
M.S. 90-3111
Berkeley, CA 94720
Voice: 510.486.6971
e-mail: segreenberg@lbl.gov

For more information on FEMP:
Will Lintner, P.E.
Federal Energy Management Program
U.S. Department of Energy
1000 Independence Ave., S.W.
Washington, D.C. 20585-0121
202.586.3120
william.lintner@ee.doe.gov

EERE Information Center
1-877-EERE-INF (1-877-337-3463)
www.eere.energy.gov/informationcenter

Prepared at the Lawrence Berkeley National Laboratory, a DOE national laboratory.

June 2010

Printed with a renewable-source ink on paper containing at least 50% wastepaper, including 10% post consumer waste.
