
Renewable and Sustainable Energy Reviews 43 (2015) 981–996


Advances in data center thermal management


Yogesh Fulpagare, Atul Bhargav *
Energy Systems Laboratory, Indian Institute of Technology Gandhinagar, VGEC Campus, Chandkheda, Ahmedabad, GJ 382424, India

Article history: Received 18 March 2014; Received in revised form 2 November 2014; Accepted 8 November 2014; Available online 12 December 2014

Keywords: Data center; Thermal management; Heat dissipation; Performance metrics

Abstract

With the increase in electronic traffic, the heat generated by electronic equipment and the concomitant costs of powering cooling systems in electronic data centers are increasing continually. Various research groups working in academic institutions, research laboratories and industries have been applying a variety of tools to study and improve the performance of data centers. This paper reviews recent contributions to the state of knowledge in data center thermal management. First, we review numerical and experimental studies on the most common air-cooled raised floor plenum type data centers. We then summarize published research on rack layout, efficiency and performance metrics for data centers, dynamic control and life cycle analyses, and validation of numerical models. Finally, we review some recently proposed cooling strategies and numerical optimization efforts. We find that the research carried out on thermal management of data centers has helped improve performance in many instances (such as the rear door water cooled heat exchanger type rack), and has helped establish some physics-based criteria for data center designers. Based on the trends observed in this review, we expect further improvements to all aspects of data center design and operation in the near future, with a focus on real-time measurement & control, model validation and heuristics based optimization. In addition, some significant changes such as thermal energy storage and smart-grid capabilities are also expected to be incorporated into data center control strategies.

© 2014 Elsevier Ltd. All rights reserved.

Contents

1. Introduction
   1.1. Raised floor plenum (RFP) models
   1.2. Rack layout with thermal analysis and power distributions
   1.3. Energy efficiency and thermal performance metrics
   1.4. Data center dynamics: control and lifecycle analysis
   1.5. Model validation
        1.5.1. Compact modeling
   1.6. Data center cooling strategies
        1.6.1. Water cooling
   1.7. Programming based optimization of data center
2. Summary
Appendix A
References

* Corresponding author. Tel.: +91 814 030 7813; fax: +91 792 397 2324.
E-mail addresses: fulpagare_yogesh@iitgn.ac.in (Y. Fulpagare), atul.bhargav@iitgn.ac.in (A. Bhargav).
http://dx.doi.org/10.1016/j.rser.2014.11.056
Nomenclature

ASHRAE  American Society of Heating, Refrigerating, and Air-Conditioning Engineers
c  specific heat, kJ/kg K
CFD  Computational Fluid Dynamics
CFM  Cubic Foot per Minute
CRAC  Computer Room Air Conditioning
DC  data center
DCN  Data Center Networks
DRHx  Dry Cooler with Rack Heat Exchanger
EASE  Evaporative Air Side Economizer
EDRHx  Evaporative Dry Cooler with Rack Heat Exchanger
EPA  Environmental Protection Agency
ERE  Energy Reuse Effectiveness
GA  Genetic Algorithm
HT  heat transfer
HTS  Highest Thermostat Setting
HVAC  Heating Ventilation and Air-Conditioning
IASE  Indirect Air Side Economizer
ILP  Integer Linear Programming
IT  information technology
LPM  Liter Per Minute
ṁ  mass flow rate, kg/s
MMT  Mobile Measurement Technology
NN  neural network
OH  overhead supply and returns
P_Cooling  power spent on cooling devices, kW
P_DataCenter  total power consumption of a data center, kW
P_IT  power spent on computing, storage, network equipment, kW
P_reuse  reused power, kW
PDU  Power Distribution Unit
PIV  Particle Image Velocimetry
POD  Proper Orthogonal Decomposition
PTFRD  Perforated Tile Flow-Rate Distribution
PUE  Power Usage Effectiveness
Q̇  heat removal rate by coolant, kW
RFP  raised floor plenum
SHI  Supply Heat Index
SLA  service level agreement
STD  raised floor plenum supply
T_in  inlet temperature, K
T_out  outlet temperature, K
TEC  thermoelectric coolers
UPS  Uninterrupted Power Supply

1. Introduction

Most large and medium business enterprises depend on some kind of data center usage, in addition to millions of individual users who access online content on the world wide web [1]. The most important requirement for a data center is uninterrupted, zero downtime operation. An interruption caused by equipment
failure would entail costly repairs and replacement. However, even more serious is the prohibitive cost of business interruption. Therefore, for uninterrupted operation, reliability of power and electronic cooling become crucial. During the last ten years, increasing demand for computer resources has led to significant growth in the number of data center servers, along with an estimated doubling in the energy used by these servers and the power and cooling infrastructure that supports them [2]. The increase in energy use includes increased energy costs for business and government, increased emissions, including greenhouse gases, from electricity generation, increased strain on the existing power grid to meet the increased electricity demand, and increased capital costs for expansion of data center capacity and construction of new data centers. According to 2006 USEPA estimates, about 1.5% of the total United States' electricity consumption (roughly 61 billion kWh) was attributed to data centers, an increase from previous estimates [2]. To put this usage in a global perspective, this electricity consumption equaled Congo's electricity consumption in that year [3]. This goes on to show the importance of conservation and management of energy usage in data centers.

In 2011, ASHRAE developed a new set of guidelines [4] that helped categorize data centers (Fig. 1), thereby providing data center operators an opportunity to optimize energy efficiency and reliability to suit their particular business needs. High construction costs have forced data centers to maximize floor space utilization, with the use of higher power density racks. Several heat dissipation challenges result from this trend of packing higher computing power into available space. A large number of recent investigations have used numerical, experimental and CFD analysis on data center thermal performance. A comprehensive review of these investigations conducted over the past decade is presented in the subsequent sections.

The various modeling efforts have ranged from individual component modeling to rack and power layouts, and can be classified into the following main categories:

1. Raised floor plenum (RFP) airflow modeling
2. Rack layout with thermal analysis and power distribution
3. Energy efficiency and thermal performance metrics
4. Data center dynamics: control and lifecycle analysis
5. Model validation
6. Data center cooling strategies
7. Programming based optimization of data center

Fig. 1. ASHRAE environmental classes for data centers (Adopted from [4]).

Previously, Bash et al. [5] focused on the immediate and long term research needs for efficient thermal management of data centers. Schmidt et al. [6] and Rambo et al. [7] had reviewed the literature dealing with various aspects of cooling computer and telecommunications equipment rooms of data centers to explore factors that affect cooling system design, and to improve or develop new configurations for electronic cooling, thereby improving equipment reliability. In this paper, we have retained the categories mentioned in this work, while making conscious attempts to divide the literature so as to minimize overlap between categories.

1.1. Raised floor plenum (RFP) models

Airflow schematics of the very commonly used standard raised floor plenum and room return cooling scheme are shown in Fig. 2.
Fig. 2. Alternative cooling schemes employing (a) raised floor plenum (RFP) supply and standard return (SR), (b) RFP supply and overhead (OH) return, and (c) OH supply
with room return (RR) and (d) OH supply and return (Adopted from [7]).

The hot-aisle cold-aisle approach has gained wide acceptability in data center layouts, making this rack configuration possibly the only common feature between data centers. Hence this layout is a good candidate case for validation studies, since the hot and cold aisle flow patterns strongly influence the overall thermal performance of a data center.

The majority of experimental results are from RFP investigations that used flow rate measurements. Comparison with measurements by Schmidt et al. [8] and VanGilder et al. [9] showed that CFD predictions have an average (root mean square) error of at least 10%, with specific locations exhibiting more than 100% error in some cases. A solution that matches the observed boundary conditions (perforated tile flow rates) does not guarantee that the solution over the domain is correct or even unique. With perforated tile flow rate prediction errors of about 10%, the error associated with the velocity field in the RFP may be significantly larger, rendering the physical arguments used to develop RFP design guidelines questionable. Therefore, the boundary conditions used to model computer room air conditioning (CRAC) unit blowers supplying the RFP deserve further investigation. Thus there is a need to model the CRAC unit in detail, so as to provide the flow direction and distribution among the exhausts of the CRAC unit; this level of detail seems to be missing from the reported literature.

Predicting the perforated tile flow rate distribution (PTFRD) is of great concern for a majority of RFP data centers. RFP data centers use 0.61 m × 0.61 m (2 ft × 2 ft) tiles almost universally to form the raised floor, with plenum depths varying between 0.305 m (12 in) and 0.914 m (36 in) across facilities. The RFP is commonly used to route the various power and data cables as well as chilled water lines for the CRAC units, which can create substantial blockages and severely alter the PTFRD. Patankar studied in detail the effect of various parameters on the airflow inside the RFP data center and its effect on the cooling. The parameters discussed included the height of the raised floor, location of CRAC units, under-floor obstructions, etc. He also suggested ways of controlling the air flow using techniques like partitions, drop ceilings, ducted racks, etc. and evaluated these using CFD [1].

Modeling of obstructions in the RFP is one of the important issues that need further attention. Due to the complex and swirling nature of flows in the RFP, detailed CFD studies become imperative. These studies will be useful to predict airflow patterns in the RFP. However, one of the remaining challenges is an appropriate model for obstructions.

The effect of mal-distribution of airflow exiting the perforated tiles located adjacent to the fronts of the racks on rack inlet air temperatures was quantified using the CFD tool TileFlow. Both raised floor heights and perforated tile free areas were varied in order to explore the effect on rack inlet temperatures. This series of computational studies provided some guidance on the design and layout of data centers [9,10].

The effect of under-floor blockages on data center performance was discussed by Bhopte et al. [11]. The tile flow rate and rack inlet temperatures were significantly affected due to the inclusion of blockages in the model. To improve the thermal performance, broad guidelines were set up for data center operators.

Samadiani et al. [12] showed the need to modify the pressure loss coefficients across the perforated tiles to achieve better agreement between simulations and experiments. They also concluded that better numerical modeling is necessary to achieve accurate results, by quantifying the effect of modeling the computer room and the CRAC units on the PTFRD. Changes from about 5% to 135% were observed for various changes in the model.

With the advent of high density racks, the study of overall air distribution inside the data center room does not give a complete picture. Therefore Kumar and Joshi [13] experimentally investigated the effect of tile air flow rate on the server air distribution at various locations in the rack using the PIV technique. Lower flow rates showed normal velocities at the rack inlet face, resulting in a relatively uniform air distribution pattern. Higher tile air flow rates led to entrainment and recirculation effects, with most of the cold air escaping from the top of the cold aisle at these flow rates.
This study shows that increasing CRAC flow rates did not essentially mean higher cooling, which is a very counter intuitive result and has led to further studies in this regard.

Ibrahim et al. [14] numerically modeled a data center with time dependent power and cooling air supply conditions. Results showed that the time varying nature of power and CRAC supply had a large impact on the average rack inlet temperatures, and hence a transient model was proposed to reflect the actual behavior of the data center. Most studies model tiles with a constant velocity assumption. While this assumption accurately models mass flux, it is at the cost of momentum flux, and entails an error as large as 300% in the value of momentum flux. Abdelmaksoud et al. [15] worked on this and carried out combined experimental and computational studies to model the tile opening and geometry to better predict the mixing process of the air flow from the tile with the ambient air.

Gondipalli et al. [16] presented transient numerical modeling of data center clusters using the commercial CFD code Flotherm. Time-dependent variations of power and flow rate were presented. A pyramid shaped transient profile for varying power was used to establish spatial- and time-grid independent temperature and flow distributions. It was shown that although larger step sizes reduce computational time, they result in reduced accuracy. A comparison was made between their transient model and the steady state model, which demonstrated the accuracy of the transient model with a case study in which a CRAC failed. Iyengar et al. [17] studied methodologies to reduce the energy usage for cooling in data centers by cutting off the supply to CRAC units, while ensuring the cooling of IT equipment. They also evaluated the reduction of energy usage by ramping down the motor speed of the CRAC units and increasing the refrigeration chiller plant temperature set point. The second technique was found to be the most effective, saving up to 13% of the IT load.

In order to overcome recirculation effects, Gondipalli et al. studied the effect of isolating hot and cold aisles [18]. Roofs, doors and a combination of them were used for isolation in three different cases and their effect on rack inlet temperatures was monitored using CFD simulations. Based on these studies, it was concluded that using a combination of roofs and doors one can effectively decrease the rack inlet temperature by 40% without changing the room layout or the supply.

Karki et al. [19] used a 1-D model to predict lateral flow distributions for two possible arrangements of CRAC units, and compared these with those given by a 3-D model. These sets of results were in excellent agreement and the case study showed that the one-dimensional model can be an effective design tool, especially for obtaining qualitative trends and for evaluating alternative designs.

Bedekar et al. [20] used commercial CFD software to study the effect of CRAC location for a fixed rack layout in a data center. Three cases with varying flow rates (6000, 8000 and 10,000 CFM) were studied. Results showed that the performance of a data center is strongly affected by the location and flow rate of the CRAC.

VanGilder et al. [21] studied 240 CFD models based on actual data center floor plans. The investigators varied floor plan, perforated tile type (% open area), airflow leakage rate and plenum depth. Based on these studies, design recommendations were made with respect to maximizing perforated tile airflow uniformity.

Leakage flows, the air flow through the seams between the panels of the raised floor, lead to an increase in the total cooling air requirement, but the quantification of this effect was missing. Radmehr et al. [22] outlined a procedure to measure leakage flow with experimental validation. The leakage area was also calculated in this study. Such studies can lead to better design of data centers with improved air flow distribution.

A lot of studies have focused on the effect of various above floor and under floor parameters on rack inlet temperatures with an assumption that a uniform air flow distribution exits from the perforated tiles [23–29]. Schmidt et al. [10] studied the effect of non-uniform distribution of airflow exiting the perforated tiles on the rack inlet temperatures. They used the same model as in [9] to investigate the effect of PTFRD on uniformly dissipating racks. The PTFRD was varied by altering the perforated tile open area and RFP depth, resulting in more uniform rack inlet temperature in the vertical direction for specific combinations of tile open area, RFP depth and rack power dissipation. The effect of the layout of perforated tiles was investigated by Schmidt et al. [8] with a series of experimental studies. This study is a seminal work in experimental measurement of how air flow is influenced by parameters like the layout of perforated tiles, and the number of CRAC units and their location. A CFD model was also proposed and compared with the experimental data. A more detailed summary of numerical models from the literature is given in Appendix A.

1.2. Rack layout with thermal analysis and power distributions

Layout studies are important for achieving optimal dimensions, including hot and cold aisle spacing and facility ceiling height, parameters that affect RFP data center cooling system thermal efficiency. Layout is also important in order to avoid recirculation and short-circuiting, that is, cool air returning to the CRAC before passing through any data processing equipment. As equipment is upgraded and the power dissipation varies over spatial and temporal dimensions, due to varying workloads imposed on the data processing equipment, finding optimal positions to locate high power-dissipation racks becomes important.

Various combinations of CRAC unit supply and return (Fig. 2) as well as rack level thermal analysis with different layouts have been modeled to improve the thermal efficiency of data centers. Each data center has a unique geometrical footprint and rack layout, therefore a common basis is needed to compare the thermal performance of various cooling schemes in data centers. A unit cell architecture of a RFP data center was formulated by considering the asymptotic flow distribution in the cold aisles with increasing number of racks in a row [30]. It was shown that at least 4 rows of racks are required to produce the hot aisle – cold aisle behavior, with 1 hot aisle and 2 cold aisles. The data showed that the flow rate of entrained air through the sides of the cold aisle is relatively independent of the number of racks in a row and increases with increasing rack flow rate. Using 7 racks in a row will also be sufficient to model current high power density data centers because the maximum flow rate of approximately 2500 CFM is extremely high. Hassan et al. [31] presented a CFD model that optimized the data center layout based on the inputs and on thermal mass and energy balance principles. It was developed as a tool to evaluate air flow in 3D and optimize thermal loads, in order to design new data centers or improve existing data centers.

A full review of the most recent advances in Data Center Networks (DCN) was provided by Hammadi and Mhamdi [32] with a special emphasis on the architectures and energy efficiency. It showed a detailed comparison of existing research efforts in DCN energy efficiency as well as the most adopted test beds and simulation tools for DCNs.

1.3. Energy efficiency and thermal performance metrics

To operate data centers with high energy efficiency and reliability, with low total cost of ownership, and to get the highest computational output for their energy cost input, performance metrics play an important role. In this section, we review the most widely used data center metrics, as well as those
that will provide the data center designer, operator and owner with the best opportunity for reducing energy consumption.

The heat recovery efficiency, and its variation with coolant temperature, is one of the most important metrics for heat energy reuse. It can be defined as the ratio of the rate of heat removal by the coolant to the electrical power input. The heat removal rate is calculated as

Q̇ = ṁ c (T_out − T_in)        (1)

Q̇ is a function of the mass flow rate ṁ, the specific heat c and the change in temperature. The Green Grid introduced two metrics, Power Usage Effectiveness (PUE) and Energy Reuse Effectiveness (ERE), to account for the energy efficiency of a data center. The PUE value is defined as

PUE = (P_DataCenter + P_Cooling) / P_IT        (2)

The symbols P_DataCenter, P_Cooling and P_IT denote the total power consumption of a data center, the power spent on cooling devices and the power spent on computing, storage and network equipment, respectively. A low PUE value indicates that the majority of the energy consumed by a data center is used for computing [33]. To demonstrate the use of this metric, a case study (Table 4) of a hypothetical data center based on actual server data was considered [34]; the PUE can be calculated as

PUE = 3,891,480 kWh / 2,391,480 kWh = 1.63        (3)

The heat reuse factor in a data center is not accounted for in the PUE value. Therefore, another metric, ERE, is used, defined as

ERE = (P_DataCenter + P_Cooling − P_reuse) / P_IT        (4)

ERE accounts for the benefits achieved by reuse of waste heat from a data center [33]. The Supply Heat Index (SHI) is a dimensionless measure of recirculation of hot air into the cold aisles. SHI not only provides a tool to understand convective heat transfer in the equipment room but also suggests means to improve energy efficiency. SHI is a number between 0 and 1; a low value is preferred (typically, SHI < 0.40).

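For illustration, the minimal Python sketch below evaluates the metric definitions above. The annual energy figures follow the worked example in Eq. (3); the reused-heat quantity and the coolant conditions are assumed, hypothetical values, and the PUE is computed as total facility energy over IT energy, consistent with that worked example.

```python
# Illustrative calculation of the metrics in Eqs. (1)-(4).
# Energy figures follow the worked PUE example; the reused-heat figure and the
# coolant conditions are assumed for illustration only.

def heat_removal_rate(m_dot, c_p, t_out, t_in):
    """Eq. (1): heat removal rate by a coolant stream, kW."""
    return m_dot * c_p * (t_out - t_in)

def pue(e_total, e_it):
    """Power Usage Effectiveness, following the worked example in Eq. (3)."""
    return e_total / e_it

def ere(e_total, e_it, e_reuse):
    """Energy Reuse Effectiveness: credits reused waste heat."""
    return (e_total - e_reuse) / e_it

if __name__ == "__main__":
    # Annual energy use from the worked example (kWh)
    e_total, e_it = 3_891_480, 2_391_480
    e_reuse = 500_000          # assumed reused waste heat, kWh

    print(f"PUE = {pue(e_total, e_it):.2f}")            # ~1.63
    print(f"ERE = {ere(e_total, e_it, e_reuse):.2f}")

    # Eq. (1) for an assumed chilled-water loop: 2 kg/s, c = 4.18 kJ/kg K, 7 K rise
    print(f"Q_dot = {heat_removal_rate(2.0, 4.18, 19.0, 12.0):.1f} kW")
```
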
Mobile measurement technology (MMT) by Hamann et al. [35] yielded the first ever experimental 3D temperature images of a data center. Further, Hamann et al. proposed to extend the MMT concept towards a dynamic solution of the thermal management problem of data centers [36]. They proposed combining the high-resolution static data obtained from MMT with time-resolved data obtained from a complementary sensor network to model the data center and develop a control strategy to minimize the energy consumption.

Sun and Lee [37] comparatively studied the energy usage of two data centers in a tropical climate. The extensive study led them to conclude that data centers are extremely high energy consuming facilities (about 3000 kWh/(m² year)) and that power demand is grossly over-provided for in these facilities. This results in high capital and running costs. They show that around 56% of the energy consumption can be reduced for the facility that they studied through efficient designs and better control systems. They also pointed out the need for practical design guidelines and benchmarking for efficient design of data centers.

The performance of data centers was assessed by various research groups, with most authors reporting the maximum inlet temperature to each rack, which made specific comparisons between various cooling scenarios difficult. Sharma et al. [38] introduced dimensionless numbers to quantify the effects of recirculation. These dimensionless numbers were arrived at by considering the ratios (cold supply air enthalpy rise before it reaches the racks)/(enthalpy rise at the rack exhaust), and (heat extraction by the CRAC units)/(enthalpy rise at the rack exhaust), which in practice require air temperature evaluation at arbitrary points near the rack inlet and exhaust. Sharma et al. [39] later computed these dimensionless performance metrics in an operational data center by taking air temperature measurements just below the top of each rack inlet and outlet. Norota et al. [40] used the statistical definition of the Mahalanobis distance as an index to describe the non-uniformity in rack thermal performance, and calculated it by considering the scatter of the population in the Euclidean distance between two given points. Uddin and Rahman [41] proposed a framework that incorporates the latest energy saving techniques and gives detailed information for its implementation. They also apply green metrics like PUE, Data Center Effectiveness and Carbon Emission Calculator to measure the performance. Their work aims to provide a standard to be followed for the design of energy efficient data centers.

Priyadumkol and Kittichaikarn [42] formulated dimensionless parameters that can be effectively used as a tool for increasing the energy efficiency of data centers. They used the Supply Heat Index (SHI), a measure of the recirculation effect, which helps understand the convective heat transfer effects in the data center room. The SHI parameter has to be minimized, with the ideal value being zero, implying no recirculation effects. They also used thermal loads and the flow from the perforated tiles to define SHI. The effectiveness of two other indices, Rack Cooling Index (RCI) and Return Temperature Index (RTI), was shown by Cho and Kim [43]. They computed these indices to compare two cooling systems. RCI is a measure of how effectively the cooling system maintains the data center within industry temperature guidelines and standards. RTI is a measure of the level of by-pass air or re-circulation air in the equipment room. They also evaluated various strategies to improve the energy efficiency of the cooling system using these indices. Cho et al. [44] used these to compare the thermal environment inside a typical data center module for 46 different design alternatives using CFD. This study helped optimize the data center design and goes on to show the benefits of having indices for comparing the energy efficiency of various models and designs.

Considering the spatial uniformity of thermal performance of system level electronics, metrics were formulated and applied to data centers in [45]. The study takes into account the effects of recirculation and the mixing of the hot and cold air in the metrics formulation. Entropy generation minimization is employed as a metric, as a system with the least entropy generation will be the one with the least recirculation effect and thus the most efficient. The employed metrics result in better evaluation of the data center design.

Pedram [46] identified the key sources of inefficiencies in data centers. He further suggested solutions which aim to counter these problems while meeting the stipulated service level agreements. A data center was studied by Lu et al. [47] for a duration of over a year to identify the key problems and major energy saving opportunities. Based on their observations, they proposed to reduce the CRAC fan speed, as well as to add a heat recovery system which could provide a large amount of heat for process heating as well as space heating applications. Such a systematic study is advised to get a critical look at the usage and possible energy saving approaches that could then be applied.

A data center design cannot be improved on the basis of temperature and velocity fields only. Hence, Fakhim et al. [48] proposed an exergy based performance metric to complement first law analysis. Exergy destruction was calculated for a prototype data center with CFD analysis, which is not possible through thermal maps alone. This analysis provided optimal parametric design conditions.
Siriwardana et al. [49] developed a holistic approach based on abstract heat-flow models and swarm based optimization techniques for upgrading the existing computing equipment inside a DC environment. It was also observed that the proposed approach outperforms the conventional load spreading technique.

Meisner et al. [50] showed that, unlike mobile devices, high power servers are more efficient than low power servers, and the drawback of the higher cooling cost is easily outweighed by their higher efficiencies. Singh et al. [51] used CFD models based on conjugate heat transfer and fluid flow for a production data center and validated the model experimentally. As a result of extensive parametric studies, they proposed several design and operational changes, which resulted in energy savings of about 20%. This model was then used to obtain guidelines for better design of data centers.

Le et al. [52] proposed a framework for optimization based request distribution in multi-data-center web services which minimized energy consumption and cost, while respecting service-level agreements (SLAs). This was accomplished by taking advantage of time zones, variable electricity prices, and green energy. They also proposed a heuristic algorithm that is generic in nature.

Pakbaznia and Pedram [53] minimized the information technology (IT) equipment and air conditioning equipment power usage by modeling it as an integer linear programming problem. The algorithm uses thermally aware task placement to allocate tasks on various servers and returns control parameters for the cooling system to minimize the energy usage. The experimental results of the proposed model showed 13% power saving over a baseline task assignment technique.

Varsamopoulos et al. [54] improved their own previously proposed thermal-aware job scheduling algorithms. The contributions of the paper include (1) usage of linear cooling models in analysis and algorithm design and (2) realization of power savings even without a thermal-aware job scheduler.

Patterson [55] concluded that individual data center characteristics affect the optimum temperature for efficient operation. It was seen from an organized energy picture of the data center that raising the temperature of the data center does not necessarily lead to an increase in efficiency, as was generally believed.

Jonas et al. [56] proposed a predictive algorithm based on neural network methodologies which takes CPU (Central Processing Unit) utilization rates as input and predicts the temperatures at the equipment inlets. This methodology greatly simplifies the thermal mapping of the data center by reducing it to the inlets of all equipment. The neural network was trained using the on-board sensors. This method, if successful, will make headways into energy-saving approaches based on actual thermal mapping of the data center.

To improve the energy efficiency and reliability of data center operation, thermal-aware workload placement or scheduling has been studied by Tang et al. [57]. Based on previous research work on characterizing heat recirculation of a data center as cross interference coefficients, a task scheduling algorithm (XInt) was proposed which leads to minimal heat recirculation and a more uniform temperature distribution, and consequently results in minimal cooling energy cost of data center operation. Comparison of energy efficiency between XInt and other algorithms shows that XInt consistently achieves the best energy efficiency and saves 20–30% of energy cost at a moderate data center utilization rate. XInt also outperforms another recirculation-minimizing algorithm named MinHR. It was shown that the standard deviation of inlet temperature is a better metric to quantify the degree of recirculation inside a data center.

Tang et al. [58] tried to characterize the recirculation effect and accelerate thermal evaluation for high performance data centers by an abstract heat flow model, where recirculation was modeled as cross interference. This model uses the on-board and ambient sensor readings of workload and thermal profile data as inputs and predicts the resultant temperature distribution taking into account the data center topology. The results indicate that the modeling technique is valid and the model can be used for on-line thermal monitoring of data centers.

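As a rough illustration of the cross-interference idea used in such abstract heat-flow models, rack inlet temperatures can be written as the supply temperature plus a weighted sum of the power dissipated by every rack, with the weights forming a recirculation matrix. The matrix, supply temperature and rack powers below are hypothetical, and the formulation is a generic simplification rather than the exact model of [57,58].

```python
import numpy as np

# Hypothetical cross-interference (heat recirculation) matrix D [K/kW]:
# entry D[i, j] is the inlet-temperature rise at rack i per kW dissipated at rack j.
D = np.array([[0.20, 0.05, 0.01],
              [0.05, 0.25, 0.06],
              [0.01, 0.06, 0.30]])

T_supply = 16.0                      # CRAC supply temperature, deg C (assumed)
P = np.array([8.0, 12.0, 6.0])       # rack power dissipation, kW (assumed)

T_inlet = T_supply + D @ P           # predicted rack inlet temperatures
print(T_inlet)

# A thermally aware scheduler would choose the load placement P so that
# max(T_inlet) stays below a redline while the total cooling effort is minimized.
```
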
Schmidt et al. [59] provided a detailed overview of the challenges in thermal management of data centers. Their work describes the key factors affecting the environmental conditions of the equipment within a data center. They also quantify, based on certain metrics, the veracity of various attempts to numerically model the thermo-fluid mechanisms that occur in a data center.

Facility level cooling schemes have been known to cause inefficiencies through hot air recirculation and cold air short-circuiting, and thus research focus has shifted to rack-level thermal analysis to overcome these shortcomings. Herrlin [60] formulated rack-level thermal performance metrics based on rack inlet temperature (including conditions that exceed the desirable limits), testing the methodology and the index on a typical data center environment with two contrasting cooling systems. It was shown that the index gives meaningful information on how effectively the equipment is maintained within the recommended temperature range.

Heydari and Sabounchi [61] incorporated a model of an overhead rack-level supplemental refrigeration device and used CFD analysis to compute temperature distributions in a RFP data center. Results showed that airflow rate had a greater effect on rack thermal performance than refrigerant temperature or refrigerant flow rate.

A reduced order thermal method based on Proper Orthogonal Decomposition (POD) was presented by Samadiani et al. [62] to predict the temperature field in complex systems in terms of multiple design variables. It was shown that the POD results remain accurate for the case study even if the given thermal information at the component boundaries decreases by 67%.

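As a generic sketch of how POD yields such a reduced-order description (using assumed synthetic snapshot data; the snapshots, design variables and full formulation of [62] differ), the dominant spatial modes of a set of temperature-field snapshots can be extracted with a singular value decomposition and a field reconstructed from only a few mode coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "snapshots": temperature fields sampled at 1000 grid points for
# 20 different operating points. In practice each column would come from a CFD
# run or a set of measurements.
n_points, n_snapshots = 1000, 20
x = np.linspace(0.0, 1.0, n_points)[:, None]
spread = rng.uniform(0.5, 2.0, n_snapshots)[None, :]
snapshots = (20.0 + 10.0 * np.sin(np.pi * x * spread)
             + 0.1 * rng.standard_normal((n_points, n_snapshots)))

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

k = 3                                    # retain the first k POD modes
modes = U[:, :k]                         # dominant spatial modes

# Project one snapshot onto the modes and reconstruct it from k coefficients
coeffs = modes.T @ (snapshots[:, [5]] - mean_field)
approx = mean_field + modes @ coeffs
print("max reconstruction error:", float(np.abs(approx - snapshots[:, [5]]).max()), "deg")
```

In a full reduced-order model, the mode coefficients would then be related to the design variables so that temperature fields at new operating points can be estimated without rerunning the detailed CFD model.
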
1.4. Data center dynamics: control and lifecycle analysis

Most of the investigations mentioned above are concerned with a nearly fixed geometry, number of CRAC units and number of racks, which may represent only a single point in time over a data center's life. To compensate for time varying heat loads, Boucher et al. [63] attempted to control rack inlet temperature by varying CRAC temperature and flow rate, as well as perforated tile open area. Sharma et al. [64] proposed redistributing computing workload rather than cooling resources and presented CFD based results, and Patel et al. discussed the advantages of distributable cooling resources at various levels [65]. Other investigations have considered combined power and cooling systems and their viability over the facility lifecycle [66,67]. A real-time transient thermal model was developed by Lin et al. [68] to demonstrate the heating of a data center following the loss of utility power. A proper strategy was recommended according to each cooling system's characteristics to achieve the desired temperature control during power outages.

Boucher et al. [69] presented experimental results that demonstrate how simple modular control strategies can be implemented. They evaluated three types of actuation: CRAC supply temperatures, CRAC fan speeds and plenum vent positions, to be used for efficient control of the data center. Furthermore, from the experimental data it was shown that it is possible to improve the energy performance of a data center by up to 70% over current standards while maintaining proper thermal management conditions. Bash et al. [70] outlined an architecture and control scheme for dynamic thermal management of air cooled data centers. It was shown that
this control scheme results in up to 50% reduction in the cooling power consumption, and also increases the space utilization, in addition to reducing cost by as much as 25%.

1.5. Model validation

Due to the complexity of air flow and heat transfer, validation studies of data center CFD/HT models were limited until 2005; after this period, a large number of model validation studies have been done on RFP data centers. A primary concern in data center CFD/HT validation is the appropriate validation of numerical models. Even a small prototypical data center facility includes countless point-wise temperature, velocity and pressure measurements. Full-field visualization and measurement techniques, such as Particle Image Velocimetry (PIV) and laser-induced fluorescence, can provide very detailed descriptions of the velocity and temperature fields.

Abdelmaksoud et al. [71] studied two main aspects: (1) to assess the importance of conserving both mass and momentum at the tile exit and rack rear doors in data center CFD simulation, and (2) to highlight the importance of including buoyancy for Archimedes numbers of order nearly 1 or higher. It was shown that the inclusion of these effects can have a significant influence on CFD simulations of the temperature fields in the vicinity of the racks. By accounting for these effects in the CFD model, better agreement between the CFD simulation results and the measured data was observed. The overall RMS (Root Mean Square) temperature error was reduced from approximately 4 °C (obtained from CFD simulations without accounting for these effects) to approximately 2 °C (when these effects were accounted for).

On the basis of rack inlet temperature, a comparison between test results and numerical simulation was done by Shrivastava et al. [72]. For racks near the CRACs and along the aisle, good agreement with experimental measurements was observed at moderate power levels. They also suggested methods for collection of data at higher densities, as the greatest difference in the comparison was observed in those regions.

Choi et al. [73] presented a detailed 3D CFD based thermal modeling tool, ThermoStat, for rack mounted server systems. 20 nodes (each with dual Xeon processors) were modeled and validated with over 30 sensor measurement data. ThermoStat can be used in both system building and packaging studies to figure out how to place components, design cooling systems, etc., as well as for undertaking higher level (architectural/software) thermal optimization studies.

Fakhim et al. [74] evaluated the thermal behavior of a data center by numerical analysis of thermal and fluid flow fields. Using cooling percentage deviation and SHI, they analyzed the performance of design modifications suggested to overcome hot spots. Changing CRAC position was found to have the largest impact on these parameters. The findings lead to a better understanding and solution of the formation of hot spots in data centers.

Schmidt et al. [75] examined a large cluster installation with high powered racks (up to 27 kW) and compared the temperature profiles obtained by above floor thermal measurements and CFD analysis. It was suggested that CFD results deviate from actual measurements in certain cases and fine tuning of the model is necessary in most cases for accuracy.

The experimental temperature distributions reveal strong hot spots in the data center, suggesting that improvement in current cooling schemes is possible. Table 3 provides a more detailed description of the validated RFP investigations, in the form of a summary of the survey. Thermal issues such as hot spots arise in the space that houses the racks, regardless of whether the cold supply air is delivered through a RFP or an overhead ductwork. Schmidt et al. [9,10] modeled the RFP flow with the perforated tile air flow rate predictions as boundary conditions to an above floor model. During validation of RFP data center models, point wise temperature measurements at the inlet and exit of individual servers have been generally considered, thereby emphasizing the need to accurately model fluid flow and heat transfer effects.

1.5.1. Compact modeling

Compact models are widely accepted, as they save computational time. A compact model uses a number of internal states to produce predefined output data, given a prescribed set of inputs and parameters, which differentiates compact models from black-box models. Compact modeling at the rack level can provide multiple levels of description for data center airflow and heat transfer characteristics, depending on the details included in the compact model. Since a rack contains multiple servers, each generating heat at the chip level, most researchers [76] have subdivided racks into a series of server models so as to capture the cooling and convective processes at the rack level; each server model contained an induced draft fan model at the exhaust, a lumped flow resistance at the inlet to model the pressure drop, and a uniform heat flux over the bottom surface to model the power dissipation from a number of components.

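A minimal energy-balance sketch of such a compact server model is given below. The fan flow, heat load, loss coefficient and inlet free area are illustrative values only, and the compact models reported in the literature typically also include fan curves and momentum effects.

```python
# Minimal compact server model: each server is reduced to a prescribed fan air
# flow, a lumped inlet flow resistance, and a heat load. The exhaust temperature
# follows from an energy balance and the inlet pressure drop from the lumped
# resistance. All parameter values below are illustrative.

RHO_AIR = 1.16      # kg/m^3
CP_AIR = 1.007      # kJ/(kg K)

def exhaust_temperature(t_inlet_c, power_kw, flow_m3_s):
    """Energy balance: T_out = T_in + P / (m_dot * c_p)."""
    m_dot = RHO_AIR * flow_m3_s
    return t_inlet_c + power_kw / (m_dot * CP_AIR)

def inlet_pressure_drop(flow_m3_s, loss_coefficient=20.0, free_area_m2=0.02):
    """Lumped resistance: dp = K * 0.5 * rho * V^2 (Pa)."""
    velocity = flow_m3_s / free_area_m2
    return loss_coefficient * 0.5 * RHO_AIR * velocity ** 2

# A single 0.5 kW server drawing 0.03 m^3/s through an assumed 0.02 m^2 inlet
print(f"exhaust temperature: {exhaust_temperature(22.0, 0.5, 0.03):.1f} C")
print(f"inlet pressure drop: {inlet_pressure_drop(0.03):.0f} Pa")
```
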
Arghode et al. [77] studied air flow through a perforated tile and its entrance into the adjacent server rack, and compared the results of detailed CFD analysis with various other computational models as well as actual experimental measurements (using PIV). They concluded that it was important to include the geometric details of the tile for better accuracy.

Ghosh et al. [78] investigated the effect of the server population of a rack on air temperatures in a data center facility. Hot and cold aisles with different numbers of servers were observed with a rack-level containment system. CFD studies aided the study of air flow and it was shown that the server population inside a rack has a significant impact on air temperatures.

The effect of server air flow distribution in a RFP data center facility was investigated by Kumar et al. [79] through a series of experiments, where rack air flow was measured using PIV. In these experiments, the fan speed was varied while holding the tile air flow rate constant. The results for the various cases showed significant changes in the air distribution pattern.

The influence indices approach for data center cooling by Bhagwat et al. [80] cuts down the exploration time by 80% for a 1500 square foot data center. The time taken by the computational resources to gather enough information about the data center from the CFD analysis for it to take informed decisions is called the exploration time. These indices characterize the causal relationship between heat sources and sinks, and were used to refine the design/operation of the data center either manually or programmatically. New designs were evaluated with further CFD runs to compute new influence indices, and the process is repeated to yield improved designs as per the computation budget.

There are numerous complex turbulent flow regimes encountered in thermo-fluid engineering applications that have so far not been amenable to effective systematic design, because of the computational expense involved in model evaluation and the inherent variability of turbulent systems. Hence a robust design approach was applied to data center server cabinets [81]. Combining three constructs, namely (1) Proper Orthogonal Decomposition (POD), (2) robust design principles and (3) the compromise Decision Support Problem (cDSP), formed the basis of this approach. This study enabled a 50% increase in the heat removal capacity. The results were verified through experimental measurements, and generally agreed well with numerical simulation values. If the purpose of data center modeling is to investigate air flow and heat transfer at a relatively farther distance from the
rack, where the rack itself can be considered as a heat load, black box models may yield satisfactory results. Since the temperatures in and around the racks drive the need for efficient cooling systems, detailed rack level analysis is required, subject to the availability of computational facilities.

1.6. Data center cooling strategies

It is expected that air cooled data centers will not be effective beyond a server power density of 25 kW per server rack. Also, air cooled data centers typically have a relatively poor PUE, leading many researchers to explore alternate cooling strategies.

Biswas et al. [82] presented a detailed power model that integrates on-chip dynamic and leakage power sources, heat diffusion through the entire chip, thermoelectric coolers (TEC) and global cooler efficiencies, and all their mutual interactions, with multi-scale analysis. Results showed that approximately 27% of global cooling power could be saved with a data center supply temperature of 15 °C.

Wang et al. [83] proposed a model that links temporary changes in cooling system settings (i.e., the CRAC output temperature) and cooling energy consumption. Using CFD tools, they analyzed the thermal dynamics of a data center, and calculated power consumption with and without the use of thermal storage. In addition, they also used TStore (a cooling strategy that leverages thermal storage to cut the electricity bill for cooling, without causing servers in a data center to overheat) simulations with thermal storage units to reduce cooling costs over short and long time scales. Results showed that TStore achieved the desired cooling performance with an approximately 17% reduction in power consumption.

Based on a two-level approach, a computational method for efficient analysis of the steady-state performance of two-phase pumped-loop cooling systems with multiple components and branches was carried out by Kelkar et al. [84]. A hybrid approach based on 1D homogeneous two-phase flow within the flow passage, with empirical correlations, was used to analyze the performance of an actual two-phase system with multiple evaporators (Fig. 3). It was shown that, during the design process, quantitative information on the behavior of individual components becomes imperative.

Fig. 3. Schematic diagram showing the use of the finned-tube evaporator for cooling the hot air discharged by a server rack in a data center environment (Adapted from [84]).

Shrivastava et al. [85] have developed a computational tool to optimize cluster cooling performance by combining a real-time Neural-Network (NN) based algorithm as a cooling prediction engine, on which Genetic Algorithm (GA) optimization functionality was built. Three example cases were discussed, and the results indicate the potential of the combined tool. The GA was shown to solve fairly complex data center optimization problems efficiently.

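To illustrate the surrogate-plus-optimizer idea in general terms (this is not the specific tool of [85]; the training data below are synthetic and the choice of inputs and output is assumed), a small neural network can be fitted to CFD or measured samples and then queried cheaply inside an optimization loop such as a GA:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic samples standing in for CFD runs or measurements:
# inputs  = [CRAC supply temperature (C), total CRAC air flow (m^3/s)]
# output  = maximum rack inlet temperature (C), from a made-up response surface
X = np.column_stack([rng.uniform(12, 20, 200), rng.uniform(4, 12, 200)])
y = X[:, 0] + 60.0 / X[:, 1] + rng.normal(0.0, 0.2, 200)

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
surrogate.fit(X, y)

# The trained surrogate answers "what-if" queries almost instantly, which is what
# makes wrapping it inside a genetic-algorithm search practical.
print(surrogate.predict([[15.0, 8.0]]))   # close to 22.5 C for this made-up response
```

Because each surrogate evaluation is essentially instantaneous, an optimizer can afford the thousands of candidate evaluations that would be impractical with a full CFD model in the loop.
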
To deploy high density enclosures and blade servers in data centers, Rasmussen [86] highlighted practical and effective strategies. The parameters of actual and new data centers in terms of
power density were examined and an approach to achieve high density was proposed. Norota et al. [40] focused on the uneven distribution of heat generation that occurs in an Internet Data Center, and confirmed its effects on the efficiency of air conditioning by conducting an experimental study of 16 data centers accommodating four large scale computers. Cooling design considerations by virtue of proper layout of racks can yield substantial savings in energy. A data center cooling design through case studies was shown by Patel et al. [87] and formed the basis for further work in cooling design. The presented case study helps in the use of air conditioning resources with changes in layout through numerical modeling.

To eliminate the use of chillers in air cooled data centers, Iyengar et al. [88] proposed data center configurations with an indirect cooling approach. These configurations were compared with a traditional data center (Fig. 4) on the basis of cooling power use and realizable energy saving. Four highly energy efficient data center configurations that obviate the need for chillers were studied. These configurations include an Evaporative Air Side Economizer (EASE), an Indirect Air Side Economizer (IASE), an Evaporative Dry Cooler with Rack Heat Exchanger (EDRHx), and a Dry Cooler with Rack Heat Exchanger (DRHx). The cooling power calculations were complemented with computations of server inlet air temperature for the different configurations for two US cities (hot summer data using typical weather data). Except for the DRHx design, all the other economizer based approaches could satisfy the recommended temperature ranges for several types of equipment classes.

Fig. 4. Traditional chiller based data center cooling.

1.6.1. Water cooling

Although considerable focus has been placed on the removal of heat generated by chips and other components out of the server enclosure, the rack cabinet is usually assumed to be provided with an adequate supply of cooling air. The cabinet enclosure system within a data center complicates this assumption, as the exhaust heat from one unit is drawn into another, and different devices may have differing cooling requirements. Consideration of this multi-scale cabinet level resolution is important because heat dissipation at both chip and server length scales drives data center temperatures. In summary, data center system length scales span four orders of magnitude, from data center, to cabinet, to server, to chip, as shown in Fig. 5.

Fig. 5. Data center length scales (Adopted from [81]).

Zimmermann et al. [33] reported an experimental investigation, and the corresponding energy and exergy analyses, based on Aquasar, the first hot water cooled supercomputer prototype. Measurements and analyses on a prototype data center demonstrated that the cooling requirements in data centers can be efficiently addressed by using hot water as coolant. Almoli et al. [89] developed a CFD model for data center configurations with rear door liquid loop heat exchangers at the back of racks. It was shown that the additional fans on these doors play a significant role in reducing the load on CRAC units.

A novel economizer based data center test cell with liquid cooled server racks was built by Iyengar et al. [90] as part of a US Department of Energy project. They analyzed server cooling (using liquid water), rack enclosure cooling (with liquid distribution and heat exchanger), and data center level cooling infrastructure. A 22-hour experiment conducted with a full rack of servers showed that extremely high energy efficiencies can be realized, with only 3.5% of the rack power being used for cooling at the data center level on a relatively hot New York summer day.

Through a series of experiments to characterize individual equipment, data center thermal performance and energy consumption, David et al. [91] studied server level cooling by combining recirculated air and warm water in a chiller-less data center test facility. A whole day run resulted in an average cooling energy use of 3.5% of the total IT energy use, with average ambient air temperatures of 24 °C and average IT power of 13 kW.

Electronic data center cooling with hot water was proposed by Kasten et al. [92]. Numerical modeling showed that an easily available water flow of around 0.5 LPM with inlet temperatures above 50 °C was capable of handling the cooling requirements of data centers. There is also a possibility of waste heat recovery because of the high temperature water that exits from the heat sink.

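To put the reported 0.5 LPM figure in perspective, Eq. (1) gives the heat that such a water stream can absorb; the temperature rises used below are assumed for illustration and are not values from [92].

```python
# Heat removal capacity of a 0.5 LPM water loop, evaluated with Eq. (1).
# The temperature rises across the heat sink are assumed values.
flow_lpm = 0.5
m_dot = flow_lpm / 60.0            # kg/s (water, ~1 kg per litre)
c_p = 4.18                         # kJ/(kg K)
for delta_t in (10.0, 15.0, 20.0):
    q_watts = m_dot * c_p * delta_t * 1000.0
    print(f"dT = {delta_t:4.1f} K -> Q = {q_watts:.0f} W")
```

A few hundred watts per heat sink is thus attainable at this very low flow rate for modest temperature rises.
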
Ebrahimi et al. [93] comprehensively reviewed the heat recovery opportunities in data center cooling technologies. They concluded that absorption cooling and the organic Rankine cycle are the most promising technologies to exploit data center waste heat. An approach to increasing energy efficiency by using distributed waste heat reutilization was presented by Woodruff et al. [94].

The rise in equipment power densities, along with the rise in expenditures of energy to power the equipment, has made it imperative for alternative, efficient cooling methods like liquid cooling to be adopted. Extraction of heat at the source by a liquid cooled heat exchanger at the rear door of a rack reduces the recirculation effects and is more effective than the CRAC placed around the facility. Cost analysis also shows an advantage compared to conventional means in both capital and running costs [59].

1.7. Programming based optimization of data center

To minimize the energy consumption of current data centers, and to provide control strategies, many researchers have developed and implemented various optimization methodologies. Sarood et al. [95] proposed a scheme that minimizes power utilization and prevents hot spot formation for parallel applications. They used dynamic voltage and frequency scaling and frequency-aware load balancing. The experiments were run on a 32-node cluster running applications with a range of power and utilization profiles. This resulted in energy savings of about 60%.

Sayed et al. [96] made recommendations for temperature management in data centers that have a potential for saving energy while maintaining system reliability and performance. These were made based on an extensive study of field data from different production environments. The effect of temperature on server reliability and benchmarking studies were carried out on an experimental test bed.

Durand-Estebe et al. [97] embedded a PID algorithm in the CFD code to model a data center with CPU leakage and overall material electric consumption. It was shown that the PID algorithm manages to maintain the server in the allowable ASHRAE range (from 17 °C to 30 °C).

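A minimal discrete-time sketch of such a control loop is shown below: a PID controller adjusts the CRAC supply temperature so that a crudely modelled (first-order) server inlet temperature tracks a set-point inside the ASHRAE window. The gains, nominal supply temperature and plant constants are illustrative, not values from [97].

```python
# Minimal discrete PID loop: the CRAC supply temperature is adjusted so that a
# crudely modelled (first-order) server inlet temperature tracks a 25 C set-point.
# Gains, the nominal supply temperature and the plant constants are illustrative.
kp, ki, kd = 1.0, 0.005, 2.0
setpoint, dt = 25.0, 10.0                 # target inlet temperature (C), time step (s)

inlet = 31.0                              # initial (too warm) server inlet temperature
integral, prev_error = 0.0, inlet - setpoint

for _ in range(200):                      # ~33 minutes of simulated time
    error = inlet - setpoint
    integral = max(min(integral + error * dt, 1000.0), -1000.0)   # simple anti-windup
    derivative = (error - prev_error) / dt
    prev_error = error

    # Absolute actuator command around a 20 C nominal supply, with actuator limits
    supply = 20.0 - (kp * error + ki * integral + kd * derivative)
    supply = min(max(supply, 12.0), 27.0)

    # First-order plant (tau = 120 s): inlet relaxes toward supply + a 7 K IT-load offset
    inlet += dt / 120.0 * ((supply + 7.0) - inlet)

print(f"supply = {supply:.1f} C, inlet = {inlet:.1f} C")   # should settle near 18 / 25
```
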
[89] developed CFD model for data center configurations with rear Wang et al. [98] proposed and evaluated two thermally aware
door liquid loop heat exchangers at the back of racks. It was shown task scheduling algorithms, TASA and TASA-B. The results showed
that the additional fans on these doors plays a significant role to that the algorithms could significantly reduce temperatures in
reduce the load on CRAC units. data centers with only about 10% increased job response time.
A novel economizer based data center test cell with a liquid Wang et al. [83] evaluated a cooling strategy for data centers that
cooled server racks was built by Iyengar et al. [90] as part of a US exploited the benefits of thermal storage and help cut the
Department of Energy project. They analyzed server cooling (using electricity costs. The strategy, called TStore, was tested using CFD
liquid water), rack enclosure cooling (with liquid distribution models and using actual workload and power price information.
and heat exchanger), and data center level cooling infrastructure. The strategy manages to reduce cooling costs by 15% while
A 22-hour experiment conducted with a full rack of servers avoiding server overheating. Wang et al. [99] discussed the
showed that extremely energy efficiencies can be realized, with feasibility of control strategies that unify workload, power and
only 3.5% of the rack power being used for cooling at the data cooling management. They simulate a blade enclosure system and
center level on a relatively hot New York summer day. discuss the opportunities, challenges and designs for such control
Through a series of experiments to characterize individual systems for the same. The results showed that the controllers can
equipment, data center thermal performance and energy con- effectively manage all the three parameters and thus improve the
sumption, David et al. [91] studied server level cooling by overall efficiency of the blade enclosure system. They further
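The following minimal sketch is not the controller of [97]; it uses assumed gains, actuator limits and a deliberately crude first-order recirculation model only to illustrate how a discrete PID loop of this kind can steer a simulated rack inlet temperature toward an ASHRAE-compliant setpoint.

```python
# Minimal sketch of discrete PID supply-air control (assumed gains and plant model;
# not the controller of Durand-Estebe et al. [97]).

def simulate(setpoint=27.0, steps=500, dt=1.0):
    kp, ki, kd = 0.8, 0.02, 0.1            # assumed controller gains
    integral, prev_err = 0.0, 0.0
    t_supply, t_inlet = 20.0, 32.0         # assumed initial CRAC supply / rack inlet [degC]
    for _ in range(steps):
        err = t_inlet - setpoint                                   # positive when the inlet is too hot
        integral = max(min(integral + err * dt, 200.0), -200.0)    # basic anti-windup clamp
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv
        t_supply = min(max(21.0 - u, 12.0), 27.0)                  # nominal 21 degC supply, actuator limits
        # crude first-order plant: inlet relaxes toward supply plus a recirculation offset
        t_inlet += 0.2 * ((t_supply + 6.0) - t_inlet)
    return t_inlet

if __name__ == "__main__":
    print(f"rack inlet after control settles: {simulate():.1f} degC")
```

In a real deployment the plant model is replaced by the CFD (or measured) response of the room, but the structure of the loop is the same.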
Wang et al. [98] proposed and evaluated two thermally aware task scheduling algorithms, TASA and TASA-B. The results showed that the algorithms could significantly reduce temperatures in data centers with only about a 10% increase in job response time. Wang et al. [83] evaluated a cooling strategy for data centers, called TStore, that exploits the benefits of thermal storage to help cut electricity costs. The strategy was tested using CFD models together with actual workload and electricity price information, and reduced cooling costs by 15% while avoiding server overheating. Wang et al. [99] discussed the feasibility of control strategies that unify workload, power and cooling management. They simulated a blade enclosure system and discussed the opportunities, challenges and designs for such control systems. The results showed that the controllers can effectively manage all three parameters and thus improve the overall efficiency of the blade enclosure; the authors further extend this approach to data center level management.
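The TStore policy itself is more elaborate than can be shown here; the toy schedule below, with invented electricity prices, loads and storage size (and ignoring storage losses), merely illustrates the underlying idea of charging a thermal store during cheap hours and discharging it during expensive ones.

```python
# Toy illustration of thermal-storage arbitrage (invented prices, loads and storage
# size; storage losses ignored; not the actual TStore algorithm of Wang et al. [83]).

prices = [0.05] * 8 + [0.15] * 12 + [0.05] * 4   # assumed electricity price per hour [$/kWh]
cooling_load = [100.0] * 24                      # assumed cooling energy needed per hour [kWh]
capacity, store = 400.0, 0.0                     # storage size and current charge [kWh]
cheap = 0.08                                     # price threshold below which we charge

cost_without, cost_with = 0.0, 0.0
for price, load in zip(prices, cooling_load):
    cost_without += load * price                 # every kWh of cooling bought at spot price
    if price <= cheap:
        charge = min(capacity - store, load)     # charge at most one extra hour-load per hour
        store += charge
        cost_with += (load + charge) * price
    else:
        discharge = min(store, load)             # serve the load from storage first
        store -= discharge
        cost_with += (load - discharge) * price

print(f"cooling cost without storage: ${cost_without:.2f}")
print(f"cooling cost with storage:    ${cost_with:.2f}")   # about 8% lower for these inputs
```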
Banerjee et al. [100] developed a thermally aware load scheduling algorithm, named the Energy Inefficiency Ratio of SPatial job scheduling (SP-EIR), along with a cooling-aware job placement and cooling management algorithm, named Highest Thermostat Setting (HTS). SP-EIR minimizes the total energy consumption (computing + cooling) while assuming the SLAs are being met, and HTS minimizes the cooling demands on the CRAC units to determine job placement. Results from simulations based on the ASU HPC data center showed that, when used in conjunction, the two algorithms reduce energy consumption by 15%. The algorithms were also compared with other algorithms and showed positive results.

Although the aforesaid approaches try to minimize data center power consumption, they lack a precise objective function and/or an accurate mathematical formulation of the optimization problem. Pakbaznia and Pedram [53] proposed a heuristic algorithm that minimized the information technology (IT) and air conditioning equipment power usage by modeling it as an integer linear programming problem; it also concurrently addressed the problems of server consolidation and task assignment. The experimental results of the proposed model showed a 13% power saving over a baseline task assignment technique. López and Hamann [101] proposed a simplified thermal model that uses real-time sensor measurements to specify boundary conditions. The results showed good agreement with actual thermal measurements, and the model remains accurate even with limited real-time sensor data, so it can be useful as a tool for a real-time data center energy management system.
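The exact integer linear program of [53] is not reproduced in this review; schematically, and in simplified notation of our own, a formulation of this kind has the following structure, where x_ij = 1 if task j is assigned to server i, y_i = 1 if server i is powered on, u_j and U_i are task demands and server capacities, and the CRAC power term is linearized in practice:

```latex
\begin{aligned}
\min_{x,\,y,\,T_{\mathrm{sup}}}\quad
  & \sum_i \Big( P_i^{\mathrm{idle}}\, y_i + \sum_j p_{ij}\, x_{ij} \Big)
    + P_{\mathrm{CRAC}}(T_{\mathrm{sup}}) \\
\text{subject to}\quad
  & \sum_i x_{ij} = 1 \quad \forall j
    \qquad \text{(every task is placed)} \\
  & \sum_j u_j\, x_{ij} \le U_i\, y_i \quad \forall i
    \qquad \text{(capacity of active servers)} \\
  & T_i^{\mathrm{in}}(x, T_{\mathrm{sup}}) \le T_{\mathrm{red}} \quad \forall i
    \qquad \text{(inlet temperatures below the redline)} \\
  & x_{ij},\, y_i \in \{0,1\}.
\end{aligned}
```

The thermal constraint couples the task assignment to the cooling set point, which is what allows IT and cooling power to be traded off within a single optimization.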
Earlier, Jonas et al. [102] and later Varsamopoulos et al. [103] proposed speeding up the evaluation of designs and algorithms through the use of a heat transfer model that captures transient behavior. The addition of a time dimension allows data center operators to model the periodic temperature fluctuations implied by discontinuous, step-linear cooling models; according to these authors, the approach maintains a good level of accuracy and saves considerable computational time. Wang et al. [104] formulated the problem of sensor placement for hot server detection in a data center as a constrained optimization problem based on CFD and advanced optimization techniques, and showed that the proposed solution outperforms several commonly used placement solutions in terms of detection probability.
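The transient models of [102,103] are considerably richer than what follows; the fragment below is a minimal lumped-capacitance sketch, with assumed thermal resistance, capacitance and airflow values, of the kind of time-stepped energy balance on which such models build.

```python
# Minimal lumped-capacitance (RC) sketch of a transient server thermal model
# (assumed parameters; illustrative of the approach in [102,103], not their model).

def step_response(power_w=300.0, t_in=22.0, m_dot=0.05, cp=1005.0,
                  c_th=2.0e4, r_th=0.05, dt=1.0, duration=600):
    """March a simple energy balance: C dT/dt = P - (T - T_in)/R."""
    t_out_history = []
    t_node = t_in                                   # lumped internal temperature [degC]
    for _ in range(int(duration / dt)):
        q_removed = (t_node - t_in) / r_th          # heat carried away by the airstream [W]
        t_node += dt * (power_w - q_removed) / c_th # lumped energy balance update
        t_out = t_in + q_removed / (m_dot * cp)     # mixed outlet air temperature [degC]
        t_out_history.append(t_out)
    return t_out_history

if __name__ == "__main__":
    hist = step_response()
    print(f"outlet after 1 min: {hist[59]:.1f} degC, after 10 min: {hist[-1]:.1f} degC")
```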
To improve the energy efficiency and reliability of data center operation, thermal-aware workload placement and scheduling were studied by Tang et al. [57]; details of the algorithm are given in Section 1.3. To capture real-world discrete power states, Moore et al. [105] developed scheduling algorithms with a main focus on zonal heat distribution and recirculation-based workload placement. The algorithms achieved roughly half the cooling costs of the worst-case scenario, and all of the savings were obtained through software, without any capital investment.
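Tang et al. [57] quantify recirculation through a cross-interference (heat distribution) matrix; the greedy routine below, with an invented three-node matrix and load set, is only a sketch of how such a matrix can be used to place load so that the predicted peak inlet temperature stays low, and is not the algorithm of [57] or [105].

```python
# Greedy, recirculation-aware load placement sketch (invented interference matrix
# and loads; illustrative of the idea in [57,105], not their algorithms).

# D[i][j]: assumed fraction of heat dissipated at node j that reaches the inlet of node i
D = [[0.00, 0.04, 0.01],
     [0.03, 0.00, 0.05],
     [0.01, 0.02, 0.00]]
T_SUPPLY = 18.0          # CRAC supply temperature [degC]
K = 0.02                 # assumed inlet temperature rise per recirculated watt [degC/W]

def inlet_temps(power):
    """Predicted inlet temperature of each node for a given power vector [W]."""
    return [T_SUPPLY + K * sum(D[i][j] * power[j] for j in range(len(power)))
            for i in range(len(power))]

def place_jobs(job_powers, n_nodes=3):
    power = [0.0] * n_nodes
    for p in sorted(job_powers, reverse=True):       # place the largest jobs first
        # choose the node whose loading raises the predicted peak inlet temperature least
        best = min(range(n_nodes),
                   key=lambda i: max(inlet_temps([power[k] + (p if k == i else 0.0)
                                                  for k in range(n_nodes)])))
        power[best] += p
    return power, max(inlet_temps(power))

if __name__ == "__main__":
    assignment, peak = place_jobs([200, 150, 120, 90, 60])
    print("power per node [W]:", assignment, " predicted peak inlet [degC]:", round(peak, 2))
```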
The thermal profile of a data center is highly dynamic and depends mainly on the placement of workloads on servers, the cooling configuration and the physical topology of the data center. To capture this dynamic behavior, researchers are currently developing prediction models of data centers and analyzing the effect of transient conditions at the server, rack and room levels [103,106–112].

Table 5 summarizes modeling details of data center simulations published after 2007; a similar summary for the years 2000 to 2007 can be found in [7]. The table is organized into facility geometry, rack and CRAC details, the cooling scheme, and other details regarding mesh size and CFD model construction. The abbreviations STD, RFP and OH refer to the standard CRAC return, raised floor plenum supply, and overhead supply and return, respectively.

2. Summary

In the past decade there has been significant advancement in both the scientific understanding of, and technological solutions for, thermal management of data centers. With the continuing increase in data generation, storage and processing requirements across the globe, many more studies and technology interventions can be expected in the near and medium-term future. While data centers continue to be predominantly of the air-cooled RFP type, new configurations and cooling technologies are being explored, depending on the size and purpose of individual data centers. Within each of these cooling technologies, the emphasis is on simplification (compaction) of models, model validation, and use for control and diagnostics. Based on this review, which covers the broad range of research initiatives in the field of data center thermal management since 2007, some future directions for work are suggested below:

a) Compact, high-fidelity CFD models can be integrated with real-time control, diagnostics and job placement algorithms.
b) Characterization and dynamic modeling of CRAC units can be integrated with thermal models of the data center itself, and possibly incorporate some smart-grid capabilities. This will also help in designing thermal energy storage options where feasible.
c) Air cooling for some data centers has reached its limit; racks with high-density loads must be distributed among other racks with less dense equipment loads in order to manage heat effectively. This requires closer examination and a better understanding of the flow and temperature dynamics within individual data centers.
d) To improve the energy efficiency of data centers, the guidelines currently in use focus on rack inlet air temperatures. However, these guidelines could be made more effective if they considered the energy consumption of fans, blowers and other auxiliary equipment. Development of physics-based, heuristic guidelines that use knowledge of the thermal environment through real-time measurements is imperative for the efficient operation of data centers.
e) It is very difficult to capture the dynamics of a data center in real time because of the complexities involved in task management as well as in cooling. Therefore, a predictive model for any data center will be more effective at predicting its thermal behavior, and will perhaps be the basis for data center efficiency management in the future.
f) Real-time control strategies, changes in facility architecture, advanced cooling approaches at the server, rack and system levels, and corrosion prediction and management are possible areas of future research in academia as well as industry.

Appendix A

See Tables 1–5 here.
Table 1
Raised floor plenum (RFP) data center numerical modeling details summary-1.

Author, Data center layout [m] Perforated tiles CRAC units Modeling
year
Length [m] Width RFP depth Racks Rack height Perforated % Open CRAC Ind. flow rate Grid Turbulence Commercial
[m] [m] [m] tiles area units [m3/s] cells model code

[69] 0.53, 0.91 (height = 1.98 m) 0.9 1 2.286 – 33 2 1.224, 2594 4.2 million k–ε realizable FLUENT
[70] 0.609 8.5 0.914 10 2 1 36 1 4.38 1.4 million – –
[72] – – – 48 – – 50 5 3.8–5.53 [kg/s] 0.45 million k–ε ANSYS CFX™
[39] 0.6 1 0.6 7 2 7 ??? 1 3 216,000 k–ε FloVENT v9.2
[41] 363 m2 (height = 3 m) – 0.3 102 – 59 7 5.5 2.3 million k–ε FloVENT V8.2
[1] 10.97 4.27 0.3048 22 – 15 25 1 4.72 – – –
[107] – – 0.3 20 – – 50 4 – – k–ε Flotherm

Table 2
Raised floor plenum (RFP) data center numerical modeling detailed summary-2.

Author, Data center layout [m] Perforated tiles CRAC units Modeling
year
Length Ceiling Width RFP Racks Rack Perforated % Open CRAC Ind. flow rate Grid cells Turbulence Commercial
[m] height [m] [m] depth [m] height [m] tiles area units [m3/s] model code

[15] 13.42 3.348 6.05 0.3 20 – – 50 4 4.71 230,000 k–ε Flotherm V8.2™
[42] 30 2.34 12.1 75 – – 50 8 – 1.1 million – ANSYS CFX™
[62] 15.24 3.05 5.49 0.43 1 2 3 27 1 4.81 130,000 k–ε Flotherm
[16] 42.7 3.048 48.8 0.457 213 – 256 – 23 5.85 – k–ε –
[63] 23.2 2.74 32.2 0.711 125 – – 40 12 6.46 1.5 million k–ε –
[17] 12.2 3.048 6.7 0.6 12 – 12 30 1 1.12–4.74 116,000–145,000 k–ε Flotherm 7.2™
[10] 12.12 3.048 6.7 0.61 12 – 12 30 1 4.53 160,000 – Flotherm 5.1™

Table 3
Summary of thermal/numerical modeling of data centers reported in the literature.

Configuration/parameters Nature of work/study Observations/conclusions References

Data center layout [m] First modeling of DC using flow network modeling 1. The flow distribution can be altered by varying [7]
Length ¼ 7.27; width ¼ 7.87; RFP depth¼ 0.457; and CFD to predict perforated tile flow rates. the percent open area of the tiles.
perforated tiles ¼14; % open area¼25, 60; 2. With no obstructions in the plenum, the only
CRAC units ¼2; Ind. flow rate [m3/s] ¼2.48 airflow resistance in the loop caused by the
Modeling: Grid cells ¼ 48,000; grid cells/ pressure drop across the perforated tiles, making
m3 ¼7343; commercial code ¼ Compact the assumption of a uniformly pressurized
plenum valid.

Model 1: Data center layout [m] Numerical model and A two dimensional (depth- The first model failed to capture sharp variations [7],[8]
Length ¼ 20; width ¼ 6.06; RFP depth¼ 0.284; averaged) computational model was developed. A between neighboring tiles and in some cases did
perforated tiles ¼30–76; % open area¼ 19; fully three dimensional model of the same facility not resolve experimentally measured reversed
CRAC units ¼2; commercial code ¼ Compact also been developed and both the models flow.
Model 2: Data center layout [m] compared with the measurements. Second model showed improved agreement with
Length ¼ 20; width ¼ 6.06; RFP depth¼ 0.292; experimental measurements for various
perforated tiles ¼30–76; % open area¼ 19; perforated tile layouts, but also failed to predict
CRAC units ¼2; Ind. flow rate [m3/s] ¼5.78 the reversed flow in many tile configurations.

Data center layout [m] The minimum distance between a row of The results showed that reversed flow may occur [7]
Perforated tiles ¼5; % open area¼ 25; CRAC perforated tiles and the front of a CRAC unit was up to 4 tiles away from the CRAC unit, in agreement
units ¼ 1; Ind. flow rate [m3/s] ¼ 5.71 numerically investigated and Special treatments with the experimental measurements.
Modeling: Grid cells ¼ 66,490; grid cells/ were introduced to simulate the CRAC unit
m3 ¼15,954; turbulence model ¼ k–ε, k–ω; discharging into an infinitely large plenum without
commercial code ¼ Fluent constructing an excessively large numerical model.

Data center layout [m] Experimentally investigated the leakage flow, or Were able to produce predictions with an overall [22]
Length ¼ 13.41; width ¼4.88; RFP portion of total CRAC airflow that sweeps through accuracy of 90%.
depth¼ 0.419; perforated tiles ¼2–20; % open the seams of the raised floor tiles. Distributed the
area¼16; CRAC units ¼ 1 leakage flow uniformly throughout the perforated
Modeling: Turbulence model ¼ k–ε; tile seams and modeled chilled water supply lines.
commercial code ¼ TileFlow
Data center layout [m] Over 240 CFD models were analyzed to determine It showed Perforated tile type and the presence of [21]
10 different layouts the impact of data-center design parameters on plenum obstructions had the greatest potential
Length ¼6.40; RFP depth¼ 0.30–0.91; perforated tile airflow uniformity. And the CFD influence on airflow uniformity.
perforated tiles ¼ 28–187; % open model was verified by comparison to experimental
area¼ 25,56; CRAC units ¼ 2–18 test data.
Modeling: Grid cells/m3 ¼ 297–64,000;
turbulence model ¼ k–ε; commercial
code ¼Flovent

Data center layout [m] Presented a Computational Fluids Dynamics model The predicted flow rates through the perforated [29]
Length ¼39.60; width ¼22.30; RFP for calculating airflow distribution through tiles were in good agreement with the measured
depth ¼0.76; perforated tiles ¼ 352; % open perforated tiles in raised-floor data centers keeping values. The model gave an insight into the physical
area¼ 25CRAC units ¼11; Ind. flow rate [m3/ the assumption that, relative to the plenum, the processes that control the distribution of airflow
s] ¼ 5.85 pressure above the tiles was uniform. through the perforated tiles.
Modeling: Grid cells ¼136,770; grid cells/
m3 ¼ 815; turbulence model ¼k–ε;
commercial code ¼ TileFlow

Facility geometry [m] The open area of the perforated tiles, the locations The one-dimensional model used to predict the [19]
Length ¼11.67; width ¼8.53; RFP depth¼ 0.60; and flow rates of the computer room air lateral flow distributions for two possible
ceiling height¼ 3.05; rack: racks¼ 28; power conditioner (CRAC) units, and the size and location arrangements of the CRAC units, and these results
[kW] ¼ 12; flow rate [m3/s] ¼0.68; CRAC: of the under-floor obstructions like cables and were compared with those given by a three-
CRAC units ¼ 4; Ind. flow rate [m3/s] ¼ 4.72; pipes, the effect of these parameters on the airflow dimensional model. This case study showed that
perforated tiles ¼ 28; cooling scheme: distribution was studied using an idealized one- the one-dimensional model can be an effective
supply¼ RFP; return¼ STD, OH; dimensional computational model. design tool, especially for obtaining qualitative
orientation ¼ Nþ A; global parameters: trends and for evaluating alternative designs.
power density [W/ft2]¼ 314; Mrack/
Mcrac ¼ 1.01; Qrack/Qcrac ¼ 0.88
Modeling: Grid cells  1000¼ 450; grid cells/
m3 ¼ 1239; commercial code ¼ Flovent

Facility geometry [m] Showed an overview of a data center cooling Layout change was made by virtue of numerical [87]
Length ¼13.40; width ¼ 6.05; RFP depth¼ 0.00; design and presented the results of a case study. modeling to avail efficient use of air conditioning
ceiling height¼ 2.44–3.05; rack: racks¼ 20; resources.
power [kW] ¼4–2p; flow rate [m3/s] ¼ 0.325–
0.975; CRAC: CRAC units ¼2; Ind. flow rate
[m3/s] ¼ 3.25–9.75; perforated tiles ¼ 20;
cooling scheme: supply¼RFP; return¼STD;
orientation ¼ A; global parameters: power
density [W/ft2] ¼ 92–275; Mrack/Mcrac ¼ 0.8–
0.2
Modeling: Grid cells  1000¼ 135; grid cells/
m3 ¼ 546–682; turbulence model ¼ k–ε;
commercial code ¼ Flotherm

Facility geometry [m] Used slightly different model geometry to evaluate Improved thermal performance was demonstrated [38]
Length ¼11.67; width ¼8.53; RFP depth¼ 0.60; the effect of hot and cold aisle width, top rack to when the cold aisle width was increased in the
ceiling height¼ 3.48; rack: racks¼ 28; power ceiling height and distance between racks and room return case and when the hot aisle width was
[kW] ¼ 12; flow rate [m3/s] ¼0.034 m3/s; facility walls of a data center. decreased in the overhead return case.
CRAC: CRAC Units ¼ 4; Ind. flow rate [m3/s] ¼
4.72; cooling scheme: supply¼RFP;
return¼ ceiling return; global parameters:
power density [W/ft2]¼ 313
Modeling: Grid cells1000¼ 450

Facility geometry [m] Focused on the effect on rack inlet air temperatures For the lower tile flows and the more [10]
Length ¼13.40; width ¼6.05; RFP depth¼ 0.15– as a result of maldistribution of airflows exiting the maldistributed the flow, the lower the average rack
0.60; ceiling height ¼2.74; rack: racks ¼20; perforated tiles located adjacent to the fronts of the inlet temperatures. For higher tile flow, the
power [kW] ¼4–12; flow rate [m3/s] ¼ 0.325– racks. maldistribution did not have as large of an effect at
0.975; CRAC: CRAC units ¼2; perforated the highest rack locations.
tiles ¼20; % open area¼ 25, 60; cooling
scheme: supply¼RFP; return¼STD;
orientation ¼ A; global parameters: power
density [W/ft2] ¼ 92–275; Mrack/Mcrac ¼ 0.8–
1.2
Modeling: Grid cells  1000¼ 135; grid cells/
m3 ¼ 608; turbulence model ¼ k–ε;
commercial code ¼ Tileflow

Facility geometry [m] Effect of plenum depth, floor tile placement, and A multi-objective optimization study with equal [113]
Length ¼11.58; width ¼6.10; RFP depth ¼0.61; ceiling height on the rack inlet air temperature was weighting for the above 3 dimensional parameters
ceiling height¼ 3.05; rack: racks¼ 24; power discussed on a raised floor data center with 12 kW showed the minimum rack inlet temperature
[kW] ¼ 4–5; flow rate [m3/s] ¼ 0.327; CRAC: racks was considered. occurred at the maximum plenum depth (4 ft),
CRAC Units ¼ 2; Ind. flow rate [m3/s] ¼ 3.96; maximum ceiling height (20 ft) and perforated tiles
perforated tiles ¼ 24; % open area ¼ 30; located at the median value (8.75 ft), which need to
cooling scheme: supply¼RFP,OH; be weighed against the financial constraints on
return¼ STD,OH; orientation¼ N; global data center space.
parameters: power density [W/ft2]¼ 142;
Mrack /Mcrac ¼ 0.99; Qrack/Qcrac ¼ 0.57
Modeling: Grid cells  1000 ¼28; grid cells/ A multivariable approach to optimize data center
m3 ¼108; turbulence model ¼ k–ε; room layout to minimize the rack inlet air
commercial code ¼ Flowtherm temperature was proposed.

Facility geometry [m] Incorporated a model of an overhead rack-level The results showed the airflow rate had a greater [61]
Length ¼ 17.07; width ¼ 9.75; RFP depth¼ 0.61; supplemental refrigeration device and used CFD to effect on rack thermal performance than
ceiling height ¼ 3.66; CRAC: CRAC Units ¼2 compute the temperature distributions in a RFP refrigerant temperature or flow rate.
(5); Ind. flow rate [m3/s] ¼ 5.66(2.12); % open data center.
area¼32; cooling scheme:
supply¼ RFPþ OH; return¼ STDþ OH;
orientation ¼N
Modeling: turbulence model ¼ k–ε; commercial
code ¼ Flovent

Facility geometry [m] Proposed redistributing the computing workload By selectively implementing the workload [64]
Length ¼ 11.70; width ¼ 8.50; RFP depth¼0.60; rather than the cooling resources and present CFD- redistribution, reduced the maximum temperature
ceiling height ¼ 3.10; rack: racks ¼28; power based results and Patel et al. discussed the without affecting cooling demand.
[kW]¼ 15.75; CRAC units ¼4; cooling advantages for distributable cooling resources at
scheme: supply¼ RFP; return ¼ STD; various levels.
orientation ¼N þA; global parameters:
power density [W/ft2]¼ 412; Qrack/Qcrac ¼ 1.16
Modeling: Grid cells  1000 ¼434; Grid cells/
m3 ¼1179

Facility geometry [m] Presented a novel load spreading technique that Holistic approach for optimizing the placement of [49]
Length ¼ 18; width ¼15.9; ceiling height ¼2.7; allows upgrading of the computing equipment the upgraded computing equipment in the DC
racks ¼27; flow rate [m3/s] ¼4.725; CRAC with minimal thermal impact on the existing outperforms the conventional load spreading
units ¼ 6; perforated tiles ¼ 92 optimized DC cooling environment which was technique.
Modeling: Grid cells  1000 ¼627.076; based on an abstract heat-flow model of the DC,
turbulence model¼ SST; commercial whose parameters determined by performing
code ¼ ANSYS CFX measurement campaign in the DC and with
support of CFD.

The abbreviations STD, RFP and OH refer to the standard CRAC return, raised floor plenum supply, and overhead supply and return, respectively. A dash (–) indicates a range of parameters, while a plus sign (+) indicates a combination of different parameters. The cooling scheme orientation 'N' indicates that the CRAC unit exhaust is directed normal to the cold aisle, while 'A' indicates that it is aligned down the cold aisle. The global metric mrack/mCRAC is the ratio of the net rack flow rate to the net CRAC flow rate and serves as a measure of recirculation, while Qrack/QCRAC is the ratio of the net power dissipated by all the racks in the facility to the net cooling capacity of all the CRAC units and indicates the level of cooling provisioning.
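As a worked illustration of these two metrics, consider the facility listed above with 28 racks at 0.68 m3/s each and four CRAC units at 4.72 m3/s each: mrack/mCRAC = (28 × 0.68)/(4 × 4.72) = 19.0/18.9 ≈ 1.01, so the racks demand essentially all of the air the CRAC units supply, and any maldistribution must be made up by recirculated hot air. The reported Qrack/QCRAC = 0.88 for the same facility indicates that the installed cooling capacity exceeds the dissipated rack power by roughly 14% (1/0.88 ≈ 1.14).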
Table 4
Hypothetical data center as a case study [34].
Area (sq ft) 2000
Number of racks 50
Servers per rack 20
Total servers 1000
Server vintage (year) 2006
Average server power (W) 273
Average data center connected load (total of all IT) (kW) 273
Peak server power (W) 427
Annual IT energy use (kWh) 2,391,480
Annual energy loss in UPS to IT (kWh) 144,000
Annual energy loss in PDU (kWh) 96,000
Annual energy use in lighting/security (kWh) 60,000
Annual energy use in chiller plant (kWh) 450,000
Annual energy use in CRAC units (kWh) 700,000
Annual energy loss in UPS to cooling plant (kWh) 50,000
Total energy use (crossing the data center boundary) (kWh) 3,891,480
Total IT energy use (kWh) 2,391,480
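Although not listed explicitly, the power usage effectiveness implied by these figures follows directly from the last two rows: PUE = total facility energy / total IT energy = 3,891,480/2,391,480 ≈ 1.63, with the chiller plant, CRAC units and cooling-side UPS losses (1,200,000 kWh in total) accounting for roughly 31% of the facility's annual energy use.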
Table 5
Summary of numerical modeling of data centers reported in the literature, categorized by scope.

Author Year Alternative Control & Efficiency Layout Liquid Metrics Modeling Plenum Rack measurements Computational
cooling Lifecycle cooling

Siriwardana et al. 2012   


[49]
El-Sayed et al. [96] 2012  
Ghosh et al. [78] 2012    
Arghode et al. [77] 2012    
Bhagwat et al. [80] 2012    
Zimmermann et al. 2012    
[33]
Fakhim et al. [48] 2012    
Singh et al.[80] 2012  
Pedram [46] 2012 
El-sayed et al. [96] 2012 
Iyengar et al. [90] 2012    
Sarood [95] 2012 
David [91] 2012   
Iyengar et al. [88] 2012   
Wang et al.[98] 2011  
N. Rasmussen [86] 2011  
Fakhim et al. [74] 2011     
Wang et al. [98] 2011 
Wang et al. [83] 2011 
Li et al. [114] 2011  
Biswas et al. [82] 2011   
Meisner et al. [50] 2011 
Kumar et al. [79] 2011  
Abdelmaksoud et al. 2010 
[71]
Bhagwat et al. [80] 2010   
Umesh Singh et al. 2010     
[51]
Kumar et al. [13] 2010  
Samadiani et al. [12] 2010   
Iyengar et al. [17] 2010   
Samadiani et al. [62] 2010   
Schmidt et al.[75] 2010    
Shang et al. [115] 2010   
Wang et al. [99] 2010  
Patankar [1] 2010  
Banerjee et al. [100] 2010  
Ibrahim et al. [14] 2010  
Kasten et al. [92] 2010   
Abdelmaksoud et al. 2010    
[116]
Kelkar et al. [84] 2010   
Gondipalli et al. [16] 2010  
Pakbaznia et al. [53] 2009   
Shrivastava et al. 2009   
[72]
Varsamopoulos et al. 2009    
[54]
Kant [117], Review 2009
Patterson [55] 2008  
Hamann et al. [35] 2008   
Shrivastava et al. 2008  
[85]
Shrivastava et al. 2008  
[85]
Gondipalli et al. [18] 2008     
Samadiani et al. 2008  
[118]
Schmidt et al. [119] 2007
Review
Tang et al. [57] 2007    

References [4] Facilities MC, Spaces T, Equipment E. 2011 Thermal guidelines for data
processing environments – expanded data center; 2011. p. 1–45.
[5] Bash CE, Patel CD, Sharma RK. Efficient thermal management of data centers –
[1] Patankar SV. Airflow and cooling in a data center. J Heat Transf 2010;132 immediate and long-term research needs. HVAC R Res 2003;9(2011):137–52.
(7):073001. [6] Schmidt RR, Shaukatullah H. Computer and telecommunications equipment
[2] U.S.E.P. Agency. Report to Congress on Server and Data Center Energy room cooling: a review of literature. In: Proceedings of the eighth inter-
Efficiency Public Law 109-431; 2007. society conference on thermal and thermo-mechanical phenomena in
[3] U.S.E.I. Administration. International energy statistics; 2006. electronic systems (ITherm 2002) (Cat. No. 02CH37258); 2002. p. 751–66.
[7] Rambo J, Joshi Y. Modeling of data center airflow and heat transfer: state of [35] Hamann HF, Lacey JA, O’Boyle M, Schmidt RR, Iyengar M. Rapid three-
the art and future trends,. Distrib Parallel Databases 2007;21(2–3):193–225. dimensional thermal characterization of large-scale computing facilities.
[8] Schmidt R, Karki K, Patankar S. Raised-floor data center: perforated tile flow IEEE Trans Compon Packag Technol 2008;31(2):444–8.
rates for various tile layouts. In: Proceedings of the ninth intersociety [36] Hamann HF, van Kessel TG, Iyengar M, Chung J-Y, Hirt W, Schappert MA,
conference on thermal and thermo-mechanical phenomena in electronic Claassen A, Cook JM, Min W, Amemiya Y, Lopez V, Lacey JA, O’Boyle M.
systems (IEEE Cat. No. 04CH37543); 2004. p. 571–78. Uncovering energy-efficiency opportunities in data centers. IBM J Res Dev
[9] Schmidt R, Cruz E. Raised floor computer data center: effect on rack inlet 2009;53(3):10:1–12.
temperatures of chilled air exiting both the hot and cold aisles. In: Proceed- [37] Sun HS, Lee SE. Case study of data centers' energy performance. Energy Build
ings of the eighth intersociety conference on thermal and thermo- 2006;38(5):522–33.
mechanical phenomena in electronic systems (ITherm 2002) (Cat. No. [38] Sharma R, Bash C, Patel C. Dimensionless parameters for evaluation of
02CH37258); 2002. p. 580–94. thermal design and performance of large-scale data centers. In: Proceedings
[10] Schmidt R, Cruz E. Cluster of high-powered racks within a raised-floor of 8th AIAA/ASME Joint thermophysics and heat transfer conference; June
computer data center: effect of perforated tile flow distribution on rack inlet 2002. p. 1–11.
air temperatures. ASME 2004;126:510–8. [39] Sharma R, Bash C. Experimental investigation of design and performance of
[11] Bhopte S, Sammakia B, Schmidt R, Iyengar MK, Agonafer D. Effect of under data centers. In: Proceedings of the 2004 inter society conference on thermal
floor blockages on data center performance. In: Proceedings of the tenth phenomena, vol. 650; 2004. p. 579–85.
intersociety conference on thermal and thermo-mechanical phenomena in [40] Norota M, Hayama H, Enai M, Mori T, Kishita M, Research on efficiency of air
electronic systems (ITHERM 2006); 2006. p. 426–33. conditioning system for data-center. In: Proceedings of INTELEC’03; 2003. p.
[12] Samadiani E, Rambo J, Joshi Y. Numerical modeling of perforated tile flow 147–51.
distribution in a raised-floor data center. J Electron Packag 2010;132 [41] Uddin M, Rahman A. Energy efficiency and low carbon enabler green IT
(2):021002. framework for data centers considering green metrics. Renew Sustain Energy
[13] Kumar P, Joshi Y. Experimental investigations on the effect of perforated tile Rev 2012;16(6):4078–94.
air jet velocity on server air distribution in a high density data center. In: [42] Priyadumkol J, Kittichaikarn C. Application of the combined air-conditioning
Proceedings of the 12th intersociety conference on thermal and thermo- systems for energy conservation in data center. Energy Build 2014;68:580–6.
mechanical phenomena in electronic systems (ITherm 2010); 2010. [43] Cho J, Kim BS. Evaluation of air management system's thermal performance
[14] Ibrahim M, Gondipalli S, Bhopte S, Sammakia B, Murray B, Ghose K, Iyengar for superior cooling efficiency in high-density data centers. Energy Build
MK, Schmidt R. Numerical modeling approach to dynamic data center 2011;43(9):2145–55.
cooling. In: Proceedings of the 2010 12th IEEE intersociety conference on [44] Cho J, Yang J, Park W. Evaluation of air distribution system's airflow
thermal and thermo-mechanical phenomena in electronic systems; June performance for cooling energy savings in high-density data centers. Energy
2010. p. 1–7. Build 2014;68:270–9.
[15] Abdelmaksoud W, Khalifa HE, Dang TQ, Elhadidi, B, Iyengar RRSM. Experi- [45] Rambo J, Joshi Y. Thermal modeling of technology infrastructure facilities: a
mental and computational study of perforated floor tile in data centers. IEEE; case study of data centers. In: Handbook of numerical heat transfer, 2nd ed.
2010. p. 1–10. John Wiley & Sons, Inc.; 2009. p. 821–49.
[16] Gondipalli S, Ibrahim M, Bhopte S, Sammakia B, Murray, B, Ghose K, Iyengar [46] Pedram M. Energy-efficient datacenters. IEEE Trans Comput Des Integr
MK, Schmidt R. Numerical modeling of data center with transient boundary Circuits Syst 2012;31(10):1465–84.
conditions. In: Proceedings of the 2010 12th IEEE intersociety conference on [47] Lu T, Lü X, Remes M, Viljanen M. Investigation of air management and
thermal and thermo-mechanical phenomena in electronic systems; June energy performance in a data center in Finland: case study. Energy Build
2010. p. 1–7. 2011;43(12):3360–72.
[17] Iyengar M, Schmidt R, Caricari J. R,educing energy usage in data centers [48] Fakhim B, Srinarayana N, Behnia M, Armfield SW. Exergy-based performance
through control of room air conditioning units. IEEE; 2010. metrics to evaluate irreversibility in data centre environment airspace. In:
[18] Gondipalli S, Bhopte S, Sammakia B, Iyengar MK, Schmidt R, Effect of Proceedings of the seventh international conference on computational fluid
isolating cold aisles on rack inlet temperature. In: Proceedings of the 2008 dynamics (ICCFD7). Big Island, Hawaii; July 9–13, 2012.
11th intersociety conference on thermal and thermo-mechanical phenom- [49] Siriwardana J, Halgamuge SK, Scherer T, Schott W. Minimizing the thermal
ena in electronic systems; May 2008. p. 1247–54. impact of computing equipment upgrades in data centers. Energy Build
[19] Karki KC, Patankar SV. Airflow distribution through perforated tiles in raised- 2012;50:81–92.
floor data centers. Build Environ 2006;41(6):734–44. [50] Meisner D., Wenisch T. Does low-power design imply energy efficiency for
[20] Bedekar V, Karajgikar S, Agonafer D, Iyyengar M, Schmidt2 R. Effect of CRAC data centers?. IEEE; 2011. p. 109–14.
location on fixed rack layout. In: Proceedings of ITHERM’06; 2006. p. 421–25. [51] Singh U, Singh A. CFD-based operational thermal efficiency improvement of
[21] VanGilder J, Schmidt R. Airflow uniformity through perforated tiles in a a production data center. In: Proceedings of SustainIT’10; 2010. p. 1–7.
raised-floor data center. In: Proceedings of ASME Interpack conference; [52] Kien MM Le Ricardo Bianchini, Nguyen TD. Cost-and energy-aware load
2005. distribution across data centers; 2009.
[22] Radmehr A, Schmidt R, Karki K, Patankar S. Distributed leakage flow in [53] Pakbaznia E, Pedram M. Minimizing data center cooling and server power
raised-floor data centers. In: Proceedings of ASME InterPACK; 2005, p. 1–8. costs. In: Proceedings of thr 14th ACM/IEEE International conference on low
[23] Sharma R, Bash CE, Beitelmal A, Laboratories H, Road PM, Alto P. Thermal power electronic design – ISLPED’09; 2009. p. 145.
considerations in cooling large scale high compute density data centers; [54] Varsamopoulos G, Banerjee A, Gupta SKS. Energy efficiency of thermal-aware
2002. p. 767–76. job scheduling algorithms under various cooling models. Berlin Heidelb.:
[24] Patel CD, Bash CE, Belady C, Stahl L, Sullivan D. Computational fluid dynamics Springer-Verlag; 2009; 568–80.
modeling of high compute density data centers to assure system inlet air [55] Patterson MK. The effect of data center temperature on energy efficiency
specifications chandrakant. In: Proceedings of the Pacific Rim/ASME interna- power; 2008. p. 1167–74.
tional electronic packaging technical conference and exhibition; 2001. p. 1–9. [56] Jonas M., Varsamopoulos G, Gupta SKS. On developing a fast, cost-effective
[25] Patel C, Sharma R, Bash C, Beitelmal A. Thermal considerations in cooling and non-invasive method to derive data center thermal maps. In: Proceed-
large scale computer density data centers. In: Proceedings of the Itherm ings of the IEEE international conference on cluster computing; 2007. p. 474–
Conference; 2002. p. 767–76. 75.
[26] Awbi HB, Gan G. Predicting air flow and thermal comfort in offices,. ASHRAE [57] Tang Q, Gupta SKS, Varsamopoulos G. Thermal-aware task scheduling
J 1994;36(2):17–21. for data centers through minimizing heat recirculation. In: Proceedings
[27] Cinato P, Bianco C, Licciardi L, Pizzuti F, Antonetti M, Grossoni M. An of the 2007 IEEE international conference on cluster computing; 2007.
innovative approach to the environmental system design for TLC. In: p. 129–38.
Proceedings of Intelec98; 1998. [58] Tang Q, Mukherjee T, Gupta SKS, Cayton P. Sensor-based fast thermal
[28] Kiff P. A fresh approach to cooling network equipment, British Telecommu- evaluation model for energy efficient high-performance datacenters. In:
nications Engineering. London: Institution of British Telecommunications Proceedings of the 2006 fourth international conference on intelligent
Engineers; 1995. sensing and information processing; December 2006. p. 203–8.
[29] Karki K, Radmehr A, Patankar S. Use of computational fluid dynamics for [59] Schmidt RR, Cruz EE, Iyengar M. Challenges of data center thermal manage-
calculating flow rates through perforated tiles in raised-floor data centers. ment. IBM J Res Dev 2005;49(4.5):709–23.
HVAC&R Res 2003;9(2):153–66. [60] Herrlin M. Rack cooling effectiveness in data centers and telecom central
[30] Rambo J, Joshi Y. Physical models in data center airflow simulations. In: offices: the rack cooling index (RCI). Am Soc Heat Refrig Air-Con. Eng Inc
Proceedings of IMECE’03 – ASME international mechanical engineering 2005;111:1–11.
congress and R&D exposition. Washington D.C.; 2003. [61] Heydari A, Sabounchi P. Refrigeration assisted spot cooling of a high heat
[31] Hassan NMSMS, Khan MMKMK, Rasul MGG. Temperature monitoring and density data center. In: Proceedings of the 2004 inter society conference on
CFD analysis of data centre. Procedia Eng 2013;56:551–9. thermal phenomena; 2004. p. 601–06.
[32] Hammadi A, Mhamdi L. A survey on architectures and energy efficiency in [62] Samadiani E, Joshi Y. Proper orthogonal decomposition for reduced order
data center networks. Comput Commun 2013;40:1–41. thermal modeling of air cooled data centers. J Heat Transf 2010;132(7):
[33] Zimmermann S, Meijer I, Tiwari MK, Paredes S, Michel B, Poulikakos D. 071402.
Aquasar: a hot water cooled data center with direct energy reuse. Energy [63] Boucher T, Auslander D, Bash CE, Federspiel CC, Patel CD. Viability of
2012;43(1):237–45. dynamic cooling control in a data center environment; 2004. p. 593–600.
[34] Joshi Y, Kumar P. Energy efficient thermal management of data centers. New [64] Sharma R, Bash C. Balance of power: dynamic thermal management for
York: Springer; 2012 (p. 240, 248). internet data centers. IEEE Internet Comput 2005:42–9.
[65] Patel CD, Bash, CE, Sharma R, Beitelmal A, Malone CG. Smart chip, system [90] Iyengar M, David M, Parida P, Kamath V, Kochuparambil B, Graybill D,
and data center enabled by advanced flexible cooling resources hewlett- Schultz M, Gaynes M, Simons R, Schmidt R, Chainer T. Server liquid cooling
packard laboratories. In: Proceedings of the 21st IEEE SEMI-THERM sympo- with chiller-less data center design to enable significant energy savings. In:
sium. Palo Alto, California; 2005. Proceedings of the 2012 28th annul semiconductor thermal measurement
[66] Herold KE, Radermacher R. Integrated power and cooling systems for data and management symposium, vol. 1; March 2012. p. 212–23.
centers. In: Proceedings of the eighth intersociety conference on thermal and [91] David MP, Iyengar M, Parida P, Simons R, Schultz M, Gaynes M, Schmidt R,
thermomechanical phenomena in electronic systems, ITherm 2002 (Cat. No. Chainer T, Experimental characterization of an energy efficient chiller-less
02CH37258); 2002. p. 808–11. data center test facility with warm water cooled servers, 2012 28th annul
[67] Shah A, Krishnan N. Life cycle evaluation of combined heat and power semiconductor thermal measurement and management symposium; March
alternatives in data centers. In: Proceedings of the 2005 IEEE international 2012. p. 232–37.
symposium on electronics and the environment; 2005. p. 19–24. [92] Kasten P, Zimmermann S, Tiwari MK, Michel B, Poulikakos D. Hot water
[68] Lin M, Shao S, (Simon) Zhang X, VanGilder JW, Avelar V, Hu X. Strategies for cooled heat sinks for efficient data center cooling: towards electronic cooling
data center temperature control during a cooling system outage. Energy with high exergetic utility. Front. Heat Mass Transf 2010;1(2):1–10.
Build 2014;73:146–52. [93] Ebrahimi K, Jones GF, Fleischer AS. A review of data center cooling
[69] Timothy CDP, Boucher D, Auslander David M, Bash Cullen E, Federspiel technology, operating conditions and the corresponding low-grade waste
Clifford C. Viability of dynamic cooling control in a data center environment; heat recovery opportunities. Renew Sustain Energy Rev 2014;31:622–38.
2006. [94] Zachary Woodruff J, Brenner P, Buccellato APC, Go DB. Environmentally
[70] Bash CE, Patel CD, Sharma RK. Dynamic thermal management of air cooled opportunistic computing: a distributed waste heat reutilization approach to
data centers. In: Proceedings of the 10th intersociety conference on thermal energy-efficient buildings and data centers. Energy Build 2014;69:41–50.
and thermomechanical phenomena in electronic systems (ITHERM 2006); [95] Sarood O, Miller P, Totoni E, L. Kale. Cool load balancing for high performance
2006. p. 445–52. computing data centers. IEEE; 2012. p. 1–14.
[71] Hall L, Schmidt. R.R., Waleed MI Abdelmaksoud A., Ezzat Khalifa H, Dang [96] El-sayed N, Amvrosiadis G, Schroeder B, Hwang AA. Temperature manage-
Thong Q. Improved CFD modeling of a small data center test cell; 2010. p. ment in data centers: why some ( might ) like it hot; 2012. p. 163–74.
1–9. [97] Durand-Estebe B, Le Bot C, Mancos JN, Arquis E. Data center optimization
[72] Shrivastava SK, Iyengar M, Sammakia BG, Schmidt R, Vangilder JW. Experi- using PID regulation in CFD simulations. Energy Build 2013;66:154–64.
mental–numerical comparison for a high-density data center: hot spot heat [98] Wang L, Khan SU, Dayal J. Thermal aware workload placement with task-
fluxes in excess of 500 W/ft 2. IEEE Trans Comp Packag Technol 2009;32 temperature profiles in a data center. J Supercomput 2011;61(3):780–803.
(1):166–72. [99] Wang Z, Tolia N, Bash C. Opportunities and challenges to unify workload,
[73] Choi J, Kim Y, Sivasubramaniam A, Srebric J, Wang Q. Lee J. Modeling and power, and cooling management in data centers. ACM SIGOPS Oper Syst Rev
managing thermal profiles of rack-mounted servers with thermostat. In: 2010;44(3):41.
Proceedings of the 2007 IEEE 13th international symposium on high- [100] Banerjee A, Mukherjee T, Varsamopoulos G, Gupta Sandeep KS. Cooling-
performance computer architecture; 2007. p. 205–15. aware and thermal-aware workload placement for green HPC data centers.
[74] Fakhim B, Behnia M, Armfield SW, Srinarayana N. Cooling solutions in an IEEE; 2010.
operational data centre: a case study,. Appl Therm Eng 2011;31(14–15): [101] López V, Hamann HF. Heat transfer modeling in data centers. Int J Heat Mass
2279–91. Transf 2011;54(25–26):5306–18.
[75] Schmidt R, Iyengar M, Caricari J. Data center housing high performance [102] Jonas M, Gilbert, RR, Ferguson J, Varsamopoulos G, Gupta SKS. A transient
supercomputer cluster: above floor thermal measurements compared to CFD model for data center thermal prediction. In: Proceedings of the 2012
analysis. J Electron Packag 2010;132(2):021009. international green computing conference; June 2012p. 1–10.
[76] Rambo J, Joshi Y. Multi-scale modeling of high power density data centers. [103] Varsamopoulos G, Jonas M, Ferguson J, Banerjee J, Gupta SKS. Using transient
In: Proceedings of IPACK’03 – The pacific Rim/ASME international electronics thermal models to predict cyberphysical phenomena in data centers. Sustain
packaging technical conference and exhibition. Kauai, HI; 2003. Comput Inform Syst 2013;3(3):132–47.
[77] Arghode VK, Kumar P, Joshi Y, Weiss TS. Rack level modeling of air flow [104] Wang X, Wang X, Xing G, Chen J, Lin C-X, Chen Y. Intelligent sensor
through perforated tile in a data center. J Electron Packag 2013;135(3):1–11. placement for hot server detection in data centers. IEEE Trans Parallel Distrib
[78] Ghosh R, Sundaralingam V, Joshi Y. Effect of rack server population on Syst 2012;24(8):1577–88.
temperatures in data centers. In: Proceedings of the 13th intersociety [105] Moore J, Chase J, Ranganathan P, Sharma R. Making scheduling ‘cool’:
conference on thermal and thermomechanical phenomena in electronic temperature-aware workload placement in data centers. In: Proceedings of
systems; May 2012. p. 30–37. the USENIX Annual Technical Conference; 2005. p. 61–74.
[79] Kumar P, Sundaralingam V, Joshi Y. Effect of server load variation on rack air [106] Hall L, Khalifa HE, Schmidt RR. Transient thermal response of servers through
flow distribution in a raised floor data center. In: Proceedings of the 2011 air temperature measurements; 2014. p. 1–6.
27th annual IEEE semiconductor thermal measurement and management [107] Schmidt RR. Analytical modeling of energy consumption and thermal
symposium; March 2011. p. 90–96. performance of data center cooling systems: from the chip to the environ-
[80] Bhagwat H, Singh A, Vasan A, Sivasubramaniam, A. Thermal influence ment, Paper No. IPACK2007-33924, p. 877–886, http://dx.doi.org/10.1115/
indices: causality metrics for efficient exploration of data center cooling. IPACK2007-33924.
IEEE; 2012. [108] Arghode VK, Kumar P, Joshi Y, Weiss T, Meyer G. Rack level modeling of air
[81] N. Rolander, An approach for the robust design of air cooled data center flow through perforated tile in a data center. J Electron Packag 2013;135(3):
server cabinets (MS thesis), 2005, George W. Woodruff School of Mechanical 030902.
Engineering, Georgia Institute of Technology, Atlanta. [109] Hall L, Khalifa HE. Room-level transient cfd modeling of rack shutdown;
[82] Biswas S, Tiwari M. Fighting fire with fire: modeling the datacenter-scale 2014. p. 1–8.
effects of targeted superlattice thermal management. In: Proceedings of the [110] Alkharabsheh SA, Park T. Numerical steady state and dynamic study in a data
38th annual symposium on computer architecture (ISCA); June 4–8, 2011. center using calibrated fan curves for cracs and servers; 2014. p. 1–12.
San Jose, California, USA. p. 331–40. [111] Alkharabsheh S, Ibrahim M, Shrivastava S, Schmidt R, Sammakia B. Transient
[83] Wang Y, Wang X, Zhang Y. Leveraging thermal storage to cut the electricity analysis for contained-cold-aisle data center. In: Proceedings of the ASME
bill for datacenter cooling. In: Proceedings of the 4th workshop on power- 2012 international mechanical engineering congress & exposition,
aware computing and systems – HotPower’11; 2011. p. 1–5. IMECE2012. p. 1–8.
[84] Kelkar,KM, Patankar, SV, Kang SS, Iyengar M, Schmidt RR. Computational [112] Alkharabsheh S, Ibrahim M, Shrivastava S, Schmidt R, Sammakia B. Transient
method for generalized analysis of pumped two-phase cooling systems and analysis for contained-cold-aisle data center. In: Proceedings of the ASME
its application to a system used in data-center environments. In: Proceedings 2012 international mechanical engineering congress & exposition
of the 2010 12th IEEE intersociety conference on thermal and thermome- IMECE2012. p. 1–8.
chanical phenomena in electronic systems; June 2010. p. 1–11. [113] Bhopte S, Agonafer D, Schmidt R, Sammakia B. Optimization of data center
[85] Shrivastava SK, VanGilder JW, Sammakia BG. Optimization of cluster cooling room layout to minimize rack inlet air temperature. J Electron Packag
performance for data centers. In: Proceedings of the 2008 11th intersociety 2006;128(4):380.
conference on thermal and thermomechanical phenomena in electronic [114] Li L, Liang C, Liu J, Nath S, Terzis A, Faloutsos C. Thermocast: a cyber-physical
systems; May 2008. p. 1161–66. forecasting model for data centers. In: Proceedings of KDD; 2011. p. 1370–78.
[86] Rasmussen N. Cooling strategies for ultra-high density racks and blade [115] Shang Y, Li D, Xu M. Green routing in data center network: modeling and
servers. APC White Pap. 46. http//www. apcc. com/prod.; 2006. algorithm design, asiafi.net; 2010.
[87] Patel C, Sharma R. Thermal considerations in cooling large scale high [116] Abdelmaksoud WA, Khalifa HE, Dang TQ, Elhadidi B, Hall L. Experimental and
compute density data centers. IEEE; 2002. p. 767–76. computational study of perforated floor tile in data centers; 2010. p. 1–10.
[88] Iyengar M, Schmidt R. Energy efficient economizer based data centers with [117] Kant K. Data center evolution. Comput Netw 2009;53(17):2939–65.
air cooled servers. IEEE; 2012. [118] Samadiani E, Joshi Y, Mistree F. The thermal design of a next generation data
[89] Almoli A, Thompson A, Kapur N, Summers J, Thompson H, Hannah G. center: a conceptual exposition. J Electron Packag 2008;130(4):041104.
Computational fluid dynamic investigation of liquid rack cooling in data [119] Schmidt R., Iyengar, M. Best practices for data centre thermal and energy
centres. Appl Energy 2012;89(1):150–5. management-review of literature, ASHRAE Trans, 2007, p. 206.