ASHRAE 90.4 Review
The data center industry is fortunate to have many dedicated professionals volunteering
their time to provide expertise and experience in the development of new guidelines, codes,
and standards. ASHRAE, U.S. Green Building Council, and The Green Grid, among others,
routinely call on these subject matter experts to participate in working committees with the
purpose of advancing the technical underpinnings and long-term viability of the
organizations' missions. For the most part, the end goal of these working groups is to
establish consistent, repeatable processes that will be applicable to a wide range of project
sizes, types, and locations. For ASHRAE, this was certainly the case when it came time to
address the future of the ASHRAE 90.1: Energy Standard for Buildings Except Low-Rise
Residential Buildings vis-à-vis how it applies to data centers.
ASHRAE Standard 90.1 and data centers
ASHRAE 90.1 has become the de facto energy standard for U.S. states and cities as well as
many countries around the world. Data centers are considered commercial buildings, so the
use of ASHRAE 90.1 is compulsory to demonstrate minimum energy conformance for
jurisdictions requiring such. Specific to computer rooms, ASHRAE 90.1 has evolved over the
last decade and a half, albeit in a nonlinear fashion. The 2001, 2004, and 2007 editions of
ASHRAE 90.1 all have very similar language for computer rooms, except for humidity
control, economizers, and how the baseline HVAC systems are to be developed. It is not
until the ASHRAE 90.1-2010 edition where there are more in-depth requirements for
computer rooms. For example, ASHRAE 90.1-2010 contains a new term, "sensible coefficient of performance" (SCOP), an energy benchmark used for computer and data processing room (CDPR) air conditioning units. SCOP is calculated by dividing the net sensible cooling capacity (in watts) by the input power (in watts). The definition of SCOP and the
detail on how the units are to be tested comes from the Air Conditioning, Heating, and
Refrigeration Institute (AHRI) in conjunction with the American National Standards Institute
(ANSI) and was published in AHRI/ANSI Standard 1360: Performance Rating of Computer
and Data Processing Room Air Conditioners.
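To make the arithmetic concrete, the short Python sketch below computes SCOP exactly as defined above, dividing net sensible cooling capacity in watts by input power in watts. The figures in the example are illustrative only and are not AHRI 1360 rating data.

def sensible_cop(net_sensible_cooling_w, input_power_w):
    # SCOP as defined above: net sensible cooling capacity (W) / input power (W).
    if input_power_w <= 0:
        raise ValueError("input power must be positive")
    return net_sensible_cooling_w / input_power_w

# Illustrative numbers only (not AHRI 1360 test results): a CRAC unit delivering
# 105 kW of net sensible cooling while drawing 35 kW of input power.
print(round(sensible_cop(105_000, 35_000), 2))  # -> 3.0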
The release of ASHRAE 90.1-2013 added further clarification and requirements related to data centers, including information for sizing water economizers and the introduction of a new alternative compliance path using power usage effectiveness (PUE). As a part
of the PUE alternate compliance path, cooling, lighting, power distribution losses, and
information technology (IT) equipment energy are to be documented individually. But because the requirements related to IT equipment (ITE) listed in ASHRAE 90.1 were originally meant for server closets or computer rooms that consume only a fraction of a building's total energy, there were still difficulties in demonstrating compliance. Meanwhile, technology growth did not slow down; projects gradually began to include full-sized data centers with an annual energy usage greater than that of the building in which they are housed. Even with all the revisions and additions to ASHRAE 90.1 relating to data centers, there were still instances that proved difficult when applying ASHRAE 90.1 for energy-use compliance.
ASHRAE 90.4 gives the engineer a completely new method for determining compliance, introducing new terminology: the design and annualized mechanical load component (MLC) and the electrical loss component (ELC). ASHRAE is careful
to note that these values are not comparable to PUE and are to be used only in the context
of ASHRAE 90.4. The standard includes compliance tables consisting of the maximum load
components for each of the 19 ASHRAE climate zones. Assigning an energy efficiency target,
either in the form of a design or an annualized MLC, to a specific climate zone will certainly raise awareness of the inextricable link between climate and data center energy performance (see figures 1 and 2). Since strategies like using elevated temperatures in the
data center and employing different forms of economization are heavily dependent on the
climate, an important goal is to increase the appreciation and understanding of these
connections throughout the data center design community.
Design mechanical-load component
MLC can be calculated in one of two ways to determine compliance. The first is a summation
of the peak power of the mechanical components in kilowatts, as well as establishing the
design load of the IT equipment, also in kilowatts. ASHRAE 90.4 has a table of climate zones
with the respective design dry-bulb and wet-bulb temperatures that are to be used when
determining the peak mechanical system load. The calculation procedure is shown below. It
must be noted that when comparing the calculated values of design MLC, the analysis must
be done at both 100% and 50% ITE load; both values must be less than or equal to the
values listed in Table 6.2.1 (design MLC) in ASHRAE 90.4.
Design MLC = [cooling design power (kW) + pump design power (kW) + heat rejection design fan power (kW) + air handler unit design fan power (kW)] ÷ data center design ITE power (kW)
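As a rough illustration of the design MLC check, the Python sketch below applies the formula above and compares the result against a climate-zone maximum. All of the component powers and the Table 6.2.1 limit shown are hypothetical placeholders, not values from the standard.

def design_mlc(cooling_kw, pump_kw, heat_rejection_fan_kw, ahu_fan_kw, ite_design_kw):
    # Design MLC per the formula above: peak mechanical power (kW) / design ITE power (kW).
    return (cooling_kw + pump_kw + heat_rejection_fan_kw + ahu_fan_kw) / ite_design_kw

def design_mlc_complies(mlc_at_100, mlc_at_50, table_6_2_1_max):
    # The check is made at both 100% and 50% ITE load; both results must not
    # exceed the climate-zone maximum taken from Table 6.2.1.
    return mlc_at_100 <= table_6_2_1_max and mlc_at_50 <= table_6_2_1_max

# Hypothetical project numbers; the 0.40 limit is a placeholder, not a Table 6.2.1 value.
mlc_full = design_mlc(cooling_kw=180, pump_kw=25, heat_rejection_fan_kw=40,
                      ahu_fan_kw=55, ite_design_kw=1000)
mlc_half = design_mlc(cooling_kw=95, pump_kw=15, heat_rejection_fan_kw=22,
                      ahu_fan_kw=30, ite_design_kw=500)
print(round(mlc_full, 3), round(mlc_half, 3),
      design_mlc_complies(mlc_full, mlc_half, table_6_2_1_max=0.40))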
Annualized mechanical-load component
The concepts used for the annualized MLC path are similar to those for the design MLC, except that an hourly energy analysis is required when using the annualized MLC path.
This energy analysis must be done using software specifically designed for calculating
energy consumption in buildings and must be accepted by the rating authority. Some of the
primary requirements of the software include the dynamic characteristics of the data center,
both inside and outside. The following are some of the software requirements used in the
modeling:
Test in accordance with ASHRAE Standard 140: Standard Method of Test for the
Evaluation of Building Energy Analysis Computer Programs.
Able to evaluate energy-use status for 8,760 hours/year.
Account for hourly variations in IT load, which cascades down to electrical system
efficiency, cooling system operation, and miscellaneous equipment power.
Include provisions for daily, weekly, monthly, and seasonal building-use schedules.
Use performance curves for cooling equipment, adjusting power use based on
outdoor conditions as well as evaporator and condenser temperatures.
Calculate energy savings based on economization strategies for air- and water-based
systems.
Produce hourly reports that compare the baseline HVAC system to a proposed
system to determine compliance with the standard.
Calculate required HVAC equipment capacities and water- and airflow rates.
Since ASHRAE 90.4 categorizes compliance metrics based on climate zone, it is imperative that the techniques used in simulating the data center's energy use are accurate for the specific location of the facility. As such, the simulation software must perform the analysis using climatic data including hourly atmospheric pressure, dry-bulb, wet-bulb, and dew point temperatures, relative humidity, and moisture content. This data is available from different sources in the form of typical meteorological year (TMY2, TMY3) and EnergyPlus Weather (EPW) files that are used as input to the main simulation program.
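As a minimal sketch of how such a weather file feeds the analysis, the Python snippet below pulls the 8,760 hourly dry-bulb temperatures out of an EPW file using only the standard library. It assumes the conventional EPW layout (eight header lines followed by hourly rows with dry-bulb temperature as the seventh field); the file name is a placeholder.

import csv

def epw_dry_bulb(path):
    # Return the 8,760 hourly dry-bulb temperatures (deg C) from an EPW file,
    # assuming the conventional layout: 8 header lines, then hourly rows with
    # dry-bulb temperature in the seventh field (index 6).
    with open(path, newline="") as f:
        rows = list(csv.reader(f))[8:]
    return [float(row[6]) for row in rows]

# Placeholder file name for illustration.
temps = epw_dry_bulb("USA_IL_Chicago-OHare.epw")
print(len(temps), min(temps), max(temps))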
This compulsory hourly energy-use simulation considers fluctuations in mechanical system
energy consumption, particularly in cases where the equipment is designed for some type of
economizer mode, as well as energy reductions in vapor-compression equipment from
reduced lift due to outdoor temperature and moisture levels. This approach seems to be the
most representative of determining the energy performance of the data center, and since it
is based on already established means of determining building energy use (i.e., hourly
energy-use simulation techniques), it also will be the most understandable. Again, it must be
noted that when comparing the calculated values of annualized MLC, the analysis must be
done at both 100% and 50% ITE load; both values must be less than or equal to the values
listed in Table 6.2.1.2 (annualized MLC) in the ASHRAE standard. It also is important to note
that both the design and annualized MLC values are tied to the ASHRAE climate zones.
When energy use is calculated using simulation techniques, it becomes obvious that the
energy used has a direct correlation to the climate zone, primarily due to the ability to
extend economization strategies for longer periods of time throughout the year. If we
compare calculated annualized MLC values for data centers with the MLC values in ASHRAE
90.4, the ASHRAE requirements are relatively flat when plotted across the climate zones.
This means the calculated MLC values in this example have energy-use efficiencies that are
in excess of the minimum required by the standard (see Figure 7).
Annual MLC = [cooling design energy (kWh) + pump design energy (kWh) + heat rejection design fan energy (kWh) + air handler unit design fan energy (kWh)] ÷ data center design ITE energy (kWh)
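A minimal sketch of the annualized ratio is shown below, assuming the hourly kWh profiles come from an approved simulation program; the flat profiles used here are purely illustrative stand-ins for real 8,760-hour output at 100% and 50% ITE load.

def annualized_mlc(cooling_kwh, pump_kwh, heat_rejection_fan_kwh, ahu_fan_kwh, ite_kwh):
    # Annualized MLC per the formula above: annual mechanical energy (kWh)
    # divided by annual ITE energy (kWh), each summed over 8,760 hourly values.
    mechanical = (sum(cooling_kwh) + sum(pump_kwh)
                  + sum(heat_rejection_fan_kwh) + sum(ahu_fan_kwh))
    return mechanical / sum(ite_kwh)

# Illustrative flat hourly profiles; a real run would use the hourly output of an
# ASHRAE 140-tested simulation program at both 100% and 50% ITE load.
hours = 8760
mlc_100 = annualized_mlc([150]*hours, [20]*hours, [35]*hours, [45]*hours, [1000]*hours)
mlc_50 = annualized_mlc([80]*hours, [12]*hours, [20]*hours, [25]*hours, [500]*hours)
print(round(mlc_100, 3), round(mlc_50, 3))  # compare both against Table 6.2.1.2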
Design electrical-loss component
The ASHRAE 90.4 approach to calculating the ELC defines the electrical system efficiencies and losses. For the purposes of ASHRAE 90.4, the ELC consists of three parts of
the electrical system architecture:
1. Incoming electrical service segment
2. Uninterruptible power supply (UPS) segment
3. ITE distribution segment.
The segment for electrical distribution for mechanical equipment is stipulated to have losses
that do not exceed 2%, but is not included in the ELC calculations. All the values for
equipment efficiency must be documented using the manufacturer's data, which must be
based on standardized testing using the design ITE load. The final submittal to the rating
authority (the organization or agency that adopts or sanctions the results of the analysis)
must consist of an electrical single-line diagram and plans showing areas served by electrical
systems, all conditions and modes of operation used in determining the operating states of
the electrical system, and the design ELC calculations demonstrating compliance. Tables
8.2.1.1 and 8.2.1.2 in ASHRAE 90.4 list the maximum ELC values for ITE loads less than 100
kW and greater than or equal to 100 kW, respectively. The tables show the maximum ELC
for the three segments individually as well as the total.
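The bookkeeping behind the three ELC segments can be sketched as below. The segment efficiencies are hypothetical manufacturer figures at the design ITE load, and the comparison against the Table 8.2.1.1/8.2.1.2 maxima is only indicated, not reproduced from the standard.

def design_elc(service_eff, ups_eff, distribution_eff):
    # Loss bookkeeping for the three ELC segments named above: incoming electrical
    # service, UPS, and ITE distribution. Efficiencies multiply in series, so the
    # total loss of the chain is 1 minus the product.
    combined_eff = service_eff * ups_eff * distribution_eff
    return {
        "incoming_service_loss": 1.0 - service_eff,
        "ups_loss": 1.0 - ups_eff,
        "ite_distribution_loss": 1.0 - distribution_eff,
        "total_loss": 1.0 - combined_eff,
    }

# Hypothetical efficiencies at the design ITE load; each segment loss and the total
# would be compared against the applicable Table 8.2.1.1 or 8.2.1.2 maxima.
print(design_elc(service_eff=0.995, ups_eff=0.94, distribution_eff=0.985))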
The electrical distribution system's efficiency impacts the data center's overall energy
efficiency in two ways: the lower the efficiency, the more incoming power is needed to serve
the IT load. In addition, more air conditioning energy is required to cool the electrical
energy dissipated as heat. ASHRAE 90.4, Section 6.2.1.2.1.1, is explicit on how this should
be handled: "The system's UPS and transformer cooling loads must also be included in [the MLC], evaluated at their corresponding part-load efficiencies." The standard includes an
approach on how to evaluate single-feed UPS systems (e.g., N, N+1, etc.) and active dual-
feed UPS systems (2N, 2N+1, etc.). The single-feed systems must be evaluated at 100%
and 50% ITE load. The dual active-feed systems must be evaluated at 50% and 25% ITE
load, as these types of systems will not normally operate at a load greater than 50%.
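The load points at which the UPS losses are evaluated follow directly from the topology described above; a small helper to that effect might look like the following sketch (the topology labels are illustrative).

def ups_evaluation_loads(topology):
    # Single-feed systems (N, N+1, ...) are evaluated at 100% and 50% ITE load;
    # active dual-feed systems (2N, 2N+1, ...) at 50% and 25%, since each feed
    # normally carries no more than half of the load.
    if topology in {"N", "N+1"}:
        return (1.00, 0.50)
    if topology in {"2N", "2N+1", "2(N+1)"}:
        return (0.50, 0.25)
    raise ValueError(f"unrecognized topology: {topology}")

print(ups_evaluation_loads("N+1"), ups_evaluation_loads("2N"))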
Addressing reliability of systems and equipment
One of the distinctive design requirements of data centers is the high degree of reliability.
One manifestation of this is the use of redundant mechanical equipment. The redundant
equipment will come online when a failure occurs or when maintenance is required without
compromising the original level of redundancy. Different engineers use different approaches
based on their clients' needs. Some will design in extra cooling units, pumps, chillers, etc.
and have these pieces of equipment running all the time, cycling units on and off as
necessary. Other designs might have equipment to handle more stringent design conditions,
such as ASHRAE 0.4% climate data (dry-bulb temperatures corresponding to the 0.4%
annual cumulative frequency of occurrence).
And yet others will use variable-speed motors to vary water and airflow, delivering the
required cooling based on a changing ITE load. Since these design approaches are quite
different from one another, Table 6.2.1.2.1.2 in ASHRAE 90.4 provides methods for
calculating MLC compliance under these scenarios.
Performance-based approach
ASHRAE 90.4 uses a performance-based approach rather than a prescriptive one to
accommodate the rapid change in data center technology and to allow for innovation in
developing energy-efficient cooling solutions. Some of the provisions that seem to especially encourage innovative solutions include:
Onsite renewables or recovered energy. The standard allows for a credit to the
annual energy use if onsite renewable energy generation is used or waste heat is
recovered for other uses. Data centers are ideal candidates for renewable energy
generation, as the load can be constant through the course of the daytime and
nighttime hours. Also, when water-cooled computers are used with high-discharge
water temperatures, the water can be used for building heating, boiler-water
preheating, snow melting, or other thermal uses.
Derivation of MLC values. The MLC values in the tables in ASHRAE 90.4 are
considered generic to allow multiple systems to qualify for the path. The MLC values
are based on systems and equipment currently available in the marketplace from
multiple manufacturers. This is the benchmark for minimum compliance that must be
met. But ideally, the project would go beyond the minimum and demonstrate even
greater energy-reduction potential.
Design conditions. The annualized MLC values for air systems are based on a delta T
(temperature rise of the supply air) of 20°F and a return-air temperature of 85°F.
However, the proposed design is not bound to these values if the design
temperatures are in agreement with the performance characteristics of the coils,
pumps, fan capacities, etc. This provision from the standard gives the engineer a lot
of room to innovate and propose nontraditional designs, such as water cooling of the ITE.
Trade-off method. Sometimes mechanical and electrical systems have constraints
that may disqualify them from meeting the MLC or ELC values on their own merit.
The standard allows, for example, a less efficient mechanical system to be offset by
a more efficient electrical system and vice versa. Another benefit of using this
approach comes from the mechanical and electrical engineer having to collaborate by
going through an iterative, synergistic design process.
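As a simplified illustration of the trade-off idea in the last item, the sketch below assumes the comparison is made on the sum of the calculated MLC and ELC against the sum of the tabulated maxima; the actual combination procedure and tables in ASHRAE 90.4 govern, and the numbers shown are hypothetical.

def tradeoff_complies(calc_mlc, calc_elc, max_mlc, max_elc):
    # Simplified trade-off check: a shortfall on one component may be offset by
    # headroom on the other, so the combined calculated value is compared
    # against the combined maximum.
    return (calc_mlc + calc_elc) <= (max_mlc + max_elc)

# Hypothetical values: the mechanical system misses its MLC limit on its own,
# but a very efficient electrical chain makes the combination compliant.
print(tradeoff_complies(calc_mlc=0.43, calc_elc=0.08, max_mlc=0.40, max_elc=0.14))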
Publishing ASHRAE 90.4-2016 is a watershed moment—to date, there has not been a code-
ready, technically robust approach to characterize mechanical and electrical system designs
to judge conformance to an energy standard. This is no small feat, considering that data
center mechanical/electrical systems can have a wide variety of design approaches,
especially as the data center industry continues to develop more efficient ITE requiring novel means of power and cooling. And because ASHRAE 90.4 is a separate document from ASHRAE 90.1, the process of augmenting or revising 90.4 as computer technology changes should be less difficult. While certainly not perfect, ASHRAE 90.4 is a major step along the path of
ensuring energy efficiency in data centers.
Bill Kosik is a senior mechanical engineer at exp in Chicago. Kosik is a member of the
Consulting-Specifying Engineer editorial advisory board.
ASHRAE 90.4: Why This Data Center Standard Matters
Nicolas Sagnes | May 15, 2017
In September of 2016, the American Society of Heating, Refrigerating, and Air-Conditioning
Engineers (ASHRAE) published a new and improved standard that establishes the minimum
energy efficiency requirements for data centers.
ASHRAE 90.4-2016 has been in development for several years. Overall, this new standard
contains recommendations for the design, construction, operation, and maintenance of data
centers. Additionally, this standard focuses on the use of both on-site and off-site renewable
energy.
This standard explicitly addresses the unique energy requirements of data centers as opposed to standard buildings, thus integrating the more critical aspects and risks surrounding the operation of data centers.
What is New?
Standard 90.4 is a performance-based design standard built around the mechanical load component (MLC) and the electrical loss component (ELC). Once the MLC and ELC are calculated, they are compared to the maximum allowable values based on climate zones. Compliance with Standard 90.4 is achieved when the calculated values do not exceed the values contained in the standard. An alternative compliance path is provided to allow trade-offs between the MLC and ELC.
The absence of PUE in 90.4 allows the primary focus to be on energy consumption, rather
than efficiency.
PUE, as a simpler metric, represents efficiency. It allows data center operators to measure
the effectiveness of the power and cooling systems over time. However, PUE is quite limited, as it measures only the ratio between the energy consumed by the IT equipment and the energy consumed by IT and infrastructure combined. PUE isn't a useful tool for determining whether or not overall energy consumption has increased at the facility level.
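For reference, the ratio itself is simple to compute, as the sketch below shows; the energy figures are illustrative. Note that the same PUE can result from very different absolute consumption levels, which is exactly the limitation described above.

def pue(total_facility_kwh, it_equipment_kwh):
    # Power usage effectiveness (The Green Grid, v2): total annual facility
    # energy divided by annual IT equipment energy.
    return total_facility_kwh / it_equipment_kwh

# Illustrative numbers: 7.5 GWh total facility energy against 5.0 GWh of IT energy.
print(pue(7_500_000, 5_000_000))  # -> 1.5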
ASHRAE 90.4 intends to tackle and regulate lower performers, while being mindful of
geographic areas. The new standard aims to impact power utilization as a whole throughout
a data center facility, highlighting the impact of raising the temperature into the white space
on overall energy consumption.
Another key part of this standard is containment. Containment looks closely at the
homogeneity of a given air volume across a data center, thereby limiting the power loads necessary to overcome hotspots.
How Raritan Helps
As a key player in the most advanced data center efficiency and management practices,
Raritan is allowing end users across the largest facilities to leverage the granular capabilities
of our PX Intelligent Power Distribution Units to centralize key environmental metrics at the
rack and device levels.
Outlet-level metering in conjunction with temperature and humidity sensors is useful in determining whether or not the IT equipment is drawing more power, which is typically caused by fans accelerating due to a rise in temperature.
Creating links between causes and effects across the data center allows Raritan PX users to
comply with ASHRAE 90.4. Users get a clear picture of the containment (or lack thereof), as
well as the effect airflow has on the power chain at a granular level.
Leveraging this data ultimately gives PX users the ability to make insightful decisions about
implementing more efficient containment solutions with Legrand, and urges users to take
action with more effective cooling policies and load balancing across facilities.
Learn more about the PX series of intelligent PDUs for your data center, and let us know if
you have any questions or comments.
How to Design a Data Center Cooling System for ASHRAE 90.4
Liquid cooling
Liquid cooling systems provide an alternative way to dissipate heat from the system. This approach brings air conditioners, refrigerants, or chilled water close to the heat source.
Green cooling
Green cooling (or free cooling) is one of the sustainable technologies used in
data centers. This could involve simply opening a data center window covered
with filters and louvers to allow natural cooling techniques. This approach saves
a tremendous amount of money and energy.
Identifying the right combination of these cooling techniques can be challenging. Here's how
CFD simulation can make this task easier.
Case Study: Improving A Data Center Cooling System
Computational fluid dynamics (CFD) can help HVAC engineers and data center designers to
model a virtual data center and investigate the temperature, airflow velocity and pressure
fields in a fast and efficient way. The numerical analysis presents both 3D visual contouring
and quantitative data that is highly detailed yet easy to comprehend. Areas of complex
recirculating flow and hotspots are easily visualized to help identify potential design flaws.
Implementing different design decisions and strategies into the virtual model is relatively
simple and can be simulated in parallel.
Project Overview
For the purpose of this study, we used a simulation project from the SimScale Public
Projects Library that investigates two different data center cooling system designs, their
cooling efficiency, and energy consumption. It can be freely copied and used by any
SimScale user.
The two design scenarios are shown in the figure below:
The first design that we will consider uses a raised floor configuration, a cooling system that
is frequently implemented in data centers. When this technique is used, cold air enters the
room through the perforated tiles in the floor and in turn cools the server racks. Additionally,
the second model uses hot aisle containment and lowered ceiling configuration to improve
the cooling efficiency. We will use CFD to predict and compare the performance of the two
designs and determine the best cooling strategy.
CFD Simulation Results
Baseline Design of the Data Center Cooling System
We investigated the temperature distribution and the velocity field inside of the server room
for both design configurations. The post-processing images below show the velocity and
temperature fields for the midsection of the baseline design. It can be observed that the hot
air is present in the region of the cold aisle. This is due to mixing of the cold- and hot-aisle air within the data center room. The maximum velocity for this baseline design is 0.44 m/s, with a temperature range of 28.6 to 49.7 degrees Celsius.
The temperature contour shows that the zones between the two server racks are cooled much more than the others. The reasons for this can be understood by
looking at the flow patterns.
It is evident in the above image that in the server rows where inlets are present, the top of
the racks sees a descending flow direction, instead of the desirable ascending flow from the
inlets themselves. This is due to the strong recirculation currents driven by thermal
buoyancy forces. This effect is very undesirable, as it reduces the cooling effectiveness
specifically for the top shelves of the server racks. This effect could be minimized by
allowing for proper airflow above the racks, either by increasing the ceiling height, placing
more distributed outlets on the ceiling, or using some kind of active flow control system
(fans) to direct the flow above the server racks. Or more simply, by preventing the hot air
coming from the racks from freely circulating.
The temperature plot shows a significant temperature stratification which is to be expected
given the large recirculation currents. We can observe that only the lowermost servers are
receiving appropriate cooling.
Improved Design of the Data Center Cooling System
The next two pictures show the velocity and temperature distribution for the improved design scenario at the mid-section plane.
The velocity field shows that the flow is now driven to the outlets. This is due to the
presence of containment on top of the racks. This also results in a better temperature
distribution. The cold zones between the server racks are particularly extended.
The above image shows how the hot containment prevents the ascending flow from
recirculating back to the inlet rows. This results in a cleaner overall flow pattern compared
to what was seen in the previous design. It is also evident that the new design reduces
temperature stratification, particularly in the contained regions between the servers.
The average temperature calculated for each rack is lower for the improved design by
about 23%.
This is also reflected in the decrease in the amount of power that has to be supplied to the
server to prevent overheating. On average, energy savings of 63% for the data center
cooling system have been achieved.
Conclusions
A typical data center may consume as much energy as 25,000 households. With the rising
cost of electricity, increasing energy efficiency has become the primary concern for today's data center designers.
This case study was just a small illustration of how CFD simulation can help designers and
engineers validate their design decisions and accurately predict the performance of their
data center cooling systems to ensure no energy is wasted. The whole analysis was done in
a web browser and took only a few hours of manual and computing time.
References
W. V. Heddeghem et al., "Trends in worldwide ICT electricity consumption from 2007 to 2012," Computer Communications, vol. 50, pp. 64-76, Sep. 2014.
"Top 10 energy-saving tips for a greener data center," Info-Tech Research Group, London, ON, Canada, Apr. 2010, http://static.infotech.com/downloads/samples/070411_premium_oo_greendc_top_10.pdf
ASHRAE Standard 90.4-2016, Energy Standard for Data Centers, https://www.techstreet.com/ashrae/standards/ashrae-90-4-2016
Data Center PUE vs. ASHRAE 90.4
April 8, 2019 Julius Neudorfer
It has been almost three years since the
ASHRAE 90.4 Energy Standard for Data
Centers was finalized and went into effect in
2016, yet even today, many in the data
center industry are not fully aware of its
existence or its implications. Far more
people are familiar with The Green Grid
power usage effectiveness (PUE) metric,
first introduced in 2007, which started the
data center industry thinking about the
energy efficiency of the physical facility. Originally, PUE was based on snapshot power measurements (kW), which was one of the loopholes of the original version of the metric. In 2011, The Green Grid (TGG) updated the PUE metric to Version 2 (https://bit.ly/2usqKPj), basing it on annualized energy usage (kWh), which reflects a more meaningful efficiency picture under various operating conditions.
Its purpose was to help data center operators to baseline and improve their own facility.
PUE has been criticized by some since it only covers the energy efficiency of the facility (not the IT systems); however, that was clearly its stated purpose. Nonetheless, its underlying simplicity allowed managers to easily calculate (or guess) a facility's PUE, which drove its widespread adoption. In addition, the PUE metric helped prompt the U.S. EPA to create the
Energy Star program for Data Centers which became effective in 2010. This program
continues to be a voluntary award program and currently there are 152 Energy Star Certified
Data Centers listed at https://bit.ly/1V5Vafa.
PUE VS ASHRAE 90.4
PUE is considered the de facto metric by the data center industry, and in 2016 it became an ISO standard (ISO/IEC 30134). Yet, despite this, most building departments do not know much about data centers and have never heard of The Green Grid or the PUE metric. Nonetheless, while data centers may be different from an office building, like other buildings they still need to comply with any local and state building codes for safety and, more recently, for energy efficiency. In many areas of the U.S., "ASHRAE 90.1 Energy Standard for Buildings Except Low-Rise Residential Buildings" is referenced and incorporated as part of some state or local building codes. Data centers were previously more or less exempted in the 90.1 standard; however, as of 2010, data centers were included. There were complaints from the data center industry that it was too prescriptive, and it was updated in 2012 to try to address this issue. In 2016, "90.4 Energy Standard for Data Centers" was introduced, and subsequently 90.1-2016 transferred references to energy performance requirements for data centers to the newly issued 90.4 standard.
The other aspect of PUE is that, technically speaking, it is not a design metric. It is meant to measure, baseline, and continuously improve and optimize operating energy efficiency. Nonetheless, PUE has been used as a reference for building design goals before construction. It is also sometimes referenced in colocation contractual SLA performance or in energy cost schedules. In contrast, the ASHRAE 90.4 standard is primarily a design standard meant to be used when submitting plans for approval to build a new data center facility. It also covers facility capacity upgrades of 10% or greater, which could complicate some facility upgrades.
The energy calculation methodology of the 90.4-2016 standard is far more complex than the PUE metric. However, one of the issues with the PUE metric is that it does not have a geographic adjustment factor. Since cooling system energy typically represents a significant percentage of the facility energy usage, identically constructed data centers would each have a different PUE if one were located in Miami while the other was in Montana. But PUE originally provided, and still provides, a simple uniform number that makes it easy to understand and monitor efficiency, regardless of location.
The 90.4-2016 standard separated the electrical power chain losses from the cooling system
energy efficiency calculations. While primarily focused on cooling performance, the 90.4-
2016 standard also details and limits the total maximum electrical losses through the entire power chain, from the utility handoff, through the UPS and distribution system, and ending at the cabinet power strips feeding the IT equipment. Moreover, this is strictly prescribed by the "Electrical Efficiency Compliance Paths," with calculations and a table detailing specific limits on UPS and distribution losses for varying levels of redundancy (N, N+1, 2N, and 2(N+1)) at various operating load levels.
ASHRAE typically revises and updates its standard every three to four years, and publishes
proposed revisions for public comments, which can be found at http://osr.ashrae.org/. In
March, there were three proposed Addendums: f, g, and h, released concurrently and
posted for a 30-day public review. The first, addendum "f," is focused on UPS efficiency and is described as intended "to better align with current vintages of UPS technology in terms of performance and industry evolution." The original and proposed Maximum Design Electrical Loss Component (Design ELC) tables list UPS efficiency/loss at 100%, 50%, and 25% loads (per system or module, depending on the system design: N, N+1, 2N, etc.). For systems with ITE loads greater than 100 kW, the proposed revision substantially decreases the maximum allowable UPS losses from 9% to 6.5% (at 100% load), from 10% to 8% (at 50% load), and from 15% down to 11% (at 25% load). The newer UPS units are more efficient across a wider range of load levels, and many may achieve 93.5% efficiency (6.5% loss factor) at full load. However, it is more difficult to deliver 89% efficiency (11% loss factor) when operating at only 25% load.
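Using only the figures quoted above for ITE loads greater than 100 kW, a rough compliance check of a UPS part-load loss curve might look like the following sketch; the candidate loss curve is hypothetical.

# Maximum allowable UPS losses for ITE loads greater than 100 kW, as quoted above
# (original 90.4-2016 values vs. proposed addendum "f" values), keyed by load fraction.
UPS_MAX_LOSS_ORIGINAL = {1.00: 0.09, 0.50: 0.10, 0.25: 0.15}
UPS_MAX_LOSS_ADDENDUM_F = {1.00: 0.065, 0.50: 0.08, 0.25: 0.11}

def ups_complies(loss_by_load, limits):
    # Check a UPS part-load loss curve against the applicable limits.
    return all(loss_by_load[load] <= limit for load, limit in limits.items())

# Hypothetical UPS: 93.5% efficient at full load (6.5% loss), falling off at part
# load. It meets the original limits but misses the proposed 25%-load figure.
candidate = {1.00: 0.065, 0.50: 0.075, 0.25: 0.12}
print(ups_complies(candidate, UPS_MAX_LOSS_ORIGINAL),
      ups_complies(candidate, UPS_MAX_LOSS_ADDENDUM_F))  # -> True False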
It is the cooling system calculation section, known as the mechanical load component (MLC), that includes location as a factor for meeting cooling system energy compliance. It incorporates a table with 18 U.S. climate zones listed in ASHRAE Standard 169, each with an individual Maximum Annualized MLC compliance factor. In the proposed addendum "g" revision, for data centers with greater than 300 kW of ITE load, the maximum MLC compliance factor is substantially decreased for each climate zone (requiring less cooling system energy), which really kicks the requirements up a notch. In some zones, such as 4B, 5B, and 6B, the new maximum MLC would be reduced by as much as 50% to 60%.
The last item, addendum "h," has less impact on data centers since it is focused on wiring closets. However, it is an additional cooling efficiency factor that could be subject to scrutiny and would need to meet mandatory compliance requirements.
Ironically, the ASHRAE Thermal Guidelines for Data Processing Environments, which is
widely considered an industry bible by the majority of data center operators, is not legally
recognized by governmental agencies responsible for overseeing and enforcing building
codes related to the design and construction of buildings. It has undergone four revisions
since its inception in 2004, which had a very tight recommended environmental ITE
envelope. It was the third and fourth editions which promoted and endorsed cooling energy
efficiency by introducing the expanded allowable IT intake temperature ranges and also
broadening humidity ranges, which effectively negated the need for energy intensive tight
humidity control.
The Bottom Line
I have been a longstanding advocate of data center energy efficiency and have written and
spoken about it well before PUE was introduced. When 90.4 originally came out in 2016, I
wrote that it was about to "move your cheese." The proposed addendums that tighten the efficiency requirements, which will take effect in 2020, may move it a bit further. But is this
really necessary? Clearly, the designers of the newest facilities, especially the colocation providers and hyperscalers, are highly self-motivated to focus on energy efficiency. The massive shift toward colocation and cloud service providers has directly or indirectly made energy efficiency a competitive mandate and part of the justification for lowering TCO.
Nonetheless, many older data centers were designed with availability as the highest priority; efficiency was not given the same consideration as in modern designs. Moreover, it is also true that prior to the PUE metric, many organizations that owned their own data centers were not very aware of their facility energy efficiency. In some cases, the managers never saw, or were not responsible for, energy costs. However, while fewer enterprise organizations are building their own new sites, there are still many older sites in operation. As a consultant, I perform data center energy efficiency assessments and have seen older facilities that are still in good condition but unfortunately may have a PUE of 2 to 3, primarily due to the age of their electrical and cooling infrastructure.
While it is easy to simply recommend equipment upgrades, these are costly and payback
can be hard to economically justify. In addition, it is very difficult or impossible to replace
key, but inefficient, components without shutting down or disrupting the facility. Critical
elements, such as large chillers, cannot be upgraded, especially if there is limited or no
redundancy. Even today, I have found that in most instances a significant amount of cooling system energy can be saved in data centers through low-cost or no-cost fixes to basic airflow issues.
Fixing the basic low-hanging fruit, such as installing blanking plates and adjusting or relocating floor grilles, is non-disruptive and is within the capabilities of most in-house staff. This can solve most cooling issues, which allows raising temperatures to save energy. More importantly, these older sites typically have little or no visibility into how the facility infrastructure energy is consumed, other than seeing the total on their monthly utility bills. So if you are considering major equipment upgrades to an existing facility, now would be a good time to review ASHRAE 90.4-2016, as well as the pending addendums, to see if they apply to your project. Consider purchasing a DCIM system or a granular energy metering and monitoring system to continuously optimize cooling system efficiency before investing in expensive forklift upgrades.
Examining the Proposed ASHRAE 90.4 Standard
BY JULIUS NEUDORFER - APRIL 4, 2016
The potential impact: 90.4 will most likely increase the cost of preparing documentation during the design stage to submit plans for building department approvals.
The stated purpose of the proposed ASHRAE 90.4P standard is "to establish the minimum energy efficiency requirements of Data Centers for: the design, construction, and a plan for operation and maintenance, and utilization of on-site or off-site renewable energy resources." The scope covers "a) new Data Centers or portions thereof and their systems, b) new additions to Data Centers or portions thereof and their systems, and c) modifications to systems and equipment in existing Data Centers or portions thereof." It also states that the provisions of this standard do not apply to: a) telephone exchange(s), b) essential facility(ies), and c) information technology equipment (ITE).
Companies are becoming more and more technology-oriented and dependent on IT for almost all of their operations. This has led to growth in demand for data center services. As a result, data center managers are faced with the challenge of optimally utilizing available data center space to improve efficiency. One of the best approaches to this challenge is to consolidate the white space and gray space in the data center to improve operational capabilities.
First, let us understand the difference between white space and gray space in data centers: white space refers to the area where the IT equipment is placed, whereas gray space is the area where back-end infrastructure is located.
White space: servers, storage
Gray space: switchgear, UPS
Below are a few strategies to consolidate white space and gray space in order to improve data center operational efficiency and availability. These may effectively relieve the burden on data center managers.
Latest energy storage technology: Technologies like flywheels store energy as rotational momentum. During power fluctuations, the disk continues spinning because of this momentum, producing power that the UPS uses as brief emergency standby power. This reduces the number of batteries required, which in turn saves the space needed to store them.
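For a rough sense of the bridging time involved, the stored energy of a spinning rotor follows the usual E = 1/2 * I * omega^2 relation; the sketch below works that through for a hypothetical rotor and load, ignoring conversion losses and the minimum usable speed.

import math

def flywheel_ride_through_s(inertia_kg_m2, rpm, load_kw):
    # Kinetic energy stored in the rotor (E = 1/2 * I * omega^2) divided by the
    # load it must carry, ignoring conversion losses and minimum usable speed.
    omega = rpm * 2 * math.pi / 60  # rad/s
    energy_j = 0.5 * inertia_kg_m2 * omega ** 2
    return energy_j / (load_kw * 1000)

# Hypothetical rotor: 10 kg*m^2 spinning at 7,700 rpm bridging a 250 kW load.
print(round(flywheel_ride_through_s(10, 7700, 250), 1), "seconds")  # roughly 13 s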
White Space 69: New and improved
February 28, 2017 By: Max Smolaks
Now with 33 percent female presence
This week on White Space, we introduce a new member of the editorial team – please give
a warm welcome to Tanwen Dawn-Hiscox!
We start with the observation that liquid cooling is like a bus: you wait ages for news, and
then three announcements come at once. Swedish company Asetek has signed up a
mysterious data center partner while in the UK, Iceotope has partnered with 2bm. And in
Holland, Asperitas has unveiled an 'immersed computing' system that essentially sinks servers in giant tubs of Vaseline - cue the lubricant jokes.
There‘s an update on the lawsuit filed by British modular data center specialist Bladeroom
against Facebook - the company claims that the social network stole its designs and
deployed them in Lulea, Sweden. It is now obvious that the case will go to court.
Google has finally made Nvidia GPUs available to the Google Compute Platform customers,
becoming the last of the major cloud vendors to do so: the same GPUs were offered by
SoftLayer in 2015, and both AWS and Microsoft jumped on the bandwagon in 2016. What
makes Google's approach different is the intent to also deploy graphics chips from AMD, the
eternal underdog.
Meanwhile, the Dutch Data Center Association has published a report that claims Dutch colocation facilities contributed more than $1 billion to the country's GDP in 2016.
There‘s plenty of other content in the show: anecdotes, observations and travel notes. You
can also expect more news on Schneider Electric and AMD in the near future.