Establishing Benchmarks for Data Center Efficiency Measurements

Transcript

Slide 1
Welcome to Establishing Benchmarks for Data Center Efficiency Measurements

Slide 2
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the paperclip icon to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.

Slide 3
At the end of this course, you will be able to:
Discuss how data center operational theory can translate into practice
Identify key steps in the data center design / build process
Review how technology can facilitate the establishment of data center standards
Describe ways to better design, manage, and operate data centers

Slide 4
This course is based upon an actual case study involving a data center that was built and commissioned at
Carnegie Mellon University in Pittsburgh, Pennsylvania.

The agenda for this course is as follows:


We will begin by reviewing the background of this project and then proceed with a brief introduction, followed by a discussion of the mission of Carnegie Mellon’s Data Center Observatory project. We’ll move on to explore how the project started, the research practices utilized, and the design standards employed throughout the data center design / build project. We’ll also take some time to examine the operations methodology applied as well as the best practices developed. We’ll conclude with a brief summary.

Slide 5
Carnegie Technical Schools was founded in 1900 by Andrew Carnegie and became known as the Carnegie
Institute of Technology in 1912. In 1967, after merging with the Mellon Institute, the University adopted the
current name of Carnegie Mellon University. The University consists of seven colleges and schools: The
Carnegie Institute of Technology, the College of Fine Arts, the College of Humanities and Social Sciences,
the Mellon College of Science, the Tepper School of Business, the School of Computer Science and the H.
John Heinz III School of Public Policy and Management.

Slide 6
Recognized as one of the top-ranked national research universities, Carnegie Mellon houses 113 research
centers, 10,000 graduate and undergraduate students, 1,100 faculty members and over 2,700 staff
members.

The Parallel Data Lab (PDL), housed at Carnegie Mellon University, is a multi-disciplinary academic
research organization with strong ties to industry partners. Generally recognized as a premier storage
systems research center, the Parallel Data Lab’s industry sponsors include: Cisco, EMC, HP, Hitachi, IBM,
APC, Intel, Microsoft, Network Appliance, Oracle, Panasas, Seagate Technology and Symantec.

Slide 7
The Parallel Data Lab undertakes a number of research projects. The Data Center Observatory (DCO) is
one of the lab’s most ambitious projects. The DCO was conceived to serve as both a fully operational data
center and a research facility devoted to developing strategies for reducing the operational costs of data
centers. The purpose of this course is to review, in case study format, the data center design / build and
operations processes developed during the creation and operation of this unique functional laboratory.

Slide 8
Data centers require increasing amounts of energy and cooling resources to operate. According to the
Data Center Observatory team, an understanding of how to improve the efficiency and operation of data
centers is really in its infancy.

Greg Ganger, the director of the Parallel Data Lab, shares his assessment of the situation:
“For years, management of data centers has been problematic. It’s difficult for people running data
centers to quantify what the problem is. It’s a challenge to address a problem that isn’t well defined.
At the same time, power and cooling costs are eating people up.”

Ganger goes on to elaborate: “In today’s data center, the potential exists for much improvement in
the areas of data center management, performance, reliability, and availability.”

“The growth of data and the demand for even more complex data analysis pose significant
data center capacity and backup challenges. The only way to define the data
center efficiency problem is to characterize it while living and breathing it. That was one of the
principal motivators for creating the data center that houses our Data Center Observatory project.”

Slide 9
A second motivation for launching the project was to leverage the research data center as a hosting facility
in support of the computation and storage activities of several campus research organizations.

Parallel Data Lab Executive Director Bill Courtright shares his perspective:
“We looked around our university and saw that many research groups were building small, isolated
clusters of computers. These groups were interested in analyzing research data and not interested
in managing IT system clusters. Our vision was to build a data center that could support research
in data center operation, and have that same data center serve as a utility to the university.”
Courtright goes on to explain,

“By allowing the research groups to co-locate their small clusters into a larger shared cluster in our
data center, we could help the university to cut costs, create a more compelling cluster for
everyone to use, and also create an interesting research platform.”

Slide 10
The team’s goals are to document data center challenges and to experiment with ways to better assemble
and manage a data center. The team has begun to tabulate and monitor a variety of metrics, including data
center power consumption and heat management data, server and storage utilization data, as well as a
record and categorization of the human administration time spent to build and run a fully functional data
center.

Slide 11
When asked to comment, Greg Ganger responds:
“Given the magnitude of the problem, you’d think that more resources would be spent to try and
understand the issues. That is what our entire research project is all about. No one can figure out
how to address these problems by sitting at a desk. You need to get into the data center and
measure what the problem really is.”
He goes on to explain,
“Only then, can you figure out how to address and fix those problems. Then, you need to measure
whether the fixes really make any difference. In order to set the stage for the data center
experimentation, a new data center had to be built, one that would enable the easy deployment of
technology standards and the development of efficient operational practices.”

Slide 12
In December of 2004, data center space was allocated for the Data Center Observatory project. Although the new space had no supply of chilled water or other practical means of cooling, existing chiller plants were located at various sites around the campus. However, the Data Center Observatory team did not wish to own and operate a chiller plant; instead, they wanted to leverage the financial, management, and maintenance benefits of working with an existing facilities group.

Slide 13
A new private chilled water loop was tapped off of the existing campus loop to supply the building which
houses the data center. To shield the private loop from any pressure variations that could occur in the main
campus chilled water loop, the data center utilizes heat exchangers to couple the new private loop to the
main loop.

Slide 14
In addition, the electrical distribution needed to be extended from the main switchgear in the basement to a
new set of panels in the data center.

Slide 15
The team began its search for a partner who could help to design the data center power and cooling
infrastructure. With the processor densities and anticipated heat output projected to grow significantly over
the next five years, the team required a solution that was different from the traditional “stick it in the corner”
central UPS and room based air conditioners.

Slide 16
As Courtright elaborates:
“We are firm believers that the cooling has to get closer and closer to the heat source. We thought
the APC InfraStruXure® solution introduced a new concept that just made sense. It offered us the
possibility of hosting the kinds of high densities we wanted to experiment with.”

The team established a partner relationship with APC. Their decision to go with APC was based not just on the fact that APC could architect an efficient solution for the Data Center Observatory project, but also on their excitement about collaborating on the long-term data center enhancement research at its heart.
As Courtright goes on to explain:
“We needed support for the entire data center design cycle and technology rollout, as well as the
research endeavors, and APC was willing and able to provide that support.”

Slide 17
Since the main thrust was to study the overall efficiency of the data center, APC was keenly interested.

APC offered data center monitoring support, and direct access to APC engineering. The Data Center
Observatory team also sought a partner who would value the results of their research and make
recommendations regarding cooling or valve settings or any other daily data center operational issues. In
the eyes of the Data Center Observatory team, the partner dimension carried at least as much weight as
the ability to provide a technically sound solution.

Let’s take some time and explore how this team went about researching these issues, beginning with a
discussion of their research practices.

Slide 18
The data being gathered by the Data Center Observatory project team includes everything from
environmental and mechanical data—power usage at various points in the room, ambient temperature,
computer load, disk activity, humidity, chilled water temperatures—to the observation of human
administrators and how they spend their time operating the data center.

Courtright explains the rationale:


“If one considers the operational cost of a data center, the power and cooling infrastructure costs
are roughly the same as the hardware equipment costs. The administration costs, however, are
often four or more times greater than those of the hardware equipment.”

Slide 19
Utilization of individual machines, network data flow, and performance will also be tracked. The information gathered will periodically be published in research papers, presented at conferences, and published in journals.

An important goal for the team is to explore how to fairly and efficiently schedule jobs in a cluster. The ability to schedule jobs at the appropriate times and to map them to the appropriate machines—and to execute all of this within the context of reducing power and cooling consumption—will be tested and documented. This means that the job scheduling mechanism will also interact with automated control software for powering down machines to reduce energy consumption.
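To make the idea concrete, here is a minimal sketch of power-aware job placement, assuming a simplified model in which each job has a fixed CPU demand. The job names, machine capacities, and packing heuristic are illustrative assumptions, not the team’s actual scheduler.

```python
# Minimal sketch of power-aware job placement (illustrative assumptions, not
# the DCO team's actual scheduler): pack jobs onto as few machines as
# possible so that idle machines can be handed to power-down control software.

def schedule(jobs, machines):
    """Best-fit-decreasing packing: jobs and machines map names to CPU demand
    and CPU capacity; returns the placement and the machines left idle."""
    load = {m: 0.0 for m in machines}
    placement = {}
    for job, demand in sorted(jobs.items(), key=lambda kv: -kv[1]):
        fits = [m for m, cap in machines.items() if load[m] + demand <= cap]
        if not fits:
            raise RuntimeError(f"no machine has capacity for job {job}")
        target = max(fits, key=lambda m: load[m])   # prefer already-busy machines
        load[target] += demand
        placement[job] = target
    idle = [m for m in machines if load[m] == 0.0]
    return placement, idle

jobs = {"sim-a": 3.0, "analysis-b": 1.5, "backup-c": 0.5}    # cores required
machines = {"node1": 4.0, "node2": 4.0, "node3": 4.0}        # cores available
placement, idle = schedule(jobs, machines)
print("placement:", placement)
print("candidates for power-down:", idle)
```

Packing work onto fewer machines is what gives the automated control software idle machines to power down.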

Now let’s review how APC and the Data Center Observatory team partnered together to design the data
center for this project.

Slide 20
During the early stages of discussion, APC helped the Data Center Observatory team to mesh their
deployment schedule with a build-out schedule of four zones of InfraStruXure hot aisle containment. The
configuration included multiple rack-based 80 kW Symmetra® PX UPSs with self-contained, in-row, rack
based cooling and power distribution.

When questioned about the decision to deploy the InfraStruXure solution, Courtright replied:
“The APC InfraStruXure architecture was appealing to us because of its flexibility.”
He goes on to add:
“The solution was truly modular and scalable. As a result, we avoid investments in capital that are
unused for several years. Additionally, we are able to ride the technology curve and deploy more
efficient power and cooling solutions as they become available.”

Slide 21
The InfraStruXure design was selected because it provided the means of scaling and growing to reflect the
changing needs of the project. The InRow® precision cooling systems were selected for deployment in the
second zone. A design plan was established for the rest of the five year build out involving future products.
The window to the future permitted the team to design and engineer a solution that could support very high
processor densities.

The Data Center Observatory team also worked in conjunction with APC to create a floor plan for the project.
Let’s explore the various decisions made in establishing a workable floor plan, beginning with the cooling
options.

Slide 22
Data center operators have different cooling options. Rather than a traditional room-based approach or a
rack-based cooling approach, the team opted for in-row cooling. Room-based cooling was deemed
inefficient and inflexible. Rack-based cooling was eliminated as an option because of the scale of the data
center.

If the data center consisted of a small IT closet, then the team would have considered rack-based cooling.
For 40 racks in a room, the in-row cooling approach was deemed the most efficient option, and the most
capable of handling the forecasted heat densities.

Slide 23
The APC rack-based solution offered the option of overhead cabling, in addition to the closely coupled
cooling that the Data Center Observatory team required, and scalable self-contained rows of racks. The
racks themselves could support both the physical infrastructure equipment, such as UPS, cooling and
environmental monitoring systems, and the IT equipment.

Slide 24
In addition to the 2,000 square foot (186 square meter) data center, the team also has an adjacent 1,200 square foot (111 square meter) lab. All incoming gear is uncrated in the lab area. Equipment is also tested
in that space and then transported to the Data Center Observatory using a server lift.

The building that houses the data center was built with an 18-inch (45.7 cm) raised floor throughout. Some
of the traditional physical infrastructure solutions the team had considered, which distribute cool air via a
raised floor, would have required a 36-inch (91.4 cm) raised floor (twice the depth).

Slide 25
If a new, deeper raised floor were deployed in the Data Center Observatory, any movement of equipment
between the lab and the data center would have required a dock lift to be installed and would have likely
introduced safety issues, additional costs and floor space/work space consumption challenges.

The data center design/build process always involves tradeoffs between data center user preferences and
physical space and financial constraints. Let’s learn how the Data Center Observatory team worked through
this process.

Slide 26
The Data Center Observatory is housed in the Collaborative Innovation Center (CIC), a multi-tenant facility
that, although located on the CMU campus, includes non-university tenants as well. The philosophy of the
CIC is to bring the university, industry, and government together under one roof.

In the early stages, the DCO team held weekly meetings with campus facilities and APC to review
specifications of what they wanted in the 2,000 square foot (186 sq meter) space. APC, as the data center
physical infrastructure partner, was involved in the engineering phase, which also included an outside
engineering firm. The group developed all the plans for the room.

The Data Center Observatory team indicated the number of racks, servers, and other equipment that they thought would be needed. The discussion included energy densities and projected the kinds of load profiles anticipated for the next five years.

Slide 27
The engineering team provided feedback regarding the weight load that the floor could support as well as
the level of power that the building main panels could support.

In order to help preserve the appropriate humidity levels in the data center, the room was sealed with a
vapor barrier. However, for code reasons, a certain percentage of fresh air needed to be supplied to the
room.

The team therefore designed the data center to accommodate the fresh air changes based on the
assumption that no more than two full-time people would occupy the room at any given time. This required
that the team install a single perforated tile in each corner of the raised floor to supply fresh air.

Slide 28
Humidity was also an important parameter to measure and control in the data center. In the first few months
of operation, the team observed a correlation between the humidity of the building air and the power being
consumed in the data center—as the building’s humidity rose, additional power was consumed in order to
dehumidify the air in the data center.

Next, the team spent some time calculating the power capacity required to support the data center.

Slide 29
To compile an accurate picture of data center load requirements, the team analyzed existing gear at other
campus research lab sites to try and estimate how much power was being consumed.

The capacity planning exercise involved looking back at historical data regarding power consumption, and
then taking some measurements of the current systems.

In addition, the team consulted with industry experts in order to better understand industry trends. The gathering of this data served as a basis for formulating capacity forecast projections.

For every watt of power consumed by a computer, the team factored in another watt that would be consumed by the cooling required for that equipment.
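As a simple worked illustration of that watt-for-watt rule of thumb, consider the following sketch; the server counts and per-server wattage are invented placeholders, not the team’s projections.

```python
# Hypothetical capacity projection using the team's watt-for-watt rule of
# thumb: every watt of IT load implies roughly another watt of cooling load.
# Server counts and per-server draw are invented placeholders.

servers_per_year = [100, 180, 260, 340, 420]    # assumed install base, years 1-5
watts_per_server = 350                          # assumed average draw in watts

for year, count in enumerate(servers_per_year, start=1):
    it_load_kw = count * watts_per_server / 1000.0
    total_kw = 2 * it_load_kw                   # IT load plus matching cooling load
    print(f"Year {year}: IT {it_load_kw:.1f} kW, total {total_kw:.1f} kW")
```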

APC helped to guide the team during the capacity planning exercise based on the projected year-by-year
server counts. The flexibility of the InfraStruXure technology allowed the team to develop a more accurate
technology curve for powering and cooling the information technology systems.

Slide 30
The long term plan for the data center includes an estimate of 40 racks occupying four zones. The plan
projects differing power levels for computing racks and storage, and includes provisions for higher power
densities in later years.

This chart denotes the five-year projections as calculated by the team. It includes estimates for the total
number of racks, total power consumed in kilowatts, total weight the data center must withstand and the
total heat output.

Criticality requirements will drive availability and redundancy decisions, so let’s see how the team
established criticality levels.

Slide 31
The Data Center Observatory site is not staffed 24 x 7. From a criticality perspective, the research departments whose computers reside in the data center are not demanding “five 9s” of uptime. The Data Center Observatory does not support the payroll department, for example. The users are scientists who run experiments, perform data analysis, and use their computers intermittently. They log onto their systems, run experiments, and then log off.

In addition, the scientists’ research data can often be regenerated. The researchers’ applications are
therefore not as critical as typical business applications.

Slide 32
The criticality requirement of the data center, from an operational point of view, is to assure a quick,
automated, safe, shutdown to avoid equipment damage and data loss.

In the case of a major outage caused by either a loss of chilled water or loss of power, an operating window
has been established. A fixed length of battery runtime allows for a graceful shutdown of applications and
staging of data to disk prior to shutdown of systems.

Slide 33
In the event of a catastrophic loss of power or chilled water, the electrical and cooling systems were
designed to operate for four minutes when the Data Center Observatory is fully populated with its maximum
design load. Once a catastrophic failure is detected, an automated system is engaged to perform the
controlled shutdown within this four minute window.

With the four minute window in mind, the team has to ensure that enough chilled water can be circulated in
the Data Center Observatory’s private loop in case the main loop fails. Some of the pipes in the loop are
upsized to provide enough volume in the loop so that water can be circulated at full room load for four
minutes and still maintain cooling.
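A rough back-of-the-envelope check shows how such a volume requirement can be sized; the room load and allowable water temperature rise below are assumed values for illustration, not the DCO’s design figures.

```python
# Rough sizing of the water volume the private loop must hold to absorb the
# room's heat for the four-minute window. Load and allowable temperature
# rise are assumed values, not the DCO's actual design numbers.

P_watts = 200_000        # assumed full room load (200 kW)
t_seconds = 4 * 60       # required ride-through window
c_water = 4186           # specific heat of water, J/(kg*K)
dT_allowed = 10.0        # assumed allowable water temperature rise (K)

mass_kg = P_watts * t_seconds / (c_water * dT_allowed)   # m = P*t / (c*dT)
volume_m3 = mass_kg / 1000.0                             # ~1 kg per liter
print(f"~{mass_kg:.0f} kg of water (~{volume_m3:.2f} m^3) in the loop")
```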

Slide 34
A building management system monitors the water flow rates and temperatures as well as the status of the pumps. A pair of redundant pumps runs the local chilled water loop, which is connected to the university’s main chilled water loop through a pair of redundant heat exchangers.

As Courtright explains “We monitor whether the ‘A’ pump is running or whether the ‘B’ pump is running.
Additionally, we monitor whether a pump is running because the other pump is experiencing a problem.”
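A monitoring check along those lines might look like the following sketch; the status flags and messages are hypothetical, as the course does not detail the building management system’s actual interface.

```python
# Hypothetical sketch of the pump check Courtright describes: note which
# pump is running, and flag when one runs because the other has failed.

def check_pumps(a_running: bool, b_running: bool,
                a_fault: bool, b_fault: bool) -> str:
    if a_running and not b_running and b_fault:
        return "ALERT: pump A carrying load because pump B has failed"
    if b_running and not a_running and a_fault:
        return "ALERT: pump B carrying load because pump A has failed"
    if a_running or b_running:
        return "OK: normal operation"
    return "CRITICAL: no chilled water pump running"

print(check_pumps(a_running=False, b_running=True, a_fault=True, b_fault=False))
```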

Slide 35
The CIC building receives power from two independent feeds that are connected to the building through a
transfer switch. If the building experiences a problem on one feed, the transfer switch will automatically
engage the other feed.

The power from the main feed is then distributed to the Data Center Observatory, which in turn distributes it to all of the rack-mounted UPSs. If a rack-mounted power panel goes out, the UPS provides for safe shutdown.

Slide 36
One of the data center racks hosts critical networking switches and a key file server. All of the APC monitoring gear is also installed in that same rack. That particular rack is powered by a rack-mounted transfer switch that is connected to two different UPSs. In that way, if one of the UPSs needs to be taken down for service, the rack stays up. The data center itself has no access to a backup generator; however, the chilled water pumps are connected to a backup generator.

Fire is a special concern in any data center. Let’s review how the team addressed fire suppression.

Slide 37
A pre-action fire suppression system was installed when the building was being certified for occupancy. The system was then enhanced by the installation of a double interlock dry pipe pre-action system. For the system to engage, at least two of three events need to occur—an alarm handle pulled, a sprinkler head opened, or a smoke detector alarmed—before water can enter the pipe.

For example, if a sprinkler head is accidentally knocked off, the system will not engage. Similarly, if only the
alarm handle is pulled, the system will not engage—the alarm will sound but water will not flow into the pipe.
However, if the smoke detector and the alarm handle are both activated, water will flow into the pipes, but
won’t be dispersed until a head is opened.
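The narrated behavior reduces to a small piece of two-of-three interlock logic, modeled below purely for illustration; this is not actual suppression-system firmware.

```python
# Model of the double-interlock behavior described above: water enters the
# pipe only when at least two of the three events occur, and disperses only
# if a sprinkler head is also open. Illustrative logic, not real firmware.

def suppression_state(alarm_pulled: bool, head_open: bool, smoke_detected: bool):
    events = sum([alarm_pulled, head_open, smoke_detected])
    water_in_pipe = events >= 2
    water_dispersed = water_in_pipe and head_open
    return water_in_pipe, water_dispersed

# Head knocked off alone: no water enters the pipe.
print(suppression_state(alarm_pulled=False, head_open=True, smoke_detected=False))
# Smoke detector and alarm handle: pipe charges, but no dispersal until a head opens.
print(suppression_state(alarm_pulled=True, head_open=False, smoke_detected=True))
```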

Security in the data center is an ongoing concern, so let’s spend some time discussing how the project team
addressed security measures for the data center.

Slide 38
The team elected to use an access control system that was developed by CyLab, a world-renowned security
research center at Carnegie Mellon University. A computer- and phone-based system is connected to all doors that provide a data center access point. Each staff member can remotely enter an authorization code via a cell phone to open a door.

If an individual who is not normally authorized wants to enter the data center, the computer will route the
request to the security administrator’s cell phone. The administrator can choose to deny entry, authorize
entry for one particular instance only, or authorize entry for an unlimited number of times over an extended
period of time.
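The decision flow can be summarized in a short sketch; the function and grant types are hypothetical stand-ins, since the CyLab system’s real interface is not described in the course.

```python
# Hypothetical sketch of the door-access decision flow described above; the
# grant types mirror the narration (deny, one instance only, or extended).

AUTHORIZED = {"staff-alice", "staff-bob"}        # normally authorized staff

def request_entry(user, admin_decision=None):
    """admin_decision simulates the security administrator's phone response
    for individuals who are not normally authorized."""
    if user in AUTHORIZED:
        return True                              # staff code opens the door
    if admin_decision == "once":
        return True                              # entry for this instance only
    if admin_decision == "extended":
        AUTHORIZED.add(user)                     # authorized for an extended period
        return True
    return False                                 # default: deny entry

print(request_entry("staff-alice"))                     # True
print(request_entry("visitor"))                         # False: routed to admin
print(request_entry("visitor", admin_decision="once"))  # True, one time only
```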

Slide 39
The team is also evaluating an APC NetBotz video camera security system. This would provide the staff
with the ability to capture a snapshot or small amount of video every time a door into the data center swings
open.

Motion detectors are also being evaluated to help monitor activities within the data center. If an individual is
moving around in the room, the motion detector would enable that movement to be captured on video. In
this way, the team could go back and review historical data to try and link failures to possible movements or
equipment changes. If a network outage is experienced, for example, it would be easier to determine if an
individual was working in the network rack and whether human error was involved in the outage.

Increasing rack densities are a special concern for today’s data center operators, so let’s see how the Data Center Observatory project team elected to address them through adaptive power management techniques.

Slide 40
The team steadily tracks data regarding rack densities and heat management. Observations are made
regarding what changes occur over time.

The team is interested in measuring the effects of power density and the accompanying heat on the data
center physical infrastructure. One of the team’s goals is to find ways to limit the negative impacts of a denser data center environment. The research conducted by the Data Center Observatory team complements APC’s interest in developing enhanced data center infrastructure solutions.

Slide 41
The zones that partition the data center are an APC concept. Establishing zones helps to accommodate incremental growth.

The data center is partitioned into zones because APC’s concept of bringing the cooling as close to the load as possible is architected in that manner. Each zone consists of one self-contained, heat-enclosed space. Since the equipment that will occupy the various zones will be acquired over time, the team expects to observe that the density and power usage of each zone will be greater than that of the previous zone.

Slide 42
No decision was made ahead of time whether a particular zone would be high density and another would
not. The data center will grow incrementally over time as the need to expand manifests itself.

Currently, the lowest power density in the data center is 6 kW per rack and the highest is closer to 14 kW
per rack. The plan is to evaluate densities of 25 kW per rack or more.

The team is currently running rack clusters of 1U machines, 3U machines, and blade servers. One rack is
populated with blade servers. Almost all racks are full top to bottom.

Slide 43
The physical infrastructure is being evaluated to determine what kinds of information can be monitored and
to determine how the information can be fully leveraged to run the cooling systems more efficiently. An
example of this technique would be the automatic adjustment of such variables as fan speeds and valve
settings.
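As one example of what such automatic adjustment could look like, here is a simple proportional control sketch; the setpoint, gain, and speed limits are invented for illustration, and a real deployment would act through the cooling vendor’s management interface.

```python
# Illustrative proportional control of the kind described: nudge fan speed
# toward the value needed to hold a temperature setpoint. Setpoint and gain
# are invented; a real system would act through the vendor's interface.

SETPOINT_C = 24.0     # target return-air temperature (assumed)
GAIN = 4.0            # % fan speed per degree of error (assumed)

def adjust_fan(current_temp_c: float, fan_speed_pct: float) -> float:
    error = current_temp_c - SETPOINT_C
    new_speed = fan_speed_pct + GAIN * error
    return max(20.0, min(100.0, new_speed))   # clamp to a safe operating range

speed = 50.0
for temp in [24.0, 26.5, 25.0, 23.5]:         # simulated temperature readings
    speed = adjust_fan(temp, speed)
    print(f"temp {temp:.1f} C -> fan {speed:.0f}%")
```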

Solid operations management is essential to achieving efficiencies in the data center. Let’s review the
operational procedures established by the project team.

Slide 44
The team has developed a database that catalogs all of their data center equipment. The database contains
information regarding hardware, wiring / cabling, topology, maintenance history, operating system/software
version, as well as equipment name, location, and installation date. The information within the database
includes vendor specifications and troubleshooting information.

The outputs from this database help to run the data center team’s shutdown scripts, ensuring that the automated shutdown process is based upon the latest machine configuration information. The database identifies which particular machines will receive sequenced shutdown commands.
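A sketch of how database-driven sequenced shutdown could work appears below; the table schema, column names, and use of SSH are assumptions, since the course does not describe the team’s actual scripts.

```python
# Hypothetical sketch of a database-driven shutdown script: read the current
# machine inventory, then issue shutdown commands in sequence order. The
# schema and the use of SSH are assumptions, not the team's implementation.

import sqlite3
import subprocess

def sequenced_shutdown(conn, dry_run=True):
    rows = conn.execute(
        "SELECT hostname FROM equipment "
        "WHERE receives_shutdown = 1 ORDER BY shutdown_order"
    ).fetchall()
    for (host,) in rows:
        cmd = ["ssh", host, "sudo", "shutdown", "-h", "now"]
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=False)   # continue even if one host fails

# Demo against an in-memory stand-in for the team's equipment database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE equipment (hostname TEXT, receives_shutdown INT, shutdown_order INT)")
conn.executemany("INSERT INTO equipment VALUES (?, ?, ?)",
                 [("fileserver1", 1, 2), ("compute-node7", 1, 1), ("netbotz1", 0, 0)])
sequenced_shutdown(conn)   # dry run: prints the commands in sequence order
```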

Slide 45
Human administration time is tracked in a private database. For example, if a data center administrator spends two hours rewiring a network rack, that process is documented in the database. Information is summarized in two dimensions: the domain of work being performed (e.g., storage gear, computation gear, networking gear, physical infrastructure, etc.) and the activity type (e.g., installation, configuration, incident management).
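Summarizing time along those two dimensions is a simple tally; the entries below are made-up examples of the kind of records described.

```python
# Sketch of the two-dimensional time summary: total administrator hours
# by (work domain, activity type). The entries are made-up examples.

from collections import defaultdict

entries = [
    ("networking gear", "incident management", 2.0),   # e.g., rewiring a rack
    ("storage gear", "installation", 3.5),
    ("networking gear", "configuration", 1.0),
    ("physical infrastructure", "installation", 0.5),
]

totals = defaultdict(float)
for domain, activity, hours in entries:
    totals[(domain, activity)] += hours

for (domain, activity), hours in sorted(totals.items()):
    print(f"{domain:25s} {activity:20s} {hours:4.1f} h")
```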

Specifications, processes, and general information are collected and stored in a wiki system. A data center
staff person performing work can edit any page of the wiki and can upload documents as well as enter notes
and updates. The wiki serves as a living compendium of information for the team.

The longer term plan is to author a formal procedures manual for the data center room. Incident management documentation, in particular, will allow operators to know which rapid actions to execute during emergency situations.

Slide 46
In the data center, all data and most power cables are configured above the floor, on top of the racks in
troughs. Only the chilled water piping, main electrical feeds to the UPS from the power distribution panel
and leak detection system reside under the raised floor.

A ladder rack supports data cables from a panel on the wall to zone one. The cables are color coded by function. Every cable is labeled before being put into use, as is every computer. All other devices, including CRACs, UPSs, PDUs, console servers, etc., are also labeled. All of this information is entered into a database.

The cable names are self-identifying. When an administrator or system operator reads a cable label, they immediately recognize where the cable terminates. Location-specific information is communicated on the cable label.

A cable database has also been established that documents the entire wiring topology of the data center. The database maintains a record of every cable and its endpoints.
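A minimal representation of such a cable record might look like this sketch; the field names and label formats are invented for illustration.

```python
# Minimal sketch of a cable-topology record like the one described: every
# cable, its function, and both endpoints. Field names are invented.

from dataclasses import dataclass

@dataclass
class Cable:
    label: str        # printed on the cable itself, self-identifying
    function: str     # matches the color coding (e.g., "data", "power")
    from_end: str     # originating device/port
    to_end: str       # terminating device/port

topology = [
    Cable("NET-Z1-007", "data", "wall-panel:07", "zone1-rack3:sw1-port07"),
    Cable("PWR-Z1-012", "power", "zone1-pdu2:out12", "zone1-rack3:server5"),
]

# Answer "where does this cable end?" straight from the database.
by_label = {c.label: c for c in topology}
print(by_label["NET-Z1-007"].to_end)
```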

Slide 47
The team has begun to collect individual power consumption data for all of the equipment in the data center.
A baseline will be developed regarding the nature of the load in the data center and the level of the energy
draw. As the team implements new higher power servers and adjusts the cooling systems, the team can
then measure how much energy is being saved.

All projects benefit from capturing “best practices,” so let’s explore the best practices developed by the Data Center Observatory team.

Slide 48
During the data center design phase, rather than just verbalizing ideas and approaches or sharing ideas via individual email notes, the team wrote specifications down in an accessible database. The written specifications included requirements, projections, and key project steps. If a problem was encountered, the team documented the changes that needed to be made. Writing up a specification and passing it around for everyone to sign off was deemed a critical success factor.

For example, if an individual missed a meeting on a particular day, he or she could access the written record
of the minutes. In addition, the discussions did not become circular. Once a decision was made, it was
documented. That helped the team to achieve closure quickly during the design phase and helped the team
to move onto the engineering phase.

With the help of the documented specifications, the engineers had clear guidance to work from. As a result,
the team avoided the problem of having to implement an unanticipated design.

Slide 49
During the design / build process, the team also relied on a facilities department champion who spent a
considerable amount of time helping to shepherd the project. He understood what would be required of the
engineering and construction teams to make the project a success. He kept the Data Center Observatory
team grounded and was able to point out realistic constraints and to balance those constraints against the
team requirements and preferences. His early input helped to avoid design changes later on which would
have been costly and difficult.

Slide 50
Vendor support from APC also helped to make the project deployment successful. APC made themselves
available to consult with staff members and engineers, and provided consistent support from initial planning
through installation. This helped to avoid a multitude of problems. By involving APC from the beginning, the
team didn’t have to go back to square one each time they wanted a change or an addition to the design.

Slide 51
Let’s take a moment and summarize what we have learned from this case study.

The Carnegie Mellon Data Center Observatory project will provide a benchmark for understanding how
efficiently a data center can be designed, built, and operated. By observing and recording the impact of new
technology on the physical aspects of the data center – the cooling, the power and the overall
environment—the Data Center Observatory team will document and report on ways to best confront data
center challenges.

The study will include data regarding how data center administrator time is spent and will reveal where high costs are generated. Armed with this data, researchers will then be able to create detailed breakdowns of short-term failure events and long-term trends and help to measure the efficiency of various process techniques. Comparisons of process success rates will help to establish a benchmark for what levels of efficiency are possible given the physical, technological, and human constraints of the data center.

Slide 52
Thank you for participating in this course.
