G00213184
Data center facilities rarely meet the operational and capacity requirements
of their initial designs. The combination of new technologies (e.g., blade
servers, which require substantial incremental power and cooling capacity),
pressures to consolidate multiple data centers into fewer locations, the need
for incremental space, changes in operational procedures, green IT
initiatives and rising energy costs, and potential changes in safety and
security regulations imposes constant facilities changes in the modern data
center. The overarching rule in data center facilities is to design for flexibility,
efficiency and scalability. Here, we combine many of the best practices in
data center design, and attempt to put a financial value on them.
Key Findings
■ New data center designs should focus on creating the most efficient use of floor space, and the
power and cooling delivered to it.
■ Multizoned designs provide increased flexibility, and can reduce operating costs by up to 20% by
reducing overall mechanical/electrical requirements.
■ Selecting the right tier level(s) will be the key cost driver, as mechanical/electrical redundancy
contributes upward of 40% of the overall cost to build.
Recommendations
■ Use the rack unit as the basis for estimating power and space requirements.
■ Design for vertical density, ensuring optimal compute capacity per kilowatt.
■ Design multiple power zones to increase flexibility and reduce upfront capital costs.
■ Take a comprehensive view of location and site selection. Consider utility infrastructure, labor
and real estate markets, and public incentives.
■ Opt for single-story, industrial-type buildings, except for special design considerations (e.g.,
second-floor natural air cooling plenum).
■ Evaluate the level of redundancy (tier level) and its cost with the cost of downtime, while
considering multitier designs.
■ Maintain or establish close communication and liaison between the data center facilities
organization and the IT organization.
Table of Contents
Analysis
  The Keys to Successful Data Center Design: Flexibility and Scalability
    Density Versus Capacity
    Emphasize Airflow Design
    Install Only the Power You Need
    Use Power Capping Management Technologies
    Use Zones to Reduce Build and Electrical Costs
    Rack Layout
    Rack Units
  Availability Design Criteria
    Critical Building Systems
    Location
    Site Selection
    Architecture
    Power Distribution
    Power Supply
    Mechanical Systems
    Security Systems
    Raised-Access Floor (RAF)
    Fire Detection and Suppression
  Data Center Construction Costs
    Sizing Estimates
    Data Center Cost Estimates
  Data Center Facilities Management Options
    Model 1: Assign Data Center Facilities Management Responsibility to Corporate Facilities Organization
    Model 2: The Management of Data Center Facilities Is an IT Organizational Responsibility
    Model 3: A Matrix Model, in Which IT Facilities Staff Reports to IT Organization and to Corporate Facilities Organization
Recommended Reading
List of Tables
  Table 1. Total Space and Power for IT (Base Case) — 200 Racks
Analysis
The Keys to Successful Data Center Design: Flexibility and Scalability
Data center facilities rarely achieve the operational and capacity requirements specified in their
designs. The key to a sustainable data center facility is to consider it an integrated system in which
each component must be considered in the context of flexibility and scalability. Planners need to
anticipate the following "forces of change":
■ New equipment
■ Consolidations and expansions
■ New redundancy requirements
■ Incremental power requirements
■ Sustainability and green IT
■ Incremental cooling demands
■ Constrained floor space
■ New safety and security regulations
■ Changes in operational procedures
■ Changes in mission
■ Cost pressures
The overarching rule in data center facilities is to design for flexibility to change as technology
changes, for efficiency in operations of the facility itself (e.g., energy and airflow), and for vertical
scalability within the floor space, providing the greatest compute per square foot, or per kilowatt
(kW) possible over time. This rule embraces several key principles in the site location, building
selection, floor layout, electrical system design, mechanical design and concept of modularity that
enable the data center to change and adapt as needed, with minimum renovation and changes to
basic building systems.
This research delves into specific guidelines for achieving a high level of flexibility and scalability in
the data center. These best practices address site location, building selection, and principles in the
design and provisioning of critical facilities systems.
Many data centers are overdesigned for electrical capacity because of concerns about meeting the
incremental power and cooling demands of modern server equipment, such as blade servers. Data
center designers must be sure to plan for sufficient capacity relative to patch panels, conduits and
intermediate distribution feeds, as well as pay attention to equipment densities relative to initial and
longer-term electrical power capacities.
The fear of underprovisioning a new building is driving these decisions, rather than a more
logical approach.
Consider this: Current high-density racks can easily demand 10, 15 or even 25 kW per rack (333W,
500W and 833W per square foot, respectively); however, designing to this target assumes that all
workloads are similar. Most racks don't come close to the theoretical maximums, and Gartner
believes that high-density planning estimates of 10 to 12 kW per rack are reasonable, but only for a
subset of your floor space.
A more reasoned approach would be to consider density zones within the new floor space. Simply
put, density zones make the assumption that workloads can be partitioned into low-, medium- and
high-density areas, and the power distribution and cooling capacity for those areas can be
engineered accordingly. In addition, future upgrades can be applied at the zone level, instead of
upgrading the entire data center.
Assuming 10% of the floor space was designed to high density (15 kW per rack), 20% was medium
density (8 kW per rack) and the rest was low or normal density (5 kW per rack), the overall building
cost would be reduced to $18.4 million. At a 75% load, the yearly electrical costs would be $1.37
million in the traditional design, versus $1.1 million using the zoned approach — a yearly reduction
of roughly $270,000 in operating costs.
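The zone arithmetic above can be sketched as follows. The zone shares and per-rack densities come from the example; the 200-rack count and $0.10/kWh electricity rate are illustrative assumptions, and this sketch covers the IT load only, not cooling or conversion overhead.

```python
# Sketch of the density-zone planning arithmetic. Zone shares and per-rack
# kW follow the example in the text; the rack count and electricity rate
# are illustrative assumptions.

ZONES = [
    # (share of floor space, design kW per rack)
    (0.10, 15.0),  # high density
    (0.20, 8.0),   # medium density
    (0.70, 5.0),   # low/normal density
]

def blended_kw_per_rack(zones=ZONES):
    """Weighted-average design power per rack across all zones."""
    return sum(share * kw for share, kw in zones)

def yearly_it_energy_cost(total_racks, load_factor=0.75, rate_per_kwh=0.10):
    """Yearly electricity cost for the IT load alone, at the given load factor."""
    avg_kw = total_racks * blended_kw_per_rack() * load_factor
    return avg_kw * 8760 * rate_per_kwh  # 8,760 hours in a year

print(round(blended_kw_per_rack(), 1))  # 6.6 kW per rack for this mix
```

For 200 racks, this mix implies 1,320 kW of IT design capacity, versus 3,000 kW if the whole floor were provisioned at the 15 kW high-density target, which is where the build-cost and operating-cost savings come from.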
If the zone requirements changed and more high-density floor space was needed, then scaling up
the power distribution units (PDUs) would be a simple method of increasing power. Adding more
on-floor computer room air-conditioning (CRAC) units would help address cooling issues.
However, you will need additional power for air-conditioning, humidification, lighting, and
uninterruptible power supply (UPS) and transformer losses. This additional demand could add one
to one-and-a-half times more wattage to the electrical load, depending on equipment spacing and
air handling efficiencies.
Rack Layout
Gartner recommends that the data center be designed to scale individual racks or zones as needed.
You can add incremental capacity on a modular basis. Focus on density versus capacity, and
consider segregating workloads into hot or cold containment areas for improved cooling
efficiencies.
Several modern blade chassis can be packed into a single-rack enclosure, resulting in power
demands of 18 to 20 kW per rack. The primary issue with densely packing the layout with high-density
servers is that, although it is efficient from a space standpoint, it creates heat problems that
require incremental cooling, which can incur significant incremental electrical costs if not done
properly. The zoned approach (see Figure 1) is becoming common in designs today, and, in some
cases, higher-density zones are either segmented for special cooling or will utilize in-rack cooling
techniques.
Rack Units
Use the rack unit as the primary planning factor for estimating the space and power requirements of
the data center. Each rack configuration should reflect total power, space and floor loading
demands. Strive for an average of 4 to 5 kW per rack for low-density areas. Medium- and high-
density areas should be determined by an application and performance mapping exercise, and
should focus on the power needs and the percentage of total floor space needed.
Traditionally, facilities planners used space planning factors, such as square feet per rack or watts
per square foot, to estimate data center capacity. The problem with this approach is that it fails to
capture the energy intensity associated with high-density rack configurations. An overall watts-per-
square-foot calculation fails to recognize the diversity from one rack configuration to the next,
where significant hot spots can require incremental air-conditioning capacity. Gartner recommends
planning the data center on a rack unit basis (see Table 1). This technique requires that each rack
configuration be calculated from a total-wattage standpoint, in terms of equipment power and
incremental power for air-conditioning.
Table 1. Total Space and Power for IT (Base Case) — 200 Racks
By aggregating the total rack population, you can calculate total kW and space. In the example in
Table 1, a simple rack configuration of 200 racks results in a total space requirement of 6,000
square feet and a total power requirement of 924 kW of IT load and 833 kW of cooling load. As
power density per rack increases, the amount of cooling required also increases. A general rule of
thumb is that below 6 kW per rack, you can assume that 0.6W of cooling will be needed for every
watt of IT load. With between 6 and 10 kW per rack, assume a 1-to-1 ratio of cooling load to IT
load. Above 10 kW per rack, assume 1.5W of cooling for every watt of IT load (higher heat densities
demand greater airflow, and thus more cooling wattage).
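The cooling rules of thumb above map directly to a small helper. This is a sketch of the stated ratios only, not an engineering cooling model.

```python
def cooling_watts_per_it_watt(rack_kw):
    """Rule-of-thumb cooling load per watt of IT load, by rack power density."""
    if rack_kw < 6:
        return 0.6   # below 6 kW per rack
    if rack_kw <= 10:
        return 1.0   # 6 to 10 kW per rack
    return 1.5       # above 10 kW per rack

def rack_total_kw(rack_it_kw):
    """IT load plus the rule-of-thumb cooling load for a single rack."""
    return rack_it_kw * (1 + cooling_watts_per_it_watt(rack_it_kw))

print(round(rack_total_kw(5), 3))  # 8.0 — a 5 kW rack implies 5 kW IT + 3 kW cooling
```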
To determine the total generator load, you would have to factor in conversion losses (AC to DC)
throughout the system. Depending on design, this would average between 8% and 14%. In the
example above, assuming an 11% conversion loss, the total kW required would be 1,950 kW, or a
generator capacity of 2,438 kilovolt-amperes (kVA), assuming a 0.8 power factor. Another alternative
would be to consider multiple smaller generators, rather than one or two very large ones, to support
the load. Smaller systems often have shorter lead times and, depending on market conditions, can
be less expensive per kW delivered.
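The generator sizing in the paragraph above can be reproduced as a short calculation; the 11% conversion loss and 0.8 power factor are the figures used in the example.

```python
def generator_kva(it_kw, cooling_kw, conversion_loss=0.11, power_factor=0.8):
    """Generator capacity in kVA: total load grossed up for AC/DC conversion
    losses, then converted from kW to kVA via the power factor."""
    total_kw = (it_kw + cooling_kw) * (1 + conversion_loss)
    return total_kw / power_factor

# Base case from Table 1: 924 kW of IT load and 833 kW of cooling load.
print(round(generator_kva(924, 833)))  # 2438 kVA, matching the example
```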
A secondary trend is beginning to emerge in which clients are considering building multiple tiers
within the same site, essentially segmenting applications and service levels by tier, and then
designing the building to suit. This is a newer concept, but it is architecturally feasible and can
reduce upfront capital costs substantially.
■ Tier 1 — Basic data center, no redundancy
■ Tier 2 — Redundant components; single distribution path with redundant components
■ Tier 3 — Concurrently maintainable; multiple distribution paths with only one active
■ Tier 4 — Fault tolerant; multiple active distribution paths
■ Power source — Power distribution, including UPS, backup generators (e.g., diesel or natural
gas), PDUs and intermediate distribution units.
■ Heating, ventilation and air-conditioning (HVAC) systems — These may include rooftop units
and distributed units that provide localized air cooling. Either overhead or underfloor distribution
of air can be an effective means of distributing air evenly throughout the IT area. Additional
cooling and air handling may be required between racks. Perhaps the single greatest issue in
the contemporary data center is maintaining adequate cooling and air movement, given the
intense heat generated by modern blade servers and storage devices.
■ Fire protection systems — These include detection and abatement systems that most likely
will combine preaction wet systems interconnected with dry systems (such as FM 200 and
Inergen) for sensitive areas.
■ Security systems — Local and central watch, entrance and egress, monitoring.
■ Raised floor systems — Traditional data center design has always called for a raised floor to
provide an area for power feeds, cabling (in older installations) and a plenum for forced airflow
to support under-rack airflow/cooling. These floors have continued to become higher and
higher, from an average of 12 to 18 inches in the last generation to 24, 26 and even 36 inches
today, depending on power and cooling requirements and the tier level being designed.
However, many of the traditional benefits of a raised floor are no longer valid; in fact, if designed
properly, a data center can be even more efficient and simpler to configure (and reconfigure) if
designed on a slab floor instead.
The days of bulky cabling are gone, replaced by thin copper and fiber optics distributed in cable
trays just above the racks. Dedicated power circuits used to be the design point for underfloor
use, in which these cables, once installed, were rarely changed again. In today's dynamic world
of high-density, rapidly changing environments, the ability to move and upgrade power
connections quickly has moved the physical location of power connectivity overhead. Also, in
traditional data centers, the overall cooling requirements did not change dramatically over time.
An "ecosystem" was created that forced enough cool air under the raised floor to provide
adequate support for all the planned equipment (which had static requirements). In today's
world, the equipment racks require so much power that, in many cases, underfloor cooling is no
longer adequate, and the creation of power zones has also forced companies to look at in-row
and in-rack cooling solutions as the premier way to cool new equipment.
[Figure: Critical building systems overview, showing dual-grid utility feeds (480 volts) to the
building, local generation (CHP with an absorption chiller), UPS and PDUs feeding the computer
racks and equipment, the HVAC system, and fire detection/suppression covering all systems and
other loads. Image sources: Shuangliang, UTC Eco-Energy Systems, APC, Liebert.]
Location
Location will affect operational and cost efficiencies significantly. In many cases, location criteria will
be constrained by operational or technical requirements. For example, backup data centers that
require synchronous replication will be limited to a radius of 30 to 50 miles from the primary data
center. WAN costs may also limit the scope of the site search. However, many data centers can be
located in distant locations; therefore, measures should be taken to optimize the location selection.
Critical factors include:
■ Labor markets
■ Public incentives
■ Communication infrastructure
■ Electrical services
■ Taxes (personal, property)
Site Selection
Selecting a data center site is a complex process involving a large number of variables. In many
instances, it is prudent to use a site selection consultant who has the experience, contacts and
evaluation tools necessary to undertake a complex site selection process. A best practice is to work
through an internal process that establishes the most critical selection criteria, and that assigns
weights or degrees of importance to the criteria. The results of this process can be used as a
foundation for discussions with prospective consultants.
The following are the major criteria that should be considered in the site selection process:
■ HR issues — These relate to the labor profile in the site area. This criterion is very important if
the data center will be fully staffed, and declines in importance (weighting) as the number of
people to be staffed in the data center decreases; it becomes a nonfactor if you plan to run a
"lights out" data center. Factors such as the skills available in the local market, prevailing labor
rates and commuting distances are among the important considerations.
■ Network connectivity — This is important to ensure that response times are sufficient for all
areas of the company that require access to the data center systems. Today, high-speed
connectivity is available in most areas of most countries; however, you should verify its
availability to ensure there are no surprises. This is especially important as companies become
global in scope and their data centers must support many continents. Factors such as distance
to fiber trunk lines, availability of multiple carriers, network reliability and options for dark fiber
are among the important considerations to be examined.
■ Utilities — These are a critical consideration, particularly in an era of rising energy rates and
demand. Some areas of the world may not have enough capacity to supply the needs of a large
data center. Other resources, such as water and natural gas, may also be important. Factors
such as electricity rates and capacity, reliability of the power supply, access to dual power
grids, and projected power rates are examples of important considerations to be examined.
■ Environmental concerns — These are a relatively new set of site selection criteria, driven by
the new emphasis on greenhouse gas emissions and other environmental issues. Factors such
as suitability for free cooling technologies, locations where renewable energy solutions (wind,
solar, etc.) are possible, and local utility companies' use of renewable power are examples of
important considerations to be examined.
Architecture
The type of building you select can significantly affect occupancy costs, security, expansion and
operational flexibility. As a general rule, opt for single-story, industrial-type buildings, except where
special design considerations apply (e.g., a second-floor natural air cooling plenum).
In terms of security, provide for multiple levels of secured access within the data center, and within
specialized, highly secure areas within the data center (such as tape vaults) by using card key
systems in combination with a biometric authentication system.
Power Distribution
The electrical power plant and distribution system design is crucial to data center reliability and
operational efficiency. Blade server technology creates enormous power demands to energize the
servers and to support incremental air-conditioning.
Several fundamental principles should serve as the foundation for the electrical system design.
These include:
■ Maintenance and emergency shutdown switches at all entry points in the facility
■ A grounding system that complies with the National Electrical Code, Article 250 or local codes if
applicable
■ The provision of a signal reference grid (SRG) to reduce high-frequency impedance
■ The use of heavier-gauge wire to accommodate future electrical expansion
■ The use of scalable PDUs to integrate circuit breakers and equipment connections
■ The use of power conditioning equipment to integrate with the UPS
Consider using electrical cable trays above the floor to separate signal cables from electrical cables,
or, if a raised floor is used, electrical cable trays should be positioned just below the surface to
enhance the airflow throughout the raised floor.
■ Be aware of overall power requirements (that is, use rack unit measure).
■ Strive for multiple utility feeds.
■ Provide for maintenance bypass and emergency shutdown.
■ Determine if equipment requires single-phase or three-phase power.
■ Plan for ambient air temperatures between 76°F and 78°F, or in compliance with the American
Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) standards.
■ Maintain relative humidity levels to minimize electrostatic discharge.
■ Be mindful of electromagnetic interference (EMI); conduct a study to determine if shielding or
other preventive measures are required.
Power Supply
For high availability, data centers should strive to provide at least three lines of defense for backup
power: multiple feeds from the public utility, UPS systems, and generators to sustain power through
longer-term outages.
The modern UPS provides data centers with emergency backup power to keep IT equipment
running through short power disruptions, bridge the time needed to start backup generators for
longer power disruptions, or allow time for a controlled shutdown if no generators are available. UPSs have
also taken over the role of providing conditioned power and isolating building power from noise and
waveform distortion caused by the equipment. When looking to purchase a UPS, multiple factors
must be examined to ensure that it meets the requirements of your data center strategy, while also
meeting total cost of ownership (TCO) goals. Several basic principles should guide the size and
capability of the UPS system. It should:
■ Be sized to energize all computer equipment and other electrical devices (such as emergency
lighting and security devices) for 100% of the power demand for no less than twice the average
start time of your generators. If quick-start generators are expected to transfer load within six
seconds, the UPS load should support at least 12 seconds. Current generation flywheel/
generator combinations support this technique, as well as traditional battery-based UPS
systems.
■ Be sized for peak load or fault overload conditions. This relates to the surge in power demand
when the equipment is first energized. As a rule of thumb, size the UPS for 150% of operating
demand.
■ Be continuously operational to filter and condition the power.
■ Be of high energy efficiency over a broad range of possible loads (20% to 100%).
■ Not support HVAC systems, in most instances.
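The first two sizing principles above can be expressed as a quick check; the operating load and generator start time passed in below are illustrative values, not figures from the text.

```python
def ups_requirements(operating_kw, generator_start_s, surge_factor=1.5):
    """Minimum UPS capacity (150% of operating demand) and ride-through
    time (twice the generator start time), per the guidelines above."""
    return {
        "capacity_kw": operating_kw * surge_factor,
        "ride_through_s": 2 * generator_start_s,
    }

# Quick-start generators transferring load within six seconds:
print(ups_requirements(1000, 6))  # {'capacity_kw': 1500.0, 'ride_through_s': 12}
```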
If a UPS is not used, then surge protection should be provided at the panels with a stand-alone
isolation/regulation transformer. To sustain power during an extended period, install a diesel
generator to provide backup power for longer-term outages. For high levels of fault tolerance, an
additional backup generator will be required. Consider local ordinances and codes relating to fuel
tank location and noise abatement. Also, have a plan to monitor the stored fuel to ensure it does not
break down over time and that you have enough fuel on hand to cover long outages.
Periodically test the generators to ensure their operational integrity. Specify the UPS and backup
generator capacity to meet total peak load. Depending on the total generator load, some local
power companies may offer to buy time on the generators during peak demand days, which could
be used to offset some operational costs.
Mechanical Systems
Because cooling the data center has become a crucial issue, when designing an HVAC system,
follow these key guidelines:
■ Maintain a static pressure within the raised-floor plenum approximately 5% greater than that in
the data center room above the raised floor.
■ Selectively position perforated tiles in the raised floor to direct chilled air into the rack area.
■ For high-density zones, hot- or cold-aisle containment should be considered to improve cooling
efficiencies.
■ Seal all penetrations in the raised floor to maintain a constant static pressure, and use blanking
panels within the racks.
■ Establish a vapor barrier throughout the perimeter of the data center to minimize condensation.
■ Use spot cooling or special rack enclosures for hot spots in the data center layout.
■ Design the airflow to maximize the flow of chilled air across and through the equipment racks.
With raised floors, this requires that chilled air flow from bottom to top and from front to back
through the racks. Alternating aisles between cold aisle and hot aisle facilitates more efficient
temperature control.
Security Systems
IT management needs to clearly define an IT security policy and strategy. An integral part of the IT
security policy should be the data center's physical security. Different security zones should be
established using the concentric circle concept. For example, the first level of security would be the
grounds, then the building, then the data center and then the machine room. Inside the machine
room, you might consider side-by-side access levels (for example, someone can access the server
area, but not the storage or communication area).
When looking at the methods to grant access to different areas, you should use a combination of
methods (at least two), based on:
■ What the person has possession of (picture ID, key, magnetic card, etc.)
■ What the person knows (password, PIN, etc.)
■ Who the person is, based on biometric data (fingerprint, palm print, retina scan, etc.)
Also, access may be based on time of day, temporary need or job title.
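The two-of-three rule described above can be sketched as a simple check; the credential names and category mapping are illustrative, not a real access control API.

```python
# Each credential belongs to one factor category: what the person has,
# knows, or is (biometric). The names here are illustrative.
FACTOR_CATEGORY = {
    "picture_id": "has", "key": "has", "magnetic_card": "has",
    "password": "knows", "pin": "knows",
    "fingerprint": "is", "palm_print": "is", "retina_scan": "is",
}

def access_granted(presented_credentials):
    """Grant access only when credentials span at least two distinct categories."""
    categories = {FACTOR_CATEGORY[c]
                  for c in presented_credentials if c in FACTOR_CATEGORY}
    return len(categories) >= 2

print(access_granted(["magnetic_card", "pin"]))  # True: has + knows
print(access_granted(["magnetic_card", "key"]))  # False: both are "has"
```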
In the end, good physical security depends on the effort and accountability of your staff. They
should be trained in basic security actions; in secure areas, for example, they should ask people
they do not recognize, and who appear to have no badge or escort, what they need. They should
also be aware of and prevent anyone from "piggybacking" access behind them.
Raised-Access Floor (RAF)
The decision to use a raised floor should be an informed one, not a historical one. We believe that,
by 2015, more than 50% of all new data center builds will not use a raised floor, up from 3% to 5%
today.
Changing technologies and realities have removed most of the traditional reasons for a raised
floor, especially the newer point cooling solutions, such as in-row and in-rack cooling. Polls taken
during the past two years show that the use of in-row and in-rack cooling continues to grow in the
data center. At the end of 2010, 52% of the poll respondents were using some in-row and/or
in-rack cooling.
The elimination of raised floors could remove most, if not all, of the following problems with raised
floors that clients have experienced:
■ Cost
■ Load factor considerations
■ Stability, especially in earthquake zones
■ Low ceilings due to added height of raised floors
■ Restricted air flow due to obstructions such as cables under the raised floor
■ Problems keeping area under the flooring clean
■ Recabling problems due to raised floor support bars every few feet
■ Safety when floor tile is removed for maintenance work
■ Added fire detection/suppression needs to cover the underfloor area
If a raised floor is desired, then specify a height relative to the overall data center size; consider
using cast aluminum floor tiles to ensure maximum floor loading capability. Some guidelines for
raised-floor heights are:
■ Facilities with less than 1,000 square feet — 12-inch RAF height
■ Facilities with 1,000 to 5,000 square feet — 12- to 18-inch RAF height
■ Facilities with 5,000 to 10,000 square feet — 18- to 24-inch RAF height
■ Facilities with 10,000 square feet and above — 24-inch to 36-inch RAF height
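The height guidelines above can be captured in a small lookup. The boundary handling at exactly 1,000, 5,000 and 10,000 square feet is a choice here, since the text's ranges overlap at those points.

```python
def raf_height_range_inches(floor_sq_ft):
    """Guideline raised-access-floor height (min, max) in inches for a facility size."""
    if floor_sq_ft < 1000:
        return (12, 12)   # under 1,000 sq ft: 12-inch RAF
    if floor_sq_ft <= 5000:
        return (12, 18)   # 1,000 to 5,000 sq ft
    if floor_sq_ft <= 10000:
        return (18, 24)   # 5,000 to 10,000 sq ft
    return (24, 36)       # 10,000 sq ft and above

print(raf_height_range_inches(6000))  # (18, 24)
```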
When designing a data center with raised floors, also consider some of the newer raised-floor
technologies.
Fire Detection and Suppression
■ Fire prevention — The best way to protect a data center from a fire is to prevent it from ever
happening.
■ Fire detection — As a fire passes through its four stages (precombustion, visible smoke, flame
and intense heat), it becomes harder to extinguish, causes more damage and there is more
potential for loss of life. The goal of fire detection is to discover a fire as early as possible (in the
precombustion stage). The best photoelectric detector for a data center is an air sampling
smoke detector. Because of its sensitivity (up to 1,000 times more sensitive than ordinary
photoelectric or ionization detectors), it is sometimes referred to as a very early smoke detector
(VESD). Placement of the sensors must take into consideration raised floors, suspended
ceilings and hot-aisle/cold-aisle containment areas.
■ First response — In a large percentage of cases, fires in the data center can be extinguished
before any automatic fire suppression equipment is activated. The best defense is usually a
properly trained person with a proper and charged fire extinguisher. Note that retail, powder-
based extinguishers release corrosive dust and should not be allowed in a data center.
■ Control system — The control system can range from a simple one that sounds an alarm and
activates the suppression system when a detector goes off to a very intelligent system.
Intelligent control systems can be programmed to vary the sensitivity of the detectors, insert
time delays between the alarm and the activation of fire suppression (which allows people to
try to put out the fire first), delay the alarm and the activation of the fire suppression system
until two different sensors detect a fire (which prevents the alarm and suppression system from
activating because of a faulty sensor), or take other actions based on programmed scripts.
■ Evacuation — No doors should be left open to the data center after evacuation, as most fire
suppression systems that use clean agents use a total flooding method, which requires the area
to be sealed as tightly as possible. After evacuation, all personnel should know to go to a
designated location, so it can be determined that all staff have exited the data center. Fire drills
should be run at least every six months to ensure that staff can exit the building safely and
promptly.
■ Fire suppression — Many fire suppression agents are now on the market to replace
Halon in the data center. Data center management and
facilities management need to work together to understand the impact that their fire
suppression choice will have on long-term cost, cleanliness, greenness and evacuation. Even if,
by law, you're required to have water sprinklers in your facility, you need to augment the
sprinkler system with a clean-agent fire suppression system that is set to try to extinguish a fire
before the sprinkler system activates, to avoid the damage that water will cause to a data
center's equipment and media. The two most popular clean agent systems use Inergen (IG-541)
or FM-200 (HFC-227ea). Also popular as clean-agent systems are ECARO-25 (HFC-125) and
Novec 1230 (FK-5-1-12). ECARO-25 can be used as a direct, drop-in replacement using the
same original Halon equipment that might have been previously installed.
■ Continuation/recovery of processing — Part of your contingency planning needs to cover the
actions that must take place when a fire alarm is activated, along with the evacuation and
activation of the fire suppression system. When should you initiate failover to a backup site?
When should you shut down and power down systems? When is it possible to continue running
the systems? These are questions that should be addressed as part of your contingency
planning.
■ Resetting for new coverage — Sometimes overlooked is the process of resetting your system
to be ready to detect and suppress another fire. Recharging handheld and wheeled fire
extinguishers is a high priority, as is the replenishment of your data center fire suppression
system.
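The delay and two-sensor ("cross-zoned") release rules described under Control system can be sketched as a small state machine. This is an illustrative sketch only, not a real fire-panel API; the class name, detector IDs and 30-second delay are assumptions chosen for the example.

```python
class SuppressionController:
    """Sketch of a cross-zoned release sequence: suppression discharges
    only after two independent detectors alarm AND a pre-discharge delay
    elapses without a manual abort (giving staff time to use an extinguisher)."""

    def __init__(self, delay_seconds=30):
        self.delay = delay_seconds
        self.alarms = set()          # IDs of detectors currently in alarm
        self.alarm_sounding = False  # audible alarm state
        self.aborted = False         # manual abort pressed
        self.discharged = False      # agent released

    def detector_alarm(self, detector_id):
        """First detection sounds the alarm; discharge needs a second sensor."""
        self.alarms.add(detector_id)
        self.alarm_sounding = True

    def press_abort(self):
        """Staff extinguished the fire first; hold the discharge."""
        self.aborted = True

    def tick(self, seconds_since_first_alarm):
        """Evaluate the release conditions; returns True once discharged."""
        if (len(self.alarms) >= 2                  # two independent sensors confirm
                and not self.aborted
                and seconds_since_first_alarm >= self.delay):
            self.discharged = True
        return self.discharged
```

A faulty single sensor can sound the alarm here but never trigger a discharge, which is the point of cross-zoning.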
Sizing Estimates
In the following examples, we have developed a typical data center build estimate, based on
multiple client inquiries during the past year. These are averages based on dozens of builds of
different sizes and complexities, but we feel they are representative of the market today. The first
step in design is to determine how much space will be needed, both for IT and for the building as a
whole. Figure 4 represents the average sizes needed to support an 8,000-square-foot data center
(IT space), which also houses 32 office staff, three food staff and a network operations center (NOC)
with five people. The space is broken out by tier, because different tiers have different requirements
for mechanical redundancy, and thus varying requirements for floor space.
Calculated Estimates (square feet)

                               Tier I   Tier II  Tier III   Tier IV
IT Space                        8,000     8,000     8,000     8,000
Support Space                   2,400     4,800     7,200     8,000
Circulation Factor                360       720     1,080     1,200
Total Support Area              2,760     5,520     8,280     9,200
NOC Space                         750       750       750       750
Office Staff Areas              6,400     6,400     6,400     6,400
Food Services                     135       135       135       135
Total Office Space (non-IT)     7,285     7,285     7,285     7,285
Total Floor Space Needed       18,045    20,805    23,565    24,485
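The arithmetic behind these estimates can be reproduced in a few lines. The 8,000-square-foot IT space and the fixed office figures come from the text above; the per-tier support-space ratios (30%, 60%, 90%, 100% of IT space) and the 15% circulation factor are inferred from the figures and should be treated as illustrative assumptions, not a sizing standard.

```python
# Rough floor-space estimator mirroring the sizing figures above.
IT_SPACE = 8_000                      # sq ft of IT (raised-floor) space
SUPPORT_RATIO = {                     # support space as a share of IT space,
    "Tier I": 0.30,                   # inferred from the per-tier figures
    "Tier II": 0.60,
    "Tier III": 0.90,
    "Tier IV": 1.00,
}
CIRCULATION = 0.15                    # circulation factor on support space
OFFICE_SPACE = 750 + 6_400 + 135      # NOC + office staff + food services

def total_floor_space(tier):
    """Total building footprint: IT space + support (with circulation) + office."""
    support = IT_SPACE * SUPPORT_RATIO[tier]
    total_support = support * (1 + CIRCULATION)
    return round(IT_SPACE + total_support + OFFICE_SPACE)
```

Because the office and NOC areas are fixed, the tier level drives the difference: moving from Tier I to Tier IV adds roughly 6,400 square feet of mechanical/electrical support space to the same 8,000 square feet of IT space.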
including critical building systems, maintenance, repair and the contracting for specialized services,
including physical security, cleaning, grounds maintenance, shipping and receiving. This structure
leverages the professional expertise of the corporate facilities staff, and ensures close alignment
with facilities standards and policies.
In all cases, it is best to maintain close ties between the IT organization and the corporate facilities
organization. This will require periodic joint staff meetings, project review meetings and
multidiscipline project teams, particularly for data center relocation or expansion projects. It is
particularly important that facilities professionals work closely with IT procurement specialists in
reviewing vendor product specifications relating to power requirements, redundancy features, floor
loading factors, and issues relating to space and cooling. With the advent of high-density servers,
problems with heat and power demands can wreak havoc with facilities power and cooling
systems.
Recommended Reading
Some documents may not be available as part of your current Gartner subscription.
"Shrinking Data Centers: Your Next Data Center Will Be Smaller Than You Think"
"Data Center Decisions: Build, Retrofit or Colocate; Why Not a Hybrid Approach"
"Data Center Fire Suppression Options Are Cost, Toxicity, Green and Clean"
GARTNER HEADQUARTERS
Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096
Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM
© 2011 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This
publication may not be reproduced or distributed in any form without Gartner’s prior written permission. If you are authorized to access
this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained
in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy,
completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This
publication consists of the opinions of Gartner’s research organization and should not be construed as statements of fact. The opinions
expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues,
Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company,
and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner’s Board of
Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization
without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner
research, see “Guiding Principles on Independence and Objectivity.”