Availability expectations: The cost of avoiding downtime should not exceed the cost of
the downtime itself.[52]
Site selection: Location factors include proximity to power grids, telecommunications
infrastructure, networking services, transportation lines, and emergency services. Others are
flight paths, neighboring uses, geological risks, and climate (associated with cooling costs).[53]
Often, the available power supply is the hardest factor to change.
High availability
Main article: High availability
Various metrics exist for measuring the data availability that results from data-center
availability beyond 95% uptime, with the top of the scale counting how many "nines" can
be placed after "99%".[54]
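The "nines" convention maps directly to an annual downtime budget. A minimal sketch, counting total nines (so 99.9% is three nines); the thresholds are illustrative of the common industry convention, not from the source:

```python
# Sketch: converting "nines" of availability into an allowed-downtime budget.
# Counts total nines (3 nines = 99.9%); figures are illustrative.

def allowed_downtime_minutes_per_year(nines: int) -> float:
    """Annual downtime budget for an availability with `nines` nines."""
    availability = 1 - 10 ** (-nines)      # e.g. 3 nines -> 0.999
    minutes_per_year = 365 * 24 * 60       # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - availability)

for n in range(2, 6):
    print(f'{n} nines: {allowed_downtime_minutes_per_year(n):,.2f} min/year')
```

Each additional nine cuts the permitted downtime by a factor of ten, which is why the marginal cost of availability rises steeply.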
Modularity and flexibility
Main article: Modular data center
Modularity and flexibility are key elements in allowing for a data center to grow and
change over time. Data center modules are pre-engineered, standardized building
blocks that can be easily configured and moved as needed.[55]
A modular data center may consist of data center equipment contained within shipping
containers or similar portable containers.[56] Components of the data center can be
prefabricated and standardized, which facilitates moving if needed.[57]
Environmental control
Temperature[note 10] and humidity are controlled via:
Air conditioning
Indirect cooling, such as using outside air,[58][59][note 11] indirect evaporative cooling (IDEC)
units, or sea water.
Electrical power
A bank of batteries in a large data center, used to provide power until diesel generators can start
Backup power consists of one or more uninterruptible power supplies, battery banks,
and/or diesel or gas-turbine generators.[60]
To prevent single points of failure, all elements of the electrical systems, including
backup systems, are typically fully duplicated, and critical servers are connected to both
the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1
redundancy in the systems. Static transfer switches are sometimes used to ensure
instantaneous switchover from one supply to the other in the event of a power failure.
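The value of duplicated A-side/B-side feeds can be sketched with basic probability. A minimal illustration, assuming the two feeds fail independently (an idealization; the figures are not from the source):

```python
# Sketch: why dual independent power paths improve availability.
# Assumes A-side and B-side feeds fail independently (an idealization).

def combined_availability(a_side: float, b_side: float) -> float:
    """Probability that at least one of two independent feeds is up."""
    return 1 - (1 - a_side) * (1 - b_side)

single = 0.999   # one feed at 99.9%: roughly 8.8 hours of outage per year
dual = combined_availability(single, single)
print(f'single feed: {single:.6f}, dual A/B feeds: {dual:.6f}')
```

In practice the two feeds share failure modes (common utility, common facility), so real-world gains are smaller than the independent-failure model suggests.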
Low-voltage cable routing
Options include:
Typical cold aisle configuration with server rack fronts facing each other and cold air distributed through
the raised floor.
Computer cabinets are often organized for containment of hot/cold aisles. Ducting
prevents cool and exhaust air from mixing. Rows of cabinets are paired to face each
other so that cool air can reach equipment air intakes and warm air can be returned to
the chillers without mixing.
Alternatively, a range of underfloor panels can create efficient cold air pathways
directed to the raised floor vented tiles. Either the cold aisle or the hot aisle can be
contained.[63]
Another alternative is fitting cabinets with vertical exhaust ducts (chimneys).[64] Hot exhaust
can be directed into a plenum above a drop ceiling and back to the cooling units
or to outside vents. With this configuration, a traditional hot/cold aisle configuration is not
a requirement.[65]
Fire protection
Sprinkler
Mist
No water – offering some of the benefits of chemical suppression (a clean-agent gaseous
fire-suppression system).
Security
Main article: Data center security
Energy use
Power is the largest recurring cost to the user of a data center.[87] Cooling a data center at or
below 70 °F (21 °C) wastes money and energy.[87] Furthermore, overcooling
equipment in environments with a high relative humidity can expose equipment to a
high amount of moisture that facilitates the growth of salt deposits on conductive
filaments in the circuitry.[88]
A power and cooling analysis, also referred to as a thermal assessment,
measures the relative temperatures in specific areas as well as the capacity of the
cooling systems to handle specific ambient temperatures.[89] A power and cooling
analysis can help to identify hot spots, over-cooled areas that can handle greater
power-use density, the breakpoint of equipment loading, the effectiveness of a
raised-floor strategy, and optimal equipment positioning (such as AC units) to
balance temperatures across the data center. Power cooling density is a measure of
how much square footage the center can cool at maximum capacity.[90] The cooling
of data centers is the second-largest power consumer after the servers themselves:
cooling energy ranges from 10% of total energy consumption in the most efficient
data centers up to 45% in standard air-cooled data centers.
Energy efficiency analysis
An energy efficiency analysis measures the energy use of data center IT and
facilities equipment. A typical energy efficiency analysis measures factors such as a
data center's power usage effectiveness (PUE) against industry standards, identifies
mechanical and electrical sources of inefficiency, and identifies air-management
metrics.[91] However, the limitation of most current metrics and approaches is that
they do not include IT in the analysis. Case studies have shown that by addressing
energy efficiency holistically in a data center, major efficiencies can be achieved that
are not possible otherwise.[92]
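PUE is defined as total facility energy divided by the energy delivered to IT equipment, so 1.0 is the ideal. A minimal sketch; the kWh figures are illustrative assumptions, not measurements from the source:

```python
# Sketch of power usage effectiveness (PUE): total facility energy divided by
# IT equipment energy. 1.0 is the ideal; real facilities are higher.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# Illustrative numbers (assumptions):
it_load = 1000.0   # kWh consumed by servers, storage, network gear
cooling = 450.0    # kWh for cooling (can reach ~45% of total in air-cooled sites)
other = 100.0      # lighting, power-distribution losses, etc.
print(f'PUE = {pue(it_load + cooling + other, it_load):.2f}')
```

Note the limitation mentioned above: PUE treats every IT kWh as useful, so it rewards efficient facilities but says nothing about how efficiently the IT load itself does its work.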
Computational fluid dynamics (CFD) analysis
Main article: Computational fluid dynamics
This type of analysis uses sophisticated tools and techniques to understand the
unique thermal conditions present in each data center, predicting the
temperature, airflow, and pressure behavior of a data center to assess performance
and energy consumption using numerical modeling.[93] By predicting the effects of
these environmental conditions, CFD analysis can be used to predict the impact of
high-density racks mixed with low-density racks,[94] as well as the onward impact on
cooling resources of poor infrastructure-management practices and of AC failure or
AC shutdown for scheduled maintenance.
Thermal zone mapping
Thermal zone mapping uses sensors and computer modeling to create a
three-dimensional image of the hot and cool zones in a data center.[95]
This information can help to identify optimal positioning of data center equipment.
For example, critical servers might be placed in a cool zone that is serviced by
redundant AC units.
Green data centers
Main article: Green data center
This water-cooled data center in the Port of Strasbourg, France, claims the attribute green.
Data centers use a lot of power, consumed by two main usages: the power required
to run the actual equipment, and the power required to cool that equipment.
Power efficiency reduces the first category.
Cooling costs can be reduced naturally through location decisions: when the
focus is not proximity to good fiber connectivity, power-grid connections, and
concentrations of people to manage the equipment, a data center can be miles away
from its users. 'Mass' data centers like those of Google or Facebook don't need to be
near population centers. Arctic locations, where outside air can provide cooling, are
becoming more popular.[96]
Renewable electricity sources are another plus. Thus countries with favorable
conditions, such as Canada,[97] Finland,[98] Sweden,[99] Norway,[100] and
Switzerland,[101] are trying to attract cloud computing data centers.
Bitcoin mining is increasingly being seen as a potential way to build data centers at
the site of renewable energy production. Curtailed and clipped energy can be used
to secure transactions on the Bitcoin blockchain, providing another revenue stream
to renewable energy producers.[102]
Energy reuse
It is very difficult to reuse the heat from air-cooled data centers. For
this reason, data center infrastructures are more often equipped with heat
pumps.[103]
An alternative to heat pumps is the adoption of liquid cooling throughout a data
center. Different liquid cooling techniques are mixed and matched to allow for a fully
liquid-cooled infrastructure that captures all heat in water. The different liquid
technologies fall into three main groups: indirect liquid cooling (water-cooled
racks), direct liquid cooling (direct-to-chip cooling), and total liquid cooling (complete
immersion in liquid; see server immersion cooling). This combination of
technologies allows the creation of a thermal cascade, as part of temperature-
chaining scenarios, to produce high-temperature water outputs from the data center.
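The heat a water loop carries out of such a cascade follows the basic sensible-heat relation Q = m·c·ΔT. A minimal sketch; the flow rate and temperatures are illustrative assumptions, not figures from the source:

```python
# Sketch: recoverable heat from a liquid-cooled loop, using the sensible-heat
# relation Q = m_dot * c_p * dT. All numbers are illustrative assumptions.

def recoverable_heat_kw(flow_kg_per_s: float, delta_t_c: float,
                        cp_kj_per_kg_k: float = 4.186) -> float:
    """Thermal power (kW) carried by a water loop heated by delta_t_c kelvin."""
    return flow_kg_per_s * cp_kj_per_kg_k * delta_t_c

# A cascade raising water from 30 degC to 55 degC at 2 kg/s:
print(f'{recoverable_heat_kw(2.0, 25.0):.0f} kW of reusable heat')
```

The higher outlet temperatures that temperature chaining produces make this heat more useful for district heating, which is the point of building the cascade.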
Dynamic infrastructure
Main article: Dynamic infrastructure
Benefits include:
reducing cost
facilitating business continuity and high availability
enabling cloud and grid computing.[108]
Network infrastructure
An operation engineer overseeing a network operations control room of a data center (2006)
An example of network infrastructure of a data center
Software/data backup
Non-mutually exclusive options for data backup are:
Onsite
Offsite
Onsite is traditional,[110] and one major advantage is immediate availability.
Offsite backup storage
Main article: Disaster recovery § offsite backup storage
Data backup techniques include having an encrypted copy of the data offsite.
Methods used for transporting data are:[111]
Having the customer write the data to a physical medium, such as magnetic tape, and
then transporting the tape elsewhere.[112]
Directly transferring the data to another site during the backup, using appropriate links.
Uploading the data "into the cloud".[113]