
Design criteria and trade-offs

• Availability expectations: The cost of avoiding downtime should not exceed the cost of the downtime itself[52] (a worked sketch follows this list).
• Site selection: Location factors include proximity to power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services. Others are flight paths, neighboring uses, geological risks, and climate (associated with cooling costs).[53]
  o Available power is often the hardest factor to change.
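
As an illustrative sketch of the availability trade-off (every figure below is an invented example, not from this article), the decision can be framed as comparing the annualized cost of a resilience upgrade against the downtime losses it is expected to avoid:

```python
# Hedged sketch: spend on avoiding downtime only while it costs less
# than the downtime it prevents. All figures are invented examples.
downtime_cost_per_hour = 50_000   # assumed business loss, USD per hour
expected_downtime_hours = 8.0     # assumed hours per year without the upgrade
avoided_fraction = 0.9            # assumed share of downtime the upgrade prevents
upgrade_cost_per_year = 200_000   # assumed annualized cost of the upgrade

avoided_loss = downtime_cost_per_hour * expected_downtime_hours * avoided_fraction
print(f"avoided loss ${avoided_loss:,.0f} vs upgrade cost ${upgrade_cost_per_year:,.0f}")
# $360,000 avoided vs $200,000 spent -> justified under these assumptions
```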
High availability
Main article: High availability

Various metrics exist for measuring data availability beyond 95% uptime, with the top of the scale counting how many "nines" can be placed after "99%".[54]
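
Each additional "nine" divides the permitted downtime by ten. A minimal sketch of the arithmetic (the availability figures below are standard examples, not values from this article):

```python
# Hedged sketch: converts an availability percentage into the maximum
# downtime allowed per 365-day year. Standard arithmetic, not figures
# from this article.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def max_downtime_minutes(availability_pct: float) -> float:
    """Yearly downtime budget for a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):  # two to five "nines"
    print(f"{pct}% -> {max_downtime_minutes(pct):8.1f} min/year")
# 99.0% -> 5256.0 min (about 3.7 days); 99.999% -> about 5.3 min
```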
Modularity and flexibility
Main article: Modular data center

Modularity and flexibility are key elements in allowing a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.[55]
A modular data center may consist of data center equipment contained within shipping containers or similar portable containers.[56] Components of the data center can be prefabricated and standardized, which facilitates relocation if needed.[57]
Environmental control
Temperature[note 10] and humidity are controlled via:

• Air conditioning
• Indirect cooling, such as the use of outside air,[58][59][note 11] indirect evaporative cooling (IDEC) units, and sea water.
Electrical power

A bank of batteries in a large data center, used to provide power until diesel generators can start
Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel or gas turbine generators.[60]
To prevent single points of failure, all elements of the electrical systems, including
backup systems, are typically fully duplicated, and critical servers are connected to both
the "A-side" and "B-side" power feeds. This arrangement is often made to achieve N+1
redundancy in the systems. Static transfer switches are sometimes used to ensure
instantaneous switchover from one supply to the other in the event of a power failure.
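
As a rough illustration of N+1 sizing (the load and unit ratings below are invented, not from the article), the system carries the full load with N units and keeps one spare:

```python
import math

# Hedged sketch of N+1 redundancy sizing: enough units to carry the
# full load (N), plus one spare unit. All figures are invented examples.
def units_for_n_plus_1(load_kw: float, unit_capacity_kw: float) -> int:
    n = math.ceil(load_kw / unit_capacity_kw)  # units needed for the load
    return n + 1                               # one extra for redundancy

print(units_for_n_plus_1(load_kw=800, unit_capacity_kw=300))  # 3 + 1 -> 4 units
```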
Low-voltage cable routing
Options include:

• Data cabling can be routed through overhead cable trays.[61]
• Raised-floor cabling, for security reasons and to avoid the addition of cooling systems above the racks.
• Smaller/less expensive data centers without raised flooring may use anti-static tiles as a flooring surface.
Air flow
Air flow management addresses the need to improve data center computer cooling efficiency by preventing the recirculation of hot air exhausted from IT equipment and reducing bypass airflow. There are several methods of separating hot and cold airstreams, such as hot/cold aisle containment and in-row cooling units.[62]
Aisle containment
Cold aisle containment is done by exposing the rear of equipment racks, while the fronts
of the servers are enclosed with doors and covers.

Typical cold aisle configuration with server rack fronts facing each other and cold air distributed through
the raised floor.

Computer cabinets are often organized for containment of hot/cold aisles. Ducting
prevents cool and exhaust air from mixing. Rows of cabinets are paired to face each
other so that cool air can reach equipment air intakes and warm air can be returned to
the chillers without mixing.
Alternatively, a range of underfloor panels can create efficient cold air pathways
directed to the raised floor vented tiles. Either the cold aisle or the hot aisle can be
contained.[63]
Another alternative is fitting cabinets with vertical exhaust ducts (chimneys).[64] Hot exhaust air is directed into a plenum above a drop ceiling and back to the cooling units or to outside vents. With this configuration, a traditional hot/cold aisle configuration is not a requirement.[65]
Fire protection

FM200 Fire Suppression Tanks

Data centers feature fire protection systems, including passive and active design elements, as well as the implementation of fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage.
Two water-based options are:[66]
• sprinkler
• mist
A waterless alternative offers some of the benefits of chemical suppression: a clean agent gaseous fire suppression system.
Security
Main article: Data center security

Physical access is usually restricted. Layered security often starts with fencing, bollards and mantraps.[67] Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information. Fingerprint recognition mantraps are starting to become commonplace.
Logging access is required by some data protection regulations; some organizations
tightly link this to access control systems. Multiple log entries can occur at the main
entrance, entrances to internal rooms, and at equipment cabinets. Access control at
cabinets can be integrated with intelligent power distribution units, so that locks are
networked through the same appliance.[68]
Energy use

Google Data Center, The Dalles, Oregon

Main article: IT energy management


Energy use is a central issue for data centers. Power draw ranges from a few kW for a
rack of servers in a closet to several tens of MW for large facilities. Some facilities have
power densities more than 100 times that of a typical office building. [69] For higher power
density facilities, electricity costs are a dominant operating expense and account for
over 10% of the total cost of ownership (TCO) of a data center.[70]
Power costs for 2012 often exceeded the cost of the original capital investment.[71] Greenpeace estimated worldwide data center power consumption for 2012 as about 382 billion kWh.[72] Global data centers used roughly 416 TWh in 2016, nearly 40% more than the entire United Kingdom; US data center consumption was 90 billion kWh.[73]
Greenhouse gas emissions
In 2007 the entire information and communication technologies (ICT) sector was estimated to be responsible for roughly 2% of global carbon emissions, with data centers accounting for 14% of the ICT footprint.[74] The US EPA estimated that servers and data centers were responsible for up to 1.5% of total US electricity consumption,[75] or roughly 0.5% of US GHG emissions,[76] for 2007. Given a business-as-usual scenario, greenhouse gas emissions from data centers were projected to more than double from 2007 levels by 2020.[74]
According to an 18-month investigation by scholars at Rice University's Baker Institute for Public Policy in Houston and the Institute for Sustainable and Applied Infodynamics in Singapore, data center-related emissions will more than triple by 2020.[77]
Energy efficiency and overhead
The most commonly used metric of data center energy efficiency is power usage effectiveness (PUE), calculated as the ratio of total power entering the data center divided by the power used by IT equipment. It indicates how much of the power is consumed by overhead (cooling, lighting, etc.). The average US data center has a PUE of 2.0,[75] meaning two watts of total power (overhead + IT equipment) for every watt delivered to IT equipment. State-of-the-art is estimated to be roughly 1.2.[78] Google publishes quarterly efficiency figures from its data centers in operation.[79]
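
As a worked illustration (the kW readings below are invented, not from the article), PUE follows directly from metered facility and IT power:

```python
# Hedged sketch: PUE from metered readings. The 1500/750 kW figures are
# invented; a PUE of 2.0 matches the "two watts of total power for every
# watt delivered to IT equipment" reading in the text above.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power."""
    return total_facility_kw / it_equipment_kw

total_kw, it_kw = 1500.0, 750.0
overhead_kw = total_kw - it_kw
print(f"PUE = {pue(total_kw, it_kw):.2f}, "
      f"overhead = {overhead_kw:.0f} kW ({overhead_kw / total_kw:.0%} of total)")
# PUE = 2.00, overhead = 750 kW (50% of total)
```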
The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile of energy efficiency of all reported facilities.[80] The Energy Efficiency Improvement Act of 2015 (United States) requires federal facilities, including data centers, to operate more efficiently. California's Title 24 (2014) of the California Code of Regulations mandates that every newly constructed data center must have some form of airflow containment in place to optimize energy efficiency. The European Union also has a similar initiative: the EU Code of Conduct for Data Centres.[81]

Energy use analysis and projects


The focus of measuring and analyzing energy use goes beyond what is used by IT equipment; facility support hardware such as chillers and fans also use energy.[82]
In 2011 server racks in data centers were designed for more than 25 kW, and the typical server was estimated to waste about 30% of the electricity it consumed. The energy demand for information storage systems was also rising. A high-availability data center was estimated to have a 1 megawatt (MW) demand and consume $20,000,000 in electricity over its lifetime, with cooling representing 35% to 45% of the data center's total cost of ownership. Calculations showed that in two years the cost of powering and cooling a server could equal the cost of purchasing the server hardware.[83] Research in 2018 showed that a substantial amount of energy could still be conserved by optimizing IT refresh rates and increasing server utilization.[84]
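
As a back-of-the-envelope check (the electricity price and lifetime are assumptions of this sketch; the article states neither), a figure like $20,000,000 over a facility's lifetime is consistent with simple arithmetic:

```python
# Hedged sketch: rough lifetime electricity cost of a 1 MW facility.
# The $0.15/kWh price and 15-year lifetime are assumptions chosen to
# show how a ~$20M figure can arise; the article states neither.
demand_kw = 1000                  # 1 MW continuous demand
price_per_kwh = 0.15              # assumed electricity price, USD
hours_per_year = 24 * 365

annual_cost = demand_kw * hours_per_year * price_per_kwh
print(f"annual electricity cost: ${annual_cost:,.0f}")       # ~$1.31M per year
print(f"over 15 years:           ${annual_cost * 15:,.0f}")  # ~$19.7M
```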
In 2011 Facebook, Rackspace and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project, Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy. By not buying commercial off-the-shelf servers, energy consumed by unnecessary motherboard expansion slots and unneeded components such as graphics cards was also saved.[85] In 2016 Google joined the project and published the designs of its 48V DC shallow data center rack. This design had long been part of Google data centers. By eliminating the multiple transformers usually deployed in data centers, Google had achieved a 30% increase in energy efficiency.[86] In 2017 sales of data center hardware built to OCP designs topped $1.2 billion and were expected to reach $6 billion by 2021.[85]
Power and cooling analysis
Data center at CERN (2010)

Power is the largest recurring cost to the user of a data center. [87] Cooling it at or
below 70 °F (21 °C) wastes money and energy.[87] Furthermore, overcooling
equipment in environments with a high relative humidity can expose equipment to a
high amount of moisture that facilitates the growth of salt deposits on conductive
filaments in the circuitry.[88]
A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures.[89] A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity.[90] The cooling of data centers is the second largest power consumer after servers. Cooling energy ranges from 10% of total energy consumption in the most efficient data centers up to 45% in standard air-cooled data centers.
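
As a small illustration of the density metric just defined (the capacity and floor area below are invented figures, not from the article):

```python
# Hedged sketch: power cooling density as cooling capacity per unit of
# floor area. The 500 kW / 10,000 sq ft figures are invented examples.
def cooling_density_w_per_sqft(cooling_capacity_kw: float,
                               floor_area_sqft: float) -> float:
    """Watts of cooling available per square foot of floor space."""
    return cooling_capacity_kw * 1000 / floor_area_sqft

print(cooling_density_w_per_sqft(500, 10_000))  # 50.0 W per square foot
```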
Energy efficiency analysis
An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's power usage effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics.[91] However, the limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise.[92]
Computational fluid dynamics (CFD) analysis
Main article: Computational fluid dynamics

This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center, predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption, using numerical modeling.[93] By predicting the effects of these environmental conditions, CFD analysis can be used to predict the impact of high-density racks mixed with low-density racks,[94] as well as the onward impact on cooling resources, poor infrastructure management practices, and AC failure or AC shutdown for scheduled maintenance.
Thermal zone mapping
Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center.[95]
This information can help to identify optimal positioning of data center equipment.
For example, critical servers might be placed in a cool zone that is serviced by
redundant AC units.
Green data centers
Main article: Green data center

This water-cooled data center in the Port of Strasbourg, France, claims the attribute "green".

Data centers use a lot of power, consumed by two main usages: the power required to run the actual equipment and the power required to cool the equipment. Power efficiency reduces the first category.
Reducing cooling costs by natural means includes location decisions: when the focus is not being near good fiber connectivity, power grid connections and concentrations of people to manage the equipment, a data center can be miles away from its users. 'Mass' data centers like those of Google or Facebook do not need to be near population centers. Arctic locations, which can use outside air for cooling, are becoming more popular.[96]
Renewable electricity sources are another plus. Thus countries with favorable conditions, such as Canada,[97] Finland,[98] Sweden,[99] Norway,[100] and Switzerland,[101] are trying to attract cloud computing data centers.
Bitcoin mining is increasingly being seen as a potential way to build data centers at
the site of renewable energy production. Curtailed and clipped energy can be used
to secure transactions on the Bitcoin blockchain providing another revenue stream
to renewable energy producers.[102]
Energy reuse
It is very difficult to reuse the heat from air-cooled data centers. For this reason, data center infrastructures are more often equipped with heat pumps.[103] An alternative to heat pumps is the adoption of liquid cooling throughout a data center. Different liquid cooling techniques are mixed and matched to allow for a fully liquid-cooled infrastructure that captures all heat in water. The different liquid technologies are categorized into three main groups: indirect liquid cooling (water-cooled racks), direct liquid cooling (direct-to-chip cooling) and total liquid cooling (complete immersion in liquid; see server immersion cooling). This combination of technologies allows the creation of a thermal cascade as part of temperature chaining scenarios to create high-temperature water outputs from the data center.

Dynamic infrastructure
Main article: Dynamic infrastructure

Dynamic infrastructure[104] provides the ability to intelligently, automatically and securely move workloads within a data center[105] anytime, anywhere, for migrations, provisioning,[106] enhancing performance, or building co-location facilities. It also facilitates performing routine maintenance on either physical or virtual systems, all while minimizing interruption. A related concept is composable infrastructure, which allows for the dynamic reconfiguration of the available resources to suit needs, only when needed.[107]
Side benefits include:
• reducing cost
• facilitating business continuity and high availability
• enabling cloud and grid computing.[108]

Network infrastructure

An operation engineer overseeing a network operations control room of a data center (2006)
An example of network infrastructure of a data center

Communications in data centers today are most often based on networks running the IP protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world,[109] which are connected according to the data center network architecture. Redundancy of the Internet connection is often provided by using two or more upstream service providers (see Multihoming).
Some of the servers at the data center are used for running the basic Internet
and intranet services needed by internal users in the organization, e.g., e-mail
servers, proxy servers, and DNS servers.
Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, and so on. Also common are monitoring systems for the network and some of the applications. Additional off-site monitoring systems are also typical, in case of a failure of communications inside the data center.

Software/data backup
Non-mutually exclusive options for data backup are:

• Onsite
• Offsite
Onsite is traditional,[110] and one major advantage is immediate availability.
Offsite backup storage
Main article: Disaster recovery § Offsite backup storage
Data backup techniques include having an encrypted copy of the data offsite (a minimal encryption sketch follows the list below). Methods used for transporting data are:[111]

• having the customer write the data to a physical medium, such as magnetic tape, and then transporting the tape elsewhere[112]
• directly transferring the data to another site during the backup, using appropriate links
• uploading the data "into the cloud"[113]
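
A minimal sketch of producing the encrypted offsite copy mentioned above, assuming the third-party Python "cryptography" package and invented file names (neither is specified in the article):

```python
# Hedged sketch: producing an encrypted copy of data before it is
# transported offsite. Uses the third-party "cryptography" package
# (pip install cryptography); file names are invented examples.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this key separately and safely
cipher = Fernet(key)

with open("backup.tar", "rb") as f:  # hypothetical local backup archive
    ciphertext = cipher.encrypt(f.read())

with open("backup.tar.enc", "wb") as f:
    f.write(ciphertext)              # this copy is safe to ship or upload offsite
```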

Modular data center


Main article: Modular data center

A 40-foot Portable Modular Data Center

For quick deployment or disaster recovery, several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in a very short time.
