designed to be better.™
5 KEY ELEMENTS TO CONSIDER WHEN DESIGNING A DATA CENTER

EXECUTIVE SUMMARY
■■ Introduction
■■ Element #1: Performance
■■ Element #2: Time
■■ Element #3: Space
■■ Element #4: Experience
■■ Element #5: Sustainability
■■ Conclusions

INTRODUCTION

The demands on data centers and networks are growing dramatically. Most traditional communications media, including telephone, music, film, and television, are being reshaped or redefined by the Internet, giving birth to new services such as Voice over Internet Protocol (VoIP) and Internet Protocol Television (IPTV). More and more devices and "things" are being connected to the internet as we evolve toward the Internet of Everything.

With the growing use of streaming video (Netflix, iTunes, YouTube, primetime television, etc.), BYOD (bring your own device), and other connected technologies, the demands on data centers are growing exponentially. Keeping up with these demands is becoming a very large problem for today's data center manager.

Data centers that deliver critical services for businesses have always been concerned with costly downtime. Around 2005, there was a surge in the number of servers in the data center, drastically impacting power requirements and cooling concerns. Shortly after, with the downturn of the economy, power and cooling efficiency began to take precedence over downtime.

Although important, a good power and cooling strategy doesn't guarantee a good data center design. Even the most efficient power and cooling design doesn't guarantee reliable network performance; a cable could be kinked or unplugged, causing costly downtime. A good power and cooling design doesn't simplify the installation and management of the infrastructure. The physical infrastructure can even hinder supplying power where needed and cause the cooling solution to lose efficiency.

Ethernet networks can experience several different problems, including slow response time, application sluggishness, and even a loss of link connection. In a study of 67 data centers with a minimum size of 2,500 square feet, done by the Ponemon Institute in 2013 (sponsored by Emerson Network Power), the average cost of data center downtime across industries was approximately $7,900 per minute, a 41 percent increase from $5,600 in 2010.

Designing for efficiency by considering only power and cooling strategies is short-sighted. There are other efficiencies that will enhance the data center's ability to cost-effectively adapt to business strategy changes and increased computing demand. A data center solution that considers and designs for the five key elements (performance, time, space, experience, and sustainability) will be reliable, flexible, scalable, and efficient in many ways beyond just power and cooling.

To maximize network performance, three parts of the infrastructure must be considered: the structured cabling, racks and cabinets, and the cable management.

Maximize channel performance with co-engineered cabling and connectivity.
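The Ponemon downtime figures cited in the introduction are easy to sanity-check. A minimal sketch; the 90-minute outage duration is a hypothetical chosen purely for illustration:

```python
# Quick check of the Ponemon Institute downtime figures cited above.

cost_2010 = 5600.0   # average downtime cost, USD per minute (2010 study)
cost_2013 = 7900.0   # average downtime cost, USD per minute (2013 study)

# Percent increase from 2010 to 2013:
increase = (cost_2013 - cost_2010) / cost_2010 * 100
print(f"Increase: {increase:.0f}%")  # matches the 41 percent cited

# Cost of a single hypothetical 90-minute outage at the 2013 rate:
outage_minutes = 90
print(f"90-minute outage: ${cost_2013 * outage_minutes:,.0f}")
```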
ELEMENT #1:
PERFORMANCE

Maximizing the structured cabling channel performance takes more than just selecting the best performing connectivity and cable on the market. This is because TIA component performance is specified as a range, not a single value, so the channel performance becomes an average. (See the "Importance of Component Compliance" white paper.) Also, design decisions in the cable can negatively impact the performance of the connector, and vice versa. To maximize channel performance with the most headroom, the connectivity and cable components should be co-engineered. Insertion loss should also be minimized. This is especially critical with fiber systems, since the loss budgets for 40 and 100 Gigabit Ethernet can easily be exceeded even if components meet the maximum insertion loss specified by the TIA standards. (See the "Preparing for Future Structured Cabling Requirements in the Data Center: 40/100 Gigabit Ethernet and Beyond" white paper.)

Active equipment will change at least once, and usually several times, during the life of the physical infrastructure. In fact, 90% of active equipment will be replaced in five years or less (Source: BSRIA, 2011). Right-sizing the rack or cabinet initially is not difficult, but the rack or cabinet must be able to adjust to accommodate new equipment with different size and weight requirements. Look for features like adjustable side rails and higher weight thresholds.

Airflow may also change, since some equipment breathes side-to-side instead of front-to-back, and rising temperatures will negatively affect network performance. The mixing of cold and hot air will decrease the efficiency of the cooling solution. The rack or cabinet should have options that help manage airflow, like baffles that can be easily added for equipment with side-to-side airflow, blanking filler panels to fill open rack units, and brush grommets (or something similar) at the cable entrance and exit points. These will minimize the mixing of hot and cold air to help maximize the efficiency of the cooling solution.

Physical support should be flexible and scalable to accommodate new equipment and have options to help manage airflow.

The last component of the infrastructure is cable management. The infrastructure must have a well-designed cable management solution that is more than just a path for the cable. Vertical managers should be large so they provide room to grow, make moves, adds, and changes easier, and reduce cable obstruction for better airflow. Look for cable management features that have been designed for both copper and fiber: they should maintain the proper bend radius for each media type and protect the cable from damage. Tight bends and excessive pulling on copper cables can create crosstalk and return loss; a 3 dB (decibel) loss equates to approximately 50% of the signal power being lost. Excessive bending or pulling of optical fiber cable can result in additional link loss and damage that could eventually cause the fibers to fail. Excessive bends in copper or fiber cable and equipment cords can also stress a port, resulting in a damaged equipment port. Figure 1 shows an example of maintaining the bend radius while protecting the cable.

Some equipment manufacturers include equipment cord managers with their equipment. Because these managers are designed to attach directly to the active piece of equipment, they can cause a sharp bend in the cord, resulting in damage to the cord and stress on the equipment port. These managers do not compensate for the stress that can be introduced at the network port by poor routing practices. Replacing a patch or equipment cord is inexpensive; replacing an equipment port is not. A Cisco Nexus 5000 switch port is more than $400 (based on the Cisco Nexus 5000 Unified Fabric TCO calculator), so a damaged port is significant, especially since individual ports cannot be replaced.

Figure 1: Fingers designed into Legrand vertical cable managers support and protect cable at 1-inch intervals. Bend limiting clips snap onto the fingers or the side of the fiber enclosure.

To maximize network performance, make informed decisions about the three parts of the infrastructure: structured cabling, racks and cabinets, and cable management. Select a cabling solution with co-engineered cable and connectivity to maximize channel performance. Look for flexible and scalable
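The decibel figures above can be checked numerically. This sketch converts a dB loss to the fraction of power remaining, then sums component losses for a hypothetical 100 m fiber channel against a 40GBASE-SR4 style budget; the per-connector and per-kilometer loss values are illustrative assumptions, not from any specific datasheet.

```python
# Fraction of signal power remaining after a given dB loss:
# power_out / power_in = 10^(-dB / 10)
def power_fraction(loss_db: float) -> float:
    return 10 ** (-loss_db / 10)

print(f"{power_fraction(3.0):.1%} of power remains after 3 dB")  # ~50%

# Hypothetical fiber channel: two mated MPO connections plus fiber attenuation.
# (Component values below are assumed for illustration.)
connector_loss_db = 0.5        # per mated connector pair (assumed)
fiber_loss_db_per_km = 3.0     # multimode fiber near 850 nm, roughly
length_km = 0.1                # 100 m channel

channel_loss = 2 * connector_loss_db + fiber_loss_db_per_km * length_km
budget_db = 1.9                # e.g., 40GBASE-SR4 over OM3 per IEEE 802.3ba
print(f"Channel loss {channel_loss:.2f} dB vs. budget {budget_db} dB")
# At the TIA maximum of 0.75 dB per mated pair, the same channel would reach
# 1.80 dB, nearly consuming the entire budget -- which is why co-engineered,
# low-loss components matter at 40/100 GbE.
```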
rack, cabinet and cable management solutions that can accommodate higher weight thresholds, have adjustable rails and wider vertical managers, along with integrated cable and airflow management options for better protection and airflow. The physical support solution should support both copper and fiber media. Also, work with a manufacturer that is active in the standards bodies. Standards usually take two to three years to develop, so these manufacturers will be aware of the upcoming requirements of new technology long before the standard is published.

ELEMENT #2:
TIME

Data centers are growing in size and complexity but often require faster deployment times. They must be able to adapt quickly and easily to support changing business requirements. Selecting infrastructure solutions that optimize time results in faster deployments, reduced cost, and easier moves, adds, and changes.

Since equipment will change several times during the life of the infrastructure, the infrastructure must be able to support new weight requirements, increasing port and cable densities, and cable media changes (i.e., replacing copper with fiber). At the same time, the infrastructure must be able to support common topologies like ToR (top of rack), MoR (middle of row), and EoR (end of row) along with future topologies.

A modular solution provides the foundation for a flexible and scalable building infrastructure. It combines the advantages of standardization with those of customization. The modular components must be designed to be integrated in order to maximize time savings in deployment, installation, and future moves, adds, and changes.

The modular design should be based around the rack or cabinet. Cable management and pathways should easily integrate with the rack or cabinet to make it truly modular. There are several time-saving features to look for in a rack or cabinet, like being quick and easy to assemble. They should be able to be installed and correctly spaced without having the vertical cable managers mounted. This provides room for equipment and cabling to be installed without the vertical manager in the way, hindering installation. Racks and cabinets with higher weight thresholds and taller heights will scale to new equipment needs instead of having to be replaced, saving both time and money.

Other rack and cabinet modular components for effective airflow management and cooling should be available. Baffles that can be added for side-breathing equipment, blanking panels, brush grommets, and the like should be available to help passively manage airflow. This improves the efficiency of the existing cooling solution, especially when using a traditional hot-aisle/cold-aisle arrangement. If equipment densities are greater than 5 kW (kilowatts), consider a rack-based cooling solution. New solutions like refrigerant-based close-coupled cooling are now available. When integrated within the enclosure, these solutions save time because they can easily be added to support higher densities when needed.

Wide vertical cable managers scale to future needs and save time when installing and dressing cable and when doing MAC work.

Wider vertical cable managers should be used to provide abundant room for cable densities into the future. They effectively save time because there is plenty of room to install, dress, and later manage the cable. If a rack or cabinet is loaded with patching equipment or 1U/2U servers, there will be a lot of cable to manage. If a rack or cabinet contains blade servers, there will be fewer cables, but they still must be properly managed so airflow is not blocked, preventing heat accumulation. Blade servers often have hot-swappable components; the cable management must keep the cable from blocking these components, either in the front or back, depending on the server. Modular cable management should mount on the front or back of the rack or cabinet to be available where needed.

Cable pathways should be integrated with the rack and cabinet solution. Running cable overhead is a growing trend in data centers. It eliminates concerns about blocking airflow under a raised floor and makes the cable more accessible, making moves, adds, and changes easier. When the overhead path (e.g., cable/basket tray, ladder rack) is integrated in the modular design, it is quick and easy to install. Integration also minimizes pathway elevation changes because the path has been designed to work effortlessly with the rack or cabinet.

Integrated pathways reduce installation time and minimize pathway elevation changes.
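The 5 kW guideline for considering rack-based cooling can be translated into an airflow requirement with the common sea-level rule of thumb CFM ≈ 3.16 × W / ΔT(°F). A minimal sketch; the 20 °F temperature rise across the equipment is an assumed, typical value, not a figure from this paper:

```python
# Rough airflow needed to carry away a rack's heat load, using the
# rule-of-thumb CFM = 3.16 * watts / delta_T_F (sea-level air).

def required_cfm(heat_load_w: float, delta_t_f: float) -> float:
    """Airflow in cubic feet per minute to remove heat_load_w watts
    with an air temperature rise of delta_t_f degrees Fahrenheit."""
    return 3.16 * heat_load_w / delta_t_f

# A 5 kW rack with an assumed 20 F rise works out to roughly 790 CFM,
# which shows why densities above 5 kW strain passive airflow management.
print(f"{required_cfm(5000, 20):.0f} CFM")
```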
Using pre-terminated copper and fiber cabling solutions is also a great way to save time during installation and later when performing moves, adds, and changes. As fiber technology migrates from 10 Gigabit Ethernet (GbE) to faster applications like 40 and 100 Gigabit Ethernet, the ability to field terminate cable changes. 10GbE uses two fibers terminated with discrete connectors like the LC or SC connector. Beyond 10GbE, parallel transmission, using multiple fibers for transmitting and multiple fibers for receiving, is employed using the MPO connector, which terminates twelve (or twenty-four) fibers in one connector. MPO connectors cannot be field-terminated. Pre-terminated fiber systems can facilitate the migration to the higher speeds that will be required. (See the "Preparing for Future Structured Cabling Requirements in the Data Center: 40/100 Gigabit Ethernet and Beyond" white paper.)

Pre-terminated fiber trunks save time and help migration beyond 10GbE. Trunks shown from left to right are LC-to-LC, LC-to-MPO and MPO-to-MPO.

Modular solutions should offer the following time-savings attributes: flexibility and scalability by allowing connectivity to be added as needed, eliminating the need to know the exact number of ports or type of connectors in advance. Fiber cable is displacing existing copper cable, and LC (2 fiber) connectors will be replaced by MPO (multi-fiber, typically 12 or 24 fibers) connectors as 10 GbE (gigabit Ethernet) migrates to 40 GbE and 100 GbE.

Multi-media and high density options optimize the use of space.
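The duplex-to-parallel migration described above comes down to counting fibers per lane. A minimal sketch; the lane counts follow the IEEE 802.3 short-reach variants named in the comments:

```python
# Fiber counts for duplex vs. parallel optics.
# 10GBASE-SR serializes onto one lane per direction (duplex LC);
# 40GBASE-SR4 uses four 10G lanes per direction;
# 100GBASE-SR10 uses ten 10G lanes per direction.

def fibers_needed(lanes_per_direction: int) -> int:
    # One fiber to transmit plus one to receive, per lane.
    return 2 * lanes_per_direction

for name, lanes in [("10GBASE-SR", 1), ("40GBASE-SR4", 4), ("100GBASE-SR10", 10)]:
    print(f"{name}: {fibers_needed(lanes)} fibers")
# 10GBASE-SR:     2 fibers -> duplex LC
# 40GBASE-SR4:    8 fibers -> fits a 12-fiber MPO (4 fibers unused)
# 100GBASE-SR10: 20 fibers -> requires a 24-fiber MPO (or two 12-fiber)
```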
Racks and cabinets that can hold additional or heavier equipment without having to be replaced allow more equipment in the same square foot area of the data center, providing a better return on investment for that floor space.

■■ Consider patching outside the rack and cabinet (e.g., overhead) to conserve space for equipment
■■ Select a rack or cabinet solution that easily integrates with overhead pathways

Work with a manufacturer whose solutions are designed to work seamlessly together. This expertise should be utilized to extend the equipment life, reduce cost, and address the unique challenges in data centers, accomplished through one point of contact from the manufacturer.
CONCLUSIONS
Data centers deliver critical services for the business. Trying to maximize efficiency in a data center design focused only on power and cooling strategies is short-sighted. There are other efficiencies that will enhance the data center's ability to cost-effectively adapt to business strategy changes and increased computing demand. A data center solution should be designed with five key goals: guaranteeing performance, saving time, optimizing space, enhancing the experience by utilizing resources, and enabling sustainability. This will help attain complete efficiency in the data center design.
©2014 Legrand All Rights Reserved rev.11|14
Data Communications
125 Eugene O’Neill Drive
New London, CT 06320
800.934.5432
www.legrand.us