THE NEW DATA CENTER
FIRST EDITION
New technologies are radically reshaping the data center
TOM CLARK
Tom Clark, 1947–2010
All too infrequently we have the true privilege of knowing a friend
and colleague like Tom Clark. We mourn the passing of a special
person, a man who was inspired as well as inspiring, an intelligent
and articulate man, a sincere and gentle person with enjoyable
humor, and someone who was respected for his great achievements.
We will always remember the endearing and rewarding experiences
with Tom and he will be greatly missed by those who knew him.
Mark S. Detrick
© 2010 Brocade Communications Systems, Inc. All Rights Reserved.
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView,
NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered
trademarks, and Brocade Assurance, Brocade NET Health, Brocade One,
Extraordinary Networks, MyBrocade, and VCS are trademarks of Brocade
Communications Systems, Inc., in the United States and/or in other countries.
Other brands, products, or service names mentioned are or may be
trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set
forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered or to be offered by Brocade. Brocade
reserves the right to make changes to this document at any time, without
notice, and assumes no responsibility for its use. This informational document
describes features that may not be currently available. Contact a Brocade
sales office for information on feature and product availability. Export of
technical data contained in this document may require an export license from
the United States government.
Brocade Bookshelf Series designed by Josh Judd
The New Data Center
Written by Tom Clark
Reviewed by Brook Reams
Edited by Victoria Thomas
Design and Production by Victoria Thomas
Illustrated by Jim Heuser, David Lehmann, and Victoria Thomas
Printing History
First Edition, August 2010
Acknowledgements
I would first of all like to thank Ron Totah, Senior Director of Marketing at
Brocade and cat-herder of the Global Solutions Architects, a.k.a. Solutioneers.
Ron's consistent support and encouragement for the Brocade Bookshelf projects and Brocade TechBytes Webcast series provide sustained momentum for getting technical information into the hands of our customers.
The real work of project management, copyediting, content generation,
assembly, publication, and promotion is done by Victoria Thomas, Technical
Marketing Manager at Brocade. Without Victoria's steadfast commitment,
none of this material would see the light of day.
I would also like to thank Brook Reams, Solution Architect for Applications
on the Integrated Marketing team, for reviewing my draft manuscript and
providing suggestions and invaluable insights on the technologies under
discussion.
Finally, a thank you to the entire Brocade team for making this a first-class
company that produces first-class products for first-class customers
worldwide.
Preface
Chapter 1: Supply and Demand
Chapter 2: Running Hot and Cold
  Energy, Power, and Heat
  Environmental Parameters
  Rationalizing IT Equipment Distribution
  Economizers
  Monitoring the Data Center Environment
Chapter 3: Doing More with Less
  VMs Reborn
  Blade Server Architecture
  Brocade Server Virtualization Solutions
    Brocade High-Performance 8 Gbps HBAs
    Brocade 8 Gbps Switch and Director Ports
    Brocade Virtual Machine SAN Boot
    Brocade N_Port ID Virtualization for Workload Optimization
    Configuring Single Initiator/Target Zoning
    Brocade End-to-End Quality of Service
    Brocade LAN and SAN Security
    Brocade Access Gateway for Blade Frames
    The Energy-Efficient Brocade DCX Backbone Platform for Consolidation
    Enhanced and Secure Client Access with Brocade LAN Solutions
    Brocade Industry Standard SMI-S Monitoring
    Brocade Professional Services
  FCoE and Server Virtualization
Chapter 4: Into the Pool
  Optimizing Storage Capacity Utilization in the Data Center
  Building on a Storage Virtualization Foundation
  Centralizing Storage Virtualization from the Fabric
  Brocade Fabric-based Storage Virtualization
Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facilitate a compact, high-performance FCoE deployment.
Figure 19. Conventional storage configurations often result in over- and under-utilization of storage capacity across multiple storage arrays.
Figure 20. Storage virtualization aggregates the total storage capacity of multiple physical arrays into a single virtual pool.
Figure 21. The virtualization abstraction layer provides virtual targets to real hosts and virtual hosts to real targets.
Figure 22. Leveraging classes of storage to align data storage to the business value of data over time.
Figure 23. FAIS splits the control and data paths for more efficient execution of metadata mapping between virtual storage and servers.
Figure 24. The Brocade FA4-18 Application Blade provides line-speed metadata map execution for non-disruptive storage pooling, mirroring, and data migration.
Figure 25. A storage-centric core/edge topology provides flexibility in deploying servers and storage assets while accommodating growth over time.
Figure 26. Brocade QoS gives preferential treatment to high-value applications through the fabric to ensure reliable delivery.
Figure 27. Ingress rate limiting enables the fabric to alleviate potential congestion by throttling the transmission rate of the offending initiator.
Figure 28. Preferred paths are established through traffic isolation zones, which enforce separation of traffic through the fabric based on designated applications.
Figure 29. By monitoring traffic activity on each port, Top Talkers can identify which applications would most benefit from Adaptive Networking services.
Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port compared to the competition.
Figure 31. The Brocade Encryption Switch provides secure encryption for disk or tape.
Figure 32. Using fabric ACLs to secure switch and device connectivity.
Figure 33. Integrating formerly standalone mid-tier servers into the data center fabric with an iSCSI blade in the Brocade DCX.
Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-wide disruptions.
Figure 35. IR facilitates resource sharing between physically independent SANs.
Figure 36. Long-distance connectivity options using Brocade devices.
Figure 37. Access, aggregation, and core layers in the data center network.
Figure 38. Access layer switch placement is determined by availability, port density, and cable strategy.
Figure 68. Brocade INM Dashboard (top) and Backup Configuration Manager (bottom).
Figure 69. The pillars of Brocade VCS (detailed in the next section).
Figure 70. A Brocade VCS reference network architecture.
[Figure 1: Basic data center functional areas: entrance room (carrier equipment and demarcations), offices and operations center, telecom room, main distribution area (routers, backbone LAN/SAN/KVM switches, PBX, M13 muxes), horizontal and zone distribution areas, and equipment distribution areas (racks/cabinets), linked by backbone and horizontal cabling.]
[Figure 2: The same functional areas supplemented by the key support systems: UPS and battery room, backup generators with diesel fuel reserves, power distribution, computer room air conditioners (CRAC) with cooling towers and conduits, and fire suppression.]
The diagram in Figure 2 shows the basic functional areas for IT pro-
cessing supplemented by the key data center support systems
required for high availability data access. Each unit of powered equip-
ment has a multiplier effect on total energy draw. First, each data
center element consumes electricity according to its specific load
requirements, typically on a 7x24 basis. Second, each unit dissipates
heat as a natural by-product of its operation, and heat removal and
cooling requires additional energy draw in the form of the computer
room air conditioning system. The CRAC system itself generates heat,
which also requires cooling. Depending on the design, the CRAC sys-
tem may require auxiliary equipment such as cooling towers, pumps,
and so on, which draw additional power. Because electronic equipment is sensitive to ambient humidity, each element also places an additional load on the humidity control system.
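This multiplier effect is easy to quantify with a rough model. The following sketch, a simple illustration with assumed overhead fractions rather than measured figures from any facility, estimates total facility draw from a given IT load:

```python
# Rough model of the data center energy multiplier effect.
# The overhead fractions below are illustrative assumptions.

def facility_draw_kw(it_load_kw,
                     cooling_overhead=0.5,    # CRAC plant, towers, pumps
                     humidity_overhead=0.05,  # humidity control load
                     power_dist_loss=0.10):   # UPS and distribution losses
    """Return total facility power for a given IT equipment load.

    Every watt of IT load is paid for again in cooling, humidity
    control, and distribution losses, so removing one watt of IT
    equipment saves well over one watt overall.
    """
    overhead = cooling_overhead + humidity_overhead + power_dist_loss
    return it_load_kw * (1.0 + overhead)

it_kw = 500.0
total_kw = facility_draw_kw(it_kw)
print(f"IT load: {it_kw:.0f} kW, facility draw: {total_kw:.0f} kW")
print(f"Effective multiplier: {total_kw / it_kw:.2f}")
```

Under these assumptions, removing a kilowatt of IT load saves roughly 1.65 kW of facility draw, which is why consolidation pays off disproportionately.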
What differentiates the new data center architecture from the old may
not be obvious at first glance. There are, after all, still endless racks of
blinking lights, cabling, network infrastructure, storage arrays, and
other familiar systems and a certain chill in the air. The differences are
found in the types of technologies deployed and the real estate
required to house them.
As we will see in subsequent chapters, the new data center is an
increasingly virtualized environment. The static relationships between
clients, applications, and data characteristic of conventional IT pro-
cessing are being replaced with more flexible and mobile relationships
that enable IT resources to be dynamically allocated when and where
they are needed most. The enabling infrastructure in the form of vir-
tual servers, virtual fabrics, and virtual storage has the added benefit
of reducing the physical footprint of IT and its accompanying energy
consumption. The new data center architecture thus reconciles the
conflict between supply and demand by requiring less energy while
supplying higher levels of IT productivity.
Environmental Parameters
Because data centers are closed environments, ambient temperature
and humidity must also be considered. ASHRAE Thermal Guidelines
for Data Processing Environments provides best practices for main-
taining proper ambient conditions for operating IT equipment within
data centers. Data centers typically run fairly cool at about 68 degrees
Fahrenheit and 50% relative humidity. While legacy mainframe sys-
tems did require considerable cooling to remain within operational
norms, open systems IT equipment is less demanding. Consequently,
there has been a more recent trend to run data centers at higher
ambient temperatures, sometimes disturbingly referred to as
“Speedo” mode data center operation. Although ASHRAE's guidelines
present fairly broad allowable ranges of operation (50 to 90 degrees,
20 to 80% relative humidity), recommended ranges are still somewhat
narrow (68 to 77 degrees, 40 to 55% relative humidity).
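These ranges lend themselves to simple automated checks. The sketch below encodes the temperature and humidity figures quoted above; the three-way classification is an illustration, not an ASHRAE-defined scheme:

```python
# Classify an ambient reading against the ASHRAE ranges cited above
# (degrees Fahrenheit and percent relative humidity).

RECOMMENDED = {"temp_f": (68, 77), "rh_pct": (40, 55)}
ALLOWABLE = {"temp_f": (50, 90), "rh_pct": (20, 80)}

def classify(temp_f, rh_pct):
    """Return whether a reading is recommended, allowable, or out of range."""
    def within(ranges):
        lo_t, hi_t = ranges["temp_f"]
        lo_h, hi_h = ranges["rh_pct"]
        return lo_t <= temp_f <= hi_t and lo_h <= rh_pct <= hi_h
    if within(RECOMMENDED):
        return "within recommended range"
    if within(ALLOWABLE):
        return "allowable, but outside recommended range"
    return "out of range: corrective action needed"

print(classify(72, 50))  # within recommended range
print(classify(85, 30))  # allowable, but outside recommended range
print(classify(95, 10))  # out of range: corrective action needed
```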
[Figure 3: Hot aisle/cold aisle floor plan: alternating cold aisles, equipment rows, and hot aisles.]
[Figure 4: A server rack with constant speed fans compared to one with variable speed fans, which provides more even cooling; equipment at the bottom of the rack runs cooler.]
[Figure 5: A work cell: a cold aisle, its adjacent equipment racks, and the hot aisle behind them.]
When energy was plentiful and cheap, it was often easy to overlook the
basic best practices for data center hardware deployment and the sim-
ple remedies to correct inefficient air flow. Blanking plates, for
example, are used to cover unused rack or cabinet slots and thus
enforce more efficient airflow within an individual rack. Blanking
plates, however, are often ignored, especially when equipment is fre-
quently moved or upgraded. Likewise, it is not uncommon to find
decommissioned equipment still racked up (and sometimes actually
powered on). Racked but unused equipment can disrupt air flow within a cabinet and trap heat generated by active hardware. In raised floor data centers, decommissioned cabling can
disrupt cold air circulation and unsealed cable cutouts can result in
continuous and fruitless loss of cooling. Because the cooling plant
itself represents such a significant share of data center energy use,
even seemingly minor issues can quickly add up to major inefficien-
cies and higher energy bills.
Economizers
Traditionally, data center cooling has been provided by large air condi-
tioning systems (computer room air conditioning, or CRAC) that used
CFC (chlorofluorocarbon) or HCFC (hydrochlorofluorocarbon) refriger-
ants. Since both CFCs and HCFCs are ozone depleting, current
systems use ozone-friendly refrigerants to minimize broader environ-
mental impact. Conventional CRAC systems, however, consume
significant amounts of energy and may account for nearly half of a
data center power bill. In addition, these systems are typically over-pro-
visioned to accommodate data center growth and consequently incur
a higher operational expense than is justified for the required cooling
capacity.
For new data centers in temperate or colder latitudes, economizers
can provide part or all of the cooling requirement. Economizer technol-
ogy dates to the mid-1800s but has seen a revival in response to rising
energy costs. As shown in Figure 6, an economizer (in this case, a dry-
side economizer) is essentially a heat exchanger that leverages cooler
outside ambient air temperature to cool the equipment racks.
[Figure 6: A dry-side economizer: outside air enters through a damper and particulate filter, is conditioned by a humidifier/dehumidifier, and is supplied to the computer room, with an air return completing the loop.]
Use of outside air has its inherent problems. Data center equipment is
sensitive to particulates that can build up on circuit boards and con-
tribute to heating issues. An economizer may therefore incorporate
particulate filters to scrub the external air before the air flow enters the
data center. In addition, external air may be too humid or too dry for
data center use. Integrated humidifiers and dehumidifiers can condi-
tion the air flow to meet operational specifications for data center use.
As stated above, ASHRAE recommends 40 to 55% relative humidity.
VMs Reborn
The concept of virtual machines dates back to mainframe days. To
maximize the benefit of mainframe processing, a single physical sys-
tem was logically partitioned into independent virtual machines. Each
VM ran its own operating system and applications in isolation although
the processor and peripherals could be shared. In today's usage, VMs
typically run on open systems servers and although direct-connect
storage is possible, shared storage on a SAN or NAS is the norm.
Unlike previous mainframe implementations, today's virtualization
software can support dozens of VMs on a single physical server. Typi-
cally, 10 or fewer VM instances are run per physical platform although
more powerful server platforms can support 20 or more VMs.
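The consolidation arithmetic can be sketched as a simple packing problem: given the average CPU demand of each workload, how few hosts suffice? The demands below are hypothetical, and production placement tools also weigh memory, I/O, and availability constraints:

```python
# First-fit-decreasing placement of VM workloads onto physical hosts.
# CPU demands are hypothetical fractions of one host's capacity.

def place_vms(vm_cpu_demands, host_capacity=1.0):
    """Pack VM CPU demands onto as few hosts as first-fit allows."""
    hosts = []  # remaining capacity of each provisioned host
    for demand in sorted(vm_cpu_demands, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] = free - demand
                break
        else:
            hosts.append(host_capacity - demand)  # provision a new host
    return len(hosts)

# Twenty former standalone servers, each averaging 5-15% utilization:
demands = [0.05, 0.10, 0.15, 0.08] * 5
print(f"{len(demands)} workloads fit on {place_vms(demands)} physical hosts")
```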
[Figure: Virtual machine architecture: multiple operating system instances run above a hypervisor, which virtualizes the underlying CPU, memory, NIC, and storage I/O hardware.]
[Figure: Server power budget: the power supplies feed the CPU and auxiliary logic, memory, fans, and the network, memory, and auxiliary storage I/O buses, with external SAN storage powered separately.]
Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for
an aggregate 16 Gbps bandwidth and 1,000K IOPS.
The Brocade 815 and 825 HBAs are further optimized for server virtu-
alization connectivity by supporting advanced intelligent services that
enable end-to-end visibility and management. As discussed below,
Brocade virtual machine SAN boot, N_Port ID Virtualization (NPIV) and
integrated Quality of Service (QoS) provide powerful tools for simplify-
ing virtual machine deployments and providing proactive alerts directly
to server virtualization managers.
Brocade 8 Gbps Switch and Director Ports
In virtual server environments, the need for speed does not end at the
network or storage port. Because more traffic is now traversing fewer
physical links, building high-performance network infrastructures is a
prerequisite for maintaining non-disruptive, high-performance virtual
machine traffic flows. Brocade's support of 8 Gbps ports on both
switch and enterprise-class platforms enables customers to build high-
performance, non-blocking storage fabrics that can scale from small
VM configurations to enterprise-class data center deployments.
Designing high-performance fabrics ensures that applications running
on virtual machines are not exposed to bandwidth issues and can
accommodate high volume traffic patterns required for data backup
and other applications.
Brocade Virtual Machine SAN Boot
For both standalone physical servers and blade server environments,
the ability to boot from the storage network greatly simplifies virtual
machine deployment and migration of VM instances from one server
to another. As shown in Figure 11, SAN boot centralizes management
of boot images and eliminates the need for local storage on each phys-
ical server platform. When virtual machines are migrated from one
hardware platform to another, the boot images can be readily
accessed across the SAN via Brocade HBAs.
[Figure 11: SAN boot versus local boot: servers with Brocade 825 HBAs boot from images centralized on storage arrays across SAN switches, instead of from direct-attached storage (DAS) on each server.]
Figure 12. Brocade's QoS enforces traffic prioritization from the server
HBA to the storage port across the fabric.
Brocade edge switches with Power over Ethernet (PoE) support enable
customers to integrate a wide variety of IP business applications, includ-
ing voice over IP (VoIP), wireless access points, and security monitoring.
Brocade SecureIron switches bring advanced security protection for cli-
ent access into virtualized server clusters, while Brocade ServerIron
switches provide Layer 4–7 application switching and load balancing.
Brocade LAN solutions provide up to 10 Gbps throughput per port and
so can accommodate the higher traffic loads typical of virtual machine
environments.
Brocade Industry Standard SMI-S Monitoring
Virtual server deployments dramatically increase the number of data
flows and requisite bandwidth per physical server or blade server.
Because server virtualization platforms can support dynamic migration
of application workloads between physical servers, complex traffic pat-
terns are created and unexpected congestion can occur. This
complicates server management and can impact performance and
availability. Brocade can proactively address these issues by integrating communication between Brocade intelligent fabric services and VM management platforms.
Figure 16. FCoE simplifies the server cable plant by reducing the num-
ber of network interfaces required for client, peer-to-peer, and storage
access.
Given the more rigorous requirements for storage data handling and
performance, FCoE is not intended to run on conventional Ethernet net-
works. In order to replicate the low latency, deterministic delivery, and
high performance of traditional Fibre Channel, FCoE is best supported
on a new, hardened form of Ethernet known as Converged Enhanced
Ethernet (CEE), or Data Center Bridging (DCB), at 10 Gbps. Without the
enhancements of DCB, standard Ethernet is too unreliable to support
high-performance block storage transactions. Unlike conventional Ether-
net, DCB provides much more robust congestion management and high-
availability features characteristic of data center Fibre Channel.
DCB replicates Fibre Channel's buffer-to-buffer credit flow control func-
tionality via priority-based flow control (PFC) using 802.1Qbb pause
frames. Instead of buffer credits, pause quanta are used to restrict traf-
fic for a given period to relieve network congestion and avoid dropped
frames. To accommodate the larger payload of Fibre Channel frames,
DCB-enabled switches must also support jumbo frames so that entire
Fibre Channel frames can be encapsulated in each Ethernet transmis-
sion. Other standards initiatives such as TRILL (Transparent Interconnection of Lots of Links) are being developed to enable multipathing through DCB-switched infrastructures.
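The pause mechanics are straightforward to quantify: the timer field in a pause frame counts quanta of 512 bit times, so the real-time pause depends on link speed. A small sketch of the arithmetic:

```python
# Convert a pause-frame quanta count (units of 512 bit times, per
# IEEE 802.3x/802.1Qbb) into real time at a given link speed.

def pause_duration_us(quanta, link_gbps):
    """Return the pause duration in microseconds."""
    bit_time_ns = 1.0 / link_gbps      # one bit time in nanoseconds
    return quanta * 512 * bit_time_ns / 1000.0

# The maximum 16-bit quanta value at 10 Gbps:
print(f"{pause_duration_us(0xFFFF, 10.0):.0f} us")  # ~3355 us (about 3.4 ms)
# A short breather to let congested buffers drain:
print(f"{pause_duration_us(1000, 10.0):.1f} us")    # 51.2 us
```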
Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre
Channel ports and provides protocol conversion to the data center
SAN.
In this example, the client, peer-to-peer, and block storage traffic share a
common 10 Gbps network interface. The FCoE switch acts as a Fibre
Channel Forwarder (FCF) and converts FCoE frames into conventional
Fibre Channel frames for redirection to the fabric. Peer-to-peer or clus-
tering traffic between servers in the same rack is simply switched at
Layer 2 or 3, and client traffic is redirected via the LAN.
Like many new technologies, FCoE is often overhyped as a cure-all for
pervasive IT ills. The benefit of streamlining server connectivity, however,
should be balanced against the cost of deployment and the availability
of value-added features that simplify management and administration.
As an original contributor to the FCoE specification, Brocade has
designed FCoE products that integrate with existing infrastructures so
that the advantages of FCoE can be realized without adversely impact-
ing other operations. Brocade offers the 1010 (single port) and 1020
(dual port) CNAs, shown in Figure 18, at 10 Gbps DCB per port. From
the host standpoint, the FCoE functionality appears as a conventional
Fibre Channel HBA.
Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000
Switch facilitate a compact, high-performance FCoE deployment.
[Figures 19 and 20: Conventional storage configurations result in over- and under-utilization across physical arrays A, B, and C on the SAN; storage virtualization aggregates their capacity into a single virtual pool.]
[Figure 21: The virtualization engine and its metadata map present a virtual target to real initiators and a virtual initiator to real targets, with the control path managing the storage pool.]
Figure 23. FAIS splits the control and data paths for more efficient
execution of metadata mapping between virtual storage and servers.
[Figure 24: Servers connect through a Brocade DCX/DCX-4S or Brocade 48000 with the Brocade FA4-18 Application Blade and its virtualization control processor to the storage pools.]
As shown in Figure 24, compatibility with both the Brocade 48000 and
Brocade DCX chassis enables the Brocade FA4-18 Application Blade to
extend the benefits of Brocade energy-efficient design and high band-
width to advanced fabric services without requiring a separate
enclosure. Interoperability with existing SAN infrastructures amplifies
this advantage, since any server connected to the SAN can be directed
to the FA4-18 blade for virtualization services. Line-speed metadata
mapping is achieved through purpose-built components instead of
relying on general-purpose processors that other vendors use.
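Conceptually, the metadata map resolves each virtual LUN block address to a physical array and block address. The sketch below shows the lookup in software purely for illustration (the array names and extent sizes are invented); the point of the FA4-18 is that this translation happens in purpose-built hardware at line speed:

```python
# A toy metadata map: virtual LUN extents resolved to (array, LBA) pairs.
import bisect

class VirtualLun:
    """Map a contiguous virtual block space onto extents on physical arrays."""

    def __init__(self, extents):
        # extents: (virtual_start_lba, array_name, physical_start_lba, length)
        self.extents = sorted(extents)
        self.starts = [e[0] for e in self.extents]

    def resolve(self, virtual_lba):
        """Translate a virtual LBA into (array, physical LBA)."""
        i = bisect.bisect_right(self.starts, virtual_lba) - 1
        vstart, array, pstart, length = self.extents[i]
        assert vstart <= virtual_lba < vstart + length, "unmapped LBA"
        return array, pstart + (virtual_lba - vstart)

vlun = VirtualLun([
    (0, "array-A", 4096, 100_000),     # first extent lives on array A
    (100_000, "array-B", 0, 100_000),  # next extent lives on array B
])
print(vlun.resolve(5))        # ('array-A', 4101)
print(vlun.resolve(150_000))  # ('array-B', 50000)
```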
Storage-centric SAN designs more readily accommodate the unique and more demanding requirements of storage traffic and ensure stable and highly available connectivity between servers and storage systems.
A storage-centric fabric design is facilitated by concentrating key cor-
porate storage elements at the core, while accommodating server
access and departmental storage at the edge. As shown in Figure 25,
the SAN core can be built with high-port-density backbone platforms.
With up to 384 x 8 Gbps ports in a single chassis or up to 768 ports in
a dual-chassis configuration, the core layer can support hundreds of
storage ports and, depending on the appropriate fan-in ratio, thou-
sands of servers in a single high-performance solution. The Brocade DCX Backbone, a 14U chassis with eight vertical blade slots, is also available as the 192-port, 8U Brocade DCX-4S with four horizontal blade slots, which accepts any Brocade DCX blade. Because two or even three backbone chassis can be deployed in a single 19" rack or adjacent racks, real estate is kept to a minimum. Power consumption of less than half a watt per Gbps provides over 10x the energy efficiency of comparable enterprise-class products. Doing more with less is thus realized through compact product design and engineering power efficiency down to the port.
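The sizing arithmetic behind such a core is simple. The split of ports between storage and edge connectivity below is one hypothetical design point, and the 12:1 fan-in ratio is an assumed value rather than a recommendation from this chapter:

```python
# Back-of-envelope core sizing for a dual-chassis configuration.
core_ports = 768     # dual-chassis Brocade DCX port count cited above
storage_ports = 192  # core ports reserved for storage arrays (assumed)
fan_in = 12          # server ports per storage port (assumed)

isl_ports = core_ports - storage_ports
servers = storage_ports * fan_in
print(f"{isl_ports} core ports remain for ISLs to edge switches")
print(f"{storage_ports} storage ports at {fan_in}:1 fan-in serve ~{servers} servers")
```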
[Figure 25: A storage-centric core/edge topology: servers attach through edge switches, high-performance servers and primary corporate storage attach directly to the Brocade DCX core, and departmental storage sits at the edge.]
Intelligent by Design
The new data center fabric is characterized by high port density, com-
pact footprint, low energy costs, and streamlined management, but
the most significant differentiating features compared to conventional
SANs revolve around increased intelligence for storage data transport.
New functionality that streamlines data delivery, automates data
flows, and adapts to changed network conditions both ensures stable
operation and reduces the need for manual intervention and adminis-
trative oversight. Brocade has developed a number of intelligent fabric
capabilities under the umbrella term of Adaptive Networking services
to streamline fabric operations.
Large complex SANs, for example, typically support a wide variety of
business applications, ranging from high-performance and mission-
critical to moderate-performance requirements. In addition, storage-
specific applications such as tape backup may share the same infra-
structure as production applications. If all storage traffic types were treated with the same priority, the potential would exist for congestion, with high-value applications negatively impacted by lower-priority traffic.
[Figure 26: Brocade QoS gives preferential treatment to high-value applications through the fabric: high-, medium-, and low-priority flows from servers cross the edge switches and the Brocade DCX core to disk and tape.]
Ingress rate limiting allows the fabric switch to throttle the transmission rate of a server to a speed lower than the originally negotiated link speed.
Figure 27. Ingress rate limiting enables the fabric to alleviate potential
congestion by throttling the transmission rate of the offending initiator.
In the example shown in Figure 27, the Brocade DCX monitors poten-
tial congestion on the link to a storage array and proactively reduces
the rate of transmission at the server source. If, for example, the
server HBA had originally negotiated an 8 Gbps transmission rate
when it initially logged in to the fabric, ingress rate limiting could
reduce the transmission rate to 4 Gbps or lower, depending on the vol-
ume of traffic to be reduced to alleviate congestion at the storage port.
Thus, without operator intervention, potentially disruptive congestion
events can be resolved proactively, while ensuring continuous opera-
tion of all applications.
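Ingress rate limiting behaves much like a token bucket: credit accumulates at the throttled rate, and frames are admitted only when enough credit is available. A minimal sketch with illustrative parameters:

```python
# Token-bucket sketch of ingress rate limiting: an 8 Gbps initiator
# is drained at an administratively limited 4 Gbps.

class IngressRateLimiter:
    def __init__(self, rate_gbps, burst_bits):
        self.rate = rate_gbps * 1e9  # sustained bits per second
        self.capacity = burst_bits   # bucket depth (burst tolerance)
        self.tokens = burst_bits
        self.last = 0.0

    def admit(self, now, frame_bits):
        """Refill tokens for elapsed time; admit the frame if tokens suffice."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_bits <= self.tokens:
            self.tokens -= frame_bits
            return True
        return False  # frame held: the initiator is throttled

FRAME_BITS = 2148 * 8  # a full-size Fibre Channel frame
limiter = IngressRateLimiter(rate_gbps=4.0, burst_bits=2 * FRAME_BITS)
t, sent = 0.0, 0
for _ in range(10_000):       # frames offered back to back at 8 Gbps
    if limiter.admit(t, FRAME_BITS):
        sent += 1
    t += FRAME_BITS / 8e9     # arrival spacing at the 8 Gbps line rate
print(f"admitted {sent} of 10000 frames (~{sent / 100:.0f}% of offered load)")
```

Roughly half of the offered frames are admitted, which is exactly the effect of halving the negotiated rate.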
Brocade's Adaptive Networking services also enable storage adminis-
trators to establish preferred paths for specific applications through
the fabric and the ability to fail over from a preferred path to an alter-
nate path if the preferred path is unavailable. This capability is
especially useful for isolating certain applications such as tape backup
or disk-to-disk replication to ensure that they always enter or exit on
the same inter-switch link to optimize the data flow and avoid over-
whelming other application streams.
[Figure 29: By monitoring traffic activity on each port, Top Talkers can identify which applications (for example, backup, ERP, or Oracle flows) would most benefit from Adaptive Networking services.]
The centrality of the fabric in providing both host and storage connec-
tivity provides new opportunities for safeguarding storage data. As with
other intelligent fabric services, fabric-based security mechanisms can
help ensure consistent implementation of security policies and the
flexibility to apply higher levels of security where they are most
needed.
[Figure 31: The Brocade Encryption Switch provides secure encryption for disk or tape, with servers connecting through the fabric and centralized key management.]
Both the 16-port encryption blade for Brocade DCX and the 32-port
encryption switch provide 8 Gbps per port for fabric or device connec-
tivity and an aggregate 96 Gbps of hardware-based encryption
throughput and 48 Gbps of data compression bandwidth. The combi-
nation of encryption and data compression enables greater efficiency
in both storing and securing data. For encryption to disk, the IEEE
AES256-XTS encryption algorithm facilitates encryption of disk blocks
without increasing the amount of data per block. For encryption to
tape, the AES256-GCM encryption algorithm appends authenticating
metadata to each encrypted data block. Because tape devices accom-
modate variable block sizes, encryption does not impede backup
operations. From the host standpoint, both encryption processes are
transparent and due to the high performance of the Brocade encryp-
tion engine there is no impact on response time.
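The division of labor between the two algorithms follows from their output sizes: XTS is length-preserving, while GCM appends an authentication tag. A minimal sketch using the Python cryptography package, with throwaway keys and nonces purely to show the size difference:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

block = os.urandom(512)  # one 512-byte disk sector of data

# AES-256-XTS: length-preserving, so the sector stays 512 bytes.
xts_key = os.urandom(64)            # two 256-bit keys concatenated
tweak = (0).to_bytes(16, "little")  # e.g., the sector number as the tweak
enc = Cipher(algorithms.AES(xts_key), modes.XTS(tweak)).encryptor()
xts_ct = enc.update(block) + enc.finalize()

# AES-256-GCM: authenticated, so a 16-byte tag is appended.
gcm = AESGCM(AESGCM.generate_key(bit_length=256))
gcm_ct = gcm.encrypt(os.urandom(12), block, None)

print(f"plaintext {len(block)} B, XTS {len(xts_ct)} B, GCM {len(gcm_ct)} B")
# plaintext 512 B, XTS 512 B, GCM 528 B
```

Length preservation is what lets XTS encrypt disk blocks in place, while tape's variable block sizes absorb GCM's appended metadata.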
Figure 32. Using fabric ACLs to secure switch and device connectivity.
[Figure 33: Integrating formerly standalone mid-tier servers: rack-mount 1U servers and GbE switches connect through a Brocade director with an iSCSI blade to FC servers, FC storage arrays, and FC tape.]
[Figure 34: Virtual Fabrics isolate applications to minimize fabric-wide disruptions (Virtual Fabrics 2 and 3 within one physical fabric).]
[Figure 35: Integrated Routing (IR) on a SAN router shares resources among physically independent SANs A, B, and C.]
[Figure 36: Long-distance connectivity options: Brocade DCX to Brocade DCX over DWDM, Brocade FX8-24 blades in DCX chassis over IP, and Brocade 7800 extension switches over IP.]
As shown in Figure 36, Brocade DCX and SAN extension products offer
a variety of ways to implement long-distance SAN connectivity for
disaster recovery and other remote implementations. For synchronous
disk-to-disk data replication within a metropolitan radius, native Fibre Channel at 8 Gbps or 10 Gbps can be driven directly from Brocade DCX ports over dark fiber or DWDM. For asynchronous replication over hundreds or thousands of miles, the Brocade 7800 and FX8-24 extension platforms convert native Fibre Channel to FCIP for
transport over conventional IP network infrastructures. These solu-
tions provide flexible options for storage architects to deploy the most
appropriate form of data protection based on specific application
needs. Many large data centers use a combination of extension tech-
nologies to provide both synchronous replication within metro
boundaries to capture every transaction and asynchronous FCIP-
based extension to more distant recovery sites as a safeguard against
regional disruptions.
Connectivity is important both for rationalizing the cable plant and for providing the flexibility to accommodate VM mobility as applications are migrated from one platform to another. Where previously server network access was adequately served by 1 Gbps ports, top-of-rack
access layer switches now must provide compact connectivity at 10
Gbps. This, in turn, requires more high-speed ports at the aggregation
and core layers to accommodate higher traffic volumes.
Other trends such as software as a service (SaaS) and Web-based
business applications are shifting the burden of data processing from
remote or branch clients back to the data center. To maintain accept-
able response times and ensure equitable service to multiple
concurrent clients, preprocessing of data flows helps offload server
CPU cycles and provides higher availability. Application layer (Layer 4–
7) networking is therefore gaining traction as a means to balance
workloads and offload networking protocol processing. By accelerating
application access, more transactions can be handled in less time and
with less congestion at the server front-end. Web-based applications
in particular benefit from a network-based hardware assist to ensure
reliability and availability to internal and external users.
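The balancing decision at the heart of this offload is simple to illustrate. Below is a minimal least-connections selection behind a virtual IP; the server names and connection counts are hypothetical:

```python
# Least-connections selection: each new connection arriving at the
# virtual IP goes to the pool member with the fewest active connections.

def least_connections(pool):
    """Return the pool member currently holding the fewest connections."""
    return min(pool, key=pool.get)

pool = {"web-1": 42, "web-2": 17, "web-3": 31}  # active connections

for _ in range(3):  # three new client connections arrive
    target = least_connections(pool)
    pool[target] += 1
    print(f"new connection -> {target} (pool now {pool})")
```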
Even with server consolidation, blade frames, and virtualization, serv-
ers collectively still account for the majority of data center power and
cooling requirements. Network infrastructure, however, still incurs a
significant power and cooling overhead and data center managers are
now evaluating power consumption as one of the key criteria in net-
work equipment selection. In addition, data center floor space is at a
premium and more compact, higher-port-density network switches can
save valuable real estate.
Another cost-cutting trend for large enterprises is the consolidation of
multiple data centers to one or just a few larger regional data centers.
Such large-scale consolidation typically involves construction of new
facilities that can leverage state-of-the-art energy efficiencies such as
solar power, air economizers, fly-wheel technology, and hot/cold aisle
floor plans (see Figure 3 on page 11). The selection of new IT equip-
ment is also an essential factor in maximizing the benefit of
consolidation, maintaining availability, and reducing ongoing opera-
tional expense. Since the new data center network infrastructure must
now support client traffic that was previously distributed over multiple
data centers, deploying a high-performance LAN with advanced appli-
cation support is crucial for a successful consolidation strategy. In
addition, the reduction of available data centers increases the need
for security throughout the network infrastructure to ensure data integ-
rity and application availability.
A Layered Architecture
With tens of thousands of installations worldwide, data center net-
works have evolved into a common infrastructure built on multiple
layers of connectivity. The three fundamental layers common to nearly
all data center networks are the access, aggregation, and core layers.
This basic architecture has proven to be the most suitable for providing
flexibility, high performance, and resiliency and can be scaled from
moderate to very large infrastructures.
Figure 37. Access, aggregation, and core layers in the data center
network.
Design Considerations
Although each network tier has unique functional requirements, the
entire data center LAN must provide high availability, high perfor-
mance, security for data flows, and visibility for management. Proper
product selection and interoperability between tiers is therefore essen-
tial for building a resilient data center network infrastructure that
enables maximum utilization of resources while minimizing opera-
tional expense. A properly designed network infrastructure, in turn, is a
foundation layer for building higher-level network services to automate
data transport processes such as network resource allocation and pro-
active network management.
Consolidate to Accommodate Growth
One of the advantages of a tiered data center LAN infrastructure is
that it can be expanded to accommodate growth of servers and clients
by adding more switches at the appropriate layers. Unfortunately, this
frequently results in the spontaneous acquisition of more and more
equipment over time as network managers react to increasing
demand. At some point the sheer number of network devices makes
the network difficult to manage and troubleshoot, increases the com-
plexity of the cable plant, and invariably introduces congestion points
that degrade network performance.
[Figure: Approximate shares of data center power consumption across IT equipment, storage (disk and tape), network (SAN/LAN/WAN), tape library, and other loads.]
[Figure: Layer 4-7 services deployed between clients and VMs: SSL/encryption, DNS, mail, RADIUS, firewalls, IPS/IDS, caches, and switches.]
[Figure: An orchestration framework coordinating server, storage, and network virtualization through APIs.]
[Figure: The Brocade Management Pack for Microsoft System Center VMM links Microsoft System Center Operations Manager and Brocade DCFM with the QoS engines in Brocade HBAs across the SAN.]
The SAN Call Home events displayed in the Microsoft System Center Operations Center interface are shown in Figure 50 on page 94.
Server Adapters
In mid-2008, Brocade released a family of 8 and 4 Gbps Fibre Channel HBAs. Highlights of Brocade FC HBAs include:
• Maximizes bus throughput with a Fibre Channel-to-PCIe 2.0a
Gen2 (x8) bus interface with intelligent lane negotiation
• Prioritizes traffic and minimizes network congestion with target
rate limiting, frame-based prioritization, and 32 Virtual Channels
per port with guaranteed QoS
Leveraging IEEE standards for Data Center Bridging (DCB), the Bro-
cade 1000 Series CNAs provide a highly efficient way to transport
Fibre Channel storage traffic over Ethernet links, addressing the highly sensitive nature of storage traffic.
Figure 47. Brocade 1020 (dual port) 10 Gbps Fibre Channel over
Ethernet-to-PCIe CNA.
Access Gateway
Brocade Access Gateway simplifies server and storage connectivity by
enabling direct connection of servers to any SAN fabric, enhancing scalability by eliminating the switch domain identity and simplifying
local switch device management. Brocade blade server SAN switches
and the Brocade 300 and Brocade 5100 rack-mount switches are key
components of enterprise data centers, bringing a wide variety of scal-
ability, manageability, and cost advantages to SAN environments.
These switches can be used in Access Gateway mode, available in the
standard Brocade Fabric OS, for enhanced server connectivity to
SANs.
Access Gateway provides:
• Seamless connectivity with any SAN fabric
• Improved scalability
• Simplified management
• Automatic failover and failback for high availability
• Lower total cost of ownership
Figure 50. SAN Call Home events displayed in the Microsoft System
Center Operations Center interface.
Organizations can easily enable Access Gateway mode (see page 151)
via the FOS CLI, Brocade Web Tools, or Brocade Fabric Manager. Key
benefits of Access Gateway mode include:
• Improved scalability for large or rapidly growing server and virtual
server environments
• Simplified management through the reduction of domains and
management tasks
• Fabric interoperability for mixed vendor SAN configurations that
require full functionality
Brocade VA-40FC Switch
The Brocade VA-40FC is a high-performance Fibre Channel edge switch optimized for server connectivity in large-scale enterprise SANs.
As organizations consolidate data centers, expand application ser-
vices, and begin to implement cloud initiatives, large-scale server
architectures are becoming a standard part of the data center. Mini-
mizing the network deployment steps and simplifying management
can help organizations grow seamlessly while reducing operating
costs.
The Brocade VA-40FC helps meet this challenge, providing the first
Fibre Channel edge switch optimized for server connectivity in large
core-to-edge SANs. By leveraging Brocade Access Gateway technology,
the Brocade VA-40FC enables zero-configuration deployment and
reduces management of the network edge—increasing scalability and
simplifying management for large-scale server architectures.
The Brocade FX8-24 Extension Blade, designed specifically for the Bro-
cade DCX Backbone, helps provide the network infrastructure for
remote data replication, backup, and migration. Leveraging next-gen-
eration 8 Gbps Fibre Channel, 10 GbE and advanced FCIP technology,
the Brocade FX8-24 provides a flexible and extensible platform to
move more data faster and further than ever before.
Figure 61. Brocade DCFM main window showing the topology view.
Access
The access layer provides the direct network connection to application
and file servers. Servers are typically provisioned with two or more GbE
or 10 GbE network ports for redundant connectivity. Server platforms
vary from standalone servers to 1U rack-mount servers and blade
servers with passthrough cabling or bladed Ethernet switches.
Brocade TurboIron 24X Switch
The Brocade TurboIron 24X switch is a compact, high-performance,
high-availability, and high-density 10/1 GbE dual-speed solution that
meets mission-critical data center ToR and High-Performance Cluster
Computing (HPCC) requirements. An ultra-low-latency, cut-through,
non-blocking architecture and low power consumption help provide a
cost-effective solution for server or compute-node connectivity.
Additional highlights include:
• Highly efficient power and cooling with front-to-back airflow, auto-
matic fan speed adjustment, and use of SFP+ and direct attached
SFP+ copper (Twinax)
• High availability with redundant, load-sharing, hot-swappable,
auto-sensing/switching power supplies and triple-fan assembly
• End-to-end QoS with hardware-based marking, queuing, and con-
gestion management
• Embedded per-port sFlow capabilities to support scalable hard-
ware-based traffic monitoring
• Wire-speed performance with an ultra-low-latency, cut-through,
non-blocking architecture ideal for HPC, iSCSI storage, and real-time application environments
Figure 66. Brocade NetIron CES 2000 switches, 24- and 48-port con-
figurations in both Hybrid Fiber (HF) and RJ45 versions.
Brocade Mobility
While once considered a luxury, Wi-Fi connectivity is now an integral
part of the modern enterprise. To that end, most IT organizations are
deploying Wireless LANs (WLANs). With the introduction of the IEEE
802.11n standard, these organizations can save significant capital
and feel confident in expanding their wireless deployments to busi-
ness-critical applications. In fact, wireless technologies often match the performance of wired networks, all with simplified deployment, robust security, and significantly lower cost. Brocade offers all the
pieces to deploy a wireless enterprise. In addition to indoor networking
equipment, Brocade also provides the tools to wirelessly connect mul-
tiple buildings across a corporate campus.
Brocade offers two models of controllers: the Brocade RFS6000 and
RFS7000 Controllers. Brocade Mobility controllers enable wireless
enterprises by providing an integrated communications platform that
delivers secure and reliable voice, video, and data applications in
Wireless LAN (WLAN) environments. Based on an innovative architec-
ture, Brocade mobility controllers provide:
• Wired and wireless networking services
• Multiple locationing technologies such as Wi-Fi and RFID
• Resiliency via 3G/4G wireless broadband backhaul
• High performance with 802.11n networks
The Brocade Mobility RFS7000 features a multicore, multithreaded
architecture designed for large-scale, high-bandwidth enterprise
deployments. It easily handles from 8000 to 96,000 mobile devices
and 256 to 3000 802.11 dual-radio a/b/g/n access points or 1024
adaptive access points (Brocade Mobility 5181 a/b/g or Brocade
Mobility 7131 a/b/g/n) per controller. The Brocade Mobility RFS7000
provides the investment protection enterprises require: innovative
clustering technology provides a 12X capacity increase, and smart
licensing enables efficient, scalable network expansion.
Figure 69. The pillars of Brocade VCS (detailed in the next section).
Ethernet Fabric
In the new data center LAN, Spanning Tree Protocol is no longer neces-
sary, because the Ethernet fabric appears as a single logical switch to
connected servers, devices, and the rest of the network. Also, Multi-
Chassis Trunking (MCT) capabilities in aggregation switches enable a
logical one-to-one relationship between the access (VCS) and aggrega-
tion layers of the network. The Ethernet fabric is an advanced multi-
path network utilizing TRILL, in which all paths in the network are
active and traffic is automatically distributed across the equal-cost
paths. In this optimized environment, traffic automatically takes the
shortest path for minimum latency without manual configuration.
And, unlike switch stacking technologies, the Ethernet fabric is master-
less. This means that no single switch stores configuration information
or controls fabric operations. Events such as added, removed, or failed
links are not disruptive to the Ethernet fabric and do not require all
traffic in the fabric to stop. If a single link fails, traffic is automatically
rerouted to other available paths in less than a second. Moreover, sin-
gle component failures do not require the entire fabric topology to
reconverge, helping to ensure that no traffic is negatively impacted by
an isolated issue.
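A common way to spread traffic across equal-cost paths without a master is to hash each flow's addressing so that all frames of one flow follow the same path while different flows spread across all links. The sketch below illustrates the idea only; it is not the actual TRILL or Brocade hash:

```python
# Map each flow to one of several equal-cost paths by hashing its
# source/destination addressing. Same flow -> same path (no reordering);
# different flows -> spread across links.
import hashlib

def pick_path(src_mac, dst_mac, num_paths):
    """Deterministically assign a flow to one of the equal-cost paths."""
    digest = hashlib.sha256(f"{src_mac}->{dst_mac}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

flows = [("00:05:1e:aa:00:01", "00:05:1e:bb:00:09"),
         ("00:05:1e:aa:00:02", "00:05:1e:bb:00:09"),
         ("00:05:1e:aa:00:03", "00:05:1e:bb:00:09")]
for src, dst in flows:
    print(f"{src} -> {dst}: path {pick_path(src, dst, num_paths=4)}")
```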
Distributed Intelligence
Brocade VCS also enhances server virtualization with technologies
that increase VM visibility in the network and enable seamless migra-
tion of policies along with the VM. VCS achieves this through a
distributed services architecture that makes the fabric aware of all connected devices and shares the information across those devices.
Automatic Migration of Port Profiles (AMPP), a VCS feature, enables a
VM's network profiles—such as security or QoS levels—to follow the VM
during migrations without manual intervention. This unprecedented
level of VM visibility and automated profile management helps intelli-
gently remove the physical barriers to VM mobility that exist in current technologies and network architectures.
Distributed intelligence allows the Ethernet fabric to be “self-forming.”
When two VCS-enabled switches are connected, the fabric is automati-
cally created, and the switches discover the common fabric
configuration. Scaling bandwidth in the fabric is as simple as connect-
ing another link between switches or adding a new switch as required.
The Ethernet fabric does not dictate a specific topology, so it does not
restrict oversubscription ratios. As a result, network architects can cre-
ate a topology that best meets specific application requirements.
Unlike other technologies, VCS enables different end-to-end subscrip-
tion ratios to be created or fine-tuned as application demands change
over time.
Logical Chassis
All switches in an Ethernet fabric are managed as if they were a single
logical chassis. To the rest of the network, the fabric looks no different
than any other Layer 2 switch. The network sees the fabric as a single
switch, whether the fabric contains as few as 48 ports or thousands of
ports. Each physical switch in the fabric is managed as if it were a port
module in a chassis. This enables fabric scalability without manual
configuration. When a port module is added to a chassis, the module
does not need to be configured, and a switch can be added to the
Ethernet fabric just as easily. When a VCS-enabled switch is connected
to the fabric, it inherits the configuration of the fabric and the new
ports become available immediately.
The logical chassis capability significantly reduces management of
small-form-factor edge switches. Instead of managing each top-of-rack
switch (or switches in blade server chassis) individually, organizations
can manage them as one logical chassis, which further optimizes the
network in the virtualized data center and will further enable a cloud
computing model.
Dynamic Services
Brocade VCS also offers dynamic services so that you can add new
network and fabric services to Brocade converged fabrics, including
capabilities such as fabric extension over distance, application deliv-
ery, native Fibre Channel connectivity, and enhanced security services
such as firewalls and data encryption. Through VCS, the new switches
and software with these services behave as service modules within a
logical chassis. Furthermore, the new services are then made avail-
able to the entire converged fabric, dynamically evolving the fabric with
new functionality. Switches with these unique capabilities can join the
Ethernet fabric, adding a network service layer across the entire fabric.
[Figure 70: A Brocade VCS reference network architecture: rack-mount and blade servers hosting VMs connect through a VCS fabric with security services (firewall, encryption) to FC/FCoE/iSCSI/NAS VM storage and a dedicated Fibre Channel SAN for Tier 1 applications.]
October 2008
Authored by Tom Clark, Brocade, Green Storage Initiative (GSI) Chair
and Dr. Alan Yoder, NetApp, GSI Governing Board
Reprinted with permission of the SNIA
Introduction
The energy required to support data center IT operations is becoming
a central concern worldwide. For some data centers, additional energy
supply is simply not available, either due to finite power generation
capacity in certain regions or the inability of the power distribution grid
to accommodate more lines. Even if energy is available, it comes at an
ever increasing cost. With current pricing, the cost of powering IT
equipment is often higher than the original cost of the equipment
itself. The increasing scarcity and higher cost of energy, however, is
being accompanied by a sustained growth of applications and data.
Simply throwing more hardware assets at the problem is no longer via-
ble. More hardware means more energy consumption, more heat
generation and increasing load on the data center cooling system.
Companies are therefore now seeking ways to accommodate data
growth while reducing their overall power profile. This is a difficult
challenge.
Data center energy efficiency solutions span the spectrum from more
efficient rack placement and alternative cooling methods to server
and storage virtualization technologies. The SNIA's Green Storage Ini-
tiative was formed to identify and promote energy efficiency solutions
specifically relating to data storage. This document is the first iteration
of the SNIA GSI's recommendations for maximizing the utilization of storage assets.
1. “Gartner Says 50 Percent of Data Centers Will Have Insufficient Power and Cooling
Capacity by 2008,” Gartner Inc. Press Release, November 29, 2006
2. “Emerson Network Power Presents Industry Survey Results That Project 96 Percent of Today's Data Centers Will Run Out of Capacity by 2011,” Emerson Press Release, November 16, 2006
Shades of Green
The quandary for data center managers is in identifying which new
technologies will actually have a sustainable impact for increasing
energy efficiency and which are only transient patches whose initial
energy benefit quickly dissipates as data center requirements change.
Unfortunately, the standard market dynamic that eventually separates
weak products from viable ones has not had sufficient time to elimi-
nate the green pretenders. Consequently, analysts often complain
about the 'greenwashing' of vendor marketing campaigns and the
opportunistic attempt to portray marginally useful solutions as the
cure to all the IT manager's energy ills.
Within the broader green environmental movement, greenwashing is also known as being “lite green” or sometimes “light green”. There
are, however, other shades of green. Dark green refers to environmen-
tal solutions that rely on across-the-board reductions in energy and
material consumption. For a data center, a dark green tactic would be
to simply reduce the number of applications and associated hardware
and halt the expansion of data growth. Simply cutting back, however, is
not feasible for today's business operations. To remain competitive,
businesses must be able to accommodate growth and expansion of
operations.
Consequently, viable energy efficiency for ongoing data center opera-
tions must be based on solutions that are able to leverage state-of-the-
art technologies to do much more with much less. This aligns to yet
another shade of environmental green known as “bright green”. Bright
green solutions reject both the superficial lite green and the Luddite
dark green approaches to the environment and rely instead on technical innovation.
[Figure: Green storage technologies use less raw capacity to store and use the same data set, and power consumption falls accordingly: the same data with its snapshots, backups, archive copies, and growth reserve shrinks from roughly 10 TB on RAID 10 toward 1 TB per LUN with RAID-DP and space-efficient copies across SAN LUNs 1 through 6.]
The more precisely that air delivery can be controlled and measured, the higher the temperature one can run in the “cold” aisles.
Benefits of higher temperatures include raised chiller water tempera-
tures and efficiency, reduced fan speed, noise and power draw, and
increased ability to use outside air for cooling through an economizer.
Best Practice #20: Work with Your Regional Utilities
Some electrical utility companies and state agencies are partnering
with customers by providing financial incentives for deploying more
energy efficient technologies. If you are planning a new data center or
consolidating an existing one, incentive programs can provide guid-
ance for the types of technologies and architectures that will give the
best results.
For more information about the SNIA Green Storage Initiative, link to:
http://www.snia.org/forums/green/
To view the SNIA GSI Green Tutorials, link to:
http://www.snia.org/education/tutorials#green
Asynchronous Data Replication: For storage, writing the same data to two separate disk arrays based on a buffered scheme that may not capture every data write; typically used for long-distance disaster recovery.
BTU: British Thermal Unit, a metric for heat dissipation.
Blade server: A server architecture that minimizes the number of components required per blade, while relying on the shared elements (power supply, fans, memory, I/O) of a common frame.
Blanking plates: Metal plates used to cover unused portions of equipment racks to enhance air flow.
Bright green: Applying new technologies to enhance energy efficiency while maintaining or improving productivity.
CEE: Converged Enhanced Ethernet, modifications to conventional 10 Gbps Ethernet to provide the deterministic data delivery associated with Fibre Channel; also known as Data Center Bridging (DCB).
CFC: Chlorofluorocarbon, a refrigerant that has been shown to deplete ozone.
Control path: In networking, handles configuration and traffic exceptions and is implemented in software. Since it takes more time to handle control path messages, it is often logically separated from the data path to improve performance.
CNA: Converged network adapter, a DCB-enabled adapter that supports both FCoE and conventional TCP/IP traffic.
CRAC: Computer room air conditioning.
Core layer: Typically high-performance network switches that provide centralized connectivity for the data center aggregation and access layer switches.
Data compression: Bit-level reduction of redundant bit patterns in a data stream via encoding. Typically used for WAN transmissions and archival storage of data to tape.
Data deduplication: Block-level reduction of redundant data by replacing duplicate data blocks with pointers to a single good block.
Data path: In networking, handles data flowing between devices (servers, clients, storage, and so on). To keep up with increasing speeds, the data path is often implemented in hardware, typically in ASICs.
Dark green: Addressing energy consumption by the across-the-board reduction of energy-consuming activities.