
THE NEW DATA CENTER
FIRST EDITION
New technologies are radically reshaping the data center

TOM CLARK
Tom Clark, 1947–2010
All too infrequently we have the true privilege of knowing a friend
and colleague like Tom Clark. We mourn the passing of a special
person, a man who was inspired as well as inspiring, an intelligent
and articulate man, a sincere and gentle person with enjoyable
humor, and someone who was respected for his great achievements.
We will always remember the endearing and rewarding experiences
with Tom and he will be greatly missed by those who knew him.
Mark S. Detrick
© 2010 Brocade Communications Systems, Inc. All Rights Reserved.
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView,
NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered
trademarks, and Brocade Assurance, Brocade NET Health, Brocade One,
Extraordinary Networks, MyBrocade, and VCS are trademarks of Brocade
Communications Systems, Inc., in the United States and/or in other countries.
Other brands, products, or service names mentioned are or may be
trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set
forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered or to be offered by Brocade. Brocade
reserves the right to make changes to this document at any time, without
notice, and assumes no responsibility for its use. This informational document
describes features that may not be currently available. Contact a Brocade
sales office for information on feature and product availability. Export of
technical data contained in this document may require an export license from
the United States government.
Brocade Bookshelf Series designed by Josh Judd
The New Data Center
Written by Tom Clark
Reviewed by Brook Reams
Edited by Victoria Thomas
Design and Production by Victoria Thomas
Illustrated by Jim Heuser, David Lehmann, and Victoria Thomas

Printing History
First Edition, August 2010



Important Notice
Use of this book constitutes consent to the following conditions. This book is
supplied “AS IS” for informational purposes only, without warranty of any kind,
expressed or implied, concerning any equipment, equipment feature, or
service offered or to be offered by Brocade. Brocade reserves the right to
make changes to this book at any time, without notice, and assumes no
responsibility for its use. This informational document describes features that
may not be currently available. Contact a Brocade sales office for information
on feature and product availability. Export of technical data contained in this
book may require an export license from the United States government.
Brocade Corporate Headquarters
San Jose, CA USA
T: +01-408-333-8000
info@brocade.com
Brocade European Headquarters
Geneva, Switzerland
T: +41-22-799-56-40
emea-info@brocade.com
Brocade Asia Pacific Headquarters
Singapore
T: +65-6538-4700
apac-info@brocade.com

Acknowledgements
I would first of all like to thank Ron Totah, Senior Director of Marketing at
Brocade and cat-herder of the Global Solutions Architects, a.k.a. Solutioneers.
Ron's consistent support and encouragement for the Brocade Bookshelf
projects and Brocade TechBytes Webcast series provides sustained
momentum for getting technical information into the hands of our customers.
The real work of project management, copyediting, content generation,
assembly, publication, and promotion is done by Victoria Thomas, Technical
Marketing Manager at Brocade. Without Victoria's steadfast commitment,
none of this material would see the light of day.
I would also like to thank Brook Reams, Solution Architect for Applications
on the Integrated Marketing team, for reviewing my draft manuscript and
providing suggestions and invaluable insights on the technologies under
discussion.
Finally, a thank you to the entire Brocade team for making this a first-class
company that produces first-class products for first-class customers
worldwide.



About the Author
Tom Clark was a resident SAN evangelist for Brocade and represented
Brocade in industry associations, conducted seminars and tutorials at
conferences and trade shows, promoted Brocade storage networking
solutions, and acted as a customer liaison. A noted author and industry
advocate of storage networking technology, he was a board member of the
Storage Networking Industry Association (SNIA) and former Chair of the SNIA
Green Storage Initiative. Clark published hundreds of articles and white
papers on storage networking and was the author of Designing Storage Area
Networks, Second Edition (Addison-Wesley 2003), IP SANs: A Guide to iSCSI,
iFCP and FCIP Protocols for Storage Area Networks (Addison-Wesley 2001),
Storage Virtualization: Technologies for Simplifying Data Storage and
Management (Addison-Wesley 2005), and Strategies for Data Protection
(Brocade Bookshelf, 2008).
Prior to joining Brocade, Clark was Director of Solutions and Technologies
for McDATA Corporation and the Director of Technical Marketing for Nishan
Systems, the innovator of storage over IP technology. As a liaison between
marketing, engineering, and customers, he focused on customer education
and defining features that ensure productive deployment of SANs. With more
than 20 years of experience in the IT industry, Clark held technical marketing and
systems consulting positions with storage networking and other data
communications companies.
Sadly, Tom Clark passed away in February 2010. Anyone who knew Tom knows
that he was intelligent, quick, a voice of sanity and also sarcasm, and a
pragmatist with a great heart. He was indeed the heart of Brocade TechBytes,
a monthly Webcast he described as “a late night technical talk show,” which
was launched in November 2008 and is still part of Brocade’s Technical
Marketing program.



Contents

Preface ....................................................................................................... xv
Chapter 1: Supply and Demand ..............................................................1
Chapter 2: Running Hot and Cold ...........................................................9
Energy, Power, and Heat ...................................................................................... 9
Environmental Parameters ................................................................................10
Rationalizing IT Equipment Distribution ............................................................11
Economizers ........................................................................................................14
Monitoring the Data Center Environment .........................................................15
Chapter 3: Doing More with Less ......................................................... 17
VMs Reborn ......................................................................................................... 17
Blade Server Architecture ..................................................................................21
Brocade Server Virtualization Solutions ...........................................................22
Brocade High-Performance 8 Gbps HBAs .................................................23
Brocade 8 Gbps Switch and Director Ports ..............................................24
Brocade Virtual Machine SAN Boot ...........................................................24
Brocade N_Port ID Virtualization for Workload Optimization ..................25
Configuring Single Initiator/Target Zoning ................................................26
Brocade End-to-End Quality of Service ......................................................26
Brocade LAN and SAN Security .................................................................27
Brocade Access Gateway for Blade Frames ..............................................28
The Energy-Efficient Brocade DCX Backbone Platform for
Consolidation ..............................................................................................28
Enhanced and Secure Client Access with Brocade LAN Solutions .........29
Brocade Industry Standard SMI-S Monitoring ..........................................29
Brocade Professional Services ..................................................................30
FCoE and Server Virtualization ..........................................................................31
Chapter 4: Into the Pool ........................................................................ 35
Optimizing Storage Capacity Utilization in the Data Center .............................35
Building on a Storage Virtualization Foundation ..............................................39
Centralizing Storage Virtualization from the Fabric .......................................... 41
Brocade Fabric-based Storage Virtualization ...................................................43


Chapter 5: Weaving a New Data Center Fabric ................................. 45


Better Fewer but Better ......................................................................................46
Intelligent by Design ...........................................................................................48
Energy Efficient Fabrics ......................................................................................53
Safeguarding Storage Data ................................................................................55
Multi-protocol Data Center Fabrics ....................................................................58
Fabric-based Disaster Recovery ........................................................................64
Chapter 6: The New Data Center LAN ................................................. 69
A Layered Architecture ....................................................................................... 71
Consolidating Network Tiers .............................................................................. 74
Design Considerations .......................................................................................75
Consolidate to Accommodate Growth .......................................................75
Network Resiliency .....................................................................................76
Network Security .........................................................................................77
Power, Space and Cooling Efficiency .........................................................78
Network Virtualization ................................................................................79
Application Delivery Infrastructure ....................................................................80
Chapter 7: Orchestration ....................................................................... 83
Chapter 8: Brocade Solutions Optimized for Server Virtualization . 89
Server Adapters ..................................................................................................89
Brocade 825/815 FC HBA .........................................................................90
Brocade 425/415 FC HBA .........................................................................91
Brocade FCoE CNAs ....................................................................................91
Brocade 8000 Switch and FCOE10-24 Blade ..................................................92
Access Gateway ..................................................................................................93
Brocade Management Pack ..............................................................................94
Brocade ServerIron ADX .....................................................................................95
Chapter 9: Brocade SAN Solutions ...................................................... 97
Brocade DCX Backbones (Core) ........................................................................98
Brocade 8 Gbps SAN Switches (Edge) ........................................................... 100
Brocade 5300 Switch ...............................................................................101
Brocade 5100 Switch .............................................................................. 102
Brocade 300 Switch ................................................................................ 103
Brocade VA-40FC Switch ......................................................................... 104
Brocade Encryption Switch and FS8-18 Encryption Blade ........................... 105
Brocade 7800 Extension Switch and FX8-24 Extension Blade .................... 106
Brocade Optical Transceiver Modules .............................................................107
Brocade Data Center Fabric Manager ............................................................ 108
Chapter 10: Brocade LAN Network Solutions ..................................109
Core and Aggregation ...................................................................................... 110
Brocade NetIron MLX Series ................................................................... 110
Brocade BigIron RX Series ...................................................................... 111


Access .............................................................................................................. 112


Brocade TurboIron 24X Switch ................................................................ 112
Brocade FastIron CX Series ..................................................................... 113
Brocade NetIron CES 2000 Series ......................................................... 113
Brocade FastIron Edge X Series ............................................................. 114
Brocade IronView Network Manager .............................................................. 115
Brocade Mobility .............................................................................................. 116
Chapter 11: Brocade One ....................................................................117
Evolution not Revolution ..................................................................................117
Industry's First Converged Data Center Fabric .............................................. 119
Ethernet Fabric ........................................................................................ 120
Distributed Intelligence ........................................................................... 120
Logical Chassis ........................................................................................ 121
Dynamic Services .................................................................................... 121
The VCS Architecture ....................................................................................... 122
Appendix A: “Best Practices for Energy Efficient Storage
Operations” .............................................................................................123
Introduction ...................................................................................................... 123
Some Fundamental Considerations ............................................................... 124
Shades of Green .............................................................................................. 125
Best Practice #1: Manage Your Data ..................................................... 126
Best Practice #2: Select the Appropriate Storage RAID Level .............. 128
Best Practice #3: Leverage Storage Virtualization ................................ 129
Best Practice #4: Use Data Compression .............................................. 130
Best Practice #5: Incorporate Data Deduplication ................................131
Best Practice #6: File Deduplication .......................................................131
Best Practice #7: Thin Provisioning of Storage to Servers .................... 132
Best Practice #8: Leverage Resizeable Volumes .................................. 132
Best Practice #9: Writeable Snapshots ................................................. 132
Best Practice #10: Deploy Tiered Storage ............................................. 133
Best Practice #11: Solid State Storage .................................................. 133
Best Practice #12: MAID and Slow-Spin Disk Technology .................... 133
Best Practice #13: Tape Subsystems ..................................................... 134
Best Practice #14: Fabric Design ........................................................... 134
Best Practice #15: File System Virtualization ....................... 134
Best Practice #16: Server, Fabric and Storage Virtualization .............. 135
Best Practice #17: Flywheel UPS Technology ........................................ 135
Best Practice #18: Data Center Air Conditioning Improvements ......... 136
Best Practice #19: Increased Data Center Temperatures .................... 136
Best Practice #20: Work with Your Regional Utilities .............................137
What the SNIA is Doing About Data Center Energy Usage .............................137
About the SNIA ................................................................................................. 138
Appendix B: Online Sources .................................................................139
Glossary ..................................................................................................141
Index ........................................................................................................153



Figures

Figure 1. The ANSI/TIA-942 standard functional area connectivity. ................ 3


Figure 2. The support infrastructure adds substantial cost and energy over-
head to the data center. ...................................................................................... 4
Figure 3. Hot aisle/cold aisle equipment floor plan. .......................................11
Figure 4. Variable speed fans enable more efficient distribution of cooling. 12
Figure 5. The concept of work cell incorporates both equipment power draw
and requisite cooling. .........................................................................................13
Figure 6. An economizer uses the lower ambient temperature of outside air to
provide cooling. ...................................................................................................14
Figure 7. A native or Type 1 hypervisor. ...........................................................18
Figure 8. A hosted or Type 2 hypervisor. ..........................................................19
Figure 9. A blade server architecture centralizes shared resources while reduc-
ing individual blade server elements. ...............................................................21
Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an ag-
gregate 16 Gbps bandwidth and 1000 IOPS. ..................................................23
Figure 11. SAN boot centralizes management of boot images and facilitates
migration of virtual machines between hosts. .................................................25
Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to
the storage port across the fabric. ....................................................................26
Figure 13. Brocade SecureIron switches provide firewall traffic management
and LAN security for client access to virtual server clusters. ..........................27
Figure 14. The Brocade Encryption Switch provides high-performance data en-
cryption to safeguard data written to disk or tape. ..........................................27
Figure 15. Brocade BigIron RX platforms offer high-performance Layer 2/3
switching in three compact, energy-efficient form factors. .............................29
Figure 16. FCoE simplifies the server cable plant by reducing the number of
network interfaces required for client, peer-to-peer, and storage access. ....31
Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre Channel
ports and provides protocol conversion to the data center SAN. ...................32


Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facil-
itate a compact, high-performance FCoE deployment. ....................................33
Figure 19. Conventional storage configurations often result in over- and under-
utilization of storage capacity across multiple storage arrays. .......................36
Figure 20. Storage virtualization aggregates the total storage capacity of mul-
tiple physical arrays into a single virtual pool. ..................................................37
Figure 21. The virtualization abstraction layer provides virtual targets to real
hosts and virtual hosts to real targets. .............................................................38
Figure 22. Leveraging classes of storage to align data storage to the business
value of data over time. .....................................................................................40
Figure 23. FAIS splits the control and data paths for more efficient execution
of metadata mapping between virtual storage and servers. ..........................42
Figure 24. The Brocade FA4-18 Application Blade provides line-speed metada-
ta map execution for non-disruptive storage pooling, mirroring and data migra-
tion. ......................................................................................................................43
Figure 25. A storage-centric core/edge topology provides flexibility in deploying
servers and storage assets while accommodating growth over time. ............47
Figure 26. Brocade QoS gives preferential treatment to high-value applications
through the fabric to ensure reliable delivery. ..................................................49
Figure 27. Ingress rate limiting enables the fabric to alleviate potential conges-
tion by throttling the transmission rate of the offending initiator. ..................50
Figure 28. Preferred paths are established through traffic isolation zones,
which enforce separation of traffic through the fabric based on designated
applications. ........................................................................................................51
Figure 29. By monitoring traffic activity on each port, Top Talkers can identify
which applications would most benefit from Adaptive Networking services. 52
Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port
compared to the competition. ...........................................................................54
Figure 31. The Brocade Encryption Switch provides secure encryption for disk
or tape. ................................................................................................................56
Figure 32. Using fabric ACLs to secure switch and device connectivity. .......58
Figure 33. Integrating formerly standalone mid-tier servers into the data center
fabric with an iSCSI blade in the Brocade DCX. ...............................................61
Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-
wide disruptions. ................................................................................................62
Figure 35. IR facilitates resource sharing between physically independent
SANs. ...................................................................................................................64
Figure 36. Long-distance connectivity options using Brocade devices. ........67
Figure 37. Access, aggregation, and core layers in the data center
network. ...............................................................................................................71
Figure 38. Access layer switch placement is determined by availability, port
density, and cable strategy. ...............................................................................73


Figure 39. A Brocade BigIron RX Series switch consolidates connectivity in a


more energy efficient footprint. .........................................................................75
Figure 40. Network infrastructure typically contributes only 10% to 15% of total
data center IT equipment power usage. ...........................................................79
Figure 41. Application congestion (traffic shown as a dashed line) on a Web-
based enterprise application infrastructure. ....................................................80
Figure 42. Application workload balancing, protocol processing offload and se-
curity via the Brocade ServerIron ADX. .............................................................81
Figure 43. Open systems-based orchestration between virtualization
domains. ..............................................................................................................84
Figure 44. Brocade Management Pack for Microsoft Service Center Virtual
Machine Manager leverages APIs between the SAN and SCVMM to trigger VM
migration. ............................................................................................................86
Figure 45. Brocade 825 FC 8 Gbps HBA (dual ports shown). ........................90
Figure 46. Brocade 415 FC 4 Gbps HBA (single port shown). .......................91
Figure 47. Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-
PCIe CNA. ............................................................................................................92
Figure 48. Brocade 8000 Switch. ....................................................................92
Figure 49. Brocade FCOE10-24 Blade. ............................................................93
Figure 50. SAN Call Home events displayed in the Microsoft System Center
Operations Center interface. .............................................................................94
Figure 51. Brocade ServerIron ADX 1000. ......................................................95
Figure 52. Brocade DCX (left) and DCX-4S (right) Backbone. ........................98
Figure 53. Brocade 5300 Switch. ................................................................. 101
Figure 54. Brocade 5100 Switch. ................................................................. 102
Figure 55. Brocade 300 Switch. .................................................................... 103
Figure 56. Brocade VA-40FC Switch. ............................................................ 104
Figure 57. Brocade Encryption Switch. ......................................................... 105
Figure 58. Brocade FS8-18 Encryption Blade. ............................................. 105
Figure 59. Brocade 7800 Extension Switch. ................................................ 106
Figure 60. Brocade FX8-24 Extension Blade. ............................................... 107
Figure 61. Brocade DCFM main window showing the topology view. ......... 108
Figure 62. Brocade NetIron MLX-4. ............................................................... 110
Figure 63. Brocade BigIron RX-16. ................................................................ 111
Figure 64. Brocade TurboIron 24X Switch. ................................................... 112
Figure 65. Brocade FastIron CX-624S-HPOE Switch. ................................... 113
Figure 66. Brocade NetIron CES 2000 switches, 24- and 48-port configura-
tions in both Hybrid Fiber (HF) and RJ45 versions. ....................................... 114
Figure 67. Brocade FastIron Edge X 624. ..................................................... 114


Figure 68. Brocade INM Dashboard (top) and Backup Configuration Manager
(bottom). ........................................................................................................... 115
Figure 69. The pillars of Brocade VCS (detailed in the next section). ......... 118
Figure 70. A Brocade VCS reference network architecture. ........................ 122



Preface

Data center administrators today are facing unprecedented challenges. Business applications are shifting from conventional client/
server relationships to Web-based applications, data center real
estate is at a premium, energy costs continue to escalate, new regula-
tions are imposing more rigorous requirements for data protection and
security, and tighter corporate budgets are making it difficult to
accommodate client demands for more applications and data storage.
Since all major enterprises run their businesses on the basis of digital
information, the consequences of inadequate processing power, stor-
age, network accessibility, or data availability can have a profound
impact on the viability of the enterprise itself.
At the same time, new technologies that promise to alleviate some of
these issues require both capital expenditures and a sharp learning
curve to successfully integrate new solutions that can increase produc-
tivity and lower ongoing operational costs. The ability to quickly adapt
new technologies to new problems is essential for creating a more flex-
ible data center strategy that can meet both current and future
requirements. This effort necessitates cooperation between both data
center administrators and vendors and between the multiple vendors
responsible for providing the elements that compose a comprehensive
data center solution.
The much overused term “ecosystem” is nonetheless an accurate
description of the interdependencies of technologies required for
twenty-first century data center operation. No single vendor manufac-
tures the full spectrum of hardware and software elements required to
drive data center IT processing. This is especially true when each of
the three major domains of IT operations (server, storage, and networking) is undergoing profound technical evolution in the form of virtualization. Not only must products be designed and tested for

standards compliance and multi-vendor operability, but management
between the domains must be orchestrated to ensure stable opera-
tions and coordination of tasks.
Brocade has a long and proven track record in data center network innovation and collaboration with partners to create new solutions that solve real problems while reducing deployment and operational costs. This book provides an overview of the new technolo-
gies that are radically transforming the data center into a more cost-
effective corporate asset and the specific Brocade products that can
help you achieve this goal.
The book is organized as follows:
• “Chapter 1: Supply and Demand” starting on page 1 examines the
technological and business drivers that are forcing changes in the
conventional data center paradigm. Due to increased business
demands (even in difficult economic times), data centers are run-
ning out of space and power and this in turn is driving new
initiatives for server, storage and network consolidation.
• “Chapter 2: Running Hot and Cold” starting on page 9 looks at
data center power and cooling issues that threaten productivity
and operational budgets. New technologies such as wet- and dry-
side economizers, hot aisle/cold aisle rack deployment, and
proper sizing of the cooling plant can help maximize productive
use of existing real estate and reduce energy overhead.
• “Chapter 3: Doing More with Less” starting on page 17 provides
an overview of server virtualization and blade server technology.
Server virtualization, in particular, is moving from secondary to pri-
mary applications and requires coordination with upstream
networking and downstream storage for successful implementa-
tion. Brocade has developed a suite of new technologies to
leverage the benefits of server virtualization and coordinate oper-
ation between virtual machine managers and the LAN and SAN
networks.
• “Chapter 4: Into the Pool” starting on page 35 reviews the poten-
tial benefits of storage virtualization for maximizing utilization of
storage assets and automating life cycle management.



• “Chapter 5: Weaving a New Data Center Fabric” starting on
page 45 examines the recent developments in storage networking
technology, including higher bandwidth, fabric virtualization,
enhanced security, and SAN extension. Brocade continues to pio-
neer more productive solutions for SANs and is the author or co-
author of the significant standards underlying these new
technologies.
• “Chapter 6: The New Data Center LAN” starting on page 69 high-
lights the new challenges that virtualization and Web-based
applications present to the data communications network. Prod-
ucts like the Brocade ServerIron ADX Series of application delivery
controller provide more intelligence in the network to offload
server protocol processing and provide much higher levels of avail-
ability and security.
• “Chapter 7: Orchestration” starting on page 83 focuses on the
importance of standards-based coordination between server, stor-
age and network domains so that management frameworks can
provide a comprehensive view of the entire infrastructure and pro-
actively address potential bottlenecks.
• Chapters 8, 9, and 10 provide brief descriptions of Brocade prod-
ucts and technologies that have been developed to solve data
center problems.
• “Chapter 11: Brocade One” starting on page 117 describes a new
Brocade direction and innovative technologies to simplify the com-
plexity of virtualized data centers.
• “Appendix A: “Best Practices for Energy Efficient Storage Opera-
tions”” starting on page 123 is a reprint of an article written by
Tom Clark and Dr. Alan Yoder, NetApp, for the SNIA Green Storage
Initiative (GSI).
• “Appendix B: Online Sources” starting on page 139 is a list of
online resources.
• The “Glossary” starting on page 141 is a list of data center net-
work terms and definitions.

Chapter 1: Supply and Demand
The collapse of the old data center paradigm

As in other social and economic sectors, information technology has recently found itself in the awkward position of having lived beyond its
means. The seemingly endless supply of affordable real estate, elec-
tricity, data processing equipment, and technical personnel enabled
companies to build large data centers to house their mainframe and
open systems infrastructures and to support the diversity of business
applications typical of modern enterprises. In the new millennium,
however, real estate has become prohibitively expensive, the cost of
energy has skyrocketed, utilities are often incapable of increasing sup-
ply to existing facilities, data processing technology has become more
complex, and the pool of technical talent to support new technologies
is shrinking.
At the same time, the increasing dependence of companies and insti-
tutions on electronic information and communications has resulted in
a geometric increase in the amount of data that must be managed
and stored. Since 2000, the amount of corporate data generated
worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300
exabytes, with projections of about 1 zetabyte (1000 exabytes) by
2010. This data must be stored somewhere. The installation of more
servers and disk arrays to accommodate data growth is simply not sus-
tainable as data centers run out of floor space, cooling capacity, and
energy to feed additional hardware. The demands constantly placed
on IT administrators to expand support for new applications and data
are now in direct conflict with the supply of data center space and
power.
Gartner predicted that by 2009, half of the world's data centers would not have sufficient power to support their applications. An Emerson
Power survey projects that 96% of all data centers will not have suffi-
cient power by 2011.


The conventional approach to data center design and operations has endured beyond its usefulness primarily due to a departmental silo
effect common to many business operations. A data center adminis-
trator, for example, could specify the near-term requirements for power
distribution for IT equipment but because the utility bill was often paid
for by the company's facilities management, the administrator would
be unaware of continually increasing utility costs. Likewise, individual
business units might deploy new rich content applications resulting in
a sudden spike in storage requirements and additional load placed on
the messaging network, with no proactive notification of the data cen-
ter and network operators.
In addition, the technical evolution of data center design, cooling tech-
nology, and power distribution has lagged far behind the rapid
development of server platforms, networks, storage technology, and
applications. Twenty-first century technology now resides in twentieth
century facilities that are proving too inflexible to meet the needs of
the new data processing paradigm. Consequently, many IT managers
are looking for ways to align the data center infrastructure to the new
realities of space, power, and budget constraints.
Although data centers have existed for over 50 years, guidelines for
data center design were not codified into standards until 2005. The
ANSI/TIA-942 Telecommunications Infrastructure Standard for Data
Centers focuses primarily on cable plant design but also includes
power distribution, cooling, and facilities layout. TIA-942 defines four
basic tiers for data center classification, characterized chiefly by the
degree of availability each provides:
• Tier 1. Basic data center with no redundancy
• Tier 2. Redundant components but single distribution path
• Tier 3. Concurrently maintainable with multiple distribution paths
and one active
• Tier 4. Fault tolerant with multiple active distribution paths
A Tier 4 data center is obviously the most expensive to build and main-
tain but fault tolerance is now essential for most data center
implementations. Loss of data access is loss of business and few com-
panies can afford to risk unplanned outages that disrupt customers
and revenue streams. A “five-nines” (99.999%) availability that allows
for only 5.26 minutes of data center downtime annually requires
redundant electrical, UPS, mechanical, and generator systems. Dupli-
cation of power and cooling sources, cabling, network ports, and
storage, however, both doubles the cost of the data center infrastructure and the recurring monthly cost of energy. Without new means to
reduce the amount of space, cooling, and power while maintaining
high data availability, the classic data center architecture is not
sustainable.
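
To make the availability arithmetic concrete, the short Python sketch below (an illustration only; the function name is ours, not part of the TIA-942 standard) converts an availability percentage into allowed annual downtime and reproduces the five-nines figure cited above.

def annual_downtime_minutes(availability_pct):
    """Convert an availability percentage into allowed downtime per year."""
    minutes_per_year = 365.25 * 24 * 60  # average year, including leap years
    return (1 - availability_pct / 100.0) * minutes_per_year

print(round(annual_downtime_minutes(99.999), 2))     # ~5.26 minutes ("five nines")
print(round(annual_downtime_minutes(99.9) / 60, 1))  # ~8.8 hours at "three nines"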

Figure 1. The ANSI/TIA-942 standard functional area connectivity.

As shown in Figure 1, the TIA-942 standard defines the main functional areas and interconnecting cable plant for the data center.
Horizontal distribution is typically subfloor for older raised-floor data
centers or ceiling rack drop for newer facilities. The definition of pri-
mary functional areas is meant to rationalize the cable plant and
equipment placement so that space is used more efficiently and ongo-
ing maintenance and troubleshooting can be minimized. As part of the
mainframe legacy, many older data centers are victims of indiscriminate cable runs, often strung reactively in response to an immediate
need. The subfloors of older data centers can be clogged with aban-
doned bus and tag cables, which are simply too long and too tangled
to remove. This impedes airflow and makes it difficult to accommo-
date new cable requirements.
Note that the overview in Figure 1 does not depict the additional data
center infrastructure required for UPS systems (primarily battery
rooms), cooling plant, humidifiers, backup generators, fire suppres-
sion equipment, and other facilities support systems. Although the
support infrastructure represents a significant part of the data center
investment, it is often over-provisioned for the actual operational
power and cooling requirements of IT equipment. Even though it may be done in anticipation of future growth, over-provisioning is now a luxury that few data centers can afford. Properly sizing the computer
room air conditioning (CRAC) to the proven cooling requirement is one
of the first steps in getting data center power costs under control.

Figure 2. The support infrastructure adds substantial cost and energy overhead to the data center.

The diagram in Figure 2 shows the basic functional areas for IT pro-
cessing supplemented by the key data center support systems
required for high availability data access. Each unit of powered equip-
ment has a multiplier effect on total energy draw. First, each data
center element consumes electricity according to its specific load
requirements, typically on a 7x24 basis. Second, each unit dissipates
heat as a natural by-product of its operation, and heat removal and
cooling requires additional energy draw in the form of the computer
room air conditioning system. The CRAC system itself generates heat,
which also requires cooling. Depending on the design, the CRAC sys-
tem may require auxiliary equipment such as cooling towers, pumps,
and so on, which draw additional power. Because electronic equip-
ment is sensitive to ambient humidity, each element also places an
additional load on the humidity control system. And finally, each element requires UPS support for continuous operation in the event of a
power failure. Even in standby mode, the UPS draws power for monitor-
ing controls, charging batteries, and flywheel operation.
Air conditioning and air flow systems typically represent about 37% of
a data center's power bill. Although these systems are essential for IT
operations, they are often over-provisioned in older data centers and
the original air flow strategy may not work efficiently for rack-mount
open systems infrastructure. For an operational data center, however,
retrofitting or redesigning air conditioning and flow during production
may not be feasible.
For large data centers in particular, the steady accumulation of more
servers, network infrastructure, and storage elements and their
accompanying impact on space, cooling, and energy capabilities high-
lights the shortcomings of conventional data center design. Additional
space simply may not be available, the air flow inadequate for suffi-
cient cooling, and utility-supplied power already at their maximum. And
yet the escalating requirements for more applications, more data stor-
age, faster performance, and higher availability continue unabated.
Resolving this contradiction between supply and demand requires
much closer attention to both the IT infrastructure and the data center
architecture as elements of a common ecosystem.

As long as energy was relatively inexpensive, companies tended to simply buy additional floor space and cooling to deal with increasing IT
processing demands. Little attention was paid to the efficiency of elec-
trical distribution systems or the IT equipment they serviced. With
energy now at a premium, maximizing utilization of available power by
increasing energy efficiency is essential.
Industry organizations have developed new metrics for calculating the
energy efficiency of data centers and providing guidance for data cen-
ter design and operations. The Uptime Institute, for example, has
formulated a Site Infrastructure Energy Efficiency Ratio (SI-EER) to
analyze the relationship between total power supplied to the data cen-
ter and the power that is supplied specifically to operate IT equipment.
The total facilities power input divided by the IT equipment power draw
highlights the energy losses due to power conversion, heating/cooling,
inefficient hardware, and other contributors. An SI-EER of 2 would indicate that for every 2 watts of energy input at the data center meter, only 1 watt drives IT equipment. By the Uptime Institute's own member surveys, an SI-EER of 2.5 is not uncommon.


Likewise, The Green Grid, a global consortium of IT companies and professionals seeking to improve energy efficiency in data centers and
business computing ecosystems, has proposed a Data Center Infra-
structure Efficiency (DCiE) ratio that divides the IT equipment power
draw by the total data center facility power. This is essentially the recip-
rocal of SI-EER, yielding a fractional ratio between the facilities power
supplied and the actual power draw for IT processing. With DCiE or SI-
EER, however, it is not possible to achieve a 1:1 ratio that would
enable every watt supplied to the data center to be productively used
for IT processing. Cooling, air flow, humidity control, fire suppression,
power distribution losses, backup power, lighting, and other factors
inevitably consume power. These supporting elements, however, can
be managed so that productive utilization of facilities power is
increased and IT processing itself is made more efficient via new tech-
nologies and better product design.
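
Expressed as code, the two metrics are simple ratios of the same two measurements; the brief sketch below (the function names are ours) shows the reciprocal relationship using the 2-watts-in, 1-watt-of-IT example given earlier.

def si_eer(total_facility_watts, it_equipment_watts):
    """Uptime Institute-style ratio: total facility power over IT power (>= 1)."""
    return total_facility_watts / it_equipment_watts

def dcie(total_facility_watts, it_equipment_watts):
    """Green Grid DCiE: IT power over total facility power (a fraction below 1)."""
    return it_equipment_watts / total_facility_watts

print(si_eer(2000, 1000))  # 2.0 -- 2 watts at the meter per watt of IT load
print(dcie(2000, 1000))    # 0.5 -- the reciprocal of the SI-EER value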
Although SI-EER and DCiE are useful tools for a top-down analysis of
data center efficiency, it is difficult to support these high-level metrics
with real substantiating data. It is not sufficient, for example, to simply
use the manufacturer's stated power figures for specific equipment,
especially since manufacturer power ratings are often based on pro-
jected peak usage and not normal operations. In addition, stated
ratings cannot account for hidden inefficiencies (for example, failure to
use blanking panels in 19" racks) that periodically increase the overall
power draw depending on ambient conditions. The alternative is to
meter major data center components to establish baselines of opera-
tional power consumption. Although it may be feasible to design in
metering for a new data center deployment, it is more difficult for exist-
ing environments. The ideal solution is for facilities and IT equipment
to have embedded power metering capability that can be solicited via
network management frameworks.
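
As a minimal sketch of that metering approach (the sampling scheme and readings are hypothetical; the book does not prescribe a method), periodic metered samples can be averaged into an operational baseline and compared against the manufacturer's nameplate rating:

def operational_baseline(metered_watts):
    """Average a series of metered power readings into a working baseline."""
    return sum(metered_watts) / len(metered_watts)

# Hypothetical hourly readings for a single storage array, in watts
samples = [5100, 5300, 5250, 5400, 5150]
baseline = operational_baseline(samples)
nameplate_watts = 6400  # manufacturer's stated, peak-oriented rating

print(round(baseline))                          # ~5240 W measured draw
print(round(baseline / nameplate_watts * 100))  # ~82% of the nameplate figure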



High-level SI-EER and DCiE metrics focus on data center energy effi-
ciency to power IT equipment. Unfortunately, this does not provide
information on the energy efficiency or productivity of the IT equipment
itself. Suppose that there were two data centers with equivalent IT pro-
ductivity, the one drawing 50 megawatts of power to drive 25
megawatts of IT equipment would have the same DCiE as a data cen-
ter drawing 10 megawatts to drive 5 megawatts of IT equipment. The
IT equipment energy efficiency delta could be due to a number of dif-
ferent technology choices, including server virtualization, more
efficient power supplies and hardware design, data deduplication,
tiered storage, storage virtualization, or other elements. The practical
usefulness of high-level metrics is therefore dependent on underlying
opportunities to increase energy efficiency in individual products and
IT systems. Having a tighter ratio between facilities power input and IT
output is good, but lowering the overall input number is much better.
Data center energy efficiency has external implications as well. Cur-
rently, data centers in the US alone require the equivalent of more
than six 1,000-megawatt power plants at a cost of approximately $3B
annually. Although that represents less than 2% of US power consump-
tion, it is still a significant and growing number. Global data center
power usage is more than twice the US figure. Given that all modern
commerce and information exchange is based ultimately on digitized
data, the social cost in terms of energy consumption for IT processing
is relatively modest. In addition, the spread of digital information and
commerce has already provided environmentally friendly benefits in
terms of electronic transactions for banking and finance, e-commerce
for both retail and wholesale channels, remote online employment,
electronic information retrieval, and other systems that have increased
productivity and reduced the requirement for brick-and-mortar onsite
commercial transactions.
Data center managers, however, have little opportunity to bask in the
glow of external efficiencies especially when energy costs continue to
climb and energy sourcing becomes problematic. Although $3B may
be a bargain for modern US society as a whole, achieving higher levels
of data center efficiency is now a prerequisite for meeting the contin-
ued expansion of IT processing requirements. More applications and
more data mean either more hardware and energy draw or the adop-
tion of new data center technologies and practices that can achieve
much more with far less.


What differentiates the new data center architecture from the old may
not be obvious at first glance. There are, after all, still endless racks of
blinking lights, cabling, network infrastructure, storage arrays, and
other familiar systems and a certain chill in the air. The differences are
found in the types of technologies deployed and the real estate
required to house them.
As we will see in subsequent chapters, the new data center is an
increasingly virtualized environment. The static relationships between
clients, applications, and data characteristic of conventional IT pro-
cessing are being replaced with more flexible and mobile relationships
that enable IT resources to be dynamically allocated when and where
they are needed most. The enabling infrastructure in the form of vir-
tual servers, virtual fabrics, and virtual storage has the added benefit
of reducing the physical footprint of IT and its accompanying energy
consumption. The new data center architecture thus reconciles the
conflict between supply and demand by requiring less energy while
supplying higher levels of IT productivity.



Chapter 2: Running Hot and Cold
Taking the heat

Dissipating the heat generated by IT equipment is a persistent problem for data center operations. Cooling systems alone can account for
one third to one half of data center energy consumption. Over-provi-
sioning the thermal plant to accommodate current and future
requirements leads to higher operational costs. Under-provisioning the
thermal plant to reduce costs can negatively impact IT equipment,
increase the risk of equipment outages, and disrupt ongoing business
operations. Resolving heat generation issues therefore requires a
multi-pronged approach to address (1) the source of heat from IT
equipment, (2) the amount and type of cooling plant infrastructure
required, and (3) the efficiency of air flow around equipment on the
data center floor to remove heat.

Energy, Power, and Heat


In common usage, energy is the capacity of a physical system to do
work and is expressed in standardized units of joules (the work done
by a force of one newton moving one meter along the line of direction
of the force). Power, by contrast, is the rate at which energy is
expended over time, with one watt of power equal to one joule of
energy per second. The power of a 100-watt light bulb, for example, is
equivalent to 100 joules of energy per second, and the amount of energy consumed by the bulb over an hour would be 360,000 joules.
Because electrical systems often consume thousands of watts, the
amount of energy consumed is expressed in kilowatt hours (kWh), and
in fact the kilowatt hour is the preferred unit used by power companies
for billing purposes. A system that requires 10,000 watts of power
would thus consume and be billed for 10 kWh of energy for each hour
of operation, or 240 kWh per day, or 87,600 kWh per year. The typical
American household consumes 10,656 kWh per year.
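
The same arithmetic can be generalized in a few lines of Python (purely illustrative); it reproduces the 10,000-watt example above and relates it to the household figure:

def energy_kwh(power_watts, hours):
    """Energy consumed by a constant load over the given number of hours."""
    return power_watts / 1000.0 * hours

print(energy_kwh(10000, 24))                # 240 kWh per day
print(energy_kwh(10000, 24 * 365))          # 87,600 kWh per year
print(energy_kwh(10000, 24 * 365) / 10656)  # roughly 8 typical American households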


Medium and large IT hardware products are typically in the 1000+ watt range. Fibre Channel directors, for example, range from as little as
1300 watts (Brocade) to more than 3000 watts (competition). A large
storage array can be in the 6400 watt range. Although low-end servers
may be rated at ~200 watts, higher-end enterprise servers can be as
much as 8000 watts. With the high population of servers and the req-
uisite storage infrastructure to support them in the data center, plus
the typical 2x factor for the cooling plant energy draw, it is not difficult
to understand why data center power bills keep escalating. According
to the Environmental Protection Agency (EPA), data centers in the US
collectively consume the energy equivalent of approximately 6 million
households, or about 61 billion kWh per year.
Energy consumption generates heat. While power consumption is
expressed in watts, heat dissipation is expressed in BTU (British Ther-
mal Units) per hour (h). One watt is approximately 3.4 BTU/h. Because
BTUs quickly add up to tens or hundreds of thousands per hour in
complex systems, heat can also be expressed in therms, with one
therm equal to 100,000 BTU. Your household heating bill, for example,
is often listed as therms averaged per day or billing period.
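
In code, the heat conversions work out as follows (a simple sketch using the approximate 3.4 BTU/h-per-watt factor quoted above; the 100 kW example is hypothetical):

WATTS_TO_BTU_PER_HOUR = 3.4   # approximate conversion cited above
BTU_PER_THERM = 100000

def heat_btu_per_hour(power_watts):
    """Heat dissipated by a load, in BTU per hour."""
    return power_watts * WATTS_TO_BTU_PER_HOUR

def daily_therms(power_watts):
    """Heat output over a 24-hour day, expressed in therms."""
    return heat_btu_per_hour(power_watts) * 24 / BTU_PER_THERM

print(heat_btu_per_hour(1300))         # ~4,420 BTU/h for a 1300-watt director
print(round(daily_therms(100000), 1))  # ~81.6 therms/day for 100 kW of IT load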

Environmental Parameters
Because data centers are closed environments, ambient temperature
and humidity must also be considered. ASHRAE Thermal Guidelines
for Data Processing Environments provides best practices for main-
taining proper ambient conditions for operating IT equipment within
data centers. Data centers typically run fairly cool at about 68 degrees
Fahrenheit and 50% relative humidity. While legacy mainframe sys-
tems did require considerable cooling to remain within operational
norms, open systems IT equipment is less demanding. Consequently,
there has been a more recent trend to run data centers at higher
ambient temperatures, sometimes disturbingly referred to as
“Speedo” mode data center operation. Although ASHRAE's guidelines
present fairly broad allowable ranges of operation (50 to 90 degrees,
20 to 80% relative humidity), recommended ranges are still somewhat
narrow (68 to 77 degrees, 40 to 55% relative humidity).
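
A simple monitoring check against those envelopes might look like the sketch below, which hard-codes the allowable and recommended ranges exactly as quoted above (the function itself is ours, not an ASHRAE artifact):

def classify_conditions(temp_f, rel_humidity_pct):
    """Classify an ambient reading against the ASHRAE ranges quoted above."""
    recommended = 68 <= temp_f <= 77 and 40 <= rel_humidity_pct <= 55
    allowable = 50 <= temp_f <= 90 and 20 <= rel_humidity_pct <= 80
    if recommended:
        return "recommended"
    if allowable:
        return "allowable"
    return "out of range"

print(classify_conditions(68, 50))  # "recommended" -- the typical setpoint
print(classify_conditions(85, 30))  # "allowable" -- a warmer-running room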


Rationalizing IT Equipment Distribution


Servers and network equipment are typically configured in standard
19" (wide) racks and rack enclosures, in turn, are arranged for accessi-
bility for cabling and servicing. Increasingly, however, the floor plan for
data center equipment distribution must also accommodate air flow
for equipment cooling. This requires that individual units be mounted
in a rack for consistent air flow direction (all exhaust to the rear or all
exhaust to the front) and that the rows of racks be arranged to exhaust
into a common space, called a hot aisle/cold aisle plan, as shown in
Figure 3.

Figure 3. Hot aisle/cold aisle equipment floor plan.

A hot aisle/cold aisle floor plan provides greater cooling efficiency by directing cold to hot air flow for each equipment row into a common
aisle. Each cold aisle feeds cool air for two equipment rows while each
hot aisle allows exhaust for two equipment rows, thus enabling maxi-
mum benefit for the hot/cold circulation infrastructure. Even greater
efficiency is achieved by deploying equipment with variable-speed
fans.

Figure 4. Variable speed fans enable more efficient distribution of cooling.

Variable speed fans increase or decrease their spin rate in response to
changes in equipment temperature. As shown in Figure 4, cold air flow
into equipment racks with constant speed fans favors the hardware
mounted in the lower equipment slots and thus nearer to the cold air
feed. Equipment mounted in the upper slots is heated by their own
power draw as well as the heat exhaust from the lower tiers. Use of
variable speed fans, by contrast, enables each unit to selectively apply
cooling as needed, with more even utilization of cooling throughout the
equipment rack.
Research done by Michael Patterson and Annabelle Pratt of Intel lever-
ages the hot aisle/cold aisle floor plan approach to create a metric for
measuring energy consumption of IT equipment. By convention, the
energy consumption of a unit of IT hardware can be measured physi-
cally via use of metering equipment or approximated via use of the
manufacturer's stated power rating (in watts or BTUs).
As shown in Figure 5, Patterson and Pratt incorporate both the energy
draw of the equipment mounted within a rack and the associated hot
aisle/cold aisle real estate required to cool the entire rack. This “work
cell” unit thus provides a more accurate description of what is actually
required to power and cool IT equipment and, supposing the equip-
ment (for example, servers) is uniform across a row, provides a useful
multiplier for calculating total energy consumption of an entire row of
mounted hardware.
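A minimal sketch of the work cell calculation follows. The per-rack wattage,
cooling factor, and row length are illustrative assumptions rather than
figures from the Intel study; the point is simply that a uniform row can be
estimated with a single multiplier.

# Work-cell estimate: rack power draw plus the cooling burden attributed to
# the hot aisle/cold aisle real estate around it, multiplied across a row.
rack_power_watts = 8_000        # assumed measured or nameplate rack draw
cooling_overhead = 2.0          # typical ~2x cooling factor cited earlier
racks_per_row = 10              # assumed uniform row of identical racks

work_cell_watts = rack_power_watts * cooling_overhead
row_watts = work_cell_watts * racks_per_row
row_kwh_per_year = row_watts / 1000 * 24 * 365

print(f"Work cell: {work_cell_watts:,.0f} W")
print(f"Row total: {row_watts:,.0f} W ({row_kwh_per_year:,.0f} kWh/year)")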


Figure 5. The concept of work cell incorporates both equipment power
draw and requisite cooling.

When energy was plentiful and cheap, it was often easy to overlook the
basic best practices for data center hardware deployment and the sim-
ple remedies to correct inefficient air flow. Blanking plates, for
example, are used to cover unused rack or cabinet slots and thus
enforce more efficient airflow within an individual rack. Blanking
plates, however, are often ignored, especially when equipment is fre-
quently moved or upgraded. Likewise, it is not uncommon to find
decommissioned equipment still racked up (and sometimes actually
powered on). Racked but unused equipment can disrupt air flow within
a cabinet and trap the heat generated by active hard-
ware. In raised floor data centers, decommissioned cabling can
disrupt cold air circulation and unsealed cable cutouts can result in
continuous and fruitless loss of cooling. Because the cooling plant
itself represents such a significant share of data center energy use,
even seemingly minor issues can quickly add up to major inefficien-
cies and higher energy bills.

Economizers
Traditionally, data center cooling has been provided by large air condi-
tioning systems (computer room air conditioning, or CRAC) that used
CFC (chlorofluorocarbon) or HCFC (hydrochlorofluorocarbon) refriger-
ants. Since both CFCs and HCFCs are ozone depleting, current
systems use ozone-friendly refrigerants to minimize broader environ-
mental impact. Conventional CRAC systems, however, consume
significant amounts of energy and may account for nearly half of a
data center power bill. In addition, these systems are typically over-pro-
visioned to accommodate data center growth and consequently incur
a higher operational expense than is justified for the required cooling
capacity.
For new data centers in temperate or colder latitudes, economizers
can provide part or all of the cooling requirement. Economizer technol-
ogy dates to the mid-1800s but has seen a revival in response to rising
energy costs. As shown in Figure 6, an economizer (in this case, a dry-
side economizer) is essentially a heat exchanger that leverages cooler
outside ambient air temperature to cool the equipment racks.

Figure 6. An economizer uses the lower ambient temperature of outside
air to provide cooling.

Use of outside air has its inherent problems. Data center equipment is
sensitive to particulates that can build up on circuit boards and con-
tribute to heating issues. An economizer may therefore incorporate
particulate filters to scrub the external air before the air flow enters the
data center. In addition, external air may be too humid or too dry for
data center use. Integrated humidifiers and dehumidifiers can condi-
tion the air flow to meet operational specifications for data center use.
As stated above, ASHRAE recommends 40 to 55% relative humidity.

Dry-side economizers depend on the external air supply temperature
to be sufficiently lower than the data center itself, and this may fluctu-
ate seasonally. Wet-side economizers thus include cooling towers as
part of the design to further condition the air supply for data center
use. Cooling towers present their own complications, however, especially
in more arid geographies where water resources are expensive and scarce.
Ideally, economizers should leverage recyclable resources as much as
possible to accomplish the task of cooling while reducing any collateral
environmental impact.

Monitoring the Data Center Environment


Because vendor wattage and BTU specifications may assume maxi-
mum load conditions, using data sheet specifications or equipment
label declarations does not provide an accurate basis for calculating
equipment power draw or heat dissipation. An objective multi-point
monitoring system for measuring heat and humidity throughout the
data center is really the only means to observe and proactively
respond to changes in the environment.
A number of monitoring options are available today. For example,
some vendors are incorporating temperature probes into their equip-
ment design to provide continuous reporting of heat levels via
management software. Some solutions provide rack-mountable sys-
tems that include both temperature and humidity probes and
monitoring through a Web interface. Fujitsu offers a fiber optic system
that leverages the effect of temperature on light propagation to pro-
vide a multi-point probe using a single fiber optic cable strung
throughout equipment racks. Accuracy is reported to be within a half
degree Celsius and within 1 meter of the measuring point. In addition,
new monitoring software products can render a three-dimensional
view of temperature distribution across the entire data center, analo-
gous to an infrared photo of a heat source.
Although monitoring systems add cost to data center design, they are
invaluable diagnostic tools for fine-tuning airflow and equipment
placement to maximize cooling and keeping power and cooling costs
to a minimum. Many monitoring systems can be retrofitted to existing
data center plants so that even older sites can leverage new
technologies.
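The logic of such monitoring is straightforward, as the sketch below
suggests. The read_probes function is a hypothetical stand-in for whatever
probe or management interface a given product exposes; the thresholds are
the ASHRAE recommended ranges cited earlier.

# Sketch of a multi-point environmental check against the ASHRAE recommended
# ranges quoted earlier (68 to 77 degrees F, 40 to 55% relative humidity).
TEMP_RANGE_F = (68.0, 77.0)
HUMIDITY_RANGE = (40.0, 55.0)

def read_probes():
    # Hypothetical: return a list of (location, temp_f, relative_humidity).
    return [("rack-12 top", 81.5, 38.0), ("rack-12 bottom", 71.0, 45.0)]

def out_of_range(value, low_high):
    low, high = low_high
    return value < low or value > high

for location, temp_f, humidity in read_probes():
    if out_of_range(temp_f, TEMP_RANGE_F) or out_of_range(humidity, HUMIDITY_RANGE):
        print(f"ALERT {location}: {temp_f} F, {humidity}% RH outside recommended range")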

Chapter 3: Doing More with Less
Leveraging virtualization and blade server technologies

Of the three primary components of an IT data center infrastructure—
servers, storage and network—servers are by far the most populous
and have the highest energy impact. Servers represent approximately
half of the IT equipment energy cost and about a quarter of the total
data center power bill. Server technology has therefore been a prime
candidate for regulation via EPA Energy Star and other market-driven
initiatives and has undergone a transformation in both hardware and
software. Server virtualization and blade server design, for example,
are distinct technologies fulfilling different goals but together have a
multiplying effect on server processing performance and energy effi-
ciency. In addition, multi-core processors and multi-processor
motherboards have dramatically increased server processing power in
a more compact footprint.

VMs Reborn
The concept of virtual machines dates back to mainframe days. To
maximize the benefit of mainframe processing, a single physical sys-
tem was logically partitioned into independent virtual machines. Each
VM ran its own operating system and applications in isolation although
the processor and peripherals could be shared. In today's usage, VMs
typically run on open systems servers and although direct-connect
storage is possible, shared storage on a SAN or NAS is the norm.
Unlike previous mainframe implementations, today's virtualization
software can support dozens of VMs on a single physical server. Typi-
cally, 10 or fewer VM instances are run per physical platform although
more powerful server platforms can support 20 or more VMs.

The benefits of server virtualization are as obvious as the potential
risks. Running 10 VMs on a single server platform eliminates the need
for 9 additional servers with their associated cost, components, and
accompanying power draw and heat dissipation. For data centers with
hundreds or thousands of servers, virtualization offers an immediate
solution for server sprawl and ever increasing costs.
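The energy case is easy to quantify. In the sketch below, the server count,
per-server wattage, and consolidation ratio are assumed illustrative values,
and the consolidated hosts are assumed to draw roughly the same per-box
wattage (in practice they draw somewhat more).

# Rough savings from consolidating standalone servers onto virtualized hosts.
physical_servers = 1_000          # assumed current server count
watts_per_server = 300            # assumed average draw per 1U server
vms_per_host = 10                 # consolidation ratio used in the text
cooling_factor = 2.0              # ~2x cooling burden noted earlier

hosts_needed = physical_servers // vms_per_host
watts_before = physical_servers * watts_per_server * cooling_factor
watts_after = hosts_needed * watts_per_server * cooling_factor   # understates host load

print(f"Hosts after consolidation: {hosts_needed}")
print(f"Power (with cooling): {watts_before/1000:.0f} kW -> {watts_after/1000:.0f} kW")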
Like any virtualization strategy, however, the logical separation of VMs
must be maintained and access to server memory and external
peripherals negotiated to prevent conflicts or errors. VMs on a single
platform are hosted by a hypervisor layer which runs either directly
(Type 1 or native) on the server hardware or on top of (Type 2 or
hosted) the conventional operating system already running on the
server hardware.

Figure 7. A native or Type 1 hypervisor.

In a native Type 1 virtualization implementation, the hypervisor runs
directly on the server hardware as shown in Figure 7. This type of
hypervisor must therefore support all CPU, memory, network and stor-
age I/O traffic directly without the assistance of an underlying
operating system. The hypervisor is consequently written to a specific
CPU architecture (for open systems, typically an Intel x86 design) and
associated I/O. Clearly, one of the benefits of native hypervisors is that
overall latency can be minimized as individual VMs perform the normal
functions required by their applications. With the hypervisor directly
managing hardware resources, it is also less vulnerable over time to
code changes or updates that might be required if an underlying OS
were used.


Figure 8. A hosted or Type 2 hypervisor.

As shown in Figure 8, a hosted or Type 2 server virtualization solution
is installed on top of the host operating system. The advantage of this
approach is that virtualization can be implemented on existing servers
to more fully leverage existing processing power and support more
applications in the same footprint. Given that the host OS and hypervi-
sor layer inserts additional steps between the VMs and the lower level
hardware, this hosted implementation incurs more latency than native
hypervisors. On the other hand, hosted hypervisors can readily support
applications with moderate performance requirements and still
achieve the objective of consolidating compute resources.
In both native and hosted hypervisor environments, the hypervisor
oversees the creation and activity of its VMs to ensure that each VM
has its requisite resources and does not interfere with the activity of
other VMs. Without the proper management of shared memory tables
by the hypervisor, for example, one VM instance could easily crash
another. The hypervisor must also manage the software traps created
to intercept hardware calls made by the guest OS and provide the
appropriate emulation of normal OS hardware access and I/O.
Because the hypervisor is now managing multiple virtual computers,
secure access to the hypervisor itself must be maintained. Efforts to
standardize server virtualization management for stable and secure
operation are being led by the Distributed Management Task Force
(DMTF) through its Virtualization Management Initiative (VMAN) and
through collaborative efforts by virtualization vendors and partner
companies.

Server virtualization software is now available for a variety of CPUs,
hardware platforms and operating systems. Adoption for mid-tier, mod-
erate performance applications has been enabled by the availability of
economical dual-core CPUs and commodity rack-mount servers. High-
performance requirements can be met with multi-CPU platforms opti-
mized for shared processing. Although server virtualization has
steadily been gaining ground in large data centers, there has been
some reluctance to commit the most mission-critical applications to
VM implementations. Consequently, mid-tier applications have been
first in line and as these deployments become more pervasive and
proven, mission-critical applications will follow.
In addition to providing a viable means to consolidate server hardware
and reduce energy costs, server virtualization enables a degree of
mobility unachievable via conventional server management. Because
the virtual machine is now detached from the underlying physical pro-
cessing, memory, and I/O hardware, it is now possible to migrate a
virtual machine from one hardware platform to another non-disrup-
tively. If, for example, an application's performance is beginning to
exceed the capabilities of its shared physical host, it can be migrated
onto a less busy host or one that supports faster CPUs and I/O. This
application agility, which initially was just an unintended by-product of
migrating virtual machines, has become one of the compelling reasons
to invest in a virtual server solution. With ever-changing business,
workload and application priorities, the ability to quickly shift process-
ing resources where most needed is a competitive business
advantage.
As discussed in more detail below, virtual machine mobility creates
new opportunities for automating application distribution within the
virtual server pool and implementing policy-based procedures to
enforce priority handling of select applications over others. Communi-
cation between the virtualization manager and the fabric via APIs, for
example, enables proactive response to potential traffic congestion or
changes in the state of the network infrastructure. This further simpli-
fies management of application resources and ensures higher
availability.

Blade Server Architecture
Server consolidation in the new data center can also be achieved by
deploying blade server frames. The successful development of blade
server architecture has been dependent on the steady increase in CPU
processing power and solving basic problems around shared power,
cooling, memory, network, storage, and I/O resources. Although blade
servers are commonly associated with server virtualization, these are
distinct technologies that have a multiplying benefit when combined.
Blade server design strips away all but the most essential dedicated
components from the motherboard and provides shared assets as
either auxiliary special function blades or as part of the blade chassis
hardware. Consequently, the power consumption of each blade server
is dramatically reduced while power supply, fans and other elements
are shared with greater efficiency. A standard data center rack, for
example, can accommodate 42 1U conventional rack-mount servers,
but 128 or more blade servers in the same space. A single rack of
blade servers can therefore house the equivalent of 3 racks of conven-
tional servers; and although the cooling requirement for a fully
populated blade server rack may be greater than for a conventional
server rack, it is still less than the equivalent 3 racks that would other-
wise be required.
As shown in Figure 9, a blade server architecture offloads all compo-
nents that can be supplied by the chassis or by supporting specialized
blades. The blade server itself is reduced to one or more CPUs and
requisite auxiliary logic. The degree of component offload and avail-
ability of specialized blades varies from vendor to vendor, but the net
result is essentially the same. More processing power can now be
packed into a much smaller space and compute resources can be
managed more efficiently.

Figure 9. A blade server architecture centralizes shared resources
while reducing individual blade server elements.

By significantly reducing the number of discrete components per pro-
cessing unit, the blade server architecture achieves higher efficiencies
in manufacturing, reduced consumption of resources, streamlined
design and reduced overall costs of provisioning and administration.
The unique value-add of each vendor's offering may leverage hot-swap
capability, variable-speed fans, variable-speed CPUs, shared memory
blades and consolidated network access. Brocade has long worked
with the major blade server manufacturers to provide optimized
Access Gateway and switch blades to centralize storage network capa-
bility and the specific features of these products will be discussed in
the next section.
Although consolidation ratios of 3:1 are impressive, much higher
server consolidation is achieved when blade servers are combined
with server virtualization software. A fully populated data center rack
of 128 blade servers, for example, could support 10 or more virtual
machines per blade for a total of 1280 virtual servers. That would be
the equivalent of 30 racks (at 42 servers per rack) of conventional 1U
rack-mount servers running one OS instance per server. From an
energy savings standpoint, that represents the elimination of over
1000 power supplies, fan units, network adapters, and other elements
that contribute to higher data center power bills and cooling load.
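The rack-level arithmetic in this example can be restated in a few lines;
the figures below come directly from the text.

# Consolidation math from the text: blades per rack, VMs per blade, and the
# number of conventional 1U racks the combination replaces.
blades_per_rack = 128
vms_per_blade = 10
servers_per_conventional_rack = 42

total_vms = blades_per_rack * vms_per_blade                  # 1,280 virtual servers
racks_replaced = total_vms / servers_per_conventional_rack   # ~30 racks

print(f"Virtual servers per blade rack: {total_vms}")
print(f"Equivalent conventional racks:  {racks_replaced:.0f}")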
As a 2009 survey by blade.org shows, adoption of blade server tech-
nology has been increasing in both large data centers and small/
medium business (SMB) environments. Slightly less than half of the
data center respondents and approximately a third of SMB operations
have already implemented blade servers and over a third in both cate-
gories have deployment plans in place. With limited data center real
estate and increasing power costs squeezing data center budgets, the
combination of blade servers and server virtualization is fairly easy to
justify.

Brocade Server Virtualization Solutions


Whether on standalone servers or blade server frames, implementing
server virtualization has both upstream (client) and downstream (stor-
age) impact in the data center. Because Brocade offers a full spectrum
of products spanning LAN, WAN and SAN, it can help ensure that a
server virtualization deployment proactively addresses the new
requirements of both client and storage access. The value of a server
virtualization solution is thus amplified when combined with Brocade's
network technology.

To maximize the benefits of network connectivity in a virtualized server
environment, Brocade has worked with the major server virtualization
solutions and managers to deliver high performance, high availability,
security, energy efficiency, and streamlined management end to end.
The following Brocade solutions can enhance a server virtualization
deployment and help eliminate potential bottlenecks:
Brocade High-Performance 8 Gbps HBAs
In a conventional server, a host bus adapter (HBA) provides storage
access for a single operating system and its applications. In a virtual
server configuration, the HBA may be supporting 10 to 20 OS
instances, each running its own application. High performance is
therefore essential for enabling multiple virtual machines to share
HBA ports without congestion. The Brocade 815 (single port) and 825
HBAs (dual port, shown in Figure 10) provide 8 Gbps bandwidth and
500,000 I/Os per second (IOPS) performance per port to ensure the
maximum throughput for shared virtualized connectivity. Brocade
N_Port Trunking enables the 825 to deliver an unprecedented 16
Gbps bandwidth (3200 MBps) and one million IOPS performance. This
exceptional performance helps ensure that server virtualization con-
figurations can expand over time to accommodate additional virtual
machines without impacting the continuous operation of existing
applications.

Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for
an aggregate 16 Gbps bandwidth and one million IOPS.


The Brocade 815 and 825 HBAs are further optimized for server virtu-
alization connectivity by supporting advanced intelligent services that
enable end-to-end visibility and management. As discussed below,
Brocade virtual machine SAN boot, N_Port ID Virtualization (NPIV) and
integrated Quality of Service (QoS) provide powerful tools for simplify-
ing virtual machine deployments and providing proactive alerts directly
to server virtualization managers.
Brocade 8 Gbps Switch and Director Ports
In virtual server environments, the need for speed does not end at the
network or storage port. Because more traffic is now traversing fewer
physical links, building high-performance network infrastructures is a
prerequisite for maintaining non-disruptive, high-performance virtual
machine traffic flows. Brocade's support of 8 Gbps ports on both
switch and enterprise-class platforms enables customers to build high-
performance, non-blocking storage fabrics that can scale from small
VM configurations to enterprise-class data center deployments.
Designing high-performance fabrics ensures that applications running
on virtual machines are not exposed to bandwidth issues and can
accommodate high volume traffic patterns required for data backup
and other applications.
Brocade Virtual Machine SAN Boot
For both standalone physical servers and blade server environments,
the ability to boot from the storage network greatly simplifies virtual
machine deployment and migration of VM instances from one server
to another. As shown in Figure 11, SAN boot centralizes management
of boot images and eliminates the need for local storage on each phys-
ical server platform. When virtual machines are migrated from one
hardware platform to another, the boot images can be readily
accessed across the SAN via Brocade HBAs.


Figure 11. SAN boot centralizes management of boot images and
facilitates migration of virtual machines between hosts.

Brocade 815 and 825 HBAs provide the ability to automatically
retrieve boot LUN parameters from a centralized fabric-based registry.
This eliminates the error-prone manual host-based configuration
scheme required by other HBA vendors. Brocade's SAN boot and boot
LUN discovery facilitates migration of virtual machines from host to
host, removes the need for local storage and improves reliability and
performance.
Brocade N_Port ID Virtualization for Workload
Optimization
In a virtual server environment, the individual virtual machine
instances are unaware of physical ports since the underlying hardware
has been abstracted by the hypervisor. This creates potential problems
for identifying traffic flows from virtual machines through shared phys-
ical ports. NPIV is an industry standard that enables multiple Fibre
Channel addresses to share a single physical Fibre Channel port. In a
server virtualization environment, NPIV allows each virtual machine
instance to have a unique World Wide Name (WWN) or virtual HBA
port. This in turn provides a level of granularity for identifying each VM
attached to the fabric for end-to-end monitoring, accounting, and con-
figuration. Because the WWN is now bound to an individual virtual
machine, the WWN follows the VM when it is migrated to another plat-
form. In addition, NPIV creates the linkage required for advanced
services such as QoS, security, and zoning as discussed in the next
section.
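Conceptually, NPIV amounts to giving each virtual machine a fabric identity
of its own, independent of the physical port it currently uses. The sketch
below models that idea as a simple data structure; the VM names and WWNs are
made-up examples, not output from any particular hypervisor or fabric
interface.

# Conceptual sketch of NPIV-style addressing: each VM gets its own virtual
# WWN behind a shared physical HBA port, so zoning, QoS, and monitoring can
# follow the VM when it migrates.
from dataclasses import dataclass

@dataclass
class VirtualPort:
    vm_name: str
    virtual_wwpn: str      # per-VM identity that persists across migrations
    physical_port: str     # physical port the VM currently logs in through

ports = [
    VirtualPort("sql-prod-01", "50:06:0b:00:00:c2:62:00", "hba1-port0"),
    VirtualPort("web-frontend", "50:06:0b:00:00:c2:62:01", "hba1-port0"),
]

def migrate(vm: VirtualPort, new_physical_port: str) -> None:
    # Only the physical login point changes; the virtual WWPN (and therefore
    # its zone membership and QoS assignment) stays with the VM.
    vm.physical_port = new_physical_port

migrate(ports[0], "hba2-port1")
print(ports[0])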


Configuring Single Initiator/Target Zoning


Brocade has been a pioneer in fabric-based zoning to segregate fabric
traffic and restrict visibility of storage resources to only authorized
hosts. As a recognized best practice for server to storage configura-
tion, NPIV and single initiator/target zoning ensures that individual
virtual machines have access only to their designated storage assets.
This feature minimizes configuration errors during VM migration and
extends the management visibility of fabric connections to specific vir-
tual machines.
Brocade End-to-End Quality of Service
The combination of NPIV and zoning functionality on Brocade HBAs
and switches provides the foundation for higher-level fabric services
including end-to-end QoS. Because the traffic flows from each virtual
machine can be identified by virtual WWN and segregated via zoning,
each can be assigned a delivery priority (low, medium or high) that is
enforced fabric-wide from the host connection to the storage port, as
shown in Figure 12.

Figure 12. Brocade's QoS enforces traffic prioritization from the server
HBA to the storage port across the fabric. Virtual Channels technology
enables QoS at the ASIC level in the HBA, the default QoS priority is
Medium, and frame-level interleaving of outbound data maximizes initiator
link utilization.

While some applications running on virtual machines are logical candi-
dates for QoS prioritization (for example, SQL Server), Brocade's Top
Talkers management feature can help identify which VM applications
may require priority treatment. Because Brocade end-to-end QoS is ulti-
mately tied to the virtual machine's virtualized WWN address, the QoS
assignment follows the VM if it is migrated from one hardware platform
to another. This feature ensures that applications enjoy non-disruptive
data access despite adds/moves and changes to the downstream envi-
ronment and enables administrators to more easily fulfill client service-
level agreements (SLAs).
Brocade LAN and SAN Security
Most companies are now subject to government regulations that man-
date the protection and security of customer data transactions. Planning
a virtualization deployment must therefore also account for basic secu-
rity mechanisms for both client and storage access. Brocade offers a
broad spectrum of security solutions, including LAN and WAN-based
technologies and storage-specific SAN security features. For example,
Brocade SecureIron products, shown in Figure 13, provide firewall traffic
management and LAN security to safeguard access from clients to vir-
tual hosts on the IP network.

Figure 13. Brocade SecureIron switches provide firewall traffic
management and LAN security for client access to virtual server clusters.

Brocade SAN security features include authentication via access control
lists (ACLs) and role-based access control (RBAC) as well as security
mechanisms for authenticating connectivity of switch ports and devices
to fabrics. In addition, the Brocade Encryption Switch, shown in
Figure 14, and FS8-18 Encryption Blade for the Brocade DCX Backbone
platform provide high-performance (96 Gbps) data encryption for data-
at-rest. Brocade's security environment thus protects data-in-flight from
client to virtual host as well as data written to disk across the SAN.

Figure 14. The Brocade Encryption Switch provides high-performance
data encryption to safeguard data written to disk or tape.


Brocade Access Gateway for Blade Frames


Server virtualization software can be installed on conventional server
platforms or blade server frames. Blade server form factors offer the
highest density for consolidating IT processing in the data center and
leverage shared resources across the backplane. To optimize storage
access from blade server frames, Brocade has partnered with blade
server providers to create high-performance, high-availability Access
Gateway blades for Fibre Channel connectivity to the SAN. Brocade
Access Gateway technology leverages NPIV to simplify virtual machine
addressing and F_Port Trunking for high utilization and automatic link
failover. By integrating SAN connectivity into a virtualized blade server
chassis, Brocade helps to streamline deployment and simplify manage-
ment while reducing overall costs.
The Energy-Efficient Brocade DCX Backbone Platform for
Consolidation
With 4x the performance and over 10x the energy efficiency of other
SAN directors, the Brocade DCX delivers the high performance required
for virtual server implementation and can accommodate growth in VM
environments in a compact footprint. The Brocade DCX supports 384
ports of 8 Gbps for a total of 3 Tbps chassis bandwidth. Ultra-high-speed
inter-chassis links (ICLs) allow further expansion of the SAN core for
scaling to meet the requirements of very large server virtualization
deployments. The Brocade DCX is also designed to non-disruptively inte-
grate Fibre Channel over Ethernet (FCoE) and Data Center Bridging
(DCB) for future virtual server connectivity. The Brocade DCX is also
available in a 192-port configuration (as the Brocade DCX-4S) to support
medium VM configurations, while providing the same high availability,
performance, and advanced SAN services.
The Brocade DCX's Adaptive Networking services for QoS, ingress rate
limiting, congestion detection, and management ensure that traffic
streams from virtual machines are proactively managed throughout the
fabric and accommodate the varying requirements of upper-layer busi-
ness applications. Adaptive Networking services provide greater agility
in managing application workloads as they migrate between physical
servers.

Enhanced and Secure Client Access with Brocade LAN Solutions
Brocade offers a full line of sophisticated LAN switches and routers for
Ethernet and IP traffic from Layer 2/3 to Layer 4–7 application switch-
ing. This product suite is the natural complement to Brocade's robust
SAN products and enables customers to build full-featured and secure
networks end to end. As with the Brocade DCX architecture for SANs,
Brocade BigIron RX, shown in Figure 15, and FastIron SuperX switches
incorporate best-in-class functionality and low power consumption to
deliver high-performance core switching for data center LAN backbones.

Figure 15. Brocade BigIron RX platforms offer high-performance Layer
2/3 switching in three compact, energy-efficient form factors.

Brocade edge switches with Power over Ethernet (PoE) support enable
customers to integrate a wide variety of IP business applications, includ-
ing voice over IP (VoIP), wireless access points, and security monitoring.
Brocade SecureIron switches bring advanced security protection for cli-
ent access into virtualized server clusters, while Brocade ServerIron
switches provide Layer 4–7 application switching and load balancing.
Brocade LAN solutions provide up to 10 Gbps throughput per port and
so can accommodate the higher traffic loads typical of virtual machine
environments.
Brocade Industry Standard SMI-S Monitoring
Virtual server deployments dramatically increase the number of data
flows and requisite bandwidth per physical server or blade server.
Because server virtualization platforms can support dynamic migration
of application workloads between physical servers, complex traffic pat-
terns are created and unexpected congestion can occur. This
complicates server management and can impact performance and
availability. Brocade can proactively address these issues by integrating
communication between Brocade intelligent fabric services and VM

managers. As the fabric monitors potential congestion on a per-VM
basis, it can proactively alert virtual machine management that a work-
load should be migrated to a less utilized physical link. Because this
diagnostic functionality is fine tuned to the workflows of each VM,
changes can be restricted to only the affected VM instances.
Open management standards such as the Storage Management Initia-
tive (SMI) are the appropriate tools for integrating virtualization
management platforms with fabric management services. As one of the
original contributors to the SMI-S specification, Brocade is uniquely posi-
tioned to provide a truly open systems solution end to end. In addition,
configuration management, capacity planning, SLA policies, and virtual
machine provisioning can be integrated with Brocade fabric services
such as Adaptive Networking, encryption and security policies.
Brocade Professional Services
Even large companies who want to take advantage of the cost savings,
consolidation and lower energy consumption characteristic of server vir-
tualization technology may not have the staff or in-house expertise to
plan and implement a server virtualization project. Many organizations
fail to consider the overall impact of virtualization on the data center
and that in turn can lead to degraded application performance, inade-
quate data protection, and increased management complexity. Because
Brocade technology is deployed in the vast majority of data centers
worldwide and Brocade has years of experience in the most mission-crit-
ical IT environments, it can provide a wealth of practical knowledge and
insight into the key issues surrounding client-to-server and server-to-
storage data access. Brocade Professional Services has helped hun-
dreds of customers upgrade to virtualized server infrastructures and
provides a spectrum of services from virtual server assessments, audits
and planning to end-to-end deployment and operation. A well-conceived
and executed virtualization strategy can ensure that a virtual machine
deployment achieves its budgetary goals and fulfills the prime directive
to do far more with much less.

FCoE and Server Virtualization
Fibre Channel over Ethernet is an optional storage network interconnect
for both conventional and virtualized server environments. As a means
to encapsulate Fibre Channel frames in Ethernet, FCoE enables a simpli-
fied cabling solution for reducing the number of network and storage
interfaces per server attachment. The combined network and storage
connection is now provided by a converged network adapter (CNA), as
shown in Figure 16.

Figure 16. FCoE simplifies the server cable plant by reducing the num-
ber of network interfaces required for client, peer-to-peer, and storage
access.

Given the more rigorous requirements for storage data handling and
performance, FCoE is not intended to run on conventional Ethernet net-
works. In order to replicate the low latency, deterministic delivery, and
high performance of traditional Fibre Channel, FCoE is best supported
on a new, hardened form of Ethernet known as Converged Enhanced
Ethernet (CEE), or Data Center Bridging (DCB), at 10 Gbps. Without the
enhancements of DCB, standard Ethernet is too unreliable to support
high-performance block storage transactions. Unlike conventional Ether-
net, DCB provides much more robust congestion management and high-
availability features characteristic of data center Fibre Channel.
DCB replicates Fibre Channel's buffer-to-buffer credit flow control func-
tionality via priority-based flow control (PFC) using 802.1Qbb pause
frames. Instead of buffer credits, pause quanta are used to restrict traf-
fic for a given period to relieve network congestion and avoid dropped
frames. To accommodate the larger payload of Fibre Channel frames,
DCB-enabled switches must also support jumbo frames so that entire
Fibre Channel frames can be encapsulated in each Ethernet transmis-
sion. Other standards initiatives such as TRILL (Transparent
Interconnection of Lots of Links) are being developed to enable
multipathing through DCB-switched infrastructures.
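To give a sense of scale for the pause mechanism, the sketch below converts
pause quanta into wall-clock time on a 10 Gbps link, using the standard
definition of one quantum as 512 bit times; the quanta value shown is simply
the maximum encodable value.

# How long a PFC pause actually lasts: pause time is expressed in quanta,
# where one quantum equals 512 bit times on the link.
link_bps = 10_000_000_000          # 10 Gbps DCB link
quantum_seconds = 512 / link_bps   # 51.2 ns per quantum at 10 Gbps

pause_quanta = 65_535              # example: the maximum encodable value
pause_seconds = pause_quanta * quantum_seconds

print(f"One quantum: {quantum_seconds * 1e9:.1f} ns")
print(f"Pause of {pause_quanta} quanta: {pause_seconds * 1e3:.3f} ms")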

FCoE is not a replacement for conventional Fibre Channel but is an
extension of Fibre Channel over a different link layer transport. Enabling
an enhanced Ethernet to carry both Fibre Channel storage data as well
as other data types, for example, file data, Remote Direct Memory
Access (RDMA), LAN traffic, and VoIP, allows customers to simplify
server connectivity and still retain the performance and reliability
required for storage transactions. Instead of provisioning a server with
dual-redundant Ethernet and Fibre Channel ports (a total of four ports),
servers can be configured with two DCB-enabled 10 Gigabit Ethernet
(GbE) ports. For blade server installations, in particular, this reduction in
the number of interfaces greatly simplifies deployment and ongoing
management of the cable plant.
The FCoE initiative has been developed in the ANSI T11 Technical Com-
mittee, which deals with FC-specific issues and is included in a new
Fibre Channel Backbone Generation 5 (FC-BB-5) specification. Because
FCoE takes advantage of further enhancements to Ethernet, close col-
laboration has been required between ANSI T11 and the Institute of
Electrical and Electronics Engineers (IEEE), which governs Ethernet and
the new DCB standards.
Storage access is provided by an FCoE-capable blade in a director chas-
sis (end of row) or by a dedicated FCoE switch (top of rack), as shown in
Figure 17.

Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre
Channel ports and provides protocol conversion to the data center
SAN.


In this example, the client, peer-to-peer, and block storage traffic share a
common 10 Gbps network interface. The FCoE switch acts as a Fibre
Channel Forwarder (FCF) and converts FCoE frames into conventional
Fibre Channel frames for redirection to the fabric. Peer-to-peer or clus-
tering traffic between servers in the same rack is simply switched at
Layer 2 or 3, and client traffic is redirected via the LAN.
Like many new technologies, FCoE is often overhyped as a cure-all for
pervasive IT ills. The benefit of streamlining server connectivity, however,
should be balanced against the cost of deployment and the availability
of value-added features that simplify management and administration.
As an original contributor to the FCoE specification, Brocade has
designed FCoE products that integrate with existing infrastructures so
that the advantages of FCoE can be realized without adversely impact-
ing other operations. Brocade offers the 1010 (single port) and 1020
(dual port) CNAs, shown in Figure 18, at 10 Gbps DCB per port. From
the host standpoint, the FCoE functionality appears as a conventional
Fibre Channel HBA.

Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000
Switch facilitate a compact, high-performance FCoE deployment.

The Brocade 8000 Switch provides top-of-rack connectivity for servers
with 24 ports of 10 Gbps DCB and 8 ports of 8 Gbps Fibre Channel.
Fibre Channel ports support trunking for a total of 64 Gbps bandwidth,
while the 10 Gbps DCB ports support standard Link Aggregation Control
Protocol (LACP). Fibre Channel connectivity can be directly to storage
end-devices or to existing fabrics, enabling greater flexibility for allocat-
ing storage assets to hosts.

Chapter 4: Into the Pool
Transcending physical asset management with storage virtualization

Server virtualization achieves greater asset utilization by supporting
multiple instances of discrete operating systems and applications on a
single hardware platform. Storage virtualization, by contrast, provides
greater asset utilization by treating multiple physical platforms as a
single virtual asset or pool. Consequently, although storage virtualiza-
tion does not provide a comparable direct benefit in terms of reduced
footprint or energy consumption in the data center, it does enable a
substantial benefit in productive use of existing storage capacity. This
in turn often reduces the need to deploy new storage arrays, and so
provides an indirect benefit in terms of continued acquisition costs,
deployment, management, and energy consumption.

Optimizing Storage Capacity Utilization in the Data Center
Storage administrators typically manage multiple storage arrays, often
from different vendors, each with its own unique characteristics.
Because servers are bound to Logical Unit Numbers (LUNs) in specific
storage arrays, high-volume applications may suffer from over-utiliza-
tion of storage capacity while low-volume applications under-utilize
their storage targets.


Figure 19. Conventional storage configurations often result in over-
and under-utilization of storage capacity across multiple storage arrays.

As shown in Figure 19, the uneven utilization of storage capacity
across multiple arrays puts some applications at risk of running out of
disk space, while neighboring arrays still have excess idle capacity.
This problem is exacerbated by server virtualization, since each physi-
cal server now supports multiple virtual machines with additional
storage LUNs and a more dynamic utilization of storage space. The
hard-coded assignment of storage capacity on specific storage arrays
to individual servers or VMs is too inflexible to meet the requirements
of the more fluid IT environments characteristic of today's data
centers.
Storage virtualization solves this problem by inserting an abstraction
layer between the server farm and the downstream physical storage
targets. This abstraction layer can be supported on the host, the stor-
age controller, within the fabric, or on dedicated virtualization
appliances.


Figure 20. Storage virtualization aggregates the total storage capacity
of multiple physical arrays into a single virtual pool.

As illustrated in Figure 20, storage virtualization breaks the physical
assignment between servers and their target LUNs. The storage
capacity of each physical storage system is now assigned to a virtual
storage pool from which virtual LUNs (for example, LUNs 1 through 6
at the top of the figure) can be created and dynamically assigned to
servers. Because the availability of storage capacity is now no longer
restricted to individual storage arrays, LUN creation and sizing is
dependent only on the total capacity of the virtual pool. This enables
more efficient utilization of the aggregate storage space and facilitates
the creation and management of dynamic volumes that can be sized
to changing application requirements.
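A minimal sketch of the pooling concept is shown below; the array names and
capacities are made-up examples, and real products track allocation at much
finer granularity.

# Minimal illustration of storage pooling: physical capacity from several
# arrays is aggregated, and virtual LUNs are carved from the pool without
# regard to array boundaries.
pool = {"Array A": 8_000, "Array B": 12_000, "Array C": 6_000}   # free GB

def create_virtual_lun(size_gb):
    """Allocate size_gb from the pool, possibly spanning several arrays."""
    extents, remaining = [], size_gb
    for array, free in pool.items():
        if remaining == 0:
            break
        take = min(free, remaining)
        if take:
            extents.append((array, take))
            pool[array] -= take
            remaining -= take
    if remaining:
        raise RuntimeError("pool exhausted")
    return extents

print(create_virtual_lun(10_000))   # e.g. [('Array A', 8000), ('Array B', 2000)]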
In addition to enabling more efficient use of storage assets, the
abstraction layer provided by storage virtualization creates a homoge-
nous view of storage. The physical arrays shown in Figure 20, for
example, could be from any vendor and have proprietary value-added
features. Once LUNs are created and assigned to the storage pool,
however, vendor-specific functionality is invisible to the servers. From
the server perspective, the virtual storage pool is one large generic
storage system. Storage virtualization thus facilitates sharing of stor-
age capacity among systems that would otherwise be incompatible
with each other.

As with all virtualization solutions, masking the underlying complexity
of physical systems does not make that complexity disappear. Instead,
the abstraction layer provided by virtualization software and hardware
logic (the virtualization engine) must assume responsibility for errors
or changes that occur at the physical layer. In the case of storage virtu-
alization specifically, management of backend complexity centers
primarily on the maintenance of the metadata mapping required to
correlate virtual storage addresses to real ones, as shown in
Figure 21.

Figure 21. The virtualization abstraction layer provides virtual targets
to real hosts and virtual hosts to real targets.

Storage virtualization proxies virtual targets (storage) and virtual initia-
tors (servers) so that real initiators and targets can connect to the
storage pool without modification using conventional SCSI commands.
The relationship between virtual and real storage LUN assignment is
maintained by the metadata map. A virtual LUN of 500 GB, for exam-
ple, may map to storage capacity spread across several physical
arrays. Loss of the metadata mapping would mean loss of access to
the real data. A storage virtualization solution must therefore guaran-
tee the integrity of the metadata mapping and provide safeguards in
the form of replication and synchronization of metadata map copies.
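The metadata map itself can be pictured as a table of extents, as in the
sketch below. The values are made-up examples; the essential point is that
every virtual address resolves through this map, which is why its loss is
equivalent to data loss.

# Conceptual metadata map: each extent of a virtual LUN records where its
# blocks really live. Real products replicate and synchronize copies of
# this map to protect it.
metadata_map = {
    # virtual LUN: list of (start_lba, length, physical array, physical LUN, offset)
    "vLUN-1": [(0, 1_000_000, "Array A", 8, 0),
               (1_000_000, 500_000, "Array B", 43, 250_000)],
}

def resolve(virtual_lun, lba):
    """Translate a virtual LBA into its physical (array, LUN, LBA) location."""
    for start, length, array, phys_lun, offset in metadata_map[virtual_lun]:
        if start <= lba < start + length:
            return array, phys_lun, offset + (lba - start)
    raise KeyError("address not mapped")

print(resolve("vLUN-1", 1_200_000))   # ('Array B', 43, 450000)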

As a data center best practice, creation of storage pools from multiple
physical storage arrays should be implemented by storage class. High-
end RAID arrays contribute to one virtual pool; lower performance
arrays should be assigned to a separate pool. Aggregating like assets
in the same pool ensures consistent performance and comparable
availability for all virtual LUNs and thus minimizes problematic incon-
sistencies among disparate systems. In addition, there are benefits in
maintaining separate classes of virtualized storage systems for appli-
cations such as lifecycle management as will be discussed in the next
section.

Building on a Storage Virtualization Foundation


Storage virtualization is an enabling technology for higher levels of
data management and data protection and facilitates centralized
administration and automation of storage operations. Vendor litera-
ture on storage virtualization is consequently often linked to snapshot
technology for data protection, replication for disaster recovery, virtual
tape backup, data migration, and information lifecycle management
(ILM). Once storage assets have been vendor-neutralized and pooled
via virtualization, it is easier to overlay advanced storage services that
are not dependent on vendor proprietary functionality. Data replication
for remote disaster recovery, for example, no longer depends on a ven-
dor-specific application and licensing but can be executed via a third-
party solution.
One of the central challenges of next-generation data center design is
to align infrastructure to application requirements. In the end, it's
really about the upper-layer business applications, their availability
and performance, and safeguarding the data they generate and pro-
cess. For data storage, aligning infrastructure to applications requires
a more flexible approach to the handling and maintenance of data
assets as the business value of the data itself changes over time. As
shown in Figure 22, information lifecycle management can leverage
virtualized storage tiers to pair the cost of virtual storage containers to
the value of the data they contain.
Provided that each virtual storage tier is composed of a similar class
of products, each tier represents different performance and availabil-
ity characteristics, burdened cost of storage, and energy consumption.


Class of storage        Tier 1    Tier 2      Tier 3    Tier 4
Burdened cost per GB    10x       4x          10x       0.5x
Value of data           High      Moderate    Low       Archive

Figure 22. Leveraging classes of storage to align data storage to the
business value of data over time.

By migrating data from one level to another as its immediate business
value declines, capacity on high-value systems is freed to accommo-
date new active transactions. Business practice or regulatory
compliance, however, can require that migrated data remain accessi-
ble within certain time frames. Tier 2 and 3 classes may not have the
performance and 99.999% availability of Tier 1 systems but still pro-
vide adequate accessibility before the data can finally be retired to
tape. In addition, if each tier is a virtual storage pool, maximum utiliza-
tion of the storage capacity of a tier can help reduce overall costs and
more readily accommodate the growth of aged data without the addi-
tion of new disk drives.
Establishing tiers of virtual storage pools by storage class also pro-
vides the foundation for automating data migration from one level to
another over time. Policy-based data migration can be triggered by a
number of criteria, including frequency of access to specific data sets,
the age of the data, flagging transactions as completed, or appending
metadata to indicate data status. Reducing or eliminating the need for
manual operator intervention can significantly reduce administrative
costs and enhance the return on investment (ROI) of a virtualization
deployment.
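A policy engine for this kind of migration can be sketched in a few lines.
The thresholds, record fields, and tier names below are assumptions for
illustration rather than any product's policy language.

# Sketch of policy-based tiering: data sets are demoted to a cheaper tier
# when they go cold, age out, or are flagged as completed transactions.
from datetime import date

POLICY = {"max_idle_days": 90, "max_age_days": 365}

def target_tier(record, today):
    idle = (today - record["last_access"]).days
    age = (today - record["created"]).days
    if age > POLICY["max_age_days"]:
        return "tier-4 (archive)"
    if idle > POLICY["max_idle_days"] or record.get("transaction_complete"):
        return "tier-2/3 (lower cost)"
    return "tier-1 (keep in place)"

dataset = {"name": "orders-2009Q4", "created": date(2009, 12, 31),
           "last_access": date(2010, 2, 1), "transaction_complete": True}
print(dataset["name"], "->", target_tier(dataset, today=date(2010, 8, 1)))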

Centralizing Storage Virtualization from the Fabric
Although storage virtualization can be implemented on host systems,
storage controllers, dedicated appliances, or within the fabric via direc-
tors or switches, there are trade-offs for each solution in terms of
performance and flexibility. Because host-based storage virtualization
is deployed per server, for example, it incurs greater overhead in terms
of administration and consumes CPU cycles on each host. Dedicated
storage virtualization appliances are typically deployed between multi-
ple hosts and their storage targets, making it difficult to scale to larger
configurations without performance and availability issues. Imple-
menting storage virtualization on the storage array controllers is a
viable alternative, providing that the vendor can accommodate hetero-
geneous systems for multi-vendor environments. Because all storage
data flows through the storage network, or fabric, however, fabric-
based storage virtualization has been a compelling solution for cen-
tralizing the virtualization function and enabling more flexibility in
scaling and deployment.
The central challenge for fabric-based virtualization is to achieve the
highest performance while maintaining the integrity of metadata map-
ping and exception handling. Fabric-based virtualization is now
codified in an ANSI/INCITS T11.5 standard, which provides APIs for
communication between virtualization software and the switching ele-
ments embedded in a switch or director blade. The Fabric Application
Interface Standard (FAIS) separates the control path to a virtualization
engine (typically external to the switch) and the data paths between
initiators and targets. As shown in Figure 23, the Control Path Proces-
sor (CPP) represents the virtualization intelligence layer and the FAIS
interface, while the Data Path Controller (DPC) ensures that the proper
connectivity is established between the servers, storage ports, and the
virtual volume created via the CPP. Exceptions are forwarded to the
CPP, freeing the DPC to continue processing valid transactions.


Figure 23. FAIS splits the control and data paths for more efficient
execution of metadata mapping between virtual storage and servers.

Because the DPC function can be executed in an ASIC at the switch
level, it is possible to achieve very high performance without impacting
upper-layer applications. This is a significant benefit over host-based
and appliance-based solutions. And because communication between
the virtualization engine and the switch is supported by standards-
based APIs, it is possible to run a variety of virtualization software
solutions.
The central role a switch plays in providing connectivity between serv-
ers and storage and the FAIS-enabled ability to execute metadata
mapping for virtualization also creates new opportunities for fabric-
based services such as mirroring or data migration. With high perfor-
mance and support for heterogeneous storage systems, fabric-based
services can be implemented with much greater transparency than
alternate approaches and can scale over time to larger deployments.

Brocade Fabric-based Storage Virtualization
Engineered to the ANSI/INCITS T11 FAIS specification, the Brocade
FA4-18 Application Blade provides high-performance storage virtual-
ization for the Brocade 48000 Director and Brocade DCX Backbone.
NOTE: Information for the Brocade DCX Backbone also includes the
Brocade DCX-4S Backbone unless otherwise noted.

Figure 24. The Brocade FA4-18 Application Blade provides line-speed
metadata map execution for non-disruptive storage pooling, mirroring
and data migration.

As shown in Figure 24, compatibility with both the Brocade 48000 and
Brocade DCX chassis enables the Brocade FA4-18 Application Blade to
extend the benefits of Brocade energy-efficient design and high band-
width to advanced fabric services without requiring a separate
enclosure. Interoperability with existing SAN infrastructures amplifies
this advantage, since any server connected to the SAN can be directed
to the FA4-18 blade for virtualization services. Line-speed metadata
mapping is achieved through purpose-built components instead of
relying on general-purpose processors that other vendors use.

The New Data Center 43


Chapter 4: Into the Pool

The virtualization application is provided by third-party and partner
solutions, including EMC Invista software. For Invista specifically, a
Control Path Cluster (CPC) consisting of two processor platforms
attached to the FA4-18 provides high availability and failover in the
event of link or unit failure. Initial configuration of storage pools is per-
formed on the Invista CPC and downloaded to the FA4-18 for execution.
Because the virtualization functionality is driven in the fabric and
under configuration control of the CPC, this solution requires no host
middleware or host CPU cycles for attached servers.
For new data center design or upgrade, storage virtualization is a natu-
ral complement to server virtualization. Fabric-based storage
virtualization offers the added advantage of flexibility, performance,
and transparency to both servers and storage systems as well as
enhanced control over the virtual environment.

Chapter 5: Weaving a New Data Center Fabric
Intelligent design in the storage infrastructure

In the early days of SAN adoption, storage networks tended to evolve
spontaneously in reaction to new requirements for additional ports to
accommodate new servers and storage devices. In practice, this
meant acquiring new fabric switches and joining them to an existing
fabric via E_Port connection, typically in a mesh configuration to pro-
vide alternate switch-to-switch links. As a result, data centers gradually
built very large and complex storage networks composed of 16- or 32-
port Fibre Channel switches. At a certain critical mass, these large
multi-switch fabrics became problematic and vulnerable to fabric-wide
disruptions through state change notification (SCN) broadcasts or fab-
ric reconfigurations. For large data centers in particular, the response
was to begin consolidating the fabric by deploying high-port-count
Fibre Channel directors at the core and using the 16- or 32-port
switches at the edge for device fan-out.
Consolidation of the fabric brings several concrete benefits, including
greater stability, high performance, and the ability to accommodate
growth in ports without excessive dependence on inter-switch links
(ISLs) to provide connectivity. A well-conceived core/edge SAN design
can provide optimum pathing between groups of servers and storage
ports with similar performance requirements, while simplifying man-
agement of SAN traffic. The concept of a managed unit of SAN is
predicated on the proper sizing of a fabric configuration to meet both
connectivity and manageability requirements. Keeping the SAN design
within rational boundaries, however, is now facilitated with new stan-
dards and features that bring more power and intelligence to the
fabric.


As with server and storage consolidation, fabric consolidation is also driven by the need to reduce the number of physical elements in the
data center and their associated power requirements. Each additional
switch means additional redundant power supplies, fans, heat genera-
tion, cooling load, and data center real estate. As with blade server
frames, high-port-density platforms such as the Brocade DCX Back-
bone enable more concentrated productivity in a smaller footprint and
with a lower total energy budget. The trend in new data center design
is therefore to architect the entire storage infrastructure for minimal
physical and energy impact while accommodating inevitable growth
over time. Although lower-port-count switches are still viable solutions
for departmental, small-to-medium size business (SMB) and fan-out
applications, Brocade backbones are now the cornerstone for opti-
mized data center fabric designs.

Better Fewer but Better


Storage area networks substantially differ from conventional data
communications networks in a number of ways. A typical LAN, for
example, is based on peer-to-peer communications with all end-points
(nodes) sharing equal access. The underlying assumption is that any
node can communicate with any other node at any time. A SAN, by
contrast, cannot rely on peer-to-peer connectivity since some nodes
are active (initiators/servers) and others are passive (storage targets).
Storage systems do not typically communicate with each other (with
the exception of disk-to-disk data replication or array-based virtualiza-
tion) across the SAN. Targets also do not initiate transactions, but
passively wait for an initiator to access them. Consequently, storage
networks must provide a range of unique services to facilitate discov-
ery of storage targets by servers, restrict access to only authorized
server/target pairs, zone or segregate traffic between designated
groups of servers and their targets, and provide notifications when
storage assets enter or depart the fabric. These services are not
required in conventional data communication networks. In addition,
storage traffic requires deterministic delivery, whereas LAN and WAN
protocols are typically best-effort delivery systems.
These distinctions play a central role in the proper design of data cen-
ter SANs. Unfortunately, some vendors fail to appreciate the unique
requirements of storage environments and recommend what are
essentially network-centric architectures instead of the more appropri-
ate storage-centric approach. Applying a network-centric design to
storage inevitably results in a failure to provide adequate safeguards
for storage traffic and a greater vulnerability to inefficiencies, disrup-
tion, or poor performance. Brocade's strategy is to promote storage-centric SAN designs that more readily accommodate the unique and
more demanding requirements of storage traffic and ensure stable
and highly available connectivity between servers and storage
systems.
A storage-centric fabric design is facilitated by concentrating key cor-
porate storage elements at the core, while accommodating server
access and departmental storage at the edge. As shown in Figure 25,
the SAN core can be built with high-port-density backbone platforms.
With up to 384 x 8 Gbps ports in a single chassis or up to 768 ports in
a dual-chassis configuration, the core layer can support hundreds of
storage ports and, depending on the appropriate fan-in ratio, thou-
sands of servers in a single high-performance solution. The Brocade DCX Backbone, a 14U chassis with eight vertical blade slots, is also available as the 192-port, 8U Brocade DCX-4S with four horizontal blade slots, which accepts any Brocade DCX blade. Because two or even three backbone chassis can be deployed in a single 19" rack or adjacent racks, real estate is kept to a minimum. Power consumption of less than a half watt per Gbps provides over 10x the energy efficiency of comparable enterprise-class products. Doing more with less is thus realized through compact product design and engineering power efficiency down to the port.
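To make the fan-in arithmetic concrete, the following Python sketch works through one possible sizing, assuming 48-port edge switches with 8 uplinks each and 128 core ports reserved for storage. The numbers are illustrative assumptions, not a recommended design.

```python
# Illustrative core/edge sizing arithmetic; every value here is an assumption
# chosen for the example, not a sizing recommendation.
CORE_PORTS = 768            # dual-chassis core, as described in the text
STORAGE_PORTS = 128         # assumed core ports reserved for storage arrays
EDGE_PORTS = 48             # assumed port count of each edge switch
UPLINKS_PER_EDGE = 8        # assumed ISLs from each edge switch to the core

core_ports_for_edges = CORE_PORTS - STORAGE_PORTS
edge_switches = core_ports_for_edges // UPLINKS_PER_EDGE
servers_per_edge = EDGE_PORTS - UPLINKS_PER_EDGE
fan_in = servers_per_edge / UPLINKS_PER_EDGE          # edge oversubscription

print(f"{edge_switches} edge switches, {edge_switches * servers_per_edge} "
      f"server ports at a {fan_in:.0f}:1 fan-in ratio")
# With these assumptions: 80 edge switches and 3,200 server-facing ports
# behind a single dual-chassis core.
```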

Figure 25. A storage-centric core/edge topology provides flexibility in deploying servers and storage assets while accommodating growth over time.

In this example, servers and storage assets are configured to best meet the performance and traffic requirements of specific business
applications. Mission-critical servers with high-performance require-
ments, for example, can be attached directly to the core layer to
provide the optimum path to primary storage. Departmental storage
can be deployed at the edge layer, while still enabling servers to
access centralized storage resources. With 8 Gbps port connectivity
and the ability to trunk multiple inter-switch links between the edge
and core, this design provides the flexibility to support different band-
width and performance needs for a wide range of business
applications in a single coherent architecture.
In terms of data center consolidation, a single-rack, dual-chassis Bro-
cade DCX configuration of 768 ports can replace 48 x 16-port or 24 x
32-port switches, providing a much more efficient use of fabric
address space, centralized management, and microcode version con-
trol and a dramatic decrease in maintenance overhead, energy
consumption, and cable complexity. Consequently, current data center
best practices for storage consolidation now incorporate fabric consol-
idation as a foundation for shrinking the hardware footprint and its
associated energy costs. In addition, because the Brocade DCX 8 Gbps
port blades are backward compatible with 1, 2, and 4 Gbps speeds,
existing devices can be integrated into a new consolidated design with-
out expensive upgrades.

Intelligent by Design
The new data center fabric is characterized by high port density, com-
pact footprint, low energy costs, and streamlined management, but
the most significant differentiating features compared to conventional
SANs revolve around increased intelligence for storage data transport.
New functionality that streamlines data delivery, automates data
flows, and adapts to changed network conditions both ensures stable
operation and reduces the need for manual intervention and adminis-
trative oversight. Brocade has developed a number of intelligent fabric
capabilities under the umbrella term of Adaptive Networking services
to streamline fabric operations.
Large complex SANs, for example, typically support a wide variety of
business applications, ranging from high-performance and mission-
critical to moderate-performance requirements. In addition, storage-
specific applications such as tape backup may share the same infra-
structure as production applications. If all storage traffic types were
treated with the same priority, the potential would exist for congestion and for high-value applications to be negatively impacted by the traffic load of moderate-value applications. Brocade addresses this problem via a quality of service (QoS) mechanism, which enables the storage
administrator to assign priority values to different applications.

Figure 26. Brocade QoS gives preferential treatment to high-value applications through the fabric to ensure reliable delivery.

As shown in Figure 26, applications running on conventional or virtualized servers can be assigned high, medium, or low priority delivery
through the fabric. This QoS solution guarantees that essential but
lower-priority applications such as tape backup do not overwhelm mis-
sion-critical applications such as online transaction processing (OLTP).
It also makes it much easier to deploy new applications over time or
migrate existing virtual machines since the QoS priority level of an
application moderates its consumption of available bandwidth. When
combined with the high performance and 8 Gbps port speed of Bro-
cade HBAs, switches, directors, and backbone platforms, QoS provides
an additional means to meet application requirements despite fluctua-
tions in aggregate traffic loads.
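The behavior can be pictured as priority-weighted sharing of a congested link. The sketch below is a simplified model of that idea, not Brocade's QoS implementation; the weights, application names, and the 8 Gbps link are assumptions for illustration.

```python
# A minimal sketch of priority-weighted bandwidth sharing; the weights and
# flow names are illustrative assumptions, not Brocade QoS internals.
QOS_WEIGHTS = {"high": 5, "medium": 3, "low": 2}

flows = {                    # application -> assigned QoS priority (examples)
    "oltp-db":      "high",
    "erp":          "medium",
    "tape-backup":  "low",
}

def share_bandwidth(link_gbps: float, flows: dict) -> dict:
    """Divide link bandwidth among active flows in proportion to priority weight."""
    total = sum(QOS_WEIGHTS[p] for p in flows.values())
    return {app: round(link_gbps * QOS_WEIGHTS[p] / total, 2)
            for app, p in flows.items()}

print(share_bandwidth(8.0, flows))
# {'oltp-db': 4.0, 'erp': 2.4, 'tape-backup': 1.6} -- backup cannot starve OLTP
```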
Because traffic loads vary over time and sudden spikes in workload
can occur unexpectedly, congestion on a link, particularly between the
fabric and a burdened storage port, can occur. Ideally, a flow control
mechanism would enable the fabric to slow the pace of traffic at the
source of the problem, typically a very active server generating an
atypical workload. Another Adaptive Networking service, Brocade
ingress rate limiting (IRL), proactively monitors the traffic levels on all
links and, when congestion is sensed on a specific link, identifies the initiating source. Ingress rate limiting allows the fabric switch to throt-
tle the transmission rate of a server to a speed lower than the
originally negotiated link speed.

Figure 27. Ingress rate limiting enables the fabric to alleviate potential congestion by throttling the transmission rate of the offending initiator.

In the example shown in Figure 27, the Brocade DCX monitors poten-
tial congestion on the link to a storage array and proactively reduces
the rate of transmission at the server source. If, for example, the
server HBA had originally negotiated an 8 Gbps transmission rate
when it initially logged in to the fabric, ingress rate limiting could
reduce the transmission rate to 4 Gbps or lower, depending on the vol-
ume of traffic to be reduced to alleviate congestion at the storage port.
Thus, without operator intervention, potentially disruptive congestion
events can be resolved proactively, while ensuring continuous opera-
tion of all applications.
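The following toy model captures the essential logic: when utilization at the storage port crosses a congestion threshold, the offending initiator's ingress rate is stepped down from its negotiated speed. The thresholds and rate steps are invented for the example and are not Brocade defaults.

```python
# A toy model of ingress rate limiting: when the target port is congested,
# step down the offending initiator's ingress rate from its negotiated speed.
NEGOTIATED_GBPS = 8
RATE_STEPS = [8, 4, 2, 1]          # assumed allowed ingress rate limits (Gbps)

def next_rate_limit(current_gbps: int, target_utilization: float) -> int:
    """Return the ingress rate to apply, stepping down while congestion persists."""
    if target_utilization < 0.9:           # assumed congestion threshold
        return current_gbps                 # no action needed
    lower = [r for r in RATE_STEPS if r < current_gbps]
    return lower[0] if lower else current_gbps

rate = NEGOTIATED_GBPS
for util in (0.95, 0.97, 0.80):            # observed utilization at the storage port
    rate = next_rate_limit(rate, util)
    print(f"target util {util:.0%} -> ingress limit {rate} Gbps")
# 95% -> 4 Gbps, 97% -> 2 Gbps, 80% -> stays at 2 Gbps until policy restores it
```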
Brocade's Adaptive Networking services also enable storage adminis-
trators to establish preferred paths for specific applications through
the fabric and the ability to fail over from a preferred path to an alter-
nate path if the preferred path is unavailable. This capability is
especially useful for isolating certain applications such as tape backup
or disk-to-disk replication to ensure that they always enter or exit on
the same inter-switch link to optimize the data flow and avoid over-
whelming other application streams.

Figure 28. Preferred paths are established through traffic isolation zones, which enforce separation of traffic through the fabric based on designated applications.

Figure 28 illustrates a fabric with two primary business applications (ERP and Oracle) and a tape backup segment. In this example, the
tape backup preferred path is isolated from the ERP and Oracle data-
base paths so that the high volume of traffic generated by backup
does not interfere with the production applications. Because the pre-
ferred path traffic isolation zone also accommodates failover to
alternate paths, the storage administrator does not have to intervene
manually if issues arise in a particular isolation zone.
To more easily identify which applications might require specialized
treatment with QoS, rate limiting, or traffic isolation, Brocade has pro-
vided a Top Talkers monitor for devices in the fabric. Top Talkers
automatically monitors the traffic pattern on each port to diagnose
over- or under-utilization of port bandwidth.

Figure 29. By monitoring traffic activity on each port, Top Talkers can identify which applications would most benefit from Adaptive Networking services.

Applications that generate higher volumes of traffic through the fabric are primary candidates for Adaptive Networking services, as shown in
Figure 29. This functionality is especially useful in virtual server envi-
ronments, since the deployment of new VMs or migration of VMs from
one platform to another can have unintended consequences. Top Talk-
ers can help indicate when a migration might be desirable to benefit
from higher bandwidth or preferred pathing.
In terms of aligning infrastructure to applications, Top Talkers allows
administrators to deploy fabric resources where and when they are
needed most. Configuring additional ISLs to create a higher-perfor-
mance trunk, for example, might be required for particularly active
applications, while moderate performance applications could continue
to function quite well on conventional links.
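Conceptually, a Top Talkers report is a ranking of ports by observed throughput. The sketch below illustrates the idea with made-up samples and thresholds; it is not the actual monitor.

```python
# A minimal sketch of the idea behind a Top Talkers report: rank fabric ports
# by observed throughput and flag over- and under-utilized ones. The samples
# and thresholds are invented for illustration.
observed_mbps = {               # port -> average observed throughput (MB/s)
    "port14": 780, "port20": 95, "port56": 640, "port61": 12,
}
LINK_CAPACITY_MBPS = 800        # assumed usable payload of an 8 Gbps link

ranked = sorted(observed_mbps.items(), key=lambda kv: kv[1], reverse=True)
for port, mbps in ranked:
    util = mbps / LINK_CAPACITY_MBPS
    label = "top talker" if util > 0.75 else "under-utilized" if util < 0.10 else "normal"
    print(f"{port}: {mbps:4d} MB/s ({util:5.1%}) {label}")
# Ports flagged as top talkers are the first candidates for QoS, rate limiting,
# or a dedicated traffic isolation zone.
```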

Energy Efficient Fabrics


In the previous era of readily available and relatively cheap energy,
data center design focused more on equipment placement and conve-
nient access than on the power requirements of the IT infrastructure.
Today, many data centers simply cannot obtain additional power
capacity from their utilities or are under severe budget constraints to
cover ongoing operational expense. Consequently, data center manag-
ers are scrutinizing the power requirements of every hardware
element and looking for means to reduce the total data center power
budget. As we have seen, this is a major driver for technologies such
as server virtualization and consolidation of hardware assets across
the data center, including storage and storage networking.
The energy consumption of data center storage systems and storage
networking products has been one of the key focal points of the Stor-
age Networking Industry Association (SNIA) in the form of the SNIA
Green Storage Initiative (GSI) and Green Storage Technical Working
Group (GS TWG). In January 2009, the SNIA GSI released the SNIA
Green Storage Power Measurement Specification as an initial docu-
ment to formulate standards for measuring the energy efficiency of
different classes of storage products. For storage systems, energy effi-
ciency can be defined in terms of watts per megabyte of storage
capacity. For fabric elements, energy efficiency can be defined in watts
per gigabytes/second bandwidth. Brocade played a leading role in the
formation of the SNIA GSI and participation in the GS TWG and leads
by example in pioneering the most energy-efficient storage fabric prod-
ucts in the market.
Achieving the greatest energy efficiency in fabric switches and direc-
tors requires a holistic view of product design so that all components
are optimized for low energy draw. Enterprise switches and directors,
for example, are typically provisioned with dual-redundant power sup-
plies for high availability. From an energy standpoint, it would be
preferable to operate with only a single power supply, but business
availability demands redundancy for failover. Consequently, it is critical
to design power supplies that have at least 80% efficiency in convert-
ing AC input power into DC output to service switch components.
Likewise, the cooling efficiency of fan modules and selection and
placement of discrete components for processing elements and port
cards all add to a product design optimized for high performance and
low energy consumption. Typically, for every watt of power consumed
for productive IT processing, another watt is required to cool the equipment. Dramatically lowering the energy consumption of fabric switches and directors therefore has a dual benefit in terms of reducing both
direct power costs and indirect cooling overhead.
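A back-of-the-envelope calculation shows how the savings compound. The port counts and watts-per-Gbps figures below are illustrative assumptions, with the 2x cooling factor taken from the rule of thumb above.

```python
# Back-of-the-envelope energy math for the "dual benefit" described above.
# Port counts and wattages are illustrative assumptions, not measured figures.
def fabric_power(ports: int, gbps_per_port: int, watts_per_gbps: float,
                 cooling_factor: float = 2.0) -> float:
    """Total facility watts: IT power plus cooling (~1 W cooling per 1 W IT)."""
    it_watts = ports * gbps_per_port * watts_per_gbps
    return it_watts * cooling_factor

efficient = fabric_power(ports=768, gbps_per_port=8, watts_per_gbps=0.5)
legacy    = fabric_power(ports=768, gbps_per_port=8, watts_per_gbps=2.5)
print(f"efficient design: {efficient:,.0f} W, legacy design: {legacy:,.0f} W")
print(f"facility power saved: {legacy - efficient:,.0f} W")
# 6,144 W vs 30,720 W with these assumptions -- every watt avoided at the port
# also avoids a watt of cooling.
```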
The Brocade DCX achieves an energy efficiency of less than a watt of
power per gigabit of bandwidth. That is 10x more efficient than compa-
rable directors on the market and frees up available power for other IT
equipment. To highlight this difference in product design philosophy, in
laboratory tests a fully loaded Brocade director consumed less power
(4.6 Amps) than an empty chassis from a competitor (5.1 Amps). The
difference in energy draw of two comparably configured directors
would be enough to power an entire storage array. Energy efficient
switch and director designs have a multiplier benefit as more ele-
ments are added to the SAN. Although the fabric infrastructure as a
whole is a small part of the total data center energy budget, it can be
leveraged to reduce costs and make better use of available power
resources. As shown in Figure 30, power measurements on an 8 Gbps
port at full speed show the Brocade DCX advantage.

Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port compared to the competition.

Safeguarding Storage Data


Unfortunately, SAN security has been a back-burner issue for many
storage administrators due in part to several myths about the security
of data centers in general. These myths (listed below) are addressed in
detail in Roger Bouchard's Securing Fibre Channel Fabrics (Brocade
Bookshelf) and include assumptions about data center physical secu-
rity and the difficulty of hacking into Fibre Channel networks and
protocols. Given that most breaches in storage security occur through
operator error and lost disks or tape cartridges, however, threats to
storage security are typically internal, not external, risks.

SAN Security Myths


• SAN Security Myth #1. SANs are inherently secure since they
are in a closed, physically protected environment.
• SAN Security Myth #2. The Fibre Channel protocol is not well
known by hackers and there are almost no avenues available
to attack FC fabrics.
• SAN Security Myth #3. You can't “sniff” optical fiber without
cutting it first and causing disruption.
• SAN Security Myth #4. The SAN is not connected to the Inter-
net so there is no risk from outside attackers.
• SAN Security Myth #5. Even if fiber cables could be sniffed,
there are so many protocol layers, file systems, and database
formats that the data would not be legible in any case.
• SAN Security Myth #6. Even if fiber cables could be sniffed,
the amount of data to capture is simply too large to capture
realistically and would require expensive equipment to do so.
• SAN Security Myth #7. If the switches already come with built-
in security features, why should I be concerned with imple-
menting security features in the SAN?

The centrality of the fabric in providing both host and storage connec-
tivity provides new opportunities for safeguarding storage data. As with
other intelligent fabric services, fabric-based security mechanisms can
help ensure consistent implementation of security policies and the
flexibility to apply higher levels of security where they are most
needed.


Because data on disk or tape is vulnerable to theft or loss, sensitive information is at risk unless the data itself is encrypted. Best practices
for guarding corporate and customer information consequently man-
date full encryption of data as it is written to disk or tape and a secure
means to manage the encryption keys used to encrypt and decrypt the
data. Brocade has developed a fabric-based solution for encrypting
data-at-rest that is available as a blade for the Brocade DCX Backbone
(Brocade FS8-18 Encryption Blade) or as a standalone switch (Bro-
cade Encryption Switch).

Figure 31. The Brocade Encryption Switch provides secure encryption for disk or tape.

Both the 16-port encryption blade for Brocade DCX and the 32-port
encryption switch provide 8 Gbps per port for fabric or device connec-
tivity and an aggregate 96 Gbps of hardware-based encryption
throughput and 48 Gbps of data compression bandwidth. The combi-
nation of encryption and data compression enables greater efficiency
in both storing and securing data. For encryption to disk, the IEEE
AES256-XTS encryption algorithm facilitates encryption of disk blocks
without increasing the amount of data per block. For encryption to
tape, the AES256-GCM encryption algorithm appends authenticating
metadata to each encrypted data block. Because tape devices accom-
modate variable block sizes, encryption does not impede backup
operations. From the host standpoint, both encryption processes are
transparent and due to the high performance of the Brocade encryp-
tion engine there is no impact on response time.
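The difference between the two modes can be seen directly with the third-party Python cryptography package: XTS ciphertext is exactly the size of the plaintext block, while GCM appends a 16-byte authentication tag. This sketch only illustrates the block-size behavior, not the Brocade encryption engine; the keys, tweak, and block size are arbitrary example values.

```python
# Illustrates why XTS suits fixed-size disk blocks and GCM suits tape streams:
# XTS ciphertext is the same length as the plaintext, while GCM appends a
# 16-byte authentication tag. Requires the third-party "cryptography" package;
# keys, tweak, and block size here are arbitrary example values.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

disk_block = os.urandom(512)                       # one 512-byte disk sector

# AES-256-XTS: 512-bit key (two 256-bit halves), tweak derived from sector number
xts_key = os.urandom(64)
tweak = (1234).to_bytes(16, "little")              # example logical block address
encryptor = Cipher(algorithms.AES(xts_key), modes.XTS(tweak)).encryptor()
xts_ct = encryptor.update(disk_block) + encryptor.finalize()

# AES-256-GCM: 256-bit key, per-record nonce, authentication tag appended
gcm = AESGCM(os.urandom(32))
gcm_ct = gcm.encrypt(os.urandom(12), disk_block, None)

print(f"plaintext: {len(disk_block)} bytes")
print(f"AES-XTS ciphertext: {len(xts_ct)} bytes (no expansion)")
print(f"AES-GCM ciphertext: {len(gcm_ct)} bytes (+16-byte auth tag)")
```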


As shown in Figure 31, the Brocade Encryption Switch supports both fabric attachment and end-device connectivity. Within both the encryp-
tion blade and switch, virtual targets are presented to the hosts and
virtual initiators are presented to the downstream storage array or
tape subsystem ports. Frame redirection, a Fabric OS technology, is
used to forward traffic to the encryption device for encryption on data
writes and decryption on data reads. In the case of direct device
attachment (for example, the tape device connected to the encryption
switch in Figure 31), the encrypted data is simply switched to the
appropriate port.
Because no additional middleware is required for hosts or storage
devices, this solution easily integrates into existing fabrics and can
provide a much higher level of data security with minimal reconfigura-
tion. Key management for safeguarding and authenticating encryption
keys is provided via Ethernet connection to the Brocade encryption
device.

Brocade Key Management Solutions


• NetApp KM500 Lifetime Key Management (LKM) Appliance
• EMC RSA Key Manager (RKM) Server Appliance
• HP StorageWorks Secure Key Manager (SKM)
• Thales Encryption Manager for Storage
For more about these key management solutions, visit the Brocade
Encryption Switch product page on www.brocade.com and find the
Technical Briefs section at the bottom of the page.

In addition to data encryption for disk and tape, fabric-based security includes features for protecting the integrity of fabric connectivity and
safeguarding management interfaces. Brocade switches and directors
use access control lists (ACLs) to allow access to the fabric for only
authorized switches and end devices. Based on the port or device's
WWN, Switch Connection Control (SCC) and Device Connection Control
(DCC) prevent the intentional or accidental connection of a new switch
or device that would potentially pose a security threat, as shown in
Figure 32. Once configured, the fabric is essentially locked down to
prevent unauthorized access until the administrator specifically
defines a new connection. Although this requires additional management intervention, it precludes disruptive fabric reconfigurations and security breaches that could otherwise occur through deliberate action
or operator error.
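The policy logic amounts to allow lists keyed by WWN. The sketch below models the idea behind SCC and DCC checks with invented WWNs and port names; it is not Fabric OS code.

```python
# A minimal sketch of WWN-based connection control (the idea behind SCC/DCC
# policies): only WWNs on the allow list may join the fabric or log in to a
# given switch port. The WWNs and port names are made-up examples.
ALLOWED_SWITCHES = {"10:00:00:05:1e:aa:bb:01", "10:00:00:05:1e:aa:bb:02"}
ALLOWED_DEVICES = {            # switch port -> WWNs permitted to log in there
    "slot1/port4": {"21:00:00:e0:8b:12:34:56"},
    "slot1/port5": {"21:00:00:e0:8b:65:43:21"},
}

def admit_switch(wwn: str) -> bool:
    """Switch Connection Control: reject E_Port joins from unknown switches."""
    return wwn in ALLOWED_SWITCHES

def admit_device(port: str, wwn: str) -> bool:
    """Device Connection Control: reject logins from WWNs not bound to the port."""
    return wwn in ALLOWED_DEVICES.get(port, set())

print(admit_switch("10:00:00:05:1e:aa:bb:99"))                  # False -> join rejected
print(admit_device("slot1/port4", "21:00:00:e0:8b:12:34:56"))   # True -> login accepted
```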

Figure 32. Using fabric ACLs to secure switch and device connectivity.

For securing storage data-in-flight, Brocade also provides hardware-based encryption on its 8 Gbps HBAs and the Brocade 7800 Extension
Switch and FX8-24 Extension Blade products. In high-security environ-
ments, meeting regulatory compliance standards can require
encrypting all data along the entire data path from host to the primary
storage target as well as secondary storage in disaster recovery sce-
narios. This capability is now available across the entire fabric with no
impact to fabric performance or availability.

Multi-protocol Data Center Fabrics


Data center best practices have historically prescribed the separation
of networks according to function. Creating a dedicated storage area
network, for example, ensures that storage traffic is unimpeded by the
more erratic traffic patterns typical of messaging or data communica-
tions networks. In part, this separation was facilitated by the fact that
nearly all storage networks used a unique protocol and transport, that
is, Fibre Channel, while LANs are almost universally based on Ether-
net. This situation changed somewhat with the introduction of iSCSI for
transporting SCSI block data over conventional Ethernet and TCP/IP,
although most iSCSI vendors still recommend building a dedicated IP
storage network for iSCSI hosts and storage.


Fibre Channel continues to be the protocol of choice for high-performance, highly available SANs. There are several reasons for this,
including the ready availability of diverse Fibre Channel products and
the continued evolution of the technology to higher speeds and richer
functionality over time. Still, although nearly all data centers worldwide
run their most mission-critical applications on Fibre Channel SANs,
many data centers also house hundreds or thousands of moderate-
performance standalone servers with legacy DAS. It is difficult to cost
justify installation of Fibre Channel HBAs into low-cost servers if the
cost of storage connectivity exceeds the cost of the server itself.
iSCSI has found its niche market primarily in cost-sensitive small and
medium business (SMB) environments. It offers the advantage of low-
cost per-server connectivity since iSCSI device drivers are readily avail-
able for a variety of operating systems at no cost and can be run over
conventional Ethernet or (preferably) Gigabit Ethernet interfaces. The
IP SAN switched infrastructure can be built with off-the-shelf, low-cost
Ethernet switches. And various storage system vendors offer iSCSI
interfaces for mid-range storage systems and tape backup subsys-
tems. Of course, Gigabit Ethernet does not have the performance of 4
or 8 Gbps Fibre Channel, but for mid-tier applications Gigabit Ethernet
may be sufficient and the total cost for implementing shared storage is
very reasonable, even when compared to direct-attached SCSI
storage.
Using iSCSI to transition from direct-attached to shared storage yields
most of the benefits associated with traditional SANs. Using iSCSI con-
nectivity, servers are no longer the exclusive “owner” of their own
(direct-attached) storage, but can share storage systems over the stor-
age network. If a particular server fails, alternate servers can bind to
the failed server's LUNs and continue operation. As with conventional
Fibre Channel SANs, adding storage capacity to the network is no lon-
ger disruptive and can be performed on the fly. In terms of
management overhead, the greatest benefit of converting from direct-
attached to shared storage is the ability to centralize backup opera-
tions. Instead of backing up individual standalone servers, backup can
now be performed across the IP SAN without disrupting client access.
In addition, features such as iSCSI SAN boot can simplify server
administration by centralizing management of boot images instead of
touching hundreds of individual servers.


One significant drawback of iSCSI, however, is that by using commodity Ethernet switches for the IP SAN infrastructure, none of the storage-
specific features built into Fibre Channel fabric switches are available.
Fabric login services, automatic address assignment, simple name
server (SNS) registration, device discovery, zoning, and other storage
services are simply unavailable in conventional Ethernet switches.
Consequently, although small iSCSI deployments can be configured
manually to ensure proper assignment of servers to their storage
LUNs, iSCSI is difficult to manage when scaled to larger deployments.
In addition, because Ethernet switches are indifferent to the upper-
layer IP protocols they carry, it is more difficult to diagnose storage-
related problems that might arise. iSCSI standards do include the
Internet Simple Name Server (iSNS) protocol for device authentication
and discovery, but iSNS must be supplied as a third-party add-on to
the IP SAN.
Collectively, these factors overshadow the performance difference
between Gigabit Ethernet and 8 Gbps Fibre Channel. Performance
becomes less of an issue when iSCSI is run over 10 Gigabit Ethernet,
but that typically requires a specialized iSCSI network interface card
(NIC) with TCP offload, iSCSI Extensions for RDMA (iSER), 10 GbE switches,
and 10 GbE storage ports. The cost advantage of iSCSI at 1 GbE is
therefore quickly undermined when iSCSI attempts to achieve the per-
formance levels common to Fibre Channel. Even with these additional
costs, the basic fabric and storage services embedded in Fibre Chan-
nel switches are still unavailable.
For data center applications, however, low-cost iSCSI running over
standard Gigabit Ethernet does make sense when standalone DAS
servers are integrated into existing Fibre Channel SAN infrastructures
via gateway products with iSCSI-to-Fibre Channel protocol conversion.
The Brocade FC4-16IP iSCSI Blade for the Brocade 48000 Director, for
example, can aggregate hundreds of iSCSI-based servers for connec-
tivity into an existing SAN, as shown in Figure 33. This enables
formerly standalone low-cost servers to enjoy the benefits of shared
storage while advanced storage services are supplied by the fabric
itself. Simplifying tape backup operations in itself is often sufficient
cost justification for iSCSI integration via gateways and if free iSCSI
device drivers are used, the per-server connectivity cost is negligible.


Figure 33. Integrating formerly standalone mid-tier servers into the data center fabric with an iSCSI blade in the Brocade DCX.

As discussed in Chapter 3, FCoE is another multi-protocol option for integrating new servers into existing data center fabrics. Unlike the iSCSI protocol, which uses Layer 3 IP routing and TCP for packet recov-
ery, FCoE operates at Layer 2 switching and relies on Fibre Channel
protocols for recovery. FCoE is therefore much closer to native Fibre
Channel in terms of protocol overhead and performance but does
require an additional level of frame encapsulation and decapsulation
for transport over Ethernet. Another dissimilarity to iSCSI is that FCoE
requires a specialized host adapter card, a CNA that supports FCoE
and 10 Gbps Data Center Bridging. In fact, to replicate the flow control
and deterministic performance of native Fibre Channel, Ethernet
switches between the host and target must be DCB capable. FCoE
therefore does not have the obvious cost advantage of iSCSI but does
offer a comparable means to simplify cabling by reducing the number
of server connections needed to carry both messaging and storage
traffic.
Although FCoE is being aggressively promoted by some network ven-
dors, the cost/benefit advantage has yet to be demonstrated in
practice. In current economic conditions, many customers are hesitant
to adopt new technologies that have no proven track record or viable
ROI. Although Brocade has developed both CNA adapters and FCoE
switch products for customers who are ready to deploy them, the mar-
ket will determine if simplifying server connectivity is sufficient cost justification for FCoE adoption. At the point when 10 Gbps DCB-enabled switches and CNA technology become commoditized, FCoE
will certainly become an attractive option.
Other enhanced solutions for data center fabrics include Fibre Chan-
nel over IP (FCIP) for SAN extension, Virtual Fabrics (VF), and
Integrated Routing (IR). As discussed in the next section on disaster
recovery, FCIP is used to extend Fibre Channel over conventional IP
networks for remote data replication or remote tape backup. Virtual
Fabrics protocols enable a single complex fabric to be subdivided into
separate virtual SANs in order to segregate different applications and
protect against fabric-wide disruptions. IR SAN routing protocols
enable connectivity between two or more independent SANs for
resource sharing without creating one large flat network.

Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-wide disruptions.


As shown in Figure 34, Virtual Fabrics is used to divide a single physical SAN into multiple logical SANs. Each virtual fabric behaves as a separate fabric entity (a logical fabric) with its own simple name server (SNS), registered state change notification (RSCN) handling, and Brocade domain. Logical fabrics can span multiple switches, providing greater
domain. Logical fabrics can span multiple switches, providing greater
flexibility in how servers and storage within a logical fabric can be
deployed. To isolate frame routing between the logical fabrics, VF tag-
ging headers are applied to the appropriate frames as they are issued.
The headers are then removed by the destination switch before the
frames are sent on to the appropriate initiator or target. Theoretically,
the VF tagging header would allow for 4096 logical fabrics on a single
physical SAN configuration, although in practice only a few are typically
used.
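Conceptually, VF tagging works like the sketch below: frames carry a fabric ID across shared links, and the destination switch strips the tag and delivers only within its own logical fabric. The frame fields are simplified for illustration, and the 12-bit ID space corresponds to the 4096 figure above.

```python
# A conceptual sketch of VF tagging; frame fields are simplified for illustration.
from dataclasses import dataclass
from typing import Optional

MAX_LOGICAL_FABRICS = 2 ** 12      # the 4096 figure: a 12-bit fabric ID space

@dataclass
class TaggedFrame:
    fabric_id: int                 # VF tag applied by the ingress switch
    src: str
    dst: str
    payload: bytes

def deliver(frame: TaggedFrame, local_fabric_id: int) -> Optional[bytes]:
    """Destination switch: strip the tag and deliver only within its own logical fabric."""
    if frame.fabric_id != local_fabric_id:
        return None                # frame belongs to a different logical fabric
    return frame.payload           # tag removed before delivery to the end device

frame = TaggedFrame(fabric_id=2, src="server-7", dst="array-3", payload=b"SCSI write")
assert 0 <= frame.fabric_id < MAX_LOGICAL_FABRICS
print(deliver(frame, local_fabric_id=2))   # b'SCSI write'
print(deliver(frame, local_fabric_id=3))   # None -- isolated from logical fabric 3
```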
Virtual Fabrics is a means to consolidate SAN assets while enforcing
managed units of SAN. In the example shown in Figure 34, each of the three logical fabrics could be administered by a
separate department with different storage, security, and bill-back pol-
icies. Although the total SAN configuration may be quite large, the
division into separately managed logical fabrics simplifies administra-
tion while leveraging the data center's investment in SAN technology.
Brocade Fabric OS supports Virtual Fabrics across Brocade switches,
directors, and backbone platforms.
Where Virtual Fabrics technology can be used to isolate resources on
the same physical fabric, Integrated Routing (IR) is used to share
resources between separate physical fabrics. Without IR, connecting
two or more fabrics together would create a large flat network, analo-
gous to bridging in LAN environments. Creating very large fabrics,
however, can lead to much greater complexity in management and vul-
nerability to fabric-wide disruptions.


Figure 35. IR facilitates resource sharing between physically independent SANs.

As shown in Figure 35, IR SAN routers provide both connectivity and fault isolation between separate SANs. In this example, a server on
SAN A can access a storage array on SAN B (dashed line) via the SAN
router. From the perspective of the server, the storage array is a local
resource on SAN A. The SAN router performs network address transla-
tion to proxy the appearance of the storage array and to conform to the
address space of each SAN. Because each SAN is autonomous, fabric
reconfigurations or RSCN broadcasts on one SAN will not adversely
impact the others. Brocade products such as the Brocade 7800 Exten-
sion Switch and FX8-24 Extension Blade for the Brocade DCX
Backbone provide routing capability for non-disruptive resource shar-
ing between independent SANs.
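The address translation can be pictured as a proxy table maintained by the router, as in the toy model below; the fabric names, address ranges, and port IDs are invented for the example.

```python
# A toy model of the address translation an IR SAN router performs: a remote
# target is represented by a proxy address inside the local fabric's own
# address space, so neither fabric merges with the other.
proxy_table = {}            # (local fabric, remote fabric, remote PID) -> proxy PID
next_proxy_pid = {"SAN_A": 0x6F0100}     # assumed free address range per fabric

def proxy_address(local_fabric: str, remote_fabric: str, remote_pid: int) -> int:
    """Return (allocating if needed) the local proxy ID for a remote device."""
    key = (local_fabric, remote_fabric, remote_pid)
    if key not in proxy_table:
        proxy_table[key] = next_proxy_pid[local_fabric]
        next_proxy_pid[local_fabric] += 1
    return proxy_table[key]

# Server on SAN A addresses a storage port that really lives on SAN B:
pid = proxy_address("SAN_A", "SAN_B", remote_pid=0x010400)
print(hex(pid))   # 0x6f0100 -- looks like a local SAN A resource to the server
```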

Fabric-based Disaster Recovery


Deploying new technologies to achieve greater energy efficiency, hard-
ware consolidation, and more intelligence in the data center fabric
cannot ensure data availability if the data center itself is vulnerable to
disruption or outage. Although data center facilities may be designed
to withstand seismic or catastrophic weather events, a major disrup-
tion can result in prolonged outages that put business or the viability
of the enterprise at risk. Consequently, most data centers have some
degree of disaster recovery planning that provides either instantaneous failover to an alternate site or recovery within acceptable time frames for business resumption. Fortunately, disaster recovery tech-
nology has improved significantly in recent years and now enables
companies to implement more economical disaster recovery solutions
that do not burden the data center with excessive costs or
administration.
Disaster recovery planning today is bounded by tighter budget con-
straints and conventional recovery point and recovery time (RPO/RTO)
objectives. In addition, more recent examples of region-wide disrup-
tions (for example, Northeast power blackouts and hurricanes Katrina
and Rita in the US) have raised concerns over how far away a recovery
site must be to ensure reliable failover. The distance between primary
and failover sites is also affected by the type of data protection
required. Synchronous disk-to-disk data replication, for example, is lim-
ited to metropolitan distances, typically 100 miles or less.
Synchronous data replication ensures that every transaction is safely
duplicated to a remote location, but the distance may not be sufficient
to protect against regional events. Asynchronous data replication buf-
fers multiple transactions before transmission, and so may miss the
most recent transaction if a failure occurs. It does, however, tolerate
extremely long-distance replication and is currently deployed for disas-
ter recovery installations that span transoceanic and transcontinental
distances.
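The distance limit for synchronous replication follows directly from propagation delay: each write must wait a full round trip before it is acknowledged. The sketch below uses the common approximation of about 5 microseconds per kilometer for light in fiber; the distances are examples.

```python
# A rough feel for why synchronous replication is distance-limited: each write
# must wait a full round trip to the remote array before it completes.
FIBER_LATENCY_US_PER_KM = 5.0      # common approximation for light in fiber

def sync_write_penalty_ms(distance_km: float) -> float:
    """Added latency per synchronous write: one round trip over the replication link."""
    return 2 * distance_km * FIBER_LATENCY_US_PER_KM / 1000.0

for km in (50, 160, 1000, 5000):
    print(f"{km:5d} km -> ~{sync_write_penalty_ms(km):6.1f} ms added per write")
# ~0.5-1.6 ms at metro distances is tolerable; 10-50 ms per write at continental
# distances is why long-haul replication is done asynchronously.
```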
Both synchronous and asynchronous data replication over distance
require some kind of wide area service such as metro dark fiber,
dense wavelength division multiplexing (DWDM), Synchronous Optical
Networking (SONET), or IP network—and the recurring monthly cost of
WAN links is typically the most expensive operational cost in a disaster
recovery implementation. To connect primary and secondary data cen-
ter SANs efficiently, then, requires technology to optimize use of wide
area links in order to transmit more data in less time and the flexibility
to deploy long-distance replication over the most cost-effective WAN
links appropriate for the application.


Achieving maximum utilization of metro or wide area links is facilitated by combining several technologies, including high-speed bandwidth,
port buffers, data compression, rate limiting, and specialized algo-
rithms such as SCSI write acceleration and tape pipelining. For
metropolitan distances suitable for synchronous disk-to-disk data rep-
lication, for example, native Fibre Channel extension can be
implemented up to 218 miles at 8 Gbps using Brocade 8 Gbps port
cards in the Brocade 48000 or Brocade DCX. While the distance sup-
ported is more than adequate for synchronous applications, the 8
Gbps bandwidth ensures maximum utilization of dark fiber or MAN
services. In order to avoid credit starvation at high speeds, Brocade
switch architecture allocates additional port buffers for continuous
performance. Even longer distances for native Fibre Channel transport
are possible at lower port speeds.
Commonly available IP network links are typically used for long-dis-
tance asynchronous data replication. Fibre Channel over IP enables
Fibre Channel-originated traffic to pass over conventional IP infrastruc-
tures via frame encapsulation of Fibre Channel within TCP/IP. FCIP is
now used for disaster recovery solutions that span thousands of miles
and because it uses standard IP services is more economical than
other WAN transports. Brocade has developed auxiliary technologies
to achieve even higher performance over IP networks. Data compres-
sion, for example, can provide a 5x or greater increase in link capacity
and so enable slower WAN links to carry more useful traffic. A 45
Megabits per second (Mbps) T3 WAN link typically provides about 4.5
Megabytes per second (MBps) of data throughput. By using data com-
pression, the throughput can be increased to 25 MBps. This is
equivalent to using far more expensive 155 Mbps OC3 WAN links to
achieve the same data throughput.
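The arithmetic behind that comparison is straightforward, as the sketch below shows; the rough 10:1 conversion from link megabits to usable megabytes (implied by the 45 Mbps to 4.5 MBps figure) and the 5x compression ratio are approximations, not measured results.

```python
# The arithmetic behind the T3/OC-3 comparison above; all figures are rough
# approximations, not measurements.
def usable_mbytes_per_sec(link_mbps: float, overhead_factor: float = 10.0) -> float:
    """Rough payload throughput: link bits/s divided by ~10 (8 bits/byte + protocol overhead)."""
    return link_mbps / overhead_factor

t3 = usable_mbytes_per_sec(45)            # ~4.5 MB/s without compression
t3_compressed = t3 * 5                    # ~5x compression, as claimed in the text
oc3 = usable_mbytes_per_sec(155)          # ~15.5 MB/s without compression

print(f"T3 raw: {t3:.1f} MB/s, T3 + 5x compression: {t3_compressed:.1f} MB/s, "
      f"OC-3 raw: {oc3:.1f} MB/s")
# A compressed T3 (roughly 22-25 MB/s of effective payload) can outperform a
# far more expensive uncompressed OC-3 link.
```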
Likewise, significant performance improvements over conventional IP
networks can be achieved with Brocade FastWrite acceleration and
tape pipelining algorithms. These features dramatically reduce the
protocol overhead that would otherwise occupy WAN bandwidth and
enable much faster data transfers on a given link speed. Brocade
FICON acceleration provides comparable functionality for mainframe
environments. Collectively, these features achieve the objectives of
maximizing utilization of expensive WAN services, while ensuring data
integrity for disaster recovery and remote replication applications.


Figure 36. Long-distance connectivity options using Brocade devices.

As shown in Figure 36, Brocade DCX and SAN extension products offer
a variety of ways to implement long-distance SAN connectivity for
disaster recovery and other remote implementations. For synchronous
disk-to-disk data replication within a metropolitan circumference,
native Fibre Channel at 8 Gbps or 10 Gbps can be driven directly from
Brocade DCX ports over dark fiber or DWDM. For asynchronous repli-
cation over hundreds or thousands of miles, the Brocade 7800 and
FX8-24 extension platforms convert native Fibre Channel to FCIP for
transport over conventional IP network infrastructures. These solu-
tions provide flexible options for storage architects to deploy the most
appropriate form of data protection based on specific application
needs. Many large data centers use a combination of extension tech-
nologies to provide both synchronous replication within metro
boundaries to capture every transaction and asynchronous FCIP-
based extension to more distant recovery sites as a safeguard against
regional disruptions.



Chapter 6: The New Data Center LAN
Building a cost-effective, energy-efficient, high-performance, and intelligent network

Just as data center fabrics bind application servers to storage, the data center Ethernet network brings server resources and processing
power to clients. Although the fundamental principles of data center
network design have not changed significantly, the network is under
increasing pressure to serve more complex and varied client needs.
According to the International Data Corporation (IDC), for example, the
growth of non-PC client data access is five times greater than that of
conventional PC-based users as shown by the rapid proliferation of
PDAs, smart phones, and other mobile and wireless devices. This
change applies to traditional in-house clients as well as external cus-
tomers and puts additional pressure on both corporate intranet and
Internet network access.
Bandwidth is also becoming an issue. The convergence of voice, video,
graphics, and data over a common infrastructure is a driving force
behind the shift from 1 GbE to 10 GbE in most data centers. Rich con-
tent is not simply a roadside attraction for modern business but a
necessary competitive advantage for attracting and retaining custom-
ers. Use of multi-core processors in server platforms increases the
processing power and reduces the number of requisite connections
per platform, but also requires more raw bandwidth per connection.
Server virtualization is having the same effect. If 20 virtual machines
are now sharing the same physical network port previously occupied
by one physical machine, the port speed must necessarily be
increased to accommodate the potential 20x increase in client
requests.
Server virtualization's dense compute environment is also driving port
density in the network interconnect, especially when virtualization is
installed on blade servers. Physical consolidation of network connectivity is important for both rationalizing the cable plant and in providing
flexibility to accommodate mobility of VMs as applications are
migrated from one platform to another. Where previously server net-
work access was adequately served by 1 Gbps ports, top-of-rack
access layer switches now must provide compact connectivity at 10
Gbps. This, in turn, requires more high-speed ports at the aggregation
and core layers to accommodate higher traffic volumes.
Other trends such as software as a service (SaaS) and Web-based
business applications are shifting the burden of data processing from
remote or branch clients back to the data center. To maintain accept-
able response times and ensure equitable service to multiple
concurrent clients, preprocessing of data flows helps offload server
CPU cycles and provides higher availability. Application layer (Layer 4–
7) networking is therefore gaining traction as a means to balance
workloads and offload networking protocol processing. By accelerating
application access, more transactions can be handled in less time and
with less congestion at the server front-end. Web-based applications
in particular benefit from a network-based hardware assist to ensure
reliability and availability to internal and external users.
Even with server consolidation, blade frames, and virtualization, serv-
ers collectively still account for the majority of data center power and
cooling requirements. Network infrastructure, however, still incurs a
significant power and cooling overhead and data center managers are
now evaluating power consumption as one of the key criteria in net-
work equipment selection. In addition, data center floor space is at a
premium and more compact, higher-port-density network switches can
save valuable real estate.
Another cost-cutting trend for large enterprises is the consolidation of
multiple data centers to one or just a few larger regional data centers.
Such large-scale consolidation typically involves construction of new
facilities that can leverage state-of-the-art energy efficiencies such as
solar power, air economizers, fly-wheel technology, and hot/cold aisle
floor plans (see Figure 3 on page 11). The selection of new IT equip-
ment is also an essential factor in maximizing the benefit of
consolidation, maintaining availability, and reducing ongoing opera-
tional expense. Since the new data center network infrastructure must
now support client traffic that was previously distributed over multiple
data centers, deploying a high-performance LAN with advanced appli-
cation support is crucial for a successful consolidation strategy. In
addition, the reduction of available data centers increases the need
for security throughout the network infrastructure to ensure data integ-
rity and application availability.


A Layered Architecture
With tens of thousands of installations worldwide, data center net-
works have evolved into a common infrastructure built on multiple
layers of connectivity. The three fundamental layers common to nearly
all data center networks are the access, aggregation, and core layers.
This basic architecture has proven to be the most suitable for providing
flexibility, high performance, and resiliency and can be scaled from
moderate to very large infrastructures.

Figure 37. Access, aggregation, and core layers in the data center network.

As shown in Figure 37, the conventional three-layer network architecture provides a hierarchy of connectivity that enables servers to
communicate with each other (for cluster and HPC environments) and
with external clients. Typically, higher bandwidth is provided at the
aggregation and core layers to accommodate the high volume of
access layer inputs, although high-performance applications may also require 10 Gbps links. Scalability is achieved by adding more switches at the requisite layers as the population of physical or virtual servers
and volume of traffic increases over time.
The access layer provides the direct network connection to application
and file servers. Servers are typically provisioned with two or more GbE
or 10 GbE network ports for redundant connectivity. Server platforms
vary from standalone servers to 1U rack-mount servers and blade
servers with passthrough cabling or bladed Ethernet switches. Access
layer switches typically provide basic Layer 2 (MAC-based) and Layer 3
(IP-based) switching for server connectivity and often have higher
speed 10 GbE uplink ports to consolidate connectivity to the aggrega-
tion layer.
Because servers represent the highest population of platforms in the
data center, the access layer functions as the fan-in point to join many
dedicated network connections to fewer but higher-speed shared con-
nections. Unless designed otherwise, the access layer is therefore
typically oversubscribed in a 6:1 or higher ratio of server network ports
to uplink ports. In Figure 37 on page 71, for example, the mission-criti-
cal servers could be provisioned with 10 GbE network interfaces and a
1:1 ratio for uplink. The general purpose servers, by contrast, would be
adequately supported with 1 GbE network ports and a 6:1 or higher
oversubscription ratio.
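The oversubscription ratio is simply downstream server bandwidth divided by uplink bandwidth, as the short sketch below shows; the port counts are assumptions chosen to reproduce the 6:1 and 1:1 examples.

```python
# The oversubscription arithmetic behind the 6:1 example; port counts are
# assumptions chosen to make the ratios come out as described in the text.
def oversubscription(server_ports: int, server_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of downstream server bandwidth to upstream uplink bandwidth."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# General-purpose rack: 48 x 1 GbE servers behind 8 x 1 GbE uplinks -> 6:1
print(oversubscription(48, 1, 8, 1))      # 6.0
# Mission-critical rack: 4 x 10 GbE servers behind 4 x 10 GbE uplinks -> 1:1
print(oversubscription(4, 10, 4, 10))     # 1.0
```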
Access layer switches are available in a variety of port densities and
can be deployed for optimal cabling and maintenance. Options for
switch placement range from top of rack to middle of rack, middle of
row, and end of row. As illustrated in Figure 38, top-of-rack access
layer switches are typically deployed in redundant pairs with cabling
run to each racked server. This is a common configuration for medium
and small server farms and enables each rack to be managed as a sin-
gle entity. A middle-of-rack configuration is similar but with multiple 1U
switches deployed throughout the stack to further simplify cabling.
For high-availability environments, however, larger switches with
redundant power supplies and switch modules can be positioned in
middle-of-row or end-of-row configurations. In these deployments, mid-
dle-of-row placement facilitates shorter cable runs, while end-of-row
placement requires longer cable runs to the most distant racks. In
either case, high-availability network access is enabled by the hard-
ened architecture of HA access switches.


Figure 38. Access layer switch placement is determined by availability, port density, and cable strategy. (Placements shown: top of rack (ToR), middle of row (MoR), and end of row (EoR).)

Examples of top-of-rack access solutions include Brocade FastIron Edge Series switches. Because different applications can have differ-
ent performance and availability requirements, these access switches
offer multiple connectivity options (10, 100, or 1000 Mbps and 10
Gbps) and redundant features. Within the data center, the access
layer typically supports application servers but can also be used to
support in-house client workstations. In conventional use, the data
center access layer supports servers, while clients and workstations are connected at the network edge.
In addition to scalable server connectivity, upstream links to the aggre-
gation layer can be optimized for high availability in metropolitan area
networks (MANs) through value-added features such as the Metro
Ring Protocol (MRP) and Virtual Switch Redundancy Protocol (VSRP).
As discussed in more detail later in this chapter, these features
replace conventional Spanning Tree Protocol (STP) for metro and cam-
pus environments with a much faster, sub-second recovery time for
failed links.
For modern data centers, access layer services can also include power
over Ethernet (PoE) to support voice over IP (VoIP) telecommunications
systems and wireless access points for in-house clients as well as
security monitoring. The ability to provide both data and power over
Ethernet greatly simplifies the wiring infrastructure and facilitates
resource management.


At the aggregation layer, uplinks from multiple access-layer switches are further consolidated into fewer high-availability and high-perfor-
mance switches, which provide advanced routing functions and
upstream connectivity to the core layer. Examples of aggregation-layer
switches include the Brocade BigIron RX Series (with up to 5.12 Tbps
switching capacity) with Layer 2 and Layer 3 switching and the Bro-
cade ServerIron ADX series with Layer 4–7 application switching.
Because the aggregation layer must support the traffic flows of poten-
tially thousands of downstream servers, performance and availability
are absolutely critical.
As the name implies, the network core is the nucleus of the data cen-
ter LAN and provides the top-layer switching between all devices
connected via the aggregation and access layers. In a classic three-tier
model, the core also provides connectivity to the external corporate
network, intranet, and Internet. In addition to high-performance 10
Gbps Ethernet ports, core switches can be provisioned with OC-12 or
higher WAN interfaces. Examples of network core switches include the
Brocade NetIron MLX Series switches with up to 7.68 Tbps switching
capacity. These enterprise-class switches provide high availability and
fault tolerance to ensure reliable data access.

Consolidating Network Tiers


The access/aggregation/core architecture is not a rigid blueprint for
data center networking. Although it is possible to attach servers
directly to the core or aggregation layer, there are some advantages to
maintaining distinct connectivity tiers. Layer 2 domains, for example,
can be managed with a separate access layer linked through aggrega-
tion points. In addition, advanced service options available for
aggregation-class switches can be shared by more downstream
devices connected to standard access switches. A three-tier architec-
ture also provides flexibility in selectively deploying bandwidth and
services that align with specific application requirements.
With products such as the Brocade BigIron RX Series switches, how-
ever, it is possible to collapse the functionality of a conventional multi-
tier architecture into a smaller footprint. By providing support for 768 x
1 Gbps downstream ports and 64 x 10 Gbps upstream ports, consoli-
dation of port connectivity can be achieved with an accompanying
reduction in power draw and cooling overhead compared to a standard
multi-switch design, as shown in Figure 39.


Figure 39. A Brocade BigIron RX Series switch consolidates connectivity (768 x 1 Gbps downstream ports and 64 x 10 Gbps uplink ports) in a more energy-efficient footprint.

In this example, Layer 2 domains are segregated via VLANs, and advanced aggregation-level services can be integrated directly into the
BigIron chassis. In addition, different-speed port cards can be provi-
sioned to accommodate both moderate- and high-performance
requirements, with up to 512 x 10 Gbps ports per chassis. For modern
data center networks, the advantage of centralizing connectivity and
management is complemented by reduced power consumption and
consolidation of rack space.
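To make the consolidation argument concrete, the short calculation below (an illustrative Python sketch using only the port counts cited above; real designs must also weigh local switching and traffic locality) shows the oversubscription ratio between server-facing and uplink bandwidth:

```python
# Hypothetical oversubscription check using the port counts cited above;
# actual designs must also account for local switching and traffic locality.
downstream_gbps = 768 * 1     # 768 x 1 Gbps server-facing ports
upstream_gbps = 64 * 10       # 64 x 10 Gbps uplink ports

ratio = downstream_gbps / upstream_gbps
print(f"Downstream capacity: {downstream_gbps} Gbps")
print(f"Upstream capacity:   {upstream_gbps} Gbps")
print(f"Oversubscription:    {ratio:.1f}:1")   # 1.2:1 in this example
```

A ratio on the order of 1.2:1 is rarely a bottleneck in practice, since access-layer traffic is seldom 100 percent northbound.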

Design Considerations
Although each network tier has unique functional requirements, the
entire data center LAN must provide high availability, high perfor-
mance, security for data flows, and visibility for management. Proper
product selection and interoperability between tiers is therefore essen-
tial for building a resilient data center network infrastructure that
enables maximum utilization of resources while minimizing opera-
tional expense. A properly designed network infrastructure, in turn, is a
foundation layer for building higher-level network services to automate
data transport processes such as network resource allocation and pro-
active network management.
Consolidate to Accommodate Growth
One of the advantages of a tiered data center LAN infrastructure is
that it can be expanded to accommodate growth of servers and clients
by adding more switches at the appropriate layers. Unfortunately, this
frequently results in the spontaneous acquisition of more and more
equipment over time as network managers react to increasing
demand. At some point the sheer number of network devices makes
the network difficult to manage and troubleshoot, increases the com-
plexity of the cable plant, and invariably introduces congestion points
that degrade network performance.

Network consolidation via larger, higher-port-density switches can help
resolve space and cooling issues in the data center, and it can also
facilitate planning for growth. Brocade BigIron RX Series switches, for
example, are designed to scale from moderate to high port-count
requirements in a single chassis for both access and aggregation layer
deployment (greater than 1500 x 1 Gbps or 512 x 10 Gbps ports).
Increased port density alone, however, is not sufficient to accommo-
date growth if increasing the port count results in degraded
performance on each port. Consequently, BigIron RX Series switches
are engineered to support over 5 Tbps aggregate bandwidth to ensure
that even fully loaded configurations deliver wire-speed throughput.
From a management standpoint, network consolidation significantly
reduces the number of elements to configure and monitor and stream-
lines microcode upgrades. A large multi-slot chassis that replaces 10
discrete switches, for example, simplifies the network management
map and makes it much easier to identify traffic flows through the net-
work infrastructure.
Network Resiliency
Early proprietary data communications networks based on SNA and
3270 protocols were predicated on high availability for remote user
access to centralized mainframe applications. IP networking, by con-
trast, was originally a best-effort delivery mechanism designed to
function in potentially congested or lossy infrastructures (for example,
disruption due to a nuclear exchange). Now that IP networking is the
mainstream mechanism for virtually all business transactions world-
wide, high availability is absolutely essential for day-to-day operations
and best-effort delivery is no longer acceptable.
Network resiliency has two major components: the high availability
architecture of individual switches and the high availability design of a
multi-switch network. For the former, redundant power supplies, fan
modules, and switching blades ensure that an individual unit can with-
stand component failures. For the latter, redundant pathing through
the network using failover links and routing protocols ensures that the
loss of an individual switch or link will not result in loss of data access.
Resilient routing protocols such as Virtual Router Redundancy Protocol
(VRRP) as defined in RFC 3768 provide a standards-based mechanism
to ensure high availability access to a network subnet even if a primary
router or path fails. Multiple routers can be configured as a single vir-
tual router. If a master router fails, a backup router automatically
assumes the routing task for continued service, typically within 3 sec-
onds of failure detection. VRRP Extension (VRRPE) is an extension of
VRRP that uses Bidirectional Forwarding Detection (BFD) to shrink the
failover window to about 1 second. Because networks now carry more
latency-sensitive protocols such as voice over IP, failover must be per-
formed as quickly as possible to ensure uninterrupted access.
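The sketch below is a minimal illustration of the election logic behind VRRP-style redundancy (Python pseudocode for the concept only; it does not implement the VRRP packet formats or any Brocade feature, and the router names and priorities are invented): the highest-priority healthy router owns the virtual address, and a backup takes over once the master has been silent for roughly three advertisement intervals.

```python
from dataclasses import dataclass

ADVERTISEMENT_INTERVAL = 1.0                          # seconds, VRRP default
MASTER_DOWN_INTERVAL = 3 * ADVERTISEMENT_INTERVAL     # roughly 3 s before failover

@dataclass
class Router:
    name: str
    priority: int                 # higher priority wins the election
    last_advert_seen: float       # timestamp of the last advertisement heard

def elect_master(routers, now):
    """Return the healthy router with the highest priority for the virtual IP."""
    healthy = [r for r in routers
               if now - r.last_advert_seen < MASTER_DOWN_INTERVAL]
    return max(healthy, key=lambda r: r.priority, default=None)

# Two physical routers acting as one virtual default gateway for a subnet.
routers = [Router("rtr-a", priority=110, last_advert_seen=0.0),
           Router("rtr-b", priority=100, last_advert_seen=0.0)]

print(elect_master(routers, now=1.0).name)   # rtr-a is master while healthy
routers[0].last_advert_seen = 1.0            # rtr-a sends its last advertisement at t=1
routers[1].last_advert_seen = 4.5            # rtr-b keeps advertising
print(elect_master(routers, now=5.0).name)   # rtr-b has taken over roughly 3 s later
```

VRRPE with BFD effectively shrinks the master-down interval, which is why its failover window drops to about one second.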
Timing can also be critical for Layer 2 network segments. At Layer 2,
resiliency is enabled by the Rapid Spanning Tree Protocol (RSTP).
Spanning tree allows redundant pathing through the network while
blocking redundant links to prevent bridging loops. If a primary link fails,
conventional STP can identify the failure and enable a standby link
within 30 to 50 seconds. RSTP decreases the failover window to about
1 second. Innovative protocols such as Brocade Virtual Switch Redun-
dancy Protocol (VSRP) and Metro Ring Protocol (MRP), however, can
accelerate the failover process to a sub-second response time. In addi-
tion to enhanced resiliency, VSRP enables more efficient use of
network resources by allowing a link that is in standby or blocked
mode for one VLAN to be active for another VLAN.
Network Security
Data center network administrators must now assume that their net-
works are under a constant threat of attack from both internal and
external sources. Attack mechanisms such as denial of service (DoS)
are today well understood and typically blocked by a combination of
access control lists (ACLs) and rate limiting algorithms that prevent
packet flooding. Brocade, for example, provides enhanced hardware-
based, wire-speed ACL processing to block DoS and the more sinister
distributed DoS (DDoS) attacks. Unfortunately, hackers are constantly
creating new means to penetrate or disable corporate and government
networks, and network security requires more than the deployment of
conventional firewalls.
Continuous traffic analysis to monitor the behavior of hosts is one
means to guard against intrusion. The sFlow (RFC 3176) standard
defines a process for sampling network traffic at wire speed without
impacting network performance. Packet sampling is performed in
hardware by switches and routers in the network and samples are for-
warded to a central sFlow server or collector for analysis. Abnormal traffic
patterns or host behavior can then be identified and proactively
responded to in real time. Brocade IronView Network Manager (INM),
for example, incorporates sFlow for continuous monitoring of the net-
work in addition to ACL and rate limiting management of network
elements.
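The following sketch illustrates the sampling idea only (Python, with invented addresses; it does not implement the actual sFlow datagram format or the Brocade INM collector): the data path exports roughly one in N packet headers, and the collector scales the samples back up to estimate per-host traffic and spot outliers.

```python
import random
from collections import Counter

SAMPLING_RATE = 512   # export roughly 1 of every 512 packets

def sample_packets(packets):
    """Data-path side: forward every packet, export ~1-in-N headers to the collector."""
    exported = []
    for pkt in packets:
        if random.randrange(SAMPLING_RATE) == 0:
            exported.append(pkt)          # only the sampled headers leave the switch
    return exported

def estimate_top_talkers(samples):
    """Collector side: scale sampled counts back up to estimated packet counts."""
    counts = Counter(pkt["src"] for pkt in samples)
    return {src: n * SAMPLING_RATE for src, n in counts.most_common(3)}

# Simulated traffic: one host generating far more packets than its peers.
traffic = [{"src": "10.0.0.5"} for _ in range(200_000)] + \
          [{"src": f"10.0.0.{i}"} for i in range(10, 60) for _ in range(1_000)]

samples = sample_packets(traffic)
print(estimate_top_talkers(samples))      # 10.0.0.5 stands out immediately
```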

Other security considerations include IP address spoofing and network
segmentation. Unicast Reverse Path Forwarding (uRPF) as defined in
RFC 3704 provides a means to block packets from sources that have
not been already registered in a router's routing information base (RIB)
or forwarding information base (FIB). Address spoofing is typically used
to disguise the source of DoS attacks, so uRPF is a further defense
against attempts to overwhelm network routers. Another spoofing haz-
ard is Address Resolution Protocol (ARP) spoofing, which attempts to
associate an attacker's MAC address with a valid user IP address to
sniff or modify data between legitimate hosts. ARP spoofing can be
thwarted via ARP inspection or monitoring of ARP requests to ensure
that only valid queries are allowed.
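As a simplified illustration of the uRPF idea (a Python sketch of the concept, not router firmware; the prefixes and interface names are invented), a strict check accepts a packet only if the route back to its source points out the interface the packet arrived on:

```python
# Strict uRPF sketch: the FIB maps a source prefix to the expected interface.
# A production FIB would use longest-prefix matching; simple /24 lookups are
# used here only to keep the illustration short.
from ipaddress import ip_address, ip_network

FIB = {
    ip_network("10.1.0.0/24"): "eth0",
    ip_network("10.2.0.0/24"): "eth1",
}

def urpf_accept(src_ip, arrival_interface):
    """Accept the packet only if the reverse path matches the arrival interface."""
    src = ip_address(src_ip)
    for prefix, interface in FIB.items():
        if src in prefix:
            return interface == arrival_interface
    return False   # no route back to the source: likely spoofed, drop it

print(urpf_accept("10.1.0.7", "eth0"))   # True: legitimate source
print(urpf_accept("10.1.0.7", "eth1"))   # False: spoofed or misrouted
print(urpf_accept("192.0.2.1", "eth0"))  # False: unknown source prefix
```

ARP inspection follows the same pattern: replies are checked against bindings learned from a trusted source (DHCP snooping, for example) before they are allowed to update a MAC-to-IP mapping.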
For very large data center networks, risks to the network as a whole
can be reduced by segmenting the network through use of Virtual
Routing and Forwarding (VRF). VRF is implemented by enabling a
router with multiple independent routing table instances, which
essentially turns a single router into multiple virtual routers. A single
physical network can thus be subdivided into multiple virtual networks
with traffic isolation between designated departments or applications.
Brocade switches and routers provide an entire suite of security proto-
cols and services to protect the data center network and maintain
stable operation and management.
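Conceptually, VRF amounts to keying every route lookup by a VRF name so that each virtual router sees only its own table. The toy sketch below (illustrative Python, not any vendor's implementation; the VRF names and prefixes are invented) makes that structure explicit, including the fact that overlapping address space is harmless across VRFs:

```python
from ipaddress import ip_address, ip_network

# One physical router, two independent routing table instances (VRFs).
vrf_tables = {
    "finance":     {ip_network("10.10.0.0/16"): "eth0.100"},
    "engineering": {ip_network("10.10.0.0/16"): "eth0.200"},  # overlapping space is fine
}

def lookup(vrf, destination):
    """Resolve a destination only within the routing table of the given VRF."""
    dest = ip_address(destination)
    for prefix, next_hop_if in vrf_tables[vrf].items():
        if dest in prefix:
            return next_hop_if
    return None   # no route in this VRF, even if another VRF could reach it

print(lookup("finance", "10.10.1.5"))      # eth0.100
print(lookup("engineering", "10.10.1.5"))  # eth0.200: same address, isolated path
```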
Power, Space and Cooling Efficiency
According to the IT consultancy The Server and StorageIO Group, network
infrastructure contributes only 10% to 15% of IT equipment
power consumption in the data center, as shown in Figure 40. Com-
pared to server power consumption at 48%, 15% may not seem a
significant number, but considering that a typical data center can
spend close to a million dollars per year on power, the energy effi-
ciency of every piece of IT equipment represents a potential savings.
Closer cooperation between data center administrators and the facili-
ties management responsible for the power bill can lead to a more careful
examination of the power draw and cooling requirements of network
equipment and selection of products that provide both performance
and availability as well as lower energy consumption. Especially for
networking products, there can be a wide disparity between vendors
who have integrated energy efficiency into their product design philos-
ophy and those who have not.
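A rough worked example helps size the opportunity (the sketch below is hypothetical and simply combines the percentages cited above with an assumed $1 million annual power bill):

```python
# Hypothetical annual power-cost estimate using the percentages cited above.
annual_power_bill = 1_000_000      # dollars per year, assumed round figure
it_equipment_share = 0.50          # IT equipment at roughly 48-50% of total power
network_share_of_it = 0.15         # network at the high end of 10-15% of IT power

network_power_cost = annual_power_bill * it_equipment_share * network_share_of_it
print(f"Estimated network power cost: ${network_power_cost:,.0f} per year")   # ~$75,000

# Halving network power draw through more efficient products would then
# recover on the order of $37,500 per year for this single data center.
print(f"Savings from a 50% reduction: ${network_power_cost * 0.5:,.0f} per year")
```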

Figure 40. Network infrastructure typically contributes only 10% to
15% of total data center IT equipment power usage. (The original
charts break total data center power into cooling/HVAC, IT equipment,
and other, and break IT equipment power among servers, external
storage, tape, and the SAN/LAN/WAN network.)

Designing for data center energy efficiency includes product selection
that provides the highest productivity with the least energy footprint.
Use of high-port-density switches, for example, can reduce the total
number of power supplies, fans, and other components that would
otherwise be deployed if smaller switches were used. Combining
access and aggregation layers with a BigIron RX Series switch likewise
reduces the total number of elements required to support host con-
nectivity. Selecting larger end-of-row access-layer switches instead of
individual top-of-rack switches has a similar effect.
The increased energy efficiency of these network design options, how-
ever, still ultimately depends on how the vendor has incorporated
energy saving components into the product architecture. As with SAN
products such as the Brocade DCX Backbone, Brocade LAN solutions
are engineered for energy efficiency and consume less than a fourth
of the power of competing products in comparable classes of
equipment.
Network Virtualization
The networking complement to server virtualization is a suite of virtu-
alization protocols that enable extended sections of a shared multi-
switch network to function as independent LANs (VLANs) or for a single
switch to operate as multiple virtual switches (virtual routing and for-
warding (VRF) as discussed earlier). In addition, protocols such as
virtual IPs (VIPs) can be used to extend virtual domains between data
centers or multiple sites over distance. As with server virtualization,
the intention of network virtualization is to maximize productive use of
existing infrastructure to reinforce traffic separation, security, availabil-
ity, and performance. Application separation via VLANs at Layer 2 or
VRF at Layer 3, for example, can provide a means to better meet ser-
vice-level agreements (SLAs) and conform to regulatory compliance
requirements. Likewise, network virtualization can be used to create
logically separate security zones for policy enforcement without
deploying physically separate networks.
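A minimal sketch of that zoning idea (illustrative Python; the VLAN IDs, zone names, and applications are invented) is a policy table that pins applications to isolated segments and rejects cross-zone traffic unless a rule explicitly permits it:

```python
# Hypothetical zone policy: applications pinned to VLANs, with an explicit
# allow-list for the few cross-zone flows that regulation or SLAs permit.
APP_ZONES = {
    "payroll":   {"vlan": 110, "zone": "pci"},
    "web-front": {"vlan": 120, "zone": "dmz"},
    "analytics": {"vlan": 130, "zone": "internal"},
}
ALLOWED_CROSS_ZONE = {("dmz", "internal")}   # one-way allowance, dmz -> internal

def traffic_permitted(src_app, dst_app):
    """Same zone is always allowed; cross-zone needs an explicit policy entry."""
    src, dst = APP_ZONES[src_app], APP_ZONES[dst_app]
    if src["zone"] == dst["zone"]:
        return True
    return (src["zone"], dst["zone"]) in ALLOWED_CROSS_ZONE

print(traffic_permitted("web-front", "analytics"))  # True: explicitly allowed
print(traffic_permitted("web-front", "payroll"))    # False: PCI zone stays isolated
```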

Application Delivery Infrastructure


One of the major transformations in business applications over the
past few years has been the shift from conventional applications to
Web-based enterprise applications. Use of Internet-enabled protocols
such as HTTP (HyperText Transfer Protocol) and HTTPS (HyperText
Transfer Protocol Secure) has streamlined application development
and delivery and is now a prerequisite for next-generation cloud com-
puting solutions. At the same time, however, Web-based enterprise
applications present a number of challenges due to increased network
and server loads, increased user access, greater application load,
and security concerns. The concurrent proliferation of virtualized serv-
ers helps to alleviate the application workload issues but adds
complexity in designing resilient configurations that can provide con-
tinuous access. As discussed in “Chapter 3: Doing More with Less”
starting on page 17, implementing a successful server virtualization
plan requires careful attention to both upstream LAN network impact
as well as downstream SAN impact. Application delivery controllers
(also known as Layer 4–7 switches) provide a particularly effective
means to address the upstream network consequences of increased
traffic volumes when Web-based enterprise applications are sup-
ported on higher populations of virtualized servers.

Figure 41. Application congestion (traffic shown as a dashed line) on
a Web-based enterprise application infrastructure. (The diagram shows
clients reaching Oracle, SAP, Microsoft, and Web/Mail/DNS applications
through a network of Layer 2-3 switches.)

As illustrated in Figure 41, conventional network switching and routing
cannot prevent higher traffic volumes generated by user activity from
overwhelming applications. Without a means to balance the workload
between applications servers, response time suffers even when the
number of application server instances has been increased via server
virtualization. In addition, whether access is over the Internet or
through a company intranet, security vulnerabilities such as DoS
attacks still exist.
The Brocade ServerIron ADX application delivery controller addresses
these problems by providing hardware-assisted protocol processing
offload, server workload balancing, and firewall protection to ensure
that application access is distributed among the relevant servers and
that access is secured. As shown in Figure 42, this solution solves
multiple application-related issues simultaneously. By implementing
Web-based protocol processing offload, server CPU cycles can be used
more efficiently to process application requests. In addition, load bal-
ancing across multiple servers hosting the same application can
ensure that no individual server or virtual machine is overwhelmed
with requests. By also offloading HTTPS/SSL security protocols, the
Brocade ServerIron ADX provides the intended level of security without
further burdening the server pool. The Brocade ServerIron ADX also
provides protection against DoS attacks and so facilitates application
availability.

Figure 42. Application workload balancing, protocol processing
offload, and security via the Brocade ServerIron ADX. (The diagram
shows ServerIron ADX application delivery controllers inserted between
the Layer 2-3 switch network and the application VMs, consolidating
SSL/encryption, DNS, mail, RADIUS, firewall, IPS/IDS, cache, and
switching functions.)
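The load-balancing half of that picture reduces to a small decision loop. The sketch below (illustrative Python, not ServerIron ADX logic; the server names are invented) uses a least-connections policy, one of several common algorithms, to keep any single real server or VM from being overwhelmed:

```python
# Least-connections dispatch across a pool of real servers or VMs.
# Connection counts would normally come from the data plane; here they are
# tracked in a simple dictionary for illustration.
active_connections = {"vm-web-01": 0, "vm-web-02": 0, "vm-web-03": 0}

def pick_server():
    """Send the next request to the server with the fewest active connections."""
    return min(active_connections, key=active_connections.get)

def handle_request(request_id):
    server = pick_server()
    active_connections[server] += 1
    print(f"request {request_id} -> {server}")
    return server

def finish_request(server):
    active_connections[server] -= 1

for i in range(5):
    handle_request(i)         # requests spread evenly while the pool is idle
finish_request("vm-web-02")
handle_request(5)             # the freed-up server gets the next request
```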

The value of application delivery controllers in safeguarding and equal-
izing application workloads appreciates as more business applications
shift to Web-based applications. Cloud computing is the ultimate
extension of this trend, with the applications themselves migrating
from clients to enterprise application servers, which can be physically
located across dispersed data center locations or outsourced to ser-
vice providers. The Brocade ServerIron ADX provides global server load
balancing (GSLB) for load balancing not only between individual serv-
ers but between geographically dispersed server or VM farms. With
GSLB, clients are directed to the best site for the fastest content deliv-
ery given current workloads and optimum network response time. This
approach also integrates enterprise-wide disaster recovery for applica-
tion access without disruption to client transactions.
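GSLB applies the same kind of decision across sites rather than across servers. The sketch below (illustrative Python with made-up site metrics; a real GSLB implementation also weighs health checks, client geography, and persistence) steers clients to the site with the best combination of measured response time and available capacity:

```python
# Hypothetical per-site metrics a global server load balancer might track.
sites = {
    "dc-east": {"rtt_ms": 18, "load": 0.62, "healthy": True},
    "dc-west": {"rtt_ms": 45, "load": 0.31, "healthy": True},
    "dc-dr":   {"rtt_ms": 70, "load": 0.05, "healthy": False},  # DR site on standby
}

def best_site():
    """Score healthy sites: lower response time and lower load are both better."""
    candidates = {name: m["rtt_ms"] * (1 + m["load"])
                  for name, m in sites.items() if m["healthy"]}
    return min(candidates, key=candidates.get)

print(best_site())                     # dc-east wins on response time
sites["dc-east"]["healthy"] = False    # simulate a site outage
print(best_site())                     # clients are steered to dc-west instead
```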
As with other network solutions, the benefits of application delivery
controller technology can be maintained only if the product architec-
ture maintains or improves client performance. Few network designers
are willing to trade network response time for enhanced services. The
Brocade ServerIron ADX, for example, provides an aggregate of 70
Gbps Layer 7 throughput and over 16 million Layer 4 transactions per
second. Although the Brocade ServerIron ADX sits physically in the
path between the network and its servers, performance is actually
substantially increased compared to conventional connectivity. In
addition to performance, the Brocade ServerIron ADX maintains Bro-
cade's track record of providing the industry's most energy efficient
network products by using less than half the power of the closest com-
peting application delivery controller product.

Chapter 7: Orchestration
Automating data center processes

So far virtualization has been "the" buzzword of twenty-first-century IT
parlance and unfortunately has been devalued by overuse (in particular,
over-marketing) of the term. As in the parable of the blind men and the
elephant, virtualization appears to mean different things to different
people depending on their areas of responsibility and unique issues.
For revitalizing an existing data center or designing a new one, the
umbrella term “virtualization” covers three primary domains: virtual-
ization of compute power in the form of server virtualization,
virtualization of data storage capacity in the form of storage virtualiza-
tion, and virtualization of the data transport in the form of network
virtualization. The common denominator between these three primary
domains of virtualization is the use of new technology to streamline
and automate IT processes while maximizing productive use of the
physical IT infrastructure.
As with graphical user interfaces, virtualization hides the complexity of
underlying hardware elements and configurations. The complexity
does not go away but is now the responsibility of the virtualization layer
inserted between physical and logical domains. From a client perspec-
tive, for example, an application running on a single physical server
behaves the same as one running on a virtual machine. In this exam-
ple, the hypervisor assumes responsibility for supplying all the
expected CPU, memory, I/O, and other elements typical of a conven-
tional server. The level of actual complexity of a virtualized
environment is orders of magnitude greater than that of ordinary configurations, but
so is the level of productivity and resource utilization. The same
applies to the other domains of storage and network virtualization,
and this places tremendous importance on the proper selection of
products to extend virtualization across the enterprise.

Next-generation data center design necessarily incorporates a variety
of virtualization technologies, but to virtualize the entire data center
requires first of all a means to harmoniously orchestrate these tech-
nologies into an integral solution, as depicted in Figure 43. Because
no single vendor can provide all the myriad elements found in a mod-
ern data center, orchestration requires vendor cooperation and new
open systems standards to ensure stability and resilience. The alterna-
tive is proprietary solutions and products and the implicit vendor
monopoly that accompanies single-source technologies. The market
long ago rejected this vendor lock-in and has consistently supported
an open systems approach to technology development and
deployment.

Figure 43. Open systems-based orchestration between virtualization
domains. (The diagram shows an orchestration framework connected
through APIs to the server, storage, and network virtualization
domains.)

For large-scale virtualization environments, standards-based orches-
tration is all the more critical because virtualization in each domain is
still undergoing rapid technical development. The Distributed Manage-
ment Task Force (DMTF), for example, developed the Open Virtual
Machine Format (OVF) standard for VM deployment and mobility. The
Storage Networking Industry Association (SNIA) Storage Management
Initiative (SMI) includes open standards for deployment and manage-
ment of virtual storage environments. The American National
Standards Institute T11.5 work group developed the Fabric Application
Interface Standard (FAIS) to promote open APIs for implementing stor-
age virtualization via the fabric. IEEE and IETF have progressively
developed more sophisticated open standards for network virtualiza-
tion, from VLANs to VRF. The development of open standards and
common APIs is the prerequisite for developing comprehensive
orchestration frameworks that can automate the creation, allocation,
and management of virtualized resources across data center
domains. In addition, open standards become the guideposts for fur-
ther development of specific virtualization technologies, so that
vendors can develop products with a much higher degree of
interoperability.
Data center orchestration assumes that a single conductor—in this
case, a single management framework—provides configuration,
change, and monitoring management over an IT infrastructure that is
based on a complex of virtualization technologies. This in turn implies
that the initial deployment of an application, any changes to its envi-
ronment, and proactive monitoring of its health are no longer manual
processes but are largely automated according to a set of defined IT
policies. Enabled by open APIs in the server, storage, and network
domains, the data center infrastructure automatically allocates the
requisite CPU, memory, I/O, and resilience for a particular application;
assigns storage capacity, boot LUNs, and any required security or QoS
parameters needed for storage access; and provides optimized client
access through the data communications network such as VLANs,
application delivery, load balancing, or other network tuning to support
the application. As application workloads change over time, the appli-
cation can be migrated from one server resource to another, storage
volumes increased or decreased, QoS levels adjusted appropriately,
security status changed, and bandwidth adjusted for upstream client
access. The ideal of data center orchestration is that the configura-
tion, deployment, and management of applications on the underlying
infrastructure should require little or no human intervention and
instead rely on intelligence engineered into each domain.
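As a thought experiment only (the policy fields and function names below are invented and do not correspond to any real orchestration API), the end-to-end flow just described can be sketched as a single application policy driving provisioning calls into each domain:

```python
# Hypothetical orchestration flow: one application policy drives the server,
# storage, and network domains through imaginary domain APIs.
app_policy = {
    "name": "order-entry",
    "vcpus": 8, "memory_gb": 32,             # compute requirements
    "storage_gb": 500, "storage_qos": "gold",
    "vlan": 240, "load_balanced": True,      # client-access requirements
}

def provision_compute(policy):
    return f"VM for {policy['name']}: {policy['vcpus']} vCPU / {policy['memory_gb']} GB"

def provision_storage(policy):
    return f"{policy['storage_gb']} GB boot and data LUNs at {policy['storage_qos']} QoS"

def provision_network(policy):
    access = "behind a load-balanced VIP" if policy["load_balanced"] else "direct"
    return f"VLAN {policy['vlan']}, {access}"

def deploy(policy):
    """Each step would be an API call into the corresponding virtualization domain."""
    for step in (provision_compute, provision_storage, provision_network):
        print(step(policy))

deploy(app_policy)
```

The point is the shape rather than the code: once each domain exposes an open API, deployment and change become policy evaluation rather than manual work.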
Of course, servers, storage, and network equipment do not rack them-
selves up and plug themselves in. The physical infrastructure must be
properly sized, planned, selected, and deployed before logical automa-
tion and virtualization can be applied. With tight budgets, it may not be
possible to provision all the elements needed for full data center
orchestration, but careful selection of products today can lay the
groundwork for fuller implementation tomorrow.
With average corporate data growth at 60% per year, data center
orchestration is becoming a business necessity. Companies cannot
continue to add staff to manage increased volumes of applications
and data, and administrator productivity cannot meet growth rates
without full-scale virtualization and automation of IT processes. Serv-
ers, storage, and networking, which formerly stood as isolated
management domains, are being transformed into interrelated ser-
vices. For Brocade, network infrastructure as a service
requires richer intelligence in the network to coordinate provisioning of
bandwidth, QoS, resiliency, and security features to support server and
storage services.
Because this uber-technology is still under construction, not all neces-
sary components are currently available but substantial progress has
been made. Server virtualization, for example, is now a mature tech-
nology that is moving from secondary applications to primary ones.
Brocade is working with VMware, Microsoft, and others to coordinate
communication between the SAN and LAN infrastructure and various
virtualization hypervisors so that proactive monitoring of storage band-
width and QoS can trigger migration of VMs to more available
resources should congestion occur, as shown in Figure 44.

Figure 44. Brocade Management Pack for Microsoft System Center
Virtual Machine Manager leverages APIs between the SAN and SCVMM to
trigger VM migration. (The diagram shows VMs moving from the first
physical server to the next available server, with Brocade DCFM,
Brocade HBAs with QoS engines, Microsoft System Center Operations
Manager, and System Center VMM coordinating across the LAN and SAN.)

The SAN Call Home events displayed in the Microsoft System Center
Operations Manager interface are shown in Figure 50 on page 94.

On the storage front, Brocade supports fabric-based storage virtualiza-
tion with the Brocade FA4-18 Application Blade and Brocade's Storage
Application Services (SAS) APIs. Based on FAIS standards, the Brocade
FA4-18 supports applications such as EMC Invista to maximize effi-
cient utilization of storage assets. For client access, the Brocade ADX
application delivery controller automates load balancing of client
requests and offloads upper-layer protocol processing from the desti-
nation VMs. Other capabilities such as 10 Gigabit Ethernet and 8 Gbps
Fibre Channel connectivity, fabric-based storage encryption and virtual
routing protocols can help data center network designers allocate
enhanced bandwidth and services to accommodate both current
requirements and future growth. Collectively, these building blocks
facilitate higher degrees of data center orchestration to achieve the IT
business goal of doing far more with much less.


Chapter 8: Brocade Solutions Optimized for Server Virtualization
Enabling server consolidation and end-to-end fabric management

Brocade has engineered a number of different network components
that enable server virtualization in the data center fabric. The sections
in this chapter introduce you to these products and briefly describe
them. For the most current information, visit www.brocade.com > Prod-
ucts and Solutions. Choose a product from the drop-down list on the
left and then scroll down to view Data Sheets, FAQs, Technical Briefs,
and White Papers.
The server connectivity and convergence products described in this
chapter are:
• “Server Adapters” on page 89
• “Brocade 8000 Switch and FCOE10-24 Blade” on page 92
• “Access Gateway” on page 93
• “Brocade Management Pack” on page 94
• “Brocade ServerIron ADX” on page 95

Server Adapters
In mid-2008, Brocade released a family of Fibre Channel HBAs with
8 and 4 Gbps models. Highlights of Brocade FC HBAs include:
• Maximizes bus throughput with a Fibre Channel-to-PCIe 2.0a
Gen2 (x8) bus interface with intelligent lane negotiation
• Prioritizes traffic and minimizes network congestion with target
rate limiting, frame-based prioritization, and 32 Virtual Channels
per port with guaranteed QoS

• Enhances security with Fibre Channel-Security Protocol (FC-SP) for
device authentication and hardware-based AES-GCM; ready for in-
flight data encryption
• Supports virtualized environments with NPIV for 255 virtual ports
• Uniquely enables end-to-end (server-to-storage) management in
Brocade Data Center Fabric environments
Brocade 825/815 FC HBA
The Brocade 815 (single port) and Brocade 825 (dual ports) 8 Gbps
Fibre Channel-to-PCIe HBAs provide industry-leading server connec-
tivity through unmatched hardware capabilities and unique software
configurability. This class of HBAs is designed to help IT organizations
deploy and manage true end-to-end SAN service across next-genera-
tion data centers.

Figure 45. Brocade 825 FC 8 Gbps HBA (dual ports shown).

The Brocade 8 Gbps FC HBA also:


• Maximizes I/O transfer rates with up to 500,000 IOPS per port at
8 Gbps
• Utilizes N_Port Trunking capabilities to create a single logical
16 Gbps high-speed link

Brocade 425/415 FC HBA


The Brocade 4 Gbps FC HBA has capabilities similar to those
described for the 8 Gbps version. The Brocade 4 Gbps FC HBA also:
• Maximizes I/O transfer rates with up to 500,000 IOPS per port at
4 Gbps
• Utilizes N_Port Trunking capabilities to create a single logical
8 Gbps high-speed link

Figure 46. Brocade 415 FC 4 Gbps HBA (single port shown).

Brocade FCoE CNAs


The Brocade 1010 (single port) and Brocade 1020 (dual ports) 10
Gbps Fibre Channel over Ethernet-to-PCIe CNAs provide server I/O con-
solidation by transporting both storage and Ethernet networking traffic
across the same physical connection. Industry-leading hardware capa-
bilities, unique software configurability, and unified management all
contribute to exceptional flexibility.
The Brocade 1000 Series CNAs combine the powerful capabilities of
storage (Fibre Channel) and networking (Ethernet) devices. This
approach helps improve TCO by significantly reducing power, cooling,
and cabling costs through the use of a single adapter. It also extends
storage and networking investments, including investments made in
management and training. Utilizing hardware-based virtualization
acceleration capabilities, organizations can optimize performance in
virtual environments to increase overall ROI and improve TCO even
further.

Leveraging IEEE standards for Data Center Bridging (DCB), the Bro-
cade 1000 Series CNAs provide a highly efficient way to transport
Fibre Channel storage traffic over Ethernet links, addressing the highly
sensitive nature of storage traffic.

Figure 47. Brocade 1020 (dual ports) 10 Gbps Fibre Channel over
Ethernet-to-PCIe CNA.

Brocade 8000 Switch and FCOE10-24 Blade


The Brocade 8000 is a top-of-rack Layer 2 CEE/FCoE switch with 24 x
10 GbE ports for LAN connections and 8 x FC ports (with up to 8 Gbps
speed) for Fibre Channel SAN connections. The Brocade 8000 pro-
vides advanced Fibre Channel services, supports Ethernet and CEE
capabilities, and is managed by Brocade DCFM.
Supporting Windows and Linux environments, the Brocade 8000
Switch enables access to both LANs and SANs over a common server
connection by utilizing Converged Enhanced Ethernet (CEE) and FCoE
protocols. LAN traffic is forwarded to aggregation-layer Ethernet
switches using conventional 10 GbE connections, and storage traffic is
forwarded to Fibre Channel SANs over 8 Gbps FC connections.

Figure 48. Brocade 8000 Switch.

The Brocade FCOE10-24 Blade is a Layer 2 blade with cut-through, non-
blocking architecture designed for use with the Brocade DCX. It fea-
tures 24 x 10 Gbps CEE ports and extends CEE/FCoE capabilities to
backbone platforms enabling end-of-row CEE/FCoE deployment. By
providing first-hop connectivity for access-layer servers, the Brocade
FCOE10-24 also enables server I/O consolidation for servers with Tier
3 and some Tier 2 virtualized applications.

Figure 49. Brocade FCOE10-24 Blade.

Access Gateway
Brocade Access Gateway simplifies server and storage connectivity by
enabling direct connection of servers to any SAN fabric, enhancing
scalability by eliminating the switch domain identity and simplifying
local switch device management. Brocade blade server SAN switches
and the Brocade 300 and Brocade 5100 rack-mount switches are key
components of enterprise data centers, bringing a wide variety of scal-
ability, manageability, and cost advantages to SAN environments.
These switches can be used in Access Gateway mode, available in the
standard Brocade Fabric OS, for enhanced server connectivity to
SANs.
Access Gateway provides:
• Seamless connectivity with any SAN fabric
• Improved scalability
• Simplified management
• Automatic failover and failback for high availability
• Lower total cost of ownership

Access Gateway mode eliminates traditional heterogeneous switch-to-
switch interoperability challenges by utilizing NPIV standards to pres-
ent Fibre Channel server connections as logical devices to SAN fabrics.
Attaching through NPIV-enabled edge switches or directors, Access
Gateway seamlessly connects servers to Brocade, McDATA, Cisco, or
other SAN fabrics.

Brocade Management Pack


Brocade Management Pack for Microsoft System Center monitors the
health and performance of Brocade HBA-to-SAN links and works with
Microsoft System Center to provide intelligent recommendations for
dynamically optimizing the performance of virtualized workloads. It
provides Brocade HBA performance and health monitoring capabilities
to System Center Operations Manager (SCOM), and that information
can be used to dynamically optimize server resources in virtualized
data centers via System Center Virtual Machine Manager (SCVMM).
It enables real-time monitoring of Brocade HBA links through SCOM,
combined with proactive remediation action in the form of recom-
mended Performance and Resource Optimization (PRO) Tips handled
by SCVMM. As a result, IT organizations can improve efficiency while
reducing their overall operating costs.

Figure 50. SAN Call Home events displayed in the Microsoft System
Center Operations Manager interface.

Brocade ServerIron ADX
The Brocade ServerIron ADX Series of switches provides Layer 4–7
switching performance in an intelligent, modular application delivery
controller platform. The switches—including the ServerIron ADX 1000,
4000, and 10000 models—enable highly secure and scalable service
infrastructures to help applications run more efficiently and with
higher availability. ServerIron ADX switches use detailed application
message information beyond the traditional Layer 2 and 3 packet
headers, directing client requests to the most available servers. These
intelligent application switches transparently support any TCP- or UDP-
based application by providing specialized acceleration, content cach-
ing, firewall load balancing, network optimization, and host offload
features for Web services.
The Brocade ServerIron ADX Series also provides a reliable line of
defense by securing servers and applications against many types of
intrusion and attack without sacrificing performance.
All Brocade ServerIron ADX switches forward traffic flows based on
Layer 4–7 definitions, and deliver industry-leading performance for
higher-layer application switching functions. Superior content switch-
ing capabilities include customizable rules based on URL, HOST, and
other HTTP headers, as well as cookies, XML, and application content.
Brocade ServerIron ADX switches simplify server farm management
and application upgrades by enabling organizations to easily remove
and insert resources into the pool. The Brocade ServerIron ADX pro-
vides hardware-assisted, standards-based network monitoring for all
application traffic, improving manageability and security for network
and server resources. Extensive and customizable service health
check capabilities monitor Layer 2, 3, 4, and 7 connectivity along with
service availability and server response, enabling real-time problem
detection. To optimize application availability, these switches support
many high-availability mode options, with real-time session synchroni-
zation between two Brocade ServerIron ADX switches to protect
against session loss during outages.

Figure 51. Brocade ServerIron ADX 1000.

The New Data Center 95


Chapter 8: Brocade Solutions Optimized for Server Virtualization

96 The New Data Center


Chapter 9: Brocade SAN Solutions
Meeting the most demanding data center requirements today and tomorrow

Brocade leads the pack in networked storage from the development of
Fibre Channel to its current family of high-performance, energy-effi-
cient SAN switches, directors, and backbones and advanced fabric
capabilities such as encryption and distance extension. The sections
in this chapter introduce you to these products and briefly describe
them. For the most current information, visit www.brocade.com > Prod-
ucts and Solutions. Choose a product from the drop-down list on the
left and then scroll down to view Data Sheets, FAQs, Technical Briefs,
and White Papers.
The SAN products described in this chapter are:
• “Brocade DCX Backbones (Core)” on page 98
• “Brocade 8 Gbps SAN Switches (Edge)” on page 100
• “Brocade Encryption Switch and FS8-18 Encryption Blade” on
page 105
• “Brocade 7800 Extension Switch and FX8-24 Extension Blade” on
page 106
• “Brocade Optical Transceiver Modules” on page 107
• “Brocade Data Center Fabric Manager” on page 108

Brocade DCX Backbones (Core)


The Brocade DCX and DCX-4S Backbone offer flexible management
capabilities as well as Adaptive Networking services and fabric-based
applications to help optimize network and application performance. To
minimize risk and costly downtime, the platform leverages the proven
“five-nines” (99.999%) reliability of hundreds of thousands of Brocade
SAN deployments.

Figure 52. Brocade DCX (left) and DCX-4S (right) Backbone.

The Brocade DCX facilitates the consolidation of server-to-server,
server-to-storage, and storage-to-storage networks with highly avail-
able, lossless connectivity. In addition, it operates natively with
Brocade and Brocade M-Series components, extending SAN invest-
ments for maximum ROI. It is designed to support a broad range of
current and emerging network protocols to form a unified, high-perfor-
mance data center fabric.

Table 1. Brocade DCX Capabilities

Industry-leading capabilities for large enterprises
• Industry-leading performance: 8 Gbps per-port, full-line-rate performance
• 13 Tbps aggregate dual-chassis bandwidth (6.5 Tbps for a single chassis)
• 1 Tbps of aggregate ICL bandwidth
• More than 5x the performance of competitive offerings

High scalability
• High-density, bladed architecture
• Up to 384 8 Gbps Fibre Channel ports in a single chassis
• Up to 768 8 Gbps Fibre Channel ports in a dual-chassis configuration
• 544 Gbps aggregate bandwidth per slot plus local switching
• Fibre Channel Integrated Routing
• Specialty blades for 10 Gbps connectivity, Fibre Channel Routing over IP, and fabric-based applications

Energy efficiency
• Less than one-half watt per Gbps
• 10x more energy efficient than competitive offerings

Ultra-high availability
• Designed to support 99.999% uptime
• Passive backplane, separate and redundant control processor and core switching blades
• Hot-pluggable components, including redundant power supplies, fans, WWN cards, blades, and optics

Fabric services and applications
• Adaptive Networking services, including QoS, ingress rate limiting, traffic isolation, and Top Talkers
• Plug-in services for fabric-based storage virtualization, continuous data protection and replication, and online data migration

Multiprotocol capabilities and fabric interoperability
• Support for Fibre Channel, FICON, FCIP, and IPFC
• Designed for future 10 Gigabit Ethernet (GbE), Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE)
• Native connectivity in Brocade and Brocade M-Series fabrics, including backward and forward compatibility

Intelligent management and monitoring
• Full utilization of the Brocade Fabric OS embedded operating system
• Flexibility to utilize a CLI, Brocade DCFM, Brocade Advanced Web Tools, and Brocade Advanced Performance Monitoring
• Integration with third-party management tools

Brocade 8 Gbps SAN Switches (Edge)


Industry-leading Brocade switches are the foundation for connecting
servers and storage devices in SANs, enabling organizations to access
and share data in a high-performance, manageable, and scalable
manner. To protect existing investments, Brocade switches are fully
forward and backward compatible—providing a seamless migration
path to 8 Gbps connectivity and future technologies. This capability
enables organizations to deploy 1, 2, 4, and 8 Gbps fabrics with highly
scalable core-to-edge configurations.
Brocade standalone switch models offer flexible configurations rang-
ing from 8 to 80 ports, and can function as core or edge switches,
depending upon business requirements. With native E_Port interoper-
ability, Brocade switches connect to the vast majority of fabrics in
operation today, allowing organizations to seamlessly integrate and
scale their existing SAN infrastructures. Moreover, Brocade switches
are backed by FOS engineering, test, and support expertise to provide
reliable operation in mixed fabrics. All switches feature flexible port
configuration with Ports On Demand capabilities for straightforward
scalability. Organizations can also experience high performance
between switches by using Brocade ISL Trunking to achieve up to 64
Gbps total throughput.

Brocade switches meet high-availability requirements with Brocade
5300, 5100, and 300 switches offering redundant, hot-pluggable
components. All Brocade switches feature non-disruptive software
upgrades, automatic path rerouting, and extensive diagnostics. Lever-
aging the Brocade networking model, these switches can provide a
fabric capable of delivering high overall system availability.
Designed for flexibility, Brocade switches provide a low-cost solution
for Direct-Attached Storage (DAS)-to-SAN migration, small SAN islands,
Network-Attached Storage (NAS) back-ends, and the edge of core-to-
edge enterprise SANs. As a result, these switches are ideal as stand-
alone departmental SANs or as high-performance edge switches in
large enterprise SANs.
The Brocade 5300 and 5100 switches support full Fibre Channel rout-
ing capabilities with the addition of the Fibre Channel Integrated
Routing (IR) option. Using built-in routing capabilities, organizations
can selectively share devices while still maintaining remote fabric iso-
lation. They include a Virtual Fabrics feature that enables the
partitioning of a physical SAN into logical fabrics. This provides fabric
isolation by application, business group, customer, or traffic type with-
out sacrificing performance, scalability, security, or reliability.
Brocade 5300 Switch
As the value and volume of business data continue to rise, organiza-
tions need technology solutions that are easy to implement and
manage and that can grow and change with minimal disruption. The
Brocade 5300 Switch is designed to consolidate connectivity in rapidly
growing mission-critical environments, supporting 1, 2, 4, and 8 Gbps
technology in configurations of 48, 64, or 80 ports in a 2U chassis.
The combination of density, performance, and “pay-as-you-grow” scal-
ability increases server and storage utilization, while reducing
complexity for virtualized servers and storage.

Figure 53. Brocade 5300 Switch.

Used at the fabric core or at the edge of a tiered core-to-edge infra-
structure, the Brocade 5300 operates seamlessly with existing
Brocade switches through native E_Port connectivity into Brocade FOS
or M-EOS environments. The design makes it very efficient in power,
cooling, and rack density to help enable midsize and large server and
storage consolidation. The Brocade 5300 also includes Adaptive Net-
working capabilities to more efficiently manage resources in highly
consolidated environments. It supports Fibre Channel Integrated Rout-
ing for selective device sharing and maintains remote fabric isolation
for higher levels of scalability and fault isolation.
The Brocade 5300 utilizes ASIC technology featuring eight 8-port
groups. Within these groups, an inter-switch link trunk can supply up to
64 Gbps of balanced data throughput. In addition to reducing conges-
tion and increasing bandwidth, enhanced Brocade ISL Trunking
utilizes ISLs more efficiently to preserve the number of usable switch
ports. The density of the Brocade 5300 uniquely enables fan-out from
the core of the data center fabric with less than half the number of
switch devices to manage compared to traditional 32- or 40-port edge
switches.
Brocade 5100 Switch
The Brocade 5100 Switch is designed for rapidly growing storage
requirements in mission-critical environments combining 1, 2, 4, and
8 Gbps Fibre Channel technology in configurations of 24, 32, or 40
ports in a 1U chassis. As a result, it provides low-cost access to indus-
try-leading SAN technology and pay-as-you-grow scalability for
consolidating storage and maximizing the value of virtual server
deployments.

Figure 54. Brocade 5100 Switch.

Similar to the Brocade 5300, the Brocade 5100 features a flexible
architecture that operates seamlessly with existing Brocade switches
through native E_Port connectivity into Brocade FOS or M-EOS environ-
ments. With the highest port density of any midrange enterprise
switch, it is designed for a broad range of SAN architectures, consum-
ing less than 2.5 watts of power per port for exceptional power and
cooling efficiency. It features consolidated power and fan assemblies
to improve environmental performance. The Brocade 5100 is a cost-
effective building block for standalone networks or the edge of enter-
prise core-to-edge fabrics.
Additional performance capabilities include the following:
• 32 Virtual Channels on each ISL enhance QoS traffic prioritization
and “anti-starvation” capabilities at the port level to avoid perfor-
mance degradation.
• Exchange-based Dynamic Path Selection optimizes fabric-wide
performance and load balancing by automatically routing data to
the most efficient available path in the fabric. It augments ISL
Trunking to provide more effective load balancing in certain con-
figurations. In addition, DPS can balance traffic between the
Brocade 5100 and Brocade M-Series devices enabled with Bro-
cade Open Trunking.
Brocade 300 Switch
The Brocade 300 Switch provides small to midsize enterprises with
SAN connectivity that simplifies IT management infrastructures,
improves system performance, maximizes the value of virtual server
deployments, and reduces overall storage costs. The 8 Gbps Fibre
Channel Brocade 300 provides a simple, affordable, single-switch
solution for both new and existing SANs. It delivers up to 24 ports of 8
Gbps performance in an energy-efficient, optimized 1U form factor.

Figure 55. Brocade 300 Switch.

To simplify deployment, the Brocade 300 features the EZSwitchSetup
wizard and other ease-of-use and configuration enhancements, as
well as the optional Brocade Access Gateway mode of operation (sup-
ported with 24-port configurations only). Access Gateway mode
enables connectivity into any SAN by utilizing NPIV switch standards to
present Fibre Channel connections as logical devices to SAN fabrics.
Attaching through NPIV-enabled switches and directors, the Brocade
300 in Access Gateway mode can connect to FOS-based, M-EOS-
based, or other SAN fabrics.

Organizations can easily enable Access Gateway mode (see page 151)
via the FOS CLI, Brocade Web Tools, or Brocade Fabric Manager. Key
benefits of Access Gateway mode include:
• Improved scalability for large or rapidly growing server and virtual
server environments
• Simplified management through the reduction of domains and
management tasks
• Fabric interoperability for mixed vendor SAN configurations that
require full functionality
Brocade VA-40FC Switch
The Brocade VA-40FC is a high-performance Fibre Channel edge
switch optimized for server connectivity in large-scale enterprise SANs.
As organizations consolidate data centers, expand application ser-
vices, and begin to implement cloud initiatives, large-scale server
architectures are becoming a standard part of the data center. Mini-
mizing the network deployment steps and simplifying management
can help organizations grow seamlessly while reducing operating
costs.
The Brocade VA-40FC helps meet this challenge, providing the first
Fibre Channel edge switch optimized for server connectivity in large
core-to-edge SANs. By leveraging Brocade Access Gateway technology,
the Brocade VA-40FC enables zero-configuration deployment and
reduces management of the network edge—increasing scalability and
simplifying management for large-scale server architectures.

Figure 56. Brocade VA-40FC Switch.

The Brocade VA-40FC is in Access Gateway mode by default, which is
ideal for larger SAN fabrics that can benefit from the scalability of
fixed-port switches at the edge of the network. Some use cases for
Access Gateway mode are:
• Connectivity of many servers into large SAN fabrics
• Connectivity of servers into Brocade, Cisco, or any NPIV-enabled
SAN fabrics
• Connectivity into multiple SAN fabrics

The Brocade VA-40FC also supports Fabric Switch mode to provide
standard Fibre Channel switching and routing capabilities that are
available on all Brocade enterprise-class 8 Gbps solutions.

Brocade Encryption Switch and FS8-18 Encryption Blade
The Brocade Encryption Switch is a high-performance standalone
device for protecting data-at-rest in mission-critical environments. It
scales non-disruptively, providing from 48 up to 96 Gbps of disk
encryption processing power. Moreover, the Brocade Encryption
Switch is tightly integrated with industry-leading, enterprise-class key
management systems that can scale to support key lifecycle services
across distributed environments.
It is also FIPS 140-2 Level 3-compliant. Based on industry standards,
Brocade encryption solutions for data-at-rest provide centralized, scal-
able encryption services that seamlessly integrate into existing
Brocade Fabric OS environments.

Figure 57. Brocade Encryption Switch.

Figure 58. Brocade FS8-18 Encryption Blade.

Brocade 7800 Extension Switch and FX8-24 Extension Blade
The Brocade 7800 Extension Switch helps provide network infrastruc-
ture for remote data replication, backup, and migration. Leveraging
next-generation Fibre Channel and advanced FCIP technology, the Bro-
cade 7800 provides a flexible and extensible platform to move more
data faster and further than ever before.
It can be configured for simple point-to-point or comprehensive multi-
site SAN extension. Up to 16 x 8 Gbps Fibre Channel ports and 6 x 1
GbE ports provide unmatched Fibre Channel and FCIP bandwidth, port
density, and throughput for maximum application performance over
WAN links.

Figure 59. Brocade 7800 Extension Switch.

The Brocade 7800 is an ideal platform for building or expanding a
high-performance SAN extension infrastructure. It leverages cost-
effective IP WAN transport to extend open systems and mainframe
disk and tape storage applications over distances that would other-
wise be impossible, impractical, or too expensive with standard Fibre
Channel connections. A broad range of optional advanced extension,
FICON, and SAN fabric services are available.
• The Brocade 7800 16/6 Extension Switch is a robust platform for
data centers and multisite environments implementing disk and
tape solutions for open systems and mainframe environments.
Organizations can optimize bandwidth and throughput through 16
x 8 Gbps FC ports and 6 x 1 GbE ports.
• The Brocade 7800 4/2 Extension Switch is a cost-effective option
for smaller data centers and remote offices implementing point-to-
point disk replication for open systems. Organizations can opti-
mize bandwidth and throughput through 4 x 8 Gbps FC ports and
2 x 1 GbE ports. The Brocade 7800 4/2 can be easily upgraded to
the Brocade 7800 16/6 through software licensing.

The Brocade FX8-24 Extension Blade, designed specifically for the Bro-
cade DCX Backbone, helps provide the network infrastructure for
remote data replication, backup, and migration. Leveraging next-gen-
eration 8 Gbps Fibre Channel, 10 GbE and advanced FCIP technology,
the Brocade FX8-24 provides a flexible and extensible platform to
move more data faster and further than ever before.

Figure 60. Brocade FX8-24 Extension Blade.

Up to two Brocade FX8-24 blades can be installed in a Brocade DCX or
DCX-4S Backbone. Activating the optional 10 GbE ports doubles the
aggregate bandwidth to 20 Gbps and enables additional FCIP port
configurations (10 x 1 GbE ports and 1 x 10 GbE port, or 2 x 10 GbE
ports).

Brocade Optical Transceiver Modules


Brocade optical transceiver modules, also known as Small Form-factor
Pluggables (SFPs), plug into Brocade switches, directors, and back-
bones to provide Fibre Channel connectivity and satisfy a wide range
of speed and distance requirements. Brocade transceiver modules are
optimized for Brocade 8 Gbps platforms to maximize performance,
reduce power consumption, and help ensure the highest availability of
mission-critical applications. These transceiver modules support data
rates up to 8 Gbps Fibre Channel and link lengths up to 30 kilometers
(for 4 Gbps Fibre Channel).

Brocade Data Center Fabric Manager


Brocade Data Center Fabric Manager (DCFM) Enterprise unifies the
management of large, multifabric, or multisite storage networks
through a single pane of glass. It features enterprise-class reliability,
availability, and serviceability (RAS), as well as advanced features such
as proactive monitoring and alert notification. As a result, it helps opti-
mize storage resources, maximize performance, and enhance the
security of storage network infrastructures.
Brocade DCFM Enterprise configures and manages Brocade DCX
Backbone family, directors, switches, and extension solutions, as well
as Brocade data-at-rest encryption, FCoE/DCB, HBA, and CNA prod-
ucts. It is part of a common framework designed to manage entire
data center fabrics, from the storage ports to the HBAs, both physical
and virtual. Brocade DCFM Enterprise tightly integrates with Brocade
Fabric OS (FOS) to leverage key features such as Advanced Perfor-
mance Monitoring, Fabric Watch, and Adaptive Networking services.
As part of a common management ecosystem, Brocade DCFM Enter-
prise integrates with leading partner data center automation solutions
through frameworks such as the Storage Management Initiative-Speci-
fication (SMI-S).

Figure 61. Brocade DCFM main window showing the topology view.

Chapter 10: Brocade LAN Network Solutions
End-to-end networking from the edge to the core of today's networking infrastructures

Brocade offers a complete line of enterprise and service provider
Ethernet switches, Ethernet routers, application management, and
network-wide security products. With industry-leading features, perfor-
mance, reliability, and scalability capabilities, these products enable
network convergence and secure network infrastructures to support
advanced data, voice, and video applications. The complete Brocade
product portfolio enables end-to-end networking from the edge to the
core of today's networking infrastructures. The sections in this chapter
introduce you to these products and briefly describe them. For the
most current information, visit www.brocade.com > Products and Solu-
tions. Choose a product from the drop-down list on the left and then
scroll down to view Data Sheets, FAQs, Technical Briefs, and White
Papers.
The LAN products described in this chapter are:
• “Core and Aggregation” on page 110
• “Access” on page 112
• “Brocade IronView Network Manager” on page 115
• “Brocade Mobility” on page 116
For a more detailed discussion of the access, aggregation, and core
layers in the data center network, see “Chapter 6: The New Data Cen-
ter LAN” starting on page 69.

Core and Aggregation


The network core is the nucleus of the data center LAN. In a three-tier
model, the core also provides connectivity to the external corporate
network, intranet, and Internet. At the aggregation layer, uplinks from
multiple access-layer switches are further consolidated into fewer
high-availability and high-performance switches.
For application delivery and control, see also “Brocade ServerIron
ADX” on page 95.
Brocade NetIron MLX Series
The Brocade NetIron MLX Series of switching routers is designed to
provide the right mix of functionality and high performance while
reducing TCO in the data center. Built with the Brocade state-of-the-art,
fifth-generation, network-processor-based architecture and Terabit-
scale switch fabrics, the NetIron MLX Series offers network planners a
rich set of high-performance IPv4, IPv6, MPLS, and Multi-VRF capabili-
ties as well as advanced Layer 2 switching capabilities.
The NetIron MLX Series includes the 4-slot NetIron MLX-4, 8-slot
NetIron MLX-8, 16-slot NetIron MLX-16, and the 32-slot NetIron MLX-
32. The series offers industry-leading port capacity and density with
up to 256 x 10 GbE, 1536 x 1 GbE, 64 x OC-192, or 256 x OC-48 ports
in a single system.

Figure 62. Brocade NetIron MLX-4.


Brocade BigIron RX Series


The Brocade BigIron RX Series of switches provides the first 2.2 billion
packet-per-second devices that scale cost-effectively from the enterprise
edge to the core, with hardware-based IP routing of up to 512,000 IP
routes per line module. The high-availability design features redundant
and hot-pluggable hardware, hitless software upgrades, and graceful
BGP and OSPF restart.
The BigIron RX Series of Layer 2/3 Ethernet switches enables network
designers to deploy an Ethernet infrastructure that addresses today's
requirements with a scalable and future-ready architecture that will
support network growth and evolution for years to come. BigIron RX
Series incorporates the latest advances in switch architecture, system
resilience, QoS, and switch security in a family of modular chassis, set-
ting leading industry benchmarks for price/performance, scalability,
and TCO.

Figure 63. Brocade BigIron RX-16.


Access
The access layer provides the direct network connection to application
and file servers. Servers are typically provisioned with two or more GbE
or 10 GbE network ports for redundant connectivity. Server platforms
vary from standalone servers to 1U rack-mount servers and blade
servers with passthrough cabling or bladed Ethernet switches.
Brocade TurboIron 24X Switch
The Brocade TurboIron 24X switch is a compact, high-performance,
high-availability, and high-density 10/1 GbE dual-speed solution that
meets mission-critical data center ToR and High-Performance Cluster
Computing (HPCC) requirements. An ultra-low-latency, cut-through,
non-blocking architecture and low power consumption help provide a
cost-effective solution for server or compute-node connectivity.
Additional highlights include:
• Highly efficient power and cooling with front-to-back airflow, auto-
matic fan speed adjustment, and use of SFP+ and direct attached
SFP+ copper (Twinax)
• High availability with redundant, load-sharing, hot-swappable,
auto-sensing/switching power supplies and triple-fan assembly
• End-to-end QoS with hardware-based marking, queuing, and con-
gestion management
• Embedded per-port sFlow capabilities to support scalable hard-
ware-based traffic monitoring
• Wire-speed performance with an ultra-low-latency, cut-through,
non-blocking architecture ideal for HPC, iSCSI storage, and real-time
application environments

Figure 64. Brocade TurboIron 24X Switch.


Brocade FastIron CX Series


The Brocade FastIron CX Series of switches provides new levels of per-
formance, scalability, and flexibility required for today's enterprise
networks. With advanced capabilities, these switches deliver perfor-
mance and intelligence to the network edge in a flexible 1U form
factor, which helps reduce infrastructure and administrative costs.
Designed for wire-speed and non-blocking performance, FastIron CX
switches include 24- and 48-port models, in both Power over Ethernet
(PoE) and non-PoE versions. Utilizing built-in 16 Gbps stacking ports
and Brocade IronStack technology, organizations can stack up to eight
switches into a single logical switch with up to 384 ports. PoE models
support the emerging Power over Ethernet Plus (PoE+) standard to
deliver up to 30 watts of power to edge devices, enabling next-genera-
tion campus applications.

Figure 65. Brocade FastIron CX-624S-HPOE Switch.

Brocade NetIron CES 2000 Series


Whether equipment is located at a central office or a remote site, the avail-
ability of space often determines the feasibility of deploying new
equipment and services in a data center environment. The Brocade
NetIron Compact Ethernet Switch (CES) 2000 Series is purpose-built
to provide flexible, resilient, secure, and advanced Ethernet and MPLS-
based services in a compact form factor.
The NetIron CES 2000 Series is a family of compact 1U, multiservice
edge/aggregation switches that combine powerful capabilities with
high performance and availability. The switches provide a broad set of
advanced Layer 2, IPv4, and MPLS capabilities in the same device. As
a result, they support a diverse set of applications in data center and
large enterprise networks.


Figure 66. Brocade NetIron CES 2000 switches, 24- and 48-port con-
figurations in both Hybrid Fiber (HF) and RJ45 versions.

Brocade FastIron Edge X Series


The Brocade FastIron Edge X Series switches are high-performance
data center-class switches that provide Gigabit copper and fiber-optic
connectivity and 10 GbE uplinks. Advanced Layer 3 routing capabilities
and full IPv6 support are designed for the most demanding
environments.
FastIron Edge X Series offers a diverse range of switches that meet
Layer 2/3 edge, aggregation, or small-network backbone-connectivity
requirements with intelligent network services, including superior QoS,
predictable performance, advanced security, comprehensive manage-
ment, and integrated resiliency. It is the ideal networking platform to
deliver 10 GbE.

Figure 67. Brocade FastIron Edge X 624.


Brocade IronView Network Manager


Brocade IronView Network Manager (INM) provides a comprehensive
tool for configuring, managing, monitoring, and securing Brocade
wired and wireless network products. It is an intelligent network man-
agement solution that reduces the complexity of changing, monitoring,
and managing network-wide features such as Access Control Lists
(ACLs), rate limiting policies, VLANs, software and configuration
updates, and network alarms and events.
Using Brocade INM, organizations can automatically discover Brocade
network equipment and immediately acquire, view, and archive config-
urations for each device. In addition, they can easily configure and
deploy policies for wired and wireless products.

Figure 68. Brocade INM Dashboard (top) and Backup Configuration
Manager (bottom).


Brocade Mobility
While once considered a luxury, Wi-Fi connectivity is now an integral
part of the modern enterprise. To that end, most IT organizations are
deploying Wireless LANs (WLANs). With the introduction of the IEEE
802.11n standard, these organizations can save significant capital
and feel confident in expanding their wireless deployments to busi-
ness-critical applications. In fact, wireless technologies often match
the performance of wired networks, with simplified deployment, robust
security, and significantly lower cost. Brocade offers all the
pieces to deploy a wireless enterprise. In addition to indoor networking
equipment, Brocade also provides the tools to wirelessly connect mul-
tiple buildings across a corporate campus.
Brocade offers two models of controllers: the Brocade RFS6000 and
RFS7000 Controllers. Brocade Mobility controllers enable wireless
enterprises by providing an integrated communications platform that
delivers secure and reliable voice, video, and data applications in
Wireless LAN (WLAN) environments. Based on an innovative architec-
ture, Brocade mobility controllers provide:
• Wired and wireless networking services
• Multiple locationing technologies such as Wi-Fi and RFID
• Resiliency via 3G/4G wireless broadband backhaul
• High performance with 802.11n networks
The Brocade Mobility RFS7000 features a multicore, multithreaded
architecture designed for large-scale, high-bandwidth enterprise
deployments. It easily handles from 8000 to 96,000 mobile devices
and 256 to 3000 802.11 dual-radio a/b/g/n access points or 1024
adaptive access points (Brocade Mobility 5181 a/b/g or Brocade
Mobility 7131 a/b/g/n) per controller. The Brocade Mobility RFS7000
provides the investment protection enterprises require: innovative
clustering technology provides a 12X capacity increase, and smart
licensing enables efficient, scalable network expansion.



Chapter 11: Brocade One
Simplifying complexity in the virtualized data center

Brocade One, announced in mid-2010, is the unifying network archi-
tecture and strategy that enables customers to simplify the complexity
of virtualizing their applications. By removing network layers, simplify-
ing management, and protecting existing technology investments,
Brocade One helps customers migrate to a world where information
and services are available anywhere in the cloud.

Evolution not Revolution


In the data center, Brocade shares a common industry view that IT
infrastructures will eventually evolve to a highly virtualized, services-
on-demand state enabled through the cloud. The process, an evolu-
tionary path toward this desired end-state, is as important as reaching
the end-state. This evolution has already started inside the data center
and Brocade offers insights on the challenges faced as it moves out to
the rest of the network.
The realization of this vision requires radically simplified network archi-
tectures. This is best achieved through a deep understanding of data
center networking intricacies and the rejection of rip-and-replace
deployment scenarios with vertically integrated stacks sourced from a
single vendor. In contrast, the Brocade One architecture takes a cus-
tomer-centric approach with the following commitments:
• Unmatched simplicity. Dramatically simplifying the design, deploy-
ment, configuration, and ongoing support of IT infrastructures.
• Investment protection. Emphasizing an approach that builds on
existing customer multivendor infrastructures while improving
their total cost of ownership.


• High-availability networking. Supporting the ever-increasing
requirements for unparalleled uptime by setting the standard for
continuous operations, ease of management, and resiliency.
• Optimized applications. Optimizing current and future customer
applications.
The new Brocade converged fabric solutions include unique and pow-
erful innovations customized to support virtualized data centers,
including:

• Brocade Virtual Cluster Switching™ (VCS). A new class of Brocade-
developed technologies designed to address the unique require-
ments of virtualized data centers. Brocade VCS, available in
shipping product in late 2010, overcomes the limitations of con-
ventional Ethernet networking by applying non-stop operations,
any-to-any connectivity and the intelligence of fabric switching.

Figure 69. The pillars of Brocade VCS (detailed in the next section):
Ethernet Fabric (no STP; multi-path, deterministic; auto-healing,
non-disruptive; lossless, low latency; convergence ready), Distributed
Intelligence (self-forming; arbitrary topology; network aware of all
members, devices, and VMs; masterless control, no reconfiguration;
VAL interaction), Logical Chassis (logically flattens and collapses
network layers; scale edge and manage as if a single switch;
auto-configuration; centralized or distributed management,
end-to-end), and Dynamic Services (connectivity over distance, native
Fibre Channel, security services, Layer 4-7, and so on).

• Brocade Virtual Access Layer (VAL). A logical layer between Bro-
cade converged fabric and server virtualization hypervisors that
will help ensure a consistent interface and set of services for vir-
tual machines (VMs) connected to the network. Brocade VAL is
designed to be vendor agnostic and will support all major hypervi-
sors by utilizing industry-standard technologies, including the
emerging Virtual Ethernet Port Aggregator (VEPA) and Virtual
Ethernet Bridging (VEB) standards.


• Brocade Open Virtual Compute Blocks. Brocade is working with
leading systems and IT infrastructure vendors to build tested and
verified data center blueprints for highly scalable and cost-effec-
tive deployment of VMs on converged fabrics.
• Brocade Network Advisor. A best-in-class element management
toolset that will help provide industry-standard and customized
support for industry-leading network management, storage man-
agement, virtualization management, and data center
orchestration tools.
• Multiprotocol Support. Brocade converged fabrics are designed to
transport all types of network and storage traffic over a single wire
to reduce complexity and help ensure a simplified migration path
from current technologies.

Industry's First Converged Data Center Fabric


Brocade designed VCS as the core technology for building large, high-
performance and flat Layer 2 data center fabrics to better support the
increased adoption of server virtualization. Brocade VCS is built on
Data Center Bridging technologies to meet the increased network reli-
ability and performance requirements as customers deploy more and
more VMs. Brocade helped pioneer DCB through industry standards
bodies to ensure that the technology would be suitable for the rigors of
data center networking.
Another key technology in Brocade VCS is the emerging IETF standard
Transparent Interconnection of Lots of Links (TRILL), which will provide
a more efficient way of moving data throughout converged fabrics by
automatically determining the shortest path between switches. Both DCB
and TRILL are advances to current technologies and are critical for
building large, flat, and efficient converged fabrics capable of support-
ing both Ethernet and storage traffic. They are also examples of how
Brocade has been able to leverage decades of experience in building
data center fabrics to deliver the industry's first converged fabrics.
Brocade VCS also simplifies the management of Brocade converged
fabrics by managing multiple discrete switches as one logical entity.
These VCS features allow customers to flatten network architectures
into a single Layer 2 domain that can be managed as a single switch.
This reduces network complexity and operational costs while allowing
VCS users to scale their VM environments to global topologies.


Ethernet Fabric
In the new data center LAN, Spanning Tree Protocol is no longer neces-
sary, because the Ethernet fabric appears as a single logical switch to
connected servers, devices, and the rest of the network. Also, Multi-
Chassis Trunking (MCT) capabilities in aggregation switches enable a
logical one-to-one relationship between the access (VCS) and aggrega-
tion layers of the network. The Ethernet fabric is an advanced multi-
path network utilizing TRILL, in which all paths in the network are
active and traffic is automatically distributed across the equal-cost
paths. In this optimized environment, traffic automatically takes the
shortest path for minimum latency without manual configuration.
And, unlike switch stacking technologies, the Ethernet fabric is master-
less. This means that no single switch stores configuration information
or controls fabric operations. Events such as added, removed, or failed
links are not disruptive to the Ethernet fabric and do not require all
traffic in the fabric to stop. If a single link fails, traffic is automatically
rerouted to other available paths in less than a second. Moreover, sin-
gle component failures do not require the entire fabric topology to
reconverge, helping to ensure that no traffic is negatively impacted by
an isolated issue.
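
To make the equal-cost multipath behavior concrete, the sketch below
computes every shortest path between two switches in a small fabric
modeled as a graph, treating each link as one hop; traffic could then be
distributed across all of the returned paths. The topology, switch names,
and the plain breadth-first search are illustrative assumptions, not the
TRILL protocol itself or any Brocade implementation.

from collections import deque

def equal_cost_paths(topology, src, dst):
    """Return every shortest (equal-cost) path from src to dst.

    topology: dict mapping a switch name to the switches it links to.
    All links are treated as equal cost (one hop), as in a simple
    flat Layer 2 fabric.
    """
    best = {src: 0}                 # fewest hops found to each switch
    paths = {src: [[src]]}          # all shortest paths to each switch
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for neighbor in topology[node]:
            hops = best[node] + 1
            if neighbor not in best:          # first time reached
                best[neighbor] = hops
                paths[neighbor] = [p + [neighbor] for p in paths[node]]
                queue.append(neighbor)
            elif hops == best[neighbor]:      # another equal-cost path
                paths[neighbor].extend(p + [neighbor] for p in paths[node])
    return paths.get(dst, [])

# Hypothetical four-switch fabric with two equal-cost paths between edge switches.
fabric = {
    "edge-1":  ["spine-1", "spine-2"],
    "edge-2":  ["spine-1", "spine-2"],
    "spine-1": ["edge-1", "edge-2"],
    "spine-2": ["edge-1", "edge-2"],
}
for path in equal_cost_paths(fabric, "edge-1", "edge-2"):
    print(" -> ".join(path))   # traffic can be spread across both paths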
Distributed Intelligence
Brocade VCS also enhances server virtualization with technologies
that increase VM visibility in the network and enable seamless migra-
tion of policies along with the VM. VCS achieves this through a
distributed services architecture that makes the fabric aware of all
connected devices and shares the information across those devices.
Automatic Migration of Port Profiles (AMPP), a VCS feature, enables a
VM's network profiles—such as security or QoS levels—to follow the VM
during migrations without manual intervention. This unprecedented
level of VM visibility and automated profile management helps intelli-
gently remove the physical barriers to VM mobility that exist in current
technologies and network architectures.
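
A rough sketch of what profile-follows-the-VM behavior implies: network
policy is keyed to the VM (here, by MAC address) rather than to a
physical switch port, so the same profile is applied wherever the VM
appears after a migration. The class, field names, and MAC addresses
below are invented for illustration and are not the actual AMPP data
model.

from dataclasses import dataclass

@dataclass
class PortProfile:
    """Network policy that should follow a VM (illustrative fields)."""
    vlan: int
    qos_class: str
    acl: str

# Fabric-wide profile database, keyed by VM MAC address (hypothetical values).
profiles = {
    "00:05:1e:aa:bb:01": PortProfile(vlan=10, qos_class="gold",   acl="web-tier"),
    "00:05:1e:aa:bb:02": PortProfile(vlan=20, qos_class="silver", acl="app-tier"),
}

def vm_appeared(switch, port, vm_mac):
    """Called when a VM's MAC is learned on a switch port (e.g. after a migration)."""
    profile = profiles.get(vm_mac)
    if profile is None:
        print(f"{switch}:{port} no profile for {vm_mac}, using defaults")
        return
    # The same policy is applied automatically on whichever port the VM shows up.
    print(f"{switch}:{port} applying VLAN {profile.vlan}, "
          f"QoS {profile.qos_class}, ACL {profile.acl} for {vm_mac}")

vm_appeared("edge-1", "te0/7",  "00:05:1e:aa:bb:01")   # VM starts here
vm_appeared("edge-4", "te0/12", "00:05:1e:aa:bb:01")   # VM migrates; policy follows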
Distributed intelligence allows the Ethernet fabric to be “self-forming.”
When two VCS-enabled switches are connected, the fabric is automati-
cally created, and the switches discover the common fabric
configuration. Scaling bandwidth in the fabric is as simple as connect-
ing another link between switches or adding a new switch as required.


The Ethernet fabric does not dictate a specific topology, so it does not
restrict oversubscription ratios. As a result, network architects can cre-
ate a topology that best meets specific application requirements.
Unlike other technologies, VCS enables different end-to-end subscrip-
tion ratios to be created or fine-tuned as application demands change
over time.
Logical Chassis
All switches in an Ethernet fabric are managed as if they were a single
logical chassis. To the rest of the network, the fabric looks no different
than any other Layer 2 switch. The network sees the fabric as a single
switch, whether the fabric contains as few as 48 ports or thousands of
ports. Each physical switch in the fabric is managed as if it were a port
module in a chassis. This enables fabric scalability without manual
configuration. When a port module is added to a chassis, the module
does not need to be configured, and a switch can be added to the
Ethernet fabric just as easily. When a VCS-enabled switch is connected
to the fabric, it inherits the configuration of the fabric and the new
ports become available immediately.
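
As a loose analogy for the logical chassis behavior just described, the
following sketch models a fabric object that hands its shared
configuration to any switch that joins, so the new ports become usable
without per-switch setup. All names and configuration fields are
invented placeholders, not Brocade VCS internals.

class Fabric:
    """A toy model of a logical chassis: one shared configuration, many switches."""

    def __init__(self, shared_config):
        self.shared_config = shared_config   # e.g. VLANs, zoning, QoS policy
        self.members = {}

    def join(self, switch_name, port_count):
        # A joining switch inherits the fabric configuration; its ports are
        # immediately available, like adding a port module to a chassis.
        self.members[switch_name] = {
            "config": dict(self.shared_config),
            "ports": port_count,
        }
        print(f"{switch_name} joined; {port_count} ports online "
              f"with fabric config {self.shared_config}")

    def total_ports(self):
        return sum(m["ports"] for m in self.members.values())

fabric = Fabric({"vlans": [10, 20], "qos": "default"})   # hypothetical settings
fabric.join("tor-1", 48)
fabric.join("tor-2", 48)          # scales without manual configuration
print("Fabric managed as one switch with", fabric.total_ports(), "ports")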
The logical chassis capability significantly reduces management of
small-form-factor edge switches. Instead of managing each top-of-rack
switch (or switches in blade server chassis) individually, organizations
can manage them as one logical chassis, which further optimizes the
network in the virtualized data center and will further enable a cloud
computing model.
Dynamic Services
Brocade VCS also offers dynamic services so that you can add new
network and fabric services to Brocade converged fabrics, including
capabilities such as fabric extension over distance, application deliv-
ery, native Fibre Channel connectivity, and enhanced security services
such as firewalls and data encryption. Through VCS, the new switches
and software with these services behave as service modules within a
logical chassis. Furthermore, the new services are then made avail-
able to the entire converged fabric, dynamically evolving the fabric with
new functionality. Switches with these unique capabilities can join the
Ethernet fabric, adding a network service layer across the entire fabric.


The VCS Architecture


The VCS architecture, shown in Figure 70, flattens the network by col-
lapsing the traditional access and aggregation layers. Since the fabric
is self-aggregating, there is no need for aggregation switches to man-
age subscription ratios and provide server-to-server communication.
For maximum flexibility of server and storage connectivity, multiple
protocols and speeds are supported: 1 GbE, 10 GbE, 10 GbE with DCB,
and Fibre Channel. Since the Ethernet fabric is one logical chassis with
distributed intelligence, the VM sphere of mobility spans the entire
VCS. Mobility extends even further with the VCS fabric extension
Dynamic Service. At the core of the data center, routers are virtualized
using MCT and provide high-performance connectivity between Ether-
net fabrics, inside the data center or across data centers.
Servers running high-priority applications or other servers requiring
the highest block storage service levels connect to the SAN using
native Fibre Channel. For lower-tier applications, FCoE or iSCSI storage
can be connected directly to the Ethernet fabric, providing shared stor-
age for servers connected to that fabric.
Figure 70. A Brocade VCS reference network architecture.



Appendix A: “Best Practices for Energy Efficient Storage Operations”
Version 1.0

October 2008
Authored by Tom Clark, Brocade, Green Storage Initiative (GSI) Chair
and Dr. Alan Yoder, NetApp, GSI Governing Board
Reprinted with permission of the SNIA

Introduction
The energy required to support data center IT operations is becoming
a central concern worldwide. For some data centers, additional energy
supply is simply not available, either due to finite power generation
capacity in certain regions or the inability of the power distribution grid
to accommodate more lines. Even if energy is available, it comes at an
ever increasing cost. With current pricing, the cost of powering IT
equipment is often higher than the original cost of the equipment
itself. The increasing scarcity and higher cost of energy, however, is
being accompanied by a sustained growth of applications and data.
Simply throwing more hardware assets at the problem is no longer via-
ble. More hardware means more energy consumption, more heat
generation and increasing load on the data center cooling system.
Companies are therefore now seeking ways to accommodate data
growth while reducing their overall power profile. This is a difficult
challenge.
Data center energy efficiency solutions span the spectrum from more
efficient rack placement and alternative cooling methods to server
and storage virtualization technologies. The SNIA's Green Storage Ini-
tiative was formed to identify and promote energy efficiency solutions
specifically relating to data storage. This document is the first iteration
of the SNIA GSI's recommendations for maximizing utilization of
data center storage assets while reducing overall power consumption.


We plan to expand and update the content over time to include new
energy-related storage technologies as well as SNIA-generated metrics
for evaluating energy efficiency in storage product selection.

Some Fundamental Considerations


Reducing energy consumption is both an economic and a social imper-
ative. While data centers represent only ~2% of total energy
consumption in the US, the dollar figure is approximately $4B annu-
ally. In terms of power generation, data centers in the US require the
equivalent of six 1000 MegaWatt power plants to sustain current oper-
ations. Global power consumption for data centers is more than twice
the US figures. The inability of the power generation and delivery infra-
structure to accommodate the growth in continued demand, however,
means that most data centers will be facing power restrictions in the
coming years. Gartner predicts that by 2009, half of the world's data
centers will not have sufficient power to support their applications.1
An Emerson Power survey projects that 96% of all data centers will not
have sufficient power by 2011.2 Even if there were a national campaign
to build alternative energy generation capability, new systems would
not be online soon enough to prevent a widespread energy deficit. This
simply highlights the importance of finding new ways to leverage tech-
nology to increase energy efficiency within the data center and
accomplish more IT processing with fewer energy resources.
In addition to the pending scarcity and increased cost of energy to
power IT operations, data center managers face a continued explosion
in data growth. Since 2000, the amount of corporate data generated
worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300
exabytes, with projections of ~1 zettabyte (1000 exabytes) by 2010.
This data must be stored somewhere. The sustained growth of data
requires new tools for data management, storage allocation, data
retention and data redundancy.

1. “Gartner Says 50 Percent of Data Centers Will Have Insufficient Power and Cooling
Capacity by 2008,” Gartner Inc. Press Release, November 29, 2006
2. “Emerson Network Power Presents Industry Survey Results That Project 96 Percent
of Today's Data Centers Will Run Out of Capacity by 2011,” Emerson Press Release,
November 16, 2006


The conflict between the available supply of energy to power IT opera-
tions and the increasing demand imposed by data growth is further
exacerbated by the operational requirement for high availability access
to applications and data. Mission-critical applications in particular are
high energy consumers and require more powerful processors, redun-
dant servers for failover, redundant networking connectivity,
redundant fabric pathing, and redundant data storage in the form of
mirroring and data replication for disaster recovery. These top tier
applications are so essential for business operations, however, that
the doubling of server and storage hardware elements and the accom-
panying doubling of energy draw have been largely unavoidable. Here
too, though, new green storage technologies and best practices can
assist in retaining high availability of applications and data while
reducing total energy requirements.

Shades of Green
The quandary for data center managers is in identifying which new
technologies will actually have a sustainable impact for increasing
energy efficiency and which are only transient patches whose initial
energy benefit quickly dissipates as data center requirements change.
Unfortunately, the standard market dynamic that eventually separates
weak products from viable ones has not had sufficient time to elimi-
nate the green pretenders. Consequently, analysts often complain
about the 'greenwashing' of vendor marketing campaigns and the
opportunistic attempt to portray marginally useful solutions as the
cure to all the IT manager's energy ills.
Within the broader green environmental movement greenwashing is
also known as being “lite green” or sometimes “light green”. There
are, however, other shades of green. Dark green refers to environmen-
tal solutions that rely on across-the-board reductions in energy and
material consumption. For a data center, a dark green tactic would be
to simply reduce the number of applications and associated hardware
and halt the expansion of data growth. Simply cutting back, however, is
not feasible for today's business operations. To remain competitive,
businesses must be able to accommodate growth and expansion of
operations.
Consequently, viable energy efficiency for ongoing data center opera-
tions must be based on solutions that are able to leverage state-of-the-
art technologies to do much more with much less. This aligns to yet
another shade of environmental green known as “bright green”. Bright
green solutions reject both the superficial lite green and the Luddite
dark green approaches to the environment and rely instead on techni-
cal innovation to provide sustainable productivity and growth while
steadily driving down energy consumption. The following SNIA GSI best
practices include many bright green solutions that accomplish the goal
of energy reduction while increasing productivity of IT storage
operations.
Although the Best Practices recommendations listed below are num-
bered sequentially, no prioritization is implied. Every data center
operation has different characteristics and what is suitable for one
application environment may not work in another.
These recommendations collectively fall into the category of “silver
buckshot” in addressing data center storage issues. There is no single
silver bullet to dramatically reduce IT energy consumption and cost.
Instead, multiple energy efficient technologies can be deployed in con-
cert to reduce the overall energy footprint and bring costs under
control. Thin provisioning and data deduplication, for example, are dis-
tinctly different technologies that together can help reduce the
amount of storage capacity required to support applications and thus
the amount of energy-consuming hardware in the data center. When
evaluating specific solutions, then, it is useful to imagine how they will
work in concert with other products to achieve greater efficiencies.
Best Practice #1: Manage Your Data
A significant component of the exponential growth of data is the
growth of redundant copies of data. By some industry estimates, over
half of the total volume of a typical company's data exists in the form
of redundant copies dispersed across multiple storage systems and
client workstations. Consider the impact, for example, of emailing a
4MB PowerPoint attachment to 100 users instead of simply sending a
link to the file. The corporate email servers now have an additional
400 MB of capacity devoted to redundant copies of the same data.
Even if individual users copy the attachment to their local drives, the
original email and attachment may languish on the email server for
months before the user tidies their Inbox. In addition, some users may
copy the attachment to their individual share on a data center file
server, further compounding the duplication. And to make matters
worse, the lack of data retention policies can result in duplicate copies
of data being maintained and backed up indefinitely.
This phenomenon is replicated daily across companies of every size
worldwide, resulting in ever increasing requirements for storage, lon-
ger backup windows and higher energy costs. A corporate policy for
data management, redundancy and retention is therefore an essential
first step in managing data growth and getting storage energy costs
under control. Many companies lack data management policies or
effective means to enforce them because they are already over-
whelmed with the consequences of prior data avalanches. Responding
reactively to the problem, however, typically results in the spontaneous
acquisition of more storage capacity, longer backup cycles and more
energy consumption. To proactively deal with data growth, begin with
an audit of your existing applications and data and begin prioritizing
data in terms of its business value.
Although tools are available to help identify and reduce data redun-
dancy throughout the network, the primary outcome of a data audit
should be to change corporate behavior. Are data sets periodically
reviewed to ensure that only information that is relevant to business is
retained? Does your company have a data retention policy and mecha-
nisms to enforce it? Are you educating your users on the importance of
managing their data and deleting non-essential or redundant copies of
files? Are your Service Level Agreements (SLAs) structured to reward
more efficient data management by individual departments? Given
that data generators (i.e., end users) typically do not understand
where their data resides or what resources are required to support it,
creating policies for data management and retention can be a useful
means to educate end users about the consequences of excessive
data redundancy.
Proactively managing data also requires aligning specific applications
and their data to the appropriate class of storage. Without a logical pri-
oritization of applications in terms of business value, all applications
and data receive the same high level of service. Most applications,
however, are not truly mission-critical and do not require the more
expensive storage infrastructure needed for high availability and per-
formance. In addition, even high-value data does not typically sustain
its value over time. As we will see in the recommendations below,
aligning applications and data to the appropriate storage tier and
migrating data from one tier to another as its value changes can
reduce both the cost of storage and the cost of energy to drive it. This
is especially true when SLAs are structured to require fewer backup
copies as data value declines.


Best Practice #2: Select the Appropriate Storage RAID Level
Storage networking provides multiple levels of data protection, ranging
from simple CRC checks on data frames to more sophisticated data
recovery mechanisms such as RAID. RAID guards against catastrophic
loss of data when disk drives fail by creating redundant copies of data
or providing parity reconstruction of data onto spare disks.
RAID 1 mirroring creates a duplicate copy of disk data, but at the
expense of doubling the number of disk drives and consequently dou-
bling the power consumption of the storage infrastructure. The primary
advantage of RAID 1 is that it can withstand the failure of one or all of
the disks in one mirror of a given RAID set. For some mission-critical
environments, the extra cost and power usage characteristic of RAID 1
may be unavoidable. Accessibility to data is sometimes so essential for
business operations that the ability to quickly switch from primary stor-
age to its mirror without any RAID reconstruct penalty is an absolute
business requirement. Likewise, asynchronous and synchronous data
replication provide redundant copies of disk data for high availability
access and are widely deployed as insurance against system or site
failure. As shown in Best Practices #1, however, not all data is mission
critical and even high value data may decrease in value over time. It is
therefore essential to determine what applications and data are abso-
lutely required for continuous business operations and thus merit
more expensive and less energy efficient RAID protection.
RAID 5's distributed parity algorithm enables a RAID set to withstand
the loss of a single disk drive. In that respect, it offers the
basic data protection against disk failure that RAID 1 provides, but
only against a single disk failure and with no immediate failover to a
mirrored array. While the RAID set does remain online, a failed disk
must be reconstructed from the distributed parity on the surviving
drives in the set, possibly impacting performance. Unlike RAID 1, how-
ever, RAID 5 requires only one additional drive per RAID set. Fewer
redundant drives mean less energy consumption as well as better uti-
lization of raw capacity.
By adding a second parity drive, RAID 6 can withstand the loss of two
disk drives in a RAID set, providing higher availability than RAID 5.
Both solutions, however, are more energy efficient than RAID 1 mirror-
ing (or RAID 1+0 mirroring and striping) and should be considered for
applications that do not require an immediate failover to a secondary
array.


 
Figure 1. Software Technologies for Green Storage: RAID 5/6, thin
provisioning, multi-use clones, virtual backups, and deduplication and
compression. Green technologies use less raw capacity to store and
use the same data set, and power consumption falls accordingly.
© 2008 Storage Networking Industry Association, All Rights Reserved,
Alan Yoder, NetApp
As shown in Figure 1, the selection of the appropriate RAID levels to
retain high availability data access while reducing the storage hard-
ware footprint can enable incremental green benefits when combined
with other technologies.
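
As a rough way to compare these options, the sketch below computes
drive count, usable capacity, and a first-order power estimate for
RAID 1, RAID 5, and RAID 6 sets protecting the same amount of data.
The drive size, set size, and per-drive wattage are hypothetical round
numbers rather than vendor figures.

def raid_summary(data_drives, drive_tb, watts_per_drive):
    # Total drives needed to protect the same usable capacity under each scheme.
    layouts = {
        "RAID 1 (mirroring)": 2 * data_drives,        # every data drive is duplicated
        "RAID 5 (single parity)": data_drives + 1,    # one drive of parity overhead
        "RAID 6 (double parity)": data_drives + 2,    # two drives of parity overhead
    }
    usable_tb = data_drives * drive_tb
    for name, total_drives in layouts.items():
        print(f"{name:24s} {total_drives:2d} drives, {usable_tb} TB usable, "
              f"~{total_drives * watts_per_drive} W")

# Hypothetical example: protect 8 drives' worth of data on 2 TB drives at ~10 W each.
raid_summary(data_drives=8, drive_tb=2, watts_per_drive=10)

For the same usable capacity, the mirrored layout needs roughly twice
the drives (and power) of the parity-based layouts, which is the
trade-off described above.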
Best Practice #3: Leverage Storage Virtualization
Storage virtualization refers to a suite of technologies that create a log-
ical abstraction layer above the physical storage layer. Instead of
managing individual physical storage arrays, for example, virtualization
enables administrators to manage multiple storage systems as a sin-
gle logical pool of capacity, as shown in Figure 2.


Figure 2. Storage virtualization: multiple physical arrays are combined
into a single virtualized storage pool and presented to servers as
logical LUNs. From Storage Virtualization: Technologies for Simplifying
Data Storage and Management, T. Clark, Addison-Wesley, used with
permission from the author
On its own, storage virtualization is not inherently more energy effi-
cient than conventional storage management but can be used to
maximize efficient capacity utilization and thus slow the growth of
hardware acquisition. By combining dispersed capacity into a single
logical pool, it is now possible to allocate additional storage to
resource-starved applications without having to deploy new energy-
consuming hardware. Storage virtualization is also an enabling foun-
dation technology for thin provisioning, resizeable volumes, snapshots
and other solutions that contribute to more energy efficient storage
operations.
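
The sketch below models the pooling idea in Figure 2: free capacity
from several physical arrays is aggregated into one logical pool, and
new logical volumes are carved out of whichever arrays have space,
without the administrator managing each array separately. The array
names, sizes, and allocation policy are invented for illustration.

class StoragePool:
    """Aggregate free capacity from multiple arrays into one logical pool."""

    def __init__(self, arrays):
        self.free = dict(arrays)          # array name -> free capacity in TB
        self.volumes = {}                 # LUN name -> list of (array, TB) extents

    def free_capacity(self):
        return sum(self.free.values())

    def create_lun(self, name, size_tb):
        if size_tb > self.free_capacity():
            raise ValueError("pool exhausted: add capacity to any array")
        extents, remaining = [], size_tb
        for array in sorted(self.free, key=self.free.get, reverse=True):
            take = min(self.free[array], remaining)
            if take > 0:
                self.free[array] -= take
                extents.append((array, take))
                remaining -= take
            if remaining == 0:
                break
        self.volumes[name] = extents
        return extents

# Hypothetical arrays with leftover capacity that would otherwise be stranded.
pool = StoragePool({"Array A": 3, "Array B": 5, "Array C": 2})
print(pool.create_lun("LUN 43", 6))   # spans Array B and Array A transparently
print("Remaining pool capacity:", pool.free_capacity(), "TB")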
Best Practice #4: Use Data Compression
Compression has long been used in data communications to minimize
the number of bits sent along a transmission link and in some storage
technologies to reduce the amount of data that must be stored.
Depending on implementation, compression can impose a perfor-
mance penalty because the data must be encoded when written and
decoded (decompressed) when read. Simply minimizing redundant or
recurring bit patterns via compression, however, can reduce the
amount of processed data that is stored by one half or more and thus
reduce the amount of total storage capacity and hardware required.
Not all data is compressible, though, and some data formats have
already undergone compression at the application layer. JPEG, MPEG
and MP3 file formats, for example, are already compressed and will
not benefit from further compression algorithms when written to disk
or tape.


When used in combination with security mechanisms such as data
encryption, compression must be executed in the proper sequence.
Data should be compressed before encryption on writes and
decrypted before decompression on reads.
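
A minimal sketch of that ordering, using Python's standard zlib module
for compression and the third-party cryptography package (Fernet) as a
stand-in for a storage encryption engine: data is compressed before it
is encrypted on the write path and decrypted before it is decompressed
on the read path. Encrypted data is effectively random and compresses
poorly, which is why the order matters.

import zlib
from cryptography.fernet import Fernet   # third-party package: cryptography

key = Fernet.generate_key()
cipher = Fernet(key)

def write_path(plaintext: bytes) -> bytes:
    """Compress first, then encrypt (the order that preserves compressibility)."""
    return cipher.encrypt(zlib.compress(plaintext))

def read_path(stored: bytes) -> bytes:
    """Reverse the order: decrypt first, then decompress."""
    return zlib.decompress(cipher.decrypt(stored))

data = b"highly repetitive block of data " * 1024
stored = write_path(data)
assert read_path(stored) == data
print(f"original {len(data)} bytes, stored {len(stored)} bytes")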
Best Practice #5: Incorporate Data Deduplication
While data compression works at the bit level, conventional data dedu-
plication works at the disk block level. Redundant data blocks are
identified and referenced to a single identical data block via pointers
so that the redundant blocks do not have to be maintained intact for
backup (virtual to disk or actual to tape). Multiple copies of a docu-
ment, for example, may only have minor changes in different areas of
the document while the remaining material in the copies has identi-
cal content. Data deduplication also works at the block level to reduce
redundancy of identical files. By retaining only unique data blocks and
providing pointers for the duplicates, data deduplication can reduce
storage requirements by up to 20:1. As with data compression, the
data deduplication engine must reverse the process when data is read
so that the proper blocks are supplied to the read request.
Data deduplication may be done either in band, as data is transmitted
to the storage medium, or in place, on existing stored data. In band
techniques have the obvious advantage that multiple copies of data
never get made, and therefore never have to be hunted down and
removed. In place techniques, however, are required to address the
immense volume of already stored data that data center managers
must deal with.
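
The following sketch shows the core mechanics described above: data is
divided into fixed-size blocks, each block is fingerprinted with a hash,
only previously unseen blocks are stored, and duplicates become
pointers to the existing copy. The 4 KB block size and SHA-256
fingerprints are assumptions for the example; real deduplication
engines differ in chunking and hashing.

import hashlib

BLOCK_SIZE = 4096   # assumed fixed block size for the example

class DedupeStore:
    def __init__(self):
        self.blocks = {}      # fingerprint -> unique block data
        self.objects = {}     # object name -> list of fingerprints (pointers)

    def write(self, name, data):
        pointers = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)     # store only if unseen
            pointers.append(fp)
        self.objects[name] = pointers

    def read(self, name):
        # Reverse the process: resolve each pointer back to its block.
        return b"".join(self.blocks[fp] for fp in self.objects[name])

store = DedupeStore()
doc = b"A" * 8192 + b"unique trailer"
store.write("report-v1", doc)
store.write("report-v2", doc + b" minor edit")   # nearly identical copy
print("logical bytes:", sum(len(store.read(n)) for n in store.objects))
print("stored bytes :", sum(len(b) for b in store.blocks.values()))
assert store.read("report-v1") == doc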
Best Practice #6: File Deduplication
File deduplication operates at the file system level to reduce redun-
dant copies of identical files. Similar to block level data deduplication,
the redundant copies must be identified and then referenced via
pointers to a single file source. Unlike block level data deduplication,
however, file deduplication lacks the granularity to prevent redundancy
of file content. If two files are 99% identical in content, both copies
must be stored in their entirety. File deduplication therefore typically
provides only a 3:1 or 4:1 reduction in data volume. Rich targets
such as full network-based backup of laptops may do much better
than this, however.


Best Practice #7: Thin Provisioning of Storage to Servers


In classic server-storage configurations, servers are allocated storage
capacity based on the anticipated requirements of the applications
they support. Because exceeding that storage capacity over time
would result in an application failure, administrators typically over-pro-
vision storage to servers. The result of fat provisioning is higher cost,
both for the extra storage capacity itself and in the energy required to
support additional spinning disks that are not actively used for IT
processing.
Thin provisioning is a means to satisfy the application server's expecta-
tion of a certain volume size while actually allocating less physical
capacity on the storage array or virtualized storage pool. This elimi-
nates the under-utilization issues typical of most applications,
provides storage on demand and reduces the total disk capacity
required for operations. Fewer disks equate to lower energy consump-
tion and cost and by monitoring storage usage the storage
administrator can add capacity only as required.
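
A minimal sketch of the thin provisioning idea, with invented sizes:
each volume reports its full provisioned size to the server, but
physical extents are drawn from the shared pool only when data is
actually written, and the administrator can watch real consumption and
add capacity as needed.

class ThinVolume:
    """Reports a large logical size; consumes physical blocks only on write."""

    def __init__(self, name, logical_gb, pool):
        self.name = name
        self.logical_gb = logical_gb       # what the application server sees
        self.pool = pool                   # shared physical capacity (GB)
        self.allocated = {}                # logical extent -> physically backed

    def write(self, logical_extent_gb):
        if logical_extent_gb not in self.allocated:
            if self.pool["free_gb"] < 1:
                raise RuntimeError("pool exhausted: add physical capacity")
            self.pool["free_gb"] -= 1      # allocate a 1 GB extent on first write
            self.allocated[logical_extent_gb] = True

    def used_gb(self):
        return len(self.allocated)

# Hypothetical 100 GB physical pool backing three "fat-looking" 100 GB volumes.
pool = {"free_gb": 100}
volumes = [ThinVolume(f"vol{i}", 100, pool) for i in range(3)]
for gb in range(20):                       # the application writes only 20 GB
    volumes[0].write(gb)
print("vol0 reports", volumes[0].logical_gb, "GB; actually uses",
      volumes[0].used_gb(), "GB; pool free:", pool["free_gb"], "GB")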
Best Practice #8: Leverage Resizeable Volumes
Another approach to increasing capacity utilization and thus reducing
the overall disk storage footprint is to implement variable size vol-
umes. Typically, storage volumes are of a fixed size, configured by the
administrator and assigned to specific servers. Dynamic volumes, by
contrast, can expand or contract depending on the amount of data
generated by an application. Resizeable volumes require support from
the host operating system and relevant applications, but can increase
efficient capacity utilization to 70% or more. From a green perspective,
more efficient use of existing disk capacity means fewer hardware
resources over time and a much better energy profile.
Best Practice #9: Writeable Snapshots
Application development and testing are integral components of data
center operations and can require significant increases in storage
capacity to perform simulations and modeling against real data.
Instead of allocating additional storage space for complete copies of
live data, snapshot technology can be used to create temporary copies
for testing. A snapshot of the active, primary data is supplemented by
writing only the data changes incurred by testing. This minimizes the
amount of storage space required for testing while allowing the active
non-test applications to continue unimpeded.
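
The sketch below captures that copy-on-write behavior: a writeable
snapshot shares all blocks with the primary data when it is created and
stores only the blocks the test workload changes, so the extra capacity
consumed is proportional to the changes rather than to a full copy.
Block granularity and data structures are simplified assumptions.

class WriteableSnapshot:
    """Shares unchanged blocks with the primary volume; stores only deltas."""

    def __init__(self, primary_blocks):
        self.primary = primary_blocks      # block number -> data (shared, read-only)
        self.deltas = {}                   # blocks changed by the test workload

    def read(self, block):
        return self.deltas.get(block, self.primary[block])

    def write(self, block, data):
        self.deltas[block] = data          # copy-on-write: only the change is stored

primary = {n: f"prod-data-{n}" for n in range(1000)}   # live production volume
snap = WriteableSnapshot(primary)
snap.write(7, "test-modified")                          # test run changes a few blocks
snap.write(42, "test-modified")
print("blocks visible to the test copy:", len(primary))
print("extra blocks stored for the snapshot:", len(snap.deltas))
print(snap.read(7), "/", snap.read(8))                  # changed vs. shared block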


Best Practice #10: Deploy Tiered Storage


Storage systems are typically categorized by their performance, avail-
ability and capacity characteristics. Formerly, most application data
was stored on a single class of storage system until it was eventually
retired to tape for preservation. Today, however, it is possible to
migrate data from one class of storage array to another as the busi-
ness value and accessibility requirements of that data changes over
time. Tiered storage is a combination of different classes of storage
systems and data migration tools that enables administrators to align
the value of data to the value of the storage container in which it
resides. Because second-tier storage systems typically use slower
spinning or less expensive disk drives and have fewer high availability
features, they consume less energy compared to first-tier systems. In
addition, some larger storage arrays enable customers to deploy both
high-performance and moderate-performance disks sets in the same
chassis, thus enabling an in-chassis data migration.
A tiered storage strategy can help reduce your overall energy consump-
tion while still making less frequently accessed data available to
applications at a lower cost per gigabyte of storage. In addition, tiered
storage is a reinforcing mechanism for data retention policies as data
is migrated from one tier to another and then eventually preserved via
tape or simply deleted.
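
As a simple illustration of aligning data to a tier as its value
changes, the sketch below applies an age-based placement policy. The
tier names, age thresholds, and data sets are invented placeholders; a
real policy would also weigh business value, SLAs, and compliance
requirements.

# Hypothetical tiers ordered from highest performance and power to lowest.
TIERS = [
    ("tier-1 (high-performance disk/SSD)", {"max_age_days": 30}),
    ("tier-2 (slower, less expensive disk)", {"max_age_days": 365}),
    ("tape archive", {"max_age_days": None}),   # catch-all, near-zero power
]

def place(data_set_age_days):
    # Recent data stays on tier-1; as it ages it migrates to lower-power tiers.
    for tier, policy in TIERS:
        limit = policy["max_age_days"]
        if limit is None or data_set_age_days <= limit:
            return tier
    return TIERS[-1][0]

for name, age in [("orders-current", 5), ("orders-2009", 200), ("orders-2004", 2000)]:
    print(f"{name:15s} -> {place(age)}")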
Best Practice #11: Solid State Storage
Solid state storage still commands a price premium compared to
mechanical disk storage, but has excellent performance characteris-
tics and much lower energy consumption compared to spinning media.
While solid state storage may not be an option for some data center
budgets, it should be considered for applications requiring high perfor-
mance and for tiered storage architectures as a top-tier container.
Best Practice #12: MAID and Slow-Spin Disk Technology
High performance applications typically require continuous access to
storage and thus assume that all disk sets are spinning at full speed
and ready to read or write data. For occasional or random access to
data, however, the response time may not be as critical. MAID (mas-
sive array of idle disks) technology uses a combination of cache
memory and idle disks to service requests, only spinning up disks as
required. Once no further requests for data in a specific disk set are
made, the drives are once again spun down to idle mode. Because
each disk drive represents a power draw, MAID provides inherent
green benefits. As MAID systems are accessed more frequently,
however, the energy profile begins to approach that of conventional
storage arrays.
Another approach is to put disk drives into slow spin mode when no
requests are pending. Because slower spinning disks require less
power, the energy efficiency of slow spin arrays is inversely propor-
tional to their frequency of access.
Occasionally lengthy access times are inherent to MAID technology, so
it is only useful when data access times of several seconds (the length
of time it takes a disk to spin up) can be tolerated.
Best Practice #13: Tape Subsystems
As a storage technology, tape is the clear leader in energy efficiency.
Once data is written to tape for preservation, the power bill is essen-
tially zero. Unfortunately, however, businesses today cannot simply use
tape as their primary storage without inciting a revolution among end
users and bringing applications to their knees. Although the obituary
for tape technology has been written multiple times over the past
decade, tape endures as a viable archive media. From a green stand-
point, tape is still the best option for long term data retention.
Best Practice #14: Fabric Design
Fabrics provide the interconnect between servers and storage sys-
tems. For larger data centers, fabrics can be quite extensive with
thousands of ports in a single configuration. Because each switch or
director in the fabric contributes to the data center power bill, design-
ing an efficient fabric should include the energy and cooling impact as
well as rational distribution of ports to service the storage network.
A mesh design, for example, typically incorporates multiple switches
connected by interswitch links (ISLs) for redundant pathing. Multiple
(sometimes 30 or more) meshed switches represent multiple energy
consumers in the data center. Consequently, consolidating the fabric
into higher port count and more energy efficient director chassis and
core-edge design can help simplify the fabric design and potentially
lower the overall energy impact of the fabric interconnect.
Best Practice #15: File System Virtualization
By some industry estimates, 75% of corporate data resides outside of
the data center, dispersed in remote offices and regional centers. This
presents a number of issues, including the inability to comply with reg-
ulatory requirements for data security and backup, duplication of
server and storage resources across the enterprise, management and
maintenance of geographically distributed systems and increased
energy consumption for corporate-wide IT assets. File system virtual-
ization includes several technologies for centralizing and consolidating
remote file data, incorporating that data into data center best prac-
tices for security and backup and maintaining local response-time to
remote users. From a green perspective, reducing dispersed energy
inefficiencies via consolidation helps lower the overall IT energy
footprint.
Best Practice #16: Server, Fabric and Storage
Virtualization
Data center virtualization leverages virtualization of servers, the fabric,
and storage to create a more flexible and efficient IT ecosystem.
Server virtualization essentially deduplicates processing hardware by
enabling a single hardware platform to replace up to 20 platforms.
Server virtualization also facilitates mobility of applications so that the
proper processing power can be applied to specific applications on
demand. Fabric virtualization enables mobility and more efficient utili-
zation of interconnect assets by providing policy-based data flows from
servers to storage. Applications that require first class handling are
given a higher quality of service delivery while less demanding applica-
tion data flows are serviced by less expensive paths. In addition,
technologies such as NPIV (N_Port ID Virtualization) reduce the num-
ber of switches required to support virtual server connections and
emerging technologies such as FCoE (Fibre Channel over Ethernet)
can reduce the number of hardware interfaces required to support
both storage and messaging traffic. Finally, storage virtualization sup-
plies the enabling foundation technology for more efficient capacity
utilization, snapshots, resizeable volumes and other green storage
solutions. By extending virtualization end-to-end in the data center, IT
can accomplish more with fewer hardware assets and help reduce
data center energy consumption.
File system virtualization can also be used as a means of implement-
ing tiered storage with transparent impact to users through use of a
global name space.
Best Practice #17: Flywheel UPS Technology
Flywheel UPSs, while more expensive up front, are several percent
more efficient (typically > 97%), easier to maintain, more reliable and
do not have the large environmental footprint that conventional bat-
tery-backed UPSs do. Forward-looking data center managers are
increasingly finding that this technology is less expensive in multiple
dimensions over the lifetime of the equipment.


Best Practice #18: Data Center Air Conditioning Improvements
The combined use of economizers and hot-aisle/cold-aisle technology
can result in a PUE (Power Usage Effectiveness) as low as 1.25. As the
PUE of a traditional data center is often over 2.25, this difference can
represent literally millions of dollars a year in energy savings.
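
A worked example of that arithmetic, assuming a hypothetical 1 MW of IT
load and a flat electricity price of $0.10 per kWh (both placeholder
figures): PUE is the ratio of total facility power to IT equipment
power, so lowering it from 2.25 to 1.25 eliminates a full megawatt of
overhead.

def annual_energy_cost(it_load_kw, pue, dollars_per_kwh=0.10):
    # PUE = total facility power / IT equipment power, so total facility
    # power is the IT load multiplied by the PUE.
    total_kw = it_load_kw * pue
    return total_kw * 24 * 365 * dollars_per_kwh

it_load_kw = 1000                                  # hypothetical 1 MW of IT equipment
traditional = annual_energy_cost(it_load_kw, pue=2.25)
improved = annual_energy_cost(it_load_kw, pue=1.25)
print(f"PUE 2.25: ${traditional:,.0f}/yr  PUE 1.25: ${improved:,.0f}/yr  "
      f"savings: ${traditional - improved:,.0f}/yr")

Even at this modest size the difference approaches a million dollars a
year; larger facilities scale accordingly.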
Economizers work by using outside air instead of recirculated air when
doing so uses less energy. Obviously climate is a major factor in how
effective this strategy is: heat and high humidity both reduce its
effectiveness.
There are various strategies for hot/cold air containment. All depend
on placing rows of racks front to front and back to back. As almost all
data center equipment is designed to draw cooled air in the front and
eject heated air out the back, this results in concentrating the areas
where heat evacuation and cool air supply are located.
One strategy is to isolate only the cold aisles and to run the rest of the
room at hot aisle temperatures. As hot aisle temperatures are typically
in the 95° F range, this has the advantage that little to no insulation is
needed in the building skin, and in cooler climates, some cooling is
achieved via ordinary thermal dissipation through the building skin.
Another strategy is to isolate both hot and cold aisles. This reduces the
volume of air that must be conditioned, and has the advantage that
humans will find the building temperature to be more pleasant.
In general, hot aisle/cold aisle technologies avoid raised floor configu-
rations, as pumping cool air upward requires extra energy.
Best Practice #19: Increased Data Center Temperatures
Increasing data center temperatures can save significant amounts of
energy. The ability to do this depends in large part on excellent tem-
perature and power monitoring capabilities, and on conditioned air
containment strategies. Typical enterprise class disk drives are rated
to 55° C (131° F), but disk lifetime suffers somewhat at these higher
temperatures, and most data center managers think it unwise to get
very close to that upper limit. Even tightly designed cold aisle contain-
ment measures may have 10 to 15 degree variations in temperature
from top to bottom of a rack; the total possible variation plus the maxi-
mum measured heat gain across the rack must be subtracted from
the maximum tolerated temperature to get a maximum allowable cold
aisle temperature. So the more precisely that air delivery can be con-
trolled and measured, the higher the temperature one can run in the
“cold” aisles.
Benefits of higher temperatures include raised chiller water tempera-
tures and efficiency, reduced fan speed, noise and power draw, and
increased ability to use outside air for cooling through an economizer.
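
The subtraction rule described above can be written out directly. With
hypothetical figures (drives rated to 131° F, a 15° F top-to-bottom
cold-aisle variation, a 25° F measured heat gain across the rack, and an
extra 10° F of headroom because running near the rated limit shortens
disk life), the maximum allowable cold-aisle temperature follows
immediately.

def max_cold_aisle_temp_f(max_rated_f, rack_variation_f, heat_gain_f, extra_margin_f=0):
    # Subtract worst-case variation, measured heat gain, and any chosen
    # safety margin from the maximum tolerated equipment temperature.
    return max_rated_f - rack_variation_f - heat_gain_f - extra_margin_f

# Hypothetical inputs; see the assumptions above.
print(max_cold_aisle_temp_f(131, 15, 25, extra_margin_f=10),
      "degrees F maximum cold-aisle supply temperature")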
Best Practice #20: Work with Your Regional Utilities
Some electrical utility companies and state agencies are partnering
with customers by providing financial incentives for deploying more
energy efficient technologies. If you are planning a new data center or
consolidating an existing one, incentive programs can provide guid-
ance for the types of technologies and architectures that will give the
best results.

What the SNIA is Doing About Data Center Energy Usage
The SNIA Green Storage Initiative is conducting a multi-pronged
approach for advancing energy efficient storage networking solutions,
including advocacy, promotion of standard metrics, education, devel-
opment of energy best practices and alliances with other industry
energy organizations such as The Green Grid. Currently, over 20 SNIA
members have joined the SNIA GSI as voting members.
A key requirement for customers is the ability to audit their current
energy consumption and to take practical steps to minimize energy use.
The task of developing metrics for measuring the energy efficiency of
storage network elements is being performed by the SNIA Green Storage
Technical Work Group (TWG). The SNIA GSI supports the technical work of
the GS-TWG by funding the laboratory testing required for metrics
development, formulating a common taxonomy for classes of storage, and
promoting GS-TWG metrics for industry standardization.
The SNIA encourages all storage networking vendors, channels, tech-
nologists and end users to actively participate in the green storage
initiative and help discover additional ways to minimize the impact of
IT storage operations on power consumption. If, as industry analysts
forecast, adequate power for many data centers will simply not be
available, we all have a vital interest in reducing our collective power
requirements and making our technology do far more with far less
environmental impact.


For more information about the SNIA Green Storage Initiative, link to:
http://www.snia.org/forums/green/
To view the SNIA GSI Green Tutorials, link to:
http://www.snia.org/education/tutorials#green

About the SNIA


The Storage Networking Industry Association (SNIA) is a not-for-profit
global organization, made up of some 400 member companies and
7000 individuals spanning virtually the entire storage industry. SNIA's
mission is to lead the storage industry worldwide in developing and
promoting standards, technologies, and educational services to
empower organizations in the management of information. To this
end, the SNIA is uniquely committed to delivering standards, educa-
tion, and services that will propel open storage networking solutions
into the broader market. For additional information, visit the SNIA web
site at www.snia.org.
NOTE: The section "Green Storage Terminology" has been omitted from
this reprint; however, you can find green storage terms in the
"Glossary" on page 141.



Appendix B: Online Sources
ANSI ansi.org
ASHRAE ashrae.com
Blade Systems Alliance bladesystems.org
Brocade brocade.com
Brocade Communities community.brocade.com
Brocade Data Center Virtualization brocade.com/virtualization
Brocade TechBytes brocade.com/techbytes
Climate Savers climatesaverscomputing.org
Data Center Journal datacenterjournal.com
Data Center Knowledge datacenterknowledge.com
Green Storage Initiative snia.org/forums/green
Greener Computing greenercomputing.com
IEEE ieee.org
IETF ietf.org
LEED usgbc.org/DisplayPage.aspx?CMSPageID=222
SNIA snia.org
The Green Grid thegreengrid.org
Uptime Institute uptimeinstitute.org
US Department of Energy - Data Centers
www1.eere.energy.gov/industry/saveenergynow/
partnering_data_centers.html



Glossary
Data center network terminology

ACL Access control list, a security mechanism for assigning various
permissions to a network device.
AES256-GCM An IEEE encryption standard for data on tape.
AES256-XTS An IEEE encryption standard for data on disk.
ANSI American National Standards Institute
API Application Programming Interface, a set of calling
conventions for program-to-program communication.
ASHRAE American Society for Heating, Refrigerating, and Air
Conditioning Engineers
ASIC Application-specific integrated circuit, hardware
designed for specific high-speed functions required by
protocol applications such as Fibre Channel and
Ethernet.
Access Gateway A Brocade product designed to optimize storage I/O for
blade server frames.
Access layer Network switches that provide direct connection to
servers or hosts.
Active power The energy consumption of a system when powered on
and under normal workload.
Adaptive Networking Brocade technology that enables proactive changes
in network configurations based on defined traffic flows.
Aggregation layer Network switches that provide connectivity between
multiple access layer switches and the network
backbone or core.
Application server A compute platform optimized for hosting applications
for other programs or client access.
ARP spoofing Address Resolution Protocol spoofing, a hacker
technique for associating a hacker's Layer 2 (MAC)
address with a trusted IP address.


Asynchronous Data Replication For storage, writing the same data to two
separate disk arrays based on a buffered scheme that may not capture
every data write, typically used for long-distance disaster recovery.
BTU British Thermal Unit, a metric for heat dissipation.
Blade server A server architecture that minimizes the number of
components required per blade, while relying on the
shared elements (power supply, fans, memory, I/O) of
a common frame.
Blanking plates Metal plates used to cover unused portions of
equipment racks to enhance air flow.
Bright green Applying new technologies to enhance energy
efficiency while maintaining or improving productivity.
CEE Converged Enhanced Ethernet, modifications to
conventional 10 Gbps Ethernet to provide the
deterministic data delivery associated with Fibre
Channel, also known as Data Center Bridging (DCB).
CFC Chlorofluorocarbon, a refrigerant that has been shown
to deplete ozone.
Control path In networking, handles configuration and traffic
exceptions and is implemented in software. Since it
takes more time to handle control path messages, it is
often logically separated from the data path to improve
performance.
CNA Converged network adapter, a DCB-enabled adapter
that supports both FCoE and conventional TCP/IP
traffic.
CRAC Computer room air conditioning
Core layer Typically high-performance network switches that
provide centralized connectivity for the data center
aggregation and access layer switches.
Data compression Bit-level reduction of redundant bit patterns in a data
stream via encoding. Typically used for WAN
transmissions and archival storage of data to tape.
Data deduplication Block-level reduction of redundant data by replacing
duplicate data blocks with pointers to a single good
block.
Data path In networking, handles data flowing between devices
(servers, clients, storage, and so on). To keep up with
increasing speeds, the data path is often implemented
in hardware, typically in ASICs.
Dark green Addressing energy consumption by the across-the-
board reduction of energy consuming activities.


DAS Direct-attached storage, connection of disks or disk arrays directly
to servers with no intervening network.
DCB Data Center Bridging, enhancements made to
Ethernet LANs for use in data center environments,
standards developed by IEEE and IETF.
DCC Device Connection Control, a Brocade SAN security
mechanism to allow only authorized devices to
connect to a switch.
DCiE Data Center Infrastructure Efficiency, a Green Grid metric for
measuring IT equipment power consumption in relation to total data
center power draw.
Distribution layer Typically a tier in the network architecture that routes
traffic between LAN segments in the access layer and
aggregates access layer traffic to the core layer.
DMTF Distributed Management Task Force, a standards body
focused on systems management.
DoS/DDoS Denial of service/Distributed denial of service, a
hacking technique to prevent a server from functioning
by flooding it with continuous network requests from
rogue sources.
DWDM Dense wave division multiplexing, a technique for
transmitting multiple data streams on a single fiber
optic cable by using different wavelengths.
Data center A facility to house computer systems, storage and
network operations.
ERP Enterprise resource planning, an application that
coordinates resources, information and functions of
business across the enterprise.
Economizer Equipment used to treat external air to cool a data
center or building.
Encryption A technique to encode data into a form that can't be
understood so as to secure it from unauthorized
access. Often, a key is used to encode and decode the
data from its encrypted format.
End of row EoR, provides network connectivity for multiple racks
of servers by provisioning a high-availability switch at
the end of the equipment rack row.
Energy The capacity of a physical system to do work.
Energy efficiency Using less energy to provide an equivalent level of
energy service.
Energy Star An EPA program that leverages market dynamics to
foster energy efficiency in product design.
Exabyte 1 billion gigabytes


FAIS Fabric Application Interface Standard, an ANSI standard for
providing storage virtualization services from a Fibre Channel switch
or director.
FCF Fibre Channel forwarder, the function in FCoE that forwards frames
between a Fibre Channel fabric and an FCoE network.
FCIP Fibre Channel over IP, an IETF specification for
encapsulating Fibre Channel frames in TCP/IP,
typically used for SAN extension and disaster recovery
applications.
FCoE Fibre Channel over Ethernet, an ANSI standard for
encapsulating Fibre Channel frames over Converged
Enhanced Ethernet (CEE) to simplify server
connectivity.
FICON Fibre Connectivity, a Fibre Channel Layer 4 protocol for
mapping legacy IBM transport over Fibre Channel,
typically used for distance applications.
File deduplication Reduction of file copies by replacing duplicates with
pointers to a single original file.
File server A compute platform optimized for providing file-based
data to clients over a network.
Five-nines 99.999% availability, or 5.26 minutes of downtime per
year.
Flywheel UPS An uninterruptible power supply technology using a
balanced flywheel and kinetic energy to provide
transitional power.
Gateway In networking, a gateway converts one protocol to
another at the same layer of the networking stack.
GbE Gigabit Ethernet
Gigabit (Gb) 1000 megabits
Gigabyte (GB) 1000 megabytes
Greenwashing A by-product of excessive marketing and ineffective
engineering.
GSI Green Storage Initiative, a SNIA initiative to promote
energy efficient storage practices and to define
metrics for measuring the power consumption of
storage systems and networks.
GSLB Global server load balancing, a Brocade ServerIron
ADX feature that enables client requests to be
redirected to the most available and best-performing data center
resource.
HBA Host bus adapter, a network interface optimized for
storage I/O, typically to a Fibre Channel SAN.


HCFC Hydrochlorofluorocarbon, a refrigerant shown to deplete ozone.
HPC High-Performance Computing, typically supercomputers or computer
clusters that provide teraflop (10¹² floating point operations per
second) levels of performance.
HVAC Heating, ventilation and air conditioning
Hot aisle/cold aisle The arrangement of data center equipment racks to
optimize air flow for cooling in alternating rows.
Hot-swap The ability to replace a hardware component without
disrupting ongoing operations.
Hypervisor Software or firmware that enables multiple instances
of an operating system and applications (for example,
VMs) to run on a single hardware platform.
ICL Inter-chassis link, high-performance channels used to
connect multiple Brocade DCX/DCX-4S backbone
platform chassis in two- or three-chassis
configurations.
Idle power The power consumption of a system when powered on
but with no active workload.
IEEE Institute of Electrical and Electronics Engineers, a
standards body responsible for, among other things,
Ethernet standards.
IETF Internet Engineering Task Force, responsible for
TCP/IP de facto standards.
IFL Inter-fabric link, a set of Fibre Channel switch ports
(Ex_Port on the router and E_Port on the switch) that
can route device traffic between independent fabrics.
IFR Inter-fabric routing, an ANSI standard for providing
connectivity between separate Fibre Channel SANs
without creating an extended flat Layer 2 network.
ILM Information lifecycle management, a technique for
migrating storage data from one class of storage
system to another based on the current business value
of the data.
Initiator A SCSI device within a host that initiates I/O between
the host and storage.
IOPS/W Input/output operations per second per watt. A metric
for evaluating storage I/O performance per fixed unit
of energy.
iSCSI Internet SCSI, an IETF standard for transporting SCSI
block data over conventional TCP/IP networks.
iSER iSCSI Serial RDMA, an IETF specification to facilitate
direct memory access by iSCSI network adapters.


ISL Inter-switch Link, Fibre Channel switch ports (E_Ports) used to
provide switch-to-switch connectivity.
iSNS Internet Storage Name Service, an IETF specification to
enable device registration and discovery in iSCSI
environments.
Initiator In storage, a server or host system that initiates
storage I/O requests.
kWh Kilowatt hours, a unit of electrical usage commonly used by power
companies for billing purposes.
LACP Link Aggregation Control Protocol, an IEEE
specification for grouping multiple separate network
links between two switches to provide a faster logical
link.
LAN Local area network, a network covering a small
physical area, such as a home or office, or small
groups of buildings, such as a campus or airport, typically based on
Ethernet and/or Wi-Fi.
Lite (or Light) green Solutions or products that purport to be energy
efficient but which have only negligible green benefits.
LUN Logical Unit Number, commonly used to refer to a
volume of storage capacity configured on a target
storage system.
LUN masking A means to restrict advertisement of available LUNs to
prevent unauthorized or unintended storage access.
Layer 2 In networking, a link layer protocol for device-to-device
communication within the same subnet or network.
Layer 3 In networking, a routing protocol (for example, IP) that
enables devices to communicate between different
subnets or networks.
Layer 4–7 In networking, upper-layer network protocols (for
example, TCP) that provide end-to-end connectivity,
session management, and data formatting.
MAID Massive array of idle disks. A storage array that only
spins up disks to active state when data in a disk set is
accessed or written.
MAN Metropolitan area network, a mid-distance network
often covering a metropolitan wide radius (about
200 km).
MaxTTD Maximum time to data. For a given category of storage,
the maximum time allowed to service a data read or
write.
MRP Metro Ring Protocol, a Brocade value-added protocol
to enhance resiliency and recovery from a link or
switch outage.


Metadata In storage virtualization, a data map that associates physical
storage locations with logical storage locations.
Metric A standard unit of measurement, typically part of a
system of measurements to quantify a process or
event within a given domain. GB/W and IOPS/W are
examples of proposed metrics that can be applied for
evaluating the energy efficiency of storage systems.
Non-removable media library A virtual tape backup system with spinning
disks and a shorter maximum time to data access compared to conventional
tape.
NAS Network-attached storage, use of an optimized file
server or appliance to provide shared file access over
an IP network.
Near online storage Storage systems with longer maximum time to data
access, typical of MAID and fixed content storage
(CAS).
Network consolidation Replacing multiple smaller switches and routers
with larger switches that provide higher port densities, performance,
and energy efficiency.
Network virtualization Technology that enables a single physical network
infrastructure to be managed as multiple separate logical networks or
for multiple physical networks to be managed as a single logical network.
NPIV N_Port ID Virtualization, a Fibre Channel standard that
enables multiple logical network addresses to share a
common physical network port.
OC3 A 155 Mbps WAN link speed.
OLTP On-line Transaction Processing, commonly associated
with business applications that perform transactions
with a database.
Online storage Storage systems with fast data access, typical of most
data center storage arrays in production environments.
Open Systems A vendor-neutral, non-proprietary, standards-based
approach for IT equipment design and deployment.
Orchestration Software that enables centralized coordination
between virtualization capabilities in the server,
storage, and network domains to automate data
center operations.
PDU Power Distribution Unit. A system that distributes
electrical power, typically stepping down the higher
input voltage to voltages required by end equipment. A
PDU can also be a single-inlet/multi-outlet device within a rack cabinet.


Petabyte 1000 terabytes


PoE/PoE+ Power over Ethernet, IEEE standards for powering IP
devices such as VoIP phones over Ethernet cabling.
Port In Fibre Channel, a port is the physical connection
on a switch, host, or storage array. Each port has a
personality (N_Port, E_Port, F_Port, and so on) and the
personality defines the port's function within the
overall Fibre Channel protocol.
QoS Quality of service, a means to prioritize network traffic
on a per-application basis.
RAID Redundant array of independent disks, a storage
technology for expediting reads and writes of data to
disks and/or providing data recovery in the event of
disk failure.
Raised floor Typical of older data center architecture, a raised floor
provides space for cable runs between equipment
racks and cold air flow for equipment cooling.
RBAC Role-based access control, network permissions
based on defined roles or work responsibilities.
Removable media library A tape or optical backup system with removable
cartridges or disks and >80 ms maximum time to data access.
Resizeable volumes Variable length volumes that can expand or contract
depending on the data storage requirements of an
application.
RPO Recovery point objective, defines how much data is
lost in a disaster.
RSCN Registered state change notification, a Fibre Channel
fabric feature that enables notification of storage
resources leaving or entering the SAN.
RTO Recovery time objective, defines how long data access
is unavailable in a disaster.
RSTP Rapid Spanning Tree Protocol, a bridging protocol that
replaces conventional STP and enables an
approximately 1-second recovery in the event of a
primary link failure.
SAN Storage area network, a shared network infrastructure
deployed between servers, disk arrays, and tape
subsystems, typically based on Fibre Channel.
SAN boot Firmware that enables a server to load its boot image
across a SAN.
SCC Switch Connection Control, a Brocade SAN security
mechanism to allow only authorized switch-to-switch
links.


SI-EER Site Infrastructure Energy Efficiency Ratio, a formula developed
by The Uptime Institute to calculate total data center power consumption
in relation to IT equipment power consumption.
SLA Service-level agreement, typically a contracted
assurance of response time or performance of an
application.
SMB Small and medium business, companies typically with fewer than
1000 employees.
SMI-S Storage Management Initiative Specification, a SNIA
standard based on CIM/WBEM for managing
heterogeneous storage infrastructures.
SNIA Storage Networking Industry Association, a standards
body focused on data storage hardware and software.
SNS Simple name server, a Fibre Channel switch feature
that maintains a database of attached devices and
capabilities to streamline device discovery.
Solid state storage A storage device based on flash or other static memory
technology that emulates conventional spinning disk
media.
SONET Synchronous Optical Networking, a WAN transport technology for
multiplexing multiple protocols over a fiber optic infrastructure.
Server A compute platform used to host one or more
applications for client access.
Server platform Hardware (typically CPU, memory, and I/O) used to
support file or application access.
Server virtualization Software or firmware that enables multiple instances
of an operating system and applications to be run on a
single hardware platform.
sFlow An IETF specification for performing network packet
captures at line speed for diagnostics and analysis.
Single Initiator Zoning A method of securing traffic on a Fibre Channel
fabric so that only the storage targets used by a host initiator can
connect to that initiator.
Snapshot A point-in-time copy of a data set or volume used to
restore data to a known good state in the event of data
corruption or loss.
SPOF Single point of failure.
Storage taxonomy A hierarchical categorization of storage networking
products based on capacity, availability, port count,
and other attributes. A storage taxonomy is required
for the development of energy efficiency metrics so
that products in a similar class can be evaluated.


Storage virtualization Technology that enables multiple storage arrays
to be logically managed as a single storage pool.
Synchronous Data Replication For storage, writing the same data to two
separate storage systems on a write-by-write basis so that identical
copies of current data are maintained, typically used for metro distance
disaster recovery.
T3 A 45 Mbps WAN link speed.
Target A SCSI target within a storage device that
communicates with a host SCSI initiator.
TCP/IP Transmission Control Protocol/Internet Protocol, used
to move data in a network (IP) and to move data
between cooperating computer applications (TCP). The
Internet commonly relies on TCP/IP.
Terabyte 1000 gigabytes
Thin provisioning Allocating less physical storage to an application than
is indicated by the virtual volume size.
Tiers Often applied to storage to indicate different cost/
performance characteristics and the ability to
dynamically move data between tiers based on a policy
such as ILM.
ToR Top of rack, provides network connectivity for a rack of
equipment by provisioning one or more switches in the
upper slots of each rack.
TRILL Transparent Interconnection of Lots of Links, an emerging IETF
standard to enable multiple active paths through a Layer 2 network
infrastructure.
Target In storage, a storage device or system that receives
and executes storage I/O requests from a server or
host.
Three-tier architecture A network design that incorporates access,
aggregation, and core layers to accommodate growth and maintain
performance.
Top Talkers A Brocade technology for identifying the most active
initiators in a storage network.
Trunking In Fibre Channel, a means to combine multiple inter-
switch links (ISLs) to create a faster virtual link.
TWG Technical Working Group, commonly formed to define
open, publicly available technology standards.
Type 1 virtualization Server virtualization in which the hypervisor runs
directly on the hardware.
Type 2 virtualization Server virtualization in which the hypervisor runs
inside an instance of an operating system.


U A unit of vertical space (1.75 inches) used to measure how much rack
space a piece of equipment requires, sometimes expressed as RU (Rack Unit).
UPS Uninterruptible power supply
uRPF Unicast Reverse Path Forwarding, an IETF specification
for blocking packets from unauthorized network
addresses.
VCS Virtual Cluster Switching, a new class of Brocade-
developed technologies that overcomes the limitations
of conventional Ethernet networking by applying non-
stop operations, any-to-any connectivity, and the
intelligence of fabric switching.
VLAN Virtual LAN, an IEEE standard that enables multiple
hosts to be configured as a single network regardless
of their physical location.
VM Virtual machine, one of many instances of a virtual
operating system and applications hosted on a
physical server.
VoIP Voice over IP. A method of carrying telephone traffic
over an IP network.
VRF Virtual Routing and Forwarding, a means to enable a
single physical router to maintain multiple separate
routing tables and thus appear as multiple logical
routers.
VRRP Virtual Router Redundancy Protocol, an IETF specification that
enables multiple routers to be configured as a single virtual router to
provide resiliency in the event of a link or route failure.
VSRP Virtual Switch Redundancy Protocol, a Brocade value-
added protocol to enhance network resilience and
recovery from a link or switch failure.
Virtual Fabrics An ANSI standard to create separate logical fabrics
within a single physical SAN infrastructure, often
spanning multiple switches.
Virtualization Technology that provides a logical abstraction layer
between the administrator or user and the physical IT
infrastructure.
WAN Wide area network, a network that can span the globe; WANs commonly
employ TCP/IP networking protocols.
WWN World Wide Name, a unique 64-bit identifier assigned
to a Fibre Channel initiator or target.
Work cell A unit of rack-mounted IT equipment used to calculate
energy consumption, developed by Intel.


Zettabyte 1000 exabytes


Zoning A Fibre Channel standard for assigning specific
initiators and targets as part of a separate group
within a shared storage network infrastructure.



Index

Symbols
"Securing Fibre Channel Fabrics" by Roger Bouchard 55

A
access control lists (ACLs) 27, 57
Access Gateway 22, 28
access layer 71
  cabling 72
  oversubscription 72
Adaptive Networking services 48
Address Resolution Protocol (ARP) spoofing 78
aggregation layer 71
  functions 74
air conditioning 5
air flow systems 5
ambient temperature 10, 14
American National Standards Institute T11.5 84
ANSI/INCITS T11.5 standard 41
ANSI/TIA-942 Telecommunications Infrastructure Standard for Data Centers 2, 3
application delivery controllers 80
application load balancing 81, 85
  performance 82
ASHRAE Thermal Guidelines for Data Processing Environments 10
asynchronous data replication 65
Automatic Migration of Port Profiles (AMPP) 120

B
backup 59
Bidirectional Forwarding Detection (BFD) 77
blade servers 21
  storage access 28
  VMs 22
blade.org 22
blanking plates 13
boot from SAN 24
boot LUN discovery 25
Brocade Management Pack for Microsoft Service Center Virtual Machine Manager 86
Brocade Network Advisor 119
Brocade One 117
Brocade Virtual Access Layer (VAL) 118
Brocade Virtual Cluster Switching (VCS) 118
BTU (British Thermal Units) per hour (h) 10

C
CFC (chlorofluorocarbon) 14
computer room air conditioning (CRAC) 4, 14
consolidation
  data centers 70
  server 21
converged fabrics 118, 119
cooling 14
cooling towers 15
core layer 71
  functions 74
customer-centric approach 117

D
dark fiber 65, 67
Data Center Bridging (DCB) 119
data center consolidation 46, 48
data center evolution 117
Data Center Infrastructure Efficiency (DCiE) 6
data center LAN
  bandwidth 69
  consolidation 76
  design 75
  infrastructure 70
  security 77
  server platforms 72
data encryption 56
data encryption for data-at-rest 27
decommissioned equipment 13
dehumidifiers 14
denial of service (DoS) attacks 77
dense wavelength division multiplexing (DWDM) 65, 67
Device Connection Control (DCC) 57
disaster recovery (DR) 65
distance extension 65
  technologies 66
distributed DoS (DDoS) attacks 77
Distributed Management Task Force (DMTF) 19, 84
dry-side economizers 15

E
economizers 14
EMC Invista software 44, 87
Emerson Power survey 1
encryption 56
  data-in-flight 27
encryption keys 56
energy efficiency 7
  Brocade DCX 54
  new technology 70
  product design 53, 79
Environmental Protection Agency (EPA) 10
EPA Energy Star 17
Ethernet networks 69
external air 14

F
F_Port Trunking 28
Fabric Application Interface Standard (FAIS) 41, 84
fabric management 119
fabric-based security 55
fabric-based storage virtualization 41
fabric-based zoning 26
fan modules 53
FastWrite acceleration 66
Fibre Channel over Ethernet (FCoE)
  compared to iSCSI 61
Fibre Channel over IP (FCIP) 62
FICON acceleration 66
floor plan 11
forwarding information base (FIB) 78
frame redirection in Brocade FOS 57
Fujitsu fiber optic system 15

G
Gartner prediction 1
Gigabit Ethernet 59
global server load balancing (GSLB) 82
Green Storage Initiative (GSI) 53
Green Storage Technical Working Group (GS TWG) 53

H
HCFC (hydrochlorofluorocarbon) 14
high-level metrics 7
Host bus adapters (HBAs) 23
hot aisle/cold aisle 11
HTTP (HyperText Transfer Protocol) 80
HTTPS (HyperText Transfer Protocol Secure) 80
humidifiers 14
humidity 10
humidity probes 15
hypervisor 18
  secure access 19

I
IEEE
  AES256-GCM encryption algorithm for tape 56
  AES256-XTS encryption algorithm for disk 56
information lifecycle management (ILM) 39
ingress rate limiting (IRL) 49
Integrated Routing (IR) 63
Intel x86 18
intelligent fabric 48
inter-chassis links (ICLs) 28
Invista software from EMC 44
IP address spoofing 78
IP network links 66
IP networks
  layered architecture 71
  resiliency 76
iSCSI 58
  Serial RDMA (iSER) 60
IT processes 83

K
key management solutions 57

L
Layer 4–7 70
Layer 4–7 switches 80
link congestion 49
logical fabrics 63
long-distance SAN connectivity 67

M
management framework 85
measuring energy consumption 12
metadata mapping 42, 43
Metro Ring Protocol (MRP) 77
Multi-Chassis Trunking (MCT) 120

N
N_Port ID Virtualization (NPIV) 24, 28
N_Port Trunking 23
network health monitoring 85
network segmentation 78

O
open systems approach 84
Open Virtual Machine Format (OVF) 84
outside air 14
ozone 14

P
particulate filters 14
Patterson and Pratt research 12
power consumption 70
power supplies 53
preferred paths 50

Q
quality of service
  application tiering 49
Quality of Service (QoS) 24, 26

R
Rapid Spanning Tree Protocol (RSTP) 77
recovery point objective (RPO) 65
recovery time objective (RTO) 65
refrigerants 14
registered state change notification (RSCN) 63
RFC 3176 standard 77
RFC 3704 (uRPF) standard 78
RFC 3768 standard 76
role-based access control (RBAC) 27
routing information base (RIB) 78

S
SAN boot 24
SAN design 45, 46
  storage-centric design 48
security
  SAN 55
  SAN security myths 55
  Web applications 81
security solutions 27
Server and StorageIO Group 78
server virtualization 18
  IP networks 69
  mainstream 86
  networking complement 79
service-level agreements (SLAs) 27
  network 80
sFlow
  RFC 3176 standard 77
simple name server (SNS) 60, 63
Site Infrastructure Energy Efficiency Ratio (SI-EER) 5
software as a service (SaaS) 70
Spanning Tree Protocol (STP) 73
standardized units of joules 9
state change notification (SCN) 45
Storage Application Services (SAS) 87
Storage Networking Industry Association (SNIA) 53
  Green Storage Power Measurement Specification 53
  Storage Management Initiative (SMI) 84
storage virtualization 35
  fabric-based 41
  metadata mapping 38
  tiered data storage 40
support infrastructure 4
Switch Connection Control (SCC) 57
synchronous data replication 65
Synchronous Optical Networking (SONET) 65

T
tape pipelining algorithms 66
temperature probes 15
The Green Grid 6
tiered data storage 40
Top Talkers 26, 51
top-of-rack access solution 73
traffic isolation (TI) 51
traffic prioritization 26
Transparent Interconnection of Lots of Links (TRILL)

U
Unicast Reverse Path Forwarding (uRPF) 78
UPS systems 3
Uptime Institute 5

V
variable speed fans 12
Virtual Cluster Switching (VCS)
  architecture 122
Virtual Fabrics (VF) 62
virtual IPs (VIPs) 79
virtual LUNs 37
virtual machines (VMs) 17
  migration 86, 120
  mobility 20
Virtual Router Redundancy Protocol (VRRP) 76
Virtual Routing and Forwarding (VRF) 78
virtual server pool 20
Virtual Switch Redundancy Protocol (VSRP) 77
virtualization
  network 79
  orchestration 84
  server 18
  storage 35
Virtualization Management Initiative (VMAN) 19
VM mobility
  IP networks 70
VRRP Extension (VRRPE) 76

W
wet-side economizers 15
work cell 12
World Wide Name (WWN) 25
