THE NEW DATA CENTER
FIRST EDITION New technologies are radically reshaping the data center

TOM CLARK

Tom Clark
1947–2010

All too infrequently we have the true privilege of knowing a friend and colleague like Tom Clark. We mourn the passing of a special person: an intelligent and articulate man, a man who was inspired as well as inspiring, a sincere and gentle person with enjoyable humor, and someone who was respected for his great achievements. We will always remember the endearing and rewarding experiences with Tom, and he will be greatly missed by those who knew him.

Mark S. Detrick

© 2010 Brocade Communications Systems, Inc. All Rights Reserved.

Brocade, the B-wing symbol, BigIron, Brocade Assurance, Brocade NET Health, Brocade One, DCFM, DCX, Extraordinary Networks, Fabric OS, FastIron, IronView, MyBrocade, NetIron, SAN Health, ServerIron, TurboIron, VCS, and Wingspan are registered trademarks or trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.

Brocade Bookshelf Series designed by Josh Judd

The New Data Center
Written by Tom Clark
Reviewed by Brook Reams
Edited by Victoria Thomas
Design and Production by Victoria Thomas
Illustrated by Jim Heuser, David Lehmann, and Victoria Thomas

Printing History: First Edition, August 2010

Important Notice
Use of this book constitutes consent to the following conditions. This book is supplied "AS IS" for informational purposes only, without warranty of any kind, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this book at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this book may require an export license from the United States government.

Brocade Corporate Headquarters, San Jose, CA USA, T: +01-408-333-8000, info@brocade.com
Brocade European Headquarters, Geneva, Switzerland, T: +41-22-799-56-40, emea-info@brocade.com
Brocade Asia Pacific Headquarters, Singapore, T: +65-6538-4700, apac-info@brocade.com

Acknowledgements
I would first of all like to thank Ron Totah, Senior Director of Marketing at Brocade and cat-herder of the Global Solutions Architects, a.k.a. Solutioneers. Ron's consistent support and encouragement for the Brocade Bookshelf projects and the Brocade TechBytes Webcast series provides sustained momentum for getting technical information into the hands of our customers.

I would also like to thank Brook Reams, Solution Architect for Applications on the Integrated Marketing team, for reviewing my draft manuscript and providing suggestions and invaluable insights on the technologies under discussion.

The real work of project management, content generation, copyediting, assembly, publication, and promotion is done by Victoria Thomas, Technical Marketing Manager at Brocade. Without Victoria's steadfast commitment, none of this material would see the light of day.

Finally, a thank you to the entire Brocade team for making this a first-class company that produces first-class products for first-class customers worldwide.

About the Author
Tom Clark was a resident SAN evangelist for Brocade who represented Brocade in industry associations, conducted seminars and tutorials at conferences and trade shows, promoted Brocade storage networking solutions, and acted as a customer liaison. A noted author and industry advocate of storage networking technology, he was a board member of the Storage Networking Industry Association (SNIA) and former Chair of the SNIA Green Storage Initiative.

With more than 20 years of experience in the IT industry, Clark held technical marketing and systems consulting positions with storage networking and other data communications companies. Prior to joining Brocade, he was Director of Solutions and Technologies for McDATA Corporation and the Director of Technical Marketing for Nishan Systems, the innovator of storage over IP technology. As a liaison between marketing, engineering, and customers, he focused on customer education and defining features that ensure productive deployment of SANs.

Clark published hundreds of articles and white papers on storage networking and is the author of Designing Storage Area Networks, Second Edition (Addison-Wesley 2003); IP SANs: A Guide to iSCSI, iFCP and FCIP Protocols for Storage Area Networks (Addison-Wesley 2001); Storage Virtualization: Technologies for Simplifying Data Storage and Management (Addison-Wesley 2005); and Strategies for Data Protection (Brocade Bookshelf, 2008).

Sadly, Tom Clark passed away in February 2010. Anyone who knew Tom knows that he was intelligent, quick, a voice of sanity and also sarcasm, and a pragmatist with a great heart. He was indeed the heart of Brocade TechBytes, a monthly Webcast he described as "a late night technical talk show," which was launched in November 2008 and is still part of Brocade's Technical Marketing program.

Contents

Preface
Chapter 1: Supply and Demand
Chapter 2: Running Hot and Cold
    Energy, Power, and Heat
    Environmental Parameters
    Rationalizing IT Equipment Distribution
    Economizers
    Monitoring the Data Center Environment
Chapter 3: Doing More with Less
    VMs Reborn
    Blade Server Architecture
    Brocade Server Virtualization Solutions
    Brocade High-Performance 8 Gbps HBAs
    Brocade 8 Gbps Switch and Director Ports
    Brocade N_Port ID Virtualization for Workload Optimization
    Brocade Virtual Machine SAN Boot
    Configuring Single Initiator/Target Zoning
    Brocade End-to-End Quality of Service
    Brocade LAN and SAN Security
    Brocade Access Gateway for Blade Frames
    Enhanced and Secure Client Access with Brocade LAN Solutions
    The Energy-Efficient Brocade DCX Backbone Platform for Consolidation
    Brocade Industry Standard SMI-S Monitoring
    Brocade Professional Services
    FCoE and Server Virtualization
Chapter 4: Into the Pool
    Optimizing Storage Capacity Utilization in the Data Center
    Building on a Storage Virtualization Foundation
    Centralizing Storage Virtualization from the Fabric
    Brocade Fabric-based Storage Virtualization
Chapter 5: Weaving a New Data Center Fabric
    Better Fewer but Better
    Intelligent by Design
    Energy Efficient Fabrics
    Safeguarding Storage Data
    Multi-protocol Data Center Fabrics
    Fabric-based Disaster Recovery
Chapter 6: The New Data Center LAN
    A Layered Architecture
    Consolidating Network Tiers
    Design Considerations
    Network Resiliency
    Consolidate to Accommodate Growth
    Network Security
    Power, Space and Cooling Efficiency
    Network Virtualization
    Application Delivery Infrastructure
Chapter 7: Orchestration
Chapter 8: Brocade Solutions Optimized for Server Virtualization
    Server Adapters
    Brocade 825/815 FC HBA
    Brocade 425/415 FC HBA
    Brocade FCoE CNAs
    Brocade 8000 Switch and FCOE10-24 Blade
    Access Gateway
    Brocade Management Pack
    Brocade ServerIron ADX
Chapter 9: Brocade SAN Solutions
    Brocade DCX Backbones (Core)
    Brocade 8 Gbps SAN Switches (Edge)
    Brocade 5300 Switch
    Brocade 5100 Switch
    Brocade 300 Switch
    Brocade VA-40FC Switch
    Brocade Encryption Switch and FS8-18 Encryption Blade
    Brocade 7800 Extension Switch and FX8-24 Extension Blade
    Brocade Optical Transceiver Modules
    Brocade Data Center Fabric Manager
Chapter 10: Brocade LAN Network Solutions
    Core and Aggregation
    Brocade BigIron RX Series
    Brocade NetIron MLX Series
    Access
    Brocade NetIron CES 2000 Series
    Brocade FastIron CX Series
    Brocade TurboIron 24X Switch
    Brocade FastIron Edge X Series
    Brocade IronView Network Manager
    Brocade Mobility
Chapter 11: Brocade One
    Evolution not Revolution
    Industry's First Converged Data Center Fabric
    Ethernet Fabric
    Distributed Intelligence
    Logical Chassis
    Dynamic Services
    The VCS Architecture
Appendix A: "Best Practices for Energy Efficient Storage Operations"
    Introduction
    Some Fundamental Considerations
    Shades of Green
    Best Practice #1: Manage Your Data
    Best Practice #2: Select the Appropriate Storage RAID Level
    Best Practice #3: Leverage Storage Virtualization
    Best Practice #4: Use Data Compression
    Best Practice #5: Incorporate Data Deduplication
    Best Practice #6: File Deduplication
    Best Practice #7: Thin Provisioning of Storage to Servers
    Best Practice #8: Leverage Resizeable Volumes
    Best Practice #9: Writeable Snapshots
    Best Practice #10: Deploy Tiered Storage
    Best Practice #11: Solid State Storage
    Best Practice #12: MAID and Slow-Spin Disk Technology
    Best Practice #13: Tape Subsystems
    Best Practice #14: Fabric Design
    Best Practice #15: File System Virtualization
    Best Practice #16: Server, Fabric and Storage Virtualization
    Best Practice #17: Flywheel UPS Technology
    Best Practice #18: Data Center Air Conditioning Improvements
    Best Practice #19: Increased Data Center Temperatures
    Best Practice #20: Work with Your Regional Utilities
    What the SNIA is Doing About Data Center Energy Usage
    About the SNIA
Appendix B: Online Sources
Glossary
Index


Figures

The ANSI/TIA-942 standard functional area connectivity.
The support infrastructure adds substantial cost and energy overhead to the data center.
Hot aisle/cold aisle equipment floor plan.
Variable speed fans enable more efficient distribution of cooling.
The concept of work cell incorporates both equipment power draw and requisite cooling.
An economizer uses the lower ambient temperature of outside air to provide cooling.
A hosted or Type 2 hypervisor.
A native or Type 1 hypervisor.
A blade server architecture centralizes shared resources while reducing individual blade server elements.
The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and 1000 IOPS.
SAN boot centralizes management of boot images and facilitates migration of virtual machines between hosts.
Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric.
Brocade SecureIron switches provide firewall traffic management and LAN security for client access to virtual server clusters.
FCoE simplifies the server cable plant by reducing the number of network interfaces required for client, peer-to-peer, and storage access.
An FCoE top-of-rack solution provides both DCB and Fibre Channel ports and provides protocol conversion to the data center SAN.
The Brocade Encryption Switch provides high-performance data encryption to safeguard data written to disk or tape.
Brocade BigIron RX platforms offer high-performance Layer 2/3 switching in three compact, energy-efficient form factors.
Conventional storage configurations often result in over- and underutilization of storage capacity across multiple storage arrays.
Storage virtualization aggregates the total storage capacity of multiple physical arrays into a single virtual pool.
The virtualization abstraction layer provides virtual targets to real hosts and virtual hosts to real targets.
Leveraging classes of storage to align data storage to the business value of data over time.
FAIS splits the control and data paths for more efficient execution of metadata mapping between virtual storage and servers.
The Brocade FA4-18 Application Blade provides line-speed metadata map execution for non-disruptive storage pooling.
A storage-centric core/edge topology provides flexibility in deploying servers and storage assets while accommodating growth over time.
Brocade DCX power consumption at full speed on an 8 Gbps port compared to the competition.
Using fabric ACLs to secure switch and device connectivity.
Using Virtual Fabrics to isolate applications and minimize fabric-wide disruptions.
Preferred paths are established through traffic isolation zones, which enforce separation of traffic through the fabric based on designated applications.
Brocade QoS gives preferential treatment to high-value applications through the fabric to ensure reliable delivery.
Ingress rate limiting enables the fabric to alleviate potential congestion by throttling the transmission rate of the offending initiator.
By monitoring traffic activity on each port, Top Talkers can identify which applications would most benefit from Adaptive Networking services.
The Brocade Encryption Switch provides secure encryption for disk or tape.
Integrating formerly standalone mid-tier servers into the data center fabric with an iSCSI blade in the Brocade DCX.
Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facilitate a compact, high-performance FCoE deployment.
IR facilitates resource sharing between physically independent SANs for mirroring and data migration.
Long-distance connectivity options using Brocade devices.
Access, aggregation, and core layers in the data center network.
Access layer switch placement is determined by availability, port density, and cable strategy.
Network infrastructure typically contributes only 10% to 15% of total data center IT equipment power usage.
A Brocade BigIron RX Series switch consolidates connectivity in a more energy efficient footprint.
Application congestion (traffic shown as a dashed line) on a Web-based enterprise application infrastructure.
Application workload balancing, protocol processing offload and security via the Brocade ServerIron ADX.
Open systems-based orchestration between virtualization domains.
Brocade Management Pack for Microsoft Service Center Virtual Machine Manager leverages APIs between the SAN and SCVMM to trigger VM migration.
SAN Call Home events displayed in the Microsoft System Center Operations Center interface.
Brocade 825 FC 8 Gbps HBA (dual ports shown).
Brocade 415 FC 4 Gbps HBA (single port shown).
Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNA.
Brocade 8000 Switch.
Brocade FCOE10-24 Blade.
Brocade ServerIron ADX 1000.
Brocade DCX (left) and DCX-4S (right) Backbone.
Brocade 5300 Switch.
Brocade 5100 Switch.
Brocade 300 Switch.
Brocade VA-40FC Switch.
Brocade Encryption Switch.
Brocade FS8-18 Encryption Blade.
Brocade 7800 Extension Switch.
Brocade FX8-24 Extension Blade.
Brocade DCFM main window showing the topology view.
Brocade BigIron RX-16.
Brocade NetIron MLX-4.
Brocade NetIron CES 2000 switches.
Brocade FastIron CX-624S-HPOE Switch.
Brocade TurboIron 24X Switch.
Brocade FastIron Edge X 624, 24- and 48-port configurations in both Hybrid Fiber (HF) and RJ45 versions.
Brocade INM Dashboard (top) and Backup Configuration Manager (bottom).
A Brocade VCS reference network architecture.
The pillars of Brocade VCS (detailed in the next section).

Preface

Data center administrators today are facing unprecedented challenges. Business applications are shifting from conventional client/server relationships to Web-based applications, new regulations are imposing more rigorous requirements for data protection and security, energy costs continue to escalate, data center real estate is at a premium, and tighter corporate budgets are making it difficult to accommodate client demands for more applications and data storage. Since all major enterprises run their businesses on the basis of digital information, the consequences of inadequate processing power, network accessibility, or data availability can have a profound impact on the viability of the enterprise itself.

At the same time, new technologies that promise to alleviate some of these issues require both capital expenditures and a sharp learning curve to successfully integrate new solutions that can increase productivity and lower ongoing operational costs. The ability to quickly adapt new technologies to new problems is essential for creating a more flexible data center strategy that can meet both current and future requirements.

The much overused term "ecosystem" is nonetheless an accurate description of the interdependencies of technologies required for twenty-first century data center operation. No single vendor manufactures the full spectrum of hardware and software elements required to drive data center IT processing. This necessitates cooperation between data center administrators and vendors, and between the multiple vendors responsible for providing the elements that compose a comprehensive data center solution. This is especially true when each of the three major domains of IT operations (server, storage, and networking) is undergoing profound technical evolution in the form of virtualization. Not only must products be designed and tested for standards compliance and multi-vendor operability, but management between the domains must be orchestrated to ensure stable operations and coordination of tasks.

Brocade has a long and proven track record in data center network innovation and collaboration with partners to create new solutions that solve real problems while at the same time reducing deployment and operational costs. This book provides an overview of the new technologies that are radically transforming the data center into a more cost-effective corporate asset and the specific Brocade products that can help you achieve this goal.

The book is organized as follows:

• "Chapter 1: Supply and Demand" starting on page 1 examines the technological and business drivers that are forcing changes in the conventional data center paradigm. Due to increased business demands (even in difficult economic times), data centers are running out of space and power, and this in turn is driving new initiatives for server, storage, and network consolidation.
• "Chapter 2: Running Hot and Cold" starting on page 9 looks at data center power and cooling issues that threaten productivity and operational budgets. New technologies such as wet- and dry-side economizers, hot aisle/cold aisle rack deployment, and proper sizing of the cooling plant can help maximize productive use of existing real estate and reduce energy overhead.
• "Chapter 3: Doing More with Less" starting on page 17 provides an overview of server virtualization and blade server technology. Server virtualization, in particular, is moving from secondary to primary applications and requires coordination with upstream networking and downstream storage for successful implementation. Brocade has developed a suite of new technologies to leverage the benefits of server virtualization and coordinate operation between virtual machine managers and the LAN and SAN networks.
• "Chapter 4: Into the Pool" starting on page 35 reviews the potential benefits of storage virtualization for maximizing utilization of storage assets and automating life cycle management.
• "Chapter 5: Weaving a New Data Center Fabric" starting on page 45 examines the recent developments in storage networking technology, including higher bandwidth, fabric virtualization, enhanced security, and SAN extension. Brocade continues to pioneer more productive solutions for SANs and is the author or coauthor of the significant standards underlying these new technologies.
• "Chapter 6: The New Data Center LAN" starting on page 69 highlights the new challenges that virtualization and Web-based applications present to the data communications network. Products like the Brocade ServerIron ADX Series of application delivery controllers provide more intelligence in the network to offload server protocol processing and provide much higher levels of availability and security.
• "Chapter 7: Orchestration" starting on page 83 focuses on the importance of standards-based coordination between the server, storage, and network domains so that management frameworks can provide a comprehensive view of the entire infrastructure and proactively address potential bottlenecks.
• Chapters 8, 9, and 10 provide brief descriptions of Brocade products and technologies that have been developed to solve data center problems.
• "Chapter 11: Brocade One" starting on page 117 describes a new Brocade direction and innovative technologies to simplify the complexity of virtualized data centers.
• "Appendix A: Best Practices for Energy Efficient Storage Operations" starting on page 123 is a reprint of an article written by Tom Clark and Dr. Alan Yoder, NetApp, for the SNIA Green Storage Initiative (GSI).
• "Appendix B: Online Sources" starting on page 139 is a list of online resources.
• The "Glossary" starting on page 141 is a list of data center network terms and definitions.


Chapter 1: Supply and Demand
The collapse of the old data center paradigm

As in other social and economic sectors, information technology has recently found itself in the awkward position of having lived beyond its means. The seemingly endless supply of affordable real estate, electricity, data processing equipment, and technical personnel enabled companies to build large data centers to house their mainframe and open systems infrastructures and to support the diversity of business applications typical of modern enterprises. In the new millennium, however, real estate has become prohibitively expensive, the cost of energy has skyrocketed, data processing technology has become more complex, and the pool of technical talent to support new technologies is shrinking.

At the same time, the increasing dependence of companies and institutions on electronic information and communications has resulted in a geometric increase in the amount of data that must be managed and stored. Since 2000, the amount of corporate data generated worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300 exabytes, with projections of about 1 zettabyte (1000 exabytes) by 2010. This data must be stored somewhere.

The demands constantly placed on IT administrators to expand support for new applications and data are now in direct conflict with the supply of data center space and power. The installation of more servers and disk arrays to accommodate data growth is simply not sustainable as data centers run out of floor space, cooling capacity, and energy to feed additional hardware, and utilities are often incapable of increasing supply to existing facilities. Gartner predicted that by 2009, half of the world's data centers would not have sufficient power to support their applications. An Emerson Power survey projects that 96% of all data centers will not have sufficient power by 2011.

The conventional approach to data center design and operations has endured beyond its usefulness primarily due to a departmental silo effect common to many business operations. A data center administrator, for example, could specify the near-term requirements for power distribution for IT equipment, but because the utility bill was often paid by the company's facilities management, the administrator would be unaware of continually increasing utility costs. Likewise, individual business units might deploy new rich content applications, resulting in a sudden spike in storage requirements and additional load placed on the messaging network, with no proactive notification of the data center and network operators. Consequently, the technical evolution of data center design, cooling technology, and power distribution has lagged far behind the rapid development of server platforms, storage technology, networks, and applications. Twenty-first century technology now resides in twentieth century facilities that are proving too inflexible to meet the needs of the new data processing paradigm.

Although data centers have existed for over 50 years, guidelines for data center design were not codified into standards until 2005. The ANSI/TIA-942 Telecommunications Infrastructure Standard for Data Centers focuses primarily on cable plant design but also includes power distribution, cooling, cabling, and facilities layout. In addition, TIA-942 defines four basic tiers for data center classification, characterized chiefly by the degree of availability each provides:

• Tier 1. Basic data center with no redundancy
• Tier 2. Redundant components but single distribution path
• Tier 3. Concurrently maintainable with multiple distribution paths and one active
• Tier 4. Fault tolerant with multiple active distribution paths

A Tier 4 data center is obviously the most expensive to build and maintain, but fault tolerance is now essential for most data center implementations. Loss of data access is loss of business, and few companies can afford to risk unplanned outages that disrupt customers and revenue streams. A "five-nines" (99.999%) availability target that allows for only 5.26 minutes of data center downtime annually requires redundant electrical, mechanical, UPS, and generator systems. Duplication of power and cooling sources, network ports, and storage, however, both doubles the cost of the data center infrastructure and the recurring monthly cost of energy.
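The downtime allowance behind these availability figures is straightforward arithmetic. The short calculation below is purely illustrative (it is not part of the TIA-942 standard); it reproduces the 5.26-minute figure quoted above for five-nines availability.

    # Illustrative calculation of allowable annual downtime per availability level.
    MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

    for label, availability in [("Three nines", 0.999),
                                ("Four nines", 0.9999),
                                ("Five nines", 0.99999)]:
        downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
        print(f"{label} ({availability:.5f}): {downtime_minutes:.2f} minutes/year")

    # Five nines allows roughly 5.26 minutes of downtime per year,
    # which is why Tier 4 redundancy is required to achieve it.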

As shown in Figure 1, the TIA-942 standard defines the main functional areas and interconnecting cable plant for the data center. Horizontal distribution is typically subfloor for older raised-floor data centers or ceiling rack drop for newer facilities.

Figure 1. The ANSI/TIA-942 standard functional area connectivity.

The definition of primary functional areas is meant to rationalize the cable plant and equipment placement so that space is used more efficiently and ongoing maintenance and troubleshooting can be minimized. As part of the mainframe legacy, many older data centers are victims of indiscriminate cable runs, often strung reactively in response to an immediate need. The subfloors of older data centers can be clogged with abandoned bus and tag cables, which are simply too long and too tangled to remove. This impedes airflow and makes it difficult to accommodate new cable requirements.

Note that the overview in Figure 1 does not depict the additional data center infrastructure required for UPS systems (primarily battery rooms), backup generators, fire suppression equipment, humidifiers, and other facilities support systems. Although the support infrastructure represents a significant part of the data center investment, it is often over-provisioned for the actual operational power and cooling requirements of IT equipment. Even though it may be done in anticipation of future growth, over-provisioning is now a luxury that few data centers can afford.

The diagram in Figure 2 shows the basic functional areas for IT processing supplemented by the key data center support systems required for high availability data access.

Figure 2. The support infrastructure adds substantial cost and energy overhead to the data center.

Properly sizing the computer room air conditioning (CRAC) system to the proven cooling requirement is one of the first steps in getting data center power costs under control. Each unit of powered equipment has a multiplier effect on total energy draw. First, each data center element consumes electricity according to its specific load requirements, typically on a 7x24 basis. Second, each unit dissipates heat as a natural by-product of its operation, and heat removal and cooling requires additional energy draw in the form of the computer room air conditioning system. Because electronic equipment is sensitive to ambient humidity, each element also places an additional load on the humidity control system. The CRAC system itself generates heat, which also requires cooling, and depending on the design, the CRAC system may require auxiliary equipment such as cooling towers and pumps, which draw additional power, and so on. And finally, each element requires UPS support for continuous operation in the event of a power failure.
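The multiplier effect can be made concrete with a simple model. The overhead factors below are assumptions chosen purely for illustration, not measurements or vendor figures, but they show how every watt of IT load drags additional watts of cooling, humidity control, and UPS overhead along with it.

    # Hypothetical overhead factors -- assumed values, for illustration only.
    COOLING_OVERHEAD = 0.50   # watts of CRAC load per watt of IT load (assumed)
    HUMIDITY_OVERHEAD = 0.05  # humidity control share (assumed)
    UPS_OVERHEAD = 0.10       # UPS conversion and charging losses (assumed)

    def facility_draw(it_watts):
        """Estimate total facility draw for a given IT equipment load."""
        overhead = it_watts * (COOLING_OVERHEAD + HUMIDITY_OVERHEAD + UPS_OVERHEAD)
        return it_watts + overhead

    # A single 6,400-watt storage array under these assumed factors:
    print(facility_draw(6400))  # 10,560 watts at the facility meter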

Even in standby mode, the UPS draws power for monitoring controls, charging batteries, and fly-wheel operation.

As long as energy was relatively inexpensive, companies tended to simply buy additional floor space and cooling to deal with increasing IT processing demands. Little attention was paid to the efficiency of electrical distribution systems or the IT equipment they serviced. With energy now at a premium, maximizing utilization of available power by increasing energy efficiency is essential. And yet the escalating requirements for more applications, more data storage, faster performance, and higher availability continue unabated. For large data centers in particular, the steady accumulation of more servers, network infrastructure, and storage elements, and their accompanying impact on space, cooling, and energy capabilities, highlights the shortcomings of conventional data center design. Resolving this contradiction between supply and demand requires much closer attention to both the IT infrastructure and the data center architecture as elements of a common ecosystem.

Air conditioning and air flow systems typically represent about 37% of a data center's power bill. Although these systems are essential for IT operations, they are often over-provisioned in older data centers, and the original air flow strategy may not work efficiently for rack-mount open systems infrastructure. For an operational data center, retrofitting or redesigning air conditioning and flow during production may not be feasible: additional space simply may not be available, the air flow may be inadequate for sufficient cooling, and utility-supplied power may already be at its maximum.

Industry organizations have developed new metrics for calculating the energy efficiency of data centers and providing guidance for data center design and operations. The Uptime Institute, for example, has formulated a Site Infrastructure Energy Efficiency Ratio (SI-EER) to analyze the relationship between total power supplied to the data center and the power that is supplied specifically to operate IT equipment. The total facilities power input divided by the IT equipment power draw highlights the energy losses due to power conversion, inefficient hardware, heating/cooling, and other contributors. A SI-EER of 2 would indicate that for every 2 watts of energy input at the data center meter, only 1 watt drives IT equipment. By the Uptime Institute's own member surveys, a SI-EER of 2.5 is not uncommon.
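The ratio itself is simple to compute once both power figures have been metered. The sketch below uses assumed meter readings for illustration only.

    def si_eer(total_facility_kw, it_equipment_kw):
        """Site Infrastructure Energy Efficiency Ratio: total facility power / IT power."""
        return total_facility_kw / it_equipment_kw

    # Assumed meter readings, for illustration only.
    print(si_eer(1000, 500))  # 2.0 -- half of the input power reaches IT equipment
    print(si_eer(1000, 400))  # 2.5 -- the ratio the Uptime Institute reports as common

    # The Green Grid's DCiE metric, introduced below, is simply the reciprocal
    # of this ratio (IT power divided by total facility power).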

The Green Grid, a global consortium of IT companies and professionals seeking to improve energy efficiency in data centers and business computing ecosystems, has proposed a Data Center Infrastructure Efficiency (DCiE) ratio that divides the IT equipment power draw by the total data center facility power. This is essentially the reciprocal of SI-EER. With DCiE or SI-EER, it is not possible to achieve a 1:1 ratio that would enable every watt supplied to the data center to be productively used for IT processing: cooling, lighting, air flow, humidity control, backup power, power distribution losses, fire suppression, and other factors inevitably consume power, yielding a fractional ratio between the facilities power supplied and the actual power draw for IT processing. These supporting elements, however, can be managed so that productive utilization of facilities power is increased and IT processing itself is made more efficient via new technologies and better product design.

Although SI-EER and DCiE are useful tools for a top-down analysis of data center efficiency, it is difficult to support these high-level metrics with real substantiating data. It is not sufficient, for example, to simply use the manufacturer's stated power figures for specific equipment, especially since manufacturer power ratings are often based on projected peak usage and not normal operations. In addition, stated ratings cannot account for hidden inefficiencies (for example, failure to use blanking panels in 19" racks) that periodically increase the overall power draw depending on ambient conditions. The ideal solution is for facilities and IT equipment to have embedded power metering capability that can be solicited via network management frameworks. Although it may be feasible to design in metering for a new data center deployment, it is more difficult for existing environments. The alternative is to meter major data center components to establish baselines of operational power consumption.

High-level SI-EER and DCiE metrics focus on data center energy efficiency to power IT equipment. Unfortunately, this does not provide information on the energy efficiency or productivity of the IT equipment itself. Suppose that there were two data centers with equivalent IT productivity: the one drawing 50 megawatts of power to drive 25 megawatts of IT equipment would have the same DCiE as a data center drawing 10 megawatts to drive 5 megawatts of IT equipment. The IT equipment energy efficiency delta could be due to a number of different technology choices, including server virtualization, storage virtualization, data deduplication, tiered storage, more efficient power supplies and hardware design, or other elements. Having a tighter ratio between facilities power input and IT output is good, but lowering the overall input number is much better. The practical usefulness of high-level metrics is therefore dependent on underlying opportunities to increase energy efficiency in individual products and IT systems.

Data center energy efficiency has external implications as well. Currently, data centers in the US alone require the equivalent of more than 6 x 1000 megawatt power plants at a cost of approximately $3B annually. Although that represents less than 2% of US power consumption, it is still a significant and growing number. Global data center power usage is more than twice the US figure. Given that all modern commerce and information exchange is based ultimately on digitized data, the social cost in terms of energy consumption for IT processing is relatively modest. In addition, the spread of digital information and commerce has already provided environmentally friendly benefits in terms of electronic transactions for banking and finance, e-commerce for both retail and wholesale channels, electronic information retrieval, remote online employment, and other systems that have increased productivity and reduced the requirement for brick-and-mortar onsite commercial transactions. Although $3B may be a bargain for modern US society as a whole, data center managers have little opportunity to bask in the glow of external efficiencies, especially when energy costs continue to climb and energy sourcing becomes problematic. More applications and more data means either more hardware and energy draw or the adoption of new data center technologies and practices that can achieve much more with far less. Achieving higher levels of data center efficiency is now a prerequisite for meeting the continued expansion of IT processing requirements.

What differentiates the new data center architecture from the old may not be obvious at first glance. There are, after all, still endless racks of blinking lights, cabling, network infrastructure, storage arrays, and other familiar systems, and a certain chill in the air. The differences are found in the types of technologies deployed and the real estate required to house them. As we will see in subsequent chapters, the new data center is an increasingly virtualized environment. The static relationships between clients, applications, and data characteristic of conventional IT processing are being replaced with more flexible and mobile relationships that enable IT resources to be dynamically allocated when and where they are needed most. The enabling infrastructure in the form of virtual servers, virtual fabrics, and virtual storage has the added benefit of reducing the physical footprint of IT and its accompanying energy consumption. The new data center architecture thus reconciles the conflict between supply and demand by requiring less energy while supplying higher levels of IT productivity.

Chapter 2: Running Hot and Cold
Taking the heat

Dissipating the heat generated by IT equipment is a persistent problem for data center operations. Cooling systems alone can account for one third to one half of data center energy consumption. Over-provisioning the thermal plant to accommodate current and future requirements leads to higher operational costs. Under-provisioning the thermal plant to reduce costs can negatively impact IT equipment, increase the risk of equipment outages, and disrupt ongoing business operations. Resolving heat generation issues therefore requires a multi-pronged approach to address (1) the source of heat from IT equipment, (2) the amount and type of cooling plant infrastructure required, and (3) the efficiency of air flow around equipment on the data center floor to remove heat.

Energy, Power, and Heat
In common usage, energy is the capacity of a physical system to do work and is expressed in standardized units of joules (the work done by a force of one newton moving one meter along the line of direction of the force). Power, by contrast, is the rate at which energy is expended over time, with one watt of power equal to one joule of energy per second. The power of a 100-watt light bulb, for example, is equivalent to 100 joules of energy per second, and the amount of energy consumed by the bulb over a minute would be 6000 joules. Because electrical systems often consume thousands of watts, the amount of energy consumed is expressed in kilowatt hours (kWh), and in fact the kilowatt hour is the preferred unit used by power companies for billing purposes. A system that requires 10,000 watts of power would thus consume and be billed for 10 kWh of energy for each hour of operation, or 240 kWh per day, or roughly 87,600 kWh per year. By comparison, the typical American household consumes about 10,656 kWh per year.
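The energy arithmetic above generalizes to any constant load. The sketch below recomputes the figures for the 10,000-watt example; the electricity rate is an assumed value for illustration, not a figure from this book.

    HOURS_PER_DAY = 24
    DAYS_PER_YEAR = 365
    ASSUMED_RATE_PER_KWH = 0.10  # USD per kWh, assumed for illustration

    def annual_energy_kwh(watts):
        """Energy consumed by a constant load running around the clock."""
        kw = watts / 1000.0
        return kw * HOURS_PER_DAY * DAYS_PER_YEAR

    load_watts = 10_000  # the 10,000-watt system from the text
    kwh_per_year = annual_energy_kwh(load_watts)
    print(f"{kwh_per_year:,.0f} kWh/year")                      # 87,600 kWh/year
    print(f"${kwh_per_year * ASSUMED_RATE_PER_KWH:,.0f}/year")  # ~$8,760/year at the assumed rate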

Medium and large IT hardware products are typically in the 1000+ watt range. Although low-end servers may be rated at ~200 watts, higher-end enterprise servers can draw as much as 8000 watts. A large storage array can be in the 6400 watt range. Fibre Channel directors, for example, can be as efficient as 1300 watts (Brocade) or consume more than 3000 watts (competition). With the high population of servers and the requisite storage infrastructure to support them in the data center, it is not difficult to understand why data center power bills keep escalating. According to the Environmental Protection Agency (EPA), data centers in the US collectively consume the energy equivalent of approximately 6 million households, or about 61 billion kWh per year.

Energy consumption generates heat. While energy consumption is expressed in watts, heat dissipation is expressed in BTU (British Thermal Units) per hour (h). One watt is approximately 3.4 BTU/h. Because BTUs quickly add up to tens or hundreds of thousands per hour in complex systems, heat can also be expressed in therms, with one therm equal to 100,000 BTU. Your household heating bill, for example, is often listed as therms averaged per day or billing period.

Environmental Parameters
Because data centers are closed environments, ambient temperature and humidity must also be considered. ASHRAE Thermal Guidelines for Data Processing Environments provides best practices for maintaining proper ambient conditions for operating IT equipment within data centers. While legacy mainframe systems did require considerable cooling to remain within operational norms, open systems IT equipment is less demanding. Data centers typically run fairly cool at about 68 degrees Fahrenheit and 50% relative humidity. Although ASHRAE's guidelines present fairly broad allowable ranges of operation (50 to 90 degrees, 20 to 80% relative humidity), recommended ranges are still somewhat narrow (68 to 77 degrees, 40 to 55% relative humidity). Consequently, there has been a more recent trend to run data centers at higher ambient temperatures, sometimes disturbingly referred to as "Speedo" mode data center operation.
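The same kind of unit conversion translates equipment power draw into a heat load for cooling-plant sizing (1 watt is roughly 3.4 BTU/h, and one therm equals 100,000 BTU). The sketch below reuses the approximate wattages quoted above; treat the result as a rough estimate rather than a substitute for measured data.

    BTU_PER_HOUR_PER_WATT = 3.4
    BTU_PER_THERM = 100_000

    def heat_load_btu_per_hour(watts):
        """Approximate heat dissipation of a load, in BTU per hour."""
        return watts * BTU_PER_HOUR_PER_WATT

    # Approximate figures quoted in this chapter.
    rack_of_servers = 20 * 200      # twenty ~200-watt low-end servers
    storage_array = 6_400           # large storage array
    fc_director = 1_300             # energy-efficient Fibre Channel director

    total_watts = rack_of_servers + storage_array + fc_director
    total_btu_h = heat_load_btu_per_hour(total_watts)
    print(f"{total_btu_h:,.0f} BTU/h")                            # ~39,780 BTU/h
    print(f"{total_btu_h * 24 / BTU_PER_THERM:.1f} therms/day")   # ~9.5 therms/day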

The New Data Center 11 . Increasingly.Rationalizing IT Equipment Distribution Rationalizing IT Equipment Distribution Servers and network equipment are typically configured in standard 19" (wide) racks and rack enclosures. the floor plan for data center equipment distribution must also accommodate air flow for equipment cooling. Cold aisle Equipment row Hot aisle Equipment row Air flow Cold aisle Equipment row Hot aisle Figure 3. Even greater efficiency is achieved by deploying equipment with variable-speed fans. however. This requires that individual units be mounted in a rack for consistent air flow direction (all exhaust to the rear or all exhaust to the front) and that the rows of racks be arranged to exhaust into a common space. Hot aisle/cold aisle equipment floor plan. thus enabling maximum benefit for the hot/cold circulation infrastructure. called a hot aisle/cold aisle plan. as shown in Figure 3. A hot aisle/cold aisle floor plan provides greater cooling efficiency by directing cold to hot air flow for each equipment row into a common aisle. are arranged for accessibility for cabling and servicing. in turn. Each cold aisle feeds cool air for two equipment rows while each hot aisle allows exhaust for two equipment rows.

enables each unit to selectively apply cooling as needed. with more even utilization of cooling throughout the equipment rack. Equipment mounted in the upper slots is heated by their own power draw as well as the heat exhaust from the lower tiers. By convention. Variable speed fans increase or decrease their spin rate in response to changes in equipment temperature. Use of variable speed fans. This “work cell” u nit thus provides a more accurate description of what is actually required to power and cool IT equipment and. As shown in Figure 4. Research done by Michael Patterson and Annabelle Pratt of Intel leverages the hot aisle/cold aisle floor plan approach to create a metric for measuring energy consumption of IT equipment. by contrast.Chapter 2: Running Hot and Cold More even cooling Equipment at bottom is cooler Server rack with constant speed fans Server rack with variable speed fans Figure 4. cold air flow into equipment racks with constant speed fans favors the hardware mounted in the lower equipment slots and thus nearer to the cold air feed. As shown in Figure 5 Patterson and Pratt incorporate both the energy draw of the equipment mounted within a rack and the associated hot aisle/cold aisle real estate required to cool the entire rack. supposing the equipment (for example. servers) is uniform across a row. the energy consumption of a unit of IT hardware can be measured physically via use of metering equipment or approximated via use of the manufacturer's stated power rating (in watts or BTUs). 12 The New Data Center . provides a useful multiplier for calculating total energy consumption of an entire row of mounted hardware. Variable speed fans enable more efficient distribution of cooling.

Figure 5. The concept of work cell incorporates both equipment power draw and requisite cooling.

When energy was plentiful and cheap, it was often easy to overlook the basic best practices for data center hardware deployment and the simple remedies to correct inefficient air flow. Because the cooling plant itself represents such a significant share of data center energy use, however, even seemingly minor issues can quickly add up to major inefficiencies and higher energy bills.

Blanking plates, for example, are used to cover unused rack or cabinet slots and thus enforce more efficient airflow within an individual rack. Blanking plates are often ignored, however, especially when equipment is frequently moved or upgraded. Likewise, it is not uncommon to find decommissioned equipment still racked up (and sometimes actually powered on). Racked but unused equipment can disrupt air flow within a cabinet and become a trap for heat generated by active hardware. In raised floor data centers, decommissioned cabling can disrupt cold air circulation and unsealed cable cutouts can result in continuous and fruitless loss of cooling.

Economizers
Traditionally, data center cooling has been provided by large air conditioning systems (computer room air conditioning, or CRAC) that used CFC (chlorofluorocarbon) or HCFC (hydrochlorofluorocarbon) refrigerants. Since both CFCs and HCFCs are ozone depleting, current systems use ozone-friendly refrigerants to minimize broader environmental impact. Conventional CRAC systems, however, consume significant amounts of energy and may account for nearly half of a data center power bill. In addition, these systems are typically over-provisioned to accommodate data center growth and consequently incur a higher operational expense than is justified for the required cooling capacity.

Economizer technology dates to the mid-1800s but has seen a revival in response to rising energy costs. As shown in Figure 6, an economizer (in this case, a dry-side economizer) is essentially a heat exchanger that leverages cooler outside ambient air temperature to cool the equipment racks. For new data centers in temperate or colder latitudes, economizers can provide part or all of the cooling requirement.

Figure 6. An economizer uses the lower ambient temperature of outside air to provide cooling.

Use of outside air has its inherent problems, however. Data center equipment is sensitive to particulates that can build up on circuit boards and contribute to heating issues. An economizer may therefore incorporate particulate filters to scrub the external air before the air flow enters the data center. In addition, external air may be too humid or too dry for data center use. As stated above, ASHRAE recommends 40 to 55% relative humidity. Integrated humidifiers and dehumidifiers can condition the air flow to meet operational specifications for data center use.

Dry-side economizers depend on the external air supply temperature being sufficiently lower than the data center itself, and this may fluctuate seasonally. Wet-side economizers thus include cooling towers as part of the design to further condition the air supply for data center use. Cooling towers present their own complications, however, especially in more arid geographies where water resources are expensive and scarce. Ideally, economizers should leverage as many recyclable resources as possible to accomplish the task of cooling while reducing any collateral environmental impact.

Monitoring the Data Center Environment
Because vendor wattage and BTU specifications may assume maximum load conditions, using data sheet specifications or equipment label declarations does not provide an accurate basis for calculating equipment power draw or heat dissipation. An objective multi-point monitoring system for measuring heat and humidity throughout the data center is really the only means to observe and proactively respond to changes in the environment.

A number of monitoring options are available today. Some solutions provide rack-mountable systems that include both temperature and humidity probes and monitoring through a Web interface. Fujitsu, for example, offers a fiber optic system that leverages the effect of temperature on light propagation to provide a multi-point probe using a single fiber optic cable strung throughout equipment racks. Accuracy is reported to be within a half degree Celsius and within 1 meter of the measuring point. In addition, some vendors are incorporating temperature probes into their equipment design to provide continuous reporting of heat levels via management software. New monitoring software products can render a three-dimensional view of temperature distribution across the entire data center, analogous to an infrared photo of a heat source. Many monitoring systems can be retrofitted to existing data center plants so that even older sites can leverage new technologies. Although monitoring systems add cost to data center design, they are invaluable diagnostic tools for fine-tuning airflow and equipment placement to maximize cooling and keep power and cooling costs to a minimum.
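A minimal sketch of what such multi-point monitoring boils down to: compare probe readings against the ASHRAE recommended ranges cited earlier (68 to 77 degrees Fahrenheit, 40 to 55% relative humidity) and flag anything outside them. The probe names and readings below are hypothetical.

    TEMP_RANGE_F = (68.0, 77.0)
    HUMIDITY_RANGE_PCT = (40.0, 55.0)

    readings = {
        "rack-07-top": {"temp_f": 79.5, "rh_pct": 48.0},
        "rack-07-bottom": {"temp_f": 70.1, "rh_pct": 51.0},
        "cold-aisle-3": {"temp_f": 68.9, "rh_pct": 38.5},
    }

    def out_of_range(value, low, high):
        return value < low or value > high

    for probe, data in readings.items():
        alerts = []
        if out_of_range(data["temp_f"], *TEMP_RANGE_F):
            alerts.append(f"temperature {data['temp_f']} F")
        if out_of_range(data["rh_pct"], *HUMIDITY_RANGE_PCT):
            alerts.append(f"humidity {data['rh_pct']}%")
        if alerts:
            print(f"{probe}: outside recommended range ({', '.join(alerts)})")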


Chapter 3: Doing More with Less
Leveraging virtualization and blade server technologies

Of the three primary components of an IT data center infrastructure (servers, storage, and network), servers are by far the most populous and have the highest energy impact. Servers represent approximately half of the IT equipment energy cost and about a quarter of the total data center power bill. Server technology has therefore been a prime candidate for regulation via EPA Energy Star and other market-driven initiatives and has undergone a transformation in both hardware and software. In addition, multi-core processors and multi-processor motherboards have dramatically increased server processing power in a more compact footprint. Server virtualization and blade server design, for example, are distinct technologies fulfilling different goals but together have a multiplying effect on server processing performance and energy efficiency.

VMs Reborn
The concept of virtual machines dates back to mainframe days. To maximize the benefit of mainframe processing, a single physical system was logically partitioned into independent virtual machines. Each VM ran its own operating system and applications in isolation although the processor and peripherals could be shared. In today's usage, VMs typically run on open systems servers and although direct-connect storage is possible, shared storage on a SAN or NAS is the norm. Unlike previous mainframe implementations, today's virtualization software can support dozens of VMs on a single physical server. Typically, 10 or fewer VM instances are run per physical platform although more powerful server platforms can support 20 or more VMs.

The benefits of server virtualization are as obvious as the potential risks. Running 10 VMs on a single server platform eliminates the need for 9 additional servers with their associated cost, components, and accompanying power draw and heat dissipation. For data centers with hundreds or thousands of servers, virtualization offers an immediate solution for server sprawl and ever increasing costs. Like any virtualization strategy, however, the logical separation of VMs must be maintained and access to server memory and external peripherals negotiated to prevent conflicts or errors.

VMs on a single platform are hosted by a hypervisor layer which runs either directly (Type 1 or native) on the server hardware or on top of (Type 2 or hosted) the conventional operating system already running on the server hardware. In a native Type 1 virtualization implementation, the hypervisor runs directly on the server hardware as shown in Figure 7. This type of hypervisor must therefore support all CPU, memory, network and storage I/O traffic directly without the assistance of an underlying operating system. The hypervisor is consequently written to a specific CPU architecture (for open systems, typically an Intel x86 design) and associated I/O.

Figure 7. A native or Type 1 hypervisor.

Clearly, one of the benefits of native hypervisors is that overall latency can be minimized as individual VMs perform the normal functions required by their applications. With the hypervisor directly managing hardware resources, it is also less vulnerable over time to code changes or updates that might be required if an underlying OS were used.

As shown in Figure 8, a hosted or Type 2 server virtualization solution is installed on top of the host operating system. The advantage of this approach is that virtualization can be implemented on existing servers to more fully leverage existing processing power and support more applications in the same footprint. On the other hand, this hosted implementation incurs more latency than native hypervisors. Given that the host OS and hypervisor layer insert additional steps between the VMs and the lower level hardware, hosted hypervisors can readily support applications with moderate performance requirements and still achieve the objective of consolidating compute resources.

Figure 8. A hosted or Type 2 hypervisor.

The hypervisor oversees the creation and activity of its VMs to ensure that each VM has its requisite resources and does not interfere with the activity of other VMs. Because the hypervisor is now managing multiple virtual computers, it must also manage the software traps created to intercept hardware calls made by the guest OS and provide the appropriate emulation of normal OS hardware access and I/O. Without the proper management of shared memory tables by the hypervisor, for example, one VM instance could easily crash another. In addition, secure access to the hypervisor itself must be maintained. Efforts to standardize server virtualization management for stable and secure operation are being led by the Distributed Management Task Force (DMTF) through its Virtualization Management Initiative (VMAN) and through collaborative efforts by virtualization vendors and partner companies.

Server virtualization software is now available for a variety of CPUs, hardware platforms and operating systems. High-performance requirements can be met with multi-CPU platforms optimized for shared processing, memory, and I/O hardware. Adoption for mid-tier, moderate-performance applications has been enabled by the availability of economical dual-core CPUs and commodity rack-mount servers. Although server virtualization has steadily been gaining ground in large data centers, there has been some reluctance to commit the most mission-critical applications to VM implementations. Consequently, mid-tier applications have been first in line, and as these deployments become more pervasive and proven, mission-critical applications will follow.

In addition to providing a viable means to consolidate server hardware and reduce energy costs, server virtualization enables a degree of mobility unachievable via conventional server management. Because the virtual machine is now detached from the underlying physical processing, it is now possible to migrate a virtual machine from one hardware platform to another non-disruptively. If, for example, an application's performance is beginning to exceed the capabilities of its shared physical host, it can be migrated onto a less busy host or one that supports faster CPUs and I/O. This application agility, initially just an unintended by-product of migrating virtual machines, has become one of the compelling reasons to invest in a virtual server solution. With ever-changing business, workload and application priorities, the ability to quickly shift processing resources where most needed is a competitive business advantage.

Virtual machine mobility also creates new opportunities for automating application distribution within the virtual server pool and implementing policy-based procedures to enforce priority handling of select applications over others. As discussed in more detail below, communication between the virtualization manager and the fabric via APIs can, for example, enable proactive response to potential traffic congestion or changes in the state of the network infrastructure. This further simplifies management of application resources and ensures higher availability.

Blade Server Architecture
Server consolidation in the new data center can also be achieved by deploying blade server frames. Although blade servers are commonly associated with server virtualization, these are distinct technologies that have a multiplying benefit when combined. The successful development of blade server architecture has been dependent on the steady increase in CPU processing power and solving basic problems around shared power, cooling, network, storage, and I/O resources.

Blade server design strips away all but the most essential dedicated components from the motherboard and provides shared assets as either auxiliary special function blades or as part of the blade chassis hardware. As shown in Figure 9, a blade server architecture offloads all components that can be supplied by the chassis or by supporting specialized blades. The blade server itself is reduced to one or more CPUs, memory, and requisite auxiliary logic. The degree of component offload and availability of specialized blades varies from vendor to vendor, but the net result is essentially the same. Consequently, the power consumption of each blade server is dramatically reduced while power supply, fans and other elements are shared with greater efficiency. More processing power can now be packed into a much smaller space and compute resources can be managed more efficiently.

Figure 9. A blade server architecture centralizes shared resources while reducing individual blade server elements.

A standard data center rack, for example, can accommodate 42 1U conventional rack-mount servers, but 128 or more blade servers in the same space. A single rack of blade servers can therefore house the equivalent of 3 racks of conventional servers, and although the cooling requirement for a fully populated blade server rack may be greater than for a conventional server rack, it is still less than the equivalent 3 racks that would otherwise be required.

By significantly reducing the number of discrete components per processing unit, the blade server architecture achieves higher efficiencies in manufacturing, reduced consumption of resources, streamlined design and reduced overall costs of provisioning and administration. The unique value-add of each vendor's offering may leverage hot-swap capability, variable-speed fans, variable-speed CPUs, shared memory blades and consolidated network access. Brocade has long worked with the major blade server manufacturers to provide optimized Access Gateway and switch blades to centralize storage network capability, and the specific features of these products are discussed in the next section.

With limited data center real estate and increasing power costs squeezing data center budgets, adoption of blade server technology has been increasing in both large data centers and small/medium business (SMB) environments. As a 2009 survey by blade.org shows, slightly less than half of the data center respondents and approximately a third of SMB operations have already implemented blade servers, and over a third in both categories have deployment plans in place.

Although consolidation ratios of 3:1 are impressive, much higher server consolidation is achieved when blade servers are combined with server virtualization software. A fully populated data center rack of 128 blade servers, for example, could support 10 or more virtual machines per blade for a total of 1280 virtual servers. That would be the equivalent of 30 racks (at 42 servers per rack) of conventional 1U rack-mount servers running one OS instance per server. From an energy savings standpoint, that represents the elimination of over 1000 power supplies, fan units, network adapters, and other elements that contribute to higher data center power bills and cooling load. The combination of blade servers and server virtualization is therefore fairly easy to justify.

Brocade Server Virtualization Solutions
Whether on standalone servers or blade server frames, implementing server virtualization has both upstream (client) and downstream (storage) impact in the data center. The value of a server virtualization solution is thus amplified when combined with Brocade's network technology. Because Brocade offers a full spectrum of products spanning LAN, WAN and SAN, it can help ensure that a server virtualization deployment proactively addresses the new requirements of both client and storage access.
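To make the consolidation arithmetic from the blade server example above explicit: the 42-server, 128-blade, and 10-VM figures are the ones used in the text, and the sketch simply works out the equivalents.

    servers_per_rack_1u = 42
    blades_per_rack = 128
    vms_per_blade = 10

    virtual_servers = blades_per_rack * vms_per_blade            # 1280
    equivalent_1u_racks = virtual_servers / servers_per_rack_1u  # ~30

    # Each avoided 1U chassis carries at least one power supply, fan unit,
    # and network adapter, consistent with the "over 1000" figure above.
    chassis_avoided = virtual_servers - blades_per_rack

    print(f"{virtual_servers} virtual servers in one blade rack")
    print(f"Equivalent conventional racks: {equivalent_1u_racks:.0f}")
    print(f"Physical server chassis avoided: {chassis_avoided}")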

To maximize the benefits of network connectivity in a virtualized server environment, Brocade has worked with the major server virtualization solutions and managers to deliver high performance, high availability, security, energy efficiency, and streamlined management end to end. The following Brocade solutions can enhance a server virtualization deployment and help eliminate potential bottlenecks:

Brocade High-Performance 8 Gbps HBAs
In a conventional server, a host bus adapter (HBA) provides storage access for a single operating system and its applications. In a virtual server configuration, the HBA may be supporting 10 to 20 OS instances, each running its own application. High performance is therefore essential for enabling multiple virtual machines to share HBA ports without congestion.

The Brocade 815 (single port) and 825 (dual port) HBAs, shown in Figure 10, provide 8 Gbps bandwidth and 500,000 I/Os per second (IOPS) performance per port to ensure maximum throughput for shared virtualized connectivity. Brocade N_Port Trunking enables the 825 to deliver 16 Gbps of aggregate bandwidth (3200 MBps) and one million IOPS. This exceptional performance helps ensure that server virtualization configurations can expand over time to accommodate additional virtual machines without impacting the continuous operation of existing applications.

Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and one million IOPS.
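A quick back-of-the-envelope view of how the aggregate figures quoted above divide across VM instances sharing one adapter. The aggregate numbers come from the text; the VM count is an assumption within the 10-to-20 range mentioned earlier.

    aggregate_mbps = 3200
    aggregate_iops = 1_000_000
    vm_count = 20   # assumed, at the high end of "10 to 20 OS instances"

    print(f"Per-VM share with {vm_count} VMs: "
          f"{aggregate_mbps / vm_count:.0f} MBps, "
          f"{aggregate_iops / vm_count:,.0f} IOPS")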

The Brocade 815 and 825 HBAs are further optimized for server virtualization connectivity by supporting advanced intelligent services that enable end-to-end visibility and management. As discussed below, N_Port ID Virtualization (NPIV) and integrated Quality of Service (QoS) provide powerful tools for simplifying virtual machine deployments and providing proactive alerts directly to server virtualization managers.

Brocade 8 Gbps Switch and Director Ports
In virtual server environments, the need for speed does not end at the network or storage port. Because more traffic is now traversing fewer physical links, building high-performance network infrastructures is a prerequisite for maintaining non-disruptive, high-performance virtual machine traffic flows. Brocade's support of 8 Gbps ports on both switch and enterprise-class platforms enables customers to build high-performance, non-blocking storage fabrics that can scale from small VM configurations to enterprise-class data center deployments. Designing high-performance fabrics ensures that applications running on virtual machines are not exposed to bandwidth issues and can accommodate the high-volume traffic patterns required for data backup and other applications.

Brocade Virtual Machine SAN Boot
For both standalone physical servers and blade server environments, the ability to boot from the storage network greatly simplifies virtual machine deployment and migration of VM instances from one server to another. As shown in Figure 11, SAN boot centralizes management of boot images and eliminates the need for local storage on each physical server platform. When virtual machines are migrated from one hardware platform to another, the boot images can be readily accessed across the SAN via Brocade HBAs.

Figure 11. SAN boot centralizes management of boot images, facilitates migration of virtual machines between hosts, removes the need for local storage, and improves reliability and performance.

Brocade 815 and 825 HBAs provide the ability to automatically retrieve boot LUN parameters from a centralized fabric-based registry. This eliminates the error-prone manual host-based configuration scheme required by other HBA vendors. Brocade's SAN boot and boot LUN discovery thus facilitates migration of virtual machines from host to host.

Brocade N_Port ID Virtualization for Workload Optimization
In a virtual server environment, the individual virtual machine instances are unaware of physical ports since the underlying hardware has been abstracted by the hypervisor. This creates potential problems for identifying traffic flows from virtual machines through shared physical ports. NPIV is an industry standard that enables multiple Fibre Channel addresses to share a single physical Fibre Channel port. In a server virtualization environment, NPIV allows each virtual machine instance to have a unique World Wide Name (WWN) or virtual HBA port. This in turn provides a level of granularity for identifying each VM attached to the fabric for end-to-end monitoring, accounting, security, and configuration. Because the WWN is now bound to an individual virtual machine, the WWN follows the VM when it is migrated to another platform. In addition, NPIV creates the linkage required for advanced services such as QoS and zoning, as discussed in the next section.

Configuring Single Initiator/Target Zoning
Brocade has been a pioneer in fabric-based zoning to segregate fabric traffic and restrict visibility of storage resources to only authorized hosts. As a recognized best practice for server-to-storage configuration, NPIV and single initiator/target zoning ensure that individual virtual machines have access only to their designated storage assets. This minimizes configuration errors during VM migration and extends the management visibility of fabric connections to specific virtual machines.

Brocade End-to-End Quality of Service
The combination of NPIV and zoning functionality on Brocade HBAs and switches provides the foundation for higher-level fabric services, including end-to-end QoS. Because the traffic flows from each virtual machine can be identified by virtual WWN and segregated via zoning, each can be assigned a delivery priority (low, medium or high) that is enforced fabric-wide from the host connection to the storage port, as shown in Figure 12. While some applications running on virtual machines are logical candidates for QoS prioritization (for example, SQL Server), Brocade's Top Talkers management feature can help identify which VM applications may require priority treatment.

Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric. (Virtual Channels technology enables QoS at the ASIC level in the HBA; the default QoS priority is Medium; frame-level interleaving of outbound data maximizes initiator link utilization.)

Because Brocade end-to-end QoS is ultimately tied to the virtual machine's virtualized WWN address, the QoS assignment follows the VM if it is migrated from one hardware platform to another. This feature ensures that applications enjoy non-disruptive data access despite adds, moves, and changes to the downstream environment and enables administrators to more easily fulfill client service-level agreements (SLAs).
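A minimal sketch of the idea behind per-VM QoS: because each VM has its own virtual WWN via NPIV, a priority can be associated with that WWN and looked up wherever the VM happens to be running. The WWNs, VM names, and priorities below are hypothetical, and actual QoS is configured through Brocade fabric management tools, not code like this.

    QOS_LEVELS = ("low", "medium", "high")

    # Priority travels with the virtual WWN, not the physical host port.
    vm_qos = {
        "10:00:00:00:c9:aa:00:01": ("sql-server-vm", "high"),
        "10:00:00:00:c9:aa:00:02": ("web-frontend-vm", "medium"),
        "10:00:00:00:c9:aa:00:03": ("test-vm", "low"),
    }

    def priority_for(wwn):
        """Return the delivery priority bound to a virtual WWN (default medium)."""
        _, level = vm_qos.get(wwn, ("unknown", "medium"))
        assert level in QOS_LEVELS
        return level

    # The same lookup yields the same priority after the VM migrates,
    # because the virtual WWN follows the VM to its new physical platform.
    print(priority_for("10:00:00:00:c9:aa:00:01"))  # high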

Brocade LAN and SAN Security
Most companies are now subject to government regulations that mandate the protection and security of customer data transactions. Planning a virtualization deployment must therefore also account for basic security mechanisms for both client and storage access. Brocade offers a broad spectrum of security solutions, including LAN and WAN-based technologies and storage-specific SAN security features.

Brocade SecureIron products, shown in Figure 13, provide firewall traffic management and LAN security to safeguard access from clients to virtual hosts on the IP network. Brocade SAN security features include authentication via access control lists (ACLs) and role-based access control (RBAC) as well as security mechanisms for authenticating connectivity of switch ports and devices to fabrics.

Figure 13. Brocade SecureIron switches provide firewall traffic management and LAN security for client access to virtual server clusters.

For example, the Brocade Encryption Switch, shown in Figure 14, and the FS8-18 Encryption Blade for the Brocade DCX Backbone platform provide high-performance (96 Gbps) data encryption for data-at-rest. Brocade's security environment thus protects data-in-flight from client to virtual host as well as data written to disk across the SAN.

Figure 14. The Brocade Encryption Switch provides high-performance data encryption to safeguard data written to disk or tape.

Brocade Access Gateway for Blade Frames
Server virtualization software can be installed on conventional server platforms or blade server frames. Blade server form factors offer the highest density for consolidating IT processing in the data center and leverage shared resources across the backplane. To optimize storage access from blade server frames, Brocade has partnered with blade server providers to create high-performance, high-availability Access Gateway blades for Fibre Channel connectivity to the SAN. Brocade Access Gateway technology leverages NPIV to simplify virtual machine addressing and F_Port Trunking for high utilization and automatic link failover. By integrating SAN connectivity into a virtualized blade server chassis, Brocade helps to streamline deployment and simplify management while reducing overall costs.

The Energy-Efficient Brocade DCX Backbone Platform for Consolidation
With 4x the performance and over 10x the energy efficiency of other SAN directors, the Brocade DCX delivers the high performance required for virtual server implementation and can accommodate growth in VM environments in a compact footprint. The Brocade DCX supports 384 ports of 8 Gbps for a total of 3 Tbps chassis bandwidth. Ultra-high-speed inter-chassis links (ICLs) allow further expansion of the SAN core for scaling to meet the requirements of very large server virtualization deployments. The Brocade DCX is also available in a 192-port configuration (as the Brocade DCX-4S) to support medium VM configurations, while providing the same high availability, performance, and advanced SAN services.

The Brocade DCX's Adaptive Networking services for QoS, congestion detection, ingress rate limiting, and management ensure that traffic streams from virtual machines are proactively managed throughout the fabric and accommodate the varying requirements of upper-layer business applications. Adaptive Networking services provide greater agility in managing application workloads as they migrate between physical servers. The Brocade DCX is also designed to non-disruptively integrate Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) for future virtual server connectivity.

Enhanced and Secure Client Access with Brocade LAN Solutions
Brocade offers a full line of sophisticated LAN switches and routers for Ethernet and IP traffic from Layer 2/3 to Layer 4-7 application switching. This product suite is the natural complement to Brocade's robust SAN products and enables customers to build full-featured and secure networks end to end.

Brocade BigIron RX platforms, shown in Figure 15, offer high-performance Layer 2/3 switching in three compact, energy-efficient form factors. As with the Brocade DCX architecture for SANs, Brocade BigIron RX and FastIron SuperX switches incorporate best-in-class functionality and low power consumption to deliver high-performance core switching for data center LAN backbones. Brocade LAN solutions provide up to 10 Gbps throughput per port and so can accommodate the higher traffic loads typical of virtual machine environments. Brocade SecureIron switches bring advanced security protection for client access into virtualized server clusters, while Brocade ServerIron switches provide Layer 4-7 application switching and load balancing. In addition, Brocade edge switches with Power over Ethernet (PoE) support enable customers to integrate a wide variety of IP business applications, including voice over IP (VoIP), wireless access points, and security monitoring.

Figure 15. Brocade BigIron RX.

Brocade Industry Standard SMI-S Monitoring
Virtual server deployments dramatically increase the number of data flows and requisite bandwidth per physical server or blade server. This complicates server management and can impact performance and availability. Because server virtualization platforms can support dynamic migration of application workloads between physical servers, complex traffic patterns are created and unexpected congestion can occur. Brocade can proactively address these issues by integrating communication between Brocade intelligent fabric services and VM managers.

Open management standards such as the Storage Management Initiative (SMI) are the appropriate tools for integrating virtualization management platforms with fabric management services. As one of the original contributors to the SMI-S specification, Brocade is uniquely positioned to provide a truly open systems solution end to end. In addition, capacity planning, configuration management, SLA policies, encryption and security policies, and virtual machine provisioning can be integrated with Brocade fabric services such as Adaptive Networking. As the fabric monitors potential congestion on a per-VM basis, it can proactively alert virtual machine management that a workload should be migrated to a less utilized physical link. Because this diagnostic functionality is fine-tuned to the workflows of each VM, changes can be restricted to only the affected VM instances.

Brocade Professional Services
Even large companies who want to take advantage of the cost savings, consolidation and lower energy consumption characteristic of server virtualization technology may not have the staff or in-house expertise to plan and implement a server virtualization project. Many organizations fail to consider the overall impact of virtualization on the data center, and that in turn can lead to degraded application performance, inadequate data protection, and increased management complexity.

Because Brocade technology is ubiquitous in the vast majority of data centers worldwide and Brocade has years of experience in the most mission-critical IT environments, it can provide a wealth of practical knowledge and insight into the key issues surrounding client-to-server and server-to-storage data access. Brocade Professional Services has helped hundreds of customers upgrade to virtualized server infrastructures and provides a spectrum of services from virtual server assessments, audits and planning to end-to-end deployment and operation. A well-conceived and executed virtualization strategy can ensure that a virtual machine deployment achieves its budgetary goals and fulfills the prime directive to do far more with much less.

FCoE and Server Virtualization
Fibre Channel over Ethernet (FCoE) is an optional storage network interconnect for both conventional and virtualized server environments. As a means to encapsulate Fibre Channel frames in Ethernet, FCoE enables a simplified cabling solution for reducing the number of network and storage interfaces per server attachment. Given the more rigorous requirements for storage data handling and performance, however, FCoE is not intended to run on conventional Ethernet networks. Without the enhancements of DCB, standard Ethernet is too unreliable to support high-performance block storage transactions. In order to replicate the low latency, deterministic delivery, and high performance of traditional Fibre Channel, FCoE is best supported on a new, hardened form of Ethernet known as Converged Enhanced Ethernet (CEE), or Data Center Bridging (DCB).

Unlike conventional Ethernet, DCB provides much more robust congestion management and high-availability features characteristic of data center Fibre Channel. DCB replicates Fibre Channel's buffer-to-buffer credit flow control functionality via priority-based flow control (PFC) using 802.1Qbb pause frames. Instead of buffer credits, pause quanta are used to restrict traffic for a given period to relieve network congestion and avoid dropped frames. To accommodate the larger payload of Fibre Channel frames, DCB-enabled switches must also support jumbo frames so that entire Fibre Channel frames can be encapsulated in each Ethernet transmission. Other standards initiatives such as TRILL (Transparent Interconnect for Lots of Links) are being developed to enable multiple pathing through DCB-switched infrastructures.

As shown in Figure 16, FCoE simplifies the server cable plant by reducing the number of network interfaces required for client, peer-to-peer, and storage access. The combined network and storage connection is now provided by a converged network adapter (CNA) at 10 Gbps.

Figure 16. FCoE and Ethernet.
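As an illustration of how pause quanta translate into actual pause time, a pause quantum is defined as 512 bit times, so its wall-clock duration depends on link speed. The quanta value in the example is arbitrary; this is a sketch of the arithmetic, not a description of any particular switch implementation.

    BITS_PER_QUANTUM = 512

    def pause_duration_us(pause_quanta, link_gbps):
        """Pause time in microseconds for a given quanta count and link speed."""
        seconds = pause_quanta * BITS_PER_QUANTUM / (link_gbps * 1e9)
        return seconds * 1e6

    # Example: the maximum 16-bit pause value on a 10 Gbps DCB link
    # works out to roughly 3355 microseconds (about 3.4 ms).
    print(f"{pause_duration_us(0xFFFF, 10):.1f} microseconds")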

FCoE is not a replacement for conventional Fibre Channel but is an extension of Fibre Channel over a different link layer transport. The FCoE initiative has been developed in the ANSI T11 Technical Committee, which deals with FC-specific issues, and is included in a new Fibre Channel Backbone Generation 5 (FC-BB-5) specification. Because FCoE takes advantage of further enhancements to Ethernet, close collaboration has been required between ANSI T11 and the Institute of Electrical and Electronics Engineers (IEEE), which governs Ethernet and the new DCB standards. Enabling an enhanced Ethernet to carry Fibre Channel storage data as well as other data types (for example, LAN traffic, file data, Remote Direct Memory Access (RDMA), and VoIP) allows customers to simplify server connectivity and still retain the performance and reliability required for storage transactions.

Instead of provisioning a server with dual-redundant Ethernet and Fibre Channel ports (a total of four ports), servers can be configured with two DCB-enabled 10 Gigabit Ethernet (GbE) ports, as shown in Figure 17. Storage access is provided by an FCoE-capable blade in a director chassis (end of row) or by a dedicated FCoE switch (top of rack). For blade server installations in particular, this reduction in the number of interfaces greatly simplifies deployment and ongoing management of the cable plant.

Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre Channel ports and provides protocol conversion to the data center SAN.

In this example, the client, peer-to-peer, and block storage traffic share a common 10 Gbps network interface. Peer-to-peer or clustering traffic between servers in the same rack is simply switched at Layer 2 or 3, and client traffic is redirected via the LAN. The FCoE switch acts as a Fibre Channel Forwarder (FCF) and converts FCoE frames into conventional Fibre Channel frames for redirection to the fabric. From the host standpoint, the FCoE functionality appears as a conventional Fibre Channel HBA.

Like many new technologies, FCoE is often overhyped as a cure-all for pervasive IT ills. The benefit of streamlining server connectivity, however, should be balanced against the cost of deployment and the availability of value-added features that simplify management and administration. As an original contributor to the FCoE specification, Brocade has designed FCoE products that integrate with existing infrastructures so that the advantages of FCoE can be realized without adversely impacting other operations.

Brocade offers the 1010 (single port) and 1020 (dual port) CNAs, shown in Figure 18, at 10 Gbps DCB per port. The Brocade 8000 Switch provides top-of-rack connectivity for servers with 24 ports of 10 Gbps DCB and 8 ports of 8 Gbps Fibre Channel. The Fibre Channel ports support trunking for a total of 64 Gbps bandwidth, while the 10 Gbps DCB ports support standard Link Aggregation Control Protocol (LACP). Fibre Channel connectivity can be directly to storage end-devices or to existing fabrics, enabling greater flexibility for allocating storage assets to hosts. Together, the Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facilitate a compact, high-performance FCoE deployment.

Figure 18. Brocade 1010 and 1020 CNAs.


Chapter 4: Into the Pool
Transcending physical asset management with storage virtualization

Server virtualization achieves greater asset utilization by supporting multiple instances of discrete operating systems and applications on a single hardware platform. Storage virtualization, by contrast, provides greater asset utilization by treating multiple physical platforms as a single virtual asset or pool. Consequently, although storage virtualization does not provide a comparable direct benefit in terms of reduced footprint or energy consumption in the data center, it does enable a substantial benefit in productive use of existing storage capacity. This in turn often reduces the need to deploy new storage arrays, and so provides an indirect benefit in terms of continued acquisition costs, deployment, management, and energy consumption.

Optimizing Storage Capacity Utilization in the Data Center
Storage administrators typically manage multiple storage arrays, often from different vendors and each with the unique characteristics of its system. Because servers are bound to Logical Unit Numbers (LUNs) in specific storage arrays, high-volume applications may suffer from over-utilization of storage capacity while low-volume applications under-utilize their storage targets.

Figure 19. Conventional storage configurations often result in over- and under-utilization of storage capacity across multiple storage arrays.

As shown in Figure 19, the uneven utilization of storage capacity across multiple arrays puts some applications at risk of running out of disk space, while neighboring arrays still have excess idle capacity. This problem is exacerbated by server virtualization, since each physical server now supports multiple virtual machines with additional storage LUNs and a more dynamic utilization of storage space. The hard-coded assignment of storage capacity on specific storage arrays to individual servers or VMs is too inflexible to meet the requirements of the more fluid IT environments characteristic of today's data centers.

Storage virtualization solves this problem by inserting an abstraction layer between the server farm and the downstream physical storage targets. This abstraction layer can be supported on the host, within the fabric, on the storage controller, or on dedicated virtualization appliances.

Figure 20. Storage virtualization aggregates the total storage capacity of multiple physical arrays into a single virtual pool.

As illustrated in Figure 20, storage virtualization breaks the physical assignment between servers and their target LUNs. The storage capacity of each physical storage system is now assigned to a virtual storage pool from which virtual LUNs (for example, LUNs 1 through 6 at the top of the figure) can be created and dynamically assigned to servers. Because the availability of storage capacity is no longer restricted to individual storage arrays, LUN creation and sizing is dependent only on the total capacity of the virtual pool. This enables more efficient utilization of the aggregate storage space and facilitates the creation and management of dynamic volumes that can be sized to changing application requirements.

In addition to enabling more efficient use of storage assets, the abstraction layer provided by storage virtualization creates a homogenous view of storage. The physical arrays shown in Figure 20, for example, could be from any vendor and have proprietary value-added features. From the server perspective, however, the virtual storage pool is one large generic storage system. Once LUNs are created and assigned to the storage pool, vendor-specific functionality is invisible to the servers. Storage virtualization thus facilitates sharing of storage capacity among systems that would otherwise be incompatible with each other.
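A minimal sketch of the pooling idea described above: the capacity of several physical arrays is summed into one pool, and virtual LUNs are carved out of the aggregate rather than out of any single array. The array names and sizes are hypothetical.

    class StoragePool:
        def __init__(self, arrays):
            self.capacity_gb = sum(arrays.values())
            self.allocated_gb = 0
            self.virtual_luns = {}

        def create_lun(self, name, size_gb):
            """Sizing depends only on remaining pool capacity, not on any one array."""
            if self.allocated_gb + size_gb > self.capacity_gb:
                raise ValueError("pool exhausted")
            self.virtual_luns[name] = size_gb
            self.allocated_gb += size_gb

    pool = StoragePool({"array_a": 20_000, "array_b": 12_000, "array_c": 8_000})
    pool.create_lun("lun1", 15_000)   # larger than any single array, but fits the pool
    print(f"{pool.capacity_gb - pool.allocated_gb} GB free in pool")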

As with all virtualization solutions, masking the underlying complexity of physical systems does not make that complexity disappear. Instead, the abstraction layer provided by virtualization software and hardware logic (the virtualization engine) must assume responsibility for errors or changes that occur at the physical layer. In the case of storage virtualization specifically, management of backend complexity centers primarily on the maintenance of the metadata mapping required to correlate virtual storage addresses to real ones.

As shown in Figure 21, the virtualization abstraction layer provides virtual targets to real hosts and virtual hosts to real targets. The relationship between virtual and real storage LUN assignment is maintained by the metadata map. A virtual LUN of 500 GB, for example, may map to storage capacity spread across several physical arrays. Loss of the metadata mapping would mean loss of access to the real data. A storage virtualization solution must therefore guarantee the integrity of the metadata mapping and provide safeguards in the form of replication and synchronization of metadata map copies.

Figure 21. Storage virtualization proxies virtual targets (storage) and virtual initiators (servers) so that real initiators and targets can connect to the storage pool without modification using conventional SCSI commands.
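A minimal sketch of what the metadata map holds: each virtual LUN is described by an ordered list of extents on real arrays, and a virtual block address is resolved to a physical location. The extent layout is hypothetical, and a real virtualization engine would also replicate and synchronize this map, as noted above.

    # virtual LUN -> list of (array, physical_lun, start_block, block_count)
    metadata_map = {
        "vlun-500GB": [
            ("array_a", 8, 0, 400_000_000),
            ("array_c", 55, 0, 624_288_000),
        ],
    }

    def resolve(vlun, virtual_block):
        """Translate a virtual block address into a physical location."""
        offset = virtual_block
        for array, plun, start, count in metadata_map[vlun]:
            if offset < count:
                return array, plun, start + offset
            offset -= count
        raise ValueError("virtual block beyond LUN size")

    print(resolve("vlun-500GB", 450_000_000))  # lands on array_c, LUN 55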

As a data center best practice, creation of storage pools from multiple physical storage arrays should be implemented by storage class. High-end RAID arrays contribute to one virtual pool, for example, while lower performance arrays should be assigned to a separate pool. Aggregating like assets in the same pool ensures consistent performance and comparable availability for all virtual LUNs and thus minimizes problematic inconsistencies among disparate systems. In addition, there are benefits in maintaining separate classes of virtualized storage systems for applications such as lifecycle management, as will be discussed in the next section.

Building on a Storage Virtualization Foundation
Storage virtualization is an enabling technology for higher levels of data management and data protection and facilitates centralized administration and automation of storage operations. Once storage assets have been vendor-neutralized and pooled via virtualization, it is easier to overlay advanced storage services that are not dependent on vendor proprietary functionality. Data replication for remote disaster recovery, for example, no longer depends on a vendor-specific application and licensing but can be executed via a third-party solution. Vendor literature on storage virtualization is consequently often linked to snapshot technology for data protection, replication for disaster recovery, virtual tape backup, data migration, and information lifecycle management (ILM).

In the end, it's really about the upper-layer business applications, their availability and performance, and safeguarding the data they generate and process. One of the central challenges of next-generation data center design is to align infrastructure to application requirements. For data storage, aligning infrastructure to applications requires a more flexible approach to the handling and maintenance of data assets as the business value of the data itself changes over time. As shown in Figure 22, information lifecycle management can leverage virtualized storage tiers to pair the cost of virtual storage containers to the value of the data they contain. Providing that each virtual storage tier is composed of a similar class of products, each tier represents different performance and availability characteristics, burdened cost of storage, and energy consumption.

Figure 22. Leveraging classes of storage to align data storage to the business value of data over time. (Tier 1: 10x burdened cost per GB for high-value data; Tier 2: 4x for moderate-value data; Tier 3 for low-value data; Tier 4: 0.5x for archive data.)

Establishing tiers of virtual storage pools by storage class also provides the foundation for automating data migration from one level to another over time. Policy-based data migration can be triggered by a number of criteria, including frequency of access to specific data sets, the age of the data, flagging transactions as completed, or appending metadata to indicate data status. Business practice or regulatory compliance, however, can require that migrated data remain accessible within certain time frames. Tier 2 and 3 classes may not have the performance and 99.999% availability of Tier 1 systems but still provide adequate accessibility before the data can finally be retired to tape.

By migrating data from one level to another as its immediate business value declines, capacity on high-value systems is freed to accommodate new active transactions. In addition, if each tier is a virtual storage pool, maximum utilization of the storage capacity of a tier can help reduce overall costs and more readily accommodate the growth of aged data without the addition of new disk drives. Reducing or eliminating the need for manual operator intervention can significantly reduce administrative costs and enhance the return on investment (ROI) of a virtualization deployment.
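A minimal sketch of policy-based migration between virtual storage tiers, using two of the criteria named above (data age and access frequency). The thresholds and tier names are hypothetical, and a real ILM product would also honor the compliance and accessibility requirements discussed in the text.

    TIER_ORDER = ["tier1", "tier2", "tier3", "tier4"]  # tier4 = archive

    policies = {
        "tier1": {"max_age_days": 90,   "min_accesses_per_month": 100},
        "tier2": {"max_age_days": 365,  "min_accesses_per_month": 10},
        "tier3": {"max_age_days": 1825, "min_accesses_per_month": 1},
    }

    def target_tier(current_tier, age_days, accesses_per_month):
        """Demote a data set one tier when it falls outside its tier's policy."""
        rule = policies.get(current_tier)
        if rule is None:                      # already at the archive tier
            return current_tier
        if age_days > rule["max_age_days"] or accesses_per_month < rule["min_accesses_per_month"]:
            return TIER_ORDER[TIER_ORDER.index(current_tier) + 1]
        return current_tier

    print(target_tier("tier1", age_days=400, accesses_per_month=3))  # -> tier2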

Centralizing Storage Virtualization from the Fabric
Although storage virtualization can be implemented on host systems, storage controllers, dedicated appliances, or within the fabric via directors or switches, there are trade-offs for each solution in terms of performance and flexibility. Because host-based storage virtualization is deployed per server, it incurs greater overhead in terms of administration and consumes CPU cycles on each host. Implementing storage virtualization on the storage array controllers is a viable alternative, providing that the vendor can accommodate heterogeneous systems for multi-vendor environments. Dedicated storage virtualization appliances are typically deployed between multiple hosts and their storage targets, making it difficult to scale to larger configurations without performance and availability issues.

Because all storage data flows through the storage network, fabric-based storage virtualization has been a compelling solution for centralizing the virtualization function and enabling more flexibility in scaling and deployment. The central challenge for fabric-based virtualization is to achieve the highest performance while maintaining the integrity of metadata mapping and exception handling. Fabric-based virtualization is now codified in an ANSI/INCITS T11.5 standard. The Fabric Application Interface Standard (FAIS) separates the control path to a virtualization engine (typically external to the switch) and the data paths between initiators and targets.

As shown in Figure 23, the Control Path Processor (CPP) represents the virtualization intelligence layer and the FAIS interface, which provides APIs for communication between virtualization software and the switching elements embedded in a switch or director blade. The Data Path Controller (DPC) ensures that the proper connectivity is established between the servers, storage ports, and the virtual volume created via the CPP. Exceptions are forwarded to the CPP, freeing the DPC to continue processing valid transactions.

Figure 23. FAIS splits the control and data paths for more efficient execution of metadata mapping between virtual storage and servers.

Because the DPC function can be executed in an ASIC at the switch level, it is possible to achieve very high performance without impacting upper-layer applications. This is a significant benefit over host-based and appliance-based solutions. And because communication between the virtualization engine and the switch is supported by standards-based APIs, it is possible to run a variety of virtualization software solutions. The central role a switch plays in providing connectivity between servers and storage, together with the FAIS-enabled ability to execute metadata mapping for virtualization, also creates new opportunities for fabric-based services such as mirroring or data migration. With high performance and support for heterogeneous storage systems, fabric-based services can be implemented with much greater transparency than alternate approaches and can scale over time to larger deployments.

Brocade Fabric-based Storage Virtualization
Engineered to the ANSI/INCITS T11 FAIS specification, the Brocade FA4-18 Application Blade provides high-performance storage virtualization for the Brocade 48000 Director and Brocade DCX Backbone.

NOTE: Information for the Brocade DCX Backbone also includes the Brocade DCX-4S Backbone unless otherwise noted.

Line-speed metadata mapping is achieved through purpose-built components instead of relying on the general-purpose processors that other vendors use. Interoperability with existing SAN infrastructures amplifies this advantage, since any server connected to the SAN can be directed to the FA4-18 blade for virtualization services. As shown in Figure 24, compatibility with both the Brocade 48000 and Brocade DCX chassis enables the Brocade FA4-18 Application Blade to extend the benefits of Brocade energy-efficient design and high bandwidth to advanced fabric services without requiring a separate enclosure.

Figure 24. The Brocade FA4-18 Application Blade provides line-speed metadata map execution for non-disruptive storage pooling, mirroring and data migration.

The virtualization application is provided by third-party and partner solutions, including EMC Invista software. For Invista specifically, a Control Path Cluster (CPC) consisting of two processor platforms attached to the FA4-18 provides high availability and failover in the event of link or unit failure. Initial configuration of storage pools is performed on the Invista CPC and downloaded to the FA4-18 for execution. Because the virtualization functionality is driven in the fabric and under configuration control of the CPC, this solution requires no host middleware or host CPU cycles for attached servers.

For new data center design or upgrade, storage virtualization is a natural complement to server virtualization. Fabric-based storage virtualization offers the added advantage of flexibility, performance, and transparency to both servers and storage systems as well as enhanced control over the virtual environment.

Chapter 5: Weaving a New Data Center Fabric
Intelligent design in the storage infrastructure

In the early days of SAN adoption, storage networks tended to evolve spontaneously in reaction to new requirements for additional ports to accommodate new servers and storage devices. In practice, this meant acquiring new fabric switches and joining them to an existing fabric via E_Port connection, typically in a mesh configuration to provide alternate switch-to-switch links. As a result, data centers gradually built very large and complex storage networks composed of 16- or 32-port Fibre Channel switches. At a certain critical mass, however, these large multi-switch fabrics became problematic and vulnerable to fabric-wide disruptions through state change notification (SCN) broadcasts or fabric reconfigurations. For large data centers in particular, the response was to begin consolidating the fabric by deploying high-port-count Fibre Channel directors at the core and using the 16- or 32-port switches at the edge for device fan-out.

Consolidation of the fabric brings several concrete benefits, including greater stability, high performance, and the ability to accommodate growth in ports without excessive dependence on inter-switch links (ISLs) to provide connectivity. A well-conceived core/edge SAN design can provide optimum pathing between groups of servers and storage ports with similar performance requirements, while simplifying management of SAN traffic. The concept of a managed unit of SAN is predicated on the proper sizing of a fabric configuration to meet both connectivity and manageability requirements. Keeping the SAN design within rational boundaries is now facilitated with new standards and features that bring more power and intelligence to the fabric.

As with server and storage consolidation, fabric consolidation is also driven by the need to reduce the number of physical elements in the data center and their associated power requirements. Each additional switch means additional redundant power supplies, fans, cooling load, heat generation, and data center real estate. Although lower-port-count switches are still viable solutions for departmental, small-to-medium size business (SMB) and fan-out applications, high-port-density platforms such as the Brocade DCX Backbone enable more concentrated productivity in a smaller footprint and with a lower total energy budget. As with blade server frames, the trend in new data center design is therefore to architect the entire storage infrastructure for minimal physical and energy impact while accommodating inevitable growth over time. Brocade backbones are now the cornerstone for optimized data center fabric designs.

Fewer but Better
Storage area networks substantially differ from conventional data communications networks in a number of ways. A typical LAN, for example, is based on peer-to-peer communications with all end-points (nodes) sharing equal access. The underlying assumption is that any node can communicate with any other node at any time. A SAN, by contrast, cannot rely on peer-to-peer connectivity since some nodes are active (initiators/servers) and others are passive (storage targets). Storage systems do not typically communicate with each other (with the exception of disk-to-disk data replication or array-based virtualization) across the SAN. Targets also do not initiate transactions, but passively wait for an initiator to access them. In addition, storage traffic requires deterministic delivery, whereas LAN and WAN protocols are typically best-effort delivery systems.

These distinctions play a central role in the proper design of data center SANs. Consequently, storage networks must provide a range of unique services to facilitate discovery of storage targets by servers, zone or segregate traffic between designated groups of servers and their targets, restrict access to only authorized server/target pairs, and provide notifications when storage assets enter or depart the fabric. These services are not required in conventional data communication networks. Unfortunately, some vendors fail to appreciate the unique requirements of storage environments and recommend what are essentially network-centric architectures instead of the more appropriate storage-centric approach. Applying a network-centric design to storage inevitably results in a failure to provide adequate safeguards for storage traffic and a greater vulnerability to inefficiencies, disruption, or poor performance. Brocade's strategy is to promote storage-centric SAN designs that more readily accommodate the unique and more demanding requirements of storage traffic and ensure stable and highly available connectivity between servers and storage systems.

Brocade's strategy is to promote storage-centric SAN designs that more readily accommodate the unique and more demanding requirements of storage traffic and ensure stable and highly available connectivity between servers and storage systems.

A storage-centric fabric design is facilitated by concentrating key corporate storage elements at the core, while accommodating server access and departmental storage at the edge. As shown in Figure 25, the SAN core can be built with high-port-density backbone platforms.

Figure 25. A storage-centric core/edge topology provides flexibility in deploying servers and storage assets while accommodating growth over time.

The Brocade DCX Backbone, a 14U chassis with eight vertical blade slots, is also available in a 192-port 8U Brocade DCX-4S with four horizontal blade slots, with compatibility for any Brocade DCX blade. With up to 384 x 8 Gbps ports in a single chassis or up to 768 ports in a dual-chassis configuration, the core layer can support hundreds of storage ports and, depending on the appropriate fan-in ratio, thousands of servers in a single high-performance solution. Because two or even three backbone chassis can be deployed in a single 19" rack or adjacent racks, real estate is kept to a minimum. Power consumption of less than a half watt per Gbps provides over 10x the energy efficiency of comparable enterprise-class products. Doing more with less is thus realized through compact product design and engineering power efficiency down to the port.
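The watts-per-Gbps figure quoted above is easy to compute for any chassis from its nameplate data. The snippet below shows the calculation with placeholder wattages, not measured values for any specific product.

```python
# Energy efficiency expressed as watts per Gbps of port bandwidth (example figures only).

def watts_per_gbps(total_watts: float, ports: int, port_gbps: float) -> float:
    return total_watts / (ports * port_gbps)

dense_director = watts_per_gbps(total_watts=1500, ports=384, port_gbps=8)   # ~0.49 W/Gbps
legacy_switch  = watts_per_gbps(total_watts=3000, ports=64,  port_gbps=8)   # ~5.9 W/Gbps
print(f"{dense_director:.2f} vs {legacy_switch:.2f} W/Gbps "
      f"-> roughly {legacy_switch / dense_director:.0f}x difference")
```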

Departmental storage can be deployed at the edge layer, while still enabling servers to access centralized storage resources. Mission-critical servers with high-performance requirements, for example, can be attached directly to the core layer to provide the optimum path to primary storage. With 8 Gbps port connectivity and the ability to trunk multiple inter-switch links between the edge and core, this design provides the flexibility to support different bandwidth and performance needs for a wide range of business applications in a single coherent architecture. Servers and storage assets are configured to best meet the performance and traffic requirements of specific business applications.

In terms of data center consolidation, a single-rack, dual-chassis Brocade DCX configuration of 768 ports can replace 48 x 16-port or 24 x 32-port switches. In this example, consolidation provides a much more efficient use of fabric address space, centralized management and microcode version control, and a dramatic decrease in maintenance overhead, energy consumption, and cable complexity. In addition, because the Brocade DCX 8 Gbps port blades are backward compatible with 1, 2, and 4 Gbps speeds, existing devices can be integrated into a new consolidated design without expensive upgrades. Consequently, current data center best practices for storage consolidation now incorporate fabric consolidation as a foundation for shrinking the hardware footprint and its associated energy costs.

Intelligent by Design

The new data center fabric is characterized by high port density, compact footprint, low energy costs, and streamlined management, but the most significant differentiating features compared to conventional SANs revolve around increased intelligence for storage data transport. New functionality that streamlines data delivery, automates data flows, and adapts to changed network conditions both ensures stable operation and reduces the need for manual intervention and administrative oversight. Brocade has developed a number of intelligent fabric capabilities under the umbrella term of Adaptive Networking services to streamline fabric operations.

Large complex SANs, for example, typically support a wide variety of business applications, ranging from high-performance and mission-critical to moderate-performance requirements. In addition, storage-specific applications such as tape backup may share the same infrastructure as production applications. If all storage traffic types were treated with the same priority, the potential would exist for congestion and disruption of high-value applications impacted negatively by the traffic load of moderate-value applications. Brocade addresses this problem via a quality of service mechanism, which enables the storage administrator to assign priority values to different applications.
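The effect of assigning high, medium, or low priority can be sketched as a weighted scheduler: under sustained congestion, higher-priority classes are granted a larger share of the available frame slots. The model below is a simplified illustration with assumed 5:3:2 weights, not the fabric's actual scheduling algorithm.

```python
import random

# Simplified weighted scheduling among QoS classes; the 5:3:2 weights are illustrative only.
QOS_WEIGHTS = {"high": 5, "medium": 3, "low": 2}

def schedule(frames: int) -> dict:
    """Simulate which class each frame slot is granted to under sustained congestion."""
    sent = {c: 0 for c in QOS_WEIGHTS}
    classes, weights = zip(*QOS_WEIGHTS.items())
    for c in random.choices(classes, weights=weights, k=frames):
        sent[c] += 1
    return sent

print(schedule(10_000))   # roughly a 5:3:2 split of link bandwidth across the classes
```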
Figure 26. Brocade QoS gives preferential treatment to high-value applications through the fabric to ensure reliable delivery.

As shown in Figure 26, applications running on conventional or virtualized servers can be assigned high, medium, or low priority delivery through the fabric. This QoS solution guarantees that essential but lower-priority applications such as tape backup do not overwhelm mission-critical applications such as online transaction processing (OLTP). It also makes it much easier to deploy new applications over time or migrate existing virtual machines, since the QoS priority level of an application moderates its consumption of available bandwidth. When combined with the high performance and 8 Gbps port speed of Brocade HBAs, switches, directors, and backbone platforms, QoS provides an additional means to meet application requirements despite fluctuations in aggregate traffic loads.

Because traffic loads vary over time and sudden spikes in workload can occur unexpectedly, congestion on a link, particularly between the fabric and a burdened storage port, can occur. Ideally, a flow control mechanism would enable the fabric to slow the pace of traffic at the source of the problem, typically a very active server generating an atypical workload. Another Adaptive Networking service, Brocade ingress rate limiting (IRL), proactively monitors the traffic levels on all links and, when congestion is sensed on a specific link, identifies the initiating source. Ingress rate limiting allows the fabric switch to throttle the transmission rate of a server to a speed lower than the originally negotiated link speed.
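Conceptually, ingress rate limiting is a feedback loop: watch utilization on each link and, when a link stays congested, step down the permitted ingress rate of the server feeding it. The sketch below models that loop with made-up thresholds and rate steps; it is not the Fabric OS algorithm.

```python
# Toy model of ingress rate limiting: throttle the source feeding a congested link.

RATE_STEPS_GBPS = [8, 4, 2, 1]        # assumed throttle steps
CONGESTION_THRESHOLD = 0.90           # assumed utilization trigger

class ServerPort:
    def __init__(self, name: str):
        self.name = name
        self.rate_index = 0           # start at the negotiated 8 Gbps

    @property
    def rate_gbps(self) -> float:
        return RATE_STEPS_GBPS[self.rate_index]

    def throttle(self) -> None:
        if self.rate_index < len(RATE_STEPS_GBPS) - 1:
            self.rate_index += 1

def check_link(utilization: float, top_contributor: ServerPort) -> None:
    """If a storage-side link is congested, reduce the offending initiator's ingress rate."""
    if utilization >= CONGESTION_THRESHOLD:
        top_contributor.throttle()
        print(f"congestion: {top_contributor.name} limited to {top_contributor.rate_gbps} Gbps")

busy_server = ServerPort("server-42")
check_link(utilization=0.95, top_contributor=busy_server)   # -> limited to 4 Gbps
```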


Figure 27. Ingress rate limiting enables the fabric to alleviate potential congestion by throttling the transmission rate of the offending initiator. In the example shown in Figure 27, the Brocade DCX monitors potential congestion on the link to a storage array and proactively reduces the rate of transmission at the server source. If, for example, the server HBA had originally negotiated an 8 Gbps transmission rate when it initially logged in to the fabric, ingress rate limiting could reduce the transmission rate to 4 Gbps or lower, depending on the volume of traffic to be reduced to alleviate congestion at the storage port. Thus, without operator intervention, potentially disruptive congestion events can be resolved proactively, while ensuring continuous operation of all applications. Brocade's Adaptive Networking services also enable storage administrators to establish preferred paths for specific applications through the fabric and the ability to fail over from a preferred path to an alternate path if the preferred path is unavailable. This capability is especially useful for isolating certain applications such as tape backup or disk-to-disk replication to ensure that they always enter or exit on the same inter-switch link to optimize the data flow and avoid overwhelming other application streams.


Figure 28. Preferred paths are established through traffic isolation zones, which enforce separation of traffic through the fabric based on designated applications. Figure 28 illustrates a fabric with two primary business applications (ERP and Oracle) and a tape backup segment. In this example, the tape backup preferred path is isolated from the ERP and Oracle database paths so that the high volume of traffic generated by backup does not interfere with the production applications. Because the preferred path traffic isolation zone also accommodates failover to alternate paths, the storage administrator does not have to intervene manually if issues arise in a particular isolation zone. To more easily identify which applications might require specialized treatment with QoS, rate limiting, or traffic isolation, Brocade has provided a Top Talkers monitor for devices in the fabric. Top Talkers automatically monitors the traffic pattern on each port to diagnose over- or under-utilization of port bandwidth.
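A Top Talkers report is essentially a sorted view of per-port byte counters over a sampling interval. The sketch below shows the idea with invented counter values and port names; it is not the actual monitor.

```python
# Rank ports by observed throughput over a sampling interval (figures are invented).

INTERVAL_SECONDS = 10
PORT_SPEED_GBPS = 8

byte_counters = {"port14": 9_500_000_000, "port20": 2_100_000_000, "port56": 600_000_000}

def top_talkers(counters: dict, n: int = 3):
    report = []
    for port, byte_count in sorted(counters.items(), key=lambda kv: kv[1], reverse=True)[:n]:
        gbps = byte_count * 8 / INTERVAL_SECONDS / 1e9
        report.append((port, round(gbps, 2), round(100 * gbps / PORT_SPEED_GBPS, 1)))
    return report

for port, gbps, pct in top_talkers(byte_counters):
    print(f"{port}: {gbps} Gbps ({pct}% of an 8 Gbps link)")
```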


By monitoring traffic activity on each port, Top Talkers can identify which applications would most benefit from Adaptive Networking services. Applications that generate higher volumes of traffic through the fabric are primary candidates for Adaptive Networking services. Configuring additional ISLs to create a higher-performance trunk, for example, may be warranted for the busiest applications, while moderate-performance applications could continue to function quite well on conventional links. Top Talkers can help indicate when a migration might be desirable to benefit from higher bandwidth or preferred pathing. This functionality is especially useful in virtual server environments, since the deployment of new VMs or migration of VMs from one platform to another can have unintended consequences. In terms of aligning infrastructure to applications, Top Talkers allows administrators to deploy fabric resources where and when they are needed most, as shown in Figure 29.

Figure 29. Top Talkers monitoring of per-port traffic (ports 14, 20, and 56 in this example).


Energy Efficient Fabrics
In the previous era of readily available and relatively cheap energy, data center design focused more on equipment placement and convenient access than on the power requirements of the IT infrastructure. Today, many data centers simply cannot obtain additional power capacity from their utilities or are under severe budget constraints to cover ongoing operational expense. Consequently, data center managers are scrutinizing the power requirements of every hardware element and looking for means to reduce the total data center power budget. As we have seen, this is a major driver for technologies such as server virtualization and consolidation of hardware assets across the data center, including storage and storage networking. The energy consumption of data center storage systems and storage networking products has been one of the key focal points of the Storage Networking Industry Association (SNIA) in the form of the SNIA Green Storage Initiative (GSI) and Green Storage Technical Working Group (GS TWG). In January 2009, the SNIA GSI released the SNIA Green Storage Power Measurement Specification as an initial document to formulate standards for measuring the energy efficiency of different classes of storage products. For storage systems, energy efficiency can be defined in terms of watts per megabyte of storage capacity. For fabric elements, energy efficiency can be defined in watts per gigabyte per second of bandwidth. Brocade played a leading role in the formation of the SNIA GSI, participates in the GS TWG, and leads by example in pioneering the most energy-efficient storage fabric products in the market.

Achieving the greatest energy efficiency in fabric switches and directors requires a holistic view of product design so that all components are optimized for low energy draw. Enterprise switches and directors, for example, are typically provisioned with dual-redundant power supplies for high availability. From an energy standpoint, it would be preferable to operate with only a single power supply, but business availability demands redundancy for failover. Consequently, it is critical to design power supplies that have at least 80% efficiency in converting AC input power into DC output to service switch components. Likewise, the cooling efficiency of fan modules and the selection and placement of discrete components for processing elements and port cards all add to a product design optimized for high performance and low energy consumption. Typically, for every watt of power consumed for productive IT processing, another watt is required to cool the equipment. Dramatically lowering the energy consumption of fabric switches and directors therefore has a dual benefit in terms of reducing both direct power costs and indirect cooling overhead.

The Brocade DCX achieves an energy efficiency of less than a watt of power per gigabit of bandwidth. That is 10x more efficient than comparable directors on the market and frees up available power for other IT equipment. To highlight this difference in product design philosophy, in laboratory tests a fully loaded Brocade director consumed less power (4.6 Amps) than an empty chassis from a competitor (5.1 Amps). The difference in energy draw of two comparably configured directors would be enough to power an entire storage array. Energy efficient switch and director designs have a multiplier benefit as more elements are added to the SAN. Although the fabric infrastructure as a whole is a small part of the total data center energy budget, it can be leveraged to reduce costs and make better use of available power resources. As shown in Figure 30, power measurements on an 8 Gbps port at full speed show the Brocade DCX advantage.

Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port compared to the competition.
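The dual benefit of lower draw plus lower cooling can be roughed out with the one-watt-of-cooling-per-watt-of-IT rule of thumb cited above. The wattages and electricity price in the sketch below are assumptions used only to show the arithmetic.

```python
# Rough annual energy cost including cooling overhead (assumed wattages and $/kWh).

COOLING_WATTS_PER_IT_WATT = 1.0       # rule of thumb from the text
PRICE_PER_KWH = 0.10                  # assumed utility rate
HOURS_PER_YEAR = 24 * 365

def annual_cost(it_watts: float) -> float:
    total_watts = it_watts * (1 + COOLING_WATTS_PER_IT_WATT)
    return total_watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

efficient_director, legacy_director = 1500, 3000     # watts, illustrative only
saving = annual_cost(legacy_director) - annual_cost(efficient_director)
print(f"Estimated annual saving per chassis: ${saving:,.0f}")
```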


Safeguarding Storage Data
Unfortunately, SAN security has been a back-burner issue for many storage administrators due in part to several myths about the security of data centers in general. These myths (listed below) are addressed in detail in Roger Bouchard's Securing Fibre Channel Fabrics (Brocade Bookshelf) and include assumptions about data center physical security and the difficulty of hacking into Fibre Channel networks and protocols. Given that most breaches in storage security occur through operator error and lost disks or tape cartridges, however, threats to storage security are typically internal, not external, risks.

SAN Security Myths
• SAN Security Myth #1. SANs are inherently secure since they are in a closed, physically protected environment.
• SAN Security Myth #2. The Fibre Channel protocol is not well known by hackers and there are almost no avenues available to attack FC fabrics.
• SAN Security Myth #3. You can't “sniff” optical fiber without cutting it first and causing disruption.
• SAN Security Myth #4. The SAN is not connected to the Internet so there is no risk from outside attackers.
• SAN Security Myth #5. Even if fiber cables could be sniffed, there are so many protocol layers, file systems, and database formats that the data would not be legible in any case.
• SAN Security Myth #6. Even if fiber cables could be sniffed, the amount of data to capture is simply too large to capture realistically and would require expensive equipment to do so.
• SAN Security Myth #7. If the switches already come with built-in security features, why should I be concerned with implementing security features in the SAN?

The centrality of the fabric in providing both host and storage connectivity provides new opportunities for safeguarding storage data. As with other intelligent fabric services, fabric-based security mechanisms can help ensure consistent implementation of security policies and the flexibility to apply higher levels of security where they are most needed.


Because data on disk or tape is vulnerable to theft or loss, sensitive information is at risk unless the data itself is encrypted. Best practices for guarding corporate and customer information consequently mandate full encryption of data as it is written to disk or tape and a secure means to manage the encryption keys used to encrypt and decrypt the data. Brocade has developed a fabric-based solution for encrypting data-at-rest that is available as a blade for the Brocade DCX Backbone (Brocade FS8-18 Encryption Blade) or as a standalone switch (Brocade Encryption Switch). Both the 16-port encryption blade for the Brocade DCX and the 32-port encryption switch provide 8 Gbps per port for fabric or device connectivity and an aggregate 96 Gbps of hardware-based encryption throughput and 48 Gbps of data compression bandwidth. The combination of encryption and data compression enables greater efficiency in both storing and securing data.

For encryption to disk, the IEEE AES256-XTS encryption algorithm facilitates encryption of disk blocks without increasing the amount of data per block. For encryption to tape, the AES256-GCM encryption algorithm appends authenticating metadata to each encrypted data block. Because tape devices accommodate variable block sizes, encryption does not impede backup operations. From the host standpoint, both encryption processes are transparent, and due to the high performance of the Brocade encryption engine there is no impact on response time.

Figure 31. The Brocade Encryption Switch provides secure encryption for disk or tape.
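The difference between the two modes, GCM appending authentication metadata and XTS preserving block length, can be seen directly with a general-purpose crypto library. The snippet below uses the Python cryptography package purely to illustrate the two modes; it is not the Brocade encryption engine and the sample block is synthetic.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

block = os.urandom(512)                      # one 512-byte "disk block" of sample data

# AES-256-GCM (tape-style): ciphertext carries a 16-byte authentication tag.
gcm_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
gcm_ct = AESGCM(gcm_key).encrypt(nonce, block, None)
print(len(block), "->", len(gcm_ct))         # 512 -> 528: block grows by the tag

# AES-256-XTS (disk-style): length-preserving, so block size on disk is unchanged.
xts_key = os.urandom(64)                     # XTS uses two 256-bit keys
tweak = os.urandom(16)                       # in practice derived from the block address
encryptor = Cipher(algorithms.AES(xts_key), modes.XTS(tweak)).encryptor()
xts_ct = encryptor.update(block) + encryptor.finalize()
print(len(block), "->", len(xts_ct))         # 512 -> 512: no growth per block
```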

As shown in Figure 31, the Brocade Encryption Switch supports both fabric attachment and end-device connectivity. Within both the encryption blade and switch, virtual targets are presented to the hosts and virtual initiators are presented to the downstream storage array or tape subsystem ports. Frame redirection, a Fabric OS technology, is used to forward traffic to the encryption device for encryption on data writes and decryption on data reads. In the case of direct device attachment (for example, the tape device connected to the encryption switch in Figure 31), the encrypted data is simply switched to the appropriate port, based on the port or device's WWN. Key management for safeguarding and authenticating encryption keys is provided via Ethernet connection to the Brocade encryption device. Because no additional middleware is required for hosts or storage devices, this solution easily integrates into existing fabrics and can provide a much higher level of data security with minimal reconfiguration.

Brocade Key Management Solutions
• NetApp KM500 Lifetime Key Management (LKM) Appliance
• EMC RSA Key Manager (RKM) Server Appliance
• HP StorageWorks Secure Key Manager (SKM)
• Thales Encryption Manager for Storage
For more about these key management solutions, visit the Brocade Encryption Switch product page on www.brocade.com and find the Technical Briefs section at the bottom of the page.

In addition to data encryption for disk and tape, fabric-based security includes features for protecting the integrity of fabric connectivity and safeguarding management interfaces. Brocade switches and directors use access control lists (ACLs) to allow access to the fabric for only authorized switches and end devices. As shown in Figure 32, Switch Connection Control (SCC) and Device Connection Control (DCC) prevent the intentional or accidental connection of a new switch or device that would potentially pose a security threat. Once configured, the fabric is essentially locked down to prevent unauthorized access until the administrator specifically defines a new connection. Although this requires additional management intervention, it precludes disruptive fabric reconfigurations and security breaches that could otherwise occur through deliberate action or operator error.

Figure 32. Using fabric ACLs to secure switch and device connectivity.
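The lockdown behavior of connection-control ACLs can be modeled as simple membership tests against administrator-defined lists. The sketch below is a conceptual illustration with fictitious WWNs and port names, not the Fabric OS feature itself.

```python
# Conceptual model of fabric connection-control policies (WWNs are fictitious).

AUTHORIZED_SWITCHES = {"10:00:00:05:1e:01:02:03"}                  # SCC-style policy
AUTHORIZED_DEVICES = {                                             # DCC-style policy
    "port7":  {"21:00:00:24:ff:4a:bc:01"},                         # HBA allowed on port 7
    "port12": {"50:06:01:60:3b:e0:00:10"},                         # array port on port 12
}

def allow_switch_join(switch_wwn: str) -> bool:
    """Only switches on the authorized list may merge into the fabric."""
    return switch_wwn in AUTHORIZED_SWITCHES

def allow_device_login(port: str, device_wwn: str) -> bool:
    """Only devices bound to a given port may log in on that port."""
    return device_wwn in AUTHORIZED_DEVICES.get(port, set())

print(allow_switch_join("10:00:00:05:1e:99:99:99"))            # False: unknown switch rejected
print(allow_device_login("port7", "21:00:00:24:ff:4a:bc:01"))  # True: bound device accepted
```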

In high-security environments, meeting regulatory compliance standards can require encrypting all data along the entire data path, from host to the primary storage target as well as secondary storage in disaster recovery scenarios. For securing storage data-in-flight, Brocade also provides hardware-based encryption on its 8 Gbps HBAs and the Brocade 7800 Extension Switch and FX8-24 Extension Blade products. This capability is now available across the entire fabric with no impact to fabric performance or availability.

Multi-protocol Data Center Fabrics

Data center best practices have historically prescribed the separation of networks according to function. Creating a dedicated storage area network, for example, ensures that storage traffic is unimpeded by the more erratic traffic patterns typical of messaging or data communications networks. In part, this separation was facilitated by the fact that nearly all storage networks used a unique protocol and transport, that is, Fibre Channel, while LANs are almost universally based on Ethernet. This situation changed somewhat with the introduction of iSCSI for transporting SCSI block data over conventional Ethernet and TCP/IP, although most iSCSI vendors still recommend building a dedicated IP storage network for iSCSI hosts and storage.

Fibre Channel continues to be the protocol of choice for high-performance, highly available SANs. There are several reasons for this, including the ready availability of diverse Fibre Channel products and the continued evolution of the technology to higher speeds and richer functionality over time. Still, although nearly all data centers worldwide run their most mission-critical applications on Fibre Channel SANs, many data centers also house hundreds or thousands of moderate-performance standalone servers with legacy DAS. It is difficult to cost justify installation of Fibre Channel HBAs into low-cost servers if the cost of storage connectivity exceeds the cost of the server itself. iSCSI has found its niche market primarily in cost-sensitive small and medium business (SMB) environments. It offers the advantage of low-cost per-server connectivity, since iSCSI device drivers are readily available for a variety of operating systems at no cost and can be run over conventional Ethernet or (preferably) Gigabit Ethernet interfaces. The IP SAN switched infrastructure can be built with off-the-shelf, low-cost Ethernet switches. And various storage system vendors offer iSCSI interfaces for mid-range storage systems and tape backup subsystems. Of course, Gigabit Ethernet does not have the performance of 4 or 8 Gbps Fibre Channel, but for mid-tier applications Gigabit Ethernet may be sufficient and the total cost for implementing shared storage is very reasonable, even when compared to direct-attached SCSI storage.

Using iSCSI to transition from direct-attached to shared storage yields most of the benefits associated with traditional SANs. Using iSCSI connectivity, servers are no longer the exclusive “owner” of their own (direct-attached) storage, but can share storage systems over the storage network. If a particular server fails, alternate servers can bind to the failed server's LUNs and continue operation. As with conventional Fibre Channel SANs, adding storage capacity to the network is no longer disruptive and can be performed on the fly. In terms of management overhead, features such as iSCSI SAN boot can simplify server administration by centralizing management of boot images instead of touching hundreds of individual servers. In addition, the greatest benefit of converting from direct-attached to shared storage is the ability to centralize backup operations. Instead of backing up individual standalone servers, backup can now be performed across the IP SAN without disrupting client access.

One significant drawback of iSCSI, however, is that by using commodity Ethernet switches for the IP SAN infrastructure, none of the storage-specific features built into Fibre Channel fabric switches are available. Fabric login services, device discovery, simple name server (SNS) registration, automatic address assignment, zoning, and other storage services are simply unavailable in conventional Ethernet switches. iSCSI standards do include the Internet Simple Name Server (iSNS) protocol for device authentication and discovery, but iSNS must be supplied as a third-party add-on to the IP SAN. Consequently, although small iSCSI deployments can be configured manually to ensure proper assignment of servers to their storage LUNs, iSCSI is difficult to manage when scaled to larger deployments. In addition, because Ethernet switches are indifferent to the upper-layer IP protocols they carry, it is more difficult to diagnose storage-related problems that might arise.

Performance becomes less of an issue when iSCSI is run over 10 Gigabit Ethernet, but that typically requires a specialized iSCSI network interface card (NIC) with TCP offload, serial RDMA for iSCSI (iSER), 10 GbE switches, and 10 GbE storage ports. Collectively, these factors overshadow the performance difference between Gigabit Ethernet and 8 Gbps Fibre Channel. Even with these additional costs, the basic fabric and storage services embedded in Fibre Channel switches are still unavailable. The cost advantage of iSCSI at 1 GbE is therefore quickly undermined when iSCSI attempts to achieve the performance levels common to Fibre Channel.

For data center applications, low-cost iSCSI running over standard Gigabit Ethernet does make sense when standalone DAS servers are integrated into existing Fibre Channel SAN infrastructures via gateway products with iSCSI-to-Fibre Channel protocol conversion. This enables formerly standalone low-cost servers to enjoy the benefits of shared storage while advanced storage services are supplied by the fabric itself. Simplifying tape backup operations in itself is often sufficient cost justification for iSCSI integration via gateways, and if free iSCSI device drivers are used, the per-server connectivity cost is negligible. The Brocade FC4-16IP iSCSI Blade for the Brocade 48000 Director, for example, can aggregate hundreds of iSCSI-based servers for connectivity into an existing SAN, as shown in Figure 33.

Figure 33. Integrating formerly standalone mid-tier servers into the data center fabric with an iSCSI blade in the Brocade DCX.

As discussed in Chapter 3, FCoE is another multi-protocol option for integrating new servers into existing data center fabrics. Unlike the iSCSI protocol, which uses Layer 3 IP routing and TCP for packet recovery, FCoE operates at Layer 2 switching and relies on Fibre Channel protocols for recovery. FCoE is therefore much closer to native Fibre Channel in terms of protocol overhead and performance but does require an additional level of frame encapsulation and decapsulation for transport over Ethernet. Another dissimilarity to iSCSI is that FCoE requires a specialized host adapter card, a CNA that supports FCoE and 10 Gbps Data Center Bridging. To replicate the flow control and deterministic performance of native Fibre Channel, Ethernet switches between the host and target must be DCB capable. FCoE therefore does not have the obvious cost advantage of iSCSI but does offer a comparable means to simplify cabling by reducing the number of server connections needed to carry both messaging and storage traffic.

Although FCoE is being aggressively promoted by some network vendors, the cost/benefit advantage has yet to be demonstrated in practice. In fact, in current economic conditions, many customers are hesitant to adopt new technologies that have no proven track record or viable ROI. Although Brocade has developed both CNA adapters and FCoE switch products for customers who are ready to deploy them, the market will determine if simplifying server connectivity is sufficient cost justification for FCoE adoption.

At the point when 10 Gbps DCB-enabled switches and CNA technology become commoditized, FCoE will certainly become an attractive option.

Other enhanced solutions for data center fabrics include Fibre Channel over IP (FCIP) for SAN extension, Virtual Fabrics (VF), and Integrated Routing (IR). As discussed in the next section on disaster recovery, FCIP is used to extend Fibre Channel over conventional IP networks for remote data replication or remote tape backup. Virtual Fabrics protocols enable a single complex fabric to be subdivided into separate virtual SANs in order to segregate different applications and protect against fabric-wide disruptions. IR SAN routing protocols enable connectivity between two or more independent SANs for resource sharing without creating one large flat network.

Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-wide disruptions.

As shown in Figure 34, Virtual Fabrics is used to divide a single physical SAN into multiple logical SANs. Each virtual fabric behaves as a separate fabric entity, with its own simple name server (SNS), registered state change notification (RSCN), and Brocade domain. Brocade Fabric OS supports Virtual Fabrics across Brocade switch, director, and backbone platforms. Logical fabrics can span multiple switches, providing greater flexibility in how servers and storage within a logical fabric can be deployed. To isolate frame routing between the logical fabrics, VF tagging headers are applied to the appropriate frames as they are issued. The headers are then removed by the destination switch before the frames are sent on to the appropriate initiator or target. Theoretically, the VF tagging header would allow for 4096 logical fabrics on a single physical SAN configuration, although in practice only a few are typically used.

In the example shown in Figure 34, each of the three Logical Fabrics could be administered by a separate department with different storage, security, and bill-back policies. Although the total SAN configuration may be quite large, the division into separately managed logical fabrics simplifies administration while leveraging the data center's investment in SAN technology. Virtual Fabrics is a means to consolidate SAN assets while enforcing managed units of SAN.

Where Virtual Fabrics technology can be used to isolate resources on the same physical fabric, Integrated Routing (IR) is used to share resources between separate physical fabrics. Without IR, connecting two or more fabrics together would create a large flat network, analogous to bridging in LAN environments. Creating very large fabrics, however, can lead to much greater complexity in management and vulnerability to fabric-wide disruptions.

Figure 35. IR facilitates resource sharing between physically independent SANs.

As shown in Figure 35, a server on SAN A can access a storage array on SAN B (dashed line) via the SAN router. From the perspective of the server, the storage array is a local resource on SAN A. In this example, the SAN router performs network address translation to proxy the appearance of the storage array and to conform to the address space of each SAN. IR SAN routers provide both connectivity and fault isolation between separate SANs. Because each SAN is autonomous, fabric reconfigurations or RSCN broadcasts on one SAN will not adversely impact the others. Brocade products such as the Brocade 7800 Extension Switch and FX8-24 Extension Blade for the Brocade DCX Backbone provide routing capability for non-disruptive resource sharing between independent SANs.

Fabric-based Disaster Recovery

Deploying new technologies to achieve greater energy efficiency, hardware consolidation, and more intelligence in the data center fabric cannot ensure data availability if the data center itself is vulnerable to disruption or outage. Although data center facilities may be designed to withstand seismic or catastrophic weather events, a major disruption can result in prolonged outages that put business or the viability of the enterprise at risk. Consequently, most data centers have some degree of disaster recovery planning that provides either instantaneous failover to an alternate site or recovery within acceptable time frames for business resumption.

However, more recent examples of region-wide disruptions (for example, Northeast power blackouts and hurricanes Katrina and Rita in the US) have raised concerns over how far away a recovery site must be to ensure reliable failover. Disaster recovery planning today is bounded by tighter budget constraints and conventional recovery point and recovery time (RPO/RTO) objectives. Fortunately, disaster recovery technology has improved significantly in recent years and now enables companies to implement more economical disaster recovery solutions that do not burden the data center with excessive costs or administration.

The distance between primary and failover sites is also affected by the type of data protection required. Synchronous data replication ensures that every transaction is safely duplicated to a remote location. Synchronous disk-to-disk data replication, however, is limited to metropolitan distances, typically 100 miles or less, and the distance may not be sufficient to protect against regional events. Asynchronous data replication buffers multiple transactions before transmission, and so may miss the most recent transaction if a failure occurs. It does, however, tolerate extremely long-distance replication and is currently deployed for disaster recovery installations that span transoceanic and transcontinental distances. Both synchronous and asynchronous data replication over distance require some kind of wide area service such as metro dark fiber, dense wavelength division multiplexing (DWDM), Synchronous Optical Networking (SONET), or IP network, and the recurring monthly cost of WAN links is typically the most expensive operational cost in a disaster recovery implementation. To connect primary and secondary data center SANs efficiently, then, requires technology to optimize use of wide area links in order to transmit more data in less time and the flexibility to deploy long-distance replication over the most cost-effective WAN links appropriate for the application.
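Why synchronous replication is bounded to metro distances follows directly from propagation delay: every write must wait for a round trip to the remote array before it is acknowledged. The back-of-the-envelope calculation below uses the common planning figure of roughly 200,000 km/s for light in fiber; the distances are examples only.

```python
# Added write latency from synchronous replication over distance (rule-of-thumb physics).

LIGHT_IN_FIBER_KM_PER_MS = 200.0       # ~2/3 of c, a common planning figure

def sync_write_penalty_ms(distance_km: float) -> float:
    """Round-trip propagation delay the application waits on for every write."""
    return 2 * distance_km / LIGHT_IN_FIBER_KM_PER_MS

for km in (50, 160, 1000):             # ~30 mi, ~100 mi, long haul
    print(f"{km:>5} km -> +{sync_write_penalty_ms(km):.1f} ms per write")
# 1000 km adds ~10 ms to every write, which is why long-haul replication is asynchronous.
```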

Achieving maximum utilization of metro or wide area links is facilitated by combining several technologies, including high-speed bandwidth, data compression, rate limiting, port buffers, and specialized algorithms such as SCSI write acceleration and tape pipelining. For metropolitan distances suitable for synchronous disk-to-disk data replication, native Fibre Channel extension can be implemented up to 218 miles at 8 Gbps using Brocade 8 Gbps port cards in the Brocade 48000 or Brocade DCX. Even longer distances for native Fibre Channel transport are possible at lower port speeds. While the distance supported is more than adequate for synchronous applications, the 8 Gbps bandwidth ensures maximum utilization of dark fiber or MAN services. In order to avoid credit starvation at high speeds, the Brocade switch architecture allocates additional port buffers for continuous performance.

Commonly available IP network links are typically used for long-distance asynchronous data replication. Fibre Channel over IP enables Fibre Channel-originated traffic to pass over conventional IP infrastructures via frame encapsulation of Fibre Channel within TCP/IP. FCIP is now used for disaster recovery solutions that span thousands of miles and, because it uses standard IP services, is more economical than other WAN transports. Brocade has developed auxiliary technologies to achieve even higher performance over IP networks. Data compression, for example, can provide a 5x or greater increase in link capacity and so enable slower WAN links to carry more useful traffic. A 45 Megabits per second (Mbps) T3 WAN link typically provides about 4.5 Megabytes per second (MBps) of data throughput. By using data compression, the throughput can be increased to 25 MBps. This is equivalent to using far more expensive 155 Mbps OC-3 WAN links to achieve the same data throughput. Likewise, significant performance improvements over conventional IP networks can be achieved with Brocade FastWrite acceleration and tape pipelining algorithms. These features dramatically reduce the protocol overhead that would otherwise occupy WAN bandwidth and enable much faster data transfers on a given link speed. Brocade FICON acceleration provides comparable functionality for mainframe environments. Collectively, these features achieve the objectives of maximizing utilization of expensive WAN services, while ensuring data integrity for disaster recovery and remote replication applications.
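The T3 example above is simple arithmetic, and the same calculation can be applied to any link speed. In the sketch below the protocol-efficiency factor is an assumption used to reproduce the rough figures quoted in the text.

```python
# Effective replication throughput on a WAN link with compression (assumed efficiency factor).

def effective_mbytes_per_sec(link_mbps: float, compression_ratio: float,
                             protocol_efficiency: float = 0.8) -> float:
    return link_mbps / 8 * protocol_efficiency * compression_ratio

t3_raw        = effective_mbytes_per_sec(45,  compression_ratio=1)   # ~4.5 MBps
t3_compressed = effective_mbytes_per_sec(45,  compression_ratio=5)   # ~22.5 MBps
oc3_raw       = effective_mbytes_per_sec(155, compression_ratio=1)   # ~15.5 MBps
print(f"T3: {t3_raw:.1f} -> {t3_compressed:.1f} MBps with 5x compression "
      f"(OC-3 without compression: {oc3_raw:.1f} MBps)")
```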

Figure 36. Long-distance connectivity options using Brocade devices.

As shown in Figure 36, Brocade DCX and SAN extension products offer a variety of ways to implement long-distance SAN connectivity for disaster recovery and other remote implementations. For synchronous disk-to-disk data replication within a metropolitan circumference, native Fibre Channel at 8 Gbps or 10 Gbps can be driven directly from Brocade DCX ports over dark fiber or DWDM. For asynchronous replication over hundreds or thousands of miles, the Brocade 7800 and FX8-24 extension platforms convert native Fibre Channel to FCIP for transport over conventional IP network infrastructures. These solutions provide flexible options for storage architects to deploy the most appropriate form of data protection based on specific application needs. Many large data centers use a combination of extension technologies to provide both synchronous replication within metro boundaries to capture every transaction and asynchronous FCIP-based extension to more distant recovery sites as a safeguard against regional disruptions.


The New Data Center LAN
Building a cost-effective, energy-efficient, high-performance, and intelligent network

Just as data center fabrics bind application servers to storage, the data center Ethernet network brings server resources and processing power to clients. Although the fundamental principles of data center network design have not changed significantly, the network is under increasing pressure to serve more complex and varied client needs. According to the International Data Corporation (IDC), the growth of non-PC client data access is five times greater than that of conventional PC-based users, as shown by the rapid proliferation of PDAs, smart phones, and other mobile and wireless devices. This change applies to traditional in-house clients as well as external customers and puts additional pressure on both corporate intranet and Internet network access. Rich content is not simply a roadside attraction for modern business but a necessary competitive advantage for attracting and retaining customers.

Bandwidth is also becoming an issue. The convergence of voice, video, graphics, and data over a common infrastructure is a driving force behind the shift from 1 GbE to 10 GbE in most data centers. Use of multi-core processors in server platforms increases the processing power and reduces the number of requisite connections per platform, but also requires more raw bandwidth per connection. Server virtualization is having the same effect. If 20 virtual machines are now sharing the same physical network port previously occupied by one physical machine, for example, the port speed must necessarily be increased to accommodate the potential 20x increase in client requests.

Server virtualization's dense compute environment is also driving port density in the network interconnect, especially when virtualization is installed on blade servers. Physical consolidation of network connectivity is important for both rationalizing the cable plant and in providing flexibility to accommodate mobility of VMs as applications are migrated from one platform to another.

Where previously server network access was adequately served by 1 Gbps ports, top-of-rack access layer switches now must provide compact connectivity at 10 Gbps. This, in turn, requires more high-speed ports at the aggregation and core layers to accommodate higher traffic volumes.

Other trends such as software as a service (SaaS) and Web-based business applications are shifting the burden of data processing from remote or branch clients back to the data center. To maintain acceptable response times and ensure equitable service to multiple concurrent clients, Application layer (Layer 4–7) networking is therefore gaining traction as a means to balance workloads and offload networking protocol processing. By accelerating application access, more transactions can be handled in less time and with less congestion at the server front-end. Preprocessing of data flows helps offload server CPU cycles and provides higher availability. Web-based applications in particular benefit from a network-based hardware assist to ensure reliability and availability to internal and external users.

Another cost-cutting trend for large enterprises is the consolidation of multiple data centers to one or just a few larger regional data centers. Such large-scale consolidation typically involves construction of new facilities that can leverage state-of-the-art energy efficiencies such as solar power, fly-wheel technology, air economizers, and hot/cold aisle floor plans (see Figure 3 on page 11). Since the new data center network infrastructure must now support client traffic that was previously distributed over multiple data centers, deploying a high-performance LAN with advanced application support is crucial for a successful consolidation strategy. In addition, the reduction of available data centers increases the need for security throughout the network infrastructure to ensure data integrity and application availability.

The selection of new IT equipment is also an essential factor in maximizing the benefit of consolidation, maintaining availability, and reducing ongoing operational expense. Even with server consolidation, blade frames, and virtualization, servers collectively still account for the majority of data center power and cooling requirements. Network infrastructure, however, still incurs a significant power and cooling overhead, and data center managers are now evaluating power consumption as one of the key criteria in network equipment selection. Data center floor space is at a premium, and more compact, higher-port-density network switches can save valuable real estate.

A Layered Architecture

With tens of thousands of installations worldwide, data center networks have evolved into a common infrastructure built on multiple layers of connectivity. The three fundamental layers common to nearly all data center networks are the access, aggregation, and core layers. This basic architecture has proven to be the most suitable for providing flexibility, high performance, and resiliency and can be scaled from moderate to very large infrastructures.

Figure 37. Access, aggregation, and core layers in the data center network.

As shown in Figure 37, the conventional three-layer network architecture provides a hierarchy of connectivity that enables servers to communicate with each other (for cluster and HPC environments) and with external clients. Typically, higher bandwidth is provided at the aggregation and core layers to accommodate the high volume of access layer inputs, although high-performance applications may also require 10 Gbps links.

In Figure 37 on page 71, for example, the mission-critical servers could be provisioned with 10 GbE network interfaces and a 1:1 ratio for uplink. The general purpose servers, by contrast, would be adequately supported with 1 GbE network ports and a 6:1 or higher oversubscription ratio. Scalability is achieved by adding more switches at the requisite layers as the population of physical or virtual servers and volume of traffic increases over time.

The access layer provides the direct network connection to application and file servers. Server platforms vary from standalone servers to 1U rack-mount servers and blade servers with passthrough cabling or bladed Ethernet switches. Servers are typically provisioned with two or more GbE or 10 GbE network ports for redundant connectivity. Access layer switches typically provide basic Layer 2 (MAC-based) and Layer 3 (IP-based) switching for server connectivity and often have higher speed 10 GbE uplink ports to consolidate connectivity to the aggregation layer. Because servers represent the highest population of platforms in the data center, the access layer functions as the fan-in point to join many dedicated network connections to fewer but higher-speed shared connections. Unless designed otherwise, the access layer is therefore typically oversubscribed in a 6:1 or higher ratio of server network ports to uplink ports.

Access layer switches are available in a variety of port densities and can be deployed for optimal cabling and maintenance. Options for switch placement range from top of rack to middle of rack, middle of row, and end of row. As illustrated in Figure 38, top-of-rack access layer switches are typically deployed in redundant pairs with cabling run to each racked server. This is a common configuration for medium and small server farms and enables each rack to be managed as a single entity. A middle-of-rack configuration is similar but with multiple 1U switches deployed throughout the stack to further simplify cabling. For high-availability environments, larger switches with redundant power supplies and switch modules can be positioned in middle-of-row or end-of-row configurations. In these deployments, middle-of-row placement facilitates shorter cable runs, while end-of-row placement requires longer cable runs to the most distant racks. In either case, high-availability network access is enabled by the hardened architecture of HA access switches.
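The oversubscription ratios mentioned above are simply the ratio of aggregate server-facing bandwidth to uplink bandwidth. The sketch below computes it for an assumed top-of-rack configuration; the port counts are illustrative, not a recommended design.

```python
# Access-layer oversubscription: aggregate server bandwidth vs. uplink bandwidth (assumed counts).

def oversubscription(server_ports: int, server_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# 48 x 1 GbE servers behind a single 10 GbE uplink vs. 10 GbE servers with 1:1 uplinks:
general_purpose = oversubscription(48, 1, 1, 10)    # 4.8:1; fewer or slower uplinks push this past 6:1
mission_critical = oversubscription(4, 10, 4, 10)   # 1:1
print(f"general purpose {general_purpose:.1f}:1, mission critical {mission_critical:.0f}:1")
```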

Within the data center, the access layer typically supports application servers but can also be used to support in-house client workstations. In conventional use, the data center access layer supports servers, and clients and workstations are connected at the network edge.

Figure 38. Access layer switch placement is determined by availability, port density, and cable strategy.

Examples of top-of-rack access solutions include Brocade FastIron Edge Series switches. In addition to scalable server connectivity, these access switches offer multiple connectivity options (10, 100, or 1000 Mbps and 10 Gbps) and redundant features. Access layer services can also include power over Ethernet (PoE) to support voice over IP (VoIP) telecommunications systems and wireless access points for in-house clients as well as security monitoring. The ability to provide both data and power over Ethernet greatly simplifies the wiring infrastructure and facilitates resource management. For modern data centers, upstream links to the aggregation layer can be optimized for high availability in metropolitan area networks (MANs) through value-added features such as the Metro Ring Protocol (MRP) and Virtual Switch Redundancy Protocol (VSRP). As discussed in more detail later in this chapter, these features replace conventional Spanning Tree Protocol (STP) for metro and campus environments with a much faster, sub-second recovery time for failed links. Because different applications can have different performance and availability requirements, a three-tier architecture also provides flexibility in selectively deploying bandwidth and services that align with specific application requirements.

At the aggregation layer, uplinks from multiple access-layer switches are further consolidated into fewer high-availability and high-performance switches, which provide advanced routing functions and upstream connectivity to the core layer. Because the aggregation layer must support the traffic flows of potentially thousands of downstream servers, performance and availability are absolutely critical. These enterprise-class switches provide high availability and fault tolerance to ensure reliable data access. Examples of aggregation-layer switches include the Brocade BigIron RX Series (with up to 5.12 Tbps switching capacity) with Layer 2 and Layer 3 switching and the Brocade ServerIron ADX series with Layer 4–7 application switching.

As the name implies, the network core is the nucleus of the data center LAN and provides the top-layer switching between all devices connected via the aggregation and access layers. In addition to high-performance 10 Gbps Ethernet ports, the core also provides connectivity to the external corporate network, intranet, and Internet. Core switches can be provisioned with OC-12 or higher WAN interfaces. Examples of network core switches include the Brocade NetIron MLX Series switches with up to 7.68 Tbps switching capacity.

Consolidating Network Tiers

The access/aggregation/core architecture is not a rigid blueprint for data center networking. Although it is possible to attach servers directly to the core or aggregation layer, there are some advantages to maintaining distinct connectivity tiers. In a classic three-tier model, for example, Layer 2 domains can be managed with a separate access layer linked through aggregation points. Advanced service options available for aggregation-class switches can be shared by more downstream devices connected to standard access switches. With products such as the Brocade BigIron RX Series switches, however, it is possible to collapse the functionality of a conventional multitier architecture into a smaller footprint. By providing support for 768 x 1 Gbps downstream ports and 64 x 10 Gbps upstream ports, as shown in Figure 39, consolidation of port connectivity can be achieved with an accompanying reduction in power draw and cooling overhead compared to a standard multi-switch design.

Figure 39. A Brocade BigIron RX Series switch consolidates connectivity in a more energy efficient footprint.

In this example, Layer 2 domains are segregated via VLANs and advanced aggregation-level services can be integrated directly into the BigIron chassis. Different-speed port cards can be provisioned to accommodate both moderate- and high-performance requirements. With up to 512 x 10 Gbps ports per chassis, the advantage of centralizing connectivity and management is complemented by reduced power consumption and consolidation of rack space.

Design Considerations

Although each network tier has unique functional requirements, the entire data center LAN must provide high availability, high performance, security for data flows, and visibility for management. For modern data center networks, proper product selection and interoperability between tiers is therefore essential for building a resilient data center network infrastructure that enables maximum utilization of resources while minimizing operational expense. A properly designed network infrastructure, in turn, is a foundation layer for building higher-level network services to automate data transport processes such as network resource allocation and proactive network management.

Consolidate to Accommodate Growth

One of the advantages of a tiered data center LAN infrastructure is that it can be expanded to accommodate growth of servers and clients by adding more switches at the appropriate layers. Unfortunately, this frequently results in the spontaneous acquisition of more and more equipment over time as network managers react to increasing demand. At some point the sheer number of network devices makes the network difficult to manage and troubleshoot, increases the complexity of the cable plant, and invariably introduces congestion points that degrade network performance.

for example. fan modules. and switching blades ensure that an individual unit can withstand component failures. higher-port-density switches can help resolve space and cooling issues in the data center. Increased port density alone. high availability is absolutely essential for day-to-day operations and best-effort delivery is no longer acceptable. Network resiliency has two major components: the high availability architecture of individual switches and the high availability design of a multi-switch network. A large multi-slot chassis that replaces 10 discrete switches. Brocade BigIron RX Series switches. For the former. are designed to scale from moderate to high port-count requirements in a single chassis for both access and aggregation layer deployment (greater than 1500 x 1 Gbps or 512 x 10 Gbps ports). a backup router automatically assumes the routing task for continued service. is not sufficient to accommodate growth if increasing the port count results in degraded performance on each port. was originally a best-effort delivery mechanism designed to function in potentially congested or lossy infrastructures (for example disruption due to nuclear exchange). redundant power supplies. Now that IP networking is the mainstream mechanism for virtually all business transactions worldwide. VRRP Extension (VRRPE) is an extension of 76 The New Data Center . Network Resiliency Early proprietary data communications networks based on SNA and 3270 protocols were predicated on high availability for remote user access to centralized mainframe applications. and it can also facilitate planning for growth. by contrast. simplifies the network management map and makes it much easier to identify traffic flows through the network infrastructure. From a management standpoint. typically within 3 seconds of failure detection.Chapter 6: The New Data Center LAN Network consolidation via larger. Consequently. If a master router fails. BigIron RX Series switches are engineered to support over 5 Tbps aggregate bandwidth to ensure that even fully loaded configurations deliver wire-speed throughput. however. For the latter. Multiple routers can be configured as a single virtual router. network consolidation significantly reduces the number of elements to configure and monitor and streamlines microcode upgrades. IP networking. redundant pathing through the network using failover links and routing protocols ensures that the loss of an individual switch or link will not result in loss of data access. for example. Resilient routing protocols such as Virtual Router Redundancy Protocol (VRRP) as defined in RFC 3768 provide a standards-based mechanism to ensure high availability access to a network subnet even if a primary router or path fails.

Timing can also be critical for Layer 2 network segments. At Layer 2, resiliency is enabled by the Rapid Spanning Tree Protocol (RSTP). Spanning tree allows redundant pathing through the network while disabling redundant links to prevent multiple loops. If a primary link fails, conventional STP can identify the failure and enable a standby link within 30 to 50 seconds. RSTP decreases the failover window to about 1 second. Because networks now carry more latency-sensitive protocols such as voice over IP, however, failover must be performed as quickly as possible to ensure uninterrupted access. Innovative protocols such as Brocade Virtual Switch Redundancy Protocol (VSRP) and Metro Ring Protocol (MRP), for example, can accelerate the failover process to a sub-second response time. In addition to enhanced resiliency, VSRP enables more efficient use of network resources by allowing a link that is in standby or blocked mode for one VLAN to be active for another VLAN.

Network Security

Data center network administrators must now assume that their networks are under a constant threat of attack from both internal and external sources. Attack mechanisms such as denial of service (DoS) are today well understood and typically blocked by a combination of access control lists (ACLs) and rate limiting algorithms that prevent packet flooding. Unfortunately, hackers are constantly creating new means to penetrate or disable corporate and government networks, and network security requires more than the deployment of conventional firewalls. Brocade, for example, provides enhanced hardware-based, wire-speed ACL processing to block DoS and the more sinister distributed DoS (DDoS) attacks.

Continuous traffic analysis to monitor the behavior of hosts is one means to guard against intrusion. The sFlow (RFC 3176) standard defines a process for sampling network traffic at wire speed without impacting network performance. Packet sampling is performed in hardware by switches and routers in the network and samples are forwarded to a central sFlow server or collector for analysis. Abnormal traffic patterns or host behavior can then be identified and proactively responded to in real time. Brocade IronView Network Manager (INM) incorporates sFlow for continuous monitoring of the network in addition to ACL and rate limiting management of network elements.
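The essence of sFlow is statistical sampling: take one packet in N in hardware, then scale the sampled byte counts back up at the collector. The sketch below demonstrates that estimation on synthetic traffic; it is not an sFlow agent implementation and the sampling rate is an example value.

```python
import random

# Demonstrate 1-in-N packet sampling and traffic estimation, as an sFlow collector would.

SAMPLING_RATE = 512                        # take one packet in 512 (example value)

def generate_packets(n: int):
    """Synthetic traffic: a mix of small, medium, and full-size frames."""
    return [random.choice((64, 512, 1500)) for _ in range(n)]

packets = generate_packets(1_000_000)
samples = [size for size in packets if random.randrange(SAMPLING_RATE) == 0]

estimated_bytes = sum(samples) * SAMPLING_RATE      # scale samples back up
actual_bytes = sum(packets)
print(f"actual {actual_bytes:,} B, estimated {estimated_bytes:,} B "
      f"({100 * estimated_bytes / actual_bytes:.1f}% of actual)")
```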

Other security considerations include IP address spoofing and network segmentation. Address spoofing is typically used to disguise the source of DoS attacks. Unicast Reverse Path Forwarding (uRPF) as defined in RFC 3704 provides a means to block packets from sources that have not already been registered in a router's routing information base (RIB) or forwarding information base (FIB), so uRPF is a further defense against attempts to overwhelm network routers. Another spoofing hazard is Address Resolution Protocol (ARP) spoofing, which attempts to associate an attacker's MAC address with a valid user IP address to sniff or modify data between legitimate hosts. ARP spoofing can be thwarted via ARP inspection or monitoring of ARP requests to ensure that only valid queries are allowed.

For very large data center networks, risks to the network as a whole can be reduced by segmenting the network through use of Virtual Routing and Forwarding (VRF). VRF is implemented by enabling a router with multiple independent instances of routing tables, and this essentially turns a single router into multiple virtual routers. A single physical network can thus be subdivided into multiple virtual networks with traffic isolation between designated departments or applications. Brocade switches and routers provide an entire suite of security protocols and services to protect the data center network and maintain stable operation and management.

Power, Space and Cooling Efficiency
According to The Server and StorageIO Group, IT consultants, network infrastructure contributes only 10% to 15% of IT equipment power consumption in the data center, as shown in Figure 40. Compared to server power consumption at 48%, 15% may not seem a significant number, but considering that a typical data center can spend close to a million dollars per year on power, the energy efficiency of every piece of IT equipment represents a potential savings. Especially for networking products, there can be a wide disparity between vendors who have integrated energy efficiency into their product design philosophy and those who have not. Closer cooperation between data center administrators and the facilities management responsible for the power bill can lead to a closer examination of the power draw and cooling requirements of network equipment and selection of products that provide both performance and availability as well as lower energy consumption.
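Using the figures cited above purely for illustration, the short calculation below estimates the savings that more efficient network gear could represent; the dollar amount, the 15% share, and the one-fourth power figure are the rough values quoted in this chapter, not measured results.

annual_power_bill = 1_000_000            # dollars per year; typical figure cited above
network_share = 0.15                     # network gear at the high end of the 10-15% range
relative_power_of_efficient_gear = 0.25  # "less than a fourth of the power" claim

network_power_cost = annual_power_bill * network_share
potential_savings = network_power_cost * (1 - relative_power_of_efficient_gear)
print(f"Network share of the power bill: ${network_power_cost:,.0f} per year")
print(f"Potential savings with 4x more efficient gear: ${potential_savings:,.0f} per year")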

Figure 40. Network infrastructure typically contributes only 10% to 15% of total data center IT equipment power usage. [chart: IT data center typical power consumption, comparing cooling/HVAC with IT equipment and breaking IT equipment into servers, storage (disk, tape, and external storage tiers), and network (SAN/LAN/WAN)]

Designing for data center energy efficiency includes product selection that provides the highest productivity with the least energy footprint. Use of high-port-density switches, for example, can reduce the total number of power supplies, fans, and other components that would otherwise be deployed if smaller switches were used. Combining access and aggregation layers with a BigIron RX Series switch likewise reduces the total number of elements required to support host connectivity. Selecting larger end-of-row access-layer switches instead of individual top-of-rack switches has a similar effect. The increased energy efficiency of these network design options, however, still ultimately depends on how the vendor has incorporated energy-saving components into the product architecture. As with SAN products such as the Brocade DCX Backbone, Brocade LAN solutions are engineered for energy efficiency and consume less than a fourth of the power of competing products in comparable classes of equipment.

Network Virtualization
The networking complement to server virtualization is a suite of virtualization protocols that enable extended sections of a shared multiswitch network to function as independent LANs (VLANs) or for a single switch to operate as multiple virtual switches (Virtual Routing and Forwarding (VRF), as discussed earlier). In addition, protocols such as virtual IPs (VIPs) can be used to extend virtual domains between data centers or multiple sites over distance. As with server virtualization, the intention of network virtualization is to maximize productive use of existing infrastructure while reinforcing traffic separation, security, availability, and performance.
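To show what "multiple virtual routers in one device" means in practice, here is a toy Python model of VRF, not router code: each VRF name maps to its own routing table, so the same prefix can exist in two departments without conflict. The VRF names, prefixes, and next hops are invented for illustration.

import ipaddress

class VrfRouter:
    # One physical router object holding an independent routing table per VRF.
    def __init__(self):
        self.tables = {}  # VRF name -> {prefix: next hop}

    def add_route(self, vrf, prefix, next_hop):
        self.tables.setdefault(vrf, {})[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, vrf, destination):
        dest = ipaddress.ip_address(destination)
        table = self.tables.get(vrf, {})
        matches = [p for p in table if dest in p]
        if not matches:
            return None
        return table[max(matches, key=lambda p: p.prefixlen)]  # longest-prefix match

router = VrfRouter()
router.add_route("engineering", "10.0.0.0/8", "192.0.2.1")
router.add_route("finance",     "10.0.0.0/8", "198.51.100.1")  # same prefix, different VRF
print(router.lookup("engineering", "10.1.2.3"))  # 192.0.2.1
print(router.lookup("finance",     "10.1.2.3"))  # 198.51.100.1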

Application separation via VLANs at Layer 2 or VRF at Layer 3, for example, can provide a means to better meet service-level agreements (SLAs) and conform to regulatory compliance requirements. Likewise, network virtualization can be used to create logically separate security zones for policy enforcement without deploying physically separate networks.

Application Delivery Infrastructure
One of the major transformations in business applications over the past few years has been the shift from conventional applications to Web-based enterprise applications. Use of Internet-enabled protocols such as HTTP (HyperText Transfer Protocol) and HTTPS (HyperText Transfer Protocol Secure) has streamlined application development and delivery and is now a prerequisite for next-generation cloud computing solutions. Web-based enterprise applications, however, present a number of challenges due to increased network and server loads, increased user access, greater application load, and security concerns. As discussed in "Chapter 3: Doing More with Less" starting on page 17, implementing a successful server virtualization plan requires careful attention to both upstream LAN network impact as well as downstream SAN impact. The concurrent proliferation of virtualized servers helps to alleviate the application workload issues but adds complexity in designing resilient configurations that can provide continuous access. At the same time, application delivery controllers (also known as Layer 4–7 switches) provide a particularly effective means to address the upstream network consequences of increased traffic volumes when Web-based enterprise applications are supported on higher populations of virtualized servers.

Figure 41. Application congestion (traffic shown as a dashed line) on a Web-based enterprise application infrastructure. [figure: clients reach application servers running Oracle, SAP, Microsoft, and Web/Mail/DNS workloads through the network and Layer 2-3 switches]

As illustrated in Figure 41, conventional network switching and routing cannot prevent higher traffic volumes generated by user activity from overwhelming applications, whether access is over the Internet or through a company intranet. Without a means to balance the workload between application servers, response time suffers even when the number of application server instances has been increased via server virtualization. In addition, security vulnerabilities such as DoS attacks still exist.

The Brocade ServerIron ADX application delivery controller addresses these problems by providing hardware-assisted protocol processing offload, server workload balancing, and firewall protection to ensure that application access is distributed among the relevant servers and that access is secured. As shown in Figure 42, this solution solves multiple application-related issues simultaneously.

Figure 42. Application workload balancing, protocol processing offload, and security via the Brocade ServerIron ADX. [figure: clients reach application VMs (Oracle, SAP, Microsoft, Web/Mail/DNS) through Layer 2-3 switches and Brocade ServerIron ADX application delivery controllers, which also front DNS, mail, RADIUS, IPS/IDS, cache, SSL/encryption, and firewall services]

By implementing Web-based protocol processing offload, server CPU cycles can be used more efficiently to process application requests. Load balancing across multiple servers hosting the same application can ensure that no individual server or virtual machine is overwhelmed with requests. By also offloading HTTPS/SSL security protocols, the Brocade ServerIron ADX provides the intended level of security without further burdening the server pool. In addition, the Brocade ServerIron ADX provides protection against DoS attacks and so facilitates application availability.
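As a conceptual sketch only, and not a description of how the Brocade ServerIron ADX is implemented, the Python below shows one common balancing policy, least connections, in which each new request goes to the pool member currently handling the fewest sessions. The server names are illustrative.

class LeastConnectionsBalancer:
    # Toy least-connections balancer; assumes each pool member reports active sessions.
    def __init__(self, servers):
        self.active = {server: 0 for server in servers}

    def pick_server(self):
        # Choose the pool member with the fewest active sessions.
        return min(self.active, key=self.active.get)

    def open_session(self):
        server = self.pick_server()
        self.active[server] += 1
        return server

    def close_session(self, server):
        self.active[server] -= 1

balancer = LeastConnectionsBalancer(["web-vm-1", "web-vm-2", "web-vm-3"])
for _ in range(6):
    print("request routed to", balancer.open_session())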

The value of application delivery controllers in safeguarding and equalizing application workloads appreciates as more business applications shift to Web-based applications, with the applications themselves migrating from clients to enterprise application servers. Cloud computing is the ultimate extension of this trend. The Brocade ServerIron ADX provides global server load balancing (GSLB) for load balancing not only between individual servers but between geographically dispersed server or VM farms, which can be physically located across dispersed data center locations or outsourced to service providers. With GSLB, clients are directed to the best site for the fastest content delivery given current workloads and optimum network response time. This approach also integrates enterprise-wide disaster recovery for application access without disruption to client transactions.

As with other network solutions, the benefits of application delivery controller technology can be maintained only if the product architecture maintains or improves client performance. Few network designers are willing to trade network response time for enhanced services. Although the Brocade ServerIron ADX sits physically in the path between the network and its servers, performance is actually substantially increased compared to conventional connectivity. The Brocade ServerIron ADX, for example, provides an aggregate of 70 Gbps Layer 7 throughput and over 16 million Layer 4 transactions per second. In addition to performance, the Brocade ServerIron ADX maintains Brocade's track record of providing the industry's most energy-efficient network products by using less than half the power of the closest competing application delivery controller product.
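To make the GSLB site-selection idea described above concrete, here is a minimal Python sketch; the real decision logic weighs many more metrics, and the site data and simple weighting used here are assumptions for illustration.

def best_site(sites):
    # sites maps a site name to (response_time_ms, load_fraction).
    def score(site):
        response_time_ms, load_fraction = sites[site]
        return response_time_ms + 100 * load_fraction  # arbitrary weighting for the sketch

    return min(sites, key=score)

sites = {
    "datacenter-east": (35.0, 0.80),  # fast link but heavily loaded
    "datacenter-west": (55.0, 0.20),  # slower link, lightly loaded
}
print("Client directed to", best_site(sites))  # datacenter-west wins on combined score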

Orchestration
Automating data center processes

7

So far, virtualization has been "the" buzzword of twenty-first-century IT parlance and unfortunately has undergone depreciation due to overuse, and in particular over-marketing, of the term. As with elephants and blind men, virtualization appears to mean different things to different people depending on their areas of responsibility and unique issues. For revitalizing an existing data center or designing a new one, the umbrella term "virtualization" covers three primary domains: virtualization of compute power in the form of server virtualization, virtualization of data storage capacity in the form of storage virtualization, and virtualization of the data transport in the form of network virtualization. The common denominator between these three primary domains of virtualization is the use of new technology to streamline and automate IT processes while maximizing productive use of the physical IT infrastructure.

As with graphical user interfaces, virtualization hides the complexity of underlying hardware elements and configurations. From a client perspective, an application running on a single physical server behaves the same as one running on a virtual machine. In this example, the hypervisor assumes responsibility for supplying all the expected CPU, memory, I/O, and other elements typical of a conventional server. The same applies to the other domains of storage and network virtualization. The complexity does not go away but is now the responsibility of the virtualization layer inserted between physical and logical domains. The level of actual complexity of a virtualized environment is powers of ten greater than ordinary configurations, but so is the level of productivity and resource utilization, and this places tremendous importance on the proper selection of products to extend virtualization across the enterprise.

Next-generation data center design necessarily incorporates a variety of virtualization technologies, but to virtualize the entire data center requires first of all a means to harmoniously orchestrate these technologies into an integral solution, as depicted in Figure 43. Because no single vendor can provide all the myriad elements found in a modern data center, orchestration requires vendor cooperation and new open systems standards to ensure stability and resilience. The alternative is proprietary solutions and products and the implicit vendor monopoly that accompanies single-source technologies. The market long ago rejected this vendor lock-in and has consistently supported an open systems approach to technology development and deployment.

Figure 43. Open systems-based orchestration between virtualization domains. [figure: an orchestration framework connected through APIs to server virtualization, storage virtualization, and network virtualization]

For large-scale virtualization environments, standards-based orchestration is all the more critical because virtualization in each domain is still undergoing rapid technical development. The Distributed Management Task Force (DMTF), for example, developed the Open Virtual Machine Format (OVF) standard for VM deployment and mobility. The Storage Networking Industry Association (SNIA) Storage Management Initiative (SMI) includes open standards for deployment and management of virtual storage environments. The American National Standards Institute T11.5 work group developed the Fabric Application Interface Standard (FAIS) to promote open APIs for implementing storage virtualization via the fabric. IEEE and IETF have progressively developed more sophisticated open standards for network virtualization, from VLANs to VRF. The development of open standards and common APIs is the prerequisite for developing comprehensive orchestration frameworks that can automate the creation, allocation, and management of virtualized resources across data center domains.

In addition, open standards become the guideposts for further development of specific virtualization technologies, so that vendors can develop products with a much higher degree of interoperability. With average corporate data growth at 60% per year, data center orchestration is becoming a business necessity. Companies cannot continue to add staff to manage increased volumes of applications and data. With tight budgets, administrator productivity cannot meet growth rates without full-scale virtualization and automation of IT processes.

Data center orchestration assumes that a single conductor, in this case a single management framework, provides configuration, deployment, and monitoring management over an IT infrastructure that is based on a complex of virtualization technologies. The ideal of data center orchestration is that the configuration, change, and management of applications on the underlying infrastructure should require little or no human intervention and instead rely on intelligence engineered into each domain. Enabled by open APIs in the server, storage, and network domains, the data center infrastructure automatically allocates the requisite CPU, memory, and I/O, assigns storage capacity, boot LUNs, and any required security or QoS parameters needed for storage access, and provides optimized client access through the data communications network such as VLANs, load balancing, application delivery, or other network tuning to support the application. As application workloads change over time, the application can be migrated from one server resource to another, storage volumes increased or decreased, security status changed, QoS levels adjusted appropriately, and bandwidth adjusted for upstream client access. This in turn implies that the initial deployment of an application, any changes to its environment, and proactive monitoring of its health are no longer manual processes but are largely automated according to a set of defined IT policies.

Of course, servers, storage, and network equipment do not rack themselves up and plug themselves in. The physical infrastructure must be properly sized, planned, selected, and deployed before logical automation and virtualization can be applied. In addition, it may not be possible to provision all the elements needed for full data center orchestration, but careful selection of products today can lay the groundwork for fuller implementation tomorrow. Servers, storage, and networking, which formerly stood as isolated management domains, are being transformed into interrelated services.
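To make the orchestration idea concrete, here is a deliberately simplified Python sketch of policy-driven deployment: an application profile is translated into calls against server, storage, and network domains through a common framework. All class, method, and policy names are hypothetical; no real orchestration API is implied.

class Orchestrator:
    # Toy orchestration framework; each domain exposes a small provisioning API.
    def __init__(self, server_domain, storage_domain, network_domain):
        self.server = server_domain
        self.storage = storage_domain
        self.network = network_domain

    def deploy(self, app_policy):
        # Each step would normally be an API call into that domain's own manager.
        vm = self.server.create_vm(cpus=app_policy["cpus"], memory_gb=app_policy["memory_gb"])
        lun = self.storage.allocate_volume(size_gb=app_policy["storage_gb"], encrypted=True)
        self.network.attach(vm, vlan=app_policy["vlan"], qos_class=app_policy["qos"])
        return vm, lun

class FakeDomain:
    # Stand-in for a real domain manager; it simply records the calls it receives.
    def __getattr__(self, name):
        def call(*args, **kwargs):
            print(f"{name} called with {args} {kwargs}")
            return f"{name}-handle"
        return call

policy = {"cpus": 4, "memory_gb": 16, "storage_gb": 500, "vlan": 120, "qos": "gold"}
Orchestrator(FakeDomain(), FakeDomain(), FakeDomain()).deploy(policy)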

Because this uber-technology is still under construction, not all necessary components are currently available, but substantial progress has been made. Server virtualization, for example, is now a mature technology that is moving from secondary applications to primary ones. For Brocade, network infrastructure as a service requires richer intelligence in the network to coordinate provisioning of bandwidth, QoS, resiliency, and security features to support server and storage services. Brocade is working with VMware, Microsoft, and others to coordinate communication between the SAN and LAN infrastructure and various virtualization hypervisors so that proactive monitoring of storage bandwidth and QoS can trigger migration of VMs to more available resources should congestion occur, as shown in Figure 44.

Figure 44. Brocade Management Pack for Microsoft System Center Virtual Machine Manager leverages APIs between the SAN and SCVMM to trigger VM migration. [figure: VMs move from the first physical server to the next available one over the LAN; Microsoft System Center VMM and Operations Manager work with Brocade HBAs, the Brocade Management Pack, Brocade DCFM, and QoS engines across the LAN and SAN]

The SAN Call Home events displayed in the Microsoft System Center Operations Manager interface are shown in Figure 50 on page 94.
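The triggering logic can be illustrated in principle with the Python sketch below; it is not the Brocade Management Pack, SCVMM, or any hypervisor API. When a host's measured storage-link utilization exceeds a threshold, a migration recommendation is raised for that host's busiest VM. All names and thresholds are assumptions.

CONGESTION_THRESHOLD = 0.85  # fraction of link bandwidth; illustrative value

def migration_recommendations(hosts):
    # hosts maps host name -> {"utilization": float, "vms": {vm_name: bandwidth_share}}.
    for host, stats in hosts.items():
        if stats["utilization"] < CONGESTION_THRESHOLD:
            continue
        busiest_vm = max(stats["vms"], key=stats["vms"].get)
        # Suggest the least utilized other host as the migration target.
        target = min((h for h in hosts if h != host), key=lambda h: hosts[h]["utilization"])
        yield busiest_vm, host, target

hosts = {
    "esx-01": {"utilization": 0.92, "vms": {"sql-vm": 0.5, "web-vm": 0.2}},
    "esx-02": {"utilization": 0.40, "vms": {"mail-vm": 0.3}},
}
for vm, source, target in migration_recommendations(hosts):
    print(f"Recommend migrating {vm} from {source} to {target}")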

On the storage front, Brocade supports fabric-based storage virtualization with the Brocade FA4-18 Application Blade and Brocade's Storage Application Services (SAS) APIs. Based on FAIS standards, the Brocade FA4-18 supports applications such as EMC Invista to maximize efficient utilization of storage assets. For client access, the Brocade ADX application delivery controller automates load balancing of client requests and offloads upper-layer protocol processing from the destination VMs. Other capabilities such as 10 Gigabit Ethernet and 8 Gbps Fibre Channel connectivity, fabric-based storage encryption, and virtual routing protocols can help data center network designers allocate enhanced bandwidth and services to accommodate both current requirements and future growth. Collectively, these building blocks facilitate higher degrees of data center orchestration to achieve the IT business goal of doing far more with much less.


Brocade Solutions Optimized for Server Virtualization
Enabling server consolidation and end-to-end fabric management

8

Brocade has engineered a number of different network components that enable server virtualization in the data center fabric. The sections in this chapter introduce you to these products and briefly describe them. For the most current information, visit www.brocade.com > Products and Solutions. Choose a product from the drop-down list on the left and then scroll down to view Data Sheets, FAQs, Technical Briefs, and White Papers.

The server connectivity and convergence products described in this chapter are:
• "Server Adapters" on page 89
• "Brocade 8000 Switch and FCOE10-24 Blade" on page 92
• "Access Gateway" on page 93
• "Brocade Management Pack" on page 94
• "Brocade ServerIron ADX" on page 95

Server Adapters
In mid-2008, Brocade released a family of Fibre Channel HBAs with 8 and 4 Gbps HBAs. Highlights of Brocade FC HBAs include:
• Maximizes bus throughput with a Fibre Channel-to-PCIe 2.0a Gen2 (x8) bus interface with intelligent lane negotiation
• Prioritizes traffic and minimizes network congestion with target rate limiting, frame-based prioritization, and 32 Virtual Channels per port with guaranteed QoS

• Enhances security with Fibre Channel-Security Protocol (FC-SP) for device authentication and hardware-based AES-GCM, ready for in-flight data encryption
• Supports virtualized environments with NPIV for 255 virtual ports
• Uniquely enables end-to-end (server-to-storage) management in Brocade Data Center Fabric environments

This class of HBAs is designed to help IT organizations deploy and manage true end-to-end SAN service across next-generation data centers.

Brocade 825/815 FC HBA
The Brocade 815 (single port) and Brocade 825 (dual ports) 8 Gbps Fibre Channel-to-PCIe HBAs provide industry-leading server connectivity through unmatched hardware capabilities and unique software configurability.

Figure 45. Brocade 825 FC 8 Gbps HBA (dual ports shown).

The Brocade 8 Gbps FC HBA also:
• Maximizes I/O transfer rates with up to 500,000 IOPS per port at 8 Gbps
• Utilizes N_Port Trunking capabilities to create a single logical 16 Gbps high-speed link

Industry-leading hardware capabilities, unique software configurability, and unified management all contribute to exceptional flexibility. Utilizing hardware-based virtualization acceleration capabilities, organizations can optimize performance in virtual environments to increase overall ROI and improve TCO even further.

Brocade 425/415 FC HBA
The Brocade 4 Gbps FC HBA has capabilities similar to those described for the 8 Gbps version.

Figure 46. Brocade 415 FC 4 Gbps HBA (single port shown).

The Brocade 4 Gbps FC HBA also:
• Maximizes I/O transfer rates with up to 500,000 IOPS per port at 4 Gbps
• Utilizes N_Port Trunking capabilities to create a single logical 8 Gbps high-speed link

Brocade FCoE CNAs
The Brocade 1010 (single port) and Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNAs provide server I/O consolidation by transporting both storage and Ethernet networking traffic across the same physical connection. The Brocade 1000 Series CNAs combine the powerful capabilities of storage (Fibre Channel) and networking (Ethernet) devices. This approach helps improve TCO by significantly reducing power, cooling, and cabling costs through the use of a single adapter. It also extends storage and networking investments, including investments made in management and training.

Figure 47. Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNA.

Leveraging IEEE standards for Data Center Bridging (DCB), the Brocade 1000 Series CNAs provide a highly efficient way to transport Fibre Channel storage traffic over Ethernet links, addressing the highly sensitive nature of storage traffic.

Brocade 8000 Switch and FCOE10-24 Blade
The Brocade 8000 is a top-of-rack Layer 2 CEE/FCoE switch with 24 x 10 GbE ports for LAN connections and 8 x FC ports (with up to 8 Gbps speed) for Fibre Channel SAN connections. Supporting Windows and Linux environments, the Brocade 8000 Switch enables access to both LANs and SANs over a common server connection by utilizing Converged Enhanced Ethernet (CEE) and FCoE protocols. LAN traffic is forwarded to aggregation-layer Ethernet switches using conventional 10 GbE connections, and storage traffic is forwarded to Fibre Channel SANs over 8 Gbps FC connections. The Brocade 8000 provides advanced Fibre Channel services, supports Ethernet and CEE capabilities, and is managed by Brocade DCFM.

Figure 48. Brocade 8000 Switch.

The Brocade FCOE10-24 Blade is a Layer 2 blade with cut-through, non-blocking architecture designed for use with the Brocade DCX. It features 24 x 10 Gbps CEE ports and extends CEE/FCoE capabilities to backbone platforms, enabling end-of-row CEE/FCoE deployment. By providing first-hop connectivity for access-layer servers, the Brocade FCOE10-24 also enables server I/O consolidation for servers with Tier 3 and some Tier 2 virtualized applications.

Figure 49. Brocade FCOE10-24 Blade.

Access Gateway
Brocade Access Gateway simplifies server and storage connectivity by enabling direct connection of servers to any SAN fabric, enhancing scalability by eliminating the switch domain identity and simplifying local switch device management. Brocade blade server SAN switches and the Brocade 300 and Brocade 5100 rack-mount switches are key components of enterprise data centers, bringing a wide variety of scalability, manageability, and cost advantages to SAN environments. These switches can be used in Access Gateway mode, available in the standard Brocade Fabric OS, for enhanced server connectivity to SANs.

Access Gateway provides:
• Seamless connectivity with any SAN fabric
• Improved scalability
• Simplified management
• Automatic failover and failback for high availability
• Lower total cost of ownership

Access Gateway mode eliminates traditional heterogeneous switch-to-switch interoperability challenges by utilizing NPIV standards to present Fibre Channel server connections as logical devices to SAN fabrics. Attaching through NPIV-enabled edge switches or directors, Access Gateway seamlessly connects servers to Brocade, McDATA, Cisco, or other SAN fabrics. As a result, IT organizations can improve efficiency while reducing their overall operating costs.

Brocade Management Pack
Brocade Management Pack for Microsoft System Center monitors the health and performance of Brocade HBA-to-SAN links and works with Microsoft System Center to provide intelligent recommendations for dynamically optimizing the performance of virtualized workloads. It provides Brocade HBA performance and health monitoring capabilities to System Center Operations Manager (SCOM). It enables real-time monitoring of Brocade HBA links through SCOM, and that information can be used to dynamically optimize server resources in virtualized data centers via System Center Virtual Machine Manager (SCVMM), combined with proactive remediation action in the form of recommended Performance and Resource Optimization (PRO) Tips handled by SCVMM.

Figure 50. SAN Call Home events displayed in the Microsoft System Center Operations Manager interface.

Brocade ServerIron ADX
The Brocade ServerIron ADX Series of switches provides Layer 4–7 switching performance in an intelligent, modular application delivery controller platform. The switches, including the ServerIron ADX 1000, 4000, and 10000 models, enable highly secure and scalable service infrastructures to help applications run more efficiently and with higher availability. These intelligent application switches transparently support any TCP- or UDP-based application by providing specialized acceleration, network optimization, content caching, firewall load balancing, and host offload features for Web services. The Brocade ServerIron ADX Series also provides a reliable line of defense by securing servers and applications against many types of intrusion and attack without sacrificing performance.

All Brocade ServerIron ADX switches forward traffic flows based on Layer 4–7 definitions, directing client requests to the most available servers. ServerIron ADX switches use detailed application message information beyond the traditional Layer 2 and 3 packet headers, and deliver industry-leading performance for higher-layer application switching functions. Superior content switching capabilities include customizable rules based on URL, HOST, and other HTTP headers, as well as cookies, XML, and application content. Brocade ServerIron ADX switches simplify server farm management and application upgrades by enabling organizations to easily remove and insert resources into the pool, improving manageability and security for network and server resources.

To optimize application availability, these switches support many high-availability mode options, with real-time session synchronization between two Brocade ServerIron ADX switches to protect against session loss during outages. Extensive and customizable service health check capabilities monitor Layer 2, 3, 4, and 7 connectivity along with service availability and server response. The Brocade ServerIron ADX provides hardware-assisted, standards-based network monitoring for all application traffic, enabling real-time problem detection.

Figure 51. Brocade ServerIron ADX 1000.
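To show what rule-based Layer 4–7 content switching means in practice, here is a hedged Python sketch, not ServerIron ADX configuration or code, that routes requests to a server pool based on the Host header and URL prefix. The rule table and pool names are invented for illustration.

# Ordered rule table: (host suffix, URL prefix) -> server pool; first match wins.
CONTENT_RULES = [
    ("api.example.com", "/orders", "order-service-pool"),
    ("api.example.com", "/",       "api-pool"),
    ("www.example.com", "/static", "cache-pool"),
    ("www.example.com", "/",       "web-pool"),
]

def select_pool(host, url):
    # Return the first pool whose host and URL-prefix rule matches the request.
    for rule_host, prefix, pool in CONTENT_RULES:
        if host.endswith(rule_host) and url.startswith(prefix):
            return pool
    return "default-pool"

print(select_pool("www.example.com", "/static/logo.png"))  # cache-pool
print(select_pool("api.example.com", "/orders/42"))        # order-service-pool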


Brocade SAN Solutions
Meeting the most demanding data center requirements today and tomorrow

9

Brocade leads the pack in networked storage, from the development of Fibre Channel to its current family of high-performance, energy-efficient SAN switches, directors, and backbones and advanced fabric capabilities such as encryption and distance extension. The sections in this chapter introduce you to these products and briefly describe them. For the most current information, visit www.brocade.com > Products and Solutions. Choose a product from the drop-down list on the left and then scroll down to view Data Sheets, FAQs, Technical Briefs, and White Papers.

The SAN products described in this chapter are:
• "Brocade DCX Backbones (Core)" on page 98
• "Brocade 8 Gbps SAN Switches (Edge)" on page 100
• "Brocade Encryption Switch and FS8-18 Encryption Blade" on page 105
• "Brocade 7800 Extension Switch and FX8-24 Extension Blade" on page 106
• "Brocade Optical Transceiver Modules" on page 107
• "Brocade Data Center Fabric Manager" on page 108

Brocade DCX Backbones (Core)
The Brocade DCX and DCX-4S Backbone offer flexible management capabilities as well as Adaptive Networking services and fabric-based applications to help optimize network and application performance. The Brocade DCX facilitates the consolidation of server-to-server, server-to-storage, and storage-to-storage networks with highly available, lossless connectivity. It is designed to support a broad range of current and emerging network protocols to form a unified, high-performance data center fabric. In addition, it operates natively with Brocade and Brocade M-Series components, extending SAN investments for maximum ROI. To minimize risk and costly downtime, the platform leverages the proven "five-nines" (99.999%) reliability of hundreds of thousands of Brocade SAN deployments.

Figure 52. Brocade DCX (left) and DCX-4S (right) Backbone.

Table 1. Brocade DCX Capabilities

Industry-leading capabilities for large enterprises
• Industry-leading performance: 8 Gbps per-port, full line-rate performance
• 13 Tbps aggregate dual-chassis bandwidth (6.5 Tbps for a single chassis)
• 1 Tbps of aggregate ICL bandwidth
• More than 5x the performance of competitive offerings

High scalability
• High-density, bladed architecture
• Up to 384 8 Gbps Fibre Channel ports in a single chassis
• Up to 768 8 Gbps Fibre Channel ports in a dual-chassis configuration
• 544 Gbps aggregate bandwidth per slot plus local switching

Energy efficiency
• Energy efficiency of less than one-half watt per Gbps
• 10x more energy efficient than competitive offerings

Ultra-High Availability
• Designed to support 99.99% uptime
• Passive backplane, separate and redundant control processor and core switching blades
• Hot-pluggable components, including redundant power supplies, blades, fans, WWN cards, and optics

Fabric services and applications
• Adaptive Networking services, including QoS, ingress rate limiting, traffic isolation, and Top Talkers
• Plug-in services for fabric-based storage virtualization, continuous data protection and replication, and online data migration
• Fibre Channel Integrated Routing
• Specialty blades for 10 Gbps connectivity, Fibre Channel Routing over IP, and fabric-based applications

Multiprotocol capabilities and fabric interoperability
• Support for Fibre Channel, FICON, FCIP, and IPFC
• Designed for future 10 Gigabit Ethernet (GbE), Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE)
• Native connectivity in Brocade and Brocade M-Series fabrics, including backward and forward compatibility

Intelligent management and monitoring
• Full utilization of the Brocade Fabric OS embedded operating system
• Flexibility to utilize a CLI, Brocade DCFM, Brocade Advanced Web Tools, and Brocade Advanced Performance Monitoring
• Integration with third-party management tools

Brocade 8 Gbps SAN Switches (Edge)
Industry-leading Brocade switches are the foundation for connecting servers and storage devices in SANs, enabling organizations to access and share data in a high-performance, manageable, and scalable manner. Brocade standalone switch models offer flexible configurations ranging from 8 to 80 ports and can function as core or edge switches, depending upon business requirements. All switches feature flexible port configuration with Ports On Demand capabilities for straightforward scalability. This capability enables organizations to deploy 1, 2, 4, and 8 Gbps fabrics with highly scalable core-to-edge configurations.

To protect existing investments, Brocade switches are fully forward and backward compatible, providing a seamless migration path to 8 Gbps connectivity and future technologies. With native E_Port interoperability, Brocade switches connect to the vast majority of fabrics in operation today, allowing organizations to seamlessly integrate and scale their existing SAN infrastructures. Moreover, Brocade switches are backed by FOS engineering, test, and support expertise to provide reliable operation in mixed fabrics. Organizations can also experience high performance between switches by using Brocade ISL Trunking to achieve up to 64 Gbps total throughput.

Brocade switches meet high-availability requirements, with Brocade 5300, 5100, and 300 switches offering redundant, hot-pluggable components. All Brocade switches feature non-disruptive software upgrades, automatic path rerouting, and extensive diagnostics. Designed for flexibility, scalability, performance, and reliability, these switches are ideal as standalone departmental SANs or as high-performance edge switches in large enterprise SANs. Brocade switches provide a low-cost solution for Direct-Attached Storage (DAS)-to-SAN migration, small SAN islands, Network-Attached Storage (NAS) back-ends, and the edge of core-to-edge enterprise SANs.

Brocade 5300 Switch
As the value and volume of business data continue to rise, organizations need technology solutions that are easy to implement and manage and that can grow and change with minimal disruption. The Brocade 5300 Switch is designed to consolidate connectivity in rapidly growing mission-critical environments, supporting 1, 2, 4, and 8 Gbps technology in configurations of 48, 64, or 80 ports in a 2U chassis. The combination of density, performance, and "pay-as-you-grow" scalability increases server and storage utilization, while reducing complexity for virtualized servers and storage. Leveraging the Brocade networking model, these switches include a Virtual Fabrics feature that enables the partitioning of a physical SAN into logical fabrics. This provides fabric isolation by application, business group, customer, or traffic type without sacrificing performance, security, or reliability. The Brocade 5300 and 5100 switches support full Fibre Channel routing capabilities with the addition of the Fibre Channel Integrated Routing (IR) option. Using built-in routing capabilities, organizations can selectively share devices while still maintaining remote fabric isolation.

Figure 53. Brocade 5300 Switch.

Used at the fabric core or at the edge of a tiered core-to-edge infrastructure, these switches can provide a fabric capable of delivering overall system performance while consuming less than 2.5 watts of power per port for exceptional power and cooling efficiency. The density of the Brocade 5300 uniquely enables fan-out from the core of the data center fabric with less than half the number of switch devices to manage compared to traditional 32- or 40-port edge switches. The Brocade 5300 utilizes ASIC technology featuring eight 8-port groups. Within these groups, an inter-switch link trunk can supply up to 68 Gbps of balanced data throughput. The Brocade 5300 also includes Adaptive Networking capabilities to more efficiently manage resources in highly consolidated environments. In addition to reducing congestion and increasing bandwidth, enhanced Brocade ISL Trunking utilizes ISLs more efficiently to preserve the number of usable switch ports. The Brocade 5300 operates seamlessly with existing Brocade switches through native E_Port connectivity into Brocade FOS or M-EOS environments.

Brocade 5100 Switch
The Brocade 5100 Switch is designed for rapidly growing storage requirements in mission-critical environments, combining 1, 2, 4, and 8 Gbps Fibre Channel technology in configurations of 24, 32, or 40 ports in a 1U chassis. With the highest port density of any midrange enterprise switch, it provides low-cost access to industry-leading SAN technology and pay-as-you-grow scalability for consolidating storage and maximizing the value of virtual server deployments.

Figure 54. Brocade 5100 Switch.

Similar to the Brocade 5300, the Brocade 5100 features a flexible architecture that operates seamlessly with existing Brocade switches through native E_Port connectivity into Brocade FOS or M-EOS environments, and it is designed for a broad range of SAN architectures. It supports Fibre Channel Integrated Routing for selective device sharing and maintains remote fabric isolation for higher levels of scalability and fault isolation. The design makes it very efficient in power, cooling, and rack density to help enable midsize and large server and storage consolidation. It features consolidated power and fan assemblies to improve environmental performance. The Brocade 5100 is a cost-effective building block for standalone networks or the edge of enterprise core-to-edge fabrics. Additional performance capabilities include the following:
• 32 Virtual Channels on each ISL enhance QoS traffic prioritization and "anti-starvation" capabilities at the port level to avoid performance degradation.
• Exchange-based Dynamic Path Selection optimizes fabric-wide performance and load balancing by automatically routing data to the most efficient available path in the fabric. It augments ISL Trunking to provide more effective load balancing in certain configurations. DPS can balance traffic between the Brocade 5100 and Brocade M-Series devices enabled with Brocade Open Trunking.

Brocade 300 Switch
The Brocade 300 Switch provides small to midsize enterprises with SAN connectivity that simplifies IT management infrastructures, improves system performance, maximizes the value of virtual server deployments, and reduces overall storage costs. The 8 Gbps Fibre Channel Brocade 300 provides a simple, affordable, single-switch solution for both new and existing SANs. It delivers up to 24 ports of 8 Gbps performance in an energy-efficient, optimized 1U form factor. To simplify deployment, the Brocade 300 features the EZSwitchSetup wizard and other ease-of-use and configuration enhancements, as well as the optional Brocade Access Gateway mode of operation (supported with 24-port configurations only). Access Gateway mode enables connectivity into any SAN by utilizing NPIV switch standards to present Fibre Channel connections as logical devices to SAN fabrics. Attaching through NPIV-enabled switches and directors, the Brocade 300 in Access Gateway mode can connect to FOS-based, M-EOS-based, or other SAN fabrics.

Figure 55. Brocade 300 Switch.

Organizations can easily enable Access Gateway mode (see page 151) via the FOS CLI, Brocade Web Tools, or Brocade Fabric Manager. Key benefits of Access Gateway mode include:
• Improved scalability for large or rapidly growing server and virtual server environments
• Simplified management through the reduction of domains and management tasks
• Fabric interoperability for mixed vendor SAN configurations that require full functionality

Brocade VA-40FC Switch
The Brocade VA-40FC is a high-performance Fibre Channel edge switch optimized for server connectivity in large-scale enterprise SANs. As organizations consolidate data centers, expand application services, and begin to implement cloud initiatives, large-scale server architectures are becoming a standard part of the data center. Minimizing the network deployment steps and simplifying management can help organizations grow seamlessly while reducing operating costs. The Brocade VA-40FC helps meet this challenge, providing the first Fibre Channel edge switch optimized for server connectivity in large core-to-edge SANs.

Figure 56. Brocade VA-40FC Switch.

By leveraging Brocade Access Gateway technology, the Brocade VA-40FC enables zero-configuration deployment and reduces management of the network edge, increasing scalability and simplifying management for large-scale server architectures. The Brocade VA-40FC is in Access Gateway mode by default, which is ideal for larger SAN fabrics that can benefit from the scalability of fixed-port switches at the edge of the network. Some use cases for Access Gateway mode are:
• Connectivity of many servers into large SAN fabrics
• Connectivity of servers into Brocade, Cisco, or any NPIV-enabled SAN fabrics
• Connectivity into multiple SAN fabrics

The Brocade VA-40FC also supports Fabric Switch mode to provide standard Fibre Channel switching and routing capabilities that are available on all Brocade enterprise-class 8 Gbps solutions.

Brocade Encryption Switch and FS8-18 Encryption Blade
The Brocade Encryption Switch is a high-performance standalone device for protecting data-at-rest in mission-critical environments. It scales non-disruptively, providing from 48 up to 96 Gbps of disk encryption processing power. It is also FIPS 140-2 Level 3-compliant. Based on industry standards, Brocade encryption solutions for data-at-rest provide centralized, scalable encryption services that seamlessly integrate into existing Brocade Fabric OS environments. Moreover, the Brocade Encryption Switch is tightly integrated with industry-leading, enterprise-class key management systems that can scale to support key lifecycle services across distributed environments.

Figure 57. Brocade Encryption Switch.

Figure 58. Brocade FS8-18 Encryption Blade.

Brocade 7800 Extension Switch and FX8-24 Extension Blade
The Brocade 7800 Extension Switch helps provide network infrastructure for remote data replication, backup, and migration. It leverages cost-effective IP WAN transport to extend open systems and mainframe disk and tape storage applications over distances that would otherwise be impossible, impractical, or too expensive with standard Fibre Channel connections. Leveraging next-generation Fibre Channel and advanced FCIP technology, the Brocade 7800 provides a flexible and extensible platform to move more data faster and further than ever before. Up to 16 x 8 Gbps Fibre Channel ports and 6 x 1 GbE ports provide unmatched Fibre Channel and FCIP bandwidth, port density, and throughput for maximum application performance over WAN links. The Brocade 7800 is an ideal platform for building or expanding a high-performance SAN extension infrastructure. It can be configured for simple point-to-point or comprehensive multisite SAN extension. A broad range of optional advanced extension, FICON, and SAN fabric services are available.

Figure 59. Brocade 7800 Extension Switch.

• The Brocade 7800 4/2 Extension Switch is a cost-effective option for smaller data centers and remote offices implementing point-to-point disk replication for open systems. Organizations can optimize bandwidth and throughput through 4 x 8 Gbps FC ports and 2 x 1 GbE ports. The Brocade 7800 4/2 can be easily upgraded to the Brocade 7800 16/6 through software licensing.
• The Brocade 7800 16/6 Extension Switch is a robust platform for data centers and multisite environments implementing disk and tape solutions for open systems and mainframe environments. Organizations can optimize bandwidth and throughput through 16 x 8 Gbps FC ports and 6 x 1 GbE ports.

The New Data Center 107 . Activating the optional 10 GbE ports doubles the aggregate bandwidth to 20 Gbps and enables additional FCIP port configurations (10 x 1 GbE ports and 1 x 10 GbE port. 10 GbE and advanced FCIP technology. These transceiver modules support data rates up to 8 Gbps Fibre Channel and link lengths up to 30 kilometers (for 4 Gbps Fibre Channel). the Brocade FX8-24 provides a flexible and extensible platform to move more data faster and further than ever before. Brocade Optical Transceiver Modules Brocade optical transceiver modules. plug into Brocade switches. and backbones to provide Fibre Channel connectivity and satisfy a wide range of speed and distance requirements. also known as Small Form-factor Pluggables (SFPs). Brocade FX8-24 Extension Blade. or 2 x 10 GbE ports). designed specifically for the Brocade DCX Backbone. Leveraging next-generation 8 Gbps Fibre Channel. and migration. Brocade transceiver modules are optimized for Brocade 8 Gbps platforms to maximize performance. Up to two Brocade FX8-24 blades can be installed in a Brocade DCX or DCX-4S Backbone. reduce power consumption. backup. helps provide the network infrastructure for remote data replication. Figure 60.Brocade Optical Transceiver Modules The Brocade FX8-24 Extension Blade. and help ensure the highest availability of mission-critical applications. directors.

Brocade Data Center Fabric Manager
Brocade Data Center Fabric Manager (DCFM) Enterprise unifies the management of large, multifabric, or multisite storage networks through a single pane of glass. It features enterprise-class reliability, availability, and serviceability (RAS), as well as advanced features such as proactive monitoring and alert notification. As a result, it helps optimize storage resources, maximize performance, and enhance the security of storage network infrastructures.

Brocade DCFM Enterprise configures and manages the Brocade DCX Backbone family, switches, directors, HBA, and CNA products, as well as Brocade data-at-rest encryption, FCoE/DCB, and extension solutions. It is part of a common framework designed to manage entire data center fabrics, both physical and virtual, from the storage ports to the HBAs. Brocade DCFM Enterprise tightly integrates with Brocade Fabric OS (FOS) to leverage key features such as Advanced Performance Monitoring, Fabric Watch, and Adaptive Networking services. As part of a common management ecosystem, Brocade DCFM Enterprise integrates with leading partner data center automation solutions through frameworks such as the Storage Management Initiative-Specification (SMI-S).

Figure 61. Brocade DCFM main window showing the topology view.

Brocade LAN Network Solutions
End-to-end networking from the edge to the core of today's networking infrastructures

10

Brocade offers a complete line of enterprise and service provider Ethernet switches, Ethernet routers, application management, and network-wide security products. With industry-leading features, performance, reliability, and scalability capabilities, these products enable network convergence and secure network infrastructures to support advanced data, voice, and video applications. The complete Brocade product portfolio enables end-to-end networking from the edge to the core of today's networking infrastructures. The sections in this chapter introduce you to these products and briefly describe them. For the most current information, visit www.brocade.com > Products and Solutions. Choose a product from the drop-down list on the left and then scroll down to view Data Sheets, FAQs, Technical Briefs, and White Papers.

The LAN products described in this chapter are:
• "Core and Aggregation" on page 110
• "Access" on page 112
• "Brocade IronView Network Manager" on page 115
• "Brocade Mobility" on page 116

For a more detailed discussion of the access, aggregation, and core layers in the data center network, see “Chapter 6: The New Data Center LAN” starting on page 69.


Core and Aggregation
The network core is the nucleus of the data center LAN. In a three-tier model, the core also provides connectivity to the external corporate network, intranet, and Internet. At the aggregation layer, uplinks from multiple access-layer switches are further consolidated into fewer high-availability and high-performance switches. For application delivery and control, see also “Brocade ServerIron ADX” on page 95.

Brocade NetIron MLX Series
The Brocade NetIron MLX Series of switching routers is designed to provide the right mix of functionality and high performance while reducing TCO in the data center. Built with the Brocade state-of-the-art, fifth-generation, network-processor-based architecture and Terabit-scale switch fabrics, the NetIron MLX Series offers network planners a rich set of high-performance IPv4, IPv6, MPLS, and Multi-VRF capabilities as well as advanced Layer 2 switching capabilities. The NetIron MLX Series includes the 4-slot NetIron MLX-4, 8-slot NetIron MLX-8, 16-slot NetIron MLX-16, and the 32-slot NetIron MLX-32. The series offers industry-leading port capacity and density with up to 256 x 10 GbE, 1536 x 1 GbE, 64 x OC-192, or 256 x OC-48 ports in a single system.

Figure 62. Brocade NetIron MLX-4.


Brocade BigIron RX Series
The Brocade BigIron RX Series of switches provides the first 2.2 billion packet-per-second devices that scale cost-effectively from the enterprise edge to the core, with hardware-based IP routing to 512,000 IP routes per line module. The high-availability design features redundant and hot-pluggable hardware, hitless software upgrades, and graceful BGP and OSPF restart. The BigIron RX Series of Layer 2/3 Ethernet switches enables network designers to deploy an Ethernet infrastructure that addresses today's requirements with a scalable and future-ready architecture that will support network growth and evolution for years to come. The BigIron RX Series incorporates the latest advances in switch architecture, system resilience, QoS, and switch security in a family of modular chassis, setting leading industry benchmarks for price/performance, scalability, and TCO.

Figure 63. Brocade BigIron RX-16.


Access
The access layer provides the direct network connection to application and file servers. Servers are typically provisioned with two or more GbE or 10 GbE network ports for redundant connectivity. Server platforms vary from standalone servers to 1U rack-mount servers and blade servers with passthrough cabling or bladed Ethernet switches.

Brocade TurboIron 24X Switch
The Brocade TurboIron 24X switch is a compact, high-performance, high-availability, and high-density 10/1 GbE dual-speed solution that meets mission-critical data center ToR and High-Performance Cluster Computing (HPCC) requirements. An ultra-low-latency, cut-through, non-blocking architecture and low power consumption help provide a cost-effective solution for server or compute-node connectivity. Additional highlights include:
• Highly efficient power and cooling with front-to-back airflow, automatic fan speed adjustment, and use of SFP+ and direct attached SFP+ copper (Twinax)
• High availability with redundant, hot-swappable, auto-sensing/switching power supplies and triple-fan assembly
• End-to-end QoS with hardware-based marking, queuing, and congestion management
• Embedded per-port sFlow capabilities to support scalable hardware-based traffic monitoring
• Wire-speed performance with an ultra-low-latency, cut-through, non-blocking architecture ideal for HPC, iSCSI storage, load-sharing, and real-time application environments

Figure 64. Brocade TurboIron 24X Switch.

Brocade FastIron CX Series
The Brocade FastIron CX Series of switches provides new levels of performance, scalability, and flexibility required for today's enterprise networks. Designed for wire-speed and non-blocking performance, these switches deliver performance and intelligence to the network edge in a flexible 1U form factor, enabling next-generation campus applications. FastIron CX switches include 24- and 48-port models, in both Power over Ethernet (PoE) and non-PoE versions. PoE models support the emerging Power over Ethernet Plus (PoE+) standard to deliver up to 30 watts of power to edge devices. Utilizing built-in 16 Gbps stacking ports and Brocade IronStack technology, organizations can stack up to eight switches into a single logical switch with up to 384 ports, which helps reduce infrastructure and administrative costs.

Figure 65. Brocade FastIron CX-624S-HPOE Switch.

Brocade NetIron CES 2000 Series
Whether they are located at a central office or remote site, the availability of space often determines the feasibility of deploying new equipment and services in a data center environment. The Brocade NetIron Compact Ethernet Switch (CES) 2000 Series is purpose-built to provide flexible, secure, resilient, and advanced Ethernet and MPLS-based services in a compact form factor. The NetIron CES 2000 Series is a family of compact 1U, multiservice edge/aggregation switches that combine powerful capabilities with high performance and availability. The switches provide a broad set of advanced Layer 2, IPv4, and MPLS capabilities in the same device. With these advanced capabilities, they support a diverse set of applications in data center and large enterprise networks.

The NetIron CES 2000 switches are available in 24- and 48-port configurations in both Hybrid Fiber (HF) and RJ45 versions.

Figure 66. Brocade NetIron CES 2000 switches.

Brocade FastIron Edge X Series
The Brocade FastIron Edge X Series switches are high-performance data center-class switches that provide Gigabit copper and fiber-optic connectivity and 10 GbE uplinks. The FastIron Edge X Series offers a diverse range of switches that meet Layer 2/3 edge, aggregation, or small-network backbone-connectivity requirements with intelligent network services, including superior QoS, predictable performance, advanced security, comprehensive management, and integrated resiliency. It is the ideal networking platform to deliver 10 GbE. Advanced Layer 3 routing capabilities and full IPv6 support are designed for the most demanding environments.

Figure 67. Brocade FastIron Edge X 624.

monitoring. Figure 68. they can easily configure and deploy policies for wired and wireless products. monitoring. VLANs. The New Data Center 115 . and archive configurations for each device. organizations can automatically discover Brocade network equipment and immediately acquire. software and configuration updates. and network alarms and events. and managing network-wide features such as Access Control Lists (ACLs).Brocade IronView Network Manager Brocade IronView Network Manager Brocade IronView Network Manager (INM) provides a comprehensive tool for configuring. In addition. managing. rate limiting policies. Brocade INM Dashboard (top) and Backup Configuration Manager (bottom). It is an intelligent network management solution that reduces the complexity of changing. view. and securing Brocade wired and wireless network products. Using Brocade INM.

Brocade Mobility
While once considered a luxury, Wi-Fi connectivity is now an integral part of the modern enterprise. With the introduction of the IEEE 802.11n standard, most IT organizations are deploying Wireless LANs (WLANs). In fact, wireless technologies often match the performance of wired networks, all with simplified deployment, robust security, and at a significantly lower cost. These organizations can save significant capital and feel confident in expanding their wireless deployments to business-critical applications.

Brocade Mobility controllers enable wireless enterprises by providing an integrated communications platform that delivers secure and reliable voice, video, and data applications in Wireless LAN (WLAN) environments. In addition to indoor networking equipment, Brocade also provides the tools to wirelessly connect multiple buildings across a corporate campus. To that end, Brocade offers all the pieces to deploy a wireless enterprise. Brocade offers two models of controllers: the Brocade RFS6000 and RFS7000 Controller. Based on an innovative architecture, Brocade Mobility controllers provide:
• Wired and wireless networking services
• Multiple locationing technologies such as Wi-Fi and RFID
• Resiliency via 3G/4G wireless broadband backhaul
• High performance with 802.11n networks

The Brocade Mobility RFS7000 features a multicore, multithreaded architecture designed for large-scale, high-bandwidth enterprise deployments. It easily handles from 8000 to 96,000 mobile devices and 256 to 3000 802.11 dual-radio a/b/g/n access points or 1024 adaptive access points (Brocade Mobility 5181 a/b/g or Brocade Mobility 7131 a/b/g/n) per controller. The Brocade Mobility RFS7000 provides the investment protection enterprises require: innovative clustering technology provides a 12X capacity increase, and smart licensing enables efficient, scalable network expansion.

Brocade One
Simplifying complexity in the virtualized data center

Brocade One, announced in mid-2010, is the unifying network architecture and strategy that enables customers to simplify the complexity of virtualizing their applications. By removing network layers, simplifying management, and protecting existing technology investments, Brocade One helps customers migrate to a world where information and services are available anywhere in the cloud.

Evolution not Revolution
In the data center, Brocade shares a common industry view that IT infrastructures will eventually evolve to a highly virtualized, services-on-demand state enabled through the cloud. This evolution has already started inside the data center, and Brocade offers insights on the challenges faced as it moves out to the rest of the network. The realization of this vision requires radically simplified network architectures and an evolutionary path toward this desired end-state. The process is as important as reaching the end-state. In contrast, the Brocade One architecture takes a customer-centric approach with the following commitments:
• Unmatched simplicity. Dramatically simplifying the design and ongoing support of IT infrastructures. This is best achieved through a deep understanding of data center networking intricacies and the rejection of rip-and-replace deployment scenarios with vertically integrated stacks sourced from a single vendor.
• Investment protection. Emphasizing an approach that builds on existing customer multivendor infrastructures while improving their total cost of ownership.

• High-availability networking. Supporting the ever-increasing requirements for unparalleled uptime by setting the standard for continuous operations.
• Optimized applications. Optimizing current and future customer applications.

The new Brocade converged fabric solutions include unique and powerful innovations customized to support virtualized data centers, including:
• Brocade Virtual Cluster Switching™ (VCS). A new class of Brocade-developed technologies designed to address the unique requirements of virtualized data centers. Brocade VCS, available in shipping product in late 2010, overcomes the limitations of conventional Ethernet networking by applying non-stop operations, ease of management, and resiliency. The pillars of Brocade VCS (detailed in the next section) are the Ethernet fabric, distributed intelligence, logical chassis, and dynamic services.
• Brocade Virtual Access Layer (VAL). A logical layer between Brocade converged fabric and server virtualization hypervisors that will help ensure a consistent interface and set of services for virtual machines (VMs) connected to the network. Brocade VAL is designed to be vendor agnostic and will support all major hypervisors by utilizing industry-standard technologies, including the emerging Virtual Ethernet Port Aggregator (VEPA) and Virtual Ethernet Bridging (VEB) standards.

Figure 69. Brocade Virtual Cluster Switching (VCS). The diagram summarizes the four VCS pillars: Ethernet Fabric (lossless, low latency, convergence ready, arbitrary topology, multi-path, deterministic, auto-healing, non-disruptive), Distributed Intelligence (self-forming, masterless control, network aware of all members, devices, and VMs, no reconfiguration, VAL interaction), Logical Chassis (no STP, logically flattens and collapses network layers, scale edge and manage as if a single switch, auto-configuration, centralized or distributed management), and Dynamic Services (Layer 4-7, security services, native Fibre Channel, end-to-end connectivity over distance, and so on).

• Brocade Open Virtual Compute Blocks. Brocade is working with leading systems and IT infrastructure vendors to build tested and verified data center blueprints for highly scalable and cost-effective deployment of VMs on converged fabrics.
• Brocade Network Advisor. A best-in-class element management toolset that will help provide industry-standard and customized support for industry-leading network management, virtualization management, storage management, and data center orchestration tools.
• Multiprotocol Support. Brocade converged fabrics are designed to transport all types of network and storage traffic over a single wire to reduce complexity and help ensure a simplified migration path from current technologies.

Industry's First Converged Data Center Fabric
Brocade designed VCS as the core technology for building large, high-performance, and flat Layer 2 data center fabrics to better support the increased adoption of server virtualization. Brocade VCS is built on Data Center Bridging (DCB) technologies to meet the increased network reliability and performance requirements as customers deploy more and more VMs. Brocade helped pioneer DCB through industry standards bodies to ensure that the technology would be suitable for the rigors of data center networking. Another key technology in Brocade VCS is the emerging IETF standard Transparent Interconnection of Lots of Links (TRILL), which will provide a more efficient way of moving data throughout converged fabrics by automatically determining the shortest path between routes. Both DCB and TRILL are advances to current technologies and are critical for building large, flat, and efficient converged fabrics capable of supporting both Ethernet and storage traffic. They are also examples of how Brocade has been able to leverage decades of experience in building data center fabrics to deliver the industry's first converged fabrics.

Brocade VCS also simplifies the management of Brocade converged fabrics by managing multiple discrete switches as one logical entity. These VCS features allow customers to flatten network architectures into a single Layer 2 domain that can be managed as a single switch. This reduces network complexity and operational costs while allowing VCS users to scale their VM environments to global topologies.

Ethernet Fabric
In the new data center LAN, Spanning Tree Protocol is no longer necessary, because the Ethernet fabric appears as a single logical switch to connected servers, devices, and the rest of the network. Moreover, MultiChassis Trunking (MCT) capabilities in aggregation switches enable a logical one-to-one relationship between the access (VCS) and aggregation layers of the network.

The Ethernet fabric is an advanced multipath network utilizing TRILL, in which all paths in the network are active and traffic is automatically distributed across the equal-cost paths. In this optimized environment, traffic automatically takes the shortest path for minimum latency without manual configuration. If a single link fails, traffic is automatically rerouted to other available paths in less than a second. And single component failures do not require the entire fabric topology to reconverge. Events such as added, removed, or failed links are not disruptive to the Ethernet fabric and do not require all traffic in the fabric to stop, helping to ensure that no traffic is negatively impacted by an isolated issue.

Distributed Intelligence
Distributed intelligence allows the Ethernet fabric to be "self-forming." When two VCS-enabled switches are connected, the fabric is automatically created, and the switches discover the common fabric configuration. Scaling bandwidth in the fabric is as simple as connecting another link between switches or adding a new switch as required. Also, unlike switch stacking technologies, the Ethernet fabric is masterless. This means that no single switch stores configuration information or controls fabric operations.

Brocade VCS also enhances server virtualization with technologies that increase VM visibility in the network and enable seamless migration of policies along with the VM. VCS achieves this through a distributed services architecture that makes the fabric aware of all connected devices and shares the information across those devices. Automatic Migration of Port Profiles (AMPP), a VCS feature, enables a VM's network profiles, such as security or QoS levels, to follow the VM during migrations without manual intervention. This unprecedented level of VM visibility and automated profile management helps intelligently remove the physical barriers to VM mobility that exist in current technologies and network architectures.
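To make the multipath behavior described above concrete, the following sketch hashes each traffic flow onto one of several equal-cost paths and rehashes an affected flow when its path fails. This is an illustrative Python toy, not Brocade, TRILL, or VCS code; the path names and flow identifiers are hypothetical.

```python
import hashlib

def pick_path(flow_id: str, equal_cost_paths: list) -> str:
    """Hash a flow identifier onto one of several equal-cost paths.

    All paths stay active and different flows spread across them, which is
    the general idea behind multipath forwarding in a fabric.
    """
    digest = hashlib.sha256(flow_id.encode()).digest()
    return equal_cost_paths[int.from_bytes(digest[:4], "big") % len(equal_cost_paths)]

def reroute_on_failure(flow_id: str, paths: list, failed: set) -> str:
    """Rehash a flow whose path has failed over the surviving paths only."""
    return pick_path(flow_id, [p for p in paths if p not in failed])

if __name__ == "__main__":
    paths = ["path-1", "path-2", "path-3", "path-4"]   # hypothetical fabric links
    flows = [f"10.0.0.{i}:49152->10.0.1.{i}:3260" for i in range(6)]
    for f in flows:
        print(f, "->", pick_path(f, paths))
    # Illustrative call: a flow pinned to a failed link is rehashed onto survivors.
    print(flows[0], "after a failure ->", reroute_on_failure(flows[0], paths, {"path-2"}))
```

In a real fabric only the flows that were pinned to the failed link are moved; here the caller decides which flows to pass to reroute_on_failure, so the rest keep forwarding undisturbed.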

The Ethernet fabric does not dictate a specific topology, so it does not restrict oversubscription ratios. Through VCS, network architects can create a topology that best meets specific application requirements. Unlike other technologies, VCS enables different end-to-end subscription ratios to be created or fine-tuned as application demands change over time.

Logical Chassis
All switches in an Ethernet fabric are managed as if they were a single logical chassis. To the rest of the network, the fabric looks no different than any other Layer 2 switch; the network sees the fabric as a single switch, whether the fabric contains as few as 48 ports or thousands of ports. Each physical switch in the fabric is managed as if it were a port module in a chassis. When a port module is added to a chassis, the module does not need to be configured, and a switch can be added to the Ethernet fabric just as easily. When a VCS-enabled switch is connected to the fabric, it inherits the configuration of the fabric and the new ports become available immediately. This enables fabric scalability without manual configuration. The logical chassis capability significantly reduces management of small-form-factor edge switches. Instead of managing each top-of-rack switch (or switches in blade server chassis) individually, organizations can manage them as one logical chassis.

Dynamic Services
Brocade VCS also offers dynamic services so that you can add new network and fabric services to Brocade converged fabrics, including capabilities such as fabric extension over distance, application delivery, native Fibre Channel connectivity, and enhanced security services such as firewalls and data encryption. Switches with these unique capabilities can join the Ethernet fabric, adding a network service layer across the entire fabric. The new switches and software with these services behave as service modules within a logical chassis, and the new services are then made available to the entire converged fabric, dynamically evolving the fabric with new functionality, which further optimizes the network in the virtualized data center and will further enable a cloud computing model.

The VCS Architecture
The VCS architecture, shown in Figure 70, flattens the network by collapsing the traditional access and aggregation layers. Since the fabric is self-aggregating, there is no need for aggregation switches to manage subscription ratios and provide server-to-server communication. Since the Ethernet fabric is one logical chassis with distributed intelligence, the VM sphere of mobility spans the entire VCS. Mobility extends even further with the VCS fabric extension Dynamic Service, inside the data center or across data centers.

For maximum flexibility of server and storage connectivity, multiple protocols and speeds are supported: 1 GbE, 10 GbE, 10 GbE with DCB, and Fibre Channel. Servers running high-priority applications or other servers requiring the highest block storage service levels connect to the SAN using native Fibre Channel. For lower-tier applications, FCoE or iSCSI storage can be connected directly to the Ethernet fabric, providing shared storage for servers connected to that fabric. At the core of the data center, routers are virtualized using MCT and provide high-performance connectivity between Ethernet fabrics.

Figure 70. A Brocade VCS reference network architecture. The diagram shows core routers connecting to the public network and a remote data center, Layer 4-7 application delivery and security services (firewall, encryption) attached to the VCS fabric, VCS fabric extension between sites, a dedicated Fibre Channel SAN for Tier 1 applications, FC/FCoE/iSCSI/NAS storage, and blade and rack-mount servers hosting VMs.

Appendix A: "Best Practices for Energy Efficient Storage Operations"
Version 1.0, October 2008
Authored by Tom Clark, Brocade, Green Storage Initiative (GSI) Chair, and Dr. Alan Yoder, NetApp, GSI Governing Board
Reprinted with permission of the SNIA

Introduction
The energy required to support data center IT operations is becoming a central concern worldwide. The increasing scarcity and higher cost of energy, however, is being accompanied by a sustained growth of applications and data. For some data centers, additional energy supply is simply not available, either due to finite power generation capacity in certain regions or the inability of the power distribution grid to accommodate more lines. Even if energy is available, it comes at an ever increasing cost. With current pricing, the cost of powering IT equipment is often higher than the original cost of the equipment itself.

Simply throwing more hardware assets at the problem is no longer viable. More hardware means more energy consumption, more heat generation and increasing load on the data center cooling system. Companies are therefore now seeking ways to accommodate data growth while reducing their overall power profile. This is a difficult challenge. Data center energy efficiency solutions span the spectrum from more efficient rack placement and alternative cooling methods to server and storage virtualization technologies.

The SNIA's Green Storage Initiative was formed to identify and promote energy efficiency solutions specifically relating to data storage. This document is the first iteration of the SNIA GSI's recommendations for maximizing utilization of data center storage assets while reducing overall power consumption.

We plan to expand and update the content over time to include new energy-related storage technologies as well as SNIA-generated metrics for evaluating energy efficiency in storage product selection.

Some Fundamental Considerations
Reducing energy consumption is both an economic and a social imperative. In terms of power generation, data centers in the US require the equivalent of six 1000 Megawatt power plants to sustain current operations. While data centers represent only ~2% of total energy consumption in the US, the dollar figure is approximately $4B annually. Global power consumption for data centers is more than twice the US figures.

Gartner predicts that by 2009, half of the world's data centers will not have sufficient power to support their applications.1 An Emerson Network Power survey projects that 96% of all data centers will not have sufficient power by 2011.2 The inability of the power generation and delivery infrastructure to accommodate the growth in continued demand, however, means that most data centers will be facing power restrictions in the coming years. Even if there were a national campaign to build alternative energy generation capability, new systems would not be online soon enough to prevent a widespread energy deficit. This simply highlights the importance of finding new ways to leverage technology to increase energy efficiency within the data center and accomplish more IT processing with fewer energy resources.

In addition to the pending scarcity and increased cost of energy to power IT operations, data center managers face a continued explosion in data growth. Since 2000, the amount of corporate data generated worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300 exabytes, with projections of ~1 zettabyte (1000 exabytes) by 2010. This data must be stored somewhere. The sustained growth of data requires new tools for data management, storage allocation, data retention and data redundancy.

1. "Gartner Says 50 Percent of Data Centers Will Have Insufficient Power and Cooling Capacity by 2008," Gartner Inc. Press Release, November 29, 2006
2. "Emerson Network Power Presents Industry Survey Results That Project 96 Percent of Today's Data Centers Will Run Out of Capacity by 2011," Emerson Press Release, November 16, 2006
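The scale of these figures is easier to see with a quick back-of-the-envelope calculation. The sketch below converts a steady power draw into annual kilowatt-hours and cost; the 5 kW rack, the $0.10/kWh rate, and the PUE of roughly 2.0 are assumptions for illustration, not figures from the SNIA paper.

```python
def annual_energy_cost(avg_watts: float, price_per_kwh: float = 0.10) -> tuple:
    """Convert a steady power draw into annual kWh and dollar cost."""
    kwh_per_year = avg_watts * 24 * 365 / 1000.0
    return kwh_per_year, kwh_per_year * price_per_kwh

if __name__ == "__main__":
    # Assumed figures: a 5 kW storage rack billed at $0.10 per kWh.
    kwh, cost = annual_energy_cost(5000, 0.10)
    print(f"{kwh:,.0f} kWh/year, roughly ${cost:,.0f}/year for the IT load alone")
    # Cooling and power distribution roughly double the bill at a PUE near 2.0.
    print(f"roughly ${cost * 2:,.0f}/year for the facility as a whole")
```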

The conflict between the available supply of energy to power IT operations and the increasing demand imposed by data growth is further exacerbated by the operational requirement for high availability access to applications and data. Mission-critical applications in particular are high energy consumers and require more powerful processors, redundant servers for failover, redundant networking connectivity, redundant fabric pathing, and redundant data storage in the form of mirroring and data replication for disaster recovery. These top tier applications are so essential for business operations that the doubling of server and storage hardware elements and the accompanying doubling of energy draw have been largely unavoidable. Here too, though, new green storage technologies and best practices can assist in retaining high availability of applications and data while reducing total energy requirements.

Shades of Green
The quandary for data center managers is in identifying which new technologies will actually have a sustainable impact for increasing energy efficiency and which are only transient patches whose initial energy benefit quickly dissipates as data center requirements change. Unfortunately, the standard market dynamic that eventually separates weak products from viable ones has not had sufficient time to eliminate the green pretenders. Consequently, analysts often complain about the 'greenwashing' of vendor marketing campaigns and the opportunistic attempt to portray marginally useful solutions as the cure to all the IT manager's energy ills. Within the broader green environmental movement, greenwashing is also known as being "lite green" or sometimes "light green".

There are, however, other shades of green. Dark green refers to environmental solutions that rely on across-the-board reductions in energy and material consumption. For a data center, a dark green tactic would be to simply reduce the number of applications and associated hardware and halt the expansion of data growth. Simply cutting back, however, is not feasible for today's business operations. To remain competitive, businesses must be able to accommodate growth and expansion of operations. Consequently, viable energy efficiency for ongoing data center operations must be based on solutions that are able to leverage state-of-the-art technologies to do much more with much less. This aligns to yet another shade of environmental green known as "bright green". Bright green solutions reject both the superficial lite green and the Luddite dark green approaches to the environment and rely instead on technical innovation to provide sustainable productivity and growth while steadily driving down energy consumption.

The following SNIA GSI best practices include many bright green solutions that accomplish the goal of energy reduction while increasing productivity of IT storage operations. There is no single silver bullet to dramatically reduce IT energy consumption and cost. Instead, multiple energy efficient technologies can be deployed in concert to reduce the overall energy footprint and bring costs under control. These recommendations collectively fall into the category of "silver buckshot" in addressing data center storage issues. Although the Best Practices recommendations listed below are numbered sequentially, no prioritization is implied. Every data center operation has different characteristics and what is suitable for one application environment may not work in another. When evaluating specific solutions, it is useful to imagine how they will work in concert with other products to achieve greater efficiencies. Thin provisioning and data deduplication, for example, are distinctly different technologies that together can help reduce the amount of storage capacity required to support applications and thus the amount of energy-consuming hardware in the data center.

Best Practice #1: Manage Your Data
A significant component of the exponential growth of data is the growth of redundant copies of data. By some industry estimates, over half of the total volume of a typical company's data exists in the form of redundant copies dispersed across multiple storage systems and client workstations. Consider the impact, for example, of emailing a 4 MB PowerPoint attachment to 100 users instead of simply sending a link to the file. The corporate email servers now have an additional 400 MB of capacity devoted to redundant copies of the same data. Even if individual users copy the attachment to their local drives, the original email and attachment may languish on the email server for months before the user tidies their Inbox. In addition, some users may copy the attachment to their individual share on a data center file server, further compounding the duplication. This phenomenon is replicated daily across companies of every size worldwide, resulting in ever increasing requirements for storage, longer backup windows and higher energy costs. And to make matters worse, the lack of data retention policies can result in duplicate copies of data being maintained and backed up indefinitely. A corporate policy for data management, redundancy and retention is therefore an essential first step in managing data growth and getting storage energy costs under control.
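The attachment example above is easy to quantify. The short sketch below only illustrates that arithmetic; the extra file-share copies in the second call are an assumed figure, not one from the SNIA paper.

```python
def duplicated_megabytes(attachment_mb: float, recipients: int,
                         extra_file_share_copies: int = 0) -> float:
    """Rough storage cost, in MB, of mailing an attachment instead of a link."""
    return attachment_mb * (recipients + extra_file_share_copies)

if __name__ == "__main__":
    # The 4 MB attachment sent to 100 users from the text: 400 MB of duplicates.
    print(duplicated_megabytes(4, 100), "MB of redundant copies on the mail servers")
    # Assume 20 recipients also drop a copy on a departmental file share.
    print(duplicated_megabytes(4, 100, extra_file_share_copies=20), "MB in total")
```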

Many companies lack data management policies or effective means to enforce them because they are already overwhelmed with the consequences of prior data avalanches. Responding reactively to the problem, however, typically results in the spontaneous acquisition of more storage capacity, longer backup cycles and more energy consumption. To proactively deal with data growth, begin with an audit of your existing applications and data and begin prioritizing data in terms of its business value. Are data sets periodically reviewed to ensure that only information that is relevant to business is retained? Does your company have a data retention policy and mechanisms to enforce it? Are you educating your users on the importance of managing their data and deleting non-essential or redundant copies of files? Are your Service Level Agreements (SLAs) structured to reward more efficient data management by individual departments?

Given that data generators (i.e., end users) typically do not understand where their data resides or what resources are required to support it, creating policies for data management and retention can be a useful means to educate end users about the consequences of excessive data redundancy. Although tools are available to help identify and reduce data redundancy throughout the network, the primary outcome of a data audit should be to change corporate behavior.

Without a logical prioritization of applications in terms of business value, all applications and data receive the same high level of service. Most applications, however, are not truly mission-critical and do not require the more expensive storage infrastructure needed for high availability and performance. In addition, even high-value data does not typically sustain its value over time. As we will see in the recommendations below, aligning applications and data to the appropriate storage tier and migrating data from one tier to another as its value changes can reduce both the cost of storage and the cost of energy to drive it. This is especially true when SLAs are structured to require fewer backup copies as data value declines.

Best Practice #2: Select the Appropriate Storage RAID Level
Storage networking provides multiple levels of data protection, ranging from simple CRC checks on data frames to more sophisticated data recovery mechanisms such as RAID. RAID guards against catastrophic loss of data when disk drives fail by creating redundant copies of data or providing parity reconstruction of data onto spare disks. RAID 1 mirroring creates a duplicate copy of disk data, but at the expense of doubling the number of disk drives and consequently doubling the power consumption of the storage infrastructure. Likewise, asynchronous and synchronous data replication provide redundant copies of disk data for high availability access and are widely deployed as insurance against system or site failure.

The primary advantage of RAID 1 is that it can withstand the failure of one or all of the disks in one mirror of a given RAID set. Accessibility to data is sometimes so essential for business operations that the ability to quickly switch from primary storage to its mirror without any RAID reconstruct penalty is an absolute business requirement. For some mission-critical environments, the extra cost and power usage characteristic of RAID 1 may be unavoidable. As shown in Best Practice #1, however, not all data is mission critical and even high value data may decrease in value over time. It is therefore essential to determine what applications and data are absolutely required for continuous business operations and thus merit more expensive and less energy efficient RAID protection.

Unlike RAID 1, RAID 5 only requires one spare drive in a RAID set. RAID 5's distributed parity algorithm enables a RAID set to withstand the loss of a single disk drive in a RAID set. While the RAID set does remain online, a failed disk must be reconstructed from the distributed parity on the surviving drives in the set, possibly impacting performance. In that respect, it offers the basic data protection against disk failure that RAID 1 provides, but only against a single disk failure and with no immediate failover to a mirrored array. By adding two additional drives, RAID 6 can withstand the loss of two disk drives in a RAID set, providing a higher availability than RAID 5. Both solutions, however, are more energy efficient than RAID 1 mirroring (or RAID 1+0 mirroring and striping) and should be considered for applications that do not require an immediate failover to a secondary array. Fewer redundant drives means less energy consumption as well as better utilization of raw capacity.
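The drive-count and power trade-off among these RAID levels can be sketched with a few lines of arithmetic. The figures below (eight 2 TB data drives at roughly 10 W each) are assumptions for illustration only, and the model ignores controller overhead and rebuild activity.

```python
def raid_footprint(data_drives: int, drive_tb: float, drive_watts: float, level: str) -> dict:
    """Drive count, usable capacity, and steady-state power for common RAID layouts."""
    if level == "RAID 1":        # full mirror: every data drive is duplicated
        total = data_drives * 2
    elif level == "RAID 5":      # one parity drive per RAID set
        total = data_drives + 1
    elif level == "RAID 6":      # two parity drives per RAID set
        total = data_drives + 2
    else:
        raise ValueError(f"unsupported level: {level}")
    return {"drives": total,
            "usable_tb": data_drives * drive_tb,
            "watts": total * drive_watts}

if __name__ == "__main__":
    for lvl in ("RAID 1", "RAID 5", "RAID 6"):
        print(lvl, raid_footprint(8, 2.0, 10.0, lvl))
```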

As shown in Figure 1, the selection of the appropriate RAID levels to retain high availability data access while reducing the storage hardware footprint can enable incremental green benefits when combined with other technologies. Green technologies use less raw capacity to store and use the same data set, and power consumption falls accordingly.

Figure 1. Software Technologies for Green Storage. The chart shows a 10 TB raw footprint shrinking toward roughly 1 TB as RAID 5/6, thin provisioning, multi-use backups, virtual clones, and deduplication and compression are applied in turn. © 2008 Storage Networking Industry Association. All Rights Reserved. Alan Yoder, NetApp.

Best Practice #3: Leverage Storage Virtualization
Storage virtualization refers to a suite of technologies that create a logical abstraction layer above the physical storage layer. Instead of managing individual physical storage arrays, virtualization enables administrators to manage multiple storage systems as a single logical pool of capacity, as shown in Figure 2.

Figure 2. Virtualizing physical storage into a single logical pool. T. Clark, Storage Virtualization: Technologies for Simplifying Data Storage and Management, Addison-Wesley; used with permission from the author. The diagram contrasts servers addressing LUNs on individual arrays with servers addressing a virtualized storage pool spanning Arrays A, B, and C.

On its own, storage virtualization is not inherently more energy efficient than conventional storage management, but it can be used to maximize efficient capacity utilization and thus slow the growth of hardware acquisition. By combining dispersed capacity into a single logical pool, for example, it is now possible to allocate additional storage to resource-starved applications without having to deploy new energy-consuming hardware. Storage virtualization is also an enabling foundation technology for thin provisioning, resizeable volumes, snapshots and other solutions that contribute to more energy efficient storage operations.

Best Practice #4: Use Data Compression
Compression has long been used in data communications to minimize the number of bits sent along a transmission link and in some storage technologies to reduce the amount of data that must be stored. Simply minimizing redundant or recurring bit patterns via compression can reduce the amount of processed data that is stored by one half or more and thus reduce the amount of total storage capacity and hardware required. Not all data is compressible, though, and some data formats have already undergone compression at the application layer. JPEG, MPEG and MP3 file formats, for example, are already compressed and will not benefit from further compression algorithms when written to disk or tape. Depending on implementation, compression can impose a performance penalty because the data must be encoded when written and decoded (decompressed) when read.
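The difference between compressible and already-compressed data is easy to demonstrate. The sketch below uses Python's standard zlib module; the repetitive log line is a stand-in for compressible business data, and random bytes stand in for JPEG/MPEG/MP3 payloads that gain nothing from a second pass.

```python
import os
import zlib

def compressed_fraction(payload: bytes) -> float:
    """Compressed size as a fraction of the original size (lower is better)."""
    return len(zlib.compress(payload, level=6)) / len(payload)

if __name__ == "__main__":
    text_like = b"txn: host=db01 op=write lun=12 status=ok\n" * 3000
    already_compressed = os.urandom(120_000)   # stands in for JPEG/MPEG/MP3 data
    print(f"repetitive text  -> {compressed_fraction(text_like):.2f} of original size")
    print(f"incompressible   -> {compressed_fraction(already_compressed):.2f} of original size")
```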

When used in combination with security mechanisms such as data encryption, however, compression must be executed in the proper sequence. Data should be compressed before encryption on writes and decrypted before decompression on reads.

Best Practice #5: Incorporate Data Deduplication
While data compression works at the bit level, conventional data deduplication works at the disk block level. Redundant data blocks are identified and referenced to a single identical data block via pointers so that the redundant blocks do not have to be maintained intact for backup (virtual to disk or actual to tape). As with data compression, the data deduplication engine must reverse the process when data is read so that the proper blocks are supplied to the read request. By retaining only unique data blocks and providing pointers for the duplicates, data deduplication can reduce storage requirements by up to 20:1. Rich targets such as full network-based backup of laptops may do much better than this.

Data deduplication may be done either in band, as data is transmitted to the storage medium, or in place, on existing stored data. In band techniques have the obvious advantage that multiple copies of data never get made, and therefore never have to be hunted down and removed. In place techniques, however, are required to address the immense volume of already stored data that data center managers must deal with.

Best Practice #6: File Deduplication
File deduplication operates at the file system level to reduce redundant copies of identical files. Similar to block level data deduplication, the redundant copies must be identified and then referenced via pointers to a single file source. Unlike block level data deduplication, however, file deduplication lacks the granularity to prevent redundancy of file content. Multiple copies of a document, for example, may only have minor changes in different areas of the document while the remaining material in the copies has identical content. If two files are 99% identical in content, both copies must be stored in their entirety. File deduplication therefore only provides a 3 or 4 to 1 reduction in data volume in general. Data deduplication, in contrast, also works at the block level to reduce redundancy within such nearly identical files.
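A minimal sketch of the block-level mechanism described above is shown below, assuming fixed 4 KB blocks and SHA-256 fingerprints; real deduplication engines differ in block sizing, hashing, and collision handling.

```python
import hashlib

BLOCK_SIZE = 4096

def dedupe(stream: bytes):
    """Keep each unique block once; record an ordered list of pointers (fingerprints)."""
    store, recipe = {}, []
    for i in range(0, len(stream), BLOCK_SIZE):
        block = stream[i:i + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        store.setdefault(fingerprint, block)
        recipe.append(fingerprint)
    return store, recipe

def rehydrate(store: dict, recipe: list) -> bytes:
    """Reverse the process on read so the proper blocks satisfy the request."""
    return b"".join(store[fp] for fp in recipe)

if __name__ == "__main__":
    volume = (b"A" * BLOCK_SIZE) * 18 + (b"B" * BLOCK_SIZE) * 2   # highly redundant data
    store, recipe = dedupe(volume)
    assert rehydrate(store, recipe) == volume
    print(f"logical blocks: {len(recipe)}, unique blocks stored: {len(store)}")
```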

Best Practice #7: Thin Provisioning of Storage to Servers
In classic server-storage configurations, servers are allocated storage capacity based on the anticipated requirements of the applications they support. Because exceeding that storage capacity over time would result in an application failure, administrators typically over-provision storage to servers. The result of fat provisioning is higher cost, both for the extra storage capacity itself and in the energy required to support additional spinning disks that are not actively used for IT processing.

Thin provisioning is a means to satisfy the application server's expectation of a certain volume size while actually allocating less physical capacity on the storage array or virtualized storage pool. Thin provisioning provides storage on demand and reduces the total disk capacity required for operations. This eliminates the under-utilization issues typical of most applications and can increase efficient capacity utilization to 70% or more. Fewer disks equate to lower energy consumption and cost, and by monitoring storage usage the storage administrator can add capacity only as required.

Best Practice #8: Leverage Resizeable Volumes
Another approach to increasing capacity utilization and thus reducing the overall disk storage footprint is to implement variable size volumes. Typically, storage volumes are of a fixed size, configured by the administrator and assigned to specific servers. Dynamic volumes, by contrast, can expand or contract depending on the amount of data generated by an application. Resizeable volumes require support from the host operating system and relevant applications. From a green perspective, more efficient use of existing disk capacity means fewer hardware resources over time and a much better energy profile.

Best Practice #9: Writeable Snapshots
Application development and testing are integral components of data center operations and can require significant increases in storage capacity to perform simulations and modeling against real data. Instead of allocating additional storage space for complete copies of live data, snapshot technology can be used to create temporary copies for testing. A snapshot of the active primary data is supplemented by writing only the data changes incurred by testing. This minimizes the amount of storage space required for testing while allowing the active non-test applications to continue unimpeded.
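The gap between advertised and consumed capacity that thin provisioning exploits can be modeled in a few lines. This is a simplified sketch with 1 GB allocation units assumed for readability; production arrays track much finer-grained extents and can reclaim freed space.

```python
class ThinVolume:
    """Advertise a large virtual size but back units with physical capacity only on first write."""

    def __init__(self, virtual_gb: int):
        self.virtual_gb = virtual_gb
        self.allocated = set()                 # 1 GB units that have actually been written

    def write(self, unit: int) -> None:
        if not 0 <= unit < self.virtual_gb:
            raise ValueError("write beyond the advertised volume size")
        self.allocated.add(unit)

    def physical_gb(self) -> int:
        return len(self.allocated)

if __name__ == "__main__":
    vol = ThinVolume(virtual_gb=1000)          # the server believes it owns 1 TB
    for unit in range(120):                    # the application has written 120 GB so far
        vol.write(unit)
    print(vol.virtual_gb, "GB promised,", vol.physical_gb(), "GB physically consumed")
```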

Best Practice #10: Deploy Tiered Storage
Storage systems are typically categorized by their performance, availability and capacity characteristics. Tiered storage is a combination of different classes of storage systems and data migration tools that enables administrators to align the value of data to the value of the storage container in which it resides. Formerly, most application data was stored on a single class of storage system until it was eventually retired to tape for preservation. Today, however, it is possible to migrate data from one class of storage array to another as the business value and accessibility requirements of that data change over time. In addition, some larger storage arrays enable customers to deploy both high-performance and moderate-performance disk sets in the same chassis, thus enabling an in-chassis data migration.

Because second-tier storage systems typically use slower spinning or less expensive disk drives and have fewer high availability features, they consume less energy compared to first-tier systems. A tiered storage strategy can help reduce your overall energy consumption while still making less frequently accessed data available to applications at a lower cost per gigabyte of storage. In addition, tiered storage is a reinforcing mechanism for data retention policies as data is migrated from one tier to another and then eventually preserved via tape or simply deleted.

Best Practice #11: Solid State Storage
Solid state storage still commands a price premium compared to mechanical disk storage, but has excellent performance characteristics and much lower energy consumption compared to spinning media. While solid state storage may not be an option for some data center budgets, it should be considered for applications requiring high performance and for tiered storage architectures as a top-tier container.

Best Practice #12: MAID and Slow-Spin Disk Technology
High performance applications typically require continuous access to storage and thus assume that all disk sets are spinning at full speed and ready to read or write data. For occasional or random access to data, however, the response time may not be as critical. MAID (massive array of idle disks) technology uses a combination of cache memory and idle disks to service requests, only spinning up disks as required. Once no further requests for data in a specific disk set are made, the drives are once again spun down to idle mode. Because each disk drive represents a power draw, MAID provides inherent green benefits.
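The tier-placement decision described in Best Practice #10 is usually driven by a simple policy on data value or access age. The sketch below maps data to a tier by the time since last access; the tier names and age thresholds are assumptions for illustration, not recommendations from the SNIA paper.

```python
from datetime import datetime, timedelta

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Map a data set to a storage tier by access age (illustrative thresholds only)."""
    age = now - last_access
    if age < timedelta(days=7):
        return "solid state (tier 0)"
    if age < timedelta(days=90):
        return "high-performance disk (tier 1)"
    if age < timedelta(days=365):
        return "capacity disk / MAID (tier 2)"
    return "tape archive (tier 3)"

if __name__ == "__main__":
    now = datetime(2010, 8, 1)
    for days in (1, 30, 200, 800):
        tier = choose_tier(now - timedelta(days=days), now)
        print(f"last touched {days:>3} days ago -> {tier}")
```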

Occasionally lengthy access times are inherent to MAID technology, however, so it is only useful when data access times of several seconds (the length of time it takes a disk to spin up) can be tolerated. As MAID systems are accessed more frequently, the energy profile begins to approach those of conventional storage arrays. Another approach is to put disk drives into slow spin mode when no requests are pending. Because slower spinning disks require less power, the energy efficiency of slow spin arrays is inversely proportional to their frequency of access.

Best Practice #13: Tape Subsystems
As a storage technology, tape is the clear leader in energy efficiency. Once data is written to tape for preservation, the power bill is essentially zero. Although the obituary for tape technology has been written multiple times over the past decade, tape endures as a viable archive media. Unfortunately, businesses today cannot simply use tape as their primary storage without inciting a revolution among end users and bringing applications to their knees. From a green standpoint, however, tape is still the best option for long term data retention.

Best Practice #14: Fabric Design
Fabrics provide the interconnect between servers and storage systems. A mesh design, for example, typically incorporates multiple switches connected by interswitch links (ISLs) for redundant pathing. Multiple (sometimes 30 or more) meshed switches represent multiple energy consumers in the data center. For larger data centers, fabrics can be quite extensive with thousands of ports in a single configuration. Because each switch or director in the fabric contributes to the data center power bill, designing an efficient fabric should include the energy and cooling impact as well as rational distribution of ports to service the storage network. Consolidating the fabric into higher port count and more energy efficient director chassis and a core-edge design can help simplify the fabric design and potentially lower the overall energy impact of the fabric interconnect.

Best Practice #15: File System Virtualization
By some industry estimates, 75% of corporate data resides outside of the data center, dispersed in remote offices and regional centers. This presents a number of issues, including the inability to comply with regulatory requirements for data security and backup, duplication of server and storage resources across the enterprise, management and maintenance of geographically distributed systems, and increased energy consumption for corporate-wide IT assets.

File system virtualization includes several technologies for centralizing and consolidating remote file data, incorporating that data into data center best practices for security and backup and maintaining local response-time to remote users. File system virtualization can also be used as a means of implementing tiered storage with transparent impact to users through use of a global name space. From a green perspective, reducing dispersed energy inefficiencies via consolidation helps lower the overall IT energy footprint.

Best Practice #16: Server, Fabric and Storage Virtualization
Data center virtualization leverages virtualization of servers, the fabric and storage to create a more flexible and efficient IT ecosystem. Server virtualization essentially deduplicates processing hardware by enabling a single hardware platform to replace up to 20 platforms. Server virtualization also facilitates mobility of applications so that the proper processing power can be applied to specific applications on demand. In addition, technologies such as NPIV (N_Port ID Virtualization) reduce the number of switches required to support virtual server connections, and emerging technologies such as FCoE (Fibre Channel over Ethernet) can reduce the number of hardware interfaces required to support both storage and messaging traffic.

Fabric virtualization enables mobility and more efficient utilization of interconnect assets by providing policy-based data flows from servers to storage. Applications that require first class handling are given a higher quality of service delivery while less demanding application data flows are serviced by less expensive paths. Finally, storage virtualization supplies the enabling foundation technology for more efficient capacity utilization, snapshots, resizeable volumes and other green storage solutions. By extending virtualization end-to-end in the data center, IT can accomplish more with fewer hardware assets and help reduce data center energy consumption.

Best Practice #17: Flywheel UPS Technology
Flywheel UPSs, while more expensive up front, are several percent more efficient (typically > 97%), more reliable, easier to maintain, and do not have the large environmental footprint that conventional battery-backed UPSs do. Forward-looking data center managers are increasingly finding that this technology is less expensive in multiple dimensions over the lifetime of the equipment.
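Two of the savings described above, server consolidation and higher UPS conversion efficiency, are straightforward to estimate. The figures in the sketch (twenty 300 W servers collapsing onto two 600 W hosts, a 100 kW IT load, 97% versus 94% UPS efficiency, $0.10/kWh) are assumptions for illustration only.

```python
def consolidation_savings(servers: int, watts_each: float,
                          hosts: int, watts_per_host: float,
                          price_per_kwh: float = 0.10) -> float:
    """Annual dollar savings from collapsing many servers onto fewer virtualized hosts."""
    saved_watts = servers * watts_each - hosts * watts_per_host
    return saved_watts * 24 * 365 / 1000.0 * price_per_kwh

def ups_loss_watts(it_load_watts: float, efficiency: float) -> float:
    """Power dissipated inside the UPS itself at a given conversion efficiency."""
    return it_load_watts * (1 - efficiency)

if __name__ == "__main__":
    print(f"~${consolidation_savings(20, 300, 2, 600):,.0f}/year from server consolidation")
    flywheel = ups_loss_watts(100_000, 0.97)
    battery = ups_loss_watts(100_000, 0.94)
    print(f"UPS losses: {flywheel:,.0f} W (flywheel) vs {battery:,.0f} W (battery-backed)")
```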

Best Practice #18: Data Center Air Conditioning Improvements
The combined use of economizers and hot-aisle/cold-aisle technology can result in PUEs of as low as 1.25. As the PUE (Power Usage Effectiveness ratio) of a traditional data center is often over 2, this difference can represent literally millions of dollars a year in energy savings.

Economizers work by using outside air instead of recirculated air when doing so uses less energy. Obviously climate is a major factor in how effective this strategy is: heat and high humidity both reduce its effectiveness.

There are various strategies for hot/cold air containment. All depend on placing rows of racks front to front and back to back, and on conditioned air containment strategies. As almost all data center equipment is designed to draw cooled air in the front and eject heated air out the back, this results in concentrating the areas where heat evacuation and cool air supply are located, which reduces the volume of air that must be conditioned. One strategy is to isolate only the cold aisles and to run the rest of the room at hot aisle temperatures. As hot aisle temperatures are typically in the 95° F range, this has the advantage that little to no insulation is needed in the building skin, and in cooler climates, some cooling is obtained via ordinary thermal dissipation through the building skin. Another strategy is to isolate both hot and cold aisles, which has the advantage that humans will find the building temperature to be more pleasant. In general, hot aisle/cold aisle technologies avoid raised floor configurations, as pumping cool air upward requires extra energy.

Best Practice #19: Increased Data Center Temperatures
Increasing data center temperatures can save significant amounts of energy. The ability to do this depends in large part on excellent temperature and power monitoring capabilities. Typical enterprise class disk drives are rated to 55° C (131° F), but disk lifetime suffers somewhat at these higher temperatures, and most data center managers think it unwise to get very close to that upper limit. Even tightly designed cold aisle containment measures may have 10 to 15 degree variations in temperature from top to bottom of a rack. The total possible variation plus the maximum measured heat gain across the rack must therefore be subtracted from the maximum tolerated temperature to get a maximum allowable cold aisle temperature.


So the more precisely that air delivery can be controlled and measured, the higher the temperature one can run in the "cold" aisles. Benefits of higher temperatures include raised chiller water temperatures and efficiency, reduced fan speed, noise and power draw, and increased ability to use outside air for cooling through an economizer.
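The PUE comparison and the cold-aisle headroom rule above can both be checked with simple arithmetic. In the sketch below, the 1000 kW IT load, the 15° F top-to-bottom variation, and the 25° F heat gain across the rack are assumed figures for illustration; only the 131° F drive rating comes from the text.

```python
def facility_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw implied by a given PUE (facility power / IT power)."""
    return it_load_kw * pue

def max_cold_aisle_temp_f(max_tolerated_f: float, variation_f: float, rack_gain_f: float) -> float:
    """Headroom rule from the text: subtract variation and rack heat gain from the limit."""
    return max_tolerated_f - variation_f - rack_gain_f

if __name__ == "__main__":
    # A 1000 kW IT load at PUE 2.25 versus PUE 1.25 is a 1000 kW difference, around the clock.
    print(facility_kw(1000, 2.25) - facility_kw(1000, 1.25), "kW of facility power avoided")
    # 131 F drive rating, 15 F cold-aisle variation, 25 F measured gain across the rack.
    print(max_cold_aisle_temp_f(131, 15, 25), "F maximum allowable cold-aisle temperature")
```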

Best Practice #20: Work with Your Regional Utilities
Some electrical utility companies and state agencies are partnering with customers by providing financial incentives for deploying more energy efficient technologies. If you are planning a new data center or consolidating an existing one, incentive programs can provide guidance for the types of technologies and architectures that will give the best results.

What the SNIA is Doing About Data Center Energy Usage
The SNIA Green Storage Initiative is pursuing a multi-pronged approach to advancing energy efficient storage networking solutions, including advocacy, promotion of standard metrics, education, development of energy best practices and alliances with other industry energy organizations such as The Green Grid. Currently, over 20 SNIA members have joined the SNIA GSI as voting members. A key requirement for customers is the ability to audit their current energy consumption and to take practical steps to minimize energy use. The task of developing metrics for measuring the energy efficiency of storage network elements is being performed by the SNIA Green Storage Technical Work Group (TWG). The SNIA GSI is supporting the technical work of the GS-TWG by funding the laboratory testing required for metrics development, formulating a common taxonomy for classes of storage, and promoting GS-TWG metrics for industry standardization. The SNIA encourages all storage networking vendors, channels, technologists and end users to actively participate in the green storage initiative and help discover additional ways to minimize the impact of IT storage operations on power consumption. If, as industry analysts forecast, adequate power for many data centers will simply not be available, we all have a vital interest in reducing our collective power requirements and in making our technology do far more with far less environmental impact.



For more information about the SNIA Green Storage Initiative, link to: http://www.snia.org/forums/green/
To view the SNIA GSI Green Tutorials, link to: http://www.snia.org/education/tutorials#green

About the SNIA
The Storage Networking Industry Association (SNIA) is a not-for-profit global organization, made up of some 400 member companies and 7000 individuals spanning virtually the entire storage industry. SNIA's mission is to lead the storage industry worldwide in developing and promoting standards, technologies, and educational services to empower organizations in the management of information. To this end, the SNIA is uniquely committed to delivering standards, education, and services that will propel open storage networking solutions into the broader market. For additional information, visit the SNIA web site at www.snia.org.

NOTE: The section "Green Storage Terminology" has been omitted from this reprint; however, you can find green storage terms in the "Glossary" on page 141.


Appendix B: Online Sources
ANSI: ansi.org
ASHRAE: ashrae.com
Blade Systems Alliance: bladesystems.org
Brocade: brocade.com
Brocade Communities: community.brocade.com
Brocade Data Center Virtualization: brocade.com/virtualization
Brocade TechBytes: brocade.com/techbytes
Climate Savers: climatesaverscomputing.org
Data Center Journal: datacenterjournal.com
Data Center Knowledge: datacenterknowledge.com
Green Storage Initiative: snia.org/forums/green
Greener Computing: greenercomputing.com
IEEE: ieee.org
IETF: ietf.org
LEED: usgbc.org/DisplayPage.aspx?CMSPageID=222
SNIA: snia.org
The Green Grid: thegreengrid.org
Uptime Institute: uptimeinstitute.org
US Department of Energy - Data Centers: www1.eere.energy.gov/industry/saveenergynow/partnering_data_centers.html



Glossary
Data center network terminology

ACL: Access control list, a security mechanism for assigning various permissions to a network device.
AES256-GCM: An IEEE encryption standard for data on tape.
AES256-XTS: An IEEE encryption standard for data on disk.
ANSI: American National Standards Institute.
API: Application Programming Interface, a set of calling conventions for program-to-program communication.
ASHRAE: American Society for Heating, Refrigerating, and Air Conditioning Engineers.
ASIC: Application-specific integrated circuit, hardware designed for specific high-speed functions required by protocol applications such as Fibre Channel and Ethernet.
Access Gateway: A Brocade product designed to optimize storage I/O for blade server frames.
Access layer: Network switches that provide direct connection to servers or hosts.
Active power: The energy consumption of a system when powered on and under normal workload.
Adaptive Networking: Brocade technology that enables proactive changes in network configurations based on defined traffic flows.
Aggregation layer: Network switches that provide connectivity between multiple access layer switches and the network backbone or core.
Application server: A compute platform optimized for hosting applications for other programs or client access.
ARP spoofing: Address Resolution Protocol spoofing, a hacker technique for associating a hacker's Layer 2 (MAC) address with a trusted IP address.

Asynchronous Data Replication: For storage, writing the same data to two separate disk arrays based on a buffered scheme that may not capture every data write, typically used for long-distance disaster recovery.
BTU: British Thermal Unit, a metric for heat dissipation.
Blade server: A server architecture that minimizes the number of components (memory, storage, and so on) required per blade while relying on the shared elements (power supply, fans, I/O) of a common frame.
Blanking plates: Metal plates used to cover unused portions of equipment racks to enhance air flow.
Bright green: Applying new technologies to enhance energy efficiency while maintaining or improving productivity.
CEE: Converged Enhanced Ethernet, modifications to conventional 10 Gbps Ethernet to provide the deterministic data delivery associated with Fibre Channel, also known as Data Center Bridging (DCB).
CFC: Chlorofluorocarbon, a refrigerant that has been shown to deplete ozone.
CNA: Converged network adapter, a DCB-enabled adapter that supports both FCoE and conventional TCP/IP traffic.
Control path: In networking, handles configuration and traffic exceptions and is implemented in software. Since it takes more time to handle control path messages, it is often logically separated from the data path to improve performance.
CRAC: Computer room air conditioning.
Core layer: Typically high-performance network switches that provide centralized connectivity for the data center aggregation and access layer switches.
Data compression: Bit-level reduction of redundant bit patterns in a data stream via encoding. Typically used for WAN transmissions and archival storage of data to tape.
Data deduplication: Block-level reduction of redundant data by replacing duplicate data blocks with pointers to a single good block.
Data path: In networking, handles data flowing between devices (servers, clients, storage, and so on). To keep up with increasing speeds, the data path is often implemented in hardware, typically in ASICs.
Dark green: Addressing energy consumption by the across-the-board reduction of energy consuming activities.

DAS: Direct-attached storage, connection of disks or disk arrays directly to servers with no intervening network.
DCB: Data Center Bridging, enhancements made to Ethernet LANs for use in data center environments, standards developed by IEEE and IETF.
DCC: Device Connection Control, a Brocade SAN security mechanism to allow only authorized devices to connect to a switch.
DCiE: Data Center Infrastructure Efficiency, a Green Grid metric for measuring IT equipment power consumption in relation to total data center power draw.
Distribution layer: Typically a tier in the network architecture that routes traffic between LAN segments in the access layer and aggregates access layer traffic to the core layer.
DMTF: Distributed Management Task Force, a standards body focused on systems management.
DoS/DDoS: Denial of service/Distributed denial of service, a hacking technique to prevent a server from functioning by flooding it with continuous network requests from rogue sources.
DWDM: Dense wave division multiplexing, a technique for transmitting multiple data streams on a single fiber optic cable by using different wavelengths.
Data center: A facility to house computer systems, storage and network operations.
ERP: Enterprise resource planning, an application that coordinates resources, information and functions of business across the enterprise.
Economizer: Equipment used to treat external air to cool a data center or building.
Encryption: A technique to encode data into a form that can't be understood so as to secure it from unauthorized access. Often, a key is used to encode and decode the data from its encrypted format.
End of row: EoR, provides network connectivity for multiple racks of servers by provisioning a high-availability switch at the end of the equipment rack row.
Energy: The capacity of a physical system to do work.
Energy efficiency: Using less energy to provide an equivalent level of energy service.
Energy Star: An EPA program that leverages market dynamics to foster energy efficiency in product design.
Exabyte: 1 billion gigabytes.

FAIS: Fabric Application Interface Standard, an ANSI standard for providing storage virtualization services from a Fibre Channel switch or director.
FCF: Fibre Channel forwarder, the function in FCoE that forwards frames between a Fibre Channel fabric and an FCoE network.
FCIP: Fibre Channel over IP, an IETF specification for encapsulating Fibre Channel frames in TCP/IP, typically used for SAN extension and disaster recovery applications.
FCoE: Fibre Channel over Ethernet, an ANSI standard for encapsulating Fibre Channel frames over Converged Enhanced Ethernet (CEE) to simplify server connectivity.
FICON: Fibre Connectivity, a Fibre Channel Layer 4 protocol for mapping legacy IBM transport over Fibre Channel.
File deduplication: Reduction of file copies by replacing duplicates with pointers to a single original file.
File server: A compute platform optimized for providing file-based data to clients over a network.
Five-nines: 99.999% availability, or 5.26 minutes of downtime per year.
Flywheel UPS: An uninterruptible power supply technology using a balanced flywheel and kinetic energy to provide transitional power.
Gateway: In networking, a gateway converts one protocol to another at the same layer of the networking stack, typically used for distance applications.
GbE: Gigabit Ethernet.
Gigabit (Gb): 1000 megabits.
Gigabyte (GB): 1000 megabytes.
Greenwashing: A by-product of excessive marketing and ineffective engineering.
GSI: Green Storage Initiative, a SNIA initiative to promote energy efficient storage practices and to define metrics for measuring the power consumption of storage systems and networks.
GSLB: Global server load balancing, a Brocade ServerIron ADX feature that enables client requests to be redirected to the most available and higher-performance data center resource.
HBA: Host bus adapter, a network interface optimized for storage I/O, typically to a Fibre Channel SAN.

HCFC: Hydrochlorofluorocarbon; a refrigerant shown to deplete ozone.
HPC: High-Performance Computing; typically supercomputers or computer clusters that provide teraflop (10^12 floating point operations) levels of performance.
HVAC: Heating, ventilation, and air conditioning.
Hot aisle/cold aisle: The arrangement of data center equipment racks in alternating rows to optimize air flow for cooling.
Hot-swap: The ability to replace a hardware component without disrupting ongoing operations.
Hypervisor: Software or firmware that enables multiple instances of an operating system and applications (for example, VMs) to run on a single hardware platform.
ICL: Inter-chassis link; high-performance channels used to connect multiple Brocade DCX/DCX-4S backbone platform chassis in two- or three-chassis configurations.
Idle power: The power consumption of a system when powered on but with no active workload.
IEEE: Institute of Electrical and Electronics Engineers; a standards body responsible for, among other things, Ethernet standards.
IETF: Internet Engineering Task Force; responsible for TCP/IP de facto standards.
IFL: Inter-fabric link; a set of Fibre Channel switch ports (Ex_Port on the router and E_Port on the switch) that can route device traffic between independent fabrics.
IFR: Inter-fabric routing; an ANSI standard for providing connectivity between separate Fibre Channel SANs without creating an extended flat Layer 2 network.
ILM: Information lifecycle management; a technique for migrating storage data from one class of storage system to another based on the current business value of the data.
Initiator: A SCSI device within a host that initiates I/O between the host and storage.
IOPS/W: Input/output operations per second per watt; a metric for evaluating storage I/O performance per fixed unit of energy. An illustrative calculation follows these entries.
iSCSI: Internet SCSI; an IETF standard for transporting SCSI block data over conventional TCP/IP networks.
iSER: iSCSI Extensions for RDMA; an IETF specification to facilitate direct memory access by iSCSI network adapters.
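As a quick illustration of the IOPS/W metric, a storage system delivering 200,000 I/O operations per second while drawing 800 watts would be rated as follows; both figures are hypothetical and are not drawn from any product discussed in this book:

% Hypothetical performance and power figures.
\[
\frac{200{,}000\ \text{IOPS}}{800\ \text{W}} \;=\; 250\ \text{IOPS/W}
\]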

Initiator: In storage, a server or host system that initiates storage I/O requests.
ISL: Inter-switch Link; Fibre Channel switch ports (E_Ports) used to provide switch-to-switch connectivity.
iSNS: Internet Simple Name Server; an IETF specification to enable device registration and discovery in iSCSI environments.
kWh: Kilowatt hours; a unit of electrical usage commonly used by power companies for billing purposes. A short example follows these entries.
LACP: Link Aggregation Control Protocol; an IEEE specification for grouping multiple separate network links between two switches to provide a faster logical link.
LAN: Local area network; a network covering a small physical area, such as a home or office, or small groups of buildings, such as a campus or airport, typically based on Ethernet and/or WiFi.
Layer 2: In networking, a link layer protocol for device-to-device communication within the same subnet or network.
Layer 3: In networking, a routing protocol (for example, IP) that enables devices to communicate between different subnets or networks.
Layer 4–7: In networking, upper-layer network protocols (for example, TCP) that provide end-to-end connectivity, session management, and data formatting.
Lite (or Light) green: Solutions or products that purport to be energy efficient but which have only negligible green benefits.
LUN: Logical Unit Number; commonly used to refer to a volume of storage capacity configured on a target storage system.
LUN masking: A means to restrict advertisement of available LUNs to prevent unauthorized or unintended storage access.
MAID: Massive array of idle disks; a storage array that only spins up disks to an active state when data in a disk set is accessed or written.
MAN: Metropolitan area network; a mid-distance network often covering a metropolitan-wide radius (about 200 km).
MaxTTD: Maximum time to data; for a given category of storage, the maximum time allowed to service a data read or write.
MRP: Metro Ring Protocol; a Brocade value-added protocol to enhance resiliency and recovery from a link or switch outage.
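To illustrate the kWh entry, energy in kilowatt hours is simply average power in kilowatts multiplied by hours of operation. The 500 W (0.5 kW) load below is an assumed figure used only for illustration:

% Assumed 0.5 kW load for illustration.
\[
0.5\ \text{kW} \times 24\ \text{h} \;=\; 12\ \text{kWh per day}
\]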

Metadata: In storage virtualization, a data map that associates physical storage locations with logical storage locations.
Metric: A standard unit of measurement, typically part of a system of measurements to quantify a process or event within a given domain. GB/W and IOPS/W are examples of proposed metrics that can be applied for evaluating the energy efficiency of storage systems. An illustrative GB/W calculation follows these entries.
NAS: Network-attached storage; use of an optimized file server or appliance to provide shared file access over an IP network.
Near online storage: Storage systems with longer maximum time to data access, typical of MAID and fixed content storage (CAS).
Network consolidation: Replacing multiple smaller switches and routers with larger switches that provide higher port densities, performance, and energy efficiency.
Network virtualization: Technology that enables a single physical network infrastructure to be managed as multiple separate logical networks, or for multiple physical networks to be managed as a single logical network.
Non-removable media library: A virtual tape backup system with spinning disks and shorter maximum time to data access compared to conventional tape.
NPIV: N_Port ID Virtualization; a Fibre Channel standard that enables multiple logical network addresses to share a common physical network port.
OC3: A 155 Mbps WAN link speed.
OLTP: On-line Transaction Processing; commonly associated with business applications that perform transactions with a database.
Online storage: Storage systems with fast data access, typical of most data center storage arrays in production environments.
Open Systems: A vendor-neutral, non-proprietary, standards-based approach for IT equipment design and deployment.
Orchestration: Software that enables centralized coordination between virtualization capabilities in the server, storage, and network domains to automate data center operations.
PDU: Power Distribution Unit; a system that distributes electrical power, typically stepping down the higher input voltage to voltages required by end equipment. A PDU can also be a single-inlet/multi-outlet device within a rack cabinet.
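As an illustration of the GB/W metric mentioned under Metric, an array presenting 200,000 GB of capacity while drawing 4,000 watts would be rated as follows; both figures are hypothetical:

% Hypothetical capacity and power figures.
\[
\frac{200{,}000\ \text{GB}}{4{,}000\ \text{W}} \;=\; 50\ \text{GB/W}
\]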

Petabyte: 1000 terabytes.
PoE/PoE+: Power over Ethernet; IEEE standards for powering IP devices such as VoIP phones over Ethernet cabling.
Port: In Fibre Channel, a port is the physical connection on a switch, host, or storage array. Each port has a personality (N_Port, F_Port, E_Port, and so on), and the personality defines the port's function within the overall Fibre Channel protocol.
QoS: Quality of service; a means to prioritize network traffic on a per-application basis.
RAID: Redundant array of independent disks; a storage technology for expediting reads and writes of data to disks and/or providing data recovery in the event of disk failure.
Raised floor: Typical of older data center architecture, a raised floor provides space for cable runs between equipment racks and cold air flow for equipment cooling.
RBAC: Role-based access control; network permissions based on defined roles or work responsibilities.
Removable media library: A tape or optical backup system with removable cartridges or disks and >80 ms maximum time to data access.
Resizeable volumes: Variable-length volumes that can expand or contract depending on the data storage requirements of an application.
RPO: Recovery point objective; defines how much data is lost in a disaster.
RSCN: Registered state change notification; a Fibre Channel fabric feature that enables notification of storage resources leaving or entering the SAN.
RSTP: Rapid Spanning Tree Protocol; a bridging protocol that replaces conventional STP and enables an approximately 1-second recovery in the event of a primary link failure.
RTO: Recovery time objective; defines how long data access is unavailable in a disaster.
SAN: Storage area network; a shared network infrastructure deployed between servers, disk arrays, and tape subsystems, typically based on Fibre Channel.
SAN boot: Firmware that enables a server to load its boot image across a SAN.
SCC: Switch Connection Control; a Brocade SAN security mechanism to allow only authorized switch-to-switch links.

Server: A compute platform used to host one or more applications for client access.
Server platform: Hardware (typically CPU, memory, and I/O) used to support file or application access.
Server virtualization: Software or firmware that enables multiple instances of an operating system and applications to be run on a single hardware platform.
sFlow: An IETF specification for performing network packet captures at line speed for diagnostics and analysis.
SI-EER: Site Infrastructure Energy Efficiency Ratio; a formula developed by The Uptime Institute to calculate total data center power consumption in relation to IT equipment power consumption. A worked example follows these entries.
Single Initiator Zoning: A method of securing traffic on a Fibre Channel fabric so that only the storage targets used by a host initiator can connect to that initiator.
SLA: Service-level agreement; typically a contracted assurance of response time or performance of an application.
SMB: Small and medium business; companies typically with fewer than 1000 employees.
SMI-S: Storage Management Initiative Specification; a SNIA standard based on CIM/WBEM for managing heterogeneous storage infrastructures.
Snapshot: A point-in-time copy of a data set or volume used to restore data to a known good state in the event of data corruption or loss.
SNIA: Storage Networking Industry Association; a standards body focused on data storage hardware and software.
SNS: Simple name server; a Fibre Channel switch feature that maintains a database of attached devices and capabilities to streamline device discovery.
Solid state storage: A storage device based on flash or other static memory technology that emulates conventional spinning disk media.
SONET: Synchronous Optical Networking; a WAN transport technology for multiplexing multiple protocols over a fiber optic infrastructure.
SPOF: Single point of failure.
Storage taxonomy: A hierarchical categorization of storage networking products based on capacity, port count, availability, and other attributes. A storage taxonomy is required for the development of energy efficiency metrics so that products in a similar class can be evaluated.
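Following the SI-EER definition (total data center power relative to IT equipment power), a facility drawing 1,600 kW in total to support 1,000 kW of IT load would score as shown below; the kilowatt figures are illustrative assumptions, not Uptime Institute data:

% Illustrative facility and IT load figures.
\[
\mathrm{SI\text{-}EER} \;=\; \frac{1600\ \text{kW}}{1000\ \text{kW}} \;=\; 1.6
\]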

Storage virtualization: Technology that enables multiple storage arrays to be logically managed as a single storage pool.
Synchronous Data Replication: For storage, writing the same data to two separate storage systems on a write-by-write basis so that identical copies of current data are maintained; typically used for metro distance disaster recovery.
T3: A 45 Mbps WAN link speed.
Target: In storage, a storage device or system that receives and executes storage I/O requests from a server or host; also, a SCSI target within a storage device that communicates with a host SCSI initiator.
TCP/IP: Transmission Control Protocol/Internet Protocol; used to move data in a network (IP) and to move data between cooperating computer applications (TCP). The Internet commonly relies on TCP/IP.
Terabyte: 1000 gigabytes.
Thin provisioning: Allocating less physical storage to an application than is indicated by the virtual volume size.
Three-tier architecture: A network design that incorporates access, aggregation, and core layers to accommodate growth and maintain performance.
Tiers: Often applied to storage to indicate different cost/performance characteristics and the ability to dynamically move data between tiers based on a policy such as ILM.
Top Talkers: A Brocade technology for identifying the most active initiators in a storage network.
ToR: Top of rack; provides network connectivity for a rack of equipment by provisioning one or more switches in the upper slots of each rack.
TRILL: Transparent Interconnect for Lots of Links; an emerging IETF standard to enable multiple active paths through an IP network infrastructure.
Trunking: In Fibre Channel, a means to combine multiple inter-switch links (ISLs) to create a faster virtual link.
TWG: Technical Working Group; commonly formed to define open, publicly available technology standards.
Type 1 virtualization: Server virtualization in which the hypervisor runs directly on the hardware.
Type 2 virtualization: Server virtualization in which the hypervisor runs inside an instance of an operating system.

U: A unit of vertical space (1.75 inches) used to measure how much rack space a piece of equipment requires; sometimes expressed as RU (Rack Unit). A short calculation follows these entries.
UPS: Uninterruptible power supply.
uRPF: Unicast Reverse Path Forwarding; an IETF specification for blocking packets from unauthorized network addresses.
VCS: Virtual Cluster Switching; a new class of Brocade-developed technologies that overcomes the limitations of conventional Ethernet networking by applying nonstop operations, any-to-any connectivity, and the intelligence of fabric switching.
Virtual Fabrics: An ANSI standard to create separate logical fabrics within a single physical SAN infrastructure.
Virtualization: Technology that provides a logical abstraction layer between the administrator or user and the physical IT infrastructure.
VLAN: Virtual LAN; an IEEE standard that enables multiple hosts to be configured as a single network regardless of their physical location, often spanning multiple switches.
VM: Virtual machine; one of many instances of a virtual operating system and applications hosted on a physical server.
VoIP: Voice over IP; a method of carrying telephone traffic over an IP network.
VRF: Virtual Routing and Forwarding; a means to enable a single physical router to maintain multiple separate routing tables and thus appear as multiple logical routers.
VRRP: Virtual Router Redundancy Protocol; an IETF specification that enables multiple routers to be configured as a single virtual router to provide resiliency in the event of a link or route failure.
VSRP: Virtual Switch Redundancy Protocol; a Brocade value-added protocol to enhance network resilience and recovery from a link or switch failure.
WAN: Wide area network, commonly able to span the globe; WANs commonly employ TCP/IP networking protocols.
Work cell: A unit of rack-mounted IT equipment used to calculate energy consumption, developed by Intel.
WWN: World Wide Name; a unique 64-bit identifier assigned to a Fibre Channel initiator or target.
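As a short illustration of the U entry, a 42U rack (a common height assumed here, not a figure cited in this book) encloses the following amount of vertical equipment space:

% 42U rack height is an assumed, commonly used value.
\[
42 \times 1.75\ \text{in} \;=\; 73.5\ \text{in of vertical rack space}
\]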

Zettabyte: 1000 exabytes.
Zoning: A Fibre Channel standard for assigning specific initiators and targets as part of a separate group within a shared storage network infrastructure.

Index

Symbols
"Securing Fibre Channel Fabrics" by Roger Bouchard 55

A
access control lists (ACLs) 27, 57
Access Gateway 22, 28
access layer 71
  cabling 72
  oversubscription 72
Adaptive Networking services 48
Address Resolution Protocol (ARP) spoofing 78
aggregation layer 71
  functions 74
air conditioning 5
air flow systems 5
ambient temperature 10, 14
American National Standards Institute T11.5 84
ANSI/INCITS T11.5 standard 41
ANSI/TIA-942 Telecommunications Infrastructure Standard for Data Centers 2, 3
application delivery controllers 80
  performance 82
application load balancing 81, 85
ASHRAE Thermal Guidelines for Data Processing Environments 10
asynchronous data replication 65
Automatic Migration of Port Profiles (AMPP) 120

B
backup 59
Bidirectional Forwarding Detection (BFD) 77
blade servers 21
  storage access 28
  VMs 22
blade.org 22
blanking plates 13
boot from SAN 24
boot LUN discovery 25
Brocade Management Pack for Microsoft System Center Virtual Machine Manager 86
Brocade Network Advisor 119
Brocade One 117
Brocade Virtual Access Layer (VAL) 118
Brocade Virtual Cluster Switching (VCS) 118
BTU (British Thermal Units) per hour (h) 10

C
CFC (chlorofluorocarbon) 14
computer room air conditioning (CRAC) 4, 14
consolidation
  data centers 70
  server 21
converged fabrics 118, 119
cooling 14
cooling towers 15
core layer 71
  functions 74
customer-centric approach 117

D
dark fiber 65, 67
Data Center Bridging (DCB) 119
data center consolidation 46, 48
data center evolution 117
Data Center Infrastructure Efficiency (DCiE) 6
data center LAN
  bandwidth 69
  consolidation 76
  design 75
  infrastructure 70
  security 77
  server platforms 72
data encryption 56
data encryption for data-at-rest 27
decommissioned equipment 13
dehumidifiers 14
denial of service (DoS) attacks 77
dense wavelength division multiplexing (DWDM) 65, 67
Device Connection Control (DCC) 57
disaster recovery (DR) 65
distance extension 65
  technologies 66
distributed DoS (DDoS) attacks 77
Distributed Management Task Force (DMTF) 19, 84
dry-side economizers 15

E
economizers 14
EMC Invista software 44, 87
Emerson Power survey 1
encryption 56
  data-in-flight 27
encryption keys 56
energy efficiency 7
  Brocade DCX 54
  new technology 70
  product design 53, 79
Environmental Protection Agency (EPA) 10
EPA Energy Star 17
Ethernet networks 69
external air 14

F
F_Port Trunking 28
Fabric Application Interface Standard (FAIS) 41, 84
fabric management 119
fabric-based security 55
fabric-based storage virtualization 41
fabric-based zoning 26
fan modules 53
FastWrite acceleration 66
Fibre Channel over Ethernet (FCoE)
  compared to iSCSI 61
Fibre Channel over IP (FCIP) 62
FICON acceleration 66
floor plan 11
forwarding information base (FIB) 78
frame redirection in Brocade FOS 57
Fujitsu fiber optic system 15

G
Gartner prediction 1
Gigabit Ethernet 59
global server load balancing (GSLB) 82
Green Storage Initiative (GSI) 53
Green Storage Technical Working Group (GS TWG) 53

H
HCFC (hydrochlorofluorocarbon) 14
high-level metrics 7
Host bus adapters (HBAs) 23
hot aisle/cold aisle 11
HTTP (HyperText Transfer Protocol) 80
HTTPS (HyperText Transfer Protocol Secure) 80
humidifiers 14
humidity 10
humidity probes 15
hypervisor 18
  secure access 19

I
IEEE
  AES256-GCM encryption algorithm for tape 56
  AES256-XTS encryption algorithm for disk 56
information lifecycle management (ILM) 39
ingress rate limiting (IRL) 49
Integrated Routing (IR) 63
Intel x86 18
intelligent fabric 48
inter-chassis links (ICLs) 28
Invista software from EMC 44
IP address spoofing 78
IP network links 66
IP networks
  layered architecture 71
  resiliency 76
iSCSI 58
  Serial RDMA (iSER) 60
IT processes 83

K
key management solutions 57

L
Layer 4–7 70
Layer 4–7 switches 80
link congestion 49
logical fabrics 63
long-distance SAN connectivity 67

M
management framework 85
measuring energy consumption 12
metadata mapping 42, 43
Metro Ring Protocol (MRP) 77
Multi-Chassis Trunking (MCT) 120

N
N_Port ID Virtualization (NPIV) 24, 28
N_Port Trunking 23
network health monitoring 85
network segmentation 78

O
open systems approach 84
Open Virtual Machine Format (OVF) 84
outside air 14
ozone 14

P
particulate filters 14
Patterson and Pratt research 12
power consumption 70
power supplies 53
preferred paths 50

Q
quality of service
  application tiering 49
Quality of Service (QoS) 24, 26

R
Rapid Spanning Tree Protocol (RSTP) 77
recovery point objective (RPO) 65
recovery time objective (RTO) 65
refrigerants 14
registered state change notification (RSCN) 63
RFC 3176 standard 77
RFC 3704 (uRPF) standard 78
RFC 3768 standard 76
role-based access control (RBAC) 27
routing information base (RIB) 78

S
SAN boot 24
SAN design 45, 46
  storage-centric design 48
security
  SAN 55
  SAN security myths 55
  Web applications 81
security solutions 27
Server and StorageIO Group 78
server virtualization 18
  IP networks 69
  mainstream 86
  networking complement 79
service-level agreements (SLAs) 27
  network 80
sFlow RFC 3176 standard 77
simple name server (SNS) 60, 63
Site Infrastructure Energy Efficiency Ratio (SI-EER) 5
software as a service (SaaS) 70
Spanning Tree Protocol (STP) 73
standardized units of joules 9
state change notification (SCN) 45
Storage Application Services (SAS) 87
Storage Networking Industry Association (SNIA) 53
  Green Storage Power Measurement Specification 53
  Storage Management Initiative (SMI) 84
storage virtualization 35
  fabric-based 41
  metadata mapping 38
  tiered data storage 40
support infrastructure 4
Switch Connection Control (SCC) 57
synchronous data replication 65
Synchronous Optical Networking (SONET) 65

T
tape pipelining algorithms 66
temperature probes 15
The Green Grid 6
tiered data storage 40
Top Talkers 26, 51
top-of-rack access solution 73
traffic isolation (TI) 51
traffic prioritization 26
Transparent Interconnection of Lots of Links (TRILL) 119

U
Unicast Reverse Path Forwarding (uRPF) 78
UPS systems 3
Uptime Institute 5

V
variable speed fans 12
Virtual Cluster Switching (VCS) architecture 122
Virtual Fabrics (VF) 62
virtual IPs (VIPs) 79
virtual LUNs 37
virtual machines (VMs) 17
  migration 86, 120
  mobility 20
Virtual Router Redundancy Protocol (VRRP) 76
Virtual Routing and Forwarding (VRF) 78
virtual server pool 20
Virtual Switch Redundancy Protocol (VSRP) 77
virtualization
  network 79
  orchestration 84
  server 18
  storage 35
Virtualization Management Initiative (VMAN) 19
VM mobility
  IP networks 70
VRRP Extension (VRRPE) 76

W
wet-side economizers 15
work cell 12
World Wide Name (WWN) 25
