WHITE PAPER
Data Center
Achieving Enterprise SAN Performance with the Brocade DCX Backbone
Overview
In January 2008, Brocade introduced the Brocade DCX Backbone (see Figure 1), the
first platform in the industry to provide 8 Gigabits per second (Gbps) Fibre Channel (FC)
capabilities. With the release of Fabric OS (FOS) 6.0 at the same time, the Brocade DCX
Backbone added 8 Gbps Fibre Channel and FICON performance for data-intensive storage
applications.
In January 2009, the Brocade DCX-4S (see Figure 2) was added to the backbone family,
and the Brocade DCX has become a key component in thousands of data centers around the
world. New Fibre Channel over Ethernet (FCoE) and SAN extension blades were introduced in
September 2009. In June 2010, Brocade launched the industry's first and only 8 Gbps
64-port blade.
Although this paper focuses on the Brocade DCX, some information is provided for the
Brocade DCX-4S Backbone, notably in the section on Inter-Chassis Link (ICL) configuration.
For more details on these two backbone platforms, see the Brocade DCX Backbone Family
Data Sheet on www.brocade.com.
Figure 1. Brocade DCX (left) and Brocade DCX-4S (right).
The Brocade DCX Backbone uses just 6 watts AC per port and 0.7 watts per Gbps at its
maximum 8 Gbps 512-port configuration. The Brocade DCX-4S uses just 6.7 watts AC per
port and 0.8 watts per Gbps at its maximum 8 Gbps 256-port configuration. Both are twice
as efficient as their predecessors and up to six times more efficient than competitive
products. This efficiency not only reduces data center power bills; it also reduces cooling
requirements and minimizes or eliminates the need for data center infrastructure upgrades,
such as new Power Distribution Units (PDUs), power circuits, and larger Heating, Ventilation,
and Air Conditioning (HVAC) units. In addition, the highly integrated architecture uses fewer
active electrical components on the chassis, which improves key reliability metrics such
as Mean Time Between Failure (MTBF).
The Brocade DCX Backbone leverages a highly flexible multiprotocol architecture, supporting
Fibre Channel, Fibre Connectivity (FICON), Fibre Channel over Ethernet, Fibre Channel over
IP (FCIP), IP over Fibre Channel (IPFC), and Data Center Bridging (DCB). IT organizations
can also easily mix FC port blades with advanced functionality blades for FCoE server I/O
convergence, SAN encryption, and SAN extension to build an infrastructure that optimizes
functionality, price, and performance. And ease of setup enables data center administrators
to quickly maximize backbone performance and availability.
This paper describes the internal architecture of Brocade DCX Backbones and how best
to leverage their industry-leading performance and blade flexibility to meet business
requirements.
Condor 2 ASICs also enable Brocade Inter-Switch Link (ISL) Trunking with up to 64 Gbps
full-duplex, frame-level trunks (up to 8 x 8 Gbps links in a trunk) and Dynamic Path Selection
(DPS) for exchange-level routing between individual ISLs or ISL Trunking groups. Exchange-based
DPS optimizes fabric-wide performance by automatically routing data to the most efficient
available path in the fabric. DPS augments ISL Trunking to provide more effective load
balancing in certain configurations, such as routing data between multiple trunk groups.
Up to 8 trunks can be balanced to achieve a total throughput of 512 Gbps.
Furthermore, Brocade has significantly improved frame-level Trunking by making the links in
a trunk group masterless: if an ISL trunk link ever fails, the trunk seamlessly reforms with
the remaining links, enabling higher overall data availability.
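To make exchange-level routing concrete, the sketch below shows in Python, as an illustration rather than anything Brocade ships, how exchange-based path selection can work: each Fibre Channel exchange is identified by its source ID, destination ID, and originator exchange ID, and a deterministic hash of that triplet pins all frames of an exchange to a single path (preserving frame order) while spreading different exchanges across all equal-cost paths. The function and path names are assumptions made for the example.

    # Illustrative sketch only (not Brocade source code): exchange-based DPS
    # hashes the (S_ID, D_ID, OX_ID) triplet so every frame of an exchange
    # follows one path, while different exchanges spread across all paths.
    def select_path(s_id, d_id, ox_id, paths):
        index = hash((s_id, d_id, ox_id)) % len(paths)
        return paths[index]

    # Example: four equal-cost routes, such as two ISLs and two trunk groups.
    paths = ["isl_0", "isl_1", "trunk_0", "trunk_1"]
    for ox_id in range(4):
        print(ox_id, select_path(0x010200, 0x020300, ox_id, paths))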
Preventing frame loss during an event such as the addition or removal of an ISL while the
fabric is active is a critical customer requirement. Lossless Dynamic Load Sharing (DLS) and
DPS enable optimal utilization of ISLs by performing traffic rebalancing operations during
fabric events such as E_Port up/down, F_Port down, and so on. Typically, when a port goes
down or comes back up, frames may be dropped or arrive out of order, or traffic imbalances
may occur. Brocade's Lossless DLS/DPS architecture rebalances traffic at the frame and
exchange level, delivering in-order traffic without dropping frames, thus preventing
application timeouts and SCSI retries.
The following table lists the Brocade DCX Backbone blades and the Fabric OS (FOS) release that introduced each:

Description                          Introduced with
CP8 Control Processor Blade          FOS 6.0
CR8 Core Switching Blade             FOS 6.0
FC8-16                               FOS 6.0
FC8-32                               FOS 6.1
FC8-48                               FOS 6.1
FC8-64                               FOS 6.4
FCOE10-24                            FOS 6.3
FS8-18 Encryption Blade              FOS 6.1.1_enc
FX8-24 Extension Blade               FOS 6.3
FC10-6                               FOS 5.3
FA4-18 Fabric Application Blade      FOS 5.3
Figure 1. CP8 blade design (control processor CPU and power section).
Figure 2. CR8 blade design. Each CR8 blade carries four switching ASICs, two 128 Gbps ICL connections (ICL0 and ICL1), and 1 Tbps of bandwidth to the blades over the backplane.
Multi-Chassis Configuration
The dual- and triple-chassis configurations for the Brocade DCX and DCX-4S provide
ultra-high-speed Inter-Chassis Link (ICL) ports to connect two or three backbones, providing
extensive scalability (up to 1536 x 8 Gbps universal FC ports) and flexibility at the network
core. Special ICL copper-based cables are used, which connect directly and require no SFPs.
Connections are made between 8 ICL ports (4 per chassis), located on the CR8 blades. The
supported cable configurations for connecting two Brocade DCX Backbones are shown in
Figure 3. The supported configurations for a three-chassis configuration are shown in
Figure 4. This is a good option for customers who want to build a powerful core without
sacrificing user ports for ISL connectivity between chassis.
NOTE: In a single rack, you can connect three Brocade DCX-4S chassis in addition to the
options shown in Figure 4. A three-chassis topology is supported for chassis in two racks as
long as the third chassis is in the middle of the second rack.
The Brocade DCX supports 16 or 8 links per ICL cable, which equates to 16 or 8 8-Gbps
E_Ports per ICL cable. The Brocade DCX-4S supports 8 links per ICL.
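As a back-of-the-envelope illustration of the arithmetic above (a sketch under the stated link counts, not a cabling guide), each DCX ICL cable carries 16 links at 8 Gbps and each DCX-4S ICL cable carries 8; the cable counts in the example are assumptions:

    # Illustrative ICL bandwidth arithmetic; cable counts are assumptions
    # for the example, not a recommended configuration.
    def icl_gbps(cables, links_per_cable, gbps_per_link=8):
        return cables * links_per_cable * gbps_per_link

    print(icl_gbps(cables=1, links_per_cable=16))  # one DCX ICL cable: 128 Gbps
    print(icl_gbps(cables=4, links_per_cable=16))  # DCX pair, 4 ports cabled: 512 Gbps
    print(icl_gbps(cables=1, links_per_cable=8))   # one DCX-4S ICL cable: 64 Gbps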
Figure 3. Examples of Brocade DCX/DCX-4S dual-chassis configurations (ICL cables shown).
Figure 4. Examples of Brocade DCX/DCX-4S three-chassis configurations.
Figure 5. Photograph of a Brocade DCX three-chassis configuration across two racks.
Brocade offers 16-, 32-, 48-, and 64-port 8 Gbps blades to connect to servers, storage,
or switches. All of the port blades can leverage Local Switching to ensure full 8 Gbps
performance on all ports. Each CR8 blade contains four ASICs that switch data over the
backplane between port blade ASICs. A total of 256 Gbps of aggregate bandwidth per blade
is available for switching through the backplane. Mixing switching over the backplane with
Local Switching delivers performance of up to 512 Gbps per blade using 64-port blades.
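A simple model makes the 512 Gbps figure concrete: treat each blade as having 256 Gbps of backplane bandwidth plus blade-local bandwidth that never touches the backplane. Total throughput is then local traffic plus whatever backplane traffic fits under the cap. This is a simplification for illustration, not a Brocade sizing tool:

    # Simplified per-blade throughput model: locally switched traffic avoids
    # the backplane cap; everything else competes for 256 Gbps per blade.
    BACKPLANE_GBPS = 256

    def blade_throughput(ports, port_gbps, local_fraction):
        demand = ports * port_gbps
        local = demand * local_fraction
        backplane = min(demand - local, BACKPLANE_GBPS)
        return local + backplane

    print(blade_throughput(64, 8, 0.5))  # 512.0: full line rate on a 64-port blade
    print(blade_throughput(64, 8, 0.0))  # 256: backplane-limited, no Local Switching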
For distance over dark fiber using Brocade-branded Small Form Factor Pluggables (SFPs),
the Condor 2 ASIC has approximately twice the buffer credits of the Condor ASIC, enabling
1, 2, 4, or 8 Gbps ISLs and more long-wave connections over greater distances.
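Why buffer credits govern distance can be sketched with a common SAN rule of thumb (a generic approximation, not a Brocade-published formula): with full-size frames of roughly 2 KB, one buffer-to-buffer credit keeps about 2 km of fiber full at 1 Gbps, so the credits required scale with both link speed and distance:

    import math

    # Rule-of-thumb estimate: credits ~= speed_gbps * distance_km / 2
    # (assumes full-size frames). A generic approximation, not a Brocade formula.
    def bb_credits_needed(speed_gbps, distance_km):
        return math.ceil(speed_gbps * distance_km / 2)

    for speed in (1, 2, 4, 8):
        print(speed, "Gbps over 100 km:", bb_credits_needed(speed, 100), "credits")
    # An 8 Gbps ISL over 100 km needs ~400 credits, which is why the doubled
    # Condor 2 buffering matters for long-distance links.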
When connecting a large number of devices that need sustained 8 Gbps transmission line
rates, IT organizations can leverage Local Switching to avoid congestion. Local Switching on
FC port blades reduces port-to-port latency: frames cross the backplane in 2.1 microseconds,
while locally switched frames cross the blade in only 700 nanoseconds. Even the latency from
crossing the backplane is more than 50 times faster than disk access times and is much
faster than any competing product.
All 8 Gbps ports on the FC8-16 blade operate at full line rate through the backplane or with
Local Switching.
Figure 6 shows a photo and functional diagram of the 8 Gbps 16-port blade.
Figure 6. FC8-16 blade design. All 16 ports run at 8 Gbps with no oversubscription; 256 Gbps to core switching.
All 8 Gbps ports on the FC8-32 blade operate at full line rate through the backplane or with
Local Switching.
Figure 7 shows a photograph and functional diagram of the FC8-32 blade.
Figure 7. FC8-32 blade design. Two 16-port switching groups with no oversubscription at 8 Gbps (256:256, a 1:1 subscription ratio); 256 Gbps to the control processor/core switching.
The FC8-48 blade has a higher backplane oversubscription ratio at 8 Gbps but larger port
groups to take advantage of Local Switching. While the backplane connectivity of this blade
is identical to that of the FC8-32 blade, the FC8-48 blade exposes 24 user-facing ports per
ASIC rather than 16. Oversubscription occurs only when the first 32 ports are fully utilized.
Figure 8 shows a photograph and functional diagram of the FC8-48 blade.
Figure 8. FC8-48 blade design. Two 24-port switching groups with a relative 1.5:1 oversubscription at 8 Gbps (384:256); 384 Gbps available for Local Switching.
The FC8-64 blade has a 2:1 oversubscription ratio at 8 Gbps switching through the
backplane and no oversubscription with Local Switching. At 4 Gbps speeds, all 64 ports
can switch over the backplane with no oversubscription. The FC8-64 blade exposes 16
user-facing ports per ASIC, and up to eight 8-port trunk groups can be created with the
64-port blade.
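The subscription ratios quoted for the port blades all follow from the same arithmetic: user-facing bandwidth divided by the 256 Gbps of per-blade backplane bandwidth. The short sketch below reproduces the figures in this section:

    # Backplane subscription ratio = user-facing bandwidth / 256 Gbps per blade.
    BACKPLANE_GBPS = 256

    def ratio(ports, port_gbps):
        return ports * port_gbps / BACKPLANE_GBPS

    for blade, ports in (("FC8-16", 16), ("FC8-32", 32),
                         ("FC8-48", 48), ("FC8-64", 64)):
        print(blade, f"{ratio(ports, 8):.2f}:1 at 8 Gbps")
    # FC8-16 0.50:1, FC8-32 1.00:1, FC8-48 1.50:1, FC8-64 2.00:1
    print("FC8-64", f"{ratio(64, 4):.2f}:1 at 4 Gbps")  # 1.00:1, no oversubscription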
Figure 9 shows a photograph and functional diagram of the FC8-64 blade.
Figure 9. FC8-64 blade design. Four Fibre Channel switching ASICs with 16-port groups and a relative 2:1 oversubscription at 8 Gbps; 256 Gbps to the backplane.
Specialty Blades
DCB/FCoE Blade
The Brocade FCOE10-24 blade is designed as an end-of-row chassis solution for server I/O
consolidation (see Figure 10). It's a resilient, hot-pluggable blade that features 24 x 10 Gbps
Data Center Bridging (DCB) ports with a Layer 2 cut-through and non-blocking architecture,
which provides wire-speed performance for traditional Ethernet, DCB, and FCoE traffic.
The FCOE10-24 features a high-performance FCoE hardware engine (Encap/Decap) and can
use 8 Gbps Fibre Channel ports on 16-, 32-, and 48-port blades to integrate seamlessly into
existing Fibre Channel SANs and management infrastructures. The blade supports
industry-standard Link Aggregation Control Protocol (LACP) and Brocade enhanced,
frame-based port Trunking that delivers 40 Gbps of aggregate bandwidth.
Figure 10. Brocade FCOE10-24 blade design. 24 x 10 GbE ports feed DCB switching ASICs, FCoE bridging, and Fibre Channel switching ASICs; 256 Gbps to the backplane.
Figure 11. Brocade FS8-18 Encryption Blade design. Two groups of 8 x 8 Gbps Fibre Channel ports, 2 x RJ-45 GbE redundant cluster ports, and a Smart Card reader.
Figure 12. FX8-24 blade design. A 12 x 8 Gbps Fibre Channel port switching group, 10 x GbE ports plus 2 x optional 10 GbE ports for FCIP with compression and encryption; 64 Gbps to the backplane.
The Brocade FC10-6 blade provides 6 x 10 Gbps Fibre Channel ports that use 10 Gigabit
Small Form Factor Pluggable (XFP) optical transceivers. The primary use for the FC10-6 blade
is long-distance extension over dark fiber. The ports on the FC10-6 blade operate only in
E_Port mode to create ISLs. The FC10-6 blade has enough buffering to drive 10 Gbps
connectivity up to 120 km per port, exceeding the capabilities of available 10 Gbps XFPs,
which come in short-wave and 10, 40, and 80 km long-wave versions. While the potential
oversubscription of a fully populated blade is small (1.125:1), Local Switching is supported
in groups consisting of ports 0 to 2 and ports 3 to 5, enabling maximum port speeds ranging
from 8.9 to 10 Gbps full duplex.
The Brocade FA4-18 Application Blade has 16 x 4 Gbps Fibre Channel ports and 2 x
auto-sensing 10/100/1000 Megabits per second (Mbps) Ethernet ports for LAN-based
management. It is tightly integrated with several enterprise storage applications that
leverage the Brocade Storage Application Services (SAS) API, an implementation of the T11
FAIS standard, to provide wire-speed data movement and offload server resources. These
fabric-based applications provide online data migration, storage virtualization, and
continuous data replication and protection, and they support other partner applications.
The core/edge network topology has emerged as the design of choice for large-scale, highly
available, high-performance SANs constructed with multiple switches of any size.
The Brocade DCX Backbone uses an internal architecture analogous to a core/edge fat-tree
topology, which is widely recognized as the highest-performance arrangement of switches.
Note that the Brocade DCX Backbone is not literally a fat-tree network of discrete switches,
but thinking of it in this way provides a useful visualization.
While IT organizations could build a network of 40-port switches with similar performance
characteristics to the Brocade DCX Backbone, it would require more than a dozen 40-port
switches connected in a fat-tree fashion. This network would require complex cabling,
management of 12+ discrete switching elements, support for higher power and cooling,
and more SFPs to support ISLs. In contrast, the Brocade DCX delivers the same high level
of performance without the associated disadvantages of a large multi-switch network,
bringing fat-tree performance to IT organizations that could previously not justify the
investment or overhead costs.
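To see why the discrete-switch alternative is unattractive, the sketch below sizes a two-tier fat tree of 40-port switches for 512 user ports, assuming half of each edge switch's ports face devices and half face the core. The counts are illustrative assumptions, not a Brocade design rule:

    import math

    # Two-tier fat tree from 40-port switches: half the ports on each edge
    # switch face users, half are uplinks absorbed by core switches.
    def fat_tree_switches(user_ports, switch_ports=40):
        edge_user = switch_ports // 2                # 20 user ports per edge switch
        edges = math.ceil(user_ports / edge_user)    # 26 edge switches for 512 ports
        uplinks = edges * (switch_ports - edge_user)
        cores = math.ceil(uplinks / switch_ports)    # 13 core switches
        return edges, cores

    edges, cores = fat_tree_switches(512)
    print(edges, "edge +", cores, "core =", edges + cores, "switches")

Even these conservative assumptions land well past a dozen switching elements, along with the ISL optics and cabling each tier implies.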
Any type of failure on the Brocade DCX, whether a control processor or core ASIC, is
extremely rare. However, in the unusual event of a failure, the Brocade DCX is designed for
fast and easy control processor replacement. This section describes potential (albeit unlikely)
failure scenarios and how the Brocade DCX is designed to minimize the impact
on performance and provide the highest level of system availability.
If the processor section of the active control processor blade fails, it affects only the
management plane, and data traffic between end devices continues to flow uninterrupted.
With the addition of the second IP port, the risk of having to fail over to a standby CP is
minimized. A control processor failure has no effect on the data plane: the standby
control processor automatically takes over, and the backbone continues to operate without
dropping any data frames.
Data flows would not necessarily become congested in the Brocade DCX Backbone with one
CR8 core blade failure. A worst-case scenario would require the backbone to be running at or
near 50 percent of bandwidth capacity on a sustained basis. With typical I/O patterns and
some Local Switching, however, aggregate bandwidth demand is often below 50 percent of
maximum capacity. In such environments there would be no impact, even if a failure persisted
for an extended period of time. For environments with higher bandwidth usage, performance
degradation would last only until the failed core blade is replaced, a simple 5-minute
procedure.
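The 50 percent figure is the headroom argument in miniature: losing one of the two core blades halves backplane capacity, so sustained utilization determines whether anyone notices. A trivial sketch of the check, for illustration only:

    # With one of two CR8 core blades failed, half the backplane capacity
    # remains; congestion appears only above 50 percent sustained demand.
    def congested_after_core_failure(sustained_utilization):
        return sustained_utilization > 0.5

    print(congested_after_core_failure(0.35))  # False: typical load, no impact
    print(congested_after_core_failure(0.60))  # True: degraded until the blade is replaced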
SUMMARY
With an aggregate chassis bandwidth far greater than competitive offerings, Brocade DCX
Backbones are architected to deliver congestion-free performance, broad scalability, and
high reliability for real-world enterprise SANs. As demonstrated by Brocade testing, the
Brocade DCX:
• Delivers 8 and 4 Gbps Fibre Channel and FICON line-rate connectivity on all ports simultaneously
• Provides Local Switching to maximize bandwidth for high-demand applications
• Offers port blade flexibility to meet specific connectivity, performance, and budget needs
• Provides investment protection by supporting data security, inter-fabric routing, SAN extension, and emerging protocols such as FCoE in the same chassis
• Performs fabric-based data migration, protection, and storage virtualization
• Delivers five-nines availability
For further details on the capabilities of the Brocade DCX Backbone in the Brocade data
center fabric, visit:
http://www.brocade.com/products-solutions/products/dcx-backbone/index.page
There you will find the Brocade DCX Backbone Family Data Sheet and relevant Technical
Briefs and White Papers.
Corporate Headquarters
San Jose, CA USA
T: +1-408-333-8000
info@brocade.com
www.brocade.com
European Headquarters
Geneva, Switzerland
T: +41-22-799-56-40
emea-info@brocade.com
© 2010 Brocade Communications Systems, Inc. All Rights Reserved. 06/10 GA-WP-1224-01
Brocade, the B-wing symbol, BigIron, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, and TurboIron
are registered trademarks, and Brocade Assurance, DCFM, Extraordinary Networks, and Brocade NET Health are
trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands,
products, or service names mentioned are or may be trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied,
concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the
right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This
informational document describes features that may not be currently available. Contact a Brocade sales office for
information on feature and product availability. Export of technical data contained in this document may require an
export license from the United States government.