Storage Area Networking Core Edge Design Best Practices

BRKSAN-1121

2011 Cisco and/or its affiliates. All rights reserved.

Cisco Public

Session BRKSAN-1121 Abstract


SAN Core-Edge Design Best Practices
This session gives non-storage-networking professionals the fundamentals to understand and implement storage area networks (SANs). The curriculum is intended to prepare attendees for involvement in SAN projects and in the I/O consolidation of Ethernet and Fibre Channel networking. You will be introduced to storage networking terminology and designs. Specific topics covered include Fibre Channel (FC), FCoE, FC services, FC addressing, fabric routing, zoning, and virtual SANs (VSANs). The session includes discussions on designing core-edge Fibre Channel networks and the best-practice recommendations around them. This is an introductory session, and attendees are encouraged to follow up with other SAN breakout sessions and labs to learn more about specific advanced topics.

Who Am I?
Chad Hintz Technical Solutions Architect-Data Center/Virtualization CCIE #15729 Routing & Switching, Security, Storage

What Are Storage Area Networks?

Session Agenda
- History of Storage Area Networks
- Fibre Channel Basics
- SAN Design Requirements
- Introduction to Typical SAN Designs
- Introduction to NPIV/NPV
- Core-Edge Design Review
- Recommended Core-Edge Designs for Scale and Availability
- Next Generation Core-Edge Designs

History of Storage Area Networks

Direct Attached Storage: DAS

DAS (Direct-Attached Storage):
- High-speed access with dedicated capacity
- Can't be shared; difficult to share data
- Difficult to manage
Network Attached Storage: NAS


LAN

NAS

NAS (Network-Attached Storage):
- File sharing; more centralized storage
- Attached over the LAN, with efficient capacity usage
- Performance limits its usefulness: mainly used for file storage and low-end databases
Storage Area Network: SAN


LAN

NAS

SAN

SAN (Storage Area Network):
- Matches DAS performance
- Capacity can be deployed and redeployed
- Dedicated back-end network
- Centralized management
- Diskless servers: simplified management, reduced power and cooling
SAN Protocols
Fibre Channel
Fibre Channel is a gigabit-speed network technology primarily used for storage networking

FCoE
Fibre Channel over Ethernet (FCoE) is an encapsulation of Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use 10 Gigabit Ethernet networks while preserving the Fibre Channel protocol.

iSCSI
iSCSI is a TCP/IP-based protocol for establishing and managing connections between IP-based storage devices, hosts and clients

Session Agenda
- History of Storage Area Networks
- Fibre Channel Basics
- SAN Design Requirements
- Introduction to Typical SAN Designs
- Introduction to NPIV/NPV
- Core-Edge Design Review
- Recommended Core-Edge Designs for Scale and Availability
- Next Generation Core-Edge Designs

Fibre Channel Basics

SAN Components

- Servers with host bus adapters (HBAs)
- Storage systems: RAID, JBOD, tape
- Switches
- SAN management software

Types of Fibre Channel Switches

Enterprise and Service Provider: Cisco MDS 9000 modular directors (MDS 9506, 9509, 9513), providing redundancy and services.
Small/Medium Business and edge: MDS 9200 and MDS 91XX fabric switches, and FC blade switches.

Fibre Channel Port Types

N port: node ports used to connect devices to a switched fabric or in point-to-point configurations.
F port: fabric ports, residing on switches, that connect N port devices.
L port: loop ports used in arbitrated loop configurations to build networks without FC switches. These ports often also have N port capability and are then called NL ports (the switch-side counterpart is an FL port).
E port: expansion ports, essentially trunk ports used to connect two Fibre Channel switches.
GL port: a generic port capable of operating as either an E or F port; it can also act in an FL port capacity, with the role chosen by auto-discovery.

Fibre Channel Port Types

(Diagram: end nodes attach via N_Ports to F_Ports on fabric and NPV switches; fabric switches interconnect through E_Ports and TE_Ports, an NPV switch uplinks through an NP_Port to an F_Port on the core, and G_Ports are shown where a role has not yet been negotiated.)

Start from the Beginning


Start with a host (initiator) and a target that need to communicate.
- The host has 2 HBAs (one per fabric), each with a WWN
- The target has multiple ports to connect to the fabric
- Each connects to an FC switch; port type and speed are negotiated at link bring-up
- The FC switch is part of the SAN fabric
- Most commonly, dual fabrics are deployed for redundancy

(Diagram: initiator at the edge and target at the core of Fabric A.)

My Port Is Up... Can I Talk Now?


FLOGIs/PLOGIs
Step 1: Fabric Login (FLOGI)
- determines the presence or absence of a fabric
- exchanges service parameters with the fabric switch
- the switch identifies the WWN in the service parameters of the accept frame and assigns a Fibre Channel ID (FCID)
- initializes the buffer-to-buffer credits
Step 2: Port Login (PLOGI)
- required between nodes that want to communicate
- similar to FLOGI; transports a PLOGI frame to the destination node port
- in a point-to-point topology (no fabric present), initializes buffer-to-buffer credits

Buffer to Buffer Credits


Fibre Channel Flow Control
- B2B credits are used to ensure that FC transport is lossless
- The number of credits is negotiated between ports when the link is brought up
- The credit count is decremented with each frame placed on the wire, independent of frame size
- If the credit count reaches 0, no more frames are transmitted
- The credit count is incremented with each R_RDY received from the far end
- B2B credits need to be taken into consideration as distance and/or bandwidth increases
*See reference slides for B2B distance calculations based on line cards.
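As a rough illustration of why credits matter with distance, the sketch below (illustrative Python, assuming ~5 µs/km propagation in fibre and full-size 2,148-byte frames) estimates the minimum credit count needed to keep a long link streaming. Real requirements depend on the traffic's frame-size mix and the linecard's buffer pools, so treat this as a back-of-the-envelope check, not a sizing tool.

```python
import math

FRAME_BITS = 2148 * 8   # full-size FC frame: 2,148 bytes including headers
LIGHT_US_PER_KM = 5.0   # assumed propagation delay in fibre, each way

def min_b2b_credits(distance_km: float, rate_gbps: float) -> int:
    """Smallest credit count that keeps a link of this length streaming
    full-size frames: the round trip of frame out / R_RDY back must be
    covered by frames already credited."""
    rtt_us = 2 * distance_km * LIGHT_US_PER_KM        # frame out + R_RDY back
    frame_time_us = FRAME_BITS / (rate_gbps * 1000)   # serialization time, us
    return math.ceil(rtt_us / frame_time_us) + 1      # +1 frame of margin

# e.g. a 50 km ISL at 8 Gbps needs roughly 234 credits under these assumptions
```

The same arithmetic explains the rule of thumb that credit needs scale with both link speed and distance.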

Over-Subscription
FAN-OUT Ratio
The over-subscription (or fan-out) ratio is used for sizing ports and links. Factors used:
- Speed of host HBA interfaces
- Speed of array interfaces
- Type of server and application

Storage vendors provide guidance in the process; ratios typically range between 4:1 and 20:1.

Example (10:1 oversubscription ratio): 60 servers with 4 Gb HBAs offer 240 G into 3 x 8G ISL ports (24 G) and 6 x 4G array ports (24 G).
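The fan-out arithmetic in the example above can be checked with a one-line helper (illustrative Python, not a sizing tool):

```python
def fan_out_ratio(n_hosts: int, host_gbps: float,
                  n_array_ports: int, array_gbps: float) -> float:
    """Offered host bandwidth divided by available array bandwidth."""
    return (n_hosts * host_gbps) / (n_array_ports * array_gbps)

# The slide's numbers: 60 servers with 4 Gb HBAs against 6 x 4G array ports.
ratio = fan_out_ratio(60, 4, 6, 4)   # 240 G : 24 G -> 10.0
```

Note that the 3 x 8G ISLs also add up to 24 G, so the ISL tier matches the array tier rather than adding a second bottleneck.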

Fabric Name and Addressing: WWN


Every Fibre Channel port and node has a hard-coded address called a World Wide Name (WWN). During FLOGI the switch identifies the WWN in the service parameters of the accept frame and assigns a Fibre Channel ID (FCID). The switch name server maps WWNs to FCIDs.
A WWNN (node WWN) uniquely identifies a device; a WWPN (port WWN) uniquely identifies each port in a device.

Fibre Channel ID (FCID)


Fibre Channel Addressing Scheme
- An FCID is assigned to every WWN corresponding to an N_Port
- The FCID is made up of switch domain, area, and device fields
- A domain ID is native to a single FC switch, and the number of domain IDs in a single fabric is limited
- Forwarding decisions are made on the domain ID found in the first 8 bits of the FCID

(FCID format: Switch Domain | Area | Device, one byte each.)
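The byte layout of the FCID can be shown in a few lines of illustrative Python (field names as on the slide):

```python
def split_fcid(fcid: int):
    """Split a 24-bit FCID into its Domain / Area / Device bytes.
    Forwarding decisions use only the top byte (the domain ID)."""
    domain = (fcid >> 16) & 0xFF
    area = (fcid >> 8) & 0xFF
    device = fcid & 0xFF
    return domain, area, device

# e.g. FCID 0x10_00_01 -> domain 0x10, area 0x00, device 0x01
domain, area, device = split_fcid(0x10_00_01)
```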

Fabric Shortest Path First (like OSPF)


Fibre Channel Forwarding
FSPF routes traffic based on the destination domain ID found in the destination FCID; for FSPF, a domain ID identifies a single switch.
The number of domain IDs is limited to 239 (theoretical) / 75 (tested and qualified) within the same fabric (VSAN).

FSPF performs hop-by-hop routing, uses total cost as the metric to determine the most efficient path, and supports equal-cost load balancing across links.
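As a sketch of the "total cost over domain IDs" idea, here is a minimal Dijkstra shortest-path over an assumed domain graph (illustrative Python; the link costs and topology are made up for the example, and real FSPF derives link cost from bandwidth and floods link-state records rather than calling a function):

```python
import heapq

def fspf_cost(graph, src, dst):
    """Cheapest total cost between two switch domain IDs.
    graph: {domain: {neighbour_domain: link_cost}}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, dom = heapq.heappop(heap)
        if dom == dst:
            return d
        if d > dist.get(dom, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph.get(dom, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None  # destination domain unreachable

# Hypothetical four-switch fabric: domain 1 reaches 4 via 2 (cost 1000)
# or via 3 (cost 500); FSPF prefers the lower total cost.
fabric = {1: {2: 500, 3: 250}, 2: {1: 500, 4: 500},
          3: {1: 250, 4: 250}, 4: {}}
```

With two equal-cost paths, a real fabric would load-balance across them; this sketch just reports the winning cost.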

Directory Server/Name Server (Like DNS)


A repository of information regarding the components that make up the Fibre Channel network.
- Located at well-known address 0xFFFFFC (some readings call this the name server)
- Components can register their characteristics with the directory server
- An N_Port can query the directory server for specific information
A query can return the address identifier, WWN, and volume names for all SCSI targets.
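A toy model of the register/query behaviour (illustrative Python; the real directory server speaks FC-GS query frames at 0xFFFFFC, not method calls, and stores far richer attributes):

```python
class NameServer:
    """Toy directory/name server: devices register at login time,
    N_Ports query it before opening PLOGI sessions."""

    def __init__(self):
        self._db = {}   # pWWN -> attributes registered at FLOGI time

    def register(self, pwwn: str, fcid: int, port_type: str = "N"):
        self._db[pwwn] = {"fcid": fcid, "type": port_type}

    def query(self, pwwn: str):
        """Return the registered attributes for a pWWN, or None."""
        return self._db.get(pwwn)

ns = NameServer()
# pWWN from the zoning example later in this session
ns.register("10:00:00:00:c9:76:fd:31", 0x100001)
```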

Login Complete... Almost There


Fabric Zoning
Zones are the basic form of data path security:
- zone members can only see and talk to other members of the zone
- devices can be members of more than one zone
- default zoning is deny
Zones belong to a zoneset; the zoneset must be active to enforce zoning.
Only one zoneset can be active per fabric (or per VSAN).
FC Fabric
Target: pwwn 50:06:01:61:3c:e0:1a:f6
Initiator: pwwn 10:00:00:00:c9:76:fd:31

zone name EXAMPLE_Zone
  * fcid 0x10.00.01 [pwwn 10:00:00:00:c9:76:fd:31] [initiator]
  * fcid 0x11.00.01 [pwwn 50:06:01:61:3c:e0:1a:f6] [target]
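The "members only see other members" rule can be modelled in a few lines (illustrative Python using the pWWNs from the example; the third WWN is a made-up unzoned device):

```python
def soft_zone_response(zones: dict, requester: str, registered: list) -> list:
    """Soft zoning as the name server applies it: a query is answered
    only with devices that share at least one zone with the requester."""
    visible = set()
    for members in zones.values():
        if requester in members:
            visible |= members
    visible.discard(requester)            # don't report the requester itself
    return sorted(visible & set(registered))

zones = {"EXAMPLE_Zone": {"10:00:00:00:c9:76:fd:31",
                          "50:06:01:61:3c:e0:1a:f6"}}
# "21:00:00:e0:8b:05:05:04" is a hypothetical device outside every zone
registered = ["50:06:01:61:3c:e0:1a:f6", "21:00:00:e0:8b:05:05:04"]
```

Because the default policy is deny, a device in no zone gets an empty answer.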

Enhanced vs. Basic Zoning


Basic Zoning: Administrators can make simultaneous configuration changes.
Enhanced Zoning: All configuration changes are made within a single session; the switch locks the entire fabric to implement the change.
Enhanced Advantage: One configuration session for the entire fabric ensures consistency within the fabric.

Basic Zoning: If a zone is a member of multiple zonesets, an instance is created per zoneset.
Enhanced Zoning: References to the zone are used by the zonesets as required once you define the zone.
Enhanced Advantage: Reduced payload size; the saving is more pronounced with a bigger database.

Basic Zoning: The default zone policy is defined per switch.
Enhanced Zoning: Enforces and exchanges the default zone setting throughout the fabric.
Enhanced Advantage: Fabric-wide policy enforcement reduces troubleshooting time.

Enhanced vs. Basic Zoning


Basic Zoning: The managing switch provides combined status about activation; it will not identify a failed switch.
Enhanced Zoning: Retrieves the activation results and the nature of the problem from each remote switch.
Enhanced Advantage: Enhanced error reporting shortens the troubleshooting process.

Basic Zoning: To distribute the zoneset, the same zoneset must be re-activated.
Enhanced Zoning: Implements changes to the zoning database and distributes them without activation.
Enhanced Advantage: Distribution without activation avoids hardware changes for hard zoning in the switches.

Basic Zoning: During a merge, MDS-specific types can be misunderstood by non-Cisco switches.
Enhanced Zoning: Provides a vendor ID along with a vendor-specific type value to uniquely identify a member type.
Enhanced Advantage: The unique vendor type avoids misinterpretation during merges.

Zoning Enforcement
Zoning is used to control access in a SAN Soft zoning
Enforced by name server query responses Name server sends membership list to N_Port N-port accesses members only

Hard zoning
Enforced by hardware (forwarding ASIC) at wire speed; members identified by pWWN, fWWN, FC_ID, or FC_Alias

(Diagram: hosts and arrays in Zone-1 and Zone-2 behind MDS switches, shown once with soft zoning and once with hard zoning.)


Virtual SANs (VSANs)


Virtual Fabric Separation
A Virtual SAN (VSAN) provides a method to allocate ports within a physical fabric and create virtual fabrics.
- Analogous to VLANs in Ethernet
- Virtual fabrics are created from a larger, cost-effective, redundant physical fabric
- Reduces the wasted ports of a SAN-island approach
- Fabric events are isolated per VSAN, which gives further isolation for high availability
- Statistics can be gathered per VSAN
- Each VSAN provides separate fabric services: FSPF, zones/zonesets, DNS, RSCN
VSANs are supported on the MDS and Nexus 5000 product lines.

Physical SAN Islands Are Virtualized onto Common SAN Infrastructure

SAN Islands Before VSANs


Production SAN Tape SAN Test SAN

SAN A DomainID=1 DomainID=7

SAN B DomainID=2 DomainID=8

SAN C DomainID=3

SAN D DomainID=4

SAN E DomainID=5

SAN F Domain ID=6

SAN Islands with Virtual SANs


Production SAN Tape SAN Test SAN

SAN A DomainID=1

SAN B DomainID=2

SAN C DomainID=3

SAN D DomainID=4

SAN E DomainID=5

SAN F Domain ID=6

VSANs and Zones: Complementary


Virtual SANs and Fabric Zoning Are Very Complementary
Hierarchical relationship
First assign physical ports to VSANs Then configure independent zones per VSAN

Relationship of VSANs to Zones (physical topology)

(Diagram: one physical topology divided into VSAN 2 and VSAN 3, each containing its own zones of hosts and disks.)

- VSANs divide the physical infrastructure; zones provide added security and allow sharing of device ports
- VSANs provide traffic statistics
- VSANs change only when ports are needed per virtual fabric; zones can change frequently (e.g., for backup)
- Ports are added/removed non-disruptively to VSANs

Inter-Switch Link PortChanneling


A PortChannel Is a Logical Bundling of Identical Links
Criteria for forming a PortChannel
Same speed links Same modes (auto, E, etc.) and states Between same two switches Same VSAN membership

- Treated as one logical ISL by upper-layer protocols (FSPF)
- Can use up to 16 links in a PortChannel (32 Gbps max)
- Can be formed from any ports on any modules (HA enabled)
- Exchange-based in-order load balancing: mode one is based on src/dst FC_IDs; mode two is based on src/dst FC_ID plus OX_ID
- Much faster recovery than FSPF-based balancing
- Given a logical interface name with aggregated bandwidth and a derived routing metric

E.g., a 4-Gbps PortChannel (two x 2 Gbps) or an 8-Gbps PortChannel (four x 2 Gbps)
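The two load-balancing modes can be sketched as a hash over the flow identifiers (illustrative Python; the switch ASIC's actual hash function is not specified here, only the idea that the chosen key decides the granularity of balancing):

```python
def pick_member(n_links: int, src_fcid: int, dst_fcid: int,
                ox_id: int = None) -> int:
    """Choose a PortChannel member link for a frame.
    Mode one hashes src/dst FCIDs, so one src/dst pair always rides one
    link; mode two also mixes in the exchange's OX_ID, so different
    exchanges between the same pair can use different members while each
    exchange stays in order on a single link."""
    key = (src_fcid, dst_fcid) if ox_id is None else (src_fcid, dst_fcid, ox_id)
    return hash(key) % n_links
```

Keeping every frame of an exchange on one member is what preserves in-order delivery without per-frame reordering logic.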

PortChannel vs. Trunking


ISL = inter-switch link
PortChannel = E_Ports and ISLs
Trunk = ISLs that support VSANs
Trunking = TE_Ports and EISLs

Session Agenda
- History of Storage Area Networks
- Fibre Channel Basics
- SAN Design Requirements
- Introduction to Typical SAN Designs
- Introduction to NPIV/NPV
- Core-Edge Design Review
- Recommended Core-Edge Designs for Scale and Availability
- Next Generation Core-Edge Designs

SAN Design Requirements

SAN Design
Key Requirements
- High availability: providing a dual fabric (current best practice)
- Meeting oversubscription ratios established by disk vendors
- Effective zoning
- Providing business-function/operating-system fabric segmentation and security
- Fabric scalability (FLOGI and domain ID scaling)
- Providing connectivity for virtualized servers
- Providing connectivity for diverse server placement and form factors

The Design Requirements


Classical Fibre Channel
Fibre Channel SAN
- Transport and services are on the same layer, in the same devices
- Well-defined end-device relationships (initiators and targets)
- Does not tolerate packet drop: requires lossless transport
- Only north-south traffic; east-west traffic is mostly irrelevant
(Diagram: initiators I0-I5 and targets T0-T2 attached to switches, each of which runs the fabric services: DNS, FSPF, zoning, RSCN.)

Network designs optimized for Scale and Availability


- High availability of network services provided through a dual-fabric architecture
- Edge/core vs. edge/core/edge service deployment
Fabric topology, services and traffic flows are structured


Client/Server Relationships are pre-defined

Session Agenda
- History of Storage Area Networks
- Fibre Channel Basics
- SAN Design Requirements
- Introduction to Typical SAN Designs
- Introduction to NPIV/NPV
- Core-Edge Design Review
- Recommended Core-Edge Designs for Scale and Availability
- Next Generation Core-Edge Designs

Typical SAN Designs

SAN Design Single Tier Topology


Collapsed-core design:
- Servers connect to the core switches
- Storage devices connect to one or more core switches
- Core switches provide storage services
- A large number of blades is needed to support initiator (host) and target (storage) ports
- Single point of management per fabric
- Typical for small SAN environments
- HA achieved with two physically separate, but identical, redundant SAN fabrics
(Diagram: dual collapsed-core fabrics, Fabric A and Fabric B, with FC storage on the cores.)

How Do We Avoid This?

SAN Design Two Tier Topology


Core-edge topology (most common):
- Servers connect to the edge switches
- Storage devices connect to one or more core switches
- Core switches provide storage services to one or more edge switches, servicing more servers in the fabric
- ISLs must be designed so that the overall fan-in ratio of servers to storage and the overall end-to-end oversubscription are maintained
- HA achieved with two physically separate, but identical, redundant SAN fabrics
(Diagram: dual core-edge fabrics, Fabric A and Fabric B; FC storage at the cores, servers at the edges.)

SAN Design Three Tier Topology


Edge-core-edge topology:
- Servers connect to the edge switches
- Storage devices connect to one or more edge switches
- Core switches provide storage services to one or more edge switches, servicing more servers and storage in the fabric
- ISLs must be designed so that the overall fan-in ratio of servers to storage and the overall end-to-end oversubscription are maintained
- HA achieved with two physically separate, but identical, redundant SAN fabrics
(Diagram: dual edge-core-edge fabrics, Fabric A and Fabric B, with a storage edge and a server edge around each core.)

Session Agenda
- History of Storage Area Networks
- Fibre Channel Basics
- SAN Design Requirements
- Introduction to Typical SAN Designs
- Introduction to NPIV/NPV
- Core-Edge Design Review
- Recommended Core-Edge Designs for Scale and Availability
- Next Generation Core-Edge Designs

Introduction to NPIV/NPV

What Is NPIV? and Why?


N-Port ID Virtualization (NPIV) provides a means to assign multiple FCIDs to a single N_Port
A limitation exists in FC where only a single FCID can be handed out per F_Port; therefore, without NPIV, an F_Port can only accept a single FLOGI.

Allows multiple applications to share the same Fibre Channel adapter port; usage applies to environments such as server virtualization.

(Diagram: an application server's single N_Port logs in to an F_Port on the FC NPIV core switch three times: email I/O as N_Port_ID 1, web I/O as N_Port_ID 2, file-services I/O as N_Port_ID 3.)

What Is NPV? and Why?


The N-Port Virtualizer (NPV) uses NPIV functionality to allow a switch to act like a server, performing multiple logins through a single physical link.
- Physical servers connected to the NPV switch log in to the upstream NPIV core switch
- No local switching is done on an FC switch in NPV mode
- An FC edge switch in NPV mode does not take up a domain ID, which helps to alleviate domain ID exhaustion in large fabrics

(Diagram: Server1-3 log in through F_Ports on the NPV switch; the NPV switch's NP_Port uplinks to an F_Port on the FC NPIV core switch, which assigns them N_Port_IDs 1-3.)

NPV Auto Load Balancing


Automatic balancing of load on NP links:
- Uniform balancing of server loads across NP links
- Server loads are not tied to any one uplink

Benefit: optimal uplink bandwidth utilization through balanced load on the NP links.

(Diagram: four blades in a blade server chassis spread across the NP uplinks into the SAN.)

NPV Auto Load Balancing


Automatic redistribution of loads onto NP links:
- On an uplink failure, the disrupted servers re-login on another uplink
- When the failed link comes back up, loads can be re-distributed among the available links
- This re-distribution is potentially disruptive and is not the default behavior
- It can be run one time or explicitly via a CLI command/Device Manager; the original distribution is not maintained

Benefits: no left-out NP links, optimal use of uplink bandwidth, and scheduled (rather than unplanned) traffic disruption.

npv auto-load-balance disruptive
F-Port Port Channel


Enhance NPV uplink resiliency with PortChannels: bundle multiple F-Port links between the blade system and the core director into one logical link, similar to ISL PortChannels in FC and EtherChannels in Ethernet.

Benefits:
- High availability: no disruption if a cable, port, or line card fails
- Optimal bandwidth utilization and higher aggregate bandwidth with load balancing

interface port-channel 1
  no shut
interface fc1/1
  channel-group 1
interface fc1/2
  channel-group 1

F-Port Trunking
Extend VSAN Benefits to Blades
F-Port trunking on an F-Port channel: uplinks from the blade system's NPV switch to the core director carry multiple VSANs (e.g., VSAN 1-3).

Benefits:
- Extend VSAN benefits to blade servers
- Separate management domains
- Traffic isolation and the ability to host differentiated services on blades

interface fc1/1
  switchport trunk mode on
  switchport trunk allowed vsan 1-3
interface port-channel 1
  switchport trunk mode on
  switchport trunk allowed vsan 1-3

Application Protection and Consolidation Using VSANs and F-Port Trunking


Security and services differentiation for blades:
- Better application protection and isolation
- Ability to host different applications on blades in different VSANs

(Diagram: ERP, e-mail, and web blades in one blade system, each carried in its own VSAN over an F-Port portchannel with trunking.)

Session Agenda
- History of Storage Area Networks
- Fibre Channel Basics
- SAN Design Requirements
- Introduction to Typical SAN Designs
- Introduction to NPIV/NPV
- Core-Edge Design Review
- Recommended Core-Edge Designs for Scale and Availability
- Next Generation Core-Edge Designs

Core-Edge Design Review

SAN Design Two Tier Topology


Edge-core topology (most common):
- Servers connect to the edge switches
- Storage devices connect to one or more core switches
- Core switches provide storage services to one or more edge switches, servicing more servers in the fabric
- ISLs must be designed so that the overall fan-in ratio of servers to storage and the overall end-to-end oversubscription are maintained
- HA achieved with two physically separate, but identical, redundant SAN fabrics
(Diagram: dual core-edge fabrics, Fabric A and Fabric B; FC storage at the cores, servers at the edges.)

Core-Edge Design Options


Consolidation options:
- High-density directors in the core
- Fabric switches at the edge
- Directors or blade switches at the edge

Scalable growth up to core and ISL capacity.

Server Consolidation with Top-of-Rack Fabric Switches


Top of Rack

Top of Rack Design


Ports deployed: 1200
Storage ports (4 G dedicated): 192
Host ports (4 G shared): 896
Disk oversubscription (ports): 9.3 : 1
Number of FC switches in the fabric: 30

(Diagram: the core pair provides 96 storage ports at 2 G and 28 ISLs to the edge at 10 G; each of 14 racks holds 32 dual-attached servers on an MDS 91XX top-of-rack switch with 32 host ports at 4 G and 2 ISLs to the core at 10 G.)


Server Consolidation with Blade servers


Blade server design using 2 x 4 G ISLs per blade switch; fewer cables and less power.
Ports deployed: 1608
Storage ports (4 G dedicated): 240
Host ports (4 G shared): 480
Disk oversubscription (ports): 8 : 1
Number of FC switches in the fabric: 62
A B

(Diagram: the core pair provides 120 storage ports at 2 G and 60 ISLs to the edge at 4 G; five racks each hold 96 dual-attached blade servers, with each blade switch taking 16 host ports at 4 G and 2 ISLs to the core at 4 G.)

Explosion of Fabric/Blade Switch in the Edge


Scalability
- Each fabric/blade switch uses a single domain ID (75 tested per fabric)
- The theoretical maximum is 239 domain IDs per VSAN
- The supported number of domains is considerably smaller (and depends on storage vendors)

Manageability
- More switches to manage
- Shared management of blade switches between storage and server administrators

Session Agenda
- History of Storage Area Networks
- Fibre Channel Basics
- SAN Design Requirements
- Introduction to Typical SAN Designs
- Introduction to NPIV/NPV
- Core-Edge Design Review
- Recommended Core-Edge Designs for Scale and Availability
- Next Generation Core-Edge Designs

Recommended Core-Edge Designs for Scale and Availability

SAN Design
Key Requirements
- High availability: providing a dual fabric (current best practice)
- Fabric scalability (FLOGI and domain ID scaling)
- Providing connectivity for diverse server placement and form factors
- Meeting oversubscription ratios established by disk vendors
- Effective zoning
- Providing business-function/operating-system fabric segmentation and security
- Providing connectivity for virtualized servers

N-Port Virtualizer (NPV) Reduces Number of FC Domain IDs


Top of Rack

Top-of-rack design with fabric switches in NPV mode; the NPV wizard can be used for deployment.
Ports deployed: 1200
Storage ports (4 G dedicated): 192
Host ports (4 G shared): 896
Disk oversubscription (ports): 9.3 : 1
Number of FC switches in the fabric: 2

(Diagram: the same top-of-rack layout as before, with the MDS 91xx edge switches in NPV mode uplinked to the NPIV core.)



NPV Reduces Number of FC Blade Switches for Server Consolidation


Blade server design using 2 x 4 G ISLs per blade switch; fewer cables and less power.
Ports deployed: 1608
Storage ports (4 G dedicated): 240
Host ports (4 G shared): 480
Disk oversubscription (ports): 8 : 1
Number of fabric switches to manage: 2
(Diagram: the same blade-server layout as before, with the blade switches in NPV mode uplinked to the NPIV core.)

SAN Design
Key Requirements
- High availability: providing a dual fabric (current best practice)
- Fabric scalability (FLOGI and domain ID scaling)
- Providing connectivity for diverse server placement and form factors
- Meeting oversubscription ratios established by disk vendors
- Effective zoning
- Providing business-function/operating-system fabric segmentation and security
- Providing connectivity for virtualized servers

SAN Design
FAN-OUT Ratio
Use port-channeling/trunking to enhance the bandwidth available between devices. Factors used:
- Speed of host HBA interfaces
- Speed of array interfaces
- Type of server and application

Keep the ISL oversubscription ratio lower than the array oversubscription ratio; ratios typically range between 4:1 and 20:1.

Example (10:1 oversubscription ratio): 60 servers with 4 Gb HBAs offer 240 G into 3 x 8G ISL ports (24 G) and 6 x 4G array ports (24 G).

F-Port Port Channel


Enhance NPV uplink resiliency with PortChannels: bundle multiple F-Port links between the blade system and the core director into one logical link, similar to ISL PortChannels in FC and EtherChannels in Ethernet.

Benefits:
- High availability: no disruption if a cable, port, or line card fails
- Optimal bandwidth utilization and higher aggregate bandwidth with load balancing

interface port-channel 1
  no shut
interface fc1/1
  channel-group 1
interface fc1/2
  channel-group 1

SAN Design
Key Requirements
- High availability: providing a dual fabric (current best practice)
- Fabric scalability (FLOGI and domain ID scaling)
- Providing connectivity for diverse server placement and form factors
- Meeting oversubscription ratios established by disk vendors
- Effective zoning
- Providing business-function/operating-system fabric segmentation and security
- Providing connectivity for virtualized servers

Zones/ZoneSet
Create device-aliases for end devices:
- A readable name for each end device, tied to its pWWN
- As a device moves between VSANs, its device-alias stays the same
Create two-member zones:
- Hardware zoning on the MDS
- In larger SANs, more zones with two members each are recommended
Single management of zones/zonesets per fabric:
- Use the "zoneset distribute full" command per VSAN to avoid ISL isolation in basic zoning, or use enhanced zoning
- If switches disagree on the active zone members, the ISL will become isolated
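These recommendations translate into MDS NX-OS-style configuration along the following lines (a sketch only: the alias names, zone/zoneset names, and VSAN 10 are illustrative, while the pWWNs are the ones from the zoning example earlier in this session):

```
device-alias database
  device-alias name Host1_HBA pwwn 10:00:00:00:c9:76:fd:31
  device-alias name Array1_P0 pwwn 50:06:01:61:3c:e0:1a:f6
device-alias commit

zone name Host1_Array1 vsan 10
  member device-alias Host1_HBA
  member device-alias Array1_P0

zoneset name Fabric_A vsan 10
  member Host1_Array1

zoneset activate name Fabric_A vsan 10
zoneset distribute full vsan 10
```

The two-member zone keeps hardware zoning efficient, and the full-zoneset distribution keeps basic-zoning fabrics from isolating ISLs on a merge.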

SAN Design
Key Requirements
- High availability: providing a dual fabric (current best practice)
- Fabric scalability (FLOGI and domain ID scaling)
- Providing connectivity for diverse server placement and form factors
- Meeting oversubscription ratios established by disk vendors
- Effective zoning
- Providing business-function/operating-system fabric segmentation and security
- Providing connectivity for virtualized servers

Virtual SANs
Consolidate SAN islands into OS or department VSANs: reduce SAN islands into a single fabric while keeping isolation; for example, keep Test, Development, and Production in their own VSANs.
Use separate VSANs for tape or SAN extension traffic.
Security: create separate administrative roles per VSAN, and use TACACS+ for authorization and auditing of the switches.
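The VSAN segmentation and per-VSAN administration points above can be sketched in MDS NX-OS CLI. The VSAN numbers, names, interface, and role name are hypothetical, and the role VSAN-policy syntax should be verified against the NX-OS release in use:

```
vsan database
  vsan 10 name Production
  vsan 20 name Development
  vsan 30 name Test
  vsan 10 interface fc1/1

! Administrative role limited to the Production VSAN
role name prod-admin
  rule 1 permit config
  vsan policy deny
    permit vsan 10
```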

VM-Aware SANs

Support for port WWNs for virtual machines hosted on blade servers

VM-Aware SANs
It is important to follow the guidelines from the virtual machine vendors for assigning port WWNs to virtual machines. For example, VMware requires the use of Raw Device Mapping (RDM) instead of the Virtual Machine File System (VMFS) to get access to raw LUNs. Using NPIV/nested NPV with RDM provides QoS, incident isolation (VSANs), and visibility into a virtualized environment, with individual LUNs rather than shared LUNs.

Summary of Recommendations
High availability: provide a dual fabric
Use port channels and F-port channels with NPIV to provide the bandwidth to meet oversubscription ratios
Hardware zoning with single zoneset management per fabric is key (two-member zones recommended; hard zoning)
Use VSANs to provide OS or business-function segmentation
Use NPIV/NPV to provide domain ID scaling and ease of management
Use host-level NPIV and nested NPV to provide visibility into virtualized servers

Recommended TOR Core-Edge Design


Top of Rack design:
NPIV core fabric
Edge switches (MDS) in NPV mode
14 racks, 32 dual-attached servers per rack
[Diagram: NPV-mode MDS edge switches in each rack uplinked to NPIV core fabrics A and B]

Recommended Blade Switch Core-Edge Design


Blade switch design:
NPIV core
Edge blade switches in NPV mode
Five racks, 96 dual-attached blade servers per rack
[Diagram: NPV-mode blade switches in each chassis uplinked to NPIV core fabrics A and B]

Session Agenda
History of Storage Area Networks
Fibre Channel Basics
SAN Design Requirements
Introduction to Typical SAN Designs
Introduction to NPIV/NPV
Core-Edge Design Review
Recommended Core-Edge Designs for Scale and Availability
Next Generation Core-Edge Designs

Next Generation SAN Design Best Practices

Fibre Channel over Ethernet


What Enables It?
10 Gbps Ethernet
Lossless Ethernet: matches the lossless behavior guaranteed in FC by B2B credits
Ethernet jumbo frames: max FC frame payload = 2112 bytes

The FCoE frame is a normal Ethernet frame (EtherType = FCoE) carrying a frame that is the same as a physical FC frame:
[Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS]
The FCoE header carries control information: version and ordered sets (SOF, EOF).

Unified Fabric
Why?
Fewer CNAs (Converged Network Adapters) instead of NICs, HBAs, and HCAs
Blade servers have a limited number of interfaces
[Diagram: FC HBAs, NICs, and an HCA carrying FC, LAN, management, backup, and IPC traffic, replaced by a pair of CNAs; all traffic goes over 10GE]

Todays Unified I/O Architecture


Today: separate LAN, SAN A, and SAN B fabrics, each with its own adapters and cabling.
I/O consolidation with FCoE: a Nexus 5000/UCS access layer carries the LAN and both SAN fabrics over unified wires.
[Diagram legend: Ethernet, FC, and FCoE links]

Nexus 5K FCOE Switches Also Use NPV to Achieve Both Server and IO Consolidation
32 host CNA ports at 10G per rack
Core connectivity using FC modules on the N5K
Nexus 5K/2K in NPV mode, dual-homed to SAN fabrics A and B, with uplinks to the LAN core
Cisco UCS Fabric Interconnects Also Use NPV


[Diagram: UCS 6100 fabric interconnects in NPV mode connecting UCS 5100 blade chassis to the LAN core and the SAN fabrics]

UCS Core-Edge Design


N-Port Virtualization Forwarding with MDS
F_Port channeling and trunking from the MDS (NPIV core, F_Ports) to the UCS 6100 fabric interconnects (N_Proxy): the FC port channel behaves as one logical uplink and can carry all VSANs (trunk, e.g., VSAN 1,2)
UCS fabric interconnects remain in NPV end-host mode; each server vHBA is pinned to an FC port channel
A server vHBA has access to the bandwidth of any member link of the FC port channel; load balancing is per flow, based on FC Exchange_ID
Loss of a port channel member link has no effect on the server vHBA (the failure is hidden): affected flows move to the remaining member links, and no new FLOGI is required
[Diagram: Server 1 (VSAN 1) and Server 2 (VSAN 2) vHBAs through vFC interfaces on 6100-A and 6100-B to SAN A and SAN B over F_Port channels and trunks]

Nexus 5x00 Core-Edge


F_Port Trunking and Channeling
Nexus 5000 access switches operating in NPV mode
With NX-OS release 4.2(1), the Nexus 5000 supports F-port trunking and channeling on the links between an NPV device and the upstream FC switch (NP port -> F port)
F_Port trunking: better multiplexing of traffic using shared links (multiple VSANs on a common link)
F_Port channeling: better resiliency between the NPV edge and the director core; no host re-login needed after a link failure, and no FSPF recalculation due to link failure
Simplifies the FC topology (a single logical uplink from the NPV device to the FC director)
[Diagram: Server 1 (VSAN 20 & 30) and Server 2 (VSAN 40 & 50) on a Nexus 5000 NPV edge, with TNP-to-TF trunking port channels to Fabric A (supporting VSAN 20 & 40) and Fabric B (supporting VSAN 30 & 50)]
Fibre Channel Aware Device


FCoE NPV
What does an FCoE-NPV device do?
An "FCoE NPV bridge" improves over a "FIP snooping bridge" by intelligently proxying FIP functions between a CNA and an FCF
Acts as an active Fibre Channel forwarding and security element
Load-balances logins from the CNAs evenly across the available FCF uplink ports
Takes the VSAN into account when mapping or pinning logins from a CNA to an FCF uplink
Emulates the existing Fibre Channel topology (same management, security, HA)
Avoids flooded discovery and configuration (FIP)
Fibre Channel configuration and control are applied at the edge port; the device proxies FCoE VLAN discovery and FCF discovery toward the FCF (VNP port to VF port)
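On a Nexus 5000 edge switch, FCoE NPV is enabled as its own feature and the uplink vfc toward the FCF runs in NP mode. A minimal sketch; the interface and VLAN/VSAN numbers are hypothetical and the exact commands depend on the NX-OS release:

```
feature fcoe-npv

vlan 10
  fcoe vsan 10

! Uplink toward the FCF core (VNP -> VF)
interface vfc100
  bind interface ethernet 1/1
  switchport mode NP
  no shutdown

! Host-facing vfc (VF -> VN to the CNA)
interface vfc3
  bind interface ethernet 1/3
  no shutdown
```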

FCoE Multi-Tier
Larger Fabric Multi-Hop Topologies
Multi-hop edge/core/edge topology
Core SAN switches supporting FCoE: N7K with DCB/FCoE line cards, or MDS with FCoE line cards (Sup2A)
Edge FC switches supporting either:
  N5K in E-NPV mode with FCoE uplinks to the FCoE-enabled core (VNP to VF)
  N5K or N7K in FC switch mode with FCoE ISL uplinks (VE to VE)
Scaling of the fabric (FLOGI, etc.) will most likely drive the selection of which mode to deploy
[Diagram: servers and FCoE- or FC-attached storage on edge switches, one edge in FCF switch mode with VE_Port ISLs and one edge in E-NPV mode with VNP-to-VF uplinks, into N7K or MDS FCoE-enabled fabric switches]

Reference Sessions
BRKCOM-2002: UCS Supported Storage Architectures and Best Practices with Storage
BRKDCT-1044: FCoE for the IP Network Engineer
BRKSAN-2704: Storage Area Network Extension Design and Operation

Recommendations
Recommended Reading
NX-OS and Cisco Nexus Switching (ISBN 1-58705-892-8), by David Jansen, Ron Fuller, Kevin Corbin. Cisco Press, 2010.
Storage Networking Fundamentals (ISBN-10 1-58705-162-1; ISBN-13 978-1-58705-162-3), by Marc Farley. Cisco Press, 2007.
Storage Networking Protocol Fundamentals (ISBN 1-58705-160-5), by James Long. Cisco Press, 2006.

Available Onsite at the Cisco Company Store



Complete Your Online Session Evaluation


Receive 25 Cisco Preferred Access points for each session evaluation you complete. Give us your feedback and you could win fabulous prizes. Points are calculated on a daily basis. Winners will be notified by email after July 22nd. Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center. Don't forget to activate your Cisco Live and Networkers Virtual account for access to all session materials, communities, and on-demand and live activities throughout the year. Activate your account at any Internet station or visit www.ciscolivevirtual.com.

Visit the Cisco Store for Related Titles http://theciscostores.com

Thank you.

Reference Slides

NPV Traffic Engineering


Allows the user to select the external interface per server interface (traffic map).

Benefits:
Customized traffic patterns and bandwidth management
Allows use of the shortest path
Enables use of persistent FCIDs

[Diagram: blade server chassis with blades 1..N on server interfaces fc1/2-fc1/4, mapped to external interfaces fc1/1 and fc1/5 toward the SAN and storage]

npv traffic-map server-interface fc1/2 external-interface fc1/1
npv traffic-map server-interface fc1/3 external-interface fc1/1
.
npv traffic-map server-interface fc1/N external-interface fc1/5

Number of NPIV Logins: MDS 9200/9500


Type of Login               Verified Logins
Logins per port             126 (1) / 256 (2)
Logins per line card        400
Logins per switch           2,000
Logins per physical fabric  10,000

(1) SAN-OS 3.x, NX-OS 4.1(1)   (2) NX-OS 4.1(2)

These are the numbers of logins allowed on all Gen1, Gen2, and Gen3 line cards. The per-switch limits apply to all MDS 9200 and MDS 9500 switches. The MDS 9124/9134 and blade switches have different limits, shown later.

Number of NPIV Logins: MDS 9124/9134 and Blade Switches


Switching Mode               NPV Mode   Switch Mode
Logins per port              114        42 (1) / 89 (2)
Logins per port-group        114        168
Logins per MDS 9124          684        1008
Logins per MDS 9134          1140       1680
Logins per MDS 9124e         684        1008
Logins per IBM blade switch  570        840

(1) Using 2-member zoning   (2) Using default zone-permit instead of zoning

The stated numbers are verified / supported numbers of logins.

MDS 9000 Family Buffer to Buffer Credits


[Table: buffer-to-buffer credit figures per platform for the 18/4-port 1/2/4-Gbps, 4-port 10-Gbps, 4/44-port 1/2/4/8-Gbps, 48-port 1/2/4/8-Gbps, and 24-port 1/2/4/8-Gbps modules and the 9148 and 9124/9134 fabric switches and 9222i: shared and dedicated FX-Port and E-Port defaults, dedicated credit ranges (e.g., 2-250, 2-500, 2-750), extended credit ranges (up to 4,095), resulting maximum distances per speed (up to 8,190 km at 1 Gbps, assuming maximum frame size), and whether ports must be disabled for maximum credits]

Introduction to FCoE

Unified Fabric
IEEE DCB
Developed by the IEEE 802.1 Data Center Bridging Task Group (DCB). All technically stable; final standards expected by mid-2010.

IEEE 802.1Qbb Priority-based Flow Control (PFC): in Sponsor Ballot
IEEE 802.3bd Frame Format for PFC: in Sponsor Ballot
IEEE 802.1Qaz Enhanced Transmission Selection (ETS) and Data Center Bridging eXchange (DCBX): just completed WG recirculation ballot; a new recirculation is expected in order to go to Sponsor Ballot after the May interim
IEEE 802.1Qau Congestion Notification: done
IEEE 802.1Qbh Port Extender: in its first task group ballot

CEE (Converged Enhanced Ethernet) is an informal group of companies that submitted initial inputs to the DCB WGs.
What's the difference between DCE, CEE, and DCB?


All three acronyms describe the same thing: the architectural collection of Ethernet extensions (based on open standards). Cisco has co-authored many of the associated standards and is focused on providing a standards-based solution for a Unified Fabric in the data center. The IEEE has decided to use the term DCB (Data Center Bridging) to describe these extensions to the industry: http://www.ieee802.org/1/pages/dcbridges.html

Priority Flow Control


Fibre Channel over Ethernet Flow Control
Enables lossless Ethernet using PAUSE based on a CoS, as defined in 802.1p.
When a link is congested, the CoS assigned to FCoE is PAUSEd, so its traffic is not dropped.
Other traffic assigned to other CoS values continues to transmit, relying on upper-layer protocols for retransmission.

[Diagram: eight virtual lanes of transmit queues and receive buffers across an Ethernet link, with one lane PAUSEd while the others keep flowing; analogous to Fibre Channel B2B credits and R_RDY]

DCB Virtual Links


An example:
VL1 - LAN service - LAN/IP traffic through a LAN/IP gateway toward the campus core/Internet
VL2 - no-drop service - storage traffic toward the storage area network
Ability to support QoS queues within the lanes
[Diagram: virtual links VL1, VL2, and VL3 sharing one physical wire]
Fibre Channel over Ethernet Protocol


Host-Side FIP and DCBX Configuration

[Diagram: the FC-MAC address is composed of two parts: the first portion is the FC-MAP of the Nexus 5000 (0E-FC-xx), and the second portion is the FC-ID (e.g., 10.00.01)]

FCoE Building Blocks


The Acronyms Defined
FCF (FCoE Forwarder): a Fibre Channel switching element that is able to forward FCoE frames (Nexus 5000, Nexus 7000, MDS 9000)
FPMA (Fabric-Provided MAC Address): a unique MAC address assigned by an FCF to a single ENode
ENode (End Node): a Fibre Channel end node that is able to transmit FCoE frames using one or more ENode MACs
FCoE pass-through: any DCB device capable of passing FCoE frames to an FCF (FIP snooping bridge, FCoE N-Port Virtualizer)
Single-hop FCoE: running FCoE between the host and the first-hop access-level switch
Multi-hop FCoE: the extension of FCoE beyond a single hop into the aggregation and core layers of the data center network

FCoE Building Blocks


Fibre Channel Forwarder
FCF (Fibre Channel Forwarder) is the Fibre Channel switching element inside an FCoE switch:
Fibre Channel logins (FLOGIs) happen at the FCF
Consumes a Domain ID
FCoE encapsulation/decapsulation happens within the FCF
Forwarding is based on FC information
[Diagram: an FCoE switch (FC Domain ID 15) containing an FCF with FC ports and an Ethernet bridge with Ethernet ports]

FCoE Building Blocks


Converged Network Adapter
Replaces multiple adapters per server, consolidating both Ethernet and FC on a single interface
Appears to the operating system as individual interfaces (NICs and HBAs)
First-generation CNAs support PFC and CIN-DCBX
Second-generation CNAs support PFC, CEE-DCBX, and FIP; single-chip implementation, half height/length, half the power consumption
[Diagram: a 10GbE CNA on PCIe; the FC driver binds to the FC HBA PCI address and the Ethernet driver binds to the Ethernet NIC PCI address, with a single 10GbE link carrying both Fibre Channel and Ethernet]
Translate to FCoE
Same host-to-target communication: the host has 2 CNAs (one per fabric), and the target has multiple ports to connect to the fabric
Connect to a capable switch: port-type negotiation (the FC port type is handled by FIP), speed negotiation, DCBX negotiation
The access switch is a Fibre Channel Forwarder (FCF)
Dual fabrics are still deployed for redundancy
[Diagram: ENode with a CNA on a unified wire to a DCB-capable switch acting as an FCF, which splits into the Ethernet fabric and the FC fabric toward the target]

Fibre Channel over Ethernet Port Types


[Diagram: FCoE switch port types. An FCF switch connects to another FCF switch through VE_Ports; an FCF's VF_Port connects either to an FCoE-NPV switch's VNP_Port or directly to an end node's VN_Port. The FCF switch with VF_Port-to-VN_Port attachment is available now; VE_Port and FCoE-NPV (VNP_Port) support are tied to later release timeframes.]

My Port Is Up... Can I Talk Now?


FIP and FCoE Login Process
Step 1: FIP discovery process
Enables FCoE adapters to discover which VLAN to transmit and receive FCoE frames on
Enables FCoE adapters and FCoE switches to discover other FCoE-capable devices
Verifies that the lossless Ethernet path is capable of FCoE transport

Step 2: FIP login process
Similar to the existing Fibre Channel login process: the adapter sends a FLOGI to the upstream FCF
Adds negotiation of the MAC address to use: with Fabric-Provided MAC Addresses (FPMA), the FCF assigns the host an ENode MAC address to be used for FCoE forwarding

**Multi-hop FCoE with VE_Ports not supported until the Eaglehawk release
[Diagram: a CNA ENode performing FIP discovery against a VF_Port on the FCF, with E_Ports or VE_Ports into the FC or FCoE fabric toward the target]


Fibre Channel over Ethernet Addressing Scheme
An ENode MAC is assigned for each FCID. The ENode MAC is composed of an FC-MAP and an FCID:
FC-MAP is the upper 24 bits of the ENode's MAC
FCID is the lower 24 bits of the ENode's MAC
FCoE forwarding decisions are still made based on FSPF and the FCID within the ENode MAC (Fibre Channel FCID addressing)
[Diagram: FC-MAC address = FC-MAP (0E-FC-xx) + FC-ID (e.g., 10.00.01); the Domain ID is assigned by the FC fabric]
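As a worked example of the composition above (the FC-MAP prefix and FCID values follow the slide's illustration; a real fabric may use a different FC-MAP):

```
FC-MAP = 0E:FC:00            (upper 24 bits, fabric-provided prefix)
FCID   = 10:00:01            (lower 24 bits, assigned at FLOGI)
FPMA   = 0E:FC:00:10:00:01   (ENode MAC used for FCoE forwarding)
```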

Login Complete... Almost There


Fabric Zoning
FCoE fabric zoning is done the same way as FC fabric zoning
Zoning is enforced at the FCF
Zoning can be configured on the Nexus 5000 using the CLI or Fabric Manager
If the Nexus 5000 is an FCoE pass-through device, zoning is configured on the upstream core switch and pushed to the Nexus 5000

zone name EXAMPLE_Zone
* fcid 0x10.00.01 [pwwn 10:00:00:00:c9:76:fd:31] [initiator]
* fcid 0x11.00.01 [pwwn 50:06:01:61:3c:e0:1a:f6] [target]

**Multi-hop FCoE with VE_Ports not supported until the Eaglehawk release
[Diagram: an initiator (pwwn 10:00:00:00:c9:76:fd:31) zoned through an FCF with Domain ID 10 to a target (pwwn 50:06:01:61:3c:e0:1a:f6) in the FC/FCoE fabric]

Single Hop Design


Today's Solution

Host connected over a unified wire to the first-hop access switch
The access switch (Nexus 5000) is the FCF; Fibre Channel ports on the access switch can be in NPV or switch mode for native FC traffic
DCBX is used to negotiate the enhanced Ethernet capabilities
FIP is used to negotiate the FCoE capabilities as well as the host login process
FCoE runs from the host to the access-switch FCF; native Ethernet and native FC break off at the access layer
[Diagram: an ENode CNA on a unified wire to a DCB-capable switch acting as an FCF, splitting into the Ethernet fabric and the FC fabric toward the target]

Single Hop Design


The CNA Point of View

Generation 1 CNAs: limited to direct attachment at the access layer; utilized the Cisco, Intel, Nuova Data Center Bridging Exchange protocol (CIN-DCBX)
Generation 2 CNAs: utilize the Converged Enhanced Ethernet Data Center Bridging Exchange protocol (CEE-DCBX) and the FCoE Initialization Protocol (FIP) as defined by the T11 FC-BB-5 specification; support both direct and multi-hop attachment (through a Nexus 4000 FIP snooping bridge)
[Diagram: Generation 1 and Generation 2 CNAs direct-attached (VN_Port to VF_Port) to Nexus 5000 FCF-A and FCF-B, which uplink to the LAN fabric and SAN fabrics A and B]
Unified Fabric with FCoE


CNA: Converged Network Adapter
Standard drivers, same management. The operating system sees:
A dual-port 10 Gigabit Ethernet adapter
Dual-port 4-Gbps Fibre Channel HBAs

The Virtual Fibre Channel Interface


Binding the vfc
A virtual Fibre Channel interface (vfc) is where the logical Fibre Channel wire is terminated (at the FCF); today this corresponds to an F_Port.

Three options for binding a vfc interface:
Physical interface: direct-attach CNAs and FCoE-NPV devices (future)
Single-link port-channel: direct-attach CNAs connected via a two-port vPC
MAC address over an Ethernet cloud: through a FIP snooping device
[Diagram: on a Nexus 5000 FCF, vfc1 is bound to Eth1/2, vfc2 to port-channel PC1, and vfc3 to a MAC address learned through a Nexus 4000 FIP snooping bridge]
Single Hop Design


The FCoE VLAN
FCoE VLANs are treated differently than native Ethernet VLANs: no flooding, MAC learning, broadcasts, etc.
The FCoE VLAN must not be configured as a native VLAN (FIP uses the native VLAN)
FCoE VLANs should not be configured on Ethernet links that are not carrying FCoE traffic
Unified wires must be configured as trunk ports and STP edge ports
[Diagram: Nexus 5000 FCFs carrying VLAN 10 (LAN) plus VLAN 20 (Fabric A, VSAN 2) and VLAN 30 (Fabric B, VSAN 3) on STP edge trunks]

! VLAN 20 is dedicated for VSAN 2 FCoE traffic
(config)# vlan 20
(config-vlan)# fcoe vsan 2

Single Hop Design

The FCoE VLAN and STP

FCoE Fabric A will have a different VLAN topology than FCoE Fabric B, and both are different from the LAN fabric:
PVST+ allows a unique topology per VLAN
MST requires that all switches in the same region have the same mapping of VLANs to instances
MST does not require that all VLANs be defined in all switches
A separate instance must be used for FCoE VLANs
Recommended: three separate instances for native Ethernet VLANs, SAN A VLANs, and SAN B VLANs
[Diagram: Nexus 5000 FCF-A (VLAN 10,20 / VSAN 2) and FCF-B (VLAN 10,30 / VSAN 3) uplinked to the LAN fabric and SAN fabrics A and B]

spanning-tree mst configuration
  name FCoE-Fabric
  revision 5
  instance 5 vlan 1-19,40-3967,4048-4093
  instance 10 vlan 20-29
  instance 15 vlan 30-39

VLANs and VSANs


FCoE Considerations
VSANs use VLAN hardware table resources
FCoE requires a VLAN and a VSAN that you bind the VLAN to; hence, count each FCoE VSAN as consuming 2 VLANs
Enabling FCoE burns two internal VSAN/VLAN resources
A vfc bound to a port-channel stays bound as long as there is at least one port in the port-channel attached to the switch

Single Hop Design


What is MCEC?

Optimal layer 2 LAN design often leverages Multi-Chassis EtherChannel (MCEC)
Nexus utilizes virtual port channel (vPC) to enable MCEC either between switches or to direct-attached servers (using LACP or static port channels)
MCEC provides network-based load sharing and redundancy without introducing layer 2 loops in the topology
MCEC maintains the separation of LAN and SAN high-availability topologies: FC maintains separate SAN A and SAN B topologies, while the LAN utilizes a single logical topology
[Diagram: direct-attach vPC topology; a server dual-homed via MCEC to vPC peers Nexus 5000 FCF-A and FCF-B (joined by a vPC peer link), which uplink to the LAN fabric and SAN fabrics A and B]

Single Hop Design

Unified Wires and MCEC


vPC-enabled topologies with FCoE must follow specific design and forwarding rules:
A vfc interface can only be associated with a single-port port-channel
While the port-channel configurations are the same on N5K-1 and N5K-2, the FCoE VLANs are different
vPC configuration works with Gen-2 FIP-enabled CNAs ONLY
FCoE VLANs are not carried on the vPC peer link (only VLAN 10 is); the FCoE and FIP EtherTypes are not forwarded over the vPC peer link
The vPC contains only 2 x 10GE links, one to each Nexus 5000
[Diagram: direct-attach vPC topology; Nexus 5000 FCF-A (VLAN 10,20) and FCF-B (VLAN 10,30) on STP edge trunks, uplinked to the LAN fabric and SAN fabrics A and B]


Multi - Hop Design

FCoE Pass-Through Options


Multi-hop FCoE networks allow FCoE traffic to extend past the access layer (first hop)
In multi-hop FCoE the role of a transit Ethernet bridge needs to be evaluated: avoid Domain ID exhaustion, ease management
FIP snooping is a minimum requirement suggested in FC-BB-5
Fibre Channel over Ethernet NPV (FCoE-NPV) is a new capability intended to solve a number of design and management challenges
[Diagram: VN_Port hosts attached to DCB-capable Ethernet switches, which uplink to FCF VF_Ports in SAN A and SAN B]

CLI Configuration Sample

Sample Working Topology

[Diagram: NPV core switch (MDS) connected to an NPV edge switch (Nexus 5000)]

BRKSAN-1121

2011 Cisco and/or its affiliates. All rights reserved.

Cisco Public

Enabling the Required Features


NPV Edge Switch:
pod7-5020-51(config)# feature fcoe
FC license checked out successfully
fc_plugin extracted successfully
FC plugin loaded successfully
FCoE manager enabled successfully
FC enabled on all modules successfully

pod7-5020-51(config)# feature npv
Verify that boot variables are set and the changes are saved. Changing to npv mode
erases the current configuration and reboots the switch in npv mode.
Do you want to continue? (y/n): y

NPV Core Switch:


pod3-9216i-70(config)# feature npiv
pod3-9216i-70(config)# feature fport-channel-trunk
Admin trunk mode has been set to off for
1- Interfaces with admin switchport mode F,FL,FX,SD,ST in admin down state
2- Interfaces with operational switchport mode F,FL,SD,ST.

Configure the VSANs


NPV Edge Switch:
pod7-5020-51(config)# vsan database
pod7-5020-51(config-vsan-db)# vsan 10

NPV Core Switch:


pod3-9216i-70(config)# vsan database
pod3-9216i-70(config-vsan-db)# vsan 10
pod3-9216i-70(config-vsan-db)# vsan 10 interface fc1/12

Configure Trunking F_Port Port Channel


NPV Core Switch:
pod3-9216i-70(config)# interface port-channel 100
pod3-9216i-70(config-if)# switchport mode F
pod3-9216i-70(config-if)# switchport trunk mode on
pod3-9216i-70(config-if)# channel mode active
pod3-9216i-70(config-if)# interface fc2/13, fc2/19
pod3-9216i-70(config-if)# switchport mode F
pod3-9216i-70(config-if)# switchport rate-mode dedicated
pod3-9216i-70(config-if)# switchport trunk mode on
pod3-9216i-70(config-if)# channel-group 100 force
fc2/13 fc2/19 added to port-channel 100 and disabled
please do the same operation on the switch at the other end of the port-channel,
then do "no shutdown" at both ends to bring them up
pod3-9216i-70(config-if)# no shut

Configure Trunking F_Port Port Channel


NPV Edge Switch:
pod7-5020-51(config)# interface san-port-channel 1
pod7-5020-51(config-if)# switchport mode NP
pod7-5020-51(config-if)# switchport trunk mode on
pod7-5020-51(config-if)# interface fc2/1-2
pod7-5020-51(config-if)# switchport mode NP
pod7-5020-51(config-if)# switchport trunk mode on
pod7-5020-51(config-if)# channel-group 1
fc2/1 fc2/2 added to port-channel 1 and disabled
please do the same operation on the switch at the other end of the port-channel,
then do "no shutdown" at both ends to bring it up
pod7-5020-51(config-if)# no shut

Configure FCoE on NPV Edge Switch


pod7-5020-51(config)# vlan 10
pod7-5020-51(config-vlan)# fcoe vsan 10
pod7-5020-51(config-vlan)# interface ethernet 1/3
pod7-5020-51(config-if)# switchport mode trunk
pod7-5020-51(config-if)# switchport trunk allowed vlan 1,10
pod7-5020-51(config-if)# spanning-tree port type edge trunk
Warning: Edge port type (portfast) should only be enabled on ports connected to a
single host. Connecting hubs, concentrators, switches, bridges, etc... to this
interface when edge port type (portfast) is enabled, can cause temporary bridging
loops. Use with CAUTION
pod7-5020-51(config-if)# interface vfc3
pod7-5020-51(config-if)# bind interface ethernet 1/3
pod7-5020-51(config-if)# vsan database
pod7-5020-51(config-vsan-db)# vsan 10 interface vfc3
pod7-5020-51(config-vsan-db)# interface vfc3
pod7-5020-51(config-if)# no shut

Verify NPV Fabric Connectivity (Edge)


pod7-5020-51# show npv flogi-table
--------------------------------------------------------------------------------
SERVER                                                                  EXTERNAL
INTERFACE VSAN FCID     PORT NAME               NODE NAME               INTERFACE
--------------------------------------------------------------------------------
vfc3      10   0x0f0100 21:00:00:c0:dd:12:04:f3 20:00:00:c0:dd:12:04:f3 Spo1
Total number of flogi = 1.

Verify that the vfc interface is pinned to the SAN port channel:

pod7-5020-51# show npv traffic-usage
NPV Traffic Usage Information:
----------------------------------------
Server-If    External-If
----------------------------------------
vfc3         san-port-channel 1
----------------------------------------

Verify on the core that the N5K and the FCoE workstation are logged into the fabric:

pod3-9216i-70(config)# show flogi database
--------------------------------------------------------------------------------
INTERFACE      VSAN FCID     PORT NAME               NODE NAME
--------------------------------------------------------------------------------
fc1/12         10   0x0f00dc 21:00:00:20:37:a9:cd:6e 20:00:00:20:37:a9:cd:6e
fc1/12         10   0x0f00e0 21:00:00:20:37:a9:89:7e 20:00:00:20:37:a9:89:7e
fc1/12         10   0x0f00e2 21:00:00:20:37:af:de:85 20:00:00:20:37:af:de:85
fc1/12         10   0x0f00e4 21:00:00:20:37:a9:d6:49 20:00:00:20:37:a9:d6:49
fc1/12         10   0x0f00e8 21:00:00:20:37:a9:d7:bf 20:00:00:20:37:a9:d7:bf
fc1/12         10   0x0f00ef 21:00:00:20:37:a9:94:89 20:00:00:20:37:a9:94:89
port-channel 1 1    0x670102 24:01:00:0d:ec:a3:da:40 20:01:00:0d:ec:a3:da:41
port-channel 1 10   0x0f0100 21:00:00:c0:dd:12:04:f3 20:00:00:c0:dd:12:04:f3

Total number of flogi = 8.

Configure Zoning
NPV Core Switch:
pod3-9216i-70(config)# zone name npv_vsan10 vsan 10
pod3-9216i-70(config-zone)# member pwwn 21:00:00:20:37:a9:cd:6e
pod3-9216i-70(config-zone)# member pwwn 21:00:00:20:37:a9:89:7e
pod3-9216i-70(config-zone)# member pwwn 21:00:00:20:37:af:de:85
pod3-9216i-70(config-zone)# member pwwn 21:00:00:c0:dd:12:04:f3
pod3-9216i-70(config-zone)# exit
pod3-9216i-70(config)# zoneset name npv_v10_zs vsan 10
pod3-9216i-70(config-zoneset)# member npv_vsan10
pod3-9216i-70(config-zoneset)# zoneset activate name npv_v10_zs vsan 10
Zoneset activation initiated. check zone status
pod3-9216i-70(config)# show zoneset active
zoneset name npv_v10_zs vsan 10
  zone name npv_vsan10 vsan 10
  * fcid 0x0f00dc [pwwn 21:00:00:20:37:a9:cd:6e]
  * fcid 0x0f00e0 [pwwn 21:00:00:20:37:a9:89:7e]
  * fcid 0x0f00e2 [pwwn 21:00:00:20:37:af:de:85]
  * fcid 0x0f0100 [pwwn 21:00:00:c0:dd:12:04:f3]
