
Everyone in this room is a GENIUS

What are Best Practices?

Learning from Others' Mistakes

Learning from your mistakes makes you SMART
Learning from others' mistakes makes you GENIUS
vPC Best Practices and Design on NX-OS
Nazim Khan, CCIE #39502 (DC/SP)
Technical Marketing Engineer, Data Center Group
BRKDCT-2378
Session Goals
• Best Practices and Designs for vPC – virtual port-channel
• Nexus 2000 (FEX) will only be addressed from a vPC standpoint
• FabricPath / vPC+ Overview
• vPC with FCoE
• vPC with VXLAN
• vPC with ACI

vPC: Get it Right the Very First Time


Session Non-Goals
• vPC troubleshooting
• Details of vPC+
• Details of FabricPath, FCoE, ACI and VXLAN
Related Sessions at Cisco Live San Diego

Session ID      Session Name
BRKDCT-2404     VXLAN deployment models - A practical perspective
BRKDCT-2081     Cisco FabricPath Technology and Design
BRKDCT-2458     Nexus 9000/7000/6000/5000 Operations and Maintenance Best Practices
BRKACI-2601     Real World ACI Deployment and Migration

Agenda
• Feature Overview
• Configuration Best Practices
• Design Best Practices
• vPC Operations and Upgrade
• vPC with Fabric Technologies
• Scalability
• Reference Material
Data Center Technology Evolution (timeline, with MPLS, OTV and LISP alongside):
• 2008: STP
• 2009: vPC
• 2010: FEX with vPC
• 2010: FabricPath with vPC+
• 2013-2014: VXLAN
• 2014-2015: ACI

Why vPC in 2015?
vPC is the Foundation
Role of vPC in the Evolution of the Data Center
• vPC launched in 2009
• Deployed by almost 95% of Nexus customers
• Used to redundantly connect network entities at the edge of the Fabric
− Dual-homed servers (bare metal, blades, etc.)
− Network services (Firewalls, Load Balancers, etc.)
Agenda
• Feature Overview
− Concepts and Benefits
− Terminology
vPC Feature Overview
vPC Concept & Benefits

(Diagram: STP topology vs. vPC physical and logical topology — switches S1 and S2 as a vPC pair with S3 dual-homed; in the logical topology S1/S2 appear as one switch)

• No Blocked Ports, More Usable Bandwidth, Load Sharing
• Fast Convergence
Feature Overview
vPC Terminology

(Diagram: S1 and S2 form a vPC domain under a Layer 3 cloud. Labeled elements: vPC, vPC Domain, vPC Peer, Peer-Keepalive Link, Peer-Link, CFS, vPC Member Port, Orphan Port, and Orphan Device S3)
vPC Failure Scenario (For Your Reference)
vPC Peer-Keepalive Link up & vPC Peer-Link down

vPC peer-link failure (link loss):
• vPC peer-keepalive up
• Status of the other vPC peer known
• Both peers active; the secondary suspends its vPC member ports
• Secondary vPC peer disables all vPCs
• Traffic flows via the vPC primary
• Orphan devices connected to the secondary peer will be isolated
vPC Failure Scenario – Dual Active (For Your Reference)
vPC Peer-Keepalive down followed by vPC Peer-Link down

1. vPC peer-keepalive DOWN
2. vPC peer-link DOWN
3. DUAL-ACTIVE or SPLIT BRAIN

• The vPC primary peer remains primary and the secondary peer becomes operational primary
• Results in traffic loss / uncertain traffic behavior
• When the links are restored, the operational primary (former secondary) keeps the primary role and the former primary becomes operational secondary
Agenda
• vPC Configuration Best Practices
− Building a vPC domain
− Domain-ID
− Peer-Link
− Peer-Keepalive Link
− Spanning-Tree
− Peer-switch
− Private VLAN (PVLAN)
− Auto-recovery
− Object tracking
vPC Configuration Best Practices
Building a vPC domain – Configuration Steps

1. Define domains
2. Establish Peer-Keepalive connectivity
3. Create a Peer-Link
4. Create vPCs
5. Make sure configurations are consistent

(Order does matter!)
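The five steps above map onto a minimal configuration sequence like the following sketch (the domain ID, interface numbers and keepalive addresses are illustrative assumptions, not from the slides; run the same steps on both peers):

```
! Step 0: enable the feature
feature vpc
! Step 1: define the vPC domain
vpc domain 10
  ! Step 2: peer-keepalive, here over mgmt0 (illustrative IPs)
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
! Step 3: create the peer-link
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! Step 4: create a vPC toward the access device
interface port-channel 10
  switchport mode trunk
  vpc 10
! Step 5: verify configuration consistency on both peers
! NX-1# show vpc consistency-parameters global
```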
vPC Configuration Best Practices
vPC Domain-ID
• The vPC peer devices use the vPC domain ID to automatically assign a unique vPC system MAC address
• You MUST use unique domain IDs for all vPC pairs defined in a contiguous Layer 2 domain (e.g. vPC Domain 10 on S1/S2 and vPC Domain 20 on S3/S4)

! Configure the vPC domain ID – it should be unique within the Layer 2 domain
NX-1(config)# vpc domain 20

! Check the vPC system MAC address
NX-1# show vpc role
<snip>
vPC system-mac : 00:23:04:ee:be:14
vPC Configuration Best Practices
vPC Peer-Link

• The vPC Peer-Link should be a point-to-point connection
• Peer-Link member ports can be 10/40/100GE interfaces
• Peer-Link bandwidth should be sized according to the expected vPC traffic
• vPC imposes the rule that the Peer-Link can never be blocking
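A peer-link is typically a dedicated port-channel across at least two physical links for redundancy; a minimal sketch (the port numbers are illustrative):

```
! Bundle two 10GE links into the peer-link port-channel
interface ethernet 1/1-2
  channel-group 1 mode active
interface port-channel 1
  switchport
  switchport mode trunk
  vpc peer-link
```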
vPC Configuration Best Practices
vPC Peer-Keepalive Link

Recommendations (in order of preference):

Preference   Nexus 7X00 / 9500 series          Nexus 9300 / 6000 / 5X00 / 3X00 series
1            Dedicated link(s) (1GE/10GE LC)   mgmt0 interface
2            mgmt0 interface                   Dedicated link(s) (1GE/10GE LC)
3            L3 infrastructure                 L3 infrastructure
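Where a dedicated front-panel link is preferred, the keepalive is usually isolated in its own VRF; a sketch (the VRF name, interface and addresses are illustrative assumptions):

```
! Dedicated keepalive link in its own VRF, on both peers
vrf context vpc-keepalive
interface ethernet 1/48
  no switchport
  vrf member vpc-keepalive
  ip address 192.168.100.1/30
vpc domain 10
  peer-keepalive destination 192.168.100.2 source 192.168.100.1 vrf vpc-keepalive
```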
vPC Configuration Best Practices (For Your Reference)
vPC Peer-Keepalive Link – Dual Supervisors

• When using dual supervisors and mgmt0 interfaces to carry the vPC peer-keepalive, DO NOT connect them back to back between the two switches
• Only one management port is active at a given point in time, and a supervisor switchover may break keepalive connectivity
• Use the management interfaces only when you have an out-of-band management network (a management switch in between)
vPC Configuration Best Practices
Spanning Tree (STP)

STP is still running — to manage loops outside of the vPC domain, or before the initial vPC configuration!

• All switches in the Layer 2 domain should run either Rapid-PVST+ or MST
• Do not disable spanning-tree for any VLAN
• Always define the vPC domain as the STP root for all VLANs in that domain
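Making the vPC domain the STP root is normally done by lowering the bridge priority on both peers; one possible sketch (the VLAN range is an illustrative assumption):

```
! Same priority on BOTH vPC peers (combine with peer-switch so they
! present a single logical root); 4096 beats the default of 32768
spanning-tree vlan 1-100 priority 4096
```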
vPC Configuration Best Practices
vPC Peer-Gateway

• Allows a vPC switch to act as the active gateway for packets addressed to the peer router MAC
• Keeps traffic forwarding local to the vPC node and avoids use of the peer-link
• Allows interoperability with features of some NAS or load-balancer devices

N7k(config-vpc-domain)# peer-gateway
vPC Configuration Best Practices
vPC Peer-Switch

Without Peer-Switch:
• STP for vPCs is controlled by the vPC primary
• The vPC primary sends BPDUs on STP designated ports
• The vPC secondary device proxies BPDUs to the primary

With Peer-Switch:
• Peer-Switch makes the vPC peer devices appear as a single STP root
• BPDUs are processed by the logical STP root formed by the two vPC peer devices

N7k(config-vpc-domain)# peer-switch
vPC Configuration Best Practices
PVLAN on vPC
• PVLAN configuration should be identical across both vPC switches
• PVLAN configuration is not supported on the Peer-Link
• Type-1 compatibility check: port mode is a type-1 check; a vPC leg is brought down if the PVLAN port mode differs between vPC legs (e.g. PVLAN-PROMISC (3500, 3501) on both legs is consistent)
• Type-2 compatibility check: PVLAN will bring down mismatched tuples (e.g. mismatched community VLAN mappings)

Note: This feature is currently not supported on N9X00
vPC Configuration Best Practices
PVLAN vPC Type-1 Consistency Check

(Diagrams: both vPC legs configured as PVLAN promiscuous trunks — consistent; both legs as isolated trunks — consistent; one leg an isolated trunk and the other a normal trunk — Type-1 consistency failure, the vPC leg is brought down)
vPC Configuration Best Practices
PVLAN vPC Type-2 Consistency Check

(Diagrams: identical PVLAN-PROMISC mappings (10, 201) on both legs — consistent; identical secondary trunk mappings (2,31) (3,30) (4,100) on both legs — consistent; secondary trunk mappings that differ between the legs — Type-2 consistency failure, the mismatched tuples are brought down)
vPC Configuration Best Practices
vPC Auto-Recovery

1. vPC peer-link down: S2 (secondary) shuts all its vPC member ports
2. S1 down, so the vPC peer-keepalive link goes down: S2 receives no keepalives
3. After 3 keepalive timeouts, S2 changes role to operational primary and brings up its vPCs
vPC Configuration Best Practices (For Your Reference)
vPC Auto-Recovery

Auto-recovery addresses two cases of single-switch behavior:
• The peer-link fails and, after a while, the primary switch (or the keepalive link) fails
• Both vPC peers are reloaded and only one comes back up

How it works:
• If the peer-link is down on the secondary switch, 3 consecutive missed peer-keepalives trigger auto-recovery
• After a reload (role is 'none established'), auto-recovery kicks in when the auto-recovery timer (240 sec) expires while the peer-link and peer-keepalive are still down
• The switch assumes the primary role
• vPCs are brought up, bypassing consistency checks

Nexus(config)# vpc domain 1
Nexus(config-vpc-domain)# auto-recovery
vPC Configuration Best Practices
Why Object Tracking?

• The modules hosting the peer-link and the uplinks fail on the vPC primary
• The Peer-Link is down, so the vPC secondary shuts all its vPCs
• Auto-recovery does not kick in, as the peer-keepalive link is still active
• Traffic is black-holed
vPC Configuration Best Practices
Object Tracking
• vPC object tracking tracks both the peer-link and the uplinks in a Boolean OR list
• Object tracking is triggered when the tracked object goes down
• Suspends the vPCs on the impaired device
• Traffic is forwarded over the remaining vPC peer

! Track the vPC peer-link
track 1 interface port-channel11 line-protocol
! Track the uplinks
track 2 interface Ethernet1/1 line-protocol
track 3 interface Ethernet1/2 line-protocol
! Combine all tracked objects into one.
! "OR" means the list goes down only if ALL objects are down
track 10 list boolean OR
  object 1
  object 2
  object 3
! If object 10 goes down on the primary vPC peer, the
! system will switch over to the other vPC peer and disable all local vPCs
vpc domain 1
  track 10
Agenda

• vPC Design Best Practices


− Mixed Hardware across vPC Peers
− FHRP with vPC
− Hybrid topology (vPC and non-vPC)
− vPC and Network Services
− vPC FEX Supported Topologies
− Physical port vPC
− vPC as Data Center Interconnect (DCI)
− Dynamic Routing over vPC
− vPC and Multicast
Design Best Practices
Mixed Hardware across vPC Peers: Line Cards

Always use identical line cards on both sides of the peer-link and of the vPC legs!

(Diagrams: two example vPC pairs — an N7000 pair and an N7700 pair — showing matching line card types (M1/M2/F2E/F3) on both ends of the peer-link and on both vPC legs)
Design Best Practices
Mixed Hardware across vPC Peers: Nexus 9500

(Diagram: an N9500 vPC pair with a compatibility matrix for mixing line cards across peers — N9K-X9636PQ / N9K-X9432PQ, N9K-X9564PX / N9K-X9464PX, N9K-X9564TX / N9K-X9464TX, N9K-X9536PQ / N9K-X9736PQ — some combinations supported, others not)
Design Best Practices
Mixed Hardware across vPC Peers: Chassis & Supervisors
• N7000 and N7700 in the same vPC construct – Supported
• VDC type should match on both peer devices
• vPC peers can have mixed SUP versions* (SUP1, SUP2, SUP2E)
• N5500 and N5600 in the same vPC construct – Not Supported

*Recommended only for short periods, such as a migration
FHRP with vPC
HSRP / VRRP / GLBP Active/Active

• With vPC, FHRP operates in Active/Active mode: the FHRP "Active" and "Standby" peers are both active for the shared L3 MAC
• No requirement for aggressive FHRP timers
• Best Practice: use the default FHRP timers
FHRP with vPC
Backup Routing Path

• Use one transit VLAN to establish an L3 routing backup path over the vPC peer-link in case the L3 uplinks fail; all other SVIs can use passive-interface
• A point-to-point dynamic routing protocol adjacency (OSPF/EIGRP) between the vPC peers establishes an L3 backup path to the core through the peer-link in case of uplink failure
• Define the SVIs associated with FHRP as routing passive interfaces in order to avoid routing adjacencies over the vPC peer-link
• A single point-to-point VLAN/SVI (aka transit VLAN, e.g. VLAN 99) will suffice to establish the L3 neighbor
• Alternatively, use an L3 point-to-point link between the vPC peers to establish the L3 backup path
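A transit-VLAN backup path as described above might be sketched as follows (the VLAN number, addressing, OSPF process and area are illustrative assumptions):

```
! Transit VLAN carried on the peer-link, on both vPC peers
vlan 99
!
interface vlan 99
  ip address 10.99.99.1/30
  ip router ospf 1 area 0.0.0.0
  no ip ospf passive-interface
!
! All other SVIs (including the FHRP ones) stay passive,
! so no adjacency forms over the vPC peer-link
router ospf 1
  passive-interface default
```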
Hybrid Topology (vPC and non-vPC)

(Diagram: S1/S2 as vPC peers with peer-switch; S3 attached via vPC1, S4 attached via non-vPC links. Per-VLAN STP roots are set via bridge priorities — VLAN 1: 4K on the primary and 8K on the secondary; VLAN 2: 8K on the primary and 4K on the secondary — so VLAN 1 and VLAN 2 block on alternate links toward S4)

• Supports a hybrid topology where vPC and non-vPC devices are connected to the same vPC domain
• Needs additional configuration parameters: spanning-tree pseudo-information
• If global spanning-tree parameters were configured previously and spanning-tree pseudo-information parameters are configured afterwards, the pseudo-information parameters take precedence over the global parameters
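The pseudo-information parameters might be sketched as follows (the priority values mirror the per-VLAN example on this slide; they are illustrative):

```
! Per-VLAN designated priorities advertised alongside peer-switch in a
! hybrid topology: VLAN 1 prefers one peer, VLAN 2 the other
spanning-tree pseudo-information
  vlan 1 designated priority 4096
  vlan 2 designated priority 8192
```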
Design Best Practices
ASA Cluster

ASA Cluster Mode
• Use a unique vPC for the ASA cluster data links to the vPC domain
• Use one vPC per ASA device for the Cluster Control Link (CCL) to the vPC domain
• Leverage the peer-switch configuration
Nexus 2000 (FEX) Straight-Through Deployment with vPC

• Port-channel connectivity from the server
• Two Nexus switches bundled into a vPC pair, each with its own straight-through FEX (host interfaces on FEX 100 and FEX 101)
• Suited for servers with dual NICs capable of running a port-channel
Nexus 2000 (FEX) Active-Active Deployment with vPC

• Each Fabric Extender is connected to two Nexus 5X00 / 6000 switches over its fabric links
• Suited for servers with a single NIC, or with dual NICs lacking port-channel capability
• Scale implications: fewer FEX per system and fewer vPCs

Note:
• This design is currently not supported on Nexus 9X00
• Nexus 7X00 will support this from release 7.2
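On a Nexus 5X00/6000 pair, an active-active FEX is typically attached by giving the fabric port-channel a vPC number on both peers; a sketch (the FEX number, interface and vPC number are illustrative assumptions):

```
! Repeat on BOTH vPC peers
fex 100
  pinning max-links 1
interface ethernet 1/1
  channel-group 100
interface port-channel 100
  switchport mode fex-fabric
  fex associate 100
  vpc 100
```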
Nexus 2000 (FEX) – Enhanced vPC

• Port-channel connectivity to dual-homed FEXs
• From the server perspective, a single access switch with port-channel support – each "line card" (FEX) backed by redundant "supervisors"
• Ideal design for a combination of single-NIC servers and dual-NIC servers with port-channel capability
• Scale implications: fewer FEX per system and fewer vPCs

Note:
This design is currently not supported on N7000 / N7700 and N9X00
Physical Port vPC (for FEX, scale and F3 support)
Available: Q2 CY15, NX-OS 7.2

(Diagrams: two vPC domains contrasted — a port-channel vPC and a physical-port vPC — each presenting VPC1 to FEX101/FEX102 host ports e101/1/1 and e102/1/1)

interface e101/1/1
  vpc 1
  lacp mode active

• vPC configuration on a physical Layer 2 port, as opposed to on a port-channel
• Front-panel ports and FEX ports connected to F2/F2E/F3 only
• Improves scaling, as a separate port-channel interface is not created for a single-link vPC leg
• Key benefit: more than 1000 host-facing vPCs with FEX
vPC – Data Center Interconnect (DCI)

(Diagram: DC 1 and DC 2 interconnected over long-distance dark fiber, optionally 802.1AE encrypted. Aggregation vPC domains 10 and 20 connect up to core vPC domains 11 and 21, which face each other across the DCI. Port-type legend: N = network port, E = edge or portfast, F = BPDUfilter, B = BPDUguard, R = Rootguard, "-" = normal port type. The DCI-facing core ports are edge ports with BPDU filtering; server-facing access ports are edge ports with BPDU guard; rootguard and network ports are used within each data center.)
Design Best Practices
vPC as Data Center Interconnect (DCI)

PROS
• vPC is easy to configure and provides a robust, resilient interconnect solution

CONS
• A maximum of only two Data Centers can be interconnected
• Layer 3 peering between Data Centers cannot be done through the vPC; separate links are required

Design Best Practices
vPC – Data Center Interconnect (DCI)

• The vPC domain IDs of the vPC layers should be UNIQUE
• BPDU Filter on the edge devices to avoid BPDU propagation
• STP edge mode to provide fast failover times
• No loop must exist outside the vPC domain
• No L3 peering between the Nexus 7000 devices (i.e. pure Layer 2)
Dynamic Routing over vPC?

Dynamic Routing over vPC
Use Case 1: Firewall at the Aggregation Layer

• Firewalls in routed mode peer over vPC with S1/S2 toward the L3 cloud
• The firewalls may be in active-standby mode
• Static routing / L3 point-to-point links are NOT required
• External and internal traffic traverse the same port-channel to the firewall
Dynamic Routing over vPC
Use Case 2: Remote Orphan Site Peering in a DCI Deployment

• vPC as Data Center Interconnect (DCI)
• Each switch has a routing adjacency with both vPC devices in the other DC
• Each DC is connected to a remote site by an orphan port
• Each remote site forms routing adjacencies with both peers of its directly connected DC
Dynamic Routing over vPC
New Supported Designs

Dynamic Routing over vPC – Supported Designs
• Layer 3 services devices with vPC
• Layer 3 over DCI – vPC

Note: Supported only on Nexus 7X00 with F3 and F2E line cards starting from release 7.2. Supported on Nexus 9X00 in ACI mode. Currently not supported on Nexus 5X00, Nexus 3X00, Nexus 9X00 (standalone mode) and Nexus 7000 M-series line cards.
Dynamic Routing over vPC – Supported Designs
• STP interconnection using a vPC VLAN
• Orphan device peering with the vPC peers over a vPC VLAN

Note: Supported only on Nexus 7X00 with F3 and F2E line cards starting from release 7.2. Supported on Nexus 9X00 in ACI mode. Currently not supported on Nexus 5X00, Nexus 3X00, Nexus 9X00 (standalone mode) and Nexus 7000 M-series line cards.
Dynamic Routing over vPC
Devices without L3-over-vPC Support
• Don't attach routers to a vPC domain via an L2 port-channel
• Common workarounds:
− Individual L3 links for routed traffic (L3 ECMP)
− Static route to the FHRP VIP

(Diagrams: a router with IP X attached to vPC peers S1/S2, which share FHRP VIP A on their SVIs — shown with individual L3 ECMP links, and with a static route to VIP A)
Design Best Practices
vPC and Multicast

• vPC supports PIM-SM only
• vPC uses CFS to sync IGMP state
• Sources in the vPC domain:
− both vPC peers are forwarders
− duplicates are avoided via the vPC loop-avoidance logic
• Sources in the Layer 3 cloud:
− the active forwarder is elected on the unicast metric
− the vPC primary is elected active forwarder if the metrics are equal
vPC: Get it Right the Very First Time
Agenda

• vPC Operations and Upgrade


− vPC Self Isolation
− vPC Shutdown
− Graceful Insertion and Removal
− ISSU / ISSD with vPC
vPC Self-Isolation
• Automatically triggered isolation
• Example presented: all line cards fail on the primary

Current impact:
• When this failure happens on the primary, the peer-link is brought down
• This causes the secondary to bring down all its vPC legs
• Traffic is completely blocked

Self-isolation behavior (7.2) — isolated switch:
• Physically bring down the peer-link
• Physically bring down all vPC legs
• Send self-isolation through the peer-keepalive
Peer switch:
• Receive self-isolation from the peer through the peer-keepalive
• Change role to primary
• Bring up all down vPC legs

NOTE: Available in NX-OS 7.2 on 5k/6k/7k
vPC Self-Isolation
• Automatically triggered isolation
• No up VLANs on the peer-link: this case addresses the issue where no VLANs are up on the peer-link while the port-channel is physically up (e.g. VLAN misconfiguration, hardware programming failure)

Current behavior:
• The up VLANs on the vPC legs are the result of the configured VLANs intersected with the up VLANs on the peer-link (a three-way intersection)
• All VLANs are down on the vPC domain
• This completely blocks the traffic

Self-isolation behavior — isolated switch:
• Physically bring down the peer-link
• Physically bring down all vPC legs
• Send "self-isolation" through the peer-keepalive
Peer switch:
• Receive self-isolation from the peer through the peer-keepalive
• Change role to primary
• Bring up all down vPC legs

NOTE: Available in NX-OS 7.2 on 5k/6k/7k

vPC Configuration Best Practices
vPC Shutdown

• Isolates a switch from the vPC complex
• The isolated switch can be debugged, reloaded, or even removed physically, without affecting the vPC traffic going through the non-isolated switch

switch# configure terminal
switch(config)# vpc domain 100
switch(config-vpc)# shutdown

Note: This feature is currently not supported on the Nexus 3X00 and 9X00 series
Graceful Insertion and Removal

Change window begins:
system mode maintenance
One command! Pre-change system snapshot.

Graceful Insertion and Removal

Change window complete:
system mode normal
One command! Pre/post-change snapshot comparison.
Graceful Insertion and Removal
• Flexible framework providing a comprehensive, systemic method to isolate a node
• Built on the configuration profile foundation in NX-OS
• Initial support for: vPC/vPC+, ISIS, OSPF, EIGRP, BGP, Interface
• Per VDC on Nexus 7x00

Platform           Release
Nexus 5x00/6000    NX-OS 7.1
Nexus 7x00         NX-OS 7.2
Nexus 9000         NX-OS 7.X
ISSU / ISSD with vPC
• ISSU is the recommended system upgrade in a multi-device vPC environment
• A vPC system can be independently upgraded with no disruption to traffic
• The upgrade is serialized and must be run one peer at a time (a config lock prevents simultaneous upgrades)
• Configuration is locked on the "other" vPC peer during ISSU
• A similar process applies to downgrades (ISSD)
• Check the ISSU / ISSD compatibility matrix and ensure ISSU is supported from the current to the target release (e.g. 5.2(x) / 6.2(x))
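An ISSU is driven with the `install all` exec command, run on one peer at a time; a sketch (the image filenames are illustrative assumptions):

```
! Preview the impact first, then run the in-service upgrade
switch# show install all impact kickstart bootflash:n7000-s2-kickstart.6.2.10.bin system bootflash:n7000-s2-dk9.6.2.10.bin
switch# install all kickstart bootflash:n7000-s2-kickstart.6.2.10.bin system bootflash:n7000-s2-dk9.6.2.10.bin
```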
Agenda

• vPC with Fabric Technologies


− vPC with FabricPath (vPC+)
− vPC with FCoE
− vPC with VXLAN
− vPC with ACI
FabricPath: an Ethernet Fabric
Shipping on Nexus 7x00, Nexus 600x and Nexus 5x00

• Eliminates Spanning Tree limitations
• High resiliency, fast network re-convergence
• Any VLAN, anywhere in the Fabric
• Connect a group of switches using an arbitrary topology
• With a simple CLI, aggregate them into a Fabric

N7K(config)# interface ethernet 1/1
N7K(config-if)# switchport mode fabricpath
vPC vs vPC+
Architecture of vPC and FabricPath with vPC+

                vPC        vPC+
Edge ports      CE Port    FP Port
VLANs           CE VLANs   FP VLANs

• The physical architecture of vPC and vPC+ is the same from the access edge
• The functionality/concepts of vPC and vPC+ are the same
• Key differences: the addition of a Virtual Switch ID, and the Peer-Link is a FabricPath core port
• vPC+ is not supported on the Nexus 9X00 & Nexus 3X00 series
Dynamic Routing over vPC+
• Layer 3 devices (router/firewall) can form routing adjacencies with both vPC+ peers over the vPC toward the FabricPath core
• The peer-link ports and VLAN are configured in FabricPath mode
• N55xx, N56xx and N6000 support this design with IPv4/IPv6 unicast and PIM-SM multicast
• This design is not supported on N7X00
vPC with FCoE
Unified Fabric Design

• vPC with FCoE is ONLY supported between hosts and N5K/N6K or N5K/N6K & N2232 pairs
• The vPC contains only 2 x 10GE links – one to each Nexus 5X00
• Must follow specific rules:
− A 'vfc' interface can only be associated with a single-port port-channel
− While the port-channel configurations are the same on both switches, the FCoE VLANs are different (e.g. VLAN 10,20 on FCF-A and VLAN 10,30 on FCF-B; each FCoE VLAN exists only on its own fabric side)
− FCoE VLANs are 'not' carried on the vPC peer-link (automatically pruned)
− FCoE and FIP ethertypes are 'not' forwarded over the vPC peer-link
− A vPC carrying FCoE between two FCFs is NOT supported
• Best Practice: use a static port-channel rather than LACP with vPC and boot from SAN [if NX-OS is prior to 5.1(3)N1(1)]
Why VXLAN?
Problems being addressed:
• VLAN scale – VXLAN extends the L2 segment ID field to 24 bits, potentially allowing for up to 16 million unique L2 segments over the same network
• Layer 2 segment elasticity over Layer 3 boundaries – VXLAN encapsulates the L2 frame in an IP-UDP header

High-level technology overview:
• MAC-in-UDP encapsulation
• Leverages multicast in the transport network to simulate flooding behavior for broadcast, unknown unicast and multicast in the same segment
• Leverages ECMP to achieve optimal path usage over the transport network
VXLAN Packet Format (For Your Reference)

Outer MAC Header (14 bytes, + 4 bytes optional 802.1Q tag) | Outer IP Header (20 bytes) | UDP Header (8 bytes, destination port = VXLAN) | VXLAN Header (8 bytes: flags "RRRR1RRR", reserved fields, 24-bit VNID) | Original L2 Frame | FCS

• VXLAN is a Layer 2 overlay scheme over a Layer 3 network
• VXLAN uses Ethernet-in-UDP encapsulation
• VXLAN uses a 24-bit VXLAN Segment ID (VNI) to identify Layer 2 segments
VXLAN Terminology
VTEP – Virtual Tunnel End Point

(Diagram: two VTEPs connected over a transport IP network, each with an IP interface toward the core and a local LAN segment with end systems)

• VXLAN terminates its tunnels on VTEPs (Virtual Tunnel End Points)
• A VTEP has two interfaces:
1. Bridging functionality for local hosts
2. An IP identification in the core network for VXLAN encapsulation / decapsulation
vPC VTEP
• When vPC is enabled, an 'anycast' VTEP address is programmed on both vPC peers
• The multicast topology prevents BUM traffic being sent to the same IP address across the L3 network (prevents duplication of flooded packets)
• The vPC peer-gateway feature must be enabled on both peers
• The VXLAN header is 'not' carried on the vPC peer-link
VXLAN & vPC (For Your Reference)
vPC Configuration

The individual IP is used for single-attached hosts; the anycast IP is used for vPC-attached hosts.

VTEP1:
vlan 10
  vn-segment 10000          ! map VNI to VLAN
interface loopback 0
  ip address <VTEP individual IP – orphan>
  ip address <VTEP anycast IP – per vPC domain> secondary
!
interface nve1               ! VXLAN tunnel interface
  source-interface loopback0
  member vni 10000 mcast-group 235.1.1.1

VTEP2:
vlan 10
  vn-segment 10000
interface loopback 0
  ip address <VTEP individual IP – orphan>
  ip address <VTEP anycast IP – per vPC domain> secondary
!
interface nve1
  source-interface loopback0
  member vni 10000 mcast-group 235.1.1.1

(Topology: VTEPs 1–4 at the leaf layer; hosts H1 10.10.10.10 and H2 10.10.10.20 in VLAN 10, vPC-attached)
VXLAN & vPC (For Your Reference)
vPC Configuration

VTEP1:
vlan 10
  vn-segment 10000
interface loopback 0
  ip address 1.1.1.1/32
  ip address 1.1.1.201/32 secondary
!
interface nve1
  source-interface loopback0
  member vni 10000 mcast-group 235.1.1.1

VTEP2:
vlan 10
  vn-segment 10000
interface loopback 0
  ip address 1.1.1.2/32
  ip address 1.1.1.201/32 secondary
!
interface nve1
  source-interface loopback0
  member vni 10000 mcast-group 235.1.1.1

VTEP3:
vlan 10
  vn-segment 10000
interface loopback 0
  ip address 1.1.1.3/32
  ip address 1.1.1.202/32 secondary
!
interface nve1
  source-interface loopback0
  member vni 10000 mcast-group 235.1.1.1

VTEP4:
vlan 10
  vn-segment 10000
interface loopback 0
  ip address 1.1.1.4/32
  ip address 1.1.1.202/32 secondary
!
interface nve1
  source-interface loopback0
  member vni 10000 mcast-group 235.1.1.1

(Hosts H1 10.10.10.10 and H2 10.10.10.20 in VLAN 10, vPC-attached)
VXLAN & vPC
Dual-Attached Host to Dual-Attached Host (Layer 2)

• Host 1 (H1) and Host 2 (H2) are each dual-connected to a vPC domain
• As H1 is behind a vPC interface, the anycast VTEP IP is the source for the VXLAN encapsulation
• As H2 is behind a vPC interface, the anycast VTEP IP is the target
Nexus 9000 + APIC = ACI

ACI uses a policy-based approach that focuses on the application.

(Diagram: an APIC cluster controlling the fabric; an application profile chains Web, App and DB tiers through QoS, filter and service policies out to the external network)
vPC and ACI

The ACI fabric is utilised for the control plane between vPC peers:
• No dedicated peer-link between vPC peers: the fabric itself serves as the MCT
• No out-of-band mechanism to detect peer liveness: due to the rich fabric connectivity (leaf-spine), it is very unlikely that the peers will have no active path between them
• CFS (Cisco Fabric Services) is replaced by Zero Message Queue (ZMQ)
• As the ACI fabric is VXLAN-based, an anycast VTEP is shared by both leaf switches in a vPC domain
Agenda

• Scalability
vPC Scalability

For the latest scalability numbers, please refer to the verified scalability pages for each platform:

Nexus 7X00
http://www.cisco.com/en/US/docs/switches/datacenter/sw/verified_scalability/b_Cisco_Nexus_7000_Series_NX-OS_Verified_Scalability_Guide.html

Nexus 5X00
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5600/sw/verified_scalability/701N11/b_N5600_Verified_Scalability_701N11/b_N6000_Verified_Scalability_700N11_chapter_01.html

Nexus 600X
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus6000/sw/verified_scalability/602N21/b_N6000_Verified_Scalability_602N21/b_N6000_Verified_Scalability_602N12_chapter_01.html

Nexus 9X00
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/7-x/scalability/guide_703I12/b_Cisco_Nexus_9000_Series_NX-OS_Verified_Scalability_Guide_703I12.html

Nexus 3X00
http://www.cisco.com/en/US/docs/switches/datacenter/nexus3000/sw/configuration_limits/503_u5_1/b_Nexus3k_Verified_Scalability_503U51.html
Agenda

• Reference Material
Reference Material (For Your Reference)

• vPC Best Practices Design Guide:
http://www.cisco.com/c/dam/en/us/td/docs/switches/datacenter/sw/design/vpc_design/vpc_best_practices_design_guide.pdf
• vPC design guides:
http://www.cisco.com/en/US/partner/products/ps9670/products_implementation_design_guides_list.html
• vPC and VSS Interoperability white paper:
http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/white_paper_c11_589890.html
• VXLAN Overview:
http://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-729383.html
• FabricPath white paper:
http://www.cisco.com/c/en/us/products/collateral/switches/nexus-7000-series-switches/white_paper_c11-687554.html
• ACI Overview:
http://www.cisco.com/c/en/us/products/collateral/cloud-systems-management/aci-fabric-controller/white-paper-c11-729587.html
Key Take-Aways

vPC in 2015: the foundation for VXLAN, ACI and FabricPath

vPC Benefits:
• No blocked ports
• High availability
• Fast convergence

VXLAN:
• L2 segment scalability
• VTEP redundancy with vPC

ACI:
• Policy based
• Fabric for the vPC control plane

FabricPath:
• Eliminates Spanning Tree
• High resiliency
• vPC+ for legacy switches, servers, hosts

FCoE:
• Unified Fabric for LAN & SAN
Participate in the “My Favorite Speaker” Contest
Promote Your Favorite Speaker and You Could Be a Winner
• Promote your favorite speaker through Twitter and you could win $200 of Cisco
Press products (@CiscoPress)
• Send a tweet and include
• Your favorite speaker’s Twitter handle
• Two hashtags: #CLUS #MyFavoriteSpeaker

• You can submit an entry for more than one of your “favorite” speakers
• Don’t forget to follow @CiscoLive and @CiscoPress
• View the official rules at http://bit.ly/CLUSwin
Complete Your Online Session Evaluation
• Give us your feedback to be
entered into a Daily Survey
Drawing. A daily winner
will receive a $750 Amazon
gift card.
• Complete your session surveys
through the Cisco Live mobile
app or your computer on
Cisco Live Connect.
Don’t forget: Cisco Live sessions will be available
for viewing on-demand after the event at
CiscoLive.com/Online
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Table Topics
• Meet the Engineer 1:1 meetings
• Related sessions
Thank you
