
Quick Start Guide

Overlay Transport Virtualization (OTV)

Architecture & Solutions Group


US Public Sector Advanced Services
Mark Stinnette, CCIE Data Center #39151

Date 16 October 2013


Version 1.4.2

© 2013 Cisco and/or its affiliates. All rights reserved. 1


This Quick Start Guide (QSG) is a cookbook-style guide to deploying data center
technologies, with end-to-end configurations for several commonly deployed architectures.

Configurations are mapped directly to commonly deployed data center architecture
topologies and are broken down in a step-by-step process, building toward a complete
end-to-end configuration based on Cisco best practices and strong recommendations.
Each QSG contains stage-setting content, technology component definitions,
recommended best practices, and, most importantly, data center topology scenarios
mapped directly to complete end-to-end configurations. This QSG is geared toward
network engineers, network operators, and data center architects, allowing them to
quickly and effectively deploy these technologies in their data center infrastructure
based on proven, commonly deployed designs.



OTV Configuration
Benefits Overview
Geographically dispersed data centers provide added application resiliency and workload allocation flexibility. To this end,
the network must provide Layer 2, Layer 3 and storage connectivity between data centers. Connectivity must be provided
without compromising the autonomy of data centers or the stability of the overall network. OTV provides an operationally
optimized solution for the extension of Layer 2 connectivity across any transport. OTV is therefore critical to the effective
deployment of distributed data centers to support application availability and flexible workload mobility.

OTV is a "MAC address in IP" technique for supporting Layer 2 VPNs to extend LANs over any transport. The transport can
be Layer 2 based, Layer 3 based, IP switched, label switched, or anything else as long as it can carry IP packets. By using
the principles of MAC routing, OTV provides an overlay that enables Layer 2 connectivity between separate Layer 2
domains while keeping these domains independent and preserving the fault-isolation, resiliency, and load-balancing benefits
of an IP-based interconnection.

Overlay Transport Virtualization (OTV) provides the following benefits:


• Scalability
   Extends Layer 2 LANs over any network that supports IP (transport agnostic)
   Designed to scale across multiple data centers
• Simplicity
   Supports transparent deployment over the existing network without redesign
   Requires minimal configuration commands
• Resiliency
   Preserves existing Layer 3 failure boundaries
   Includes built-in loop prevention
   Preserves failure boundaries and site independence (failure isolation between data centers)
• Efficiency
   Optimizes available bandwidth by using equal-cost multipathing and optimal multicast replication
   Multipoint connectivity
   Fast failover
• Virtual Machine Mobility



OTV Configuration
Benefits Overview
Additional benefits of using OTV for Layer 2 extension:

• No need for Ethernet over Multiprotocol Label Switching (EoMPLS) or Virtual Private LAN Services (VPLS)
deployment for Layer 2 extensions

• Use of any network transport that supports IP

• Provision of Layer 2 and Layer 3 connectivity using the same dark fiber connections

• Native Spanning Tree Protocol (STP) isolation:
   No need to explicitly configure Bridge Protocol Data Unit (BPDU) filtering

• Native unknown unicast flooding isolation:
   Unknown unicast traffic is not sent to the overlay

• Address Resolution Protocol (ARP) optimization with the OTV ARP cache

• Simplified provisioning of First Hop Redundancy Protocol (FHRP) isolation

• Simplified addition of sites



OTV Configuration
Commonly Deployed Designs :: Aggregation Layer

OTV On a Stick
 Most Commonly Deployed
 No Network Redesign or Re-Cabling
 Join Interface connects back through the VDC that has the SVIs on it
 Separate OTV VDC or Appliance Switch

Inline OTV
 Dedicated Uplink for DCI
 Join Interface has a dedicated link out to the DCI transport (Core or WAN Edge)
 Separate OTV VDC or Appliance Switch



OTV Configuration
Terminology & Components

OTV delivers Layer 2 extensions over any type of transport infrastructure.

Diagram callouts ::
 vPC Domain (vPC or vPC+ supported)
 SVI separation on the Aggregation VDC
 Join Interfaces :: point-to-point Layer 3 interface; M-Series line cards only
 Peer-Link
 OTV Edge Device :: performs OTV functions
 OTV Overlay Interface :: multicast or unicast transports supported
 Authoritative Edge Device (AED) :: one AED carries the odd VLANs, the other the even VLANs
 Internal Interfaces :: regular Layer 2; carries the VLANs extended over OTV; M-Series line cards supported; F1 & F2E line cards supported in 6.2(2)


OTV Configuration
Terminology & Components
OTV encapsulates packets into an IP header and sets the Don't Fragment (DF)
bit for all OTV control and data packets crossing the transport network. The
encapsulation adds 42 bytes to the original IP maximum transmission unit (MTU)
size. It is therefore a best practice to configure the join interface and all
Layer 3 interfaces that face the IP core between the OTV edge devices with the
maximum MTU size supported by the transport.

WEST DC :: Site ID 1, Site VLAN 99, HSRP Active / HSRP Standby, Filter HSRP on both edge devices
EAST DC :: Site ID 2, Site VLAN 99, HSRP Active / HSRP Standby, Filter HSRP on both edge devices

Site ID & Site VLAN are deployed on both OTV Edge devices.

Site Identifier ::
 Use the same Site ID within a single data center
 Use a unique Site ID for each data center

Site VLAN ::
 Use the same Site VLAN in different data centers (not mandatory)
 The Site VLAN is active on internal interfaces, but do not extend the Site VLAN
 The Site VLAN should be a dedicated VLAN

Filtering FHRP in both data centers on the OTV VDC is required to allow the same
default gateway to exist in different locations, thus optimizing the outbound
traffic flows (server-to-client direction).


OTV Configuration
Terminology & Components :: Layer 2 & Layer 3 Features
Layer 3 Interface (Towards Routed Core)
interface ethernet x/y
mtu 9216
ip address x.x.x.x/30
ip router ospf 1 area 0
ip ospf network point-to-point
ip pim sparse-mode

Layer 3 Interface (Towards OTV Join)


interface ethernet x/z
mtu 9216
ip address x.x.x.x/30
ip router ospf 1 area 0
ip ospf network point-to-point
ip pim sparse-mode
ip igmp version 3

OTV Join Interfaces
interface ethernet x/y
  mtu 9216
  ip address x.x.x.x/30
  ip router ospf 1 area 0
  ip ospf network point-to-point
  ip igmp version 3

Note :: Must enable Site VLAN [x] on the trunk towards the Aggregation Switch (make the VLAN active).

OTV Internal Interfaces
interface port-channel x
  switchport
  switchport mode trunk
  switchport trunk allowed vlan x, y
interface ethernet x/y - z
  channel-group x force mode active

Aggregation Internal Interfaces
interface port-channel x
  switchport
  switchport mode trunk
  switchport trunk allowed vlan x, y
  vpc x
interface ethernet x/y
  channel-group x force mode active

Aggregation Switch :: Enable PIM
feature pim
ip pim rp-address x.x.x.x group-list 224.0.0.0/4
ip pim ssm range 232.0.0.0/8


OTV Configuration
Additional Features, Terminology, & Components
Feature Overview
Edge Device :: The OTV Edge Device performs OTV functions; multiple OTV Edge Devices can exist at each
site. OTV requires the Transport Services (TRS) license. If you create the OTV Edge Device in a
non-default VDC, it also requires the Advanced Services license.
Internal Interfaces :: Internal interfaces are the site-facing interfaces of the Edge device, carrying the
VLANs extended through OTV. They are regular Layer 2 interfaces (switchport mode trunk), typically port
channels in a vPC. No OTV configuration is required on these interfaces.
Join Interfaces :: A join interface is one of the uplinks of the Edge device; it is a Layer 3 point-to-point
routed interface (physical interface, port channel, or sub-interface) used to physically 'join' the overlay
network. No OTV-specific configuration is required.
Overlay Interface :: The overlay interface is a virtual interface where most of the OTV configuration
happens; it is a logical, multi-access, multicast-capable interface that encapsulates Layer 2 frames in IP
unicast or multicast.
Authoritative Edge Device (AED) :: The AED is responsible for MAC address advertisement for its VLANs
and for forwarding its VLANs' traffic inside and outside the site. The extended VLANs are split across the
AEDs (even & odd) in OTV multi-homing.
Site VLAN :: The OTV Site VLAN is used to discover OTV neighbor edge devices in the same local site.
Site Identifier :: Edge devices in the same site must use a common Site ID, unique to that site. The Site ID
is included in the control plane; an overlay will not come up until a Site ID is configured, and it should be
configured on all local OTV Edge devices.
MTU :: Join interfaces and neighboring core interfaces need an MTU of ≥ 1542 (hard requirement). Best
practice is to configure the maximum MTU size supported by the transport.
FHRP Isolation :: Filtering FHRP messages across the OTV overlay makes it possible to provide the same
active default gateway in each data center site. Note: in future releases, OTV will offer a simple command
to enable these filtering capabilities.
SVI Separation :: OTV currently enforces SVI separation for the VLANs being extended across the OTV
link, meaning OTV is usually placed in its own VDC for OTV functions, with the SVIs in a separate
Aggregation VDC.
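
Until that single command ships, FHRP isolation is typically built from a VLAN ACL that drops HSRP hellos plus a route map that keeps the HSRP virtual MAC out of the OTV control plane. The sketch below follows Cisco's published OTV HSRP-filtering approach for HSRPv1/v2, applied on the OTV VDC; the ACL and route-map names, VLAN list, and MAC wildcards are illustrative and should be adapted (e.g., for VRRP or HSRP for IPv6):

ip access-list ALL_IPs
  10 permit ip any any
ip access-list HSRP_IP
  10 permit udp any 224.0.0.2/32 eq 1985      [HSRPv1 hellos]
  20 permit udp any 224.0.0.102/32 eq 1985    [HSRPv2 hellos]

vlan access-map HSRP_Localization 10
  match ip address HSRP_IP
  action drop
vlan access-map HSRP_Localization 20
  match ip address ALL_IPs
  action forward
vlan filter HSRP_Localization vlan-list 10

mac-list OTV_HSRP_VMAC_deny seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00    [HSRPv1 vMAC range]
mac-list OTV_HSRP_VMAC_deny seq 20 permit 0000.0000.0000 0000.0000.0000
route-map OTV_HSRP_filter permit 10
  match mac-list OTV_HSRP_VMAC_deny
otv-isis default
  vpn Overlay1
    redistribute filter route-map OTV_HSRP_filter

The VACL stops HSRP hellos from leaking between sites, while the otv-isis filter prevents the virtual MAC from being advertised over the overlay.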



OTV Configuration
Additional Features, Terminology, & Components
Feature Overview
OTV Requirements Nexus 7000 Series or ASR routers. LAN ADVANCED SERVICES (VDC) license & TRANSPORT
SERVICES (OTV/LISP) license. An M-Series line card is required in the OTV VDC for OTV functions.

Multicast Transport :: Multicast transport (OTV control plane) is ideal for connecting a large number of
sites. OTV neighbor relationships are built over a multicast-enabled core / transport infrastructure. All
OTV edge devices are configured to join a specific ASM (Any Source Multicast) group, where they
simultaneously play the roles of receiver and source. Because edge devices join a common multicast
group, adjacencies are maintained over that group and a single update reaches all neighbors.
Unicast Transport :: Supported since NX-OS release 5.2. Unicast-only transport (OTV control plane) is
ideal for connecting a small number of sites and requires an adjacency server. Each OTV device must
create multiple copies of each control-plane packet and unicast them to each remote OTV device that is
part of the same logical overlay.
Adjacency Server :: Used in OTV unicast mode; usually enabled on an OTV Edge device. A primary and a
secondary can be configured, and all other OTV Edge (client) devices are configured with the address of
the adjacency server. To communicate with all remote OTV devices, each OTV node needs a list of
neighbors to replicate control packets to. Rather than statically configuring that list in each OTV node,
the adjacency server provides this information dynamically.
OTV Extend VLAN :: Enables OTV advertisements for those VLANs. OTV will not forward Layer 2 packets
for VLANs outside the extended VLAN range of the overlay interface. Assign a VLAN to only one overlay
interface.
OTV Authentication :: OTV supports authentication of Hello messages as well as authentication of PDUs.
Dual-Homed OTV Edge Devices :: Leverage vPC or vPC+ for dual-homed OTV Edge devices. The AED
role, together with the site VLAN, allows multi-homing of OTV Edge devices.



OTV Configuration
Additional Features, Terminology, & Components
Feature Overview
Selective Unicast Flooding :: In 6.2(2). Some applications rely on unknown unicast frames, so selective
unicast flooding can be enabled on a per-MAC-address, per-VLAN basis to accommodate silent or
unidirectional hosts. OTV's default behavior is to not forward unknown unicast traffic.
Command used: otv flood mac [xxxx.yyyy.zzzz] vlan [#]
Dedicated Data Broadcast Forwarding :: In 6.2(2). A dedicated broadcast group is a configurable option,
useful for QoS purposes. A dedicated multicast group can be configured for all broadcast transmission in
an OTV overlay that uses multicast transport on the underlying network. By default, broadcast and
control traffic share the same multicast group address. The broadcast group must be configured on all
OTV Edge devices connected to the OTV overlay network.
Source Interface with Loopback :: In a 6.2(2)+ maintenance release. Logical interfaces can serve as join
interfaces; a loopback guarantees the interface is up/up. An OTV Edge device can be configured to use a
loopback interface as the join interface for an OTV overlay to increase availability. This feature requires
the OTV Edge device to participate in the core PIM multicast domain to support multiple paths. Prior to
this feature, only single-homed Ethernet and port-channel interface options were available.
OTV VLAN Translation :: In 6.2(2). VLAN translation allows OTV to map a local VLAN (in DC 1) to a
remote VLAN (in DC 2). In previous NX-OS releases, the extended VLANs had to be identical in each site
(i.e., X to X). With the VLAN mapping feature, VLANs can be translated so they can differ between sites
(i.e., X to Y to Z), providing more flexible deployment options. Both multicast- and unicast-enabled IP
core networks are supported. VLAN mappings have a one-to-one relationship.
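
These 6.2(2) options can be combined on a single overlay. The sketch below shows the associated commands with illustrative group, VLAN, and MAC values; verify the exact syntax against the OTV configuration guide for your NX-OS release:

interface Overlay 1
  otv broadcast-group 239.2.2.2        [dedicated data broadcast group]
  otv vlan mapping 10 to 20            [translate local VLAN 10 to remote VLAN 20]

otv flood mac 0000.1111.2222 vlan 10   [selective unicast flooding for a silent host]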



OTV Configuration
Supported Line Card Topologies :: NX-OS 6.1 and Prior Releases

Aggregation VDC

• OTV VDC must use only M-Series ports for both Internal and Join Interfaces
[M1-48, M1-32, M1-08, M2-Series]
• OTV VDC Types (M-only)
• Aggregation VDC Types (M-only, M1-F1 or F2/F2E)



OTV Configuration
Supported Line Card Topologies :: NX-OS 6.2 and Later Releases

Aggregation VDC

• OTV VDC Join Interfaces must use only M-Series ports


[M1-48, M1-32, M1-08, M2-Series]
• OTV VDC Internal Interfaces can use M-Series, F1 and F2E ports (F1 and F2E must be in Layer 2 proxy mode)
• OTV VDC Types (M-only, M1-F1, M1-F2E)
• Aggregation VDC Types (M-only, M1-F1, M1-F2E, F2, F2E, F2/F2E)



OTV Configuration
Quick Start Guide Assumptions

OTV Characteristics
 2-wide 7k Aggregation VDC
 Multi-homed OTV VDC
 Multicast enabled transport
 Extend VLAN 10
 OTV Site VLAN 99

Physical View – Connectivity Map

Layer 3 routed point-to-point interfaces; OSPF will be used as the routing protocol.

Layer 2 interfaces; the Aggregation VDC connects through vPC to the OTV VDC.



Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration
Create Aggregation & OTV VDCs

Step 1 :: install | validate licenses
Step 2 :: create aggregation VDC
Step 3 :: create OTV VDC

Verify the Nexus 7000 has the proper licenses to support OTV and VDC ::
 OTV requires the Transport Services license
 VDC requires the Advanced Services license

install license bootflash:///lan_advanced_services_pkg.lic
install license bootflash:///lan_transport_services_pkg.lic
show license usage

[Admin / Default VDC] :: first Nexus 7000 (the second Nexus 7000 is identical, using AGG-2 / OTV-2)

no vdc combined-hostname

vdc AGG-1
  limit-resource module-type m1 f1 m1xl m2xl
  cpu-share 5
  allocate interface Ethernet [….]

vdc OTV-1
  limit-resource module-type m1 m1xl m2xl
  cpu-share 5
  allocate interface Ethernet [….]

Allocate the interfaces to the appropriate VDC role accordingly.
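
The VDC and license state can be checked before moving on; these are standard NX-OS show commands, though output formatting varies by release:

show vdc                  [lists the configured VDCs and their state]
show vdc membership       [confirms interface allocation per VDC]
show license usage        [confirms Advanced Services & Transport Services licenses]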



Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration
Configure Aggregation VDC :: Layer 2 vPC (Option)

Apply on the first aggregation switch; switch-specific differences are noted after the configuration.

feature lacp
feature vpc

vlan 10-20, 99

spanning-tree pathcost method long
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
no spanning-tree loopguard default

spanning-tree vlan 10-20, 99 priority 0

spanning-tree pseudo-information
  vlan 10-20 root priority 4096
  vlan 1-10, 99 designated priority 8192
  vlan 11-20 designated priority 16384

vpc domain 1
  role priority 1
  system-priority 4096
  peer-keepalive destination [….] source [….] vrf management
  peer-switch
  peer-gateway
  auto-recovery
  auto-recovery reload-delay
  delay restore 30
  ip arp synchronize

interface port-channel 2
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10-20, 99
  spanning-tree port type network
  vpc peer-link

interface e3/1 , e4/1
  channel-group 2 force mode active

On the second aggregation switch, change the following ::
  vpc domain 1 :: role priority 2
  spanning-tree pseudo-information :: vlan 10-20, 99 root priority 4096 / vlan 1-10, 99 designated priority 16384 / vlan 11-20 designated priority 8192

See QSG :: vPC for more details …



Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration
Configure Aggregation VDC :: Layer 2 FabricPath vPC+ (Option)

Apply on the first aggregation switch; switch-specific differences are noted after the configuration.

feature lacp
feature vpc
install feature-set fabricpath    [Default / Admin VDC]
feature-set fabricpath            [VDC Only]

vlan 10-20, 99
  mode fabricpath

fabricpath switch-id 10

fabricpath domain default
  root-priority 255

spanning-tree pseudo-information
  vlan 10-20, 99 root priority 0

vpc domain 1
  role priority 1
  system-priority 4096
  peer-keepalive destination [….] source [….] vrf management
  peer-gateway
  auto-recovery
  auto-recovery reload-delay
  delay restore 30
  ip arp synchronize
  fabricpath switch-id 1000

interface port-channel 2
  switchport mode fabricpath
  vpc peer-link

interface e3/1 , e4/1
  channel-group 2 force mode active

On the second aggregation switch, change the following ::
  fabricpath switch-id 11
  fabricpath domain default :: root-priority 254
  vpc domain 1 :: role priority 2

See QSG :: FabricPath for more details …



Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration
Configure Aggregation VDC :: Layer 3 Infrastructure

Allocate the following accordingly ::
 IP addressing
 OSPF areas
 SVIs & HSRP Groups

feature ospf
feature interface-vlan
feature hsrp

vlan 10-20, 99

interface loopback0
  ip address [….]/32

router ospf 1
  router-id [….]
  log-adjacency-changes detail
  auto-cost reference-bandwidth 100Gbps

interface e1/1
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

interface e1/10
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

interface vlan 10
  ip address 10.10.10.2/24    [second switch :: 10.10.10.3/24]
  no ip redirects
  ip router ospf 1 area 0.0.0.0
  ip ospf passive-interface
  hsrp 1
    preempt
    priority 110              [first switch only]
    ip 10.10.10.1



Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration
Configure OTV :: Layer 2 & Layer 3 Infrastructure @ Aggregation

Step 1 :: configure L3 link towards OTV Join Interface
Step 2 :: configure L2 vPC towards OTV Internal Interface

Apply on both aggregation switches (with unique IP addressing):

feature ospf
feature lacp
feature vpc

interface e1/2
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

interface port-channel 10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10, 99
  vpc 10

interface port-channel 20
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10, 99
  vpc 20

interface e5/1
  channel-group 10 force mode active

interface e6/1
  channel-group 20 force mode active

The OTV internal interfaces carry the VLANs to be extended and the OTV site VLAN (used within the data center to
provide multi-homing). They behave as regular Layer 2 switch port trunk interfaces; in fact, they send, receive, and
process Spanning Tree Protocol BPDUs as they would on a regular LAN bridge device.
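
Once the aggregation side is in place, the vPC toward the OTV VDC can be verified with standard NX-OS show commands (output formatting varies by release):

show vpc                      [peer-link and vPC 10 / 20 status]
show port-channel summary     [member ports bundled via LACP]
show spanning-tree vlan 10    [VLAN 10 forwarding on the trunks]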



Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration
Configure OTV :: Layer 2 & Layer 3 Infrastructure @ OTV VDC

Step 1 :: configure OTV Join Interfaces
Step 2 :: configure OTV Internal Interfaces
Step 3 :: create vlan to extend

Apply on both OTV VDCs (with unique IP addressing):

feature ospf
feature lacp

vlan 10

spanning-tree vlan 10 priority 32768

interface loopback0
  ip address [….]/32

router ospf 1
  router-id [….]
  log-adjacency-changes detail
  auto-cost reference-bandwidth 100Gbps

interface e1/9
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

interface port-channel 10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10

interface e2/1, e2/2
  channel-group 10 force mode active



Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration
Configure OTV :: Enable Jumbo MTU

Step 1 :: increase MTU on Join Interfaces
Step 2 :: increase MTU on all Layer 3 Interfaces

Aggregation VDC ::

feature ospf
feature lacp
feature vpc

vlan 10-20, 99

interface e1/2
  mtu 9216
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

interface e1/10
  mtu 9216
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

interface e1/1
  mtu 9216
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

OTV VDC ::

feature ospf
feature lacp
feature vpc

vlan 10

interface e1/9
  mtu 9216
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point
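
The jumbo MTU can be spot-checked between the two OTV VDCs. The ping keywords shown are standard NX-OS options, though the exact semantics of packet-size relative to headers vary by platform; the goal is simply to confirm that frames of at least 1542 bytes cross the transport with DF set:

show interface ethernet 1/9                       [confirm MTU 9216 on the join interface]
ping [remote join IP] packet-size 1542 df-bit     [verify OTV-sized packets pass unfragmented]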



Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration
Configure OTV :: Enable Required Multicast

Step 1 :: enable PIM
Step 2 :: configure PIM sparse mode [AGG VDC] (on all intra & inter data center Layer 3 links)
Step 3 :: configure IGMP v3 [AGG & OTV VDC] (join interfaces only)
Step 4 :: configure Rendezvous Point (RP)
Step 5 :: configure Source-Specific Multicast (SSM)

Aggregation VDC ::

feature ospf
feature lacp
feature vpc
feature pim

ip pim rp-address [x.x.x.x] group-list 224.0.0.0/4
ip pim ssm range 232.0.0.0/8

interface e1/1
  mtu 9216
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point
  ip pim sparse-mode

interface e1/2
  mtu 9216
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point
  ip pim sparse-mode
  ip igmp version 3

interface e1/10
  mtu 9216
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point
  ip pim sparse-mode

OTV VDC ::

interface e1/9
  mtu 9216
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point
  ip igmp version 3
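
The multicast control plane can then be spot-checked on the aggregation VDCs; these are standard NX-OS show commands:

show ip pim neighbor     [PIM adjacencies on intra- and inter-DC Layer 3 links]
show ip mroute           [multicast state, including the OTV control group]
show ip igmp groups      [IGMPv3 membership on the join-interface path]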



OTV Configuration
Finish OTV Configuration :: Overlay, Site-ID, Site-VLAN

OTV Characteristics
 2-wide 7k Aggregation VDC
 Multi-homed OTV VDC
 Multicast enabled transport
 Extend VLAN 10

Step 1 :: enable OTV feature
Step 2 :: configure site-vlan
Step 3 :: enable site-vlan on L2 trunks (make vlan active)
Step 4 :: configure site-identifier
Step 5 :: configure OTV Overlay Interface

WEST DC uses Site ID 1 (0000.0000.0001); EAST DC uses Site ID 2 (0000.0000.0002); both sites use Site VLAN 99.

feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0001    [EAST DC :: 0000.0000.0002]

interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 10

interface port-channel 10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10, 99

interface e2/1, e2/2
  channel-group 10 force mode active
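
With the overlay configured at both sites, adjacency and VLAN forwarding can be verified. Treat the annotations as an illustrative checklist, since output fields vary by NX-OS release:

show otv overlay 1     [overlay state, join interface, control/data groups]
show otv adjacency     [remote OTV edge devices discovered over the transport]
show otv site          [site-id, site VLAN, and local neighbor edge devices]
show otv vlan          [extended VLANs and the authoritative (AED) device]
show otv route         [MAC routes learned over the overlay]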



OTV Configuration
NOTES

 The Layer 2 links are known as internal interfaces and are used by the OTV edge device to learn the
MAC addresses of the site and forward Layer 2 traffic across the sites for the extended VLANs.
 The Layer 3 link is known as the join interface, which OTV uses to perform IP-based virtualization to
send and receive overlay traffic between sites. The IP address of this interface is used to advertise
reachability of the MAC addresses present in the site. There is one join interface per OTV overlay;
however, if multiple Layer 3 interfaces are present on the OTV edge device, the unicast extended traffic
can get routed over any of these links.
 OTV encapsulates packets into an IP header and sets the Don't Fragment (DF) bit for all OTV
control and data packets crossing the transport network. The encapsulation adds 42 bytes to the
original IP maximum transmission unit (MTU) size. It is therefore a best practice to configure the join
interface and all Layer 3 interfaces that face the IP core between the OTV edge devices with the
maximum MTU size supported by the transport.
 OTV uses the site VLAN to allow multiple OTV edge devices within the site to talk to each other and
determine the AED for the OTV-extended VLANs. It is a best practice to use a dedicated VLAN as the
site VLAN. The site VLAN should not be extended, and should be carried down to the aggregation layer
and across the vPC peer link.

• The OTV edge device is also configured with the overlay interface, which is associated with the join
interface to provide connectivity to the physical transport network. The overlay interface is used by OTV to
send and receive Layer 2 frames encapsulated in IP packets. From the perspective of MAC-based
forwarding on the site, the overlay interface is simply another bridged interface. However, no Spanning
Tree Protocol packets or unknown unicast packets are forwarded over the overlay interface.
• Note: The overlay interface does not come up until you configure a multicast group address and the
site VLAN has at least one active port on the device.
• A VLAN is not advertised on the overlay network; therefore, forwarding cannot occur over the overlay
network unless the VLANs are explicitly extended. Once a VLAN is extended, the OTV edge device begins
advertising locally learned MAC addresses on the overlay network.
• When sites are multihomed with OTV edge devices, separation is achieved by electing an authoritative
edge device (AED) for each VLAN within the same site (site ID); the AED is the only device that can
forward traffic for the extended VLAN inside and outside the data center. The extended VLANs are split
into odd and even and automatically assigned to the site's edge devices.
• The multicast control group identifies the overlay; two different overlays must use two different multicast
control groups. The control group is used for neighbor discovery and to exchange MAC address
reachability. The data group, however, is an SSM (Source Specific Multicast) group range, which is used
to carry multicast data traffic generated by the sites.
• A key advantage of using multicast is that it allows optimal multicast traffic replication to multiple sites
and avoids the head-end replication that leads to suboptimal bandwidth utilization.
• In the aggregation layer, Protocol Independent Multicast (PIM) is configured on all intra- and
inter-data-center Layer 3 links to allow multicast state to be built in the core network.
• Since PIM sparse mode requires a rendezvous point (RP) to build a multicast tree, one of the
aggregation switches in each data center is used as an RP. A local RP allows both local sources and
receivers to join the local RP rather than having to go to a different data center to reach an RP in order to
build a shared tree. For more information about the MSDP and Anycast RP features of multicast,
visit: http://www.cisco.com/en/US/docs/ios/solutions_docs/ip_multicast/White_papers/anycast.html
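The odd/even VLAN split across a dual-homed site's edge devices can be illustrated with a toy model. This is a simplification for intuition only: the real AED election orders the site's edge devices by IS-IS system ID, and the ordinal-modulo rule sketched here is an assumption, not NX-OS internals.

```python
def assign_aed(vlans, edge_devices):
    """Sketch of the AED VLAN split for a multihomed OTV site.

    edge_devices is the site's edge-device list in election order
    (the ordering rule is assumed for illustration). Each extended
    VLAN is owned by exactly one AED: vlan % N picks the ordinal.
    """
    n = len(edge_devices)
    return {vlan: edge_devices[vlan % n] for vlan in vlans}

# Two edge devices in the same site: even VLANs land on the first
# device, odd VLANs on the second.
aeds = assign_aed(range(10, 16), ["ED-1", "ED-2"])
```

Only the AED for a VLAN forwards that VLAN's traffic in and out of the site, which is what prevents loops and duplicate frames without running STP across the overlay.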



OTV Configuration
OTV Configuration :: Unicast-Only Mode

Primary Adjacency Server :: Join Interface [x]
Secondary Adjacency Server :: Join Interface [y]

Step 1 :: enable OTV
Step 2 :: configure site-vlan, site-id, Overlay Interface
Step 3 :: define role of adjacency server [primary]
Step 4 :: define role of adjacency server [secondary]
Step 5 :: define all other edge devices as clients
Assume :: enable site-vlan on L2 trunks (make vlan active)

Primary Adjacency Server (Site 1)

feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
  otv join-interface ethernet 1/9
  otv adjacency-server unicast-only
  otv extend-vlan 10

interface ethernet 1/9
  mtu 9216
  ip address [x]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

Secondary Adjacency Server (Site 2)

feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0002

interface Overlay 1
  otv join-interface ethernet 1/9
  otv adjacency-server unicast-only
  otv use-adjacency-server [x] unicast-only
  otv extend-vlan 10

interface ethernet 1/9
  mtu 9216
  ip address [y]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

Client Edge Device (Site 1)

feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
  otv join-interface ethernet 1/9
  otv use-adjacency-server [x] [y] unicast-only
  otv extend-vlan 10

interface ethernet 1/9
  mtu 9216
  ip address [w]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

Client Edge Device (Site 2)

feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0002

interface Overlay 1
  otv join-interface ethernet 1/9
  otv use-adjacency-server [x] [y] unicast-only
  otv extend-vlan 10

interface ethernet 1/9
  mtu 9216
  ip address [z]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

Internal Layer 2 trunk (all edge devices; makes the site VLAN active)

interface port-channel 10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10, 99



NOTES
OTV Configuration
OTV Configuration :: Unicast-Only Mode

Primary Adjacency Server :: Join Interface [x]
Secondary Adjacency Server :: Join Interface [y]

Two pieces of configuration are required to deploy OTV across a unicast-only transport infrastructure.
First, the role of Adjacency Server must be defined. Second, each OTV edge device not acting as an
Adjacency Server (i.e. acting as a client) must be configured with the address of the Adjacency Server. All
other adjacency addresses are discovered dynamically. Thereby, when a new site is added, only the OTV
edge devices for the new site need to be configured with the Adjacency Server addresses; no other sites
need additional configuration.

The recommendation is usually to deploy a redundant pair of Adjacency Servers in separate DC sites.

The configuration on the Primary Adjacency Server is very simple and limited to enabling the AS
functionality (otv adjacency-server command). The same command is also required on the Secondary
Adjacency Server device, which additionally needs to point to the Primary AS (leveraging the
otv use-adjacency-server command). Finally, each generic OTV edge device must be configured to use
both the Primary and Secondary Adjacency Servers. The sequence of adjacency server addresses in the
configuration determines the primary or secondary adjacency server role. This order is relevant since an
OTV edge device will always use the OTV neighbor-list (oNL) provided by the Primary Adjacency Server,
unless it detects that that device is no longer available (control-plane Hellos are always exchanged as
keepalives between each OTV device and the Adjacency Servers).
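The client-side fallback rule can be sketched as a small state machine. This is a toy model under stated assumptions (the class, method names, and the 30-second hold time are illustrative, not NX-OS internals): the client keeps the neighbor list from each Adjacency Server and prefers the Primary's oNL until its Hellos time out.

```python
class AdjacencyClient:
    """Toy model of the oNL selection rule described above."""
    HOLD_TIME = 30.0  # assumed hold time in seconds

    def __init__(self):
        self.onl = {"primary": [], "secondary": []}
        self.last_hello = {"primary": float("-inf"), "secondary": float("-inf")}

    def receive_hello(self, role, neighbor_list, now):
        # A Hello doubles as a keepalive and refreshes that server's oNL.
        self.onl[role] = list(neighbor_list)
        self.last_hello[role] = now

    def neighbors(self, now):
        # Prefer the Primary's list unless its Hellos have timed out.
        if now - self.last_hello["primary"] <= self.HOLD_TIME:
            return self.onl["primary"]
        return self.onl["secondary"]

client = AdjacencyClient()
client.receive_hello("primary", ["10.1.1.1", "10.2.2.2"], now=0.0)
client.receive_hello("secondary", ["10.1.1.1"], now=0.0)
```

While the Primary is alive the client forwards using its full neighbor list; once the Primary is declared dead, it falls back to the Secondary's list without reconfiguration.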



OTV Configuration
OTV Configuration :: HSRP Filtering

Step 1 :: create OTV HSRP access-lists (VACL)
Step 2 :: create OTV HSRP localization filters
  Filter out HSRP v1 and v2
  Filter out Gratuitous ARP
Step 3 :: create route-map to prevent advertisements of HSRP VMACs

• IP ACLs to drop HSRP Hellos and forward other traffic
• MAC ACLs to drop non-IP HSRP traffic and forward other traffic
• Create the VACL and apply the VACL to each extended VLAN
• Feature dhcp is required for ARP inspection; create the ARP access-list to deny traffic from the
  virtual MAC and apply the ARP ACL to each extended VLAN
• Create the mac-list to deny advertising of the virtual MAC, create the route-map, and apply the
  route-map to each overlay

OTV VDC filtering configuration

feature otv

ip access-list ALL_IPs
  10 permit ip any any
mac access-list ALL_MACs
  10 permit any any
ip access-list HSRP_IP
  10 permit udp any 224.0.0.2/32 eq 1985
  20 permit udp any 224.0.0.102/32 eq 1985
mac access-list HSRP_VMAC
  10 permit 0000.0c07.ac00 0000.0000.00ff any
  20 permit 0000.0c9f.f000 0000.0000.0fff any
arp access-list HSRP_VMAC_ARP
  10 deny ip any mac 0000.0c07.ac00 ffff.ffff.ff00
  20 deny ip any mac 0000.0c9f.f000 ffff.ffff.f000
  30 permit ip any mac any

feature dhcp
ip arp inspection filter HSRP_VMAC_ARP vlan 10
mac packet-classify

vlan access-map HSRP_Localization 10
  match mac address HSRP_VMAC
  match ip address HSRP_IP
  action drop
vlan access-map HSRP_Localization 20
  match mac address ALL_MACs
  match ip address ALL_IPs
  action forward

vlan filter HSRP_Localization vlan-list 10

mac-list OTV_HSRP_VMAC_deny seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
mac-list OTV_HSRP_VMAC_deny seq 11 deny 0000.0c9f.f000 ffff.ffff.f000
mac-list OTV_HSRP_VMAC_deny seq 20 permit 0000.0000.0000 0000.0000.0000

route-map OTV_HSRP_filter permit 10
  match mac-list OTV_HSRP_VMAC_deny

OTV overlay configuration

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 10

interface port-channel 10
  description ** OTV Internal Interface **
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10, 99

otv-isis default
  vpn Overlay1
    redistribute filter route-map OTV_HSRP_filter

The filtering of FHRP messages across the overlay is a critical functionality to enable, because it allows
applying the same FHRP configuration in different sites. The end result is that the same default gateway is
available (i.e. characterized by the same virtual IP and virtual MAC addresses) in each data center. This
capability optimizes the outbound traffic flows (server-to-client direction), but it does not address
controlling and improving the ingress traffic (client-to-server direction), as that traffic will continue to go via
the original DC; solutions to this challenge include DNS-based redirection, route injection, or LISP.

The VLAN ACL is required to identify the traffic that needs to be filtered. This configuration applies to the
HSRP v1 and v2 protocols. After applying the configuration on the OTV VDC to the set of VLANs that are
trunked from the Agg VDC to the OTV VDC, all HSRP messages will be dropped once received by the
OTV VDC. It is also required to apply a specific filter to ensure suppression of the Gratuitous ARP (GARP)
messages that may be received across the OTV overlay from the remote sites.

Even though HSRP traffic is filtered via the VACL, the vMAC used to source the HSRP packets is still
learned by the OTV VDC. Therefore, OTV advertises this MAC address information to the other sites via
an IS-IS update. While this in itself does no harm, it would cause the remote OTV edge devices to see
constant MAC moves happening for the vMAC (from the internal interface to the overlay interface and
vice versa).
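The HSRP_VMAC entries match the well-known HSRP v1 (0000.0c07.acXX) and v2 (0000.0c9f.fXXX) virtual MAC ranges using a wildcard over the group bits. As an illustrative sketch of how that wildcard match works (the helper names are hypothetical, not NX-OS internals):

```python
def mac_to_int(mac: str) -> int:
    # "0000.0c07.ac05" -> integer value of the 48-bit address
    return int(mac.replace(".", ""), 16)

def matches(mac: str, base: str, wildcard: str) -> bool:
    """Cisco-style wildcard match: bits set in the wildcard are
    "don't care" (0000.0000.00ff ignores the HSRP group octet)."""
    care = ~mac_to_int(wildcard) & 0xFFFFFFFFFFFF
    return (mac_to_int(mac) & care) == (mac_to_int(base) & care)

# HSRP v1 vMAC for standby group 5 falls inside the filtered range:
matches("0000.0c07.ac05", "0000.0c07.ac00", "0000.0000.00ff")   # True
# An ordinary server MAC does not:
matches("0050.56ab.cdef", "0000.0c07.ac00", "0000.0000.00ff")   # False
```

This is why a single ACL entry per HSRP version covers every standby group configured in the extended VLANs.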



Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration
OTV Configuration :: Authentication

OTV supports authentication of Hello messages along with authentication of Protocol Data Units (PDUs).

Step 1 :: configure OTV key chain
Step 2 :: apply md5 authentication to OTV Hellos
Step 3 :: apply md5 authentication to OTV PDUs

Apply the same configuration at both sites; only the otv site-identifier differs per site (for example,
0000.0000.0001 at West and 0000.0000.0002 at East):

feature otv

vlan 10, 99

key chain OTV-Key
  key 1
    key-string 0 Cisc0!

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 10
  otv isis authentication-type md5
  otv isis authentication key-chain OTV-Key

otv-isis default
  vpn Overlay1
    authentication-check
    authentication-type md5
    authentication key-chain OTV-Key


OTV Configuration
OTV VLAN Translation :: Translation Through Transit VLAN

Step 1 :: configure vlan mapping

• When a different VLAN is used at multiple sites
• A mapped VLAN cannot be extended on another site
• VLAN mappings have a one-to-one relationship
• VLAN mappings can be added or removed without impacting all mappings on the overlay interface

Site 1 (VLAN 10 mapped to transit VLAN 100)

feature otv

vlan 10, 99, 100

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 10
  otv vlan mapping 10 to 100

Site 2 (VLAN 20 mapped to transit VLAN 100)

feature otv

vlan 20, 99, 100

otv site-vlan 99
otv site-identifier 0000.0000.0002

interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 20
  otv vlan mapping 20 to 100

Diagram: VLAN 10 (Site 1) <-> transit VLAN 100 (overlay) <-> VLAN 20 (Site 2)



Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration
OTV Configuration :: Dedicated Broadcast Group

• "otv broadcast-group" configuration line under the overlay interface
• Optional command
• Useful for QoS purposes
• The broadcast group needs to be configured on all OTV edge devices connected to the OTV
  overlay network

Step 1 :: configure broadcast group

Apply at both sites (each site keeps its own unique otv site-identifier):

feature otv

vlan 10, 99

interface loopback 10
  ip address [….]/32

otv site-vlan 99
otv site-identifier 0000.0000.0001

interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv broadcast-group 224.2.2.0
  otv extend-vlan 10



OTV Configuration
OTV Configuration :: Selective Unicast Flooding

Step 1 :: configure static OTV flood [enabled per mac address]

Site 1 (flooding enabled for one MAC)

feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0001

otv flood mac 1111.2222.0101 vlan 10

interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 10

Site 2

feature otv

vlan 10, 99

otv site-vlan 99
otv site-identifier 0000.0000.0002

interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 10

Diagram: an unknown unicast frame from MAC 1 toward MAC 2 crossing the overlay.

Normally, unknown unicast Layer 2 frames are not flooded between OTV sites, and MAC addresses are not learned across the overlay interface. Any unknown
unicast frames that reach the OTV edge device are blocked from crossing the logical overlay, allowing OTV to prevent Layer 2 faults from spreading to remote
sites.

The end points connected to the network are assumed not to be silent or unidirectional. However, some data center applications require unknown unicast traffic to
be flooded over the overlay to all the data centers, where end points may be silent. Beginning with Cisco NX-OS Release 6.2(2), you can configure selective
unicast flooding so that unknown unicast traffic for a specified destination MAC address is flooded to all other edge devices in the OTV overlay network.
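The per-MAC exception described above can be sketched as follows. This is a toy model, not NX-OS code; the set simply mirrors what the otv flood mac command whitelists:

```python
# MAC addresses whitelisted by "otv flood mac <mac> vlan <vlan>"
flood_macs = {(10, "1111.2222.0101")}

def handle_unknown_unicast(vlan: int, dmac: str) -> str:
    """Default OTV behavior drops unknown unicast at the edge device;
    a static flood entry overrides that for silent hosts."""
    if (vlan, dmac) in flood_macs:
        return "flood-over-overlay"
    return "drop"

handle_unknown_unicast(10, "1111.2222.0101")  # 'flood-over-overlay'
handle_unknown_unicast(10, "aaaa.bbbb.cccc")  # 'drop'
```

Everything not explicitly listed keeps the safe default, so the fault-isolation property of OTV is preserved for all other unknown unicasts.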



OTV Configuration
Strong Recommendations and Key Notes
• OTV encapsulation is done on M-series modules.

• Note: The control-plane protocol used by OTV is IS-IS. However, IS-IS does not need to be explicitly configured; it runs in
the background once OTV is enabled.

• In a multi-tenancy environment, the same OTV VDC can be configured with multiple overlays to provide a segmented
Layer 2 extension for different tenants or applications.

• When multiple data center sites are interconnected, the OTV operations can benefit from the presence of multicast in the
core. Multicast is not mandatory in most OTV topologies (depending on the number of sites), since unicast mode can be used as well.

• The same OTV VDCs can be used by multiple VDCs deployed at the aggregation tier, as well as by other Layer 2
switches connected to the OTV VDCs. This is done by configuring multiple OTV overlays. It is important to note that the
extended VLANs within these multiple overlays must not overlap.

• A separate Layer 3 link between the two aggregation VDCs should be configured, as per best practices, to carry any Layer
3 traffic between them.

• The overlay interface will not come up until you configure a multicast group address and the site VLAN has at least one
active port on the OTV edge device.

• Support for loopback interfaces as OTV join interfaces is planned for 6.2(2) and later code releases.



OTV Configuration
Strong Recommendations and Key Notes
• FHRP Filtering Note: It is important to stress that this outbound path (server-to-client) optimization should be
deployed in conjunction with an equivalent one optimizing inbound (client-to-server) flows to avoid asymmetric traffic
behavior (this would be highly undesirable, especially in deployments leveraging stateful services across data centers).
White paper discussing inbound traffic optimization solutions ::
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DCI/4.0/EMC/EMC.pdf

• It is important to note that OTV support requires the use of the Transport Services (TRS) license. Depending on the
specifics of the OTV deployment, the Advanced License may be required as well to provide Virtual Device Context
(VDC) support.

• Before configuring OTV, you should review and implement Cisco recommended STP best practices at each site. OTV is
independent of STP, but it greatly benefits from a stable and robust Layer 2 topology.

• If the data centers are OTV multihomed, it is a recommended best practice to bring the overlay up in single-homed
configuration first, by enabling OTV on a single edge device at each site. After the OTV connection has been tested as
single-homed, enable the functionality on the other edge devices of each site.

• OTV currently enforces switch virtual interface (SVI) separation for the VLANs being extended across the OTV link,
meaning that OTV is usually in its own VDC. With the VDC license on the Cisco Nexus 7000 you have the flexibility to
have SVIs in other VDCs and a dedicated VDC for OTV functions.

• Configure the join interface and all Layer 3 interfaces that face the IP core between the OTV edge devices with the
highest maximum transmission unit (MTU) size supported by the IP core. OTV sets the Don't Fragment (DF) bit in the IP
header of all OTV control and data packets, so the core cannot fragment these packets.



OTV Configuration
Strong Recommendations and Key Notes
• With NX-OS 6.1 and earlier, only one join interface can be specified per overlay; two methods are available:
  - Configure a single join interface, which is shared across multiple overlays
  - Configure a different join interface for each overlay, which increases OTV reliability

• For higher resiliency, you can use a port channel, but it is not mandatory. There are no requirements for 1 Gigabit
Ethernet versus 10 Gigabit Ethernet or dedicated versus shared mode.

• The transport network must support PIM sparse mode (ASM) or PIM-Bidir multicast traffic.

• OTV is compatible with a transport network configured only for IPv4. IPv6 is not supported.

• Do not enable PIM on the join interface.

• Do not configure OTV on an F-series module.

• Ensure the site identifier is configured and is the same for all edge devices in a site. OTV brings down all overlays and
generates a system message when a mismatched site identifier is detected from a neighbor edge device.

• Mixing Nexus 7000 and ASR 1000 devices for OTV within the same site is not supported at this time. However, using
Cisco Nexus 7000s in one site and Cisco ASR 1000s at another site for OTV is fully supported. For this scenario, keep
the separate scalability numbers for the two different devices in mind, because you will have to account for the lowest
common denominator.

• Starting in NX-OS 5.2, the site-id command was introduced as a way to harden multihoming for OTV. It is a configurable
option that must be the same for devices within the same data center and different between any devices that are in
different data centers. It specifies which site a particular OTV device is in, so that two OTV devices in different sites cannot
join each other as a multihomed site. This command is now mandatory.



OTV Configuration
Strong Recommendations and Key Notes
• Using Virtual Port Channels (vPCs) and OTV together provides an extra layer of resiliency and is thus recommended as a
best practice.

• OTV & FabricPath: Because OTV encapsulation is done on M-series modules, OTV cannot read FabricPath packets.
Because of this restriction, it is necessary to terminate FabricPath and revert to Classical Ethernet where the OTV VDC
resides. In addition, when running FabricPath in your network, we highly recommend using the spanning-tree
domain <id> command on all devices that are participating in these VLANs; this command greatly speeds up
convergence times.



OTV Configuration
OTV Encapsulation :: MAC in IP

• OTV adds 42 bytes of overhead to the packet IP MTU size:
  (Outer IP Header + OTV Shim Header) + (Original L2 Header without the 802.1Q header)
• The 802.1Q header is removed and its VLAN field is copied over to the OTV shim header
• The OTV shim header carries the VLAN ID and the overlay ID number, and is preceded by an external IP header
• Consider jumbo MTU sizing along the path between the source and destination endpoints to account for the extra 42
  bytes

Original L2 frame:
  DMAC (6B) | SMAC (6B) | 802.1Q (4B) | EtherType (2B) | Payload | CRC (4B)

OTV-encapsulated frame (802.1Q header removed; VLAN ID and overlay ID carried in the shim):
  Outer IP Header (20B) | OTV Shim Header (8B) | DMAC + SMAC + EtherType (14B*) | Payload | CRC (4B)

20B + 8B + 14B* = 42 bytes of total overhead
* The 4 bytes of the 802.1Q header have already been removed
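Assuming a standard 1500-byte server-facing IP MTU, the overhead arithmetic above works out as follows (a worked sketch, not device output):

```python
OUTER_IP_HDR = 20   # outer IPv4 header added by OTV
OTV_SHIM = 8        # shim carrying the VLAN ID and overlay ID
INNER_L2_HDR = 14   # original DMAC + SMAC + EtherType (802.1Q tag stripped)

OTV_OVERHEAD = OUTER_IP_HDR + OTV_SHIM + INNER_L2_HDR   # 42 bytes

def required_core_mtu(site_ip_mtu: int = 1500) -> int:
    """Minimum IP MTU the transport must carry without fragmenting:
    OTV sets the DF bit, so the core cannot fragment these packets."""
    return site_ip_mtu + OTV_OVERHEAD

required_core_mtu()      # 1542
required_core_mtu(9000)  # 9042
```

This is why the join interface examples in this guide set mtu 9216: a jumbo core MTU comfortably absorbs the 42-byte encapsulation for any site-side frame size.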



OTV Configuration
How OTV Works :: Inter-Site Packet Flow (OTV Data Plane)
MAC Table (WEST DC edge device)        MAC Table (EAST DC edge device)
VLAN  MAC    IF                        VLAN  MAC    IF
10    MAC 1  Eth 1                     10    MAC 1  IP A
10    MAC 2  IP B                      10    MAC 2  Eth 2
10    MAC 3  Eth 3                     10    MAC 3  IP A

Frame flow: the Layer 2 lookup in each DC resolves MAC 1 -> MAC 2; the frame is encapsulated at WEST
(outer header IP A -> IP B), carried across the transport, and decapsulated at EAST.

Assumption :: New MACs were learned in the VLANs that are OTV-extended on the internal interfaces; an OTV update message was sent, replicated across the
transport, and delivered to all remote OTV edge devices; the MACs learned through OTV were then imported into the MAC address tables of the OTV edge devices.
Step 1 :: The Layer 2 frame is received at the aggregation layer/OTV edge device. A traditional Layer 2 lookup is performed; the MAC table entry for Host B does
not point to a local Ethernet interface (as it would for intra-site communication) but to the IP address of the remote OTV edge device that advertised that MAC's
reachability information.
Step 2 :: The OTV edge device encapsulates the original Layer 2 frame; the source IP of the outer header is that of the local join interface, and the destination IP
is that of the remote edge device's join interface.
Step 3 :: The OTV-encapsulated frame (a regular unicast IP packet) is carried across the transport infrastructure and delivered to the remote OTV edge device.
Step 4 :: The remote OTV edge device decapsulates the frame, exposing the original Layer 2 frame.
Step 5 :: The OTV edge device performs another Layer 2 lookup on the original Ethernet frame and discovers that the destination is reachable through a physical
interface, which means it is a MAC address local to the site.
Step 6 :: The frame is then delivered to the MAC destination, Host B.
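Steps 1 and 2 amount to a MAC-table lookup whose result type selects bridging versus encapsulation. A minimal sketch of that decision (table contents taken from the WEST DC example above; names are illustrative, not NX-OS internals):

```python
# WEST DC edge device's MAC table from the example above
mac_table = {
    (10, "MAC 1"): ("local", "Eth 1"),
    (10, "MAC 2"): ("remote", "IP B"),
    (10, "MAC 3"): ("local", "Eth 3"),
}

def forward(vlan, dmac, frame, local_join_ip="IP A"):
    entry = mac_table.get((vlan, dmac))
    if entry is None:
        return ("drop", None)              # unknown unicast is not flooded over OTV
    kind, target = entry
    if kind == "local":
        return ("bridge", target)          # intra-site: classic Ethernet forwarding
    # inter-site: MAC-in-IP encapsulation toward the remote join interface
    return ("encap", {"src": local_join_ip, "dst": target, "inner": frame})

forward(10, "MAC 2", "frame from MAC 1")
# ('encap', {'src': 'IP A', 'dst': 'IP B', 'inner': 'frame from MAC 1'})
```

The symmetry of the two tables is the whole data plane: the same lookup at EAST returns a local interface, so the decapsulated frame is bridged to Host B.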



OTV Configuration
Additional Resources & Further Reading

External (public)

OTV Best Practices Guide
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/guide_c07-728315.pdf

OTV Technology Introduction and Deployment Considerations
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DCI/whitepaper/DCI_1.html

Using OTV to Extend Layer 2 between Two Data Centers
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-644634.html

Nexus 7000 NX-OS OTV Configuration Guides
http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/OTV/config_guide/b_Cisco_Nexus_7000_Series_NX-OS_OTV_Configuration_Guide.html

Cisco Nexus 7000 NX-OS Verified Scalability Guide (OTV Limits)
http://www.cisco.com/en/US/docs/switches/datacenter/sw/verified_scalability/b_Cisco_Nexus_7000_Series_NX-OS_Verified_Scalability_Guide.html#reference_18192F87114B45D9A40A41A0DEF3F74D

Cisco Live 365 (sign up & search the session catalog for OTV)
https://ciscolive365.com/
BRKDCT-3103 :: Advanced OTV – Configure, Verify and Troubleshoot OTV in Your Network; Andy Gossett (CSE)



OTV Configuration
Additional Resources & Further Reading

Quick Start Guide :: Virtual Port Channel (vPC)
https://communities.cisco.com/docs/DOC-35728

Quick Start Guide :: FabricPath
https://communities.cisco.com/docs/DOC-35725



