OTV is a "MAC address in IP" technique for supporting Layer 2 VPNs to extend LANs over any transport. The transport can
be Layer 2 based, Layer 3 based, IP switched, label switched, or anything else as long as it can carry IP packets. By using
the principles of MAC routing, OTV provides an overlay that enables Layer 2 connectivity between separate Layer 2
domains while keeping these domains independent and preserving the fault-isolation, resiliency, and load-balancing benefits
of an IP-based interconnection.
• No need for Ethernet over Multiprotocol Label Switching (EoMPLS) or Virtual Private LAN Services (VPLS)
deployment for Layer 2 extensions
• Provision of Layer 2 and Layer 3 connectivity using the same dark fiber connections
• Address Resolution Protocol (ARP) optimization with the OTV ARP cache
[Diagram: OTV deployment models ("OTV on a Stick" vs. inline OTV). An OTV edge device (M-Series line cards only) performs the OTV functions; its point-to-point Layer 3 join interface connects to the transport (multicast or unicast transports supported), and its OTV overlay interface extends Layer 2 between WEST DC (Site ID 1) and EAST DC (Site ID 2).]
Authoritative Edge The AED is responsible for MAC address advertisement for its VLANs and for forwarding traffic for those
Device (AED) VLANs inside and outside the site. In OTV multi-homing, the extended VLANs are split across the AEDs (even & odd).
Site VLAN The OTV site VLAN is used to discover OTV neighbor edge devices in the same local site.
Site Identifier Edge devices in the same site must use a common, site-unique Site ID. The Site ID is carried in the control
plane; an overlay will not come up until a Site ID is configured, and it must match on all local OTV edge devices.
MTU Join interfaces and neighboring core interfaces need an MTU of ≥ 1542 (hard requirement). Best
practice is to use the maximum MTU size supported by the transport.
FHRP Isolation Filtering FHRP messages across the OTV overlay allows the same active default gateway to be provided in
each data center site. Note: in future releases OTV will offer a simple command to enable these filtering
capabilities.
SVI Separation OTV currently enforces SVI separation for the VLANs being extended across the OTV link, meaning OTV
usually lives in its own VDC dedicated to OTV functions, with the SVIs in a separate aggregation VDC.
Multicast Transport Multicast transport (OTV control plane) is ideal for connecting a larger number of sites. OTV neighbor
relationships are built over a multicast-enabled core / transport infrastructure. All OTV edge devices are
configured to join a specific ASM (Any Source Multicast) group, in which they simultaneously play the
role of receiver and source. Adjacencies are maintained over that multicast group, and a single update
reaches all neighbors.
Unicast Transport Supported since NX-OS release 5.2. Unicast-only transport (OTV control plane) is ideal for connecting
a small number of sites, and requires an adjacency server. Each OTV device must create multiple
copies of each control-plane packet and unicast them to every remote OTV device that is part of the
same logical overlay.
Adjacency Server Used in OTV unicast mode; usually enabled on an OTV edge device; can have a primary and a
secondary; all other OTV edge (client) devices are configured with the address of the adjacency
server. To communicate with all remote OTV devices, each OTV node needs the list of neighbors to
which it must replicate control packets. Rather than statically configuring that list on every OTV
node, the adjacency server provides this information dynamically.
OTV Extend VLAN Enables OTV advertisements for those VLANs. OTV will not forward Layer 2 packets for VLANs not in
the extended VLAN range for the overlay interface. Assign a VLAN to only one overlay interface.
OTV Authentication OTV supports authentication of Hello messages along with authentication of PDUs.
Dual Homed OTV Edge Leverage vPC or vPC+ for dual-homed OTV edge devices. The AED role, together with the site
Devices VLAN, enables multi-homing of OTV edge devices.
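Until the simple filtering command mentioned in the FHRP Isolation row above is available, FHRP isolation is typically built by hand from two pieces: a VACL in the aggregation VDC that drops HSRP hellos on the extended VLANs, and a MAC-list filter in the OTV VDC that keeps the HSRP virtual MACs out of the overlay advertisements. The sketch below is illustrative only (HSRPv1/v2, extended VLAN 10; the ACL and route-map names are made up, and the exact vMAC ranges for your FHRP flavor should be verified against the OTV best practices guide):

```
! Aggregation VDC: drop HSRP hellos so they never reach the OTV VDC
ip access-list ALL_IPs
  10 permit ip any any
ip access-list HSRP_IP
  10 permit udp any 224.0.0.2/32 eq 1985
  20 permit udp any 224.0.0.102/32 eq 1985
vlan access-map HSRP_Localization 10
  match ip address HSRP_IP
  action drop
vlan access-map HSRP_Localization 20
  match ip address ALL_IPs
  action forward
vlan filter HSRP_Localization vlan-list 10

! OTV VDC: filter the HSRP virtual MACs from OTV IS-IS advertisements
mac-list OTV_HSRP_VMAC_deny seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
mac-list OTV_HSRP_VMAC_deny seq 20 deny 0000.0c9f.f000 ffff.ffff.f000
mac-list OTV_HSRP_VMAC_deny seq 30 permit 0000.0000.0000 0000.0000.0000
route-map OTV_HSRP_filter permit 10
  match mac-list OTV_HSRP_VMAC_deny
otv-isis default
  vpn Overlay1
    redistribute filter route-map OTV_HSRP_filter
```

Applying both pieces at every site lets each data center keep its own local active gateway for the same FHRP group.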
• OTV VDC must use only M-Series ports for both internal and join interfaces
[M1-48, M1-32, M1-08, M2-Series]
• OTV VDC types (M-only)
• Aggregation VDC types (M-only, M1-F1, or F2/F2E)
OTV Configuration
Example scenario: 2-wide 7k aggregation VDC; multi-homed OTV VDC; multicast-enabled transport; extend VLAN 10; OTV site VLAN 99.
Quick Start Guide Assumptions
• Layer 3 routed point-to-point interfaces; OSPF is used as the routing protocol.
• Layer 2 interfaces; the aggregation VDC connects to the OTV VDC through vPC.
• Verify that the Nexus 7000 has the proper licenses to support OTV and VDCs.
Step 1 :: install | validate licenses (OTV requires the Transport Services license; VDC requires the Advanced Services license)
install license bootflash:///lan_advanced_services_pkg.lic
install license bootflash:///lan_transport_services_pkg.lic
Step 2 :: create aggregation VDC
Step 3 :: create OTV VDC
The OTV internal interfaces carry the VLANs to be extended and the OTV site VLAN (used within the data center to
provide multi-homing). They behave as regular Layer 2 switch port trunk interfaces; in fact, they send, receive, and
process the Spanning Tree Protocol BPDUs as they would on a regular LAN bridge device.
Step 1 :: configure OTV join interfaces
Step 2 :: configure OTV internal interfaces
Step 3 :: create VLAN to extend

On each device (the configuration is identical on both sides):
vlan 10
interface e2/1, e2/2
  channel-group 10 force mode active
On each OTV VDC, create both the VLAN to extend and the OTV site VLAN:
vlan 10, 99
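The internal interfaces themselves are ordinary Layer 2 trunks carrying the extended VLAN and the site VLAN. A minimal sketch, assuming the port-channel and VLAN numbers from the example above:

```
interface port-channel 10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10, 99
```

Because these trunks process STP BPDUs like any LAN bridge port, normal STP best practices apply to them.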
• The OTV edge device is also configured with the overlay interface, which is associated with the join interface to provide connectivity to the physical transport network. The overlay interface is used by OTV to send and receive Layer 2 frames encapsulated in IP packets. From the perspective of MAC-based forwarding on the site, the overlay interface is simply another bridged interface. However, no Spanning Tree Protocol packets or unknown unicast packets are forwarded over the overlay interface.
• Note: The overlay interface does not come up until you configure a multicast group address and the site VLAN has at least one active port on the device.
• A VLAN is not advertised on the overlay network; therefore, forwarding cannot occur over the overlay network unless the VLANs are explicitly extended. Once a VLAN is extended, the OTV edge device begins advertising locally learned MAC addresses on the overlay network.
• When sites are multihomed with OTV edge devices, separation is achieved by electing an authoritative edge device (AED) for each VLAN in the same site (site-id); the AED is the only device that can forward traffic for the extended VLAN inside and outside the data center. The extended VLANs are split into odd and even and automatically assigned to the site's edge devices.
• The multicast control group identifies the overlay; two different overlays must have two different multicast control groups. The control group is used for neighbor discovery and to exchange MAC address reachability. The data group, however, is an SSM (Source Specific Multicast) group range, which is used to carry multicast data traffic generated by the sites.
• A key advantage of using multicast is that it allows optimal multicast traffic replication to multiple sites and avoids the head-end replication that leads to suboptimal bandwidth utilization.
• In the aggregation layer, Protocol Independent Multicast (PIM) is configured on all intra- and inter-data-center Layer 3 links to allow multicast state to be built in the core network.
• Since PIM sparse mode requires a rendezvous point (RP) to build a multicast tree, one of the aggregation switches in each data center is used as an RP. A local RP allows both local sources and receivers to join the local RP rather than having to go to a different data center to reach an RP in order to build a shared tree. For more information about the MSDP and Anycast RP features of multicast, visit: http://www.cisco.com/en/US/docs/ios/solutions_docs/ip_multicast/White_papers/anycast.html
Two pieces of configuration are required to deploy OTV across a unicast-only transport infrastructure: first, the role of Adjacency Server must be defined; the other piece of configuration is required on each OTV edge device not acting as an Adjacency Server (i.e. acting as a client). All client OTV edge devices are configured with the address of the Adjacency Server. All other adjacency addresses are discovered dynamically. Thereby, when a new site is added, only the OTV edge devices for the new site need to be configured with the Adjacency Server addresses; no other sites need additional configuration. The recommendation is usually to deploy a redundant pair of Adjacency Servers in separate DC sites.

Client edge device configuration (unicast mode):

feature otv
vlan 10, 99
otv site-vlan 99
otv site-identifier 0000.0000.0001
interface Overlay 1
  otv join-interface ethernet 1/9
  otv use-adjacency-server [x] [y] unicast-only
  otv extend-vlan 10
interface e 1/9
  mtu 9216
  ip address [w] / 30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

The configuration on the Primary Adjacency Server is very simple and limited to enabling AS functionality (the otv adjacency-server command). The same command is also required on the Secondary Adjacency Server device, which additionally needs to point to the Primary AS (leveraging the otv use-adjacency-server command). Finally, a generic OTV edge device must be configured to use both the Primary and Secondary Adjacency Servers. The sequence of adjacency server addresses in the configuration determines the primary or secondary adjacency server role. This order is relevant since an OTV edge device will always use the OTV neighbor-list (oNL) provided by the Primary Adjacency Server, unless it detects that that specific device is no longer available (control-plane Hellos are always exchanged as keepalives between each OTV device and the Adjacency Servers).
VLAN translation (OTV VLAN mapping) is used when a different VLAN is deployed at each site. A mapped VLAN cannot also be extended to another site, VLAN mappings have a one-to-one relationship, and mappings can be added or removed without impacting the other mappings on the overlay interface. In the example below, local VLAN 10 at one site and local VLAN 20 at the other are both mapped to transport VLAN 100:

Site 1 edge device:
interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 10
  otv vlan mapping 10 to 100

Site 2 edge device:
interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 20
  otv vlan mapping 20 to 100
Unknown Unicast
[Diagram: an unknown unicast frame (MAC 1 to MAC 2) in VLAN 10 is not forwarded across the overlay.]
Normally, unknown unicast Layer 2 frames are not flooded between OTV sites, and MAC addresses are not learned across the overlay interface. Any unknown
unicast messages that reach the OTV edge device are blocked from crossing the logical overlay, allowing OTV to prevent Layer 2 faults from spreading to remote
sites.
The end points connected to the network are assumed to not be silent or unidirectional. However, some data center applications require the unknown unicast traffic to
be flooded over the overlay to all the data centers, where end points may be silent. Beginning with Cisco NX-OS Release 6.2(2), you can configure selective unicast
flooding to flood the specified destination MAC address to all other edge devices in the OTV overlay network with that unknown unicast traffic.
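On releases that support it, selective unicast flooding is enabled per destination MAC per VLAN. The command takes roughly the following form (the MAC address and VLAN number here are illustrative; verify the exact syntax against the NX-OS OTV configuration guide for your release):

```
otv flood mac 0000.1234.5678 vlan 10
```

This should be scoped to the few silent or unidirectional hosts that need it, since it reopens a flooding path that OTV otherwise blocks.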
Note: The control-plane protocol used by OTV is IS-IS. However, IS-IS does not need to be explicitly configured. It runs in
the background once OTV is enabled.
In a multi-tenancy environment, the same OTV VDC can be configured with multiple overlays to provide a segmented
Layer 2 extension for different tenants or applications.
When multiple data center sites are interconnected, OTV operations can benefit from the presence of multicast in the
core. However, multicast is not mandatory in most OTV topologies, since unicast mode can be used as well (depending on the number of sites).
The same OTV VDCs can be used by multiple VDCs deployed at the aggregation tier, as well as by other Layer 2
switches connected to the OTV VDCs. This is done by configuring multiple OTV overlays. It's important to note that the
extended VLANs within these multiple overlays should not overlap.
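A sketch of two overlays on the same OTV VDC serving two tenants, with non-overlapping extended VLANs and distinct control and data groups (all interface numbers, groups, and VLAN ranges are illustrative; verify that sharing a join interface across overlays is supported on your release):

```
interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/26
  otv extend-vlan 10-19

interface Overlay 2
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.2
  otv data-group 232.1.2.0/26
  otv extend-vlan 20-29
```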
A separate Layer 3 link between the two aggregation VDCs should be configured as per best practices to carry any Layer
3 traffic between them.
The overlay interface will not come up until you configure a multicast group address and the site-VLAN has at least an
active port on the OTV edge device.
Support for loopback interfaces as OTV Join interfaces is planned for 6.2(2) and later code releases.
It is important to note that OTV support requires the new Transport Services (TRS) license. Depending on the
specifics of the OTV deployment, the Advanced Services license may be required as well to provide Virtual Device
Context (VDC) support.
Before configuring OTV you should review and implement Cisco recommended STP best practices at each site. OTV is
independent from STP but it greatly benefits from a stable and robust Layer 2 topology.
If the data centers are OTV multi-homed, it is a recommended best practice to bring the overlay up in a single-homed
configuration first, by enabling OTV on a single edge device at each site. After the OTV connection has been tested
single-homed, enable the functionality on the other edge devices at each site.
OTV currently enforces switch-virtual-interface (SVI) separation for the VLANs being extended across the OTV link,
meaning that OTV is usually in its own VDC. With the VDC license on the Cisco Nexus 7000 you have the flexibility to
have SVIs in other VDCs and have a dedicated VDC for OTV functions.
Configure the join interface and all Layer 3 interfaces that face the IP core between the OTV edge devices with the
highest maximum transmission unit (MTU) size supported by the IP core. OTV sets the Don't Fragment (DF) bit in the IP
header for all OTV control and data packets so the core cannot fragment these packets.
For higher resiliency, you can use a port channel, but it is not mandatory. There are no requirements for 1 Gigabit
Ethernet versus 10 Gigabit Ethernet, or dedicated versus shared mode.
The transport network must support PIM sparse mode (ASM) or PIM-Bidir multicast traffic.
OTV is compatible with a transport network configured only for IPv4. IPv6 is not supported.
Ensure the site identifier is configured and is the same for all edge devices on a site. OTV brings down all overlays when
a mismatched site identifier is detected from a neighbor edge device and generates a system message.
Mixing the Nexus 7000 and the ASR 1000 devices for OTV is not supported at this time when the devices will be placed
within the same site. However, using Cisco Nexus 7000s in one site and Cisco ASR 1000s at another site for OTV is fully
supported. For this scenario, please keep the separate scalability numbers in mind for the two different devices, because
you will have to account for the lowest common denominator.
Starting in NX-OS 5.2, the site-id command was introduced as a way to harden multihoming for OTV. It is a configurable
option that must be the same for devices within the same data center and different between any devices that are in
different data centers. It specifies which site a particular OTV device is in so that two OTV devices in different sites cannot
join each other as a multihomed site. This command is now mandatory.
OTV & FabricPath: Because OTV encapsulation is done on M-series modules, OTV cannot read FabricPath packets.
Because of this restriction, terminating FabricPath and reverting to Classical Ethernet where the OTV VDC resides is
necessary. In addition, when running FabricPath in your network, we highly recommend that you use the spanning-tree
domain <id> command on all devices that are participating in these VLANs. This command speeds up convergence times
greatly.
OTV Encapsulation Frame Format

| DMAC (6B) | SMAC (6B) | 802.1Q | EtherType (2B) | IP Header (20B) | OTV Shim Header (8B: VLAN ID, Overlay #) | Original L2 Frame (14B* header + payload) | CRC (4B) |

The outer Ethernet header, IP header, and OTV shim add 42 bytes of overhead to the original frame (14 + 20 + 8 = 42), which is why join and core interfaces need an MTU of at least 1542 bytes.
[Diagram: unicast forwarding from WEST DC to EAST DC. The original frame (MAC 1 to MAC 2) is encapsulated by the WEST edge device into an IP packet (IP A to IP B), carried across the transport, and decapsulated by the EAST edge device back into the original frame (MAC 1 to MAC 2).]
Assumption :: New MACs were learned in the VLANs that are OTV-extended on the internal interfaces; an OTV update message was sent, replicated across the
transport, and delivered to all remote OTV edge devices; the MACs learned through OTV were then imported into the MAC address tables of the remote OTV edge devices.
Step 1 :: The Layer 2 frame is received at the aggregation layer or OTV edge device. A traditional Layer 2 lookup is performed; the MAC table entry for Host B
does not point to a local Ethernet interface (as it would for intra-site communication) but to the IP address of the remote OTV edge device that
advertised that MAC's reachability information.
Step 2 :: The OTV edge device encapsulates the original Layer 2 frame; the source IP of the outer header is the address of the local join interface, and the
destination IP is the address of the remote edge device's join interface.
Step 3 :: The OTV encapsulated frame (a regular unicast IP packet) is carried across the transport infrastructure and delivered to the remote OTV Edge Device.
Step 4 :: The remote OTV Edge Device decapsulates the frame exposing the original Layer 2 packet.
Step 5 :: The OTV Edge Device performs another Layer 2 lookup on the original Ethernet frame and discovers that it is reachable through a physical interface, which
means it is a MAC address local to the site.
Step 6 :: The frame is then delivered to the MAC destination of Host B.
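The state behind these steps can be checked with the standard OTV show commands (output formats vary by NX-OS release):

```
show otv overlay 1        ! overlay state, join interface, control/data groups
show otv adjacency        ! neighbor edge devices discovered over the overlay
show otv site             ! local site state and AED election
show otv vlan             ! which extended VLANs this device is AED for
show otv route            ! MAC routes: local interface vs. remote overlay next hop
show mac address-table    ! remote MACs point at the overlay interface
```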
Resources
• OTV Best Practices Guide (external): http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/guide_c07-728315.pdf
• Cisco Live 365 (sign up & search the session catalog for OTV): https://ciscolive365.com/
• BRKDCT-3103 :: Advanced OTV – Configure, Verify and Troubleshoot OTV in Your Network; Andy Gossett (CSE)