Data Center Virtualization (DCV) Lab Guide
Peter Phan, Systems Engineer, Cisco
pephan@cisco.com
September 26, 2011
© 2011 Cisco
Page 1 of 217
LAB GUIDE

CABLING INFORMATION .............................................................. 13
ENABLE FEATURES .................................................................. 34
POWER ON THE ESX HOSTS AND VERIFY THE NEXUS INTERFACES ........................... 61
SUCCESSFUL VMOTION ACROSS SITES DUE TO L2 CONNECTIVITY WITH OTV ................. 154
SUMMARY ......................................................................... 168
REFERENCES ...................................................................... 215
Important
Prior to configuration, be sure to obtain the latest version of this document at http://db.tt/LI79cwH.
Welcome to the Cisco Data Center Virtualization Lab. This lab is intended to provide you with a solid
understanding of what you need to implement a wide range of solution features.
The lab tasks are designed to focus on achieving:
The FlexPod demonstration should go beyond the topics of interest to the technical decision maker (TDM) and
should appeal to the business decision maker (BDM) by focusing on the benefits that this solution provides.
The Quick Reference Guide section provides general positioning and primary marketing messages, as well as a
guide to which demonstrations will work together to show the benefits for a particular person in the workplace.
As always, you will want to tailor your sales presentation to address specific audience needs or issues.
Demonstration Script Style
The demonstration scripts are organized by task; they include important marketing messages as well as product
and feature overviews and demonstration instructions. Using the Quick Reference Guide, you will be able to
quickly tailor demonstrations for different customers, while communicating the benefits of each one to facilitate
product sales.
Industry trends indicate a vast data center transformation toward shared infrastructures. Enterprise customers
are moving away from silos of information toward shared infrastructures, then to virtualized environments, and
eventually to the cloud, in order to increase agility and reduce costs.
The Cisco Data Center Virtualization lab is built on the Cisco Unified Computing System (Cisco UCS), Cisco
Nexus data center switches, NetApp FAS storage components, and a range of software partners. This guide is
based on the design principle of the FlexPod Implementation Guide.
AUDIENCE
This document describes the basic architecture of FlexPod and also prescribes the procedure for deploying a
base Data Center Virtualization configuration. The intended audience of this document includes, but is not
limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and
customers who want to deploy the core Data Center Virtualization architecture.
The Data Center Virtualization architecture can easily be scaled as requirements and demand change. This
includes scaling both up (adding additional resources within a Data Center Virtualization unit) and out (adding
additional Data Center Virtualization units).
Data Center Virtualization includes NetApp storage, Cisco networking, Cisco Unified Computing System (Cisco
UCS), and virtualization software in which the computing and storage fit in one data center rack with the
networking residing in the same or separate rack. The networking components can accommodate multiple Data
Center Virtualization configurations. Figure 1 shows our lab components.
Our lab hardware includes:
Two Cisco UCS C200 M1 servers and one Cisco UCS C250 M1 server, powered by Intel Xeon processors
o Quantities and types might vary per lab
Your management tasks will be performed on an RDP server (VC_SERVER or MGMT_PC). You will access the
UCS, Nexus, and other devices via SSH and each device's element manager. The PuTTY SSH client is on the desktop.
Figure 2 - Lab Tools Interface
Here is a view of how all the Data Center Virtualization Pods are interconnected.
Figure 3 - Full Topology for Three Pods in a VDC Deployment
The following diagram illustrates how all the different networks/VLANs are interconnected. The router in the
center is connected to the Nexus 5000s via a port-channel trunk.
Figure 4 - Logical Topology of Lab
The following section provides detailed information on configuring all aspects of a base FlexPod environment.
The Data Center Virtualization architecture is flexible; therefore, the exact configuration detailed in this section
might vary for customer implementations depending on specific requirements. Although customer
implementations might deviate from the information that follows, the best practices, features, and
configurations listed in this section should still be used as a reference for building a customized Data Center
Virtualization architecture.
Device           Management IP   Username   Password
N5K-1            10.1.111.1      admin      1234Qwer
N5K-2            10.1.111.2      admin      1234Qwer
N7K-1-OTV-XA     10.1.111.3      admin      1234Qwer
N7K-2-OTV-XB     10.1.111.4      admin      1234Qwer
MDS              10.1.111.40     admin      1234Qwer
CIMC-ESX1        10.1.111.161    admin      1234Qwer
CIMC-ESX2        10.1.111.162    admin      1234Qwer
CIMC-ESX3        10.1.111.163    admin      1234Qwer
Fabric Manager   -               admin      1234Qwer
Device Manager   -               admin      1234Qwer
Host   Management IP   Username   Password   vMotion       NFS
ESX1   10.1.111.21     root       1234Qwer   10.1.151.21   10.1.211.21
ESX2   10.1.111.22     root       1234Qwer   10.1.151.22   10.1.211.22
ESX3   10.1.111.23     root       1234Qwer   10.1.151.23   10.1.211.23
Server        Role            Management IP
VCENTER-1     vCenter, VSC    10.1.111.100
vsm-1         N1KV VSM        10.1.111.17
AD            AD, DNS, DHCP   10.1.111.10
Server01      XenDesktop      10.1.111.11
Server02      XenApp          10.1.111.12
Server03      PVS             10.1.111.13
WIN7POC
WIN7STREAM
WIN7MASTER
Server 2003

Username        Password
administrator   1234Qwer
admin           1234Qwer
VLAN   Description
111    MGMT
131    VMTRAFFIC
151    VMOTION
171    CTRL-PKT
211    NFS
999    Native VLAN
1005
1011
1012

VSAN   Description
11     Fabric A VSAN
12     Fabric B VSAN
The FlexPod Implementation Guide assumes that out-of-band management ports are plugged into an
existing management infrastructure at the deployment site.
Be sure to follow the cabling directions in this section. Failure to do so will require changes to the
deployment procedures that follow, because specific port locations are referenced.
Pod     Device     Local Ports   Device          Access Ports
POD X   N5K-1      e1/4          MGMT Switch     1/23
POD X   N5K-1      e1/7          FEX A           port1
POD X   N5K-1      e1/8          FEX A           port2
POD X   N5K-1      e1/9          ESX1            vmnic0
POD X   N5K-1      e1/10         ESX2            vmnic0
POD X   N5K-1      e1/11         ESX3            vmnic4
POD X   N5K-1      e1/17         N5K-2           e1/17
POD X   N5K-1      e1/18         N5K-2           e1/18
POD 1   N5K-1      e1/19         N7K-1           e1/14
POD 1   N5K-1      e1/20         N7K-2           e1/14
POD 2   N5K-1      e1/19         N7K-1           e1/22
POD 2   N5K-1      e1/20         N7K-2           e1/22
POD 3   N5K-1      e1/19         N7K-1           e1/30
POD 3   N5K-1      e1/20         N7K-2           e1/30
POD X   N5K-1      m0            MGMT Switch     e1/7
POD X   N5K-2      e1/4          3750            1/24
POD X   N5K-2      e1/7          FEX B           port1
POD X   N5K-2      e1/8          FEX B           port2
POD X   N5K-2      e1/9          ESX1            vmnic1
POD X   N5K-2      e1/10         ESX2            vmnic1
POD X   N5K-2      e1/11         ESX3            vmnic5
POD X   N5K-2      e1/17         N5K-1           e1/17
POD X   N5K-2      e1/18         N5K-1           e1/18
POD 1   N5K-2      e1/19         N7K-1           e1/16
POD 1   N5K-2      e1/20         N7K-2           e1/16
POD 2   N5K-2      e1/19         N7K-1           e1/24
POD 2   N5K-2      e1/20         N7K-2           e1/24
POD 3   N5K-2      e1/19         N7K-1           e1/32
POD 3   N5K-2      e1/20         N7K-2           e1/32
POD X   N5K-2      m0            MGMT Switch     e1/8
POD X   NetApp-A   bmc           MGMT Switch     e1/12
POD X   NetApp-A   e0a           MGMT Switch     e1/13
POD X   NetApp-A   e0b           MGMT Switch     e1/14

Pod     Device   Device   Access Ports
POD X   ESX1     N5K-1    e1/9
POD X   ESX1     N5K-2    e1/9
POD X   ESX1     FEX A    e1/1
POD X   ESX1     FEX B    e1/1
POD X   ESX1     3750     1/1
POD X   ESX2     N5K-1    e1/10
POD X   ESX2     N5K-2    e1/10
POD X   ESX2     FEX A    e1/2
POD X   ESX2     FEX B    e1/2
POD X   ESX2     3750     1/3
POD X   ESX3     FEX A    e1/3
POD X   ESX3     FEX B    e1/3
POD X   ESX3     N5K-1    e1/11
POD X   ESX3     N5K-2    e1/11
POD X   ESX3     3750     1/5
Nexus 1010 A&B Ethernet cabling information. Note: This requires two 1GbE copper SFPs (GLC-T=) on the
N5K side.
Pod     Device        Local Ports   Device          Access Ports
POD X   MGMT Switch   1/0/1         ESX1            CIMC
POD X   MGMT Switch   1/0/2         ESX1            vmnic
POD X   MGMT Switch   1/0/3         ESX2            CIMC
POD X   MGMT Switch   1/0/4         ESX2            vmnic
POD X   MGMT Switch   1/0/5         ESX3            CIMC
POD X   MGMT Switch   1/0/6         ESX3            vmnic
POD X   MGMT Switch   1/0/7         N5K-1           m0
POD X   MGMT Switch   1/0/8         N5K-2           m0
POD X   MGMT Switch   1/0/9         MDS9124         m0
POD X   MGMT Switch   1/0/10        VC Server RDC   -
POD X   MGMT Switch   1/0/11        VC Server       -
POD X   MGMT Switch   1/0/12        NTAP            bmc
POD X   MGMT Switch   1/0/13        NTAP            e0a
POD X   MGMT Switch   1/0/14        NTAP            e0b
POD 1   MGMT Switch   1/0/15        FlexMGMT        1/37
POD 1   MGMT Switch   1/0/16        FlexMGMT        1/38
POD 1   MGMT Switch   1/0/17        N7K-1           3/24
POD 1   MGMT Switch   1/0/18        N7K-2           3/24
POD 2   MGMT Switch   1/0/15        FlexMGMT        1/39
POD 2   MGMT Switch   1/0/16        FlexMGMT        1/40
POD 2   MGMT Switch   1/0/17        N7K-1           3/36
POD 2   MGMT Switch   1/0/18        N7K-2           3/36
POD 3   MGMT Switch   1/0/15        FlexMGMT        1/41
POD 3   MGMT Switch   1/0/16        FlexMGMT        1/42
POD 3   MGMT Switch   1/0/17        N7K-1           3/48
POD 3   MGMT Switch   1/0/18        N7K-2           3/48
POD 4   MGMT Switch   1/0/15        FlexMGMT        1/43
POD 4   MGMT Switch   1/0/16        FlexMGMT        1/44
POD X   MGMT Switch   1/0/23        N5K-1           e1/4
POD X   MGMT Switch   1/0/24        N5K-2           e1/4
Pod     Device     Local Ports   Device     Access Ports
POD 1   N5K-1      fc2/3         MDS9124    fc1/1
POD 1   N5K-1      fc2/4         MDS9124    fc1/2
POD 1   N5K-2      fc2/3         MDS9124    fc1/3
POD 1   N5K-2      fc2/4         MDS9124    fc1/4
POD 1   NetApp-A   0a            MDS9124    fc1/5
POD 1   NetApp-A   0b            MDS9124    fc1/6
POD 1   MDS9124    fc1/1         N5K-1      fc2/3
POD 1   MDS9124    fc1/2         N5K-1      fc2/4
POD 1   MDS9124    fc1/3         N5K-2      fc2/3
POD 1   MDS9124    fc1/4         N5K-2      fc2/4
POD 1   MDS9124    fc1/5         NetApp A   0a
POD 1   MDS9124    fc1/6         NetApp A   0b
Variable Name                       Customized Value   Description
NetApp deduplication license code
Customized Value   Description
211                NFS VLAN ID (subnet 10.1.211.0/24)
111                Management VLAN ID
151                vMotion VLAN ID (subnet 10.1.151.0/24)
171                Control/Packet VLAN ID
999                Native VLAN ID
131                VM traffic VLAN ID
1234Qwer           Default password
10.1.111.10        AD/DNS server IP address
dcvlabs.lab        DNS domain name
11                 Fabric A VSAN ID
12                 Fabric B VSAN ID
1011
1012
US                 Country
CA                 State
San Jose           City
Cisco              Organization
WWPO
80.84.57.23
Customized Value   Description
NTAP1-A            NetApp FAS2020 A hostname
Incomplete
Incomplete
Incomplete
Incomplete
Incomplete
10.1.111.151       NetApp FAS2020 A management IP address
255.255.255.0      NetApp FAS2020 A management netmask
10.1.111.254       NetApp FAS2020 A management gateway
10.1.111.100       NetApp FAS2020 A administration host IP address
Nevada
Incomplete
Incomplete
Incomplete
pephan@cisco.com   NetApp FAS2020 A administrator's e-mail address
10.1.211.151       NetApp FAS2020 A infrastructure vFiler IP address
10.1.111.10        NetApp FAS2020 A infrastructure vFiler administration host IP address
Customized Value   Description
20g
Customized Value   Description
N5K-1              Nexus 5010 A hostname
N5K-2              Nexus 5010 B hostname
10.1.111.1         N5K-1 management IP address
10.1.111.2         N5K-2 management IP address
255.255.255.0      N5K-1 management netmask
255.255.255.0      N5K-2 management netmask
10.1.111.254       N5K-1 management gateway
10.1.111.254       N5K-2 management gateway
10                 vPC domain ID

Customized Value   Description
vsm-1              Nexus 1000V VSM hostname
10.1.111.17        VSM management IP address
255.255.255.0      VSM management netmask
10.1.111.254       VSM management gateway
11
Customized Value   Description
ESX1               ESX server 1 hostname
10.1.111.21        ESX1 management IP address
255.255.255.0      ESX1 management netmask
10.1.111.254       ESX1 management gateway
10.1.211.21        ESX1 NFS IP address
255.255.255.0      ESX1 NFS netmask
10.1.151.21        ESX1 vMotion IP address
255.255.255.0      ESX1 vMotion netmask
ESX2               ESX server 2 hostname
10.1.111.22        ESX2 management IP address
255.255.255.0      ESX2 management netmask
10.1.111.254       ESX2 management gateway
10.1.211.22        ESX2 NFS IP address
255.255.255.0      ESX2 NFS netmask
10.1.151.22        ESX2 vMotion IP address
255.255.255.0      ESX2 vMotion netmask
n/a
n/a
VCSERVER           vCenter server hostname
10.1.111.100       vCenter server management IP address
The following section provides a detailed procedure for configuring the Cisco Nexus 5010 switches for use in a
DCV environment. Complete this lab exercise to learn how to configure Virtual Port Channeling (vPC), Fibre
Channel over Ethernet (FCoE), and Fabric Extender (FEX Nexus 2000) using the NX-OS command line
interface.
Note:
The Data Center Virtualization labs start up with completed configurations for vPC, FCoE, and FEX.
Sections 3 - 5 provide you with the opportunity to build these configurations from the ground up.
If you just want to test or demo other features, such as OTV or the Nexus 1000V, proceed to
Section 6.
EXERCISE OBJECTIVE
In this exercise you will use the NX-OS CLI to configure vPC and FEX in a Dual Homed Fabric Extender vPC
Topology. After completing these exercises you will be able to meet these objectives:
COMMAND LIST
The commands used in this exercise are described in the table below.
Table 15 - Commands

Command                                            Description
config term                                        Enters global configuration mode
ping                                               Tests IP reachability to a destination
interface fc1/3                                    Enters interface configuration mode for Fibre Channel port fc1/3
show module                                        Displays information about the installed modules
copy tftp://x.x.x.x/filename bootflash:/filename   Copies a file from a TFTP server to bootflash
load bootflash:/filename                           Loads the system file (filename) from the bootflash: when booting from the loader prompt
show file volatile                                 Examines the contents of the configuration file in the volatile file system
del file volatile                                  Deletes the file from the volatile file system
dir volatile                                       Displays the volatile file system to confirm the action
exit                                               Exits one level in the menu structure. If you are in EXEC mode, this command logs you off the system
end                                                Exits configuration mode to EXEC mode
shut                                               Disables an interface
no shut                                            Enables an interface
copy running-config startup-config                 Saves the running configuration as the startup configuration
copy running-config tftp://ip_address/path         Saves the running configuration to a TFTP server
copy tftp                                          Copies the system file from the TFTP server to the local bootflash
load bootflash                                     Loads the system file from bootflash
show fcns database                                 Shows the FCNS database
dir [volatile: | bootflash:]                       Displays the contents of the specified memory area
show file name                                     Displays the contents of the specified file
del name                                           Deletes the specified file
Table 16 - Commands
Double-click on the tftpd32 or tftpd64 icon on the desktop. The default directory is c:\tftp:
JOB AIDS
Nexus 5000 CLI Configuration Guide
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfiguratio
nGuide.html
Cisco Nexus 5000 Series Switches - Virtual PortChannel Quick Configuration Guide
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/configuration_guide_c07-543563.html
Cisco Nexus 5000 Series NX-OS Software Configuration Guide - Configuring Virtual Interfaces
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli_rel_4_0_1a/VirtIntf.html
Cisco Nexus 5000 Series Switch Fabric Manager Software Configuration Guide
http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/fm/FabricManager.html
Cisco MDS 9000 Family CLI Quick Configuration Guide - Configuring VSANs and Interfaces
http://www.cisco.com/en/US/docs/storage/san_switches/mds9000/sw/san-os/quick/guide/qcg_vin.html
1.4
Upon initial boot and connection to the serial or console port of the switch, the NX-OS setup
should automatically start.
1.5
1.6
1.9
Upon initial boot and connection to the serial or console port of the switch, the NX-OS setup
should automatically start.
1.10
1.11
MANAGEMENT VRF
The default gateway is connected through the management interface. The management interface is by default
part of the management VRF. This particular VRF is part of the default configuration and the management
interface mgmt0 is the only interface allowed to be part of this VRF.
The purpose of the management VRF is to isolate management traffic from the rest of the traffic flowing
through the switch by confining it to its own forwarding table.
These are the steps for the exercise:
Verify that only the mgmt0 interface is part of the management VRF
Verify that the default gateway is reachable only using the management VRF
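The two verification steps above can be sketched with the following commands (a minimal sketch; the gateway address follows this lab's management subnet, and output is omitted):

```
! Verify that mgmt0 is the only member of the management VRF
show vrf interface

! Ping the default gateway from the default VRF (expected to fail)
ping 10.1.111.254

! Ping the default gateway via the management VRF (expected to succeed)
ping 10.1.111.254 vrf management
```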
Cisco Nexus 5010 A - N5K-1
Step 2 Verify that only the mgmt0 interface is part of the management VRF.
2.1
Log in to N5K-1
N5K-1 login: admin
Password: 1234Qwer
2.2
Step 3
3.1
VRF-Name     VRF-ID   State   Reason
management   2        Up      --
Verify that the default gateway is reachable only using the management VRF
Ping the default gateway using the default VRF.
--- 10.1.111.254 ping statistics ---
5 packets transmitted, 0 packets received, 100.00% packet loss
Note:
The ping fails because the default gateway is reachable only from the management interface, while
we used the default VRF.
3.2
3.3
Alternatively, we can set the routing context to the management VRF to allow Layer 3
access. This will also allow you to ping and use TFTP as needed in the following exercises.
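A minimal sketch of setting the routing context (the prompt change is how NX-OS indicates the active context):

```
N5K-1# routing-context vrf management
N5K-1%management# ping 10.1.111.254
N5K-1%management# routing-context vrf default
```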
3.4
time=3.664 ms
time=3.881 ms
time=4.074 ms
time=4.058 ms

--- 10.1.111.10 ping statistics ---
5 packets transmitted, 4 packets received, 20.00% packet loss
round-trip min/avg/max = 3.664/3.919/4.074 ms
3.5
5.1
Display all commands that begin with s, sh, and show. Press Enter or space to scroll through the
list of commands.
N5K-1# s?
N5K-1# sh?
N5K-1# show ?
5.2
5.3
5.4
Display the Ethernet and Fibre Channel modules of the Nexus 5010. This is where you'll find the
WWN range for the FC ports and the range of Ethernet MAC addresses for the 10 Gigabit Ethernet
ports. The first address (whether FC or Ethernet) is associated with port 1 of that transport type,
and subsequent ascending addresses are associated with the next ascending port numbers.
N5K-1# show module
Mod Ports  Module-Type                       Model               Status
--- -----  --------------------------------  ------------------  ----------
1   20     20x10GE/Supervisor                N5K-C5010P-BF-SUP   active *
2   8      4x10GE + 4x1/2/4G FC Module       N5K-M1404           ok

Mod  Sw            Hw
---  ------------  ------
1    5.0(2)N2(1)   1.2
2    5.0(2)N2(1)   1.0

Mod  World-Wide-Name(s) (WWN)
---  --------------------------------------------------
2    2f:6c:69:62:2f:6c:69:62 to 63:6f:72:65:2e:73:6f:00

Mod  MAC-Address(es)                       Serial-Num
---  ------------------------------------  ----------
1    0005.9b7a.03c8 to 0005.9b7a.03ef      JAF1413CEGC
2    0005.9b7a.03f0 to 0005.9b7a.03f7      JAF1409ASQD

N5K-1#
Abbreviate the syntax, then press the Tab key to complete each word; for example, type sh<tab> ru<tab>.
5.5
Display the status of the switch interfaces. Notice that only Ethernet interfaces are listed.
5.6
5.7
The fcoe feature must be activated before the Fibre Channel features can be used.
5.8
5.9
6.2
6.3
Access your desktop (username: Administrator, password: 1234Qwer) and start your TFTP server.
Save your running configuration to the TFTP server.
Note:
Be sure you start the TFTP/FTP server before attempting to save the configuration, or your copy will
fail. Please review Lab 0 - Lab Services for instructions on how to use the TFTP/FTP server.
Use a TFTP/FTP server in production networks to keep backup configurations and code releases for
each network device. Be sure to include these servers in your regular data center backup plans.
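As a hedged example, a backup to the lab's management server (10.1.111.100 hosts the TFTP service in this lab; the filename is illustrative) could look like this:

```
copy running-config startup-config
copy running-config tftp://10.1.111.100/N5K-1-backup.cfg vrf management
```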
! On both N5K-1 and N5K-2:
feature vpc
feature lacp
feature fcoe
feature npiv
feature fport-channel-trunk
feature fex
Type show feature and verify that the appropriate features are enabled.
N5K-1(config)# show feature | i enabled
assoc_mgr             1   enabled
fcoe                  1   enabled
fex                   1   enabled
fport-channel-trunk   1   enabled
lacp                  1   enabled
lldp                  1   enabled
npiv                  1   enabled
sshServer             1   enabled
vpc                   1   enabled
8.2
8.3
Create an access list to match Platinum traffic. The ACL matches traffic to and from the NFS VLAN (10.1.211.0/24).
ip access-list ACL_COS_5
10 permit ip 10.1.211.0/24 any
20 permit ip any 10.1.211.0/24
8.4
8.5
ip access-list ACL_COS_4
10 permit ip 10.1.151.0/24 any
20 permit ip any 10.1.151.0/24
8.6
8.7
Create a policy map that will be used for tagging incoming traffic.
8.8
Create a network-qos class map for Platinum traffic to be used in a Network QoS policy.
8.9
Create a network-qos Class Map for Silver traffic to be used in a Network QoS policy.
8.10
Create a network-qos policy map to be applied to the System QoS policy. Set Platinum class to CoS
value of 5 and to MTU of 9000. Set Silver class to CoS value of 4 and to MTU of 9000. Set Default
class to MTU of 9000.
!!! The following section will enable Jumbo Frames for all unclassified traffic.
class type network-qos class-default
mtu 9000
exit
8.11
Associate the policies to the system class policy map using service policies.
system qos
service-policy type qos input POL_CLASSIFY
service-policy type network-qos POL_SETUP_NQ
exit
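To confirm that the classification and network-qos policies took effect, the following verification commands can be used (a sketch; interface e1/9 is the ESX1-facing port from the cabling tables):

```
show policy-map system
show queuing interface ethernet 1/9
```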
8.12
Use the show run ipqos command to view the QoS configuration.
N5K-1(config)# show run ipqos
class-map type qos class-fcoe
class-map type qos match-all CLASS-SILVER
match access-group name ACL_COS_4
class-map type qos match-all CLASS-PLATINUM
match access-group name ACL_COS_5
class-map type queuing class-all-flood
match qos-group 2
class-map type queuing class-ip-multicast
match qos-group 2
policy-map type qos POL_CLASSIFY
class CLASS-PLATINUM
set qos-group 2
class CLASS-SILVER
set qos-group 4
class-map type network-qos CLASS-SILVER_NQ
match qos-group 4
class-map type network-qos class-all-flood
match qos-group 2
class-map type network-qos CLASS-PLATINUM_NQ
match qos-group 2
class-map type network-qos class-ip-multicast
match qos-group 2
policy-map type network-qos POL_SETUP_NQ
class type network-qos CLASS-PLATINUM_NQ
set cos 5
mtu 9000
class type network-qos CLASS-SILVER_NQ
set cos 4
mtu 9000
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos class-default
mtu 9000
system qos
service-policy type qos input POL_CLASSIFY
service-policy type network-qos POL_SETUP_NQ
9.3
Use the show vlan command to display the list of VLANs that have been created on the switch.
N5K-1(config-vlan)# show vlan

VLAN Name
---- -----------------
1    default
10   INFRA-MGMT-VLAN
110  MGMT
111  VMTRAFFIC-VLAN
151  VMOTION-VLAN
171  PKT-CTRL-VLAN
10.2
Router uplink.
interface Eth1/4
description To 3750:
10.3
FEX ports.
interface Eth1/7
description N2K-1:
interface Eth1/8
description N2K-1:
10.4
Server ports.
interface Eth1/9
description ESX1:vmnic0
interface Eth1/10
description ESX2:vmnic0
interface Eth1/11
description ESX3:vmnic4
10.5
interface Eth1/17
description N5K-2:Eth1/17
interface Eth1/18
description N5K-2:Eth1/18
10.6
OTV uplinks.
interface Eth1/19
description N7K-1:
interface Eth1/20
description N7K-2:
10.8
Router uplink.
interface Eth1/4
description To 3750
10.9
FEX ports.
interface Eth1/7
description N2K-2:
interface Eth1/8
description N2K-2:
10.10
Server ports.
interface Eth1/9
description ESX1:vmnic1
interface Eth1/10
description ESX2:vmnic1
interface Eth1/11
description ESX3:vmnic5
10.11
interface Eth1/17
description N5K-1:Eth1/17
interface Eth1/18
description N5K-1:Eth1/18
10.12
OTV uplinks.
interface Eth1/19
description N7K-1:
interface Eth1/20
description N7K-2:
Step 11 Use the show interface status command to print a list of ports and corresponding information,
including configured port descriptions.
11.1
Output from N5K-1:
N5K-1(config-if)# show interface status

--------------------------------------------------------------------------------
Port     Name             Status      Vlan   Duplex   Speed    Type
--------------------------------------------------------------------------------
Eth1/1   NTAP1-A:e2a      sfpAbsent   1      full     10G      10g
Eth1/2   NTAP1-B:e2a      sfpAbsent   1      full     10G      10g
Eth1/4   To 3750:         sfpInvali   1      full     1000     10g
Eth1/7   N2K-1:           connected   1      full     10G      10g
Eth1/8   N2K-1:           connected   1      full     10G      10g
Eth1/9   ESX1:vmnic0      connected   1      full     10G      10g
Eth1/10  ESX2:vmnic0      connected   1      full     10G      10g
Eth1/11  ESX3:vmnic4      connected   1      full     10G      10g
Eth1/17  N5K-2:Eth1/17    connected   1      full     10G      10g
Eth1/18  N5K-2:Eth1/18    connected   1      full     10G      10g
Eth1/19  N7K-1:           notconnec   1      full     10G      10g
Eth1/20  N7K-2:           notconnec   1      full     10G      10g

11.2
The corresponding output from N5K-2 shows the same ports, with descriptions referencing N5K-1.
12.2
interface Po11
description NTAP1-A
interface Eth1/1
channel-group 11 mode active
no shutdown
12.3
interface Po12
description NTAP1-B
interface Eth1/2
channel-group 12 mode active
no shutdown
12.4
Define port channel for servers. Add server host link to port-channel group.
For vPC and FCoE, we recommend setting the channel mode to on rather than active (LACP). This is
useful for operating systems, such as ESXi, that don't support port-channel negotiation.
interface Po13
description ESX1
interface Eth1/9
channel-group 13 mode on
no shutdown
interface Po14
description ESX2
interface Eth1/10
channel-group 14 mode on
no shutdown
interface Po15
description ESX3
interface Eth1/11
channel-group 15 mode on
no shutdown
12.5
interface Po20
description 3750
interface Eth1/4
channel-group 20 mode active
no shutdown
12.6
interface Po101
description FEX1
interface Eth1/7-8
channel-group 101 mode active
no shutdown
12.7
12.9
12.10
12.11
Verify that the correct individual ports have been added to the correct port-channel.
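One way to verify membership is the port-channel summary (a sketch; in the output, a P flag marks a member that is up in the port-channel):

```
show port-channel summary
show port-channel database
```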
Note:
Do not allow any VLANs that carry FCoE traffic on the vPC peer link.
13.2
Configure the port-channels for NetApp.
int Po11-12
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 111,211,171,151,131
spanning-tree port type edge trunk
no shut
13.3
Configure the port-channels for the ESX servers. They will allow VLANs 111, 211, 171, 151, and 131.
int Po13-15
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 111,211,171,151,131
spanning-tree port type edge trunk
no shut
13.4
Configure the port channel for the L3 switch. Our L3 switch has 1 Gigabit ports, so we set the speed to 1000.
interface Po20
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 111,211,171,151,131
speed 1000
no shutdown
13.5
13.7
int Po11-12
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 111,211,171,151,131
spanning-tree port type edge trunk
no shut
13.8
int Po13-15
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 111,211,171,151,131
spanning-tree port type edge trunk
no shut
13.9
Configure the port channel for the L3 switch. Our L3 switch has 1 Gigabit ports, so we set the speed to 1000.
interface Po20
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 111,211,171,151,131
speed 1000
no shutdown
13.10
Step 14 Use the show run interface <interface name> command to show the configuration for a given
interface or port-channel.
N5K-1(config-if-range)# sh run int po1,po11-15,po20
interface port-channel1
description vPC peer-link
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 111,131,151,171,211
spanning-tree port type network
interface port-channel11
description NTAP1-A
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 111,131,151,171,211
spanning-tree port type edge trunk
interface port-channel12
description NTAP1-B
switchport mode trunk
switchport trunk native vlan 999
switchport trunk allowed vlan 111,131,151,171,211
spanning-tree port type edge trunk
interface port-channel13
description ESX1
15.2
Configure the vPC role priority (optional): We will make N5K-1 the primary switch.
The switch with the lower priority will be elected as the vPC primary switch.
role priority 10
15.3
Configure the peer keepalive link. The management interface IP address for N5K-2 is 10.1.111.2 :
The system does not create the vPC peer link until you configure a vPC peer keepalive link.
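Putting steps 15.2 and 15.3 together, the vPC domain configuration on N5K-1 looks like this (values match this lab's addressing and the running configuration shown later in this guide):

```
vpc domain 10
  role priority 10
  peer-keepalive destination 10.1.111.2 source 10.1.111.1
```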
15.4
interface Po1
vpc peer-link
15.5
interface Po11
vpc 11
interface Po12
vpc 12
15.6
interface Po13
vpc 13
interface Po14
vpc 14
interface Po15
vpc 15
15.7
interface Po20
vpc 20
15.8
15.10
Configure the vPC role priority (optional): We will make N5K-1 the primary switch.
The switch with the lower priority will be elected as the vPC primary switch.
role priority 20
15.11
Configure the peer keepalive link. The management interface IP address for N5K-1 is 10.1.111.1 :
The system does not create the vPC peer link until you configure a vPC peer keepalive link.
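The corresponding vPC domain configuration on N5K-2 (same domain ID, higher role priority, keepalive pointed at N5K-1's management address):

```
vpc domain 10
  role priority 20
  peer-keepalive destination 10.1.111.1 source 10.1.111.2
```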
15.12
interface Po1
vpc peer-link
15.13
interface Po11
vpc 11
interface Po12
vpc 12
15.14
interface Po13
vpc 13
interface Po14
vpc 14
interface Po15
vpc 15
15.15
interface Po20
vpc 20
15.16
The following show commands are useful for verifying the vPC configuration.
Cisco Nexus 5010 A & B - N5K-1 & N5K-2
Step 16 Check the vPC role of each switch.
16.1
N5K-1 is the primary because we set its role priority number lower:
N5K-1(config)# show vpc role

vPC Role status
----------------------------------------------------
vPC role                       : primary
Dual Active Detection Status   : 0
vPC system-mac                 : 00:23:04:ee:be:0a
vPC system-priority            : 32667
vPC local system-mac           : 00:05:9b:7a:03:bc
vPC local role-priority        : 10
16.2
N5K-2 is the secondary because we set its role priority number higher:
17.2
Make sure the domain ID and role are correct, and that the peer status is "peer adjacency formed ok" and "peer is alive".

N5K-2(config)# show vpc brief
vPC domain id                     : 10
Peer status                       : peer adjacency formed ok
vPC keep-alive status             : peer is alive
Configuration consistency status  : success
Per-vlan consistency status       : success
Type-2 consistency status         : success
vPC role                          : secondary
Number of vPCs configured         : 6
Peer Gateway                      : Disabled
Graceful Consistency Check        : Enabled

N5K-2(config)# show vpc peer-keepalive
vPC keep-alive status             : peer is alive
--Peer is alive for               : (2158) seconds, (636) msec
--Destination                     : 10.1.111.1
17.4
View the running configuration specific to vPC:
Cisco Nexus 5010 A - N5K-1
N5K-1(config)# show running-config vpc
feature vpc
vpc domain 10
role priority 10
peer-keepalive destination 10.1.111.2 source 10.1.111.1
interface port-channel1
vpc peer-link
interface port-channel11
vpc 11
interface port-channel12
vpc 12
interface port-channel13
vpc 13
interface port-channel14
vpc 14
interface port-channel15
vpc 15
interface port-channel20
vpc 20
17.6
slot 100
provision model N2K-C2148T
17.7
Configure the fabric EtherChannel links for the Fabric Extender 100.
int po100
description single-homed FEX100
int e1/7-8
channel-group 100
int po100
switchport mode fex-fabric
fex associate 100
It may take several minutes for the Nexus 2000 to register with the Nexus 5000 switches. A syslog
notification will announce when the FEX is online.
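Once the syslog message appears, the FEX state can be checked with the following commands (a sketch; FEX 100 is the number associated in this step):

```
show fex
show fex 100 detail
```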
17.8
Configure the Nexus 2000 (FEX) Ethernet interfaces on N5K-1. The FEX interfaces will be used as
management ports for the ESXi servers. Ports Eth100/1/1-3 will be configured to trunk. We are not
going to put these ports into a channel group, so those lines are commented out. The port
channel configuration is also not necessary, but it is included in case we need to port-channel these
ports later.
int po113
description ESX1
switchport mode trunk
vpc 113
int po114
description ESX2
switchport mode trunk
vpc 114
int po115
description ESX3
switchport mode trunk
vpc 115
int ethernet 100/1/1
description ESX1 vmnic2
switchport mode trunk
! channel-group 113 force
int ethernet 100/1/2
description ESX2 vmnic2
switchport mode trunk
! channel-group 114 force
int ethernet 100/1/3
description ESX3 vmnic0
switchport mode trunk
! channel-group 115 force
17.10
slot 101
provision model N2K-C2148T
17.11
Configure the fabric EtherChannel links for the Fabric Extender 101.
int po101
description single-homed FEX101
int e1/7-8
channel-group 101
int po101
switchport mode fex-fabric
fex associate 101
17.12
Configure the Nexus 2000 (FEX) Ethernet interfaces on N5K-2. The FEX interfaces will be used as
management ports for the ESXi servers. Ports Eth101/1/1-3 will be configured to trunk. We are not
going to put these ports into a channel group, so those lines are commented out. The port
channel configuration is also not necessary, but it is included in case we need to port-channel these
ports later.
int po113
description ESX1
switchport mode trunk
vpc 113
int po114
description ESX2
switchport mode trunk
vpc 114
int po115
description ESX3
switchport mode trunk
vpc 115
int ethernet 101/1/1
description ESX1 vmnic2
switchport mode trunk
! channel-group 113 force
int ethernet 101/1/2
description ESX2 vmnic2
switchport mode trunk
! channel-group 114 force
int ethernet 101/1/3
description ESX3 vmnic0
switchport mode trunk
! channel-group 115 force
19.3
Connect to the MDS 9124 using the console button on the lab interface and perform the System
Admin Account Setup:
---- System Admin Account Setup ----
Do you want to enforce secure password standard (yes/no) [y]: y
Enter the password for "admin": 1234Qwer
Confirm the password for "admin": 1234Qwer
---- Basic System Configuration Dialog ----
<snip>
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]: n
Configure read-only SNMP community string (yes/no) [n]: n
Configure read-write SNMP community string (yes/no) [n]: n
Enter the switch name : MDS9124
Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: y
Mgmt0 IPv4 address : 10.1.111.40
Mgmt0 IPv4 netmask : 255.255.255.0
Configure the default gateway? (yes/no) [y]: y
IPv4 address of the default gateway : 10.1.111.254
Configure advanced IP options? (yes/no) [n]: n
Enable the ssh service? (yes/no) [y]: y
Type of ssh key you would like to generate (dsa/rsa) [rsa]: rsa
Number of rsa key bits <768-2048> [1024]: 1024
Enable the telnet service? (yes/no) [n]: n
Enable the http-server? (yes/no) [y]: y
Configure clock? (yes/no) [n]: n
Configure timezone? (yes/no) [n]: n
Configure summertime? (yes/no) [n]: n
Configure the ntp server? (yes/no) [n]: n
Configure default switchport interface state (shut/noshut) [shut]: shut
Configure default switchport trunk mode (on/off/auto) [on]: on
Configure default switchport port mode F (yes/no) [n]: n
Configure default zone policy (permit/deny) [deny]: deny
Enable full zoneset distribution? (yes/no) [n]: y
Configure default zone mode (basic/enhanced) [basic]: basic
19.4
19.5
20.2
21.1
Display all commands that begin with s, sh, and show. Press Enter or space to scroll through the
list of commands.
MDS9124# s?
MDS9124# sh?
MDS9124# show ?
21.2
Abbreviate the syntax, then hit tab key to complete each word; for example, type sh<tab> ru<tab>.
21.3
Display the status of the switch interfaces. Notice that fibre channel interfaces fc 1/1 - fc 1/6 are
down.
21.4
21.5
22.2
Note:
Be sure you start the tFTP/FTP Server before attempting to save the configuration or your copy will
fail. Please review Lab 0 Lab Services for instructions on how to use the tFTP/FTP server.
Use a tFTP/FTP Server in production networks to keep backup configurations and code releases for
each network device. Be sure to include these servers in your regular Data Center backup plans.
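For example, the running configuration can be copied off the switch along these lines (the server address and file name here are assumptions for this lab, not values from the guide):

```
MDS9124# copy running-config tftp://10.1.111.100/MDS9124-backup.cfg
```

The same command works on the Nexus 5000 switches.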
The following section provides a detailed procedure for configuring the Cisco Unified Computing System for use
in a DCV environment. These steps should be followed precisely because a failure to do so could result in an
improper configuration.
4.1 POWER ON THE ESX HOSTS AND VERIFY THE NEXUS INTERFACES
We will use Cisco Unified Computing System C-Series Servers, powered by Intel Xeon processors, providing
industry-leading virtualization performance, to validate our configuration.
The ESX CNA interfaces must be up in order to verify interface connectivity and fabric login. Power up the ESX
hosts, then use show commands on the Nexus 5000 to verify the interfaces.
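The verification can be sketched with standard NX-OS show commands (port numbers will vary by pod):

```
N5K-1# show interface brief
N5K-1# show flogi database
```

The CNA-facing interfaces should show up, and each powered-on host should appear as a fabric login.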
Step 23 Power up ESXi hosts.
23.1
Connect to the VC_SERVER from the SSL Dashboard.
23.2
Log into the server with credentials: administrator/1234Qwer.
23.3
Double click on the ESX1 CIMC shortcut on the desktop (or http://10.1.111.161/).
23.4
Accept any SSL warnings.
23.5
Authenticate with admin/1234Qwer.
23.6
Step 24 Repeat Step 23 for ESX2 CIMC (http://10.1.111.162) and ESX3 CIMC (http://10.1.111.163).
This section contains the procedural steps for the second part of the Cisco Nexus 5010 deployment.
Create a VLAN to carry the FCoE traffic and bind the VSAN to it.
Add the FCoE vlan to the allowed vlan list.
Define a virtual FC interface(vFC) and bind it to an interface.
Configure SAN port-channel uplinks.
Create a vsan database.
Assign the vFC interfaces to their vsan.
Enable FC and vFC interfaces.
25.1
interface po13-15
switchport trunk allowed vlan add 1011
25.2
Create virtual Fibre Channel interfaces. Bind them to server port-channel interfaces. Then bring up
the vFC interfaces.
When FCoE hosts are using vPC, vfc interfaces need to bind to the port-channel interface instead of the
physical interface.
interface vfc13
bind interface po13
interface vfc14
bind interface po14
interface vfc15
bind interface po15
int vfc13-15
switchport trunk allowed vsan 11
2011 Jan 14 06:05:37 N5K-1 %$ VDC-1 %$ %PORT-2-IF_DOWN_ERROR_DISABLED: %$VSAN 1%$
Interface vfc3 is down (Error disabled)
You will get error-disabled messages if the servers have not been powered up yet.
25.3
25.4
Create vsan 11. On N5K-1, associate vsan 11 with vfc 13-15 and san-port-channel 111.
vsan database
vsan 11 name FABRIC_A
vsan 11 interface vfc 13-15
vsan 11 interface san-port-channel 111
exit
25.5
interface fc2/1-4
no shut
int vfc13-15
no shut
26.1
int po13-15
switchport trunk allowed vlan add 1012
26.2
Create virtual Fibre Channel interfaces. Bind them to server port-channel interfaces. Then bring up
the vFC interfaces.
When FCoE hosts are using vPC, vfc interfaces need to bind to the port-channel interface instead of the
physical interface.
int vfc13
bind interface port-channel 13
int vfc14
bind interface port-channel 14
int vfc15
bind interface port-channel 15
int vfc13-15
switchport trunk allowed vsan 12
exit
26.3
26.4
Create vsan 12. On N5K-2, associate vsan 12 with vfc 13-15 and san-port-channel 112.
vsan database
vsan 12 name FABRIC_B
vsan 12 interface vfc13-15
vsan 12 interface san-port-channel 112
exit
Note:
The VLAN and VSAN on N5K-2 need to be different from those on N5K-1. This is so we can create two independent paths.
26.5
interface fc2/1-4
no shut
int vfc13-15
no shut
exit
Cisco MDS9124
Step 27 Create vsan 11 and vsan 12. Assign fc1/5 to vsan 11 and fc1/6 to vsan 12:
Note:
FC Port Connectivity: MDS fc1/1 to N5K-1 fc2/1, MDS fc1/2 to N5K-2 fc2/1, MDS fc1/3 to EMC SPA.
27.1
int fc1/1
switchport
int fc1/2
switchport
int fc1/3
switchport
int fc1/4
switchport
int fc1/5
switchport
int fc1/6
switchport
exit
27.2
27.3
vsan database
vsan 11 name FABRIC_A
vsan 12 name FABRIC_B
27.4
Assign fc1/5 and port-channel 111 to vsan 11. Assign fc1/6 and port-channel 112 to vsan 12:
vsan database
vsan 11 interface fc1/5
vsan 11 interface port-channel 111
vsan 12 interface fc1/6
vsan 12 interface port-channel 112
exit
27.5
int fc1/1-6
no shutdown
vsan 11 interfaces:
    fc1/1    fc1/2    fc1/5    port-channel 111
vsan 12 interfaces:
    fc1/3    fc1/4    fc1/6    port-channel 112
28.2
Note:
If the association state is non-operational, then you did not define the VSAN in a previous step.
28.3
View all of the virtual Fibre Channel interfaces. Make sure all defined vFCs are present and in the
correct VSANs.
Note:
All of the vfc interfaces will show up as errDisabled if the servers are turned off.
28.4
Confirm the configuration of the virtual Fibre Channel interface. Note the bound Ethernet
interface information. The rest of the information is similar to a standard Fibre Channel port.
Note:
The interfaces will show down if the connecting servers are powered off.
Devices that do not belong to a zone follow the policy of the default zone.
Here are the general steps for creating zones and zone sets:
Create aliases
Create zones
Create zone sets
Activate the zone set.
Note:
For the following steps, you will need the information from the table below. On occasion, hardware
needs to be replaced or upgraded, and the documentation is not updated at the same time. One way to
verify the table is to compare the output of show flogi database against the output of show run
zone. In other words, compare the WWPNs of the devices that actually register with the fabric
against the WWPNs you manually zoned in.
DEVICE                  WWPN-A to N5K-1            WWPN-B to N5K-2
NTAP1-A Boot Target     50:0a:09:81:88:bc:c3:04    50:0a:09:82:88:bc:c3:04
ESX1                    21:00:00:c0:dd:12:bc:6d    21:00:00:c0:dd:12:bc:6f
ESX2                    21:00:00:c0:dd:14:60:31    21:00:00:c0:dd:14:60:33
ESX3                    21:00:00:c0:dd:11:bc:e9    21:00:00:c0:dd:11:bc:eb
NTAP1-A Boot Target     50:06:01:60:4b:a0:66:c7    50:06:01:61:4b:a0:66:c7
ESX1                    21:00:00:c0:dd:13:ec:19    21:00:00:c0:dd:13:ec:1b
ESX2                    21:00:00:c0:dd:14:71:8d    21:00:00:c0:dd:14:71:8f
ESX3                    21:00:00:c0:dd:14:73:c1    21:00:00:c0:dd:14:73:c3
NTAP1-A Boot Target     50:06:01:60:4b:a0:6e:75    50:06:01:61:4b:a0:6e:75
ESX1                    21:00:00:c0:dd:13:eb:bd    21:00:00:c0:dd:13:eb:bf
ESX2                    21:00:00:c0:dd:13:ed:31    21:00:00:c0:dd:13:ed:33
ESX3                    21:00:00:c0:dd:14:73:19    21:00:00:c0:dd:14:73:1b
NTAP1-A Boot Target     50:0a:09:81:88:ec:c2:a1    50:0a:09:82:88:ec:c2:a1
ESX1                    21:00:00:c0:dd:12:0e:59    21:00:00:c0:dd:12:0e:59
ESX2                    21:00:00:c0:dd:12:0d:51    21:00:00:c0:dd:12:0d:53
ESX3                    21:00:00:c0:dd:14:73:65    21:00:00:c0:dd:14:73:67
Step 29 Create device aliases on each Cisco Nexus 5010 and create zones for each ESXi host
Duration: 30 minutes
Cisco Nexus 5010 A - N5K-1
29.1
Aliases for storage (targets).
device-alias database
device-alias name NTAP1-A_0a pwwn <ntap1_a_wwpn>
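The initiator aliases can be created the same way. A sketch for N5K-1 (the WWPNs are the pod values that also appear in the show fcns database output later in this section):

```
device-alias database
  device-alias name ESX1_NTAP1-A_A pwwn 21:00:00:c0:dd:12:0e:59
  device-alias name ESX2_NTAP1-A_A pwwn 21:00:00:c0:dd:12:0d:51
  device-alias name ESX3_NTAP1-A_A pwwn 21:00:00:c0:dd:14:73:65
```

Use the WWPN values for your own pod from the table above.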
29.2
Note:
29.3
Get this information from the WWPN table above.
Create the zones for each service profile. Each zone contains one initiator and one target. We
place port 1 of each CNA in a zone with NTAP1-A 0a for VSAN 11.
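A sketch of one such zone on N5K-1 (the zone name is illustrative; the aliases are the ones defined above):

```
zone name ESX1_NTAP1-A vsan 11
  member device-alias ESX1_NTAP1-A_A
  member device-alias NTAP1-A_0a
```

Repeat for ESX2 and ESX3, one initiator and one target per zone.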
29.4
29.5
29.6
2011 Cisco
Page 72 of 217
Note:
30.2
30.3
30.4
After all of the zones for the Cisco UCS service profiles have been created, create a zoneset to
organize and manage them.
Create the zoneset and add the necessary members.
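The zoneset step can be sketched as follows for fabric A (the zoneset and zone names here are illustrative, not taken from the guide):

```
zoneset name FABRIC_A_ZS vsan 11
  member ESX1_NTAP1-A
  member ESX2_NTAP1-A
  member ESX3_NTAP1-A
zoneset activate name FABRIC_A_ZS vsan 11
```

Activation pushes the zoneset to every switch in the VSAN.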
30.5
30.6
Cisco MDS9124
Note:
When you activate the zone sets on N5K-1 and N5K-2, the switches will propagate the zone info to
the MDS.
30.7
Verify that the entries were successfully entered into the device alias database by entering show
device-alias. Examples below are for Pod1.
30.8
Verify that the ESX hosts have completed a fabric login into N5K-1 and N5K-2. Make sure the VSAN
numbers are correct and that their alias shows up. Port numbers might not match yours.
30.9
Verify the devices registered in the Fibre Channel name server. The output below shows all
the hosts that have registered in the database. Note that you can see an entry for the
NetApp array here but not in the show flogi database output above.
Cisco Nexus 5010 A - N5K-1
N5K-1# sh fcns database
VSAN 11:
--------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)  FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x140000  N     50:0a:09:81:88:ec:c2:a1  (NetApp)  scsi-fcp:target
                [NTAP1-A_0a]
0xdb0000  N     21:00:00:c0:dd:14:73:65  (Qlogic)  scsi-fcp:init
                [ESX3_NTAP1-A_A]
0xdb0001  N     21:00:00:c0:dd:12:0d:51  (Qlogic)  scsi-fcp:init
                [ESX2_NTAP1-A_A]
0xdb0002  N     21:00:00:c0:dd:12:0e:59  (Qlogic)  scsi-fcp:init
                [ESX1_NTAP1-A_A]
Total number of entries = 4
30.10
Verify that the zones are correct by issuing the command show zoneset active. The output
should show the zoneset and the zones that were added to the zoneset. Examples below are for
Pod1.
This section presents a detailed procedure for installing VMware ESXi within a Data Center Virtualization
environment. The deployment procedures that follow are customized to include the specific environment
variables that have been noted in previous sections.
31.6
31.7
Under the Actions section, click the Launch KVM Console link. Click Run on any certificate
mismatch warning dialogs that may pop up. You will now have a Java KVM Console to the server.
Repeat Steps 31.1 - 31.6 for ESX2 CIMC (http://10.1.111.162) and ESX3 CIMC
(http://10.1.111.163).
This step has already been done for you. Skip to the next step.
This step has already been done for you. Skip to the next step.
This step has already been done for you. Skip to the next step.
DNS test will fail because we have not configured DNS, yet.
Press Esc to log out of the console interface.
To verify, in the right panel of the ESXi configuration window, when the VLAN (optional) item is
highlighted, the specified VLAN should be shown.
This step has already been done for you. Skip to the next step.
You can verify this step and the two previous steps by selecting the Test Management Network
option from the System Customization menu. Here you can specify up to three addresses to
ping and one hostname to resolve by using the DNS server.
This step has already been done for you. Skip to the next step.
38.1
38.2
38.3
38.4
38.5
38.6
(vSwitch0 before enabling jumbo frames:  Used Ports: 4   Configured Ports: 128   MTU: 1500   Uplinks: vmnic2,vmnic3)
  PortGroup Name      VLAN ID  Used Ports  Uplinks
  VM Network          0        0           vmnic2,vmnic3
  Management Network  111      1           vmnic2,vmnic3
vmnic2 and vmnic3 are the 1 Gbps NICs connected to the Cisco Nexus 2248 Fabric Extenders. They are
both active and use the default ESXi virtual port ID load-balancing mechanism.
39.3
Enable jumbo frames for default vSwitch0. Type esxcfg-vswitch -m 9000 vSwitch0.
39.4
Add a new vSwitch for the 10Gbps CNA ports. Enable jumbo frames for vSwitch1.
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -m 9000 vSwitch1
39.5
Why am I creating another Management network group? The default Management Network is a
vmkernel management interface. This new port group is for VMs to be on the Management VLAN.
40.2
Add a new port group called NFS to vSwitch1 and assign it to vlan 211.
40.3
Add a new port group called VMotion to vSwitch1 and assign it to vlan 151.
40.4
Add a new port group called CTRL-PKT to vSwitch1 and assign it to vlan 171.
40.5
Add a new port group called VMTRAFFIC to vSwitch1 and assign it to vlan 131.
40.6
Add a new port group called Local LAN to vSwitch1 and assign it to vlan 24.
40.7
vim-cmd hostsvc/net/refresh
You need to run a refresh of your network settings for the following steps. This is important when
running these commands from a script.
40.8
Verify the MTU 9000 setting and the addition of Port Groups. Type esxcfg-vswitch -l.
On both ESXi hosts ESX1 and ESX2
~ # esxcfg-vswitch -l
Switch Name  Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch0     128        4           128               9000  vmnic2,vmnic3

  PortGroup Name      VLAN ID  Used Ports  Uplinks
  VM Network          0        0           vmnic2,vmnic3
  Management Network  111      1           vmnic2,vmnic3

Switch Name  Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch1     128        5           128               9000  vmnic0,vmnic1

  PortGroup Name      VLAN ID  Used Ports  Uplinks
  Local LAN           24       0           vmnic0,vmnic1
  CTRL-PKT            171      0           vmnic0,vmnic1
  MGMT Network        111      0           vmnic0,vmnic1
  VMotion             151      0           vmnic0,vmnic1
  NFS                 211      0           vmnic0,vmnic1
41.2
Verify your vSwitch load-balancing policy. vSwitch0 should be set to lb_srcid and vSwitch1
should be set to lb_ip.
42.2
Create vmkernel interface for NFS traffic. Enable it for Jumbo Frames on port group NFS.
42.4
Create vmkernel interface for VMotion traffic. Enable it for Jumbo Frames on port group VMotion.
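The two VMkernel interfaces can be sketched with esxcfg-vmknic (the ESX1 addresses match the esxcfg-vmknic -l output in the next step):

```
# NFS vmkernel interface on port group NFS, jumbo frames enabled
esxcfg-vmknic -a -i 10.1.211.21 -n 255.255.255.0 -m 9000 NFS
# VMotion vmkernel interface on port group VMotion, jumbo frames enabled
esxcfg-vmknic -a -i 10.1.151.21 -n 255.255.255.0 -m 9000 VMotion
```

Use the corresponding .22 addresses on ESX2.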
42.5
Type esxcfg-vmknic -l and verify that the vmkernel ports were added properly with an MTU of
9000.
On ESXi host ESX1
~ # esxcfg-vmknic -l
Interface  Port Group          IP Family  IP Address   Netmask        Broadcast     MAC Address        MTU   TSO MSS  Enabled  Type
vmk0       Management Network  IPv4       10.1.111.21  255.255.255.0  10.1.111.255  c4:7d:4f:7c:a7:6a  1500  65535    true     STATIC
vmk1       NFS                 IPv4       10.1.211.21  255.255.255.0  10.1.211.255  00:50:56:7e:60:53  9000  65535    true     STATIC
vmk2       VMotion             IPv4       10.1.151.21  255.255.255.0  10.1.151.255  00:50:56:7b:ae:78  9000  65535    true     STATIC
On ESXi host ESX2, the corresponding IP addresses are 10.1.111.22, 10.1.211.22, and 10.1.151.21.
Summary of Commands
esxcfg-vswitch -m 9000 vSwitch0
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vswitch -L vmnic0 vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "MGMT Network" vSwitch1
esxcfg-vswitch -v 111 -p "MGMT Network" vSwitch1
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vswitch -v 151 -p VMotion vSwitch1
esxcfg-vswitch -A NFS vSwitch1
esxcfg-vswitch -v 211 -p NFS vSwitch1
esxcfg-vswitch -A "CTRL-PKT" vSwitch1
esxcfg-vswitch -v 171 -p "CTRL-PKT" vSwitch1
esxcfg-vswitch -A "VMTRAFFIC" vSwitch1
esxcfg-vswitch -v 131 -p "VMTRAFFIC" vSwitch1
esxcfg-vswitch -A "Local LAN" vSwitch1
esxcfg-vswitch -v 24 -p "Local LAN" vSwitch1
vim-cmd hostsvc/net/refresh
vim-cmd /hostsvc/net/vswitch_setpolicy --nicteaming-policy='loadbalance_ip' vSwitch1
Step 43 Logging into VMware ESXi host using VMware vSphere client
Duration: 5 minutes
ESXi host 1 - ESX1
43.1
Open the vSphere client and enter 10.1.111.21 as the host you are trying to connect to.
43.2
Enter root for the username.
43.3
Enter 1234Qwer as the password.
43.4
Click the Login button to connect.
ESXi Host 2 - ESX2
43.5
Open the vSphere client and enter 10.1.111.22 as the host you are trying to connect to.
43.6
Enter root for the username.
43.7
Enter 1234Qwer as the password.
43.8
Click the Login button to connect.
43.9
To verify that the login was successful, the vSphere client's main window should be visible.
Step 44 Setting up the VMotion VMkernel port on the virtual switch for individual hosts
Duration: 5 minutes per host
Now we need to enable VMotion on the VMkernel port we created.
ESXi host 1 - ESX1
44.1
Select ESX1 on the left panel.
44.2
Go to the Configuration tab.
44.3
Click the Networking link in the Hardware box.
44.4
Click the Properties link in the right field on vSwitch1.
44.5
44.6
44.7
44.8
Click OK to continue.
Click Close to close the dialog box.
On the right panel, click the Virtual Switch View. Individual VMkernel ports will be displayed for
the various networks defined. Select a VMkernel port and display the VM associated with that
port.
45.5
45.6
45.7
Click Edit.
Type in the VLAN ID for your Pod's VM Traffic VLAN (e.g., 131).
45.8
45.9
Click OK.
Click OK.
46.3
~ # touch /vmfs/volumes/SWAP/test
46.4
~ # ls /vmfs/volumes/SWAP/
test
46.5
46.6
46.7
From the vSphere client, view contents of the mount to confirm files. Select your host from the left
panel.
Select the Configuration tab. Select Storage in the Hardware box.
Inspect the right panel where the cluster is displayed. You should see all of the datastores
associated with the host.
46.8
46.9
Summary of Commands
esxcfg-nas -a --host 10.1.211.151 -s /vol/VDI_VFILER1_DS DS
esxcfg-nas -a --host 10.1.211.151 -s /vol/VDI_SWAP SWAP
48.5
Select the radio button for Store the swapfile in a swap file datastore selected below if it is not
already selected.
48.6
Select SWAP as the datastore you want to store the swapfile on.
48.7
48.8
You are now done with the initial setup of a Base Data Center Virtualization
infrastructure.
The remaining tasks will allow you to configure vCenter, Nexus 1000v, and OTV.
49.2
49.3
49.4
The FlexPod Implementation Guide recommends you enable and accept the defaults for VMware
DRS.
Accept the defaults for power management, and click Next to continue.
Accept the defaults for VMware HA, and click Next to continue.
Accept the defaults for Virtual Machine Options, and click Next to continue.
Accept the defaults for VM Monitoring, and click Next to continue.
Accept the defaults for VMware EVC, and click Next to continue.
Select Store the Swapfile in the datastore specified by the host in the VM Swapfile Location
section and click Next to continue.
50.10
50.11
To verify, on the left panel, individual hosts display under the cluster.
7.2
This task has already been completed for you. You may review for completeness. Please skip ahead to
Section 7.3.
ESX1 vmnic0 is the CNA connected to N5K-1 Eth1/9. ESX2 vmnic0 is the CNA connected to N5K-1 Eth1/4. Add a
datastore to each ESX host presented via FCoE through the fabric.
Step 52 Click on the 10.1.111.21 (ESX1) host under ClusterA cluster. Select the Configuration tab. Click on the
Storage link under Hardware. Click on the Add Storage link:
52.1
Select the Disk/LUN radio button, then click Next :
52.2
Select the 50 GB Fibre Channel disk that is found and click Next.
Note:
This LUN is connected via FCoE. ESX1 vmnic0 is the CNA port that is connected to N5K-1 Eth1/9.
52.3
Click Next on the Current Disk Layout dialog box that follows.
52.4
Name the datastore NetApp-SAN-1, then click Next.
52.5
Uncheck the Maximize capacity box, and then enter 40.00 GB in the size box. Click Next.
Note:
52.6
52.7
Note:
53.3
Click on the Server-2003R2 to open the folder. Right-click on the Server-2003R2.vmx file and
select Add to Inventory from the pop-up menu.
53.4
Leave the Name as Server-2003R2. Select FlexPod_DC_1. Click Next.
53.5
Specify your cluster and click Next.
53.6
Select ESX1 for the host. Click Next, then click Finish on the Add to Inventory dialog box.
Step 54 Add Client VM to ESX2 inventory.
54.1
Click on the ClientXP to open the folder. Right-click on the ClientXP.vmx file and select Add to
Inventory from the pop-up menu.
54.2
54.3
54.4
54.5
55.3
55.4
55.5
55.6
55.7
55.8
55.9
55.10
55.11
Map the Nexus 1000V Control and Packet source networks to CTRL_PKT. Map the
Management source network to "MGMT Network". Click Next.
Note:
Cisco supports using the same VLAN for the Management, Control, and Packet port groups. We are using
one group for Management traffic and another group for Control and Packet traffic.
55.12
Fill out the VSM Configuration Properties with information below, and then click Next.
VSM Domain ID: 11
Password: 1234Qwer
Management IP Address: 10.1.111.17
Management IP Subnet Mask: 255.255.255.0
Management IP Gateway: 10.1.111.254
55.13
Click Finish.
55.14
After the template is finished deploying, click Close.
55.15
Power on the VSM by clicking on the Nexus1000v VM and pressing the Power On icon.
55.16
Then, launch the VM Console and verify that the VM boots to the login prompt.
56.3
56.4
56.5
In the new window, right-click in open area below "Available Plug-ins" and select New Plug-in
(you may have to expand the window to do so).
56.6
56.7
56.8
56.9
56.10
57.4
57.5
svs-domain
domain id 11
control vlan 171
packet vlan 171
svs mode L2
57.6
Step 58 Verify connection to the vCenter and status before adding hosts to the VSM. The command show svs
connections shows VSM connection information to the vCenter. Make sure operational status is
Connected and Sync status is Complete. If the status is good, then proceed to adding hosts.
vsm-1# show svs connections
connection vcenter:
ip address: 10.1.111.100
remote port: 80
protocol: vmware-vim https
certificate: default
datacenter name: FlexPod_DC_1
DVS uuid: 84 52 1a 50 0c aa 52 b2-10 64 47 c3 8d af 46 70
config status: Enabled
operational status: Connected
sync status: Complete
version: VMware vCenter Server 4.1.0 build-345043
58.1
The Cisco Nexus 1000V switch should now be available in the Inventory Networking view.
59.1
60.2
Now we need to enable LACP offload. This WILL require a reboot of the VSM.
lacp offload
copy running startup
reload
60.3
Summary of Commands
hostname vsm-1
system jumbomtu 9000
svs-domain
domain id 11
control vlan 171
packet vlan 171
svs mode L2
exit
svs connection vcenter
protocol vmware-vim
remote ip address 10.1.111.100 port 80
vmware dvs datacenter-name FlexPod_DC_1
connect
exit
vlan 111
name MGMT-VLAN
vlan 131
name VMTRAFFIC
vlan 151
name VMOTION
vlan 171
name CTRL-PKT
vlan 211
name NFS-VLAN
feature lacp
lacp offload
copy running startup
reload
We have a pair of NICs that will be teamed, so we will only need one uplink port profile.
61.1
Type the following commands in the VSM console or terminal session to create the SYSTEM-UPLINK profile.
61.2
Note:
61.3
no shutdown
61.4
VLAN 111, 151, 171, and 211 are used for Management, VMotion, N1KV Control/Packet, and data
store traffic, so they have to be configured as system VLANs to ensure that these VLANs are
available during the boot process.
61.5
state enabled
Step 62 Create a Management port-profile for your ESXi management VMKernel interface. This port profile
will also be used by the Management interface of the VSM. As VLAN 111 is used for management
traffic, it has to be configured as a system VLAN to ensure that this VLAN is available during the boot
process of the ESXi server.
port-profile type vethernet MGMT-VLAN
vmware port-group
switchport mode access
switchport access vlan 111
no shutdown
system vlan 111
state enabled
Step 63 Create a Nexus 1000V Control and Packet port profile for the VSM virtual interfaces.
63.1
As VLAN 171 is used for Control and Packet traffic, it has to be configured as a system VLAN to ensure
that this VLAN is available during the boot process of the ESXi server.
Note:
The following section is not used currently, because we are using VLAN 1 for Control, Packet, and
Management.
Step 66 Create VM Traffic port-profile for VM virtual interfaces. This will be for the non-management Virtual
Machines residing on the ESXi hosts.
port-profile type vethernet VMTRAFFIC-VLAN
vmware port-group
switchport mode access
switchport access vlan 131
no shutdown
! system vlan 131
state enabled
exit
66.1
66.2
Summary of Commands
port-profile type ethernet SYSTEM-UPLINK
description system profile for blade uplink ports
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 111,131,151,171,211
mtu 9000
channel-group auto mode on
no shutdown
system vlan 111,151,171,211
state enabled
port-profile type vethernet MGMT-VLAN
vmware port-group
switchport mode access
switchport access vlan 111
no shutdown
system vlan 111
state enabled
port-profile type vethernet NFS-VLAN
vmware port-group
switchport mode access
switchport access vlan 211
no shutdown
system vlan 211
state enabled
exit
port-profile type vethernet VMOTION
vmware port-group
switchport mode access
switchport access vlan 151
no shutdown
system vlan 151
state enabled
exit
port-profile type vethernet VMTRAFFIC-VLAN
vmware port-group
switchport mode access
switchport access vlan 131
no shutdown
! system vlan 131
state enabled
port-profile type vethernet N1KV-CTRL-PKT
vmware port-group
switchport mode access
switchport access vlan 171
no shutdown
system vlan 171
state enabled
67.7
Type vem status and confirm that the VEM has been installed properly.
Num Ports  Used Ports  Configured Ports  MTU   Uplinks
128        16          128               9000  vmnic1,vmnic0
Summary of Commands
cd /vmfs/volumes/DS
esxupdate -b cross_cisco-vem-v130-4.2.1.1.4.0.0-2.0.1.vib update
68.2
Select vsm-1 from the tree on the left. Right-click on it and select Add Host from the menu.
68.3
68.4
Select hosts ESX1 and ESX2. Next, select the adapters for each host's vSwitch1 (vmnic0 and
vmnic1). Don't select vmnics that are used by vSwitch0 (the default virtual switch provided by the
ESXi server).
Select SYSTEM-UPLINK as the DVUplink port group for all of the vmnics you are adding.
68.5
68.6
68.7
68.8
Step 69 Verify that the Virtual Ethernet Module(s) are seen by VSM.
vsm-1(config)# show module
Mod  Ports  Module-Type                      Model               Status
---  -----  -------------------------------  ------------------  ----------
1    0      Virtual Supervisor Module        Nexus1000V          active *
3    248    Virtual Ethernet Module          NA                  ok
4    248    Virtual Ethernet Module          NA                  ok

Mod  Sw            Hw
---  ------------  ------------------------------------------------
1    4.2(1)SV1(4)  0.0
3    4.2(1)SV1(4)  VMware ESXi 4.1.0 Releasebuild-260247 (2.0)
4    4.2(1)SV1(4)  VMware ESXi 4.1.0 Releasebuild-260247 (2.0)

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA

Mod  Server-IP    Server-UUID                           Server-Name
---  -----------  ------------------------------------  -----------
1    10.1.111.17  NA                                    NA
3    10.1.111.21  6da2f331-dfd4-11de-b82d-c47d4f7ca766  esx1
4    10.1.111.22  67ae4b62-debb-11de-b88b-c47d4f7ca604  esx2

* this terminal session
Step 70 Migrate the ESXi hosts' existing management vmkernel interface on vSwitch0 to the Nexus 1000V.
70.1
From the browser bar, select Hosts and Clusters.
70.2
Select ESX1 (10.1.111.21), select the Configuration tab, select Networking under Hardware,
select the Virtual Distributed Switch tab, click on Manage Virtual Adapters link:
70.3
Click the Add link, select Migrate existing virtual adapters, then click Next:
70.4
70.5
70.6
70.7
70.8
Click Finish.
70.9
Verify that all the vmkernel ports for ESX1 have migrated to the Nexus 1000V distributed virtual
switch:
Step 71 Repeat Step 70 to move ESX2 to the Nexus 1000V distributed virtual switch.
Step 72 Verify that jumbo frames are enabled correctly for your vmkernel interfaces.
72.1
From VSM run show interface port-channel to verify that the MTU size is 9000.
vsm-1# show interface port-channel 1-2 | grep next 2 port-c
port-channel1 is up
Hardware: Port-Channel, address: 0050.5652.0e5a (bia 0050.5652.0e5a)
MTU 9000 bytes, BW 20000000 Kbit, DLY 10 usec,
-port-channel2 is up
Hardware: Port-Channel, address: 0050.5652.0d52 (bia 0050.5652.0d52)
MTU 9000 bytes, BW 20000000 Kbit, DLY 10 usec,
72.2
From both ESXi servers, verify that the environment is configured for Jumbo frames end-to-end. We
are going to use the -d option to prevent fragmenting the packet.
Note:
In our environment, since the NetApp is plugged into our 3750 management switch, we had to also
enable it for jumbo frames using the command system mtu jumbo 9000.
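The end-to-end test can be sketched with vmkping. A payload of 8972 bytes plus 28 bytes of IP/ICMP headers yields a full 9000-byte packet, and -d forbids fragmentation (the target here is the NFS vFiler address used in the mount commands earlier):

```
~ # vmkping -d -s 8972 10.1.211.151
```

If the MTU is wrong anywhere along the path, the oversized, unfragmentable ping will fail.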
73.2
Select the VMTRAFFIC port-profile from the drop-down list and select OK.
73.3
Verify that your VM's virtual interface is showing up in the VSM.
73.4
Step 74 Repeat the above steps for any remaining VMs you have except for your VSM. Be sure to select the
appropriate port profile.
Extends Layer 2 LANs over any network: Uses IP-encapsulated MAC routing, works over any network
that supports IP, designed to scale across multiple data centers
Simplifies configuration and operation: Enables seamless deployment over existing network without
redesign, requires minimal configuration commands (as few as four), provides single-touch site
configuration for adding new data centers
Increases resiliency: Preserves existing Layer 3 failure boundaries, provides automated multihoming,
and includes built-in loop prevention
Maximizes available bandwidth: Uses equal-cost multipathing and optimal multicast replication
Nexus 7000
The Cisco Nexus 7000 Series is a modular data center class series of switching systems designed for highly
scalable end-to-end 10 Gigabit Ethernet networks. The Cisco Nexus 7000 Series is purpose-built for the data
center and has many unique features and capabilities designed specifically for this mission-critical place in the
network.
Cisco NX-OS
Cisco NX-OS is a state-of-the-art operating system that powers the Cisco Nexus 7000 Platform. Cisco NX-OS is
built with modularity, resiliency, and serviceability at its foundation. Drawing on its Cisco IOS and Cisco SAN-OS
heritage, Cisco NX-OS helps ensure continuous availability and sets the standard for mission-critical data center
environments.
EXERCISE OBJECTIVES
This hands-on lab will introduce participants to the OTV (Overlay Transport Virtualization) solution for the
Nexus 7000. This innovative feature set simplifies Datacenter Interconnect designs, allowing Data Center
communication and transparent Layer 2 extension between geographically distributed Data Centers.
OTV accomplishes this without the overhead introduced by MPLS or VPLS.
By the end of the laboratory session the participant should be able to understand OTV functionality and
configuration with the Nexus 7000. Students will go through the following steps:
1. System Verification.
2. Base configuration.
3. OSPF Configuration.
4. OTV Configuration and Verification.
5. VMotion across Data Centers.
Each lab POD has a pair of Nexus 7000s that are used as edge devices attached to a Layer 3 core cloud. The core
(which you don't configure) consists of a pair of Nexus 7000s that are used to model a simple L3 WAN core
network. A pair of Nexus 5000s with an attached ESX server represents the access layer.
The equipment we are using is the Nexus 7000 10-slot chassis with dual supervisors, one 48-port GE Copper card
(model N7K-M148GT-12) and one 32-port 10GE fiber card (model N7K-M132XP-12) each.
We will convert our single Data Center site environment into two geographically distributed Data Center sites.
Each site will have one ESXi 4.1 server that is part of the same VMware host cluster. The sites are connected via
Nexus 7000 edge devices (virtual device contexts) to a Nexus 7000 IP core (virtual device contexts).
We will configure the Nexus 7000s at Sites A and B. The goal of the lab is to establish L2 connectivity between
the two sites and then perform a vMotion over a generic IP core, leveraging the Nexus 7000 OTV technology.
We leverage the Virtual Device Context feature to consolidate multiple nodes and reduce the amount of equipment required. The eight Nexus 7000s (N7K)
below are actually two physical boxes.
Figure 7 - Full Topology for Three Pods in a VDC Deployment
Table 18 - IP Addresses for Uplinks and Loopbacks

       Device  Interface  IP on uplink
POD 1  N7K-1   Eth 1/10   10.1.11.3/24
POD 1  N7K-2   Eth 1/12   10.1.14.4/24
POD 1  N7K-1   Lo0        10.1.0.11/32
POD 1  N7K-2   Lo0        10.1.0.12/32
POD 2  N7K-1   Eth 1/18   10.1.21.5/24
POD 2  N7K-2   Eth 1/20   10.1.24.6/24
POD 2  N7K-1   Lo0        10.1.0.21/32
POD 2  N7K-2   Lo0        10.1.0.22/32
POD 3  N7K-1   Eth 1/26   10.1.31.7/24
POD 3  N7K-2   Eth 1/28   10.1.34.8/24
POD 3  N7K-1   Lo0        10.1.0.31/32
POD 3  N7K-2   Lo0        10.1.0.32/32
Table 19 - Access Ports

       Device  Access Ports  Device  Access Ports
POD 1  N7K-1   e1/14         N5K-1   e1/19
POD 1  N7K-2   e1/16         N5K-2   e1/20
POD 2  N7K-1   e1/22         N5K-1   e1/19
POD 2  N7K-2   e1/24         N5K-2   e1/20
POD 3  N7K-1   e1/30         N5K-1   e1/19
POD 3  N7K-2   e1/32         N5K-2   e1/20
       Device  Access Ports  Device  Access Ports
POD 1  N7K-1   e1/14         N5K-1   e1/19
POD 2  N7K-1   e1/22         N5K-1   e1/19
POD 3  N7K-1   e1/30         N5K-1   e1/19
Note: If you did not do Sections 3-5, you can load the configurations from the tftp server. See
Appendix A: Copying Switch Configurations From a tftp Server for instructions. However, you must
do Sections 6 and 7 to prepare the servers and virtual machines.
OTV between 2 DCs connected with Dark Fiber (sent to corporate editing)
"The scope of this document is to provide guidance on configuring and designing a network with Overlay
Transport Virtualization (OTV) to extend Layer 2 between two Data Centers connected via dark fiber links. This is
a very common DCI deployment model and this paper will be very helpful in guiding AS team, partners and
customer in deploying OTV."
http://bock-bock.cisco.com/wiki_file/N7K:tech_resources:otv/OTV_over_DarkFiber-AS_team.docx
Note: If you do not have access to the above document, please contact your local Cisco SE.
show module
show running-config all | section mgmt0
show vrf
show vrf interface
show vrf management interface
show version
interface Ethernet
vrf member management
show int mgmt0
ping 10.1.111.254 vrf management
sh running-config | grep next 3 mgmt0
where
Basic Configuration

N7K-1, N7K-2, N5K-1, N5K-2:
vlan 20, 23, 1005
no shut
sh vlan br

N7K-1:
spanning-tree vlan 20,23,1005 priority 4096

N7K-2:
spanning-tree vlan 20,23,1005 priority 8192

Internal interface:
int e1/<5k-7k link>
switchport
switchport mode trunk
switchport trunk allowed vlan 20,23,1005
no shutdown
OSPF Configuration
feature ospf
router ospf 1
log-adjacency-changes
interface loopback0
ip address 10.1.0.y/32
ip router ospf 1 area 0.0.0.0
interface e1/<uplink_port>
mtu 9042
ip address 10.1.y.z/24
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
ip igmp version 3
no shutdown
show running-config ospf
show ip ospf neighbors
show ip ospf int brief
show ip route ospf-1
Log into your Nexus 7000's management interface via SSH using the username admin and the password
1234Qwer.
Let's start by checking the system and its configuration.
75.2  Check the installed hardware modules.

N7K-1-OTV-1A# show module
Mod  Model          Status       Sw      Hw   Serial-Num
---  -------------  -----------  ------  ---  -----------
1    N7K-M132XP-12  ok           5.1(2)  2.0  JAF1438AMAQ
3    N7K-M148GT-11  ok           5.1(2)  1.6  JAF1443BLRQ
5    N7K-SUP1       active *     5.1(2)  1.8  JAF1444BLHB
6    N7K-SUP1       ha-standby   5.1(2)  1.8  JAF1443DDHF

Mod  MAC-Address(es)
---  --------------------------------------
1    1c-df-0f-d2-05-20 to 1c-df-0f-d2-05-44
3    1c-df-0f-4a-06-04 to 1c-df-0f-4a-06-38
5    b4-14-89-e3-f6-20 to b4-14-89-e3-f6-28
6    b4-14-89-df-fe-50 to b4-14-89-df-fe-58

Xbar  Ports  Module-Type      Model            Status
----  -----  ---------------  ---------------  ------
1     0      Fabric Module 1  N7K-C7010-FAB-1  ok
2     0      Fabric Module 1  N7K-C7010-FAB-1  ok
3     0      Fabric Module 1  N7K-C7010-FAB-1  ok
<snip>
75.3  Next, we will check the currently running software version. Our lab is currently running NX-OS 5.1(2).
N7K-1-OTV-1A# show version
<snip>
Hardware
  cisco Nexus7000 C7010 (10 Slot) Chassis ("Supervisor module-1X")
  Intel(R) Xeon(R) CPU with 4115776 kB of memory.
  Processor Board ID JAF1444BLHB

  Device name: N7K-1-OTV-1A
  bootflash:   2029608 kB
  slot0:       2074214 kB (expansion flash)

Note: Cisco Overlay Transport Virtualization (OTV) requires NX-OS version 5.0(3) or higher.
75.4  Display the interface configuration, including the management interface.

interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
<omitted interface config>
interface mgmt0
  ip address 10.1.111.111/24
75.5  This is the configuration for Pod 1. As explained earlier, the Nexus 7000s in each pod run within
a Virtual Device Context (VDC). By using the VDC feature, we can segment the physical Nexus
7000 into multiple logical switches, each of which runs in a separate memory space and only has
visibility into the hardware resources that it owns, providing total isolation between the VDCs.
75.6  One of the features of show running-config in NX-OS is the ability not only to look at the
running config but also to reveal the default values, which do not appear in the base config. The
keyword to use is all.
The Management VRF provides total isolation of management traffic from the rest of the traffic flowing through
the box.
In this task we will:
Verify that only the mgmt0 interface is part of the management VRF
Verify that no other interface can be part of the management VRF
Verify that the default gateway is reachable only using the management VRF
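As an illustration of the isolation described above, a VRF can be thought of as a per-instance routing and interface table: a lookup in one VRF cannot see routes that belong to another. The sketch below is a hypothetical Python model (the interface names and route entries are invented for illustration), not NX-OS internals:

```python
# Hypothetical per-VRF tables: only the management VRF holds the
# out-of-band default route, so only lookups in that VRF can reach it.
vrfs = {
    "default":    {"interfaces": ["Ethernet3/24"], "routes": {}},
    "management": {"interfaces": ["mgmt0"],
                   "routes": {"0.0.0.0/0": "10.1.111.254"}},
}

def reachable(vrf: str, prefix: str) -> bool:
    """A destination is reachable only via routes inside the chosen VRF."""
    return prefix in vrfs[vrf]["routes"]

print(reachable("management", "0.0.0.0/0"))  # True
print(reachable("default", "0.0.0.0/0"))     # False
```

This mirrors what you will observe below: the same ping succeeds or fails depending solely on which VRF context is specified.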
Step 76 Verify VRF characteristics and behavior.
Duration: 15 minutes

76.1  Verify that only the mgmt0 interface is part of the management VRF.

N7K-1-OTV-1A# show vrf
VRF-Name    VRF-ID  State  Reason
default     1       Up     --
management  2       Up     --

N7K-1-OTV-1A# show vrf interface
<omitted output>
Interface     VRF-Name    VRF-ID
Ethernet3/24  default     1
mgmt0         management  2

N7K-1-OTV-1A# show vrf management interface
Interface  VRF-Name    VRF-ID
mgmt0      management  2

Note: The management VRF is part of the default configuration, and the management interface mgmt0 is
the only interface that can be made a member of this VRF. Let's verify it.
76.2  Verify that no other interface can be made part of the management VRF.

N7K-1-OTV-1A# conf t
N7K-1-OTV-1A(config)# interface ethernet1/9
N7K-1-OTV-1A(config-if)# vrf member management
% VRF management is reserved only for mgmt0
76.3  Verify that the default gateway is not reachable when using the default VRF. Try reaching the
out-of-band management network's default gateway with a ping.

N7K-1-OTV-1A(config-if)# ping 10.1.111.254
<each of the five requests fails with "No route to host">
--- 10.1.111.254 ping statistics ---
5 packets transmitted, 0 packets received, 100.00% packet loss
Note: The ping fails because we are trying to reach a system on the out-of-band management network
without specifying the correct VRF.

76.4  Verify that the default gateway is reachable using the management VRF. Try reaching the MGMT
VRF's default gateway with a ping.

Lab Hack! In our lab environment, we could not use the mgmt0 interface or management VRF.
Instead, we used the last gigabit port in each VDC as the management interface and placed it into a
new VRF called MGMT. To ping other devices in the network from the Nexus 7000s, you will need
to specify this VRF context.

N7K-1-OTV-1A# ping 10.1.111.254 vrf management
<five replies, each reporting a round-trip time in ms; note the Linux-like output>
--- 10.1.111.254 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.585/0.674/1.005 ms
This section is optional. You can skip this section if you are already familiar with the Nexus 7000 CLI
capabilities. In this case, jump to Base Configuration.
Verify the CLI hierarchy independence by issuing a ping from different CLI contexts
N7K-1-OTV-1A# conf t
N7K-1-OTV-1A(config)#ping ?
*** No matches in current mode, matching in (exec) mode ***
<CR>
A.B.C.D or Hostname IP address of remote system
WORD
Enter Hostname
multicast
Multicast ping
(Hierarchically independent CLI)

77.2

77.3  You can use the up-arrow to recall the command history from exec mode. Any command can
be issued from anywhere within the configuration.
77.4
Verify the CLI piping functionality. Multiple piping options are available; many of them are derived
from the Linux world.
(Improved CLI piping)

N7K-1-OTV-1A(config-if)# show running-config | ?
  cut      Print selected parts of lines.
  diff     Show difference between current and previous invocation
           (creates temp files: remove them with 'diff-clean' command
           and don't use it on commands with big outputs, like 'show tech'!)
  egrep    Egrep - print lines matching a pattern
  grep     Grep - print lines matching a pattern
  head     Display first lines
  human    Output in human format (if permanently set to xml, else it
           will turn on xml for next command)
  last     Display last lines
  less     Filter for paging
  no-more  Turn-off pagination for command output
  perl     Use perl script to filter output
  section  Show lines that include the pattern as well as the
           subsequent lines that are more indented than matching line
  sed      Stream Editor
  sort     Stream Sorter
  sscp     Stream SCP (secure copy)
  tr       Translate, squeeze, and/or delete characters
  uniq     Discard all but one of successive identical lines
  vsh      The shell that understands cli command
  wc       Count words, lines, characters
  xml      Output in xml format (according to .xsd definitions)
  begin    Begin with the line that matches
  count    Count number of lines
  end      End with the line that matches
  exclude  Exclude lines that match
  include  Include lines that match
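The section filter described in the help above (emit a matching line plus the subsequent lines indented more deeply than it) can be sketched in Python. The sample config lines are hypothetical; this is an illustration of the filtering rule, not the NX-OS implementation:

```python
def section(lines, pattern):
    """Mimic 'show run | section <pat>': keep each matching line plus the
    following lines that are indented more deeply than the match."""
    out, match_indent = [], None
    for line in lines:
        indent = len(line) - len(line.lstrip())
        if pattern in line:
            out.append(line)
            match_indent = indent
        elif match_indent is not None and line.strip() and indent > match_indent:
            out.append(line)          # still inside the matched block
        else:
            match_indent = None       # block ended; stop keeping lines

    return out

cfg = [
    "interface mgmt0",
    "  vrf member management",
    "  ip address 10.1.111.111/24",
    "interface Ethernet1/10",
]
print(section(cfg, "mgmt0"))  # the mgmt0 block only, 3 lines
```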
77.5

77.6  Display any line that contains mgmt0 and print the next 3 lines after that match.

77.7  The [TAB] key completes a CLI command and shows the available keywords:

shutdown  snmp  vrf  where

77.8  If you want to know the CLI context you are in, use the where command.

N7K-1-OTV-1A(config-if)# where
conf; interface mgmt0
admin@N7K-1-OTV-1A%default
N7K-1-OTV-1A(config-if)# end
78.3
int port-channel 1
shutdown
78.4
int e1/10
interface po14
shutdown
78.5
vlan 131,151,171,211,1005
no shut
int e1/19
switchport
switchport mode trunk
switchport trunk allowed vlan 131,151,171,211,1005
no shutdown
78.8
interface port-channel 1
shutdown
!interface port-channel 101
! shutdown
78.9
Remove ESX 1 & 3 from Site B. We are also shutting down the connection to the 3750 on the B
side.
interface e1/4,e1/9,e1/11
interface po20,po13,po15
shutdown
78.10
vlan 131,151,171,211,1005
no shut
int et 1/20
switchport
switchport mode trunk
switchport trunk allowed vlan 131,151,171,211,1005
no shutdown
Summary of Commands
You have three options at this point. Option 3 is under maintenance, so do NOT use it.
1) Go to the next step (Spanning Tree) to manually configure OTV
2) Copy and paste the commands from the Command Summary for OTV on page 212.
3) Restore an OTV config and go to Section 9.8. Perform the following commands on both Nexus 7000s
to load OTV config. SSH into N7K-1 (10.1.111.3) and N7K-2 (10.1.111.4)
rollback running-config checkpoint OTV
copy run start
reload vdc
Each site must have two sets of VLANs: one set local to the site and one set extended over
the overlay to the remote data-center site. The VLANs are 131, 151, 171, 211, and 1005. VLAN 131 is the
VM-Client traffic interface, VLAN 151 is used for vMotion traffic, and VLAN 1005 is used for intra-site OTV
communication.
N7K-1
N7K-1-OTV-1A# conf t
Enter configuration commands, one per line.
79.1
vlan 131,151,171,211,1005
no shut
79.2
Verify VLANs.

sh vlan br

VLAN  Name      Status  Ports
----  --------  ------  ------
1     default   active
20    VLAN0020  active
23    VLAN0023  active
160   VLAN0160  active
1005  VLAN1005  active

Repeat the VLAN configuration on N7K-2.

79.3  Set the spanning-tree priorities so that N7K-1 is the root.

N7K-1
N7K-1-OTV-1A(config-vlan)# spanning-tree vlan 131,151,171,211,1005 priority 4096

N7K-2
N7K-2-OTV-1B(config-vlan)# spanning-tree vlan 131,151,171,211,1005 priority 8192
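The priority values you will see later in show spanning-tree are the configured priority plus the VLAN ID (the sys-id-ext field), which is why priority 4096 on VLAN 1005 displays as 5101. A quick Python check of that arithmetic:

```python
def bridge_priority(configured: int, vlan_id: int) -> int:
    """Displayed RSTP bridge priority: configured value + VLAN ID (sys-id-ext)."""
    return configured + vlan_id

print(bridge_priority(4096, 1005))  # 5101, as shown for VLAN1005 on N7K-1
print(bridge_priority(8192, 131))   # 8323, for VLAN 131 on N7K-2
```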
Step 80 Now let's bring up the interfaces facing N5K-1 and N5K-2 in the Access Layer.
N7K-1
80.1
Enable switching for interface connecting to N5K-1.
Refer to Table 19 and Figure 7 for your specific interfaces. (ex. Pod 1:e1/14,Pod2:e1/22,Pod3:e1/30)
int e1/14
switchport
switchport mode trunk
mtu 9216
80.2
N7K-2
80.3
Enable switching for interface connecting to N5K-2.
Refer to Table 19 and Figure 7 for your specific interfaces. (ex. Pod 1:e1/16,Pod2:e1/24,Pod3:e1/32)
int e1/16
switchport
switchport mode trunk
mtu 9216
80.4
Summary of Commands
N7K-1
vlan 131,151,171,211,1005
no shut
spanning-tree vlan 131,151,171,211,1005 priority 4096
int e1/14
switchport
switchport mode trunk
no shutdown
switchport trunk allowed vlan 131,151,171,211,1005
N7K-2
vlan 131,151,171,211,1005
no shut
spanning-tree vlan 131,151,171,211,1005 priority 8192
int e1/16
switchport
switchport mode trunk
no shutdown
switchport trunk allowed vlan 131,151,171,211,1005
Step 81 Check the spanning-tree from both the Nexus 7000 and the Nexus 5000.
N7K-1
N7K-1-OTV-1A# show spanning-tree vlan 1005

VLAN1005
  Spanning tree enabled protocol rstp
  Root ID    Priority    5101
             Address     0026.980d.6d42
             This bridge is the root
             Hello Time  2 sec  Max Age 20 sec

  Bridge ID  Priority    5101   (priority 4096 sys-id-ext 1005)
             Address     0026.980d.6d42
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----------------
Eth1/14          Desg FWD 2         128.142  P2p
N7K-2
N7K-1-OTV-1A# show spanning-tree vlan 131

VLAN0020
  Spanning tree enabled protocol rstp
  Root ID    Priority    4116
             Address     0026.980d.6d42
             This bridge is the root
             Hello Time  2 sec  Max Age 20 sec

  Bridge ID  Priority    4116   (priority 4096 sys-id-ext 20)
             Address     0026.980d.6d42
             Hello Time  2 sec  Max Age 20 sec  Forward Delay 15 sec

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ----------------
Eth1/14          Desg FWD 2         128.142  P2p
On the Nexus 5000 (Bridge ID details omitted):

Interface        Role Sts Cost      Prio.Nbr  Type
---------------- ---- --- --------- --------- --------
Eth1/4           Desg FWD 2         128.132   P2p
Eth1/19          Root FWD 2         128.147   P2p
Eth100/1/1       Desg FWD 4         128.1025  Edge P2p
Eth100/1/2       Desg FWD 4         128.1026  Edge P2p
Step 82 Verify that you have the correct licenses. OTV requires the LAN Advanced Services license and the
Transport Services license.

N7K-1 and N7K-2

N7K-1-OTV-1A# show license usage
Feature                      Ins  Lic Count  Status  Expiry Date  Comments
---------------------------------------------------------------------------
ENHANCED_LAYER2_PKG          No   -          Unused
SCALABLE_SERVICES_PKG        No   -          Unused
TRANSPORT_SERVICES_PKG       Yes  -          In use  Never
LAN_ADVANCED_SERVICES_PKG    Yes  -          Unused  Never
LAN_ENTERPRISE_SERVICES_PKG  Yes  -          In use  Never

Note: Be sure to confirm the status of your customer's licenses and remind them to purchase the
license before the feature grace period expires. Temporary licenses are indicated by the word
Grace in the Comments field, which reflects the grace period in days and hours left on the temporary
license. In the example below, there are 105 days and 15 hours left.

TRANSPORT_SERVICES_PKG       No   -          Unused  Grace 105D 15H
83.1  Bring up the uplink and verify the CDP neighbors.

Hldtme  Capability  Platform       Port ID
------  ----------  -------------  --------
138     R S I s     N7K-C7010      Eth1/1
123     S I s       N5K-C5010P-BF  Eth1/19
<snip>

N7K-2
N7K-2-OTV-1B(config)# int e 1/<uplink>
N7K-2-OTV-1B(config-if-range)# no shut

83.2  Verify the CDP neighbors on N7K-2.

Hldtme  Capability  Platform       Port ID
------  ----------  -------------  --------
173     R S I s     N7K-C7010      Eth1/2
148     S I s       N5K-C5010P-BF  Eth1/20

Summary of Commands
int e 1/<uplink>
no shut
NX-OS is a fully modular operating system. Most software modules don't run unless the corresponding
feature is enabled. We refer to these features that need to be specifically enabled as conditional
services. Once the service is enabled, the CLI becomes visible and the feature can be used and
configured.
84.2
Configure loopback interface for OSPF.
Refer to Table 18 - IP Addresses for Uplinks and Loopbacks and Figure 7 for your specific interfaces.
N7K-1-OTV-1A(config)# interface loopback0
N7K-1-OTV-1A(config-if)# ip address 10.1.0.X1/32
N7K-1-OTV-1A(config-if)# ip router ospf 1 area 0.0.0.0
84.3
Configure each OTV Edge's uplink interface that connects to the Nexus WAN (Core Layer).
Refer to Table 18 - IP Addresses for Uplinks and Loopbacks and Figure 7 for your specific interfaces. (ex.
Pod 1:e1/10,Pod2:e1/18,Pod3:e1/26)
N7K-1-OTV-1A(config)# interface e1/<uplink_port>
84.4
We increased the MTU on the layer 3 links to 9042 bytes. OTV encapsulates the original frame adding 42
bytes to your IP packet, so you will need to increase the MTU on all your WAN links. Since the MTU on
the core has already been adjusted to 9042, you will get an OSPF state of EXSTART until your MTU
matches the core MTU.
N7K-1-OTV-1A(config-if)# ip address 10.1.X1.Y /24
Refer to Table 18 - IP Addresses for Uplinks and Loopbacks and Figure 7 for your specific interfaces.
(ex. Pod 1:10.1.11.3,Pod2:10.1.21.5,Pod3:10.1.31.7)
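The 9042-byte figure used on the core links follows directly from the 42 bytes of encapsulation overhead OTV adds to each packet. A one-line Python sanity check:

```python
OTV_OVERHEAD = 42  # bytes OTV adds when encapsulating the original frame

def required_core_mtu(edge_mtu: int) -> int:
    """Core links must carry the original packet plus the OTV encapsulation."""
    return edge_mtu + OTV_OVERHEAD

print(required_core_mtu(9000))  # 9042, matching the configured core MTU
```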
84.5
Specify OSPF interface network type and OSPF Area.
N7K-1-OTV-1A(config-if)# ip ospf network point-to-point
N7K-1-OTV-1A(config-if)# ip router ospf 1 area 0.0.0.0
84.6  The edge device's interface towards the IP core will later be used by OTV as a join interface. Therefore, it
needs to be configured for IGMP version 3.

N7K-1-OTV-1A(config-if)# ip igmp version 3

84.7  Bring up the interface.

N7K-1-OTV-1A(config-if)# no shutdown
N7K-2
For the following steps, refer to Table 18 - IP Addresses for Uplinks and Loopbacks and Figure 7 for your
specific interfaces.
84.8
Enable OSPF feature and configure OSPF instance.
N7K-2-OTV-1B(config)# feature ospf
N7K-2-OTV-1B(config)# router ospf 1
N7K-2-OTV-1B(config-router)# log-adjacency-changes
84.9
84.10
Configure each OTV Edge's uplink interface that connects to the Nexus WAN (Core Layer).
We increased the MTU on the layer 3 links to 9042 bytes. OTV encapsulates the original frame adding 42
bytes to your IP packet, so you will need to increase the MTU on all your WAN links. Since the MTU on
the core has already been adjusted to 9042, you will get an OSPF state of EXSTART until your MTU
matches the core MTU.
Summary of Commands
N7K-1
feature ospf
router ospf 1
log-adjacency-changes
interface loopback0
ip address 10.1.0.X1/32
ip router ospf 1 area 0.0.0.0
interface e1/<uplink_port>
mtu 9042
ip address 10.1.X1.Y/24
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
ip igmp version 3
no shutdown
N7K-2
feature ospf
router ospf 1
log-adjacency-changes
interface loopback0
ip address 10.1.0.X2/32
ip router ospf 1 area 0.0.0.0
interface e1/<uplink>
mtu 9042
ip address 10.1.X4.Y/24
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
ip igmp version 3
no shutdown
85.2  Verify the OSPF interfaces.

N7K-1

N7K-1-OTV-1A# show ip ospf int bri
OSPF Process ID 1 VRF default
Total number of interface: 2
Interface  ID  Area     Cost  State     Neighbors  Status
Lo0        1   0.0.0.0  1     LOOPBACK  0          up
Eth1/10    2   0.0.0.0  4     P2P       1          up

N7K-2

N7K-2-OTV-1B# show ip ospf int bri
OSPF Process ID 1 VRF default
Total number of interface: 2
Interface  ID  Area     Cost  State     Neighbors  Status
Lo0        1   0.0.0.0  1     LOOPBACK  0          up
Eth1/12    2   0.0.0.0  4     P2P       1          up
85.3  Verify the OSPF neighbors.

N7K-1

N7K-1-OTV-1A# show ip ospf neighbors
Up Time   Address    Interface
02:49:37  10.1.11.1  Eth1/10

N7K-2

<similar output; the neighbor is reached via interface Eth1/12>
85.4
N7K-1
N7K-2
Congratulations, you've successfully configured OSPF. Please continue to the next section.
Site: A Layer 2 network that may be single-homed or multi-homed to the core network and the OTV
overlay network. Layer 2 connectivity between sites is provided by edge devices that operate in an
overlay network. Layer 2 sites are physically separated from each other by the core IP network.
Core Network: The customer backbone network that connects Layer 2 sites over IP. This network can be
customer managed, provided by a service provider, or a mix of both. OTV is transparent to the core
network because OTV flows are treated as regular IP flows.
Edge Device: A Layer 2 switch that performs OTV functions. An edge device performs typical Layer 2
learning and forwarding on the site-facing interfaces (internal interfaces) and performs IP-based
virtualization on the core-facing interfaces. The edge device can be collocated in a device that performs
Layer 3 routing on other ports. OTV functionality only occurs in an edge device.
Internal Interface: The Layer 2 interface on the edge device that connects to site-based switches or site-based
routers. The internal interface is a Layer 2 interface regardless of whether it connects to a switch or a router.
Join Interface: The interface facing the core network. The name implies that the edge device joins an
overlay network through this interface. The IP address of this interface is used to advertise reachability
of a MAC address present in this site.
MAC Routing: MAC routing associates the destination MAC address of the Layer 2 traffic with an edge
device IP address. The MAC to IP association is advertised to the edge devices through an overlay
routing protocol. In MAC routing, MAC addresses are reachable through an IP next hop. Layer 2 traffic
destined to a MAC address will be encapsulated in an IP packet based on the MAC to IP mapping in the
MAC routing table.
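As a rough illustration of MAC routing (the table entries below are hypothetical, not the ORIB format), forwarding can be modeled as a lookup that returns either a local switch port or the IP address of the remote edge device that advertised the MAC:

```python
# Hypothetical MAC routing table: remote MACs resolve to the IP of the
# advertising edge device; local MACs resolve to a physical port.
mac_table = {
    "0050.5652.0e5a": {"site": "remote", "next_hop": "10.1.14.4"},
    "0026.980d.6d42": {"site": "local",  "next_hop": "Eth1/14"},
}

def forward(dst_mac: str) -> str:
    """Decide how a Layer 2 frame is forwarded under MAC routing."""
    entry = mac_table[dst_mac]
    if entry["site"] == "remote":
        # Frame is encapsulated in IP toward the remote edge device.
        return f"encap-to {entry['next_hop']}"
    return f"switch-to {entry['next_hop']}"

print(forward("0050.5652.0e5a"))  # encap-to 10.1.14.4
```

The key point the sketch captures: for remote destinations the "next hop" of a MAC address is an IP address, not a port.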
Overlay Interface: A logical multi-access, multicast-capable interface. The overlay interface encapsulates
Layer 2 frames in IP unicast or multicast headers. The overlay interface is connected to the core via one
or more physical interfaces. You assign IP addresses from the core network address space to the physical
interfaces that are associated with the overlay interface.
Overlay Network: A logical network that interconnects remote sites for MAC routing of Layer 2 traffic.
The overlay network uses either multicast routing in the core network or an overlay server to build an
OTV routing information base (ORIB). The ORIB associates destination MAC addresses with remote edge
device IP addresses.
Multicast Control-Group: For core networks supporting IP multicast, one multicast address (the control-group
address) is used to encapsulate and exchange OTV control-plane protocol updates. Each edge
device participating in a particular overlay network shares the same control-group address with all the
other edge devices. As soon as the control-group address and the join interface are configured, the edge
device sends an IGMP report message to join the control group and thereby participates in the overlay
network. The edge devices act as hosts in the multicast network and send multicast IGMP report
messages to the assigned multicast group address.
Multicast Data-Group: To handle multicast data traffic, one or more ranges of IPv4 multicast
group prefixes can be used. The multicast group address is an IPv4 address in dotted decimal notation. A
subnet mask is used to indicate ranges of addresses. Up to eight data-group ranges can be defined. An
SSM group is used for the multicast data generated by the site.
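Since the data-group range used later in this lab is a /28, Python's ipaddress module can confirm how many group addresses such a range provides:

```python
import ipaddress

# The lab's OTV data-group range: a /28 yields 16 multicast group addresses.
data_group = ipaddress.ip_network("239.1.2.0/28")

print(data_group.num_addresses)       # 16
print(data_group[0], data_group[-1])  # 239.1.2.0 239.1.2.15
```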
Authoritative Edge Device: An edge device that forwards Layer 2 frames into and out of a site over the
overlay interface. For the first release of OTV, there is only one authoritative edge device for all MAC
unicast and multicast addresses per VLAN. Each VLAN can be assigned to a different authoritative edge
device.
Select the Join interface and establish OSPF connectivity with the Core.
Enable OTV
Configure the Overlay interface
Join the Data-Center site to the Core
Extend a VLAN across the overlay
86.2
The OTV Site VLAN is used to communicate with other OTV edge devices in the local site. If our site had
dual edge devices, it would be used to elect the active forwarder device in the site.
Ensure that the site VLAN is active on at least one of the edge device ports.
86.3
Configure the site identifier. We will use 0x1 for Site A on N7K-1.
OTV uses the site identifier to support dual site adjacency. Dual site adjacency uses both site VLAN and
site identifier to determine if there are other edge devices on the local site and if those edge devices can
forward traffic. Ensure that the site identifier is the same on all neighbor edge devices in the site.
You must configure the site identifier in Cisco NX-OS release 5.2(1) or later releases.
The overlay network will not become operational until you configure the site identifier.
The site VLAN and site identifier must be configured before entering the no shutdown command for any
overlay interface, and must not be modified while any overlay is up within the site.
86.4
Create an overlay interface.
interface Overlay 1
86.5
Specify the multicast group OTV will use for control plane traffic.

otv control-group 239.<X>.1.1

The control-group address is used for control-plane operations. Each edge device joins the group
and sends control/protocol packets to this group. This is used for discovery of other edge devices.
86.6
Specify the multicast address range OTV will use for multicast data traffic.
otv data-group 239.X.2.0/28
The data-group-range specifies a multicast group range that is used for multi-destination traffic.
86.7  Specify the join interface.

otv join-interface Ethernet1/<uplink>

After you enter the join command, an informational message reminds you that IGMPv3 is required on
the join interface. This message can be ignored if IGMPv3 was already configured as instructed
earlier in the guide.

This interface is used for overlay operations such as discovering remote edge devices, providing the
source address for OTV-encapsulated packets, and the destination address for unicast traffic sent by
remote edge devices.
86.8
Specify the VLANs to be extended across the overlay. We will extend VLAN 131,151,171, and 211.
otv extend-vlan 131,151,171,211
no shutdown
OTV only forwards Layer 2 packets for VLANs that are in the specified range for the overlay interface.
N7K-2-OTV-XB
86.9
Enable the OTV feature.
feature otv
86.10
86.11
Configure the site identifier. We will use 0x2 for Site B on N7K-2.
86.12
interface Overlay 1
86.13
Specify the multicast group OTV will use for control plane traffic.
86.14
Specify the multicast address range OTV will use for multicast data traffic.
86.15
86.16
Specify the VLANs to be extended across the overlay. We will extend VLAN 131,151,171, and 211.
Summary of Commands
N7K-1
feature otv
otv site-vlan 1005
otv site-identifier 0x1
interface Overlay 1
otv control-group 239.<X>.1.1
otv data-group 239.<X>.2.0/28
otv join-interface Ethernet1/<uplink>
otv extend-vlan 131,151,171,211
no shutdown
N7K-2
feature otv
otv site-vlan 1005
otv site-identifier 0x2
interface Overlay 1
otv control-group 239.<X>.1.1
otv data-group 239.<X>.2.0/28
otv join-interface Ethernet1/<uplink>
otv extend-vlan 131,151,171,211
no shutdown
Step 87 First, let's display the OTV overlay status for your sites:

N7K-1-OTV-1A(config-if-overlay)# show otv overlay 1

OTV Overlay Information
Site Identifier 0000.0000.0000

Overlay interface Overlay1

 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 131 151 171 211 (Total:4)
 Control group       : 239.1.1.1
 Data group range(s) : 239.1.2.0/28
 Join interface(s)   : Eth1/10 (10.1.11.3)
 Site vlan           : 1005 (up)
 AED-Capable         : Yes
 Capability          : Multicast-Reachable

On N7K-2:

 VPN name            : Overlay1
 VPN state           : UP
 Extended vlans      : 131 151 171 211 (Total:4)
 Control group       : 239.1.1.1
 Data group range(s) : 239.1.2.0/28
 Join interface(s)   : Eth1/12 (10.1.14.4)
 Site vlan           : 1005 (up)
 AED-Capable         : Yes
 Capability          : Multicast-Reachable

Note: Make sure the state is up, and that the VLANs and addresses are correct.
87.1  Next, let's check the status of the VLANs extended across the overlay.

Note: The authoritative device is the OTV node elected to forward traffic to/from the L3 core. For any
given VLAN, only one authoritative edge device (AED) will be elected in a site. The * symbol next to
the VLAN ID indicates that the device is the AED for that VLAN.

Vlan  State   Overlay
----  ------  --------
131   active  Overlay1
151   active  Overlay1
171   active  Overlay1
211   active  Overlay1

87.2  Repeat on N7K-2; the same four VLANs show as active on Overlay1.
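For illustration only (this is not Cisco's published election algorithm): with two edge devices in a site, any deterministic split of VLANs between the device ordinals guarantees exactly one AED per VLAN. A hypothetical parity-based sketch:

```python
def is_aed(vlan_id: int, ordinal: int, num_edges: int = 2) -> bool:
    """Hypothetical AED election: assign each VLAN to exactly one edge
    device by taking the VLAN ID modulo the number of edge devices."""
    return vlan_id % num_edges == ordinal

vlans = [131, 151, 171, 211]
# All of this lab's extended VLANs are odd, so under this toy scheme the
# device with ordinal 1 would be the AED for every one of them.
print([v for v in vlans if is_aed(v, 1)])  # [131, 151, 171, 211]
```

In this single-edge-device lab, of course, the lone device is the AED for all VLANs; the sketch only shows why each VLAN ends up with exactly one forwarder.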
Next, let's see how many OTV edge devices are present at the local site. The * symbol next to the
hostname indicates that this is the local node.

System-ID       Up Time   Ordinal
--------------  --------  -------
0026.980d.6d42  00:05:58  0
0026.980d.92c2  00:05:37  1

Note: If this were a dual-homed site, two nodes would be listed by this command. The other node
would not have a * symbol next to it.

On N7K-2:

System-ID       Up Time   Ordinal
--------------  --------  -------
0026.980d.6d42  00:10:09  0
0026.980d.92c2  00:09:49  1
Step 88 Verify that we have connected to the peer edge device at the remote site.

System-ID       Dest Addr  Up Time   State
--------------  ---------  --------  -----
0026.980d.92c2  10.1.14.4  1w2d      UP

On N7K-2:

System-ID       Dest Addr  Up Time   State
--------------  ---------  --------  -----
0026.980d.6d42  10.1.11.3  07:16:05  UP
88.1  The MAC address table will report MAC addresses of end hosts and devices learnt on the VLAN. If
no traffic was ever sent across the overlay, then only the local router MAC will be populated in the
table.
88.2  The MAC address in the table is actually the local router MAC; let's verify this:
Refer to Table 18 - IP Addresses for Uplinks and Loopbacks for the correct uplink interface.

show interface e1/<uplink> mac-address

N7K-1-OTV-1A# show interface e1/10 mac-address
------------------------------------------------------------------
Interface     Mac-Address     Burn-in Mac-Address
------------------------------------------------------------------
Ethernet1/10  0026.980d.6d42  1cdf.0fd2.0529
Step 89 Display the OTV ARP/ND L3->L2 Address Mapping Cache. In OTV, we also cache ARP resolution for
MAC addresses that are not local to the site and that are learnt via the overlay. If no traffic was ever
sent across the overlay, then no ARP would have been resolved, and so no entries are cached by the
OTV process.
N7K-1-OTV-1A# show otv arp-nd-cache
OTV ARP/ND L3->L2 Address Mapping Cache
Device  Port Group  Virtual Switch  Uplink Port  VLAN  Connecting Device  Connecting Ports
------  ----------  --------------  -----------  ----  -----------------  ----------------
ESX1    VM-Client   vSwitch1        vmnic0       131   N5K-1              E1/9
ESX1    Local Lan   vSwitch1        vmnic0       24    N5K-1              E1/9
ESX2    VM-Client   vSwitch1        vmnic1       131   N5K-2              E1/10
ESX2    Local Lan   vSwitch1        vmnic1       24    N5K-2              E1/10

* ESX1 uses physical adapter vmnic0 (port 1 on the 10G CNA) as the physical uplink for vSwitch1 to N5K-1.
* ESX2 uses physical adapter vmnic1 (port 2 on the 10G CNA) as the physical uplink for vSwitch1 to N5K-2.

Note: Remember that only VLANs 131 and 151 have been configured to stretch across the OTV overlay
between the two sites. VLAN 24 is local to each of the two individual sites.
Step 91 VM-Client: Use Cisco Discovery Protocol (CDP) from within the VMware vSphere Client to verify that the
physical adapter of the ESX host is connected to the site's 10G access device at port Eth1/9.

91.1  Identify the Virtual Switch vSwitch1.

91.2  Verify that the active CNA adapter for ESX1 (vmnic0) is connected to N5K-1.
Click on the bubble icon on the right side of the corresponding physical adapter vmnic0.

91.3  Verify that the active CNA adapter for ESX2 (vmnic1) is connected to N5K-2.
Click on the bubble icon on the right side of the corresponding physical adapter vmnic1.
92.1 Right-click on the virtual machine Server 2003R2 and select Edit Settings from the pop-up menu.
92.2 Click on Network Adapter. Under Network label, select the Local Lan port group. Click OK.
92.3
Click on the Virtual Machine ClientXP. Click on the Open Console icon to connect to the VM's desktop.
Within the Console of the VM, on the desktop, double-click on the PingServer icon.
This will start a continuous ping between the local ClientXP VM (10.1.131.33) and the Server 2003R2 VM (10.1.131.31).
Notice that Server 2003R2 is unreachable due to the lack of Layer 2 connectivity between the VMs.
Note:
Leave the continuous ping running and the Console window open for further lab steps.
Right-click on the Virtual Machine Server 2003R2 to open the Action menu for this VM.
Choose Migrate within the Action menu to start the VMotion process.
94.3
94.4
94.5 For vMotion Priority, leave the default setting of High Priority and click on Next.
94.6 Verify the selected choices and click on Next to start the VMotion process.
94.7 Monitor the Console of the VM Server 2003R2 during the VMotion process.
94.8 When the VMotion process nears completion, network connectivity between the VM ClientXP (10.1.131.33) and the VM Server 2003R2 (10.1.131.31) is established. Therefore the ping between them succeeds.
The vMotion VLAN itself is extended over the WAN. However, since vMotion can technically work across an IP boundary, we will also test Layer 2 activity to show that there is no trickery here.
Step 95 Configure both Virtual Machines to use the port group VM-Client. As demonstrated in previous lab
steps, this port group uses a VLAN that has been extended between the two sites via OTV:
95.1 Click on the Virtual Machine Server 2003R2 to highlight this VM. Then right-click to open the Action menu for this VM. Choose Edit Settings within the Action menu to change the virtual NIC settings of the VM.
95.2 Choose Network Adapter 1 under Hardware. In the Network Connection area, change the Network Label to VMTRAFFIC and confirm the settings with OK.
95.3 Verify that the port group for the VM Server 2003R2 has been changed to VMTraffic.
95.4 Repeat the steps above for the VM ClientXP.
You will lose network connectivity between the two VMs while one VM is connected to the port group VM-Client and the other VM is still connected to Local Lan. This is because the two port groups are mapped to two different Layer 2 domains.
95.5 Verify that the VM Server 2003R2-Clone has Layer 2 network connectivity to the VM Server 2003R2 while both are connected to the port group VM-Client and reside within the same site.
95.6 Migrate (VMotion) the VM Server 2003R2 back to Site A. During and after this migration the VM ClientXP will still have connectivity to the VM Server 2003R2:
95.7 Click on the Virtual Machine Server 2003R2 to highlight this VM. Then right-click to open the Action menu for this VM.
95.8 Choose Migrate within the Action menu to start the VMotion process.
95.9
95.10
95.11
95.12
95.13
Note:
While the VMotion is progressing, network connectivity between the VM ClientXP (10.1.131.33) and the VM Server 2003R2 (10.1.131.31) remains active. Therefore the ping between them succeeds.
95.14 Check on the local Nexus 7000 that the MAC addresses of the remote VM servers were learned on the local site and that ARP table entries, mapping remote IPs and MACs, were cached successfully.
Your MAC addresses will be different depending on what vSphere assigns your VMs.
If the Authoritative Edge Device (AED) is the local node, the remote MAC address will be learned
through the Overlay. If the Nexus 7000 is not the Authoritative Edge Device the remote MAC address will
be learned through the interconnection to the AED Node.
N7K-2-OTV-1B# show mac address-table
Legend:
        * - primary entry, G - Gateway MAC, (R) - Routed MAC, O - Overlay MAC
        age - seconds since last seen, + - primary entry using vPC Peer-Link
   VLAN   MAC Address      Type     age   Secure NTFY  Ports/SWID.SSID.LID
---------+-----------------+--------+---------+------+----+------------------
G  -      0026.980d.92c2   static   -     F      F    sup-eth1(R)
O  151    0050.5670.e096   dynamic  0     F      F    Overlay1
*  151    0050.5674.b27f   dynamic  0     F      F    Eth1/16
O  151    0050.567b.cdd7   dynamic  0     F      F    Overlay1
O  211    0016.9dad.8447   dynamic  0     F      F    Overlay1
*  211    0050.5676.bc47   dynamic  0     F      F    Eth1/16
O  211    0050.567d.6c56   dynamic  0     F      F    Overlay1
O  211    0050.567e.d107   dynamic  0     F      F    Overlay1
O  211    02a0.9811.5474   dynamic  0     F      F    Overlay1
95.15 Age: 00:01:55, Expires In: 00:06:04
95.16 Age: 00:00:46, Expires In: 00:07:13
You can check reachability of remote MACs through the OTV route command.
MAC-Address     Metric  Uptime    Owner    Next-hop(s)
--------------  ------  --------  -------  ------------
0050.56b6.0000  42      03:37:33  overlay  N7K-2-OTV-1B
0050.56b6.0006  1       00:08:34  site     Ethernet1/14
0050.56b6.0007  42      00:30:10  overlay  N7K-2-OTV-1B
0050.5672.b514  1       00:08:41  site     Ethernet1/14
0050.5678.38a6  42      00:08:41  overlay  N7K-2-OTV-1B

MAC-Address     Metric  Uptime    Owner    Next-hop(s)
--------------  ------  --------  -------  ------------
0050.56b6.0000  1       03:38:04  site     Ethernet1/16
0050.56b6.0006  42      00:09:05  overlay  N7K-1-OTV-1A
0050.56b6.0007  1       00:30:41  site     Ethernet1/16
0050.5672.b514  42      00:09:11  overlay  N7K-1-OTV-1A
0050.5678.38a6  1       00:09:12  site     Ethernet1/16
Congratulations! You have successfully migrated a VM across data center sites, while the VM remained reachable via Layer 2 thanks to Cisco Overlay Transport Virtualization (OTV).
EXERCISE OBJECTIVE
In this exercise you will use VMware vSphere to migrate a Virtual Machine to SAN attached storage, configure
the Virtual Machine networking, and add VM disks. After completing these exercises you will be able to meet
these objectives:
96.2 Name the VM Server 2003R2-Clone. Click on the FlexPod_DC_1 datacenter. Then click Next.
96.3 Select FlexPod_Mgmt for the cluster. Click Next.
96.4 Select ESX1 for the host. Click Next.
96.5 For Datastore, select the Netapp-SAN (FC shared storage). Click Next.
96.6 Click the Same format as source radio button, then click Next.
96.7 Use the default settings. Click Next until you get to the final dialog box. Click Finish.
96.8 Wait for the Clone to complete.
97.2 Click on the Virtual Machine Console button in the toolbar, then click in the console window.
97.3 You should already be automatically logged on. If needed, press CTRL-ALT-INSERT (instead of CTRL-ALT-DEL). Alternatively, select the VM menu > Guest > Send Ctrl+Alt+Del to get to the Windows logon window. Authenticate with administrator/1234Qwer.
97.4 Change the Server name and IP address by double-clicking on the MakeMe Server1 shortcut. This launches a batch file that changes the computer name to server1 and the IP address to 10.1.131.31. Allow the computer to restart.
97.5 After the server restarts, verify that the hostname is SERVER1 and the IP address is 10.1.131.31. The background image should reflect this.
Step 98
Step 99 Check that both VMs' virtual NIC settings are in the ESX host's vSwitch0 and in the proper Port Group.
99.1 Select the ESX host (ESX1 (10.1.111.21) in this example), select the Configuration tab, select Networking under Hardware, select the Virtual Switch tab, and verify that the VM NIC is in the Port Group.
99.2 If the VM NIC is not in the proper Port Group, select the VM (Server 2003R2 in this example), right-click on it and select Edit Settings from the pop-up menu.
99.3 Select the Network adapter, and change the Port Group under the Network Label drop-down.
100.2 Select the Change both host and datastore radio button and click Next.
100.3
100.4
100.5
100.6
100.7
102.3
102.4 Select the Create a new virtual disk radio button, and then click Next.
102.5 Change the Disk Size to 3 GB, select the Specify a datastore radio button, and then click Browse.
102.6 Select the Netapp-SAN-1 datastore, then click OK. Back at the Create a Disk window, click Next.
102.7
102.8
102.9
102.10
102.11
102.12
102.13
102.14
102.15 Right-click in the Disk1 Unallocated window and select New Volume from the pop-up menu. Go through the wizard using the default settings.
102.16 Right-click on the New Volume and select Format. Use the default settings in the pop-up windows. Close the Computer Management window.
102.17
12 SUMMARY
In this lab you:
12.1 FEEDBACK
We would like to improve this lab to better suit your needs, and for that we need your feedback. Please take 5 minutes to complete the online feedback for this lab. We carefully read and consider your scores and comments, and incorporate them into the content program.
Just click on the link below and answer the online questionnaire.
Click here to take survey
Thank you!
103.2 Using the console from each switch, copy the appropriate file to running-config:
Cisco MDS9124
MDS9124# copy tftp://10.1.111.100/mds-base.cfg running-config
Trying to connect to tftp server......
Connection to server Established. Copying Started.....
|
<snip>
Note:
You will have to run the copy twice because some features are not active when the configuration is first applied.
NEXUS 5000 RECOVERY WHEN THE KICKSTART AND SYSTEM FILES ARE ON BOOTFLASH
Cisco Nexus 5010 A or B - N5K-1 or N5K-2
Step 104 Use the directory command to determine if the kickstart and system files required for the Nexus 5000
to work are stored locally in bootflash. You will need these file names in the boot variables set for the
Nexus 5000.
loader> dir
bootflash:
lost+found
config.cfg
license_SSI14100CHE_4.lic
n5000-uk9-kickstart.5.0.2.N2.1.bin
n5000-uk9.5.0.2.N2.1.bin
<snip>
104.1
104.2
104.3
104.4
N5K-1# conf t
N5K-1(config)# boot system bootflash:n5000-uk9.5.0.2.N2.1.bin
N5K-1(config)# boot kickstart bootflash:n5000-uk9-kickstart.5.0.2.N2.1.bin
N5K-1(config)# copy run st
[########################################] 100%
NEXUS 5000 RECOVERY WHEN THE KICKSTART AND SYSTEM FILES ARE NOT ON BOOTFLASH
Cisco Nexus 5010 A or B - N5K-1 or N5K-2
Step 105 Use the set command to assign an IP address to the management interface:
loader> set ip 10.1.111.1 255.255.255.0
105.1
105.2 Once the kickstart is booted, configure the IP address on the management interface:
switch(boot)# conf t
switch(boot)(config)# int mgmt0
switch(boot)(config-if)# ip address 10.1.111.1 255.255.255.0
switch(boot)(config-if)# no shut
switch(boot)(config-if)# end
105.3 Copy the kickstart and system files from the tftp server to bootflash:
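The copy commands themselves are not reproduced in this copy of the guide. Using the lab's TFTP server and the image names shown in the boot-variable commands below, they would take a form like the following (a sketch; adjust server IP and file names to your environment):

```
switch(boot)# copy tftp://10.1.111.100/n5000-uk9-kickstart.5.0.2.N2.1.bin bootflash:
switch(boot)# copy tftp://10.1.111.100/n5000-uk9.5.0.2.N2.1.bin bootflash:
```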
105.4
105.5
105.6
N5K-1# conf t
N5K-1(config)# boot system bootflash:n5000-uk9.5.0.2.N2.1.bin
N5K-1(config)# boot kickstart bootflash:n5000-uk9-kickstart.5.0.2.N2.1.bin
105.7
MDS9124 RECOVERY WHEN THE KICKSTART AND SYSTEM FILES ARE ON BOOTFLASH
Cisco MDS9124
Step 106 Complete these steps on the MDS9124
106.1 Use the directory command to view the files stored on bootflash.
loader> dir
bootflash:
  12288      lost+found/
  2296       mts.log
  18723840   m9100-s2ek9-kickstart-mz.5.0.1a.bin
  56219997   m9100-s2ek9-mz.5.0.1a.bin
  2995       config.cfg
106.2
106.3
106.4
106.5
MDS9124# conf t
MDS9124(config)# boot system bootflash:m9100-s2ek9-mz.5.0.1a.bin
MDS9124(config)# boot kickstart bootflash:m9100-s2ek9-kickstart-mz.5.0.1a.bin
MDS9124(config)# end
106.6
MDS9124 RECOVERY WHEN THE KICKSTART AND SYSTEM FILES ARE NOT ON BOOTFLASH
Step 107 Complete these steps on the MDS9124
107.1 Use the network command to set the IP address and mask for the management interface:
loader> network --ip=10.1.111.40 --nm=255.255.255.0
107.2
107.3
switch(boot)# conf t
switch(boot)(config)# int mgmt0
switch(boot)(config-if)# ip address 10.1.111.40 255.255.255.0
switch(boot)(config-if)# no shut
switch(boot)(config-if)# end
107.4
107.5
107.6
107.7
MDS9124# conf t
MDS9124(config)# boot system bootflash:m9100-s2ek9-mz.5.0.1a.bin
MDS9124(config)# boot kickstart bootflash:m9100-s2ek9-kickstart-mz.5.0.1a.bin
107.8
Half the total number of disks in the environment will be assigned to this controller and half to the other controller. Divide the number of disks in half and use the result in the following command for the <# of disks>.
108.6 Type disk assign -n <# of disks>.
108.7 Type halt to reboot the controller.
Controller B - NTAP1-B
108.8
During controller boot, when prompted to Press CTRL-C for special boot menu, press CTRL-C.
108.9
At the menu prompt, choose option 5 for Maintenance Mode.
108.10 Type Yes when prompted with Continue to boot?
108.11 Type disk show.
108.12 Reference the Local System ID: value for the following disk assignment.
Note:
Half the total number of disks in the environment will be assigned to this controller and half to the other controller. Divide the number of disks in half and use the result in the following command for the <# of disks>.
108.13 Type disk assign -n <# of disks>.
108.14 Type halt to reboot the controller.
108.15
Type disk show on the command line for each controller to generate a list of disks owned by
each respective controller.
POOL   SERIAL NUMBER
-----  -------------
Pool0  JLVD3HRC
Pool0  JLVD2NBC
Pool0  JLVD3KPC
Pool0  JLVBZW1C
Pool0  JLVD3HTC
Pool0  JLVBZ9ZC
This step is not necessary if Data ONTAP 7.3.5 is already installed on your storage controllers.
109.2 After the netboot interface is configured, netboot from the 7.3.5 image.
netboot Incomplete
109.3
109.4
109.5
109.6
Note:
109.7
You might receive a message saying that the cluster failover is not yet licensed. That is fine, because
we will license it later.
110.14
110.15
110.16
110.17
110.18
110.19
110.20
110.21
110.22
110.23
110.24
110.25
110.26
110.27
110.28
110.29
110.30
110.31
110.32
110.33
110.34
Enter 10.1.111.254 as the IP address for the default gateway for the storage system.
Enter 10.1.111.100 as the IP address for the administration host.
Enter Nevada as the location for the storage system.
Answer y to enable DNS resolution.
Enter dcvlabs.lab as the DNS domain name.
Enter 10.1.111.10 as the IP address for the first nameserver.
Answer n to finish entering DNS servers, or answer y to add up to two more DNS servers.
Answer n for running the NIS client.
Answer y to configuring the SP LAN interface.
Answer n to setting up DHCP on the SP LAN interface.
110.35
110.36
110.37
110.38
110.39
110.40
110.41
110.42
110.43
110.44
To verify the successful setup of Data ONTAP 7.3.5, make sure that the terminal prompt is
available and check the settings that you entered in the setup wizard.
Step 111 Installing Data ONTAP to the onboard flash storage DONE/INSTRUCTOR
Duration: 2 minutes
Note:
For this step, you will need a web server to host your ONTAP installation file.
Controller A - NTAP1-A
111.1 Install the Data ONTAP image to the onboard flash device.
software update Incomplete
111.2 After this is complete, type download and press Enter to download the software to the flash device.
Controller B - NTAP1-B
111.3 Install the Data ONTAP image to the onboard flash device.
software update Incomplete
111.4 After this is complete, type download and press Enter to download the software to the flash device.
111.5 Verify that the software was downloaded successfully by entering software list on the command line and verifying that the Data ONTAP zip file is present.
112.2 To verify that the licenses installed correctly, enter the command license on the command line and verify that the licenses listed above are active.
Step 113 Start FCP service and make sure of proper FC port configuration. DONE/INSTRUCTOR
Duration: 3 minutes
On both controllers - NTAP1-A and NTAP1-B
113.1 Start fcp and verify its status.
NTAP1-A> fcp start
Fri May 14 06:48:57 GMT [fcp.service.startup:info]: FCP service startup
NTAP1-A> fcp status
FCP service is running.
113.2 The fcadmin config command confirms that our adapters are configured as targets.
113.3 If either FC port 0c or 0d is listed as initiator, use the following command to change its status to target.
113.4 Re-run fcadmin config: both ports should now state either target or (Pending) target.
113.5 Reboot the storage controller to enable the cluster feature and also to enable the FC ports as target ports as necessary.
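The command referenced in 113.3 is not shown in this copy of the guide. In Data ONTAP 7-Mode the syntax is fcadmin config -t <type> <adapter>; for the ports named in the step it would look like this (a sketch; the change takes effect only after the reboot in the following step):

```
NTAP1-A> fcadmin config -t target 0c
NTAP1-A> fcadmin config -t target 0d
```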
This command usually finishes quickly. Depending on the state of each disk, some or all of the disks might
need to be zeroed to be added to the aggregate. This might take up to 60 minutes to complete.
114.2
Status                  Options
raid_dp, aggr, 32-bit   root
raid_dp, aggr, 32-bit
115.2
115.3
115.4
115.5 Type rdfile /etc/rc and verify that the commands from the previous steps are in the file correctly.
115.6
Verify that the interface ifgrp1-211 shows up in the output of the command ifconfig -a.
116.7
116.8
116.9
116.10
116.11
116.12
116.13
116.14
116.15
Verify that the root password has been set up by logging into the controller with the new credentials. To verify that telnet is disabled, try to access the controller via telnet; the connection should be refused. To verify that HTTP access has been disabled, confirm that FilerView is reachable only through https, not http.
Step 117 Create SNMP requests role and assign SNMP login privileges. Duration: 3 minutes
On both controller A and B - NTAP1-A and NTAP1-B
117.1
Execute the following command:
useradmin role add snmpv3role -a login-snmp
117.2
To verify, execute the useradmin role list on each of the storage controllers.
Step 118 Create SNMP management group and assign SNMP request role to it. Duration: 3 minutes
118.1
Execute the following command:
useradmin group add snmpv3group -r snmpv3role
118.2
To verify, execute useradmin group list on each of the storage controllers.
Step 119 Create SNMP user and assign it to SNMP management group. Duration: 3 minutes
119.1
Execute the following command:
useradmin user add Incomplete -g snmpv3group
Note:
You will be prompted for a password after creating the user. Use 1234Qwer when prompted.
119.2 To verify, execute useradmin user list on each of the storage controllers.
121.2 To verify, execute the command snmp community on each of the storage controllers.
Step 122 Set SNMP contact, location, and trap destinations for each of the storage controllers
Duration: 6 minutes
On both controller A and B - NTAP1-A and NTAP1-B
122.1 Execute the following commands:
snmp contact pephan@cisco.com
snmp location Nevada
snmp traphost add ntapmgmt.dcvlabs.lab
snmp traphost add snmp_trap_dest??
122.2
Netapp1> snmp
contact: pephan@cisco.com
location: TNI
authtrap: 0
init: 0
traphosts: 10.1.111.10 (10.1.111.10) <10.1.111.10>
community:
124.2 Create the volume that will later be exported to the ESXi servers as an NFS datastore.
124.3 Set the Snapshot reservation to 0% for this volume. Disable the automatic snapshot option for this volume.
124.4 Create the volume that will hold the ESXi boot LUNs for each server.
vol create ESX_BOOT_A -s none aggr1 20g
vol create ESX1_BOOT_A -s none aggr1 20g
vol create ESX2_BOOT_A -s none aggr1 20g
vol create ESX3_BOOT_A -s none aggr1 20g
124.5 Set the Snapshot reservation to 0% for this volume. Disable the automatic snapshot option for this volume.
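The reservation and snapshot settings called out in 124.3 and 124.5 map to standard Data ONTAP 7-Mode commands. A sketch for one of the boot volumes named above (repeat per volume) would be:

```
NTAP1-A> snap reserve ESX_BOOT_A 0
NTAP1-A> snap sched ESX_BOOT_A 0 0 0
NTAP1-A> vol options ESX_BOOT_A nosnap on
```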
Data Center Virtualization Lab 6: Overlay Transport Virtualization
Note:
This volume will be used to store VM swap files. Since swap files are temporary, they do not need snapshots or deduplication.
125.2 Disable the Snapshot schedule and set the Snapshot reservation to 0% for this volume. Disable the automatic snapshot option for this volume.
Verification
NTAP1-A> snap sched VDI_SWAP
Volume VDI_SWAP: 0 0 0
NTAP1-A> vol options VDI_SWAP
nosnap=on, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
sis on /vol/VDI_VFILER1_DS
sis on /vol/ESX1_BOOT_A
sis on /vol/ESX2_BOOT_A
sis on /vol/ESX3_BOOT_A
sis on /vol/vol1
sis config -s 0@sun-sat /vol/VDI_VFILER1_DS
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/ESX1_BOOT_A
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/ESX2_BOOT_A
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/ESX3_BOOT_A
sis config -s 0@sun-sat /vol/vol1
126.2
sis start -s /vol/VFILER1_DS
127.2
State:    Enabled
Schedule: 0@mon,tue,wed,thu,fri,sat,sun
          0@sun-sat
Status:   Idle
Progress: Idle for 00:01:53
127.3
NTAP1-A> df -s
Filesystem            used   saved   %saved
<snip>
/vol/INFRA_DS_1/      156    0       0%
/vol/VMHOST_BOOT_A/   136    0       0%
127.4
Status               Options
raid_dp, flex, sis   guarantee=none
raid_dp, flex, sis   guarantee=none
raid_dp, flex, sis   guarantee=none
raid_dp, flex        guarantee=none
raid_dp, flex        nosnap=on, guarantee=none
raid_dp, flex, sis   guarantee=none
Here are the LAB INSTRUCTOR commands for enabling deduplication for all the lab volumes.
sis on /vol/LAB_VFILER1_DS
sis on /vol/LAB_VFILER2_DS
sis on /vol/LAB_VFILER3_DS
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/LAB_VFILER1_DS
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/LAB_VFILER2_DS
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/LAB_VFILER3_DS
sis on /vol/LAB_VFILER210_DS
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/LAB_VFILER210_DS
sis on /vol/INFRA_DS_XEN
sis config -s 0@mon,tue,wed,thu,fri,sat,sun /vol/INFRA_DS_XEN
In this step we will create secure IP spaces (a logical routing table specific to each vFiler unit). Each IP space provides an individual IP routing table per vFiler unit. The association between a VLAN interface and a vFiler unit allows all packets to and from the specific vFiler unit to be tagged with the appropriate VLAN ID for that VLAN interface. IP spaces are similar to the concept of VRFs in the Cisco world.
Controller A - NTAP1-A
128.1 Type ipspace create ips-vfiler211 to create the IP space for the vdi_vfiler_211 vFiler unit.
NTAP1-A> ipspace create ips-vfiler111
NTAP1-A> ipspace create ips-vfiler211
128.2
Assign interfaces to our IP spaces using the command ipspace assign ips-vfiler211 ifgrp1-211.
128.3 Verify that the IP space was created and assigned successfully by issuing the command ipspace list and verifying that the IP space and the interface assigned to it are listed.
Note:
You can only create one vFiler unit at a time. The commands below should NOT be copied and pasted all at once.
129.2 Accept the IP address that you specified on the command line by pressing Enter.
129.3 Type ifgrp1-211 for the interface to assign to the vFiler unit.
129.4 Press Enter to accept the default subnet mask.
129.5 If necessary, type 10.1.111.10 as the IP address of the administration host for the vFiler unit.
129.6 Enter n for running a DNS resolver.
129.7 Enter n for running an NIS client.
129.8 Enter a password for the vFiler unit.
129.9 Enter the same password a second time to confirm.
129.10 Enter y for setting up CIFS.
129.11 To verify that the vFiler unit was created successfully, enter the command vfiler status and verify that the vFiler unit is listed and that its status is running.
Step 130 Mapping the necessary infrastructure volumes to the infrastructure vFiler unit
DONE/INSTRUCTOR
Duration: 5 minutes
In this step we are going to add a datastore volume and a swap volume to each vfiler. This will provide each lab
pod the required volumes to support a virtualization infrastructure.
Controller A - NTAP1-A
130.1 Type vfiler add vdi_vfiler_211 /vol/VDI_VFILER1_DS. The add subcommand adds the specified paths to an existing vFiler unit.
NTAP1-A> vfiler add vdi_vfiler_211 /vol/VDI_SWAP /vol/VDI_VFILER1_DS
<snip>
Mon Sep 26 11:00:26 PDT [cmds.vfiler.path.move:notice]: Path /vol/VDI_SWAP was mov
ed to vFiler unit "vdi_vfiler_211".
Mon Sep 26 11:00:26 PDT [cmds.vfiler.path.move:notice]: Path /vol/VDI_VFILER1_DS w
as moved to vFiler unit "vdi_vfiler_211".
130.2 To verify that the volumes were assigned correctly, enter the command vfiler run vdi_vfiler_211 vol status and then check that the two volumes are listed in the output.
Volume               State    Status          Options
...                  online   raid_dp, flex   nosnap=on, fs_size_fixed=on, guarantee=none
VDI_SWAP             online   raid_dp, flex   nosnap=on, guarantee=none, fractional_reserve=0
VDI_VFILER211_ROOT   online   raid_dp, flex   guarantee=none, fractional_reserve=0
131.2 Allow the ESXi servers read and write access to the infrastructure NFS datastore. The following command exports /vol/VDI_VFILER1_DS and /vol/VDI_SWAP.
131.3 To verify that the volumes were exported successfully, enter the command exportfs and make sure the volumes are listed.
vdi_vfiler_211@NTAP1-A> exportfs
/vol/VDI_VFILER1_DS
-sec=sys,rw=10.1.111.0/27:10.1.211.0/27,root=10.1.111.0/27
:10.1.211.0/27,nosuid
/vol/VDI_SWAP
-sec=sys,rw=10.1.111.0/27:10.1.211.0/27,root=10.1.111.0/27:10.1.21
1.0/27,nosuid
/vol/VDI_VFILER211_ROOT -sec=sys,rw=10.1.10.100,root=10.1.10.100
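The export command referenced in 131.2 is not reproduced in this copy. Based on the exportfs output above, it would take a form like the following (a sketch; exportfs -p persists the rule to /etc/exports, run in the vFiler context):

```
vfiler run vdi_vfiler_211 exportfs -p sec=sys,rw=10.1.111.0/27:10.1.211.0/27,root=10.1.111.0/27:10.1.211.0/27,nosuid /vol/VDI_VFILER1_DS
vfiler run vdi_vfiler_211 exportfs -p sec=sys,rw=10.1.111.0/27:10.1.211.0/27,root=10.1.111.0/27:10.1.211.0/27,nosuid /vol/VDI_SWAP
```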
132.2
!!! Before
ntap1-A> priority show
Priority scheduler is stopped.
NTAP1-A> priority on
Priority scheduler starting.
!!! After
ntap1-A> priority show
132.3 Set the priority level for operations sent to the volume relative to other volumes. The value may be one of VeryHigh, High, Medium, Low, or VeryLow. A volume with a higher priority level will receive more resources than a volume with a lower priority level. This option sets derived values for scheduling (CPU), the concurrent disk I/O limit, and NVLOG usage for the volume, based on the settings of other volumes in the aggregate.
priority set volume INFRA_DS_1 level=VeryHigh
priority set volume ESX1_BOOT_A level=VeryHigh cache=keep
priority set volume ESX2_BOOT_A level=VeryHigh cache=keep
priority set volume ESX3_BOOT_A level=VeryHigh cache=keep
priority set volume VDI_VFILER1_DS level=VeryHigh cache=keep
priority set volume VDI_SWAP level=Medium cache=reuse
132.4 To verify that the priority levels were set correctly, issue the command priority show volume and verify that the volumes are listed with the correct priority level.
133.2 Verify that the igroups were created successfully by entering the command igroup show and verify that the output matches what was entered.
(logged in)
(logged in)
(logged in)
(logged in)
133.3 Verify that the igroups were created successfully by entering the command igroup show and verify that the output matches what was entered.
Step 134 Creating LUNs for the service profiles - DONE/Instructor
Duration: 5 minutes
Controller A - NTAP1-A
134.1 Create a LUN for the service profile booting from NTAP1-A. It will be 10GB in size, type vmware, and will not have any space reserved.
Note:
We are currently only using one controller for active connections in our lab.
134.2 Verify that the LUNs were created successfully by entering the command lun show and verify that the new LUNs show up in the output.
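The lun create command for 134.1 is not shown in this copy. In 7-Mode, a 10 GB, space-unreserved vmware-type LUN is created with a command of this shape (the LUN path here is hypothetical):

```
NTAP1-A> lun create -s 10g -t vmware -o noreserve /vol/ESX1_BOOT_A/ESX1_BOOT.lun
```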
4g (4294967296)
(r/w, online)
135.2 Verify that the LUNs were mapped successfully by entering the command lun show and verify that the LUNs report their status as mapped.
2g (2147483648)
2g (2147483648)
2g (2147483648)
16.1 FLEXCLONE
Step 136 FlexClone the ESX boot volume to create individual boot volume/luns for each ESX server.
136.1 FlexClone a fas3170_vfiler2 volume and add that clone to fas3170_vfiler1.
136.2 Take a snapshot of the FlexVol that has the VMFS datastore you want cloned. Name your snapshot clone_base_snap so that you can identify the purpose of the snapshot. The command below will create a snapshot of DCV_VFILER9_DS named clone_base_snap.
NTAP1-A>
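The snapshot command itself has been truncated in this copy. The standard 7-Mode syntax, using the names given in the step, would be the following (note the step's text names DCV_VFILER9_DS, while the clone commands in 136.4 reference ESX_BOOT_A; substitute the volume that holds your datastore):

```
NTAP1-A> snap create DCV_VFILER9_DS clone_base_snap
```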
136.3
NTAP1-A>
NTAP1-A>
NTAP1-A>
136.4 Create a FlexClone based on the Snapshot that you just created. You will provide the name of the new volume, the base volume, and the snapshot from the base volume.
vol clone create ESX1_BOOT_A_clone -s none -b ESX_BOOT_A clone_base_snap
vol clone create ESX2_BOOT_A_clone -s none -b ESX_BOOT_A clone_base_snap
vol clone create ESX3_BOOT_A_clone -s none -b ESX_BOOT_A clone_base_snap
136.5 (optional) You can split your clone off so that it is completely independent.
136.6
136.7 Bring the cloned LUNs online. Cloned LUNs are offline when created.
136.8
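The optional clone split mentioned in 136.5 uses the 7-Mode vol clone split commands; for one of the clones created above it would look like this (a sketch; the split runs in the background and can be monitored):

```
NTAP1-A> vol clone split start ESX1_BOOT_A_clone
NTAP1-A> vol clone split status ESX1_BOOT_A_clone
```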
136.9
rw=10.1.211.21,root=10.1.211.21 /vol/LAB_VFILER1_DS
136.10
Note:
It might be useful to add a _CLONE suffix to the end for ease of reference.
136.11
# show volumes: the clone is now in vfiler1
rw=10.1.211.21,root=10.1.211.21 /vol/LAB_VFILER1_XEN
rw=10.1.211.20:10.1.211.21,root=10.1.211.20:10.1.211.21
137.2
137.3
137.5
Removing vFilers
These steps should be performed after the extra vfilers have been destroyed.
137.7
vol offline LAB_VFILER9_ROOT
vol offline LAB_VFILER9_DS
vol offline LAB_VFILER9_SWAP
vol destroy LAB_VFILER9_ROOT -f
vol destroy LAB_VFILER9_DS -f
vol destroy LAB_VFILER9_SWAP -f
trap
trap
trap
trap
trap
public
public
public
public
public
description
description
description
description
description
role network-admin
#
ip domain-lookup
switchname N5K-1
logging event link-status default
service unsupported-transceiver
class-map type qos class-fcoe
!class-map type queuing class-fcoe
! match qos-group 1
class-map type queuing class-all-flood
match qos-group 2
class-map type queuing class-ip-multicast
match qos-group 2
!class-map type network-qos class-fcoe
! match qos-group 1
class-map type network-qos class-all-flood
match qos-group 2
class-map type network-qos class-ip-multicast
match qos-group 2
policy-map type network-qos jumbo
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos class-default
mtu 9000
system qos
service-policy type network-qos jumbo
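A common way to verify that the jumbo network-qos policy above is effective end-to-end is a do-not-fragment ping from an ESX host at just under the 9000-byte MTU (8972 = 9000 minus 28 bytes of IP/ICMP headers; the target address is an example, and vmkping is an ESX host utility, not an NX-OS command):

```
# From the ESX host console (example target address):
vmkping -d -s 8972 10.1.111.254
```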
fex 100
pinning max-links 1
description "FEX0100"
interface port-channel3
description ESX1
switchport mode trunk
vpc 3
switchport trunk allowed vlan 1,20-25,100,160,200-201
spanning-tree port type edge trunk
speed 10000
interface port-channel4
description ESX2
switchport mode trunk
vpc 4
switchport trunk allowed vlan 1,20-25,100,160,200-201
spanning-tree port type edge trunk
speed 10000
interface port-channel5
description ESX3
switchport mode trunk
vpc 5
switchport trunk allowed vlan 1,20-25,100,160,200-201
spanning-tree port type edge trunk
speed 10000
interface port-channel60
description link to core
switchport mode trunk
vpc 60
switchport trunk allowed vlan 1,20-25,160
speed 10000
!!! We currently do not have IP storage plugged directly into our 5Ks.
!!! IP storage comes through core switches.
!interface port-channel70
! description IP Storage Array
! vpc 70
! switchport access vlan 162
interface port-channel100
description dual-homed 2148 can use as management switch
switchport mode fex-fabric
vpc 100
fex associate 100
interface fc2/1
switchport trunk allowed vsan 10
switchport description To MDS9124 1/1
switchport trunk mode on
! channel-group 256 force
no shutdown
interface fc2/2-4
!!! Associate interfaces e1/7-8 to fex 101 when moving to single homed FEX.
interface Ethernet1/7
fex associate 100
switchport mode fex-fabric
channel-group 100
interface Ethernet1/8
fex associate 100
switchport mode fex-fabric
channel-group 100
interface Ethernet1/9-16
role network-admin
name backend-storage
vlan 999
name NATIVE
udld aggressive
port-channel load-balance ethernet source-dest-port
vpc domain 1
role priority 2000
peer-keepalive destination 10.1.111.1
vsan database
vsan 20
interface Vlan1
!interface san-port-channel 256
! channel mode active
! switchport mode NP
! switchport description To p3-mds9148-1
! switchport trunk mode on
interface port-channel1
switchport mode trunk
vpc peer-link
!!! We currently do not have IP storage plugged directly into our 5Ks.
!!! IP storage comes through core switches.
!interface port-channel70
! description IP Storage Array
! vpc 70
! switchport access vlan 162
interface port-channel100
description dual-homed 2148
switchport mode fex-fabric
vpc 100
fex associate 100
interface fc2/1
switchport trunk allowed vsan 20
switchport description To MDS9124 1/2
switchport trunk mode on
! channel-group 256 force
no shutdown
interface fc2/2-4
!!! This is a placeholder for a single-homed FEX.
!feature npv
!npv enable
interface Ethernet1/1
description To 3750
switchport mode trunk
switchport trunk allowed vlan 1
speed 1000
interface Ethernet1/2
interface Ethernet1/3
description To ESX1 vmnic0
switchport mode trunk
switchport trunk allowed vlan 1,20-25,120,160,200-201
spanning-tree port type edge trunk
spanning-tree bpduguard enable
channel-group 3
interface Ethernet1/4
description To ESX2 vmnic0
switchport mode trunk
switchport trunk allowed vlan 1,20-25,120,160,200-201
spanning-tree port type edge trunk
spanning-tree bpduguard enable
channel-group 4
interface Ethernet1/5
description To ESX3 vmnic0
switchport mode trunk
switchport trunk allowed vlan 1,20-25,120,160,200-201
spanning-tree port type edge trunk
spanning-tree bpduguard enable
channel-group 5
interface Ethernet1/6
!!! Associate interfaces e1/7-8 to fex 101 when moving to single homed FEX.
interface Ethernet1/7
fex associate 100
switchport mode fex-fabric
channel-group 100
interface Ethernet1/8
fex associate 100
switchport mode fex-fabric
channel-group 100
interface Ethernet1/9-16
interface Ethernet1/17
switchport mode trunk
switchport trunk allowed vlan 1,20-25,160,200-201
channel-group 1 mode active
interface Ethernet1/18
switchport mode trunk
switchport trunk allowed vlan 1,20-25,160,200-201
channel-group 1 mode active
interface Ethernet1/19
description link to core
switchport mode trunk
! switchport trunk native vlan 999
switchport trunk allowed vlan 1,20-25,160
channel-group 60 mode active
interface Ethernet1/20
description link to core
switchport mode trunk
! switchport trunk native vlan 999
switchport trunk allowed vlan 1,20-25,160
channel-group 60 mode active
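Interfaces e1/19-20 join port-channel 60 with `mode active`, i.e. LACP negotiation toward the core. A quick way to confirm the bundle formed:

```
! Flags should show (SU) for the bundle and (P) for each member
show port-channel summary
! LACP partner information for the core-facing links
show lacp neighbor
```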
interface Ethernet2/1-4
interface mgmt0
ip address 10.1.111.2/24
interface Ethernet100/1/1
description ESX1 vmnic3
switchport mode trunk
spanning-tree port type edge trunk
interface Ethernet100/1/2
description ESX2 vmnic3
switchport mode trunk
spanning-tree port type edge trunk
interface Ethernet100/1/3-48
line console
exec-timeout 0
line vty
exec-timeout 0
boot kickstart bootflash:/n5000-uk9-kickstart.5.0.2.N2.1.bin
boot system bootflash:/n5000-uk9.5.0.2.N2.1.bin
interface fc2/1-4
ESX
ESX1 and ESX2
esxcfg-vswitch -m 9000 vSwitch0
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vswitch -L vmnic0 vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "MGMT Network" vSwitch1
esxcfg-vswitch -v 111 -p "MGMT Network" vSwitch1
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vswitch -v 151 -p VMotion vSwitch1
esxcfg-vswitch -A NFS vSwitch1
esxcfg-vswitch -v 211 -p NFS vSwitch1
esxcfg-vswitch -A "CTRL-PKT" vSwitch1
esxcfg-vswitch -v 171 -p "CTRL-PKT" vSwitch1
esxcfg-vswitch -A "VMTRAFFIC" vSwitch1
esxcfg-vswitch -v 131 -p "VMTRAFFIC" vSwitch1
esxcfg-vswitch -A "Local LAN" vSwitch1
esxcfg-vswitch -v 24 -p "Local LAN" vSwitch1
vim-cmd hostsvc/net/refresh
vim-cmd /hostsvc/net/vswitch_setpolicy --nicteaming-policy='loadbalance_ip' vSwitch1
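The esxcfg commands above build vSwitch1 with jumbo frames, two uplinks, and IP-hash teaming to match the vPC port-channel on the 5Ks. A sketch of how the result can be checked from the ESX service console (standard ESX 4.x commands; output will vary per host):

```
# List vSwitches, their uplinks, MTU, and port groups
esxcfg-vswitch -l
# Confirm vmnic0/vmnic1 link state and speed
esxcfg-nics -l
```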
role network-admin
vmware port-group
switchport mode access
switchport access vlan 160
no shutdown
system vlan 160
state enabled
port-profile type vethernet VM_CLIENT
vmware port-group
switchport mode access
switchport access vlan 20
no shutdown
state enabled
vdc VSM-P id 1
limit-resource
limit-resource
limit-resource
limit-resource
limit-resource
limit-resource
limit-resource
limit-resource
interface port-channel1
inherit port-profile VM_UPLINK
interface port-channel2
inherit port-profile VM_UPLINK
interface port-channel3
inherit port-profile VM_UPLINK
interface mgmt0
ip address 192.168.1.200/24
interface Vethernet1
inherit port-profile N1KV_CONTROL_PACKET
description Nexus1000V-P,Network Adapter 1
vmware dvport 164 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
vmware vm mac 0050.5699.000B
interface Vethernet2
inherit port-profile N1KV_CONTROL_PACKET
description Nexus1000V-P,Network Adapter 3
vmware dvport 165 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
vmware vm mac 0050.5699.000D
interface Vethernet3
inherit port-profile VMOTION
description VMware VMkernel,vmk1
vmware dvport 129 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
vmware vm mac 0050.567F.90F4
interface Vethernet4
inherit port-profile N1KV_CONTROL_PACKET
description Nexus1000V-S,Network Adapter 1
vmware dvport 162 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
vmware vm mac 0050.5699.0011
interface Vethernet5
inherit port-profile N1KV_CONTROL_PACKET
description Nexus1000V-S,Network Adapter 3
vmware dvport 163 dvswitch uuid "90 8a 19 50 83 ea 6a 15-c8 2c 13 44 d3 43 06 fe"
vmware vm mac 0050.5699.0013
interface Vethernet6
inherit port-profile VMOTION
OTV
Cisco Nexus 5010 A - N5K-1
no feature vpc
int port-channel 1
shutdown
int e1/10
interface po14
shutdown
vlan 131,151,171,211,1005
no shut
int e1/19
switchport
switchport mode trunk
switchport trunk allowed vlan 131,151,171,211,1005
no shutdown
N7K-1
vlan 131,151,171,211,1005
no shut
spanning-tree vlan 131,151,171,211,1005 priority 4096
!int e1/14
int e1/22
!int e1/30
switchport
switchport mode trunk
mtu 9216
no shutdown
switchport trunk allowed vlan 131,151,171,211,1005
int e 1/<uplink>
no shut
feature ospf
router ospf 1
log-adjacency-changes
interface loopback0
! ip address 10.1.0.11/32
ip address 10.1.0.21/32
! ip address 10.1.0.31/32
ip router ospf 1 area 0.0.0.0
!interface e1/10
interface e1/18
!interface e1/26
mtu 9042
! ip address 10.1.11.3/24
ip address 10.1.21.5/24
! ip address 10.1.31.7/24
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
ip igmp version 3
no shutdown
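The join interface runs OSPF point-to-point and IGMPv3 so the overlay's multicast control-group can be joined across the core. Assuming the peer N7K is configured symmetrically, the routed underlay can be spot-checked with:

```
! OSPF adjacency on the join interface should reach FULL
show ip ospf neighbors
! Loopback0 /32s from both sites should appear via OSPF
show ip route ospf-1
```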
feature otv
otv site-vlan 1005
otv site-identifier 0x1
interface Overlay 1
! otv control-group 239.1.1.1
otv control-group 239.2.1.1
! otv control-group 239.3.1.1
! otv data-group 239.1.2.0/28
otv data-group 239.2.2.0/28
! otv data-group 239.3.2.0/28
! otv join-interface Ethernet1/10
otv join-interface Ethernet1/18
! otv join-interface Ethernet1/26
otv extend-vlan 131,151,171,211
no shutdown
N7K-2
vlan 131,151,171,211,1005
no shut
spanning-tree vlan 131,151,171,211,1005 priority 8192
!int e1/16
int e1/24
!int e1/32
switchport
switchport mode trunk
mtu 9216
no shutdown
switchport trunk allowed vlan 131,151,171,211,1005
int e 1/<uplink>
no shut
feature ospf
router ospf 1
log-adjacency-changes
interface loopback0
! ip address 10.1.0.12/32
ip address 10.1.0.22/32
! ip address 10.1.0.32/32
ip router ospf 1 area 0.0.0.0
!interface e1/12
interface e1/20
!interface e1/28
mtu 9042
! ip address 10.1.14.4/24
ip address 10.1.24.6/24
! ip address 10.1.34.8/24
ip ospf network point-to-point
ip router ospf 1 area 0.0.0.0
ip igmp version 3
no shutdown
feature otv
otv site-vlan 1005
otv site-identifier 0x2
interface Overlay 1
! otv control-group 239.1.1.1
otv control-group 239.2.1.1
! otv control-group 239.3.1.1
! otv data-group 239.1.2.0/28
otv data-group 239.2.2.0/28
! otv data-group 239.3.2.0/28
! otv join-interface Ethernet1/12
otv join-interface Ethernet1/20
! otv join-interface Ethernet1/28
otv extend-vlan 131,151,171,211
no shutdown
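Once Overlay 1 is up on both N7K-1 (site 0x1) and N7K-2 (site 0x2), OTV adjacency and VLAN extension can be verified from either edge device:

```
! Overlay state, join interface, and control-group
show otv overlay 1
! Expect one adjacency per remote edge device
show otv adjacency
! Extended VLANs 131,151,171,211 should show as active
show otv vlan
```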
18 REFERENCES