All racks are physically interconnected with one another per the class diagram. These
interconnections will be used later throughout this document to build larger topologies for
technologies such as FabricPath and Overlay Transport Virtualization (OTV).
The lab scenarios in this document are meant to be worked on collaboratively with your
assigned lab partner. Divide the work however you choose; for example, one of you can take
the odd-numbered devices (e.g. N5K1, N7K1, Server 1) and the other the even-numbered
devices (e.g. N5K2, N7K2, Server 2). Many of the technologies covered in these scenarios,
such as vPC, require the devices to work in pairs in order to build a successful
configuration, so play nice!
For simplicity, references to device names and numbers will use Rack 1 throughout this
document. If you are assigned to a different rack number you will need to adjust your device
names, numbers, port assignments, etc. accordingly per the class diagrams to complete the
configuration. For example, if a task references N5K1 but you are assigned Rack 4, then you
should configure device N5K7. Refer to the attached diagrams to see the specific device
assignments for your rack.
Ensure that you are connecting to the correct servers for your rack and not someone
else’s. To double check, you can see the VM’s name under Windows Server Manager, as
seen below. You can also tell by the IP address assigned to the MGMT NIC, which should be
192.168.0.1Y/24. Do not make any changes to the MGMT NIC or you will lock yourself out
of the VM, but feel free to make whatever other changes you want to on these machines, as
their disks are non-persistent and will revert to the previous snapshot upon power cycling.
Hostname mgmt0 IP
N5K1 192.168.0.51/24
N5K2 192.168.0.52/24
N5K3 192.168.0.53/24
N5K4 192.168.0.54/24
N5K5 192.168.0.55/24
N5K6 192.168.0.56/24
N5K7 192.168.0.57/24
N5K8 192.168.0.58/24
2.2 Port-Channels
Modify the previously configured trunk links so that they are grouped together as Port-
Channels, as follows:
o N5K1’s links to N7K1 should use no Port-Channel negotiation.
o N5K2’s links to N7K2 should actively send LACP negotiation for Port-Channel
creation, while N7K2 should passively listen for LACP.
o Links connecting N7K1 & N7K2 should all actively send LACP negotiation for
Port-Channel creation.
All Port-Channels should use the most granular load balancing method available for that
platform.
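As a sketch of how the three bundling behaviors map to NX-OS channel-group modes (interface and Port-Channel numbers are placeholders; use the port assignments from the class diagram for your rack):

```
! N5K1 -> N7K1: static bundling, no negotiation protocol
interface Ethernet1/1 - 2
  switchport mode trunk
  channel-group 1 mode on

! N5K2 -> N7K2: LACP active on N5K2; N7K2's side uses "mode passive"
! (LACP requires "feature lacp" on both switches first)
interface Ethernet1/1 - 2
  switchport mode trunk
  channel-group 2 mode active

! Most granular hashing differs per platform, e.g.:
port-channel load-balance ethernet source-dest-port   ! N5K syntax
port-channel load-balance src-dst ip-l4port           ! N7K syntax
```

Check the available keywords with `port-channel load-balance ?` on each platform, since the option lists differ between the 5K and 7K.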
Generate bulk TCP flows between Server 1 & Server 2 to verify that their traffic is
being load-balanced across the member links of the Port-Channels.
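One way to check the distribution (the flow parameters below are illustrative; substitute your servers' actual addresses):

```
N7K1# show port-channel summary
N7K1# show port-channel traffic interface port-channel 1

! Predict the egress member link for a given flow:
N7K1# show port-channel load-balance forwarding-path interface port-channel 1
      src-ip 10.0.0.1 dst-ip 10.0.0.2
```

Parallel TCP streams from a tool such as iperf on the servers (e.g. `iperf -s` on one side, `iperf -c <peer> -P 8` on the other) will exercise multiple hash buckets so the per-link counters diverge visibly.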
Section 5 – FabricPath
The goal of this section is to establish FabricPath reachability between Rack 1 & Rack 2, and
between Rack 3 & Rack 4. Note that you will have to work collaboratively with the students
assigned to these other racks to complete the below tasks.
5.2 FabricPath
Configure FabricPath on N7K1 & N7K2 as follows:
o Links to the N5Ks will be Classical Ethernet trunk ports.
o All links connecting the N7Ks within your rack and to the adjacent rack should be
FabricPath Core Ports.
o VLAN 10 should be a FabricPath VLAN.
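A minimal sketch of the required FabricPath configuration on each N7K (interface ranges are placeholders for your rack's cabling):

```
install feature-set fabricpath
feature-set fabricpath

! VLAN 10 carried as a FabricPath VLAN
vlan 10
  mode fabricpath

! Core ports: links to the peer N7K and to the adjacent rack
interface Ethernet1/3 - 4
  switchport mode fabricpath

! Links down to the N5Ks remain Classical Ethernet trunks
interface Ethernet1/1 - 2
  switchport mode trunk
```

`show fabricpath isis adjacency` and `show fabricpath route` are useful to confirm the core ports came up and the topology was learned.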
Once complete, Servers 1 & 2 in Rack 1 should have IP reachability to Servers 3 & 4 in
Rack 2, and Servers 5 & 6 in Rack 3 should have IP reachability to Servers 7 & 8 in
Rack 4.
Disable the links between the N7Ks within your rack, and verify that connectivity
between the servers is still maintained via FabricPath through the adjacent rack.
5.3 vPC+
Configure vPC+ between the Nexus 7Ks as follows:
o Create vPC domain 7X, where X is your rack number on the 7Ks.
o Use the FabricPath Switch-ID 7X on both switches.
o Configure the vPC Peer Link between the 7Ks as a FabricPath Core Port.
o Configure the links to N5K1 as vPC 51.
o Configure the links to N5K2 as vPC 52.
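The tasks above might look like the following on N7K1 in Rack 1 (the peer-keepalive addresses and Port-Channel numbers are assumptions; adjust to your rack):

```
feature vpc

vpc domain 71                        ! 7X, where X is your rack number
  fabricpath switch-id 71            ! emulated switch-id, identical on both peers
  peer-keepalive destination 192.168.0.72 source 192.168.0.71  ! mgmt0 IPs assumed

! vPC+ peer link must be a FabricPath core port
interface port-channel 1
  switchport mode fabricpath
  vpc peer-link

! Member Port-Channels toward the N5Ks
interface port-channel 51
  switchport mode trunk
  vpc 51
```

`show vpc` confirms the peer link and vPC states, and `show fabricpath route` on a remote N7K should list the emulated switch-id 71 as a reachable destination.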
Once complete, verify that the end servers still have reachability to each other between
racks, and that the FabricPath core sees the vPC+ emulated Switch-IDs.
Devices Subnet
N7K1 - N7K2 10.71.72.0/24
N7K3 - N7K4 10.73.74.0/24
N7K2 - N7K4 10.72.74.0/24
N7K5 - N7K6 10.75.76.0/24
N7K7 - N7K8 10.77.78.0/24
N7K5 - N7K7 10.75.77.0/24
N7K2 - N7K5 10.72.75.0/24
N7K4 - N7K7 10.74.77.0/24
N7K2 - N7K7 10.72.77.0/24
N7K4 - N7K5 10.74.75.0/24
Enable OSPF area 0 on all of these links. Once complete, the N7Ks should have full IP
reachability to all of these subnets.
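A sketch for one link, here the N7K1 side of the N7K1 - N7K2 subnet (the host portion of the address and the interface number are assumptions; the subnets come from the table above):

```
feature ospf
router ospf 1

interface port-channel 1
  ip address 10.71.72.1/24
  ip router ospf 1 area 0
```

Repeat per interface with the matching subnet from the table, then verify with `show ip ospf neighbor` and `show ip route ospf`.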
Enable PIM Sparse Mode for multicast routing on all of the native layer 3 interfaces, as
well as the SVI interfaces of the HSRP gateways.
N7K2 should create a Loopback interface with the address 72.72.72.72/32, and
advertise it into OSPF. Use this address as the PIM Rendezvous Point (RP).
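The multicast tasks above could be sketched as follows (interface numbers are placeholders; the loopback and RP address come from the task):

```
feature pim

! On every native L3 interface and on the HSRP gateway SVIs
interface port-channel 1
  ip pim sparse-mode

! N7K2 only: RP loopback, advertised into OSPF
interface loopback0
  ip address 72.72.72.72/32
  ip router ospf 1 area 0
  ip pim sparse-mode

! On all N7Ks: statically point at the RP
ip pim rp-address 72.72.72.72
```

`show ip pim neighbor` and `show ip pim rp` confirm the PIM adjacencies and that all routers agree on the RP.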
6.3 OTV
Configure the OTV Edge Devices N7K1 & N7K3 in DC1 as follows:
o The Edge Device layer 3 Port-Channels will be the OTV Join Interface.
o Enable IGMPv3 on the Join Interface.
o Use the OTV site VLAN 1111, and OTV site identifier 0x101.
o Use interface Overlay1 for the tunnel.
o Use the OTV control-group 224.1.1.1 and data-group 232.1.1.0/24.
o Extend VLANs 11 & 22 over the OTV tunnels.
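A sketch of the DC1 Edge Device configuration, e.g. on N7K1 (the join-interface Port-Channel number is a placeholder for your rack's L3 Port-Channel; the DC2 devices are identical except for site VLAN 2222 and site identifier 0x102):

```
feature otv
otv site-vlan 1111
otv site-identifier 0x101

! The layer 3 Port-Channel acting as the Join Interface
interface port-channel 1
  ip igmp version 3

interface Overlay1
  otv join-interface port-channel 1
  otv control-group 224.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 11, 22
  no shutdown
```

Note that the site VLAN must exist and be up on the Edge Device before the overlay will come online.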
Configure the OTV Edge Devices N7K6 & N7K8 in DC2 as follows:
o The Edge Device layer 3 Port-Channels will be the OTV Join Interface.
o Enable IGMPv3 on the Join Interface.
o Use the OTV site VLAN 2222, and OTV site identifier 0x102.
o Use interface Overlay1 for the tunnel.
o Use the OTV control-group 224.1.1.1 and data-group 232.1.1.0/24.
o Extend VLANs 11 & 22 over the OTV tunnels.
Once complete, all servers in VLAN 11 should have reachability to each other, and all
servers in VLAN 22 should have reachability to each other. This should include both
unicast and multicast reachability.
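Some useful checkpoints on an Edge Device once the overlay is up:

```
N7K1# show otv adjacency   ! neighbor Edge Devices discovered over the overlay
N7K1# show otv vlan        ! extended VLANs and which device is the AED for each
N7K1# show otv route       ! MAC addresses learned via OTV from the remote site
```

If unicast works but multicast between sites does not, re-check IGMPv3 on the Join Interfaces and the control/data group configuration on both sides.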