Exam Scenario

During this lab exam you are building three new data centers. You will perform common
configuration and troubleshooting tasks in all the data centers. These tasks are related to the
technologies that are outlined in CCIE DC blueprint.

 Data Center-1 (DC1) has conventional networking equipment based on Cisco Nexus 7000 and Nexus 5000 Series Switches. The compute platform used in DC1 is Cisco UCS B-Series and C-Series servers, with virtualized and non-virtualized application workloads. For storage services in DC1, a centralized storage device is used. The storage area network services are provided using Cisco Nexus 5000 Switches.
 Data Center-2 (DC2) has a network fabric that is based on Cisco ACI technology. This includes Cisco Nexus 9000 spine switches, Cisco Nexus 9000 leaf switches, and three Cisco APICs. Cisco Nexus 7000 Series Switches are also used in DC2 to provide routing to the WAN. The compute platform used in DC2 is Cisco UCS C-Series.
 Data Center-3 (DC3) has a network fabric based on Cisco ACI technology. This includes Cisco Nexus 9000 spine switches, Cisco Nexus 9000 leaf switches, and one Cisco APIC. The compute platform used in DC3 is Cisco UCS C-Series.

All data centers are equipped with an out-of-band (OOB) management network. The management ports of all the equipment in all three data centers are connected to this network. The console ports of all devices are connected to a Cisco 2811 router acting as a CommServer. You have full access to the equipment through both the management ports and the console ports.

All data centers are managed from a centralized location within the WAN. This centralized management network hosts management functions such as Cisco UCS Director and vCenter. It uses the out-of-band (OOB) management network to administer components within the data centers.

This CCIE Data Center lab exam is divided into three sections that test the proficiency expected of a CCIE Data Center:
1. Nexus Switch Infrastructure (DC1 and DC2)
2. Compute and Storage (DC1)
3. ACI (DC2 and DC3)

The topology shows the physical connectivity of the devices within DC1, DC2, and DC3. A logical diagram and a detailed physical topology are provided with each question, where suitable.
Device Access

Device IP Username Password


Cluster-IP 10.1.1.40 admin Cisco!123
Fabric-A 10.1.1.41 admin Cisco!123
Fabric-B 10.1.1.42 admin Cisco!123
DC1-N5K-1 10.1.1.61 admin Cisco!123
DC1-N5K-2 10.1.1.62 admin Cisco!123
DC1-N5K-3 10.1.1.63 admin Cisco!123
DC1-N5K-4 10.1.1.64 admin Cisco!123
APIC 1 CIMC 10.1.1.54 cisco Cisco!123
APIC 2 CIMC 10.1.1.55 cisco Cisco!123
APIC 3 CIMC 10.1.1.56 cisco Cisco!123
APIC 1 OOB 10.1.1.51 admin Cisco!123
APIC 2 OOB 10.1.1.52 admin Cisco!123
APIC 3 OOB 10.1.1.53 admin Cisco!123
N7K1-ADM 10.1.1.70 admin Cisco!123
N7K2-ADM 10.1.1.80 admin Cisco!123
IPN 10.1.1.82 admin Cisco!123
N9K-BGW1 10.1.1.91 admin Cisco!123
N9K-BGW2 10.1.1.92 admin Cisco!123

VM Access Details

VM Physical Server Username Password

legacy1 SRV-5 root Cisco!123
legacy2 SRV-3 root Cisco!123
aci-web SRV-5 root Cisco!123
aci-app SRV-5 root Cisco!123
aci-db SRV-5 root Cisco!123
aci-db2 SRV-3 root Cisco!123
dcnm SRV-4 admin Cisco!123
TEST-PC SRV-4 admin Cisco!123
vCenter SRV-4 candidate@cisco.com Cisco!123

Note:
 The GUI for the APIC and vCenter can be accessed from the Test-PC.
 The VDCs of the N7K devices can be accessed from mputty on the Test-PC.
VLANs

VLAN ID NAME
10 VMotion
101 Web
102 App
103 DB
104 ESXI-MGMT
105 ACI-DCI
50 UCS FCOE VLAN for VSAN 50
57 Legacy EPG

Storage Objects

Storage objects Value


Boot LUN ID 0
Fabric A zone name DC_vsan_50
Fabric B zone name DC_vsan_60
Zoneset names DC_vsan_50, DC_vsan_60
1.1 Introduction
Welcome to the Xandar company!

Read the Guidelines, documents and resources before you proceed to the next item.
1.2 vPC and Tagging/Trunking
Modify the existing configuration so that the Cisco UCS Fabric Interconnects see N5K1 and N5K2 as a single device. Delay vPC leg bring-up on the recovering vPC peer device for 150 seconds. Each vPC peer device must locally replicate the MAC address of the interface VLAN defined on the other vPC peer device.

Configure the ports connected to the Cisco UCS in such a way that:

 They carry traffic for VLANs 101-104.
 They transition immediately to the spanning-tree forwarding state without passing through the blocking or learning states.
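
A minimal NX-OS sketch of one way to meet these requirements on DC1-N5K-1 (mirrored on DC1-N5K-2 with the keepalive addresses swapped), assuming an existing vPC peer-link, vPC domain 10, a peer-keepalive over the OOB management addresses, and port-channel 20 toward the fabric interconnects; the domain and port-channel numbers are illustrative and not given in the task:

feature vpc
feature lacp
!
vpc domain 10
  peer-keepalive destination 10.1.1.62 source 10.1.1.61
  ! delay vPC leg bring-up on the recovering vPC peer for 150 seconds
  delay restore 150
  ! locally replicate the MAC address of the interface VLAN defined on the other peer
  peer-gateway
!
interface port-channel 20
  switchport mode trunk
  switchport trunk allowed vlan 101-104
  ! transition straight to forwarding without blocking or learning
  spanning-tree port type edge trunk
  vpc 20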

2 Points
1.3 Configure BGP
Configure VXLAN between DC1-N5K-1, DC1-N5K-2, and BGW1 in DC1, and between BGW2 and N7K1 in DC2. Ensure that:

 The network virtualization endpoint interface 1 is created to forward VXLAN traffic for VLANs 101-104, with control-plane host learning between devices.
 Loopback0 should be used as the source interface.

Use these IP addresses and VNIs:

VLAN Name/ID VNID Mcast Group VIP

Web (101) 10101 225.0.0.101 10.2.1.1/24
App (102) 10102 225.0.0.102 10.2.2.1/24
DB (103) 10103 225.0.0.103 10.2.3.1/24
VM (104) 10104 225.0.0.104 10.2.4.1/24
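
A minimal NX-OS sketch for one VTEP (DC1-N5K-1 shown), assuming the VLAN-to-VNI and multicast-group mappings from the table above; the BGP EVPN peering that provides control-plane host learning is configured in section 1.5:

feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn
!
vlan 101
  vn-segment 10101
vlan 102
  vn-segment 10102
vlan 103
  vn-segment 10103
vlan 104
  vn-segment 10104
!
interface nve1
  no shutdown
  source-interface loopback0
  ! control-plane host learning between devices
  host-reachability protocol bgp
  member vni 10101
    mcast-group 225.0.0.101
  member vni 10102
    mcast-group 225.0.0.102
  member vni 10103
    mcast-group 225.0.0.103
  member vni 10104
    mcast-group 225.0.0.104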

2 Points
1.4 PIM
Modify the existing configuration to support many sources and receivers, and make sure that PIM routers do not build the SPT toward the source. Build a shared tree between the sources and receivers of a multicast group on all interfaces. Configure a PIM static RP address for the multicast group range. Use RP address 100.0.0.1 in DC1 and RP address 200.0.0.1 in DC2.
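
A minimal NX-OS sketch for a DC1 device; the group range and interface are illustrative, and DC2 devices use 200.0.0.1 as the RP address instead:

feature pim
!
! static RP for the multicast group range
ip pim rp-address 100.0.0.1 group-list 225.0.0.0/8
! stay on the shared tree; never switch to the SPT toward the source
ip pim use-shared-trees
!
interface Ethernet1/1
  ip pim sparse-mode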

2 Points
1.5 EVPN
DC1-N5K1 and DC1-N5K2 must learn traffic from hosts on VLANs 101 to 104 through VRF XANDAR. DC2-N7K-1 must learn traffic from DC2 hosts through VRF XANDAR. Hosts in DC1 and DC2 must be able to ping each other. Use the information below to achieve reachability:

 Configure OSPF for underlay routing.
 Enable the relevant protocols to configure the EVPN control plane.
 Enable VXLAN with distributed anycast gateway using BGP EVPN.
o Create the network virtualization endpoint interface 1 to forward VXLAN traffic.
o Create server-facing SVIs and enable the distributed anycast gateway.
o Use first-hop gateway MAC address 0000.2222.3333.
 For routing between different Layer 2 segments, use VLAN 50 and VNI 5000.
 Configure VXLAN BGP EVPN on vPC pairs.
 BGW routers must be able to send and receive IPv4 routes through IPN.
 Redistribute directly connected routes using a route map on the BGW routers.
 Enable EVPN address family to exchange EVPN routes between BGW routers.
 Use BGP process 65501 for DC1.
 Use BGP process 65502 for DC2.
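
A condensed NX-OS sketch of the leaf-side pieces on DC1-N5K-1, assuming the gateway addresses from the table in section 1.3, loopback0 addresses that match the router IDs in section 1.6, and an iBGP EVPN peering inside AS 65501; the remaining VLANs, the route maps on the BGW routers, and the DC2 side (AS 65502) follow the same pattern and are omitted here:

feature bgp
feature interface-vlan
fabric forwarding anycast-gateway-mac 0000.2222.3333
!
vrf context XANDAR
  vni 5000
  rd auto
  address-family ipv4 unicast
    route-target both auto
    route-target both auto evpn
!
! L3VNI VLAN/SVI used for routing between Layer 2 segments
vlan 50
  vn-segment 5000
interface Vlan50
  no shutdown
  vrf member XANDAR
  ip forward
!
! server-facing SVI with distributed anycast gateway (repeat for VLANs 102-104)
interface Vlan101
  no shutdown
  vrf member XANDAR
  ip address 10.2.1.1/24
  fabric forwarding mode anycast-gateway
!
interface nve1
  host-reachability protocol bgp
  member vni 5000 associate-vrf
!
router bgp 65501
  router-id 10.5.1.1
  neighbor 10.5.1.2 remote-as 65501
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
!
evpn
  vni 10101 l2
    rd auto
    route-target import auto
    route-target export auto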

3 Points
1.6 OSPF
Modify the existing configuration to ensure that OSPF neighborships build across all devices.

 OSPF process name: DATACENTER.
 Within DC1, ensure that all routers participate in the nonbroadcast Ethernet network using jumbo MTU 9192.
 All routers must be able to learn all OSPF routes from the other routers.

Device Router-ID
DC1-N5K-1 10.5.1.1
DC1-N5K-2 10.5.1.2
DC1-N7K-1 10.7.1.1
DC1-N7K-2 10.7.1.2
DC1-N9K-BGW1 10.9.1.1
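
A minimal NX-OS sketch for DC1-N5K-1, assuming area 0.0.0.0 and an illustrative fabric-facing interface; repeat on each device with its router ID from the table above:

feature ospf
!
router ospf DATACENTER
  router-id 10.5.1.1
!
interface loopback0
  ip router ospf DATACENTER area 0.0.0.0
!
interface Ethernet1/1
  mtu 9192
  ip router ospf DATACENTER area 0.0.0.0
  no shutdown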

2 Points
1.7 NTP and Traffic Management
Configure N5K1 as an NTP server and N5K2 as an NTP client, and use loopback0 for connectivity. Set the MTU to its maximum size of 9192 bytes for the whole switch on N5K1 and N5K2, and configure a policy map named jumbo to set MTU 9192 for the default class.
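
A minimal sketch, assuming the platform supports acting as an authoritative NTP server with ntp master; the N5K1 loopback0 address is not given in the task and is left as a placeholder:

! DC1-N5K-1 (NTP server)
ntp master
ntp source-interface loopback0
!
! DC1-N5K-2 (NTP client)
ntp server <DC1-N5K-1 loopback0 address>
ntp source-interface loopback0
!
! jumbo MTU for the whole switch, applied on both N5K1 and N5K2
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9192
system qos
  service-policy type network-qos jumbo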

3 Points
1.8 DCNM
Save the existing template int_loopback_11_1 as Loopback_n7k and modify it so that it can be used to create a loopback interface on DC1-N7K-1.

2 Points
1.9 Python with BGW2
Modify the existing Python script bootflash:pythoninitial.py to configure an interface and create a VLAN on DC2-N9K-2 (BGW2).

 Assign IP address 192.168.1.1/24 to interface eth1/6 and enable it.
 Create VLAN 150.
 Do not change the name of the Python file.
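
Whatever form the existing script takes, the configuration it must push to BGW2 is equivalent to this NX-OS CLI (shown for reference only; the script keeps its original name, and no switchport is assumed to be needed before the IP address can be applied):

configure terminal
  interface ethernet 1/6
    no switchport
    ip address 192.168.1.1/24
    no shutdown
  vlan 150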

2 Points
1.10 Syslog Server
Configure a syslog server on the Nexus 9K devices so that they forward all messages with severity level 5 or lower over VRF DATACENTER. Which set of configuration commands meets this requirement?

Switch# configure terminal


Switch(config)# logging server 192.168.0.254 5-1 use-vrf DATACENTER
Switch(config)# logging event link-status default

Switch# configure terminal


Switch(config)# logging source-interface vrf DATACENTER
Switch(config)# logging server 192.168.0.254 5
Switch(config)# logging event link-status default

Switch# configure terminal


Switch(config)# logging source-interface vrf DATACENTER
Switch(config)# logging server 192.168.0.254 5-1
Switch(config)# logging event link-status default

Switch# configure terminal


Switch(config)# logging server 192.168.0.254 5 use-vrf DATACENTER
Switch(config)# logging event link-status default

2 Points
2.1 Compute Connectivity

Refer to the exhibit and perform these actions:


Discover all attached compute devices.
Configure the connections between the chassis and the fabric interconnects to increase high availability.

2 Points
2.2 LAN/SAN uplinks from fabric interconnects
LAN Connectivity:

 Configure port channels from each fabric interconnect to DC1-N5K-1 and DC1-N5K-2 using VLANs 101-104.
 Configure port Ethernet 1/12 from each fabric interconnect to DC1-N5K-3 and DC1-N5K-4 using the VLANs listed in the table below.
 Ensure that configuration is in place to maximize the number of VLANs in the domain and minimize the CPU utilization caused by VLAN configuration.

VLAN Name VLAN ID Fabric Visibility


vMotion 10 Dual
Mgmt 11 Dual
Database 12 Dual
VM 13 Dual

SAN Connectivity:

 Configure a Fibre Channel uplink from each fabric interconnect to DC1-N5K-3 and DC1-N5K-4 using the VSANs listed in the table below.
 Configure the fabric interconnects so that the SAN and LAN neighbors can be viewed. (A sketch of the Nexus 5000 side of these uplinks follows the table.)

VSAN Name VSAN ID Fabric Visibility


VSAN-A 50 A
VSAN-B 60 B
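
The UCS side of this item is configured in Cisco UCS Manager. For context, a sketch of what the Nexus 5000 side of these uplinks needs, with illustrative port-channel and interface numbers; fabric B mirrors this with DC1-N5K-2/DC1-N5K-4 and VSAN 60:

! DC1-N5K-1: Ethernet uplink port channel toward Fabric Interconnect A
! (the vPC member on DC1-N5K-2 is configured identically)
interface port-channel 30
  switchport mode trunk
  switchport trunk allowed vlan 101-104
!
! DC1-N5K-3: Fibre Channel uplink from Fabric Interconnect A (FI operating in NPV/end-host mode)
feature npiv
vsan database
  vsan 50
  vsan 50 interface fc1/31
interface fc1/31
  switchport mode F
  no shutdown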

3 Points
2.3 UCS Resource pool and network configuration
Create new resource pools for UUID, MAC, WWNN, and WWPN. Use the information in this table:

Resource Pool Name Pool Size


UUID-POOL 10
MAC-POOL 10
WWPN-POOL 10
WWNN-POOL 10

Create and configure a policy named LINK to detect a unidirectional link in these situations:
 One of the interfaces cannot send or receive traffic.
 One of the interfaces is down.
 One of the fiber strands in the cable is disconnected.

Create vNIC templates, one on each fabric interconnect, named vNIC-A and vNIC-B, with these configurations:
 Use the previously created resource pools and policies.
 Use 9000 MTU.
 No redundancy is required.
 Any change in the template must be reflected in any vNICs created from it.
 Add the necessary VLANs.

Create vHBA templates, one on each fabric interconnect, named vHBA-A and vHBA-B, with these configurations:

 Use the previously created resource pools and policies.
 Any change in the template must be reflected in any vHBAs created from it.
 Add the necessary VSANs.

2 Points
2.4 UCS service profiles
Modify the WEB-SRVR-TEMP template so that it allows service profile Web-SRVR1 to associate
successfully. All policies must be placed into the service profile template.

Clone the previously created template WEB-SRVR-TEMP with the name ESX-SRVR-TEMP, then deploy the service profile ESX-SRVR01 to blade1/1 so that the server admin can install a VMware operating system on the local disk. The requirements for the service profile are below. Use your own naming convention where one is not provided.

 Use your previously created resource pools wherever it is applicable.


 Service profiles cloned from the template must be dynamically linked so that any changes to the template propagate to the service profiles.
 The service profile must leverage your previously created vNIC templates.
 Configure the system so that it can upgrade firmware on newly discovered blades.

4 Points
2.5 Boot from SAN

Make changes in the service profile ESX-SRVR01 and associate it to blade1/1 so that it can boot from storage. The requirements for the service profile are below. Use your own naming convention where one is not provided.

 Use your previously created resource pools wherever it is applicable.


 Ensure that DC1-N5K-3 and DC1-N5K-4 are configured and ready for boot from SAN (see the zoning sketch below).
 Boot from the following target names; the LUN ID must be 0:
 Initiator A- 20:00:00:25:b5:00:00:10
 Initiator B- 20:00:00:25:b5:00:00:11
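
A sketch of the zoning that being ready for boot from SAN implies on DC1-N5K-3 (fabric A), using the zone and zoneset names from the Storage Objects table; the storage target pWWN is not given in the task and is left as a placeholder. DC1-N5K-4 mirrors this with VSAN 60, DC_vsan_60, and initiator B:

zone name DC_vsan_50 vsan 50
  member pwwn 20:00:00:25:b5:00:00:10
  member pwwn <fabric A storage target pwwn>
!
zoneset name DC_vsan_50 vsan 50
  member DC_vsan_50
!
zoneset activate name DC_vsan_50 vsan 50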

3 Points
2.6 UCS Service Profile

Refer to the exhibit.


Xandar plans to use the multi-node deployment model for Cisco UCS Director. Which two roles are part of a Cisco UCS Director multi-node deployment? (Choose two.)

 Service database node


 Inventory database node
 Primary database node
 Monitoring database node
 Managed device database node

2 Points
2.7 BB Credit and EE Credit

Refer to the exhibit.


The Xandar technical team in the New York data center is confused about the use of buffer-to-buffer (BB) credits and end-to-end (EE) credits. Which two statements are true about BB credits and EE credits? (Choose two.)

 BB credit is the flow of connectionless traffic.


 EE credit is the flow of connection traffic.
 BB credit is the flow of connection traffic.
 EE credit is the flow of connectionless traffic.

2 Points
2.8 Traffic Monitoring in UCS
Create a SPAN session named FCMon to analyze Fibre Channel traffic between the Cisco UCS and the Cisco Nexus 5000 Series Switches. The requirements for the session are listed here. Use your own naming convention where one is not provided.

 Use interface fc1/32 on both fabric interconnects.


 Monitor all Fibre Channel traffic from blade1/1.

2 Points
3.1 Access Provisioning

 Configure a vPC domain to link Leaf-1 and Leaf-4 with each other and configure a vPC with DC2-N7K1/SW1 based on the diagram above.

vPC Domain Policies

Domain ID Name vPC Domain Policy Switch 1 Switch 2

114 vPC-L1-L4 default Leaf-1 Leaf-4

 Configure the interface policies, policy groups, and interface and switch profiles based on
the information below.

Interface Policy

Policy Name Characteristic


CDP Policy CDP-ENABLE Enabled
LLDP Policy LLDP-DISABLE Receive & Transmit: Disabled
Storm-Control SC-65 Rate & Max. Burst Rate: 65%
Storm-Control SC-50 Rate & Max. Burst Rate: 50%
Port-Channel Policy LACP-ACTIVE Mode Active
Port-Security Port-Secure-1 Maximum Endpoints:1

Policy Group

Name Policies Type

IPG-SERVERS SC-50, CDP-ENABLE, LLDP-DISABLE Access Port
IPG-SW1-vPC SC-65, LACP-ACTIVE, Port-Secure-1 vPC
IPG-ROUTERS CDP-ENABLE Access Port

Interface Profiles

Name Port Name Port Policy Group

LEAF1-INT-PROF L1-E4 1/4 IPG-ROUTERS
LEAF1-INT-PROF L1-E5 1/5 IPG-SW1-vPC
LEAF2-INT-PROF L2-E1 1/1 IPG-SERVERS
LEAF3-INT-PROF L3-E1 1/1 IPG-SERVERS
LEAF4-INT-PROF L4-E13 1/13 IPG-SW1-vPC

Switch Profiles

Name Node Name Node ID Interface Profile


LEAF1-SW-PROF LEAF-1 101 LEAF1-INT-PROF
LEAF2-SW-PROF LEAF-2 102 LEAF2-INT-PROF
LEAF3-SW-PROF LEAF-3 103 LEAF3-INT-PROF
LEAF4-SW-PROF LEAF-4 104 LEAF4-INT-PROF

2 Points
3.2 Domains, AAEP, and vPC
Connect Leaf-1/Leaf-4 as a vPC toward DC2-N7K1/SW1. Configure the VLAN pool, domain,
and AAEP for the vPC based on the parameters below. This switch will be used later for an L2OUT
configuration.

VLAN POOL

Pool Name DC-POOL-A


Allocation Static
Range 1-500

External Domain (Used for vPC)

Domain Name EXT-L2


Pool DC-POOL-A

AAEP
Name: vPC-AAEP
Domain EXT-L2
IPG IPG-SW1-vPC

 Configure port-channel 10 on DC2-N7K1/SW1 based on the diagram above. Use LACP as the port-channel protocol. The port channel must be configured as a Layer 2 port channel (a CLI sketch for SW1 follows at the end of this item).

 Configure the domain and AAEP for servers based on the parameters below.

Domain (Used for Servers)

Domain Name ACI-PORTS


Pool DC-POOL-A

AAEP
Name SERVERS-AAEP
Domain ACI-PORTS
IPG IPG-SERVERS
 Configure the domain and AAEP for routers based on the parameters below.

Domain (Used for Routers)

Domain Name EXT-L3


Pool DC-POOL-A

AAEP
Name ROUTERS-AAEP
Domain EXT-L3
IPG IPG-ROUTERS
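
The APIC side of the vPC was defined through the policy group IPG-SW1-vPC in section 3.1. A minimal NX-OS sketch of the DC2-N7K1/SW1 side of port-channel 10, with illustrative member interfaces toward Leaf-1 and Leaf-4:

feature lacp
!
interface port-channel 10
  switchport
  switchport mode trunk
!
! member interfaces are illustrative; use the ports cabled to Leaf-1 and Leaf-4
interface Ethernet1/1-2
  switchport
  switchport mode trunk
  channel-group 10 mode active
  no shutdown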

2 Points
3.3 Tenant/Application Profile Provisioning
 Configure a tenant based on the parameters below.

Tenant Parameters

Tenant Name VRF Bridge Domain Subnets Subnet Scope

Xandar Global BD-1 172.16.1.254/24, 172.16.2.254/24 Limited to VRF
Xandar Global BD-2 172.16.3.254/24, 172.16.4.254/24 Advertised Outside of ACI

 Configure an application profile named MIGRATION. Create an EPG named EPG-EXTERNAL. It should use BD-1 as the bridge domain. Assign it a static port based on this information:

Port: Port 1/1 on LEAF-2


VLAN: 50
Port Type: Trunk
Provision it immediately

EPG Port VLAN Port Type Provisioning


EPG-EXTERNAL LEAF-2-E1/1 50 Trunk immediate

 Configure an application profile named BIZ-APP. Create two EPGs named EPG-WEB and EPG-DB. Both EPGs should use BD-2 as the bridge domain. Assign them static ports based on this information:

EPG Port VLAN Port Type Provisioning

EPG-WEB Leaf-2-E1/1 57 Trunk Immediate
EPG-WEB Leaf-3-E1/1 55 Trunk Immediate
EPG-DB Leaf-3-E1/1 60 Trunk Immediate

 Make sure that the endpoints are seen on the leaf switches.

2 Points
3.4 Internal Networking
 Configure these filters for tenant Xandar:
Name: PING
Protocol: ICMP

Name: DB-ACCESS
Ports: 1521
Stateful: True

 Configure two contracts with these parameters:


Contract Name: WEB-2-DB
Subject Name: WEB-2-DB-1
Filter: DB-ACCESS

Contract Name: WEB-DB-ICMP


Subject Name: WEB-2-DB-ICMP-1
Filter: PING
 EPG-WEB must be able to initiate access toward EPG-DB using the WEB-2-DB contract.
 EPG-WEB and EPG-DB must be able to ping each other using the WEB-DB-ICMP Contract.
 EPG-WEB must only be able to consume the contracts. EPG-DB should only provide the
contracts.
 Test by initiating a ping from EPG-WEB toward EPG-DB.
 You must also be able to initiate a ping from EPG-DB toward EPG-WEB.

3 Points
3.5 External Networking

 Configure tenant Xandar so that EPG-EXTERNAL can communicate with VLAN 50 on DC2-N7K1/SW1. Name the external network L2OUT-VLAN50. Use the appropriate bridge domain and external domains to link VLAN 50 to DC2-N7K1/SW1 using the appropriate port. Name the new EPG EPG-VLAN-50.

 Configure and provision a contract to allow ICMP reachability between Legacy1 and SVI 50 on DC2-N7K1/SW1. Name the contract and subject EPG-EXTERNAL-VLAN-50. Use an existing filter in the common tenant for this contract.

 Configure the EPG-EXTERNAL to consume the contract and EPG-VLAN-50 to provide the
contract.

 The IP address of the Legacy1 VM is 172.16.1.10/24.

 DC2-N7K1/SW1 has been configured with an SVI for VLAN 50 with an IP address of 172.16.1.111/24 (see the reference sketch below).

 Make sure you can ping between Legacy1 and DC2-N7K1/SW1 SVI 50.
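
For reference, the pre-existing SW1 side of this item is presumably equivalent to the following NX-OS configuration (a sketch based on the SVI address stated above and the port channel from section 3.2; nothing needs to be configured on SW1 for this task):

feature interface-vlan
!
vlan 50
!
interface Vlan50
  no shutdown
  ip address 172.16.1.111/24
!
! ensure VLAN 50 is carried on the vPC toward the fabric, if a pruned list is in use
interface port-channel 10
  switchport trunk allowed vlan add 50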

2 Points
3.6 L3-OUT

 Configure an L3OUT to connect the Cisco ACI fabric with SW2 using OSPF. Name the external network L3OUT. It must be able to communicate with BD-2. Use the appropriate external domains to link LEAF-1 to DC2-N7K1/SW2 using the appropriate port.

 Configure the L3OUT using these parameters:


 Area: 0
 Area type: Regular
 LEAF-1 OSPF router ID: 123.123.123.123
 The SW2 interface toward LEAF-1 is configured as a trunk. It communicates with LEAF-1 using SVI 199 with an IP address of 172.16.25.254/24. Configure LEAF-1 as .1 on this network (a sketch of the SW2 side follows this item).

 Name the new EPG EPG-L3OUT. It should contain the 172.16.30.0/24 network.
 Configure MP-BGP in AS 65001. Use Spine-1 and Spine-2 as route reflectors.
 Verify that the neighbor relationship between SW2 and LEAF-1 is coming up.
 Verify that the 172.16.30.0/24 route is propagated to LEAF-1.
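
A sketch of the SW2 side of this L3OUT, assuming an illustrative trunk interface toward LEAF-1 and a loopback used to source the 172.16.30.0/24 network (the address pinged in section 3.7); the ACI side of the L3OUT and the BGP route reflectors are configured on the APIC:

feature ospf
feature interface-vlan
!
router ospf 1
!
vlan 199
!
! trunk toward LEAF-1 (interface number is illustrative)
interface Ethernet1/3
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 199
  no shutdown
!
interface Vlan199
  no shutdown
  ip address 172.16.25.254/24
  ip router ospf 1 area 0.0.0.0
!
! advertises 172.16.30.0/24 toward the fabric
interface loopback30
  ip address 172.16.30.1/24
  ip router ospf 1 area 0.0.0.0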

2 Points
3.7 Contract
 Configure and provision a contract to allow only ICMP reachability from EPG-WEB toward
EPG-L3OUT. Name the contract and subject L3OUT-CONTRACT. Use an existing filter in
the common tenant for this contract.

 Provision the contract to the appropriate EPGs.

 Make sure you can ping 172.16.30.1 (SW2) from ACI Web-1 and/or ACI Web-2.

3 Points
3.8 FEX Profiles
 Configure FEX profiles based on the diagram above. The FEX profile for N2K-3 must be named FEX-3 and the FEX profile for N2K-4 must be named FEX-4. The ports should use the existing IPG for servers.
 Configure interface and switch profiles based on this information:

Interface Profiles

Name Port Name Ports FEX-ID FEX-Profile


LEAF7-INT-PROF E45-48 1/45-48 101 FEX-3
LEAF8-INT-PROF E45-48 1/45-48 102 FEX-4

Switch Profiles

Name Node Name Node ID Interface Profile


LEAF7-SW-PROF LEAF-7 201 LEAF7-INT-PROF
LEAF8-SW-PROF LEAF-8 202 LEAF8-INT-PROF

 Verify that the FEX devices are up and that their ports are seen on the leaf switches.

3 Points
3.9 Inter-Pod Communication

 The Multi-Pod configuration is preconfigured between Pod-1 and Pod-2.


 Create a new EPG named EPG-APP in the BIZ-APP application profile using BD-2 as the bridge domain. Devices within EPG-APP must not be able to communicate with each other. Assign the EPG static ports based on this information:

EPG Port VLAN Port Type Provisioning

EPG-APP Leaf-7-E101/1/1 55 Trunk Immediate
EPG-APP Leaf-8-E102/1/1 57 Trunk Immediate

 Configure these filters for tenant Xandar:


o Name: Web-ACCESS
o Protocol: TCP
o Ports: 80 and 443

 Configure a contract with these parameters:


o Contract Name: APP-2-WEB
o Subject Name: APP-2-WEB-1
o Filter: Web-ACCESS
o Type: Two-Way Filter

 EPG-APP must be able to initiate access toward EPG-WEB using the APP-2-WEB contract.
 EPG-WEB should only be able to consume the contracts. EPG-APP should only provide the contracts.
 Test by initiating a ping from EPG-APP toward EPG-WEB.
 You should also be able to initiate a ping from EPG-WEB toward EPG-APP.

3 Points
3.10 The End
