(Wired/Wireless)
Cisco DNA Center v2.1.2.6
Lab Guide
FEBRUARY 22, 2022
Task 3: Discovering the Existing Network Infrastructure using the DNAC Discovery Tool 58
Task 4: Integration of Cisco DNAC with Existing Cisco ISE ...................................................... 67
Module 3: Designing the SD-Access Network ................................................................................ 74
Task 1: Define Network Settings and Services ............................................................................ 74
Task 2: Create a Global IP Address Pool ...................................................................................... 79
Task 3: Reserve IP Address Pools .................................................................................................. 83
Task 4: Creating Segmentation with the Cisco DNA Center Policy Application .................. 84
Task 5: Enable Client Telemetry ...................................................................................................... 91
Module 4: Onboarding Access Switches - Leveraging Cisco DNAC Plug and Play and
Template Editor ....................................................................................................................................... 94
Cisco Plug and Play Overview ......................................................................................................... 94
Task 1: Creating Day-0 Template for Switch Onboarding ........................................................ 95
Creating Template Editor Project ................................................................................................ 96
Creating Day-0 Template for Switch Onboarding and OSPF underlay ............................ 100
Associating Template with Network Switch Profile .............................................................. 105
Task 2: Preparing the Upstream Switch for PnP Discovery ................................................... 108
Task 3: Onboarding and Claiming the new Cisco Cat 9300 on the Cisco DNA Center ... 111
Module 5: Bringing up the SDA Fabric- Using DNA Center Provision Application ............... 119
Task 1: Assigning Devices to a Site – Provisioning Step 1 .................................................... 120
Task 2: Assigning Shared Services – Provisioning Step 2 ..................................................... 123
Task 4: Creating the Fabric Overlay ............................................................................................. 128
Identify and create Transits ........................................................................................................ 129
Create IP-Based Transits – Fabric Provisioning Part I ......................................................... 130
Create a Fabric Domain – Fabric Provisioning Part 2 ........................................................... 131
Assigning Fabric Roles – Fabric Provisioning Part 3 ............................................................ 133
Adding Devices to the Fabric ..................................................................................................... 136
Task 5: Wired User Host Onboarding .......................................................................................... 143
Assign Authentication Template and Create Wired Host Pool - Host Onboarding Part 1
.......................................................................................................................................................... 144
Host Onboarding Part 2 – Associating the Host Pools- CAMPUS VN ............................. 145
Module 6: Migrating Cisco ISE AAA Policies for SD-Access ..................................................... 154
Task 1: Verify the Pre-Existing network Group and Users ...................................................... 154
Task 2: Define Wired Dot1X Authorization Profiles and Policies for SD-Access ............. 155
Module 7: Connecting SDA fabric to External Routing Domains ............................................... 166
Task 1: Creating Layer-3 Connectivity ........................................................................................ 168
Task 2: Extending the VRFs to the Fusion Router ..................................................................... 174
Task 3: Use VRF Leaking to Share Routes on Fusion Router ................................................. 176
Task 4: Use VRF Leaking to Share Routes and Advertise to Border Node ......................... 178
About Route Leaking .................................................................................................................... 179
Task 5: Route Redistribution between SDA and Traditional Subnet ..................................... 180
Redistribute OSPF into BGP ....................................................................................................... 181
Redistribute BGP into OSPF ....................................................................................................... 181
Module 8: Incremental Migration: Routed Access with existing subnets, existing switches.
.................................................................................................................................................................. 183
Module 9: SDA Incremental Migration: Migrating to Fabric Enabled Wireless ...................... 192
Task 1: Discovering the Cat 9800-CL from the DNA-C .......................................................... 192
Task 2: Learning Configuration from Traditional WLC 3504 .................................................. 195
Task 3: Prepare WLANs for Provisioning .................................................................................... 200
Task 4: Provisioning the Cat-9800-CL WLC ............................................................................. 205
Task 5: Onboarding Access Points on Cat-9800-CL WLC .................................................... 212
Task 6: Disable WLANs on WLC-3504 ................................................................................................... 218
Module 10: Host Onboarding and Verification ............................................................................... 220
Task 1: Onboarding SDA Wired and Wireless Hosts ..................................................... 220
Task 2: Verifying Communication from SDA User to Traditional VLAN 101 User ......... 223
Module 11: Migrating the last network segment into SD-Access ............................................ 225
Idea Behind this Lab
Everyone is excited about the new era of networking from Cisco, known as SD-Access.
The design philosophy behind the Cisco SD-Access architecture centers on policy-based
automation: provisioning network infrastructure with secure user and device
segmentation, independent of the connection medium (wired or wireless).
The idea behind the lab is to educate the participant on the approaches and strategies
adopted for migrating a traditional campus network into a software-defined fabric.
This lab walks the participant through the process of transforming a complete traditional
(core, distribution, access) network into an SD-Access fabric accommodating both wired
and wireless users, considering various real-world scenarios and customer requirements.
The migration strategy must be business driven and planned from the ground up. The
reason many current networks are flat, with no policy applied, is that the intent was
never defined.
Lab Topology
Partially Migrated SD-Access Logical Topology Overview
Migrated SD-Access Logical Topology Overview
Cisco Digital Network Architecture and Software-Defined Access
Fabric technology, an integral part of SD-Access, provides wired and wireless campus
networks with programmable overlays and easy-to-deploy network virtualization,
permitting a physical network to host one or more logical networks as required to meet the
design intent. In addition to network virtualization, fabric technology in the campus network
enhances control of communications, providing software-defined segmentation and policy
enforcement based on user identity and group membership. Software-defined
segmentation is seamlessly integrated using Cisco TrustSec® technology, providing micro-
segmentation for scalable groups within a virtual network using scalable group tags
(SGTs). Using Cisco DNA Center to automate the creation of virtual networks reduces
operational expenses, coupled with the advantage of reduced risk, with integrated security
and improved network performance provided by the assurance and analytics capabilities.
This lab guide provides an overview of the requirements driving the evolution of campus
network designs, followed by a discussion about the latest technologies and designs that
are available for building an SD-Access network to address those requirements. It is a
companion to the associated deployment guides for SD-Access, which provide
configurations explaining how to deploy the most common implementations of the designs
described in this guide. The intended audience is a technical decision maker who wants to
understand Cisco’s campus offerings and to learn about the technology options available
and the leading practices for designing the best network for the needs of an organization.
Network requirements for the digital organization
With digitization, software applications are evolving from simply supporting business
processes to becoming, in some cases, the primary source of business revenue and
competitive differentiation. Organizations are now constantly challenged by the need to
scale their network capacity to react quickly to application demands and growth. Because
the campus LAN is the network within a location that people and devices use to access
applications, the campus wired and wireless LAN capabilities should be enhanced to
support those changing needs.
The following are the key requirements driving the evolution of existing campus networks.
The IEEE has now ratified the 802.3bz standard that defines 2.5 Gbps and 5 Gbps Ethernet.
Cisco Catalyst® Multigigabit technology supports that bandwidth demand without requiring
an upgrade of the existing copper Ethernet wiring plant.
● Identity services—Identifying users and devices connecting to the network provides the
contextual information required to implement security policies for access control, network
segmentation by using SGTs for group membership, and mapping of devices into virtual
networks (VNs).
SD-Access Solution Components
The SD-Access solution combines the Cisco DNA Center software, identity services, and
wired and wireless fabric functionality. Within the SD-Access solution, a fabric site is
composed of an independent set of fabric control plane nodes, edge nodes, intermediate
(transport only) nodes, and border nodes. Wireless integration adds fabric WLC and fabric
mode AP components to the fabric site. Fabric sites can be interconnected using different
types of transit networks like IP Transit, SD-WAN Transit (future) and SD-Access transit to
create a larger fabric domain. This section describes the functionality for each role, how
the roles map to the physical campus topology, and the components required for solution
management, wireless integration, and policy application.
● Map server—The LISP map server (MS) is used to populate the host tracking database
(HTDB) from registration messages from fabric edge devices.
● Map resolver—The LISP map resolver (MR) is used to respond to map queries from
fabric edge devices requesting RLOC mapping information for destination EIDs.
Edge node
The SD-Access fabric edge nodes are the equivalent of an access layer switch in a
traditional campus LAN design. The edge nodes implement a Layer 3 access design with
the addition of the following fabric functions:
Tech Tip:
Cisco IOS® Software enhances 802.1X device capabilities with Cisco Identity Based Networking Services (IBNS)
2.0. For example, concurrent authentication methods and interface templates have been added. Likewise, Cisco
DNA Center has been enhanced to aid with the transition from IBNS 1.0 to 2.0 configurations, which use Cisco
Common Classification Policy Language (commonly called C3PL). See the release notes and updated deployment
guides for additional configuration capabilities. For more information about IBNS, see: https://cisco.com/go/ibns
● Anycast Layer 3 gateway—A common gateway (IP and MAC addresses) can be
used at every node that shares a common EID subnet, providing optimal forwarding
and mobility across different RLOCs.
Intermediate node
The fabric intermediate nodes are part of the Layer 3 network used for interconnections
among the edge nodes to the border nodes. In the case of a three-tier campus design
using core, distribution, and access layers, the intermediate nodes are the equivalent of
distribution switches, although the number of intermediate nodes is not limited to a single
layer of devices. Intermediate nodes route and transport IP traffic inside the fabric. No
VXLAN encapsulation/de-encapsulation, LISP control plane messages, or SGT awareness
requirements exist on an intermediate node, which has only the additional fabric MTU
requirement to accommodate the larger-size IP packets encapsulated with VXLAN
information.
Border node
The fabric border nodes serve as the gateway between the SD-Access fabric site and the
networks external to the fabric. The fabric border node is responsible for network
virtualization interworking and SGT propagation from the fabric to the rest of the network.
Most networks use an external border, for a common exit point from a fabric, such as for
the rest of an enterprise network along with the Internet. The external border is an efficient
mechanism to offer a default exit point to all virtual networks in the fabric, without
importing any external routes. A fabric border node has the option to be configured as an
internal border, operating as the gateway for specific network addresses such as a shared
services or data center network, where the external networks are imported into the VNs in
the fabric at explicit exit points for those networks. A border node can also have a
combined role as an anywhere border (both internal and external border), which is useful in
networks with border requirements that can't be supported with only external borders,
where one of the external borders is also a location where specific routes need to be
imported using the internal border functionality.
● Fabric domain exit point—The external fabric border is the gateway of last resort for
the fabric edge nodes. This is implemented using LISP Proxy Tunnel Router
functionality. Also possible are internal fabric borders connected to networks with a
well-defined set of IP subnets, adding the requirement to advertise those subnets
into the fabric.
● Policy mapping—The fabric border node also maps SGT information from within the
fabric to be appropriately maintained when exiting that fabric. SGT information is
propagated from the fabric border node to the network external to the fabric, either
by transporting the tags to Cisco TrustSec-aware devices using SGT Exchange
Protocol (SXP) or by directly mapping SGTs into the Cisco metadata field in a
packet, using inline tagging capabilities implemented for connections to the border
node.
Extended Node
You can extend fabric capabilities to Cisco Industrial Ethernet switches, such as the Cisco
Catalyst Digital Building Series and Industrial Ethernet 3000, 4000, and 5000 Series, by
connecting them to a Cisco Catalyst 9000 Series SD-Access fabric edge node, enabling
segmentation for user endpoints and IoT devices.
Using Cisco DNA Center automation, switches in the extended node role are connected to
the fabric edge using an 802.1Q trunk over an EtherChannel with one or multiple physical
members and discovered using zero-touch Plug-and-Play. Endpoints, including fabric-mode
APs, connect to the extended node switch. VLANs and SGTs are assigned using
host onboarding as part of fabric provisioning. Scalable group tagging policy is enforced at
the fabric edge.
The benefits of extending fabric capabilities using extended nodes are operational IoT
simplicity using Cisco DNA Center-based automation, consistent policy across IT and OT,
and greater network visibility of IoT devices.
For more information on extended nodes, go to: https://www.cisco.com/go/iot
A key difference with non-fabric WLC behavior is that fabric WLCs are not active
participants in the data plane traffic-forwarding role for the SSIDs that are fabric enabled—
fabric mode APs directly forward traffic to the fabric edges for those SSIDs.
Typically, the fabric WLC devices connect to a shared services distribution or data center
outside the fabric and fabric border, which means that their management IP address exists
in the global routing table. For the wireless APs to establish a CAPWAP tunnel for WLC
management, the APs must be in a VN that has access to the external device. In the SD-
Access solution, Cisco DNA Center configures wireless APs to reside within the VRF
named INFRA_VRF, which maps to the global routing table, avoiding the need for route
leaking or fusion router (multi-VRF router selectively sharing routing information) services
to establish connectivity. Each fabric site must have its own WLC. Because of the latency
requirements for SD-Access, it is recommended to place the WLC in the local site itself.
Latency is covered in more detail in a later section.
Small- to medium-scale deployments of Cisco SD-Access can use the Cisco Catalyst
9800 Embedded Wireless Controller. The controller is available for the Catalyst 9300
Switch as a software package update to provide wired and wireless (fabric only)
infrastructure with consistent policy, segmentation, security, and seamless mobility, while
maintaining the ease of operation of the Cisco Unified Wireless Network. The wireless
control plane remains unchanged, using CAPWAP tunnels initiating on the APs and
terminating on the Cisco Catalyst 9800 Embedded Wireless Controller. The data plane
uses VXLAN encapsulation for the overlay traffic between the APs and the fabric edge.
The Catalyst 9800 Embedded Wireless Controller for Catalyst 9300 Series software
package enables wireless functionality only for Cisco SD-Access deployments with two
supported topologies:
● Cisco Catalyst 9300 Series switches functioning as colocated border and control plane.
● Cisco Catalyst 9300 Series switches functioning as a fabric in a box.
When wireless clients connect to a fabric mode AP and authenticate into the fabric-
enabled wireless LAN, the WLC updates the fabric mode AP with the client Layer 2 VNI
and an SGT supplied by ISE. Then the WLC registers the wireless client Layer 2 EID into the
control plane, acting as a proxy for the egress fabric edge node switch. After the initial
connectivity is established, the AP uses the Layer 2 VNI information to VXLAN-encapsulate
wireless client communication on the Ethernet connection to the directly connected fabric
edge switch. The fabric edge switch maps the client traffic into the appropriate VLAN
interface associated with the VNI for forwarding across the fabric and registers the wireless
client IP addresses with the control plane database.
Scalable groups are identified by the SGT, a 16-bit value that is transmitted in the VXLAN
header. SGTs are centrally defined, managed, and administered by Cisco ISE. ISE and
Cisco DNA Center are tightly integrated through REST APIs, with management of the
policies driven by Cisco DNA Center.
SD-Access fabric edge node switches send authentication requests to the Policy Services
Node (PSN) persona running on ISE. In the case of a standalone deployment, with or
without node redundancy, that PSN persona is referenced by a single IP address. An ISE
distributed model uses multiple active PSN personas, each with a unique address. All PSN
addresses are learned by Cisco DNA Center, and the Cisco DNA Center user maps fabric
edge node switches to the PSN that supports each edge node.
● Design—Configures device global settings, network site profiles for physical device
inventory, DNS, DHCP, IP addressing, software image repository and management,
device templates, and user access.
● Policy—Defines business intent for provisioning into the network, including creation
of virtual networks, assignment of endpoints to virtual networks, policy contract
definitions for groups, and configuration of application policies.
● Platform—Allows programmatic access to the network and system integration with
third-party systems through APIs, with feature set bundles, configurations, a runtime
dashboard, and a developer toolkit.
Cisco DNA Center supports integration using APIs. For example, Infoblox and Bluecat IP
address management and policy enforcement integration with ISE are available through
Cisco DNA Center. A comprehensive set of northbound REST APIs enables automation,
integration, and innovation.
● All northbound REST API requests are governed by the controller RBAC mechanism.
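As a concrete illustration of the northbound REST flow described above, the sketch below builds (but does not send) the two typical requests: a token request against the DNA Center authentication endpoint, then an inventory query carrying that token in the X-Auth-Token header. The controller hostname is a placeholder; the endpoint paths follow the published Cisco DNA Center platform API.

```python
# Shape of a northbound DNA Center REST exchange (illustrative only).
# "dnac.example.net" is a placeholder controller address.
import base64

DNAC = "https://dnac.example.net"

def auth_request(username: str, password: str):
    """Pieces of POST /dna/system/api/v1/auth/token (HTTP Basic auth)."""
    cred = base64.b64encode(f"{username}:{password}".encode()).decode()
    return ("POST", f"{DNAC}/dna/system/api/v1/auth/token",
            {"Authorization": f"Basic {cred}"})

def inventory_request(token: str):
    """Pieces of GET /dna/intent/api/v1/network-device, token in header."""
    return ("GET", f"{DNAC}/dna/intent/api/v1/network-device",
            {"X-Auth-Token": token})

method, url, headers = auth_request("admin", "password")
print(method, url)
```

In a real script the returned pieces would be passed to an HTTP client, and the token from the auth response would feed every subsequent intent API call.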
Cisco DNA Center is key to enabling automation of device deployments into the network,
providing the speed and consistency required for operational efficiency. Organizations
using Cisco DNA Center benefit from lower cost and reduced risk when deploying and
maintaining their networks.
If you deploy only a single Cisco DNA Center appliance and that node becomes
unavailable, an SD-Access network provisioned by the node still functions, but
automated provisioning capabilities are lost until node availability is restored. For
high-availability purposes, configure three Cisco DNA Center appliances of the same
appliance type to form a three-node cluster. The Cisco DNA Center cluster is accessed
using a single GUI interface hosted on a virtual IP, which is serviced by the resilient nodes
within the cluster. Single nodes should be configured with future clustering in mind, to
easily enable multi-node clustering, as required in the future.
For provisioning and assurance communication efficiency, the Cisco DNA Center cluster
should be installed in close network proximity to the greatest number of devices being
managed, minimizing communication delay to the devices.
For additional information about the Cisco DNA Center Appliance capabilities, see the data
sheet on Cisco.com.
Shared Services
Designing for end-to-end network virtualization requires detailed planning to ensure the
integrity of the virtual networks. In most cases, there is a need to have some form of
shared services that can be reused across multiple virtual networks. It is important that
those shared services are deployed correctly to preserve the isolation between different
virtual networks sharing those services. The use of a fusion router directly attached to the
fabric border provides a mechanism for route leaking of shared services prefixes across
multiple networks, and the use of firewalls provides an additional layer of security and
monitoring of traffic between virtual networks. Examples of shared services include:
The SD-Access architecture is supported by fabric technology implemented for the
campus, enabling the use of virtual networks (overlay networks or fabric overlay) running
on a physical network (underlay network) creating alternative topologies to connect
devices. Overlay networks in data center fabrics commonly are used to provide Layer 2
and Layer 3 logical networks with virtual machine mobility (examples: Cisco ACI™,
VXLAN/EVPN, and FabricPath). Overlay networks also are used in wide-area networks to
provide secure tunneling from remote sites (examples: MPLS, DMVPN, and GRE).
Underlay network
The underlay network is defined by the physical switches and routers that are used to
deploy the SD-Access network. All network elements of the underlay must establish IP
connectivity via the use of a routing protocol. Instead of using arbitrary network topologies
and protocols, the underlay implementation for SD-Access uses a well-designed Layer 3
foundation inclusive of the campus edge switches (also known as a routed access design),
to ensure performance, scalability, and high availability of the network.
In SD-Access, the underlay switches support the endpoint physical connectivity for users.
However, end-user subnets and endpoints are not part of the underlay network—they are
part of a programmable Layer 2 or Layer 3 overlay network.
The validated SD-Access solution supports IPv4 underlay networks, and IPv4 and IPv6
overlay networks.
Latency considerations
Fabric access points operate in local mode, which requires a round-trip time (RTT) of
20 ms or less between the AP and the wireless LAN controller. This generally means that
the WLC is deployed in the same physical site as the access points. If dedicated dark fiber
exists between the physical sites and the WLCs in the data center, and the latency
requirement is met, WLCs and APs may be in different physical locations. This is
commonly seen in metro area networks and SD-Access for Distributed Campus. APs
should not be deployed across the WAN from the WLCs.
Cisco DNA Center three-node clusters must have an RTT of 10 ms or less between nodes
in the cluster. For physical topology options and failover scenarios, see the Cisco DNA
Center 3-Node Cluster High Availability scenarios and network connectivity details technote.
Latency in the network is an important consideration for performance and the RTT between
Cisco DNA Center and any network device it manages should be taken into account. The
optimal RTT should be less than 100 milliseconds to achieve optimal performance for Base
Automation, Assurance, Software-Defined Access, and all other solutions provided by Cisco
DNA Center. The maximum supported latency is 200ms. Latency between 100ms and
200ms is supported, although longer execution times could be experienced for certain
events including Inventory Collection, Fabric Provisioning, SWIM, and other processes.
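The latency guidance above can be summarized in a small helper; the thresholds are taken directly from this guide (100 ms for optimal performance, 200 ms maximum supported):

```python
# Classify controller-to-device RTT against the figures in this guide:
# <= 100 ms optimal, <= 200 ms supported (longer execution times
# possible), above 200 ms unsupported.
def dnac_rtt_status(rtt_ms: float) -> str:
    if rtt_ms <= 100:
        return "optimal"
    if rtt_ms <= 200:
        return "supported (longer execution times possible)"
    return "unsupported"

for rtt in (40, 150, 250):
    print(rtt, dnac_rtt_status(rtt))
```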
Overlay network
An overlay network is created on top of the underlay to create a virtualized network. The
data plane traffic and control plane signaling are contained within each virtualized network,
maintaining isolation among the networks as well as independence from the underlay
network. The SD-Access fabric implements virtualization by encapsulating user traffic in
overlay networks using IP packets that are sourced and terminated at the boundaries of the
fabric. The fabric boundaries include borders for ingress and egress to a fabric, fabric edge
switches for wired clients, and fabric APs for wireless clients. The details of the
encapsulation and fabric device roles are covered in later sections. Overlay networks can
run across all or a subset of the underlay network devices. Multiple overlay networks can
run across the same underlay network to support multitenancy through virtualization. Each
overlay network appears as a virtual routing and forwarding (VRF) instance for connectivity
to external networks. You preserve the overlay separation when extending the networks
outside of the fabric by using VRF-lite, maintaining the network separation within devices
connected to the fabric and on the links between VRF-enabled devices.
Layer 2 overlays emulate a LAN segment to transport Layer 2 frames, carrying a single
subnet over the Layer 3 underlay. Layer 2 overlays are useful in emulating physical
topologies and, depending on the design, can be subject to Layer 2 flooding. By
default, SD-Access transports IP frames without Layer 2 flooding of broadcast
and unknown multicast traffic, altering traditional LAN behavior and reducing its
restrictions to permit creation of larger subnetworks. The SD-Access Solution
Components section describes the fabric components required to allow ARP to function
without broadcasts from the fabric edge, accomplished by using the fabric control plane
for MAC-to-IP address table lookups.
Layer 3 overlays abstract the IP-based connectivity from the physical connectivity and
allow multiple IP networks as part of each virtual network.
Fabric data plane and control plane
SD-Access configures the overlay network for fabric data plane encapsulation using the
Virtual eXtensible LAN (VXLAN) technology framework. VXLAN encapsulates complete
Layer 2 frames for transport across the underlay, with each overlay network identified by a
VXLAN network identifier (VNI). The VXLAN header also carries the SGTs required for
micro-segmentation.
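To make the header layout concrete, here is a minimal Python sketch that packs a 24-bit VNI and a 16-bit SGT into the 8-byte VXLAN header with the Group Policy (VXLAN-GPO) extension. The byte layout follows the VXLAN-GPO draft; this is an illustration, not production code.

```python
import struct

def vxlan_gpo_header(vni: int, sgt: int) -> bytes:
    """Build an 8-byte VXLAN header with the Group Policy (GPO)
    extension, carrying a 24-bit VNI and a 16-bit SGT.
    Layout (per draft-smith-vxlan-group-policy):
      byte 0   : flags (G=0x80 group policy present, I=0x08 VNI valid)
      byte 1   : reserved
      bytes 2-3: Group Policy ID (the SGT)
      bytes 4-6: VNI
      byte 7   : reserved
    """
    assert 0 <= vni < 2**24 and 0 <= sgt < 2**16
    flags = 0x80 | 0x08                       # G bit + I bit
    return (struct.pack("!BBH", flags, 0, sgt)
            + vni.to_bytes(3, "big") + b"\x00")

hdr = vxlan_gpo_header(vni=8190, sgt=17)
print(len(hdr), hdr.hex())   # 8 88000011001ffe00
```

Note the SGT travels in the header itself, which is what allows group policy to be enforced anywhere in the fabric without re-classifying the packet.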
The function of mapping and resolving endpoint addresses requires a control plane
protocol, and SD-Access uses Locator/ID Separation Protocol (LISP) for this task. LISP
brings the advantage of routing based not only on the IP address or MAC address as the
endpoint identifier (EID) for a device but also on an additional IP address that it provides as
a routing locator (RLOC) to represent the network location of that device. The EID and
RLOC combination provide all the necessary information for traffic forwarding, even if an
endpoint uses an unchanged IP address when appearing in a different network location.
Simultaneously, the decoupling of the endpoint identity from its location allows addresses
in the same IP subnetwork to be available behind multiple Layer 3 gateways, versus the
one-to-one coupling of IP subnetwork with network gateway in traditional networks.
The following diagram shows an example of two subnets that are part of the overlay
network. The subnets stretch across physically separated Layer 3 devices. The RLOC
interface is the only routable address that is required to establish connectivity between
endpoints of the same or different subnet.
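The registration-and-resolution behavior described above can be sketched as a simple in-memory table. Class and method names here are illustrative, not a real LISP implementation; the point is that roaming re-registers the same EID under a new RLOC, so a later map request returns the endpoint's current location:

```python
# Toy model of the LISP control plane described above: edge nodes
# register EID -> RLOC mappings with the map server (the HTDB), and
# query the map resolver for a destination EID.
class MapSystem:
    def __init__(self):
        self.htdb = {}                     # EID -> RLOC

    def register(self, eid: str, rloc: str):
        """Map-server role: store a registration from a fabric edge."""
        self.htdb[eid] = rloc

    def resolve(self, eid: str):
        """Map-resolver role: answer a map request from an edge node."""
        return self.htdb.get(eid)          # None -> unknown EID

ms = MapSystem()
ms.register("10.10.1.25/32", "172.16.0.3")  # endpoint behind edge .3
ms.register("10.10.1.25/32", "172.16.0.7")  # same EID after a roam
print(ms.resolve("10.10.1.25/32"))          # 172.16.0.7
```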
SD Access Brownfield Migration Overview
The migration strategy must be business driven and planned from the ground up. The
reason many current networks are flat, with no policy applied, is that the intent was never
defined. The importance of defining a policy cannot be emphasized enough, because
everything we do moving forward revolves around these fundamental principles. At a high
level, what we deal with on a daily basis today is a complex and challenging environment
that barely meets the needs of fast-changing IT requirements.
Basic Approaches to Migration
There are two primary approaches when migrating an existing network to SD-Access:
https://www.ciscolive.com/global/on-demand-library.html?search=BRKCRS-
3493#/session/1571888607137001yDeW
https://www.ciscolive.com/global/on-demand-library.html?search=DGTL-BRKENS-
3822#/session/1570575336196001v4R5
If many of the existing platforms are to be replaced, and if there is sufficient power, space,
and cooling, then building an SD-Access network in parallel may be an option allowing for
easy user cutovers. Building a parallel network that is integrated with the existing network
is effectively a variation of a greenfield build.
To assist with network migration, SD-Access supports a Layer 2 border construct that can
be used temporarily during a transition phase. Create a Layer 2 border handoff using a
single border node connected to the existing traditional Layer 2 access network, where
existing Layer 2 access VLANs map into the SD-Access overlays. You can create link
redundancy between a single Layer 2 border and the existing external Layer 2 access
network using EtherChannel. Chassis redundancy on the existing external Layer 2 access
network can use StackWise switch stacks, Virtual Switching System, or StackWise Virtual
configurations.
Migration Considerations
The following are considerations to take into account before beginning the migration of the
existing network to Cisco SD-Access. They are categorized as follows:
Network Considerations
MTU is defined as the largest network protocol data unit that can be transmitted in a single
transaction. The higher the MTU, the more efficient the network. The VXLAN encapsulation
adds 50 bytes to the original packet, which can push the frame size above a 1500-byte MTU for
certain applications. For example, when wireless is deployed with SD-Access, the
additional Control and Provisioning of Wireless Access Points (CAPWAP) overhead needs
to be considered. In general, increasing the MTU to 9100 bytes (jumbo frames) on
interfaces across all switches and routers in the fabric domain (underlay and overlay) is
recommended to cover most cases and to prevent fragmentation.
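The arithmetic behind the recommendation can be sketched in a few lines of Python (a minimal sketch; the exact overhead depends on the encapsulation stack in use):

```python
# VXLAN adds ~50 bytes of outer headers, so a full-size 1500-byte host
# packet no longer fits a 1500-byte transport MTU without fragmentation.
VXLAN_OVERHEAD = 50    # outer Ethernet/IP/UDP/VXLAN headers (bytes)

def fits(host_packet: int, transport_mtu: int, overhead: int = VXLAN_OVERHEAD) -> bool:
    """True if the encapsulated packet avoids fragmentation."""
    return host_packet + overhead <= transport_mtu

print(fits(1500, 1500))   # False: a standard 1500-byte MTU would fragment
print(fits(1500, 9100))   # True: jumbo frames absorb the overhead
```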
Network Topology
Existing campus networks are flat and do not have any concept of underlay and overlay.
The IP address schema is flat, with no distinction between intranetwork prefixes and
endpoint network prefixes. SD-Access, by its very nature, contains overlay and underlay to
differentiate between the two spaces. It is recommended that two distinct IP ranges be
selected, one for the endpoint network prefixes (overlay) and one for the intranetwork
prefixes (underlay). The advantages are twofold. First, it enables summarization of the IP
space when advertising routes. Second, troubleshooting is easier, since
one has a clear understanding of which IP space one is looking at. For example, the
overlay could be a 10.0.0.0/8 space, and the underlay range could be a 172.16.0.0/16
space.
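A quick way to confirm that a chosen overlay and underlay range are disjoint, using the example ranges above:

```python
import ipaddress

overlay = ipaddress.ip_network("10.0.0.0/8")      # endpoint (overlay) prefixes
underlay = ipaddress.ip_network("172.16.0.0/16")  # infrastructure (underlay) prefixes

# Disjoint address spaces keep summarization and troubleshooting clean.
print(overlay.overlaps(underlay))   # False
```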
Shared services in the network include services such as Dynamic Host Configuration
Protocol (DHCP), DNS, IP address management, Network Time Protocol (NTP), NetFlow
collector, syslog, network management systems (NMS), and others. Traditionally, these
services lie outside the campus or branch network in a data center. Some network designs
do have some or all of these services in the campus or branch, connected to either a core
or a distribution layer. Additionally, the shared services are normally in the global routing
table (GRT), although in some deployments they might lie in a separate VRF context. It is
essential that network devices and endpoints have access to basic services such as DHCP
and DNS in order to connect to the network and forward traffic. The steps for migrating to
SD-Access differ depending upon the physical location as well as the presence in either
GRT or VRF of the shared services in the existing network.
In a Layer 2 access design, in most cases features such as IP access control lists (ACLs),
NetFlow, quality-of-service (QoS) classification, and marking and policing are configured
at the distribution layer switches. Since SD-Access is a fabric solution, the incoming
packets from the endpoints are encapsulated in the fabric data plane by the fabric edge,
making the distribution layer switches act as intermediate nodes that switch IP packets
back and forth between fabric edge (access layer) and upstream switches in the network.
Due to the encapsulation at the fabric edge itself, the IP classification that the features
were based on at the distribution layer is not available; hence the consideration of moving
these features to the access layer switches in the network.
The routing locator (RLOC) addresses (typically Loopback0) and underlay physical
connectivity address space are in the GRT. The endpoint IP space will typically be in VRFs
if not the default VRF. The network devices will still be reachable by the infrastructure and
network management stations via the RLOC space in the GRT.
Policy Considerations
A mind shift is needed when SGT enforcement is considered, because the enforcement is
based not on static IP ACLs but rather on dynamic downloaded security group (SG) ACLs,
which are more secure. Implementation of 802.1X further strengthens the onboarding of
endpoints onto the network, since network connections are now authenticated and/or
profiled and placed in the right area in the network. How the users and things on the
network should be isolated from each other is another consideration that the network
administrator should work on with the security administrator of the network. SD-Access
provides dual levels of segmentation within the network. With the deployment of VRF or
VNs providing the classic path isolation among endpoints and SG ACL enforcement
providing differentiated access control within the VN, it is imperative that network and
security administrators work together to form a segmentation and access control policy
that will be applied consistently in the network.
Hardware Platform Considerations
The Cisco SD-Access fabric scaling depends on the number of hosts and devices in a
single site or across multiple sites. In the first release, Cisco DNA Center will support 1000
network devices as fabric nodes (that includes fabric edge, fabric border, and fabric
control plane nodes and wireless LAN controllers, excluding access points) and 20,000
endpoints per fabric domain. A total of 20 fabric domains are supported with Cisco DNA
Center 2.1.x. Geographical locations that are in close proximity from a latency and
performance standpoint can be controlled by a single Cisco DNA Center instance. It is
recommended that Cisco DNA Center be co-located near other software control plane
entities such as ISE administrative nodes to reduce latency for communications between
them. This way, the only latency variable is the WAN infrastructure (links
and speeds), rather than a combination of factors. The locations might be in the data center or
the main campus site, depending upon customer implementation.
It is recommended to run two control plane nodes per fabric domain for redundancy. For a
given fabric domain, the choice of platform will depend upon the number of host entries to
be managed by the CP node. Hosting a single CP node instance on a switch platform with
active and standby supervisors or stack members provides an additional level of redundancy
within a system. Hosting the other instance on another switch also provides an additional
layer of redundancy across systems. In the latter case, both control planes are active-
active and all registrations are sent to both control plane nodes independently. There is no
synchronization of the database across two control plane nodes.
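On a fabric edge, the independent registration to both control plane nodes is visible in the LISP configuration. The sketch below is illustrative only, with hypothetical RLOC addresses and simplified syntax; Cisco DNA Center provisions the real configuration automatically:

```
router lisp
 ! Resolve destinations against, and register endpoints with, both CP nodes.
 ipv4 itr map-resolver 192.168.255.1
 ipv4 itr map-resolver 192.168.255.2
 ipv4 etr map-server 192.168.255.1 key <auth-key>
 ipv4 etr map-server 192.168.255.2 key <auth-key>
```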
Module 1: Review the Existing Traditional Network
In this module, we will be reviewing the configuration of the Traditional Network Devices
following the below topology:
Task 1: Review Traditional Campus Config
In this task, we will be reviewing the configuration of the three-tier architecture Network
Devices: Core Switches (Traditional Core 1 and Traditional Core 2), Intermediate Switch
(Lab Access Switch) and Access Layer Switches (Traditional Access 1 and Traditional
Access 2)
Step 1 From the Jump Host desktop, open the SecureCRT application and
verify the console sessions of the Network Devices.
Step 2 Open a console session for Traditional Core 1 and run the below commands to verify
its underlay connectivity with Traditional Core 2.
show run interface Te1/0/3
Step 3 Similarly, open a console session for Traditional Core 2 and run the below commands
to verify its underlay connectivity with Traditional Core 1 and other network devices.
Step 4 Now open a console session for Traditional Access 2 and verify interface Gig1/0/23
for the Wired user VLAN 101.
show run interface Gig1/0/23
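For reference, the output should show an access port in the wired-user VLAN, along the lines of this hedged sketch (the exact features configured on the lab switch may differ):

```
interface GigabitEthernet1/0/23
 switchport access vlan 101
 switchport mode access
```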
Step 5 From the console of Traditional Core 2, verify that the default gateway for the User
VLAN is configured.
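The gateway is typically an SVI on the core. A sketch of what to expect, assuming the user-VLAN addressing used later in this lab (172.16.101.0/24 with the gateway at .1):

```
interface Vlan101
 ip address 172.16.101.1 255.255.255.0
```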
Step 6 Verify that VLAN 101 is allowed on the connecting trunk TenGig1/0/1 on
Traditional Core 2.
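An illustrative view of the trunk configuration to look for; the allowed list on the lab switch may contain additional VLANs:

```
interface TenGigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 101
```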
In this task, we will be reviewing the wireless configuration in our Traditional Network.
Step 7 Open the browser on the Jump Host to access WLC GUI “https://192.168.51.240”
WLC Credentials:
Username: Operator
Password: CiscoDNA!
Step 9 Under the WLANs tab, you will see the existing WLAN Profiles and associated SSIDs
Note: SSIDs will match the current student pod, e.g. Pod03 will have SSID "CAMPUS_DEVICES-Pod03"
(the pattern is "CAMPUS_DEVICES-PodXX", where XX = pod number).
Step 12 Navigate to the APs tab and verify the two learned Access Points (AP-2802)
physically connected to Traditional Access 2.
Note: The APs' MAC addresses on each pod will differ from the others. Also, it is okay if the AP is not
joined (this may be due to the lab reset procedure).
Step 16 Navigate to Security > Layer 2, it should be configured as WPA2+WPA3/Enterprise.
Step 18 Follow the same steps as above to verify the Security Configuration on the
WLAN ID 11 "Enterprise_MAB" WLAN profile.
It should be configured as WPA2+WPA3/Enterprise.
Verify the existing WLC interfaces
Step 20 To review the configured WLC interfaces, navigate to CONTROLLER >
Interfaces and verify the highlighted interfaces below.
Step 22 Click “enterprise_ssid_mab” interface and review the detailed configuration.
Task 3: Review existing Cisco Identity Services Engine Config
We will also be using the pre-existing ISE server in our migrated SDA fabric.
Step 24 In order to verify the existing policies, navigate to Policy > Policy Sets
Step 25 Click on the icon at the extreme right of the Default Policy.
Step 26 Now expand the Authorization Policy and verify that the top two policies for the
Wired User group (Student) and the Wireless User group (Faculty), named
STUDENT_172_16_101_0 and FACULTY_172_16_103_0 respectively, exist
from the Traditional Network.
*It is okay if the total number of policies on your pod’s ISE varies from the above shown screenshot.
Module 2: Preparing Cisco DNA Center 2.1.2.6 for Migration
The idea is to build the Cisco SD-Access overlay network on top of the existing
network, which forms the underlay. In this module, we are going to discover our existing
Traditional Core 1 (Catalyst 3850) and deploy it as an SD-Access border/control plane node
that routes between the SDA fabric network and the existing Traditional network plus the
external networks.
This migration approach, where we use the existing network devices and topology to
build an SD-Access fabric, is called the Incremental approach.
Refer to the SD-Access Product Compatibility Matrix to choose the existing network devices
for Incremental Migration:
https://www.cisco.com/c/en/us/solutions/enterprise-networks/software-defined-
access/compatibility-matrix.html
Step 27 Open the browser to DNA Center using the management IP address
https://192.168.100.10 and log in with the following credentials:
Username: admin
Password: DNACisco!
NOTE: DNA Center's login screen is dynamic and may have a different background. Once logged
in, the DNA Center dashboard is displayed. DNA Center's SSL certificate may not be automatically
accepted by your browser. If this occurs, use the advanced settings to allow the connection.
Step 28 To view the DNA Center version, click on the ? at the top right and then
select About. Notice the DNA Center Controller version 2.1.2.6
*Note: If the DNAC on your pod is at (2.1.2.6), you are good to proceed further.
*Note: Please do not update the appliance at any point while doing this lab.
Step 29 Click the SIDE arrow next to Packages to view the various packages that make up
DNA Center 2.1.2.6. In addition, we can navigate to view the
Release Notes and the Serial Number.
Step 30 The DNA Center main screen is divided into four main areas.
The topmost is the output from the Assurance Application along with
Telemetry, displaying the overall network health.
These areas contain the primary components for creating and managing the
solutions provided by the DNA Center Appliance.
Step 31 At the top right corner, click on the search icon, which serves as a DNAC search
engine; for example, we can use it to check the Users on DNAC.
Step 32 You can also navigate to the DNAC tools by clicking on the hamburger menu at the top
left.
The System Settings pages control how the DNA Center system is integrated with
other platforms, show information on users and applications, and provide the ability
to perform system backup and restore.
To view the System Settings, click on the hamburger at the top left.
Step 33 On the System 360 Tab, DNA Center displays the number of currently running
primary services.
High Availability: Displays the DNAC 3-Node Cluster (if available) and the
workflow to manage the cluster.
We can edit this dashboard to manage the services we want to view
on the dashboard.
Step 34 Navigate to System > Data Platform to view the virtualization, automation,
analytics, and cloud capabilities that are available for the business network through
DNA.
Step 35 Click the logo to return to the
DNA Center dashboard.
Using Cisco DNA Center, create a network hierarchy of areas, buildings, and floors that
reflect the physical deployment. In later steps, discovered devices are assigned to
buildings so that they are displayed hierarchically in the topology maps using the device
role defined earlier in the Inventory tool.
Areas are created first. Within an Area, sub-areas and buildings are created. To create a
building, the street address must be known to determine the coordinates and thus place
the building on the map. Alternatively, use latitude and longitude coordinates without the
street address. Floors are associated with buildings and support the importation of floor
maps. For SD-Access Wireless, floors are mandatory as this is where Access Points are
assigned, as clarified in later steps. Buildings created in these steps each represent a
fabric site in the later Provisioning application procedures
Step 36 From the main Cisco DNA Center dashboard, click the DESIGN tab.
Step 39 Select Add Area from the drop-down list.
Step 40 Enter the area name San Jose and click Add.
Note: By using the Import Sites option, the Site Hierarchy can be imported from a CSV file extracted from
Cisco Prime Infrastructure.
Step 41 Now a new Building will be added to San Jose.
Click San Jose, click the gear icon, then select Add Building and
click Add.
Step 43 To support SD-Access Wireless, select the gear icon next to the building in
the hierarchy and choose Add Floor.
Step 44 Enter Floor name as “Floor-1”
Note: Actual floor plan layout may appear differently in your lab
Cisco DNA Center is used to discover and manage the SD-Access underlay network
devices. To discover equipment in the network, the Appliance must have IP reachability to
these devices, and CLI and SNMP management credentials must be configured on them.
Once discovered, the devices are added to Cisco DNA Center’s inventory, allowing the
controller to make configuration changes through provisioning.
The following steps show how to initiate a discovery job by supplying an IP address range
or multiple ranges to scan for network devices. IP address range discovery constrains
the scope of the discovery job, which may save time. Alternatively, by providing the
IP address of an initial device for discovery, Cisco DNA Center can use Cisco Discovery
Protocol (CDP) to find neighbors connected to the initial device.
Tech Note: If using CDP for discovery, reduce the default number of hops to speed up the discovery job.
At a minimum, CLI and SNMP credentials must be provided to initiate a discovery job. Either SNMPv2c Read
and Write credentials or SNMPv3 credentials are required. SNMPv3 is given priority if both are used.
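As a hedged sketch, the device-side configuration that satisfies these discovery prerequisites looks roughly like the following (all credential values are placeholders):

```
! Local CLI credentials and SSH access for Cisco DNA Center discovery.
username <dnac-user> privilege 15 secret <password>
enable secret <enable-password>
!
! SNMPv2c read and write community strings.
snmp-server community <ro-string> RO
snmp-server community <rw-string> RW
!
line vty 0 15
 login local
 transport input ssh
```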
Step 46 Return to DNA Center in the browser.
This opens the Discovery dashboard, displaying various attributes associated with the
network discoveries run by DNAC, such as the Inventory Overview, the latest discovery,
and the 10 most recent discoveries.
Click on Add Discovery.
Step 47 This opens a New Discovery page
Enter the Discovery Name as Traditional Network Core
Note: Outside of the lab environment, this IP address could be any Layer-3 interface or Loopback
Interface on any switch that DNA Center has IP reachability to. In this lab, DNA Center is directly
connected to the LabAccessSwitch on Gig 1/0/12. That interface has an IP address of
192.168.100.6. The LabAccessSwitch is also DNA Center’s default gateway to the actual Internet. It
represents the best starting point to discover the lab topology.
Tech Note: This will instruct DNA Center to use the Loopback IP address of the discovered equipment for
management access. DNA Center will use Telnet/SSH to access the discovered equipment through their Loopback IP
address. Later, DNA Center will configure the Loopback as the source interface for RADIUS/TACACS+ packets.
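The source-interface behavior described in the Tech Note corresponds to IOS commands along these lines (illustrative; DNA Center provisions the actual configuration):

```
ip radius source-interface Loopback0
ip tacacs source-interface Loopback0
```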
Step 51 Expand the Credentials section.
Click on Add Credentials and use the following values for the discovery to
be successful.
Final screenshot should look as below:
Only SSH should already be selected in the Advanced dropdown, as Telnet
has been deprecated.
Step 52 Click the Discover button in the lower right-hand corner to begin the discovery job.
Tech Tip: SSH is the preferred discovery type over Telnet. Telnet is most commonly used in lab settings and not in
production environments. If the environment uses Telnet, select the check box, and drag and drop Telnet higher in
the Protocol Order. If both SSH and Telnet are selected, Cisco DNA Center attempts to connect to the discovered devices
using both options in the order they are listed. Once Cisco DNA Center has a successful connection with a device, it will not
attempt the next protocol type.
Step 53 A scheduler flyout window will pop-up. Click Start to begin the discovery process.
Step 54 Verify that the Discovery process was able to find four (4) devices. This may take
several seconds to complete.
Step 55 Verify that the devices with the following IP addresses have been discovered.
Step 56 Now we will repeat the above steps to discover the Traditional Wireless
Controller (3504) separately.
Step 57 Click on the + sign at the top of the same page to start the new discovery.
Step 58 Enter the Discovery Name as Traditional WLC 3504 and Discovery type as IP
Address Range
Add the below details:
Step 59 In DNA Center, expand the Credentials section below and verify that the selected
credentials are there and globally selected (purple highlighted).
Only SSH should already be selected in the Advanced dropdown, as Telnet
has been deprecated.
Step 60 Click the Discover button in the lower right-hand corner to begin the discovery job.
Step 61 A scheduler flyout window will pop-up. Click Start to begin the discovery process.
Step 62 Verify that the device with the following IP address has been discovered:
192.168.51.240 – WLC_3504
Step 63 Click the logo to return to the DNA Center dashboard.
The Identity Services Engine (ISE) is the authentication and policy server required for
Software-Defined Access. Once integrated with Cisco DNA Center using pxGrid,
information sharing between the two platforms is enabled, including device information and
group information. This allows Cisco DNA Center to define policies that are pushed to ISE
and then rendered into the network infrastructure by the ISE Policy Service Nodes (PSNs).
When integrating the two platforms, a trust is established through mutual certificate
authentication. This authentication is completed seamlessly in the background during
integration and requires both platforms to have accurate NTP sync.
Cisco DNA Center devices, as soon as they are provisioned and belong to a particular site
in the Cisco DNA Center site hierarchy, are pushed to ISE. Any updates to a Cisco DNA
Center device (such as change to IP address, SNMP or CLI credentials, ISE shared secret,
and so on) will flow to the corresponding device instance on ISE automatically. When a
Cisco DNA Center device is deleted, it is removed from ISE as well. Please note that Cisco
DNA Center devices are pushed to ISE only when these devices are associated to a
particular site where ISE is configured as its AAA server.
During the integration of ISE and DNA Center, all Scalable Group Tags (SGTs) present in
ISE are pulled into DNA Center. Whatever policy is configured in the (TrustSec) egress
matrices of ISE at the time DNA Center and ISE are integrated is also pulled into DNA Center.
This is referred to as Day 0 Brownfield Support: if policies are present in ISE at the
point of integration, those policies are pulled into DNA Center and populated.
Except for the SGTs, anything TrustSec and TrustSec Policy related that is created directly
on ISE OOB (out-of-band) from DNA Center after the initial integration will not be available
or be displayed in DNA Center. There is a cross launch capability in DNA Center to see
what is present in ISE with respect to TrustSec Policy.
This integration must be done before completing the workflows in the Design and Policy
applications so that the results of the workflows can be provisioned to the network
equipment to use ISE as the AAA server for users and endpoints via RADIUS and for device
administration via TACACS+.
Tech Note: Many organizations use TACACS+ for management authentication to network devices. TACACS+ was
used for this prescriptive guide, although it is not required. TACACS+ is not a mandatory requirement for
Software-Defined Access. If TACACS+ is not used in the deployment, do not select the option in Advanced
Settings when integrating Cisco DNA Center and ISE
Step 64 From the main Cisco DNA Center dashboard select the hamburger icon in the top-
left corner.
Step 66 This hyperlink will navigate to:
System Settings > Settings > Authentication and Policy Servers.
Click the + Add button.
Step 67 A dialog box will slide over from the right labeled Add AAA/ISE server.
Use the below table to populate the credentials and fields.
Field Value
Server IP Address * 192.168.100.20
Shared Secret * CiscoDNA!
Cisco ISE Server Slide Switch ON
Username * admin
Password * CiscoDNA!
FQDN * ise.dna.local
View Advanced Settings Slide Switch Open and Expand
Protocol TACACS Selected
Step 68 Slide the View Advanced Settings switch.
Step 69 Use the scroll bar on the right to scroll down.
Step 70 Click the TACACS box to select it.
Click Save.
Step 71 DNA Center will begin integrating with ISE using pxGrid.
This includes the process of mutual certificate authentication between DNA
Center and ISE.
During the establishment of communication, Cisco DNA Center displays the
message Creating AAA server…
Step 72 Verify that the Status displays INPROGRESS and, by continually refreshing the page,
that it eventually changes to ACTIVE.
This may take several minutes to complete. Note: You may have to refresh your
browser to see the ACTIVE state displayed.
Step 73 Open a new browser tab, and log into ISE using IP address https://192.168.100.20
and credentials:
Cisco ISE Credentials:
username: admin
password: CiscoDNA!
Step 74 Once logged in, Navigate to Administration > pxGrid Services
Step 76 You can verify the client connectivity in ISE under the Web Clients tab.
Step 77 Once established, the current communication status between ISE and Cisco DNA
Center can be viewed by navigating from the gear icon to System Settings > System 360
Under External Network Services, the Cisco ISE server shows an Available
status.
Module 3: Designing the SD-Access Network
Cisco DNA Center provides a robust Design application to allow customers of varying sizes
and scales to easily define their physical sites and common resources. Using an intuitive
hierarchical format, the Design application removes the need to redefine the same
resource – such as DHCP, DNS, and AAA servers – in multiple places when provisioning
devices. The network hierarchy created in the Design application should reflect the actual
physical network hierarchy of the deployment.
The Design application is the building block for every other workflow in both Cisco DNA
Center and Software-Defined Access. The configuration and items defined in this section
are used and provisioned in later steps. The Design application begins with the creation of
the network hierarchy of Areas, Buildings, and Floors. Once the hierarchy has been
created, network settings are defined. These include DHCP and DNS servers, AAA servers
and NTP servers, and when applicable, SNMP, Syslog, and Netflow servers. These servers
are defined once in the Design application and provisioned to multiple devices in later
steps. This allows for faster innovation without the repetitive typing of the same server
configuration on the network infrastructure. After network settings are defined, IP address
pools are designed, defined, and reserved. These pools are used for automation features
such as border handoff and LAN automation and are used in host onboarding in later steps.
The final step in the Design application in this guide is the creation and configuration of the
wireless network settings including a Guest Portal in ISE.
In Cisco DNA Center, common network resources and settings are saved in the Design
application’s Network Settings tab. Saving allows information pertaining to the enterprise
to be stored so it can be reused throughout various Cisco DNA Center workflows. Items
are defined once so that they can be used many times.
Configurable network settings in the Design application include AAA server, DHCP server,
DNS server, Syslog server, SNMP server, Netflow collector, NTP server, time zone, and
Message of the Day. Several of these items are applicable to a Cisco DNA Center
Assurance deployment. For SD-Access, AAA and DHCP servers are mandatory and DNS
and NTP servers should always be used.
By default, when clicking the Network Settings tab, newly configured settings are assigned
as Global network settings. They are applied to the entire hierarchy and inherited by each
site, building, and floor. In Network Settings, the default selection point in the hierarchy is
Global.
It is possible to define specific network settings and resources for specific sites. In fact,
each fabric site in this deployment has its own dedicated ISE Policy Service Node, as
shown in the next steps. For this prescriptive deployment, NTP, DHCP, and DNS have the
same configuration for each site and are therefore defined at the Global level.
Step 79 Select AAA, Stealthwatch Flow Destination, and NTP, and press OK.
Step 80 Select the checkbox next to both Network and Client/Endpoint. The boxes change
state and additional settings are displayed.
Step 81 Configure the AAA Server for Network Authentication using the table below
Field Value
Servers ISE
Protocol TACACS
Network drop-down 192.168.100.20
IP Address (Primary) drop-down 192.168.100.20
Step 82 Configure the AAA Server for Client/Endpoint Authentication using the table
below
Field Value
Servers ISE
Protocol RADIUS
Client/Endpoint drop-down 192.168.100.20
IP Address (Primary) drop-down 192.168.100.20
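For orientation, settings like these are eventually rendered by Cisco DNA Center into device configuration roughly similar to the sketch below. The exact CLI varies by platform and release, and the group name and key are illustrative:

```
aaa new-model
!
radius server ISE
 address ipv4 192.168.100.20 auth-port 1812 acct-port 1813
 key <shared-secret>
!
aaa group server radius dnac-radius-group
 server name ISE
```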
Step 83 Configure the remaining servers using the information in the table below.
Field Value
DHCP Server 192.168.100.1
DNS Server – Domain Name dna.local
DNS Server – IP Address 192.168.100.1
NTP Server 192.168.100.6
Stealthwatch Flow Destination IP Address 192.168.100.10 (Cisco DNAC)
Stealthwatch Flow Destination Port 2055
Time Zone EST5EDT
Step 84 Click on the radio button for Add an external flow destination server and enter the
values from the table above.
Step 86 Verify that an Information and a Success notification appear, indicating the
settings were saved.
Step 87 On the same page, navigate to Network Settings > Device Credentials.
Verify the below credentials are enabled for each section and click Save.
This section provides information about global IP address pools and shows how to define
the global IP address pools that are referenced during the pool reservation process.
Defining an IP address pool tells Cisco DNA Center to set aside that block of addresses for one of these
special uses.
Tech Note: IP address pool reservation is not available at the global level. It must be done at the area, building,
or floor level.
IP address pools that will be used for DHCP must be manually defined and configured on the DHCP server. Cisco
DNA Center does not provision the actual DHCP server, even if it is a Cisco device. It reserves pools as a visual
reference for use in later workflows. DHCP scopes on the DHCP server should be configured with any additional
DHCP options required to make a device work. For example, Option 150 is used to direct an IP phone to a TFTP
server to receive its configuration, and Option 43 is commonly used for Access Points to direct them to their
corresponding wireless LAN controller.
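A hedged example of such a DHCP scope on a Cisco IOS DHCP server, using the wireless-user pool and WLC address from this lab; the pool name is illustrative, and the Option 43 TLV is type 0xf1, length 4, followed by the controller IP in hex:

```
ip dhcp pool WIRELESS_USER
 network 172.16.103.0 255.255.255.0
 default-router 172.16.103.1
 dns-server 192.168.100.1
 ! 192.168.51.240 (the WLC) encoded as f1 04 c0 a8 33 f0
 option 43 hex f104.c0a8.33f0
```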
Several IP Address Pools will be created for various uses. Some will be used for Device
Onboarding (End-host IP Addresses) while others will be used for Guest Access and
Infrastructure.
Step 89 Configure the six IP Address Pools as shown in the table below:
Because the DHCP and DNS servers have already been defined, they are
available from the drop-down boxes and do not need to be manually
defined.
This demonstrates the define-once-and-use-many concept that was
described earlier.
The Overlapping checkbox should remain unchecked for all IP Address
Pools.
IP Pool Name | IP Subnet | CIDR Prefix | Gateway Address | DHCP Server(s) | DNS Server(s)
Step 91 Verify that a Success notification appears when saving each IP Address
Pool.
Step 92 Once completed, the IP Address Pools tab should appear as below.
Tech Tip: Because an aggregate IP address pool was defined at the global level, there is increased flexibility and ease in
reserving pools. It requires precise subnet planning and may not be available for all deployments – particularly if integrating
with a third-party IPAM tool that has API integration with Cisco DNA Center. If an aggregate subnet is not available, then the
same IP address pools need to be defined at the global level and then reserved at the applicable site level.
The defined global IP address pool is used to reserve pools at the area, building, or floor in
the network hierarchy. For single-site deployments, the entire set of global IP address
pools is reserved for that site. In an SD-Access for Distributed Campus deployment, each
site has its own assigned subnets that do not overlap between the sites.
Step 93 In the Cisco DNA Center dashboard, navigate to Design > Network Settings
> IP Address Pools.
Step 94 Now we will reserve the IP Pools at the Floor level so they can be utilized
later in the lab.
Note: Make sure you are adding the below details at the Floor-1 level of the
Network Hierarchy (as shown below).
Step 95 Add the details as follows for the IP Pools:
IP Pool Name | IP Subnet | Prefix Length | Global Pool (dropdown) | Gateway IP Address | DHCP Server(s) | DNS Server(s)
AccessPoints_F1 | 172.16.60.0 | /24 | AccessPoints | 172.16.60.1 | 192.168.100.1 | 192.168.100.1
Fabric_AP_F1 | 172.16.50.0 | /24 | Fabric_AP | 172.16.50.1 | 192.168.100.1 | 192.168.100.1
FusionRouter_F1 | 192.168.170.0 | /24 | FusionRouter | – | – | –
Production_User_F1 | 172.16.101.0 | /24 | Production_User | 172.16.101.1 | 192.168.100.1 | 192.168.100.1
SDA_ProdUser_F1 | 172.16.201.0 | /24 | SDA_Prod_User | 172.16.201.1 | 192.168.100.1 | 192.168.100.1
WirelessUser_F1 | 172.16.103.0 | /24 | WirelessUser | 172.16.103.1 | 192.168.100.1 | 192.168.100.1
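A small Python sanity check over the reserved pools above: each gateway must sit inside its subnet, and since the Overlapping checkbox is left unchecked, no two pools may overlap.

```python
import ipaddress
from itertools import combinations

# (subnet, gateway) pairs from the Floor-1 reservation table.
pools = {
    "AccessPoints_F1":    ("172.16.60.0/24",   "172.16.60.1"),
    "Fabric_AP_F1":       ("172.16.50.0/24",   "172.16.50.1"),
    "FusionRouter_F1":    ("192.168.170.0/24", None),
    "Production_User_F1": ("172.16.101.0/24",  "172.16.101.1"),
    "SDA_ProdUser_F1":    ("172.16.201.0/24",  "172.16.201.1"),
    "WirelessUser_F1":    ("172.16.103.0/24",  "172.16.103.1"),
}

nets = []
for name, (subnet, gateway) in pools.items():
    net = ipaddress.ip_network(subnet)
    nets.append(net)
    # Every defined gateway must be an address inside its own pool.
    assert gateway is None or ipaddress.ip_address(gateway) in net, name

# No pair of reserved pools may overlap.
assert not any(a.overlaps(b) for a, b in combinations(nets, 2))
print("all pools valid")
```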
Task 4: Creating Segmentation with the Cisco DNA Center Policy Application
SD-Access supports two levels of segmentation – macro and micro. Macro segmentation
uses overlay networks - VRFs. Micro segmentation uses scalable group tags (SGTs) to
apply policy to groups of users or devices.
In a university example, students and faculty machines may both be permitted to access
printing resources, but student machines should not communicate directly with faculty
machines, and printing devices should not communicate with other printing devices. This
micro-segmentation policy can be accomplished using the Policy application in Cisco DNA
Center which leverages APIs to program the ISE TrustSec Matrix.
For a deeper exploration of designing segmentation for SD-Access with additional use
cases, see the Software-Defined Access Segmentation Design Guide.
The Policy application supports creating and managing virtual networks, policy
administration and contracts, and scalable group tag creation. Unified policy is at the heart
of the SD-Access solution, differentiating it from others. Therefore, deployments should
set up their SD-Access policy (virtual networks and contracts) before doing any SD-
Access provisioning.
The general order of operation for SD-Access is Design, Policy, and Provision,
corresponding with the order of the applications seen on the Cisco DNA Center dashboard.
In this section, the segmentation for the overlay network is defined. (Note that the overlay
network will not be fully created until the host onboarding stage.) This process virtualizes
the overlay network into multiple self-contained virtual networks (VNs). After VN creation,
the TrustSec policies are created to define which users and groups within a VN are able to
communicate.
Use these procedures as prescriptive examples for deploying macro and micro
segmentation policies using Cisco DNA Center.
Virtual networks are created first, then group-based access control policies are used to
enforce policy within the VN.
Step 97 From the main Cisco DNA Center dashboard, navigate to POLICY>Group-
Based Access Control.
Step 98 Observe the warning message on the screen about policy migration
compliance on DNAC.
Step 99 Notice the progress of the migration; it might take a few minutes to
complete.
Step 101 With this migration complete, you will be able to see group-based policy
on the DNAC policy dashboard.
Step 102 Now on the main DNAC dashboard, navigate to Policy > Virtual Network.
Step 103 Begin by clicking the + on the right side to create a new virtual network.
Note: A single pane for all policy configuration is supported starting with DNAC 1.3.1.x; the policies can be
configured completely on the DNAC policy dashboard instead of returning to the ISE dashboard.
Step 104 Enter the Virtual Network Name of CAMPUS and click Save
Note: Please pay attention to capitalization. The name of the virtual network defined
in DNA Center will later be pushed down to the fabric devices as a VRF definition. VRF
definitions on the CLI are case sensitive: VRF Campus and VRF CAMPUS would be
considered two different VRFs.
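As an illustration of why the exact name matters, the VRF definition that Cisco DNA Center later pushes to the fabric devices takes roughly this shape (a sketch, not the full provisioned configuration):

```
! The VN name becomes the VRF name verbatim, case included:
vrf definition CAMPUS
 address-family ipv4
 exit-address-family
```

A lowercase "Campus" VN would yield a second, entirely separate VRF on the same device.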
Step 105 Multiple Scalable Groups can be selected by clicking their check boxes
individually on the right side of the screen. Check off all Available Scalable
Groups except for BYOD, Guests, Quarantined_Systems, Test_Servers, and
Unknown.
Note: Make sure to select SGTs on both page 1 and page 2 of the list on the right side.
Step 106 Click Save.
Step 107 Verify the CAMPUS VN has been created and contains thirteen (13) SGTs.
TechNote: Since this is a lab setup, we are only creating one VN; in a production environment you can
scale the number of VNs.
Note: In the Host Onboarding section, the VNs that were just created will be associated with the created
IP Address Pools. This process is how a particular subnet becomes associated with a particular VRF.
To create the SD-Access fabric, we must first enable Wired Client Data
Collection.
Step 110 Make sure you have the Global level in the network hierarchy selected.
Expand the NetFlow area and check the Use Cisco DNA Center as NetFlow
collector server check box.
The NetFlow configuration on the device interfaces is completed only when you enable
application telemetry on the device.
Step 111 Expand the Wired Client Data Collection area and check the Monitor wired
clients check box.
This selection turns on IP Device Tracking (IPDT) on the access devices of the site. By
default, IPDT is disabled for the site.
This completes Module-3
Module 4: Onboarding Access switches- Leveraging Cisco
DNAC Plug n Play and Template Editor
This module is focused on onboarding new access switches (Catalyst 9300) in the
Brownfield deployment using Cisco DNAC PnP and Day-0 Template Editor.
The Cisco® Plug and Play solution is a converged solution that provides a highly secure,
scalable, seamless, and unified zero-touch deployment experience. Enterprises incur major
operating costs to install and deploy networking devices as part of campus and branch
deployments.
Typically, every device must be pre-staged, which involves repetitively copying Cisco IOS®
Software images and applying configurations manually through a console connection. Once
pre-staged, these devices are then shipped to the final site for installation. The end-site
installation may require a skilled installer for troubleshooting, bootstrapping, or modifying
the configuration. The entire process can be costly, time consuming, and prone to errors.
At the same time, customers would like to increase the speed and reduce complexity of
the deployment without compromising security.
Cisco DNA Center is designed for intent-based networking (IBN). The solution breaks the
process into Day 0 and Day N. It provides a unified approach to provision
enterprise networks comprised of Cisco routers, switches, and wireless devices with a
near zero-touch deployment experience. When planning to provision any project, the PnP
feature within Cisco DNA Center can help pre-provision and add devices to the project.
This includes entering device information and setting up a bootstrap configuration, full
configuration, and Cisco device image for each device to be installed. The bootstrap
configuration enables the PnP Agent, specifies the device interface to be used, and
configures a static IP address for it.
Task 1: Creating Day-0 Template for Switch Onboarding
Cisco DNA Center provides an interactive editor to author CLI templates. Template Editor is
a centralized CLI management tool that helps design and provision templates in Cisco
DNA Center. A template is used to generate a device-deployable configuration by
replacing the parameterized elements (variables) with actual values and evaluating the
control logic statements.
Creating Template Editor Project
Step 114 From the DNAC home page, choose Tools > Template Editor
Step 115 From the left pane, next to Onboarding Configuration, click the gear icon
and select Add Template
Step 116 In the Add New Template window, select Regular Template and fill in the
following details and Click Add
*Once the device type is selected, click on Back to Add New Template in order to return to the previous flyout
Points to be Noted:
● Tagging a configuration template helps you to search for a template using the tag name in the search
field. Use the tagged template as a reference to configure more devices.
● There are different granularity levels for choosing the device type from the hierarchical structure. The
device type is used during deployment to ensure that templates deploy to devices that match the
specified device type criteria. This lets you create specialized templates for specific device models.
● Template Editor does not show device product IDs (PIDs); instead, it shows the device series and model
description. You can use cisco.com to look up the device data sheet based on the PID, find the device
series and model description, and choose the device type appropriately.
● During provisioning, Cisco DNA Center checks to see if the selected device has the software version
listed in the template. If there is a mismatch, the provisioning skips the template.
Once added, you shall see the success message for adding a PnP project.
You can find the template under the Onboarding Configuration drop-down in the left pane
of the page.
Step 117 Under Actions > Commit, click Commit. A template must be committed
before it can be used for provisioning.
Creating Day-0 Template for Switch Onboarding and OSPF underlay
In order to onboard this switch into the OSPF underlay routing, we will leverage the
template to provision the OSPF area 0 configuration.
Step 118 Open the above created template named PNP from the left pane
Step 119 Copy and paste the below configuration into the PNP template created above.
hostname $hostname
interface loopback0
ip address $loopback 255.255.255.255
ip routing
router ospf 1
network $loopback 0.0.0.0 area 0
network $subnet1 0.0.0.255 area 0
network 172.16.50.0 0.0.0.255 area 0
line con 0
logging synchronous
Note: For detailed template creation, refer to https://blogs.cisco.com/developer/velocity-templates-dnac-1
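As a hint of the control logic the Template Editor supports, Velocity templates can branch on variable values. A small illustrative fragment — not part of this lab's template, and the hostname test and domain name below are invented for the example:

```
## Velocity comment: apply extra configuration only when a variable matches.
#if( $hostname == "EdgeNode1" )
ip domain name example.lab
#end
```

Variables prefixed with $ are substituted at provisioning time, and #if/#end blocks are evaluated before the rendered configuration is pushed to the device.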
Step 120 To save the template content, from the Actions drop-down list, choose
Save
Step 121 To commit the template, from the Actions drop-down list, choose
Commit.
Note: Only committed templates can be associated with a network profile and
used for provisioning.
Step 122 To test the template, click the button to switch to Simulation Editor.
Step 123 Click Create Simulation.
Simulation Name: Fabric Edge
hostname: EdgeNode1
loopback: 192.168.255.1
subnet1: 192.168.11.0
Step 125 Click Run; all the variables in the CLI will now display the actual
values entered in the form fields on the left.
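With the simulation values above substituted into the template, the rendered configuration should look like this (each $variable simply replaced by its form value):

```
hostname EdgeNode1
interface loopback0
ip address 192.168.255.1 255.255.255.255
ip routing
router ospf 1
network 192.168.255.1 0.0.0.0 area 0
network 192.168.11.0 0.0.0.255 area 0
network 172.16.50.0 0.0.0.255 area 0
line con 0
logging synchronous
```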
Step 126 Verify the final Configuration that will be added to the device by the
Template.
Step 127 Click the logo to return to the DNA Center dashboard.
In this step, we will create a Network Switch Profile to associate the template (PNP)
created above with a network site.
Step 128 From the DNAC home page, Navigate to Design > Network Profiles.
Step 129 Click +Add Profiles and choose Switching
Step 130 Give a profile name such as EdgeNode Profile, and click +Add under the
OnBoarding Template(s) tab.
Step 131 Select Cisco Catalyst 9300 Series Switches from the Device Type drop-
down list.
Note: Make sure that you select the same device type as you defined in the
earlier step.
Step 132 Select an onboarding configuration template (PNP, in our case) from the
drop-down list.
Each network profile can have multiple device types and sites assigned, but multiple
network profiles cannot share the same site. Two different network profiles can,
however, be assigned different floors of the same site.
Step 135 On the side panel for Add Sites to Profile, expand the site (example: San
Jose), expand Building-1, and select Floor-1.
Click Save.
Task 2: Preparing the Upstream Switch for PnP Discovery
For the device to connect with the controller (PnP Server), there are five options:
● DHCP server, using option 43 (set the IP Address of the controller).
● DHCP server, using a DNS domain name (DNS lookup of pnphelper).
● Cisco Plug and Play Connect (cloud-based device discovery).
● USB key (bootstrap config file).
● Cisco Installer App (For iPhone/Android).
For the devices to call home to the Plug and Play server in Cisco DNA Center, this guide
will cover only the second option: DHCP server, using DNS lookup for PnP discovery.
For this option, we need to add some configuration on the upstream switch from the PnP
agent.
NOTE:
In this lab, some of this configuration has been pre-configured on your lab pods. Let's re-visit the
topology and review the configuration.
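For DNS-based discovery, the DHCP scope hands the booting switch a DNS server and a domain name, and the DNS zone carries a record resolving the PnP hostname to the Cisco DNA Center IP. A sketch of the supporting configuration — the pool name, addresses, and domain below are entirely hypothetical, not this lab's values:

```
! Illustrative sketch only - addresses and domain name are assumptions.
! DHCP scope for the PnP VLAN, handing out a DNS server and domain name:
ip dhcp pool PNP_POOL
 network 192.168.201.0 255.255.255.0
 default-router 192.168.201.1
 dns-server 192.168.100.1
 domain-name example.lab
!
! On the DNS server, an A record resolves the well-known PnP hostname
! (commonly pnpserver.<domain>) to the Cisco DNA Center IP, e.g.:
! pnpserver.example.lab.  A  <DNA Center IP>
```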
Step 137 From the Jump Host desktop, open the Secure CRT application and
log in to the console session of Trad Core 1 / CPNBN
Username: cisco
Password: cisco
Enable Password: cisco
Step 138 Let’s first reset the configuration of the interface (TenGigabitEthernet1/0/1)
connected to the PnP agent (the Cat-9300 we are onboarding through the
PnP process in this module):
configure terminal
default interface TenGigabitEthernet1/0/1
end
Step 139 Run the below commands to configure the trunk toward the PnP agent, as we
will be using VLAN 201 for the PnP discovery:
configure terminal
interface TE1/0/1
switchport mode trunk
switchport trunk allowed vlan 201
end
With the “pnp startup-vlan 201” command added, any PnP switch will have VLAN 201
created and its uplink converted to a trunk with VLAN 201 enabled. This process uses CDP
under the covers to communicate with the PnP device, and a process on the device creates
the VLAN and enables DHCP.
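The command referenced above is a global configuration command on the upstream switch; a minimal sketch of how it is entered:

```
configure terminal
 pnp startup-vlan 201
end
```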
Step 141 To review the startup VLAN configuration on Traditional_Core_1, run
the below command.
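The screenshot of the command is not reproduced here; a command along these lines (an assumption on our part, not taken from the original guide) displays the startup VLAN configuration:

```
show running-config | include pnp startup-vlan
```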
Task 3: Onboarding and Claiming the new Cisco Cat 9300 on the Cisco DNA
Center
As discussed earlier, the PnP Agent will only boot on the new device if it does not have
a startup configuration. However, in our lab the EdgeNode1 has a start-up config.
Therefore, we must first clean the device for PnP discovery.
Step 142 From Secure CRT, Open the console session for TraditionalAccess1
(Note: Don’t get confused with the hostname of this device, as it would be
EdgeNode1 Switch)
Login Credentials:
Username: cisco
Password: cisco
Enable Password: cisco
Step 143 Run the following commands in the same order as shown below:
(Note: Do not copy and paste the whole set of commands at once, run
single command at a time)
configure terminal
no pnp profile pnp-zero-touch
(respond yes to removing all downloaded certificates)
Note: Since there was never a pnp profile on this switch, you will get a
%Error : Profile Not Found message on the first command.
Also remove any other crypto certificates, if shown.
exit
delete /force flash:pnp-reset-config.cfg
delete /force flash:vlan.dat
delete /force flash:pnp-tech-discovery-summary
delete /force flash:pnp-tech-time
write erase
reload (enter no if asked to save)
Step 145 Verify that the switch boots up on the start-up config prompt as shown
below:
Do not press anything when it asks to enter the initial configuration dialog.
Step 146 Once the device boots up, it will get the IP address of Cisco DNA Center
using DNS resolution and will do a PnP discovery as below:
NOTE: Do not enter any input to the console, as that will interrupt the PnP process. It might take
2-3 minutes to get the PnP process initiated.
Step 147 Once the success message is reflected on the switch console, go back
to DNA Center: Provision > Network Devices and select Plug and Play.
Step 148 Check the status of the switch to make sure it’s Unclaimed before
proceeding
Note: Devices can also be added and claimed using the serial number and product ID. On
the Plug and Play Devices page, click Add and select Single Device, Bulk Devices, or Smart
Account Devices, and provide the information accordingly.
Step 149 Select the switch and click on Actions drop-down and select Claim to
start the claim wizard.
Step 151 Assign the Switch to Site: Global/San Jose/Building-1/Floor-1, then click
Next.
Step 152 Confirm that the correct onboarding configuration template, PNP (created
in the previous section), is reflected and click Next.
Step 153 In the Devices configuration step, select the switch and enter the
provisioning parameters (variables defined in the onboarding template):
hostname: EdgeNode1
loopback: 192.168.255.1
subnet1: 192.168.11.0
Step 154 Carefully review the summary by expanding each tab and click Claim.
Step 155 Select Yes to confirm to proceed with the claim request.
Note: If for some reason there is an error in claiming the switch, return to Design>Network
Settings and re-save the credentials for Devices, RO and RW in SNMP credentials at the Floor-1
level.
Step 156 Now watch the state of the switch changing from Unclaimed to Planned to
Onboarding and finally to Provisioned.
Step 157 Go to Provision > Inventory
Select the Global level of the Hierarchy.
Step 158 You shall see the EdgeNode1 device under Floor-1; it is already part of
the site.
Module 5: Bringing up the SDA Fabric- Using DNA Center
Provision Application
The Provision application in Cisco DNA Center is used to take the network components
defined in the Design application and the segmentation created in the Policy application
and deploy them to the devices. The Provision application has several different workflows
that build on one another. This application can also be used for SWIM and LAN automation
although these are beyond the scope of this document.
The process begins by assigning and provisioning devices to a site. Once devices are
provisioned to a site, the fabric overlay workflows can begin. This starts through the
creation of transits, the formation of a fabric domain, and the assignment of sites,
buildings, and/or floors to this fabric domain. Once assigned, Cisco DNA Center lists these
locations as Fabric-Enabled.
Once a site is fabric-enabled, devices provisioned to that site can be assigned a fabric
role. This creates the core infrastructure for the fabric overlay. Host onboarding is
completed afterwards to bind VNs and reserved IP address pools together, completing the
overlay configuration.
Assigning a device to a site causes Cisco DNA Center to push certain site-level network
settings configured in the Design application to the devices, whether or not they are used
as part of the SD-Access fabric overlay. Specifically, the NetFlow exporter, SNMP server
and traps, and syslog server information configured in the Design application are
provisioned on the devices.
After provisioning devices to a site, the remaining network settings from the Design
application are pushed down to the devices. These include time zone, NTP server, and
AAA configuration.
When devices are defined in fabric roles, the fabric overlay configuration is pushed to the
devices.
The above workflows are described as separate actions as they support different solution
workflows. The first is required for the Assurance solution, and all three workflows are
required for the SD-Access solution.
Step 159 From the DNA Center home page, click Provision > Inventory to enter the
Provision Application.
Step 160 The Provision Application will open to the Device Inventory page. Verify that the
current inventory shows six (6) devices excluding the Access Points (we will
provision APs later in the Wireless Migration Module).
The first step of the provisioning process begins by selecting devices and associating
(assigning) them to a site, building, or floor previously created with the Design Application.
Before devices can be provisioned, they must be discovered and added to Inventory. This
is why the Discovery tool and Design exercises were completed first. There is a distinct
order-of-operation in DNA Center workflows.
In this lab, all devices in Inventory will be assigned to a site (Step 1). After that, only some
devices will be provisioned to a site (Step 2). Among that second group, only certain
devices provisioned to a site will become part of the fabric and operate in a
fabric role. This is the level of granularity that DNA Center provides in orchestration and
automation. In this lab, all devices provisioned to the site will receive further
provisioning to operate in a fabric role.
Step 161 Select the EdgeNode1, FusionRouter, LabAccessSwitch, and TraditionalCore_1
devices. The check boxes change state and all selected devices are highlighted.
Step 164 Click the Choose a site button; a fly-out will open. Select site
Global/San Jose/Building-1/Floor-1
Step 165 Click Apply to All.
Step 166 Click Assign
Step 167 Verify that a Success notification appears indicating the selected devices
were added to the site.
Task 2: Assigning Shared Services – Provisioning Step 2
Now that all the selected devices have been assigned a site, the next step is to provision
them with the “Shared Services” (AAA, DHCP, DNS, NTP, etc.) which were set up in the
Design App. To do this, select the same devices again (EdgeNode1, FusionRouter,
LabAccessSwitch, TraditionalCore_1) and this time select Actions > Provision >
Provision Device.
Deployment Note: Devices that are provisioned to the site will now authenticate and authorize their console
and VTY lines against the configured AAA server.
Note and Reminder: During Step 1, DNA Center pushes the NetFlow exporter, SNMP server and traps, and
syslog network server information configured in the Design Application for a site to the devices assigned to the
site.
Step 168 Now select TraditionalCore_1 and the LabAccessSwitch, as both devices
are the same hardware platform (Catalyst 3850 in this lab setup).
Step 169 Click Actions
Step 170 Go to Provision and then click Provision Device.
Step 171 DNA Center opens to the Provision Devices page at the Assign Site step. Verify
the assigned site for the devices is Global/San Jose/Building-1/Floor-1. Click Next.
Step 172 DNA Center moves to the Advanced Configuration step. This section would list
available configuration templates had any been configured. As there are no
templates configured for these devices,
click Next.
Step 173 DNA Center moves to the Summary step. This page lists a summary of the selected
devices, their details, and which network settings will be provisioned to the devices.
Click Deploy.
Step 176 Now, repeat the same steps as above for the newly PnP-discovered
EdgeNode1 (hardware: Catalyst 9300).
Step 179 DNA Center opens to the Provision Devices page at Step Assign Site.
Assign site as Global/San Jose/Building-1/Floor-1
Click Next
Step 182 The DNA Scheduler appears.
The Scheduler allows configuration to be prepared in advance and then
actually provisioned during a change window. The scheduler can also be run
on-demand.
Step 184 Now, repeat the same steps as above for Fusion Router
Step 185 The final step for FusionRouter provisioning should be Success notifications
Step 186 Navigate to Global/San Jose/Building-1/Floor-1 and change the focus to
Provision.
Step 187 Verify the Provision Status of the above provisioned four (4) devices as
shown below:
The Fabric Overlay is the central component that defines SDA. In documentation, devices
that are supported for SDA are devices that are capable of operating in one of the Fabric
Overlay Roles – operating as a Fabric Node. From a functionality standpoint, this means
the device has the ability to run LISP and to encapsulate LISP data packets in the VXLAN
GPO format. When assigning devices to a fabric role (Border, Control Plane, or Edge),
DNA Center will provision a VRF-based LISP configuration on the device.
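The VRF-based LISP configuration DNA Center provisions is extensive; a heavily simplified sketch of its general shape on an IOS XE fabric node is shown below. The instance-id and VRF name are illustrative assumptions, not values from this lab, and the real provisioned configuration includes map-server/map-resolver, locator-set, and dynamic-EID sections omitted here:

```
! Heavily simplified, illustrative shape only - not the full
! configuration that Cisco DNA Center actually pushes.
router lisp
 instance-id 4099
  service ipv4
   eid-table vrf CAMPUS
   exit-service-ipv4
  exit-instance-id
exit-router-lisp
```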
Creating the Fabric Overlay is a multi-step workflow. Devices must be discovered, added
to Inventory, assigned to a Site, and provisioned to a Site before they can be added to the
Fabric. Each of Fabric Overlay steps are managed under the Fabric tab of the Provision
Application.
1. Identify and create Transits
2. Create Fabric Domain (or use Default)
3. Assign Fabric Role(s)
4. Set Up Host Onboarding
Identify and create Transits
With version 1.2.X, the concept of SD-Access Multisite was introduced. There is also an
obvious requirement of connecting the SD-Access fabric with the rest of the company. As
a result, the new workflow asks you to create a “Transit,” which will connect the fabric
to networks beyond its domain.
1. SDA Transit: To connect two or more SDA fabric domains with each other (requires
an end-to-end MTU of 9100)
2. IP Transit: To connect the SDA Fabric Domain to the Traditional network for a Layer
3 hand-off
In this lab, you will be configuring an IP transit to connect the CP-PN with the Fusion
Router to further integrate it with ACI Fabric.
After the transit creation, a fabric domain is created. A fabric domain is a logical
organizational construct in Cisco DNA Center. It contains fabric-enabled sites and their
associated transits. Sites are added to the fabric domain one at a time. During fabric role
provisioning, a transit is associated with a border node, joining the transit and site together
under the domain.
With the domain created and sites added to it, devices in each site are then added to
fabric roles. Border node provisioning is addressed separately in the document due to the
different types (Internal, External, and Anywhere) and the separate transit options. Once
the SD-Access overlay infrastructure (fabric devices) is provisioned, host onboarding and
port assignment are completed allowing endpoint and Access Point connectivity.
Create IP-Based Transits – Fabric Provisioning Part I
Step 188 In the Cisco DNA Center dashboard, navigate to Provision> Fabric.
Step 189 At the top right, click + Add Fabric or Transit Peer/ Network
Step 190 Click Add Transit Peer/Network.
Step 192 Enter the following for the Transit/Peer Network
Transit/Peer Network Name: SDA_External
Transit/Peer Network Type: IP-Based
Routing Protocol: BGP – this is currently the only option
Autonomous System Number: 65001
A fabric domain is an administrative construct in Cisco DNA Center. It is a combination of
fabric sites along with their associated transits. Transits are bound to sites later in the
workflow when the fabric borders are provisioned. A fabric domain can include all sites in
a deployment or only certain sites depending on the physical and geographic topology of
the network and the transits. The prescriptive deployment includes a single fabric domain
that will encompass the buildings (sites) created during previous steps in the Design
application.
Step 195 In the Cisco DNA Center dashboard, navigate to PROVISION > Fabric.
Step 196 At the top right, click + Add Fabric or Transit Peer/ Network
Step 197 Click Fabric.
Step 198 Name the fabric domain SD-Access_Network and click Next.
Step 199 Ensure that you choose the site level as Floor-1 and click Next.
Step 200 Add the virtual networks to the fabric: select all the created/pre-defined VNs
and click Add.
Step 201 DNA Center will create the Fabric Domain.
Verify a Success notification appears indicating the Fabric Domain was
created.
A fabric overlay consists of three different fabric nodes: control plane node, border node,
and edge node.
To function, a fabric must have an edge node and a control plane node. This allows
endpoint traffic to traverse the overlay so endpoints can communicate with each other
(policy dependent). The border node allows communication from endpoints inside the
fabric to destinations outside of the fabric, along with the reverse flow from outside to
inside.
A border node can have a Layer-3 handoff, a Layer-2 handoff, or both (platform
dependent). A border node can be connected to an IP transit, to an SDA transit, or both
(platform dependent). A border node can provide connectivity to the Internet, connectivity
outside of the fabric site to other non-Internet locations, or both. It can operate strictly in
the border node role or can also operate as both a border node and control plane node.
Finally, border nodes can either be routers or switches which creates slight variations in
the provisioning configuration to support fabric DHCP and the Layer-3 handoff.
With the number of different automation options, it is important to understand the use case
and intent of each selection to have a successful deployment.
An Internal border is connected to the known routes in the deployment such as a Data
Center. As an Internal border, it will register these known routes with the site-local control
plane node which directly associates these prefixes with the fabric.
An External border is connected to unknown routes such as the Internet, WAN, or MAN. It
is the gateway of last resort for the local site’s fabric overlay. A border connected to an
SD-Access transit must always use the External border functionality. It may also use the
Anywhere border option as described below.
An Anywhere border is used when the network uses one set of devices to egress the site.
It is directly connected to both known and unknown routes. A border node connected to
an SD-Access transit may use this option if it is also connected to a fusion router to
provide access to shared services.
Border Nodes – Fabric Roles
When provisioning a border node, the device can also be provisioned as a control plane
node. Alternatively, the control plane node role and border node role can be independent
devices. While the resulting configuration on the devices is different based on these
selections, Cisco DNA Center abstracts the complexity, understands the user’s intent, and
provisions the appropriate resulting configuration to create the fabric overlay.
In the GUI, Cisco DNA Center refers to these provisioning options as Add as CP, Add as
Border, and Add as CP+Border.
Adding Devices to the Fabric
Step 203 From the Secure CRT, Open the console session for TraditionalCore_1 using
the below
Credentials
Username: cisco
Password: cisco
Enable Password: cisco
Step 204 Change the TraditionalCore_1 hostname to CP-BN_L2
configure terminal
hostname CP-BN_L2
To ensure proper operation later in the lab, we will now need to change the
role of the CP-BN from DISTRIBUTION to BORDER ROUTER in DNAC.
Step 205 In the Cisco DNA Center dashboard, navigate to PROVISION > NETWORK
DEVICES > Inventory.
Step 206 Change Traditional Core_1(CP-BN) Device Role from DISTRIBUTION to BORDER
ROUTER, by clicking on it and selecting BORDER ROUTER.
We will be configuring this device as a co-located control plane and border node;
hence, start by enabling the slider next to Control Node.
Note: You might still see the hostname as Traditional_Core_1, that is okay as the
DNAC resync interval is 24 minutes
This shall open a new fly-out window to the right with additional settings to be completed
for the fabric border role.
Step 209 We will be adding the transit that we created earlier. Add the details as
follows:
Option                           Value
Enable Layer-3 Handoff           Checked
Local Autonomous Number          65000
Default to all Virtual Networks  Checked
Do not import external routes    Un-checked
Transit/Peer Site                IP:SDA_External
Select IP Address Pool           FusionRouter_F1 (192.168.170.0/24)
Step 210 Click Add External Interface under the SDA_External transit.
Step 211 Select the external interface to be TenGigabitEthernet1/0/2.
Slide the switches to enable Layer-3 Handoff for the CAMPUS and INFRA_VN
virtual networks.
Note: The INFRA_VN is described in the next process. It is associated with the global routing table – it is
not a VRF definition – and is used by Access Points and Extended Nodes. If these devices require DHCP,
DNS, and other shared services, the INFRA_VN should be selected under Virtual Network.
Step 212 All the fields are now populated; click Add to complete the border
configuration.
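Behind the scenes, the Layer-3 handoff automation carves small subnets out of the selected handoff pool and builds a per-VN eBGP peering toward the fusion router over the external interface. A heavily simplified sketch of the resulting border configuration — the VLAN ID and the /30 addressing below are illustrative assumptions carved from the 192.168.170.0/24 pool, not the values DNA Center will actually pick:

```
! Illustrative only - VLAN ID and addresses are hypothetical.
interface Vlan3001
 description Layer-3 handoff to fusion router for VRF CAMPUS
 vrf forwarding CAMPUS
 ip address 192.168.170.1 255.255.255.252
!
router bgp 65000
 address-family ipv4 vrf CAMPUS
  neighbor 192.168.170.2 remote-as 65001
  neighbor 192.168.170.2 activate
```

A similar VLAN and BGP neighbor is generated for each virtual network (including INFRA_VN) selected for the handoff.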
Step 213 Click Add to complete the transit addition.
Step 214 Observe that the device has a blue outline, but it has not yet been deployed.
You need to click Deploy at the end.
Step 215 This shall open a new fly-out window to the right. You need to click
Apply at the end.
Step 216 It shall show that fabric device provisioning has initiated; after the
requisite configurations have been pushed, it shall show that the device has
been updated in the fabric domain successfully.
Step 218 Slide the button next to the Edge Node and click Add.
Step 219 Observe that the device has a blue outline, but it has not yet been deployed.
You need to click Deploy at the end.
Host onboarding is the culmination of all the previous steps. It binds the reserved IP
address pools from the Design application with the VN configured in the Policy application,
and provisions the remainder of the fabric configuration down to the devices operating in a
fabric role.
Host onboarding allows attachment of endpoints to the fabric nodes. The host onboarding
workflow allows you to authenticate and classify an endpoint to a scalable group tag and
associate it with a virtual network and IP pool. Host onboarding is comprised of four
distinct steps, all located under the Provision > Fabric > Host Onboarding tab for a fabric site.
Assign Authentication Template and Create Wired Host Pool - Host Onboarding
Part 1
The first step is to select the authentication template. These templates are predefined in
Cisco DNA Center and are pushed down to all devices that are operating as edge nodes
within a site. It is mandatory to complete this step first – an authentication template must
be defined before host pool creation.
These templates are based on the AAA Phased Deployment Implementation Strategy of
High Security mode, Low Impact mode, Monitor mode, and No Authentication mode.
When a host pool is created, a subnet in the form of a reserved IP address pool is bound to
a VN. From the perspective of device configuration, Cisco DNA Center creates the VRF
definition on the fabric nodes, creates an SVI or loopback interface (on switches and
routers, respectively), defines these interfaces to forward for the VRF, and gives it the IP
address defined as the gateway for the reserved pool.
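As a sketch of what this provisioning looks like on a fabric edge switch — the VLAN number (1021) and gateway address follow the values used elsewhere in this guide, and the LISP and SGT portions of the pushed configuration are omitted for brevity:

```
! VRF created for the CAMPUS virtual network
vrf definition CAMPUS
 address-family ipv4
 exit-address-family
!
! Anycast gateway SVI for the reserved Production_User_F1 pool (172.16.101.0/24)
interface Vlan1021
 vrf forwarding CAMPUS
 ip address 172.16.101.1 255.255.255.0
```

Every edge node in the fabric site receives the same gateway address, which is what makes it an anycast gateway.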
Step 220 Navigate to Provision > Fabric > SD-Access_Network > San Jose > Building-1 >
Floor-1
Choose the Host Onboarding tab
The second step (of Host Onboarding) is to bind the IP Address Pools to the Virtual
Networks (VNs). At that point, these bound components are referred to as Host Pools.
Multiple IP address pools can be associated with the same VN. However, an IP Address
Pool should not be associated with multiple VNs. Doing so would allow communication
between the VNs and break the first line of segmentation in SDA.
The second step (of Host Onboarding) has a multi-step workflow that must be completed
for each VN.
1. Select the Virtual Network
2. Select the desired Pool(s)
3. Select the traffic type
4. Enable Layer-2 Extension (optional)
Step 223 If not already there, navigate to Provision > Fabric > SD-Access_Network >
San Jose > Building-1 > Floor-1
Step 225 The IP Address pools created during the Design Application exercises are
displayed.
Step 226 Select the IP Pool as Production_User_F1(172.16.101.0/24)
Step 227 Edit the VLAN Name field and name it CAMPUS_VLAN
Step 228 From the Choose Traffic dialog box, select Data.
Step 229 Check the Wireless Pool checkbox as well
Step 230 Click Add
Step 231 Verify the options match as below, and press Deploy.
Step 232 We will now associate the Wireless User host pool with the CAMPUS VN.
Click Add.
Step 233 Select the IP Pool as Wireless_User_F1(172.16.103.0/24)
Step 234 Edit the VLAN Name field and name it FACULTY_VLAN
Step 235 From the Choose Traffic dialog box, select Data.
Step 236 Check the Wireless Pool checkbox
Step 237 Click Add
Step 239 Verify that a success notification appears, as shown below:
Create INFRA_VN Host Pool – Host Onboarding Part 2
Access Points are a special case in the fabric. They are connected to edge nodes like an
endpoint, although they are actually part of the fabric infrastructure. Because of this, their
traffic pattern is unique. Access Points receive a DHCP address via the overlay network
and associate with the WLC via the underlay network. Once associated with the WLC, they
are registered with Cisco DNA Center by the WLC through the overlay network. To
accommodate this traffic flow, the Access Point subnet – which is in the Global Routing
Table (GRT) – is associated with the overlay network. Cisco DNA Center GUI calls this
special overlay network associated with the GRT the INFRA_VN.
INFRA_VN stands for Infrastructure Virtual Network and is intended to represent devices
that are part of the network infrastructure but associate and connect to the network in a
similar method to endpoints - directly connected to the downstream ports of an edge
node. Both Access Points and Extended Nodes (SD-Access Extension for IoT) are part of
the INFRA_VN.
Step 240 Now we will associate the Fabric_AP host pool with the INFRA_VN.
From Provision > Fabric > Host Onboarding for the SD-Access_Network >
San Jose > Building-1 > Floor-1, select INFRA_VN under Virtual Network.
The Edit Virtual Network dialog box appears for the INFRA_VN. Click Add
Step 245 Verify the details as shown below and click Deploy
We have now assigned IP pools to all the Virtual Networks.
Next, we will assign segments to the user-connected interfaces on EdgeNode1
and set them to perform 802.1x Closed Authentication.
Step 247 On the Host Onboarding page, go to Port Assignment, select interface
GigabitEthernet1/0/23, and click Assign
Step 249 Click Deploy at the end.
To authenticate the production wired user and onboard it into the SD-Access
fabric, we now need to migrate the existing Cisco ISE authorization policies into
SD-Access fabric-enabled policies.
Module 6: Migrating Cisco ISE AAA Policies for SD-Access
In this module, we will update the existing ISE AAA policies in order to
onboard SDA wired and wireless clients.
Step 250 On the browser, open the Cisco ISE GUI https://192.168.100.20 using
credentials as
below:
Username: admin
Password: CiscoDNA!
Step 251 Navigate to Administration > Identity Management > Groups > User Identity
Groups
Verify that the Student and Faculty user identity groups are already present
Step 252 Navigate to Administration > Identity Management > Identities > Users
Verify that the two users Emily and Fred are already created and belong to
the Student and Faculty groups, respectively.
Task 2: Define Wired Dot1X Authorization Profiles and Policies for SD-
Access
The authorization policy applies the Virtual Network and SGT assignment based on the
authorization conditions/attributes. These policies will be specific to the authenticating
user’s identity group. Post successful authentication, ISE will look at the User Identity
Group the authenticated user is part of and assign the Virtual Network and SGT accordingly
(e.g., for this lab, Student and Faculty). The authentication policy verifies that the
user authenticates via a dot1x connection. If the user's password is correct,
authentication succeeds, and the authorization policy is then matched based on its
conditions. In this lab, the authorization policy is based on the user's identity group.
Upon a successful match of the authorization policy, the user is placed into the
configured Virtual Network and assigned the configured SGT.
This section explains how to create an authorization result for each user Identity Group:
Student and Faculty. The authorization result is used in an authorization policy that informs
the edge switch which VN and SGT to apply to the successfully authenticated
endpoint/user.
Step 254 Click the add icon in the upper right-hand corner of the page.
Step 255 Enter a scalable group named Student with a tag of 26 and assign it to the
CAMPUS VN. Click Save when finished.
Allow this scalable group (SGT) to sync and replicate to the ISE database as well.
Step 256 Similarly, add a Faculty scalable group with a tag of 30 and assign it to
the CAMPUS VN
Step 257 Next, we must review the pre-existing AAA policies for the traditional users.
Open the ISE dashboard.
Navigate to Policy > Policy Sets and expand the Default Policy Set
Step 258 Expand the Authentication Policies and review the configuration for 802.1x
and MAB.
(This authentication policy will remain unaffected in SDA)
Step 259 Expand the Authorization Policies and review the existing onboarding policies
for Wired and Wireless Clients.
Tech Note: For brevity in the lab guide, we have associated the Student group with the wired policies
and the Faculty group with the wireless policies. However, in production it is recommended to have
a different set of wired and wireless authorization rules in the traditional network.
Step 260 In ISE, navigate to Policy > Policy Elements > Results > Authorization >
Authorization Profiles in the left-hand pane. Click Add
Step 263 At the bottom left of the page, click Submit
Similarly, create another Authorization Profile for Faculty, using the details below
Step 267 Go back to Policy > Policy Sets and expand the Default Policy Set
Access Authorization Policy
Step 268 Click the gear icon on the pre-existing Student authorization policy
STUDENT_172_16_101_0 and select Duplicate below
Step 269 Change the Name to SDA_STUDENT_172_16_101_0 and click Edit
Conditions
Step 270 As you can see, this rule matches the condition on Wired 802.1x and the
User Identity Group Student. We will now change this policy to accept
authorization requests from an SDA fabric edge.
Click New
Step 271 Select Click to add attribute, choose the Device icon, and then select
Device IP Address
Step 272 Enter the EdgeNode1 IP Address: 192.168.255.1 and Click Use at the
bottom right.
Tech Note: In a production setup, it is recommended to add the SDA fabric devices to a network
device group other than the default. That allows the policy to be configured once for all migrated
fabric edge devices.
Step 273 Change the Result profile to the newly created Campus_Student result
profile, and click Save in the bottom right corner
Step 274 Similarly, duplicate the FACULTY_172_16_103_0 authorization rule to create
an SDA_FACULTY_172_16_103_0 wireless authorization rule as shown below:
Step 275 Enter SDA_CAMPUS_USERS, click Use at the bottom, and then click Save.
You have successfully prepared the ISE Server for SDA Wired/Wireless Client Onboarding.
Your authorization policies should look like the following:
This completes Module 6.
Module 7: Connecting SDA fabric to External Routing Domains
The generic term fusion router comes from the MPLS world. The basic concept is that the
fusion router is usually aware of the prefixes available inside each VPN (VRF), either
because of static routing configuration or through route peering, and can therefore fuse
some of these routes together. A fusion router’s responsibilities are to route traffic using
separate VRFs and to route traffic to and from a VRF to a shared pool of resources such as
DHCP servers, DNS servers, and the WLC.
A fusion router has a number of support requirements. It must support:
1. Multiple VRFs
2. 802.1q tagging (VLAN Tagging)
3. Sub-interfaces (when using a router)
4. BGPv4, specifically the MP-BGP extensions
Deployment Note: While it is feasible to use a switch as a fusion router, switches add complexity,
as generally only the high-end chassis models support sub-interfaces. On a fixed-configuration
model such as a Catalyst 9300, an SVI must instead be created on the switch and added to the
VRF forwarding definition. This abstracts the logical concept of a VRF even further through
logical SVIs. A Layer-2 trunk is then used to connect to the border node, which itself is likely
configured for a Layer-3 handoff using a sub-interface. To reduce unnecessary complexity, an
Integrated Services Router (ISR) is used in the lab as the fusion router.
Tech Tip: Because the fusion router is outside the SDA fabric, it is not managed (for
automation) by DNA Center. Therefore, the configuration of a fusion router will always be manual.
Future releases and development may reduce or eliminate the need for a fusion router.
There are two options for reaching shared services. The first option is used when the
shared services routes are in the GRT. IP prefix lists are
used to match the shared services routes, route-maps reference the IP prefix lists, and the
VRF configurations reference the route-maps to ensure only the specifically matched
routes are leaked.
The second option is to place shared services in a dedicated VRF. With shared services in
a VRF and the fabric endpoints in other VRFs, route-targets are used to leak routes
between them. This lab uses the first option; however, production deployments may use
either one.
1. Create the Layer-3 connectivity between border nodes and fusion routers.
2. Use BGP to extend the VRFs from the border nodes and fusion routers.
3. Use route leaking or VRF leaking to share routes between the routing tables on the
fusion router.
4. Distribute the leaked routes back to the border nodes via BGP.
Critical Lab Guide Note: The configuration elements provisioned during your run-through are likely to be
different. Please be sure not to copy and paste from the Lab Guide unless instructed specifically to do so. Be
aware of what sub-interface is forwarding for which VRF and what IP address is assigned to that sub-
interface on your particular lab pod during your particular lab run-through. The fusion router’s configuration
is meant to be descriptive in nature, not prescriptive.
There are six possible orders in which DNA Center can provision the sub-interfaces and VRFs,
so the mapping in your pod may not match the one shown in this guide.
When following the instructions in the lab guide, DNA Center will provision three SVIs on
Control-BorderNode beginning with Vlan 3001 through Vlan 3003. These SVIs will be
assigned an IP address with a /30 subnet mask (255.255.255.252) and will always use the
lower number (the odd number address) of the two available addresses.
DNA Center will vary which SVI is forwarding for which VRF and the Global Routing Table
(GRT).
To understand which explanatory graphic and accompanying configuration text file to
follow, identify the order of the VRF/GRT that DNA Center has provisioned on the sub-
interfaces.
In the example above, Vlan3001 is forwarding for the CAMPUS VRF, and Vlan3002 is
forwarding for the GRT (also known as the INFRA_VN).
Note: Please be sure to use the appropriate VLAN and Sub-interfaces and do not directly
copy and paste from the lab guide unless instructed directly and specifically to do so.
TechNote: During the Layer-3 border handoff automation, Cisco DNA Center uses VLSM on the defined IP address
pool to create multiple /30 subnets. Each subnet is associated with a VLAN beginning at 3001. Cisco DNA Center
does not currently support the reuse of VLANs when a device is provisioned and un-provisioned. The VLAN
number will continue to advance as demonstrated in the screen captures.
The first task is to allow IP connectivity from the Control-BorderNode (CP-BN) to
FusionRouter. This must be done for each Virtual Network that requires connectivity to
shared services. DNA Center has automatically configured the Control-Border Node (CP-
BN) in previous provision exercises.
Using this information, a list of interfaces and IP addresses can be planned on the
FusionRouter.
Note: Please be sure to use the appropriate VLAN and Sub-interfaces and do not directly
copy and paste from the lab guide unless instructed directly and specifically to do so.
Let's also verify the interfaces on the CP-BN_L2 (previously named TraditionalCore_1).
However, to configure an interface to forward for a VRF instance, the VRF must
first be created. Before creating the VRFs on FusionRouter, it is important to understand
the configuration elements of a VRF definition. The most important portions of a VRF
configuration – other than the case-sensitive name – are the route-target (RT) and the
route-distinguisher (RD).
A route distinguisher makes an IPv4 prefix globally unique. It distinguishes one set of
routes (in a VRF) from another. This is particularly critical when different VRFs contain
overlapping IP space. A route distinguisher is an eight-octet/eight-byte (64-bit) field that is
prepended to a four-octet/four-byte (32-bit) IPv4 prefix. Together, these twelve
octets/twelve bytes (96 bits) create the VPNv4 address. Additional information can be
found in RFC 4364. There are technically three supported formats for the route
distinguisher, although they are primarily cosmetic in difference. The distinctions are
beyond the scope of this guide.
Route targets, in contrast, are used to share routes among VRFs. While the structure is
similar to the route distinguisher, a route target is actually a BGP Extended-Community
Attribute. The route target defines which routes are imported into and exported from the VRFs.
Many times, for ease of administration, the route-target and route-distinguisher are
configured as the same number, although this is not a requirement. It is simply a
configuration convention that reduces an administrative burden and provides greater
simplicity. This convention is used in the configurations provisioned by DNA Center. The
RD and RT will also match the LISP Instance-ID.
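For example, a VRF definition of the kind DNA Center provisions looks like the following. The Instance-ID value 4099 is illustrative only; use the RD and RT values actually shown on your CP-BN_L2:

```
vrf definition CAMPUS
 rd 1:4099
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
 exit-address-family
```

Because the export RT on one VRF matches the import RT on its peer, the same prefixes are recognized on both ends of the handoff.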
Open the terminal application SecureCRT and open the console sessions for CP-
BN_L2 (previously named TraditionalCore_1) and FusionRouter
Username: cisco
Password: cisco
Enable Password: cisco
Every time DNAC pushes configuration to the device or runs a sync with it, a
console message should indicate that DNA Center logged into the VTY lines of
the device using the Operator user.
Step 277 Display and then copy the DNA Center provisioned VRFs, RTs, and RDs
shown on CP-BN_L2
Note: The management VRF (Mgmt-intf) is not part of the route-leaking process. It
can be ignored for this exercise.
Step 278 On the console of FusionRouter, paste the VRF configuration that was copied from
the CP-BN_L2 node.
Copying and pasting is required: the RDs and RTs must match exactly.
Enter configure terminal, then paste the configuration that was copied
exit-address-family
end
Step 279 Create the Layer-3 sub-interface that will be used for CAMPUS VRF
Use the following information:
Description: Fusion to BorderNode for VRF CAMPUS
VLAN: 3001
VRF Instance: CAMPUS
IP Address: 192.168.170.6/30
configure terminal
interface GigabitEthernet0/0/0.3001
 description Fusion to BorderNode for VRF CAMPUS
 encapsulation dot1Q 3001
 vrf forwarding CAMPUS
 ip address 192.168.170.6 255.255.255.252
interface GigabitEthernet0/0/0.3002
Step 290 Ping the CP-BN_L2 from the FusionRouter using the Global routing table and a
sub-interface.
exit
ping 192.168.170.5
Step 291 Ping the CP-BN_L2 from the FusionRouter using vrf CAMPUS
ping vrf CAMPUS 192.168.170.1
Task 2: Extending the VRFs to the Fusion Router
BGP is used to extend the VRFs to the Fusion router. As with the sub-interface
configuration, DNA Center has fully automated CP-BN_L2's BGP configuration.
Note: The BGP Adjacencies created between a border node and fusion router use the IPv4 Address
Family (not the VPNv4 Address family). Note, however, the adjacencies will be formed over a VRF
session.
Step 292 Create the BGP process on Fusion Router. Use the corresponding Autonomous-
System number automated by DNA Center on the CP-BN_L2.
configure terminal
router bgp 65001
Step 293 Define the neighbor and its corresponding AS Number. This neighbor should use
the IP address associated with the GRT sub-interface.
Step 295 Activate the exchange of NLRI with the Control-BorderNode (CP-BN-L2)
address-family ipv4
neighbor 192.168.170.5 activate
exit-address-family
Step 299 Define the neighbor and its corresponding AS Number. This neighbor should use
the IP address associated with the CAMPUS sub-interface.
Step 301 Activate the exchange of NLRI with the CP_BN-L2 for vrf CAMPUS.
exit-address-family
Step 303 Exit configuration mode
end
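Taken together, Steps 292 through 303 yield a FusionRouter BGP configuration along these lines. The neighbor addresses follow this guide's later ping targets and will differ per pod, and <CP-BN-AS> is a placeholder for the AS number DNA Center automated on CP-BN_L2:

```
router bgp 65001
 ! GRT peering to CP-BN_L2 over the global sub-interface
 neighbor 192.168.170.5 remote-as <CP-BN-AS>
 address-family ipv4
  neighbor 192.168.170.5 activate
 exit-address-family
 !
 ! Peering to CP-BN_L2 inside vrf CAMPUS over the CAMPUS sub-interface
 address-family ipv4 vrf CAMPUS
  neighbor 192.168.170.1 remote-as <CP-BN-AS>
  neighbor 192.168.170.1 activate
 exit-address-family
```

Note that one IPv4 address-family stanza exists per VRF, matching the note above that the adjacencies use the IPv4 (not VPNv4) address family.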
Step 304 Verify the BGP neighborship between the Border and FusionRouter.
show ip bgp ipv4 unicast summary
FusionRouter has routes to the SDA Prefixes learned from Control-Border Node (CP-BN-
L2). It also has routes to its directly connected subnets where the DHCP/DNS servers and
WLC reside. Now that all these routes are in the routing tables on FusionRouter, they can
be used for fusing the routes (route leaking).
Route-maps are used to specify which routes are leaked between the Virtual Networks.
These route-maps need to match very specific prefixes. This is best accomplished by
first defining a prefix-list and then referencing that prefix-list in a route-map.
Prefix-lists are similar to ACLs in that they are used to match something. Prefix-lists
can be configured to match an exact prefix length, a prefix range, or a specific prefix.
Once configured, the prefix-list can be referenced in the route-map. Together, prefix-lists
and route-maps provide the granularity necessary to ensure only the correct NLRI are
advertised to the Control-Border Node (CP-BN).
Note: The following prefix-lists and route-maps can be safely copied and pasted.
Step 305 On FusionRouter, configure a prefix-list that matches the /24 CAMPUS VRF
subnets. Name the prefix-list CAMPUS_VRF_NETWORK.
configure terminal
ip prefix-list CAMPUS_VRF_NETWORK seq 5 permit 172.16.101.0/24
ip prefix-list CAMPUS_VRF_NETWORK seq 10 permit 172.16.103.0/24
ip prefix-list CAMPUS_VRF_NETWORK seq 15 permit 172.16.201.0/24
Step 306 On FusionRouter, configure a prefix-list that matches the Shared services subnets.
Name the prefix list SHARED_SERVICES
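The exact shared services subnets come from your pod's output. As an illustration only — assuming the DHCP/DNS/ISE servers live in 192.168.100.0/24 and the WLC network in 192.168.50.0/24 — the prefix-list would look like:

```
configure terminal
ip prefix-list SHARED_SERVICES seq 5 permit 192.168.100.0/24
ip prefix-list SHARED_SERVICES seq 10 permit 192.168.50.0/24
```

Use the subnets shown in your pod's lab screenshot rather than these example values.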
Step 307 Route-maps can now be configured to match the specific prefixes referenced in the
prefix list.
Configure a route-map to match the CAMPUS_VRF_NETWORK prefix list.
Name the route-map CAMPUS_VRF_NETWORK.
configure terminal
route-map CAMPUS_VRF_NETWORK permit 10
match ip address prefix-list CAMPUS_VRF_NETWORK
exit
Step 308 Route-maps can now be configured to match the specific prefixes referenced in the
prefix lists.
Configure a route-map to match the SHARED_SERVICES prefix list.
Name the route-map SHARED_SERVICES
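Mirroring the CAMPUS_VRF_NETWORK route-map from Step 307, the SHARED_SERVICES route-map simply references its own prefix-list:

```
configure terminal
route-map SHARED_SERVICES permit 10
 match ip address prefix-list SHARED_SERVICES
exit
```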
Task 4: Use VRF Leaking to Share Routes and Advertise to Border Node
About Route Leaking
Route leaking is done by importing and exporting route-maps under the VRF configuration.
Each VRF should export the prefixes belonging to it using a route-map. The VRF should
also import the routes needed for access to shared services using a route-map.
Using the route-map SHARED_SERVICES with the import command permits the shared
services routes to be leaked into the CAMPUS VRF. This allows the end hosts in the fabric
to communicate with the DHCP/DNS/ISE servers and the WLC, but does not allow
inter-VRF communication.
Using the route-target import command will allow for inter-VRF communication. Inter-VRF
communication is beyond the scope of this lab guide, and uncommon in campus
production networks.
If route-maps are not used to specify a particular set of prefixes, VRF leaking can be performed by
importing and exporting route-targets. Used this way, route-targets export all routes from one
VRF instance and import all routes from another. This is less granular and more often used in
MPLS. Multiple route-target import and export commands can be applied, as they are used
without any filtering mechanism – such as a route-map.
Step 309 On FusionRouter, configure the CAMPUS VRF to import the shared services and
traditional network subnets using the route-maps created earlier.
configure terminal
vrf definition CAMPUS
address-family ipv4
import ipv4 unicast map SHARED_SERVICES
export ipv4 unicast map CAMPUS_VRF_NETWORK
exit-address-family
end
This ensures that the shared services network is now reachable in the CAMPUS VRF on
the SDA border.
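A quick check on FusionRouter confirms the leak: the shared services prefixes should now appear in the CAMPUS routing table (the exact prefixes depend on your pod):

```
show ip route vrf CAMPUS
```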
The routing protocol between the fabric and the shared services is OSPF. This implies
that communication from the SDA users and access points to the DHCP server in the
shared services network will not be possible unless selected routes are redistributed
between the BGP and OSPF instances running on the FusionRouter.
We have already imported the SHARED_SERVICES routes into the VRF on the FusionRouter,
so there is no need to redistribute these routes into BGP.
Note that we will not be redistributing the SDA user subnet 172.16.101.0/24 into OSPF.
This is intentional as we will use that subnet to later test L2-Border migration Module.
Step 310 Configure the prefix list for the fabric APs
configure terminal
ip prefix-list FABRIC_ACCESS_POINTS seq 5 permit 172.16.50.0/24
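The redistribution in the next step references a route-map named FABRIC_to_SHARED_SERVICES; a minimal definition that matches the FABRIC_ACCESS_POINTS prefix-list would be:

```
route-map FABRIC_to_SHARED_SERVICES permit 10
 match ip address prefix-list FABRIC_ACCESS_POINTS
exit
```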
Step 312 Add the below configuration to the OSPF Instance of the FusionRouter
router ospf 1
redistribute bgp 65001 route-map FABRIC_to_SHARED_SERVICES
end
Module 8: Incremental Migration: Routed Access with existing
subnets, existing switches.
At this point, our lab topology has been transitioned into a partially migrated SDA network,
where the left side of the network infrastructure is SD-Access and the right side is still a
traditional network.
In this module we will bridge the existing VLAN in the traditional network to the fabric VLAN
using the Layer-2 Border Handoff functionality.
Note: Due to lab constraints, the L3 Border and L2 Border are running on the same device.
L3 Control Plane Border Node and L2 Border Node should NOT be running on the same
device in a production environment. Cisco does not recommend or support this
deployment. This will be resolved in future SRE hands-on labs.
Step 314 To start, verify that VLAN 101 is configured on TraditionalCore_2
and that it is allowed on the trunk to the CP-BN_L2 node.
Step 315 Verify that the default gateway for the 172.16.101.0/24 subnet exists on
TraditionalCore_2 as interface Vlan101
Step 316 Also verify that the same default gateway exists on the CP-BN-L2 for the SDA
fabric users, as shown below:
Step 317 Now we will re-configure the internal L3 border (CP-BN_L2), add it as an
L2 border to the fabric, and bridge the traditional VLAN 101 with the fabric
VLAN 1021 (CAMPUS).
Before that, we need to remove the default gateway for VLAN 101 on
TraditionalCore_2, so that once the L2 border is configured, DNAC can
automatically configure the CAMPUS VN anycast gateway. The traditional
network users will then be able to reach the SVI on the L2 border and be
bridged to the SDA fabric VLAN.
configure terminal
no interface Vlan101
end
Confirm that Vlan 101 is still active
show vlan
Step 318 Log back in to Cisco DNA Center and deploy the L2 border
Navigate to Provision > Fabric > SD-Access_Network and Select Floor-1
Step 319 A fly-out for configuring CP-BN-L2 will appear; click Configure next to
Border
Step 320 Select the Layer 2 Handoff tab
Step 322 Enter the external interface TenGigabitEthernet1/0/3. Associate the fabric
user pool Production_User_F1 with the traditional user VLAN 101.
Click Save
Step 323 A warning message will appear indicating that an L2 border is not
recommended on a border device that is the default for all virtual networks.
Click Save and then Add (as we are configuring the L2 handoff on only the internal
border)
Step 325 Click Add
Step 326 Click Deploy
NOTE: The testing and Verification for this module will be addressed in Module 10: Host Onboarding and
Verification
Module 9: SDA Incremental Migration: Migrating to Fabric
Enabled Wireless
In this module, we will leverage Cisco DNA Center's ability to learn the
configuration from the traditional Wireless LAN Controller and provision the new
Catalyst 9800-CL as the fabric-enabled WLC.
Step 328 Return to DNA Center in the browser. Click on the Discovery tool from the
home page.
Step 329 This opens the Discovery dashboard, displaying various attributes associated
with the network discoveries run by DNAC, such as the inventory overview, the latest
discovery, and the ten most recent discoveries.
Step 330 This opens a New Discovery page. Enter the Discovery Name as Cat_9800-
CL
Step 331 Select the Range button, which now changes to
IP: 192.168.50.240 – 192.168.50.240
Step 332 Expand the Credentials section, and then click on Add Credentials.
The Add Credentials pane will slide in from the right side of the page.
Step 333 Use the information below to populate the applicable credentials:
NETCONF Port: 830
Step 334 Scroll down the page and click to open the Advanced section.
Click the SSH protocol and ensure it has a blue check mark.
Click Discover, and from the scheduler fly-out, click Start.
Step 335 Verify that the device with the following IP address has been discovered
192.168.50.240 – C9800 LAN Controller
Step 336 Click the logo to return to the DNA Center dashboard.
Task 2: Learning Configuration from Traditional WLC 3504
Step 337 From the DNA Center home page, click Provision to enter the Provision
Application
Step 338 The Provision Application will open to the Inventory Page
Step 339 Select the WLC 3504
Step 340 Click Actions.
Step 341 Choose Provision and then Learn Device Config
Step 342 Click the Choose a site button; a fly-out will open.
Step 343 Now select the learned access points (if any).
Click Assign Site and choose the same site, Floor-1, as above
Note: Your actual lab environment may have a different number of access points.
Step 345 Confirm that there are no configuration conflicts and click Next
Step 346 Verify the device configuration that will be migrated from the WLC 3504 to the
new C9800-CL
Note: SSIDs will match the current pod; e.g., Pod03 will have SSID "CAMPUS_DEVICES-
Pod03"
Step 347 At this step, verify the Network Profiles configured on the traditional WLC 3504.
These will now be migrated to the new C9800-CL. Click Save
Step 348 Confirm the Brownfield Learn configuration was successful
Step 350 Edit the CAMPUS_USERS SSID to make it fabric enabled.
Step 351 We will now change the wireless profile to re-associate the SSID from the
brownfield profile to a fabric-enabled wireless profile
Click Next
Uncheck the selected brownfield profile and click Add
Step 353 Verify the SDA_Wireless_Profile and click Finish.
Step 354 Similarly, Edit the CAMPUS_DEVICES SSID and associate it with the same
SDA_Wireless_Profile
Task 4: Provisioning the Cat-9800-CL WLC
Step 356 Confirm the successful C9800-CL provisioning
Step 358 On the main dashboard, Navigate to Configuration> Tags and Profiles >
WLANs
Step 359 Notice that both WLANs on the Cat-9800 WLC are in the Down state
Step 360 To enable them, we must first add the Cat 9800-CL to the fabric and associate
the IP pools with the wireless SSIDs.
Go back to the DNAC GUI and navigate to Provision > Fabric > SD-
Access_Network > Fabric > Floor-1, Fabric Infrastructure tab
Select C9800-CL
Click Apply in the fly-out window
Step 362 Click Deploy and confirm that the C9800-CL icon turns solid blue
Step 363 Next, we will associate the IP Pools with the Wireless SSIDs under the Host
Onboarding > Wireless SSIDs tab.
Select FACULTY_VLAN address pool for SSID CAMPUS_USERS-Pod(xx)
Select CAMPUS_VLAN address pool for SSID CAMPUS_DEVICES-Pod(xx)
(your pod number will be displayed).
Step 364 Once completed, we can navigate back to the Cat9800 GUI to confirm that
the SSID
CAMPUS_USERS-Pod(XX) is now enabled.
Step 365 For the access points to discover the WLC, Option 43 must be added to the
DHCP server, pointing to the management IP address of the C9800 WLC (192.168.50.240).
Note: This step has already been configured
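For reference, Cisco lightweight APs read Option 43 as a TLV: type f1, a length of 04 × (number of WLC addresses), then each WLC IP in hex. For 192.168.50.240 this works out to f104.c0a8.32f0, so the pre-configured DHCP scope would contain something like the following (the pool name is illustrative):

```
ip dhcp pool FABRIC_AP
 option 43 hex f104.c0a8.32f0
```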
Step 366 Verify that the ports on EdgeNode1 are configured as access point ports.
Go back to DNAC and navigate to Provision > Fabric > SD_Access-Network
> Floor-1
On the Host Onboarding tab, go to Port Assignment and verify Gig1/0/1 and Gig1/0/2.
If not already done, assign them as access point ports using the Fabric_AP_F1 pool
and No Authentication.
Step 367 On the Host Onboarding page, go to Port Assignment, select interfaces
GigabitEthernet1/0/1 and 1/0/2, and click Assign
Then click Update
Step 369 The access points should appear in the DNAC inventory within a few minutes,
once they join the controller with their newly assigned IP addresses.
Navigate to Provision>Network Devices>Inventory>Floor-1
Step 370 Once the APs are reachable as shown above, provision the discovered
access points as below. You may need to go to the Global level and assign the two
access points to Floor-1 for them to be visible.
Note: You might need to resync the 9800-CL controller once for the APs to be shown in the
DNAC GUI, rather than waiting for the default resync from the DNAC.
Step 371 Make sure they are assigned to the site: Global/SanJose/Building-1/Floor-1
Step 374 Click OK on the warning that APs will reboot on provisioning. This will take
several minutes. Continue to refresh the Cat-9800-WLC GUI to check status.
Step 375 Confirm the access point provisioning via the success message.
Step 376 Now let’s verify Fabric enabled APs on the C9800-CL WLC
Open the Cat-9800-WLC GUI https://192.168.100.30 using Credentials
Username: Operator
Password: CiscoDNA!
Step 378 You can also verify the Fabric Status and the WLC control plane on C9800-
CL
Go to: Configuration > Wireless > Fabric
On the Control Plane tab, we should see the CP-BN-L2 (192.168.255.3) as the control plane
for the fabric-enabled C9800-CL WLC.
Now that we have migrated the profiles to the SDA fabric C9800, we can disable
the wireless LANs on the WLC-3504.
Step 379 Open the web browser and log into 3504-WLC GUI https://192.168.51.240
using Credentials
Username: Operator
Password: CiscoDNA!
Step 380 On the dashboard, click on Wireless Networks
Step 381 In the left pane, select WLANs under the WLANs dropdown.
Step 382 Select the Enterprise and Enterprise_mab profiles by checking their boxes. From
the dropdown box at the top of the page, select Disable Selected, then click Go
Step 383 A confirmation will appear at the top of the page; select OK
Note that both profiles now show Disabled under Admin Status
Module 10: Host Onboarding and Verification
In this module, we will onboard the SDA wired and wireless hosts. We will also
confirm that the SDA wired and wireless hosts can successfully communicate with the
traditional users over both the L3 and L2 borders.
Wired User
Let’s connect to the wired user and make sure it is onboarded into the fabric. You
will see this authentication occur through ISE.
Step 384 From the Chrome browser on your jump host, launch the Guacamole client
and Access PC01-Wired (PC VMs in upper left next to ISE)
Note: you will be auto-logged in to the Windows machine with the local credentials
admin:CiscoDNA!
Confirm that the user Emily can authenticate and ping the default gateway hosted on
the fabric edge (the Anycast Gateway).
Step 385 Click the Windows icon, type cmd, and press Enter to open a
Command Prompt. Enter the following commands:
ipconfig /all
ping 172.16.101.1
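If onboarding succeeded, the output should look broadly like the excerpt below. This is illustrative only; the exact address, lease details, and ping times are lab- and pod-specific. The key items are an IPv4 address from the 172.16.101.0/24 pool and a default gateway of 172.16.101.1:

   IPv4 Address. . . . . . . . . . . : 172.16.101.50
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 172.16.101.1

Reply from 172.16.101.1: bytes=32 time<1ms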
Step 386 From EdgeNode1 you can confirm the dot1x authentication. Log in to
EdgeNode1 with the Operator/CiscoDNA! credentials and run the following command:
show authentication sessions interface g1/0/23 details
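A successfully authorized session will include fields similar to the illustrative excerpt below; values such as the MAC address, session ID, and any SGT assignment are lab-specific:

          Interface:  GigabitEthernet1/0/23
          User-Name:  Emily
             Status:  Authorized
             Domain:  DATA
  Method status list:
     Method           State
     dot1x            Authc Success

The Status of Authorized with the dot1x method in the Authc Success state confirms the wired user was authenticated through ISE.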
Onboarding Wireless User
Let’s connect to the wireless user and make sure it is onboarded into the
fabric after authenticating through ISE.
Step 387 From the Chrome browser on your jump host, launch the Guacamole client
and Access PC(XX)-Wireless
Step 388 You should be able to see the fabric-enabled SSID “CAMPUS_USERS-PodXX”
(XX = your pod number)
Login credentials are:
Username: Fred
Password: CiscoDNA!
Step 390 From the Chrome browser on your jump host, launch the Guacamole client
and access PC02-Wired. From a Command Prompt, use the ping command to
verify that the traditional user (VLAN 101, IP address 172.16.101.10) has
reachability to the SDA fabric user (172.16.101.50):
ping 172.16.101.50
Step 391 Now let’s test in the other direction. Launch the Guacamole client and
access PC01-Wired. From a Command Prompt, use the ping command to verify
that the SDA fabric user (IP address 172.16.101.50) has reachability to the
traditional user (172.16.101.10):
ping 172.16.101.10
This confirms the successful configuration of the SDA L2 border, with SDA VLAN 1021
(CAMPUS VRF) bridging to the traditional user VLAN 101.
Module 11: Migrating the last network segment into SD-
Access
In this module, we will migrate the last traditional network segment into SD-Access by
transitioning Traditional_Core2 into an SDA colocated control/border node (external
border) for control-plane redundancy, and discovering TraditionalAccess_2 as EdgeNode2.
Before we provision TraditionalCore_2, we need to remove interface VLAN 103 from
it.
Step 392 From SecureCRT, console into TraditionalCore_2 and remove interface VLAN
103:
configure terminal
no interface vlan 103
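After removing the SVI, you can optionally confirm it is gone before re-syncing. The filter below is just an illustrative check; if the command returns no output, the interface has been removed:

end
show ip interface brief | include Vlan103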
NOTE: In a production environment, you should remove any existing configuration and
start with a fresh configuration containing just the underlay connectivity. Because this
is a lab environment, this switch is already configured with only the underlay in
place.
Before we proceed, we want to resync Traditional_Core_2 with DNA Center so that
DNA Center picks up the changes we just made.
Step 393 In DNA Center go to Provision > Inventory. Select Traditional_Core_2 and
under Actions > Inventory > Resync Device.
Step 394 Make sure the device has the role of BORDER-ROUTER instead of ACCESS,
before proceeding further.
Step 395 Now we need to provision and assign Traditional_Core2. Select
Traditional_Core_2 and under Actions > Provision > Provision Device.
Step 396 Proceed through the provision process and Deploy. This must be done or the
next step will fail.
Step 397 Go back to Provision > Fabric > SD-Access_Network, click on Floor-1, and
select the Fabric Infrastructure tab.
Step 398 Click on TraditionalCore_2 to assign the site and reprovision.
Step 399 This will open a new fly-out window to the right with additional settings for
the fabric border role. Select Layer 3 Handoff.
We will add the transit that we created earlier. Enter the details as follows:
Option Value
Local Autonomous Number: 65000
Select IP Address Pool: FusionRouter_F1 (192.168.170.0/24)
Transits: IP:SDA_External
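For context, the Layer 3 handoff that DNA Center automates results in a VRF-lite eBGP peering from this border to the fusion router, conceptually along the lines of the sketch below. This is an illustrative outline only, not the exact generated configuration: the peer addresses are carved from the FusionRouter_F1 192.168.170.0/24 pool, and the handoff interfaces and the fusion router's AS number are determined outside this table.

router bgp 65000
 ! one IPv4 address-family per fabric VN (VRF), e.g. CAMPUS
 address-family ipv4 vrf CAMPUS
  ! eBGP peer on the fusion router, reached over the handoff link
  neighbor <peer-ip> remote-as <fusion-as>
  neighbor <peer-ip> activate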
Step 400 Click the dropdown for the SDA_External transit and click on Add Interface.
Select the External interface to be TenGigabitEthernet1/0/2
Step 401 With all the fields now populated, click Save, then Add, and then Add again
to complete the border configuration.
Step 402 Observe that the device has a blue outline but has not yet been deployed.
Click Deploy.
Step 403 A fly-out window will appear to the right. Verify that it is set to deploy Now
and click Apply.
Step 404 The fabric device provisioning is initiated; after it has pushed the
requisite configurations, a message indicates that the device has been updated to the
fabric domain successfully.
Step 408 The next step is to discover EdgeNode2 into the DNAC inventory and add it
to the fabric. In DNAC, under Tools > Discovery, open a new Discovery
Step 409 Click on Add Discovery and populate the details below:
Discovery Name:
IP Address Range: 192.168.255.2 – 192.168.255.2
Preferred Management IP Address: Use Loopback
CLI Credentials: Devices
Advanced Settings: SSH
Step 410 You should see the discovery succeed, after which EdgeNode2 will have been
discovered.
Step 411 Now navigate to Provision > Network Devices > Inventory and provision
EdgeNode2
Step 413 Now navigate to Provision > Fabric > SD-Access_Network > Floor-1 and click
EdgeNode2 to add it to the SDA fabric
Step 414 Enable it as an Edge Node and click Add
We have now successfully migrated the traditional network into an SD-Access network.
This completes Module 11.