
1

2

3
Network engineers fully understand the layering concepts of the OSI Reference
Model. While ACI presents a new paradigm, it is still possible to map the constructs
of the ACI logical model to the 7-layer OSI Reference Model. Cisco ACI is not
breaking the OSI model; rather, it is fixing it.

4
This is an overview of the initial configuration steps for an ACI fabric after fabric
discovery (previously discussed). Creating tenants, private networks, bridge
domains, etc., will be covered in later sections.
Various tasks are performed by different administrative roles. Fabric-wide or tenant
administrators create predefined policies that contain application or shared resource
requirements. These policies automate the provisioning of applications, network-
attached services, security policies, and tenant subnets, which puts administrators in
the position of approaching the resource pool in terms of applications rather than
infrastructure building blocks. The application needs to drive the networking behavior,
not the other way around.
A tenant is a container for policies that enable an administrator to exercise domain-
based access control. The fabric-wide administrator must create both the user tenant
containers and the tenant administrators.
The system provides the following four kinds of tenants:
•  User tenants are defined by the fabric administrator according to the needs of
users. They contain policies that govern the operation of resources such as
applications, databases, web servers, network-attached storage, virtual machines,
and so on.
•  The common tenant is provided by the system but can be configured by the
fabric administrator. It contains policies that govern the operation of resources
accessible to all tenants, such as firewalls, load balancers, Layer 4 to Layer 7
services, intrusion detection appliances, and so on.
•  The infrastructure tenant is provided by the system but can be configured by
the fabric administrator. It contains policies that govern the operation of
infrastructure resources such as the fabric VXLAN overlay. It also enables a
fabric provider to selectively deploy resources to one or more user tenants.
•  The management tenant is provided by the system but can be configured by the
fabric administrator. It contains policies that govern the operation of fabric
management functions used for in-band and out-of-band configuration of fabric
nodes.

5

6
An out-of-band (OOB) management network is necessary for providing access to
the network infrastructure when the primary links to the devices are down. In the
data center, OOB access is generally provided via a dedicated OOB management LAN.

Like other Cisco Nexus devices, ACI elements provide a separate management port
on each device to connect to the OOB management LAN.

7
Cisco recommends designating individual ranges that are for a “single” address.
Inband refers to management access performed through one or more leaf front-
panel ports.
You can also assign a pool of inband management IP addresses in the APIC GUI.
Just like with OOB, you can do one IP per pool and one pool per device, or you can
have one large pool and let APIC handle the distribution of IPs.
Inband management is a bit peculiar on ACI. APIC ships with a default mgmt tenant
under which you’ll find a default inb BD and a default inb VRF. The inband
management subnet you assign through APIC cannot extend outside the fabric.
The preferred method to configure OOB addresses is within APIC.
•  Navigation: Tenant mgmt | Node Management Address

The infrastructure administrator can configure a Node Management Policy for each
APIC node to tightly control IP address assignment; designate individual ranges
that are for a “single” address.
•  Managed object class: mgmt:OoB

OOB in Nexus standalone mode; presented to provide a frame of reference for OOB
in ACI:
configure
interface mgmt0
ip address ipv4-address[/length]
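
Within APIC, static OOB addresses land under the default out-of-band management
EPG in the mgmt tenant (the mgmt:OoB class mentioned above). A minimal REST
sketch for leaf node 101; the addresses here are hypothetical:

POST http://<apic>/api/mo/uni/tn-mgmt.xml

<mgmtMgmtP name="default">
  <mgmtOoB name="default">
    <!-- static OOB management address for leaf node 101 (hypothetical addressing) -->
    <mgmtRsOoBStNode tDn="topology/pod-1/node-101" addr="192.168.10.101/24" gw="192.168.10.1"/>
  </mgmtOoB>
</mgmtMgmtP>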

8
9
10
Fabric policies govern the operation of the interfaces that connect spine and leaf
switches (switch fabric ports), including such functions as Network Time Protocol
server synchronization (NTP), Intermediate System-to-Intermediate System Protocol
(IS-IS), Border Gateway Protocol (BGP) route reflectors, Domain Name System
(DNS) and so on. The fabric MO (managed object) contains objects such as power
supplies, fans, chassis, and so on.
Access policies govern the operation of external-facing interfaces that do not
connect to a spine switch (switch access ports) that provide connectivity to
resources such as storage, compute, Layer 2 (bridged) and Layer 3 (routed)
connectivity, virtual machine hypervisors (VMM), Layer 4-to-7 devices, and so on. If a
tenant requires interface configurations other than those provided in the default link,
Cisco Discovery Protocol (CDP), Link Layer Discovery Protocol (LLDP), Link
Aggregation Control Protocol (LACP), or Spanning Tree, an administrator (infra
admin) must configure access policies to enable such configurations on the access
ports of the leaf switches.
Access policies are grouped into the following categories:
•  Switch profiles specify which switches to configure and the switch configuration
policy.
•  Module profiles specify which leaf switch access cards and access modules to
configure and the leaf switch configuration policy.
•  Interface profiles specify which access interfaces to configure and the interface
configuration policy.
•  Global policies enable the configuration of DHCP, QoS, and attachable access
entity profile (AEP) functions that can apply to the entire fabric.

11
APIC:
•  ntpstat
Switches:
•  show ntp peers
•  show ntp peer-status

12
13
14
Tips from Don:
1. There are two ways to configure NTP in ACI: (a) add an NTP server under the
default Date-and-Time policy, or (b) create a new Date-and-Time policy with an
NTP server under it. Either way, select your date-time policy (default or custom)
on the Pod Policy. I tested both methods in the Sim lab. I configure the BGP-RR
and Date-Time policies on the Pod Policy up front. (See the REST sketch after
this list.)

2. Verify NTP sync on the APIC CLI using the “ntpstat” command. The output
should show “synchronized”. It takes several minutes to sync at first.

3. Windows server as NTP server: I had a customer try this, and the APIC never
synced to it. I found articles stating that the default Windows service (W32Time)
has low accuracy and that Linux ntpd will not sync to it.

4. The ACI GUI does not show the synced time from the APICs. This looks like
a defect.

5. I set up a CSR router VM as an NTP server in the Sim lab, and the APIC can
sync to it. Example config is below. This could work in a pinch at a customer
site. The same CSR can also be used as a test VM for ANPs.
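
Separately from the CSR configuration referenced in tip 5, here is a minimal REST
sketch of option (b) from tip 1: creating a new Date-and-Time policy with one NTP
provider. The policy name and server address are hypothetical, and the provider
reaches the server through the default OOB management EPG:

POST http://<apic>/api/mo/uni/fabric.xml

<datetimePol name="MyDateTimePol" adminSt="enabled">
  <!-- hypothetical NTP server; preferred="true" marks it as the primary source -->
  <datetimeNtpProv name="192.0.2.10" preferred="true">
    <!-- reach the server via the default out-of-band management EPG -->
    <datetimeRsNtpProvToEpg tDn="uni/tn-mgmt/mgmtp-default/oob-default"/>
  </datetimeNtpProv>
</datetimePol>

Remember that the policy only takes effect once it is selected on the Pod Policy,
as noted in tip 1.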

15
16

17
This slide outlines the interface configuration process.

Naming Convention examples (a VLAN pool REST sketch follows this list):
Pools: App_1_VlanPool, Outside_VlanPool
•  Fabric | Access Policies | Pools | VLAN
Physical and External Domains: App_1_PhyDom
•  Fabric | Access Policies | Physical and External Domains | Physical Domains
Attachable Entity Profile: App_1_AEP, Outside_L2_AEP, vDS_AEP
•  Fabric | Access Policies | Global Policies | Attachable Access Entity Profiles
Interface Policies: CDP_Enabled, LACP_Active
•  Fabric | Access Policies | Interface Policies | Policies
Interface Policy Group: App_1_Acc_PolGrp, UCS_vPC_PolGrp, ESX_BM_PolGrp*
•  Fabric | Access Policies | Interface Policies | Policy Groups
Interface Profiles: Leaf_1_App_1_IntProf, Leaf_to_UCS_FI_A
•  Fabric | Access Policies | Interface Policies | Profiles
Switch Profiles: Leaf_1_PhySwi, Leaf_2_PhySwi
•  Fabric | Access Policies | Switch Policies | Profiles
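
As a sketch of the first step, a static VLAN pool such as App_1_VlanPool can also
be created via the REST API; the VLAN range below is hypothetical:

POST http://<apic>/api/mo/uni/infra.xml

<fvnsVlanInstP name="App_1_VlanPool" allocMode="static">
  <!-- hypothetical encap block: VLANs 100-199 -->
  <fvnsEncapBlk name="encap1" from="vlan-100" to="vlan-199"/>
</fvnsVlanInstP>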

18
19
Policy-based Configuration of Access Ports – The infrastructure administrator
configures ports in the fabric for speed, Link Aggregation Control Protocol (LACP)
mode, LLDP and Cisco Discovery Protocol, etc.
In Cisco ACI, the configuration of physical ports is designed to be extremely simple
for both small- and large-scale data centers. The underlying philosophy of Cisco ACI
is that the infrastructure administrator categorizes servers based on their
requirements: virtualized servers with hypervisor A connected at Gigabit Ethernet
speed, nonvirtualized servers running OS A connected at 10 Gigabit Ethernet, etc.
Cisco ACI provides a way to keep this level of abstraction when defining the
connection of the servers to the fabric. The infrastructure administrator prepares a
template of configurations for servers connected with active-standby teaming,
PortChannels, and vPCs and bundles all the settings for the ports into a policy group.
The administrator then creates objects that select interfaces of the fabric in ranges
that share the same policy-group configuration.

20
Access policies configure external-facing interfaces that do not connect to a spine
switch. External-facing interfaces connect to external devices such as virtual machine
controllers and hypervisors, hosts, routers, or Fabric Extenders (FEXs). Access
policies enable an administrator to configure port channels and virtual port channels,
protocols such as LLDP, CDP, or LACP, and features such as monitoring or
diagnostics.
While tenant network policies are configured separately from fabric access policies,
tenant policies are not activated unless the underlying access policies they depend on
are in place.

Interface Policies are created globally for the fabric and assign interface level
settings.
•  Navigate: Fabric | Access Policies | Interface Policies | Policies

Configurable parameters include:
•  Link Speed
•  CDP State
•  LLDP State
•  LACP Mode
•  LACP Priority
•  Example: Link Speed = 10Gbps | CDP State = Enabled | LLDP = Disabled
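
As a sketch, such interface policies map to MOs like the following, posted to the
infra container. CDP_Enabled and LACP_Active come from the naming examples earlier
in this lesson; LLDP_Disabled is a hypothetical name:

POST http://<apic>/api/mo/uni/infra.xml

<infraInfra>
  <!-- CDP enabled on any port that consumes this policy -->
  <cdpIfPol name="CDP_Enabled" adminSt="enabled"/>
  <!-- LLDP disabled in both directions -->
  <lldpIfPol name="LLDP_Disabled" adminRxSt="disabled" adminTxSt="disabled"/>
  <!-- LACP active mode for port channels and vPCs -->
  <lacpLagPol name="LACP_Active" mode="active"/>
</infraInfra>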

21
An Interface Policy Group can be seen as a container that associates a set of
policies. Interface Policy Groups are interface definitions for access ports and
(v)port channels. Interface Policy Groups consume Interface Policies and provide
them to an Interface Profile.
Configurable parameters include:
•  Port Channel
•  vPC
•  CDP Policy
•  Monitoring Policy
•  Attached Entity Profile
•  …
Example: Leaf-to-UCS-FI_PolGrp, ESX_BM_PolGrp

When creating an Interface Policy Group, the admin has three options:
•  Access Port Policy Group: all interfaces that consume this Policy Group share
the same configuration but act as individual interfaces (see the sketch after
this list).
•  Port Channel Interface Policy Group: on each switch, all interfaces that
consume this Policy Group are part of the same port channel.
•  Virtual Port Channel Interface Policy Group: if two switches are paired
together in a vPC domain, all interfaces that consume this Policy Group form a
single virtual port channel spanning the two switches.
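
A sketch of an access-port policy group that consumes the interface policies above
and provides them through an AEP (names reused from earlier examples; posted to
the function profile container):

POST http://<apic>/api/mo/uni/infra/funcprof.xml

<infraAccPortGrp name="App_1_Acc_PolGrp">
  <!-- consume interface-level policies by name -->
  <infraRsCdpIfPol tnCdpIfPolName="CDP_Enabled"/>
  <!-- the AEP ties this group to its domains and VLAN pools -->
  <infraRsAttEntP tDn="uni/infra/attentp-App_1_AEP"/>
</infraAccPortGrp>

Port channel and vPC variants use the infraAccBndlGrp class instead, with
lagT="link" or lagT="node" respectively.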

22
The infra admin can verify the Interface Policy Groups created by selecting Fabric |
Access Policies | Interface Policies | Policy Groups.

The example illustrates two tables, the first specific to interfaces, the second for
vPCs. Each column identifies the specific Interface Policies included in the
Interface Policy Group.

23
While creating an Interface Access Profile, the infrastructure admin specifies which
interfaces to configure.
The admin associates the Interface Access Profile with an Interface Policy Group:
•  All interfaces in the Interface Access Profile will be configured using the
policies selected in the Interface Policy Group.

Selectors are intended to give the admin an easy way to deploy a group of switches
or interfaces/modules sharing the same configuration.
Assume a set of interfaces (for instance, 1/10-1/20) on a set of switches (for
instance, nodes 101-105 and 110) share the same configuration. Instead of asking
the admin to log in to one switch at a time and configure one interface at a time,
let the admin:
1.  Select a set of switches (in our example, switches 101-105 and 110)
2.  Select a set of interfaces (in our example, interfaces 1/10 to 1/20)
3.  Select a set of policies (all policies to be deployed on the selected interfaces)
4.  Connect all of these together.

The final result is that on all the selected switches, on all the selected
interfaces, the selected policies will be deployed. A REST sketch of the
interface-selection piece follows.
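
As a sketch, an interface profile with a port selector for interfaces 1/10-1/20,
tied to the policy group from the earlier example; the selector and block names are
hypothetical:

POST http://<apic>/api/mo/uni/infra.xml

<infraAccPortP name="Leaf_1_App_1_IntProf">
  <infraHPortS name="App_1_Ports" type="range">
    <!-- select ports 1/10 through 1/20 on module 1 -->
    <infraPortBlk name="blk1" fromCard="1" toCard="1" fromPort="10" toPort="20"/>
    <!-- apply the interface policy group to the selected ports -->
    <infraRsAccBaseGrp tDn="uni/infra/funcprof/accportgrp-App_1_Acc_PolGrp"/>
  </infraHPortS>
</infraAccPortP>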

24
While creating a Switch Access Profile, the admin associates it with an Interface
Access Profile:
•  On all switches specified by the Switch Access Profile, all interfaces specified
in the associated Interface Access Profile will be configured using the policies
selected in the Interface Policy Group.
While creating an Interface Access Profile, the admin specifies which interfaces to
configure and associates the Interface Access Profile with an Interface Policy
Group:
•  All interfaces in the Interface Access Profile will be configured using the
policies selected in the Interface Policy Group.
Selectors are intended to give the admin an easy way to deploy a group of switches
or interfaces/modules sharing the same configuration. Assume a set of interfaces
(for example, 1/10-1/20) on a set of switches (for example, nodes 101-105 and 110)
share the same configuration. Instead of asking the admin to log in to one switch
at a time and configure one interface at a time, let the admin:
•  Select a set of switches (in our example, switches 101-105 and 110)
•  Select a set of interfaces (in our example, interfaces 1/10 to 1/20)
•  Select a set of policies (all policies to be deployed on the selected interfaces)
•  Connect all of these together.

The final result is that on all the selected switches, on all the selected
interfaces, the selected policies will be deployed.
Preprovisioning Switch Profiles for Each Leaf – After bringing up the fabric, the
infrastructure administrator can create a switch profile for each leaf (and each
vPC leaf pair) up front; subsequent interface configurations then only require
associating interface profiles with the appropriate switch profiles. A sketch of
such a switch profile follows.
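
A minimal sketch of a switch profile selecting node 101 and referencing the
interface profile above (names hypothetical, reused from the earlier examples):

POST http://<apic>/api/mo/uni/infra.xml

<infraNodeP name="Leaf_1_PhySwi">
  <infraLeafS name="Leaf_101" type="range">
    <!-- select leaf node 101 -->
    <infraNodeBlk name="blk1" from_="101" to_="101"/>
  </infraLeafS>
  <!-- associate the interface profile defined earlier -->
  <infraRsAccPortP tDn="uni/infra/accportprof-Leaf_1_App_1_IntProf"/>
</infraNodeP>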

25
26
Access Layer – The configuration of Cisco ACI access policies requires an
understanding of VLAN and VxLAN name spaces. Access policies also make
configuration of PortChannels and vPCs easy to accomplish.
Use of VLANs as a Segmentation Mechanism – In Cisco ACI, the VLANs used
between a server and a leaf have local significance and they are used exclusively to
segment traffic coming from the servers. Cisco ACI has been designed so that when
using virtualized workloads you don’t have to enter VLAN numbers manually for each
port group. Whenever possible, one should leverage the dynamic negotiation of
VLANs between the virtualized server and the Cisco ACI fabric.
In ACI you create VLAN pools that are associated with “Domains”. VLAN pools are
created for use both internal to the fabric (e.g., physical or virtual machines) and
external to the fabric (e.g., L2 extension).
•  VLANs cannot be overlapping on the same leaf switch
•  VLAN Pools are consumed by Physical or VMM (virtual) Domains
•  Navigate: Fabric | Access Policies | Pools | VLAN

27
VLANs in ACI don’t have the same meaning as VLANs in a regular switched
infrastructure. The VLAN tag is used purely for classification purposes. Traffic is
mapped to a Bridge Domain that has global scope, so the local VLANs on two ports
may differ even if they belong to the same “broadcast” domain. Furthermore, VLANs
can be used to segment traffic into EPGs, but again they have local significance on
a link.
In ACI you create VLAN pools that you can then associate with “Domains”.

28
29
Concept of Domain – Whether you connect physical or virtual servers to the Cisco
ACI fabric, you define a physical or a virtual domain. Virtual domains reference a
particular virtual machine manager (for example, VMware vCenter 1 or data center
ABC) and a particular pool of VLANs or VxLANs that will be used. A physical domain
is similar to a virtual domain except that there’s no virtual machine manager
associated with it.
The person who administers the VLAN or VxLAN space is the infrastructure
administrator. The person who consumes the domain is the tenant administrator. The
infrastructure administrator associates domains with a set of ports that are entitled or
expected to be connected to virtualized servers or physical servers through an attach
entity profile (AEP).
You don’t need to understand the details of the AEP except that it encapsulates the
domain. The AEP can include boot policies for the virtualized server to boot from the
network, and you can include multiple domains under the same AEP and authorize
virtualized servers of different kinds.

External Bridged Domains, External Routed Domains, and Physical Domains
consume VLAN pools and provide resources to Attachable Entity Profiles (AEPs).
Physical Domains are used for devices attaching to the fabric: servers, virtual
machines, firewalls, etc.
Navigate: Fabric | Access Policies | Physical and External Domains

30
The Infrastructure Administrator carves up the VLAN space into multiple Physical
Domains. These domains are then associated with interfaces via the Attach Entity
Profile.
Click Fabric | Access Policies | Physical and External Domains | Physical Domains
to create a physical domain and associate it with the VLAN pool.
In NX-OS, there is no concept of domains. You can do something vaguely similar by
using VLAN translation, which translates a VLAN from a link into some other VLAN
number in order to deal with overlapping VLAN spaces. You can also use VDCs to
achieve a similar result by partitioning ports into different VDCs and reusing the
same VLAN address space in different VDCs.

To analyze what these dialogs illustrate: the “TEST-PhyDom” Physical Domain is
associated to the “TEST-VLAN-Pool” and to the physical ports via the TEST-AEP.
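
A minimal REST sketch of what these dialogs configure, assuming TEST-VLAN-Pool is
a static pool:

POST http://<apic>/api/mo/uni.xml

<physDomP name="TEST-PhyDom">
  <!-- tie the domain to the static VLAN pool -->
  <infraRsVlanNs tDn="uni/infra/vlanns-[TEST-VLAN-Pool]-static"/>
</physDomP>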

31
32
Attach Entity Profiles (AEP) represent a group of external entities with similar
infrastructure policy requirements. The infrastructure policies consist of physical
interface policies, such as Cisco Discovery Protocol (CDP), Link Layer Discovery
Protocol (LLDP), Maximum Transmission Unit (MTU), or Link Aggregation Control
Protocol (LACP).
An AEP is required to deploy VLAN pools on leaf switches. Encapsulation pools (and
associated VLANs) are reusable across leaf switches. An AEP implicitly provides the
scope of the VLAN pool to the physical infrastructure.
The following AEP requirements and dependencies must be accounted for in various
configuration scenarios:
•  While an AEP provisions a VLAN pool (and associated VLANs) on a leaf switch,
endpoint groups (EPGs) enable VLANs on the port(s). No traffic flows unless an
EPG is deployed on the port.
•  Without AEP VLAN pool deployment, a VLAN is not enabled on the leaf port
even if an EPG is provisioned.
•  A particular VLAN is provisioned or enabled on the leaf port based on EPG
events, either static binding on a leaf port or VM events from external
controllers such as VMware vCenter.
•  A leaf switch does not support overlapping VLAN pools. Different overlapping
VLAN pools must not be associated with the same AEP.

In summary, AEPs provide VLAN access control (i.e., which interfaces or VMM
domains have which VLANs assigned). In the Fabric Access Policies, the
infrastructure administrator assigns the physical domain to a set of leaf ports via
the AEP (covered in detail on the following slides).

33
Attachable Entity Profiles (AEP) consume Physical and/or External Domains and
are provided to Interface Policy Groups and VMM Domains.

Navigate: Fabric | Access Policies | Global Policies | Attachable Access Entity Profiles
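
A minimal REST sketch of an AEP that consumes the TEST-PhyDom physical domain from
the previous slide (posted to the infra container):

POST http://<apic>/api/mo/uni/infra.xml

<infraAttEntityP name="TEST-AEP">
  <!-- consume the physical domain, and through it the VLAN pool -->
  <infraRsDomP tDn="uni/phys-TEST-PhyDom"/>
</infraAttEntityP>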

34
AEPs are required to enable VLANs on the leaf. Without VLAN pool deployment
using an AEP, a VLAN is not enabled on the leaf port even if an EPG is provisioned;
no traffic flows unless an EPG provisions the VLAN.
A particular VLAN is provisioned and enabled on the leaf port based on EPG events:
•  Static binding on a leaf port
•  LLDP discovery in the case of a VMM domain (i.e. VMM Events from vCenter)

35
To apply a configuration across a potentially large number of switches, an
administrator defines switch profiles that associate interface configurations in a single
policy group. In this way, large numbers of interfaces across the fabric can be
configured at once. Switch profiles can contain symmetric configurations for multiple
switches or unique special purpose configurations. The following figure shows the
process for configuring access to the ACI fabric.
Interface Configuration Summary
1.  Configure VLAN Pools
2.  Configure Domain [Physical, External, ...] → associate VLAN Pool
3.  Configure AEP → associate Interface Policy Group
4.  Configure Interface Policies [link speed, CDP, LLDP, LACP, etc.]
5.  Configure Interface Policy Group → associate Interface Policies
6.  Configure Interface Profile → associate Physical Interfaces
7.  Configure Switch Policy Profile → associate Interface Profiles

36

37
Over the past few years many customers have sought ways to move past the
limitations of spanning tree. The first step on the path to modern data centers based
on Cisco Nexus solutions came in 2008 with the advent of Cisco virtual
PortChannels (vPC). A vPC allows a device to connect to two different physical
Cisco Nexus switches using a single logical Cisco PortChannel interface.
Prior to Cisco vPC, Cisco PortChannels generally had to terminate on a single
physical switch. Cisco vPC gives the device active-active forwarding paths. Because
of the special peering relationship between the two Cisco Nexus switches, spanning
tree does not see any loops, leaving all links active. To the connected device, the
connection appears as a normal Cisco PortChannel interface, requiring no special
configuration. The industry-standard term is Multichassis EtherChannel; the
Cisco Nexus implementation is called vPC.
Cisco vPC deployed on a spanning tree Ethernet network is a very powerful way to
curb the number of blocked links and thereby increase available bandwidth. Cisco
vPC on Cisco Nexus 9000 switches is a great solution for commercial customers and
those satisfied with their current bandwidth, oversubscription, and Layer 2
reachability requirements.

To use existing hardware, increase access port density and increase 1 Gigabit
Ethernet port availability, Cisco Nexus 2000 Series Fabric Extenders can be
attached to the Cisco Nexus 9300 platform in a single-homed straight-through
configuration. Host uplinks connected to the fabric extenders can be either
active/standby or active/active, if configured in a vPC.

38
Cisco vPC provides the following benefits:
•  Allows a single device to use a Cisco PortChannel connected to two upstream
Cisco Nexus switches.
•  Eliminates spanning tree blocked ports.
•  Provides a loop-free topology.
•  Uses all available uplink bandwidth.
•  Provides fast convergence if either a link or a switch fails.
•  Provides link-level resiliency through standard Cisco PortChannel mechanisms.
•  Helps ensure high availability by connecting to two different physical switches.

The Council of Oracles Protocol (COOP) runs on the spine switches to ensure that
all spines maintain a consistent copy of endpoint address and location information.

39
Pre-provisioning vPC Domains for Each Leaf Pair – As part of the initial
configuration, you can divide the leaf switches into pairs of vPC domains by
creating a vPC protection policy. You should pair the leaf switches in the same way
as you paired them in the switch profiles: that is, you could create vpcdomain1,
vpcdomain2, etc., where vpcdomain1 selects leaf switches 101 and 102, vpcdomain2
selects leaf switches 103 and 104, and so on.
The infrastructure administrator is responsible for the following operations:
•  Creating the vPC policy group
•  Associating the correct AEP with the policy group

The tenant administrator is responsible for associating the EPG with the path that
includes the vPC.

40
The following vPC configuration example is a review of the access policies already
discussed in this lesson. You can either configure the individual policies (switch
profile, interface profile, etc.) or use the “Quick Start” wizard.
Virtual Port Channel
In ACI, the logic to define a virtual port channel is as follows:
•  The infrastructure administrator creates the vPC domain from the Access
Policies as a “protection” policy (i.e., which nodes are part of a vPC domain).
•  The infrastructure administrator defines a “Bundle Interfaces” Access Policy
Group, where the specific vPC channel-group configuration is created.
•  The interface policy defines a list of interfaces that are associated with the
vPC channel-group, but it does not specify which leaves this is associated with.
•  The switch policy defines the list of nodes whose interfaces, selected by the
interface policy, are defined as part of the vPC.

The configuration of the vPC domain (to be performed only once per pair of vPC leaf
switches) is as follows:
http://10.51.66.236/api/mo/uni/fabric.xml
<fabricProtPol name="protocolpolicyforvpc">
<fabricExplicitGEp name="myVpcGrp" id="101">
<fabricNodePEp id="101"/>
<fabricNodePEp id="102"/>
</fabricExplicitGEp>
</fabricProtPol>
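
The next step is the vPC interface policy group, a bundle group with lagT="node".
A minimal sketch, reusing the UCS_vPC_PolGrp name from the earlier naming examples;
the LACP policy and AEP references are hypothetical:

POST http://<apic>/api/mo/uni/infra/funcprof.xml

<infraAccBndlGrp name="UCS_vPC_PolGrp" lagT="node">
  <!-- lagT="node" makes this a vPC; lagT="link" would be a regular port channel -->
  <infraRsLacpPol tnLacpLagPolName="LACP_Active"/>
  <!-- the AEP ties the vPC to its domains and VLANs -->
  <infraRsAttEntP tDn="uni/infra/attentp-App_1_AEP"/>
</infraAccBndlGrp>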

41
Port channels in Nexus standalone mode are illustrated to provide a frame of
reference for ACI.

42

43
The picture illustrates the relationships among the components that associate a
port with its VLANs.
•  In the Fabric Access Policies, the Infrastructure Administrator assigns the
Physical Domain to a set of leaves/ports via the Attach Entity Profile, which is
referenced by Switch Policy Profiles and by Interface Policy Profiles via the
Policy Group.
•  A Domain automatically derives all the physical interface policies from the
interface policy groups associated with the AEP.
•  The AEP is the glue between the valid VLANs and the interfaces.

Although End Point Groups (EPGs) have yet to be discussed, it is important to know
that when you create an EPG, you associate the EPG to a Domain.
•  If the AEP or other access policies are misconfigured, faults are not generated until
you configure end point groups (EPGs).

44
In the Fabric Access Policies, the Infrastructure Administrator assigns the Physical
Domain to a set of leaves/ports via the Attach Entity Profile (AEP), which is
associated with an Interface Policy Group. A Domain automatically derives all the
physical interface policies from the Interface Policy Group associated with the
AEP. In essence, the AEP is the glue between the valid VLANs and the interfaces.
Although End Point Groups (EPGs) have yet to be discussed, it is important to know
that when you create an EPG, you associate the EPG to a Domain.
•  If the AEP or other access policies are misconfigured, faults are not generated until
you configure end point groups (EPGs).

Ensure that no two domains with overlapping encap ranges in their associated
namespaces are deployed on the same leaf. If this happens, the encap range that
reaches the leaf first is honored; the later one is considered overlapping and is
ignored. In such cases, the EPG associated with the domain whose encap namespace
overlaps will have an “invalid-vlan” fault raised on it.

45
46
47
