
ASR9000 Selected Topics and Troubleshooting

Presenter Name and Title


BRKSPG-2904
Agenda

• Integrated Routing and Bridging


• Punt and Inject
• ARP
• Loadbalancing
• Troubleshooting Packet Forwarding
• eXR
• Usability
Integrated Routing and Bridging
What's IRB?
• IRB: Integrated Routing and Bridging
 – From the CCO configuration guide: "allows you to route a given protocol between routed interfaces and bridge groups within a single switch router"
 – IRB is a long-standing technology, available on router platforms for over 16 years
 – IRB uses a BVI (bridge-group virtual interface) for L3 routing. The BVI represents the logical L3 interface for a group of L2 ports
 – The BVI is treated as a regular L3 logical interface, with an IP address and other L3 features
• SVI: Switched Virtual Interface (also called "interface VLAN")
 – Switch platforms use the feature name "SVI" (switched virtual interface) instead of IRB. An SVI represents the logical L3 interface for all L2 ports in a VLAN. It provides the same function as IRB. SVI is much more widely deployed than IRB
IRB CLI Example

[Figure: bridge-domain on the ASR 9000 with two L2 ports and VPLS/H-VPLS PWs over MPLS; the BVI (bridge-group virtual interface) is the logical "bridge" port for L3 routing, next to a regular L3 port]

interface gig 0/0/0/1.50 l2transport
 encapsulation dot1q 50
 rewrite ingress tag pop 1 symmetric
!
interface gig 0/0/0/2 l2transport
 encapsulation dot1q 50
 rewrite ingress tag pop 1 symmetric
!
l2vpn
 bridge group cisco
  bridge-domain domain50
   interface gig 0/0/0/1.50
   interface gig 0/0/0/2
   routed interface bvi 20    <- BVI
   neighbor 1.2.3.4 pw-id 55
   vfi 60
    neighbor 2.3.4.5 pw-id 60
!
interface gig 0/0/0/5.20
 encapsulation dot1q 20
 ipv4 address 2.2.2.2 255.255.255.0
!
interface bvi 20    <- BVI
 ipv4 address 1.1.1.1 255.255.255.0
IRB Packet Forwarding Overview

[Figure: bridge-domain with L2 ports, VPLS/H-VPLS PWs over MPLS, the logical BVI "bridge" port, and an L3 port on the ASR 9000. Regular L2 bridging runs among all logical "bridge" ports in the same bridge-domain; L3 services involve two forwarding steps]

L3 routing is via the BVI. It involves two steps:
(1) L2 bridging between the L2 ports and the BVI "bridge" port. This is regular MAC-address-based L2 forwarding. The BVI is tied to an internal bridge "port", and the BVI's MAC address is used for L2 forwarding.
(2) L3 routing between the BVI and the other L3 interfaces.

Depending on the HW capability, these two steps (L2 and L3 forwarding) can be handled in one forwarding lookup cycle, or may need two lookup cycles. If two lookup cycles are needed, the packet is typically re-circulated back to the forwarding ASIC for the second lookup, which has a potential performance impact.
IRB Packet Flow (1) – Unicast: L2 -> L3

[Figure: ingress LC (PHY -> NP -> FIA) across the switch fabric to the egress LC; steps 1-3 marked on the ingress NP/TM and the egress NP]

(1) Ingress L2 lookup. The TM replicates the packet and loops the entire packet (not just the packet head) back to the NP: one additional NP lookup on the ingress NP, one TM packet replication.
(2) Ingress L3 lookup.
(3) Egress regular L3 lookup.
IRB Packet Flow (2) – Unicast: L3 -> L2

[Figure: ingress LC (PHY -> NP -> FIA) across the switch fabric to the egress LC; steps 1-3 marked on the ingress NP/TM and the egress NP]

(1) Ingress L3 lookup & L2 rewrite (including egress BVI counters). The TM replicates the packet and loops it back to the NP: one additional NP lookup on the ingress NP, one TM packet replication.
(2) Ingress L2 lookup.
(3) Egress regular L2 lookup.
IRB Packet Flow (3) – L3 Multicast Fan-out to BVI/L2

[Figure: L3 ports on the ingress LC; multicast replicated across the switch fabric to the egress LC and out an L2 port]

(1) Regular L3 multicast replication.
(2) Multicast replication to L2 ports (see next slide for details).
L3 Multicast Fan-out to BVI/L2 – Inside NP

[Figure: inside the NP: L3 lookup -> replication via TM -> L2 lookup -> replication -> replicated-packet lookup -> to egress L2 port. Example: only one L2 bridge port in the bridge-domain on a given NP. This is the worst-case scenario: one multicast packet needs 3 lookups and 2 replications]

• Initial lookup
 – Regular L3 multicast lookup; the result indicates replication to the BVI interface
 – The packet is sent to the replication engine, which replicates the packet and sends the copy back to the processing engine
• Second lookup
 – Regular L2 multicast lookup; the result indicates replication to the L2 port
 – The packet is sent to the replication engine, which replicates the packet and sends the copy back to the processing engine
• Third lookup
 – Replicated L2 multicast packet lookup
L3 Multicast Fan-out to BVI/L2 – Inside NP: One Copy Optimization

[Figure: inside the NP: a combined L3+L2 lookup feeds the internal replication engine directly, followed by the replicated-packet lookup out to the egress L2 port. Example: only one L2 bridge port in the bridge-domain on a given NP]

On a given NP, for a particular mroute, if there is only one local L3 interface (BVI) in the OIF list, the L3 and L2 lookups can be optimized into one. This also eliminates the first multicast replication shown in the picture.

Overhead: one multicast packet needs two lookups and one replication.
RSP/RP BVI Inject Path Difference
• The RSP doesn't maintain the L2FIB for BVI interfaces
• The RSP will use a service card
• The 1st linecard that comes up is usually selected
• NP0 on that slot is used for the L2 lookup for RSP-injected packets

RP/0/RP1/CPU0:ASR9922-A#sh adjacency bvi-fia
Fri Jul 8 08:52:07.745 EDT
Service card for BVI interfaces on node 0/RP1/CPU0
MPLS packets:     Slot 0/1/CPU0 VQI 0xc0
Non-MPLS packets: Slot 0/1/CPU0 VQI 0xc0
----- Service Card List -----
0/RP0/CPU0 VQI: Not Applicable
0/RP1/CPU0 VQI: Not Applicable
0/0/CPU0   VQI: 0x180
0/1/CPU0   VQI: 0xc0
Punt and Inject
Items covered
• Netio
• SPP
• LPTS
• Punt/Inject paths

Netio
• PI module that handles all protocol packets (on either the RSP or the LC)
• Each card has a FINT (fabric interface; MTU 9500)
• The RSP has 2 Mgmt interfaces
• An LC has 'N' physical interfaces
• In addition, it handles the virtual interface associations
Netio (cont…)
• A chain-walker works on the packet based on the packet type set in the punted packet
• All modules install their handlers in the chain
• Deals with SPP for Tx / Rx
• Netio PD trims or adds PD headers
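NetIO keeps its own drop counters, which are often the first place to look when punted packets go missing. A minimal check, assuming location 0/0/CPU0 (both commands also appear in the "show drops all" grammar later in this deck):

RP/0/RSP0/CPU0:router#show netio drops location 0/0/CPU0
RP/0/RSP0/CPU0:router#show netio clients location 0/0/CPU0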
SPP
• Software Packet Path
• Deals with super-frames
• Optimized packet processing: CPU pipelining of instructions and data-cache usage
• Runs on miscellaneous CPUs with different I/O driver support
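SPP exposes per-node and per-client counters; a quick health-check sketch, assuming location 0/0/CPU0 (both commands are also listed in the "show drops all" grammar later in this deck):

RP/0/RSP0/CPU0:router#show spp node-counters location 0/0/CPU0
RP/0/RSP0/CPU0:router#show spp client detail location 0/0/CPU0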
LPTS
• Local Packet Transport System
• Pre-IFIB packet processing (for-us packets)
• Control plane for control packets
• L3 applications on the RSP are responsible for triggering / installing the LPTS entries
• LPTS entries are installed in software (on the local CPU) and in hardware (TCAM)
• LPTS entries in TCAM also include a policer
• Polices on protocol (BGP, OSPF, SSH) and flow state (BGP established, BGP configured, and BGP listen)
• Leverages HW ASICs for policing before traffic hits the RP/LC CPU (inspection commands sketched below)
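To inspect the hardware LPTS entries and their policers, a minimal sketch assuming location 0/0/CPU0 (these commands also appear in the "show drops all" grammar later in this deck):

RP/0/RSP0/CPU0:router#show lpts pifib hardware police location 0/0/CPU0
RP/0/RSP0/CPU0:router#show lpts pifib hardware entry statistics location 0/0/CPU0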
Punt Infra Path Segments
• Segment 1: RSP CPU -> LC CPU
• Segment 2: LC CPU -> RSP CPU
• Segment 3: LC CPU -> Wire (via fabric)
• Segment 4: LC CPU -> Wire (direct)
• Segment 5: Wire -> RSP CPU
• Segment 6: Wire -> LC CPU
• Segment 7: RSP CPU -> Wire
Ping
• Ping to a local IP address on an LC physical interface
 • Ping request: ping -> ipv4_io -> netio -> spp -> punt-fpga -> fia_rsp -> xbar -> fia_lc -> np -> punt_switch -> spp -> netio -> ipv4_io -> icmp
 • Ping response: icmp -> ipv4_io -> lpts -> netio -> spp -> punt_switch -> np -> fia_lc -> xbar -> fia_rsp -> punt_fpga -> spp -> netio -> lpts -> ping
• Ping to a remote device (towards an LC) from the RSP
 • Ping request: ping -> ipv4_io -> netio -> spp -> punt-fpga -> fia_rsp -> xbar -> fia_lc -> np -> wire
 • Ping response: wire -> np -> bridge -> fia_lc -> xbar -> fia_rsp -> punt_fpga -> spp -> netio -> lpts -> ping
• Ping to a mgmt interface on the RSP
 • Ping request: ping -> ipv4_io -> netio -> ether_driver -> mgmt-lan
 • Ping response: mgmt-lan -> ether_driver -> netio -> ipv4_io -> ping
• Ping from a remote device
 • Ping request: wire -> np -> punt_switch -> spp -> netio -> ipv4_io -> icmp
 • Ping response: icmp -> ipv4_io -> netio -> spp -> punt_switch -> ingress-np -> fia_lc -> xbar -> fia_lc -> egress-np -> wire
Punt and Inject

[Figure: LC with L3 ports on PHY -> NP0-NP3 -> FIA, connected through the switch fabric; punt path from the LC FIA across the fabric to the punt FIA and CPU on the RSP]
ARP

ARP Architecture

IOS XR:
• fully distributed
• two-stage forwarding operation (clear separation of ingress and egress feature processing)
• Layer 2 header imposition is an egress operation
 -> only the line card that hosts the egress interface needs to know the Layer 2 encapsulation for packets that have to be forwarded out of that interface
 -> as a consequence, ARP and adjacency tables are local to a line card

[Figure: the RP runs the protocols (LDP, RSVP-TE, BGP, OSPF, Static, ISIS, EIGRP) feeding LSD and RIB; over the internal EOBC each LC runs ARP, AIB and SW FIB, programming the NPU FIB and adjacencies]

AIB: Adjacency Information Base
RIB: Routing Information Base
FIB: Forwarding Information Base
LSD: Label Switch Database

Exceptions:
• Bundle-Ethernet interfaces: ARP synced b/w all LCs that host the bundle members
• Bridged Virtual Interfaces (BVI): ARP synced b/w all LCs (ingress + egress L3 processing on the ingress LC)
High-Scale ARP Deployments
• Challenges:
 • synchronisation of a large number of ARP entries across line cards during large ARP churn:
  • ARP storm in the attached subnet
  • Router starts forwarding to end devices in a large attached subnet, triggering ARP resolution requests
 • Running the "show arp" command or polling ARP via SNMP at very high scale can further slow down the ARP process
• Supported scale:
 • The ARP table can grow up to the dynamic memory limit imposed on the ARP process
 • You can go beyond what is thoroughly tested at Cisco, but make sure you test your deployment scenario
 • 128k entries per LC tested thoroughly at Cisco
ARP Data Plane

[Figure: line card punt path: PHY -> NP (LPTS) -> FIA / punt switch -> Tsec driver -> SPP -> spio / netio -> ARP process on the LC CPU]

• The ingress NP classifies the packet to an interface and detects that it's an ARP packet.
• The packet is subjected to Local Packet Transport Services (LPTS) processing:
 • the packet goes through the dedicated policer for ARP packets (all ARP is subject to this policer; the NP ucode doesn't validate the request and punts ARP requests directly to the LC CPU)
 • if not dropped by the policer, the packet is punted to the local line card CPU
• The packet is received by SPP and passed on to the ARP process via spio.
• Outgoing ARP packets are injected from the RP CPU to the NP that hosts the egress interface.
 • BE: the RP performs the load-balancing hash calculation and injects the packet to the selected bundle member.
 • BVI: the RP sends the ARP packet toward NP0 of an LC dedicated to handling packets injected into BVI.
ARP Data Plane – Resolution

[Figure: same line card punt path: PHY -> NP (LPTS) -> FIA / punt switch -> Tsec driver -> SPP -> spio / netio -> ARP process on the LC CPU]

• When the line card that performs the egress IPv4 processing receives a packet for a prefix for which there is no adjacency information, the packet must be punted to the slow path for ARP resolution.
• The packet is subjected to Local Packet Transport Services (LPTS) processing:
 • the packet goes through the dedicated policer for ARP packets (all ARP is subject to this policer; the NP ucode doesn't validate the request and punts ARP requests directly to the LC CPU)
 • if not dropped by the policer, the packet is punted to the local line card CPU
• The packet is received by NetIO (a dedicated queue for packets that require ARP resolution).
• NetIO triggers an ARP resolution request for the given IPv4 address.
• The original IPv4 packet sits in the NetIO queue until ARP resolution is completed and the adjacency created, after which it's injected towards the NP.
ARP Data and Control Plane Commands

NP5 is receiving excessive ARP packets: 929 pps punted, 4422 pps dropped.

RP/0/RSP0/CPU0:ASR9006-H#sh controllers np counters np5 location 0/1/CPU0
<…output omitted..>
776 ARP       130760261  929
777 ARP_EXCD  613523651  4422

993 packets are sitting in NetIO awaiting ARP resolution.

RP/0/RSP0/CPU0:ASR9006-H#sh netio clients location 0/1/CPU0
<…output omitted..>
          Input       Punt                XIPC InputQ   XIPC PuntQ
ClientID  Drop/Total  Drop/Total          Cur/High/Max  Cur/High/Max
--------------------------------------------------------------------------------
<…output omitted..>
arp       0/0         17025280/138794437  0/0/1000      993/1000/1000

If you want to see which packets are awaiting ARP resolution, check the ARP process job ID on the desired location and dump the packets:

RP/0/RSP0/CPU0:ASR9006-H#sh packet-memory job 115 data 128 location 0/1/CPU0
Fri Jan 29 07:26:03.500 MyZone
Pakhandle   Job Id  Ifinput          Ifoutput  dll/pc      Size  NW Offset
0xdd8e18f8  115     TenGigE0/1/0/17  BVI1051   0x4e78bd78  60    14
0000000: 00082C00 1058C800 00000000 04DA4500 002E0000
0000020: 00003F3D 8BFA1600 00160D00 CC830001 02030405
0000040: 06070809 0A0B0C0D 0E0F1011 12131415 16171819
-----------------------------------------------------------------
ARP Configuration Commands

• Configure the LPTS ARP policer. This rate is applied to all NPs on the LC: if you have 8 NPs on the LC, the max total ARP rate hitting the LC CPU would be 400 x 8 = 3200 pps.

!
lpts punt police location 0/1/CPU0
 protocol arp rate 400
!

• Recommended in BNG deployments (available starting from 5.3.3):

!
subscriber arp scale-mode-enable
!

• Recommended in HSRP/VRRP deployments; keeps ARP pre-populated on the standby (available starting from 5.3.2):

RP/0/RSP0/CPU0:ASR9006-H(config)#arp redundancy group 1 ?
  interface-list    List of Interfaces for this Group
  peer              Peer Address for this Group
  source-interface  Source interface for Redundancy Peer Communication

Read more on ARP operation and troubleshooting in:
https://supportforums.cisco.com/document/12766486/troubleshooting-arp-asr9000-routers
How Do I Tell If the ARP Process Is Overloaded?

LC/0/1/CPU0:Jan 27 06:38:12.445 : netio[270]: %PKT_INFRA-PQMON-6-QUEUE_DROP : Taildrop on XIPC queue 2 owned by arp (jid=115)
• The NetIO queue is full; packets that require ARP resolution are dropped

RP/0/RSP0/CPU0:ASR9006-H#sh arp trace location 0/1/CPU0 | i GSP send failed
Jan 28 00:18:59.839 ipv4_arp/slow 0/1/CPU0 104# t1 ERR: GSP send failed for 128 arp entries, message ID: 5140529: No space left on device
• ARP synchronisation requests are dropped
• This is most likely to occur in IRB deployments (BVI interfaces)
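Overall ARP process statistics (requests, replies, drops) can also be checked with the command below; location 0/1/CPU0 is illustrative, and the command is listed in the "show drops all" grammar later in this deck:

RP/0/RSP0/CPU0:ASR9006-H#show arp traffic location 0/1/CPU0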
High-Scale ARP Deployment Tips
• Validate your solution if you're going beyond 128k entries per LC
• Tighten the LPTS policer rate
 • A safe ballpark number is 1000 per LC (if the LC has 4 NPs, configure 250)
• Monitor ARP counters at the NP
• Monitor the NetIO ARP resolution queue
• Monitor the adjacency summary info
• If using BVI, look for GSP failures in the ARP traces
• In BNG deployments on IOS XR release 5.3.3 or later, configure "subscriber arp scale-mode-enable"
• A monitoring checklist is sketched below
• Read more on ARP operation and troubleshooting in:
 • https://supportforums.cisco.com/document/12766486/troubleshooting-arp-asr9000-routers
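A minimal monitoring checklist built from the commands shown earlier in this section (location 0/1/CPU0 and the NP instance are illustrative):

• show controllers np counters np0 location 0/1/CPU0  (ARP / ARP_EXCD counters)
• show netio clients location 0/1/CPU0  (ARP XIPC PuntQ depth)
• show adjacency summary location 0/1/CPU0  (adjacency counts)
• show arp trace location 0/1/CPU0 | i GSP send failed  (BVI ARP sync failures)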
ARP on Unnumbered Interfaces – BNG!
• In a typical BNG configuration the bundle-access interface is unnumbered
• Unnumbered interfaces inherit all attributes from the parent
 • including the primary and all secondary IPv4 addresses
• The ARP table quickly grows:

RP/0/RSP0/CPU0:a9k#sh arp Bundle-Ether100.902.ip8 location 0/0/CPU0
Address       Age  Hardware Addr   State      Type  Interface
172.17.255.1  -    8478.ac7a.ba73  Interface  ARPA  Bundle-Ether100.902
172.31.252.1  -    8478.ac7a.ba73  Interface  ARPA  Bundle-Ether100.902
172.31.253.1  -    8478.ac7a.ba73  Interface  ARPA  Bundle-Ether100.902

interface Bundle-Ether100.902
 ipv4 point-to-point
 ipv4 unnumbered Loopback902
 encapsulation dot1q 902
 ipsubscriber ipv4 l2-connected
  initiator dhcp
!
interface Loopback902
 ipv4 address 172.17.255.1 255.255.255.255
 ipv4 address 172.31.252.1 255.255.255.0 secondary
 ipv4 address 172.31.253.1 255.255.255.0 secondary

• New command in 5.3.3: subscriber arp scale-mode-enable
 • Disables ARP on subscriber interfaces
 • DHCP or PPPoE already provide the MAC -> IPv4 binding
ARP in Large-Scale VRRP/HSRP

[Figure: VRRP pair between the core and a large L2 network; a core-side failure (X) diverts downstream traffic to the standby]

• Scenario:
 • The VRRP active router is also the preferred router for downstream traffic
 • Suddenly the standby VRRP router starts receiving traffic from the core
 • -> if the standby didn't have the ARP table populated, a large portion of the downstream traffic is punted for ARP resolution
• Solution: the ARP Geo-Redundancy feature

arp redundancy
 group 1
  source-interface Loopback901
  interface-list
   interface Bundle-Ether7606.2 id 2
Loadbalancing Revisited

[Figure: Access – L2VPN PE – Core]

Loadbalancing FAQ
• If packets are fragmented, L4 is omitted from the hash calculation
• IPv6 flow label addition to the hash (includes the 5-tuple; needs config; target 6.x):
 • cef load-balancing fields ipv6 flow-label
• "show cef exact-route" or "bundle-hash" for BE<x> can be used to feed in flow info and determine the actual path/member, but this is a shadow calculation that is *supposed* to be the same as the HW (see the example below)
• Mixing bundle members between Trident/Typhoon/Tomahawk can be done, but is not recommended (though the hash calculation is the same)
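A sketch of the shadow hash calculation check; the addresses and location are illustrative, and exact option keywords may vary by release:

RP/0/RSP0/CPU0:router#show cef exact-route 10.0.0.1 10.1.1.1 location 0/0/CPU0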
How Does ECMP Path or LAG Member Selection Work?

• Every packet arriving on an NPU will undergo a "hash" computation. Which fields are used depends on the encapsulation (see the overview shortly).

[Figure: L2/L3/L4 payload fields feed a CRC32 (32-bit) hash; 8 bits of the hash are selected to index 256 buckets, and the buckets are distributed over the available ECMP paths / LAG members]
ECMP Load Balancing
• A: IPv4 Unicast or IPv4 to MPLS (3)
 • No or unknown Layer 4 protocol: IP SA, DA and Router ID
 • UDP or TCP: IP SA, DA, Src Port, Dst Port and Router ID
• B: IPv4 Multicast
 • For (S,G): Source IP, Group IP, next-hop of RPF
 • For (*,G): RP address, Group IP address, next-hop of RPF
• C: MPLS to MPLS or MPLS to IPv4
 • # of labels <= 4:
  • if the inner payload is IP based: same as IPv4 unicast (A)
  • EoMPLS: the ether header follows: 4th label + RID
 • # of labels > 4: 4th label and Router ID

Bundle Load Balancing
• An L3 bundle uses the 5-tuple as in "A" (e.g. an IP-enabled routed bundle interface)
• An MPLS-enabled bundle follows "C"
• An L2 access bundle uses the access S/D-MAC + RID, OR L3 if configured (under l2vpn)
• An L2 access AC to PW over an MPLS-enabled core-facing bundle uses the PW label (not the FAT-PW label, even if configured)
• The FAT PW label is only useful for P/core routers
MPLS vs IP Based Loadbalancing
• When a labeled packet arrives on the interface, the ASR9000 advances a pointer over at most 4 labels.
• If the number of labels is <= 4 and the next nibble seen right after the bottom label is:
 • 4: default to IPv4 based balancing
 • 6: default to IPv6 based balancing
• This means that on a P router that has no knowledge of the MPLS service of the packet, that nibble can either be the IP version (in MPLS/IP) or the first nibble of the DMAC (in EoMPLS).
• RULE: if you have EoMPLS services AND MACs starting with a 4 or 6, you HAVE to use Control-Word.

[Figure: L2 | MPLS | MPLS | payload. Without CW the payload may start with 45… (an IPv4 header) or with a MAC such as 4111.0000.41-22-33, which look identical to a P router; with CW, 0000 follows the labels]

• The Control Word inserts additional zeros after the inner label, telling the P nodes to fall back to label-based balancing (a configuration sketch follows below).
• In EoMPLS, the inner label is the VC label, so load-balancing is then per-VC. A more granular spread for EoMPLS can be achieved with FAT PW (a label based on the FLOW, inserted by the PE device that owns the service).
• Take note of the knob to change the code: PW label code 0x11 (17 dec, as per the draft specification). (The IANA assignment is 0x17.)
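A minimal sketch of enabling the control word in a pw-class (the class name is illustrative; the same construct appears in the priority example later in this section):

!
l2vpn
 pw-class CW-EXAMPLE
  encapsulation mpls
   control-word
!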
Loadbalancing: ECMP vs UCMP and Polarization
• Support for equal cost and unequal cost
 • 32-way for IGP paths
 • 32-way (Typhoon) for BGP (recursive paths); 8-way on Trident
 • 64 members per LAG
• Make sure you reduce the recursiveness of routes as much as possible (static route misconfigurations…)
• All loadbalancing uses the same hash computation but looks at different bits from that hash.
• Use the hash shift knob to prevent polarization (see the sketch below).
 • Adjacent nodes compute the same hash, with little variety if the RIDs are close
 • This can result in north-bound or south-bound routing.
 • Hash shift makes the nodes look at completely different bits and provides more spread.
 • Trial and error… (4-way shift on Trident, 32-way on Typhoon; values > 5 on Trident result in modulo)
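A sketch of the hash shift knob as a global CEF configuration; treat the exact syntax and value range as release-dependent and verify on your software:

RP/0/RSP0/CPU0:router(config)#cef load-balancing algorithm adjust 3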
Hash Shift: What Does It Do?

[Figure: the 32-bit hash with two different 8-bit windows selected (e.g. shift 4 vs shift 8); each window indexes the 256 buckets differently, so neighboring nodes distribute the same flows over different ECMP paths / LAG members instead of polarizing]
Loadbalancing Knobs and What They Affect
• l2vpn loadbalancing src-dest-ip
 • For L2 bundle interfaces: egress out of the AC
 • FAT label computation on ingress from the AC towards the core
 • Note: upstream loadbalancing out of the core interface does not look at the FAT label (it is inserted after the hash is computed)
• On bundle (sub)interfaces:
 • Loadbalance on src IP, dest IP, src/dest, or a fixed hash value (tie a VLAN to a hash result)
 • Used to be available on L2transport only; now also on L3 (see the sketch below)
• GRE (no knob needed anymore)
 • Encap: based on the incoming IP header
 • Decap: based on the inner IP header
 • Transit: based on the inner payload if it is v4 or v6, otherwise based on the GRE header
 • So running MPLS over GRE will result in LB skews!
• ASR9000 PE Imposition load-balancing
behaviors PE Per-Flow load-balance configuration based
• Per-PW based on MPLS VC label (default) on L2 payload information
• Per-Flow based on L2 payload information; i.e. !
L2 DMAC / L2 SMAC, RTR ID l2vpn
load-balancing flow src-dst-mac
• Per-Flow based on L3/L4 payload information; !
i.e. L3 D_IP / L3 S_IP / L4 D_port / L4 S_port1,
RTR ID

• ASR9000 PE Disposition load-balancing PE Per-Flow load-balance configuration based


behaviors on L3/L4 payload information
• Per-Flow load-balancing based on L2 payload !
l2vpn
information; i.e. L2 DMAC / L2 SMAC (default) load-balancing flow src-dst-ip
• Per-Flow load-balancing based on L3/L4 !
payload information; i.e. L3 D_IP / L3 S_IP / L4
D_port / L4 S_port
(1) Typhoon/Tomahawk LCs required for L3&L4 hash keys. Trident LCs only
capable of using L3 keys

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 41
ASR9000 L2VPN Load-Balancing (cont.)
• ASR9000 as P router load-balancing behaviors:
 • Based on L3/L4 payload information for IP MPLS payloads (1)
 • Based on the bottom-of-stack label for non-IP MPLS payloads
• IP MPLS payloads are identified based on the version field value (right after the bottom-of-stack label):
 • Version = 4 for IPv4
 • Version = 6 for IPv6
 • Anything else is treated as non-IP
• L2VPN (PW) traffic is treated as non-IP
 • For L2VPN, the bottom-of-stack label could be the PW VC label or the Flow label (when using FAT PWs)
• PW Control-Word strongly recommended to avoid erroneous behavior on the P router when the DMAC starts with 4/6

Decision logic: is the MPLS payload IP?
 • Yes (1): select the ECMP / bundle member according to a hash operation based on L3/L4 payload information (applicable to L3VPN)
 • No: select the ECMP / bundle member according to a hash operation based on the bottom-of-stack label value (applicable to L2VPN)

[Figure: typical EoMPLS frame: RTR DA / RTR SA / E-Type (0x8847) / PSN MPLS label / PW MPLS label / PW CW (first nibble 0) / DA / SA / 802.1q tag (0x8100), C-VID / E-Type (0x0800) / IPv4 payload]

(1) MPLS-encapsulated IP packets with up to four (4) MPLS labels
ASR9000 L2VPN Load-Balancing (cont.)
• In the case of VPWS or VPLS, at the ingress PE side, it's possible to change the load-balance upstream to the MPLS core in three different ways:
 1. At the l2vpn sub-configuration mode, with the "load-balancing flow" command, with either src-dst-ip or src-dst-mac [default]
 2. At the pw-class sub-configuration mode, with the "load-balancing" command, where you can choose flow-label or pw-label
 3. At the bundle interface sub-configuration mode, with the "bundle load-balancing hash" command, with either dst-ip or src-ip
• It's important not only to understand these commands but also that 1 is weaker than 2, which is weaker than 3.
ASR9000 L2VPN Load-Balancing (cont.)
For example, if you had the configs in all 3 locations:

interface Bundle-Ether1
 bundle load-balancing hash dst-ip
!
l2vpn
 load-balancing flow src-dst-ip
 pw-class FAT
  encapsulation mpls
   control-word
   transport-mode ethernet
   load-balancing
    pw-label

• Because of the priorities, on the egress side of the ingress PE (towards the MPLS core), we will do per-dst-ip load-balance.
• If the bundle-specific configuration is removed, we will do per-VC load-balance.
• If the pw-class load-balance configuration is removed, we will do per-src-dst-ip load-balance.
Flow Aware Transport PWs
• Problem: how can LSRs load-balance traffic from flows in a PW across core ECMPs and bundle interfaces?
 • LSRs load-balance traffic based on IP header information (IP payloads) or based on the bottom-of-stack MPLS label (non-IP payloads)
 • PW traffic is handled as a non-IP payload
• RFC6391 defines a mechanism that introduces a Flow label that allows P routers to distribute flows within a PW
 • PEs push / pop the Flow label
 • P routers are not involved in any signaling / handling / manipulation of the Flow label

[Figure: EoMPLS frame without Flow label (RTR DA/SA, E-Type 0x8847, PSN MPLS label, PW MPLS label, PW CW, L2 payload) vs with Flow label (an additional Flow MPLS label between the PW label and the PW CW)]
Flow Aware Transport PWs (cont.)
• ASR9000 PE capable of negotiating (via LDP – RFC6391) the handling of PW Flow labels
• ASR9000 also capable of manually configuring imposition and disposition behaviors for PW Flow labels
• Flow label value based on L2 or L3/L4 PW payload information
• ASR9000 PE capable of load-balancing regardless of the presence of the Flow label
• Flow label aimed at assisting P routers

!
l2vpn
 pw-class sample-class-1
  encapsulation mpls
   load-balancing flow-label both
!
 pw-class sample-class-1
  encapsulation mpls
   load-balancing flow-label tx
!
 pw-class sample-class-1
  encapsulation mpls
   load-balancing flow-label rx
!

!
l2vpn
 pw-class sample-class
  encapsulation mpls
   load-balancing flow-label both static
!
[Figure: reference topology: PE1 and PE2 connected via P1–P4, carrying PW1 (Service X) and PW2 (Service Y). PE1: PE router with ECMP, with bundle and non-bundle interfaces. P1: P router with ECMP and bundle interfaces; P2: P router with ECMP and non-bundle interfaces; P3 and P4: P routers without ECMP, with bundle interfaces. PE2: PE router with a bundle interface as the PW attachment circuit (AC)]
L2VPN Load-Balancing (Per-VC LB)

[Figure: flows F1x–F4x (Service X) and F1y–F4y (Service Y) carried from PE1 to PE2 across P1–P4; with per-VC LB, all flows of a PW stay on one path/member]

• Default – ASR9000 PE with ECMP: the PE load-balances PW traffic across ECMPs based on the VC label; i.e. all traffic from a PW is assigned to one ECMP
• Default – ASR9000 PE with core-facing bundle: the PE load-balances traffic across bundle members based on the VC label; i.e. all traffic from a PW is assigned to one member
• Default – ASR9000 P with core-facing bundle: the P router load-balances traffic across bundle members based on the VC label; i.e. all traffic from a PW is assigned to one member
• Default – ASR9000 P with ECMP: the P router load-balances traffic across ECMPs based on the VC label; i.e. all traffic from a PW is assigned to one ECMP
• Default – ASR9000 PE with AC bundle: the PE load-balances traffic across bundle members based on DA/SA MAC
L2VPN Load-Balancing (L2/L3 LB)

PE L2VPN load-balancing knob:
l2vpn
 load-balancing flow {src-dst-mac | src-dst-ip}

[Figure: flows F1x–F4x now spread over PE1's ECMPs and the PE/AC bundle members (a two-stage hash process), while the P routers still keep all of a PW's traffic on one path]

• Default – ASR9000 P: PW loadbalancing based on the VC label; only one ECMP and one bundle member used for all PW traffic
• ASR9000 PE with ECMP: the PE now load-balances PW traffic across ECMPs based on L2 or L3 payload info; i.e. flows from a PW are distributed over ECMPs
• ASR9000 PE with core-facing bundle: the PE now load-balances traffic across bundle members based on L2 or L3 payload info; i.e. flows from a PW are distributed over members
• ASR9000 PE with AC bundle: the PE now load-balances traffic across bundle members based on L2 or L3 payload info
L2VPN Load-Balancing (L2/L3 LB + FAT)

PE L2VPN load-balancing knob (same as before), plus the PE FAT PW knob:
l2vpn
 pw-class sample-class
  encapsulation mpls
   load-balancing flow-label both

[Figure: flows F1x–F4x now spread at every hop: PE ECMPs/bundles and P-router ECMPs/bundles]

• ASR9000 PE: the PE now adds Flow labels based on L2 or L3 payload info
• ASR9000 PE with ECMP: the PE now load-balances PW traffic across ECMPs based on L2 or L3 payload info; i.e. flows from a PW are distributed over ECMPs
• ASR9000 PE with core-facing bundle: the PE now load-balances traffic across bundle members based on L2 or L3 payload info; i.e. flows from a PW are distributed over members
• ASR9000 P with ECMP: the P router now load-balances traffic across ECMPs based on the Flow label; i.e. flows from a PW are distributed over ECMPs
• ASR9000 P with core-facing bundle: PW loadbalancing based on the Flow label; i.e. flows from a PW are distributed over bundle members
• ASR9000 PE with AC bundle: the PE load-balances traffic across bundle members based on L2 or L3 payload info
• No new configuration required on P routers
L2VPN LB Summary
• The ASR9000 as an L2VPN PE router performs a multi-stage hash algorithm to select ECMPs / bundle members
 • User-configurable hash keys allow for the use of L2 fields or L3/L4 fields in the PW payload in order to perform load-balancing at egress imposition
• The ASR9000 (as PE) complies with RFC6391 (FAT PW) to POP/PUSH Flow labels and aid load-balancing in the core
 • PE load-balancing is performed irrespective of Flow PW label presence
 • FAT PW allows for load-balancing of PW traffic in the core WITHOUT requiring any HW/SW upgrades on the LSR
 • Cisco has prepared a draft to address the current gap of FAT PW for BGP-signaled PWs
• The ASR9000 as an L2VPN P router performs a multi-stage hash algorithm to select ECMPs / bundle members
 • Always uses the bottom-of-stack MPLS label for hashing
 • The bottom-of-stack label could be the PW VC label or the Flow (FAT) label for L2VPN
Significance of PW Control-Word

• Problem: DANGER for the LSR. The LSR will confuse the payload as IPv4 (or IPv6) and attempt to load-balance based off incorrect fields.
• Solution: add the PW Control Word in front of the PW payload. This guarantees that a zero nibble will always be present after the label stack, and thus no risk of confusion for the LSR.

[Figure: frame without CW: PSN MPLS label / PW MPLS label followed directly by a non-IP payload whose DA starts with nibble 4 (looks like IPv4); frame with CW: PSN MPLS label / PW MPLS label / PW CW starting with 0, then the non-IP payload (DA, SA, 802.1q tag 0x8100, C-VID, payload E-Type)]
Satellite ICL Bundle Load-Balancing

[Figure: ASR9k host connected to a satellite (ports 1, 2, 3 … 40) over a bundle ICL]

hash_value = (satellite_port_number + 2) % number_of_ICL_members

E.g. on a bundle ICL with 4 members, the hash value of satellite port 13 is:
(13 + 2) % 4 = 15 % 4 = 3

# of ICL members | Satellite Port | Hash
       4         |       1        |  3
       4         |       2        |  0
       4         |       3        |  1
       4         |       4        |  2
Important ASR9k MFIB Data-Structures
• FGID = Fabric Group ID
 1. The FGID index points to (slotmask, fabric-channel-mask)
 2. Slotmask, fabric-channel-mask = simple bitmaps
• MGID = Multicast Group ID, one per (S,G) or (*,G)
• 4-bit RBH
 1. Used for multicast load-balancing chip-to-chip hashing
 2. Computed by ingress NP ucode using these packet fields: IP-SA, IP-DA, Src Port, Dst Port, Router ID
• FPOE = FGID + 4-bit RBH

(S,G)                 | MGID  | FGID
10.0.0.1, 230.0.0.1   | 12345 | 0x82
10.1.1.1, 230.12.1.1  | 3456  | 0x209
172.16.1.3, 229.1.1.3 | 23451 | 0x100
Multicast Replication Model Overview
• Multicast replication in ASR9k is like an SSM tree
• 2-stage replication model:
 1. Fabric-to-LC replication
 2. Egress NP OIF replication
• ASR9k doesn't use the inferior "binary tree" or "root unary tree" replication models
Multicast Bundle Load-Balancing

[Topology: a9k with bundle BE7606 consisting of Gi0/0/1/2 and Gi0/0/1/3]

RP/0/RSP0/CPU0:a9k#sh mrib route 172.16.255.3 230.9.0.8 detail
(172.16.255.3,230.9.0.8) Ver: 0x3f71 RPF nbr: 0.0.0.0 Flags: C RPF,
  PD: Slotmask: 0x0
      MGID: 16905
  Up: 00:04:00
  Outgoing Interface List
    Bundle-Ether7606.1 (0/0/CPU0) Flags: LI, Up: 00:04:00

The following command is available from 5.3.2 and shows the selected bundle member:

RP/0/RSP0/CPU0:a9k#sh mrib platform interface bundle-ether 7606.1 detail
< ..output omitted.. >
--------------------------------------------------------------
Route OLE on Bundle-Ether7606.1 (0x620)
  Route: 0xe0000000:(172.16.255.3, 230.9.0.8)/32
  UL Interface: GigabitEthernet0/0/1/2 (0x4000140)
  Bundle Member: GigabitEthernet0/0/1/2 (0x4000140)
  Raw hash: 0x1977fd33
  Intrf Handle: 0x5000147b
  Entry Handle: 0x50001493
--------------------------------------------------------------
Troubleshooting Packet Forwarding
Update and recap
Input Drops Troubleshooting

GigabitEthernet0/0/1/6.1 is up, line protocol is up
<..output omitted..>
307793 packets input, 313561308 bytes, 227987 total input drops

Troubleshooting this? Piece of cake starting with IOS XR 5.3.3!!! 😀

New packet drops troubleshooting tools in IOS XR 5.3.3 and later:
• Per-uidb drop counter monitoring: "monitor np interface"
• Pervasive dropped packet capture: "show controller np capture"
• Available on both Tomahawk and Typhoon
• "monitor np counter" still available for all other counters
Monitor NP Interface
• Part of the stats memory is carved out for per-uidb drop counters
• UIDB == µIDB == Micro Interface Descriptor Block
 • The NP's view of an interface
• Only one uidb at a time per LC can be monitored
• Drop counters that are updated for the selected uidb are not updated in the global stats memory

[Figure: NP complex: multi-core forwarding chip with FIB lookup memory/TCAM, MAC memory, frame memory and stats memory; the per-uidb drop counters are carved out of the global stats memory]
Interface µIDB Info

RP/0/RSP0/CPU0:our9001#sh uidb data location 0/0/CPU0 g0/0/1/6.1 ingress | e 0x0
Location = 0/0/CPU0
Index = 35
INGRESS table
------------
Layer 3
------------
Status                   0x1
Ifhandle                 0x40009c0
Index                    0x23
Stats Pointer            0x5303ce
IPV4 ACL Enable          0x1
IPV4 ACL ID              0x10
IPV4 Mcast Enable        0x1
MPLS Enable              0x1
IPV4 Enable              0x1
IPV4 ICMP Punt           0x1
mpls Racetrack Eligible  0x1

(uidb tables exist per direction: ingress, ing-extension, egress, extension)

The corresponding interface configuration:

interface GigabitEthernet0/0/1/6.1
 ipv4 address 172.18.0.1 255.255.255.0
 encapsulation dot1q 901
 ipv4 access-group CL16 ingress
!
mpls ldp
 router-id 172.16.255.1
 interface GigabitEthernet0/0/1/6.1
  address-family ipv4
Monitor NP Interface
RP/0/RSP0/CPU0:our9001#monitor np interface g0/0/1/6.1 count 2 time 1 location 0/0/CPU0
Monitor NP counters of GigabitEthernet0_0_1_6.1 for 2 sec

<..output omitted..>
**** Sun Jan 31 22:14:32 2016 ****

Monitor 2 non-zero NP1 counters: GigabitEthernet0_0_1_6.1


Offset Counter FrameValue Rate (pps)
-------------------------------------------------------------------------------
262 RSV_DROP_MPLS_LEAF_NO_MATCH_MONITOR 101 49
1307 PARSE_DROP_IPV4_CHECKSUM_ERROR_MONITOR 101 50

(Count 2 of 2)
RP/0/RSP0/CPU0:our9001#

Monitor NP Interface
• Counters are reported with a '_MONITOR' suffix
 • These counters are not added to the global NP counters
• By default runs one capture during 5 seconds (count and time are configurable)
• One session at a time per LC
• Supports physical and BE (sub)interfaces
 • Physical (sub)interface: monitoring runs on the NP that hosts the interface
 • BE (sub)interface: monitoring runs on all NPs that host the members
• Applicable only to ucode stages where the uidb is known
• Works perfectly for input drops troubleshooting and for some output drops
Pervasive Capture of Dropped Packets
• Dedicated pool of buffer pointers
• Instead of dropping the packet, the first RFD buffer of a packet is enqueued into a dedicated pool

[Figure: NP pipeline: TOP Parse -> TOP Search -> TOP Resolve -> TOP Modify, with the Traffic Manager (WRED), ICFDQ and FIA input/output stages; at each drop decision point the first buffer of the dropped packet is steered into the dedicated capture buffer pool]
Show Controllers NP Capture
RP/0/RSP0/CPU0:our9001#sh controllers np capture np1 location 0/0/CPU0

NP1 capture buffer has seen 426268 packets - displaying 32

Sun Jan 31 22:55:13.935 : RSV_DROP_MPLS_LEAF_NO_MATCH


From GigabitEthernet0_0_1_6: 1222 byte packet on NP1
0000: 84 78 ac 78 ca 3e 30 f7 0d f8 af 81 81 00 03 85
0010: 88 47 05 dc 11 ff 45 00 00 64 01 ae 00 00 ff 01
0020: 62 c3 ac 12 00 02 ac 10 ff 02 00 00 02 3a 00 0a
<..output omitted..>
RP/0/RSP0/CPU0:our9001#sh controllers np capture np1 help location 0/0/CPU0

NP1 Status Capture Counter Name


---------------------+------------------------------
Capturing PARSE_UNKNOWN_DIR_DROP
Capturing PARSE_UNKNOWN_DIR_1
<…output omitted..>
RP/0/RSP0/CPU0:our9001#sh controllers np capture np1 filter RSV_DROP_MPLS_LEAF_NO_MATCH disable location 0/0/CPU0

Disable NP1 packet capture for: RSV_DROP_MPLS_LEAF_NO_MATCH

Show Controllers NP Capture
• A circular buffer captures the recently dropped packets
 • Tomahawk: 128 buffers
 • Typhoon: 32 buffers
• Enabled by default – no configuration required!
• Works at port-level
• The L2 encapsulation is included in the dump -> you can figure out the sub-interface if you decode the packet dump
• In case of packets spanning more than one buffer, only the first buffer is captured
• Filtering is supported – you can select which drop reasons not to capture
 • Run the 'help' option to see the eligible counters and their status
Show Controllers NP Capture – Next Steps

The captured packet, decoded:

Ethernet II, Src: 30:f7:0d:f8:af:81, Dst: 84:78:ac:78:ca:3e
 Type: 802.1Q Virtual LAN (0x8100)
802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 901
 Type: MPLS label switched packet (0x8847)
MultiProtocol Label Switching Header, Label: 24001, Exp: 0, S: 1, TTL: 255
 MPLS Label: 24001
 MPLS Experimental Bits: 0
 MPLS Bottom Of Label Stack: 1
 MPLS TTL: 255
Internet Protocol, Src: 172.18.0.2 (172.18.0.2), Dst: 172.16.255.2 (172.16.255.2)
Internet Control Message Protocol
 Type: 0 (Echo (ping) reply)
 Code: 0 ()

The VLAN ID maps to the sub-interface:

interface GigabitEthernet0/0/1/6.1
 ipv4 address 172.18.0.1 255.255.255.0
 encapsulation dot1q 901
!
Troubleshooting Input Drops – Next Steps

The decoded drop capture (same packet as on the previous page) arrived on GigabitEthernet0/0/1/6.1 (dot1q 901) with MPLS label 24001. Verify the label:

RP/0/RSP0/CPU0:our9001#sh mpls forwarding labels 24001

RP/0/RSP0/CPU0:our9001#sh mpls ldp bindings local-label 24001

RP/0/RSP0/CPU0:our9001#sh mpls ldp bindings 172.16.255.2/32
172.16.255.2/32, rev 48
        Local binding: label: 24010
        Remote bindings: (1 peers)
            Peer                Label
            -----------------   ---------
            172.16.255.3:0      23
RP/0/RSP0/CPU0:our9001#

The local binding shows that label 24010 should be used. Drop reason: the upstream peer is sending packets with a wrong MPLS label.
Monitor NP Counter
• Available since 4.3.x
• An ACL with the 'capture' keyword can be used to filter the packets you want to match
 • An IPv4/v6 ACL can still be used for matching if the MPLS stack is one label deep
• All captured packets are dropped!!!
• An NP reset is required upon capture completion
 • ~50ms traffic outage on Typhoon, ~150ms on Tomahawk
Monitor NP Counter

The ACL and interface configuration used for the capture:

ipv4 access-list CL16
 10 permit icmp host 172.18.0.2 host 172.16.255.2 capture
!
interface GigabitEthernet0/0/1/6.1
 ipv4 access-group CL16 ingress

RP/0/RSP0/CPU0:our9001#monitor np counter ACL_CAPTURE_NO_SPAN.1 np1 location 0/0/CPU0

Warning: Every packet captured will be dropped! If you use the 'count'
option to capture multiple protocol packets, this could disrupt
protocol sessions (eg, OSPF session flap). So if capturing protocol
packets, capture only 1 at a time.

Additional packets might be dropped in the background during the
capture; up to 1 second in the worst case scenario. In most cases
only the captured packets are dropped.

Warning: A mandatory NP reset will be done after monitor to clean up.
This will cause ~50ms traffic outage. Links will stay Up.
Proceed y/n [y] >
Monitor ACL_CAPTURE_NO_SPAN.1 on NP1 ... (Ctrl-C to quit)

<…output omitted..>

Cleanup: Confirm NP reset now (~50ms traffic outage).
Ready? [enter] >
RP/0/RSP0/CPU0:our9001#

RP/0/RSP0/CPU0:our9001#sh controllers np counters np1 location 0/0/CPU0 | i SPAN
483 ACL_CAPTURE_NO_SPAN  14859  3
"Show drops all" Enhancement
• Supported starting with 5.3.0
• Uses a 'grammar' file to combine the outputs of other show commands
 • An easy way to achieve a combined view of relevant aspects (drops are the most obvious use case)
• Grammar file:
 • Can be modified to suit particular troubleshooting tasks
 • The system will look for it at two locations:
  1. disk0a:/usr/packet_drops.list
  2. /pkg/etc/packet_drops.list (default)
• "show drops all commands" shows the constituent commands that will be called for parsing into the final output
“Show drops all” sample output (1)
RP/0/RP0/CPU0:ios#sh drops all commands
Wed Feb 4 05:27:40.915 UTC
Module CLI
[arp] show arp traffic
[cef] show cef drops
[fabric] show controllers fabric fia drops egress
[fabric] show controllers fabric fia drops ingress
[lpts] show lpts pifib hardware entry statistics
[lpts] show lpts pifib hardware police
[lpts] show lpts pifib hardware static-police
[netio] show netio drops
[netio] fwd_netio_debug
[niantic-driver] show controllers dmac client punt statistics
[np] show controller np counters
[np] show controllers np tm counters all
[spp] show spp node-counters
[spp] show spp client detail
[spp] show spp ioctrl

“Show drops all” sample output (2)
RP/0/RP0/CPU0:ios#sh drops all location 0/5/CPU0
Wed Feb 4 05:26:30.192 UTC

=====================================
Checking for drops on 0/5/CPU0
=====================================

show cef drops:


[cef:0/5/CPU0] Discard drops packets : 5

show controllers fabric fia drops ingress:


[fabric:FIA-0] sp0 crc err: 2746653
[fabric:FIA-0] sp0 bad align: 663
[fabric:FIA-0] sp0 bad code: 2
[fabric:FIA-0] sp0 align fail: 101
<snip>
[fabric:FIA-3] sp1 prot err: 150577

show controller np counters:


[np:NP0] MODIFY_PUNT_REASON_MISS_DROP: 1
[np:NP1] MODIFY_PUNT_REASON_MISS_DROP: 1
[np:NP2] MODIFY_PUNT_REASON_MISS_DROP: 1
[np:NP3] PARSE_ING_DISCARD: 5
[np:NP3] PARSE_DROP_IN_UIDB_DOWN: 5
[np:NP3] MODIFY_PUNT_REASON_MISS_DROP: 1

New CLI to Dump the MAC Table
• The MAC address table is distributed
 • Located in NP memory
 • NPs exchange updates to keep the tables in sync
• Pulling the MAC table using the existing CLI can be slow
 • The LC CPU periodically reads the MAC table in NP memory and updates its cache
 • If the cache wasn't updated recently, the show command initiates a read from the NP -> slow!
• A new CLI was introduced to dump the MAC table directly from NP memory
 • Much faster dump of MAC table elements
 • A full dump of a large MAC table may be as slow as the existing CLI
New CLI To Dump The MAC Table
sh l2vpn forwarding platform bridge-domain [<name>] mac-address [<mac-address>|np-id <np>|
xconnect <id>] location <location>

RP/0/RSP0/CPU0:a9k#sh l2vpn forwarding platform bridge-domain av1 mac-address np-id np1


location 0/0/CPU0
Mac Address Learned from/Filtered on
----------------------------------------
0000.0c9f.f016 BVI, bd-id: 0
8478.ac7a.ba75 BVI, bd-id: 0
30f7.0df8.af81 Gi0/0/1/6.22 (XID: 0x01080001), bd-id: 0
001d.7025.8604 BE100.22 (XID: 0xa0000005), bd-id: 0

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 74
Troubleshooting NP Forwarding
1. Identify the interface in question.
2. Identify the mapping from interface to NPU (see the sketch below).
3. Examine the NP counters.
4. Look for rate counters that match the lost traffic rate.
 • If none of the counters match the expected traffic, check for drops at the interface controller.
5. Look up the counter description.
6. If required, capture the packets hitting the counter (Typhoon/Tomahawk only).
 • If troubleshooting drops, use the new tools "monitor np interface" and "show controller np capture".
7. If packets are forwarded to the fabric, run the fabric troubleshooting steps.
8. Identify the egress NP and repeat steps 3 to 6.
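A minimal sketch for steps 2 and 3, assuming location 0/0/CPU0 (first the interface-to-NP mapping, then the counters of the NP that hosts the interface):

RP/0/RSP0/CPU0:router#show controllers np ports all location 0/0/CPU0
RP/0/RSP0/CPU0:router#show controllers np counters np0 location 0/0/CPU0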
eXR on ASR9000
Introduction
VMs (Virtual Machines)
• Each VM runs a full copy of an operating system and also has a virtual copy of all the hardware that the operating system needs to run.
• That means VMs require a lot of RAM and CPU cycles, and thus are heavyweight.
• Building VMs can take minutes to hours.
• Launching VMs can take minutes.

Linux Containers
• A container requires only enough of an operating system, supporting programs and libraries, and system resources to run a specific program.
• Containers are isolated but share the OS and bins/libraries.
• Containers are lightweight in comparison to VMs, which enables installing/loading two to three times as many applications on a single server with containers as with VMs.
• Building a container can take minutes.
• Launching containers can take sub-seconds.
Introduction … Continued

VMs (Virtual Machines)
• Traditional VMs are very large in size, which makes them impractical to store and transfer.
• As an example, if we have a 1G image and we want to run full VMs, then we need 1G times the number of desired VMs.
• But with VMs we get better isolation.
• With VMs we get better security.

Linux Containers
• In comparison, multiple containers can share the bulk of that 1G by sharing the same OS, and thus provide size and transfer efficiency.
• Isolation is poor compared to VMs.
• Security is an issue with containers, as there are a lot of important Linux kernel subsystems outside the container.
Pictorial Illustration

[Pictorial illustration of VMs vs. containers; picture taken from Opensource.com]

Carrier Delay:
• We recommend configuring "carrier-delay up 10000" and "carrier-delay down 10000" on all interfaces which have redundant members in the bundle.
• On a bundle with one member, please configure this CLI as well.

Service Pack Activation:
• We strongly recommend loading the latest SP at all times; the current latest is SP2 on 5.3.3. (Please make sure that your must-have fixes are part of the SP being loaded.)
VMs Adaptation (Spirit XR, i.e. NCS 6k)
• Achieving ZPL, ZTL and HA during ISSU

The system is configured for multiple levels of redundancy. These are driven by the three main requirements for ISSU: ZPL (Zero Packet Loss), ZTL (Zero Topology Loss) and HA during ISSU.

• For ZPL and ZTL, we have redundancy in the line card resources, and these are carved into two parts: one for the current software version and one for the new software version. These resources include the amount of memory for storing prefixes in the FIB and the resources necessary for supporting features such as ACLs and QoS, in the form of TCAM memories.
VMs Adaptation (Spirit XR, i.e. NCS 6k) …contd
• For ISSU with HA, we require the ability to create multiple VMs on each single node. This implies the availability of partitions to host the new VM corresponding to the new software version, and memory for the new VM to run.
• The Sysadmin VM on each node provides a block of each of these resources and leaves it up to XR to decide how they are partitioned and used. This usage pattern will dictate whether or not ISSU with HA is feasible.
LXC Adaptation (Spirit XR vs eXR)

Summary:
The NCS6k FCSed with 48G of memory on the RP and 24G on the LCs, and we reserved half of the memory on both the RP and the LCs for the V2 VM during ISSU; that is how we were able to achieve ZPL, ZTL and HA during ISSU.

But for platforms that are not that resource-rich, this is a significant price to pay if we go with the VM approach.

Hence an engineering decision was made to adopt Linux Containers instead of VMs in eXR, which will ship in Release 6.1.1 on the ASR-9k.
What is eXR
Detour – what is eXR?
• Enables Spirit to run on lower-end platforms
 – Enables Spirit on legacy hardware with no virtualization support.
 – Enables hybrid 32-bit and 64-bit nodes.
 – Target platforms are ASR9k, Sunstone, Skywarp.
• Linux containers instead of KVMs
 – Calvados and XR run in separate containers.
• Leverage and enhance Spirit ISSU capabilities
 – Almost identical ISSU semantics/capabilities.
• Platform chooses containers or KVMs.
What is eXR … Continued
Open Embedded Linux kernel using Yocto
• Lots of Linux distributions out there, but none that are just right.
 – We cobble together the OS/libraries that meet our needs.
 – What we cobble together may not always work well together.
• Yocto helps create custom Linux OS distributions.
• Yocto provides templates and tools per CPU to create a custom OS.
• Also works with commercial Linux vendors.
What is eXR … Continued
3rd party application hosting
• Allow non-Cisco applications to be hosted on the router.
• Provide application life cycle management.
• Provide a sandbox for the application to pull in its dependencies, such as libraries.
• Docker appears to be the way to go.
 – Helps build applications with any tool chain.
 – A Docker-ized application can then be shipped and run anywhere.
What is eXR … Continued
Cohesive Linux and XR networking stack
• Minimal state in the kernel
 – Only populate state that is absolutely required in the kernel.
• Minimize changes to the kernel
• Network topology-agnostic framework
 – 3rd party applications must be able to function over any network topology supported by XR.
• Minimal route state in the kernel
 – The Linux kernel's routing feature set is primitive: targeted for a host, not a router.
 – Packet TX/RX via the XR FIB.
 – Leverage hundreds of XR forwarding features.
What is eXR… Continued
Modular feature delivery
• Enable modular delivery of XR features.
 Example: shipping the BGP pie only.
• Reduce number of kernel packages in NG XR images.
• Today's feature packs are, in reality, per-customer lineups.
 This does not scale.
 Integrate AU + Chef and demo on Spirit/NCS6k.
• Asynchronous releases of XR platforms - define granularity.
 Too granular will be a pain.
 API versioning, automation and testing – define governance model.
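
With modular delivery, pulling in a single feature could look like the standard IOS XR
install workflow sketched below (the package name and TFTP path are purely
illustrative):

  install add source tftp://192.0.2.1/ asr9k-bgp-x64-1.0.0.0-r611.x86_64.rpm
  install activate asr9k-bgp-x64-1.0.0.0-r611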

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 87
Usability
IOS-XR Usability Initiative Progress
 Number of usability-related features / featurettes delivered and planned:

Release:      5.1.x / 5.2.x    5.3.0    5.3.1    5.3.2
Delivered:        183            18       43       20

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 89
A few usability highlights
• Input Drops Troubleshooting

• Global Configuration Replace – Ever wanted to quickly move interface configuration from one port to another? This
new feature allows quick customization of the router configuration by match-and-replace based on interface names
and/or regular expressions (see the details later in this section)

• Non-interactive EXEC commands – Ever wanted to initiate a router reload without being asked for confirmation? A new
global knob has been introduced to remove user interaction with the parser

• BGP advertised prefix count statistics – A new knob provides access to advertised count stats (something you could
do easily in IOS but not in IOS XR)

• OSPF post-reload protocol shutdown – A new knob that keeps OSPF in shutdown state after a node reload

• Interactive Rollback operations – Ever issued the wrong rollback ID by mistake? A new knob asks for user
confirmation before committing the rollback

• CLI / XML serviceability enhancements to several platform-dependent commands such as “show controllers” and
“show hw-module fpd”

• And many more …
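
For reference, the rollback operation that the new confirmation knob guards is the
long-standing IOS XR command (shown as-is; the interactive prompt is the new part):

  RP/0/RSP0/CPU0:router#rollback configuration last 1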

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 90
Atomic Configuration Replace Feature
Operational and Automation Enhancements
Atomic Configuration Replace – Problem Statement

1 – Original Configuration

RP/0/RSP0/CPU0:pE2#sh run int gigabitEthernet 0/0/0/19
Mon Feb 16 13:25:11.142 UTC
interface GigabitEthernet0/0/0/19
 description ***To 7604-2 2/12***
 cdp
 ipv4 address 13.3.6.6 255.255.255.0
 negotiation auto
 load-interval 30
!

2 – Target Configuration

RP/0/RSP0/CPU0:pE2(config)#interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:pE2(config-if)# no description ***To 7604-2 2/12***
RP/0/RSP0/CPU0:pE2(config-if)# no cdp
RP/0/RSP0/CPU0:pE2(config-if)# no ipv4 address 13.3.6.6 255.255.255.0
RP/0/RSP0/CPU0:pE2(config-if)# no negotiation auto
RP/0/RSP0/CPU0:pE2(config-if)# no load-interval 30
RP/0/RSP0/CPU0:pE2(config-if)# ipv6 address 2603:10b0:100:10::31/126

Example: Consider an interface with a target config where all config lines are new.

Problem Statement: It is operationally challenging to expect prior knowledge of the
existing config in order to manually remove unwanted items.

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 92
Operational and Automation Enhancements
Atomic Configuration Replace – Current Behavior: what about using “no interface”?

1 – Original Configuration

RP/0/RSP0/CPU0:pE2#sh run int gigabitEthernet 0/0/0/19
Mon Feb 16 13:25:11.142 UTC
interface GigabitEthernet0/0/0/19
 description ***To 7604-2 2/12***
 cdp
 ipv4 address 13.3.6.6 255.255.255.0
 negotiation auto
 load-interval 30
!

2 – Target Configuration

RP/0/RSP0/CPU0:pE2(config)#no interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:pE2(config)#interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:pE2(config-if)# description ***TEST-after-change***
RP/0/RSP0/CPU0:pE2(config-if)# ipv4 address 13.3.6.6 255.255.255.0
RP/0/RSP0/CPU0:pE2(config-if)# ipv6 address 2603:10b0:100:10::31/126
RP/0/RSP0/CPU0:pE2(config-if)# negotiation auto
RP/0/RSP0/CPU0:pE2(config-if)# load-interval 60
RP/0/RSP0/CPU0:pE2(config-if)# commit

3 – Committed Configuration

RP/0/RSP0/CPU0:pE2#show configuration commit changes last 1
Mon Feb 16 13:33:25.972 UTC
Building configuration...
!! IOS XR Configuration 5.1.2
interface GigabitEthernet0/0/0/19
 no description ***To 7604-2 2/12***
 description ***TEST-after-change***
!
no interface GigabitEthernet0/0/0/19
interface GigabitEthernet0/0/0/19
 ipv4 address 13.3.6.6 255.255.255.0
 ipv6 address 2603:10b0:100:10::31/126
 no negotiation auto
 negotiation auto
 no load-interval 30
 load-interval 60
!
end

Example: Consider an interface with a new target config where some config lines are
untouched and the rest are either deleted, changed or added.

CURRENT Behavior: When issuing “no” on the interface config submode, the entire
interface config is destroyed, to later be re-created. This causes unnecessary
interface flaps.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 93
Operational and Automation Enhancements
Atomic Configuration Replace – NEW Behavior

1 – Original Configuration

RP/0/RSP0/CPU0:PE1#sh run int gigabitEthernet 0/0/0/19
Mon Feb 16 13:00:32.153 UTC
interface GigabitEthernet0/0/0/19
 description ***AAABBBCCC***
 cdp
 ipv4 address 13.3.5.5 255.255.255.0
 negotiation auto
 shutdown
 load-interval 30
!

2 – Target Configuration

RP/0/RSP0/CPU0:PE1(config)#no interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:PE1(config)#interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:PE1(config-if)# ipv6 address 2603:10b0:100:10::21/126
RP/0/RSP0/CPU0:pE1(config-if)# commit

3 – Committed Configuration

RP/0/RSP0/CPU0:PE1#show configuration commit changes last 1
Mon Feb 16 13:15:36.655 UTC
Building configuration...
!! IOS XR Configuration 5.1.2
interface GigabitEthernet0/0/0/19
 no description ***AAABBBCCC***
 no cdp
 no ipv4 address 13.3.5.5 255.255.255.0
 ipv6 address 2603:10b0:100:10::21/126
 no negotiation auto
 no shutdown
 no load-interval 30
!
end

Example: Consider an interface with a target config where all config lines are new.

NEW Behavior: When issuing “no” on the interface config, the system does not
destroy the subtree but instead performs a SET of new config and a REMOVE of
unwanted config lines.

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 94
Operational and Automation Enhancements
Atomic Configuration Replace – NEW Behavior

1 – Original Configuration

RP/0/RSP0/CPU0:PE1#sh run int gigabitEthernet 0/0/0/19
Mon Feb 16 13:00:32.153 UTC
interface GigabitEthernet0/0/0/19
 description ***AAABBBCCC***
 cdp
 ipv4 address 13.3.5.5 255.255.255.0
 negotiation auto
 load-interval 30
!

2 – Target Configuration

RP/0/RSP0/CPU0:PE1(config)#no interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:PE1(config)#interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:PE1(config-if)# description ***TEST-after-change***
RP/0/RSP0/CPU0:PE1(config-if)# ipv4 address 13.3.5.5 255.255.255.0
RP/0/RSP0/CPU0:PE1(config-if)# ipv6 address 2603:10b0:100:10::21/126
RP/0/RSP0/CPU0:PE1(config-if)# negotiation auto
RP/0/RSP0/CPU0:PE1(config-if)# load-interval 60
RP/0/RSP0/CPU0:pE1(config-if)# commit

3 – Committed Configuration

RP/0/RSP0/CPU0:PE1#show configuration commit changes last 1
Mon Feb 16 13:15:36.655 UTC
Building configuration...
!! IOS XR Configuration 5.1.2
interface GigabitEthernet0/0/0/19
 description ***TEST-after-change***
 no cdp
 ipv6 address 2603:10b0:100:10::21/126
 load-interval 60
!
end

Example: Consider an interface with a new target config where some config lines are
untouched and the rest are either deleted, changed or added.

NEW Behavior: When issuing “no” on the interface config, the system does not
destroy the subtree but instead performs a SET of new config and a REMOVE of
unwanted config lines. Only the diffs (changes, removals, additions) are applied.
No interface flaps.

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 95
Operational and Automation Enhancements
Atomic Configuration Replace – What about other config submodes?

1 – Original Configuration

RP/0/RSP0/CPU0:PE1#sh run router bgp 100 neighbor-group NG-test
Tue Mar 3 09:02:34.728 UTC
router bgp 100
 neighbor-group NG-test
  remote-as 100
  description *** TEST description ***
  update-source Loopback0
  address-family l2vpn evpn
!

2 – Target Configuration

RP/0/RSP0/CPU0:PE1(config)#router bgp 100
RP/0/RSP0/CPU0:PE1(config-bgp)#no neighbor-group NG-test
RP/0/RSP0/CPU0:PE1(config-bgp)#neighbor-group NG-test
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#remote-as 100
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#description *** NEW NEW ***
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#update-source loopback 0
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#address-family l2vpn evpn
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp-af)#exit
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#address-family vpnv4 unicast
RP/0/RSP0/CPU0:PE1(config-bgp-nbrgrp)#commit

3 – Committed Configuration

RP/0/RSP0/CPU0:PE1#show configuration commit changes last 1
Tue Mar 3 09:05:19.008 UTC
Building configuration...
!! IOS XR Configuration 5.1.2
router bgp 100
 neighbor-group NG-test
  description *** NEW NEW ***
  address-family vpnv4 unicast
 !
!
!
end

Example: Consider a BGP neighbor-group with a new target config where some
config lines are untouched and the rest are either changes or additions.

EXISTING Behavior: When issuing “no” on the neighbor-group config, the system
does not destroy the subtree but instead performs a SET of new config and a
REMOVE of unwanted config lines. Only the diffs (changes, removals, additions)
are applied.

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 96
Global Configuration Replace Feature
Operational and Automation Enhancements
Global Configuration Replace

• Description / Use Case
 • Easy manipulation of router configuration, e.g. moving configuration blocks
   around. Want to move interface config around? Want to change all repetitions
   of a given pattern?
 • Available since 5.3.2 (CSCte81345)

[Figure: the same config block - description “my_uplink”, ipv4 address x.x.x.x,
load-interval 30 - moved wholesale from Interface X to Interface Y]

• Configuration / Example

replace interface <int> with <int> [dry-run]
replace pattern <regex_1> with <regex_2> [dry-run]

replace interface gigabitEthernet 0/0/0/0 with loopback 450
replace pattern '10\.20\.30\.40' with '100.200.250.225'
replace pattern 'GigabitEthernet0/1/0/([0-4])' with 'TenGigE0/3/0/\1'
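
Both forms also accept the dry-run keyword shown in the syntax above, which lets
you preview the resulting diff before anything is applied, e.g.:

  replace pattern 'GigabitEthernet0/1/0/([0-4])' with 'TenGigE0/3/0/\1' dry-run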

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 98
Operational and Automation Enhancements
Global Configuration Replace – Ex. 1

Original Running Configuration:

interface GigabitEthernet0/0/0/0
 description first
 ipv4 address 10.20.30.40 255.255.0.0
 shutdown
!
router ospf 10
 cost 100
 area 200
  cost 200
  interface GigabitEthernet0/0/0/0
   transmit-delay 5

Replace operation and resulting diff:

RP/0/0/CPU0:fella(config)#replace interface gigabitEthernet 0/0/0/0 with loopback 450
Building configuration...
Loading.
232 bytes parsed in 1 sec (230)bytes/sec

RP/0/0/CPU0:fella(config-ospf-ar-if)#show configuration
Wed Feb 25 18:27:16.110 PST
Building configuration...
!! IOS XR Configuration 0.0.0
interface Loopback450
 description first
 ipv4 address 10.20.30.40 255.255.0.0
 shutdown
!
no interface GigabitEthernet0/0/0/0
router ospf 10
 area 200
  interface Loopback450
   transmit-delay 5
  !
  no interface GigabitEthernet0/0/0/0

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 99
Operational and Automation Enhancements
Global Configuration Replace – Ex. 2

Original Running Configuration:

ipv4 access-list mylist
 10 permit tcp 10.20.30.40/16 host 1.2.4.5
 20 deny ipv4 any 1.2.3.6/16
!
interface GigabitEthernet0/0/0/0
 description first
 ipv4 address 10.20.30.40 255.255.0.0
 shutdown
!
interface GigabitEthernet0/0/0/2
 description 10.20.30.40
 shutdown
!
interface GigabitEthernet0/0/0/3
 description 1020304050607080
 shutdown
!
interface GigabitEthernet0/0/0/4
 description 1.2.3.4.5.6.7.8
 shutdown
!
route-policy temp
 if ospf-area is 10.20.30.40 or source in (2.3.4.5/20) then
  pass
 endif
end-policy
!

Replace operation and resulting diff:

RP/0/0/CPU0:fella(config)#replace pattern '10\.20\.30\.40' with '100.200.250.225'
Building configuration...
Loading.
434 bytes parsed in 1 sec (430)bytes/sec

RP/0/0/CPU0:fella(config)#show configuration
Thu Feb 26 09:00:11.180 PST
Building configuration...
!! IOS XR Configuration 0.0.0
ipv4 access-list mylist
 no 10
 10 permit tcp 100.200.250.225/16 host 1.2.4.5
!
interface GigabitEthernet0/0/0/0
 no ipv4 address 10.20.30.40 255.255.0.0
 ipv4 address 100.200.250.225 255.255.0.0
!
interface GigabitEthernet0/0/0/2
 no description
 description 100.200.250.225
!
route-policy temp
 if ospf-area is 100.200.250.225 or source in (2.3.4.5/20) then
  pass
 endif
end-policy

Note that only exact matches of the escaped pattern are rewritten: the descriptions
“1020304050607080” and “1.2.3.4.5.6.7.8” are left untouched.

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 100
Operational and Automation Enhancements
Global Configuration Replace – Ex. 3

Original Running Configuration:

interface GigabitEthernet0/1/0/0
 ipv4 address 20.0.0.10 255.255.0.0
!
interface GigabitEthernet0/1/0/1
 ipv4 address 21.0.0.11 255.255.0.0
!
interface GigabitEthernet0/1/0/2
 ipv4 address 22.0.0.12 255.255.0.0
!
interface GigabitEthernet0/1/0/3
 ipv4 address 23.0.0.13 255.255.0.0
!
interface GigabitEthernet0/1/0/4
 ipv4 address 24.0.0.14 255.255.0.0
!
interface TenGigE0/3/0/0
 shutdown
!
interface TenGigE0/3/0/1
 shutdown
!
interface TenGigE0/3/0/2
 shutdown
!
interface TenGigE0/3/0/3
 shutdown
!
interface TenGigE0/3/0/4
 shutdown
!
interface TenGigE0/3/0/5
 shutdown
!
interface TenGigE0/3/0/6
 shutdown
!
end

Replace operation and resulting diff:

RP/0/0/CPU0:ios(config)#replace pattern 'GigabitEthernet0/1/0/([0-4])' with 'TenGigE0/3/0/\1'
Building configuration...
Loading.
485 bytes parsed in 1 sec (482)bytes/sec

RP/0/0/CPU0:ios(config-if)#show configuration
Fri Feb 27 16:52:56.549 PST
Building configuration...
!! IOS XR Configuration 0.0.0
no interface GigabitEthernet0/1/0/0
no interface GigabitEthernet0/1/0/1
no interface GigabitEthernet0/1/0/2
no interface GigabitEthernet0/1/0/3
no interface GigabitEthernet0/1/0/4
interface TenGigE0/3/0/0
 ipv4 address 20.0.0.10 255.255.0.0
!
interface TenGigE0/3/0/1
 ipv4 address 21.0.0.11 255.255.0.0
!
interface TenGigE0/3/0/2
 ipv4 address 22.0.0.12 255.255.0.0
!
interface TenGigE0/3/0/3
 ipv4 address 23.0.0.13 255.255.0.0
!
interface TenGigE0/3/0/4
 ipv4 address 24.0.0.14 255.255.0.0

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 101
FPD Upgrade Improvements

 Parallel upgrades across linecards
 Newly inserted linecards get upgraded
 Parallel upgrades across multiple components on a linecard

 Faster and easier upgrade times! (See the command sketch below.)
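
A quick sketch of the operational side (long-standing IOS XR admin-mode commands;
exact syntax varies slightly by platform and release):

  ! Check current FPD images and whether any need an upgrade
  show hw-module fpd location all

  ! From admin mode, upgrade all upgradable FPDs on all locations
  admin
  upgrade hw-module fpd all location all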

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 103
RP2 upgrade time in 6.0.1
FPD               5.3.3           6.0.1 (nanospin_ns)
CBC0              2 min 49 sec    3 min 15 sec
FSBL              15 sec          6 sec
LNXFW             4 min 25 sec    29 sec
HW.FPD (FPGA2)    8 min 38 sec    51 sec
ALPHA (FPGA3)     6 min 6 sec     35 sec
OMEGA (FPGA4)     6 min 07 sec    36 sec
OPTIMUS (FPGA5)   6 min 07 sec    42 sec
ROMMON            3 min 25 sec    4 min 26 sec
CHA (FPGA6)       9 min 32 sec    8 min 36 sec
CBC1              3 min 04 sec    3 min 32 sec
Total             47 min 39 sec   19 min 53 sec

* The total upgrade time does not include CBC0 and CBC1: the CBCs are upgraded in
parallel, so their time falls within the CBC upgrade time slice.

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 104
8x100 upgrade time in 6.0.1
FPD               5.3.3           6.0.1 (nanospin_ns)
CBC               2 min 35 sec    2 min 35 sec
ROMMON            3 min 25 sec    2 min 15 sec
HW.FPD (FPGA2)    8 min 34 sec    1 min 6 sec
FSBL              16 sec          6 sec
LNXFW             4 min 28 sec    37 sec
MELDUN0 (FPGA3)   6 min 09 sec    46 sec
MELDUN1 (FPGA3)   12 min 16 sec   49 sec
DALLA (FPGA4)     6 min 9 sec     47 sec
Total             43 min 45 sec   9 min 1 sec

* The total upgrade time does not include the CBC: the CBC is upgraded in parallel,
so its time falls within the CBC upgrade time slice.

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 105
FC2 upgrade time in 6.0.1
FPD               5.3.3           6.0.1 (nanospin_ns)
CBC0 (parallel)   5 min 25 sec    5 min 45 sec
FCFSBL            15 sec          7 sec
FCLNXFW           3 min 29 sec    26 sec
HW.FPD (FPGA8)    6 min 35 sec    46 sec
Total             10 min 4 sec    1 min 19 sec

* The total upgrade time does not include CBC0: the CBC is upgraded in parallel, so
its time falls within the CBC upgrade time slice.

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 106
FPD Parallel Upgrade Within a Node – Enhancement in 6.1.1

 Today, FPGA upgrades happen sequentially for every FPGA except the CBC.
 CBC upgrades have happened in parallel since 5.3.3.
 Except for the IPU images (FSBL/LNXFW/HW.FPD), every other FPGA has an
individual SPI controller mapping in the IPU.
 Upgrades of those FPGAs (omega/optimus/alpha/meldun/dalla) can therefore be
done in parallel within a node.
 The IPU images (fsbl/lnxfw/hw.fpd) cannot be upgraded in parallel within a
node.

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 107
Complete Your Online Session Evaluation
• Give us your feedback to be
entered into a Daily Survey
Drawing. A daily winner will
receive a $750 Amazon gift card.
• Complete your session surveys
through the Cisco Live mobile
app or from the Session Catalog
on CiscoLive.com/us.

Don’t forget: Cisco Live sessions will be available for viewing on-demand after the
event at CiscoLive.com/Online

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 108
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Lunch & Learn
• Meet the Engineer 1:1 meetings
• Related sessions

BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 109
Please join us for the Service Provider Innovation Talk featuring:
Yvette Kanouff | Senior Vice President and General Manager, SP Business
Joe Cozzolino | Senior Vice President, Cisco Services

Thursday, July 14th, 2016, 11:30 am - 12:30 pm, in the Oceanside A room

What to expect from this innovation talk


• Insights on market trends and forecasts
• Preview of key technologies and capabilities
• Innovative demonstrations of the latest and greatest products
• Better understanding of how Cisco can help you succeed

Register to attend the session live now or watch the broadcast on cisco.com
Thank you
