BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 4
IRB CLI Example

[Diagram: VPLS/H-VPLS PWs and L2 ports attach to a bridge domain; the BVI is a logical "bridge" port on that domain; a separate L3 port carries routed traffic.]

  interface gig 0/0/0/1.50 l2transport
   encapsulation dot1q 50
   rewrite ingress tag pop 1 symmetric
  !
  interface gig 0/0/0/2 l2transport
   encapsulation dot1q 50
   rewrite ingress tag pop 1 symmetric
  !
  l2vpn
   bridge group cisco
    bridge-domain domain50
     interface gig 0/0/0/1.50
     interface gig 0/0/0/2
     routed interface bvi 20
  !
  interface gig 0/0/0/5.20
   encapsulation dot1q 20
   ipv4 address 2.2.2.2 255.255.255.0
IRB Packet Forwarding Overview

[Diagram: L2 ports and VPLS/H-VPLS PWs in a bridge domain with a BVI "bridge" port; an L3 port outside the domain.]

L3 routing is via the BVI. It involves two steps:
(1) L2 bridging between the L2 ports and the BVI "bridge" port. This is regular MAC address based L2 forwarding. The BVI is tied to an internal bridge "port", and the BVI's MAC address is used for L2 forwarding.
(2) L3 routing between the BVI and the other L3 interfaces.
Unicast L2 → L3 packet flow:

[Diagram: ingress LC (PHY → NP → TM/FIA) → switch fabric → egress LC.]

1. Ingress L2 lookup. The TM replicates the packet and loops the entire packet (not just the packet head) back to the NP.
2. Ingress L3 lookup; this costs one additional NP lookup on the ingress NP and one TM packet replication.
3. Egress regular L3 lookup.
IRB Packet Flow (2) – Unicast: L3 → L2

[Diagram: ingress LC with L3 port → switch fabric → egress LC with L2 port.]

1. Ingress L3 lookup & L2 rewrite (including egress BVI counters). The TM replicates the packet and loops it back to the NP.
2. Ingress L2 lookup; this costs one additional NP lookup on the ingress NP and one TM packet replication.
3. Egress regular L2 lookup.
IRB Packet Flow (3) – L3 Multicast Fan-out to BVI/L2

[Diagram: ingress LC with L3 port → switch fabric → egress LC with L3 and L2 ports.]

1. Regular L3 multicast replication.
2. Multicast replication to L2 ports (see next slide for details).
L3 Multicast Fan-out to BVI/L2 – Inside NP

Example: there is only one L2 bridge port in the bridge-domain on a given NP. This is the worst case scenario: one multicast packet needs 3 lookups and 2 replications.

[Diagram: from fabric → L3 lookup → TM → L2 lookup → TM → replicated-packet lookup → to egress L2 port.]

Initial lookup
– Regular L3 multicast lookup; the result indicates replication to the BVI interface.
(1) The packet is sent to the replication engine, which replicates the packet and sends the copy back to the processing engine.
Second lookup
– Regular L2 multicast lookup; the result indicates replication to the L2 port.
(2) The packet is sent to the replication engine, which replicates the packet and sends the copy back to the processing engine.
Third lookup
– Replicated L2 multicast packet lookup.
L3 Multicast Fan-out to BVI/L2 – Inside NP: One Copy Optimization

[Diagram: a single combined lookup feeds the internal replication engine, then the replicated-packet lookup toward the egress L2 port.]

On a given NP, for a particular mroute, if there is only one local L3 interface (the BVI) in the OIF list, then the L3 and L2 lookups can be optimized into one. This also eliminates the first multicast replication shown in the previous picture.
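The lookup/replication savings described above can be expressed as a rough cost model. This is an illustrative sketch based only on the counts the slides give (3 lookups, 2 replications in the single-L2-port worst case; the optimization fuses the L3 and L2 lookups and removes the first replication); the function name and parameters are hypothetical:

```python
def lookup_replication_cost(l2_ports_on_np: int, one_copy_opt: bool) -> tuple:
    """Rough per-packet cost for multicast fan-out to a BVI whose OIF list
    contains only that BVI: an L3 lookup, an L2 lookup, and one lookup per
    replicated L2 copy. The one-copy optimization fuses the L3 and L2
    lookups and eliminates the first replication."""
    if one_copy_opt:
        lookups = 1 + l2_ports_on_np       # fused L3+L2 lookup, then per-port
        replications = 1
    else:
        lookups = 2 + l2_ports_on_np       # L3 lookup, L2 lookup, then per-port
        replications = 2
    return lookups, replications

# Worst case from the slide: one L2 bridge port on the NP.
assert lookup_replication_cost(1, one_copy_opt=False) == (3, 2)
assert lookup_replication_cost(1, one_copy_opt=True) == (2, 1)
```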
RSP/RP BVI Inject Path Difference

• The RSP doesn't maintain the L2FIB for BVI interfaces
• The RSP will use a "service card" for MPLS and non-MPLS packets
• The 1st linecard up is usually selected
• NP0 on that slot is used for the L2 lookup for RSP-injected packets

  RP/0/RP1/CPU0:ASR9922-A#sh adjacency bvi-fia
  Fri Jul 8 08:52:07.745 EDT
  Service card for BVI interfaces on node 0/RP1/CPU0
    MPLS packets:     Slot 0/1/CPU0 VQI 0xc0
    Non-MPLS packets: Slot 0/1/CPU0 VQI 0xc0
  ----- Service Card List -----
  0/RP0/CPU0 VQI: Not Applicable
  0/RP1/CPU0 VQI: Not Applicable
  0/0/CPU0   VQI: 0x180
  0/1/CPU0   VQI: 0xc0
Punt and Inject
Items covered
• Netio
• SPP
• LPTS
• Punt/Inject paths
Netio
• PI module that handles all protocol packets (either on
RSP or LC)
• Each card has a FINT (fabric interface; MTU 9500)
• The RSP has 2 Mgmt interfaces
• An LC has 'N' physical interfaces
• In addition, it holds the virtual interface associations
Netio (cont…)
• A chain-walker works on the packet based on the packet type set in punted packets
• All modules install their handlers in the chain
• Deals with SPP for Tx/Rx
• Netio PD trims or adds PD headers
SPP
• Software Packet Path
• Deals with super-frame
• Optimized packet processing
• CPU pipe-lining of instructions and data cache usage
• Runs on miscellaneous CPUs with different I/O driver
support
LPTS
• Local Packet Transport System
• Pre-IFIB packet processing (for-us packets)
• Control plane for Control packets
Punt infra path Segments
• Segment 1: RSP CPU → LC CPU
• Segment 2: LC CPU → RSP CPU
• Segment 3: LC CPU → Wire (via fabric)
• Segment 4: LC CPU → Wire (direct)
• Segment 5: Wire → RSP CPU
• Segment 6: Wire → LC CPU
• Segment 7: RSP CPU → Wire
ping
• Ping to a local IP address on an LC physical interface
• Ping request: ping → ipv4_io → netio → spp → punt-fpga → fia_rsp → xbar → fia_lc → np → punt_switch → spp → netio → ipv4_io → icmp
• Ping response: icmp → ipv4_io → lpts → netio → spp → punt_switch → np → fia_lc → xbar → fia_rsp → punt_fpga → spp → netio → lpts → ping
Punt and Inject

[Diagram: on the LC, the NPs punt via the FIA and punt switch toward the LC CPU; via the switch fabric and punt path toward the RSP CPU.]
ARP Architecture

IOS XR:
• fully distributed
• two-stage forwarding operation (clear separation of ingress and egress feature processing)
• Layer 2 header imposition is an egress operation
• only the line card that hosts the egress interface needs to know the Layer 2 encapsulation for packets that have to be forwarded out of that interface
• As a consequence, ARP and adjacency tables are local to a line card.

[Diagram: on the RP, the protocols (Static, ISIS, EIGRP, LDP, RSVP-TE, BGP, OSPF) feed the RIB and LSD; FIB, adjacency and ARP state is distributed over the internal EOBC to the LC CPU (SW FIB, AIB) and the LC NPU.]

AIB: Adjacency Information Base
RIB: Routing Information Base
FIB: Forwarding Information Base
LSD: Label Switch Database

Exceptions:
• Bundle-Ethernet interfaces: ARP synced b/w all LCs that host the bundle members.
• Bridged Virtual Interfaces (BVI): ARP synced b/w all LCs. (Ingress+Egress L3 processing on ingress LC).
High-Scale ARP Deployments
• Challenges:
  • synchronisation of a large number of ARP entries across line cards during large ARP churn:
    • ARP storm in the attached subnet
    • the router starts forwarding to end devices in a large attached subnet, triggering ARP resolution requests
  • Running the "show arp" command or polling ARP via SNMP at very high scale can further slow down the ARP process
• Supported scale:
  • The ARP table can grow up to the dynamic memory limit imposed on the ARP process
  • You can go beyond what is thoroughly tested at Cisco, but make sure you test your deployment scenario
  • 128k entries per LC tested thoroughly at Cisco
ARP Data Plane

• The ingress NP classifies the packet to an interface and detects that it's an ARP packet.

[Diagram: line card punt path — NP → SPP → netio/spio → ARP process on the LC CPU.]
ARP Data Plane - Resolution

• When the line card that performs the egress IPv4 processing receives a packet for a prefix for which there is no adjacency information, the packet must be punted to the slow path for ARP resolution.

[Diagram: punt path — Tsec driver → SPP → netio/spio → ARP process on the LC CPU.]
ARP Data and Control Plane Commands

  RP/0/RSP0/CPU0:ASR9006-H#sh controllers np counters np5 location 0/1/CPU0
  <…output omitted..>
  776  ARP       130760261   929
  777  ARP_EXCD  613523651  4422

NP5 is receiving excessive ARP packets: 929 pps are punted, 4422 pps are dropped.

  RP/0/RSP0/CPU0:ASR9006-H#sh netio clients location 0/1/CPU0
  <…output omitted..>
            Input        Punt                 XIPC InputQ   XIPC PuntQ
  ClientID  Drop/Total   Drop/Total           Cur/High/Max  Cur/High/Max
  --------------------------------------------------------------------------------
  <…output omitted..>
  arp       0/0          17025280/138794437   0/0/1000      993/1000/1000

993 packets are sitting in NetIO awaiting ARP resolution.
ARP Configuration Commands

Configure the LPTS ARP policer:

  !
  lpts punt police location 0/1/CPU0
   protocol arp rate 400
  !

• This rate is applied to all NPs on the LC. If you have 8 NPs on the LC, the max total ARP rate hitting the LC CPU would be: 400 x 8 = 3200 pps

Recommended in BNG deployments (available starting from 5.3.3):

  !
  subscriber arp scale-mode-enable
  !
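The per-NP policer arithmetic above is worth making explicit, since the configured rate is not the per-LC aggregate. A minimal sketch (the function name is illustrative, not a Cisco API):

```python
def max_arp_rate_to_cpu(policer_pps: int, nps_on_lc: int) -> int:
    """The LPTS 'protocol arp rate' is enforced per NP, so the worst-case
    aggregate ARP rate hitting the LC CPU is the per-NP rate multiplied
    by the number of NPs on that line card."""
    return policer_pps * nps_on_lc

assert max_arp_rate_to_cpu(400, 8) == 3200   # example from the slide
assert max_arp_rate_to_cpu(250, 4) == 1000   # the ~1000/LC ball-park target
```

When sizing the policer, start from the per-LC budget and divide by the NP count, as the later tips slide suggests.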
How Do I Tell If ARP Process Is Overloaded?
LC/0/1/CPU0:Jan 27 06:38:12.445 : netio[270]: %PKT_INFRA-PQMON-6-QUEUE_DROP : Taildrop on XIPC queue 2 owned
by arp (jid=115)
• NetIO queue is full, packets that require ARP resolution are dropped
High-Scale ARP Deployments Tips
• Validate your solution if you’re going beyond 128k/LC
• Tighten the LPTS policer rate.
• A safe ball-park number is 1000 per LC (if the LC has 4xNPs, configure 250).
• Monitor ARP counters at NP
• Monitor NetIO ARP resolution queue
• Monitor adjacencies summary info
• If using BVI, look for GSP failures in ARP traces
• In BNG deployment on IOS XR release 5.3.3 or later configure "subscriber arp
scale-mode-enable”
• Read more on ARP operation and troubleshooting in:
• https://supportforums.cisco.com/document/12766486/troubleshooting-arp-asr9000-routers
ARP on Unnumbered Interfaces - BNG!

• In a typical BNG configuration the bundle-access interface is unnumbered
• Unnumbered interfaces inherit all attributes from the parent
  • including the primary and all secondary IPv4 addresses
• The ARP table quickly grows:

  RP/0/RSP0/CPU0:a9k#sh arp Bundle-Ether100.902.ip8 location 0/0/CPU0
  Address       Age  Hardware Addr   State      Type  Interface
  172.17.255.1  -    8478.ac7a.ba73  Interface  ARPA  Bundle-Ether100.902
  172.31.252.1  -    8478.ac7a.ba73  Interface  ARPA  Bundle-Ether100.902
  172.31.253.1  -    8478.ac7a.ba73  Interface  ARPA  Bundle-Ether100.902

  interface Bundle-Ether100.902
   ipv4 point-to-point
   ipv4 unnumbered Loopback902
   encapsulation dot1q 902
   ipsubscriber ipv4 l2-connected
    initiator dhcp
  !
  interface Loopback902
   ipv4 address 172.17.255.1 255.255.255.255
   ipv4 address 172.31.252.1 255.255.255.0 secondary
   ipv4 address 172.31.253.1 255.255.255.0 secondary

• New command in 5.3.3: subscriber arp scale-mode-enable
  • Disables ARP on subscriber interfaces
  • DHCP or PPPoE already provide the MAC to IPv4 binding
ARP in Large Scale VRRP/HSRP

[Diagram: two routers running VRRP toward a large L2 network; the core link to the VRRP active fails.]

• Scenario:
  • The VRRP active is also the preferred router for downstream traffic
  • Suddenly the VRRP standby starts receiving traffic from the core
  • If the standby didn't have the ARP table populated, a large portion of downstream traffic is punted for ARP resolution
• Solution: the ARP Geo-Redundancy feature

  arp redundancy
   group 1
    source-interface Loopback901
    interface-list
     interface Bundle-Ether7606.2 id 2
Loadbalancing Revisited

[Topology: Access – L2VPN PE – Core]
Loadbalancing FAQ

• If packets are fragmented, L4 is omitted from the hash calculation
• IPv6 flow label addition to the hash (includes the 5-tuple; needs config), target 6.x:
  • cef load-balancing fields ipv6 flow-label
• "show cef exact-route" or "bundle-hash BE<x>" can be used to feed info and determine the actual path/member, but this is a shadow calculation that is *supposed* to be the same as the HW.
• Mixing bundle members between Trident/Typhoon/Tomahawk can be done, but is not recommended (though the hash calculation is the same).
How does ECMP path or LAG member selection work?

[Diagram: the hash result selects both a path (ECMP) and a member (LAG).]
ECMP Load balancing

A: IPv4 Unicast or IPv4 to MPLS (3)
• No or unknown Layer 4 protocol: IP SA, DA and Router ID
• UDP or TCP: IP SA, DA, Src Port, Dst Port and Router ID
B: IPv4 Multicast
• For (S,G): Source IP, Group IP, next-hop of RPF
• For (*,G): RP address, Group IP address, next-hop of RPF
C: MPLS to MPLS or MPLS to IPv4
• # of labels <= 4:
  • if the inner payload is IP based: same as IPv4 unicast (A)
  • EoMPLS: ether header follows; hash on 4th label + RID

Bundle Load balancing
• An L3 bundle uses the 5-tuple as in "A" (e.g. an IP enabled routed bundle interface)
• An MPLS enabled bundle follows "C"
• An L2 access bundle uses the access S/D-MAC + RID, or L3 if configured (under l2vpn)
• An L2 access AC to a PW over an MPLS enabled core-facing bundle uses the PW label (not the FAT-PW label, even if configured)
• The FAT PW label is only useful for P/core routers
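The common pattern across all the cases above is one hash over the selected fields, then a reduction to a path or member index. The actual NP microcode hash is proprietary; this sketch uses CRC32 purely as a stand-in to illustrate the mechanics (all names here are illustrative):

```python
import zlib

def lb_hash(src_ip: str, dst_ip: str, proto: int, sport: int, dport: int,
            router_id: str) -> int:
    # Stand-in for the NP hash function (the real algorithm is not public).
    # Per the FAQ, fragmented or non-TCP/UDP packets would omit the L4 ports.
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}|{router_id}".encode()
    return zlib.crc32(key)

def pick_member(hash_val: int, n_members: int) -> int:
    # The hash result is reduced to an ECMP path / bundle member index.
    return hash_val % n_members

h = lb_hash("172.18.0.2", "172.16.255.2", 6, 1024, 179, "172.16.255.1")
member = pick_member(h, 4)
assert 0 <= member < 4
```

Note how the Router ID is part of the key: two routers hashing the same flow still get different results, which is one of the levers against polarization discussed later.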
MPLS vs IP Based loadbalancing
• When a labeled packet arrives on the interface, the ASR9000 advances a pointer over at most 4 labels.
• If the number of labels is <= 4, and the next nibble seen right after the bottom label is:
  • 4: default to IPv4 based balancing
  • 6: default to IPv6 based balancing
• This means that on a P router that has no knowledge of the MPLS service of the packet, that nibble can either mean the IP version (in MPLS/IP) or the first nibble of the DMAC (in EoMPLS).
• RULE: if you have EoMPLS services AND MACs starting with a 4 or 6, you HAVE to use Control-Word.

  MPLS/IP: L2 | MPLS | MPLS | 45…  (IPv4 header)
  EoMPLS:  L2 | MPLS | MPLS | 0000 (CW) | 4111.0000.41-22-33 (MAC)

• The Control Word inserts additional zeros after the inner label, steering the P nodes to label based balancing.
  • In EoMPLS, the inner label is the VC label, so load-balancing is per VC. A more granular spread for EoMPLS can be achieved with FAT PW (a label based on the FLOW, inserted by the PE device that owns the service).
  • Take note of the knob to change the FAT PW label code: 0x11 (17 decimal, as per the draft specification); the IANA assignment is 0x17.
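The nibble heuristic above can be sketched as a small classifier. This is a simplified model of the behavior the slide describes, not the actual microcode; the function name and return strings are illustrative:

```python
def p_router_balance_mode(payload_after_labels: bytes, num_labels: int) -> str:
    """Model of the P-router heuristic: with <= 4 labels, peek at the first
    nibble after the bottom-of-stack label. 4 -> IPv4 hashing, 6 -> IPv6
    hashing, anything else -> label-based hashing."""
    if num_labels > 4:
        return "label-based"
    nibble = payload_after_labels[0] >> 4
    if nibble == 4:
        return "ipv4-based"
    if nibble == 6:
        return "ipv6-based"
    return "label-based"

assert p_router_balance_mode(bytes([0x45, 0x00]), 2) == "ipv4-based"   # real IPv4 header
assert p_router_balance_mode(bytes([0x00, 0x00]), 2) == "label-based"  # control word (zeros)
# The failure case the slide warns about: an EoMPLS DMAC starting with 4
# is misread as an IPv4 header when no control word is present.
assert p_router_balance_mode(bytes([0x41, 0x11]), 2) == "ipv4-based"
```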
Loadbalancing ECMP vs UCMP and polarization
• Support for equal cost and unequal cost
• 32-way for IGP paths
• 32-way (Typhoon) for BGP (recursive paths); 8-way on Trident
• 64 members per LAG
• Make sure you reduce the recursiveness of routes as much as possible (static route misconfigurations…)
• All loadbalancing uses the same hash computation but looks at different bits of that hash.
• Use the hash shift knob to prevent polarization.
  • Adjacent nodes compute the same hash, with little variety if the RIDs are close
  • This can result in north-bound or south-bound routing.
  • Hash shift makes the nodes look at completely different bits and provides more spread.
  • Trial and error… (4-way shift on Trident, 32-way on Typhoon; values > 5 on Trident result in modulo)
Hash shift, what does it do?

[Diagram: the hash is computed over the L2/L3/L4 headers and yields 256 buckets; 8 bits of the result are selected to drive path (ECMP) and member (LAG) selection. With hash shift 4 versus hash shift 8, a node selects a different set of 8 bits from the same hash.]
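The bit-window idea behind hash shift can be shown in a few lines. This is a conceptual sketch, not the NP implementation; widths and the sample hash value are arbitrary:

```python
def select_bits(hash32: int, shift: int, width: int = 8) -> int:
    """Pick an 8-bit window out of the hash result. Two nodes that compute
    the same hash but use different shifts read different windows, so the
    same flow can land on different paths at each hop (less polarization)."""
    return (hash32 >> shift) & ((1 << width) - 1)

h = 0xA5C317F0            # same hash computed on two adjacent nodes
node_a = select_bits(h, 0)   # node A reads the low byte
node_b = select_bits(h, 8)   # node B reads the next byte
assert node_a == 0xF0
assert node_b == 0x17
assert node_a != node_b      # different windows -> different bucket choices
```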
Loadbalancing knobs and what they affect
• l2vpn loadbalancing src-dest-ip
  • For L2 bundle interfaces, egress out of the AC
  • FAT label computation on ingress from the AC towards the core
  • Note: upstream loadbalancing out of the core interface does not look at the FAT label (it is inserted after the hash is computed)
• On bundle (sub)interfaces:
  • Loadbalance on src IP, dest IP, src/dest, or a fixed hash value (tie a VLAN to a hash result)
  • Used to be L2transport only, now also on L3.
• GRE (no knob needed anymore)
  • Encap: based on the incoming IP header
  • Decap: based on the inner IP header
  • Transit: based on the inner payload if it is v4 or v6, otherwise based on the GRE header
  • So running MPLS over GRE will result in LB skews!
ASR9000 L2VPN Load-Balancing (cont.)

• ASR9000 PE imposition load-balancing behaviors:
  • Per-PW based on MPLS VC label (default)
  • Per-Flow based on L2 payload information; i.e. L2 DMAC / L2 SMAC, RTR ID
  • Per-Flow based on L3/L4 payload information; i.e. L3 D_IP / L3 S_IP / L4 D_port / L4 S_port, RTR ID

PE per-flow load-balance configuration based on L2 payload information:

  !
  l2vpn
   load-balancing flow src-dst-mac
  !
ASR9000 L2VPN Load-Balancing (cont.)

ASR9000 as P router load-balancing behaviors:
• Based on L3/L4 payload information for IP MPLS payloads
• Based on the bottom-of-stack label for non-IP MPLS payloads
• IP MPLS payloads are identified based on the version field value (the nibble right after the bottom-of-stack label):
  • Version = 4 for IPv4
  • Version = 6 for IPv6
  • Anything else is treated as non-IP
• L2VPN (PW) traffic is treated as non-IP. For L2VPN, the bottom-of-stack label could be:
  • the PW VC label, or
  • the flow label (when using FAT PW)
• PW Control-Word is strongly recommended to avoid erroneous behavior on the P router when the DMAC starts with 4/6

[Flowchart: "Is the MPLS payload IP?" No → select the ECMP / bundle member by hashing the bottom-of-stack label value (applicable to L2VPN); Yes (1) → select by hashing the L3/L4 payload information (applicable to L3VPN).]

Typical EoMPLS frame:

  RTR DA | RTR SA | E-Type (0x8847) | PSN MPLS Label | PW MPLS Label |
  0 PW CW | DA | SA | 802.1q Tag (0x8100) | C-VID | E-Type (0x0800) | IPv4 Payload

(1) MPLS-encap IP packets with up to four (4) MPLS labels
ASR9000 L2VPN Load-Balancing (cont.)

• In the case of VPWS or VPLS, at the ingress PE side, it's possible to change the load-balance upstream to the MPLS core in three different ways:
  1. At the l2vpn sub-configuration mode with the "load-balancing flow" command, with either src-dst-ip or src-dst-mac [default]
  2. At the pw-class sub-configuration mode with the "load-balancing" command, where you can choose flow-label or pw-label
  3. At the bundle interface sub-configuration mode with the "bundle load-balancing hash" command, with either dst-ip or src-ip
• It's important to not only understand these commands but also that 1 is weaker than 2, which is weaker than 3.
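The "1 is weaker than 2 is weaker than 3" precedence can be modeled as a simple resolver. A sketch of the decision logic only (names and return strings are illustrative, not CLI output):

```python
def effective_lb(bundle_hash=None, pw_class_lb=None, l2vpn_flow=None) -> str:
    """Resolve the effective upstream load-balance mode per the slide's
    priority: bundle knob (3) beats pw-class knob (2) beats l2vpn knob (1),
    falling back to the default per-PW (VC label) behavior."""
    if bundle_hash:
        return bundle_hash       # e.g. "dst-ip" from 'bundle load-balancing hash'
    if pw_class_lb:
        return pw_class_lb       # e.g. "pw-label" or "flow-label"
    if l2vpn_flow:
        return l2vpn_flow        # e.g. "src-dst-ip" from 'load-balancing flow'
    return "per-vc (default)"

# Mirrors the worked example on the next slide:
assert effective_lb("dst-ip", "pw-label", "src-dst-ip") == "dst-ip"
assert effective_lb(None, "pw-label", "src-dst-ip") == "pw-label"
assert effective_lb(None, None, "src-dst-ip") == "src-dst-ip"
assert effective_lb() == "per-vc (default)"
```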
ASR9000 L2VPN Load-Balancing (cont.)

For example, if you had the configs in all 3 locations:

  interface Bundle-Ether1
   bundle load-balancing hash dst-ip
  !
  l2vpn
   load-balancing flow src-dst-ip
   pw-class FAT
    encapsulation mpls
     control-word
     transport-mode ethernet
     load-balancing
      pw-label

• Because of the priorities, on the egress side of the ingress PE (towards the MPLS core), we will do per-dst-ip load-balance.
• If the bundle-specific configuration is removed, we will do per-VC load-balance.
• If the pw-class load-balance configuration is removed, we will do per-src-dst-ip load-balance.
Flow Aware Transport PWs
Flow Aware Transport PWs (cont.)

• ASR9000 PE is capable of negotiating (via LDP – RFC 6391) the handling of PW flow labels:

  !
  l2vpn
   pw-class sample-class-1
    encapsulation mpls
     load-balancing flow-label both
  !

• ASR9000 is also capable of manually configuring imposition and disposition behaviors for PW flow labels:

   pw-class sample-class-1
    encapsulation mpls
     load-balancing flow-label tx
  !
   pw-class sample-class-1
    encapsulation mpls
     load-balancing flow-label rx
  !

• The flow label value is based on L2 or L3/L4 PW payload information
• ASR9000 PE is capable of load-balancing regardless of the presence of the flow label
• The flow label is aimed at assisting P routers:

  !
  l2vpn
   pw-class sample-class
    encapsulation mpls
     load-balancing flow-label both static
  !
[Topology used in the following slides: PW1 (Service X) and PW2 (Service Y) crossing P routers with and without ECMP and bundle interfaces.]
L2VPN Load-balancing (Per-VC LB)

[Topology: PE1 – P1/P2 – P3/P4 – PE2, carrying PW1 (Service X) flows F1x–F4x and PW2 (Service Y) flows F1y–F4y.]

Default behaviors:
• ASR9000 PE with core-facing bundle: the PE load-balances traffic across bundle members based on the VC label; i.e. all traffic from a PW is assigned to one member
• ASR9000 PE with ECMP: the PE load-balances PW traffic across ECMPs based on the VC label; i.e. all traffic from a PW is assigned to one ECMP
• ASR9000 P with core-facing bundle: the P router load-balances traffic across bundle members based on the VC label; i.e. all traffic from a PW is assigned to one member
• ASR9000 P with ECMP: the P router load-balances traffic across ECMPs based on the VC label; i.e. all traffic from a PW is assigned to one ECMP
• ASR9000 PE with AC bundle: the PE load-balances traffic across bundle members based on DA/SA MAC
L2VPN Load-balancing (L2/L3 LB)

PE L2VPN load-balancing knob:

  l2vpn
   load-balancing flow {src-dst-mac | src-dst-ip}

[Topology: PE1 – P1 – P3, PW1 (Service X) flows F1x–F4x.]

• Default ASR9000 P: PW load-balancing is based on the VC label; only one ECMP and one bundle member are used for all PW traffic
L2VPN Load-balancing (L2/L3 LB + FAT)

PE FAT PW knob:

  l2vpn
   pw-class sample-class
    encapsulation mpls
     load-balancing flow-label both

[Topology: PE1 – P1/P2 – P3/P4 – PE2, PW1 (Service X) flows F1x–F4x.]

• ASR9000 PE: the PE now adds flow labels based on L2 or L3 payload info
• ASR9000 PE with ECMP: the PE now load-balances PW traffic across ECMPs based on L2 or L3 payload info; i.e. flows from a PW are distributed over ECMPs
• ASR9000 PE with core-facing bundle: the PE now load-balances traffic across bundle members based on L2 or L3 payload info; i.e. flows from a PW are distributed over members
• ASR9000 P with core-facing bundle: PW load-balancing based on the flow label; i.e. flows from a PW are distributed over bundle members. No new configuration required on P routers
• ASR9000 P with ECMP: the P router now load-balances traffic across ECMPs based on the flow label; i.e. flows from a PW are distributed over ECMPs
• ASR9000 PE with AC bundle: the PE load-balances traffic across bundle members based on L2 or L3 payload info
L2VPN LB Summary
• ASR9000 as an L2VPN PE router performs a multi-stage hash algorithm to select ECMPs / bundle members
• User-configurable hash keys allow the use of L2 fields or L3/L4 fields in the PW payload in order to perform load-balancing at egress imposition
• ASR9000 (as PE) complies with RFC 6391 (FAT PW) to POP/PUSH flow labels and aid load-balancing in the core
• PE load-balancing is performed irrespective of flow label presence
• FAT PW allows for load-balancing of PW traffic in the core WITHOUT requiring any HW/SW upgrades in the LSR
• Cisco has prepared a draft to address the current gap of FAT PW for BGP-signaled PWs
Significance of PW Control-Word

Problem: DANGER for the LSR. The LSR will confuse the payload as IPv4 (or IPv6) and attempt to load-balance based off incorrect fields:

  RTR DA | RTR SA | E-Type (0x8847) | PSN MPLS Label | PW MPLS Label |
  4 DA | SA | 802.1q Tag (0x8100) | C-VID | Payload E-Type

Solution: add the PW Control Word in front of the PW payload. This guarantees that a zero will always be present, and thus no risk of confusion for the LSR:

  RTR DA | RTR SA | E-Type (0x8847) | PSN MPLS Label | PW MPLS Label |
  0 PW CW | 4 DA | SA | 802.1q Tag (0x8100) | C-VID | Payload E-Type
Satellite ICL Bundle Load-Balancing

[Diagram: an a9k connected to a satellite (ports 1, 2, 3 … 40) over a bundle ICL.]

E.g.: on a bundle ICL with 4 members, the hash value of satellite port 13 is:
X = (13 + 2) % 4 = 15 % 4 = 3
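The member selection implied by the worked example can be sketched as below. Note the "+2" offset is inferred from the slide's single example; treat it as illustrative rather than a documented formula:

```python
def icl_member(satellite_port: int, n_members: int) -> int:
    """ICL bundle member selection as implied by the slide's example:
    (satellite port number + 2) modulo the ICL member count.
    The '+2' offset is taken from the worked example, not documentation."""
    return (satellite_port + 2) % n_members

assert icl_member(13, 4) == 3   # matches the slide: (13 + 2) % 4 = 3
```

A consequence of a fixed per-port formula like this is that a given satellite port always uses the same ICL member, regardless of traffic flows.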
Important ASR9k MFIB Data-Structures

• FGID = Fabric Group ID
  1. The FGID index points to (slotmask, fabric-channel-mask)
  2. Slotmask and fabric-channel-mask are simple bitmaps
• MGID = Multicast Group ID, one per (S,G) or (*,G)
• 4-bit RBH
  1. Used for multicast load-balancing chip-to-chip hashing
  2. Computed by the ingress NP ucode using these packet fields: IP-SA, IP-DA, Src Port, Dst Port, Router ID
• FPOE = FGID + 4-bit RBH

  (S,G)                  MGID   FGID
  10.0.0.1, 230.0.0.1    12345  0x82
  10.1.1.1, 230.12.1.1   3456   0x209
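Since the slide describes the slotmask as a simple bitmap, decoding it is just iterating set bits. A sketch under that assumption (decoding the table's FGID values directly as slot bitmaps is for illustration only; on a real system the FGID is an index that resolves to the masks):

```python
def slots_from_slotmask(slotmask: int) -> list:
    """A slotmask is a plain bitmap: bit N set means a copy of the
    multicast packet is replicated toward slot N over the fabric."""
    return [bit for bit in range(slotmask.bit_length())
            if (slotmask >> bit) & 1]

# Treating the example FGID values as bitmaps, purely to show the decode:
assert slots_from_slotmask(0x82) == [1, 7]       # 0b010000010
assert slots_from_slotmask(0x209) == [0, 3, 9]   # 0b1000001001
```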
Multicast Replication Model Overview
Multicast replication in ASR9k is like an SSM tree.
It is a 2-stage replication model:
1. Fabric-to-LC replication
2. Egress NP OIF replication
ASR9k doesn't use the inferior "binary tree" or "root unary tree" replication models.
Multicast Bundle Load-Balancing

  RP/0/RSP0/CPU0:a9k#sh mrib route 172.16.255.3 230.9.0.8 detail
  (172.16.255.3,230.9.0.8) Ver: 0x3f71 RPF nbr: 0.0.0.0 Flags: C RPF,
    PD: Slotmask: 0x0
        MGID: 16905
    Up: 00:04:00
    Outgoing Interface List
      Bundle-Ether7606.1 (0/0/CPU0) Flags: LI, Up: 00:04:00

[Topology: a9k with bundle BE7606, members Gi0/0/1/2 and Gi0/0/1/3.]
Troubleshooting Packet Forwarding
Update and recap

Input Drops Troubleshooting

Troubleshooting this?

  GigabitEthernet0/0/1/6.1 is up, line protocol is up
  <..output omitted..>
  307793 packets input, 313561308 bytes, 227987 total input drops

Piece of cake starting with IOS XR 5.3.3! 😀
Monitor NP Interface

• Part of the stats memory is carved out for per-uidb drop counters
• UIDB == µIDB == Micro Interface Descriptor Block
  • the NP's view of an interface
• Only one uidb at a time per LC can be monitored
• Drop counters that are updated for the selected uidb are not updated in the global stats memory

[Diagram: NP complex — forwarding chip (multi core) with FIB lookup memory (TCAM), MAC memory, frame memory and stats memory; the global stats memory holds the per-uidb drop counters.]
Interface µIDB Info

  RP/0/RSP0/CPU0:our9001#sh uidb data location 0/0/CPU0 g0/0/1/6.1 ingress | e 0x0
  Location = 0/0/CPU0
  Index = 35
  INGRESS table
  ------------
  Layer 3
  ------------
  Status                   0x1
  Ifhandle                 0x40009c0
  Index                    0x23
  Stats Pointer            0x5303ce
  IPV4 ACL Enable          0x1
  IPV4 ACL ID              0x10
  IPV4 Mcast Enable        0x1
  MPLS Enable              0x1
  IPV4 Enable              0x1
  IPV4 ICMP Punt           0x1
  mpls Racetrack Eligible  0x1

(uidb tables per interface: ingress, ing-extension, egress, extension)

The corresponding configuration:

  interface GigabitEthernet0/0/1/6.1
   ipv4 address 172.18.0.1 255.255.255.0
   encapsulation dot1q 901
   ipv4 access-group CL16 ingress
  !
  mpls ldp
   router-id 172.16.255.1
   interface GigabitEthernet0/0/1/6.1
    address-family ipv4
Monitor NP Interface

  RP/0/RSP0/CPU0:our9001#monitor np interface g0/0/1/6.1 count 2 time 1 location 0/0/CPU0
  Monitor NP counters of GigabitEthernet0_0_1_6.1 for 2 sec
  <..output omitted..>
  **** Sun Jan 31 22:14:32 2016 ****
  (Count 2 of 2)
  RP/0/RSP0/CPU0:our9001#
Monitor NP Interface

• Counters are reported with a '_MONITOR' suffix
• These counters are not added to the global NP counters
• By default runs one capture of 5 seconds (count and time are configurable)
• One session at a time per LC
• Supports physical and BE (sub)interfaces
  • Physical (sub)interface: monitoring runs on the NP that hosts the interface
  • BE (sub)interface: monitoring runs on all NPs that host the members
• Applicable only to ucode stages where the uidb is known
• Works perfectly for input drops troubleshooting and some output drops

  RP/0/RSP0/CPU0:our9001#monitor np interface g0/0/1/6.1 count 2 time 1 location 0/0/CPU0
  Monitor NP counters of GigabitEthernet0_0_1_6.1 for 2 sec
  <..output omitted..>
  **** Sun Jan 31 22:14:32 2016 ****
  Monitor 2 non-zero NP1 counters: GigabitEthernet0_0_1_6.1
  Offset  Counter                                  FrameValue  Rate (pps)
  -------------------------------------------------------------------------------
   262    RSV_DROP_MPLS_LEAF_NO_MATCH_MONITOR             101          49
  1307    PARSE_DROP_IPV4_CHECKSUM_ERROR_MONITOR          101          50
  (Count 2 of 2)
  RP/0/RSP0/CPU0:our9001#
Pervasive Capture of Dropped Packets

• Dedicated pool of buffer pointers
• Instead of dropping the packet, the first RFD buffer of a packet is enqueued into a dedicated pool

[Diagram: NP pipeline — line input → TOP Parse → TOP Search → TOP Resolve → TOP Modify → line output, with Traffic Manager/WRED stages and FIA input/output; at each drop decision point ("Drop?") the buffer is enqueued to the dedicated buffer pool via the ICFDQ instead of being discarded.]
Show Controllers NP Capture

  RP/0/RSP0/CPU0:our9001#sh controllers np capture np1 location 0/0/CPU0
Show Controllers NP Capture – Next Steps

Decoded captured packet:

  Ethernet II, Src: 30:f7:0d:f8:af:81, Dst: 84:78:ac:78:ca:3e
  Type: 802.1Q Virtual LAN (0x8100)
  802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 901
  Type: MPLS label switched packet (0x8847)
  MultiProtocol Label Switching Header, Label: 24001, Exp: 0, S: 1, TTL: 255
  Internet Protocol, Src: 172.18.0.2 (172.18.0.2), Dst: 172.16.255.2 (172.16.255.2)
  Internet Control Message Protocol
    Type: 0 (Echo (ping) reply)
    Code: 0 ()

Receiving interface configuration:

  interface GigabitEthernet0/0/1/6.1
   ipv4 address 172.18.0.1 255.255.255.0
   encapsulation dot1q 901
  !
Troubleshooting Input Drops – Next Steps

The captured packet carries MPLS label 24001. Check that label:

  RP/0/RSP0/CPU0:our9001#sh mpls forwarding labels 24001
  RP/0/RSP0/CPU0:our9001#sh mpls ldp bindings local-label 24001
  RP/0/RSP0/CPU0:our9001#sh mpls ldp bindings 172.16.255.2/32
  172.16.255.2/32, rev 48
        Local binding: label: 24010
        Remote bindings: (1 peers)
          Peer               Label
          -----------------  ---------
          172.16.255.3:0     23
  RP/0/RSP0/CPU0:our9001#

• The local binding shows that label 24010 should be used
• Drop reason: the upstream peer is sending packets with a wrong MPLS label.
Monitor NP Counter
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 68
Monitor NP Counter
RP/0/RSP0/CPU0:our9001#monitor np counter ACL_CAPTURE_NO_SPAN.1 np1 location 0/0/CPU0
Warning: Every packet captured will be dropped! If you use the 'count'
option to capture multiple protocol packets, this could disrupt
protocol sessions (eg, OSPF session flap). So if capturing protocol
packets, capture only 1 at a time.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 69
“Show drops all” enhancement
• Supported starting with 5.3.0
• Uses a ‘grammar’ file to combine outputs of other show commands
• Easy way to achieve a combined view of relevant aspects (drops are the most obvious
use case)
• Grammar file:
• Can be modified to suit particular troubleshooting tasks
• System will look for it at two locations:
1. disk0a:/usr/packet_drops.list
2. /pkg/etc/packet_drops.list (default)
• “show drops all commands” shows the constituent commands that will be called
for parsing the final output
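A hypothetical sketch of what entries in the grammar file might look like, inferred from the module/CLI pairs printed by “show drops all commands” (the actual file syntax may differ):

```
[np]      show controller np counters
[fabric]  show controllers fabric fia drops ingress
[lpts]    show lpts pifib hardware police
```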
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 70
“Show drops all” sample output (1)
RP/0/RP0/CPU0:ios#sh drops all commands
Wed Feb 4 05:27:40.915 UTC
Module CLI
[arp] show arp traffic
[cef] show cef drops
[fabric] show controllers fabric fia drops egress
[fabric] show controllers fabric fia drops ingress
[lpts] show lpts pifib hardware entry statistics
[lpts] show lpts pifib hardware police
[lpts] show lpts pifib hardware static-police
[netio] show netio drops
[netio] fwd_netio_debug
[niantic-driver] show controllers dmac client punt statistics
[np] show controller np counters
[np] show controllers np tm counters all
[spp] show spp node-counters
[spp] show spp client detail
[spp] show spp ioctrl
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 71
“Show drops all” sample output (2)
RP/0/RP0/CPU0:ios#sh drops all location 0/5/CPU0
Wed Feb 4 05:26:30.192 UTC
=====================================
Checking for drops on 0/5/CPU0
=====================================
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 72
New CLI To Dump The MAC Table
• New CLI introduced to dump the MAC table directly from NP memory
• Much faster dump of MAC table elements
• Full dump of a large MAC table may be as slow as the existing CLI
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 73
New CLI To Dump The MAC Table
sh l2vpn forwarding platform bridge-domain [<name>] mac-address [<mac-address>|np-id <np>|
xconnect <id>] location <location>
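For example, a hedged invocation using the bridge group and domain names from the earlier IRB example (the names and location here are placeholders):

```
RP/0/RSP0/CPU0:our9001#sh l2vpn forwarding platform bridge-domain cisco:domain50 mac-address location 0/0/CPU0
```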
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 74
Troubleshooting NP Forwarding
1. Identify interface in question.
2. Identify the mapping from interface to NPU.
3. Examine NP counters.
4. Look for rate counters that match lost traffic rate.
• If none of the counters match the expected traffic, check for drops at the interface controller
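Steps 2 and 3 can be sketched with the following commands (the NP number and location are placeholders for your setup); the first maps each interface to its NPU, the second examines that NPU’s counters:

```
RP/0/RSP0/CPU0:our9001#sh controllers np ports all location 0/0/CPU0
RP/0/RSP0/CPU0:our9001#sh controllers np counters np1 location 0/0/CPU0
```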
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 75
eXR on ASR9000
Introduction
VMs (Virtual Machines):
• Each VM runs a full copy of an operating system and a virtual copy of all the hardware that the operating system needs to run.
• VMs therefore require a lot of RAM and CPU cycles, which makes them heavyweight.
• Building VMs can take minutes to hours.
• Launching VMs can take minutes.

Linux Containers:
• A container requires only enough of an operating system, supporting programs and libraries, and system resources to run a specific program.
• Containers are isolated but share the OS and bin/libraries.
• Containers are lightweight in comparison to VMs, enabling two to three times as many applications on a single server with containers than with VMs.
• Building a container can take minutes.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 77
Introduction … Continued
VMs ( Virtual Machines ) Linux Container
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 78
Pictorial Illustration

Carrier Delay:
    carrier-delay up 10000
    carrier-delay down 10000
• We recommend configuring it on all interfaces that have redundant members in the bundle
• On a bundle with one member, please configure this CLI
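An interface-level sketch of the recommendation (the interface name and bundle ID are placeholders; the IOS XR keyword is carrier-delay):

```
interface GigabitEthernet0/0/0/1
 bundle id 1 mode active
 carrier-delay up 10000 down 10000
!
```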
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 79
VMs Adaptation (Spirit XR i.e. NCS 6k)
• Achieving ZPL, ZTL and HA during ISSU
The system is configured for multiple levels of redundancy. These are driven by the three main requirements for ISSU: ZPL (Zero Packet Loss), ZTL (Zero Topology Loss) and HA during ISSU.
• For ZPL and ZTL, we have redundancy in the line card resources, and these are carved into two parts: one for the current software version and one for the new software version. These resources include the amount of memory for storing prefixes in the FIB and the resources necessary for supporting features such as ACLs and QoS, in the form of TCAM memories.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 80
VMs Adaptation (Spirit XR i.e. NCS 6k) ..contd
• For ISSU with HA, we require the ability to create multiple VMs on each single
node. This implies the availability of partitions to host the new VM corresponding
to the new software version, and memory for the new VM to run.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 81
LXC Adaptation (Spirit XR vs eXR )
Summary:
NCS6k FCSed with 48G of memory on the RP and 24G on the LCs, and we reserved half of the memory on both the RP and the LC for the V2 VM during ISSU; that is how we were able to achieve ZPL, ZTL and HA during ISSU.
But for platforms that are not as resource-rich, this is a significant price to pay if we go with the VM approach.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 82
What is eXR
Detour - what is eXR?
• Enables Spirit to run on lower end platforms
Enables Spirit on legacy hardware with no virtualization support.
Enables hybrid 32-bit and 64-bit nodes.
Target platforms are ASR9k, Sunstone, Skywarp.
• Linux containers instead of KVMs
Calvados and XR run in separate containers.
• Leverage and enhance Spirit ISSU capabilities
Almost identical ISSU semantics/capabilities.
• Platform chooses containers or KVMs.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 83
What is eXR… Continued
Open embedded Linux kernel using Yocto
• Lots of Linux distributions out there, but none that are just right.
We cobble together the OS/libraries that meets our needs.
What we cobbled together may not always work well together.
• Yocto helps create custom Linux OS distributions.
• Yocto provides templates and tools per CPU to create a custom OS.
• Also works with commercial Linux vendors.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 84
What is eXR …... Continued
3rd party application hosting
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 85
What is eXR …... Continued
Cohesive Linux and XR networking stack
• Minimal state in the kernel
Only populate state that is absolutely required in the kernel.
• Minimize changes to the kernel
• Network topology-agnostic framework
3rd party applications must be able to function over any network topology supported by XR.
• Minimal route state in the kernel
Linux kernel’s routing feature set is primitive – targeted for a host, not a router.
Packet TX/RX via XR FIB.
Leverage hundreds of XR forwarding features.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 86
What is eXR …... Continued
Modular feature delivery
• Enable modular delivery of XR features.
For example, a BGP pie only.
• Reduce the number of kernel packages in NG XR images.
• In reality, our feature packs are per-customer lineups.
This does not scale.
Integrate AU + Chef and demo on Spirit/NCS6k.
• Asynchronous releases of XR platforms - define granularity.
Too granular will be a pain.
API versioning, automation and testing – define governance model.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 87
Usability
IOS-XR Usability Initiative Progress
Number of usability related features / featurettes delivered and planned
[Chart: feature counts – 183 delivered; 18, 43 and 20 planned]
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 89
Few usability highlights
• Input Drops Troubleshooting
• Global Configuration Replace – Ever wanted to quickly move interface configuration from one port to another? This new feature allows for quick customization of router configuration by match and replace based on interface names and/or regular expressions (see presentation below for details)
• Non-interactive EXEC commands – Ever wanted to initiate a router reload without being asked for confirmation? A new
global knob has been introduced to remove user interaction with the parser
• BGP advertised prefix count statistics – A new knob provides access to advertised count stats (something you could
do easily in IOS but not in IOS XR)
• OSPF post-reload protocol shutdown – A new knob that would keep OSPF in shutdown state after a node reload
• Interactive Rollback operations – Ever issued the wrong rollback ID by mistake? A new knob asks for user confirmation before committing
• CLI / XML serviceability enhancements to several platform dependent commands such as “show controllers” and
“show hw-module fpd” commands
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 90
Atomic Configuration
Replace Feature
Operational and Automation
Enhancements
Atomic Configuration Replace – Problem Statement
Problem Statement: It is operationally challenging to expect prior knowledge of existing config in order to manually remove unwanted items.
Example: Consider an interface with a target config where all config lines are new.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 92
Operational and Automation Enhancements
Atomic Configuration Replace – Current Behavior
What about using “no interface”?
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 94
Operational and Automation
Enhancements
Atomic Configuration Replace – NEW Behavior
1 Original Configuration

RP/0/RSP0/CPU0:PE1#sh run int gigabitEthernet 0/0/0/19
Mon Feb 16 13:00:32.153 UTC
interface GigabitEthernet0/0/0/19
 description ***AAABBBCCC***
 cdp
 ipv4 address 13.3.5.5 255.255.255.0
 negotiation auto
 load-interval 30
!

2 Target Configuration

RP/0/RSP0/CPU0:PE1(config)#no interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:PE1(config)#interface GigabitEthernet0/0/0/19
RP/0/RSP0/CPU0:PE1(config-if)# description ***TEST-after-change***
RP/0/RSP0/CPU0:PE1(config-if)# ipv4 address 13.3.5.5 255.255.255.0
RP/0/RSP0/CPU0:PE1(config-if)# ipv6 address 2603:10b0:100:10::21/126
RP/0/RSP0/CPU0:PE1(config-if)# negotiation auto
RP/0/RSP0/CPU0:PE1(config-if)# load-interval 60
RP/0/RSP0/CPU0:PE1(config-if)# commit

3 Committed Configuration

RP/0/RSP0/CPU0:PE1#show configuration commit changes last 1
Mon Feb 16 13:15:36.655 UTC
Building configuration...
!! IOS XR Configuration 5.1.2
interface GigabitEthernet0/0/0/19
 description ***TEST-after-change***
 no cdp
 ipv6 address 2603:10b0:100:10::21/126
 load-interval 60
!
end

Example: Consider an interface with a new target config where some config lines are untouched and the rest are either deleted, changed or added.

NEW Behavior: When issuing the “no interface” config, the system does not destroy the subtree but instead performs a SET of new config and a REMOVE of unwanted config lines.
• Only the diffs (changes, removals, additions) are applied
• No interface flaps
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 95
Operational and Automation Enhancements
Atomic Configuration Replace – What about other config submodes?

3 Committed Configuration

RP/0/RSP0/CPU0:PE1#show configuration commit changes last 1
Tue Mar 3 09:05:19.008 UTC
Building configuration...
!! IOS XR Configuration 5.1.2
router bgp 100
 neighbor-group NG-test
  description *** NEW NEW ***
  address-family vpnv4 unicast
  !
 !
!
end

Example: Consider a BGP neighbor-group with a new target config where some config lines are untouched and the rest are either changes or additions.

EXISTING Behavior: When issuing the “no” neighbor-group config, the system does not destroy the subtree but instead performs a SET of new config and a REMOVE of unwanted config lines. Only the diffs (changes, removals, additions) are applied.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 96
Global Configuration
Replace Feature
Operational and Automation
Enhancements
Global Configuration Replace
• Description / Use Case
Easy manipulation of router configuration, e.g. moving around configuration blocks.
Want to move interface config around? Want to change all repetitions of a given pattern?
• Available since 5.3.2 (CSCte81345)
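A hedged sketch of the feature in action (the interface names and patterns are placeholders, and the exact keywords may vary by release):

```
RP/0/RSP0/CPU0:PE1(config)#replace interface GigabitEthernet0/0/0/19 with GigabitEthernet0/0/0/20
RP/0/RSP0/CPU0:PE1(config)#replace pattern 'vrf BLUE' with 'vrf RED'
RP/0/RSP0/CPU0:PE1(config)#show configuration
RP/0/RSP0/CPU0:PE1(config)#commit
```

Reviewing the pending diff with “show configuration” before committing lets you verify what the match-and-replace actually changed.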
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 98
Operational and Automation
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 99
FPD Upgrade
Improvements
FPD Upgrade Improvements
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 103
RP2 upgrade time in 6.0.1
FPD                 5.3.3           6.0.1 (nanospin_ns)
CBC0                2 min 49 sec    3 min 15 sec
FSBL                15 sec          6 sec
LNXFW               4 min 25 sec    29 sec
HW.FPD (FPGA2)      8 min 38 sec    51 sec
ALPHA (FPGA3)       6 min 6 sec     35 sec
OMEGA (FPGA4)       6 min 07 sec    36 sec
OPTIMUS (FPGA5)     6 min 07 sec    42 sec
ROMMON              3 min 25 sec    4 min 26 sec
CHA (FPGA6)         9 min 32 sec    8 min 36 sec
CBC1                3 min 04 sec    3 min 32 sec
Total               47 min 39 sec   19 min 53 sec
*Total upgrade time does not include CBC0 and CBC1; since the CBCs are upgraded in parallel, the time taken falls within the CBC upgrade time slice.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 104
8x100 upgrade time in 6.0.1
FPD                 5.3.3           6.0.1 (nanospin_ns)
CBC                 2 min 35 sec    2 min 35 sec
ROMMON              3 min 25 sec    2 min 15 sec
HW.FPD (FPGA2)      8 min 34 sec    1 min 6 sec
FSBL                16 sec          6 sec
LNXFW               4 min 28 sec    37 sec
MELDUN0 (FPGA3)     6 min 09 sec    46 sec
MELDUM1 (FPGA3)     12 min 16 sec   49 sec
DALLA (FPGA4)       6 min 9 sec     47 sec
Total               43 min 45 sec   9 min 1 sec
*Total upgrade time does not include CBC; since the CBC is upgraded in parallel, the time taken falls within the CBC upgrade time slice.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 105
FC2 upgrade time in 6.0.1
FPD                 5.3.3           6.0.1 (nanospin_ns)
CBC0 (parallel)     5 min 25 sec    5 min 45 sec
FCFSBL              15 sec          7 sec
FCLNXFW             3 min 29 sec    26 sec
HW.FPD (FPGA8)      6 min 35 sec    46 sec
Total               10 min 4 sec    1 min 19 sec
*Total upgrade time does not include CBC0; since the CBC is upgraded in parallel, the time taken falls within the CBC upgrade time slice.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 106
FPD Parallel upgrade within a node
enhancement in 6.1.1
• Existing FPGA upgrades happen sequentially for every FPGA except the CBC upgrade.
• The CBC upgrade happens in parallel starting from 5.3.3.
• Except for the IPU (FSBL/LNXFW/HW.FPD), all other FPGAs have individual SPI controller mappings in the IPU.
• Upgrade of FPGAs (omega/optimus/alpha/meldun/dalla) can be done in parallel within a node.
• IPU images (fsbl/lnxfw/hw.fpd) cannot be upgraded in parallel within a node.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 107
Complete Your Online Session Evaluation
• Give us your feedback to be
entered into a Daily Survey
Drawing. A daily winner will
receive a $750 Amazon gift card.
• Complete your session surveys
through the Cisco Live mobile
app or from the Session Catalog
on CiscoLive.com/us.
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 108
Continue Your Education
• Demos in the Cisco campus
• Walk-in Self-Paced Labs
• Lunch & Learn
• Meet the Engineer 1:1 meetings
• Related sessions
BRKSPG-2904 © 2016 Cisco and/or its affiliates. All rights reserved. Cisco Public 109
Please join us for the Service Provider Innovation Talk featuring:
Yvette Kanouff | Senior Vice President and General Manager, SP Business
Joe Cozzolino | Senior Vice President, Cisco Services