for IPTV
Søren Andreasen, CCIE #3252, Cisco DK
– Active RP synchronizes information with the standby RP
– Session state maintained on the standby RP for high-availability-aware protocols
– Standby RP takes control when the active RP is compromised
• Non-Stop Forwarding (NSF): minimal or no packet loss
– Packet forwarding continues during reestablishment of peering relationships
– No route flaps between participating neighbor routers
(Diagram: RP1 ACTIVE and RP2 STANDBY connected across the bus to the line cards)
HA-Router(config)#redundancy
HA-Router(config-red)#mode ?
  rpr       Route Processor Redundancy
  rpr-plus  Route Processor Redundancy Plus
  sso       Stateful Switchover (Hot Standby)
HA-Router(config-red)#mode sso

HA-Router#show redundancy
Active GRP in slot 0:
Standby GRP in slot 9:
Preferred GRP: none
Operating Redundancy Mode: SSO
Auto synch: startup-config running-config
switchover timer 3 seconds [default]
(Diagram: on switchover, the line cards mark forwarding information as stale)
© 2005 Cisco Systems, Inc. All rights reserved.
GR/NSF Fundamentals
(Timeline: hold timer of 15 seconds)
• No link flap
• CEF table synced to the standby RP
• LC CEF table not cleared on switchover
• Packet forwarding continues during switchover while routing converges on the standby RP
• Peer relationships maintained; no adjacency flapping with peers
• Restart limited to a local event, not a network-wide one
router#show ip ospf
 Routing Process "ospf 100" with ID 10.1.1.1
 ....
 Non-Stop Forwarding enabled, last NSF restart 00:02:06 ago (took 44 secs)

router#show ip ospf neighbor detail
 Neighbor 3.3.3.3, interface address 170.10.10.3
 ....
 Options is 0x52
 LLS Options is 0x1 (LR), last OOB-Resync 00:02:22 ago
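The show output above assumes NSF has already been enabled under the OSPF process. A minimal configuration sketch (process ID 100 taken from the output; the default Cisco NSF mode is assumed):

```
router ospf 100
 nsf
```

The NSF-capable router negotiates restart support with its neighbors via the LLS (LR) option shown in the neighbor detail above.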
BGP GR/NSF
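BGP graceful restart is likewise negotiated with peers and enabled under the BGP process. A minimal sketch (the AS number 100 is an assumption; peers must also support the graceful-restart capability):

```
router bgp 100
 bgp graceful-restart
```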
How it works:
1. Global sync of IGMP data structures occurs when the standby Supervisor comes online
2. Periodic syncs are performed as changes occur
5. Synced IGMP data structures are utilized to provide Layer-2 Non-Stop Forwarding (NSF) of multicast traffic
(Diagram: active and standby Supervisors above the line cards; IGMP data structures kept aligned by periodic syncs until a failure triggers switchover)
No MPLS HA support
(Diagram: PE–P–PE with control plane and data plane at each node; LSPs across the data plane)
MPLS HA Support: NSF/SSO + Graceful Restart
(Diagram: PE–P–PE with control plane and data plane at each node; LSPs maintained across the data plane)
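LDP graceful restart, used together with NSF/SSO as described above, is enabled with a single global command; a minimal sketch:

```
mpls ldp graceful-restart
```

With this enabled, a neighbor keeps using the restarting router's bindings while its control plane recovers, so the LSPs stay up.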
(Diagram: AS1_PE1 in AS #100 and AS2_PE1 in AS #200, connected via ASBR-1 and ASBR-2 exchanging eBGP IPv4 + labels)
– Typically 50 ms or less
• MPLS TE FRR offers protection against network failures
 – Link protection
 – Node protection
(Topology: R1–R9, showing the protected path and the backup path)
• Creation of next-hop (NHOP) backup tunnel
 – Parallel path around the protected link to the next hop
• Creation of next-next-hop (NNHOP) backup tunnel
 – Parallel path around the protected node to the next-next-hop
(Diagram: head end (HE) and point of local repair (PLR), with label stacks 20/11 and 21/12 on the primary and backup paths through R3–R6)
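The backup tunnels above are pre-signaled at the PLR, while the protected primary tunnel requests FRR at its head end. A minimal sketch (tunnel numbers and the interface name are assumptions):

```
! Head end: primary tunnel requests FRR protection
interface Tunnel1
 tunnel mpls traffic-eng fast-reroute
!
! PLR: assign a pre-built backup tunnel to the protected interface
interface POS0/0
 mpls traffic-eng backup-path Tunnel100
```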
(Diagram: Source → Service Edge → Core → Distribution/Access → Receiver; multicast packets carried over a primary RSVP tunnel with an added MPLS label through the P core, Layer-2 switches at the edges)
(Diagram: CE–PE–MPLS core–PE–CE; Site1 and Site2 attached via PE1–PE4 across P1–P4)
• Failure in the provider core: mitigated with link redundancy and FRR
• PE router failure: another PE, or NSF/SSO
• PE-CE attachment circuit failure: requires a pair of attachment circuits end-to-end
• CE router failure: redundant CEs
(Diagram: attachment-circuit failure, marked x, between PE and CE; redundant paths via PE2/PE4 reach CE2 and CE3)
pe1(config)#interface ethernet 0/0.1
pe1(config-subif)#encapsulation dot1q 10
pe1(config-subif)#xconnect <PE3 router ID> <VCID> encapsulation mpls
pe1(config-subif-xconn)#backup peer <PE4 router ID> <VCID>
• Link failures: solved using redundancy in the network design plus MPLS TE FRR and/or fast IGP convergence
• Node failures: solved using redundant nodes and meshing of connections
 – Also using MPLS TE FRR node protection
• Line card failures
 – Hardware redundancy: line card redundancy (1+1, 1:N, 1:1)
• PE router (RP) failures
 – Dual-RP systems with NSF/SSO
 – HA software provides seamless transition…
HA in MPLS L3VPN Environment
(Diagram: route reflectors RR1/RR2; 10.1.1.0/24 and 10.1.2.0/24 attached via PEs with primary and standby RPs across P1–P4; LDP and iBGP sessions run over the core)
• LDP and BGP use TCP as a reliable transport mechanism for their protocol messages
• The TCP session between two LDP/BGP peers may go down due to a HW/SW failure (e.g., an RP switchover)
• Use IGP, BGP, and LDP NSF/GR
(Diagram: Site1 and Site2 connected via dual-RP PEs (primary/standby) across P1–P4)
Objective:
Achieve and meet requirements for sub-second channel change, pause, resume, etc.
(Diagram: multicast distribution tree building from primary and backup video sources)
5. Network signaling latency to rebuild tree(s) during link/node failures in the core
6. Time for the video feed to recover during primary source failover
Problem Description:
In networks where bandwidth is constrained between
multicast routers and hosts (like in xDSL deployments),
fast channel changes can easily lead to
bandwidth oversubscription, resulting in a temporary
degradation of traffic flow for all users.
Solution:
Reduce the leave latency during a channel change
by extending the IGMPv3 protocol.
Benefits:
• Faster channel changing without BW oversubscription
• Improved diagnostic capabilities
• When a user leaves a channel and no one else is joined, forwarding stops immediately, compared with IGMPv2 leave latency (up to 3 seconds) and IGMPv1 (up to 180 seconds)!
(Diagram: IGMP messages from set-top boxes arriving on E0)
Configuration:
interface Ethernet0
 ip pim sparse-mode
 ip igmp version 3
 ip igmp explicit-tracking

Explicit-tracking table:
Int  Channel              User
E0   10.0.0.1, 239.1.1.1  "David"
E0   10.0.0.1, 239.2.2.2  "John"
E0   10.0.0.1, 239.3.3.3  "David"

First introduced in 12.0(29)S
Fast Convergence:
LDP-IGP sync => LDP Down but IGP Still Up
(Diagram: CE1/network x – PE1 – P1 – P3 – PE4 – CE2/network y, with an alternate path via P2/P4; an LDP session is down while the IGP is still up)
(Diagram: same topology with max-metric advertised on the link where LDP is down)
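LDP-IGP synchronization is enabled under the IGP so that a link without an established LDP session is advertised with max-metric and avoided. A minimal sketch (OSPF process number and holddown value are assumptions):

```
router ospf 100
 mpls ldp sync
!
mpls ldp igp sync holddown 10000
```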
(Diagram: LDP session protection; neighbors discovered via LDP basic discovery, with the session maintained via P4)
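LDP session protection keeps the session to a neighbor alive over an alternate path (using targeted hellos) when the directly connected link fails, so label bindings need not be relearned. A minimal sketch:

```
mpls ldp session protection
```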
Agenda
Comparison: the first approach requires the network to have fast IGP and PIM convergence; the second approach does not.
(Diagrams: primary and secondary video sources 1–3 attached at different PEs across the regional backbone; heartbeats run between primary and secondary sources; an X marks a failed primary source)
• Have R1 and R2 advertise the same prefix (1.1.1.1/32) for each source segment.
• R3 and R4 follow the best path towards the source based on IGP metrics.
• Let's say R3's best path to SF is through R1. The source in SF now suddenly fails.
• R3's IGP will reconverge and trigger SSM joins towards R2 in NY.
(Diagram: sources in SF and NY behind R1 and R2, both advertising 1.1.1.1; STBs behind R3/R4 send IGMPv2 joins, and the join goes to the nearest 1.1.1.1/32)
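The anycast-source scheme described above can be sketched as follows on R1 (and identically on R2); the interface name, OSPF process number, and use of the default SSM range are assumptions:

```
! Shared (anycast) source address advertised by both R1 and R2
interface Loopback1
 ip address 1.1.1.1 255.255.255.255
!
router ospf 100
 network 1.1.1.1 0.0.0.0 area 0
!
! Enable SSM so receivers join (1.1.1.1, G) explicitly
ip pim ssm default
```

Because both routers advertise the same /32, IGP reconvergence alone redirects new SSM joins to the surviving source.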
Conclusion