
Considerations for service providers to deploy P2MP RSVP-TE and LDP for multicast applications

Pradeep Jain (pradeep.jain@alcatel-lucent.com) Pranjal Dutta (pranjal.dutta@alcatel-lucent.com) Sandeep Bishnoi (sandeep.bishnoi@alcatel-lucent.com) Alcatel-Lucent

www.mpls2010.com

Multicast Applications
Application            Type       Latency     Bandwidth  Scale
IPTV                   1:N        Medium      High       High
Broadcast Live Events  1:N        Low/Medium  High       Medium/High
System Monitoring      1:N / N:N  Low         Varies     Low/Medium
Stock Ticker           1:N        Low         Medium     Low/Medium
Video Conference       N:N        Low         High       Low/Medium
Multiplayer Gaming     N:N        Low         Medium     Medium/High
Data Collection        N:1        Medium      Varies     Medium/High

Multicast Applications (contd)


Application            Loss Tolerance  Application recovery impact on network  Provisioning
IPTV                   Medium          High                                    Static/Dynamic
Broadcast Live Events  Medium          High                                    Dynamic/Static
System Monitoring      Low/Medium      -NA- (Real Time)                        Static
Stock Ticker           Low             -NA- (Real Time)                        Dynamic/Static
Video Conference       Very Low        -NA- (Real Time)                        Dynamic
Multiplayer Gaming     Very Low        -NA- (Real Time)                        Dynamic
Data Collection        Low             Medium                                  Dynamic/Static

Multicast Services
Native IP Multicast
PIM, IGMP, MSDP, MBGP etc.

L3 Multicast VPN services


PIM, MPLS RSVP-TE/LDP PMSI etc.

L2 Multicast VPN services


PIM, IGMP snooping, MPLS PMSI etc.

MPLS Multicast Services


IP Multicast traffic over P2MP RSVP-TE/LDP LSP
IP Multicast using dynamic in-band LSP signaling
L3 Multicast VPN services (P2MP PMSI)
L2 Multicast VPN services (P2MP PMSI)

MPLS Multicast Services: IP Multicast traffic over RSVP-TE P2MP LSP


[Figure: The ingress PE1 sends PIM joins for (S1,G1) and (S1,G2) toward the sources, receives both streams as native IP, and forwards them over an RSVP-TE P2MP LSP signaled with S2L sub-LSPs to PE2, PE3 and PE4. Each leaf PE receives the streams on the P2MP LSP and forwards them as native IP toward receivers that signaled PIM/IGMP joins for (S1,G1) or (S1,G2).]
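The figure's behavior can be modelled with a small sketch. This is not router code; the class and node names are invented, but it shows the key property of the RSVP-TE approach: the P2MP LSP is one session at the ingress holding one source-to-leaf (S2L) sub-LSP per egress PE, so adding a leaf adds one S2L without disturbing the others.

```python
# A minimal sketch (invented names, not router code) of a P2MP LSP as one
# RSVP-TE session at the ingress with a source-to-leaf (S2L) sub-LSP per leaf.
from dataclasses import dataclass, field

@dataclass
class S2LSubLsp:
    leaf: str                 # egress PE that signalled interest (PIM/IGMP join)
    explicit_path: list[str]  # hops computed by CSPF for this sub-LSP

@dataclass
class P2mpSession:
    ingress: str
    s2ls: dict[str, S2LSubLsp] = field(default_factory=dict)

    def add_leaf(self, leaf: str, path: list[str]) -> None:
        # A new leaf only adds one S2L; existing S2Ls are untouched.
        self.s2ls[leaf] = S2LSubLsp(leaf, path)

lsp = P2mpSession(ingress="PE1")
lsp.add_leaf("PE2", ["PE1", "P1", "PE2"])
lsp.add_leaf("PE3", ["PE1", "P1", "P2", "PE3"])
lsp.add_leaf("PE4", ["PE1", "P1", "P2", "PE4"])
print(sorted(lsp.s2ls))   # ['PE2', 'PE3', 'PE4']
```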

MPLS Multicast Services: IP Multicast traffic over LDP P2MP LSP


[Figure: Leaf PEs signal LDP Label Mapping messages for P2MP FEC 1 hop by hop toward the root PE. The root PE sends PIM joins for (S1,G1) and (S1,G2) toward the sources, receives both streams as native IP, and forwards them over the LDP P2MP LSP; each leaf PE receives them on the LDP P2MP LSP and forwards them as native IP toward receivers that signaled PIM/IGMP joins.]
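A short sketch of the leaf-initiated signalling shown in the figure follows. The data structures are invented, not an LDP implementation, but it captures the property the later slides rely on: an LSR that receives Label Mappings for the same P2MP FEC from several downstream neighbours merges them into one state entry and sends a single mapping upstream toward the root.

```python
# A minimal sketch (invented data structures) of LDP P2MP Label Mapping
# propagation with state merging at a branch LSR.
p2mp_state = {}   # (lsr, fec) -> set of downstream neighbours

def receive_label_mapping(lsr: str, fec: int, downstream: str, upstream_of: dict):
    entry = p2mp_state.setdefault((lsr, fec), set())
    first_mapping = not entry          # only the first mapping creates new state
    entry.add(downstream)
    if first_mapping and lsr in upstream_of:
        # Propagate one mapping upstream; later mappings only add branches locally.
        receive_label_mapping(upstream_of[lsr], fec, lsr, upstream_of)

# Upstream neighbour toward the root (the root PE has no upstream).
upstream_of = {"PE2": "P1", "PE3": "P1", "PE4": "P1", "P1": "PE1"}
for leaf in ("PE2", "PE3", "PE4"):
    receive_label_mapping(leaf, 1, downstream="receiver", upstream_of=upstream_of)

print(p2mp_state[("P1", 1)])   # {'PE2', 'PE3', 'PE4'} merged at the branch LSR
print(p2mp_state[("PE1", 1)])  # {'P1'} - a single mapping reached the root
```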

MPLS Multicast Services: IP Multicast using dynamic LDP LSP signaling


[Figure: A leaf PE converts a received PIM join for (S1,G1) into an LDP Label Mapping for a P2MP FEC that encodes S1,G1. The Label Mapping propagates hop by hop toward the root PE, which converts it back into a PIM join for (S1,G1) toward the source, so the (S1,G1) traffic flows over the dynamically signaled P2MP LSP.]
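The conversion in the figure amounts to carrying the (S,G) pair inside the P2MP FEC so the root can regenerate the PIM join. The sketch below is illustrative only; the TLV type code and byte layout are assumptions rather than anything stated on the slides.

```python
# Illustrative sketch (TLV type code and layout are assumptions) of in-band
# signaling: leaf PE encodes PIM (S,G) into an opaque value of the P2MP FEC,
# root PE decodes it back into a PIM (S,G) join.
import ipaddress
import struct

def sg_to_opaque(source: str, group: str) -> bytes:
    # Pack the (S,G) pair as type/length/value; type 3 is assumed here.
    value = ipaddress.IPv4Address(source).packed + ipaddress.IPv4Address(group).packed
    return struct.pack("!BH", 3, len(value)) + value

def opaque_to_sg(opaque: bytes) -> tuple[str, str]:
    # The root PE reverses the encoding to rebuild the PIM (S,G) join.
    _type, length = struct.unpack("!BH", opaque[:3])
    value = opaque[3:3 + length]
    return str(ipaddress.IPv4Address(value[:4])), str(ipaddress.IPv4Address(value[4:8]))

opaque = sg_to_opaque("10.0.0.1", "232.1.1.1")   # leaf PE: PIM join -> label mapping
print(opaque_to_sg(opaque))                       # root PE: label mapping -> PIM join
```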

MPLS P2MP LSP: Scaling


[Figure: Equivalent RSVP-TE P2MP and LDP P2MP trees rooted at the ingress/head-end node for SRC 1, traversing transit LSRs, branch LSRs and bud LSRs, and terminating on egress/leaf nodes that deliver the stream to receivers.]

MPLS P2MP LSP: Network Resource Utilization


RSVP-TE
Intelligent CSPF calculation avoids parallel multicast paths and eliminates wasted network resources.
Parallel data streams can otherwise be set up due to remerge and crossover of S2L paths.
Branch points can be pushed much closer to the leaf nodes; the impact is higher S2L control-plane load on upstream nodes, but the data plane is merged.
CSPF calculation allows efficient use of network resources; each S2L uses an individually optimized path rather than a tree-level optimized distribution.
Tree-level CSPF calculation (Steiner tree) allows the most efficient resource utilization (a small comparison sketch follows the LDP column).
Automatic adjustment of bandwidth based on end-user requirements.

LDP
No wasted duplication of streams in the network.
Upstream ECMP based on the root node provides efficient load balancing.
Pushing the branch point closer to the leaf nodes cannot be controlled.
Replication can be inefficient because the branch node sits much closer to the root node while the network diverges along the path to the leaf nodes.
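The difference between per-S2L optimization and tree-level (Steiner) optimization can be made concrete with a small sketch. The topology and link costs are invented, and networkx stands in for a CSPF engine; the point is only that individually shortest root-to-leaf paths can add up to more capacity than a tree computed as a whole.

```python
# Minimal sketch (invented topology/costs; networkx as a stand-in CSPF engine)
# contrasting per-S2L path optimization with tree-level (Steiner) optimization.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from([
    ("PE1", "P1", 3.0),                                     # link to a shared branch LSR
    ("P1", "PE2", 1.0), ("P1", "PE3", 1.0), ("P1", "PE4", 1.0),
    ("PE1", "PE2", 3.5), ("PE1", "PE3", 3.5), ("PE1", "PE4", 3.5),
])
root, leaves = "PE1", ["PE2", "PE3", "PE4"]

# Per-S2L optimization: union of the individually cheapest root->leaf paths.
s2l_edges = set()
for leaf in leaves:
    hops = nx.shortest_path(G, root, leaf, weight="weight")
    s2l_edges.update(zip(hops, hops[1:]))
s2l_cost = sum(G[u][v]["weight"] for u, v in s2l_edges)

# Tree-level optimization: approximate Steiner tree spanning root and leaves.
tree = steiner_tree(G, [root] + leaves, weight="weight")

print(f"per-S2L union cost: {s2l_cost}")                    # each leaf takes its own path
print(f"Steiner tree cost:  {tree.size(weight='weight')}")  # cheaper shared tree
```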


MPLS P2MP LSP: Quality of Service


RSVP-TE
Ability to meet end-to-end QoS guarantees without compromising network resources: bandwidth, class-type, admin-groups, etc.
Meets end-to-end SLAs even during failure.
Provides multiple levels of QoS guarantees (priority and preemption).

LDP
Best-effort service; relies on ECMP and provisioning overhead.
No control of traffic during failure.
Requires over-provisioning of the network to accommodate the worst-case scenario.
Inefficient use of network resources.


MPLS P2MP LSP: Control Plane Scaling


RSVP-TE
Path refresh can be aggregated, but there is no merging of state; a timer runs for each S2L.
Periodic refresh can be avoided using refresh reduction: control traffic on the wire is reduced, and control-plane processing is reduced but not eliminated (a back-of-envelope estimate follows the LDP column).
Explicit-path processing is performed per S2L even though the path is shared.
CSPF calculation per S2L consumes CPU cycles. Tree-level calculation can be done for all S2Ls, but it is CPU intensive and cannot be done frequently, since a new tree calculation may impact existing paths that must not be disturbed.
Tree-level calculation can be done for major network events; per-S2L CSPF calculation must be used for minor network events.

LDP
No periodic refresh.
Merging of control-plane state at the branch point.
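To give a feel for the per-S2L soft-state load the RSVP-TE column refers to, here is a back-of-envelope estimate. The leaf count is invented; 30 seconds is the common RSVP refresh default. Refresh reduction shrinks the on-wire traffic, but the per-S2L state and timers remain, whereas LDP P2MP keeps no periodic refresh at all.

```python
# Back-of-envelope estimate (leaf count invented; 30 s is the usual RSVP
# refresh default) of per-S2L refresh load without refresh reduction.
n_s2l = 1000                   # assumed number of S2L sub-LSPs on one P2MP LSP
refresh_interval_s = 30.0      # standard RSVP refresh default

msgs_per_second = 2 * n_s2l / refresh_interval_s   # Path out + Resv in, roughly
print(f"~{msgs_per_second:.0f} refresh messages/s handled at the ingress "
      f"for {n_s2l} S2Ls without refresh reduction")
```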


MPLS P2MP LSP: Impact During Failure/Reroute


RSVP-TE
A failure on any node along the path affects the end-to-end LSP.
Tree rerouting could potentially be triggered if the optimization threshold is reached.
S2Ls for leaves that were not affected by the failure are re-signaled.
Impact on the data path can be controlled and distributed to preserve QoS.
Make-before-break allows transition to a new LSP, with no loss of traffic during LSP path optimization.

LDP
A failure on a node does not impact other leaf nodes in the network; the LSP is locally repaired by selecting a new upstream node.
Impact on the data path must be planned for the worst-case failure.
Make-before-break allows transition to a new, better upstream node, with no loss of traffic during LSP path optimization.


MPLS P2MP LSP: Data Path Protection


RSVP-TE
FRR can be used to achieve sub-50 ms link or node failover protection.
Node protection using FRR can in certain cases lead to a traffic spike due to multiple replications on a single link: the PLR node must send an individual copy of the stream to each merge-point node if the node being protected is a branch node (a worked example follows the LDP column).
Efficient node protection and sub-50 ms failover can also be achieved by receiving duplicate end-to-end streams over disjoint paths at the leaf node; a multipoint BFD session over the P2MP LSP is used to check the status of upstream nodes and select the active stream.
End-to-end disjoint RSVP paths can be set up based on SRLG, admin-groups, or multi-topology IGP.

LDP
LFA or an RSVP-TE backup LSP can be used to achieve sub-50 ms failover protection based on local repair.
End-to-end data protection uses a redundant stream at the leaf and multipoint BFD over the multicast tree.
End-to-end disjoint LDP paths can be set up based on multi-topology IGP or by FEC coloring of the LDP P2MP FEC.
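The replication spike mentioned in the RSVP-TE column is easiest to see with a tiny worked example; the rate and merge-point count below are invented.

```python
# Tiny worked example (rates and counts invented) of the FRR replication spike:
# when node protection bypasses a branch node, the PLR sends one copy of the
# stream per merge point, often over the same outgoing link.
stream_mbps = 8       # assumed rate of one multicast channel
merge_points = 3      # downstream neighbours of the protected branch node

print(f"steady state: {stream_mbps} Mb/s on the link")
print(f"during node-protection FRR: {stream_mbps * merge_points} Mb/s "
      f"({merge_points} copies, one per merge point)")
```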


Multicast P2MP LSP resilience


[Figure: Primary (SRC1) and backup (SRC2) P2MP LSPs delivering the same streams to a common set of egress leaf PEs, which receive PIM/IGMP joins from downstream; an LSP UFD session runs over each P2MP LSP.]

Primary and backup multicast sources forward over two separate P2MP LSPs that have a common set of egress leaf nodes but no shared links in the core.
The egress LER receives duplicate multicast streams on its primary and secondary tunnel interfaces.
The ingress LER establishes a multipoint Unidirectional Forwarding Detection (UFD) session to the egress LERs over the P2MP LSP, running at a 10 ms interval.
If the egress LER misses a number of successive UFD packets on the primary tunnel interface, it declares the primary P2MP LSP down and moves the affected <S,G> records to the secondary tunnel interface.
Switchover of multiple <S,G> entries can be done with a single operation in the forwarding plane, giving a consistent failover time for all multicast flows (see the sketch below).
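The single-operation switchover can be sketched as follows. The 10 ms UFD interval comes from the slide; the detect multiplier and class names are assumptions. The key design point is that every <S,G> record references one shared "active tunnel" selector, so flipping that selector moves all flows at once.

```python
# Minimal sketch (detect multiplier and names assumed) of the egress-LER
# switchover: all <S,G> records share one active-tunnel selector.
import time

UFD_INTERVAL = 0.010      # 10 ms UFD packet interval (from the slide)
DETECT_MULTIPLIER = 3     # assumed number of missed packets before declaring failure

class EgressLer:
    def __init__(self, sg_records):
        self.sg_records = sg_records          # list of (S, G) tuples
        self.active = "primary"               # single selector shared by all <S,G>
        self.last_ufd = {"primary": time.monotonic(), "secondary": time.monotonic()}

    def on_ufd_packet(self, tunnel):
        self.last_ufd[tunnel] = time.monotonic()

    def poll(self):
        # After DETECT_MULTIPLIER missed UFD packets on the primary, declare it
        # down and move every <S,G> record to the secondary in one operation.
        if (self.active == "primary" and
                time.monotonic() - self.last_ufd["primary"] >
                DETECT_MULTIPLIER * UFD_INTERVAL):
            self.active = "secondary"
            print(f"primary down: {len(self.sg_records)} <S,G> flows switched together")

ler = EgressLer(sg_records=[("S1", "G1"), ("S1", "G2")])
# on_ufd_packet() runs per received UFD packet; poll() runs periodically.
```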

MPLS LSP: Control and data plane scaling.


MP2MP vs. P2MP LSPs for an N:N multicast application: with N participating PEs, the P2MP approach needs one P2MP LSP rooted at every PE (N LSPs in total), while a single MP2MP LSP can serve all of them (see the state-count example below).
[Figure: A single LDP MP2MP LSP (MP2MP-FEC 1) interconnecting all participating PEs; every PE exchanges LDP Label Mapping messages for the same MP2MP FEC with its neighbors.]
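A small state-count comparison makes the scaling argument concrete. The counting is deliberately simplified (one leaf entry per remote PE, one membership per PE on the MP2MP LSP); it is only meant to show how the two approaches grow with the number of PEs.

```python
# Simplified state-count comparison behind "N P2MP LSPs vs 1 MP2MP LSP"
# for an N:N application spanning N PEs.
def p2mp_state(n_pe: int) -> int:
    # One P2MP LSP rooted at every PE, each reaching the other N-1 PEs.
    return n_pe * (n_pe - 1)

def mp2mp_state(n_pe: int) -> int:
    # A single MP2MP LSP joined by all N PEs.
    return n_pe

for n in (4, 16, 64):
    print(f"{n} PEs: P2MP leaf entries ~{p2mp_state(n)}, MP2MP memberships ~{mp2mp_state(n)}")
```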
