V600R001C00
Issue 04
Date 2010-08-31
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or representations
of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute the warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: support@huawei.com
Purpose
This document describes the configuration methods of IP multicast networks in terms of basic principles, protocol implementation, configuration procedures, and configuration examples for IP multicast on the NE80E/40E.
Related Versions
The following table lists the product versions related to this document.
Intended Audience
The intended audience of this document is:
- Commissioning Engineer
- Data Configuration Engineer
- Network Monitoring Engineer
- System Maintenance Engineer
Organization
This document is organized as follows.
Chapter    Describes

4 PIM-DM (IPv4) Configuration    This chapter describes the PIM-DM (IPv4) fundamentals, configuration steps, and maintenance for PIM-DM functions, along with typical examples.

5 PIM-SM (IPv4) Configuration    This chapter describes the PIM-SM (IPv4) and SSM fundamentals, configuration steps, and maintenance for PIM-SM functions, along with typical examples.

8 IPv4 Multicast VPN Configuration    This chapter describes the MD VPN fundamentals, configuration steps, and maintenance for MD VPN functions, along with typical examples.

9 IPv4 Multicast CAC Configuration    This chapter describes the configurations and maintenance of IPv4 multicast CAC, and provides configuration examples.

10 IPv4 Multicast Routing Management    This chapter describes the RPF fundamentals, configuration steps, and maintenance for RPF functions, along with typical examples and troubleshooting cases.

12 PIM-DM (IPv6) Configuration    This chapter describes the PIM-DM (IPv6) fundamentals, configuration steps, and maintenance for PIM-DM functions, along with typical examples.

13 PIM-SM (IPv6) Configuration    This chapter describes the PIM-SM (IPv6) and SSM fundamentals, configuration steps, and maintenance for PIM-SM functions, along with typical examples.

14 IPv6 Multicast Routing Management    This chapter describes the RPF fundamentals, configuration steps, and maintenance for RPF functions, along with typical examples and troubleshooting cases of IPv6 multicast.
Conventions
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
General Conventions
The general conventions that may be found in this document are defined as follows.
Convention Description
Courier New Examples of information displayed on the screen are in
Courier New.
Command Conventions
The command conventions that may be found in this document are defined as follows.
Convention Description
&<1-n> The parameter before the & sign can be repeated 1 to n times.
GUI Conventions
The GUI conventions that may be found in this document are defined as follows.
Convention Description
Keyboard Operations
The keyboard operations that may be found in this document are defined as follows.
Format Description
Key Press the key. For example, press Enter and press Tab.
Key 1+Key 2 Press the keys concurrently. For example, pressing Ctrl+Alt
+A means the three keys should be pressed concurrently.
Key 1, Key 2 Press the keys in turn. For example, pressing Alt, A means
the two keys should be pressed in turn.
Mouse Operations
The mouse operations that may be found in this document are defined as follows.
Action Description
Click Select and release the primary mouse button without moving
the pointer.
Drag Press and hold the primary mouse button and move the
pointer to a certain position.
Update History
Updates between document issues are cumulative. Therefore, the latest document issue contains
all updates made in previous issues.
Contents
2 IGMP Configuration
2.1 IGMP Introduction
2.1.1 IGMP Overview
2.1.2 IGMP Features Supported by the NE80E/40E
2.2 Configuring Basic IGMP Functions
2.2.1 Establishing the Configuration Task
2.2.2 Enabling IP Multicast Routing
2.2.3 Enabling Basic IGMP Functions
2.2.4 Configuring IGMP Version
2.2.5 Configuring a Static IGMP Group
2.2.6 (Optional) Configuring an Interface to Join a Multicast Group in a Certain Range
2.2.7 Checking the Configuration
2.3 Configuring Options of an IGMP Packet
2.3.1 Establishing the Configuration Task
2.3.2 Configuring a Router to Reject IGMP Packets Without the Router-Alert Option
2.3.3 Configuring a Router to Send IGMP Packets Without the Router-Alert Option
2.3.4 Configuring Host Address-based IGMP Report Message Filtering
6 MSDP Configuration
6.1 MSDP Introduction
6.1.1 MSDP Overview
6.1.2 MSDP Features Supported by the NE80E/40E
6.2 Configuring PIM-SM Inter-domain Multicast
6.2.1 Establishing the Configuration Task
6.2.2 Configuring Intra-AS MSDP Peers
6.2.3 Configuring Inter-AS MSDP Peers on MBGP Peers
6.2.4 Configuring Static RPF Peers
6.2.5 Checking the Configuration
6.3 Configuring an Anycast RP in a PIM-SM Domain
6.3.1 Establishing the Configuration Task
6.3.2 Configuring the Interface Address of an RP
6.3.3 Configuring a C-RP
6.3.4 Statically Configuring an RP
6.3.5 Configuring an MSDP Peer
6.3.6 Specifying the Logical RP Address for an SA Message
6.3.7 Checking the Configuration
6.4 Managing MSDP Peer Connections
6.4.1 Establishing the Configuration Task
6.4.2 Controlling the Sessions Between MSDP Peers
6.4.3 Adjusting the Interval for Retrying to Set Up an MSDP Peer Connection
7 MBGP Configuration
7.1 MBGP Introduction
7.1.1 MBGP Overview
7.1.2 MBGP Features Supported by the NE80E/40E
7.2 Configuring Basic MBGP Functions
7.2.1 Establishing the Configuration Task
11 MLD Configuration
11.1 MLD Introduction
11.1.1 MLD Overview
11.1.2 MLD Features Supported by the NE80E/40E
11.2 Configuring Basic MLD Functions
11.2.1 Establishing the Configuration Task
11.2.2 Enabling IPv6 Multicast Routing
11.2.3 Enabling MLD
11.2.4 (Optional) Configuring the MLD Version
11.2.5 (Optional) Configuring an Interface to Statically Join a Group
11.2.6 (Optional) Configuring the Range of Groups an Interface Can Join
11.2.7 Checking the Configuration
11.3 Configuring Options of an MLD Packet
11.3.1 Establishing the Configuration Task
A Glossary
B Acronyms and Abbreviations
224.0.0.0 to 224.0.0.255    Indicates the reserved group addresses for local links. The addresses are reserved by the Internet Assigned Numbers Authority (IANA) for routing protocols and are called permanent multicast group addresses. They identify a group of specific network devices and are not used for multicast forwarding.
[Figure: a typical IPv4 multicast network. PIM runs between the routers, MSDP runs between PIM-SM domains, and IGMP runs between the last-hop routers and the users; the multicast source connects to the first-hop router.]
CAUTION
Customize configuration solutions according to the actual network conditions and service
requirements. The configuration solution in this section functions only as a reference.
The network environments are classified into two types, which need different configuration
solutions. For details, refer to the HUAWEI NetEngine80E/40E Router Configuration Guide -
IP Multicast.
NOTE
Ensure that unicast routes work normally in the network before configuring IP multicast.
Small-Scale Network
A small-scale network, such as a test network, is suitable to implement multicast data
transmission in a Local Area Network (LAN), and does not interconnect with the Internet.
Large-Scale Network
A large-scale network is suitable to bear multicast services on an ISP network, and interconnects
with the Internet.
For details, refer to the chapter IPv4 Multicast Routing Management in the HUAWEI
NetEngine80E/40E Router Configuration Guide - IP Multicast.
[Figure: IPv6 multicast address format, with an 80-bit reserved field followed by a 32-bit group ID.]
Table 1-3 shows the scopes and meanings of fixed IPv6 multicast addresses.
FF1x::/32, FF2x::/32 (x cannot be 1 or 2)    Indicates ASM addresses. The addresses are valid in the entire network.

FF3x::/32 (x cannot be 1 or 2)    Indicates SSM addresses. This is the default SSM group address scope, and is valid in the entire network.
[Figure: a typical IPv6 multicast network. A multicast source and server connect to the IPv6 network, PIM runs between the routers, and MLD runs between the last-hop routers and the receivers (UserA through UserD).]
CAUTION
Customize the configuration solutions according to the actual network conditions and service
requirements. The configuration solution in this section functions only as a reference.
The network environments are classified into two types, which are suitable for different
configuration solutions. For details, refer to the HUAWEI NetEngine80E/40E Router
Configuration Guide - IP Multicast.
NOTE
Ensure that IPv6 unicast routes work normally in the network before configuring IP multicast.
Small-Scale Network
A small-scale network, such as the test network, is suitable to implement multicast data
transmission in a Local Area Network (LAN), and does not interconnect with the Internet.
Large-Scale Network
A large-scale network is suitable to carry multicast services on an ISP network, and interconnects
with the Internet.
Perform the following configurations:
1. Enable multicast on all routers in the network.
2. Enable PIM-IPv6-SM on all router interfaces.
3. Enable MLD on router interfaces connected to hosts.
4. Configure an RP. You can configure an embedded RP, a static RP, or a BSR-RP.
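Based on the steps above, a minimal command sketch follows. The interface names and the RP address are hypothetical, and the exact syntax should be verified against the command reference for your software version:

```
<HUAWEI> system-view
[HUAWEI] multicast ipv6 routing-enable              # Step 1: enable IPv6 multicast globally
[HUAWEI] interface gigabitethernet 1/0/0            # hypothetical transit interface
[HUAWEI-GigabitEthernet1/0/0] pim ipv6 sm           # Step 2: enable PIM-IPv6-SM on each interface
[HUAWEI-GigabitEthernet1/0/0] quit
[HUAWEI] interface gigabitethernet 2/0/0            # hypothetical host-facing interface
[HUAWEI-GigabitEthernet2/0/0] pim ipv6 sm
[HUAWEI-GigabitEthernet2/0/0] mld enable            # Step 3: enable MLD toward hosts
[HUAWEI-GigabitEthernet2/0/0] quit
[HUAWEI] pim-ipv6                                   # Step 4: one RP option, a static RP
[HUAWEI-pim6] static-rp 2001:db8::1                 # hypothetical RP address
```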
2 IGMP Configuration
This chapter describes the IGMP fundamentals and configuration steps, and maintenance
commands for IGMP functions, along with typical examples.
2.1 IGMP Introduction
This section describes the basic principle of IGMP and IGMP features supported by the NE80E/
40E.
2.2 Configuring Basic IGMP Functions
This section describes the applications of IGMP and how to configure basic IGMP functions.
2.3 Configuring Options of an IGMP Packet
This section describes how to configure IGMP packet options.
2.4 Configuring IGMP Query Control
This section describes how to configure IGMPv1 and IGMPv2/v3 queriers.
2.5 Configuring SSM Mapping
This section describes how to configure SSM mapping.
2.6 Configuring the IGMP Limit Function
This section describes how to configure the IGMP limit function.
2.7 Maintaining IGMP
This section describes how to clear the statistics of IGMP and monitor the running status of
IGMP.
2.8 Configuration Examples
This section provides several configuration examples of IGMP.
As multicast is widely applied, more and more hosts join multicast groups. Managing multicast groups and their members on routers becomes an important issue.
In the TCP/IP suite, the Internet Group Management Protocol (IGMP) manages IPv4 multicast
members. It establishes and maintains the relationship between IP hosts and routers that are
directly connected to the IP hosts.
IGMP is a signaling mechanism that hosts use toward routers on the leaf network of IP multicast. IGMP consists of two functional parts: the host side and the router side.
NOTE
The Operating System (OS) of a host determines which version of IGMP is supported by the host.
- All hosts that participate in multicast transmission must be enabled with IGMP. Hosts can join or leave the related multicast groups at any time, and the number of hosts is not limited.
- Through IGMP, a multicast router can know whether a member of a certain group exists on the network segment to which each interface of the router is connected. Hosts store information only about the multicast groups they join.
At present, IGMP has three versions: IGMPv1 (defined in RFC 1112), IGMPv2 (defined in RFC 2236), and IGMPv3 (defined in RFC 3376). All IGMP versions support the Any-Source Multicast (ASM) model. IGMPv3 can be directly applied to the Source-Specific Multicast (SSM) model, whereas IGMPv1 and IGMPv2 require the support of SSM mapping when they are applied to the SSM model.
- Supporting IGMPv1, IGMPv2, and IGMPv3. The IGMP version can be configured.
- Supporting static IGMP.
- Configuring the range of multicast groups that an interface can join.
Currently, IGMPv3 supported by the NE80E/40E can process packets that match either of the following conditions:
- The group address is within the SSM group address range.
- The group address is within the ASM group address range, the mode of the Group Record is MODE_IS_EXCLUDE or CHANGE_TO_EXCLUDE_MODE, and the source address list is null.
Packets that do not match these conditions are not processed.
Router-Alert
Through the Router-Alert option, IGMP sends messages for groups that the local device has not joined to the upper-layer protocol for processing.
As required, users can determine whether to set the Router-Alert option in the IGMP packets to be sent, and whether to require that received IGMP packets contain the Router-Alert option.
SSM-Mapping
You can configure SSM mapping on routers to provide SSM services for hosts that run IGMPv1
or IGMPv2.
Applicable Environment
IGMP is applicable to the network segment where routers are connected to hosts. Both routers and hosts need to run IGMP. This section describes only how to configure IGMP on routers.
Before configuring IGMP, enable IP multicast routing. IP multicast routing is the precondition
for configuring all multicast functions. If IP multicast routing is disabled, the configurations
related to multicast are deleted.
You need to enable IGMP on the interface connected to hosts. Because the packet formats of
IGMPv1, IGMPv2, and IGMPv3 are different, you need to specify the IGMP version for
routers and hosts first (the later version at the router side is compatible with the earlier version
at the host side). After this, you can perform other IGMP configurations.
You can set an ACL rule so that a host joins only the specified multicast groups and receives packets from these groups. The ACL rule serves as a filter on the associated interface and limits the range of groups that the interface can join.
Pre-configuration Tasks
Before configuring basic IGMP functions, complete the following tasks:
- Configuring the link layer protocol parameters and an IP address for each interface to make the link protocol of the interface Up
- Configuring a unicast routing protocol to make IP routes between nodes reachable
Data Preparation
To configure basic IGMP functions, you need the following data.
No. Data
1 IGMP version
2 Group address and source address used to configure static IGMP groups
NOTE
- The configuration in the IGMP view takes effect globally, whereas the configuration in the interface view takes effect only on the interface.
- When a command is not used in the interface view, the global values set in the IGMP view are used. When the command is used in both views, the values set in the interface view take precedence.
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
Step 2 Run:
multicast routing-enable
By default, IPv4 multicast routing is not enabled in the public network instance.
CAUTION
Configurations related to VPN instances are applicable only to PE routers. On the PE, if the
VPN instance interface is connected to hosts, you need to perform Step 3 and Step 4.
----End
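The procedure above can be sketched as the following command sequence. The VPN instance name and route distinguisher are hypothetical, and the VPN part applies only to PE routers; verify the syntax against the command reference:

```
<HUAWEI> system-view
[HUAWEI] multicast routing-enable                   # enable IPv4 multicast in the public network instance
# On a PE whose VPN instance interface connects to hosts (Steps 3 and 4):
[HUAWEI] ip vpn-instance VPNA                       # hypothetical VPN instance
[HUAWEI-vpn-instance-VPNA] route-distinguisher 100:1
[HUAWEI-vpn-instance-VPNA] multicast routing-enable # enable multicast routing in the VPN instance
```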
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
IGMP is enabled.
By default, IGMP is not enabled on the interface.
----End
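As a minimal sketch of this procedure, with a hypothetical host-facing interface:

```
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0            # hypothetical interface connected to hosts
[HUAWEI-GigabitEthernet1/0/0] igmp enable           # IGMP is not enabled on interfaces by default
```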
Context
CAUTION
All the routers on the same subnet must be configured with the same IGMP version. By default,
IGMPv2 is adopted.
Procedure
Step 1 Run:
system-view
----End
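For example, to change the version from the default IGMPv2 on a hypothetical host-facing interface (all routers on the subnet must use the same version):

```
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0            # hypothetical interface
[HUAWEI-GigabitEthernet1/0/0] igmp version 3        # default is IGMPv2
```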
Context
The configuration is optional. By default, the interface does not statically join any multicast
group.
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
- To configure a sub-interface for QinQ termination or Dot1q termination to statically join one or multiple multicast groups, run:
igmp static-group group-address [ inc-step-mask { group-mask | group-mask-length } number group-number ] { qinq pe-vid pe-vid ce-vid low-ce-vid [ to high-ce-vid ] | dot1q vid low-pe-vid [ to high-pe-vid ] }
NOTE
The static group with VLAN tag can be configured only on the sub-interface for QinQ termination or
the sub-interface for Dot1q termination.
After the interface joins the multicast groups, the router considers that the members of the
multicast groups exist on the network segment where the interface resides.
----End
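As a sketch, with a hypothetical interface and group addresses (the batch form follows the inc-step-mask syntax shown above; verify the values against the command reference):

```
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0                  # hypothetical host-facing interface
[HUAWEI-GigabitEthernet1/0/0] igmp static-group 225.1.1.1 # join one group statically
# Batch form: join 10 consecutive groups starting at 225.1.2.1 (hypothetical values)
[HUAWEI-GigabitEthernet1/0/0] igmp static-group 225.1.2.1 inc-step-mask 0.0.0.1 number 10
```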
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
Step 3 Run:
igmp group-policy { acl-number | acl-name acl-name } [ 1 | 2 | 3 ]
The range of multicast groups that the interface is allowed to join is configured.
----End
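A sketch of this procedure, using a hypothetical basic ACL that permits the 225.1.0.0/16 group range:

```
<HUAWEI> system-view
[HUAWEI] acl number 2001                              # hypothetical basic ACL
[HUAWEI-acl-basic-2001] rule permit source 225.1.0.0 0.0.255.255
[HUAWEI-acl-basic-2001] quit
[HUAWEI] interface gigabitethernet 1/0/0              # hypothetical host-facing interface
[HUAWEI-GigabitEthernet1/0/0] igmp group-policy 2001  # limit groups the interface can join
```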
Procedure
- Run the display igmp [ vpn-instance vpn-instance-name | all-instance ] interface [ interface-type interface-number | up | down ] [ verbose ] command to check the configuration and running of IGMP on an interface.
- Run the display igmp [ vpn-instance vpn-instance-name | all-instance ] group [ group-address | interface interface-type interface-number ] * static command to check the members of static IGMP multicast groups.
- Run the display igmp [ vpn-instance vpn-instance-name | all-instance ] group [ group-address | interface interface-type interface-number ] * verbose command to check the members of an IGMP multicast group.
----End
Applicable Environment
The Router-Alert option is used to send IGMP messages, whose multicast group is not specified by the upper-layer protocol of the IP layer, to the upper-layer protocol for processing. For details about Router-Alert, refer to RFC 2113.
IGMP Report message filtering is achieved by checking the host addresses in the IP headers that encapsulate IGMP Report messages.
Pre-configuration Tasks
Before configuring options of an IGMP message, complete the following task:
Data Preparation
To configure options of an IGMP message, you need the following data.
No. Data
2 Type and number of the interface on which IGMP Report messages need to be filtered
Context
By default, routers do not check the Router-Alert option contained in IGMP packets. That is,
routers process all the received IGMP packets, including the IGMP packets without the Router-
Alert option.
When a user does not want to receive the IGMP packets without the Router-Alert option, do as follows on the router connected to the user:
NOTE
Procedure
- Global Configuration
1. Run:
system-view
The router is configured to receive only the IGMP packets with the Router-Alert
option.
- Configuration on an Interface
1. Run:
system-view
The router is configured to receive only the IGMP packets with the Router-Alert
option.
----End
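The global and per-interface variants can be sketched as follows. The interface name is hypothetical; verify the syntax against the command reference:

```
<HUAWEI> system-view
[HUAWEI] igmp                                         # enter the IGMP view for the global setting
[HUAWEI-igmp] require-router-alert                    # accept only packets carrying Router-Alert
[HUAWEI-igmp] quit
# Or on a single interface:
[HUAWEI] interface gigabitethernet 1/0/0              # hypothetical interface
[HUAWEI-GigabitEthernet1/0/0] igmp require-router-alert
```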
Context
By default, the IGMP packets sent by routers contain the Router-Alert option.
To configure a router to send the IGMP packets without the Router-Alert option, do as follows
on the router connected to hosts:
NOTE
Procedure
- Global Configuration
1. Run:
system-view
The header of a sent IGMP packet does not contain the Router-Alert option.
- Configuration on an Interface
1. Run:
system-view
The header of a sent IGMP packet does not contain the Router-Alert option.
----End
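Since sending with the Router-Alert option is the default, the undo form disables it; as a sketch, with a hypothetical interface:

```
<HUAWEI> system-view
[HUAWEI] igmp                                         # global setting in the IGMP view
[HUAWEI-igmp] undo send-router-alert                  # send IGMP packets without Router-Alert
[HUAWEI-igmp] quit
# Or on a single interface:
[HUAWEI] interface gigabitethernet 1/0/0              # hypothetical interface
[HUAWEI-GigabitEthernet1/0/0] undo igmp send-router-alert
```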
Context
By default, the router does not filter the received IGMP Report messages based on host addresses.
The rules for filtering IGMP Report messages based on host addresses are as follows:
- IGMP Report messages whose host addresses are on the same network segment as the addresses of their inbound interfaces, or whose host addresses are 0.0.0.0, are processed.
- IGMP Report messages whose host addresses in the IP headers are on different network segments from the addresses of their inbound interfaces are discarded.
When you need to filter IGMP Report messages based on host addresses, do as follows on the
interface of the router connected with the host:
Procedure
Step 1 Run:
system-view
----End
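As a sketch of enabling this filtering on a hypothetical host-facing interface (the command name igmp ip-source-policy is an assumption here; confirm it in the command reference for your version):

```
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0              # hypothetical interface connected to hosts
[HUAWEI-GigabitEthernet1/0/0] igmp ip-source-policy   # assumed command: filter Reports by host address
```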
Procedure
- Run the display igmp [ vpn-instance vpn-instance-name | all-instance ] group [ group-address | interface interface-type interface-number ] * [ static ] [ verbose ] command to check information about the members of an IGMP group.
- Run the display igmp [ vpn-instance vpn-instance-name | all-instance ] interface [ interface-type interface-number | up | down ] [ verbose ] command to check the configuration and running of IGMP on an interface.
----End
Applicable Environment
CAUTION
Many IGMP interfaces exist on the network, and these interfaces affect one another. Therefore, ensure that the IGMP parameter values on all the IGMP router interfaces on the same network segment are identical. Otherwise, the network may become faulty.
The querier periodically sends IGMP query messages on a shared network connected to
receivers. When receiving a Report message from a member, the querier refreshes information
about the members.
If non-queriers do not receive any General Query message within the Keepalive period of other IGMP queriers, the current querier is considered faulty, and a new round of querier election is triggered automatically.
In ADSL dial-up access, each port corresponds to a single host, so the querier serves only one host. When a receiver frequently joins or leaves multiple multicast groups, for example, when switching between TV channels, you can enable the fast-leave mechanism on the querier.
Pre-configuration Tasks
Before configuring IGMP query control, complete the following tasks:
Data Preparation
To configure IGMP query control, you need the following data.
No. Data
2 Robustness variable
Context
CAUTION
This configuration is applicable only to IGMPv1.
NOTE
Procedure
- Global Configuration
1. Run:
system-view
3. Run:
timer query interval
----End
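The global query interval setting can be sketched as follows; the 60-second value is hypothetical:

```
<HUAWEI> system-view
[HUAWEI] igmp                                         # enter the IGMP view
[HUAWEI-igmp] timer query 60                          # hypothetical general query interval, in seconds
```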
Context
CAUTION
This configuration is applicable only to IGMPv2 and IGMPv3. In actual configuration, ensure
that the interval for sending general query messages is greater than the maximum response time
but is smaller than the Keepalive time of other IGMP queriers.
NOTE
Procedure
- Global Configuration
1. Run:
system-view
The maximum response time is set for the Query messages on the IGMP router.
By default, the maximum IGMP response time is 10 seconds.
6. Run:
timer other-querier-present interval
The interval for sending IGMP last member query messages is set.
The shorter the interval is, the more flexible the querier is.
By default, the interval for sending IGMP last member query messages is 1 second.
- Configuration on an Interface
1. Run:
system-view
NOTE
----End
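On an interface, the timers mentioned above can be sketched as follows. All values are hypothetical and must respect the rule that the query interval is greater than the maximum response time but smaller than the Keepalive time of other queriers:

```
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0                     # hypothetical interface
[HUAWEI-GigabitEthernet1/0/0] igmp timer query 60            # general query interval, seconds
[HUAWEI-GigabitEthernet1/0/0] igmp max-response-time 10      # must be smaller than the query interval
[HUAWEI-GigabitEthernet1/0/0] igmp timer other-querier-present 125  # Keepalive of other queriers
```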
Procedure
- Run the display igmp [ vpn-instance vpn-instance-name | all-instance ] interface [ interface-type interface-number | up | down ] [ verbose ] command to check the configuration and running of IGMP on an interface.
- Run the display igmp [ vpn-instance vpn-instance-name | all-instance ] routing-table [ group-address [ mask { group-mask | group-mask-length } ] | source-address [ mask { source-mask | source-mask-length } ] ] * [ static ] [ outgoing-interface-number [ number ] ] command to check the IGMP routing table.
NOTE
IGMP routing entries are generated on an interface only when the interface is configured with IGMP
but not PIM and the interface functions as an IGMP querier. You can run the display igmp routing-
table command to view IGMP routing entries.
----End
Applicable Environment
In the network segment where multicast services are provided in the SSM mode, certain hosts
must run IGMPv1/v2 due to various limitations. To provide SSM services for the hosts, you
need to configure static SSM mapping on the router.
Pre-configuration Tasks
To configure SSM mapping, complete the following task:
Data Preparation
To configure SSM mapping, you need the following data.
No. Data
2 Multicast group address and mask, and multicast source address and mask
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
igmp enable
IGMP is enabled.
Step 4 Run:
igmp version 3
IGMPv3 is configured.
To ensure that hosts running any IGMP version on the network segment can obtain SSM services, it is recommended that you configure IGMPv3 on the interface.
Step 5 Run:
igmp ssm-mapping enable
----End
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
----End
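A static SSM mapping rule is configured in the IGMP view. As a sketch, mapping a hypothetical group range to a hypothetical source (verify the exact syntax against the command reference):

```
<HUAWEI> system-view
[HUAWEI] igmp                                         # enter the IGMP view
[HUAWEI-igmp] ssm-mapping 232.1.1.0 24 10.1.1.1       # hypothetical group range and source address
```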
Applicable Environment
The function of IGMP Limit is applicable to IPv4 PIM-SM and IPv4 PIM-DM networks.
To limit the number of IPTV Internet Content Providers (ICPs) and the number of users accessing IP core networks, you can configure the IGMP limit function.
The IGMP limit function is configured on the last-hop router connected to users. You can
perform the following configurations as required:
If the IGMP limit function needs to be configured globally, for a single instance, and for an interface on the same router, it is recommended that the limit on the number of global IGMP group memberships, the limit for the single instance, and the limit for the interface be in descending order.
Pre-configuration Tasks
Before configuring the IGMP limit function, complete the following task:
Data Preparation
To configure the IGMP limit function, you need the following data.
No. Data
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
NOTE
If except is not specified in the command, the router is limited by the maximum number of IGMP
entries when creating entries for all groups or source/group pairs.
Before specifying except, you need to configure the related ACL. The interface then filters received
IGMP Join messages according to the ACL. The number of entries that the ACL specifies to be
filtered is not limited by the maximum number of IGMP entries.
----End
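As a hedged sketch of the note above, the following assumes that the interface-level command takes the form igmp limit number except acl-number and that a basic ACL matching the exempted group address has been configured; the ACL number and group address are illustrative:
<RouterA> system-view
[RouterA] acl number 2001
[RouterA-acl-basic-2001] rule 5 permit source 225.1.1.1 0
[RouterA-acl-basic-2001] quit
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] igmp limit 30 except 2001
Entries for groups matching ACL 2001 are then filtered according to the ACL rather than counted against the limit of 30.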
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
----End
Example
# Run the display igmp interface command to view the configuration and running status of
IGMP on the router interface. The display information is as follows:
<RouterA> display igmp interface gigabitethernet 1/0/0
GigabitEthernet1/0/0(10.2.1.1):
IGMP is enabled
Current IGMP version is 2
IGMP state: up
IGMP group policy: none
IGMP limit: 40
Value of query interval for IGMP (negotiated): -
Value of query interval for IGMP (configured): 60 s
Value of other querier timeout for IGMP: 0 s
Value of maximum query response time for IGMP: 10 s
Querier for IGMP: 10.2.1.1 (this router)
Total 1 IGMP Group reported
From the display, you can see that the maximum number of IGMP group memberships allowed
on GE 1/0/0 of Router A is 40.
# Display information about the attackers on all LPUs.
<HUAWEI> display cp-rate-limit verbose
[Slot 2]
Interface: GigabitEthernet1/8/0/2.1
PeVid: 0
CeVid: 0
PassBytes(byte): 1391816
PassByteRate(kbps): 200
DropBytes(byte): 5827014740
DropByteRate(kbps): 838529
PassPackets(packet): 13384
PassPacketRate(pps): 240
DropPackets(packet): 56028990
DropPacketRate(pps): 1007848
Context
CAUTION
The IGMP groups that an interface dynamically joins are deleted after you run the reset igmp
group or the reset igmp group ssm-mapping command. As a result, receivers may fail to receive
multicast data. Confirm the action before you run either command.
Procedure
l Run the reset igmp [ vpn-instance vpn-instance-name | all-instance ] group { all |
interface interface-type interface-number { all | group-address [ mask { group-mask |
group-mask-length } ] [ source-address [ mask { source-mask | source-mask-length } ] ] } }
command in the user view to clear the IGMP groups that an interface dynamically joins.
----End
Context
In routine maintenance, you can run the following commands in any view to check the running
status of IGMP.
Procedure
l Run the display igmp [ vpn-instance vpn-instance-name | all-instance ] group [ group-
address | interface interface-type interface-number ] * [ static ] [ verbose ] command in
any view to check information about IGMP groups.
l Run the display igmp [ vpn-instance vpn-instance-name | all-instance ] group ssm-
mapping [ group-address | interface interface-type interface-number ] [ verbose ]
command in any view to check information about multicast groups involved in SSM
mapping.
Networking Requirements
In the IPv4 network shown in Figure 2-1, unicast routes are normal. It is required to implement
multicast in the network to enable hosts to receive the Video On Demand (VOD) information.
When hosts connected to an interface of a router need to receive a popular program for a long
time, you can statically add the interface to the multicast group. As shown in the following
network, if Host A and Host B need to receive the multicast data of multicast group 225.1.1.1
for a long time, statically add GE 1/0/0 of Router A to multicast group 225.1.1.1.
(Figure 2-1: Router A, with GE 1/0/0 at 10.110.1.1/24 and POS 2/0/0 at 192.168.1.1/24, connects
to leaf network N1, an Ethernet whose receivers are Host A and Host B. Router B, with GE 1/0/0
at 10.110.2.1/24 and POS 2/0/0 at 192.168.2.1/24, and Router C, with GE 1/0/0 at 10.110.2.2/24
and POS 2/0/0 at 192.168.3.1/24, connect to leaf network N2, an Ethernet whose receivers are
Host C and Host D. The routers interconnect through the PIM network.)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Enable multicast on each router, and configure IGMP and PIM-SM on the interface connected
to hosts.
# Enable multicast on Router A; enable IGMP and PIM-SM on Gigabit Ethernet 1/0/0; configure
the IGMP version to 2.
The configurations of Router B and Router C are the same as the configuration of Router A, and
are not mentioned here.
[RouterA] multicast routing-enable
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] pim sm
[RouterA-GigabitEthernet1/0/0] igmp enable
[RouterA-GigabitEthernet1/0/0] quit
Step 2 Add GE 1/0/0 on Router A to multicast group 225.1.1.1 statically. In this manner, the hosts
connected to GE 1/0/0 can steadily receive the multicast data sent to multicast group 225.1.1.1.
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] igmp static-group 225.1.1.1
# Run the display pim routing-table command on Router A to check whether GE 1/0/0 is statically
added to multicast group 225.1.1.1. If a (*, 225.1.1.1) entry is generated on Router A, the
downstream interface is GE 1/0/0, and the protocol type is static, GE 1/0/0 is successfully added
to multicast group 225.1.1.1.
<RouterA> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 0 (S, G) entry
(*, 225.1.1.1)
RP: 192.168.4.1
Protocol: pim-sm, Flag: WC
UpTime: 00:12:17
Upstream interface: Pos2/0/0
Upstream neighbor: 192.168.1.1
RPF prime neighbor: 192.168.1.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: static, UpTime: 00:12:17, Expires: -
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.1.1 255.255.255.0
pim sm
igmp enable
igmp static-group 225.1.1.1
#
interface Pos2/0/0
undo shutdown
link-protocol ppp
Networking Requirements
In the multicast network shown in Figure 2-2, PIM-SM is run in the network, and the ASM and
SSM models are used to provide multicast services. The interface connected to the receiver runs
IGMPv3. The IGMP version on the receiver is IGMPv2 and cannot be upgraded to IGMPv3.
The SSM group address range in the network is 232.1.1.0/24. Source 1, Source 2, and Source 3
send multicast data to the multicast groups in this range. It is required that the receiver receive
the multicast data only from Source 1 and Source 3.
(Figure 2-2: Source 1 at 133.133.1.1/24 connects to Router A; Source 2 at 133.133.2.1/24 connects
to GE 1/0/0 of Router B; Source 3 at 133.133.3.1/24 connects to GE 1/0/0 of Router C; the routers
interconnect through their GE 2/0/0 and GE 3/0/0 interfaces; the Receiver at 133.133.4.1/24
connects to Router D.)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete this configuration, you need the following data:
Procedure
Step 1 Configure an IP address for each interface and a unicast routing protocol on each router. The
configuration details are not mentioned here.
Step 2 Enable IGMP and SSM mapping on the interface connected to hosts.
[RouterD] multicast routing-enable
[RouterD] interface gigabitethernet 1/0/0
[RouterD-GigabitEthernet1/0/0] igmp enable
[RouterD-GigabitEthernet1/0/0] igmp version 3
[RouterD-GigabitEthernet1/0/0] igmp ssm-mapping enable
[RouterD-GigabitEthernet1/0/0] quit
Step 4 Configure the static SSM mapping rules on the router connected to hosts.
# Map the multicast groups within the 232.1.1.0/24 range to Source 1 and Source 3.
[RouterD] igmp
[RouterD-igmp] ssm-mapping 232.1.1.0 24 133.133.1.1
[RouterD-igmp] ssm-mapping 232.1.1.0 24 133.133.3.1
# Run the display pim routing-table command to view the PIM-SM routing table on a router.
The PIM-SM routing table on Router D is as follows:
<RouterD> display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 2 (S, G) entries
(133.133.1.1, 232.1.1.1)
Protocol: pim-ssm, Flag:SG_RCVR
UpTime: 00:11:25
Upstream interface: GigabitEthernet3/0/0
Upstream neighbor: 192.168.4.2
RPF prime neighbor: 192.168.4.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: ssm-map, UpTime: 00:11:25, Expires:-
(133.133.3.1, 232.1.1.1)
Protocol: pim-ssm, Flag:SG_RCVR
UpTime: 00:11:25
Upstream interface: GigabitEthernet2/0/0
Upstream neighbor: 192.168.3.1
RPF prime neighbor: 192.168.3.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: ssm-map, UpTime: 00:11:25, Expires:-
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
acl number 2000
rule 5 permit source 232.1.1.0 0.0.0.255
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 133.133.1.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 192.168.4.2 255.255.255.0
pim sm
#
ospf 1
area 0.0.0.0
network 133.133.1.0 0.0.0.255
network 192.168.1.0 0.0.0.255
network 192.168.4.0 0.0.0.255
#
pim
ssm-policy 2000
#
return
l Configuration file of Router B
#
sysname RouterB
#
multicast routing-enable
#
acl number 2000
rule 5 permit source 232.1.1.0 0.0.0.255
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 133.133.2.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 192.168.2.1 255.255.255.0
pim sm
#
ospf 1
area 0.0.0.0
network 133.133.2.0 0.0.0.255
network 192.168.1.0 0.0.0.255
network 192.168.2.0 0.0.0.255
#
pim
ssm-policy 2000
#
return
l Configuration file of Router C
#
sysname RouterC
#
multicast routing-enable
#
acl number 2000
rule 5 permit source 232.1.1.0 0.0.0.255
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 133.133.3.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.3.1 255.255.255.0
pim sm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 192.168.2.2 255.255.255.0
pim sm
#
ospf 1
area 0.0.0.0
network 133.133.3.0 0.0.0.255
network 192.168.3.0 0.0.0.255
network 192.168.2.0 0.0.0.255
#
pim
ssm-policy 2000
#
return
Networking Requirements
When a large number of users watch multiple programs simultaneously, a large amount of device
bandwidth is consumed, which degrades device performance and makes the receiving of multicast
data less stable.
Existing multicast technologies control multicast networks by limiting the number of multicast
forwarding entries or the number of outgoing interfaces of an entry, which cannot meet operators'
requirements for real-time video services on IPTV networks and for flexible management of
network resources.
Configuring the IGMP limit function enables operators to properly plan network resources and
flexibly control the number of multicast groups that hosts can join. In the network shown in Figure
2-3, multicast services are deployed. The global IGMP limit, instance-based IGMP limit, and
interface-based IGMP limit are configured on Router A, Router B, and Router C, which are
connected to hosts, to limit the number of multicast groups that the hosts can join. When the
number of multicast groups that hosts join reaches the limit, the devices do not create new IGMP
entries. This ensures that users that have joined multicast groups can watch programs clearly and
stably.
(Figure 2-3: Router A, with GE 1/0/0 at 10.110.1.1/24 and POS 2/0/0 at 192.168.1.1/24, connects
to leaf network N1, an Ethernet whose receivers are Host A and Host B. Router B, with GE 1/0/0
at 10.110.2.1/24 and POS 2/0/0 at 192.168.2.1/24, and Router C, with GE 1/0/0 at 10.110.2.2/24
and POS 2/0/0 at 192.168.3.1/24, connect to leaf network N2, an Ethernet whose receivers are
Host C and Host D. The routers interconnect through the PIM network.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable multicast on all routers because multicast is a prerequisite to IGMP.
2. Enable PIM-SM on all interfaces.
3. Enable IGMP on the interface connected to hosts.
4. Add GE 1/0/0 on Router A to multicast group 225.1.1.1 statically. In this manner, hosts
can steadily receive the multicast data of multicast group 225.1.1.1 for a long time.
5. Limit the number of IGMP group memberships on Router A.
Data Preparation
To complete the configuration, you need the following data:
l Version number of IGMP run on routers and hosts
l Static multicast group address that is 225.1.1.1
l The maximum number of IGMP group memberships.
Procedure
Step 1 Enable multicast on each router, and configure IGMP and PIM-SM on the interface connected
to hosts.
# Enable multicast on Router A; enable IGMP and PIM-SM on Gigabit Ethernet 1/0/0; configure
the IGMP version to 2. Configurations of Router B and Router C are similar to those of Router
A, and are not mentioned here.
[RouterA] multicast routing-enable
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] pim sm
Step 2 Add GE 1/0/0 on Router A to multicast group 225.1.1.1 statically. In this manner, the hosts
connected to GE 1/0/0 can steadily receive the multicast data sent to multicast group 225.1.1.1.
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] igmp static-group 225.1.1.1
[RouterA-GigabitEthernet1/0/0] quit
Step 3 Limit the number of IGMP group memberships on the last-hop router connected to users.
# Configure the maximum number of IGMP group memberships to 50 on Router A.
[RouterA] igmp global limit 50
# Configure the maximum number of IGMP group memberships to 40 in the public network
instance.
[RouterA] igmp
[RouterA-igmp] limit 40
[RouterA-igmp] quit
# Configurations of Router B and Router C are similar to those of Router A, and are not
mentioned here.
Step 4 Verify the configuration.
# Run the display igmp interface command to view the configuration and running status of
IGMP on the interfaces of each router. Take the display on GE 1/0/0 of Router A as an example.
<RouterA> display igmp interface gigabitethernet 1/0/0
GigabitEthernet1/0/0(10.110.1.1):
IGMP is enabled
Current IGMP version is 2
IGMP state: up
IGMP group policy: none
IGMP limit: 30
Value of query interval for IGMP (negotiated): -
Value of query interval for IGMP (configured): 60 s
Value of other querier timeout for IGMP: 0 s
Value of maximum query response time for IGMP: 10 s
Querier for IGMP: 10.110.1.1 (this router)
From the display, you can see that the maximum number of IGMP group memberships that
GE 1/0/0 of Router A allows is 30.
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
igmp global limit 50
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.1.1 255.255.255.0
pim sm
igmp enable
igmp limit 30
igmp static-group 225.1.1.1
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.1.1 255.255.255.0
pim sm
#
ospf 1
area 0.0.0.0
network 10.110.1.0 0.0.0.255
network 192.168.1.0 0.0.0.255
#
igmp
limit 40
#
return
l Configuration file of Router B
#
sysname RouterB
#
igmp global limit 50
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.2.1 255.255.255.0
pim sm
igmp enable
igmp limit 30
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.2.1 255.255.255.0
pim sm
#
ospf 1
area 0.0.0.0
network 10.110.2.0 0.0.0.255
network 192.168.2.0 0.0.0.255
#
igmp
limit 40
#
return
l Configuration file of Router C
#
sysname RouterC
#
igmp global limit 50
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.2.2 255.255.255.0
pim sm
igmp enable
igmp limit 30
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.3.1 255.255.255.0
pim sm
#
ospf 1
area 0.0.0.0
network 10.110.2.0 0.0.0.255
network 192.168.3.0 0.0.0.255
#
igmp
limit 40
#
return
Networking Requirements
As shown in Figure 2-4, configure a QinQ termination sub-interface to statically join multicast
groups so that the Receiver can receive multicast data sent from the Source.
Figure 2-4 Networking diagram of a QinQ termination sub-interface statically joining multicast
groups
(Figure 2-4: In vpna, the Source connects to PE1 through GE 1/0/0 at 10.1.1.2/24, and the Receiver
connects to PE1 through QinQ termination sub-interface GE 2/0/0.10 at 10.2.1.2/24 on the access
network. On the MPLS backbone in AS 100, PE1 (Loopback1: 1.1.1.9/32) connects through
GE 3/0/0 at 172.1.1.1/24 to GE 1/0/0 at 172.1.1.2/24 on P (Loopback1: 2.2.2.9/32); P connects
through GE 2/0/0 at 172.2.1.1/24 to GE 3/0/0 at 172.2.1.2/24 on PE2 (Loopback1: 3.3.3.9/32),
whose GE 1/0/0 at 10.3.1.2/24 belongs to vpna.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure OSPF on the backbone network to implement interworking between PEs.
2. Configure the basic MPLS functions and MPLS LDP on the PEs and establish the MPLS
LSPs between the PEs.
3. Configure the VPN instance on the PE and bind VPN instance with the interface to Source
and the interface to Receiver.
4. Configure MP IBGP to exchange the VPN routing information between the PEs.
5. Configure the QinQ termination sub-interface to statically join multicast groups.
Data Preparation
To configure a QinQ termination sub-interface to statically join multicast groups, you need
the following data:
l MPLS LSR-IDs on the PEs and the Ps
l VPN instance name, RD, VPN-Target
l ID of the outer QinQ VLAN tag, value range of the inner QinQ VLAN tag
Procedure
Step 1 Configure basic BGP/MPLS IP VPN.
The specific configuration procedures are omitted here.
Step 2 Configure a VPN instance on each PE, configure the QinQ termination sub-interface, and bind
the sub-interface to the VPN instance.
# Configure PE1.
# Configure VPN instance.
[PE1] ip vpn-instance vpna
[PE1-vpn-instance-vpna] route-distinguisher 100:1
[PE1-vpn-instance-vpna] vpn-target 111:1 both
[PE1-vpn-instance-vpna] quit
# Configure PE2.
Step 3 Add the route of the Source and the route of the Receiver to VPN routing-table.
# Configure PE1.
[PE1] bgp 100
[PE1-bgp] ipv4-family vpn-instance vpna
[PE1-bgp-vpna] import-route direct
[PE1-bgp-vpna] quit
# Configure PE2.
[PE2] bgp 100
[PE2-bgp] ipv4-family vpn-instance vpna
[PE2-bgp-vpna] import-route direct
[PE2-bgp-vpna] quit
After the preceding configuration, run the display ip routing-table vpn-instance command on PE1.
The routes of the Source and the Receiver are added to the VPN routing table.
[PE1] display ip routing-table vpn-instance vpna
Route Flags: R - relied, D - download to fib
------------------------------------------------------------------------------
Routing Tables: vpna
Destinations : 5 Routes : 5
Destination/Mask Proto Pre Cost Flags NextHop Interface
10.1.1.0/24 Direct 0 0 D 10.1.1.2 GigabitEthernet1/0/0
10.1.1.2/32 Direct 0 0 D 127.0.0.1 InLoopBack0
10.2.1.0/24 Direct 0 0 D 10.2.1.2 GigabitEthernet2/0/0
10.2.1.2/32 Direct 0 0 D 127.0.0.1 InLoopBack0
10.3.1.0/24 BGP 255 0 RD 3.3.3.9 GigabitEthernet3/0/0
Step 4 Configure multicast routing-enable in the public instance on PE1, P and PE2.
# Configure PE1.
[PE1] multicast routing-enable
# Configure P.
[P] multicast routing-enable
# Configure PE2.
[PE2] multicast routing-enable
Step 5 Enable PIM-SM on the interfaces.
# Configure P.
[P] interface gigabitethernet 1/0/0
[P-GigabitEthernet1/0/0] pim sm
[P-GigabitEthernet1/0/0] quit
[P] interface gigabitethernet 2/0/0
[P-GigabitEthernet2/0/0] pim sm
[P-GigabitEthernet2/0/0] quit
[P] interface loopback 1
[P-LoopBack1] pim sm
[P-LoopBack1] quit
# Configure PE2.
[PE2] interface gigabitethernet 3/0/0
[PE2-GigabitEthernet3/0/0] pim sm
[PE2-GigabitEthernet3/0/0] quit
[PE2] interface loopback 1
[PE2-LoopBack1] pim sm
[PE2-LoopBack1] quit
Step 6 Configure the QinQ termination sub-interface to statically join multicast groups.
[PE1] interface gigabitethernet 2/0/0.10
[PE1-GigabitEthernet2/0/0.10] igmp static-group 225.0.0.1 inc-step-mask 0.0.0.1
number 17 qinq pe-vid 1 ce-vid 1 to 2
----End
Configuration Files
l Configuration file of PE1
#
sysname PE1
#
router id 1.1.1.9
#
multicast routing-enable
#
mpls lsr-id 1.1.1.9
mpls
#
mpls ldp
#
ip vpn-instance vpna
route-distinguisher 100:1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
mode user-termination
pim sm
igmp enable
#
interface GigabitEthernet2/0/0.10
control-vid 10 qinq-termination
qinq termination pe-vid 1 ce-vid 1 to 2
c-rp LoopBack1
#
ospf 1
area 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
This chapter describes the Layer 2 multicast fundamentals, configuration steps, and
maintenance for Layer 2 multicast functions, along with typical examples.
NOTE
For details about VPLS, refer to the HUAWEI NetEngine80E/40E Router Feature Description - VPN.
Router Port
A router port refers to a port connected to the upstream multicast router. The router port is
classified into the following types:
l Dynamic router port
– A dynamic router port is a port that receives IGMP Query messages whose source
address is not 0.0.0.0 or Protocol Independent Multicast (PIM) Hello messages. The
dynamic router port is dynamically maintained through protocol packets exchanged
between routers and hosts.
– Each dynamic router port starts a timer whose timeout period is the aging time of the
router port. If the port does not receive any IGMP Query message whose source address
is not 0.0.0.0, or any PIM Hello message, before the timer expires, the port is no longer
a router port.
l Static router port
– The static router port is specified by using commands and does not age.
Prompt Leave
When receiving an IGMP Leave message through a member port, the router immediately deletes
the member port regardless of whether the aging timer times out. The member port thus becomes
invalid and stops receiving multicast packets. This is called prompt leave of member ports.
On the router, if all ports in a VLAN are connected to only one multicast receiver, you can
configure ports in the VLAN to promptly leave some or all multicast groups. This saves the
network bandwidth.
(Figure: BTV multicast network in which the BTV server connects to the NPE, the NPE connects
across the IP/MPLS network to the UPE, and the UPE connects through a DSLAM to the CE.)
IGMP snooping-based Layer 2 multicast CAC can be configured differently, depending on the
networks that deliver multicast services:
l Global Layer 2 multicast CAC
l Layer 2 multicast CAC for a VLAN
l Layer 2 multicast CAC for a VSI
l Multicast CAC for a Layer 2 interface or Layer 2 multicast CAC for an interface in the
specified VLAN
l Layer 2 multicast CAC for a sub-interface
l Layer 2 multicast CAC for a PW
When configuring channel-based CAC in IGMPv3, you need to set the channel type to ASM or
SSM. If the channel type is ASM, the system counts and processes (*, G) and (S, G) entries; if
the channel type is SSM, the system counts and processes (S, G) entries.
NOTE
When configuring an SSM channel, you must run the command to create the source IP address to be
mapped; when configuring an ASM channel, you need not configure the source IP address to be mapped.
You can run the igmp-snooping ssm-policy command to change the multicast address range of
the ASM and SSM channels, but not the counting rules of CAC. It is recommended that you
run the igmp-snooping ssm-policy command before setting the channel type to SSM. In this
case, the multicast group address range of the SSM and ASM channels is consistent with that
configured through the igmp-snooping ssm-policy command.
NOTE
The system uses the multicast CAC bandwidth limit to process the entries triggered by multicast
streams in the same way as fast-channel entries.
Applicable Environment
In a VPLS network, the multicast traffic is broadcast in the VSI if IGMP snooping is not enabled
on PE devices. To prevent the multicast traffic from being transmitted on the PW on which no
receiver exists, you need to configure IGMP snooping for the VSI.
l Static router port: When the network topology is stable, you can configure router ports on
the router as static router ports.
l IGMP snooping querier: To save the bandwidth between the upstream router and the local
router, you can configure the querier on the local router. The querier replaces the upstream
router to send IGMP Query messages. You can also configure parameters for the IGMP
snooping querier as required.
l Multicast policy: You can configure the multicast policy for a VSI, when you need to
specify the range of multicast groups that hosts in a certain VSI can access.
l Prompt response to changes in the Layer 2 link: When the Layer 2 link changes, the
router must rapidly update information about the ports so that the multicast traffic can be
forwarded without interruptions.
l After the igmp-snooping send-query enable command is used, the router sends IGMP
General Query messages whose source IP address is not 0.0.0.0 when Layer 2 links
change. In this manner, other routers can rapidly update their router ports. The port link
change event is received by the ports in the VSI.
l For the MSTP port link change event, the Query message is sent to all non-router ports in
the VSI.
l For the RRPP and RPR port link change events, the Query message is sent to only the ports
that receive the event.
l Prompt leave: When only one host exists for all ports in a VSI, you can configure prompt
leave for the ports to save the network bandwidth.
l Suppression for IGMP messages: After the igmp-snooping proxy command is used, IGMP
proxy provides two functions: querier and message suppression. Through the querier
function, a switch can replace the upstream router to send IGMP Query messages; through
the message suppression function, the switch can replace the upstream router to receive
IGMP Report and Leave messages. Therefore, after IGMP proxy is enabled on the local
router, bandwidth of the link between the upstream router and local router can be saved.
Pre-configuration Tasks
Before configuring IGMP snooping for a VSI, complete the following tasks:
l Connecting interfaces and setting physical parameters for the interfaces so that the physical
layer of the interfaces is Up
l Configuring basic VPLS functions
l Creating the VSI
NOTE
For details about creating the VSI, refer to the HUAWEI NetEngine80E/40E Router Configuration Guide
- VPN.
Data Preparation
To configure IGMP snooping for a VSI, you need the following data.
No. Data
4 Parameters of the querier, including the interval for sending the IGMP General Query
message, the robustness variable, the maximum response time, and the interval for
sending the Last Member Query message
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
Step 2 Run:
igmp-snooping enable
IGMP snooping is globally enabled on the router. By default, IGMP snooping is disabled on the router.
Step 3 Run:
vsi vsi-name
The forwarding mode is configured. By default, the forwarding mode is based on the IP address.
You are recommended to use the default values. The LPUB supports only the forwarding based
on MAC addresses.
Step 5 Run:
igmp-snooping enable
IGMP snooping is enabled in the VPLS network. By default, IGMP snooping is disabled in the
VPLS network.
Step 6 Run:
igmp-snooping version { 1 | 2 | 3 }
The version of IGMP packets that the router enabled with IGMP snooping can process is
configured. By default, IGMP snooping on the router processes IGMPv2 packets.
The versions of IGMP packets that IGMP snooping on the router can process are IGMPv1,
IGMPv2, and IGMPv3.
To enable IGMP snooping for multiple VSIs, you can perform Step 3 to Step 6 repeatedly as
required.
----End
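The steps above can be sketched as the following command sequence (the device prompt and the VSI name vsi1 are illustrative assumptions; the commands are those given in the steps):
<HUAWEI> system-view
[HUAWEI] igmp-snooping enable
[HUAWEI] vsi vsi1
[HUAWEI-vsi-vsi1] igmp-snooping enable
[HUAWEI-vsi-vsi1] igmp-snooping version 3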
Procedure
l Configuring a PW as a Static Router Port
1. Run:
system-view
The PW is configured as a static router port. The statically configured router port does
not age and can only be deleted by using commands.
When the LDP is used as the signaling negotiation protocol for a PW, you can use the
igmp-snooping static-router-port remote-peer ip-address [ negotiation-vc-id vc-
id ] command to configure the PW as a static router port in the VSI. If no IP address
is assigned to the remote peer, you can also configure the PW as a static router port
in the VSI. The configuration, however, takes effect only after an IP address is assigned
to the remote peer.
l Adding a Sub-interface to the Static Router Port
1. Run:
system-view
– Common sub-interface
– Dot1q termination sub-interfaces
– QinQ termination sub-interfaces
----End
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
The dynamic learning of router ports is enabled. By default, the dynamic learning of router ports
is enabled.
Step 4 (Optional) Run:
igmp-snooping router-aging-time router-aging-time
An aging time is set for the router port. If the router port does not receive any IGMP Query
message whose source address is not 0.0.0.0, or any PIM Hello message, within the aging time,
the router considers the router port invalid.
By default, the aging time of a router port learned from IGMP packets is 180 seconds, and the
aging time of a router port learned from PIM Hello packets is the Holdtime carried in the PIM
Hello packets.
----End
Procedure
Step 1 Run:
system-view
The router is enabled to send IGMP General Query messages whose source address is not
0.0.0.0. In this manner, routers can swiftly update the outbound port information, and the
multicast data received by user hosts is not interrupted when the topology changes because of
a link fault.
Step 3 (Optional) Run:
By default, the source address is 192.168.0.1. When 192.168.0.1 is used by other devices in the
network, you can run the igmp-snooping send-query source-address command to change the
source IP address of the IGMP General Query message sent by the IGMP snooping module.
Step 4 Run:
vsi vsi-name
Step 5 Run:
igmp-snooping querier enable
----End
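The querier configuration above can be sketched as follows (the source IP address and VSI name are illustrative assumptions, and the parameter form of the igmp-snooping send-query source-address command is assumed from its description above):
<HUAWEI> system-view
[HUAWEI] igmp-snooping send-query enable
[HUAWEI] igmp-snooping send-query source-address 10.10.10.1
[HUAWEI] vsi vsi1
[HUAWEI-vsi-vsi1] igmp-snooping querier enable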
Context
Steps 3 to 8 are optional and can be selected as required. The default values are recommended
in most cases.
NOTE
For details of parameters of the querier, refer to the chapter "IGMP Configuration."
Procedure
Step 1 Run:
system-view
Step 2 Run:
vsi vsi-name
Step 3 Run:
igmp-snooping query-interval query-interval
The interval for the querier to send IGMP General Query messages is set.
Step 4 Run:
igmp-snooping robust-count robust-count
The robustness variable of the querier is set.
Step 5 Run:
igmp-snooping max-response-time max-response-time
The maximum response time is set.
Step 6 Run:
igmp-snooping lastmember-queryinterval lastmember-queryinterval
The interval for the querier to send the Last Member Query message is set.
Step 7 Run:
igmp-snooping require-router-alert
The router is configured to accept only IGMP packets that contain the Router-Alert option in
the IP header. IGMP packets without the Router-Alert option are not processed but discarded
directly.
Step 8 Run:
igmp-snooping send-router-alert
The router is configured to include the Router-Alert option in the IP header of the IGMP packets
it sends.
----End
NOTE
l To specify the range of multicast groups that hosts in a certain VSI can access, you can configure the
multicast policy for the VSI.
l After the multicast policy is configured, hosts in the VSI cannot join multicast groups that
are not in the range specified by the policy.
Procedure
l Configuring the Multicast Group Policy
1. Run:
system-view
The multicast group policy is configured for the VSI. In this manner, ports in the VSI
can only dynamically join the multicast group matching ACL rules. By default, no
multicast group policy is configured for the VSI.
4. Run:
quit
The multicast group policy is configured for the sub-interface. By default, no multicast
group policy is configured for the sub-interface.
l Configuring the SSM Group Policy
1. Run:
system-view
The SSM group policy is configured. By default, the SSM group policy ranges from
232.0.0.0 to 232.255.255.255.
----End
Context
After prompt leave is enabled for IGMP snooping, the router deletes a port from the forwarding
table without waiting for the Report message when receiving the Leave message from the port.
Therefore, the aging timer need not be enabled.
Prompt leave is applied when only one host is connected to the port. In addition, prompt leave
for the port takes effect only when IGMPv2 packets can be processed in the VSI.
Procedure
Step 1 Run:
system-view
Step 2 Run:
vsi vsi-name
Step 3 Run:
igmp-snooping prompt-leave [ group-policy acl-number ]
Prompt leave is configured for ports in the VSI. By default, ports are not allowed to promptly
leave a multicast group.
group-policy acl-number specifies the range of multicast groups that ports can promptly leave.
If no group-policy acl-number is specified, ports promptly leave a multicast group after the
router receives the Leave message from the VSI.
----End
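The steps above can be sketched as a minimal configuration. The VSI name v123 and ACL number 2001 are hypothetical values; an ACL matching the intended group range is assumed to exist, and the prompts are abbreviated.
<HUAWEI> system-view
[HUAWEI] vsi v123
[HUAWEI-vsi-v123] igmp-snooping prompt-leave group-policy 2001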
Procedure
Step 1 Run:
system-view
----End
Context
When the router receives IGMP Report messages from a multicast group, it needs to apply for
resources for the multicast group before generating the forwarding entry, which slows down the
speed at which a member joins a multicast group.
After the fast join function is enabled through the l2-multicast fast-channel command, the
router applies for resources for multicast groups in advance. When the router receives IGMP
Report messages from a multicast group, it can immediately generate the forwarding entry for
the multicast group, thus improving the speed at which the member joins a multicast group and
reducing the response delay of the router.
Do as follows on the router:
Procedure
Step 1 Run:
system-view
----End
Procedure
l Run the display igmp-snooping [ vsi [ vsi-name ] ] [ configuration ] command to check
all parameters of IGMP snooping in the VSI.
l Run the display igmp-snooping router-port { vlan vlan-id | vsi vsi-name } command to
check information about the router port.
l Run the display l2-multicast forwarding-mode vsi vsi-name command to check the
forwarding mode for a VSI.
l Run the display l2-multicast forwarding-table vsi vsi-name [ group-address group-
address [ source-address source-address ] | [ | count ] [ | { begin | include | exclude }
regular-expression ] ] command to check the Layer 2 multicast forwarding table.
----End
Example
Run the display igmp-snooping vsi vsi-name configuration command. If the IGMP snooping
parameters configured for the VSI are displayed, it means that the configuration succeeds.
<HUAWEI> display igmp-snooping vsi v123 configuration
IGMP Snooping Configuration for Vsi v123
igmp-snooping enable
igmp-snooping version 1
igmp-snooping query-interval 200
igmp-snooping router-aging-time 300
igmp-snooping proxy querier-disable
Run the display l2-multicast forwarding-mode vsi command. If the Layer 2 multicast
forwarding mode of all VSIs is displayed, it means that the configuration succeeds.
<HUAWEI> display l2-multicast forwarding-mode vsi
VSI Fowarding-mode
-----------------------------------------------
Vsi1 IP
Run the display l2-multicast forwarding-table vsi vsi-name command. If the Layer 2 multicast
forwarding mode of the VSI is displayed, it means that the configuration succeeds.
<HUAWEI> display l2-multicast forwarding-table vsi vsi123
VSI Name : vsi123, Forwarding Mode : IP
-------------------------------------------------------------------
(Source, Group) Status Age Index Port-cnt
--------------------------------------------------------------------
Router-port Ok No 0 1
(192.168.0.7, 232.0.0.1) Ok No 1 2
--------------------------------------------------------------------
Total Entry(s) : 2
For the VLAN bound to a VSI, you need to enable IGMP snooping in the VSI view.
IGMP snooping for a VLAN runs on Layer 2 of routers. In the implementation of Layer 2
multicast, IGMP snooping manages and controls the forwarding of multicast packets by
maintaining information about the outbound port of multicast packets. It maintains the outbound
port information by detecting the multicast protocol packets exchanged between the upstream
router and hosts.
In the network, functions related to IGMP snooping are as follows:
l Static router port: When the network topology is stable, you can configure router ports on
the router as static router ports.
l Static member port: If hosts connected to a certain port need to receive the multicast traffic
of a certain multicast group for a long time, you can add the port to the multicast group. In
this manner, the port becomes a static member port.
l IGMP snooping querier: To save the bandwidth between the upstream router and the local
router, you can configure the querier on the local router. The querier replaces the upstream
router to send IGMP Query messages. You can also configure parameters for the IGMP
snooping querier based on different networking requirements.
l Multicast policy: You can configure the multicast policy for a VLAN, when you need to
specify the range of multicast groups that hosts in a certain VLAN can access.
l Rapid response to changes in the Layer 2 link: When the Layer 2 link changes, the router
must rapidly update information about the ports so that the multicast traffic can be
forwarded without interruptions.
l After the igmp-snooping send-query enable command is used, the router sends an IGMP
General Query message with a source IP address other than 0.0.0.0 when a Layer 2 link
changes. The ports in the VLAN to which the Query message is sent depend on the type of
port link change event.
l For an MSTP port link change event, the Query message is sent to all non-router ports in
the VLAN.
l For RRPP and RPR port link change events, the Query message is sent only to the ports
that receive the event.
l Prompt leave: When only one host exists for all ports in a VLAN, you can configure prompt
leave for the ports to save the network bandwidth.
l Suppression for IGMP messages: After the igmp-snooping proxy command is used, IGMP
proxy provides two functions: querier and message suppression. Through the querier
function, a switch can replace the upstream router to send IGMP Query messages; through
the message suppression function, the switch can replace the upstream router to receive
IGMP Report and Leave messages. Therefore, after IGMP proxy is enabled on the local
router, bandwidth of the link between the upstream router and local router can be saved.
Pre-configuration Tasks
Before configuring IGMP snooping for a VLAN, complete the following tasks:
l Connecting interfaces and setting physical parameters for the interfaces so that the physical
layer of the interfaces is Up
l Creating a VLAN
l Adding ports to the VLAN
Data Preparation
To configure IGMP snooping for a VLAN, you need the following data.
No. Data
No. Data
4 Parameters of the querier, including the interval for sending the IGMP General Query
message, the robustness variable, the maximum response time, and the interval for
sending the Last Member Query message
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
Step 2 Run:
igmp-snooping enable
IGMP snooping is enabled on the router. By default, IGMP snooping is disabled on the router.
Step 3 Run:
vlan vlan-id
Step 4 (Optional) Run:
l2-multicast forwarding-mode { mac | ip }
The forwarding mode is configured for static Layer 2 multicast. By default, the forwarding mode
is based on the IP address.
Using the default value is recommended. Note that the LPUB supports only MAC address-based
forwarding.
Step 5 Run:
igmp-snooping enable
IGMP snooping is enabled for the VLAN. By default, IGMP snooping is disabled for the VLAN.
Step 6 Run:
igmp-snooping version { 1 | 2 | 3 }
The version of IGMP packets that IGMP snooping on the router can process is configured. IGMP
snooping can process IGMPv1, IGMPv2, or IGMPv3 packets; by default, it processes IGMPv2
packets.
To enable IGMP snooping for multiple VLANs, you can perform Step 3 to Step 6 repeatedly as
required.
----End
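A minimal sketch of the steps above follows. VLAN 10 and the version number 3 are hypothetical values chosen for illustration.
<HUAWEI> system-view
[HUAWEI] igmp-snooping enable
[HUAWEI] vlan 10
[HUAWEI-vlan10] igmp-snooping enable
[HUAWEI-vlan10] igmp-snooping version 3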
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
The view of the Ethernet interface connected to the router or the virtual Ethernet interface is
displayed.
l Run:
interface eth-trunk trunk-id
The view of the Ethernet trunk interface connected to the router is displayed.
Step 3 Run:
portswitch
Step 4 Run:
igmp-snooping static-router-port vlan vlan-id
The interface in the VLAN is configured as a static router port. The statically configured router
port does not age and can only be deleted by using commands.
Before using the igmp-snooping static-router-port command, ensure that the interface is added
to the VLAN specified by vlan-id; otherwise, the configuration fails. If the VLAN that the current
interface belongs to is bound to a VSI, you can configure the current interface as a router port
in the VSI.
----End
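The procedure can be sketched as follows. The interface GigabitEthernet 1/0/1 and VLAN 10 are hypothetical values, and the interface is assumed to have already been added to VLAN 10.
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/1
[HUAWEI-GigabitEthernet1/0/1] portswitch
[HUAWEI-GigabitEthernet1/0/1] igmp-snooping static-router-port vlan 10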
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
Step 2 Run:
vlan vlan-id
Step 3 Run:
igmp-snooping router-learning
Step 4 Run:
igmp-snooping router-aging-time router-aging-time
An aging time is set for the router port. If the router port receives neither an IGMP Query
message with a source address other than 0.0.0.0 nor a PIM Hello message before the aging time
expires, the router considers the router port invalid.
By default, the aging time of a router port learned from IGMP packets is 180 seconds, and the
aging time of a router port learned from PIM Hello packets is the Holdtime carried in the PIM
Hello packets.
----End
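A minimal sketch of the steps above, assuming hypothetical VLAN 10 and an aging time of 300 seconds:
<HUAWEI> system-view
[HUAWEI] vlan 10
[HUAWEI-vlan10] igmp-snooping router-learning
[HUAWEI-vlan10] igmp-snooping router-aging-time 300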
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
Step 2 Run:
igmp-snooping send-query enable
The router is enabled to send IGMP General Query messages with a source address other than
0.0.0.0.
By default, the source address is 192.168.0.1. When 192.168.0.1 is used by other devices in the
network, you can run the igmp-snooping send-query source-address command to change the
source IP address of the IGMP General Query message.
Step 4 Run:
vlan vlan-id
Step 5 Run:
igmp-snooping querier enable
----End
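The querier configuration above can be sketched as follows; VLAN 10 is a hypothetical value, and the optional source-address step is omitted here.
<HUAWEI> system-view
[HUAWEI] igmp-snooping send-query enable
[HUAWEI] vlan 10
[HUAWEI-vlan10] igmp-snooping querier enable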
Context
Steps 3 to 8 are optional and can be selected as required.
NOTE
For details of parameters of the querier, refer to the chapter "IGMP Configuration."
Procedure
Step 1 Run:
system-view
Step 2 Run:
vlan vlan-id
Step 3 Run:
igmp-snooping query-interval query-interval
The interval for the querier to send IGMP General Query messages is set.
Step 4 Run:
igmp-snooping robust-count robust-count
Step 5 Run:
igmp-snooping max-response-time max-response-time
Step 6 Run:
igmp-snooping lastmember-queryinterval lastmember-queryinterval
The interval for the querier to send the Last Member Query message is set.
Step 7 Run:
igmp-snooping require-router-alert
The router is configured to accept only IGMP packets whose IP header contains the Router Alert
option. IGMP packets without this option are discarded directly instead of being processed.
Step 8 Run:
igmp-snooping send-router-alert
The router is configured to include the Router Alert option in the IP header of the IGMP packets
it sends. You can run the undo igmp-snooping send-router-alert command to configure the
router not to include the Router Alert option in the IP header of sent IGMP packets.
----End
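The optional querier parameters above can be sketched as follows. VLAN 10 and all timer values are hypothetical; valid value ranges depend on the device.
<HUAWEI> system-view
[HUAWEI] vlan 10
[HUAWEI-vlan10] igmp-snooping query-interval 60
[HUAWEI-vlan10] igmp-snooping robust-count 2
[HUAWEI-vlan10] igmp-snooping max-response-time 10
[HUAWEI-vlan10] igmp-snooping lastmember-queryinterval 1
[HUAWEI-vlan10] igmp-snooping send-router-alert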
Context
Do as follows on the router:
NOTE
l To specify the range of multicast groups that hosts in a certain VLAN can access, you can configure
the multicast policy for the VLAN.
l After the multicast policy is configured, hosts in the VLAN cannot join multicast groups that are not
specified in the policy.
Procedure
l Configuring the Multicast Group Policy
1. Run:
system-view
The multicast group policy is configured for the VLAN. After that, ports in the VLAN
can only dynamically join the multicast group matching ACL rules.
4. Run:
quit
The multicast group policy is configured for the interface. By default, no multicast
group policy is configured for the interface.
l Configuring the SSM Group Policy
1. Run:
system-view
----End
Context
When only one host exists for all ports in a VLAN, you can configure prompt leave for the ports
to save the network bandwidth. In addition, prompt leave for the ports takes effect only when
IGMPv2 packets can be processed in the VLAN.
Procedure
Step 1 Run:
system-view
Step 2 Run:
vlan vlan-id
Step 3 Run:
igmp-snooping prompt-leave [ group-policy acl-number ]
Prompt leave is configured for the port. By default, ports are not allowed to promptly leave a
multicast group.
group-policy acl-number specifies the range of multicast groups that ports can promptly leave.
If no group-policy acl-number is specified, the port promptly leaves a multicast group after
receiving the Leave message from a member port in a VLAN.
----End
Procedure
Step 1 Run:
system-view
----End
Context
When the router receives IGMP Report messages from a multicast group, it needs to apply for
resources for the multicast group before generating the forwarding entry, which slows down the
speed at which a member is added to a multicast group.
After the fast join function is enabled through the l2-multicast fast-channel command, the
router applies for resources for multicast groups in advance. When the router receives IGMP
Report messages from a multicast group, it can immediately generate the forwarding entry for
the multicast group, thus improving the speed at which the member joins a multicast group and
reducing the response delay of the router.
Do as follows on the router:
Procedure
Step 1 Run:
system-view
----End
Procedure
l Run the display igmp-snooping [ vlan [ vlan-id ] ] [ configuration ] command to check
the parameters of IGMP snooping for a VLAN.
l Run the display igmp-snooping router-port vlan vlan-id command to check information
about the router port.
l Run the display l2-multicast forwarding-mode vlan [ vlan-id ] command to check the
forwarding mode for a VLAN.
l Run the display l2-multicast forwarding-table { vlan vlan-id | vsi vsi-name } [ group-
address group-address [ source-address source-address ] | [ | count ] [ | { begin |
include | exclude } regular-expression ] ] command to check the Layer 2 multicast
forwarding table.
l Run the display igmp-snooping port-info [ vlan vlan-id [ group-address group-
address ] ] [ verbose ] command to check information about ports in a VLAN.
----End
Example
# Check all configurations of IGMP snooping for VLAN 10.
<HUAWEI> display igmp-snooping vlan 10 configuration
IGMP Snooping Configuration for Vlan 10
igmp-snooping enable
igmp-snooping lastmember-queryinterval 5
igmp-snooping max-response-time 6
igmp-snooping group-policy 2001 2
igmp-snooping proxy querier-disable
# In VLAN 10, run the display igmp-snooping port-info [ vlan vlan-id [ group-address group-
address ] ] [ verbose ] command to check information about the outbound ports that have joined
multicast group 224.1.1.1.
<HUAWEI> display igmp-snooping port-info
-----------------------------------------------------------------------
(Source, Group) Port Flag
-----------------------------------------------------------------------
VLAN 10, 3 Entry(s)
(10.1.1.2, 224.1.1.1) GE1/0/1 --M
1 port(s)
(10.1.1.3, 224.1.1.1) GE1/0/1 --M
1 port(s)
(10.1.1.4, 224.1.1.1) GE1/0/1 --M
1 port(s)
-----------------------------------------------------------------------
Applicable Environment
In the scenario where IGMP snooping is not enabled in the VLAN or VSI, multicast traffic needs
to be load balanced among trunk member interfaces.
Trunk load balancing for Layer 2 multicast is applicable to the following networking scenarios:
Pre-configuration Tasks
Before configuring trunk load balancing for Layer 2 multicast, complete the following tasks:
l Connecting interfaces and configuring physical parameters of the interfaces to ensure that
the physical status of the interfaces is Up
l Creating a VLAN
l Creating a VSI
Data Preparation
To configure trunk load balancing for Layer 2 multicast, you need the following data.
No. Data
1 ID of the VLAN to be configured with trunk load balancing for Layer 2 multicast
2 Name of the VSI to be configured with trunk load balancing for Layer 2 multicast
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
Step 3 Run:
trunk multicast load-balance enable
----End
Procedure
Step 1 Run the display interface [ interface-type [ interface-number ] ] [ | { begin | exclude |
include } regular-expression ] command to check the number of multicast packets received or
sent on each trunk member interface.
----End
Example
Run the display eth-trunk [ trunk-id [ interface interface-type interface-number | verbose ] ]
command, and you can view configurations about Eth-Trunk 1 in the VLAN. For example:
<HUAWEI> display eth-trunk 1
Eth-Trunk1's state information is:
WorkingMode: NORMAL Hash arithmetic: According to flow
Least Active-linknumber: 1 Max Bandwidth-affected-linknumber: 16
Operate status: up Number Of Up Port In Trunk: 2
--------------------------------------------------------------------------------
PortName Status Weight
GigabitEthernet4/0/0 Up 1
GigabitEthernet4/0/5 Up 1
Applicable Environment
When hosts in different VLANs need to receive the same multicast traffic, you can configure
replication of the multicast VLAN. This allows you to manage and control the multicast source
and the multicast group members, and also reduces bandwidth waste.
In the implementation, VLANs are classified into the multicast VLAN and the user VLAN:
l A multicast VLAN is the VLAN that the interface connecting the router to the multicast
source belongs to. The multicast VLAN is used to converge the multicast traffic. The
router supports a maximum of 16 multicast VLANs.
l A user VLAN is the VLAN that member hosts of a multicast group belong to. The user
VLAN is used to receive the data from the multicast VLAN.
l You can bind multiple user VLANs to one multicast VLAN. A maximum of 512 user
VLANs can be bound to each multicast VLAN.
As shown in Figure 3-2, Router A is connected to Router B at the side of the multicast source.
It is required that Router A replicate the data flow from the multicast VLAN 10 to the user VLAN
100 and the user VLAN 200.
NOTE
Source Router B
Internet/
Intranet
VLAN 10
Router A
VLAN100 VLAN200
Pre-configuration Tasks
Before configuring replication of the multicast VLAN, complete the following tasks:
l Connecting interfaces and setting physical parameters for the interfaces so that the physical
status of the interfaces is Up
l Enabling IGMP snooping globally
l Creating a user VLAN and adding the interfaces that are connected to a switch to the user
VLAN as shown in Figure 3-2
Data Preparation
To configure replication of the multicast VLAN, you need the following data.
No. Data
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on Router A:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Here, the interface refers to an interface that connects Router A to the upstream router.
Step 3 Run:
portswitch
----End
Context
Do as follows on Router A:
Procedure
Step 1 Run:
system-view
Step 2 Run:
vlan vlan-id
Step 3 Run:
multicast-vlan user-vlan { { vlan-id1 [ to vlan-id2 ] } &<1-10> }
The matching relationship between the multicast VLAN and the user VLAN is configured.
vlan-id1 and vlan-id2 are IDs of user VLANs.
One multicast VLAN can be mapped to multiple user VLANs, but one user VLAN can be
mapped to only one multicast VLAN.
----End
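For the scenario in Figure 3-2, the mapping step can be sketched as follows. VLAN 10 is the multicast VLAN and VLANs 100 and 200 are the user VLANs; the user VLANs are assumed to have been created already.
<HUAWEI> system-view
[HUAWEI] vlan 10
[HUAWEI-vlan10] multicast-vlan user-vlan 100 200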
Procedure
Step 1 Run:
system-view
----End
Procedure
l Run the display user-vlan vlan [ vlan-id ] command to check the configuration on the user
VLAN.
l Run the display multicast-vlan vlan [ vlan-id ] command to check the configuration on
the multicast VLAN.
----End
Example
Run the display multicast-vlan vlan command. If you can view the IDs of all configured
multicast VLANs, the number of user VLANs mapped to the multicast VLAN, and the IGMP
snooping status of the multicast VLAN, it means that the configuration succeeds.
<HUAWEI> display multicast-vlan vlan
Total multicast vlan 2
multicast-vlan user-vlan snooping-state
--------------------------------------------------
2 1 Enable
80 0 Enable
Run the display user-vlan vlan command. If the configurations on all configured user VLANs
are displayed, it means that the configuration succeeds.
<HUAWEI> display user-vlan vlan
Total user vlan 6
user-vlan snooping-state multicast-vlan snooping-state
---------------------------------------------------------
3 Enable 2 Enable
223 Enable 222 Enable
232 Enable 231 Enable
601 Enable 600 Enable
2001 Enable 2000 Enable
2002 Enable 2000 Enable
NOTE
Router A and Router B can be a switch or router that supports the 1+1 protection of multicast VLANs.
On Router A, VLAN 3 is the multicast VLAN; VLAN 4 and VLAN 5 are user VLANs. On
Router B, VLAN 4 functions as a multicast VLAN; VLAN 100 and VLAN 200 function as its
user VLANs. VLAN 5 functions as the protection VLAN of VLAN 4. It is not required to
configure VLAN 5 as a multicast VLAN or to enable IGMP snooping on it.
It is required that Router A should forward the traffic of the multicast VLAN 3 to user VLAN
4 and user VLAN 5. According to the detection results of the CCM, Router B selects the multicast
traffic from one of the two VLANs, and then forwards the valid multicast traffic to user VLAN
100 and user VLAN 200.
DHCP Source
server
IP/MPLS
core
VLAN3
RouterA
VLAN4 VLAN5
CCM
VLAN4 VLAN5
RouterB
VLAN100 VLAN200
Multicast flow
CCM Data Flow
Multicast flow of the working VLAN
Multicast flow of the protection VLAN
Pre-configuration Tasks
Before configuring 1+1 protection of multicast VLANs, complete the following tasks:
l Connecting interfaces and setting physical parameters for the interfaces so that the physical
layer of the interfaces is Up
l Configuring replication of the multicast VLAN
l Configuring the Ethernet OAM fault detection mechanism on Router A and Router B
Data Preparation
To configure 1+1 protection of multicast VLANs, you need the following data.
No. Data
3 Name of the Maintenance Domain (MD) and the Maintenance Alliance (MA), and
number of the Remote Maintenance End Point (RMEP)
Context
Do as follows on Router A and Router B:
Procedure
Step 1 Run:
system-view
Step 2 Run:
igmp-snooping enable
Step 3 Run:
vlan vlan-id
The first multicast VLAN is created. The value of vlan-id can be any integer that ranges from 1
to 4094.
Step 4 Run:
igmp-snooping enable
Step 5 Run:
multicast-vlan enable
----End
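Taking Router B from the scenario as an example, creating its multicast VLAN 4 can be sketched as follows; repeat the VLAN steps for any additional multicast VLAN as required. The values reflect the scenario description, and the prompts are abbreviated.
<HUAWEI> system-view
[HUAWEI] igmp-snooping enable
[HUAWEI] vlan 4
[HUAWEI-vlan4] igmp-snooping enable
[HUAWEI-vlan4] multicast-vlan enable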
Procedure
Step 1 Run:
system-view
----End
Procedure
Step 1 Run:
system-view
The delay for the switchover of the protection group that the working VLAN belongs to is
configured.
----End
Procedure
Step 1 Run:
system-view
The recovery mode and the Wait to Restore Time (WRT) of the protection group that the working
VLAN belongs to are configured.
By default, the protection group works in revertive mode and the WRT is 5 minutes.
----End
NOTE
In the following commands, the name of the Maintenance Domain (MD) and the Maintenance Association
(MA), and the number of the Remote Maintenance End Point (RMEP) should be defined before the Ethernet
OAM detection mechanism is configured.
Procedure
Step 1 Run:
system-view
Step 3 Run:
l2-multicast protection oam-bind cfm md md-name ma ma-name remote-mep mep-id
Step 4 Run:
quit
Step 5 Run:
vlan vlan-id
Step 6 Run:
l2-multicast protection oam-bind cfm md md-name ma ma-name remote-mep mep-id
----End
Context
Do as follows on Router B:
Procedure
Step 1 Run:
system-view
Step 2 Run:
vlan vlan-id
Step 3 Run:
l2-multicast protection switch { clear | lock | force | manual }
The switchover status of the protection group that the working VLAN belongs to is configured.
----End
Prerequisite
The configurations of the 1+1 Protection of Multicast VLANs function are complete.
Procedure
Step 1 Run the display l2-multicast protection vlan [ vlan-id ] command to check the configuration
on multicast VLAN 1+1 protection.
----End
Example
Run the display l2-multicast protection vlan command. If the configuration on multicast
VLAN 1+1 protection is displayed, it means that the configuration succeeds. For example:
<HUAWEI> display l2-multicast protection vlan
Total protection groups 1:
PG Main Prot Rest Wtr Hold Main Prot Proto FM Swith Group
indx vlan vlan mode time time stat stat enable Board cmd state
------------------------------------------------------------------------
0 2 3 Rev 5 0 OK OK Yes 3 none normal
Pre-configuration Tasks
Before configuring static Layer 2 multicast, complete the following tasks:
l Connecting interfaces and setting physical parameters for the interfaces so that the physical
layer of the interfaces is Up
l Configuring basic VPLS functions
l Enabling IGMP snooping globally
Data Preparation
To configure static Layer 2 multicast, you need the following data.
No. Data
2 Pseudo Wire (PW), sub-interface, and interface of the multicast group that hosts join
statically
NOTE
l A sub-interface not bound to a VSI cannot be added to a static multicast group. In addition, when the
binding relationship between the sub-interface and the VSI is removed, the configurations of all static
multicast groups and router ports on the sub-interface are deleted.
l One sub-interface can be added to only one VSI. In addition, one sub-interface can be configured with
multiple static groups in only one VSI.
Procedure
l Adding a PW to a Static Layer 2 Multicast Group
1. Run:
system-view
NOTE
l A sub-interface not bound to a VSI cannot be added to a static multicast group. In addition, when
the binding relationship between the sub-interface and the VSI is removed, the configurations of
all static multicast groups and router ports on the sub-interface are deleted.
l One sub-interface can be added to only one VSI. In addition, one sub-interface can be configured
with multiple static groups in only one VSI.
1. Run:
system-view
Procedure
Step 1 Run:
system-view
----End
Procedure
Step 1 Run the display igmp-snooping port-info [ vlan vlan-id [ group-address group-address ] ]
[ verbose ] command to check information about ports on the router.
----End
Example
Run the display igmp-snooping port-info command to check whether the static multicast
configuration takes effect. If the flag S is displayed, it indicates that the port is statically
configured, which means that the configuration succeeds.
<HUAWEI> display igmp-snooping port-info
-----------------------------------------------------------------------
(Source, Group) Port Flag
-----------------------------------------------------------------------
VLAN 10, 3 Entry(s)
(10.1.1.2, 224.1.1.1) GE1/0/1 S--
1 port(s)
(10.1.1.3, 224.1.1.1) GE1/0/1 S--
1 port(s)
(10.1.1.4, 224.1.1.1) GE1/0/1 S--
1 port(s)
-----------------------------------------------------------------------
Procedure
Step 1 Run:
system-view
Step 2 Run:
snmp-agent trap enable feature-name l2-multicast { non-excessive all | trap-name
trap-name }
----End
Prerequisite
The configurations of the network management function for Layer 2 multicast are complete.
Procedure
Step 1 Run the display current-configuration configuration | include l2-multicast command to
check the Layer 2 alarm function.
----End
Example
Run the display current-configuration configuration | include l2-multicast command. If the
information shown in the following output is displayed, it means that the configuration
succeeds.
<HUAWEI> display current-configuration configuration | include l2-multicast
snmp-agent trap enable l2-multicast
Applicable Environment
If the router connected to hosts is configured with IGMPv3, SSM mapping is required so that
the device can map the group addresses of received multicast packets that carry no source
address to specific sources.
When the router enabled with IGMPv3 receives an IGMPv2 packet whose group address is
within the SSM group range, SSM mapping is still required to implement automatic source
mapping.
Pre-configuration Tasks
Before configuring Layer 2 multicast SSM mapping, complete the following tasks:
Data Preparation
To configure Layer 2 multicast SSM mapping, you need the following data.
No. Data
No. Data
Context
If the specified multicast group address is in the ASM group address range, you are required to
configure an SSM policy on the router and add the multicast address to the SSM group address
range.
Do as follows on the router connected with the host:
Procedure
Step 1 Run:
system-view
The multicast packet whose group address is beyond the range defined in the ACL rule is
discarded.
Step 6 Run:
vlan vlan-id
The specified multicast address is added to the SSM group address range.
----End
Context
Do as follows on the router connected with the host:
Procedure
Step 1 Run:
system-view
Step 2 Run:
igmp-snooping enable
Step 3 (Optional)Run:
igmp-snooping version { 1 | 2 | 3 }
Step 4 Run:
vlan vlan-id
Step 5 Run:
igmp-snooping enable
Step 6 Run:
igmp-snooping ssm-mapping enable
Step 7 Run:
igmp-snooping ssm-mapping ip-group-address { ip-group-mask | mask-length } ip-
source-address
The multicast address in the specified range is mapped to the source address.
The specified multicast address must be within the SSM group address range.
----End
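The SSM mapping procedure can be sketched as follows. VLAN 10, the group range 232.1.1.0 with mask length 24, and the source address 10.1.1.2 are hypothetical values for illustration only.
<HUAWEI> system-view
[HUAWEI] igmp-snooping enable
[HUAWEI] vlan 10
[HUAWEI-vlan10] igmp-snooping enable
[HUAWEI-vlan10] igmp-snooping ssm-mapping enable
[HUAWEI-vlan10] igmp-snooping ssm-mapping 232.1.1.0 24 10.1.1.2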
Prerequisite
Configurations about Layer 2 multicast SSM mapping are complete.
Procedure
l Run the display igmp-snooping port-info command to check entries on the interface.
----End
Example
Run the display igmp-snooping port-info command, and you can view entries on the
corresponding interface. For example:
<HUAWEI> display igmp-snooping port-info
-----------------------------------------------------------------------
(Source, Group) Port Flag
-----------------------------------------------------------------------
VLAN 10, 3 Entry(s)
(10.1.1.2, 224.1.1.1) GE1/0/1 --M
1 port(s)
(10.1.1.3, 224.1.1.1) GE1/0/1 --M
1 port(s)
(10.1.1.4, 224.1.1.1) GE1/0/1 --M
1 port(s)
-----------------------------------------------------------------------
As shown in Figure 3-1, UPEs are connected with access devices (DSLAMs or switches)
through VLANs. IGMP snooping and multicast CAC are deployed on UPEs. Therefore, when
receiving IGMP Report messages, UPEs check the messages based on the configured CAC limit,
thereby controlling the number of IPTV channels or bandwidth of each channel requested by
the access devices attached to UPEs.
NOTE
You can choose to configure multicast CAC for a VLAN, a Layer 2 interface, or the interface in a specified
VLAN, or you can configure multicast CAC for all of them simultaneously.
In this section, the scenario where IGMP snooping is deployed on UPEs is described.
Pre-configuration Tasks
Before configuring Layer 2 multicast CAC for a VLAN, a Layer 2 interface, or the interface in
a specified VLAN, complete the following tasks:
l Connecting interfaces of the routers in the network correctly
l Configuring interfaces on the routers to ensure that the status of the link layer protocol
between the routers is Up
Data Preparation
To configure Layer 2 multicast CAC for a VLAN, a Layer 2 interface, or the interface in a
specified VLAN, you need the following data.
No. Data
1 Name of channels
4 Number of channels
5 Bandwidth of channels
6 ID of the VLAN or the type and number of the interface where Layer 2 multicast
CAC should be configured
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
Step 2 Run:
l2-multicast limit max-entry count [ except acl-number ]
Step 3 Run:
l2-multicast limit max-entry count vlan { { vlan-id [ to vlan-id ] } &<1-10> }
[ except acl-number ]
Step 4 Run:
interface interface-type interface-number
The Ethernet interface view, Gigabit Ethernet interface view, or Eth-Trunk interface view is
displayed.
Step 5 Run:
portswitch
Step 6 Run:
l2-multicast limit max-entry count [ except acl-number ]
Restriction on the number of multicast group members is configured for the Layer 2 interface.
Step 7 Run:
l2-multicast limit max-entry count [ vlan { { vlan-id [ to vlan-id ] } &<1-10> } ]
[ except acl-number ]
Restriction on the number of multicast group members is configured for the interface in a
specified VLAN.
NOTE
The parameter except is used to exclude the multicast groups that need not be restricted.
----End
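The global and per-VLAN entry limits above can be sketched as follows. The limits 1000 and 100 and VLAN 10 are hypothetical values, and the optional except acl-number parameter is omitted.
<HUAWEI> system-view
[HUAWEI] l2-multicast limit max-entry 1000
[HUAWEI] l2-multicast limit max-entry 100 vlan 10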
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
Step 2 Run:
l2-multicast limit bandwidth bandwidth
Step 3 Run:
l2-multicast limit bandwidth bandwidth vlan { { vlan-id [ to vlan-id ] } &<1-10>
}
Step 4 Run:
interface interface-type interface-number
The Ethernet interface view, Gigabit Ethernet interface view, or Eth-Trunk interface view is
displayed.
Step 5 Run:
portswitch
Step 6 Run:
l2-multicast limit bandwidth bandwidth
Restriction on the bandwidth of the multicast groups on the Layer 2 interface is configured.
Step 7 Run:
l2-multicast limit bandwidth bandwidth [ vlan { { vlan-id [ to vlan-id ] }
&<1-10> } ]
Restriction on the bandwidth of the multicast groups is configured for the interface in a specified
VLAN.
----End
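The bandwidth-limit steps above can be sketched in the same way. This example is illustrative only: the VLAN ID, the interface, and the bandwidth values (in kbit/s) are hypothetical.

```
<UPE> system-view
# Limit the total multicast bandwidth globally.
[UPE] l2-multicast limit bandwidth 4000
# Limit the multicast bandwidth in VLAN 10.
[UPE] l2-multicast limit bandwidth 1000 vlan 10
[UPE] interface gigabitethernet 1/0/1
[UPE-GigabitEthernet1/0/1] portswitch
# Limit the multicast bandwidth on the Layer 2 interface.
[UPE-GigabitEthernet1/0/1] l2-multicast limit bandwidth 500
# Limit the multicast bandwidth for the interface in VLAN 10.
[UPE-GigabitEthernet1/0/1] l2-multicast limit bandwidth 200 vlan 10
```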
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
Restriction on the number of global channels or the number of channels in a VLAN is configured.
Step 3 Run:
interface interface-type interface-number
The Ethernet interface view, Gigabit Ethernet interface view, or Eth-Trunk interface view is
displayed.
Step 4 Run:
portswitch
Restriction on the number of channels is configured for the interface in a specified VLAN.
----End
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
Restriction on the bandwidth of global channels or the bandwidth of the channels in a VLAN is
configured.
Step 3 Run:
interface interface-type interface-number
The Ethernet interface view, Gigabit Ethernet interface view, or Eth-Trunk interface view is
displayed.
Step 4 Run:
portswitch
Step 5 Run:
l2-multicast limit channel channel-name { bandwidth bandwidth }
Restriction on the bandwidth of the channels is configured for the Layer 2 interface.
Step 6 Run:
l2-multicast limit channel channel-name { bandwidth bandwidth } [ vlan { { vlan-id
[ to vlan-id ] } &<1-10> } ]
Restriction on the bandwidth of the channels is configured for the interface in a specified VLAN.
----End
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
Step 3 Run:
l2-multicast-channel vlan vlan-id
The VLAN specified in this command must not be a VLAN that is bound to a VSI, a user
VLAN, or a protection VLAN. Otherwise, you cannot configure channels for the VLAN.
Step 4 Run:
channel channel-name [ type [ asm | ssm ] ]
The name specified in this command must not be the same as the name of a global channel;
otherwise, the channel cannot be created for the VLAN.
Step 5 Run:
group group-address { group-mask-length | group-mask } [ per-bandwidth bandwidth ]
The member multicast groups of the channel in the VLAN are configured and bandwidth of each
member multicast group is set.
The addresses of member multicast groups in the global channel, the channels in a VLAN, and
the channels in a VSI cannot be the same.
Step 6 Run:
interface interface-type interface-number
The Ethernet interface view, Gigabit Ethernet interface view, or Eth-Trunk interface view is
displayed.
The method for the channel in the VLAN to process the Join messages for an unknown member
multicast group is configured.
After this command is run, the Join messages for an unknown member multicast group are
denied; if the undo unspecified-channel deny command is run, the Join messages for an
unknown member multicast group are permitted.
----End
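A channel configuration for a VLAN might look as follows. This is a sketch only: the channel name bjtv, VLAN 10, the group address, the per-group bandwidth, and the view prompts shown are hypothetical.

```
<UPE> system-view
[UPE] l2-multicast-channel vlan 10
[UPE-l2-multicast-channel-vlan10] channel bjtv type asm
[UPE-l2-multicast-channel-vlan10-bjtv] group 226.1.1.0 24 per-bandwidth 4
[UPE-l2-multicast-channel-vlan10-bjtv] quit
# Deny Join messages for groups that belong to no configured channel.
[UPE-l2-multicast-channel-vlan10] unspecified-channel deny
```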
Prerequisite
The configuration of Layer 2 multicast CAC for a VLAN, a Layer 2 interface, or an interface
in a specified VLAN is complete.
Procedure
l Run the display l2-multicast limit configuration command to check configurations of
Layer 2 multicast CAC.
l Run the display l2-multicast limit configuration vlan [ vlanid ] command to check
configurations of Layer 2 multicast CAC for a VLAN.
l Run the display l2-multicast limit vlan [ vlanid ] [ channel channel-name ] command to
check configurations and statistics of restriction on the channels in a VLAN.
l Run the display l2-multicast limit interface interface-type interface-number command to
check configurations and statistics of Layer 2 multicast CAC for an interface.
l Run the display l2-multicast limit channel channel-name command to check
configurations and statistics of Layer 2 multicast CAC for a channel.
Example
Run the display l2-multicast limit configuration command to check configurations of Layer
2 multicast CAC.
<HUAWEI> display l2-multicast limit configuration
L2-multicast limit information, The unit of bandwidth is kbits/sec
---------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
---------------------------------------------------------------------
Global limit information:
---------------------------------------------------------------------
100 2000
---- ----------
VLAN 20 limit information:
---------------------------------------------------------------------
50 1000
---- ----------
VLAN 20 channel limit information:
---------------------------------------------------------------------
bjtv 15 60
---- ----------
interface GigabitEthernet1/0/1 VLAN 10 channel limit information:
---------------------------------------------------------------------
cctv 20 300
---- ----------
Run the display l2-multicast-channel vlan 10 command to check configurations of the channel
in VLAN 10.
<HUAWEI> display l2-multicast-channel vlan 10
Channel information on VLAN 10
ChannelName GroupAddress Mask Bandwidth
---------------------------------------------------------------------
njtv 226.1.1.0 255.255.255.0 4
226.1.2.0 255.255.255.0 6
226.1.3.0 255.255.255.0 5
---------------------------------------------------------------------
njtv1 226.2.1.0 255.255.255.0 4
226.2.2.0 255.255.255.0 2
Run the display l2-multicast limit vlan 20 command to check configurations and statistics of
multicast CAC for the channels in VLAN 20.
<HUAWEI> display l2-multicast limit vlan 20
L2-multicast limit information, The unit of bandwidth is kbits/sec
---------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
---------------------------------------------------------------------
VLAN 20 limit information:
---------------------------------------------------------------------
50 1000
0 0
VLAN 20 channel limit information:
---------------------------------------------------------------------
bjtv 15 60
0
Run the display l2-multicast limit vlan 10 command to check configurations and statistics of
multicast CAC on the interface in VLAN 10.
Pre-configuration Tasks
Before configuring Layer 2 multicast CAC for a VSI, complete the following tasks:
l Connecting interfaces of the routers in the network correctly
l Configuring interfaces on the routers to ensure that the status of the link layer protocol
between the routers is Up
Data Preparation
To configure Layer 2 multicast CAC for a VSI, you need the following data.
No. Data
1 Name and ID of the VSI, and the interface to which the VSI is bound
No. Data
5 Number of channels
6 Bandwidth of channels
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
Restriction on the number of multicast group members is configured for a VSI. You can specify
the parameter except to exclude the multicast groups that need not be restricted.
----End
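Assuming the l2-multicast limit max-entry command is entered in the VSI view, parallel to the VLAN case, the configuration might be sketched as follows; the VSI name v123 and the limit value are hypothetical.

```
<UPE> system-view
[UPE] vsi v123
# Limit the number of multicast group members for the VSI.
[UPE-vsi-v123] l2-multicast limit max-entry 500
```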
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
The method for the channel in the VSI to process the Join messages for an unknown member
multicast group is configured.
After this command is run, the Join messages for an unknown member multicast group are
denied; if the undo unspecified-channel deny command is run, the Join messages for an
unknown member multicast group are permitted.
For the procedure for configuring a global channel, see "Configuring Channels for a
VLAN."
Step 4 Run:
channel channel-name [ type [ asm | ssm ] ]
A channel is configured for the VSI and the channel view is displayed.
The name specified in this command must not be the same as the name of a global channel;
otherwise, the channel cannot be created for the VSI.
Step 5 Run:
group group-address { group-mask-length | group-mask } [ per-bandwidth bandwidth ]
The member multicast groups in the channel are configured and the bandwidth of each member
multicast group is set.
If the bandwidth is not configured, the value is 0.
----End
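A channel configuration for a VSI might be sketched as follows, assuming an l2-multicast-channel vsi command parallel to the VLAN case; the VSI name a, the channel name bjtv, the group address, and the view prompts are hypothetical.

```
<UPE> system-view
[UPE] l2-multicast-channel vsi a
[UPE-l2-multicast-channel-a] channel bjtv type asm
# Add a member group and set its per-group bandwidth (kbit/s).
[UPE-l2-multicast-channel-a-bjtv] group 226.1.2.0 24 per-bandwidth 4
```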
Procedure
l Run the display l2-multicast limit configuration command to check configurations of
Layer 2 multicast CAC.
l Run the display l2-multicast limit configuration vsi vsi-name command to check
configurations of Layer 2 multicast CAC for a VSI.
l Run the display l2-multicast limit vsi [ vsi-name ] [ channel channel-name ] command
to check configurations and statistics of restriction on the channels in a VSI.
l Run the display l2-multicast limit interface interface-type interface-number command to
check configurations and statistics of Layer 2 multicast CAC for an interface.
l Run the display l2-multicast limit channel channel-name command to check
configurations and statistics of Layer 2 multicast CAC for a channel.
l Run the display l2-multicast-channel channel-name command to check configurations of
a channel.
----End
Example
Run the display l2-multicast limit configuration command to check configurations of Layer
2 multicast CAC.
<HUAWEI> display l2-multicast limit configuration
L2-multicast limit information, The unit of bandwidth is kbits/sec
---------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
---------------------------------------------------------------------
Global limit information:
---------------------------------------------------------------------
1000 4000
---- --------
VSI a limit information:
---------------------------------------------------------------------
1000 4000
---- --------
interface GigabitEthernet1/0/0.1 channel limit information:
---------------------------------------------------------------------
bjtv 100 1000
---- --------
interface GigabitEthernet1/0/0.1 channel limit information:
---------------------------------------------------------------------
cctv 120 2500
---- --------
interface GigabitEthernet1/0/0.1 channel limit information:
---------------------------------------------------------------------
njtv 50 450
---- --------
Run the display l2-multicast-channel vsi a command to check configurations of the channels
in VSI a.
<HUAWEI> display l2-multicast-channel vsi a
Channel information on VSI a
ChannelName GroupAddress Mask Bandwidth
---------------------------------------------------------------------
bjtv 226.1.2.0 255.255.255.0 4
---------------------------------------------------------------------
cctv 226.1.3.0 255.255.255.0 6
---------------------------------------------------------------------
njtv 226.1.1.0 255.255.255.0 3
Run the display l2-multicast limit configuration vsi vsi-name command to check
configurations of Layer 2 multicast CAC on the sub-interface bound to the VSI.
<HUAWEI> display l2-multicast limit configuration vsi a interface Ethernet 1/0/1.1
L2-multicast limit information, The unit of bandwidth is kbits/sec
------------------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
------------------------------------------------------------------------------
interface Ethernet1/0/1.1 limit information:
------------------------------------------------------------------------------
200 500
---- ----------
Applicable Environment
In the scenario where UPEs provide multicast services through VSIs and VSIs are bound to
Ethernet sub-interfaces, you can configure Layer 2 multicast CAC on sub-interfaces to ensure
the quality of the IPTV service.
Pre-configuration Tasks
Before configuring Layer 2 multicast CAC for a sub-interface, complete the following tasks:
Data Preparation
To configure Layer 2 multicast CAC for a sub-interface, you need the following data.
No. Data
1 Name and ID of the VSI and the number of sub-interface bound to the VSI
5 Number of channels
6 Bandwidth of channels
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface { ethernet | gigabitethernet | eth-trunk } interface-number.subnumber
Step 3 Run:
l2-multicast limit max-entry count [ except acl-number ]
Restriction on the number of multicast group members is configured for a sub-interface. You
can specify the parameter except to exclude the multicast groups that need not be restricted.
Step 4 Run:
l2-multicast limit max-entry count dot1q vid { { vid1 [ to vid2 ] } & <1-10> }
[ except acl-number ]
Restriction on the number of the multicast group members on a sub-interface for dot1q VLAN
tag termination is configured. You can specify the parameter except to exclude the multicast
groups that need not be restricted.
Step 5 Run:
l2-multicast limit max-entry count qinq pe-vid pe-vid ce-vid { { ce-id1 [ to ce-
id2 ] } & <1-10> } [ except acl-number ]
Restriction on the number of the multicast group members on a sub-interface for QinQ VLAN
tag termination is configured. You can specify the parameter except to exclude the multicast
groups that need not be restricted.
NOTE
For details on the procedure for configuring the sub-interface for dot1q or QinQ VLAN tag termination,
refer to the HUAWEI NetEngine80E/40E Configuration Guide - LAN Access and MAN Access.
----End
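The sub-interface steps above can be sketched as follows. This example is illustrative only: the sub-interface GE 1/0/0.1, the VLAN IDs, and the limit values are hypothetical.

```
<UPE> system-view
[UPE] interface gigabitethernet 1/0/0.1
# Limit the number of multicast group members on the sub-interface.
[UPE-GigabitEthernet1/0/0.1] l2-multicast limit max-entry 200
# Limit entries for dot1q VLAN tag termination with VLAN IDs 10 to 20.
[UPE-GigabitEthernet1/0/0.1] l2-multicast limit max-entry 50 dot1q vid 10 to 20
# Limit entries for QinQ termination with outer VLAN 100 and inner VLANs 10 to 20.
[UPE-GigabitEthernet1/0/0.1] l2-multicast limit max-entry 30 qinq pe-vid 100 ce-vid 10 to 20
```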
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface { ethernet | gigabitethernet | eth-trunk } interface-number.subnumber
Step 3 Run:
l2-multicast limit bandwidth bandwidth
Step 4 Run:
l2-multicast limit bandwidth bandwidth dot1q vid { { vid1 [ to vid2 ] } & <1-10> }
Restriction on the bandwidth of the multicast groups of a sub-interface for dot1q VLAN tag
termination is configured.
Step 5 Run:
l2-multicast limit bandwidth bandwidth qinq pe-vid pe-vid ce-vid { { ce-id1 [ to
ce-id2 ] } & <1-10> }
Restriction on the bandwidth of the multicast groups of a sub-interface for QinQ VLAN tag
termination is configured.
NOTE
For details on the procedure for configuring the sub-interface for dot1q or QinQ VLAN tag termination,
refer to the HUAWEI NetEngine80E/40E Configuration Guide - LAN Access and MAN Access.
----End
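The bandwidth limits for a sub-interface can be sketched the same way. The sub-interface, VLAN IDs, and bandwidth values (kbit/s) below are hypothetical.

```
<UPE> system-view
[UPE] interface gigabitethernet 1/0/0.1
# Limit the multicast bandwidth on the sub-interface.
[UPE-GigabitEthernet1/0/0.1] l2-multicast limit bandwidth 2000
# Limit the bandwidth for dot1q VLAN tag termination with VLAN IDs 10 to 20.
[UPE-GigabitEthernet1/0/0.1] l2-multicast limit bandwidth 500 dot1q vid 10 to 20
# Limit the bandwidth for QinQ termination with outer VLAN 100 and inner VLANs 10 to 20.
[UPE-GigabitEthernet1/0/0.1] l2-multicast limit bandwidth 300 qinq pe-vid 100 ce-vid 10 to 20
```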
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on a UPE:
Procedure
Step 1 Run:
system-view
----End
Procedure
l Run the display l2-multicast limit configuration command to check configurations of
Layer 2 multicast CAC.
l Run the display l2-multicast limit configuration vsi [ vsi-name ] interface { interface-
type interface-number.subnumber } command to check configurations of Layer 2 multicast
CAC for a sub-interface.
l Run the display l2-multicast limit interface interface-type interface-number command to
check configurations and statistics of Layer 2 multicast CAC for an interface.
l Run the display l2-multicast limit channel channel-name command to check
configurations and statistics of Layer 2 multicast CAC for a channel.
l Run the display l2-multicast-channel channel-name command to check configurations of
a channel.
----End
Example
Run the display l2-multicast limit configuration command to check configurations of Layer
2 multicast CAC.
<HUAWEI> display l2-multicast limit configuration
L2-multicast limit information, The unit of bandwidth is kbits/sec
---------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
---------------------------------------------------------------------
Global limit information:
---------------------------------------------------------------------
1000 4000
---- --------
VSI a limit information:
---------------------------------------------------------------------
1000 4000
---- --------
interface GigabitEthernet1/0/0.1 channel limit information:
---------------------------------------------------------------------
bjtv 100 1000
---- --------
Run the display l2-multicast limit configuration vsi [ vsi-name [ interface { interface-type
interface-number } ] ] command to check configurations and statistics of Layer 2 multicast CAC
on the sub-interface bound to the VSI.
<HUAWEI> display l2-multicast limit configuration vsi a interface Ethernet 1/0/1.1
L2-multicast limit information, The unit of bandwidth is kbits/sec
------------------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
------------------------------------------------------------------------------
interface Ethernet1/0/1.1 limit information:
------------------------------------------------------------------------------
200 500
---- ----------
Run the display l2-multicast limit interface command to check configurations and statistics of
Layer 2 multicast CAC on an interface.
<HUAWEI> display l2-multicast limit interface Ethernet 1/0/0
L2-multicast limit information, The unit of bandwidth is kbits/sec
------------------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
------------------------------------------------------------------------------
interface Ethernet1/0/0 limit information:
------------------------------------------------------------------------------
500 1000
0 0
Applicable Environment
In a ring network, UPEs are connected as shown in Figure 3-4 and H-VPLS is deployed in the
network to reduce the replicated multicast traffic between UPEs.
(Figure 3-4: The BTV server at the top connects to NPE1 and NPE2; UPE2 and UPE3 form part
of the ring, with DSLAM1 and DSLAM2 attached downstream.)
In the H-VPLS networking, IGMP Report messages received from downstream UPEs are
forwarded through PWs, and the outgoing interface in the multicast forwarding table can also
be a PW. Therefore, it is recommended that you configure Layer 2 multicast CAC for the PW.
NOTE
You must first configure a VSI or PW. When configuring the PW, specify the signaling protocol,
LDP or BGP, adopted by the PW.
Pre-configuration Tasks
Before configuring Layer 2 multicast CAC for a PW, complete the following tasks:
l Connecting interfaces on network devices correctly
l Configuring interfaces on the routers to ensure that the status of the link layer protocol
between the routers is Up
Data Preparation
To configure Layer 2 multicast CAC for a PW, you need the following data.
No. Data
4 Number of channels
5 Bandwidth of channels
Context
Do as follows on the UPE and SPE:
Procedure
Step 1 Run:
pwsignal { ldp | bgp }
The signaling protocol (LDP or BGP) used by the PW is configured and the VSI-LDP or VSI-
BGP view is displayed.
Step 2 Run:
l2-multicast limit max-entry count [ except acl-number ] { remote-peer ip-address
[ negotiation-vc-id vc-id ] | remote-site remote-site-id }
Restriction on the number of multicast group members is configured for a PW. You can specify
the parameter except to exclude the multicast groups that need not be restricted.
----End
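The PW limit above can be sketched as follows on the UPE or SPE. The VSI name v123, the peer address 1.1.1.1, and the entry limit are hypothetical; the VSI must already exist.

```
<UPE> system-view
[UPE] vsi v123
[UPE-vsi-v123] pwsignal ldp
# Limit the number of multicast group members on the PW to peer 1.1.1.1.
[UPE-vsi-v123-ldp] l2-multicast limit max-entry 300 remote-peer 1.1.1.1
```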
Context
Do as follows on the UPE and SPE:
Procedure
Step 1 Run:
pwsignal { ldp | bgp }
The signaling protocol (LDP or BGP) used by the PW is configured and the VSI-LDP or VSI-
BGP view is displayed.
Step 2 Run:
l2-multicast limit bandwidth bandwidth { remote-peer ip-address [ negotiation-vc-id
vc-id ] | remote-site remote-site-id }
----End
Context
Do as follows on the UPE and SPE:
Procedure
Step 1 Run:
pwsignal { ldp | bgp }
The signaling protocol (LDP or BGP) used by the PW is configured and the VSI-LDP or VSI-
BGP view is displayed.
Step 2 Run:
l2-multicast limit channel channel-name max-entry count { remote-peer ip-address
[ negotiation-vc-id vc-id ] | remote-site remote-site-id }
----End
Context
Do as follows on the UPE and SPE:
Procedure
Step 1 Run:
pwsignal { ldp | bgp }
The signaling protocol (LDP or BGP) used by the PW is configured and the VSI-LDP or VSI-
BGP view is displayed.
Step 2 Run:
l2-multicast limit channel channel-name bandwidth traffic-rate { remote-peer
ip-address [ negotiation-vc-id vc-id ] | remote-site remote-site-id }
----End
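The bandwidth and per-channel limits for a PW can be combined as in the following sketch. The VSI name v123, the peer 1.1.1.1, the channel name bjtv, and the values are hypothetical.

```
<UPE> system-view
[UPE] vsi v123
[UPE-vsi-v123] pwsignal ldp
# Limit the total multicast bandwidth on the PW to peer 1.1.1.1.
[UPE-vsi-v123-ldp] l2-multicast limit bandwidth 2000 remote-peer 1.1.1.1
# Limit the entries and bandwidth of the channel bjtv on the same PW.
[UPE-vsi-v123-ldp] l2-multicast limit channel bjtv max-entry 20 remote-peer 1.1.1.1
[UPE-vsi-v123-ldp] l2-multicast limit channel bjtv bandwidth 500 remote-peer 1.1.1.1
```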
Procedure
l Run the display vpls connection [ ldp | bgp | vsi vsi-name ] [ down | up ] [ verbose ]
command to check information about the VPLS connection.
l Run the display vsi remote ldp [ router-id ip-address ] [ pw-id pw-id ] command to check
information about the remote VSI.
l Run the display l2-multicast limit configuration command to check configurations of
Layer 2 multicast CAC.
l Run the display l2-multicast limit configuration vsi vsi-name command to check
configurations of Layer 2 multicast CAC for a VSI.
l Run the display l2-multicast limit vsi [ vsi-name [ interface { interface-type interface-
number } ] ] | remote-peer ip-address [ negotiation-vc-id vc-id ] | [ channel channel-
name ] command to check configurations and statistics of restriction on the channels of a
peer.
l Run the display l2-multicast limit interface interface-type interface-number command to
check configurations and statistics of Layer 2 multicast CAC for an interface.
l Run the display l2-multicast limit channel channel-name command to check
configurations and statistics of Layer 2 multicast CAC for a channel.
l Run the display l2-multicast-channel channel-name command to check configurations of
a channel.
----End
Example
Run the display l2-multicast limit configuration command to check configurations of Layer
2 multicast CAC.
Run the display l2-multicast limit configuration vsi vsi-name command to check
configurations and statistics of Layer 2 multicast CAC on the sub-interface bound to the VSI.
<HUAWEI> display l2-multicast limit configuration vsi a interface Ethernet 1/0/1.1
L2-multicast limit information, The unit of bandwidth is kbits/sec
------------------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
------------------------------------------------------------------------------
interface Ethernet1/0/1.1 limit information:
------------------------------------------------------------------------------
200 500
---- ----------
Context
CAUTION
After you clear the dynamic ports from the outbound port information, the hosts in the VLAN or
VSI are temporarily unable to receive multicast flows until the router regenerates the outbound
port information. Confirm the action before you use the command.
Procedure
Step 1 Run the reset igmp-snooping group { vlan { vlan-id | all } | vsi { vsi-name | all } | all } command
in the user view to clear the dynamic ports from the outbound port information.
----End
Context
CAUTION
If the statistics of IGMP snooping are cleared, the previous statistics cannot be restored. Confirm
the action before you use the command.
Procedure
Step 1 Run the reset igmp-snooping statistics { vlan { vlan-id | all } | vsi { vsi-name | all } | all }
command in the user view to clear the statistics of IGMP snooping.
----End
Context
In routine maintenance, you can run the following commands in any view to check the running
status of IGMP snooping.
Procedure
l Run the display igmp-snooping [ vlan [ vlan-id ] | vsi [ vsi-name ] ] [ configuration ]
command in any view to check the configurations of IGMP snooping.
l Run the display igmp-snooping port-info [ vlan vlan-id [ group-address group-
address ] ] [ verbose ] command in any view to check information about ports on the router.
l Run the display igmp-snooping router-port { vlan vlan-id | vsi vsi-name } command in
any view to check information about the router port.
l Run the display igmp-snooping statistics { vlan [ vlan-id ] | vsi [ vsi-name ] } command
in any view to check the statistics of IGMP snooping.
l Run the display { multicast-vlan | user-vlan } vlan [ vlan-id ] command in any view to
check information about the multicast VLAN or the user VLAN.
l Run the display igmp-snooping querier { vlan [ vlan-id ] | vsi [ vsi-name ] } command
in any view to check information about the IGMP snooping querier.
l Run the display l2-multicast forwarding-mode { vlan [ vlan-id ] | vsi [ vsi-name ] }
command in any view to check the forwarding mode of Layer 2 multicast.
l Run the display l2-multicast forwarding-table { vlan vlan-id | vsi vsi-name } [ group-
address group-address | router-group ] command in any view to check the Layer 2
multicast forwarding table.
l Run the display l2-multicast protection [ vlan vlan-id ] command in any view to check
detailed information about the protection group.
----End
CAUTION
Debugging affects the performance of the system. After debugging, run the undo debugging
all command to disable debugging immediately.
When a fault occurs, run the debugging command to debug CAC and locate the fault. For the
procedure for displaying the debugging information, refer to the HUAWEI NetEngine80E/
40E Router Configuration Guide - System Management.
(Figure: VPLS networking. PE1, PE2, and PE3 interconnect over POS 2/0/0 and POS 3/0/0 links
and use Loopback1 addresses 1.1.1.1/32, 2.2.2.2/32, and 3.3.3.3/32 respectively. CE1, CE2, and
CE3 attach to the PEs over GE and Ethernet interfaces; the multicast source and the receiver
attach on the CE side.)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Enable Open Shortest Path First (OSPF) to advertise the network segment of each interface and
the ID of each Label Switching Router (LSR).
The configurations are not mentioned here.
Step 2 Configure basic MPLS functions and the Label Distribution Protocol (LDP).
# Configure PE1.
<PE1> system-view
[PE1] mpls lsr-id 1.1.1.1
[PE1] mpls
[PE1-mpls] quit
[PE1] mpls ldp
[PE1-mpls-ldp] quit
[PE1] interface pos 2/0/0
[PE1-Pos2/0/0] mpls
[PE1-Pos2/0/0] mpls ldp
[PE1-Pos2/0/0]quit
[PE1] interface pos 3/0/0
[PE1-Pos3/0/0] mpls
[PE1-Pos3/0/0] mpls ldp
[PE1-Pos3/0/0] quit
# Configure PE2.
<PE2> system-view
[PE2] mpls lsr-id 2.2.2.2
[PE2] mpls
[PE2-mpls] quit
[PE2] mpls ldp
[PE2-mpls-ldp] quit
[PE2] interface pos 2/0/0
[PE2-Pos2/0/0] mpls
[PE2-Pos2/0/0] mpls ldp
[PE2-Pos2/0/0] quit
# Configure PE3.
<PE3> system-view
[PE3] mpls lsr-id 3.3.3.3
[PE3] mpls
[PE3-mpls] quit
[PE3] mpls ldp
[PE3-mpls-ldp] quit
[PE3] interface pos 2/0/0
[PE3-Pos2/0/0] mpls
[PE3-Pos2/0/0] mpls ldp
[PE3-Pos2/0/0] quit
# Configure PE2.
<PE2> system-view
[PE2] mpls l2vpn
[PE2-l2vpn] quit
[PE2] vsi v123 static
[PE2-vsi-v123] pwsignal ldp
[PE2-vsi-v123-ldp] vsi-id 123
[PE2-vsi-v123-ldp] peer 1.1.1.1
[PE2-vsi-v123-ldp] quit
[PE2-vsi-v123] quit
# Configure PE3.
<PE3> system-view
[PE3] mpls l2vpn
[PE3-l2vpn] quit
[PE3] vsi v123 static
[PE3-vsi-v123] pwsignal ldp
[PE3-vsi-v123-ldp] vsi-id 123
[PE3-vsi-v123-ldp] peer 1.1.1.1
[PE3-vsi-v123-ldp] quit
[PE3-vsi-v123] quit
# Configure PE1. The configurations of PE2 and PE3 are similar to those of PE1 and are not
mentioned here.
[PE1] vlan 10
[PE1-vlan10] quit
[PE1] interface gigabitethernet 1/0/0
[PE1-GigabitEthernet1/0/0] portswitch
[PE1-GigabitEthernet1/0/0] port link-type access
[PE1-GigabitEthernet1/0/0] port default vlan 10
[PE1-GigabitEthernet1/0/0] undo shutdown
[PE1-GigabitEthernet1/0/0] quit
[PE1] interface vlanif 10
[PE1-Vlanif10] l2 binding vsi v123
[PE1-Vlanif10] quit
Step 5 Enable IGMP snooping globally and for the VSI on the PE devices.
# Configure PE1. The configurations of PE2 and PE3 are similar to those of PE1 and are not
mentioned here.
[PE1] igmp-snooping enable
[PE1] vsi v123
[PE1-vsi-v123] igmp-snooping enable
Step 6 Configure GE 1/0/0 on PE1 as a static router port in VLAN 10, configure the PW on PE2 as a
static router port, and configure the querier on PE1. The default values are used for the querier
parameters and thus no special configuration is required.
# Configure PE1.
<PE1> system-view
[PE1] igmp-snooping send-query enable
[PE1] vsi v123
[PE1-vsi-v123] igmp-snooping querier enable
[PE1-vsi-v123] quit
[PE1] interface gigabitethernet 1/0/0
[PE1-GigabitEthernet1/0/0] igmp-snooping static-router-port vlan 10
# Configure PE2.
<PE2> system-view
[PE2] vsi v123
[PE2-vsi-v123] igmp-snooping static-router-port remote-peer 1.1.1.1
# Run the display igmp-snooping router-port vlan command on PE1. You can check whether
the configuration of the static router port succeeds. If STATIC is displayed as shown in the
following output, it indicates that GE 1/0/0 is already configured as a static router port in VLAN
10.
<PE1> display igmp-snooping router-port vlan 10
Port Name UpTime Expires Flags
---------------------------------------------------------------------
VLAN 10, 1 router-port(s)
GigabitEthernet1/0/0 00:00:16 -- STATIC
# Run the display igmp-snooping router-port vsi command on PE2. You can check whether
the configuration of the static router port succeeds. If STATIC is displayed as shown in the
following output, it indicates that PW (1.1.1.1/123) is already configured as a static router port.
<PE2> display igmp-snooping router-port vsi v123
Port Name UpTime Expires Flags
---------------------------------------------------------------------
VSI v123, 1 router-port(s)
PW(1.1.1.1/123) 00:05:16 -- STATIC
----End
Configuration Files
l Configuration file of PE1
#
sysname PE1
#
vlan batch 10
#
igmp-snooping enable
igmp-snooping send-query enable
#
mpls lsr-id 1.1.1.1
mpls
#
mpls l2vpn
#
vsi v123 static
pwsignal ldp
vsi-id 123
peer 2.2.2.2 upe
peer 3.3.3.3 upe
igmp-snooping enable
igmp-snooping querier enable
#
mpls ldp
#
interface Vlanif10
l2 binding vsi v123
#
interface GigabitEthernet1/0/0
undo shutdown
portswitch
port link-type access
port default vlan 10
igmp-snooping static-router-port vlan 10
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.1 255.255.255.252
mpls
mpls ldp
#
interface Pos3/0/0
link-protocol ppp
undo shutdown
ip address 20.1.1.1 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.3
network 20.1.1.0 0.0.0.3
#
return
l Configuration file of PE2
#
sysname PE2
#
vlan batch 10
#
igmp-snooping enable
#
mpls lsr-id 2.2.2.2
mpls
#
mpls l2vpn
#
vsi v123 static
pwsignal ldp
vsi-id 123
peer 1.1.1.1
igmp-snooping enable
igmp-snooping static-router-port remote-peer 1.1.1.1
#
mpls ldp
#
interface Vlanif10
l2 binding vsi v123
#
interface GigabitEthernet1/0/0
undo shutdown
portswitch
port link-type access
port default vlan 10
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 10.1.1.2 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
return
(Figure: Router A connects to Router B and the multicast source on the Internet/intranet through
GE 1/0/0. GE 1/0/1 connects to SwitchA in VLAN 3, and GE 1/0/2 connects to SwitchB in
VLAN 2.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure VLANs.
2. Configure basic IGMP snooping functions.
3. Configure Router A as the querier for VLAN 3.
4. Configure GE 1/0/0 on Router A as a static router port.
5. Configure prompt leave for GE 1/0/2 on Router A.
Data Preparation
To complete the configuration, you need the following data:
l GE 1/0/0 is configured as the static router port.
l GE 1/0/2 is configured with prompt leave.
l SwitchA and the switches connected to SwitchA belong to VLAN 3; SwitchB and the
switches connected to SwitchB belong to VLAN 2.
Procedure
Step 1 Configure VLANs.
<RouterA> system-view
[RouterA] interface gigabitethernet 1/0/1
[RouterA-GigabitEthernet1/0/1] portswitch
[RouterA-GigabitEthernet1/0/1] undo shutdown
[RouterA-GigabitEthernet1/0/1] quit
[RouterA] vlan 3
[RouterA-vlan3] port gigabitethernet 1/0/1
[RouterA-vlan3] quit
[RouterA] interface gigabitethernet 1/0/2
[RouterA-GigabitEthernet1/0/2] portswitch
[RouterA-GigabitEthernet1/0/2] undo shutdown
[RouterA-GigabitEthernet1/0/2] quit
[RouterA] vlan 2
[RouterA-vlan2] port gigabitethernet 1/0/2
[RouterA-vlan2] quit
Step 3 Configure Router A as the querier for VLAN 3 and allow Router A to send IGMP General Query
messages.
[RouterA] vlan 3
[RouterA-vlan3] igmp-snooping querier enable
[RouterA-vlan3] quit
[RouterA] igmp-snooping send-query enable
# Run the display igmp-snooping router-port vlan 3 command on Router A. The following output shows that GE 1/0/0 is configured as a static router port.
<RouterA> display igmp-snooping router-port vlan 3
Port Name UpTime Expires Flags
---------------------------------------------------------------------
VLAN 3, 1 router-port(s)
GigabitEthernet1/0/0 00:01:02 -- STATIC
# Run the display igmp-snooping querier vlan 3 command on Router A to check whether the querier configuration succeeds.
<RouterA> display igmp-snooping querier vlan 3
VLAN Querier-state
-----------------------------------------------
3 Enable
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
vlan batch 2 3
#
igmp-snooping enable
igmp-snooping send-query enable
#
vlan 2
igmp-snooping enable
#
vlan 3
igmp-snooping enable
igmp-snooping querier enable
#
interface GigabitEthernet 1/0/0
undo shutdown
portswitch
igmp-snooping static-router-port vlan 3
#
interface GigabitEthernet 1/0/1
undo shutdown
portswitch
port default vlan 3
#
interface GigabitEthernet 1/0/2
undo shutdown
portswitch
port default vlan 2
#
return
Networking Requirements
As shown in Figure 3-7, GE 1/0/1, which connects Router A to Router B, is added to VLAN 10. GE 1/0/2 and GE 1/0/3, which connect Router A to switches, are added to VLAN 100 and VLAN 200 respectively. It is required that the four hosts connected to Router A receive the multicast packets from the multicast groups with addresses ranging from 225.0.0.1 to 225.0.0.3. VLAN 10 is the multicast VLAN; VLAN 100 and VLAN 200 are user VLANs.
Figure 3-7 Networking diagram of configuring replication of the multicast VLAN on Router A
[Figure: the multicast source sits behind Router B on the Internet/intranet; Router B connects to Router A through GE 1/0/1 in VLAN 10; GE 1/0/2 and GE 1/0/3 on Router A lead to VLAN 100 and VLAN 200.]
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l Interface connected to Router B and the VLAN that the interface belongs to
l Interfaces connected to hosts and the VLANs that the interfaces belong to
l Group addresses of entries in the static multicast VLAN
Procedure
Step 1 Create a multicast VLAN and user VLANs.
# Create a multicast VLAN.
<RouterA> system-view
[RouterA] vlan 10
[RouterA-vlan10] quit
Step 3 Set the mapping relationship between the multicast VLAN and the user VLANs.
[RouterA-vlan10] multicast user-vlan 100
[RouterA-vlan10] multicast user-vlan 200
[RouterA-vlan10] quit
Step 4 Configure the VLAN that the interface of Router A belongs to and static multicast entries.
# Configure GE 1/0/1 to allow data frames from VLAN 10 to pass through.
[RouterA] interface gigabitethernet 1/0/1
[RouterA-GigabitEthernet1/0/1] portswitch
[RouterA-GigabitEthernet1/0/1] port trunk allow-pass vlan 10
[RouterA-GigabitEthernet1/0/1] undo shutdown
[RouterA-GigabitEthernet1/0/1] quit
# Add GE 1/0/2 to VLAN 100, and add the interface to the multicast group statically.
[RouterA] interface gigabitethernet 1/0/2
[RouterA-GigabitEthernet1/0/2] portswitch
[RouterA-GigabitEthernet1/0/2] port trunk allow-pass vlan 100
[RouterA-GigabitEthernet1/0/2] undo shutdown
[RouterA-GigabitEthernet1/0/2] l2-multicast static-group group-address 225.0.0.1
vlan 100
[RouterA-GigabitEthernet1/0/2] l2-multicast static-group group-address 225.0.0.2
vlan 100
[RouterA-GigabitEthernet1/0/2] l2-multicast static-group group-address 225.0.0.3
vlan 100
[RouterA-GigabitEthernet1/0/2] quit
# Add GE 1/0/3 to VLAN 200, and add the interface to the multicast group statically.
[RouterA] interface gigabitethernet 1/0/3
[RouterA-GigabitEthernet1/0/3] portswitch
[RouterA-GigabitEthernet1/0/3] port trunk allow-pass vlan 200
[RouterA-GigabitEthernet1/0/3] undo shutdown
[RouterA-GigabitEthernet1/0/3] l2-multicast static-group group-address 225.0.0.1
vlan 200
[RouterA-GigabitEthernet1/0/3] l2-multicast static-group group-address 225.0.0.2
vlan 200
[RouterA-GigabitEthernet1/0/3] l2-multicast static-group group-address 225.0.0.3
vlan 200
[RouterA-GigabitEthernet1/0/3] quit
According to the following output, the forwarding table is created for the multicast VLAN and the forwarding mode of the table is based on IP addresses. (At the MAC layer, the static multicast groups 225.0.0.1, 225.0.0.2, and 225.0.0.3 map to the MAC multicast addresses 0100-5e00-0001, 0100-5e00-0002, and 0100-5e00-0003 respectively.)
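The IP-to-MAC mapping mentioned above follows the standard rule that the low-order 23 bits of the IPv4 group address are copied into the fixed Ethernet prefix 01-00-5e. A minimal illustrative sketch of that mapping (not part of the router configuration):

```python
def group_ip_to_mac(ip: str) -> str:
    """Map an IPv4 multicast group address to its Ethernet MAC address.

    The low-order 23 bits of the IP address are appended to the fixed
    prefix 01-00-5e (RFC 1112); the top 9 bits of the IP address are
    discarded, so 32 different groups share each MAC address.
    """
    octets = [int(o) for o in ip.split(".")]
    b1 = octets[1] & 0x7F          # drop the high-order bit of the 2nd octet
    return "0100-5e%02x-%02x%02x" % (b1, octets[2], octets[3])

# 225.0.0.1 maps to 0100-5e00-0001, matching the mapping in the text
print(group_ip_to_mac("225.0.0.1"))
```

Because only 23 bits survive the mapping, groups such as 225.0.0.1 and 225.128.0.1 share one MAC address, which is why MAC-based forwarding is coarser than IP-based forwarding.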
<RouterA> display l2-multicast forwarding-table vlan 10
VLAN ID : 10, Forwarding Mode : IP,
--------------------------------------------------------------------
(Source, Group) Status Age Index Port-cnt
--------------------------------------------------------------------
(*, 225.0.0.1) Ok No 0 1
(*, 225.0.0.2) Ok No 0 2
(*, 225.0.0.3) Ok No 0 2
--------------------------------------------------------------------
Total Entry(s) : 3
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
vlan batch 10 100 200
#
igmp-snooping enable
igmp-snooping version 3
#
vlan 10
igmp-snooping enable
multicast-vlan enable
multicast user-vlan 100
multicast user-vlan 200
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port trunk allow-pass vlan 10
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port trunk allow-pass vlan 100
l2-multicast static-group group-address 225.0.0.1 vlan 100
l2-multicast static-group group-address 225.0.0.2 vlan 100
l2-multicast static-group group-address 225.0.0.3 vlan 100
#
interface GigabitEthernet1/0/3
undo shutdown
portswitch
port trunk allow-pass vlan 200
l2-multicast static-group group-address 225.0.0.1 vlan 200
l2-multicast static-group group-address 225.0.0.2 vlan 200
l2-multicast static-group group-address 225.0.0.3 vlan 200
#
return
to VLAN 2 and VLAN 3. GE 1/0/1 on Router A belongs to VLAN 2 and GE 1/0/2 belongs to VLAN 3. Router B selects the multicast traffic from VLAN 2 and VLAN 3 and switches traffic over if a VLAN fails. In this manner, Router B reliably sends multicast packets to the hosts in VLAN 100 and VLAN 200.
[Figure: a DHCP server and the multicast source sit behind the IP/MPLS core; Router A connects to the core through GE 1/0/3 in VLAN 10, and its GE 1/0/1 (VLAN 2) and GE 1/0/2 (VLAN 3) run across MANs to GE 2/0/1 and GE 2/0/2 on Router B; GE 2/0/3 and GE 2/0/4 on Router B lead to VLAN 100 and VLAN 200.]
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Enable replication of the multicast VLAN.
# Enable replication of the multicast VLAN on Router A, and add GE 1/0/1 and GE 1/0/2 to the
multicast VLAN.
<RouterA> system-view
[RouterA] vlan batch 2 3 10
[RouterA] igmp-snooping enable
[RouterA] vlan 10
[RouterA-vlan10] igmp-snooping enable
[RouterA-vlan10] multicast-vlan enable
[RouterA-vlan10] multicast-vlan user-vlan 2 to 3
[RouterA-vlan10] quit
[RouterA] interface gigabitethernet1/0/3
[RouterA-GigabitEthernet1/0/3] portswitch
[RouterA-GigabitEthernet1/0/3] port trunk allow-pass vlan 10
[RouterA-GigabitEthernet1/0/3] undo shutdown
[RouterA-GigabitEthernet1/0/3] quit
[RouterA] interface gigabitethernet1/0/1
[RouterA-GigabitEthernet1/0/1] undo shutdown
[RouterA-GigabitEthernet1/0/1] portswitch
[RouterA-GigabitEthernet1/0/1] port trunk allow-pass vlan 2
[RouterA-GigabitEthernet1/0/1] l2-multicast static-group group-address 225.0.0.1
vlan 2
[RouterA-GigabitEthernet1/0/1] quit
[RouterA] interface gigabitethernet1/0/2
[RouterA-GigabitEthernet1/0/2] undo shutdown
[RouterA-GigabitEthernet1/0/2] portswitch
[RouterA-GigabitEthernet1/0/2] port trunk allow-pass vlan 3
[RouterA-GigabitEthernet1/0/2] l2-multicast static-group group-address 225.0.0.1
vlan 3
[RouterA-GigabitEthernet1/0/2] quit
# Enable replication of the multicast VLAN on Router B, and add GE 2/0/1, GE 2/0/2, GE 2/0/3,
and GE 2/0/4 to the multicast VLAN.
<RouterB> system-view
[RouterB] vlan batch 2 3 100 200
[RouterB] igmp-snooping enable
[RouterB] vlan 2
[RouterB-vlan2] igmp-snooping enable
[RouterB-vlan2] multicast-vlan enable
[RouterB-vlan2] multicast-vlan user-vlan 100 200
[RouterB-vlan2] quit
[RouterB] interface gigabitethernet2/0/1
[RouterB-GigabitEthernet2/0/1] undo shutdown
[RouterB-GigabitEthernet2/0/1] portswitch
[RouterB-GigabitEthernet2/0/1] port trunk allow-pass vlan 2
[RouterB-GigabitEthernet2/0/1] quit
[RouterB] interface gigabitethernet2/0/2
[RouterB-GigabitEthernet2/0/2] portswitch
[RouterB-GigabitEthernet2/0/2] port trunk allow-pass vlan 3
[RouterB-GigabitEthernet2/0/2] undo shutdown
[RouterB-GigabitEthernet2/0/2] quit
[RouterB] interface gigabitethernet2/0/3
[RouterB-GigabitEthernet2/0/3] undo shutdown
[RouterB-GigabitEthernet2/0/3] portswitch
[RouterB-GigabitEthernet2/0/3] port trunk allow-pass vlan 100
[RouterB-GigabitEthernet2/0/3] l2-multicast static-group group-address 225.0.0.1
vlan 100
[RouterB-GigabitEthernet2/0/3] quit
[RouterB] interface gigabitethernet2/0/4
[RouterB-GigabitEthernet2/0/4] undo shutdown
[RouterB-GigabitEthernet2/0/4] portswitch
[RouterB-GigabitEthernet2/0/4] port trunk allow-pass vlan 200
[RouterB-GigabitEthernet2/0/4] l2-multicast static-group group-address 225.0.0.1
vlan 200
[RouterB-GigabitEthernet2/0/4] quit
# The following output shows the multicast VLAN configurations on Router B. You can check whether the configurations are correct.
<RouterB> display multicast-vlan vlan 2
Multicast-vlan : 2
User-vlan : 2
IGMP snooping state : Enable
User-vlan Snooping-state
-------------------------------------
100 Enable
200 Enable
# Run the display l2-multicast protection vlan 2 command on Router B to check information
about the multicast protection group.
<RouterB> display l2-multicast protection vlan 2
PG main-vlan 2, protect-vlan 3, work-vlan 2.
PG main-vlan state OK, protect-vlan state OK.
PG revertive-mode Revertive, wtr-time 5s
PG hold-off time 0(100ms)
PG protocol-enable is Yes, switch command is none
PG state is normal
Vlan 2 bind md is test, ma is test2, remote-mep is 2
# Run the shutdown command on GE 2/0/1 on Router B and then run the display l2-multicast protection vlan command to check information about the protection group. The output shows that VLAN 3 now serves as the working VLAN, which indicates that the switchover has been performed and Router B receives the multicast data from the protection VLAN.
<RouterB> display l2-multicast protection vlan 2
PG main-vlan 2, protect-vlan 3, work-vlan 3.
PG main-vlan state DE, protect-vlan state OK.
PG revertive-mode Revertive, wtr-time 5s
PG hold-off time 0(100ms)
PG protocol-enable is Yes, switch command is none
PG state is sf_w
Vlan 2 bind md is test, ma is test2, remote-mep is 2
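The switchover behavior shown above (the work-vlan moving from 2 to 3 when the main VLAN fails, with a revertive wait-to-restore timer) can be modeled roughly as follows. This is an illustrative state sketch under assumed semantics, not the device implementation:

```python
class ProtectionGroup:
    """Rough model of a revertive 1:1 VLAN protection group."""
    def __init__(self, main_vlan: int, protect_vlan: int, wtr_time_s: int = 5):
        self.main = main_vlan
        self.protect = protect_vlan
        self.work = main_vlan          # traffic initially follows the main VLAN
        self.wtr = wtr_time_s          # wait-to-restore timer (revertive mode)
        self.state = "normal"

    def main_failed(self):
        # Signal fail on the working (main) VLAN: switch to the protection VLAN
        self.work = self.protect
        self.state = "sf_w"

    def main_recovered(self, elapsed_s: int):
        # Revertive mode: switch back only after the WTR timer has expired
        if elapsed_s >= self.wtr:
            self.work = self.main
            self.state = "normal"

pg = ProtectionGroup(main_vlan=2, protect_vlan=3, wtr_time_s=5)
pg.main_failed()
print(pg.work, pg.state)    # matches the second display output: work-vlan 3, state sf_w
pg.main_recovered(elapsed_s=6)
print(pg.work, pg.state)    # after WTR expires, traffic reverts to VLAN 2
```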
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
vlan batch 2 3 10
#
igmp-snooping enable
#
vlan 10
igmp-snooping enable
multicast-vlan enable
multicast-vlan user-vlan 2 to 3
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port trunk allow-pass vlan 2
l2-multicast static-group group-address 225.0.0.1 vlan 2
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port trunk allow-pass vlan 3
l2-multicast static-group group-address 225.0.0.1 vlan 3
#
interface GigabitEthernet1/0/3
undo shutdown
portswitch
port trunk allow-pass vlan 10
#
cfm enable
cfm md test
ma test2
map vlan 2
mep mep-id 2 vlan
mep ccm-send mep-id 2 enable
ma test3
map vlan 3
mep mep-id 3 vlan
mep ccm-send mep-id 3 enable
#
return
Networking Requirements
As shown in Figure 3-9, GE 1/0/1 on Router A is connected to a router, and GE 1/0/2 on Router A is connected to a switch. It is required that all the hosts in VLAN 3 receive the multicast packets from the multicast group 225.0.0.1.
[Figure 3-9: Router B connects to GE 1/0/1 on Router A in VLAN 3; GE 1/0/2 on Router A connects to the switch.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a VLAN.
2. Add interfaces to the VLAN.
3. Add interfaces to the multicast group statically.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Create a VLAN.
# Create VLAN 3 on Router A.
<RouterA> system-view
[RouterA] igmp-snooping enable
[RouterA] vlan 3
[RouterA-vlan3] quit
# Configure GE 1/0/1 on Router A to allow data frames from VLAN 3 to pass through.
[RouterA] interface gigabitethernet 1/0/1
[RouterA-GigabitEthernet1/0/1] portswitch
[RouterA-GigabitEthernet1/0/1] port trunk allow-pass vlan 3
[RouterA-GigabitEthernet1/0/1] quit
Step 3 Configure GE 1/0/2 on Router A and add it to the multicast group 225.0.0.1 statically.
[RouterA] interface gigabitethernet 1/0/2
[RouterA-GigabitEthernet1/0/2] l2-multicast static-group group-address 225.0.0.1
vlan 3
[RouterA-GigabitEthernet1/0/2] quit
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
vlan batch 3
#
igmp-snooping enable
#
interface GigabitEthernet1/0/1
undo shutdown
portswitch
port trunk allow-pass vlan 3
#
interface GigabitEthernet1/0/2
undo shutdown
portswitch
port trunk allow-pass vlan 3
l2-multicast static-group group-address 225.0.0.1 vlan 3
#
return
Networking Requirements
As shown in Figure 3-10, Switch and Router A are connected through VLAN 20. Router A
accesses the upper layer network (IPTV server in the diagram) through Router B. The multicast
CAC bandwidth limit is set on both the switch and Router A. This relieves the burden of
bandwidth on a physical link between the switch and Router A, and ensures that high-quality
IPTV services reach downstream users connected to the switch.
Layer 2 multicast CAC for the VLAN is required on GE 1/0/0 on Router A. The maximum number of multicast group members allowed in VLAN 20 is restricted to 50 and the bandwidth of the multicast groups is 1000 kbit/s; the maximum number of member multicast groups of the BJTV channel in VLAN 20 is restricted to 15 and the bandwidth of the BJTV channel is 60 kbit/s. Global Layer 2 multicast CAC on Router A needs to be configured to restrict the number of multicast group members on Router A to 100 and the bandwidth of multicast groups to 2000 kbit/s.
The Layer 2 multicast CAC limit for VLAN 20 is set on the switch. The maximum number of
multicast group members allowed in VLAN 20 is restricted to 40 and the bandwidth of the
multicast groups is 100 kbit/s; the maximum number of member multicast groups of the BJTV
channel in VLAN 20 is restricted to 3 and the bandwidth of the BJTV channel is 30 kbit/s; the
maximum number of global multicast groups is 50 and the bandwidth of the multicast groups is
1000 kbit/s.
The channel named bjtv of the ASM type is configured. The multicast address of member multicast group 1 is 226.1.1.0/24, and the bandwidth of each member multicast group is 4 kbit/s; the multicast address of member multicast group 2 is 226.1.2.0/24, and the bandwidth of each member multicast group is 6 kbit/s; the multicast address of member multicast group 3 is 226.1.3.0/24, and the bandwidth of each member multicast group is 5 kbit/s.
The multicast policy is configured for VLAN 20 by using the unspecified-channel deny command, which prevents the switch and the router from creating entries for multicast groups whose addresses are not in the address range allowed by the configured channel.
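The layered limits described above (global, per-VLAN, per-channel) amount to a simple admission check: a new group member is accepted only if every applicable entry and bandwidth limit still has headroom. A hypothetical sketch of that logic (the class names and numbers are illustrative, not device internals):

```python
class CacLimit:
    """One CAC scope (global, VLAN, or channel): entry and bandwidth caps."""
    def __init__(self, max_entries: int, max_bandwidth_kbps: int):
        self.max_entries = max_entries
        self.max_bandwidth = max_bandwidth_kbps
        self.entries = 0
        self.bandwidth = 0

    def has_room(self, bw_kbps: int) -> bool:
        return (self.entries < self.max_entries and
                self.bandwidth + bw_kbps <= self.max_bandwidth)

    def admit(self, bw_kbps: int):
        self.entries += 1
        self.bandwidth += bw_kbps

def try_join(scopes, per_member_bw_kbps: int) -> bool:
    """Admit a join only if every scope (global, VLAN, channel) has headroom."""
    if all(s.has_room(per_member_bw_kbps) for s in scopes):
        for s in scopes:
            s.admit(per_member_bw_kbps)
        return True
    return False

# Limits from the Router A example: global 100/2000, VLAN 20 50/1000, bjtv 15/60
glob, vlan20, bjtv = CacLimit(100, 2000), CacLimit(50, 1000), CacLimit(15, 60)
# Each bjtv member needs 4 kbit/s, so the 60 kbit/s channel cap admits 15 joins
joined = sum(try_join([glob, vlan20, bjtv], 4) for _ in range(20))
print(joined)  # 15
```

In this model the most restrictive scope wins: the bjtv channel (15 entries, 60 kbit/s) rejects the 16th join even though the VLAN and global limits still have headroom.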
[Figure 3-10: the switch connects to Router A through VLAN 20; Router A reaches the IPTV server through Router B.]
Configuration Roadmap
The configuration roadmap is as follows:
l Configure VLAN 20 on Switch and Router A.
l Enable global IGMP snooping.
l Configure global Layer 2 multicast CAC and Layer 2 multicast CAC for VLAN 20.
Data Preparation
To complete the configuration, you need the following data:
l ID of the VLAN to which Router A and Switch belong
l Parameters related to Layer 2 multicast CAC, including the number of multicast group
members and channels and bandwidth of multicast groups and channels
Procedure
Step 1 Configure VLAN 20 on Router A and the switch.
# Configure the switch.
[Switch] vlan 20
[Switch-vlan20] quit
[Switch] interface gigabitethernet 1/0/1
[Switch-GigabitEthernet1/0/1] portswitch
[Switch-GigabitEthernet1/0/1] port trunk allow-pass vlan 20
# Configure Router A.
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] portswitch
[RouterA-GigabitEthernet1/0/0] port trunk allow-pass vlan 20
# Configure Router A.
[RouterA] l2-multicast-channel vlan 20
[RouterA-l2-channel-vlan20] channel bjtv type asm
[RouterA-l2-channel-vlan20-bjtv] group 226.1.1.0 255.255.255.0 per-bandwidth 4
[RouterA-l2-channel-vlan20-bjtv] group 226.1.2.0 255.255.255.0 per-bandwidth 6
[RouterA-l2-channel-vlan20-bjtv] group 226.1.3.0 255.255.255.0 per-bandwidth 5
[RouterA-l2-channel-vlan20-bjtv] quit
[RouterA-l2-channel-vlan20] unspecified-channel deny
Step 5 Configure global Layer 2 multicast CAC and Layer 2 multicast CAC for VLAN 20.
# Configure Router A.
[RouterA] l2-multicast limit max-entry 100
[RouterA] l2-multicast limit bandwidth 2000
[RouterA] l2-multicast limit max-entry 50 vlan 20
[RouterA] l2-multicast limit bandwidth 1000 vlan 20
[RouterA] l2-multicast limit channel bjtv max-entry 15 vlan 20
[RouterA] l2-multicast limit channel bjtv bandwidth 60 vlan 20
Run the display l2-multicast limit configuration command. You can view configurations of
Layer 2 multicast CAC.
Run the display l2-multicast-channel vlan 20 command. You can view configurations of the
channels in VLAN 20.
226.1.2.0/24 * 6
226.1.3.0/24 * 5
Run the display l2-multicast limit vlan 20 command. You can view configurations of Layer 2
multicast CAC for VLAN 20.
Take the display on Router A as an example:
[RouterA] display l2-multicast limit vlan 20
L2-multicast limit information, The unit of bandwidth is kbits/sec
---------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
---------------------------------------------------------------------
VLAN 20 limit information:
---------------------------------------------------------------------
50 1000
0 0
VLAN 20 channel limit information:
---------------------------------------------------------------------
bjtv 15 60
0
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
vlan 20
#
igmp-snooping enable
l2-multicast limit max-entry 100
l2-multicast limit bandwidth 2000
l2-multicast limit max-entry 50 vlan 20
l2-multicast limit bandwidth 1000 vlan 20
l2-multicast limit channel bjtv max-entry 15 vlan 20
l2-multicast limit channel bjtv bandwidth 60 vlan 20
#
vlan 20
igmp-snooping enable
#
interface GigabitEthernet1/0/0
undo shutdown
portswitch
port trunk allow-pass vlan 20
#
l2-multicast-channel vlan 20
channel bjtv type asm
group 226.1.1.0 255.255.255.0 per-bandwidth 4
group 226.1.2.0 255.255.255.0 per-bandwidth 6
group 226.1.3.0 255.255.255.0 per-bandwidth 5
unspecified-channel deny
#
return
Networking Requirements
As shown in Figure 3-11, Router A and Router B are connected through a VSI network. Layer 2 multicast CAC for the VSI is required on GE 1/0/2 on Router B. The number of multicast group members in the VSI should be restricted to 1000 and the bandwidth reserved for multicast groups should be 4000 kbit/s. In addition, you are required to configure Layer 2 multicast CAC for the channels on the sub-interface, with the numbers of member multicast groups in the BJTV, NJTV, and CCTV channels restricted to 100, 50, and 120 respectively, and the bandwidth reserved for the three channels being 1000 kbit/s, 450 kbit/s, and 2500 kbit/s respectively.
l Channel 1: NJTV, with the multicast group address being 226.1.1.0/24, the channel type
being ASM, and bandwidth reserved for each member multicast group being 3 kbit/s
l Channel 2: BJTV, with the multicast group address being 226.1.2.0/24, the channel type
being ASM, and bandwidth reserved for each member multicast group being 4 kbit/s
l Channel 3: CCTV, with the multicast group address being 226.1.3.0/24, the channel type
being ASM, and bandwidth reserved for each member multicast group being 6 kbit/s
The sub-interfaces of GE 1/0/0 on Router A and of GE 1/0/2 on Router B are bound to the VSIs respectively.
[Figure 3-11: the switch connects to GE 1/0/0 on Router A; GE 1/0/1 on Router A (192.168.10.1/24) runs across the MPLS/VPLS network to GE 1/0/1 on Router B, whose GE 1/0/2 leads to the Internet.]
Configuration Roadmap
The configuration roadmap is as follows:
l Configure the VSI network between Router A and Router B.
l Enable global IGMP snooping.
l Configure Layer 2 multicast CAC on Router B.
l Configure Layer 2 multicast CAC on Router A.
Data Preparation
To complete the configuration, you need the following data:
l Numbers of the interfaces that are bound to the VSIs
l Name of a VSI
l Parameters related to Layer 2 multicast CAC, including the number of multicast group
members and channels and bandwidth of multicast groups and channels
Procedure
Step 1 Enable basic MPLS capabilities and LDP on the MPLS backbone network and enable L2VPN.
# Configure Router A.
[RouterA] interface loopback 1
[RouterA-LoopBack1] ip address 1.1.1.1 32
[RouterA-LoopBack1] quit
[RouterA] mpls lsr-id 1.1.1.1
[RouterA] mpls
[RouterA-mpls] mpls ldp
[RouterA-mpls-ldp] quit
[RouterA-mpls] mpls l2vpn
[RouterA-l2vpn] quit
# Configure Router B.
[RouterB] interface loopback 2
[RouterB-LoopBack2] ip address 2.2.2.2 32
[RouterB-LoopBack2] quit
[RouterB] mpls lsr-id 2.2.2.2
[RouterB] mpls
[RouterB-mpls] mpls ldp
[RouterB-mpls-ldp] quit
[RouterB-mpls] mpls l2vpn
[RouterB-l2vpn] quit
Step 2 Create VSIs and specify LDP as the signaling protocol of VSIs.
# Configure Router A.
[RouterA] vsi a static
[RouterA-vsi-a] pwsignal ldp
[RouterA-vsi-a-ldp] vsi-id 1
[RouterA-vsi-a-ldp] peer 2.2.2.2
[RouterA-vsi-a-ldp] quit
[RouterA-vsi-a] quit
# Configure Router B.
[RouterB] vsi a static
[RouterB-vsi-a] pwsignal ldp
[RouterB-vsi-a-ldp] vsi-id 1
Step 3 Configure IP addresses for GE 1/0/1 on Router A and Router B and enable MPLS on GE 1/0/1.
# Configure Router A.
[RouterA] interface gigabitethernet1/0/1
[RouterA-GigabitEthernet1/0/1] ip address 192.168.10.1 24
[RouterA-GigabitEthernet1/0/1] mpls
[RouterA-GigabitEthernet1/0/1] mpls ldp
[RouterA-GigabitEthernet1/0/1] quit
# Configure Router B.
[RouterB] interface gigabitethernet1/0/1
[RouterB-GigabitEthernet1/0/1] ip address 192.168.10.2 24
[RouterB-GigabitEthernet1/0/1] mpls
[RouterB-GigabitEthernet1/0/1] mpls ldp
[RouterB-GigabitEthernet1/0/1] quit
# Configure Router B.
[RouterB-ospf-1] area 0
[RouterB-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[RouterB-ospf-1-area-0.0.0.0] network 192.168.10.0 0.0.0.255
[RouterB-ospf-1-area-0.0.0.0] quit
Step 5 Configure sub-interfaces on Routers and bind the sub-interfaces to the VSIs.
# Configure Router A.
[RouterA] interface gigabitethernet 1/0/0.1
[RouterA-GigabitEthernet1/0/0.1] vlan-type dot1q 1
[RouterA-GigabitEthernet1/0/0.1] l2 binding vsi a
[RouterA-GigabitEthernet1/0/0.1] undo shutdown
[RouterA-GigabitEthernet1/0/0.1] quit
[RouterA] interface gigabitethernet 1/0/0.2
[RouterA-GigabitEthernet1/0/0.2] vlan-type dot1q 2
[RouterA-GigabitEthernet1/0/0.2] l2 binding vsi a
[RouterA-GigabitEthernet1/0/0.2] undo shutdown
[RouterA-GigabitEthernet1/0/0.2] quit
# Configure Router B.
[RouterB] interface gigabitethernet 1/0/2.1
[RouterB-GigabitEthernet1/0/2.1] vlan-type dot1q 1
[RouterB-GigabitEthernet1/0/2.1] l2 binding vsi a
[RouterB-GigabitEthernet1/0/2.1] undo shutdown
[RouterB-GigabitEthernet1/0/2.1] quit
[RouterB] interface gigabitethernet 1/0/2.2
[RouterB-GigabitEthernet1/0/2.2] vlan-type dot1q 2
[RouterB-GigabitEthernet1/0/2.2] l2 binding vsi a
[RouterB-GigabitEthernet1/0/2.2] undo shutdown
[RouterB-GigabitEthernet1/0/2.2] quit
Step 8 Configure restriction on the number and bandwidth of the channels in the VSI.
# Configure Router A.
[RouterA] l2-multicast-channel vsi a
[RouterA-l2-channel-vsi-a] channel njtv type asm
[RouterA-l2-channel-vsi-a-njtv] group 226.1.1.0 255.255.255.0 per-bandwidth 3
[RouterA-l2-channel-vsi-a-njtv] quit
[RouterA-l2-channel-vsi-a] channel bjtv type asm
[RouterA-l2-channel-vsi-a-bjtv] group 226.1.2.0 255.255.255.0 per-bandwidth 4
[RouterA-l2-channel-vsi-a-bjtv] quit
[RouterA-l2-channel-vsi-a] channel cctv type asm
[RouterA-l2-channel-vsi-a-cctv] group 226.1.3.0 255.255.255.0 per-bandwidth 6
[RouterA-l2-channel-vsi-a-cctv] quit
Run the display vsi name a verbose command and find that the VSI status is Up.
Take the display on Router A as an example.
[RouterA] display vsi name a verbose
***VSI Name : a
Administrator VSI : no
Isolate Spoken : disable
VSI Index : 0
PW Signaling : ldp
VSI ID : 1
*Peer Router ID : 2.2.2.2
VC Label : 23552
Peer Type : dynamic
Session : up
Tunnel ID : 0x802000
**PW Information:
Run the display l2-multicast limit configuration command to check configurations of Layer
2 multicast CAC.
Take the display on Router B as an example:
[RouterB] display l2-multicast limit configuration
L2-multicast limit information, The unit of bandwidth is kbits/sec
---------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
---------------------------------------------------------------------
Global limit information:
---------------------------------------------------------------------
1000 4000
---- --------
VSI a limit information:
---------------------------------------------------------------------
1000 4000
---- --------
interface GigabitEthernet1/0/2.1 channel limit information:
---------------------------------------------------------------------
bjtv 100 1000
---- --------
interface GigabitEthernet1/0/2.1 channel limit information:
---------------------------------------------------------------------
cctv 120 2500
---- --------
interface GigabitEthernet1/0/2.1 channel limit information:
---------------------------------------------------------------------
njtv 50 450
---- --------
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
igmp-snooping enable
#
mpls lsr-id 1.1.1.1
mpls
#
mpls l2vpn
#
vsi a static
pwsignal ldp
vsi-id 1
peer 2.2.2.2
igmp-snooping enable
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 1
l2 binding vsi a
#
interface GigabitEthernet1/0/0.2
undo shutdown
vlan-type dot1q 2
l2 binding vsi a
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.10.1 255.255.255.0
mpls
mpls ldp
interface NULL0
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 192.168.10.0 0.0.0.255
#
l2-multicast-channel vsi a
channel bjtv
group 226.1.2.0 255.255.255.0 per-bandwidth 4
channel cctv
group 226.1.3.0 255.255.255.0 per-bandwidth 6
channel njtv
group 226.1.1.0 255.255.255.0 per-bandwidth 3
#
return
l Configuration file of Router B
#
sysname RouterB
#
igmp-snooping enable
#
mpls lsr-id 2.2.2.2
mpls
#
mpls l2vpn
#
vsi a static
pwsignal ldp
vsi-id 1
peer 1.1.1.1
igmp-snooping enable
l2-multicast limit max-entry 1000
l2-multicast limit bandwidth 4000
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.10.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
mpls
mpls ldp
#
interface GigabitEthernet1/0/2.1
undo shutdown
vlan-type dot1q 1
l2 binding vsi a
l2-multicast limit channel bjtv max-entry 100
l2-multicast limit channel bjtv bandwidth 1000
l2-multicast limit channel njtv max-entry 50
l2-multicast limit channel njtv bandwidth 450
l2-multicast limit channel cctv max-entry 120
l2-multicast limit channel cctv bandwidth 2500
#
interface GigabitEthernet1/0/2.2
undo shutdown
vlan-type dot1q 2
l2 binding vsi a
#
interface NULL0
#
interface LoopBack2
ip address 2.2.2.2 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 192.168.10.0 0.0.0.255
#
l2-multicast-channel vsi a
channel bjtv type asm
group 226.1.2.0 255.255.255.0 per-bandwidth 4
channel cctv type asm
group 226.1.3.0 255.255.255.0 per-bandwidth 6
channel njtv type asm
group 226.1.1.0 255.255.255.0 per-bandwidth 3
#
return
Networking Requirements
As shown in Figure 3-12, Router A and Router B are connected through a PW and LDP is set
to be the signaling protocol used by the PW. To save network bandwidth between Routers and
ensure the quality of the IPTV service, you are recommended to configure global Layer 2
multicast CAC on Router B, with the number of multicast group members being restricted to
650 and bandwidth reserved for multicast groups being 3000 kbit/s.
A channel SHTV is configured in the VSI, with the multicast address being 225.0.0.1, the channel
type being ASM, and the bandwidth configured for each multicast group member of the channel
being 3 kbit/s. The maximum number of multicast groups on a PW is 650, the bandwidth for the
multicast group is 3000 kbit/s, the number of multicast groups in the channel SHTV is 50, and
the bandwidth for the multicast group in the channel SHTV is 300 kbit/s.
The interval for sending the same multicast CAC-related trap message is set to 40 seconds.
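A trap interval of 40 seconds means that repeated instances of the same CAC trap are suppressed within that window, so a sustained overload does not flood the management station. A hypothetical sketch of such rate limiting (illustrative only, not the device implementation):

```python
class TrapRateLimiter:
    """Suppress repeats of the same trap within a configured interval."""
    def __init__(self, interval_s: int):
        self.interval = interval_s
        self.last_sent = {}            # trap name -> time it was last sent

    def should_send(self, trap: str, now_s: int) -> bool:
        last = self.last_sent.get(trap)
        if last is None or now_s - last >= self.interval:
            self.last_sent[trap] = now_s
            return True
        return False                   # same trap within the interval: suppress

limiter = TrapRateLimiter(interval_s=40)
print(limiter.should_send("cac-bandwidth-exceeded", now_s=0))    # True
print(limiter.should_send("cac-bandwidth-exceeded", now_s=30))   # False (within 40 s)
print(limiter.should_send("cac-bandwidth-exceeded", now_s=45))   # True
```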
Configuration Roadmap
The configuration roadmap is as follows:
l Establish a PW between Router A and Router B.
l Enable global IGMP snooping.
l Configure Layer 2 multicast CAC on Router B.
l Configure Layer 2 multicast CAC for the PW.
Data Preparation
To complete the configuration, you need the following data:
l Numbers and IP addresses of the two ends of the PW connecting Router A and Router B
l Name of a VSI
l Parameters related to Layer 2 multicast CAC, including the number of multicast group
members and channels and bandwidth of multicast groups and channels
Procedure
Step 1 Configure an IP address for each interface.
Create loopback interfaces on Router A and Router B respectively and configure an IP address for each interface. The detailed configuration is not mentioned here.
Step 2 Configure IP addresses for GE 1/0/1 on Router A and Router B and enable MPLS on GE 1/0/1.
# Configure Router A.
[RouterA] interface gigabitethernet1/0/1
[RouterA-GigabitEthernet1/0/1] ip address 192.168.10.1 24
[RouterA-GigabitEthernet1/0/1] mpls
[RouterA-GigabitEthernet1/0/1] mpls ldp
[RouterA-GigabitEthernet1/0/1] quit
# Configure Router B.
[RouterB] interface gigabitethernet1/0/1
[RouterB-GigabitEthernet1/0/1] ip address 192.168.10.2 24
[RouterB-GigabitEthernet1/0/1] mpls
[RouterB-GigabitEthernet1/0/1] mpls ldp
[RouterB-GigabitEthernet1/0/1] quit
Step 4 Configure MPLS networks on Router A and Router B, set up a PW between Router A and
Router B, with LDP being the signaling protocol, and enable MPLS L2VPN.
# Configure Router A.
# Configure Router B.
[RouterB] mpls lsr-id 2.2.2.2
[RouterB] mpls
[RouterB-mpls] mpls ldp
[RouterB-mpls-ldp] quit
[RouterB-mpls] mpls l2vpn
[RouterB-l2vpn] quit
# Configure Router B.
<RouterB> system-view
[RouterB] vsi a static
[RouterB-vsi-a] pwsignal ldp
[RouterB-vsi-a-ldp] vsi-id 1
[RouterB-vsi-a-ldp] peer 1.1.1.1
[RouterB-vsi-a-ldp] quit
[RouterB-vsi-a] quit
# Configure Router B.
[RouterB] interface gigabitethernet1/0/2.1
[RouterB-GigabitEthernet1/0/2.1] vlan-type dot1q 1
[RouterB-GigabitEthernet1/0/2.1] l2 binding vsi a
[RouterB-GigabitEthernet1/0/2.1] undo shutdown
[RouterB-GigabitEthernet1/0/2.1] quit
Step 9 Configure restriction on the number and bandwidth of the channel SHTV in the VSI.
# Configure Router B.
[RouterB] l2-multicast-channel vsi a
[RouterB-l2-channel-vsi-a] channel shtv type asm
[RouterB-l2-channel-vsi-a-shtv] group 225.0.0.1 255.255.255.0 per-bandwidth 3
[RouterB-l2-channel-vsi-a-shtv] quit
Step 11 On Router A, set the interval for sending the same multicast CAC-related trap message to 40
seconds.
# Configure Router A.
[RouterA] l2-multicast limit trap-interval 40
Run the display mpls ldp session command, and you can find that the peer relationship is set up.
Take the display on Router A as an example:
[RouterA] display mpls ldp session
LDP Session(s) in Public Network
------------------------------------------------------------------------------
Peer-ID Status LAM SsnRole SsnAge KA-Sent/Rcv
------------------------------------------------------------------------------
2.2.2.2:0 Operational DU Passive 000:00:00 1/1
------------------------------------------------------------------------------
TOTAL: 1 session(s) Found.
LAM : Label Advertisement Mode SsnAge Unit : DDD:HH:MM
Run the display l2-multicast limit configuration command to check configurations of Layer
2 multicast CAC.
Take the display on Router B as an example:
[RouterB] display l2-multicast limit configuration
L2-multicast limit information, The unit of bandwidth is kbits/sec
---------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
---------------------------------------------------------------------
Global limit information:
---------------------------------------------------------------------
650 3000
---- ----------
PW(Peer:1.1.1.1, VCID:1) limit information:
----------------------------------------------------------------------------
650 3000
---- ----------
PW(Peer:1.1.1.1, VCID:1) channel limit information:
------------------------------------------------------------------------------
shtv 50 300
---- ----------
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
igmp-snooping enable
l2-multicast limit trap-interval 40
#
mpls lsr-id 1.1.1.1
mpls
#
mpls l2vpn
#
vsi a static
pwsignal ldp
vsi-id 1
peer 2.2.2.2
igmp-snooping enable
#
mpls ldp
#
interface Ethernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 1
undo shutdown
l2 binding vsi a
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.10.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 192.168.10.0 0.0.0.255
#
return
l Configuration file of Router B
#
sysname RouterB
#
igmp-snooping enable
l2-multicast limit max-entry 650
l2-multicast limit bandwidth 3000
#
mpls lsr-id 2.2.2.2
mpls
#
mpls l2vpn
#
vsi a static
igmp-snooping enable
pwsignal ldp
vsi-id 1
peer 1.1.1.1
l2-multicast limit max-entry 650 remote-peer 1.1.1.1
l2-multicast limit bandwidth 3000 remote-peer 1.1.1.1
l2-multicast limit channel shtv max-entry 50 remote-peer 1.1.1.1
l2-multicast limit channel shtv bandwidth 300 remote-peer 1.1.1.1
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.10.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
#
interface GigabitEthernet1/0/2.1
undo shutdown
vlan-type dot1q 1
l2 binding vsi a
#
interface LoopBack2
ip address 2.2.2.2 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 192.168.10.0 0.0.0.255
#
l2-multicast-channel
#
l2-multicast-channel vsi a
channel shtv type asm
group 225.0.0.1 255.255.255.0 per-bandwidth 3
#
return
Networking Requirements
As shown in Figure 3-13, PE1 is connected to PE2 through GE 1/0/0. They both belong to
VLAN 10. GE 1/0/0 on PE2 is a static router port and statically joins the multicast group
224.1.1.1.
IGMPv3 runs on PE1, while IGMPv2 runs on PE2.
SSM mapping therefore needs to be configured on PE1 so that PE1 can map the group addresses
in received Report messages that carry no source address to specific multicast sources. In
addition, the querier function is required for VLAN 10 on PE1 so that PE1 periodically sends
Query messages to PE2.
(Figure 3-13: PE1 and PE2 belong to VLAN 10 and are connected through their GE 1/0/0
interfaces; SSM mapping is configured on PE1.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Create VLANs and enable basic IGMP snooping functions.
2. Configure the versions of IGMP snooping running on PE1 and PE2.
3. Enable the querier function for VLAN 10 on PE1.
4. Configure GE 1/0/0 on PE2 as a static router port and statically add GE 1/0/0 to the multicast
group 224.1.1.1.
5. Configure an IGMP snooping SSM policy.
Data Preparation
To complete the configuration, you need the following data:
l VLAN ID
Procedure
Step 1 Create VLANs.
# Configure PE1.
<HUAWEI> system-view
[HUAWEI] sysname PE1
[PE1] interface gigabitethernet1/0/0
[PE1-GigabitEthernet1/0/0] portswitch
[PE1-GigabitEthernet1/0/0] undo shutdown
[PE1-GigabitEthernet1/0/0] quit
[PE1] vlan 10
[PE1-vlan10] port gigabitethernet 1/0/0
[PE1-vlan10] quit
# Configure PE2.
<HUAWEI> system-view
[HUAWEI] sysname PE2
[PE2] interface gigabitethernet 1/0/0
[PE2-GigabitEthernet1/0/0] portswitch
[PE2-GigabitEthernet1/0/0] undo shutdown
[PE2-GigabitEthernet1/0/0] quit
[PE2] vlan 10
[PE2-vlan10] port gigabitethernet 1/0/0
[PE2-vlan10] quit
Step 2 Enable IGMP snooping.
# Configure PE1.
[PE1] igmp-snooping enable
[PE1] vlan 10
[PE1-vlan10]igmp-snooping enable
# Configure PE2.
[PE2] igmp-snooping enable
[PE2] vlan 10
[PE2-vlan10]igmp-snooping enable
[PE2-vlan10]igmp-snooping proxy enable
Step 3 Configure the versions of IGMP snooping running on PE1 and PE2.
# Configure PE1.
[PE1-vlan10] igmp-snooping version 3
# Configure PE2.
[PE2-vlan10] igmp-snooping version 2
Step 4 Enable the querier function for VLAN 10 on PE1 so that PE1 can send general Query messages
to PE2.
[PE1] igmp-snooping send-query enable
[PE1] vlan 10
[PE1-vlan10] igmp-snooping querier enable
[PE1-vlan10] quit
Step 5 Statically add GE 1/0/0 on PE2 to the multicast group whose source address is 10.1.1.1 and
whose group address is 224.1.1.1.
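Roadmap item 5, the IGMP snooping SSM policy and mappings on PE1, can be sketched from the configuration file of PE1 at the end of this example; the ACL view prompt is an assumption:

```
[PE1] acl number 2008
[PE1-acl-basic-2008] rule 5 permit source 224.1.1.1 0
[PE1-acl-basic-2008] quit
[PE1] vlan 10
[PE1-vlan10] igmp-snooping ssm-policy 2008
[PE1-vlan10] igmp-snooping ssm-mapping enable
[PE1-vlan10] igmp-snooping ssm-mapping 224.1.1.0 255.255.255.0 10.1.1.2
[PE1-vlan10] igmp-snooping ssm-mapping 224.1.1.0 255.255.255.0 10.1.1.3
[PE1-vlan10] igmp-snooping ssm-mapping 224.1.1.0 255.255.255.0 10.1.1.4
[PE1-vlan10] quit
```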
# After PE1 receives a Report message, run the display igmp-snooping port-info command to
check configurations about the interface.
[PE1] display igmp-snooping port-info
-----------------------------------------------------------------------
(Source, Group) Port Flag
-----------------------------------------------------------------------
VLAN 10, 3 Entry(s)
(10.1.1.2, 224.1.1.1) GE1/0/0 --M
1 port(s)
(10.1.1.3, 224.1.1.1) GE1/0/0 --M
1 port(s)
(10.1.1.4, 224.1.1.1) GE1/0/0 --M
1 port(s)
-----------------------------------------------------------------------
----End
Configuration Files
l Configuration file of PE1
#
sysname PE1
#
vlan batch 10
#
igmp-snooping enable
igmp-snooping send-query enable
#
vlan 10
igmp-snooping enable
igmp-snooping querier enable
igmp-snooping ssm-mapping enable
igmp-snooping version 3
igmp-snooping ssm-policy 2008
igmp-snooping ssm-mapping 224.1.1.0 255.255.255.0 10.1.1.2
igmp-snooping ssm-mapping 224.1.1.0 255.255.255.0 10.1.1.3
igmp-snooping ssm-mapping 224.1.1.0 255.255.255.0 10.1.1.4
#
acl number 2008
rule 5 permit source 224.1.1.1 0
#
interface GigabitEthernet1/0/0
undo shutdown
portswitch
port default vlan 10
#
return
Networking Requirements
As shown in Figure 3-14, PE accesses the upper network through a VSI.
QinQ VLAN tag termination is configured on GE 1/0/0.1 of PE. Multicast CAC then needs to
be configured in the sub-interface view, with the number of multicast groups limited to 650 and
the bandwidth of the multicast groups limited to 3000 kbit/s.
Dot1q VLAN tag termination is configured on GE 1/0/0.2 of PE, and a global channel CCTV is
created using the SSM model. Channel-based multicast CAC then needs to be configured in the
sub-interface view, with the number of member multicast groups in the CCTV channel limited
to 50 and the bandwidth of the channel limited to 300 kbit/s.
Figure 3-14 Networking diagram of multicast CAC on a sub-interface for VLAN tag termination
(The figure shows the PE connecting to the Internet/intranet, with GE 1/0/0.2 performing dot1q
termination and GE 1/0/0.1 performing QinQ termination toward the CE.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable global IGMP snooping.
2. Configure basic MPLS functions on PE and bind the sub-interfaces on PE to the VSI.
3. Enable IGMP snooping in the VSI and configure the IGMP snooping version number.
4. Configure the QinQ VLAN tag termination mode on GE 1/0/0.1.
5. Configure global multicast CAC on GE 1/0/0.1.
6. Configure the dot1q VLAN tag termination mode on GE 1/0/0.2.
7. Configure channel-based multicast CAC on GE 1/0/0.2.
Data Preparation
To complete the configuration, you need the following data:
l Version of IGMP snooping running on PE
l VSI name and the numbers of the sub-interfaces to which the VSI is bound
l VLAN ID terminated by GE 1/0/0.1
l VLAN ID terminated by GE 1/0/0.2
l Parameters related to multicast CAC, including the number of multicast groups, bandwidth
of multicast groups, number of member multicast groups in the channel, and bandwidth of
the channel
Procedure
Step 1 Enable global IGMP snooping.
<HUAWEI> system-view
[HUAWEI] sysname PE
[PE] igmp-snooping enable
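Roadmap item 2, basic MPLS functions and the VSI on PE, can be sketched from the configuration file of PE at the end of this example:

```
[PE] mpls lsr-id 1.1.1.1
[PE] mpls
[PE-mpls] mpls ldp
[PE-mpls-ldp] quit
[PE-mpls] mpls l2vpn
[PE-l2vpn] quit
[PE] vsi a static
[PE-vsi-a] pwsignal ldp
[PE-vsi-a-ldp] vsi-id 2
[PE-vsi-a-ldp] peer 2.2.2.2
[PE-vsi-a-ldp] quit
[PE-vsi-a] quit
```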
Step 3 Enable basic IGMP snooping functions in the VSI and configure the IGMP snooping version
number.
[PE] vsi a static
[PE-vsi-a] igmp-snooping enable
[PE-vsi-a] igmp-snooping version 3
Step 4 Configure the QinQ VLAN tag termination mode on GE 1/0/0.1 and bind GE 1/0/0.1 to the VSI.
[PE] interface gigabitethernet 1/0/0
[PE-GigabitEthernet1/0/0] mode user-termination
[PE-GigabitEthernet1/0/0] quit
[PE] interface gigabitethernet 1/0/0.1
[PE-GigabitEthernet1/0/0.1] control-vid 1 qinq-termination
[PE-GigabitEthernet1/0/0.1] qinq termination pe-vid 10 ce-vid 100 to 200
[PE-GigabitEthernet1/0/0.1] l2 binding vsi a
[PE-GigabitEthernet1/0/0.1] undo shutdown
[PE-GigabitEthernet1/0/0.1] quit
Step 5 Configure the dot1q VLAN tag termination mode on GE 1/0/0.2 and bind GE 1/0/0.2 to the VSI.
[PE] interface gigabitethernet 1/0/0.2
[PE-GigabitEthernet1/0/0.2] control-vid 2 dot1q-termination
[PE-GigabitEthernet1/0/0.2] dot1q termination vid 50 to 90
[PE-GigabitEthernet1/0/0.2] l2 binding vsi a
[PE-GigabitEthernet1/0/0.2] undo shutdown
[PE-GigabitEthernet1/0/0.2] quit
Step 6 Configure multicast CAC on the two sub-interfaces for VLAN tag termination.
# Configure the sub-interface for QinQ VLAN tag termination.
[PE-GigabitEthernet1/0/0.1] l2-multicast limit max-entry 650 qinq pe-vid 10 ce-vid
100 to 102
[PE-GigabitEthernet1/0/0.1] l2-multicast limit bandwidth 3000 qinq pe-vid 10 ce-vid
100 to 102
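The channel-based CAC commands for the dot1q termination sub-interface (roadmap item 7) can be sketched from the configuration file of PE at the end of this example:

```
# Configure the sub-interface for dot1q VLAN tag termination.
[PE] interface gigabitethernet 1/0/0.2
[PE-GigabitEthernet1/0/0.2] l2-multicast limit channel cctv max-entry 50 dot1q vid 50 to 52
[PE-GigabitEthernet1/0/0.2] l2-multicast limit channel cctv bandwidth 300 dot1q vid 50 to 52
```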
VSI ID : 2
*Peer Router ID : 2.2.2.2
VC Label : 162816
Peer Type : dynamic
Session : down
Tunnel ID :
# After PE receives a Report message, run the display igmp-snooping port-info command to
check configurations about the interface.
[PE] display igmp-snooping port-info
-----------------------------------------------------------------------
(Source, Group) Port Flag
-----------------------------------------------------------------------
VSI a, 3 Entry(s)
(10.1.1.2, 232.0.0.1) GE1/0/0.2(PE:50) -D-
1 port(s)
(10.1.1.3, 232.0.0.1) GE1/0/0.2(PE:51) -D-
1 port(s)
(10.1.1.1, 232.1.1.1) GE1/0/0.1(PE:10/CE:100) -D-
1 port(s)
-----------------------------------------------------------------------
# Run the display l2-multicast limit configuration command to check configurations and
statistics about multicast CAC.
[PE] display l2-multicast limit configuration
L2-multicast limit information, The unit of bandwidth is Kbits/sec
------------------------------------------------------------------------------
ConfigEntries ConfigBandwidth
CurrentEntries CurrentBandwidth
------------------------------------------------------------------------------
interface GigabitEthernet1/0/0.1 qinq pe-vid 10 ce-vid 100 limit information:
------------------------------------------------------------------------------
650 3000
1 0
interface GigabitEthernet1/0/0.1 qinq pe-vid 10 ce-vid 101 limit information:
------------------------------------------------------------------------------
650 3000
0 0
interface GigabitEthernet1/0/0.1 qinq pe-vid 10 ce-vid 102 limit information:
------------------------------------------------------------------------------
650 3000
0 0
interface GigabitEthernet1/0/0.2 dot1q vid 50 channel limit information:
------------------------------------------------------------------------------
cctv 50 300
1 20
interface GigabitEthernet1/0/0.2 dot1q vid 51 channel limit information:
------------------------------------------------------------------------------
cctv 50 300
1 20
interface GigabitEthernet1/0/0.2 dot1q vid 52 channel limit information:
------------------------------------------------------------------------------
cctv 50 300
0 0
----End
Configuration Files
l Configuration file of PE
#
sysname PE
#
igmp-snooping enable
#
mpls lsr-id 1.1.1.1
mpls
#
mpls l2vpn
#
vsi a static
pwsignal ldp
vsi-id 2
peer 2.2.2.2
igmp-snooping enable
igmp-snooping version 3
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
mode user-termination
#
interface GigabitEthernet1/0/0.1
control-vid 1 qinq-termination
qinq termination pe-vid 10 ce-vid 100 to 200
l2 binding vsi a
l2-multicast limit max-entry 650 qinq pe-vid 10 ce-vid 100 to 102
l2-multicast limit bandwidth 3000 qinq pe-vid 10 ce-vid 100 to 102
#
interface GigabitEthernet1/0/0.2
control-vid 2 dot1q-termination
dot1q termination vid 50 to 90
l2 binding vsi a
l2-multicast limit channel cctv max-entry 50 dot1q vid 50 to 52
l2-multicast limit channel cctv bandwidth 300 dot1q vid 50 to 52
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 200.1.1.1 255.255.255.252
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 200.1.1.0 0.0.0.3
#
l2-multicast-channel
channel cctv type ssm
group 232.0.0.0 255.255.255.0 source 10.1.1.0 255.255.255.0 per-bandwidth 20
#
return
This chapter describes the PIM-DM (IPv4) fundamentals, configuration steps, and maintenance
for PIM-DM functions, along with typical examples.
4.1 PIM-DM (IPv4) Introduction
This section describes basic principles of PIM-DM.
4.2 Configuring Basic PIM-DM Functions
This section describes how to configure basic PIM-DM functions.
4.3 Adjusting Control Parameters of a Multicast Source
This section describes how to control the forwarding of multicast data based on the multicast
source in a PIM network.
4.4 Adjusting Control Parameters for Maintaining Neighbor Relationships
This section describes how to configure control parameters of a PIM-DM Hello message.
4.5 Adjusting Control Parameters for Prune
This section describes how to configure control parameters of a PIM-DM Join/Prune message.
4.6 Adjusting Control Parameters for State-Refresh
This section describes how to configure control parameters of a PIM-DM State-Refresh message.
4.7 Adjusting Control Parameters for Graft
This section describes how to configure control parameters of a PIM-DM Graft message.
4.8 Adjusting Control Parameters for Assert
This section describes how to configure control parameters of a PIM-DM Assert message.
4.9 Configuring PIM Silent
This section describes how to configure PIM silent to prevent the attack of a malicious host.
4.10 Maintaining PIM
This section describes how to clear the statistics of PIM-DM and monitor the running status of
PIM-DM.
4.11 Configuration Example
This section provides several configuration examples of PIM-DM.
CAUTION
This chapter is concerned only about the PIM-DM configuration in an IPv4 network.
Protocol Independent Multicast (PIM) is a multicast protocol that is independent of any
particular unicast routing protocol, such as static routes, RIP, OSPF, IS-IS, or BGP. Multicast
routing does not depend on which unicast routing protocol is used; unicast routes are used only
to generate the related multicast routing entries.
Based on the Reverse Path Forwarding (RPF), PIM transmits multicast data across a network.
RPF constructs a multicast forwarding tree by using the existing unicast routing information.
When a multicast packet reaches a router, the router performs the RPF check first. If the packet
does not pass the RPF check, the router directly discards the packet.
NOTE
For more details of RPF, refer to the chapter IPv4 Multicast Routing Management in the HUAWEI
NetEngine80E/40E Router Configuration Guide - IP Multicast.
(The networking figure shows a multicast server acting as the source, PIM-DM running between
the routers, and IGMP running on the interfaces connected to the receivers User A, User B,
User C, and User D.)
To prevent the preceding case, you can set the router interface to the PIM silent state. An
interface in the PIM silent state is prohibited from receiving and forwarding any PIM packet,
and all PIM neighbor relationships and PIM state machines on the interface are deleted. The
interface acts as a static DR, and the setting takes effect immediately. IGMP on the interface is
not affected.
PIM Multi-instance
In multi-instance applications, multicast routers need to maintain the PIM neighbor list and
multicast routing table for different VPN instances and keep the information independent among
multiple instances.
When a router receives a multicast data packet, the router needs to distinguish the VPN instance
to which the packet belongs and forward the packet based on the multicast routing table of the
specific VPN instance, or create a PIM multicast routing entry of the VPN instance.
Applicable Environment
PIM-DM is applicable to small-scale networks in which most network segments have
receivers.
Pre-configuration Tasks
Before configuring basic PIM-DM functions, complete the following configuration tasks:
Data Preparation
To configure basic PIM-DM functions, you need the following data.
No. Data
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
CAUTION
The configuration related to the VPN instance is applicable only to the PE router. If the interface
of the VPN instance connects to hosts, run the commands in Step 3 and Step 4.
----End
Context
NOTE
PIM-SM and PIM-DM cannot be enabled on an interface at the same time. The PIM mode must be the
same on all the interfaces of the same instance. When routers are distributed in different PIM-DM domains,
enable PIM-SM on all non-boundary interfaces.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
pim dm
PIM-DM is enabled.
After PIM-DM is enabled on the interface and the PIM neighbor relationship is set up between
routers, the protocol packets sent by the PIM neighbors can be processed. You can run the undo
pim dm command to disable PIM-DM on the interface.
----End
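Taken together, the three steps form a short transcript; the interface name is a placeholder:

```
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] pim dm
```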
Procedure
l Run the command display pim [ vpn-instance vpn-instance-name | all-instance ]
interface [ interface-type interface-number | up | down ] [ verbose ] to check PIM on
interfaces of the public network, VPN instance, or all instances.
l Run the command display pim [ vpn-instance vpn-instance-name | all-instance ]
neighbor [ neighbor-address | interface interface-type interface-number | verbose ] * to
check PIM neighbors of the public network, VPN instance, or all instances.
l Run the following commands to check the PIM routing table of the public network, VPN
instance, or all instances.
– display pim { vpn-instance vpn-instance-name | all-instance } routing-table [ group-
address [ mask { group-mask-length | group-mask } ] | source-address [ mask { source-
mask-length | source-mask } ] | incoming-interface { interface-type interface-
number | register } | outgoing-interface { include | exclude | match } { interface-type
interface-number | register | none } | mode { dm | sm | ssm } | flags flag-value | fsm ]
* [ outgoing-interface-number [ number ] ]
----End
Example
Run the display pim interface verbose command, and you can view the detailed information
about PIM on the interface in the public network instance.
<HUAWEI> display pim interface verbose
VPN-Instance: public net
Interface: Ethernet2/0/12, 12.40.41.2
PIM version: 2
PIM mode: Dense
PIM state: up
PIM DR: 12.40.41.2 (local)
PIM DR Priority (configured): 1
PIM neighbor count: 1
PIM hello interval: 120 s
PIM LAN delay (negotiated): 500 ms
PIM LAN delay (configured): 100 ms
PIM hello override interval (negotiated): 2500 ms
PIM hello override interval (configured): 2500 ms
PIM Silent: enabled
PIM neighbor tracking (negotiated): disabled
PIM neighbor tracking (configured): disabled
PIM generation ID: 0XC34B2FCD
PIM require-GenID: disabled
PIM hello hold interval: 60 s
PIM hello assert interval: 180 s
PIM triggered hello delay: 5 s
PIM J/P interval: 60 s
PIM J/P hold interval: 200 s
PIM state-refresh processing: enabled
PIM state-refresh interval: 60 s
PIM graft retry interval: 3 s
PIM state-refresh capability on link: capable
PIM dr-switch-delay timer : not configured
Number of routers on network not using DR priority: 0
Number of routers on network not using LAN delay: 0
Number of routers on link not using neighbor tracking: 1
Run the display pim neighbor verbose command, and you can view the detailed information
about PIM neighbors in the public network instance.
<HUAWEI> display pim neighbor verbose
VPN-Instance: public net
Total Number of Neighbors = 1
Neighbor: 12.40.41.1
Interface: Ethernet2/0/12
Uptime: 00:20:19
Expiry time: 00:01:27
DR Priority: 1
Generation ID: 0XB3DDAD78
Holdtime: 105 s
LAN delay: 500 ms
Override interval: 2500 ms
State refresh interval: 60 s
Neighbor tracking: Disabled
Pre-configuration Tasks
Before configuring control parameters of a multicast source, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-DM Functions
Data Preparation
To configure control parameters of a multicast source, you need the following data.
No. Data
Context
Do as follows on the first next hop router connected to the source:
NOTE
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the PIM router:
Procedure
Step 1 Run:
system-view
NOTE
l If acl-number | acl-name acl-name is specified in the source-policy command and ACL rules are
created, only the multicast packets whose source addresses match the ACL rules are permitted.
l If acl-number | acl-name acl-name is specified in the source-policy command but no ACL rule
is created, no multicast packets are forwarded, regardless of their source addresses.
l The source-policy command does not filter the static (S, G) entries and the PIM entries of the Join
messages received from private networks.
----End
NOTE
Pre-configuration Tasks
Before adjusting control parameters for maintaining neighbor relationships, complete the
following tasks:
Data Preparation
To adjust control parameters for maintaining neighbor relationships, you need the following
data.
No. Data
Context
Do as follows on the PIM-DM router:
NOTE
Procedure
l Global Configuration
1. Run:
system-view
2. Run:
interface interface-type interface-number
Context
Do as follows on the PIM-DM router:
NOTE
Procedure
l Global Configuration
1. Run:
system-view
2. Run:
interface interface-type interface-number
Procedure
Step 1 Run:
system-view
----End
Context
To prevent certain routers from participating in PIM, PIM neighbor filtering is required.
Do as follows on the router running PIM-DM:
Procedure
Step 1 Run:
system-view
NOTE
When configuring the neighbor filtering function on the interface, you must also configure the neighbor
filtering function correspondingly on the router that sets up the neighbor relationship with the interface.
----End
Routers can work normally under the control of the default parameter values. Users can adjust
related parameters according to the specific network environment.
NOTE
Pre-configuration Tasks
Before adjusting control parameters for prune, complete the following tasks:
Data Preparation
To adjust control parameters for prune, you need the following data.
No. Data
4.5.2 Configuring the Period for an Interface to Keep the Prune State
Context
Do as follows on the PIM-DM router:
Procedure
l Global Configuration
1. Run:
system-view
The period during which the downstream interface is in the Prune state is set.
After the period expires, the pruned interface starts to forward packets again. Before
the period expires, the router refreshes the Prune state when receiving a State-Refresh
message.
l Configuration on an Interface
1. Run:
system-view
The period during which the downstream interface is in the Prune state is set.
After the period expires, the pruned interface starts to forward packets again.
Before the period expires, the router refreshes the Prune state when receiving a State-
Refresh message.
----End
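Based on the holdtime join-prune command mentioned in the State-Refresh section, the two configuration paths can be sketched as follows; the interval value, the interface name, and the interface-view form pim holdtime join-prune are assumptions:

```
# Global configuration, in the PIM view.
<HUAWEI> system-view
[HUAWEI] pim
[HUAWEI-pim] holdtime join-prune 210
# Configuration on an interface (assumed interface-view form).
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] pim holdtime join-prune 210
```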
Context
Do as follows on the PIM-DM router:
Procedure
l Global Configuration
1. Run:
system-view
----End
Context
Do as follows on the PIM-DM router:
Procedure
l Global Configuration
1. Run:
system-view
When a router receives a Prune message from an upstream interface, it indicates that
another downstream router exists in the LAN. If the router still requests the multicast
data, it needs to send a Join message to the upstream router in the override-interval
period.
l Configuration on an Interface
1. Run:
system-view
----End
Routers periodically send State-Refresh messages to refresh the Prune state of interfaces and
maintain the SPT.
Routers can work normally under the control of the default parameter values. Users can adjust
related parameters according to the specific network environment.
NOTE
Pre-configuration Tasks
Before adjusting control parameters for State-Refresh, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-DM Functions
Data Preparation
To adjust control parameters for State-Refresh, you need the following data.
No. Data
Context
Do as follows on all the routers in the PIM-DM domain.
NOTE
Procedure
Step 1 Run:
system-view
The interface on which PIM-DM State-Refresh is disabled cannot forward any State-Refresh
message.
NOTE
You can run the pim state-refresh-capable command to re-enable PIM-DM State-Refresh on the interface.
----End
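As the note above implies, State-Refresh is disabled on an interface with the undo form of the pim state-refresh-capable command; a minimal sketch with a placeholder interface:

```
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] undo pim state-refresh-capable
```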
Context
Do as follows on all the routers in the PIM-DM domain:
Procedure
Step 1 Run:
system-view
NOTE
l This command is applicable to the first-hop router connecting with the multicast source.
l The interval for sending PIM State-Refresh messages should be shorter than the timeout period for
keeping the Prune state.
l You can run the holdtime join-prune command to set the timeout period for keeping the Prune state.
----End
Context
Do as follows on all the PIM-DM routers in the PIM-DM domain:
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim [ vpn-instance vpn-instance-name ]
The period for waiting to receive the next State-Refresh message is set.
----End
Context
Do as follows on all the PIM-DM routers in the PIM-DM domain:
Procedure
Step 1 Run:
system-view
This command is valid only on the router directly connected to the source.
----End
Applicable Environment
In a PIM-DM network, if State-Refresh is not enabled, a pruned interface can forward packets
after the Prune state times out. If State-Refresh is enabled, the pruned interface may never
forward packets.
To enable new members in the network to receive multicast data quickly, a PIM-DM router
sends a Graft message through an upstream interface. After receiving the Graft message, the
upstream router responds immediately with a Graft-Ack message and enables the interface that
receives the Graft message to forward packets.
Routers can work normally under the control of the default parameter values. Users can adjust
the related parameters according to the specific network environment.
NOTE
Pre-configuration Task
Before configuring control parameters for graft, complete the following tasks:
Data Preparation
To configure control parameters for graft, you need the following data.
No. Data
Context
Do as follows on the PIM-DM router:
Procedure
Step 1 Run:
system-view
----End
Applicable Environment
When a PIM-DM router receives multicast data through a downstream interface, other upstream
routers exist on that network segment. The router then sends Assert messages through the
interface to elect a unique forwarder.
Routers can work normally under the control of the default parameter values. Users can adjust
related parameters according to the specific network environment.
NOTE
Pre-configuration Tasks
Before adjusting control parameters for Assert, complete the following tasks:
Data Preparation
To adjust control parameters for Assert, you need the following data.
No. Data
Context
Do as follows on the PIM-DM router:
NOTE
Procedure
l Global Configuration
1. Run:
system-view
The router that fails in the election prevents its downstream interface from forwarding
multicast data.
After the Holdtime period of the Assert state expires, the downstream interface can
forward packets.
----End
Applicable Environment
On the access layer, the interface directly connected to hosts needs to be enabled with PIM. You
can set up the PIM neighbor relationship on the interface to process various PIM packets. The
configuration, however, introduces a security vulnerability: a host can maliciously generate
PIM Hello messages and send large numbers of packets to the router, which may cause the
router to fail.
To prevent the preceding case, you can set the status of the interface to PIM silent. When the
interface is in the PIM silent state, the interface is prevented from receiving and forwarding any
PIM packet. All PIM neighbor relationships and PIM state machines on the interface are deleted.
At the same time, IGMP and MLD on the interface are not affected.
To enable PIM silent, the network environment must meet the following conditions:
CAUTION
If PIM silent is enabled on the interface connected to a router, the PIM neighbor relationship
cannot be established and a multicast fault may occur.
If the host network segment is connected to multiple routers and PIM silent is enabled on multiple
interfaces of the routers, these interfaces do not send Assert messages. Therefore, multiple
interfaces that forward multicast data exist in the user network segment. A multicast fault thus
occurs.
Pre-configuration Tasks
Before configuring PIM silent, complete the following tasks:
Data Preparation
To configure PIM silent, you need the following data.
No. Data
Context
Do as follows on the interface connected to the host network segment:
Procedure
Step 1 Run:
system-view
----End
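A minimal sketch of the configuration, assuming the interface-view pim silent command and using the interface shown in the verification example that follows:

```
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 6/0/0
[HUAWEI-GigabitEthernet6/0/0] pim silent
```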
Procedure
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] interface
[ interface-type interface-number | up | down ] [ verbose ] command to check PIM on an
interface.
----End
Example
Run the display pim interface verbose command, and you can find that the configuration is
complete.
<HUAWEI> display pim interface gigabitethernet 6/0/0 verbose
VPN-Instance: public net
Interface: GigabitEthernet6/0/0, 10.1.2.1
PIM version: 2
PIM mode: Dense
PIM state: up
PIM DR: 10.1.2.1 (local)
PIM DR Priority (configured): 1
PIM neighbor count: 0
PIM hello interval: 30 s
PIM LAN delay (negotiated): 500 ms
PIM LAN delay (configured): 500 ms
PIM hello override interval (negotiated): 2500 ms
PIM hello override interval (configured): 2500 ms
PIM Silent: enabled
CAUTION
The statistics of the PIM control messages on the interface cannot be restored after you reset
them. Confirm the action before you run the command.
Procedure
l Run the reset pim [ vpn-instance vpn-instance-name | all-instance ] control-message
counters [ interface interface-type interface-number ] command in the user view to clear
the statistics of the PIM control messages on an interface.
----End
Procedure
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] claimed-route
[ source-address ] command in any view to check the unicast routes used by PIM.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] control-message
counters interface interface-type interface-number [ message-type { assert | graft | graft-
ack | hello | join-prune | state-refresh | bsr } ] command in any view to check the number
of sent or received PIM control messages.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] grafts command
in any view to check unacknowledged PIM-DM Graft messages.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] interface
[ interface-type interface-number | up | down ] [ verbose ] command to check information
about PIM on an interface.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] neighbor
[ neighbor-address | interface interface-type interface-number | verbose ] * command to
check information about a PIM neighbor.
l Run the following commands to check the PIM routing table.
– display pim { vpn-instance vpn-instance-name | all-instance } routing-table [ group-
address [ mask { group-mask-length | group-mask } ] | source-address [ mask { source-
mask-length | source-mask } ] | incoming-interface { interface-type interface-
number | register } | outgoing-interface { include | exclude | match } { interface-
type interface-number | register | none } | mode { dm | sm | ssm } | flags flag-value |
fsm ] * [ outgoing-interface-number [ number ] ]
– display pim routing-table [ group-address [ mask { group-mask-length | group-
mask } ] | source-address [ mask { source-mask-length | source-mask } ] | incoming-
interface { interface-type interface-number | register } | outgoing-interface
{ include | exclude | match } { interface-type interface-number | vpn-instance vpn-
instance-name | register | none } | mode { dm | sm | ssm } | flags flag-value | fsm ] *
[ outgoing-interface-number [ number ] ]
– display pim [ vpn-instance vpn-instance-name | all-instance ] routing-table brief
[ group-address [ mask { group-mask-length | group-mask } ] | source-address
[ mask { source-mask-length | source-mask } ] | incoming-interface { interface-type
interface-number | register } ] *
----End
Networking Requirements
In the test network shown in Figure 4-2, multicast and an IGP are deployed, and the unicast
routes work normally. The routers in the network must be configured correctly so that hosts
can receive Video On Demand (VOD) data in multicast mode.
Configuration Roadmap
The network is a small-scale experiment network. Therefore, PIM-DM is adopted. Enable PIM
silent on Router A to prevent Hello message attack. The configuration roadmap is as follows:
1. Enable multicast on each router.
2. Enable PIM-DM on each interface.
3. Enable PIM silent on the interface connected to hosts, and configure IGMP.
Data Preparation
To complete the configuration, you need the following data:
l Address of multicast group G is 225.1.1.1/24.
l Address of multicast source S is 10.110.5.100/24.
l Version of IGMP running on routers and hosts is IGMPv2.
Procedure
Step 1 Enable multicast on each router and PIM-DM on each interface.
# Enable multicast on Router A and enable PIM-DM on each interface. The configuration
procedures on Router B, Router C, and Router D are similar to those on Router A, and are not
mentioned here.
[RouterA] multicast routing-enable
Step 2 Enable PIM silent on the interface connected to hosts, and configure IGMP on the interface.
# On Router A, enable PIM silent on the interface connected to hosts, and configure IGMP on
the interface.
[RouterA] interface gigabitethernet 2/0/0
[RouterA-GigabitEthernet2/0/0] pim silent
[RouterA-GigabitEthernet2/0/0] igmp enable
[RouterA-GigabitEthernet2/0/0] quit
# On Router B, configure IGMP on the interface. The configuration of Router C is the same
as that of Router B, and is not mentioned here.
[RouterB] interface gigabitethernet 2/0/0
[RouterB-GigabitEthernet2/0/0] igmp enable
[RouterB-GigabitEthernet2/0/0] quit
# Run the display pim neighbor command to view the PIM neighbor relationship between
routers. Take the PIM neighbor relationship on Router D as an example:
<RouterD> display pim neighbor
VPN-Instance: public net
Total Number of Neighbors = 3
Neighbor Interface Uptime Expires Dr-Priority BFD-Session
192.168.1.1 Pos3/0/0 00:02:22 00:01:27 1 N
192.168.2.1 Pos1/0/0 00:00:22 00:01:29 1 N
192.168.3.1 Pos2/0/0 00:00:23 00:01:31 1 N
# Run the display pim routing-table command to view the PIM routing table. Assume
that Host A requests the information of group G (225.1.1.1). After multicast source S
(10.110.5.100) sends multicast packets to multicast group G (225.1.1.1), the MDT (Multicast
Distribution Tree) is established by means of flooding. All PIM multicast routers (including
Router A and Router D) on the MDT have the (S, G) entry. When Host A joins G, Router A
generates a (*, G) entry. The display information on Router B and Router C is similar to that
on Router A.
<RouterA> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
Protocol: pim-dm, Flag: WC
UpTime: 03:54:19
Upstream interface: NULL
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.1.1 255.255.255.0
pim dm
pim silent
igmp enable
#
interface POS1/0/0
undo shutdown
link-protocol ppp
ip address 192.168.1.1 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 192.168.1.0 0.0.0.255
network 10.110.1.0 0.0.0.255
#
return
l Configuration file of Router B
#
sysname RouterB
#
multicast routing-enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.2.1 255.255.255.0
pim dm
igmp enable
#
interface POS1/0/0
undo shutdown
link-protocol ppp
ip address 192.168.2.1 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 192.168.2.0 0.0.0.255
network 10.110.2.0 0.0.0.255
#
return
l Configuration file of Router C
#
sysname RouterC
#
multicast routing-enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.2.2 255.255.255.0
pim dm
igmp enable
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 192.168.3.1 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 192.168.3.0 0.0.0.255
network 10.110.2.0 0.0.0.255
#
return
l Configuration file of Router D
#
sysname RouterD
#
multicast routing-enable
#
interface GigabitEthernet4/0/0
undo shutdown
ip address 10.110.5.1 255.255.255.0
pim dm
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 192.168.2.2 255.255.255.0
pim dm
#
interface Pos2/0/0
undo shutdown
link-protocol ppp
ip address 192.168.3.2 255.255.255.0
pim dm
#
interface Pos3/0/0
undo shutdown
link-protocol ppp
ip address 192.168.1.2 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 192.168.2.0 0.0.0.255
network 192.168.3.0 0.0.0.255
network 192.168.1.0 0.0.0.255
network 10.110.5.0 0.0.0.255
#
return
This chapter describes the PIM-SM (IPv4) and SSM fundamentals, configuration steps, and
maintenance for PIM-SM functions, along with typical examples.
5.1 PIM-SM (IPv4) Introduction
This section provides an overview of PIM-SM and describes the PIM-SM features supported
by the NE80E/40E.
5.2 Configuring Basic PIM-SM Functions
This section describes how to configure PIM-SM to implement ASM and SSM models.
5.3 Adjusting Control Parameters for a Multicast Source
This section describes how to control the forwarding of multicast data according to the multicast
source in the PIM network.
5.4 Adjusting Control Parameters of the C-RP and C-BSR
This section describes how to configure control parameters of the C-RP, advertisement
messages, C-BSR, and Bootstrap messages.
5.5 Configuring a BSR Administrative Domain
This section describes how to configure a PIM-SM administrative domain.
5.6 Adjusting Control Parameters for Establishing the Neighbor Relationship
This section describes how to configure control parameters of PIM-SM Hello messages.
5.7 Adjusting Control Parameters for Source Registering
This section describes how to configure control parameters of PIM-SM Register messages.
5.8 Adjusting Control Parameters for Forwarding
This section describes how to configure control parameters of PIM-SM Join/Prune messages.
5.9 Adjusting Control Parameters for Assert
This section describes how to configure control parameters of PIM-SM Assert messages.
5.10 Configuring the SPT Switchover
This section describes how to configure the PIM-SM SPT switchover.
5.11 Configuring PIM BFD
This section describes how to configure PIM BFD in a shared network segment.
For details of RPF, refer to the chapter IPv4 Multicast Routing Management in the NE80E/40E Router
Configuration Guide - IP Multicast.
The working process of the Protocol Independent Multicast-Sparse Mode (PIM-SM) consists
of neighbor discovery, assert, DR election, RP discovery, join, prune, register, and SPT
switchover.
As shown in Figure 5-1, PIM-SM is used in a large-scale network with sparsely distributed
group members.
Figure 5-1 shows a multicast server connected through PIM-SM routers to three receivers
(UserA, UserB, and UserC); PIM-SM runs between the routers, and IGMP runs between the
routers and the receivers.
NOTE
l The Protocol Independent Multicast Dense Mode (PIM-DM) is applicable to a small-scale network
with densely distributed members.
l PIM-SM can be used to construct the Any-Source Multicast (ASM) and Source-Specific Multicast
(SSM) models.
Static RP
You can specify a static RP on all the routers in a PIM-SM domain. When a dynamic RP exists
in the domain, the dynamic RP is preferred by default, but you can configure the static RP to be
preferred.
Dynamic RP
You can configure C-RPs and C-BSRs in a PIM-SM domain and set the unified rules used to
dynamically generate the BSR and the RP. You can adjust the priority for C-RP election, adjust
the lifetime of the advertisement message on the BSR received from the C-RP, adjust the interval
for the C-RP to send advertisement messages, and specify an Access Control List (ACL) to limit
the range of the multicast groups served by the C-RP.
BSR
You can specify the C-BSR in the BSR domain, adjust the hash length used by the RP for C-RP
election, adjust the priority used for BSR election, and adjust the legal BSR address range. To
limit the transmission of BSR messages, you can configure the BSR service boundary on an
interface of the router on the boundary of the BSR domain.
PIM BFD
In the NE80E/40E, you can dynamically set up the BFD session to detect the status of the link
between PIM neighbors. Once a fault occurs on the link, BFD reports the fault to PIM.
PIM GR
The NE80E/40E supports the PIM GR function on the router with double MPUs. PIM GR
ensures normal multicast data forwarding during master-slave switchover of the router.
Multi-Instance PIM
In multi-instance applications, a multicast router needs to maintain the PIM neighbor list,
multicast routing table, BSR information, and RP-Set information for different VPN instances
and keep the information independent between the instances. The router functions as multiple
multicast routers running PIM independently.
When a router receives a data packet, it needs to differentiate which VPN instance the packet
belongs to and forward it based on the multicast routing table of that VPN instance, or create
PIM-related multicast routing entries in that VPN instance.
Applicable Environment
A PIM-SM network can adopt the ASM and SSM models to provide multicast services for user
hosts. The integrated components (including the RP) of the ASM model must be configured in
the network first. The SSM group address range is then adjusted as required.
NOTE
The SSM model is only supported in IGMPv3. If user hosts must run IGMPv1 or IGMPv2, configure IGMP
SSM mapping on router interfaces.
Through IGMP, a router knows the multicast group G that a user wants to join.
l If G is in the SSM group address range and the source S is specified when the user joins G
through IGMPv3, the SSM model is used to provide multicast services.
l If G is in the SSM group address range and the router is configured with the (S, G) SSM
mapping rules, the SSM model is used to provide multicast services.
l If G is not in the SSM group address range, the ASM model is used to provide multicast
services.
In the PIM-SM network, the ASM model supports the following methods to obtain an RP. You
can select the method as required.
l Dynamic RP: To obtain the dynamic RP, select several routers in the PIM-SM domain and
configure them as C-RPs and C-BSRs, and then configure the BSR boundary on the
interface on the boundary of the domain. Each router in the PIM-SM domain can then
automatically obtain the RP.
l Static RP: To obtain a static RP, manually configure the RP address on each router in the
PIM-SM domain. In a large-scale PIM network, configuring the static RP is complicated. To
enhance the robustness and ease the operation and management of the multicast network,
the static RP is usually used as the backup of the BSR-RP.
A multicast group may be in the service range of the dynamic RP and the static RP
simultaneously. By default, the router prefers the dynamic RP. If the static RP is configured
with preferred, the static RP is preferred.
You can configure different RPs to serve different multicast groups. Compared with a single
RP serving all groups, this reduces the burden on each RP and enhances the robustness of the
network.
Pre-configuration Tasks
Before configuring basic PIM-SM functions, complete the following tasks:
l Configuring a unicast routing protocol
Data Preparation
To configure basic PIM-SM functions, you need the following data.
No. Data
1 Static RP address
3 C-RP priority
6 Timeout of the period during which BSR waits to receive the Advertisement message
from C-RP.
8 C-BSR priority
Context
CAUTION
The configuration related to the VPN instance is applicable only to the PE router. If the interface
of the VPN instance connects to the host, run the commands in step 3 and step 4.
Procedure
Step 1 Run:
system-view
----End
Context
NOTE
PIM-SM and PIM-DM cannot be enabled on an interface at the same time. The PIM mode on all interfaces
that belong to the same instance must be consistent. When the router resides in a PIM-SM domain,
enable PIM-SM on all non-boundary interfaces.
Procedure
Step 1 Run:
system-view
PIM-SM is enabled.
After PIM-SM is enabled on the interface and PIM neighbor relationships are set up between
routers, the packets from the PIM neighbors can be processed.
----End
CAUTION
When the static RP and the dynamic RP are configured in the PIM-SM network at the same time, faults
may occur in the network. So, confirm the action before you run the command. If you want to
use only the dynamic RP in the PIM-SM network, skip this configuration.
Procedure
Step 1 Run:
system-view
All routers in the PIM-SM area must be configured with the same static-rp command.
l rp-address: specifies the static RP address.
l basic-acl-number | acl-name acl-name: specifies the ACL. The ACL defines the range of
the multicast group served by the static RP. When the range of multicast groups that multiple
static RPs serve overlaps, the static RP with the largest IP address functions as the RP.
l preferred: indicates the preference of the static RP. If the C-RP is configured in the network
at the same time, the router prefers the RP statically specified after preferred is used.
Otherwise, C-RP is preferred.
----End
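# Based on the parameters described above, a static RP configuration might look as follows.
The RP address 10.1.1.1, ACL 2001, and group range are illustrative and must match the
network plan; run the same configuration on all routers in the PIM-SM domain:
[RouterA] acl number 2001
[RouterA-acl-basic-2001] rule permit source 225.1.0.0 0.0.255.255
[RouterA-acl-basic-2001] quit
[RouterA] pim
[RouterA-pim] static-rp 10.1.1.1 2001 preferred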
CAUTION
The configuration is applicable only to the dynamic RP. If you want to use the static RP in the
network, skip the configuration.
Procedure
Step 1 Run:
system-view
l interface-type interface-number: specifies the interface where the C-BSR resides. The
interface must be configured with the PIM-SM.
l hash-length: specifies the hash mask length. Based on the group address G, the C-RP address,
and the value of hash-length, routers run a hash function over the C-RPs that have the same
priority and serve G, and then compare the results. The C-RP with the greatest hash value
functions as the RP that serves G.
l priority: specifies the priority used by routers in the BSR election. The greater the value,
the higher the priority. By default, it is 0.
In the BSR election, the C-BSR with the highest priority wins. In the case of the same priority,
the C-BSR with the largest IP address wins.
When the router interworks with a router supporting auto-RP, this command needs to be
configured on the router.
----End
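# For example, to configure the interface Pos 1/0/0 as a C-BSR with a hash mask length of 30
and a priority of 1 (the interface and values are illustrative):
[RouterA] pim
[RouterA-pim] c-bsr pos 1/0/0 30 1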
Context
This configuration is optional. By default, the SSM group address range is 232.0.0.0/8.
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim [ vpn-instance vpn-instance-name ]
Step 3 Run:
ssm-policy { basic-acl-number | acl-name acl-name }
NOTE
Ensure that the SSM group address range of all routers in the network is consistent.
----End
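# For example, the following sketch restricts the SSM group address range to 232.1.0.0/16
(the ACL number and range are illustrative). Apply the same policy on all routers in the
network:
[RouterA] acl number 2000
[RouterA-acl-basic-2000] rule permit source 232.1.0.0 0.0.255.255
[RouterA-acl-basic-2000] quit
[RouterA] pim
[RouterA-pim] ssm-policy 2000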
NOTE
Pre-configuration Tasks
Before adjusting control parameters for a multicast source, complete the following tasks:
l Configuring a certain unicast routing protocol
l Configuring Basic PIM-SM Functions
Data Preparation
To adjust control parameters for a multicast source, you need the following data.
No. Data
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
Step 3 Run:
source-lifetime interval
----End
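# For example, to set the multicast source lifetime to 300 seconds (an illustrative value;
keep the default unless there is a specific requirement):
[RouterA] pim
[RouterA-pim] source-lifetime 300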
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
A filter is configured.
If the basic ACL is configured, only the packets with the source addresses that pass the filtering
are forwarded.
If the advanced ACL is configured, only the packets with the source addresses and group
addresses that pass the filtering are forwarded.
NOTE
l If acl-number | acl-name acl-name is specified in the source-policy command and ACL rules are
created, only the multicast packets whose source addresses match the ACL rules are permitted.
l If acl-number | acl-name acl-name is specified in the source-policy command but no ACL rule is
created, no multicast packets are forwarded, regardless of their source addresses.
l The source-policy command does not filter the static (S, G) entries and the PIM entries of the Join
messages received from private networks.
----End
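# For example, the following sketch forwards only the packets from source 10.110.5.100 (the
ACL number and address are illustrative):
[RouterA] acl number 2001
[RouterA-acl-basic-2001] rule permit source 10.110.5.100 0
[RouterA-acl-basic-2001] quit
[RouterA] pim
[RouterA-pim] source-policy 2001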
NOTE
The configuration is applicable only to a BSR-RP. If you want to use only a static RP in the network, skip
the configuration.
The router can work normally under the control of default values. The NE80E/40E allows users
to adjust the parameters as required.
NOTE
Pre-configuration Tasks
Before adjusting control parameters of the C-RP and C-BSR, complete the following tasks:
Data Preparation
To adjust various control parameters of the C-RP and C-BSR, you need the following data.
No. Data
1 RP priority
3 Timeout of the period during which a BSR waits to receive Advertisement messages
from a C-RP
5 Priority of a C-BSR
Context
Do as follows on the router configured with the C-RP:
NOTE
You can re-set various parameters of a C-RP. This configuration is optional. If there is no specific
requirement, default values of parameters are recommended.
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim [ vpn-instance vpn-instance-name ]
Step 3 Run:
c-rp priority priority
Step 4 Run:
c-rp advertisement-interval interval
The interval at which the C-RP sends Advertisement messages is set.
Step 5 Run:
c-rp holdtime interval
The time for holding the Advertisement message from a C-RP is set. The value must be greater
than the interval for a C-RP to send advertisement messages.
The C-RP periodically sends advertisement messages to the BSR. After receiving the
advertisement messages, the BSR obtains the Holdtime of the C-RP from the message. During
the Holdtime, the C-RP is valid. When the Holdtime expires, the C-RP ages out.
----End
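# For example, the following sketch adjusts the C-RP parameters (the values are illustrative;
note that the holdtime must be greater than the advertisement interval):
[RouterA] pim
[RouterA-pim] c-rp priority 10
[RouterA-pim] c-rp advertisement-interval 60
[RouterA-pim] c-rp holdtime 180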
Context
Do as follows on the router configured with the C-BSR:
NOTE
You can re-set various parameters of a C-BSR. This configuration is optional. If there is no specific
requirement, the default values of parameters are recommended.
Procedure
Step 1 Run:
system-view
The time of holding the Bootstrap message received from a BSR is set.
The BSR periodically sends a Bootstrap message to the network. After receiving the Bootstrap
message, the routers keep the message for a certain time. During the period, the BSR election
stops temporarily. If the Holdtime timer times out, a new round of BSR election is triggered
among C-BSRs.
NOTE
Ensure that the value of c-bsr holdtime is greater than the value of c-bsr interval. Otherwise,
the BSR election cannot produce a stable winner.
----End
Context
Do as follows on the router that may become the BSR boundary:
Procedure
Step 1 Run:
system-view
The BSR boundary is configured. Bootstrap messages cannot pass the BSR boundary.
By default, all the PIM-SM routers on the network can receive Bootstrap messages.
NOTE
Routers outside the BSR boundary cannot participate in multicast forwarding in this PIM-SM domain.
----End
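# Assuming the pim bsr-boundary interface command is used for this purpose, a sketch on a
boundary interface (the interface number is illustrative) is as follows:
[RouterA] interface pos 1/0/0
[RouterA-Pos1/0/0] pim bsr-boundary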
Context
Do as follows on all routers in the PIM-SM domain:
NOTE
By default, all BSR packets are received without the BSR source address check.
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on all the C-BSRs in the PIM-SM domain:
NOTE
This configuration is optional. By default, a router does not check the C-RP address and the group address
contained in a received Advertisement message and adds them to the RP-set.
Procedure
Step 1 Run:
system-view
The range of valid C-RP addresses and the range of multicast group addresses that a C-RP
serves are specified. When receiving an Advertisement message, the router checks the C-RP
address and the addresses of the groups that the C-RP serves in the message. The C-RP
address and the group addresses are added to the RP-Set only when they are in the valid
address range. This prevents C-RP spoofing.
{ advanced-acl-number | acl-name acl-name }: specifies the advanced ACL. The ACL defines
the filtering policy for the C-RP address range and the address range of the groups that a C-RP
serves.
----End
NOTE
Pre-configuration Tasks
Before configuring a BSR administrative domain, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-SM Functions
Data Preparation
To configure a BSR administrative domain, you need the following data.
No. Data
1 Priority and hash mask length for electing a BSR in a BSR domain
2 Priority and hash mask length of electing the global domain BSR
Context
Do as follows on all routers in the PIM-SM network:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on all routers at the boundary of a BSR administrative domain:
NOTE
The routers outside the BSR administrative domain cannot forward the multicast packets of the BSR
administrative domain.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
multicast boundary group-address { mask | mask-length }
The BSR administrative domain boundary is configured. Multicast packets that belong to the
BSR administrative domain cannot traverse the boundary.
----End
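# For example, to keep the administratively scoped groups 239.0.0.0/8 inside the domain on a
boundary interface (the interface number and group range are illustrative):
[RouterA] interface pos 1/0/0
[RouterA-Pos1/0/0] multicast boundary 239.0.0.0 8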
Context
Do as follows on all C-BSRs:
NOTE
Procedure
l Configuration in a BSR Administrative Domain
1. Run:
system-view
----End
Procedure
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] bsr-info command
to check the BSR in a PIM-SM domain.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] rp-info [ group-
address ] command to check the RP in a PIM-SM domain.
----End
Applicable Environment
The configuration in this section is applicable to both the ASM model and the SSM model.
The PIM routers send Hello messages to each other to establish the neighbor relationship,
negotiate the control parameters, and elect a DR.
The router can work normally by default. The NE80E/40E allows the users to adjust the
parameters as required.
NOTE
Pre-configuration Tasks
Before configuring control parameters for establishing the neighbor relationship, complete the
following tasks:
Data Preparation
To adjust the control parameters for establishing the neighbor relationship, you need the
following data.
No. Data
5 DR switchover delay, that is, the period during which the original entries are still
valid when the interface changes from a DR to a non-DR.
Context
Do as follows on the PIM-SM router.
NOTE
Procedure
l Global Configuration
1. Run:
system-view
If no Hello message is received after the interval expires, the neighbor is considered
unreachable.
l Configuration on an Interface
1. Run:
system-view
This can prevent the conflict of Hello messages sent by multiple PIM routers at the
same time.
5. Run:
pim hello-option holdtime interval
If no Hello message is received after the interval expires, the neighbor is considered
unreachable.
6. Run:
pim require-genid
The interface is configured to require the Generation ID option in received Hello messages;
Hello messages without the Generation ID option are rejected.
By default, the router also processes Hello messages without the Generation ID option.
----End
Context
Do as follows on the PIM-SM router:
NOTE
Procedure
l Global Configuration
1. Run:
system-view
On a shared network segment where all PIM routers support the DR priority, the
interface with the highest priority acts as the DR. In the case of the same priority, the
interface with the largest IP address acts as the DR. If at least one PIM router
does not support the DR priority, the interface with the largest IP address acts as the
DR.
l Configuration on an Interface
1. Run:
system-view
Context
Do as follows on the PIM-SM router:
NOTE
Procedure
l Global configuration
1. Run:
system-view
NOTE
The function of tracking downstream neighbors cannot be implemented unless all the PIM
routers in the shared network segment are enabled with this function.
l Configuration on an interface
1. Run:
system-view
After this function is enabled, information about a downstream neighbor that has
sent a Join message and whose Join state has not timed out is recorded.
NOTE
The function of tracking downstream neighbors cannot be implemented unless all PIM
routers in the shared network segment are enabled with this function.
----End
Context
To prevent a router from participating in PIM or from becoming the DR, filter PIM
neighbors.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
pim neighbor-policy { basic-acl-number | acl-name acl-name }
An interface sets up neighbor relationships only with the addresses matching the filtering rules
and deletes the neighbors that do not match the filtering rules.
NOTE
When configuring the PIM neighbor filtering function on the interface, you must also configure the
neighbor filtering function correspondingly on the router that sets up the neighbor relationship with the
interface.
----End
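# For example, the following sketch accepts only the neighbor 192.168.1.2 on the interface
(the ACL number, neighbor address, and interface are illustrative):
[RouterA] acl number 2001
[RouterA-acl-basic-2001] rule permit source 192.168.1.2 0
[RouterA-acl-basic-2001] quit
[RouterA] interface pos 1/0/0
[RouterA-Pos1/0/0] pim neighbor-policy 2001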
Procedure
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] interface
[ interface-type interface-number | up | down ] [ verbose ] command to check PIM on an
interface.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] neighbor
[ neighbor-address | interface interface-type interface-number | verbose ] * command to
check a PIM neighbor.
----End
Applicable Environment
This section describes how to configure the control parameters of the source registering through
commands.
In a PIM-SM network, the DR directly connected to the source S encapsulates multicast data in
a Register message and sends it to the RP in unicast mode. The RP then decapsulates the message,
and forwards it along the RPT.
After the SPT switchover on the RP is complete, the multicast data reaches the RP along the
source tree in multicast mode. The RP sends a Register-Stop message to the DR at the source
side. The DR stops sending Register messages and enters the suppressed state. During the
register suppression, the DR periodically sends null Register messages to inform the RP that
the source is still active. After the register suppression times out, the DR starts to send
Register messages again.
The router can work normally under the control of default values. The NE80E/40E allows the
users to adjust the parameters as required.
NOTE
Pre-configuration Tasks
Before adjusting control parameters for source registering, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-SM Functions
Data Preparation
To adjust control parameters for source registering, you need the following data.
No. Data
Context
Do as follows on all routers that may become an RP:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on all the routers that may become the DR at the multicast source side:
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim [ vpn-instance vpn-instance-name ]
Step 3 Run:
register-suppression-timeout interval
Step 4 Run:
probe-interval interval
----End
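# For example, the following sketch adjusts the register suppression parameters (the values
are illustrative; the probe interval should be shorter than the suppression timeout):
[RouterA] pim
[RouterA-pim] register-suppression-timeout 90
[RouterA-pim] probe-interval 10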
Procedure
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] interface
[ interface-type interface-number | up | down ] [ verbose ] command to check PIM on an
interface.
----End
NOTE
Pre-configuration Tasks
Before adjusting control parameters for forwarding, complete the following tasks:
l Configuring a certain unicast routing protocol
l Configuring Basic PIM-SM Functions
Data Preparation
To adjust control parameters for forwarding, you need the following data.
No. Data
5 Number or name of the ACL used to filter join information in the Join/Prune messages
6 Whether neighbor check needs to be performed after Join/Prune message and Assert
messages are sent or received
Context
Do as follows on the PIM-SM router:
NOTE
Procedure
l Global Configuration
1. Run:
system-view
The interval for holding the forwarding state of a downstream interface is set.
l Configuration on an Interface
1. Run:
system-view
The interval for holding the forwarding state of a downstream interface is set.
5. Run:
pim require-genid
The interface is configured to require the Generation ID option in received Hello messages;
Hello messages without the Generation ID option are rejected.
By default, the router also processes Hello messages without the Generation ID option.
The change of the Generation ID in the Hello message received from an upstream
neighbor indicates that the upstream neighbor is lost or the status of the upstream
neighbor has changed. The router immediately sends the Join/Prune message to the
upstream router to refresh the status.
----End
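# Assuming the holdtime join-prune command sets the interval for holding the forwarding
state of a downstream interface, a global sketch (the value is illustrative) is as follows:
[RouterA] pim
[RouterA-pim] holdtime join-prune 210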
Context
Do as follows on the PIM-SM router:
NOTE
Procedure
l Global Configuration
1. Run:
system-view
Context
A Join/Prune message received by an interface may contain both join information and prune
information. You can configure the router to filter join information based on ACL rules. The
router then creates PIM entries only for the join information matching the ACL rules, which
prevents unauthorized users from joining groups.
Do as follows on the router enabled with PIM-SM:
Procedure
Step 1 Run:
system-view
----End
Context
By default, checking whether the Join/Prune message and Assert messages are sent to or received
from a PIM neighbor is not enabled.
If PIM neighbor checking is required, it is recommended to configure the neighbor checking
function on the devices connected with user devices rather than on the internal devices of the
network. Then, the router checks whether the Join/Prune and Assert messages are sent to or
received from a PIM neighbor. If not, the router drops the messages.
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim [ vpn-instance vpn-instance-name ]
Step 3 Run:
neighbor-check { receive | send }
You can specify both receive and send to enable the PIM neighbor check function for the
received and sent Join/Prune and Assert messages.
----End
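# For example, to check both sent and received Join/Prune and Assert messages on a device
connected to user devices:
[RouterA] pim
[RouterA-pim] neighbor-check receive
[RouterA-pim] neighbor-check send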
Procedure
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] interface
[ interface-type interface-number | up | down ] [ verbose ] command to check PIM on an
interface.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] control-message
counters interface interface-type interface-number [ message-type { assert | graft | graft-
ack | hello | join-prune | state-refresh | bsr } ] command to check the number of sent or
received PIM control messages.
l Run the following commands to check the PIM routing table.
– display pim { vpn-instance vpn-instance-name | all-instance } routing-table [ group-
address [ mask { group-mask-length | group-mask } ] | source-address [ mask { source-
mask-length | source-mask } ] | incoming-interface { interface-type interface-
number | register } | outgoing-interface { include | exclude | match } { interface-
type interface-number | register | none } | mode { dm | sm | ssm } | flags flag-value |
fsm ] * [ outgoing-interface-number [ number ] ]
– display pim routing-table [ group-address [ mask { group-mask-length | group-
mask } ] | source-address [ mask { source-mask-length | source-mask } ] | incoming-
interface { interface-type interface-number | register } | outgoing-interface
{ include | exclude | match } { interface-type interface-number | vpn-instance vpn-
instance-name | register | none } | mode { dm | sm | ssm } | flags flag-value | fsm ] *
[ outgoing-interface-number [ number ] ]
– display pim [ vpn-instance vpn-instance-name | all-instance ] routing-table brief
[ group-address [ mask { group-mask-length | group-mask } ] | source-address
NOTE
Pre-configuration Tasks
Before adjusting control parameters for assert, complete the following tasks:
l Configuring a certain unicast routing protocol
l Configuring Basic PIM-SM Functions
Data Preparation
To adjust control parameters for assert, you need the following data.
No. Data
Context
Do as follows on all the routers in the PIM-SM domain:
NOTE
Procedure
l Global Configuration
1. Run:
system-view
ack | hello | join-prune | state-refresh | bsr } ] command to check the number of sent or
received PIM control messages.
l Run the following commands to check the PIM routing table.
– display pim { vpn-instance vpn-instance-name | all-instance } routing-table [ group-
address [ mask { group-mask-length | group-mask } ] | source-address [ mask { source-
mask-length | source-mask } ] | incoming-interface { interface-type interface-
number | register } | outgoing-interface { include | exclude | match } { interface-
type interface-number | register | none } | mode { dm | sm | ssm } | flags flag-value |
fsm ] * [ outgoing-interface-number [ number ] ]
– display pim routing-table [ group-address [ mask { group-mask-length | group-
mask } ] | source-address [ mask { source-mask-length | source-mask } ] | incoming-
interface { interface-type interface-number | register } | outgoing-interface
{ include | exclude | match } { interface-type interface-number | vpn-instance vpn-
instance-name | register | none } | mode { dm | sm | ssm } | flags flag-value | fsm ] *
[ outgoing-interface-number [ number ] ]
– display pim [ vpn-instance vpn-instance-name | all-instance ] routing-table brief
[ group-address [ mask { group-mask-length | group-mask } ] | source-address
[ mask { source-mask-length | source-mask } ] | incoming-interface { interface-type
interface-number | register } ] *
----End
Applicable Environment
This section describes how to configure the control parameters of the SPT switchover through
commands.
In a PIM-SM network, each multicast group corresponds to an RPT. At first, all multicast sources
encapsulate data in Register messages, and send them to the RP in the unicast mode. The RP
decapsulates the messages and forwards them along the RPT.
Forwarding multicast data by using the RPT has the following defects:
l The DR at the source side and the RP need to encapsulate and decapsulate packets.
l The forwarding path may not be the shortest path from the source to receivers.
l Large-volume data flow increases the load of the RP, and may cause a fault.
To solve these problems, PIM-SM allows the RPT-to-SPT switchover to be triggered in either of the following ways:
l SPT switchover triggered by the RP: The RP sends a Join message to the source, and
establishes a multicast route along the shortest path from the source to the RP.
Subsequent packets are forwarded along this path.
l SPT switchover triggered by the DR at the member side: The DR at the member side checks
the forwarding rate of multicast data. If the DR finds that the rate exceeds the threshold,
the DR triggers the SPT switchover immediately. The DR sends a Join message to the source,
and establishes a multicast route along the shortest path from the source to the DR.
Subsequent packets are forwarded along this path.
Routers work normally with the default values of these parameters. The NE80E/40E allows users
to adjust the parameters as required.
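As a sketch of such an adjustment, the rate threshold from the data preparation table can be set in the PIM view. The 1024 kbit/s value is illustrative, and the assumed command is spt-switch-threshold:

```text
<HUAWEI> system-view
[HUAWEI] pim
[HUAWEI-pim] spt-switch-threshold 1024
```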
NOTE
Pre-configuration Tasks
Before configuring the SPT switchover, complete the following tasks:
Data Preparation
To configure the SPT switchover, you need the following data.
No. Data
1 Rate threshold at which a leaf PIM router switches packets from the RPT to the SPT
2 Group filtering policy and sequence policy for the switchover from the RPT to the
SPT
3 Interval for checking the rate threshold of multicast data before the RPT-to-SPT
switchover
Context
Do as follows on all the routers that may become a DR at the member side:
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim [ vpn-instance vpn-instance-name ]
The interval for checking the forwarding rate of multicast data is set.
----End
Pre-configuration Tasks
Before configuring PIM BFD, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-SM Functions
Data Preparation
To configure PIM BFD, you need the following data.
No. Data
1 Minimum intervals for sending and receiving BFD detection messages, and local
detection multiple
Context
NOTE
This function is applicable to NBMA interfaces and broadcast interfaces rather than MTunnel interfaces.
Procedure
Step 1 Run:
system-view
----End
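The remaining steps of this procedure are not shown above. A sketch consistent with the interface-level PIM BFD example later in this chapter, with GigabitEthernet2/0/0 as a placeholder interface:

```text
<HUAWEI> system-view
[HUAWEI] bfd
[HUAWEI-bfd] quit
[HUAWEI] interface gigabitethernet 2/0/0
[HUAWEI-GigabitEthernet2/0/0] pim bfd enable
[HUAWEI-GigabitEthernet2/0/0] pim bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
```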
Context
Do as follows on two PIM routers that set up the neighbor relationship:
Procedure
Step 1 Run:
system-view
----End
----End
Applicable Environment
In some multicast applications, the router may need to perform active/standby switchover. After
active/standby switchover, the new active main control board deletes the forwarding entries on
the interface board and re-learns the PIM routing table and multicast routing table. During this
process, multicast traffic is interrupted.
In the PIM-SM/SSM network, PIM Graceful Restart (GR) can be applied to the router with dual
main control boards to ensure normal multicast traffic forwarding during active/standby
switchover.
The active main control board of the router backs up PIM routing entries and Join/Prune
information to be sent upstream to the standby main control board. The interface board maintains
forwarding entries. Therefore, after the active/standby switchover, the router can quickly
send Join messages upstream to maintain the Join state of the upstream. In addition, PIM
sends Hello messages carrying a new Generation ID to all routers enabled with PIM-SM.
When a downstream router finds that the Generation ID of its neighbor has changed, it sends a
Join/Prune message to the neighbor to re-create routing entries, thereby ensuring non-stop
forwarding of multicast data on the forwarding plane.
If a dynamic RP is used on the network, after receiving a Hello message with a changed
Generation ID, the DR or candidate DR unicasts a BSM message to the router performing the
active/standby switchover, and that router learns and restores RP information from the received
BSM message. If the router has not learnt any RP information from BSM messages, it obtains the
RP information from the Join/Prune messages received from downstream routers and re-creates
the multicast routing table.
NOTE
Pre-configuration Tasks
Before enabling PIM GR, complete the following task:
Data Preparation
To enable PIM GR, you need the following data.
No. Data
1 Unicast GR period
2 PIM GR period
Context
Do as follows on the router enabled with PIM-SM:
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim [ vpn-instance vpn-instance-name ]
Step 3 Run:
graceful-restart
PIM GR is enabled.
----End
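Combining the steps with the GR period from the data preparation table gives the following sketch. The period keyword and the 120-second value are assumptions, not confirmed by the text above:

```text
<HUAWEI> system-view
[HUAWEI] pim
[HUAWEI-pim] graceful-restart
[HUAWEI-pim] graceful-restart period 120
```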
Procedure
Step 1 Run the following commands to check the PIM routing table.
l display pim { vpn-instance vpn-instance-name | all-instance } routing-table [ group-
address [ mask { group-mask-length | group-mask } ] | source-address [ mask { source-
mask-length | source-mask } ] | incoming-interface { interface-type interface-number |
register } | outgoing-interface { include | exclude | match } { interface-type interface-
number | register | none } | mode { dm | sm | ssm } | flags flag-value | fsm ] *
[ outgoing-interface-number [ number ] ]
----End
CAUTION
If PIM silent is enabled on the interface connected to a router, the PIM neighbor relationship
cannot be set up and a multicast fault may occur.
If the host network segment is connected to multiple routers and PIM silent is enabled on multiple
interfaces, the interfaces become static DRs. Therefore, multiple DRs exist in this network
segment, and a fault occurs.
Pre-configuration Tasks
Before configuring PIM silent, complete the following tasks:
Data Preparation
To configure PIM silent, you need the following data.
No. Data
Context
Do as follows on the interface connected to the host network segment:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
pim silent
After PIM silent is enabled, Hello packet attacks from malicious hosts are effectively prevented
and the router is protected.
----End
Prerequisite
All the configurations of PIM silent are complete.
Procedure
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] interface
[ interface-type interface-number | up | down ] [ verbose ] command to check PIM on an
interface.
----End
Example
Run the display pim interface verbose command, and you can find that the configuration is
complete.
<RouterA> display pim interface verbose
VPN-Instance: public net
Interface: GigabitEthernet6/1/1, 2.2.2.2
PIM version: 2
PIM mode: Sparse
PIM DR: 2.2.2.2 (local)
PIM DR Priority (configured): 1
PIM neighbor count: 0
PIM hello interval: 30 s
PIM LAN delay (negotiated): 500 ms
PIM LAN delay (configured): 500 ms
PIM hello override interval (negotiated): 2500 ms
PIM hello override interval (configured): 2500 ms
PIM Silent: enabled
PIM neighbor tracking (negotiated): disabled
PIM neighbor tracking (configured): disabled
PIM generation ID: 0X2649E5DA
PIM require-genid: disabled
PIM hello hold interval: 105 s
PIM hello assert interval: 180 s
PIM triggered hello delay: 5 s
PIM J/P interval: 60 s
PIM J/P hold interval: 210 s
PIM state-refresh capability on link: non-capable
PIM BSR domain border: disabled
PIM dr-switch-delay timer : not configured
Number of routers on link not using DR priority: 0
Number of routers on link not using LAN delay: 0
Number of routers on link not using neighbor tracking: 1
Context
CAUTION
The statistics of PIM control messages on an interface cannot be restored after being cleared.
Therefore, confirm the action before you run the command.
Procedure
l Run the reset pim [ vpn-instance vpn-instance-name | all-instance ] control-message
counters [ interface interface-type interface-number ] command in the user view to clear
the statistics of PIM control messages on an interface.
----End
Context
CAUTION
Clearing PIM status of the downstream interfaces may trigger the sending of corresponding Join/
Prune messages, which affects multicast services.
The following command clears the join information of unauthorized users, and clears the PIM
status of a specified interface in a specified entry, such as the PIM Join/Prune status and Assert
status.
The command cannot be used to clear the IGMP or static group join status on a specified
interface.
Procedure
Step 1 After confirming that PIM status of the specified downstream interfaces of the specified PIM
entry need to be cleared, run the reset pim [ vpn-instance vpn-instance-name ] routing-table
group group-address mask { group-mask-length | group-mask } source source-address
interface interface-type interface-number command in the user view.
----End
Context
In routine maintenance, you can run the following commands in any view to check the running
status of PIM-SM.
Procedure
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] claimed-route
[ source-address ] command in any view to check the unicast routes used by PIM.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] bfd session
[ interface interface-type interface-number | neighbor neighbor-address ] * command in
any view to check information about a PIM BFD session.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] bsr-info command
in any view to check information about the BSR in a PIM-SM domain.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] control-message
counters interface interface-type interface-number [ message-type { assert | graft | graft-
ack | hello | join-prune | state-refresh | bsr } ] command in any view to check the number
of sent or received PIM control messages.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] interface
[ interface-type interface-number | up | down ] [ verbose ] command in any view to check
PIM on an interface.
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] neighbor
[ neighbor-address | interface interface-type interface-number | verbose ] * command in
any view to check PIM neighbors.
l Run the following commands in any view to check the PIM routing table.
– display pim { vpn-instance vpn-instance-name | all-instance } routing-table [ group-
address [ mask { group-mask-length | group-mask } ] | source-address [ mask { source-
mask-length | source-mask } ] | incoming-interface { interface-type interface-
number | register } | outgoing-interface { include | exclude | match } { interface-
type interface-number | register | none } | mode { dm | sm | ssm } | flags flag-value |
fsm ] * [ outgoing-interface-number [ number ] ]
– display pim routing-table [ group-address [ mask { group-mask-length | group-
mask } ] | source-address [ mask { source-mask-length | source-mask } ] | incoming-
interface { interface-type interface-number | register } | outgoing-interface
{ include | exclude | match } { interface-type interface-number | vpn-instance vpn-
instance-name | register | none } | mode { dm | sm | ssm } | flags flag-value | fsm ] *
[ outgoing-interface-number [ number ] ]
– display pim [ vpn-instance vpn-instance-name | all-instance ] routing-table brief
[ group-address [ mask { group-mask-length | group-mask } ] | source-address
[ mask { source-mask-length | source-mask } ] | incoming-interface { interface-type
interface-number | register } ] *
l Run the display pim [ vpn-instance vpn-instance-name | all-instance ] rp-info [ group-
address ] command in any view to check information about the RP to which a multicast
group corresponds.
----End
Networking Requirements
As shown in Figure 5-2, multicast is deployed in the Internet Service Provider (ISP) network.
An integrated Interior Gateway Protocol (IGP) is deployed in the network; unicast routing works
normally, and the network is connected to the Internet. It is required to properly configure the
routers in the network so that hosts can receive Video On Demand (VOD) information in
multicast mode.
(Figure 5-2 shows the PIM-SM network: the source connects through Router E and Routers A
through D to leaf networks with receivers Host A and Host B. Interface addresses recoverable
from the figure: GE3/0/0 10.110.5.1/24; Router E POS1/0/0 192.168.3.2/24, POS2/0/0
192.168.2.2/24, POS3/0/0 192.168.9.2/24, POS4/0/0 192.168.4.1/24.)
Configuration Roadmap
The ISP network is connected to the Internet. To expand services, PIM-SM is adopted to configure
multicast functions, and the ASM and SSM models are used to provide multicast services.
1. Configure an IP address for each interface on routers and a unicast routing protocol. PIM,
an intra-domain multicast routing protocol, depends on unicast routing protocols. The
multicast routing protocol can work normally only when unicast routing protocols work
normally.
2. Enable the multicast function on all the routers providing multicast services. PIM-SM can
be configured only after multicast is enabled.
3. Enable PIM-SM on all interfaces of the multicast routers. Other PIM-SM functions can be
configured only after PIM-SM is enabled.
NOTE
If IGMP needs to be configured on this interface, PIM-SM must be enabled before IGMP is enabled.
The configuration order cannot be reversed; otherwise, the configuration of PIM-SM fails.
4. Enable IGMP on the interface connected to user hosts. A receiver can join and leave a
multicast group freely by sending IGMP messages. Leaf routers maintain the member
relationship through IGMP.
5. Enable PIM silent on the router interface connected to hosts to prevent malicious hosts
from attacking the router by simulating and sending PIM Hello packets, thereby ensuring
the security of multicast routers.
NOTE
PIM silent is applicable only to the router interface directly connected to the host network segment
that is connected only to this router.
6. Configure an RP. The RP is a root node of an RPT tree in a PIM-SM network. It is
recommended to configure the RP on a router through which many multicast flows pass,
such as Router E in the figure.
NOTE
l After creating an (*, G) entry according to the new multicast member relationship, the DR on
the user side sends Join/Prune messages to the RP, updating the shared tree.
l When a multicast data source starts to send data to a group, the DR unicasts a Register message
to the RP. After receiving the Register message, the RP decapsulates it and forwards the data to
other multicast members along the shared tree. At the same time, the RP sends a Register-Stop
message to the DR on the multicast source side. After the DR receives the Register-Stop
message, the traffic can be switched from the RPT to the SPT.
7. (Optional) Configure the BSR boundary on the interface connected to the Internet.
Bootstrap messages cannot pass through the BSR boundary; therefore, the BSR serves this
PIM-SM domain only. In this manner, multicast services can be controlled effectively.
8. (Optional) Configure the SSM group address range on each router. Ensure that multicast
routers in the PIM-SM domain provide services only for multicast groups in the SSM group
address range. In this manner, multicast services can be controlled effectively.
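As a sketch of item 8, the SSM range restriction maps to a basic ACL referenced by an ssm-policy, consistent with the configuration files later in this example:

```text
[RouterA] acl number 2000
[RouterA-acl-basic-2000] rule permit source 232.1.1.0 0.0.0.255
[RouterA-acl-basic-2000] quit
[RouterA] pim
[RouterA-pim] ssm-policy 2000
```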
Data Preparation
To complete the configuration, you need the following data:
l Address of multicast group G is 225.1.1.1.
l Source address is 10.110.5.100/24.
l Version number of IGMP running between the interface and hosts is 3.
l SSM group address range is 232.1.1.0/24.
Procedure
Step 1 Configure an IP address and a unicast routing protocol on each interface.
Step 2 Enable multicast on all routers and PIM-SM on all interfaces.
# Enable multicast on all routers and PIM-SM on all interfaces. The configurations of Router
B, Router C, Router D, and Router E are the same as the configuration of Router A, and are not
mentioned here.
[RouterA] multicast routing-enable
[RouterA] interface gigabitethernet 2/0/0
[RouterA-GigabitEthernet2/0/0] pim sm
[RouterA-GigabitEthernet2/0/0] quit
[RouterA] interface pos 1/0/0
[RouterA-Pos1/0/0] pim sm
[RouterA-Pos1/0/0] quit
[RouterA] interface pos 3/0/0
[RouterA-Pos3/0/0] pim sm
[RouterA-Pos3/0/0] quit
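The roadmap also calls for IGMP on the host-side interfaces. A minimal sketch for Router A, assuming GE2/0/0 faces the receivers, consistent with the configuration files later in this example:

```text
[RouterA] interface gigabitethernet 2/0/0
[RouterA-GigabitEthernet2/0/0] igmp enable
[RouterA-GigabitEthernet2/0/0] igmp version 3
[RouterA-GigabitEthernet2/0/0] quit
```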
l RPs are classified into two types: the static RP and the dynamic RP. You can configure both the
static RP and the dynamic RP at the same time, or configure only one of them.
l When the static RP and the dynamic RP are configured simultaneously, you can adjust parameters to
specify the preferred RP.
This example shows how to configure both a static RP and a dynamic RP, and how to set
parameters so that the dynamic RP is preferred and the static RP serves as the backup RP.
# Configure the dynamic RP on one or more routers in the PIM-SM domain. Set the service
range of the RP advertisement and configure the C-BSR and the C-RP on Router E.
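The dynamic RP commands themselves are not shown here. A hypothetical sketch for Router E, assuming POS3/0/0 (whose address 192.168.9.2 matches the elected BSR shown below) serves as both C-BSR and C-RP, and assuming ACL 2001 defines the advertised group range 225.1.1.0/24:

```text
[RouterE] acl number 2001
[RouterE-acl-basic-2001] rule permit source 225.1.1.0 0.0.0.255
[RouterE-acl-basic-2001] quit
[RouterE] pim
[RouterE-pim] c-bsr pos 3/0/0
[RouterE-pim] c-rp pos 3/0/0 group-policy 2001
```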
# Configure the static RP on all multicast routers. Configure Router A, Router B, Router C,
Router D, and Router E. The configurations on Router B, Router C, Router D, and Router E are
similar to those on Router A. The detailed configurations are not mentioned here.
NOTE
If preferred is set in the static-rp x.x.x.x command, the static RP is preferred as the RP in the PIM-SM
domain.
[RouterA] pim
[RouterA-pim] static-rp 192.168.2.2
Step 6 On Router D, configure the BSR boundary on the interface connected to the Internet.
[RouterD] interface pos 4/0/0
[RouterD-Pos4/0/0] pim bsr-boundary
[RouterD-Pos4/0/0] quit
# Run the display pim bsr-info command to view the BSR election on a router. For example,
the BSR information on Router A and Router E (including the C-BSR information on Router
E) is as follows:
<RouterA> display pim bsr-info
VPN-Instance: public net
Elected AdminScoped BSR Count: 0
Elected BSR Address: 192.168.9.2
Priority: 0
Hash mask length: 30
State: Accept Preferred
Scope: Not scoped
Uptime: 01:40:40
Expires: 00:01:42
C-RP Count: 1
<RouterE> display pim bsr-info
VPN-Instance: public net
Elected AdminScoped BSR Count: 0
Elected BSR Address: 192.168.9.2
Priority: 0
# Run the display pim rp-info command to view the RP information obtained by a router. For
example, the RP information on Router A is as follows:
<RouterA> display pim rp-info
VPN-Instance: public net
PIM-SM BSR RP information:
Group/MaskLen: 225.1.1.0/24
RP: 192.168.9.2
Priority: 0
Uptime: 00:51:45
Expires: 00:02:22
PIM SM static RP information:
Static RP: 192.168.2.2
# Run the display pim routing-table command to view the PIM multicast routing table on a
router. Host A needs to receive the information about group 225.1.1.1/24, and Host B needs to
receive the information sent by source 10.110.5.100/24 to group 232.1.1.1/24. The display is as
follows:
<RouterA> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
RP: 192.168.9.2
Protocol: pim-sm, Flag: WC
UpTime: 00:13:46
Upstream interface: Pos1/0/0,
Upstream neighbor: 192.168.9.2
RPF neighbor: 192.168.9.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0,
Protocol: igmp, UpTime: 00:13:46, Expires:-
(10.110.5.100, 225.1.1.1)
RP: 192.168.9.2
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:00:42
Upstream interface: Pos3/0/0
Upstream neighbor: 192.168.1.2
RPF neighbor: 192.168.1.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-sm, UpTime: 00:00:42, Expires:-
<RouterD> display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 2 (S, G) entries
(10.110.5.100, 225.1.1.1)
RP: 192.168.9.2
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:00:42
Upstream interface: GigabitEthernet3/0/0
Upstream neighbor: 10.110.5.100
RPF neighbor: 10.110.5.100
Downstream interface(s) information:
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
acl number 2000
rule 5 permit source 232.1.1.0 0.0.0.255
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.1.1 255.255.255.0
pim sm
igmp enable
igmp version 3
pim silent
#
interface Pos3/0/0
link-protocol ppp
undo shutdown
ip address 192.168.1.1 255.255.255.0
pim sm
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 192.168.9.1 255.255.255.0
pim sm
#
ospf 1
area 0.0.0.0
network 10.110.1.0 0.0.0.255
network 192.168.1.0 0.0.0.255
network 192.168.9.0 0.0.0.255
#
pim
static-rp 192.168.2.2
ssm-policy 2000
#
return
l Configuration file of Router B
#
sysname RouterB
#
multicast routing-enable
#
acl number 2000
rule 5 permit source 232.1.1.0 0.0.0.255
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.2.1 255.255.255.0
pim sm
igmp enable
igmp version 3
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 192.168.2.1 255.255.255.0
pim sm
#
ospf 1
area 0.0.0.0
network 10.110.2.0 0.0.0.255
network 192.168.2.0 0.0.0.255
#
pim
static-rp 192.168.2.2
ssm-policy 2000
#
return
l Configuration file of Router C
#
sysname RouterC
#
multicast routing-enable
#
acl number 2000
rule 5 permit source 232.1.1.0 0.0.0.255
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.2.2 255.255.255.0
pim sm
igmp enable
igmp version 3
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.3.1 255.255.255.0
pim sm
#
ospf 1
area 0.0.0.0
network 10.110.2.0 0.0.0.255
network 192.168.3.0 0.0.0.255
#
pim
static-rp 192.168.2.2
ssm-policy 2000
#
return
Networking Requirements
Receivers can receive Video On Demand (VOD) information in multicast mode. A single-BSR
administrative domain is adopted in the entire PIM-SM network. By default, the DR at
the receiver side and the RP perform the SPT switchover immediately after receiving the first
multicast data packet, and choose the optimal path to receive information from the source. If a
receiver should perform the SPT switchover only after the traffic reaches a threshold, you need
to configure the SPT switchover.
Figure 5-3 Networking diagram for performing SPT switchover in a PIM-SM domain
(The figure shows the source connected to Router B through GE3/0/0; Router A, Router B, and
Router C are interconnected through POS interfaces in the PIM-SM domain, and Router C
connects through GE2/0/0 to the leaf network where receiver Host A resides.)
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure an IP address and a unicast routing protocol on each interface.
# Based on Figure 5-3, configure an IP address and a mask for each interface, and interconnect
the routers through OSPF so that Router A, Router B, and Router C can communicate at the
network layer and dynamically update routes through the unicast routing protocol. The
configuration details are not provided here.
Step 2 Enable multicast on each router, PIM-SM on each interface, and IGMP on the interface at the
host side.
# Enable multicast on each router, PIM-SM on each interface, and IGMP on the interfaces
through which Router C is connected to the leaf network. The configurations of Router A and
Router B are the same as the configuration of Router C, and are not mentioned here.
[RouterC] multicast routing-enable
[RouterC] interface gigabitethernet 2/0/0
[RouterC-GigabitEthernet2/0/0] pim sm
[RouterC-GigabitEthernet2/0/0] igmp enable
[RouterC-GigabitEthernet2/0/0] igmp version 2
[RouterC-GigabitEthernet2/0/0] quit
[RouterC] interface pos 2/0/0
[RouterC-Pos2/0/0] pim sm
[RouterC-Pos2/0/0] quit
[RouterC] interface pos 1/0/0
[RouterC-Pos1/0/0] pim sm
[RouterC-Pos1/0/0] quit
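The switchover threshold itself, 1024 kbit/s per the verification output below, would be set in the PIM view of Router C. A sketch, assuming the spt-switch-threshold command:

```text
[RouterC] pim
[RouterC-pim] spt-switch-threshold 1024
[RouterC-pim] quit
```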
UpTime: 00:13:46
Upstream interface: Pos1/0/0,
Upstream neighbor: 192.168.1.1
RPF neighbor: 192.168.1.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0,
Protocol: igmp, UpTime: 00:13:46, Expires:-
(10.110.5.100, 225.1.1.1)
RP: 192.168.1.1
Protocol: pim-sm, Flag: ACT
UpTime: 00:00:42
Upstream interface: Pos1/0/0
Upstream neighbor: 192.168.1.1
RPF neighbor: 192.168.1.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-sm, UpTime: 00:00:42, Expires:-
# When the rate is greater than 1024 kbit/s, run the display pim routing-table command to view
the PIM multicast routing table on the router. You can find that the upstream neighbor is Router
B. The display is as follows:
<RouterC> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
RP: 192.168.1.1
Protocol: pim-sm, Flag: WC
UpTime: 00:13:46
Upstream interface: Pos2/0/0,
Upstream neighbor: 192.168.2.2
RPF neighbor: 192.168.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0,
Protocol: igmp, UpTime: 00:13:46, Expires:-
(10.110.5.100, 225.1.1.1)
RP: 192.168.1.1
Protocol: pim-sm, Flag: RPT SPT ACT
UpTime: 00:00:42
Upstream interface: Pos2/0/0
Upstream neighbor: 192.168.2.2
RPF neighbor: 192.168.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-sm, UpTime: 00:00:42, Expires:-
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 192.168.1.1 255.255.255.0
pim sm
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.3.1 255.255.255.0
pim sm
#
pim
static-rp 192.168.1.1
#
ospf 1
area 0.0.0.0
network 192.168.1.0 0.0.0.255
network 192.168.3.0 0.0.0.255
#
return
l Configuration file of Router B
#
sysname RouterB
#
multicast routing-enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.110.5.1 255.255.255.0
pim sm
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 192.168.2.1 255.255.255.0
pim sm
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.3.2 255.255.255.0
pim sm
#
pim
static-rp 192.168.1.1
#
ospf 1
area 0.0.0.0
network 10.110.5.0 0.0.0.255
network 192.168.2.0 0.0.0.255
network 192.168.3.0 0.0.0.255
#
return
l Configuration file of Router C
#
sysname RouterC
#
multicast routing-enable
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 192.168.1.2 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.2.1 255.255.255.0
pim sm
igmp enable
igmp version 2
#
interface Pos3/0/0
link-protocol ppp
undo shutdown
Networking Requirements
In the multicast network shown in Figure 5-4, PIM-SM is run on routers, hosts normally receive
the VOD information from the multicast source, and Router B and Router C are connected to
the host network segment. When the DR changes, other routers in the network segment can
detect the change of the DR quickly.
Set up the BFD session on the host network segment to quickly respond to the changes of the
DR, and configure the delay of the DR switchover. In this case, when a router is added to the
network segment and may become a DR, the multicast routing table of the original DR is reserved
till the entries of the new DR are created. The packet loss due to the delay for creating multicast
entries is thus prevented.
NOTE
After the delay of the PIM DR switchover is set, downstream receivers may receive two copies of the same
data during the DR switchover, which triggers the assert mechanism. If you do not want the assert
mechanism to be triggered, do not configure the DR switchover delay.
Figure 5-4 Networking diagram of applying PIM BFD on a multi-router network segment
(The figure shows the source 10.1.7.1/24 connected through Router A to Router B and Router C;
Router B GE2/0/0 10.1.1.1/24 and Router C GE2/0/0 10.1.1.2/24 connect to the host network
segment with User1 and User2.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure PIM BFD on the interface that is connected to the host network segment.
2. Configure the PIM DR switching delay on the interface that is connected to the host network
segment.
Data Preparation
To complete the configuration, you need the following data:
l Parameters of a PIM BFD session
l PIM DR switching delay
Procedure
Step 1 Configure an IP address for each interface and a unicast routing protocol.
Step 2 Enable BFD globally and configure PIM BFD on an interface.
# Enable BFD globally on Router B and Router C, enable PIM BFD on the interface that is
connected to the host network segment, and configure PIM BFD parameters. Configuration
procedures of Router C are similar to those of Router B, and are not mentioned here.
[RouterB] bfd
[RouterB-bfd] quit
[RouterB] interface gigabitethernet 2/0/0
[RouterB-GigabitEthernet2/0/0] pim bfd enable
[RouterB-GigabitEthernet2/0/0] pim bfd min-tx-interval 100 min-rx-interval 100
detect-multiplier 3
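The roadmap's second item, the DR switchover delay, appears in Router B's configuration file below. The corresponding interface-view sketch, using the 20-second value from that file:

```text
[RouterB-GigabitEthernet2/0/0] pim timer dr-switch-delay 20
[RouterB-GigabitEthernet2/0/0] quit
```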
# Run the display pim bfd session command to display the BFD session on each router, and
you can check whether the BFD session on each router is set up.
<RouterB> display pim bfd session
VPN-Instance: public net
Total 1 BFD session Created
GigabitEthernet2/0/0 (10.1.1.1): Total 1 BFD session Created
Neighbor ActTx(ms) ActRx(ms) ActMulti Local/Remote State
10.1.1.2 100 100 3 8192/8192 Up
# Run the display pim routing-table command to view the PIM routing table. Router C acts as
the DR. (S, G) and (*, G) entries exist. The display is as follows:
<RouterC> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
RP: 10.1.5.2
Protocol: pim-sm, Flag: WC
UpTime: 00:13:46
Upstream interface: Pos1/0/0,
Upstream neighbor: 10.1.2.2
RPF neighbor: 10.1.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0,
Protocol: igmp, UpTime: 00:13:46, Expires:-
(10.1.7.1, 225.1.1.1)
RP: 10.1.5.2
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:00:42
Upstream interface: Pos1/0/0
Upstream neighbor: 10.1.2.2
RPF neighbor: 10.1.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-sm, UpTime: 00:00:42, Expires:-
----End
Configuration Files
Router A needs to be configured only with basic PIM-SM functions, which are not the focus of
this example. Therefore, the configuration file of Router A is not provided here.
The configuration file of Router B is as follows. The configuration file of Router C is similar to
that of Router B and is not mentioned here.
#
sysname RouterB
#
multicast routing-enable
#
bfd
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 10.1.2.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
pim sm
igmp enable
pim bfd enable
pim bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
pim timer dr-switch-delay 20
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.2.0 0.0.0.255
#
return
Networking Requirements
In multicast applications, if a device performs an active/standby switchover, the new main
control board deletes the multicast forwarding entries on the interface board and rebuilds the
PIM routing table and PIM forwarding table. During this process, multicast traffic to users is
interrupted.
Deploying PIM GR on an IPTV network protects both core devices and edge devices. When a
device on the IPTV network performs an active/standby switchover, the interface board can
maintain normal forwarding of multicast data, which increases the fault tolerance capability
of the devices on the network.
In the network shown in Figure 5-5, multicast services are deployed and PIM GR is configured
on Router C. While Router C forwards multicast data to the receiver, the active main control
board backs up the PIM routing entries and the Join/Prune messages to be sent to the upstream
device to the standby main control board. When Router C performs an active/standby
switchover, the interface board maintains the original forwarding entries, which ensures
smooth forwarding of multicast data. Therefore, the receiver can still receive multicast data
from the multicast source during the active/standby switchover.
Figure 5-5 Networking diagram for configuring PIM GR (the multicast source, Router A, Router B, Router C, and receiver Host A on a PIM-SM network; interface addresses are given in the configuration files)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IP addresses for the interfaces and a unicast routing protocol on each router.
2. Enable unicast GR on each router and set the unicast GR period.
3. Enable the multicast function, enable PIM-SM on the interfaces of the routers, and enable
IGMP on the interface connecting the router to the host.
4. Configure an RP. Configure the same static RP on all routers.
5. On Router C, enable PIM GR and set the GR period.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure IP addresses for the interfaces and a unicast routing protocol on each router.
# Based on Figure 5-5, configure IP addresses and masks for the interfaces on each router.
Configure OSPF as the unicast routing protocol running between routers to ensure IP
interworking between Router A, Router B, and Router C. The detailed configuration is not
mentioned here.
Step 2 Enable unicast GR on each router and set the unicast GR period.
# Enable unicast GR and set the GR period to 200 seconds on each router. The configurations
on Router A and Router B are similar to those on Router C. The detailed configuration procedures
are not mentioned here.
[RouterC] ospf 1
[RouterC-ospf-1] opaque-capability enable
[RouterC-ospf-1] graceful-restart
[RouterC-ospf-1] graceful-restart period 200
[RouterC-ospf-1] quit
Step 3 Enable the multicast function, enable PIM-SM on the interfaces of the routers, and enable IGMP
on the interface connecting the router to the host.
# Enable the multicast function on all routers, and enable PIM-SM on the interfaces of the
routers, and enable IGMP on the interface connecting Router C to the host. The configurations
on Router A and Router B are similar to those on Router C. The detailed configuration procedures
are not mentioned here.
[RouterC] multicast routing-enable
[RouterC] interface gigabitethernet 2/0/0
[RouterC-GigabitEthernet2/0/0] pim sm
[RouterC-GigabitEthernet2/0/0] igmp enable
[RouterC-GigabitEthernet2/0/0] quit
[RouterC] interface pos 1/0/0
[RouterC-Pos1/0/0] pim sm
[RouterC-Pos1/0/0] quit
Step 4 Configure the same static RP on each router.
# Configure the same static RP on each router. The configurations on Router B and Router C are
similar to those on Router A. The detailed configuration procedures are not mentioned here.
[RouterA] pim
[RouterA-pim] static-rp 1.1.1.1
[RouterA-pim] quit
Step 5 On Router C, enable PIM GR and set the GR period.
# On Router C, enable PIM GR and set the PIM GR period to 300 seconds.
[RouterC] pim
[RouterC-pim] graceful-restart
[RouterC-pim] graceful-restart period 300
[RouterC-pim] quit
# The multicast source (10.110.1.100) sends data to the multicast group (225.1.1.1). Host A
sends an IGMP Report message to join the multicast group and can receive the data from the
multicast source. Before Router C performs the active/standby switchover, run the display pim
routing-table command on Router B and Router C to view the PIM routing tables. The command
outputs are as follows:
<RouterB> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
RP: 1.1.1.1
Protocol: pim-sm, Flag: WC
UpTime: 01:52:38
Upstream interface: Pos1/0/0
Upstream neighbor: 192.168.2.1
RPF prime neighbor: 192.168.2.1
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0
Protocol: pim-sm, UpTime: 01:52:38, Expires: 00:02:53
(10.110.1.100, 225.1.1.1)
RP: 1.1.1.1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 01:52:38
Upstream interface: Pos1/0/0
Upstream neighbor: 192.168.2.1
RPF prime neighbor: 192.168.2.1
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0
Protocol: pim-sm, UpTime: 01:52:38, Expires: 00:03:03
<RouterC> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
RP: 1.1.1.1
Protocol: pim-sm, Flag: WC
UpTime: 01:51:24
Upstream interface: Pos1/0/0
Upstream neighbor: 192.168.4.1
RPF prime neighbor: 192.168.4.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: igmp, UpTime: 01:51:24, Expires: -
(10.110.1.100, 225.1.1.1)
RP: 1.1.1.1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 01:51:24
Upstream interface: Pos1/0/0
Upstream neighbor: 192.168.4.1
RPF prime neighbor: 192.168.4.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-sm, UpTime: 01:51:24, Expires: -
After Router C performs the active/standby switchover, during PIM GR, run the display pim
routing-table command on Router B and Router C to view the PIM routing tables. The command
outputs are as follows:
<RouterB> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
RP: 1.1.1.1
Protocol: pim-sm, Flag: WC
UpTime: 02:52:38
Upstream interface: Pos1/0/0
Upstream neighbor: 192.168.2.1
RPF prime neighbor: 192.168.2.1
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0
Protocol: pim-sm, UpTime: 02:52:38, Expires: 00:03:00
(10.110.1.100, 225.1.1.1)
RP: 1.1.1.1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 02:52:38
Upstream interface: Pos1/0/0
Upstream neighbor: 192.168.2.1
RPF prime neighbor: 192.168.2.1
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0
Protocol: pim-sm, UpTime: 02:52:38, Expires: 00:03:12
<RouterC> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
RP: 1.1.1.1
Protocol: pim-sm, Flag: WC
UpTime: 02:51:24
Upstream interface: Pos1/0/0
Upstream neighbor: 192.168.4.1
RPF prime neighbor: 192.168.4.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: igmp, UpTime: 02:51:24, Expires: -
(10.110.1.100, 225.1.1.1)
RP: 1.1.1.1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 02:51:24
Upstream interface: Pos1/0/0
Upstream neighbor: 192.168.4.1
RPF prime neighbor: 192.168.4.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-sm, UpTime: 02:51:24, Expires: -
In a normal multicast network, the downstream router periodically sends Join/Prune messages
upstream to refresh the timeout period of PIM routing entries on the upstream, thereby ensuring
normal multicast data forwarding.
If the GR function is not configured on Router C, the new active main control board deletes the
multicast forwarding entries on the interface board, receives IGMP Report messages from the
host again, and re-creates PIM routing entries. During this process, multicast traffic is
interrupted.
From the preceding command output, you can see that after Router C performs the active/standby
switchover, the downstream interface of Router B remains unchanged. That is, after the
switchover, Router C sends the backed-up Join messages upstream. In this way, multicast
forwarding entries are maintained during GR, ensuring non-stop multicast data forwarding.
While Router C restores its multicast routing entries after the active/standby switchover, users
can receive multicast data normally and services are not interrupted.
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 192.168.2.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.1.1 255.255.255.0
pim sm
#
interface Loopback0
ip address 1.1.1.1 255.255.255.255
pim sm
#
ospf 1
opaque-capability enable
graceful-restart period 200
area 0.0.0.0
network 192.168.2.0 0.0.0.255
network 10.110.1.0 0.0.0.255
network 1.1.1.1 0.0.0.0
#
pim
static-rp 1.1.1.1
#
return
6 MSDP Configuration
This chapter describes the MSDP fundamentals and configuration steps, and maintenance for
MSDP functions, along with typical examples.
6.1 MSDP Introduction
This section describes basic MSDP functions and MSDP features supported by the NE80E/40E.
6.2 Configuring PIM-SM Inter-domain Multicast
This section describes how to configure PIM-SM inter-domain MSDP peers in an AS.
6.3 Configuring an Anycast RP in a PIM-SM Domain
This section describes how to configure an anycast RP.
6.4 Managing MSDP Peer Connections
This section describes how to manage MSDP peer connections.
6.5 Configuring SA Cache
This section describes how to configure SA Cache.
6.6 Configuring the SA Request
This section describes how to configure an SA request.
6.7 Transmitting Burst Multicast Data Between Domains
This section describes how to transmit burst multicast data between domains.
6.8 Configuring the Filtering Rules for SA Messages
This section describes how to configure the filtering rules for SA messages.
6.9 Configuring MSDP Authentication
This section describes how to configure MSDP MD5 authentication and Key-Chain
authentication to enhance the security of the connections between MSDP peers.
6.10 Maintaining MSDP
This section describes how to clear MSDP statistics, reset connections between MSDP peers,
and monitor the running status of MSDP.
6.11 Configuration Examples
In the general PIM-SM mode, a multicast source registers only with the local rendezvous point
(RP). The information on the inter-domain multicast sources is isolated. The RP knows only the
source in its domain, establishes a multicast distribution tree (MDT) in its domain, and distributes
the data sent by the source to the local users.
A type of mechanism is required to enable the local RP to share the information on the multicast
sources of other domains. By means of the mechanism, the local RP can send Join messages to
the multicast sources of other domains and establish MDTs. Multicast packets can thus be
transmitted across domains.
The Multicast Source Discovery Protocol (MSDP) is an inter-domain multicast solution based on
multiple interconnected PIM-SM domains, and it solves the preceding problem.
MSDP achieves this objective by setting up the MSDP peer relationship between RPs of different
domains. MSDP peers share the information on multicast sources by sending Source Active
(SA) messages. They transmit the (S, G) information from the RP that the source S registers
with to other RPs connected to members of G.
MSDP peers are connected through TCP connections. MSDP peers perform the RPF check
on received SA messages.
NOTE
MSDP is applicable only to PIM-SM domains, and useful only for the Any-Source Multicast (ASM) mode.
You can configure intra-AS MSDP peers, inter-AS MSDP peers, and static RPF peers.
You can use a loopback interface as a C-RP or a static RP and specify the logical RP address
for an SA message.
Configuring SA Cache
By default, SA-Cache is enabled on routers. Therefore, routers can locally store the (S, G)
information carried in SA messages. When required to receive the multicast data, the routers
can obtain the (S, G) information from the SA-Cache.
You can set the maximum number of cached (S, G) entries, which can effectively prevent the
Denial of Service (DoS) attack.
You can disable SA-Cache on a router. After the SA-Cache on a router is disabled, the router
does not locally store the (S, G) information carried in SA messages. When the router needs to
receive multicast data, it needs to wait for the SA message to be sent by its MSDP peer in the
next period. This causes a delay for receivers to obtain multicast source information.
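As a minimal sketch of the SA-Cache controls described above, the following commands (which appear in the procedures later in this chapter) are entered in the MSDP view; the peer address 192.168.1.2 and the limit of 4096 entries are hypothetical values:

```
<HUAWEI> system-view
[HUAWEI] msdp
[HUAWEI-msdp] peer 192.168.1.2 sa-cache-maximum 4096
```

The command caps the number of (S, G) entries cached from that peer to mitigate DoS attacks. To disable caching entirely, run undo cache-sa-enable in the same view; note that this delays the delivery of multicast source information to receivers.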
Controlling SA Requests
Certain routers cannot be enabled with SA Cache or the capacity of SA Cache on these routers
is too small. When these routers need to receive multicast data, they cannot immediately obtain
the valid (S, G) information but need to wait for the SA message to be sent by their MSDP peers
in the next period.
If SA Cache is enabled on the remote MSDP peer and the capacity of the SA Cache is large, you
can configure "sending SA request messages" on the local router to reduce the period during
which receivers obtain multicast source information.
At the same time, you can also configure the filtering rules for receiving SA request messages
on the remote MSDP peers.
Setting the TTL threshold can limit the transmission scope of a multicast packet contained in an
SA message. After receiving an SA message containing a multicast packet, an MSDP peer checks
the TTL value in the IP header of the multicast packet. If the TTL value is smaller than or equal
to the threshold, the MSDP peer does not forward the SA message to the specified remote peers.
If the TTL value is greater than the threshold, the MSDP peer reduces the TTL value in the IP
header of the multicast packet by 1, and then encapsulates the multicast packet in an SA message
and sends the message out.
Multi-Instance MSDP
MSDP peer relationships can be set up between interfaces on multicast routers that belong to
the same instance (including the public network instance and VPN instances). MSDP peers
exchange SA messages with each other. Inter-domain VPN multicast is thus implemented.
Multicast routers on which multi-instance is applied maintain a set of MSDP mechanisms for
each instance. Multicast routers also guarantee the information separation among different
instances; therefore, only MSDP and PIM-SM that belong to the same instance can interact.
By applying multi-instance, the NE80E/40E implements inter-domain VPN multicast.
NOTE
For details of inter-domain VPN multicast, refer to the chapter "Multicast VPN Configuration" in the
HUAWEI NetEngine80E/40E Router Configuration Guide - IP Multicast.
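The per-instance model can be sketched as follows: MSDP is enabled separately for a VPN instance, and peers configured in that view exchange SA messages only with MSDP speakers of the same instance. The instance name VPNA, the peer address, and the interface below are hypothetical, and the prompts are illustrative:

```
<HUAWEI> system-view
[HUAWEI] msdp vpn-instance VPNA
[HUAWEI-msdp-VPNA] peer 10.1.1.2 connect-interface GigabitEthernet1/0/0
```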
MSDP Authentication
Configuring MSDP MD5 or Key-Chain authentication can improve the security of TCP
connections set up between MSDP peers. Note that the MSDP peers must be configured with
the same authentication password; otherwise, the TCP connection cannot be set up between
MSDP peers and MSDP messages cannot be transmitted.
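A minimal sketch of MD5 authentication in the MSDP view is shown below; the peer address and password are hypothetical, and the exact keywords may vary with the software version:

```
<HUAWEI> system-view
[HUAWEI] msdp
[HUAWEI-msdp] peer 10.1.1.2 password cipher Huawei-123
```

For Key-Chain authentication, a command of the form peer 10.1.1.2 keychain keychain-name would be used instead. In either case, both ends of the peer relationship must reference the same authentication data; otherwise the TCP connection cannot be set up.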
Applicable Environment
When a large multicast network is divided into multiple PIM-SM domains, MSDP is used to
connect RPs of various domains to share the source information. In this manner, hosts in a domain
can receive multicast data sent by multicast sources in other domains.
To ensure that all RPs in the network can share source information and to reduce the scale of
the MSDP connection graph, it is recommended that MSDP peer relationships be configured
between all RPs, including static RPs and C-RPs, in the network.
To ensure that SA messages transmitted between MSDP peers are not discarded by the RPF check
and to reduce redundant traffic, the following solutions are recommended:
Both BGP and MBGP can be used to set up inter-AS EBGP peer relationships. MBGP is recommended
because MBGP does not affect the unicast topology of a network.
Pre-configuration Tasks
Before configuring PIM-SM inter-domain multicast, complete the following tasks:
Data Preparation
To configure PIM-SM inter-domain multicast, you need the following data.
No. Data
Context
Do as follows on the RPs of all PIM-SM domains that belong to the same AS:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
MSDP is enabled in the public network instance or a VPN instance, and the MSDP view is
displayed.
Step 3 Run:
peer peer-address connect-interface interface-type interface-number
The system does not advertise routes on MTIs to VPNs; therefore, it is not allowed to use MTIs to set up
an MSDP peer connection.
Step 5 Run:
peer peer-address mesh-group name
l MSDP peer connections must be set up between all members of the same mesh group.
l All members of the mesh group must acknowledge each other as a member of the group.
l An MSDP peer can belong to only one mesh group. If an MSDP peer is configured to join
different mesh groups multiple times, only the latest configuration is valid.
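These rules can be sketched for a full mesh of three RPs. On the first member, both of the other RPs are configured as MSDP peers and then added to the same mesh group; the addresses (2.2.2.2, 3.3.3.3), the group name mesh-rp, and the interface are hypothetical, and the symmetric configuration must be repeated on the other two members:

```
[RP1] msdp
[RP1-msdp] peer 2.2.2.2 connect-interface LoopBack0
[RP1-msdp] peer 3.3.3.3 connect-interface LoopBack0
[RP1-msdp] peer 2.2.2.2 mesh-group mesh-rp
[RP1-msdp] peer 3.3.3.3 mesh-group mesh-rp
```

Within a mesh group, an SA message received from one member is not forwarded back to the other members, which suppresses redundant SA traffic.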
----End
Context
Establish the MBGP peer relationship between two RPs of different ASs and do as follows on
the MBGP peers:
NOTE
If the two RPs set up the BGP peer relationship, it is not necessary to set up the MBGP peer relationship
between them.
For details of the configuration of MBGP peers, refer to the chapter MBGP Configuration in the HUAWEI
NetEngine80E/40E Router Configuration Guide - IP Multicast.
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
MSDP is enabled in the public network instance or VPN instance, and the MSDP view is
displayed.
Step 3 Run:
peer peer-address connect-interface interface-type interface-number
l peer-address: specifies the address of a remote MSDP peer. The address is the same as that
of the remote BGP or MBGP peer.
l interface-type interface-number: specifies the local interface connected to the remote MSDP
peer. The interface is the same as the local BGP or MBGP interface.
The configuration helps to distinguish the remote MSDP peers and manage the connections with
the remote MSDP peers.
----End
Context
NOTE
If Configuring Inter-AS MSDP Peers on MBGP Peers is complete, skip the configuration.
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
MSDP is enabled in the public network instance or VPN instance, and the MSDP view is
displayed.
Step 3 Run:
peer peer-address connect-interface interface-type interface-number
The configuration helps to distinguish remote MSDP peers and manage the connections with
the remote MSDP peers.
Step 5 Run:
static-rpf-peer peer-address
----End
Example
<HUAWEI> display msdp brief
MSDP Peer Brief Information of VPN-Instance: public net
Configured Up Listen Connect Shutdown Down
2 2 0 0 0 0
Pre-configuration Tasks
Before configuring an anycast RP in a PIM-SM domain, complete the following tasks:
l Configuring a unicast routing protocol to implement interconnection at the network layer
l Enabling IP multicast
l Configuring a PIM-SM domain without any RP
Data Preparation
To configure an anycast RP in a PIM-SM domain, you need the following data.
No. Data
1 RP address
Context
Use a unicast routing protocol in the current network to advertise the address of the newly
configured RP interface. Ensure that all routers in the network have a route to the RP.
In the PIM-SM domain, do as follows on multiple routers on which the anycast RP is to be
configured:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface loopback interface-number
Step 4 Run:
pim sm
NOTE
Before configuring a dynamic RP, you need to run this command. This command is not required when you
configure a static RP.
----End
Context
NOTE
l If the PIM-SM network uses a static RP, the configuration is not necessary.
l If the PIM-SM network uses a BSR-RP, the configuration is mandatory. Before configuring a C-RP,
configure a BSR and a BSR boundary. The BSR address cannot be the same as the C-RP address.
Procedure
Step 1 Run:
system-view
----End
Context
NOTE
l When the PIM-SM network uses a BSR-RP, the configuration is not necessary.
l When the PIM-SM network uses a static RP, the configuration is mandatory.
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on multiple routers on which an anycast RP is to be created:
NOTE
If more than two routers are configured with RPs that have the same IP address, ensure that the
routers that set up MSDP peer relationships are interconnected.
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
MSDP is enabled in the public network instance or a VPN instance, and the MSDP view is
displayed.
Step 3 Run:
peer peer-address connect-interface interface-type interface-number
This configuration helps to differentiate remote MSDP peers and manage the connection with
the remote MSDP peers.
That is, the remote MSDP peer is acknowledged as a member of the mesh group.
If only two routers are configured with the anycast-RP, this configuration is not necessary.
l MSDP peer connections must be set up between all members of the mesh group.
l All members of the mesh group must acknowledge each other as the member of the mesh
group.
l An MSDP peer can belong to only one mesh group. If an MSDP peer is configured to join
different mesh groups multiple times, only the latest configuration is valid.
----End
Context
After receiving an SA message, an MSDP peer performs the RPF check on the message. If the
remote RP address carried in the SA message is the same as the local RP address, the SA message
is discarded.
Do as follows on the routers on which the anycast RP is to be configured:
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
originating-rp interface-type interface-number
The logical RP interface is configured. The logical RP interface cannot be the same as the actual
RP interface. It is recommended to configure the logical interface as the MSDP peer interface.
After the originating-rp command is used, the logical RP address carried in the SA message
sent by the router replaces the RP address in the IP header of the SA message, and the SA message
can pass the RPF check after reaching the remote router.
NOTE
The system does not advertise routes on the MTIs to VPNs; therefore, the MTIs cannot be used as logical
RPs.
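A minimal sketch of the originating-rp command described above, assuming the logical RP address is configured on a hypothetical interface LoopBack10 that is also used for the MSDP peer connection:

```
<HUAWEI> system-view
[HUAWEI] msdp
[HUAWEI-msdp] originating-rp LoopBack10
```

SA messages sent by this router then carry the address of LoopBack10 as the RP address, so they can pass the RPF check on the other anycast-RP members.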
----End
Example
Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] brief command. If brief
information about the remote MSDP peer status is displayed, the configuration has succeeded.
For example:
<HUAWEI> display msdp brief
MSDP Peer Brief Information of VPN-Instance: public net
Configured Up Listen Connect Shutdown Down
1 1 0 0 0 0
Applicable Environment
MSDP peers are connected through a TCP connection (port number 639). Users can close or
reestablish a TCP connection to flexibly control the sessions set up between MSDP peers.
When a new MSDP peer is created, when a closed MSDP peer connection is restarted, or
when a faulty MSDP peer attempts to recover, the TCP connection needs to be set up immediately
between the MSDP peers. Users can flexibly adjust the interval for retrying to set up an MSDP
peer connection.
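Both controls can be sketched as follows in the MSDP view; the peer address 10.1.1.2 and the 30-second interval are hypothetical values:

```
<HUAWEI> system-view
[HUAWEI] msdp
[HUAWEI-msdp] shutdown 10.1.1.2
[HUAWEI-msdp] undo shutdown 10.1.1.2
[HUAWEI-msdp] timer retry 30
```

The shutdown command closes the TCP session to the peer without deleting its configuration, undo shutdown reestablishes the session, and timer retry sets the interval at which connection setup is retried.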
Pre-configuration Tasks
Before managing MSDP peer connections, complete the following tasks:
Data Preparation
To manage MSDP peer connections, you need the following data.
No. Data
Context
Do as follows on the router on which the MSDP peer is created:
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
shutdown peer-address
----End
Context
Do as follows on the router on which the MSDP peer is created:
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
timer retry interval
The interval for retrying to send a TCP connection request to the remote MSDP peer is set.
----End
Procedure
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] brief command
to check the brief information about the statuses of all remote peers that establish MSDP
peer relationships with the local host.
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] peer-status
[ peer-address ] command to check the detailed information about the statuses of the specified
remote peers that establish MSDP peer relationships with the local host.
----End
Example
<HUAWEI> display msdp brief
MSDP Peer Brief Information of VPN-Instance: public net
Configured Up Listen Connect Shutdown Down
2 2 0 0 0 0
Pre-configuration Tasks
Before configuring SA Cache, complete the following tasks:
l Configuring a unicast routing protocol to implement interconnection at the network layer
l Enabling IP multicast
l Configuring a PIM-SM domain to implement intra-domain multicast
l Configuring PIM-SM Inter-domain Multicast or Configuring an Anycast RP in a
PIM-SM Domain
Data Preparation
To configure SA Cache, you need the following data.
No. Data
Context
Do as follows on the router on which the MSDP peer is configured:
NOTE
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
peer peer-address sa-cache-maximum sa-limit
----End
Context
Do as follows on the router on which the MSDP peer is configured:
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
undo cache-sa-enable
NOTE
----End
Procedure
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] sa-cache [ group-
address | source-address | { 2-byte-as-number | 4-byte-as-number } ] * command to check
(S, G) entries in the SA Cache of the public network instance, VPN instance or all instances.
----End
Example
Run the display msdp sa-cache command to check (S, G) entries in SA Cache.
<HUAWEI> display msdp sa-cache
MSDP Source-Active Cache Information of VPN-Instance: public net
MSDP Total Source-Active Cache - 3 entries
MSDP matched 3 entries
(8.8.8.8, 225.0.0.200)
Origin RP: 4.4.4.4
Pro: BGP, AS: 10
Uptime: 00:00:33, Expires: 00:05:27
(8.8.8.8, 225.0.0.201)
Origin RP: 4.4.4.4
Pro: BGP, AS: 1.0
Uptime: 00:00:33, Expires: 00:05:27
(8.8.8.8, 225.0.0.202)
Origin RP: 4.4.4.4
Pro: BGP, AS: 65535.65535
Uptime: 00:00:33, Expires: 00:05:27
Run the display msdp sa-count command to check the number of (S, G) entries in SA Cache.
<HUAWEI> display msdp sa-count
MSDP Source-Active Count Information of VPN-Instance: public net
Number of cached Source-Active entries, counted by Peer
Peer's Address Number of SA
10.10.10.10 5
Number of source and group, counted by AS
AS Number of source Number of group
? 3 3
Total 5 Source-Active entries matched
Applicable Environment
The capacity of SA Cache on certain routers is small. When these routers need to receive
multicast data, they cannot immediately obtain the valid (S, G) information and need to wait for
the SA message sent by their MSDP peers in the next period.
If SA Cache is enabled on the remote MSDP peer and the capacity of the SA Cache is large,
configuring "sending SA Request message" on the local router can shorten the period during
which receivers obtain multicast source information.
l When the local router wants to receive (S, G) information, it sends an SA Request message
to a specified remote MSDP peer.
l On receiving the SA Request message, the MSDP peer responds with the required (S, G)
information. If the filtering rule for SA Request messages is configured on the remote
MSDP peer, it checks SA Request messages received from the specified peers and
determines whether to respond according to the check results.
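The interaction above involves one command on each end. A sketch with hypothetical addresses, where the local router requests SA information from the peer and the remote peer filters incoming SA Requests by basic ACL 2000 (the policy number shown in the display output later in this section):

```
[Local] msdp
[Local-msdp] peer 10.1.1.2 request-sa-enable
[Remote] msdp
[Remote-msdp] peer 10.1.1.1 sa-request-policy acl 2000
```

Without the sa-request-policy configuration, the remote peer answers every SA Request message it receives.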
Pre-configuration Tasks
Before configuring an SA request, complete the following tasks:
Data Preparation
To configure an SA request, you need the following data.
No. Data
Context
Do as follows on the local router:
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
peer peer-address request-sa-enable
----End
Context
Do as follows on the remote MSDP peer specified by using the peer peer-address request-sa-
enable command. If the configuration is not done, once an SA message reaches, the router
immediately responds to it with an SA message containing the required (S, G) information.
Procedure
Step 1 Run:
system-view
----End
Example
Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] peer-status [ peer-
address ] command, and you can view the SA-Requests field and check whether the
configuration is valid. For example:
<HUAWEI> display msdp peer-status
MSDP Peer Information of VPN-Instance: public net
MSDP Peer 172.40.41.1, AS ?
Description:
Information about connection status:
State: Up
Up/down time: 00:26:41
Resets: 0
Connection interface: GigabitEthernet2/0/14 (172.40.41.2)
Number of sent/received messages: 27/28
Number of discarded output messages: 0
Elapsed time since last connection or counters clear: 00:26:56
Information about (Source, Group)-based SA filtering policy:
Import policy: none
Export policy: none
Information about SA-Requests:
Policy to accept SA-Request messages: 2000
Sending SA-Requests status: enable
Minimum TTL to forward SA with encapsulated data: 0
SAs learned from this peer: 0, SA Cache maximum for the peer: none
Input queue size: 0, Output queue size: 0
Counters for MSDP message:
Count of RPF check failure: 0
Incoming/outgoing SA messages: 16/0
Incoming/outgoing SA requests: 0/0
Incoming/outgoing SA responses: 0/0
Incoming/outgoing data packets: 0/0
Peer authentication: configured
Peer authentication type: Key-Chain
After receiving an SA message, a remote RP decapsulates the message and forwards the multicast
data to users in the domain along the RPT.
Setting the TTL threshold can limit the transmission scope of a multicast packet contained in an
SA message. After receiving an SA message containing a multicast packet, an MSDP peer checks
the TTL value in the IP header of the multicast packet. If the TTL value is smaller than or equal
to the threshold, the MSDP peer does not forward the SA message to the specified remote peers. If
the TTL value is greater than the threshold, the MSDP peer reduces the TTL value in the IP
header of the multicast packet by 1, and then encapsulates the multicast packet in an SA message
and sends it out.
Pre-configuration Tasks
Before transmitting burst multicast data between domains, complete the following tasks:
l Configuring a unicast routing protocol to implement interconnection at the network layer
l Enabling IP multicast
l Configuring a PIM-SM domain to implement intra-domain multicast
l Configuring PIM-SM Inter-domain Multicast or Configuring an Anycast RP in a
PIM-SM Domain
Data Preparation
To transmit burst multicast data between domains, you need the following data.
No. Data
Context
Do as follows on the source RP configured with an MSDP peer:
Procedure
Step 1 Run:
system-view
By default, the SA message contains only (S, G) information, and does not contain a multicast
data packet.
----End
Context
Do as follows on the router configured with an MSDP peer:
NOTE
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
peer peer-address minimum-ttl ttl
After receiving an SA message containing a multicast data packet, an MSDP peer forwards the
SA message to the specified remote MSDP peers only when the TTL value of the multicast packet
is greater than the threshold.
----End
Procedure
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] sa-cache [ group-
address | source-address | { 2-byte-as-number | 4-byte-as-number } ] * command to check
SA Cache of the public network instance, VPN instance, or all instances.
Example
Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] peer-status [ peer-
address ] command, and you can view the minimum TTL for forwarding an SA message
containing a data packet and check whether the configuration is valid. For example:
<HUAWEI> display msdp peer-status
MSDP Peer Information of VPN-Instance: public net
MSDP Peer 172.40.41.1, AS ?
Description:
Information about connection status:
State: Up
Up/down time: 00:26:41
Resets: 0
Connection interface: GigabitEthernet2/0/14 (172.40.41.2)
Number of sent/received messages: 27/28
Number of discarded output messages: 0
Elapsed time since last connection or counters clear: 00:26:56
Information about (Source, Group)-based SA filtering policy:
Import policy: none
Export policy: none
Information about SA-Requests:
Policy to accept SA-Request messages: 2000
Sending SA-Requests status: enable
Minimum TTL to forward SA with encapsulated data: 10
SAs learned from this peer: 0, SA Cache maximum for the peer: none
Input queue size: 0, Output queue size: 0
Counters for MSDP message:
Count of RPF check failure: 0
Incoming/outgoing SA messages: 16/0
Incoming/outgoing SA requests: 0/0
Incoming/outgoing SA responses: 0/0
Incoming/outgoing data packets: 0/0
Run the display msdp sa-cache command to check the information about (S, G) entries in SA
Cache.
l If group-address is specified, the (S, G) entry to which a specified group corresponds is
displayed.
l If source-address is specified, the (S, G) entry to which a specified source corresponds is
displayed.
l If 2-byte-as-number or 4-byte-as-number is specified, the (S, G) entry whose Origin RP
attribute belongs to a specified AS is displayed.
<HUAWEI> display msdp sa-cache
MSDP Source-Active Cache Information of VPN-Instance: public net
MSDP Total Source-Active Cache - 3 entries
MSDP matched 3 entries
(8.8.8.8, 225.0.0.200)
Origin RP: 4.4.4.4
Pro: BGP, AS: 10
Uptime: 00:00:33, Expires: 00:05:27
(8.8.8.8, 225.0.0.201)
Origin RP: 4.4.4.4
Pro: BGP, AS: 1.0
Uptime: 00:00:33, Expires: 00:05:27
(8.8.8.8, 225.0.0.202)
Pre-configuration Tasks
Before configuring the filtering rules for SA messages, complete the following tasks:
l Configuring a unicast routing protocol to implement interconnection at the network layer
l Enabling IP multicast
l Configuring a PIM-SM domain to implement intra-domain multicast
l Configuring PIM-SM Inter-domain Multicast or Configuring an Anycast RP in a
PIM-SM Domain
Data Preparation
To configure the filtering rules for SA messages, you need the following data.
No. Data
Context
Do as follows on the source RP configured with an MSDP peer:
NOTE
If the configuration is not done, an SA message created by the source RP contains the information of all
local active sources.
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
import-source [ acl { acl-number | acl-name } ]
The rules for filtering the multicast sources advertised in SA messages are set.
l acl: specifies the filtering list based on multicast sources. The SA message created by the
MSDP peer contains only the local source information that matches the filtering rules. The
MSDP peer can thus control the local (S, G) information.
l If the import-source command is used without acl, the SA message does not advertise any
information about the local active sources.
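As a sketch, a basic ACL can restrict the advertised sources to a single subnet (the ACL number 2001 and the subnet 10.110.1.0/24 are assumed for illustration):
<HUAWEI> system-view
[HUAWEI] acl number 2001
[HUAWEI-acl-basic-2001] rule permit source 10.110.1.0 0.0.0.255
[HUAWEI-acl-basic-2001] quit
[HUAWEI] msdp
[HUAWEI-msdp] import-source acl 2001
SA messages created by this RP then carry only (S, G) entries whose sources match the ACL.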
----End
Context
Do as follows on the router configured with MSDP:
NOTE
If the configuration is not done, the router receives all SA messages that pass the RPF check.
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
peer peer-address sa-policy import [ acl { advanced-acl-number | acl-name } ]
The rules for filtering the SA messages received from a remote MSDP peer are set.
The parameters of the command are explained as follows:
l peer-address: specifies the address of a remote MSDP peer.
l acl: specifies the advanced ACL. Only the (S, G) information that passes the ACL filtering
is accepted. The (S, G) information is contained in SA messages sent by the peer specified
by peer-address.
l If the peer peer-address sa-policy import command without acl is used, the router does not
receive any (S, G) information from the peer specified by peer-address.
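For example, a sketch that accepts from a peer only the (S, G) information for groups in 225.1.1.0/24 (the peer address 192.168.2.2, the advanced ACL number 3100, and the group range are assumed for illustration):
<HUAWEI> system-view
[HUAWEI] acl number 3100
[HUAWEI-acl-adv-3100] rule permit ip source any destination 225.1.1.0 0.0.0.255
[HUAWEI-acl-adv-3100] quit
[HUAWEI] msdp
[HUAWEI-msdp] peer 192.168.2.2 sa-policy import acl 3100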
----End
Context
Do as follows on the router enabled with MSDP:
NOTE
If the configuration is not done, the router forwards all SA messages that pass the RPF check.
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
peer peer-address sa-policy export [ acl { advanced-acl-number | acl-name } ]
The rules for filtering the SA messages forwarded to a remote MSDP peer are set.
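A sketch of filtering the SA messages forwarded to a remote peer (the peer address 192.168.2.2, the advanced ACL number 3101, and the denied subnet are assumed for illustration):
<HUAWEI> system-view
[HUAWEI] acl number 3101
[HUAWEI-acl-adv-3101] rule deny ip source 10.110.3.0 0.0.0.255 destination any
[HUAWEI-acl-adv-3101] rule permit ip source any destination any
[HUAWEI-acl-adv-3101] quit
[HUAWEI] msdp
[HUAWEI-msdp] peer 192.168.2.2 sa-policy export acl 3101
The ACL numbers configured in the two directions then appear in the Import policy and Export policy fields of the display msdp peer-status output.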
----End
Procedure
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] sa-cache [ group-
address | source-address | { 2-byte-as-number | 4-byte-as-number } ] * command to check
SA Cache of the public network instance, VPN instance, or all instances.
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] peer-status
[ peer-address ] command to check detailed information about the MSDP peer status.
----End
Example
Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] peer-status [ peer-
address ] command, and you can view information about the (Source, Group)-based SA filtering
policy field and check whether the configuration is valid. For example:
<HUAWEI> display msdp peer-status
MSDP Peer Information of VPN-Instance: public net
MSDP Peer 172.40.41.1, AS ?
Description:
Information about connection status:
State: Up
Up/down time: 00:26:41
Resets: 0
Connection interface: GigabitEthernet2/0/14 (172.40.41.2)
Number of sent/received messages: 27/28
Number of discarded output messages: 0
Elapsed time since last connection or counters clear: 00:26:56
Information about (Source, Group)-based SA filtering policy:
Import policy: 3000
Export policy: 3002
Information about SA-Requests:
Policy to accept SA-Request messages: 2000
Sending SA-Requests status: enable
Minimum TTL to forward SA with encapsulated data: 10
SAs learned from this peer: 0, SA Cache maximum for the peer: none
Input queue size: 0, Output queue size: 0
Counters for MSDP message:
Count of RPF check failure: 0
Incoming/outgoing SA messages: 16/0
Incoming/outgoing SA requests: 0/0
Incoming/outgoing SA responses: 0/0
Incoming/outgoing data packets: 0/0
Run the display msdp sa-cache command to check the information about (S, G) entries in SA
Cache.
(8.8.8.8, 225.0.0.200)
Origin RP: 4.4.4.4
Pro: BGP, AS: 10
Uptime: 00:00:33, Expires: 00:05:27
(8.8.8.8, 225.0.0.201)
Origin RP: 4.4.4.4
Pro: BGP, AS: 1.0
Uptime: 00:00:33, Expires: 00:05:27
(8.8.8.8, 225.0.0.202)
Origin RP: 4.4.4.4
Pro: BGP, AS: 65535.65535
Uptime: 00:00:33, Expires: 00:05:27
Applicable Environment
Configuring MSDP authentication can enhance the security of the TCP connections between
MSDP peers.
Pre-configuration Tasks
Before configuring MSDP authentication, complete the following tasks:
Data Preparation
Before configuring MSDP authentication, prepare the following data:
No. Data
Context
By default, MSDP MD5 authentication is not configured.
Procedure
Step 1 Run:
system-view
Step 2 Run:
msdp [ vpn-instance vpn-instance-name ]
Step 3 Run:
peer peer-address password { cipher cipher-password | simple simple-password }
The MSDP MD5 authentication password is case sensitive and cannot contain spaces.
The MSDP peers must be configured with the same authentication password; otherwise, the TCP
connection cannot be set up between the MSDP peers and MSDP messages cannot be transmitted.
The password, however, can be input in different forms on the two ends; that is, it can be entered
in cipher text on one end and in plain text on the other.
NOTE
MSDP MD5 authentication and MSDP Key-Chain authentication are mutually exclusive.
The MD5 authentication password that starts and ends with $@$@ is invalid, because $@$@ is used to
distinguish old and new passwords.
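A minimal sketch of enabling MD5 authentication on both ends (the peer addresses and the password Huawei@123 are assumed for illustration; both ends must use the same password, although one end can enter it in plain text and the other in cipher text):
# On the local peer:
[RouterA] msdp
[RouterA-msdp] peer 192.168.1.2 password simple Huawei@123
# On the remote peer:
[RouterB] msdp
[RouterB-msdp] peer 192.168.1.1 password cipher Huawei@123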
----End
Context
By default, MSDP Key-Chain authentication is not configured.
Do as follows on the router configured with MSDP peers:
Procedure
Step 1 Run:
system-view
NOTE
MSDP MD5 authentication and MSDP Key-Chain authentication are mutually exclusive.
----End
Example
Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] peer-status [ peer-
address ] command, and you can find the Peer authentication and Peer authentication type
fields in the command output. For example:
<HUAWEI> display msdp peer-status
MSDP Peer Information of VPN-Instance: public net
MSDP Peer 172.40.41.1, AS ?
Description:
Information about connection status:
State: Up
Up/down time: 00:26:41
Resets: 0
Connection interface: GigabitEthernet2/0/14 (172.40.41.2)
Number of sent/received messages: 27/28
Number of discarded output messages: 0
Elapsed time since last connection or counters clear: 00:26:56
Information about (Source, Group)-based SA filtering policy:
Import policy: 3000
Export policy: 3002
Information about SA-Requests:
Policy to accept SA-Request messages: 2000
Sending SA-Requests status: enable
Minimum TTL to forward SA with encapsulated data: 10
SAs learned from this peer: 0, SA-cache maximum for the peer: none
Input queue size: 0, Output queue size: 0
Counters for MSDP message:
Count of RPF check failure: 0
Incoming/outgoing SA messages: 16/0
Incoming/outgoing SA requests: 0/0
Incoming/outgoing SA responses: 0/0
Incoming/outgoing data packets: 0/0
Peer authentication: configured
Peer authentication type: Key-Chain
Context
CAUTION
The statistics of MSDP peers cannot be restored after being cleared. Confirm the action before
you run the command.
Procedure
l Run the reset msdp [ vpn-instance vpn-instance-name | all-instance ] peer [ peer-
address ] command in the user view to clear the TCP connection with a specified MSDP
peer and all statistics of the specified MSDP peer.
l Run the reset msdp [ vpn-instance vpn-instance-name | all-instance ] statistics [ peer-
address ] command in the user view to clear the statistics of an MSDP peer or multiple
MSDP peers of the public network instance, VPN instance, or all instances, if MSDP peers
are not reset.
CAUTION
The (S, G) information in SA Cache cannot be restored after being cleared. Confirm the action
before you run the command.
Procedure
l Run the reset msdp [ vpn-instance vpn-instance-name | all-instance ] sa-cache [ group-
address ] command in the user view to clear the entries in MSDP SA Cache.
----End
Procedure
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] brief [ state
{ connect | down | listen | shutdown | up } ] command in any view to check brief
information about the MSDP peer status.
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] peer-status
[ peer-address ] command in any view to check detailed information about the status of an
MSDP peer of the public network instance, VPN instance, or all instances.
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] sa-cache [ group-
address | source-address | { 2-byte-as-number | 4-byte-as-number } ] * command in any
view to check the (S, G) information in SA Cache.
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] sa-count [ 2-
byte-as-number | 4-byte-as-number ] command in any view to check the number of (S, G)
entries in MSDP Cache.
l Run the display msdp [ vpn-instance vpn-instance-name | all-instance ] control-
message counters [ peer peer-address | message-type { source-active | sa-request | sa-
response | keepalive | notification | traceroute-request | traceroute-reply | data-
packets | unknown-type } ] * command in any view to check statistics about the received,
sent, and discarded MSDP messages.
----End
Networking Requirements
As shown in Figure 6-1, there are two ASs in the network. Each AS contains one or more PIM-
SM domains. The receivers in the PIM-SM2 domain are required to receive the multicast data
sent by S3 in the PIM-SM3 domain and by S1 in the PIM-SM1 domain.
Figure 6-1 Networking diagram (AS100: Router A and Router B with source S1 in the PIM-SM1 domain, Loopback0 1.1.1.1/32 on Router B; AS200: Router C, Router D, Router E, and Router F, with the Receiver in the PIM-SM2 domain and source S3 in the PIM-SM3 domain, Loopback0 2.2.2.2/32 on Router C and 3.3.3.3/32 on Router E; MSDP peers connect the RPs)
Configuration Roadmap
Solution: configure MSDP peer relationships between RPs of each PIM-SM domain. The
configuration roadmap is as follows:
1. Configure IP addresses of interfaces on each router, and configure OSPF in the ASs to
ensure that unicast routes are reachable in the ASs.
2. Configure EBGP peers between ASs and configure BGP and OSPF to import routes to each
other to ensure that unicast routes are reachable.
3. Enable multicast and PIM-SM on each interface, configure the boundary of a domain, and
enable IGMP on the interface connected to hosts.
4. Configure a C-BSR and a C-RP. Configure the RPs of PIM-SM1 and PIM-SM2 on ASBRs.
5. Establish the MSDP peer relationship between RPs of each domain. MSDP peers and EBGP
peers between ASs use the same interface address. According to RPF rules, routers receive
SA messages forwarded by the next hop of the route to the source RP.
Data Preparation
To complete this configuration, you need the following data:
l Address of multicast group G: 225.1.1.1/24.
l The AS number of Router A and Router B is 100. Router ID of Router B is 1.1.1.1.
l The AS number of Router C and Router D is 200. Router ID of Router C is 2.2.2.2.
l The AS number of Router E and Router F is 200.
Procedure
Step 1 Configure an IP address and a unicast routing protocol on each router.
# Configure an IP address and mask on each interface as shown in Figure 6-1. Configure OSPF
in the AS. Ensure that the communication between routers is normal at the network layer. Ensure
dynamic routing updates between routers with the help of the unicast routing protocol. The
procedures are not mentioned here.
Step 2 Configure BGP between ASs and configure BGP and OSPF to import routes into each other.
# Configure EBGP on Router B and import OSPF routes.
[RouterB] bgp 100
[RouterB-bgp] router-id 1.1.1.1
[RouterB-bgp] peer 192.168.2.2 as-number 200
[RouterB-bgp] import-route ospf 1
[RouterB-bgp] quit
# Import BGP routes to OSPF on Router B. The configuration of Router C is similar to that of
Router B, and is not mentioned here.
[RouterB] ospf 1
[RouterB-ospf-1] import-route bgp
[RouterB-ospf-1] quit
Step 3 Enable multicast on each router and enable PIM-SM on each interface, configure the boundary
of a domain and enable IGMP on the interface connected to hosts.
# Enable multicast on Router B, and enable PIM-SM on each interface. The configurations of
other routers are similar to that of Router B, and are not mentioned here.
[RouterB] multicast routing-enable
[RouterB] interface pos 2/0/0
[RouterB-Pos2/0/0] pim sm
[RouterB-Pos2/0/0] quit
[RouterB] interface pos 1/0/0
[RouterB-Pos1/0/0] pim sm
# Configure the BSR boundary on POS 1/0/0 and POS 3/0/0 of Router C and on POS 3/0/0 of
Router E. The configurations of Router C and Router E are similar to that of Router B, and are
not mentioned here.
# Enable IGMP on the interface through which Router D is connected to the leaf network.
[RouterD] interface gigabitethernet 1/0/0
[RouterD-GigabitEthernet1/0/0] igmp enable
[RouterE] msdp
[RouterE-msdp] peer 192.168.4.1 connect-interface pos3/0/0
[RouterE-msdp] quit
# Run the display bgp routing-table command. You can view the BGP routing table on a
router. For example, the BGP routing table on Router C is as follows:
<RouterC> display bgp routing-table
Total Number of Routes: 5
BGP Local router ID is 2.2.2.2
Status codes: * - valid, > - best, d - damped,
h - history, i - internal, s - suppressed, S - Stale
Origin : i - IGP, e - EGP, ? - incomplete
Network NextHop MED LocPrf PrefVal Path/Ogn
*> 1.1.1.1/32 192.168.2.1 0 0 100?
*>i 2.2.2.2/32 0.0.0.0 0 0 ?
*> 192.168.2.0 0.0.0.0 0 0 ?
*> 192.168.2.1/32 0.0.0.0 0 0 ?
*> 192.168.2.2/32 0.0.0.0 0 0 ?
# Run the display msdp brief command. You can view MSDP peer relationships between
routers. The brief information about MSDP peer relationships between Router B, Router C, and
Router E is as follows:
<RouterB> display msdp brief
MSDP Peer Brief Information of VPN-Instance: public net
Configured Up Listen Connect Shutdown Down
1 1 0 0 0 0
Peer's Address State Up/Down time AS SA Count Reset Count
192.168.2.2 UP 00:12:27 200 13 0
<RouterC> display msdp brief
MSDP Peer Brief Information of VPN-Instance: public net
Configured Up Listen Connect Shutdown Down
2 2 0 0 0 0
Peer's Address State Up/Down time AS SA Count Reset Count
192.168.2.1 UP 01:07:08 100 8 0
192.168.4.2 UP 00:06:39 200 13 0
<RouterE> display msdp brief
MSDP Peer Brief Information of VPN-Instance: public net
Configured Up Listen Connect Shutdown Down
1 1 0 0 0 0
Peer's Address State Up/Down time AS SA Count Reset Count
192.168.4.1 UP 00:15:32 200 8 0
# Run the display msdp peer-status command. You can view the detailed information about
the MSDP peers. The detailed information about MSDP peers on Router B is as follows:
<RouterB> display msdp peer-status
# Run the display pim routing-table command. You can view the PIM routing table on a
router. When S1 (10.110.1.2/24) in PIM-SM1 domain and S3 (10.110.3.2/24) in PIM-SM3
domain send multicast data to G (225.1.1.1/24), Receiver (10.110.2.2/24) in the PIM-SM2 domain
can receive the multicast data. The information about the PIM routing tables on Router B and
Router C is as follows:
<RouterB> display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(10.110.1.2, 225.1.1.1)
RP: 1.1.1.1(local)
Protocol: pim-sm, Flag: SPT EXT ACT
UpTime: 00:00:42
Upstream interface: Pos2/0/0
Upstream neighbor: 192.168.1.1
RPF neighbor: 192.168.1.1
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos1/0/0
Protocol: pim-sm, UpTime: 00:00:42, Expires:-
<RouterC> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 2 (S, G) entries
(*, 225.1.1.1)
RP: 2.2.2.2(local)
Protocol: pim-sm, Flag: WC RPT
UpTime: 00:13:46
Upstream interface: NULL,
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0,
Protocol: pim-sm, UpTime: 00:13:46, Expires:-
(10.110.1.2, 225.1.1.1)
RP: 2.2.2.2
Protocol: pim-sm, Flag: SPT MSDP ACT
UpTime: 00:00:42
Upstream interface: Pos1/0/0
Upstream neighbor: 192.168.2.1
----End
Configuration Files
l Configuration file of Router B
#
sysname RouterB
#
multicast routing-enable
#
interface Pos2/0/0
undo shutdown
link-protocol ppp
ip address 192.168.1.2 255.255.255.0
pim sm
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 192.168.2.1 255.255.255.0
pim sm
pim bsr-boundary
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
pim sm
#
bgp 100
router-id 1.1.1.1
peer 192.168.2.2 as-number 200
import-route ospf 1
#
ospf 1
import-route bgp
area 0.0.0.0
network 192.168.1.0 0.0.0.255
network 1.1.1.1 0.0.0.0
#
pim
c-bsr LoopBack0
c-rp LoopBack0
#
msdp
peer 192.168.2.2 connect-interface Pos1/0/0
#
return
l Configuration file of Router C
#
sysname RouterC
#
multicast routing-enable
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 192.168.2.2 255.255.255.0
pim sm
pim bsr-boundary
#
interface Pos2/0/0
undo shutdown
link-protocol ppp
ip address 192.168.3.1 255.255.255.0
pim sm
#
interface Pos3/0/0
undo shutdown
link-protocol ppp
ip address 192.168.4.1 255.255.255.0
pim sm
pim bsr-boundary
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
pim sm
#
bgp 200
router-id 2.2.2.2
peer 192.168.2.1 as-number 100
import-route ospf 1
#
ospf 1
import-route bgp
area 0.0.0.0
network 192.168.3.0 0.0.0.255
network 192.168.4.0 0.0.0.255
network 2.2.2.2 0.0.0.0
#
pim
c-bsr LoopBack0
c-rp LoopBack0
#
msdp
peer 192.168.2.1 connect-interface Pos1/0/0
peer 192.168.4.2 connect-interface Pos3/0/0
#
return
l Configuration file of Router E
#
sysname RouterE
#
multicast routing-enable
#
interface Pos2/0/0
undo shutdown
link-protocol ppp
ip address 192.168.5.1 255.255.255.0
pim sm
#
interface Pos3/0/0
undo shutdown
link-protocol ppp
ip address 192.168.4.2 255.255.255.0
pim sm
pim bsr-boundary
#
interface LoopBack0
ip address 3.3.3.3 255.255.255.255
pim sm
#
ospf 1
area 0.0.0.0
network 192.168.4.0 0.0.0.255
network 192.168.5.0 0.0.0.255
network 3.3.3.3 0.0.0.0
#
pim
c-bsr LoopBack0
c-rp LoopBack0
#
msdp
peer 192.168.4.1 connect-interface Pos3/0/0
#
return
Networking Requirements
As shown in Figure 6-2, there are two ASs in the network. Each AS contains one or more PIM-
SM domains; and each PIM-SM domain has 0 or 1 multicast source or receiver. MSDP peer
relationships are required to be set up between PIM-SM domains to share the information about
multicast sources.
Figure 6-2 Networking diagram of configuring inter-AS multicast by using static RPF peers
(AS100: the PIM-SM1 domain with Router A, Router B, Router C, and source S1; AS200: the PIM-SM2 domain with Router D, Router E, and receivers, and the PIM-SM3 domain with Router F, Router G, and source S2; BGP peers connect the ASs)
Configuration Roadmap
Solution: set up an MSDP peer on the RP in each PIM-SM domain, and configure static RPF
peers among the MSDP peers. The source information can thus be transmitted between domains
without changing the unicast topology. The steps are as follows:
1. Configure an IP address for each interface, configure OSPF in the ASs, configure EBGP
between ASs, and configure BGP and OSPF to import routes into each other.
2. Enable multicast on each router and PIM-SM on each interface, enable IGMP on the
interface connected to hosts, and configure the positions of Loopback 0 interfaces, the C-
BSR, and the C-RP. The Loopback 0 interfaces on Router C, Router D, and Router F act
as C-BSRs and C-RPs of their PIM-SM domains.
3. Set up MSDP peer relationships between RPs in each domain, and the MSDP peer
relationship between Router C and Router D and between Router C and Router F.
4. Specify static RPF peers for the MSDP peers. The static RPF peers of Router C are Router
D and Router F. Router D and Router F each have only one static RPF peer, that is, Router C.
According to RPF rules, routers receive SA messages from their static RPF peers.
Data Preparation
To complete this configuration, you need the following data:
l The AS number of Router A, Router B and Router C is 100. The router IDs of the three
routers are 1.1.1.3, 1.1.1.2 and 1.1.1.1 respectively.
l The AS number of Router D and Router E is 200. The router IDs of the two routers are
2.2.2.2 and 2.2.2.1 respectively.
l The AS number of Router F and Router G is 200. The router ID of Router F is 3.3.3.3.
l The name of policy adopted by Router C in filtering the SA message from Router D and
Router F is list-df.
l The name of policy adopted by Router D and Router F in filtering the SA message from
Router C is list-c.
Procedure
Step 1 Configure an IP address for each interface and a unicast routing protocol.
# As shown in Figure 6-2, configure an IP address and mask for each interface. Configure OSPF
in the AS. Configure EBGP between Router A and Router F, and between Router B and Router E. Configure
BGP and OSPF to import routes into each other. Ensure the normal communication between
routers on the network layer. Ensure dynamic routing updates between routers through the
unicast routing protocol. The procedures are not mentioned here.
# Enable multicast on each router and enable PIM-SM on each interface. The configurations of
the other routers are similar to that of Router C, and are not mentioned here.
[RouterC] multicast routing-enable
[RouterC] interface pos 1/0/0
[RouterC-Pos1/0/0] pim sm
[RouterC-Pos1/0/0] quit
[RouterC] interface pos 2/0/0
[RouterC-Pos2/0/0] pim sm
[RouterC-Pos2/0/0] quit
# Configure the BSR boundary on POS 1/0/0 of Router A, POS 2/0/0 of Router B, POS 2/0/0
of Router E and POS 1/0/0 of Router F. The configurations of Router B, Router E, and Router
F are similar to that of Router A, and are not mentioned here.
[RouterA] interface pos 1/0/0
[RouterA-Pos1/0/0] pim bsr-boundary
[RouterA-Pos1/0/0] quit
# Configure Router C as the static RPF peer of Router D and Router F. The configurations of
Router F are similar to that of Router D, and are not mentioned here.
[RouterD] ip ip-prefix list-c permit 192.168.0.0 16 greater-equal 16 less-equal 32
[RouterD] msdp
[RouterD-msdp] peer 192.168.1.1 connect-interface pos 1/0/0
[RouterD-msdp] static-rpf-peer 192.168.1.1 rp-policy list-c
----End
Configuration Files
Configuration file of Router C is as follows:
#
sysname RouterC
#
multicast routing-enable
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 192.168.1.1 255.255.255.0
pim sm
#
interface Pos2/0/0
undo shutdown
link-protocol ppp
ip address 192.168.4.1 255.255.255.0
pim sm
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
pim sm
#
ospf 1
area 0.0.0.0
network 192.168.1.0 0.0.0.255
network 192.168.4.0 0.0.0.255
network 1.1.1.1 0.0.0.0
#
pim
c-bsr LoopBack0
c-rp LoopBack0
#
ip ip-prefix list-df permit 192.168.0.0 16 greater-equal 16 less-equal 32
#
msdp
peer 192.168.3.2 connect-interface pos 1/0/0
peer 192.168.5.1 connect-interface pos 2/0/0
static-rpf-peer 192.168.3.2 rp-policy list-df
static-rpf-peer 192.168.5.1 rp-policy list-df
#
return
The configuration files of Router D and Router F are similar to the files mentioned previously,
and are not mentioned here.
Networking Requirements
As shown in Figure 6-3, the PIM-SM domain has multiple multicast sources and receivers. It
is required to set up MSDP peers in the PIM-SM domain to implement RP load balancing.
Figure 6-3 Networking diagram (a single PIM-SM domain with Router A through Router E; sources S1 and S2, receivers user1 and user2; Router C and Router D each have Loopback0, Loopback1, and Loopback10 interfaces and are MSDP peers)
Configuration Roadmap
Solution: configure an anycast RP. The receiver sends a Join message to the topologically nearest
RP, and the source sends a Register message to the topologically nearest RP. The steps are
as follows:
1. Configure an IP address for each interface and configure OSPF in the PIM-SM area.
2. Enable multicast on each router and PIM-SM on each interface and enable IGMP on the
interface connected to hosts.
3. Configure the same loopback interface address for Router C and Router D. Configure the
C-BSR on Loopback 1 interfaces and the C-RP on Loopback 10 interfaces.
4. Configure MSDP peers on Loopback 0 interfaces of Router C and Router D. According to
RPF rules, routers receive SA messages from the source RP.
Data Preparation
To complete this configuration, you need the following data:
l Address of group G is 225.1.1.1/24.
l Router ID of Router C is 1.1.1.1.
l Router ID of Router D is 2.2.2.2.
Procedure
Step 1 Configure an IP address on each interface and a unicast routing protocol.
# Configure an IP address and mask for each interface according to Figure 6-3. Configure OSPF.
Ensure the communication between routers on the network layer. Ensure dynamic routing
updates between routers by means of the unicast routing protocol. The procedures are not
mentioned here.
Step 2 Enable multicast on each router and configure PIM-SM on each interface.
# Enable multicast on each router, and enable PIM-SM on each interface. Enable IGMP on the
interface connected to hosts. The configurations of other routers are similar to that of Router C,
and are not mentioned here.
[RouterC] multicast routing-enable
[RouterC] interface gigabitethernet 3/0/0
[RouterC-GigabitEthernet3/0/0] igmp enable
[RouterC-GigabitEthernet3/0/0] pim sm
[RouterC-GigabitEthernet3/0/0] quit
[RouterC] interface gigabitethernet 2/0/0
[RouterC-GigabitEthernet2/0/0] pim sm
[RouterC-GigabitEthernet2/0/0] quit
[RouterC] interface pos 1/0/0
[RouterC-Pos1/0/0] pim sm
[RouterC-Pos1/0/0] quit
Step 3 Configure Loopback 1 and Loopback 10 interfaces, and the C-BSR and C-RP.
# Configure the address of Loopback 1 interface and the address of Loopback 10 interface on
Router C and Router D respectively. Configure the C-BSR on Loopback 1 and the C-RP on
Loopback 10. The configurations of Router D are similar to that of Router C, and are not
mentioned here.
[RouterC] interface loopback 1
[RouterC-LoopBack1] ip address 3.3.3.3 255.255.255.255
[RouterC-LoopBack1] pim sm
[RouterC-LoopBack1] quit
[RouterC] interface loopback 10
[RouterC-LoopBack10] ip address 10.1.1.1 255.255.255.255
[RouterC-LoopBack10] pim sm
[RouterC-LoopBack10] quit
[RouterC] pim
[RouterC-pim] c-bsr loopback 1
[RouterC-pim] c-rp loopback 10
[RouterC-pim] quit
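The MSDP peer setup between the Loopback 0 interfaces (roadmap step 4) can be sketched on Router C as follows; the matching commands appear in the configuration file at the end of this example. Router D is configured symmetrically, with 1.1.1.1 as the peer address.
[RouterC] msdp
[RouterC-msdp] originating-rp loopback 0
[RouterC-msdp] peer 2.2.2.2 connect-interface loopback 0
[RouterC-msdp] quit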
# Run the display pim routing-table command. You can view PIM routes on a router. In the
PIM-SM domain, S1 (10.110.5.100/24) sends multicast data to G (225.1.1.1). User 1
that joins G receives the multicast data sent to G. Comparing the PIM routing tables on
Router C and Router D, you can find that the valid RP is Router C. That is, S1 registers with
Router C, and User 1 sends Join messages to Router C.
<RouterC> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
RP: 10.1.1.1 (local)
Protocol: pim-sm, Flag: WC
UpTime: 00:28:49
Upstream interface: Register
Upstream neighbor: NULL
<RouterD> display pim routing-table
There is no display.
# User 1 leaves G, and S1 stops sending multicast data to G. You can run the reset multicast
routing-table all and reset multicast forwarding-table all commands to clear the multicast
routing entries and multicast forwarding entries on Router C.
# User 2 joins G, and S2 (10.110.6.100/24) sends multicast data to G. Comparing the PIM
routing tables on Router C and Router D, you can find that the valid RP is Router D. That is, S2
registers with Router D, and User 2 sends Join messages to Router D.
<RouterC> reset multicast routing-table all
<RouterC> reset multicast forwarding-table all
There is no display.
<RouterD> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
RP: 10.1.1.1 (local)
Protocol: pim-sm, Flag: WC RPT
UpTime: 00:07:23
Upstream interface: NULL,
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet3/0/0,
Protocol: pim-sm, UpTime: 00:07:23, Expires:-
(10.110.6.100, 225.1.1.1)
RP: 10.1.1.1 (local)
Protocol: pim-sm, Flag: SPT MSDP ACT
UpTime: 00:10:20
Upstream interface: GigabitEthernet2/0/0
Upstream neighbor: 10.110.2.2
RPF prime neighbor: 10.110.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet3/0/0
Protocol: pim-sm, UpTime: 00:10:22, Expires: -
----End
Configuration Files
Configuration file of Router C is as follows.
#
sysname RouterC
#
multicast routing-enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.110.4.1 255.255.255.0
igmp enable
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.1.1 255.255.255.0
pim sm
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 192.168.1.1 255.255.255.0
pim sm
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
pim sm
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
pim sm
#
interface LoopBack10
ip address 10.1.1.1 255.255.255.255
pim sm
#
ospf 1
area 0.0.0.0
network 10.110.1.0 0.0.0.255
network 10.110.4.0 0.0.0.255
network 1.1.1.1 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.1.1.1 0.0.0.0
network 192.168.1.0 0.0.0.255
#
pim
c-bsr LoopBack1
c-rp LoopBack10
#
msdp
originating-rp LoopBack0
peer 2.2.2.2 connect-interface LoopBack0
#
return
The configuration of Router D is similar to that of Router C, and is not mentioned here.
7 MBGP Configuration
This chapter describes the MBGP fundamentals and configuration steps and maintenance for
MBGP functions, along with typical examples.
7.1 MBGP Introduction
This section describes the principle and the concepts of Multicast BGP (MBGP).
7.2 Configuring Basic MBGP Functions
This section describes how to configure basic MBGP functions.
7.3 Configuring the Policy for Advertising MBGP Routes
This section describes how to configure the policy for advertising MBGP routes.
7.4 Configuring the Policy for Exchanging Routes Between MBGP Peers
This section describes how to configure the policy for filtering the routes between MBGP peers.
7.5 Configuring MBGP Route Attributes
This section describes how to configure MBGP route attributes.
7.6 Configuring MBGP Route Dampening
This section describes how to configure MBGP route dampening.
7.7 Maintaining MBGP
This section describes how to clear MBGP statistics and reset connections between MBGP peers.
7.8 Configuration Examples
This section provides the configuration example of MBGP.
NOTE
This chapter describes the configuration of MP-BGP applied to multicast, that is, MBGP configuration.
For the details of MP-BGP, refer to the chapter "BGP Configuration" in the HUAWEI NetEngine80E/
40E Router Configuration Guide - IP Routing.
Applicable Environment
Perform the Reverse Path Forwarding (RPF) check on multicast packets according to the
following factors:
Setting up MBGP connections in the multicast address family view can provide routing
information for the RPF check.
Pre-configuration Tasks
Before configuring basic MBGP functions, you need to configure basic multicast functions.
Data Preparation
To configure basic MBGP functions, you need the following data.
No. Data
Context
CAUTION
If the two routers that plan to set up the MBGP peer relationship have set up a BGP connection,
skip the section.
Do as follows on the two routers between which the MBGP peer relationship needs to be set up.
Procedure
Step 1 Run:
system-view
BGP is enabled, the local AS number is set, and the BGP view is displayed.
Step 3 (Optional) Run:
router-id ipv4-address
The local interface and the source address used to set up a BGP connection are specified.
If the BGP connection is set up through a Loopback interface or a sub-interface, the command
is required.
Step 6 (Optional) Run:
peer { ip-address | group-name } ebgp-max-hop [ number ]
NOTE
For details of BGP peers, refer to the HUAWEI NetEngine80E/40E Router Configuration Guide - IP
Routing.
----End
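As a sketch of the preceding steps, the commands for setting up a BGP connection through a Loopback interface might look as follows (the AS numbers, peer address, and interface name are assumptions for illustration only):

```
<RouterA> system-view
[RouterA] bgp 100
[RouterA-bgp] router-id 1.1.1.1
[RouterA-bgp] peer 10.2.2.2 as-number 200
[RouterA-bgp] peer 10.2.2.2 connect-interface LoopBack0
[RouterA-bgp] peer 10.2.2.2 ebgp-max-hop 2
```

Because the assumed peer is an EBGP peer reached through a Loopback interface rather than a directly connected interface, the peer ebgp-max-hop command is used to allow the multi-hop TCP connection.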
Context
Do as follows on the router configured with a BGP peer:
Procedure
Step 1 Run:
system-view
Step 3 Run:
ipv4-family multicast
MBGP is enabled on the original BGP peer or peer group. The original BGP peer becomes an
MBGP peer.
The parameters of the command are explained as follows:
l group-name: specifies the original BGP peer group.
l peer-address: specifies the IP address of the original remote BGP peer.
----End
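Building on an existing BGP peer, enabling MBGP might look as follows (the peer address 10.2.2.2 is an assumption):

```
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] peer 10.2.2.2 enable
```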
Context
CAUTION
A route reflector is valid only for IBGP peers. Before performing the configuration, you must
establish the IBGP peer relationships between the MBGP route reflector and its clients.
Procedure
Step 1 Run:
system-view
The local host is configured as a route reflector, and the peer (group) is specified as a client of
the route reflector.
The parameters of the command are explained as follows:
By default, the route reflector uses its router ID as the cluster ID.
----End
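A sketch of configuring an MBGP route reflector in the multicast address family view (the client address and cluster ID are assumptions):

```
[RouterB-bgp] ipv4-family multicast
[RouterB-bgp-af-multicast] reflector cluster-id 1
[RouterB-bgp-af-multicast] peer 10.3.3.3 reflect-client
```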
Context
CAUTION
MBGP routes are originated from the following:
l Routes statically imported by using the network command.
l Routes imported by using the import-route command.
Users can import one or more types of local routes as required.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bgp as-number
Step 3 Run:
ipv4-family multicast
Step 4 Run:
network network-address [ mask-length | mask ] [ route-policy route-policy-name ]
Step 5 Run:
import-route protocol [ process-id ] [ med med-value | route-policy route-policy-
name ] *
l protocol [ process-id ]: specifies the routing protocol and the process ID of the routing
protocol from which routes are imported. The routing protocol contains direct, static, rip,
isis and ospf. When the routing protocol is isis, ospf or rip, the process ID must be specified.
l med med-value: specifies the MED value assigned to an imported route.
l route-policy route-policy-name: specifies the route filtering policy. Only the route that
passes the filtering of the policy is imported.
The default-route imported command needs to work with the import-route command to
import default routes. The default routes cannot be imported by using only the import-route
command. The default-route imported command is used to import the default routes that exist
in the local routing table.
----End
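A sketch combining the preceding commands (the network address and the OSPF process ID are assumptions):

```
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] network 10.1.1.0 255.255.255.0
[RouterA-bgp-af-multicast] import-route ospf 1
[RouterA-bgp-af-multicast] default-route imported
```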
Prerequisite
The configurations of basic MBGP functions are complete.
Procedure
l Run the display bgp multicast peer [ [ peer-address ] verbose ] command to check
information about an MBGP peer.
l Run the display bgp multicast group [ group-name ] command to check information about
an MBGP peer group.
l Run the display bgp multicast network command to check the routing information
advertised by MBGP.
l Run the display bgp multicast routing-table [ network-address [ mask-length [ longer-
prefixes ] | mask [ longer-prefixes ] ] ] command to check the MBGP routing table.
----End
Applicable Environment
A router configured with the MBGP peer advertises the local routing information to a remote
peer. Based on the actual networking, users can adopt the following policies as required:
l Whether MBGP changes the next hop when advertising a route to IBGP peers
l Whether MBGP advertises all local routes or only the locally aggregated routes
l Whether MBGP advertises default routes
l Whether MBGP advertises community attributes or extended community attributes
l Whether a BGP Update message sent by an MBGP peer carries the private AS number
Pre-configuration Tasks
Before configuring the policy for advertising MBGP routes, complete the task of Configuring
Basic MBGP Functions.
Data Preparation
To configure the policy for advertising MBGP routes, you need the following data.
No. Data
1 AS number
Context
Do as follows on the router configured with an MBGP peer.
NOTE
The configuration is optional, and is valid only for an IBGP peer or peer group.
Procedure
Step 1 Run:
system-view
The local address is configured as the next hop of routes, when MBGP advertises routes to an
MBGP peer or peer group.
The parameters of the command are explained as follows:
l group-name: specifies an MBGP peer group.
l peer-address: specifies the IP address of a remote MBGP peer.
----End
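A sketch of configuring the local address as the next hop of routes advertised to an IBGP peer (the peer address is an assumption):

```
[RouterB-bgp] ipv4-family multicast
[RouterB-bgp-af-multicast] peer 10.3.3.3 next-hop-local
```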
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is optional. By default, MBGP does not aggregate local routes.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bgp as-number
----End
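The aggregation command itself does not appear in the extract above. On this platform, manual aggregation is typically done with the aggregate command in the multicast address family view; the following is a sketch under that assumption (the aggregate prefix is also an assumption):

```
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] aggregate 10.1.0.0 16 detail-suppressed
```

With detail-suppressed, only the aggregated route is advertised and the specific routes are suppressed.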
Context
Do as follows on the router configured with an MBGP peer:
NOTE
Procedure
Step 1 Run:
system-view
Step 4 Run:
peer { group-name | peer-address } default-route-advertise [ route-policy route-
policy-name ]
----End
Context
Do as follows on the router configured with an MBGP peer:
Procedure
Step 1 Run:
system-view
The local peer is configured to advertise the community attribute to an MBGP peer group or a
remote MBGP peer.
By default, the local peer does not advertise the community attribute.
The parameters of this command are explained as follows:
l group-name: specifies the name of an MBGP peer group.
l peer-address: specifies the IP address for a remote MBGP peer.
l advertise-community: advertises the community attribute.
The local peer is configured to advertise the extended community attribute to an MBGP peer
group or a remote MBGP peer.
By default, the local peer does not advertise the extended community attribute.
----End
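A sketch of advertising the community attribute and the extended community attribute to a peer (the peer address is an assumption):

```
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] peer 10.2.2.2 advertise-community
[RouterA-bgp-af-multicast] peer 10.2.2.2 advertise-ext-community
```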
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is applicable only to an EBGP peer. By default, an update message can carry
private AS numbers.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bgp as-number
Step 3 Run:
ipv4-family multicast
Step 4 Run:
peer { group-name | peer-address } public-as-only
A BGP update message sent to an MBGP peer group or a remote MBGP peer is configured not
to carry private AS numbers. Public AS numbers can be used directly on the Internet. Private
AS numbers cannot be advertised to the Internet and are used only within private domains.
----End
Prerequisite
The configurations of the policy for advertising MBGP routes are complete.
Procedure
l Run the display bgp multicast routing-table community [ aa:nn ] & <0-13> [ no-
advertise | no-export | no-export-subconfed ] * [ whole-match ] command to check the
routing information of a specified MBGP community.
l Run the display bgp multicast routing-table community-filter { { community-filter-
name | basic-community-filter-number } [ whole-match ] | advanced-community-filter-
number } command to check the routes that match a specified MBGP community attribute
filter.
l Run the display bgp multicast network command to check the routing information
advertised by MBGP.
l Run the display bgp multicast routing-table [ network-address [ mask-length [ longer-
prefixes ] | mask [ longer-prefixes ] ] ] command to check the MBGP routing table.
l Run the display bgp multicast routing-table cidr command to check CIDR routes.
l Run the display bgp multicast routing-table statistics command to check the statistics
of the MBGP routing table.
----End
Applicable Environment
Based on the actual network, users can configure the related route exchange polices to control
the routing information transmitted between MBGP peers.
For a router configured with MBGP, the routes exchanged between peers are classified into the
following types:
l import: filters the routes sent by a specified peer. Only the routes that pass the filtering are
received.
l export: filters the routes sent to a specified peer or peer group. Only the routes that pass
the filtering are sent.
Pre-configuration Task
Before configuring the route filtering policy between MBGP peers, complete the following task:
Data Preparation
To configure the route filtering policy between MBGP peers, you need the following data.
No. Data
3 Name of the routing policy, sequence number of the node, and matching rule
4 ID of the ACL
5 AS-Path filter
6 IP-Prefix
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is optional and is applicable to the route exchange with any peer.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bgp as-number
MBGP routing policy is configured to control the route exchange with any MBGP peer.
The parameters of the command are explained as follows:
l basic-acl-number and acl-name acl-name: specifies the address filtering table.
l ip-prefix-name: specifies the address prefix list.
l import: filters the routes sent by any MBGP peer. Only the routes that pass the filtering are
received.
Step 5 Run:
filter-policy { basic-acl-number | acl-name acl-name | ip-prefix ip-prefix-name }
export [ protocol [ process-id ] ]
MBGP routing policy is configured to control the route exchange with any MBGP peer.
The parameters of the command are explained as follows:
l basic-acl-number and acl-name acl-name: specifies the address filtering table.
l ip-prefix-name: specifies the address prefix list.
l export [ protocol [ process-id ] ]: filters the routes sent to any MBGP peer. In fact, the filtering
is performed when the local routes are imported to the MBGP routing table. This command
is used to import the local routes that pass the filtering to the MBGP routing table, and then
advertise the routing information in the MBGP routing table.
----End
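A sketch of filtering received routes with a basic ACL (the ACL number and the denied prefix are assumptions):

```
[RouterA] acl number 2000
[RouterA-acl-basic-2000] rule deny source 10.9.9.0 0.0.0.255
[RouterA-acl-basic-2000] rule permit
[RouterA-acl-basic-2000] quit
[RouterA] bgp 100
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] filter-policy 2000 import
```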
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is optional. By default, the route filtering policy based on route-policy is not configured.
Procedure
Step 1 Run:
system-view
The MBGP routing policy based on route-policy is configured to control the route exchange
with a specified remote MBGP peer.
The parameters of the command are explained as follows:
l group-name specifies an MBGP peer group.
l peer-address: specifies the IP address of the remote MBGP peer.
l route-policy-name: specifies the routing policy.
l import: filters the routes sent by a specified remote MBGP peer or peer group. Only the
routes that pass the filtering are received.
l export: filters the routes sent to a specified remote MBGP peer or peer group. Only the routes
that pass the filtering are sent.
----End
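A sketch of applying a route-policy to the routes received from a specified peer (the prefix list, policy name, and peer address are assumptions):

```
[RouterA] ip ip-prefix pref10 permit 10.1.0.0 16
[RouterA] route-policy policy-in permit node 10
[RouterA-route-policy] if-match ip-prefix pref10
[RouterA-route-policy] quit
[RouterA] bgp 100
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] peer 10.2.2.2 route-policy policy-in import
```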
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is optional. By default, the route filtering policy based on the ACL is not configured.
Procedure
Step 1 Run:
system-view
The MBGP routing policy based on the ACL is configured to control the route exchange with
a specified remote MBGP peer.
----End
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is optional. By default, the route filtering policy based on the AS-Path list is not
configured.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bgp as-number
Step 3 Run:
ipv4-family multicast
Step 4 Run:
peer { group-name | peer-address } as-path-filter filter-number { import | export }
The MBGP routing policy based on the AS-Path list is configured to control the route exchange
with a specified remote MBGP peer.
l import: filters the routes sent by a specified remote MBGP peer or peer group. Only the
routes that pass the filtering are received.
l export: filters the routes sent to a specified remote MBGP peer or peer group. Only the routes
that pass the filtering are sent.
----End
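A sketch of filtering received routes by AS-Path (the filter number, regular expression, and peer address are assumptions; the expression matches routes whose AS_Path begins with AS 200):

```
[RouterA] ip as-path-filter 1 permit ^200
[RouterA] bgp 100
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] peer 10.2.2.2 as-path-filter 1 import
```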
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is optional. By default, the route filtering policy based on the IP prefix list is not
configured.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bgp as-number
Step 3 Run:
ipv4-family multicast
Step 4 Run:
peer { group-name | peer-address } ip-prefix prefix-name { import | export }
The MBGP routing policy based on the IP prefix list is configured to control the route exchange
with a specified remote MBGP peer.
----End
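A sketch of filtering received routes with an IP prefix list (the prefix list name, index, prefix, and peer address are assumptions):

```
[RouterA] ip ip-prefix pref20 index 10 permit 10.2.0.0 16
[RouterA] bgp 100
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] peer 10.2.2.2 ip-prefix pref20 import
```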
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is optional. By default, the maximum number of routes received from peers is not
configured.
Procedure
Step 1 Run:
system-view
The number of routes received from an MBGP peer or peer group is limited.
This command limits the number of routes that can be received from a peer. You can set the
following parameters as required to control BGP behavior after the number of routes received
from a peer exceeds the threshold.
l alert-only: indicates that only an alarm is generated when the number of routes exceeds the
limit.
l idle-forever: indicates that when the number of routes exceeds the limit, the connection is
not automatically set up until the reset bgp command is used.
l idle-timeout value: indicates the timeout timer for reestablishing the connection
automatically after the number of routes exceeds the limit. value specifies the value of the
timer.
l If the three parameters are not set, the peer relationship is disconnected. The router retries
setting up a connection after 30 seconds. An alarm is generated and recorded in the log.
----End
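The command itself does not appear in the extract above. On this platform, limiting the routes received from a peer is typically done with the peer route-limit command; the following is a sketch under that assumption (the peer address and limit value are also assumptions):

```
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] peer 10.2.2.2 route-limit 10000 alert-only
```

With alert-only, the router only generates an alarm when the limit is exceeded instead of tearing down the peer relationship.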
Procedure
l Run the display bgp multicast routing-table different-origin-as command to check the
routes that have the same destination address but different origin ASs.
l Run the display bgp multicast routing-table regular-expression [ as-regular-
expression ] command to check the routing information that matches the AS regular
expression.
l Run the display bgp multicast paths [ as-regular-expression ] command to check the
information about the AS paths.
l Run the display bgp multicast routing-table as-path-filter as-path-filter-number
command to check the routing information that matches the filtering list.
l Run the display bgp multicast routing-table community-filter { { community-filter-
name | basic-community-filter-number } [ whole-match ] | advanced-community-filter-
number } command to check the routes that match the MBGP community list.
l Run the display bgp multicast routing-table peer peer-address { advertised-routes |
received-routes } [ statistics ] command to check the routing information received from or
sent to a specified MBGP peer.
l Run the display bgp multicast network command to check the routing information
advertised by MBGP.
----End
Applicable Environment
MBGP has many route attributes. You can change the optimal route selection by using the
following attributes.
Pre-configuration Tasks
Before configuring the policy for MBGP route selection, complete the task of Configuring Basic
MBGP Functions.
Data Preparation
To configure the policy for MBGP route selection, you need the following data.
No. Data
1 AS number
3 Preferred value
4 Local_Pref
Context
Do as follows on the router configured with an MBGP peer:
NOTE
Procedure
Step 1 Run:
system-view
The preferred value is set for a route learnt from an MBGP peer group or a remote MBGP peer.
The route with the greatest preferred value is selected as the route to a specified network.
The parameters of the command are explained as follows:
----End
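The command itself does not appear in the extract above. On this platform, the preferred value for routes learned from a peer is typically set with the peer preferred-value command; the following is a sketch under that assumption (the peer address and value are also assumptions):

```
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] peer 10.2.2.2 preferred-value 100
```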
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is optional. By default, the default preferences of EBGP routes, IBGP routes, and local
routes are 255.
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is optional. By default, the Local_Pref value of the MBGP route is 100.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bgp as-number
Step 3 Run:
ipv4-family multicast
Step 4 Run:
default local-preference preference
When a BGP router obtains multiple routes with the same destination but different next hops
from different IBGP peers, the route with the greatest Local_Pref value is preferred.
----End
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is optional. When a BGP router obtains multiple routes with the same destination but
different next hops from different EBGP peers, the route with the smallest MED value is preferred if other
conditions of these routes are the same.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bgp as-number
Step 3 Run:
ipv4-family multicast
Step 4 Run:
default med med
Step 5 Run:
compare-different-as-med
By default, the BGP router compares only the MED values of the routes from the same AS.
Step 6 Run:
deterministic-med
Deterministic-MED is enabled.
If this command is not configured, when an optimal route is to be selected from among routes
which are received from different ASs and which carry the same prefix, the sequence in which
routes are received is relevant to the result of route selection. After the command is configured,
however, when an optimal route is to be selected from among routes which are received from
different ASs and which carry the same prefix, routes are first grouped according to the leftmost
AS in the AS_Path. Routes with the same leftmost AS are grouped together, and after
comparison, an optimal route is selected for the group. The group optimal route is then compared
with optimal routes from other groups to determine the final optimal route. This mode of route
selection ensures that the sequence in which routes are received is no longer relevant to the result
of route selection.
Step 7 Run:
bestroute med-none-as-maximum
When a route carries no MED value, the route is treated as having the maximum MED value.
Step 8 Run:
bestroute med-confederation
The router is enabled to compare the MED values of routes received from peers within the same confederation.
----End
Context
Do as follows on the router configured with the MBGP peer:
Procedure
Step 1 Run:
system-view
----End
Procedure
l Run the display bgp multicast routing-table command to check the routes of the MBGP
routing table.
l Run the display bgp multicast routing-table statistics command to check the statistics
of the MBGP routing table.
----End
Pre-configuration Tasks
Before configuring MBGP route dampening, complete the following task:
Data Preparation
To configure MBGP route dampening, you need the following data.
No. Data
Context
Do as follows on the router configured with an MBGP peer:
NOTE
The configuration is valid only for EBGP routes. If no parameters are specified, the default
dampening values are used.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bgp as-number
Step 3 Run:
ipv4-family multicast
Step 4 Run:
dampening [ half-life-reach reuse suppress ceiling | route-policy route-policy-
name ] *
l suppress: specifies the threshold for suppressing routes. The value must be greater than the
value of reuse. When the penalty is greater than the threshold, the routes are suppressed.
l ceiling: specifies the ceiling value of the threshold. The value must be greater than the value
of suppress.
l route-policy route-policy-name: specifies the routing policy. The configuration is applicable
to the routes that meet certain matching conditions.
----End
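With concrete values (the values below are assumptions chosen to satisfy the constraint ceiling > suppress > reuse), the dampening configuration might be:

```
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] dampening 15 750 2000 16000
```

Here 15 is the assumed half-life, 750 the reuse threshold, 2000 the suppress threshold, and 16000 the ceiling.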
Procedure
l Run the display bgp multicast routing-table dampened command to check MBGP
dampened routes.
l Run the display bgp multicast routing-table dampening parameter command to check
MBGP route dampening parameters.
l Run the display bgp multicast routing-table flap-info [ network-address [ mask [ longer-
match ] | mask-length [ longer-match ] ] | as-path-filter as-path-filter-number | regular-
expression as-regular-expression ] command to check the statistics of MBGP route
flapping.
----End
CAUTION
The MBGP peer relationship is deleted after you reset MBGP connections with the reset bgp
multicast command. So, confirm the action before you use the command.
Procedure
l Run the reset bgp multicast peer-address command in the user view to reset the MBGP
connection between specified peers.
l Run the reset bgp multicast all command in the user view to reset all MBGP connections.
l Run the reset bgp multicast group group-name command in the user view to reset the
MBGP connections between all peers in the peer group.
l Run the reset bgp multicast external command in the user view to reset external
connections.
l Run the reset bgp multicast internal command in the user view to reset internal
connections.
----End
Context
CAUTION
The MBGP statistics cannot be restored after you clear them. So, confirm the action before you
use the command.
Procedure
l Run the reset bgp multicast dampening [ network-address [ mask | mask-length ] ]
command in the user view to clear the MBGP routing information.
l Run the reset bgp multicast flap-info [ network-address [ mask-length | mask ] | as-path-
filter as-path-list-number | regrexp regrexp ] command in the user view to clear the
information about the MBGP route flapping.
----End
Networking Requirements
As shown in Figure 7-1, the receiver receives information in multicast mode. The receiver and
the source reside in different ASs. The MBGP peer relationship is established between ASs to
transmit multicast routing information.
[Figure 7-1 Networking diagram: Router A resides in AS 100; Router B, Router C, and Router D
reside in AS 200. The MBGP peer relationship is set up between the ASs. The routers
interconnect through POS interfaces and Loopback0 interfaces, and the Receiver attaches to
Router C through GE2/0/0.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an address for each interface to ensure internetworking within the AS in unicast
mode.
2. Configure the MBGP peer relationship and set up inter-AS multicast routes.
3. Configure the MBGP routes to be advertised.
4. Enable multicast on each router.
5. Configure basic PIM-SM functions in each AS and enable IGMP on the interface connected
to hosts.
Data Preparation
To complete the configuration, you need the following data:
l AS number of Router A is 100.
l Router B, Router C, and Router D belong to AS 200.
l Multicast group address is 225.1.1.1 and source address is 10.10.10.10/24.
Procedure
Step 1 Configure an IP address for each interface and OSPF in an AS.
# Configure an IP address and mask for each interface and the OSPF protocol in the AS shown
in Figure 7-1. Ensure that Router B, Router C, Router D, and the Receiver in AS 200 can
interconnect at the network layer, that the routers can learn the routes to the loopback
interfaces, and that routing updates between routers are performed dynamically by means of a
unicast routing protocol. OSPF process 1 is adopted in the configuration, and the detailed
process is not mentioned here.
Step 2 Configure BGP, enable MBGP, and configure an MBGP peer.
# Enable BGP and configure an MBGP peer on Router A.
[RouterA] bgp 100
[RouterA-bgp] peer 192.1.1.2 as-number 200
[RouterA-bgp] ipv4-family multicast
[RouterA-bgp-af-multicast] peer 192.1.1.2 enable
[RouterA-bgp-af-multicast] quit
[RouterA-bgp] quit
# Configure Router A.
[RouterA] multicast routing-enable
[RouterA] interface Pos1/0/0
[RouterA-Pos1/0/0] pim sm
[RouterA-Pos1/0/0] quit
[RouterA] interface GigabitEthernet2/0/0
[RouterA-GigabitEthernet2/0/0] pim sm
[RouterA-GigabitEthernet2/0/0] quit
# Configure Router B.
[RouterB] multicast routing-enable
[RouterB] interface Pos1/0/0
[RouterB-Pos1/0/0] pim sm
[RouterB-Pos1/0/0] quit
[RouterB] interface Pos2/0/0
[RouterB-Pos2/0/0] pim sm
[RouterB-Pos2/0/0] quit
[RouterB] interface Pos3/0/0
[RouterB-Pos3/0/0] pim sm
[RouterB-Pos3/0/0] quit
# Configure Router C.
[RouterC] multicast routing-enable
[RouterC] interface Pos1/0/0
[RouterC-Pos1/0/0] pim sm
[RouterC-Pos1/0/0] quit
[RouterC] interface GigabitEthernet2/0/0
[RouterC-GigabitEthernet2/0/0] pim sm
[RouterC-GigabitEthernet2/0/0] igmp enable
[RouterC-GigabitEthernet2/0/0] quit
[RouterC] interface Pos3/0/0
[RouterC-Pos3/0/0] pim sm
[RouterC-Pos3/0/0] quit
# Configure Router D.
[RouterD] multicast routing-enable
[RouterD] interface Pos1/0/0
[RouterD-Pos1/0/0] pim sm
[RouterD-Pos1/0/0] quit
[RouterD] interface Pos2/0/0
[RouterD-Pos2/0/0] pim sm
[RouterD-Pos2/0/0] quit
# Configure Router B.
[RouterB] interface loopback 0
[RouterB-LoopBack0] ip address 2.2.2.2 255.255.255.255
[RouterB-LoopBack0] pim sm
[RouterB-LoopBack0] quit
[RouterB] pim
[RouterB-pim] c-bsr loopback 0
[RouterB-pim] c-rp loopback 0
[RouterB] quit
# Configure Router B.
[RouterB] interface Pos1/0/0
[RouterB-Pos1/0/0] pim bsr-boundary
[RouterB-Pos1/0/0] quit
# Configure Router B.
[RouterB] msdp
[RouterB-msdp] peer 192.1.1.1 connect-interface Pos 1/0/0
[RouterB-msdp] quit
# Run the display msdp brief command to check the MSDP peer relationship between
routers. For example, the brief information about the MSDP peer relationship on Router B is
displayed as follows:
[RouterB] display msdp brief
MSDP Peer Brief Information of VPN-Instance: public net
Configured Up Listen Connect Shutdown Down
1 1 0 0 0 0
Peer's Address State Up/Down time AS SA Count Reset Count
192.1.1.1 Up 00:07:17 100 1 0
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 192.1.1.1 255.255.255.0
pim bsr-boundary
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.10.10.1 255.255.255.0
pim sm
#
interface loopback0
ip address 1.1.1.1 255.255.255.255
pim sm
#
pim
c-bsr loopback 0
c-rp loopback 0
#
bgp 100
peer 192.1.1.2 as-number 200
#
ipv4-family unicast
undo synchronization
import-route direct
peer 192.1.1.2 enable
#
ipv4-family multicast
undo synchronization
import-route direct
peer 192.1.1.2 enable
#
msdp
peer 192.1.1.2 connect-interface Pos1/0/0
#
return
l Configuration file of Router B
#
sysname RouterB
#
multicast routing-enable
#
interface pos1/0/0
undo shutdown
link-protocol ppp
ip address 192.1.1.2 255.255.255.0
pim bsr-boundary
pim sm
#
interface pos2/0/0
undo shutdown
link-protocol ppp
ip address 194.1.1.2 255.255.255.0
pim sm
#
interface pos3/0/0
undo shutdown
link-protocol ppp
ip address 193.1.1.2 255.255.255.0
pim sm
#
interface loopback 0
ip address 2.2.2.2 255.255.255.255
pim sm
#
pim
c-bsr loopback 0
c-rp loopback 0
#
ospf 1
area 0.0.0.0
network 193.1.1.0 0.0.0.255
network 194.1.1.0 0.0.0.255
network 2.2.2.2 0.0.0.0
#
bgp 200
peer 192.1.1.1 as-number 100
peer 193.1.1.1 as-number 200
peer 194.1.1.1 as-number 200
#
ipv4-family unicast
undo synchronization
import-route direct
peer 192.1.1.1 enable
peer 193.1.1.1 enable
peer 194.1.1.1 enable
#
ipv4-family multicast
undo synchronization
import-route direct
peer 192.1.1.1 enable
peer 193.1.1.1 enable
peer 194.1.1.1 enable
#
msdp
peer 192.1.1.1 connect-interface Pos 1/0/0
#
return
l Configuration file of Router C
#
sysname RouterC
#
multicast routing-enable
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 195.1.1.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 22.22.22.1 255.255.255.0
igmp enable
pim sm
#
interface pos3/0/0
undo shutdown
link-protocol ppp
ip address 193.1.1.1 255.255.255.0
pim sm
#
interface loopback0
ip address 3.3.3.3 255.255.255.255
pim sm
#
ospf 1
area 0.0.0.0
network 193.1.1.0 0.0.0.255
network 195.1.1.0 0.0.0.255
network 22.22.22.0 0.0.0.255
network 3.3.3.3 0.0.0.0
#
bgp 200
peer 193.1.1.2 as-number 200
peer 195.1.1.2 as-number 200
#
ipv4-family unicast
undo synchronization
import-route direct
peer 193.1.1.2 enable
peer 195.1.1.2 enable
#
ipv4-family multicast
undo synchronization
import-route direct
peer 193.1.1.2 enable
peer 195.1.1.2 enable
#
return
l Configuration file of Router D
#
sysname RouterD
#
multicast routing-enable
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 195.1.1.2 255.255.255.0
pim sm
#
interface Pos2/0/0
undo shutdown
link-protocol ppp
ip address 194.1.1.1 255.255.255.0
pim sm
#
interface loopback 0
ip address 4.4.4.4 255.255.255.255
pim sm
#
ospf 1
area 0.0.0.0
network 194.1.1.0 0.0.0.255
network 195.1.1.0 0.0.0.255
network 4.4.4.4 0.0.0.0
#
bgp 200
peer 194.1.1.2 as-number 200
peer 195.1.1.1 as-number 200
#
ipv4-family unicast
undo synchronization
import-route direct
peer 194.1.1.2 enable
peer 195.1.1.1 enable
#
ipv4-family multicast
undo synchronization
import-route direct
peer 194.1.1.2 enable
peer 195.1.1.1 enable
#
return
This chapter describes the MD VPN fundamentals and configuration steps and maintenance for
MD VPN functions, along with typical examples.
8.1 IPv4 Multicast VPN Introduction
This section describes the principle and concepts of MD VPN.
8.2 Configuring Basic MD VPN Functions
This section describes how to configure basic MD VPN functions.
8.3 Configuring Switch-MDT Switchover
This section describes how to configure Switch-MDT switchover.
8.4 Maintaining IPv4 Multicast VPN
This section describes how to monitor the running status of MSDP and control the output of
logs.
8.5 Configuration Examples
This section provides several configuration examples of MD VPN.
As shown in Figure 8-1, when multicast VPN is deployed in the network, the network carries
three separate multicast services at the same time, that is, VPN A instance, VPN B instance, and
the public network instance. A multicast router PE at the edge of the public network supports
multi-instance. The PE acts as multiple multicast routers that run separately. Each instance
corresponds to a plane. The three planes are isolated.
[Figure 8-1 Multicast VPN networking: PE1, PE2, and PE3 at the edge of the public network
each run multiple instances. The VPN A instance (MD A, with sites 1 to 3), the VPN B instance
(MD B, with sites 4 to 6), and the public network instance (running PIM, with P routers)
correspond to three isolated planes.]
l Multicast data is transmitted among sites in the public network and each site in multicast
mode.
MD VPN
The NE80E/40E applies Multicast Domain (MD) to implement multicast VPN, which is called
MD VPN. In an MD, VPN data is transmitted through the Multicast Tunnel (MT).
The greatest advantage of the MD solution is that only PEs are required to support multi-instance.
The MD solution requires neither upgrading CEs and Ps nor modifying the previous Protocol
Independent Multicast (PIM) configuration on them. That is, the MD solution is transparent to
CEs and Ps.
Users can bind Share-Group to Multicast Tunnel Interfaces (MTIs), and set MTI parameters.
The MDT that uses the Share-Group address as its group address is called the Share-MDT. VPNs
use the Share-Group to uniquely identify a Share-MDT.
Multicast can be enabled in a PIM-SM network or a PIM-DM network. In the two different
modes, the process of setting up a Share-MDT is different.
Switch-MDT Switchover
When multicast data is forwarded through a Share-MDT in the public network, the multicast
data is forwarded to all PEs that support the same VPN instance, regardless of whether there is
a receiver in the site connected to the PEs. When the rate of VPN multicast data is high, this
may lead to data flooding, which wastes network bandwidth and adds load to the PEs.
The NE80E/40E optimizes the MD solution. Special Switch-MDTs are set up between the PEs
connected to VPN receivers and the PE connected to the VPN multicast source for high-rate
VPN multicast data flowing into the public network. The multicast data flow is then switched
from the Share-MDT to the Switch-MDT, so multicast data can be transmitted on demand.
Users can configure the switching conditions of a Switch-MDT.
For detailed implementation process, refer to the chapter "Multicast VPN" in the HUAWEI NetEngine80E/
40E Router Feature Description - IP Multicast.
Configure multicast VPN by using the MD solution. Set up Share-MDT to forward multicast
packets. When the multicast forwarding rate exceeds the threshold, Share-MDT is switched to
Switch-MDT.
Pre-configuration Tasks
Before configuring basic MD VPN functions, complete the following tasks:
Data Preparation
To configure basic MD VPN functions, you need the following data.
No. Data
2 Share-Group address
Context
Do as follows on the PE router:
Procedure
Step 1 Run:
set board-type slot slot-id { tunnel | netstream }
Step 2 Run:
system-view
Step 3 Run:
multicast-vpn slot slot-id
----End
Context
Do as follows on the PE router:
Procedure
Step 1 Run:
system-view
Step 2 Run:
multicast routing-enable
Step 3 Run:
ip vpn-instance vpn-instance-name
Step 4 Run:
multicast routing-enable
----End
Context
Do as follows on the PE router:
Procedure
Step 1 Run:
system-view
Step 2 Run:
ip vpn-instance vpn-instance-name
Step 3 Run:
multicast-domain share-group group-address binding mtunnel number
A share group is configured. The system automatically creates an MTI, binds the share group
to the MTI, and binds the MTI to the VPN instance.
----End
Context
Do as follows on the PE router:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface mtunnel number
Step 3 Run:
pim sm
or run:
pim dm
The PIM mode of the MTI must be the same as the PIM mode of the VPN instance to which the MTI belongs.
Step 4 Run:
ip address ip-address { mask | mask-length }
NOTE
The MTI address must be the same as the IP address that is used to set up the IBGP peer relationship on
the PE in the public network. Otherwise, the VPN multicast packets received on the MTI cannot pass the
RPF check.
When multicast packets are transmitted between the VPN instance and the public network
instance, the packets are encapsulated and decapsulated in GRE mode. If the size of a VPN
multicast packet is equal or close to the Maximum Transmission Unit (MTU) of the outgoing
interface of the public network instance, the packet exceeds that MTU after GRE encapsulation.
In this case, the multicast packet must be fragmented when it is sent out through the outgoing
interface of the public network instance, which makes it difficult for the remote receiving
interface to reassemble the fragments. Therefore, you can configure a smaller MTU for the MTI
so that multicast packets are fragmented before GRE encapsulation. This avoids reassembly on
the remote receiving interface and greatly improves efficiency.
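For example, a smaller MTU can be set in the MTI view. The following fragment is only a sketch,
not part of the original procedure: it assumes the generic interface mtu command is available
on the MTI, and 1400 bytes is an illustrative value chosen to leave room for the GRE and outer
IP headers.
[PE-A] interface MTunnel 0
[PE-A-MTunnel0] mtu 1400
[PE-A-MTunnel0] quit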
----End
Procedure
l Run the display multicast-domain vpn-instance vpn-instance-name share-group
[ local | remote ] command to check Share-Group information of a specified VPN instance
in an MD.
l Run the display pim vpn-instance vpn-instance-name interface mtunnel [ number ]
[ verbose ] command to check information about an MTI.
----End
Applicable Environment
When multicast data packets are forwarded through the share-MDT in the public network, the
packets are forwarded to all PEs that support the same VPN instance, regardless of whether there
is a receiver in the site to which a PE is connected. When the rate for forwarding VPN multicast
data packets is high, the packets may be flooded in the public network. This wastes network
bandwidth and increases the load of PEs.
In the NE80E/40E, you can determine whether to perform Switch-MDT switchover. If Switch-
MDT switchover is not configured, MDs always use the Share-MDT to transmit VPN multicast data.
l When the rate of the VPN multicast data entering the public network exceeds the threshold,
the VPN multicast data can be switched from the Share-MDT to a specified Switch-MDT. On-
demand multicast is thus implemented.
l After the VPN multicast data is switched to the Switch-MDT, the switchover conditions
may no longer be met. In this case, the VPN multicast data can be switched back from the
Switch-MDT to the Share-MDT.
Pre-configuration Tasks
Before configuring Switch-MDT switchover, complete the task of Configuring Basic MD VPN
Functions.
Data Preparation
To configure Switch-MDT switchover, you need the following data.
No. Data
3 Switching threshold
Context
Do as follows on the PE router:
NOTE
This configuration is optional. If this configuration is not done, Switch-MDT switchover cannot be
performed and Share-MDT is always used to transmit VPN multicast data.
Procedure
Step 1 Run:
system-view
Step 2 Run:
ip vpn-instance vpn-instance-name
Step 3 Run:
multicast-domain switch-group-pool switch-group-pool { network-mask | network-mask-
length } [ threshold threshold-value | acl { advanced-acl-number | acl-name } ] *
l switch-group-pool: specifies a switch-group-pool. It is recommended that the same VPN
instance on different PEs be configured with the same switch-group-pool. On a PE, the Switch-
Group address ranges of different VPNs cannot overlap.
l threshold-value: specifies the switching threshold. The default value is 0 kbit/s.
l { advanced-acl-number | acl-name }: specifies the advanced ACL filtering rules. By default,
packets are not filtered.
The duration for which the rate of VPN multicast data must remain below the threshold before
traffic is switched back from the Switch-MDT to the Share-MDT is also set.
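Putting the parameters together, the command can be used as follows. This fragment is an
illustrative sketch only: the VPN instance name, address pool, mask length, and threshold are
assumed values, not taken from the original example.
[PE] system-view
[PE] ip vpn-instance RED
[PE-vpn-instance-RED] multicast-domain switch-group-pool 225.2.2.0 28 threshold 2000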
----End
Procedure
l Run the following commands to check Switch-Group information received by a specified
VPN instance in the MD.
– display multicast-domain vpn-instance vpn-instance-name switch-group receive
brief
– display multicast-domain vpn-instance vpn-instance-name switch-group receive
[ active | group group-address | sender source-address | vpn-source-address [ mask
{ source-mask-length | source-mask } ] | vpn-group-address [ mask { group-mask-
length | group-mask } ] ] *
l Run the display multicast-domain vpn-instance vpn-instance-name switch-group
send [ group group-address | reuse interval | vpn-source-address [ mask { source-mask-
length | source-mask } ] | vpn-group-address [ mask { group-mask-length | group-
mask } ] ] * command to check Switch-Group information sent by a specified VPN instance
in the MD.
----End
Context
In routine maintenance, you can run the following commands in any view to check the running
status of IPv4 Multicast VPN.
Procedure
l Run the display multicast-domain vpn-instance vpn-instance-name share-group
[ local | remote ] command in any view to check information about Share-Group of a
specified VPN instance in an MD.
l Run the following commands in any view to check information about Switch-Group
received by a specified VPN instance in an MD.
– display multicast-domain vpn-instance vpn-instance-name switch-group receive
brief
– display multicast-domain vpn-instance vpn-instance-name switch-group receive
[ active | group group-address | sender source-address | vpn-source-address [ mask
{ source-mask-length | source-mask } ] | vpn-group-address [ mask { group-mask-
length | group-mask } ] ] *
l Run the display multicast-domain vpn-instance vpn-instance-name switch-group
send [ group group-address | reuse interval | vpn-source-address [ mask { source-mask-
length | source-mask } ] | vpn-group-address [ mask { group-mask-length | group-
mask } ] ] * command in any view to check information about the Switch-Group sent to a
specified VPN instance in an MD.
----End
Context
In the VPN instance on the source PE, if the number of VPN multicast data flows to be switched
exceeds the number of group addresses in the switch-group-pool of the Switch-MDT, the group
addresses in the switch-group-pool can be reused.
By default, logs of reused Switch-Group addresses are not recorded.
To learn the running status of the system or to locate faults through logs, do as follows on the PE:
Procedure
Step 1 Run:
system-view
----End
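The logging command itself is elided in the procedure above. As a hedged sketch only, assuming
that a multicast-domain log switch-group-reuse command is available in the VPN instance view
on this version (verify the exact syntax against the command reference):
[PE] system-view
[PE] ip vpn-instance RED
[PE-vpn-instance-RED] multicast-domain log switch-group-reuse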
Networking Requirements
In the single-AS MPLS/BGP VPN shown in Figure 8-2, the MD solution is used to deploy
multicast services.
(Figure 8-2 networking: PE-A, PE-B, and PE-C connect to each other through P in the public
network. CE-Ra, CE-Rb, and CE-Rc access VPN RED; CE-Bb and CE-Bc access VPN BLUE.
Source1 and Source2 are the multicast sources, and PC1 through PC4 are the receivers.)
NOTE
In Figure 8-2, GE1 stands for GigabitEthernet 1/0/0, GE2 stands for GigabitEthernet 2/0/0, and GE3 stands
for GigabitEthernet 3/0/0. The IP address of each interface is shown in the following table.
The devices support two processing modes of the multicast VPN service: distributed mode and integrated
mode. In distributed mode, you must run the multicast-vpn slot command to specify the slot that supports
multicast VPN. In integrated mode, you must run the set board-type slot and multicast-vpn slot
commands to set the service mode of the SPUC to Tunnel mode and to enable multicast VPN on the SPUC
separately. The following configuration example uses the integrated mode to configure the multicast VPN
service.
P GE1: 192.168.6.2/24 -
GE2: 192.168.7.2/24 -
GE3: 192.168.8.2/24 -
GE2: 10.110.2.2/24 -
GE2: 10.110.3.2/24 -
GE2: 10.110.4.2/24 -
GE3: 10.110.12.1/24 -
GE2: 10.110.5.2/24 -
GE3: 10.110.12.2/24 -
GE2: 10.110.6.2/24 -
Multicast source/receiver: The multicast source of VPN RED is Source1; the receivers include
PC1, PC2, and PC3. The multicast source of VPN BLUE is Source2; the receiver is PC4. In VPN
RED, the Share-Group address is 239.1.1.1 and the Switch-Group address pool ranges from
225.2.2.1 to 225.2.2.16. In VPN BLUE, the Share-Group address is 239.2.2.2 and the Switch-
Group address pool ranges from 225.4.4.1 to 225.4.4.16.
VPN instance to which the interfaces on PEs belong: On PE-A, GE2 and GE3 belong to the
VPN-RED instance, and GE1 and Loopback1 belong to the public network instance. On PE-B,
GE2 belongs to the VPN-BLUE instance, GE3 belongs to the VPN-RED instance, and GE1 and
Loopback1 belong to the public network instance. On PE-C, GE2 belongs to the VPN-RED
instance, GE3 and Loopback2 belong to the VPN-BLUE instance, and GE1 and Loopback1
belong to the public network instance.
Routing protocol and MPLS: Configure OSPF on the public network. Enable RIP on PE and CE
routers. Establish a BGP peer connection and transmit all VPN routes between Loopback1
interfaces on PE-A, PE-B, and PE-C. Enable MPLS forwarding on the public network.
Multicast function: Enable multicast on P. Enable multicast on the public network instance on
PE-A, PE-B, and PE-C. Enable multicast on the VPN-RED instance on PE-A, PE-B, and PE-C.
Enable multicast on the VPN-BLUE instance on PE-B and PE-C. Enable multicast on CE-Ra,
CE-Rb, CE-Rc, CE-Bb, and CE-Bc.
IGMP function: Enable IGMP on GE2 of PE-A. Enable IGMP on GE1 of CE-Rb, GE1 of CE-Rc,
and GE1 of CE-Bc.
PIM function: Enable PIM-SM on all the VPN interfaces in the VPN-RED and VPN-BLUE
instances. Enable PIM-SM on all the interfaces of P and the CEs, as well as on the public network
instance interfaces of the PEs. Configure Loopback1 of P as the C-BSR and C-RP of the public
network (serving all multicast groups). Configure Loopback1 of CE-Rb as the C-BSR and C-RP
of VPN-RED (serving all multicast groups). Configure Loopback2 of PE-C as the C-BSR and
C-RP of VPN-BLUE (serving all multicast groups).
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure MPLS/BGP VPN to ensure that the VPN network works normally and unicast
routes are reachable.
2. Enable multicast VPN services of integrated mode.
3. Enable multicast and PIM in the entire network. Enable multicast of public network on PEs
and Ps, and enable multicast of the VPN instances on PEs and CEs.
4. Configure the same Share-group address, the same MTI, and the same switch-group-pool
for the same VPN instance on each PE.
5. Configure the MTI address of each PE as the IBGP peer interface address in the public
network, and enable PIM on the MTI.
Data Preparation
See Table 8-2.
Procedure
Step 1 Configure PE-A.
# Configure an ID for PE-A, enable IP multicast routing in the public network, configure MPLS
LSR-ID, and enable LDP.
[PE-A] router id 1.1.1.1
[PE-A] multicast routing-enable
[PE-A] mpls lsr-id 1.1.1.1
[PE-A] mpls
[PE-A-mpls] quit
[PE-A] mpls ldp
[PE-A-mpls-ldp] quit
# Set the service mode of the SPUC to Tunnel mode and enable multicast VPN services on the
SPUC of the PE. Assuming that the SPUC is in slot 4, the configuration is as follows:
[PE-A] quit
<PE-A> set board-type slot 4 tunnel
<PE-A> system-view
[PE-A] multicast-vpn slot 4
# Create VPN RED instance and enter the VPN instance view. Configure the VPN IPv4 prefix
and create egress and ingress routes for the instance. Enable IP multicast and configure Share-
Group. Specify an MTI bound to the VPN instance and the range of the switch-address-pool.
# Enable LDP and PIM-SM on the interface GigabitEthernet 1/0/0 in the public network.
[PE-A] interface gigabitethernet 1/0/0
[PE-A-GigabitEthernet1/0/0] ip address 192.168.6.1 24
[PE-A-GigabitEthernet1/0/0] pim sm
[PE-A-GigabitEthernet1/0/0] mpls
[PE-A-GigabitEthernet1/0/0] mpls ldp
# Bind the interface GigabitEthernet 2/0/0 to VPN RED instance, and enable IGMP and PIM-
SM.
[PE-A] interface gigabitethernet 2/0/0
[PE-A-GigabitEthernet2/0/0] ip binding vpn-instance RED
[PE-A-GigabitEthernet2/0/0] ip address 10.110.1.1 24
[PE-A-GigabitEthernet2/0/0] pim sm
[PE-A-GigabitEthernet2/0/0] igmp enable
# Assign an IP address to the interface MTI0. The address of MTI0 is the same as that of the
interface Loopback1. The system then automatically binds MTI0 to the VPN RED instance. Enable
PIM-SM on the interface.
[PE-A] interface MTunnel 0
[PE-A-MTunnel0] ip address 1.1.1.1 32
[PE-A-MTunnel0] pim sm
[PE-A-MTunnel0] quit
# Set the service mode of the SPUC to Tunnel mode and enable multicast VPN services on the
SPUC of the PE. Assuming that the SPUC is in slot 4, the configuration is as follows:
[PE-B] quit
<PE-B> set board-type slot 4 tunnel
<PE-B> system-view
[PE-B] multicast-vpn slot 4
# Create VPN BLUE instance and enter the VPN instance view. Configure VPN IPv4 prefix
and create the egress and ingress routes for the instance. Enable IP multicast and configure Share-
Group. Specify an MTI bound to the VPN instance and the range of the switch-address-pool.
[PE-B] ip vpn-instance BLUE
[PE-B-vpn-instance-BLUE] route-distinguisher 200:1
[PE-B-vpn-instance-BLUE] vpn-target 200:1 export-extcommunity
[PE-B-vpn-instance-BLUE] vpn-target 200:1 import-extcommunity
[PE-B-vpn-instance-BLUE] multicast routing-enable
[PE-B-vpn-instance-BLUE] multicast-domain share-group 239.2.2.2 binding mtunnel 1
[PE-B-vpn-instance-BLUE] multicast-domain switch-group-pool 225.4.4.1 28
# Create VPN RED instance and enter the VPN instance view. Configure VPN IPv4 prefix and
create egress and ingress routes for the instance. Enable IP multicast and configure Share-Group.
Specify an MTI bound to the VPN instance and the range of the switch-address-pool.
[PE-B] ip vpn-instance RED
[PE-B-vpn-instance-RED] route-distinguisher 100:1
[PE-B-vpn-instance-RED] vpn-target 100:1 export-extcommunity
[PE-B-vpn-instance-RED] vpn-target 100:1 import-extcommunity
[PE-B-vpn-instance-RED] multicast routing-enable
[PE-B-vpn-instance-RED] multicast-domain share-group 239.1.1.1 binding mtunnel 0
[PE-B-vpn-instance-RED] multicast-domain switch-group-pool 225.2.2.1 28
# Enable LDP and PIM-SM on the interface GigabitEthernet 1/0/0 in the public network.
[PE-B] interface gigabitethernet 1/0/0
[PE-B-GigabitEthernet1/0/0] ip address 192.168.7.1 24
[PE-B-GigabitEthernet1/0/0] pim sm
[PE-B-GigabitEthernet1/0/0] mpls
[PE-B-GigabitEthernet1/0/0] mpls ldp
# Bind the interface GigabitEthernet 2/0/0 to VPN BLUE instance, and enable PIM-SM.
[PE-B] interface gigabitethernet 2/0/0
[PE-B-GigabitEthernet2/0/0] ip binding vpn-instance BLUE
[PE-B-GigabitEthernet2/0/0] ip address 10.110.3.1 24
[PE-B-GigabitEthernet2/0/0] pim sm
# Bind the interface GigabitEthernet 3/0/0 to VPN RED instance, and enable PIM-SM.
# Assign an IP address to the interface MTI0. The address of MTI0 is the same as that of the
interface Loopback1. Enable PIM-SM on the interface.
[PE-B] interface MTunnel 0
[PE-B-MTunnel0] ip address 1.1.1.2 32
[PE-B-MTunnel0] pim sm
# Assign an IP address for MTI1. The address of MTI1 is the same as that of the interface
Loopback1. Enable PIM-SM on the interface.
[PE-B] interface MTunnel 1
[PE-B-MTunnel1] ip address 1.1.1.2 32
[PE-B-MTunnel1] pim sm
[PE-C] mpls
[PE-C-mpls] quit
[PE-C] mpls ldp
[PE-C-mpls-ldp] quit
# Set the service mode of the SPUC to Tunnel mode and enable multicast VPN services on the
SPUC of the PE. Assuming that the SPUC is in slot 4, the configuration is as follows:
[PE-C] quit
<PE-C> set board-type slot 4 tunnel
<PE-C> system-view
[PE-C] multicast-vpn slot 4
# Create VPN RED instance and enter the VPN instance view. Configure VPN IPv4 prefix and
create egress and ingress routes for the instance. Enable IP multicast and configure Share-Group.
Specify an MTI bound to the VPN instance and the range of the switch-address-pool.
[PE-C] ip vpn-instance RED
[PE-C-vpn-instance-RED] route-distinguisher 100:1
[PE-C-vpn-instance-RED] vpn-target 100:1 export-extcommunity
[PE-C-vpn-instance-RED] vpn-target 100:1 import-extcommunity
[PE-C-vpn-instance-RED] multicast routing-enable
[PE-C-vpn-instance-RED] multicast-domain share-group 239.1.1.1 binding mtunnel 0
[PE-C-vpn-instance-RED] multicast-domain switch-group-pool 225.2.2.1 28
# Create VPN BLUE instance and enter the VPN instance view. Configure the VPN IPv4 prefix
and create egress and ingress routes for the instance. Enable IP multicast and configure Share-
Group. Specify an MTI bound to the VPN instance and the range of the switch-address-pool.
[PE-C] ip vpn-instance BLUE
[PE-C-vpn-instance-BLUE] route-distinguisher 200:1
[PE-C-vpn-instance-BLUE] vpn-target 200:1 export-extcommunity
[PE-C-vpn-instance-BLUE] vpn-target 200:1 import-extcommunity
[PE-C-vpn-instance-BLUE] multicast routing-enable
[PE-C-vpn-instance-BLUE] multicast-domain share-group 239.2.2.2 binding mtunnel 1
[PE-C-vpn-instance-BLUE] multicast-domain switch-group-pool 225.4.4.1 28
# Enable LDP and PIM-SM on the interface GigabitEthernet1/0/0 in the public network.
[PE-C] interface gigabitethernet 1/0/0
[PE-C-GigabitEthernet1/0/0] ip address 192.168.8.1 24
[PE-C-GigabitEthernet1/0/0] pim sm
[PE-C-GigabitEthernet1/0/0] mpls
[PE-C-GigabitEthernet1/0/0] mpls ldp
# Bind the interface GigabitEthernet2/0/0 to VPN RED instance, and enable PIM-SM.
[PE-C] interface gigabitethernet 2/0/0
[PE-C-GigabitEthernet2/0/0] ip binding vpn-instance RED
[PE-C-GigabitEthernet2/0/0] ip address 10.110.5.1 24
[PE-C-GigabitEthernet2/0/0] pim sm
# Bind the interface GigabitEthernet3/0/0 to VPN BLUE instance, and enable PIM-SM.
[PE-C] interface gigabitethernet 3/0/0
[PE-C-GigabitEthernet3/0/0] ip binding vpn-instance BLUE
[PE-C-GigabitEthernet3/0/0] ip address 10.110.6.1 24
[PE-C-GigabitEthernet3/0/0] pim sm
# Assign an IP address to the interface MTI0. The address of MTI0 is the same as that of
the interface Loopback1. Enable PIM-SM on the interface.
# Assign an IP address to MTI1. The address of MTI1 is the same as that of the interface
Loopback1. Enable PIM-SM on the interface.
[PE-C] interface MTunnel 1
[PE-C-MTunnel1] ip address 1.1.1.3 32
[PE-C-MTunnel1] pim sm
# Bind the interface Loopback2 to VPN BLUE instance, and enable PIM-SM.
[PE-C] interface loopback 2
[PE-C-LoopBack2] ip binding vpn-instance BLUE
[PE-C-LoopBack2] ip address 33.33.33.33 32
[PE-C-LoopBack2] pim sm
[PE-C-LoopBack2] quit
# Configure the interface Loopback2 as the C-BSR and the C-RP of VPN-BLUE.
[PE-C] pim vpn-instance BLUE
[PE-C-pim-blue] c-bsr Loopback2
[PE-C-pim-blue] c-rp Loopback2
[PE-C-pim-blue] quit
Step 4 Configure P.
# Enable multicast in the public network, configure MPLS LSR-ID, and enable LDP.
[P] multicast routing-enable
[P] mpls lsr-id 2.2.2.2
[P] mpls
[P-mpls] quit
[P] mpls ldp
[P-mpls-ldp] quit
# Enable LDP and PIM-SM on the interface GigabitEthernet 1/0/0 in the public network.
[P] interface gigabitethernet 1/0/0
[P-GigabitEthernet1/0/0] ip address 192.168.6.2 24
[P-GigabitEthernet1/0/0] pim sm
[P-GigabitEthernet1/0/0] mpls
[P-GigabitEthernet1/0/0] mpls ldp
# Enable LDP and PIM-SM on the interface GigabitEthernet 2/0/0 in the public network.
[P] interface gigabitethernet 2/0/0
[P-GigabitEthernet2/0/0] ip address 192.168.7.2 24
[P-GigabitEthernet2/0/0] pim sm
[P-GigabitEthernet2/0/0] mpls
[P-GigabitEthernet2/0/0] mpls ldp
# Enable LDP and PIM-SM on the interface GigabitEthernet 3/0/0 in the public network.
[P] interface gigabitethernet 3/0/0
[P-GigabitEthernet3/0/0] ip address 192.168.8.2 24
[P-GigabitEthernet3/0/0] pim sm
[P-GigabitEthernet3/0/0] mpls
[P-GigabitEthernet3/0/0] mpls ldp
# Configure the interface Loopback1 as the C-BSR and C-RP of the public network instance.
[P] pim
[P-pim] c-bsr Loopback1
[P-pim] c-rp Loopback1
# Configure OSPF.
[P] ospf 1
[P-ospf-1] area 0.0.0.0
[P-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[P-ospf-1-area-0.0.0.0] network 192.168.0.0 0.0.255.255
[P-ospf-1-area-0.0.0.0] quit
# Configure RIP.
[CE-Ra] rip 2
[CE-Ra-rip-2] network 10.0.0.0
[CE-Ra-rip-2] import-route direct
# Configure RIP.
[CE-Bb] rip 3
[CE-Bb-rip-3] network 10.0.0.0
[CE-Bb-rip-3] import-route direct
# Configure RIP.
[CE-Rb] rip 2
[CE-Rb-rip-2] network 10.0.0.0
[CE-Rb-rip-2] network 22.0.0.0
[CE-Rb-rip-2] import-route direct
# Enable PIM-SM and IGMP on the interface GigabitEthernet 1/0/0 in the VPN.
[CE-Rc] interface gigabitethernet 1/0/0
[CE-Rc-GigabitEthernet1/0/0] ip address 10.110.10.1 24
[CE-Rc-GigabitEthernet1/0/0] pim sm
[CE-Rc-GigabitEthernet1/0/0] igmp enable
# Configure RIP.
[CE-Rc] rip 2
[CE-Rc-rip-2] network 10.0.0.0
[CE-Rc-rip-2] import-route direct
# Enable PIM-SM and IGMP on the interface GigabitEthernet 1/0/0 in the VPN.
[CE-Bc] interface gigabitethernet 1/0/0
[CE-Bc-GigabitEthernet1/0/0] ip address 10.110.11.1 24
[CE-Bc-GigabitEthernet1/0/0] pim sm
[CE-Bc-GigabitEthernet1/0/0] igmp enable
# Configure RIP.
[CE-Bc] rip 3
[CE-Bc-rip-3] network 10.0.0.0
[CE-Bc-rip-3] import-route direct
----End
Configuration Files
l Configuration file of PE-A
#
sysname PE-A
#
router id 1.1.1.1
#
multicast routing-enable
#
multicast-vpn slot 4
#
mpls lsr-id 1.1.1.1
mpls
#
mpls ldp
#
ip vpn-instance RED
route-distinguisher 100:1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
multicast routing-enable
multicast-domain share-group 239.1.1.1 binding MTunnel 0
multicast-domain switch-group-pool 225.2.2.0 255.255.255.240
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.6.1 255.255.255.0
pim sm
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance RED
ip address 10.110.1.1 255.255.255.0
pim sm
igmp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip binding vpn-instance RED
ip address 10.110.2.1 255.255.255.0
pim sm
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
pim sm
#
interface MTunnel0
ip binding vpn-instance RED
ip address 1.1.1.1 255.255.255.255
pim sm
#
bgp 100
group VPN-G internal
peer VPN-G connect-interface LoopBack1
peer 1.1.1.2 as-number 100
peer 1.1.1.2 group VPN-G
peer 1.1.1.3 as-number 100
peer 1.1.1.3 group VPN-G
#
ipv4-family unicast
undo synchronization
peer VPN-G enable
peer 1.1.1.2 enable
peer 1.1.1.2 group VPN-G
peer 1.1.1.3 enable
peer 1.1.1.3 group VPN-G
#
ipv4-family vpnv4
policy vpn-target
peer VPN-G enable
peer 1.1.1.2 enable
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
pim sm
#
pim
c-bsr Loopback1
c-rp Loopback1
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 192.168.0.0 0.0.255.255
#
return
l Configuration file of CE-Ra
#
sysname CE-Ra
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.7.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.2.2 255.255.255.0
pim sm
#
rip 2
network 10.0.0.0
import-route direct
#
return
l Configuration file of CE-Bb
#
sysname CE-Bb
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.8.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.110.3.2 255.255.255.0
pim sm
#
rip 3
network 10.0.0.0
import-route direct
#
return
l Configuration file of CE-Rb
#
sysname CE-Rb
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.6.2 255.255.255.0
pim sm
#
rip 3
network 10.0.0.0
import-route direct
#
return
Networking Requirements
In the inter-AS MPLS/BGP VPN (VPN-Option C mode) shown in Figure 8-3, the MD solution
is used to deploy multicast services.
(Figure 8-3 networking: PE-A and PE-B reside in AS 100; PE-C and PE-D reside in AS 200.
PE-B and PE-C interconnect the two ASs. CE-A (VPN RED) and CE-B (VPN BLUE) attach to
PE-A; CE-C (VPN RED) and CE-D (VPN BLUE) attach to PE-D. Source1 and Source2 are the
multicast sources, and PC1 and PC2 are the receivers.)
NOTE
GE1 stands for GigabitEthernet 1/0/0, GE2 stands for GigabitEthernet 2/0/0, and GE3 stands for
GigabitEthernet 3/0/0. The IP address of each interface is listed as follows:
The devices support two processing modes of the multicast VPN service: distributed mode and integrated
mode. In distributed mode, you must run the multicast-vpn slot command to specify the slot that supports
multicast VPN. In integrated mode, you must run the set board-type slot and multicast-vpn slot
commands to set the service mode of the SPUC to Tunnel mode and to enable multicast VPN on the SPUC
separately. The following configuration example uses the integrated mode to configure the multicast VPN
service.
GE2: 10.111.1.2/24 -
GE2: 10.111.2.2/24 -
Loopback0: 2.2.2.2/32 It acts as the C-BSR and C-RP of the PIM domain in VPN-BLUE
GE2: 10.111.3.2/24 -
Loopback0: 3.3.3.3/32 It acts as the C-BSR and C-RP of the PIM domain in VPN-RED
GE2: 10.111.4.2/24 -
Multicast source and multicast receiver: The multicast source of VPN RED is Source1 and the
multicast receiver is PC1. The multicast source of VPN BLUE is Source2 and the multicast
receiver is PC2. In VPN RED, the Share-Group address is 239.1.1.1 and the Switch-Group
address pool ranges from 225.1.1.1 to 225.1.1.16. In VPN BLUE, the Share-Group address is
239.4.4.4 and the Switch-Group address pool ranges from 225.4.4.1 to 225.4.4.16.
VPN instance to which the interfaces on PEs belong: On PE-A, GE1 and Loopback1 belong to
the public network instance, GE2 belongs to VPN-RED, and GE3 belongs to VPN-BLUE. On
PE-B, GE1, GE2, Loopback1, and Loopback2 belong to the public network instance. On PE-C,
GE1, GE2, Loopback1, and Loopback2 belong to the public network instance. On PE-D, GE1
and Loopback1 belong to the public network instance, GE2 belongs to VPN-RED, and GE3
belongs to VPN-BLUE.
Routing protocol and MPLS: Configure OSPF in AS 100 and AS 200. Enable OSPF on PE and
CE routers. Configure BGP peers and transmit all VPN routes between Loopback1 interfaces of
PE-A, PE-B, PE-C, and PE-D. Enable MPLS in AS 100 and AS 200.
Multicast functions: Enable multicast on the public network instance on PE-A, PE-B, PE-C, and
PE-D. Enable multicast on VPN-RED and VPN-BLUE on PE-A and PE-D. Enable multicast on
CE-A, CE-B, CE-C, and CE-D.
IGMP functions: Enable IGMP on GE1 of CE-C. Enable IGMP on GE1 of CE-D.
PIM functions: Enable PIM-SM on all the public network instance interfaces on PE-A, PE-B,
PE-C, and PE-D. Enable PIM-SM on the interfaces of the VPN-RED and VPN-BLUE instances
on PE-A and PE-D. Set the Loopback2 interfaces of PE-B and PE-C as the C-BSR and C-RP of
the public network instance in the AS to which they belong (serving all multicast groups). Set
the Loopback0 interface of CE-B as the C-BSR and C-RP of VPN-BLUE (serving all multicast
groups). Set the Loopback0 interface of CE-C as the C-BSR and C-RP of VPN-RED (serving
all multicast groups).
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure MPLS/BGP VPN to ensure that the VPN network works normally and unicast
routes are reachable.
2. Enable the SPUC to support the multicast VPN service of integrated mode. In this manner,
multicast packets are encapsulated and decapsulated in GRE mode during the transmission
between the VPN instance and the public network instance.
NOTE
In VPN-Option C mode, multicast packets on the ASBR-PEs (PE-B and PE-C in Figure 8-3) do not
require GRE encapsulation and decapsulation. Therefore, the SPUCs on the ASBR-PEs do not need
to be enabled for the multicast VPN service.
3. Enable multicast and PIM in the entire network: enable multicast on the PEs and Ps in
the public network, and enable multicast on the PEs and CEs in the VPNs.
4. Configure MSDP peers on EBGP peers of PE-B and PE-C.
5. Configure the same Share-Group address, the same MTI, and the same Switch-MDT switch
for a VPN instance on PE-A and PE-D.
6. Configure the MTI address as the IBGP peer interface address of the public network on
PE-A and PE-D, and enable PIM on the MTI.
Data Preparation
See Table 8-4 for information about data preparation.
Procedure
Step 1 Configure PE-A.
# Configure an ID for PE-A, enable multicast in the public network, configure an ID for an
MPLS LSR, and enable LDP.
[PE-A] router id 1.1.1.1
[PE-A] multicast routing-enable
[PE-A] mpls lsr-id 1.1.1.1
[PE-A] mpls
[PE-A-mpls] quit
[PE-A] mpls ldp
[PE-A-mpls-ldp] quit
# Set the service mode of the SPUC to Tunnel mode and enable multicast VPN services on the
SPUC of the PE. Assuming that the SPUC is in slot 4, the configuration is as follows:
[PE-A] quit
<PE-A> set board-type slot 4 tunnel
<PE-A> system-view
[PE-A] multicast-vpn slot 4
# Create VPN RED instance and enter the VPN instance view. Configure VPN IPv4 prefix and
create egress and ingress routes for the instance. Enable IP multicast and configure Share-Group.
Specify an MTI bound to the VPN instance and the range of the switch-address-pool.
[PE-A] ip vpn-instance RED
[PE-A-vpn-instance-RED] route-distinguisher 100:1
[PE-A-vpn-instance-RED] vpn-target 100:1 export-extcommunity
[PE-A-vpn-instance-RED] vpn-target 100:1 import-extcommunity
[PE-A-vpn-instance-RED] multicast routing-enable
[PE-A-vpn-instance-RED] multicast-domain share-group 239.1.1.1 binding mtunnel 0
[PE-A-vpn-instance-RED] multicast-domain switch-group-pool 225.1.1.1 28
[PE-A-vpn-instance-RED] quit
# Create VPN BLUE instance and enter the VPN instance view. Configure VPN IPv4 prefix
and create egress and ingress routes for the instance. Enable IP multicast and configure Share-
Group. Specify an MTI bound to the VPN instance and the range of the switch-address-pool.
[PE-A] ip vpn-instance BLUE
[PE-A-vpn-instance-BLUE] route-distinguisher 200:1
[PE-A-vpn-instance-BLUE] vpn-target 200:1 export-extcommunity
[PE-A-vpn-instance-BLUE] vpn-target 200:1 import-extcommunity
[PE-A-vpn-instance-BLUE] multicast routing-enable
[PE-A-vpn-instance-BLUE] multicast-domain share-group 239.4.4.4 binding mtunnel 1
[PE-A-vpn-instance-BLUE] multicast-domain switch-group-pool 225.4.4.1 28
[PE-A-vpn-instance-BLUE] quit
# Configure an IP address (the same as that of Loopback 1 interface) for MTI0 and enable PIM-
SM.
[PE-A] interface MTunnel 0
[PE-A-MTunnel0] ip address 1.1.1.1 32
[PE-A-MTunnel0] pim sm
# Configure an IP address (the same as that of Loopback 1 interface) for MTI1 and enable PIM-
SM.
[PE-A] interface MTunnel 1
[PE-A-MTunnel1] ip address 1.1.1.1 32
[PE-A-MTunnel1] pim sm
# Enable MPLS LDP and PIM-SM on GigabitEthernet 1/0/0 in the public network.
[PE-C] interface gigabitethernet 1/0/0
[PE-C-GigabitEthernet1/0/0] ip address 10.10.2.1 24
[PE-C-GigabitEthernet1/0/0] pim sm
[PE-C-GigabitEthernet1/0/0] mpls
[PE-C-GigabitEthernet1/0/0] mpls ldp
[PE-C-GigabitEthernet1/0/0] quit
[PE-C-LoopBack1] quit
# Set the service mode of the SPUC to Tunnel mode and enable multicast VPN services on the
SPUC of the PE. Assuming that the SPUC is in slot 4, the configuration is as follows:
[PE-D] quit
<PE-D> set board-type slot 4 tunnel
<PE-D> system-view
[PE-D] multicast-vpn slot 4
# Create VPN RED instance and enter the VPN instance view. Configure VPN IPv4 prefix and
create egress and ingress routes for the instance. Enable IP multicast and configure Share-Group.
Specify an MTI bound to the VPN instance and the range of the switch-address-pool.
[PE-D] ip vpn-instance RED
[PE-D-vpn-instance-RED] route-distinguisher 100:1
[PE-D-vpn-instance-RED] vpn-target 100:1 export-extcommunity
[PE-D-vpn-instance-RED] vpn-target 100:1 import-extcommunity
[PE-D-vpn-instance-RED] multicast routing-enable
[PE-D-vpn-instance-RED] multicast-domain share-group 239.1.1.1 binding mtunnel 0
[PE-D-vpn-instance-RED] multicast-domain switch-group-pool 225.1.1.1 28
[PE-D-vpn-instance-RED] quit
# Create VPN BLUE instance and enter the VPN instance view. Configure VPN IPv4 prefix
and create egress and ingress routes for the instance. Enable IP multicast and configure Share-
Group. Specify an MTI bound to the VPN instance and the range of the switch-address-pool.
[PE-D] ip vpn-instance BLUE
[PE-D-vpn-instance-BLUE] route-distinguisher 200:1
[PE-D-vpn-instance-BLUE] vpn-target 200:1 export-extcommunity
[PE-D-vpn-instance-BLUE] vpn-target 200:1 import-extcommunity
[PE-D-vpn-instance-BLUE] multicast routing-enable
[PE-D-vpn-instance-BLUE] multicast-domain share-group 239.4.4.4 binding mtunnel 1
[PE-D-vpn-instance-BLUE] multicast-domain switch-group-pool 225.4.4.1 28
[PE-D-vpn-instance-BLUE] quit
# Configure an IP address (the same as that of Loopback1 interface) for MTI0 and enable PIM-
SM.
[PE-D] interface MTunnel 0
[PE-D-MTunnel0] ip address 1.1.1.4 32
[PE-D-MTunnel0] pim sm
# Configure an IP address (the same as that of Loopback 1 interface) for MTI1 and enable PIM-
SM.
[PE-D] interface MTunnel 1
[PE-D-MTunnel1] ip address 1.1.1.4 32
[PE-D-MTunnel1] pim sm
# Configure OSPF.
[CE-A] ospf 1
[CE-A-ospf-1] area 0.0.0.0
[CE-A-ospf-1-area-0.0.0.0] network 10.111.0.0 0.0.255.255
[CE-A-ospf-1-area-0.0.0.0] quit
# Configure OSPF.
[CE-B] ospf 1
[CE-B-ospf-1] area 0.0.0.0
[CE-B-ospf-1-area-0.0.0.0] network 10.111.0.0 0.0.255.255
[CE-B-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[CE-B-ospf-1-area-0.0.0.0] quit
# Configure OSPF.
[CE-C] ospf 1
[CE-C-ospf-1] area 0.0.0.0
[CE-C-ospf-1-area-0.0.0.0] network 10.111.0.0 0.0.255.255
[CE-C-ospf-1-area-0.0.0.0] network 3.3.3.3 0.0.0.0
[CE-C-ospf-1-area-0.0.0.0] quit
# Configure OSPF.
[CE-D] ospf 1
[CE-D-ospf-1] area 0.0.0.0
[CE-D-ospf-1-area-0.0.0.0] network 10.111.0.0 0.0.255.255
[CE-D-ospf-1-area-0.0.0.0] quit
----End
Configuration Files
l Configuration file of PE-A
#
sysname PE-A
#
multicast routing-enable
#
multicast-vpn slot 4
#
ip vpn-instance RED
route-distinguisher 100:1
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
multicast routing-enable
multicast-domain share-group 239.1.1.1 binding mtunnel 0
import-route direct
import-route ospf 2
#
ipv4-family vpn-instance BLUE
import-route direct
import-route ospf 3
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.4 enable
#
ospf 1
area 0.0.0.0
network 10.10.0.0 0.0.255.255
network 1.1.1.1 0.0.0.0
#
ospf 2 vpn-instance RED
import-route bgp
area 0.0.0.0
network 10.111.0.0 0.0.255.255
#
ospf 3 vpn-instance BLUE
import-route bgp
area 0.0.0.0
network 10.111.0.0 0.0.255.255
#
return
l Configuration file of PE-B
#
sysname PE-B
#
multicast routing-enable
#
mpls lsr-id 1.1.1.2
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.10.1.2 255.255.255.0
pim sm
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
pim sm
pim bsr-boundary
mpls
#
interface LoopBack1
ip address 1.1.1.2 255.255.255.255
pim sm
#
interface LoopBack2
ip address 11.11.11.11 255.255.255.255
pim sm
#
ip route-static 1.1.1.3 255.255.255.255 GigabitEthernet2/0/0 192.168.1.2
#
bgp 100
group peb-pea internal
peer peb-pea connect-interface LoopBack1
peer 1.1.1.1 group peb-pea
group peb-pec external
peer peb-pec as-number 200
peer peb-pec ebgp-max-hop 255
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpn-instance RED
import-route direct
import-route ospf 2
#
ipv4-family vpn-instance BLUE
import-route direct
import-route ospf 3
#
ospf 1
area 0.0.0.0
network 10.10.0.0 0.0.255.255
network 1.1.1.2 0.0.0.0
#
ospf 2 vpn-instance RED
import-route bgp
area 0.0.0.0
network 10.111.0.0 0.0.255.255
#
ospf 3 vpn-instance BLUE
import-route bgp
area 0.0.0.0
network 10.111.0.0 0.0.255.255
#
return
l Configuration file of CE-A
#
sysname CE-A
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.111.5.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.111.1.2 255.255.255.0
pim sm
#
ospf 1
area 0.0.0.0
network 10.111.0.0 0.0.255.255
#
return
l Configuration file of CE-B
#
sysname CE-B
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.111.6.1 255.255.255.0
pim sm
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.111.2.2 255.255.255.0
pim sm
#
interface Loopback0
ip address 2.2.2.2 255.255.255.255
pim sm
#
ospf 1
area 0.0.0.0
network 10.111.0.0 0.0.255.255
network 2.2.2.2 0.0.0.0
#
pim
c-bsr Loopback0
c-rp Loopback0
#
return
This chapter describes the configurations and maintenance of IPv4 multicast CAC, and provides
configuration examples.
9.1 IPv4 Multicast CAC Introduction
This section describes the multicast CAC features supported by the NE80E/40E and the basic
principle of multicast CAC.
9.2 Configuring Multicast CAC
This section describes how to configure multicast CAC.
9.3 Maintaining Multicast CAC
This section describes how to monitor the running status of multicast CAC.
9.4 Configuration Examples
This section provides a configuration example of Multicast CAC.
Internet Protocol Television (IPTV) carries real-time video services. With the development of
IPTV, operators propose higher requirements for the utilization of IPTV network resources and
management of video services. In the current multicast technologies, there are only two ways
to control multicast networks. One is to limit the number of multicast forwarding entries, and
the other is to limit the number of outgoing interfaces of a single entry. The two ways cannot
meet the requirements of operators.
Multicast Call Admission Control (CAC) refers to configuring a policy on the first-hop router
connected to the multicast source, intermediate routers, and the last-hop router connected to
receivers to limit the number of multicast entries that the routers can create. After multicast CAC
is configured, operators can control the range of IPTV services and the number of users accessing
IP core networks.
l Control the range of multicast groups by limiting the number of PIM routing entries. This
prevents the introduction of large-volume multicast traffic beyond the forwarding
capability of routers.
l Plan multicast networks by reserving bandwidth for channels or interfaces. When
bandwidth resources of a channel or an interface are insufficient, no multicast group is
added to the channel or the interface. This ensures the quality of services.
l Control the range of multicast groups that users can join by limiting the number of IGMP
group memberships.
l Channel: It refers to a set of IPTV services of the same type defined by operators, such as
the Standard Definition Television (SDTV) channel and the High Definition Television
(HDTV) channel. Based on these channels, operators allocate bandwidth resources and
make charging policies. A channel can have multiple multicast groups.
l Multicast group: A multicast address represents a multicast group.
l IGMP only-link interface: It refers to the last-hop router interface that is connected to hosts
and meets one of the following requirements:
– The interface is enabled with IGMP rather than PIM-SM or PIM-DM.
– The interface is configured with static multicast groups but is not enabled with IGMP,
PIM-SM or PIM-DM.
IGMP only-link interfaces do not support multicast CAC. That is, to enable multicast CAC
on a router, ensure that the router does not have any IGMP only-link interface.
l Join status of an interface: It indicates that an interface is added to the outgoing interface
list of a PIM routing entry.
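As a minimal sketch (the interface name is assumed for illustration), an interface that is enabled
with IGMP but not with PIM-SM or PIM-DM is an IGMP only-link interface:
[RouterA] interface gigabitethernet2/0/0
[RouterA-GigabitEthernet2/0/0] igmp enable
Such interfaces must be removed or enabled with PIM-SM before multicast CAC is configured.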
Applicable Environment
In the MAN shown in Figure 9-1, the IGMP Join message sent by the user is forwarded by Layer
2 devices such as the Digital Subscriber Line Access Multiplexer (DSLAM) and switch to the
Ultimate Provider Edge (UPE). The IP core network of operators may be connected to multiple
Internet Content Providers (ICPs). Each ICP uses multicast groups allocated in advance to carry
multicast data. The multicast data enters the MAN through the Network PE (NPE), and finally
reaches users.
To prevent multicast data entering networks from exceeding the processing capability of
routers or bandwidth limit of the access network, operators can adopt the following methods
when deploying IPTV services:
l Limiting the number of IGMP group memberships on the UPE globally, or based on an
instance or an interface.
l Limiting bandwidth and the number of groups in a channel or an interface on the UPE or
NPE.
Usage Guideline
Multicast CAC limits only the multicast entries created after it is configured. That is, after the
limit value is configured, routers do not delete existing entries or corresponding interfaces, even
if the number of existing entries, the number of interfaces in the Join state, or the occupied
bandwidth already exceeds the limit value.
Here, the number of entries does not refer to the actual number of PIM or IGMP entries, but is
counted in the following ways:
l For groups in the Any Source Multicast (ASM) group address range: For the same G, all
(*, G) entries or (S, G) entries are counted as one entry.
l For groups in the Source-Specific Multicast (SSM) group address range: Each (S, G) entry
is counted as one entry.
The function of limiting the number of IGMP group memberships is applicable to PIM-DM and
PIM-SM networks, and the other multicast CAC functions are applicable to IPv4 PIM-SM
networks only.
l Limiting the number of PIM routing entries based on a single instance: The total number
of PIM routing entries related to an instance cannot exceed the limit value. When the total
number reaches the limit value, no PIM entry can be created.
l Limiting the number of PIM routing entries based on a channel: A channel can contain
multiple multicast groups. The total number of PIM routing entries related to a specified
channel cannot exceed the limit value. When the total number reaches the limit value, no
PIM entry can be created.
Limiting Bandwidth
When defining the range of multicast groups for a channel, you can specify the reserved
bandwidth for each multicast group. When a new group is added to the channel, a PIM entry is
created or an interface is added to the outgoing interface lists of an existing entry only when the
following conditions are met:
l The remaining bandwidth of the interface is equal to or greater than the reserved bandwidth
of the group.
l The remaining bandwidth of the channel is equal to or greater than the reserved bandwidth
of the group.
Applicable Environment
To control the scope of IPTV services provided by ICPs and the number of users accessing IP
core networks, you can configure multicast CAC.
Multicast CAC can be enabled on the first-hop router connected to the multicast source,
intermediate routers, and the last-hop router connected to users. You can make the CAC policy
as required.
The CAC solutions adopted by PIM-SM networks of different models and different scales are
as follows:
l SSM model: It is recommended to set up a channel on the DR at the source side and the
last-hop router connected to users, limit the number of PIM routing entries and bandwidth
related to the channel, and limit the number of IGMP group memberships on the last-hop
router.
l ASM model: It is recommended to set up a channel on the DR at the source side, the RP
and the last-hop router connected to users, limit the number of PIM routing entries and
bandwidth related to the channel, and limit the number of IGMP group memberships on
the last-hop router.
l Inter-domain PIM: It is recommended to set up a channel on the RP in each domain and
the last-hop router connected to users, limit the number of PIM routing entries and
bandwidth related to the channel, and limit the number of IGMP group memberships on
the last-hop router.
NOTE
Multicast CAC functions are applicable to IPv4 PIM-SM networks.
Pre-configuration Tasks
Before configuring basic Multicast CAC functions, complete the following tasks:
l Configuring a PIM-SM network, and enabling PIM-SM and IGMP (or configuring static
multicast groups) on the last-hop router interface connected to users
NOTE
Ensure that no IGMP only-link interfaces or PIM-DM interfaces exist in the instance to be configured with
multicast CAC.
Data Preparation
To configure basic multicast CAC functions, you need the following data.
No. Data
Context
Do as follows on the DR at the source side, the RP, and the last hop router connected to users:
Procedure
Step 1 Run:
system-view
The range of multicast groups and the range of multicast sources are configured.
per-bandwidth specifies the reserved bandwidth for each multicast group. To create PIM
routing entries of a multicast group, ensure that the remaining bandwidth is equal to or greater
than the reserved bandwidth of the multicast group; otherwise, no PIM entry can be created or
no interface can be added to the outgoing interface list.
----End
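As a minimal sketch of such a configuration (the channel name, group range, and bandwidth
value are assumed for illustration), a channel with reserved per-group bandwidth can be defined
as follows:
[RouterD] multicast-channel
[RouterD-multicast-channel] channel SDTV type asm
[RouterD-multicast-channel-SDTV] group 225.0.0.0 mask 255.255.255.0 per-bandwidth 3000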
Context
Do as follows on the DR at the source side, the RP, and the last-hop router connected to users:
Procedure
Step 1 Run:
system-view
Step 2 Run:
multicast-channel [ vpn-instance vpn-instance-name ]
Step 3 Run:
multicast limit limit [ threshold threshold ]
The maximum number of PIM routing entries that can be created in the current instance is set.
If threshold is set, the system generates alarms in logs when the number of PIM entries exceeds
the value of threshold.
----End
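For example, the following sketch (the limit and threshold values are assumed) sets a maximum
of 20 PIM entries for the public network instance and generates alarms in logs when the number
of entries exceeds 15:
[RouterA] multicast-channel
[RouterA-multicast-channel] multicast limit 20 threshold 15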
Context
Do as follows on the DR at the source side, the RP, and the last-hop router connected to users:
Procedure
Step 1 Run:
system-view
Step 2 Run:
multicast-channel [ vpn-instance vpn-instance-name ]
Step 3 Run:
channel channel-name
Step 4 Run:
multicast limit limit [ threshold threshold ]
The maximum number of PIM routing entries that can be created in the channel is set.
If threshold is set, the system generates alarms in logs when the number of PIM entries exceeds
the value of threshold.
----End
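As a minimal sketch (the channel name and limit value are assumed), the following limits a single
channel to 12 PIM routing entries:
[RouterD] multicast-channel
[RouterD-multicast-channel] channel HDTV type ssm
[RouterD-multicast-channel-HDTV] multicast limit 12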
Context
This configuration involves two cases, and you can perform the configuration as required. Do
as follows on the DR at the source side, the RP, and the last-hop router connected to users:
Procedure
l All multicast groups
1. Run:
system-view
The maximum number of outgoing interface lists to which the interface can be added and
the bandwidth limit are set for all multicast groups.
l Single channel
1. Run:
system-view
The maximum number of outgoing interface lists to which the interface can be added and
the bandwidth limit are set for the channel.
----End
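Based on the example later in this chapter (the interface name and values are illustrative), the
interface-based limits can be sketched as follows:
[RouterD] interface pos1/0/0
[RouterD-Pos1/0/0] multicast limit out max-entry 12 bandwidth 160000
[RouterD-Pos1/0/0] multicast limit channel HDTV out max-entry 10 bandwidth 40000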
Context
In routine maintenance, you can run the following commands in any view to check the running
status of Multicast CAC.
Procedure
l Run the display multicast [ vpn-instance vpn-instance-name ] limit channel channel-
name out-interface { interface-type interface-number | all } command in any view to check
the outgoing interface lists to which an interface is added and bandwidth limit based on a
channel.
l Run the display multicast [ vpn-instance vpn-instance-name ] limit out-interface
{ interface-type interface-number | all } command in any view to check all multicast entries
and bandwidth limit on an interface.
l Run the display multicast [ vpn-instance vpn-instance-name ] limit [ channel channel-
name ] global command in any view to check the statistics of global entries of a channel
or all multicast groups.
l Run the display pim [ vpn-instance vpn-instance-name ] routing-table channel channel-
name command to check information about (*, G) entries or (S, G) entries of a specified
channel.
l Run the following commands to check information about IGMP multicast groups.
– display igmp [ vpn-instance vpn-instance-name | all-instance ] group [ group-
address | interface interface-type interface-number ] * [ static ] [ verbose ]
– display igmp [ vpn-instance vpn-instance-name | all-instance ] group [ group-
address | interface interface-type interface-number ] ssm-mapping [ verbose ]
----End
Networking Requirements
In the MAN shown in Figure 9-2, Router D is the first-hop router connected to the multicast
source, that is, the DR at the source side; Router A, Router B, and Router C are the last-hop
routers connected to users; Router E functions as the RP. The network of the carrier is planned
as follows:
l Set up SDTV and HDTV channels and work out different prices for the two channels.
l Configure different reserved bandwidth for the two types of channels to ensure the quality
of services (QoS).
l Configure users to receive multicast data from Source 1 and Source 2 only.
l Limit the number of multicast groups that a user joins on routers connected to users.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address for each interface on the routers and a unicast routing protocol.
2. Enable multicast on routers, PIM-SM on each interface, and IGMP on the interface
connected to hosts.
3. Set up two channels on Router D, and configure multicast groups and reserved bandwidth
for the channel.
4. Limit the number of PIM routing entries and the number of IGMP group memberships on
Router A, Router B, and Router C.
Data Preparation
To complete the configuration, you need the following data:
l IP address of each interface
l Multicast group address
l Reserved bandwidth of each multicast group
l The maximum number of PIM entries and the maximum number of IGMP group
memberships
Procedure
Step 1 Configure an IP address for each interface on the routers and configure a unicast routing protocol.
NOTE
In this configuration example, only the commands related to the multicast CAC configuration are
mentioned.
Step 2 Enable multicast on each router, PIM-SM on each interface, and IGMP and PIM-SM on the
interface connected to hosts.
# Enable multicast on each router, PIM-SM on each interface, and IGMP on interfaces through
which Router A, Router B, and Router C are connected to leaf networks.
<RouterA> system-view
[RouterA] multicast routing-enable
[RouterA] interface gigabitethernet2/0/0
[RouterA-GigabitEthernet2/0/0] pim sm
[RouterA-GigabitEthernet2/0/0] igmp enable
[RouterA-GigabitEthernet2/0/0] igmp version 3
[RouterA-GigabitEthernet2/0/0] quit
[RouterA] interface pos1/0/0
[RouterA-Pos1/0/0] pim sm
<RouterB> system-view
[RouterB] multicast routing-enable
[RouterB] interface gigabitethernet2/0/0
[RouterB-GigabitEthernet2/0/0] pim sm
[RouterB-GigabitEthernet2/0/0] igmp enable
[RouterB-GigabitEthernet2/0/0] igmp version 3
[RouterB-GigabitEthernet2/0/0] quit
[RouterB] interface pos1/0/0
[RouterB-Pos1/0/0] pim sm
<RouterC> system-view
[RouterC] multicast routing-enable
[RouterC] interface gigabitethernet2/0/0
[RouterC-GigabitEthernet2/0/0] pim sm
[RouterC-GigabitEthernet2/0/0] igmp enable
[RouterC-GigabitEthernet2/0/0] igmp version 3
[RouterC-GigabitEthernet2/0/0] quit
[RouterC] interface pos1/0/0
[RouterC-Pos1/0/0] pim sm
<RouterD> system-view
[RouterD] multicast routing-enable
[RouterD] interface gigabitethernet2/0/0
[RouterD-GigabitEthernet2/0/0] pim sm
[RouterD-GigabitEthernet2/0/0] quit
[RouterD] interface pos1/0/0
[RouterD-Pos1/0/0] pim sm
<RouterE> system-view
[RouterE] interface pos1/0/0
[RouterE-Pos1/0/0] pim sm
[RouterE-Pos1/0/0] quit
[RouterE] interface pos2/0/0
[RouterE-Pos2/0/0] pim sm
[RouterE-Pos2/0/0] quit
[RouterE] interface pos3/0/0
[RouterE-Pos3/0/0] pim sm
[RouterE-Pos3/0/0] quit
[RouterE] interface pos4/0/0
[RouterE-Pos4/0/0] pim sm
Step 3 Configure multicast CAC to limit the number of entries and bandwidth on Router D.
# Set up the SDTV channel that contains multicast groups in the 225.0.0.0/24 range. The reserved
bandwidth of each multicast group is 3000 kbit/s, and a maximum of 10 PIM entries can be
created for the channel.
[RouterD] multicast-channel
[RouterD-multicast-channel] channel SDTV type asm
[RouterD-multicast-channel-SDTV] group 225.0.0.0 mask 255.255.255.0 per-bandwidth
3000
[RouterD-multicast-channel-SDTV] multicast limit 10
[RouterD-multicast-channel-SDTV] quit
# Set up the HDTV channel of the SSM model. The multicast groups contained in the channel
are in the 232.0.0.0/24 range, and the source address range is 192.168.3.0/24. The reserved
bandwidth of each multicast group is 4000 kbit/s. A maximum of 12 PIM entries can be
created for the channel.
[RouterD-multicast-channel] channel HDTV type ssm
[RouterD-multicast-channel-HDTV] group 232.0.0.0 mask 255.255.255.0 source
192.168.3.0 mask 255.255.255.0 per-bandwidth 4000
[RouterD-multicast-channel-HDTV] multicast limit 12
[RouterD-multicast-channel-HDTV] quit
# Set the maximum number of PIM entries to 20 in the public network instance.
[RouterD-multicast-channel] multicast limit 20
[RouterD-multicast-channel] quit
# Set the number of PIM entries and bandwidth limit on POS 1/0/0.
[RouterD] interface pos1/0/0
[RouterD-Pos1/0/0] multicast limit out max-entry 12 bandwidth 160000
[RouterD-Pos1/0/0] multicast limit channel HDTV out max-entry 10 bandwidth 40000
[RouterD-Pos1/0/0] multicast limit channel SDTV out max-entry 10 bandwidth 20000
Step 4 Configure a channel and limit the number of PIM entries and bandwidth on the last-hop router
connected to users.
Set up SDTV and HDTV channels on Router A. The group range and source range are the same
as those on Router D. A maximum of 20 PIM entries can be created in the public network
instance. Configure the router not to create multicast entries for groups out of the channel.
Configurations of Router B and Router C are similar to those of Router A, and are not mentioned
here.
[RouterA] multicast-channel
[RouterA-multicast-channel] multicast limit 20
[RouterA-multicast-channel] channel SDTV type asm
[RouterA-multicast-channel-SDTV] group 225.0.0.0 mask 255.255.255.0 per-bandwidth
3000
[RouterA-multicast-channel-SDTV] multicast limit 10
[RouterA-multicast-channel-SDTV] quit
[RouterA-multicast-channel] channel HDTV type ssm
[RouterA-multicast-channel-HDTV] group 232.0.0.0 mask 255.255.255.0 source
192.168.3.0 mask 255.255.255.0 per-bandwidth 4000
[RouterA-multicast-channel-HDTV] multicast limit 12
[RouterA-multicast-channel-HDTV] quit
[RouterA-multicast-channel] unspecified-channel deny
Step 5 Limit the number of IGMP group memberships on the last-hop router connected to users.
# Configure the maximum number of IGMP group memberships to 20 in the public network
instance.
[RouterA] igmp
[RouterA-igmp] limit 20
[RouterA-igmp] quit
# Configurations of Router B and Router C are similar to those of Router A, and are not
mentioned here.
# Run the display multicast limit global command, and you can view the global configuration
of multicast CAC and related statistics on a router. Take Router A as an example.
<RouterA> display multicast limit global
Multicast limitation information of instance: public net
Global entries : 20
Global threshold : -
Global Used-Entries : 2
# Run the display multicast limit channel global command, and you can view the global CAC
configuration and related statistics of a specified channel on a router. Take Router A as an
example.
<RouterA> display multicast limit channel SDTV global
Multicast limitation information of instance: public net
-----------------------------------------------------------------------
Channel Program Mode Entries threshold Used-Entries
-----------------------------------------------------------------------
SDTV ASM 10 - 0
-----------------------------------------------------------------------
# Run the display multicast limit out-interface command, and you can view the total number
of entries that take this interface as the outgoing interface. Take Router D as an example.
<RouterD> display multicast limit out-interface all
Multicast limitation information of instance: public net
All Channel:
-------------------------------------------------------------------------------
Interface Entries Bandwidth(kbit/s) Used-Entries Used-Bandwidth(kbit/s)
-------------------------------------------------------------------------------
Pos1/0/0 12 160000 0 0
-------------------------------------------------------------------------------
----End
Configuration Files
l Configuration file of Router D
#
sysname RouterD
#
multicast routing-enable
#
multicast-channel
multicast limit 20
channel HDTV type ssm
multicast limit 12
group 232.0.0.0 mask 255.255.255.0 source 192.168.3.0 mask 255.255.255.0 per-
bandwidth 4000
channel SDTV type asm
multicast limit 10
group 225.0.0.0 mask 255.255.255.0 per-bandwidth 3000
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.3.1 255.255.255.0
pim sm
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 10.1.4.2 255.255.255.0
pim sm
multicast limit out max-entry 12 bandwidth 160000
multicast limit channel HDTV out max-entry 10 bandwidth 40000
multicast limit channel SDTV out max-entry 10 bandwidth 20000
#
interface NULL0
#
ospf 100
area 0.0.0.0
network 192.168.3.0 0.0.0.255
network 10.1.4.0 0.0.0.255
#
return
pim sm
#
interface Pos4/0/0
undo shutdown
link-protocol ppp
ip address 10.1.4.1 255.255.255.0
pim sm
#
interface NULL0
#
ospf 100
area 0.0.0.0
network 10.1.4.0 0.0.0.255
network 10.1.3.0 0.0.0.255
network 10.1.2.0 0.0.0.255
network 10.1.1.0 0.0.0.255
#
return
l Configuration file of Router A
#
sysname RouterA
#
igmp global limit 30
#
multicast routing-enable
#
multicast-channel
unspecified-channel deny
multicast limit 20
channel HDTV type ssm
multicast limit 12
group 232.0.0.0 mask 255.255.255.0 source 192.168.3.0
mask 255.255.255.0 per-bandwidth 4000
channel SDTV type asm
multicast limit 10
group 225.0.0.0 mask 255.255.255.0 per-bandwidth 3000
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
pim sm
igmp enable
igmp version 3
igmp limit 15
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 10.1.1.1 255.255.255.0
pim sm
#
interface NULL0
#
ospf 100
area 0.0.0.0
network 192.168.1.0 0.0.0.255
network 10.1.1.0 0.0.0.255
#
igmp
limit 20
#
return
l Configuration file of Router B
#
sysname RouterB
#
igmp global limit 30
#
multicast routing-enable
#
multicast-channel
unspecified-channel deny
multicast limit 20
channel HDTV type ssm
multicast limit 12
group 232.0.0.0 mask 255.255.255.0 source 192.168.3.0
mask 255.255.255.0 per-bandwidth 4000
channel SDTV type asm
multicast limit 10
group 225.0.0.0 mask 255.255.255.0 per-bandwidth 3000
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.2.1 255.255.255.0
pim sm
igmp enable
igmp version 3
igmp limit 15
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 10.1.2.1 255.255.255.0
pim sm
#
interface NULL0
#
ospf 100
area 0.0.0.0
network 192.168.2.0 0.0.0.255
network 10.1.2.0 0.0.0.255
#
igmp
limit 20
#
return
l Configuration file of Router C
#
sysname RouterC
#
igmp global limit 30
#
multicast routing-enable
#
multicast-channel
unspecified-channel deny
multicast limit 20
channel HDTV type ssm
multicast limit 12
group 232.0.0.0 mask 255.255.255.0 source 192.168.3.0
mask 255.255.255.0 per-bandwidth 4000
channel SDTV type asm
multicast limit 10
group 225.0.0.0 mask 255.255.255.0 per-bandwidth 3000
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.2.2 255.255.255.0
pim sm
igmp enable
igmp version 3
igmp limit 15
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ip address 10.1.3.1 255.255.255.0
pim sm
#
interface NULL0
#
ospf 100
area 0.0.0.0
network 192.168.2.0 0.0.0.255
network 10.1.3.0 0.0.0.255
#
igmp
limit 20
#
return
This chapter describes the basic principles of IPv4 multicast forwarding, configuration methods
of forwarding policies, and maintenance of commands, and provides configuration examples.
10.1 IPv4 Multicast Routing Management Introduction
This section describes the principle and basic concepts of multicast routing, multicast
forwarding, and RPF.
10.2 Configuring a Static Multicast Route
This section describes how to configure static multicast routes.
10.3 Configuring the Multicast Routing Policy
This section describes how to configure a multicast routing policy.
10.4 Configuring the Multicast Forwarding Scope
This section describes how to configure the multicast forwarding range.
10.5 Configuring Control Parameters of the Multicast Forwarding Table
This section describes how to configure control parameters of the multicast forwarding table.
10.6 Maintaining the Multicast Policy
This section describes how to test multicast routing, check RPF paths and multicast paths, clear
multicast routing and forwarding entries, and monitor the status of multicast routing and
forwarding.
10.7 Configuration Examples
This section provides several configuration examples of IPv4 multicast routing management.
10.8 Troubleshooting of Static Multicast Routes
This section describes how to locate and clear the faults of a static multicast route.
The static multicast route cannot be used to forward data. It affects only the RPF check, and is
therefore also called a static RPF route.
A static multicast route is valid only on the router on which it is configured, and cannot be
advertised or imported to other routers.
l By default, the router chooses the route with the largest next-hop address.
l According to the longest-match rule, the router selects the route whose destination address
longest matches the source address of the packet.
l Load splitting is configured among equal-cost routes. Performing load splitting of multicast
traffic according to different policies can optimize network traffic transmission in the
scenario where multiple multicast data flows exist.
There are five multicast load splitting policies: stable-preferred, balance-preferred, source
address-based, group address-based, and source and group addresses-based. The five load
splitting policies are mutually exclusive. In addition, you can configure load splitting
weights on the interfaces to achieve unbalanced multicast load splitting.
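As a sketch of how a policy is selected (the exact keyword set should be checked against the
command reference for this version), a load splitting policy such as stable-preferred is configured
in the system view:
[RouterA] multicast load-splitting stable-preferred
The five policy keywords are mutually exclusive; configuring a new one overrides the previous
policy.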
number of entries according to the actual networking and service performance can avoid
router faults caused by excessive entries.
l Limiting the number of downstream nodes of each forwarding entry
A router replicates a multicast packet for each downstream node, and then sends it out. Each
downstream node forms a branch of an MDT. The number of downstream nodes determines
the maximum scale of the MDT and the multicast service range. Users can define the
number of downstream nodes of a single forwarding entry. Limiting the number of
downstream nodes according to the actual networking and service performance can reduce
the processing pressure of a router and control the multicast service range.
NOTE
The mtrace command can be used to trace multicast path on a specified multicast VPN network to maintain
multicast VPN services and locate faults on the network.
The ping multicast command is used to check whether a group is reachable and to implement
the following functions:
The mtrace command can be used to trace the following paths and output the hop information:
You can ping multicast addresses by using the Network Quality Analysis (NQA) test instances or related
commands. For detailed configurations of NQA test instances, refer to the HUAWEI NetEngine80E/40E
Router Configuration Guide - System Management NQA Configuration.
Applicable Environment
Static multicast route has the following functions:
Pre-configuration Tasks
Before configuring a static multicast route, complete the following tasks:
Data Preparation
To configure a static multicast route, you need the following data.
No. Data
Context
CAUTION
When configuring a static multicast route, configure the outgoing interface through the command
if the next hop is in the point-to-point format. If the next hop is not in the point-to-point format,
you must use the next hop.
Procedure
Step 1 Run:
system-view
----End
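A minimal sketch of a static multicast route (the source range and next-hop address are assumed
values; on a point-to-point link, the outgoing interface could be specified instead of the next hop):
[RouterA] ip rpf-route-static 192.168.3.0 24 10.1.4.1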
Applicable Environment
If multiple equal-cost unicast routes exist when a multicast router selects an upstream interface,
you can configure the router to select the RPF route by using one of the following methods:
l By default, the router chooses the route with the largest next-hop address.
l According to the longest-match rule, you can configure the router to select the route whose
destination address longest matches the source address of the packet.
l You can configure load splitting among these routes. Performing load splitting of multicast
traffic according to different policies can optimize network traffic when multiple multicast
data flows exist.
Pre-configuration Tasks
Before configuring the multicast routing policy, complete the following tasks:
Data Preparation
To configure the multicast routing policy, you need the following data.
No. Data
Context
CAUTION
Configurations related to VPN instances are applicable only to the PE router. When configuring
the longest match of multicast routes for a VPN instance on a PE, perform the configuration in
the VPN instance. In other cases, the longest match is configured in the public network instance.
Procedure
l Public network instance
1. Run:
system-view
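As a hedged sketch of the remaining steps, the longest-match rule would then be enabled in the corresponding view roughly as follows (the multicast longest-match keyword and the VPN instance name vpna are assumptions; verify them against the command reference for your software version):

```
# Public network instance:
<HUAWEI> system-view
[HUAWEI] multicast longest-match
# VPN instance (vpna is an example name):
[HUAWEI] ip vpn-instance vpna
[HUAWEI-vpn-instance-vpna] multicast longest-match
```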
Context
CAUTION
Configurations related to VPN instances are applicable only to the PE router. When configuring
load splitting among multicast routes for a VPN instance on a PE, perform the configuration in
the VPN instance. In other cases, load splitting among multicast routes is configured in the
public network instance.
The multicast load splitting function extends the multicast routing rules so that route selection
does not fully depend on the RPF check. If multiple equal-cost optimal routes exist on the
network, all of them can be used for multicast data forwarding, and multicast traffic is load split
among these equal-cost routes.
By default, load splitting is not performed.
Do as follows on the multicast router:
Procedure
l Public network instance
1. Run:
system-view
Multicast load splitting is configured. The parameters of the command are explained
as follows:
– balance-preferred: indicates balance-preferred load splitting. This policy is
applicable to the scenario where hosts frequently join or leave the groups, which
requires automatic load adjustment.
If balance-preferred is specified, the router automatically adjusts and balances
the entries on the equal-cost routes when equal-cost routes are added or deleted,
IPv4 multicast routing entries are deleted, or IPv4 load splitting weights on the
interfaces are changed.
– stable-preferred: indicates stable-preferred load splitting. This policy is
applicable to the stable multicast networking.
If stable-preferred is specified, the router automatically adjusts and balances the
entries when equal-cost routes are added or deleted; however, when IPv4 multicast
routing entries are deleted or load splitting weights on the interfaces are changed,
the router does not automatically adjust the entries on the equal-cost routes.
– group: indicates group address-based load splitting. This policy is applicable to
the scenario of one source to multiple groups.
– source: indicates source address-based load splitting. This policy is applicable to
the scenario of one group to multiple sources.
– source-group: indicates source and group addresses-based load splitting. This
policy is applicable to the scenario of multiple sources to multiple groups.
NOTE
It is recommended to adopt a fixed IPv4 multicast load splitting policy based on the actual
networking. It is recommended to use the balance-preferred or stable-preferred policy.
balance-preferred or stable-preferred cannot be configured on the interface enabled with
PIM-DM.
l VPN instance
1. Run:
system-view
2. Run:
ip vpn-instance vpn-instance-name
----End
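Putting the procedure together, a sketch of configuring a load splitting policy with the keywords explained above (the chosen policies and the VPN instance name vpna are example values; stable-preferred suits stable networking, as the NOTE recommends):

```
# Public network instance:
<HUAWEI> system-view
[HUAWEI] multicast load-splitting stable-preferred
# VPN instance (vpna is an example name):
[HUAWEI] ip vpn-instance vpna
[HUAWEI-vpn-instance-vpna] multicast load-splitting source-group
```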
Context
When stable-preferred or balance-preferred load splitting is configured, the forwarding
capabilities of the equal-cost routes may differ from the loads that they actually bear, so
balanced load splitting cannot meet network requirements in some scenarios. In such a case,
you can configure a load splitting weight on an interface to achieve unbalanced multicast
load splitting.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
The greater the multicast load splitting weight of an interface, the more multicast routing entries
take this interface as the upstream interface. When the multicast load splitting weight on an
interface is 0, the routes that take this interface as the upstream interface do not take part in
load splitting.
Step 3 Run:
multicast load-splitting weight weight-value
----End
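The three steps above can be put together as follows (the interface POS 1/0/1 and the weight value 2 follow the load splitting example later in this chapter; a weight of 0 would exclude the interface from load splitting):

```
<HUAWEI> system-view
[HUAWEI] interface pos 1/0/1
[HUAWEI-Pos1/0/1] multicast load-splitting weight 2
```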
Procedure
l Run the following commands to check the multicast routing table.
----End
Applicable Environment
The multicast information to which each multicast group corresponds is forwarded within a
certain scope on the network. Users can define the multicast forwarding scope by using the
following methods:
l Configuring the multicast forwarding boundary to form a closed multicast forwarding area.
An interface configured with a forwarding boundary of a multicast group cannot send or
receive packets of that multicast group.
l Configuring the TTL threshold of multicast forwarding on an interface to limit the
forwarding distance of multicast packets. The interface forwards only the packet whose
TTL value is not smaller than the threshold. If the TTL value of a packet is smaller than
the threshold, the packet is discarded.
Pre-configuration Tasks
Before configuring the multicast forwarding scope, complete the following tasks:
Data Preparation
To configure the multicast forwarding scope, you need the following data.
No. Data
1 Group address, mask, and mask length of the multicast forwarding boundary
Context
By default, no multicast forwarding boundary is configured on the interface.
Do as follows on the multicast router:
Procedure
Step 1 Run:
system-view
----End
Context
By default, the forwarding TTL threshold is 1.
Do as follows on the multicast router:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
multicast minimum-ttl ttl-value
----End
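The two scope-limiting methods described above can be sketched together on one interface (the interface, the group range 225.1.1.0/24, and the TTL value 5 are example values; the multicast boundary keyword is inferred from the display multicast boundary command shown below):

```
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 2/0/0
# Forwarding boundary for the group range 225.1.1.0/24 (example values):
[HUAWEI-GigabitEthernet2/0/0] multicast boundary 225.1.1.0 24
# Discard multicast packets whose TTL is smaller than 5:
[HUAWEI-GigabitEthernet2/0/0] multicast minimum-ttl 5
```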
Procedure
l Run the following commands to check the multicast routing table.
– display multicast { vpn-instance vpn-instance-name | all-instance } routing-table
[ group-address [ mask { group-mask | group-mask-length } ] | source-address
[ mask { source-mask | source-mask-length } ] | incoming-interface { interface-type
interface-number | register } | outgoing-interface { include | exclude | match }
{ interface-type interface-number | register | none } ] * [ outgoing-interface-
number [ number ] ]
– display multicast routing-table [ group-address [ mask { group-mask | group-mask-
length } ] | source-address [ mask { source-mask | source-mask-length } ] | incoming-
interface { interface-type interface-number | register } | outgoing-interface
{ include | exclude | match } { interface-type interface-number | vpn-instance vpn-
instance-name | register | none } ] * [ outgoing-interface-number [ number ] ]
l Run the display multicast [ vpn-instance vpn-instance-name | all-instance ] boundary
[ group-address [ mask | mask-length ] ] [ interface interface-type interface-number ]
command to check information about the multicast boundary of an interface.
l Run the display multicast [ vpn-instance vpn-instance-name | all-instance ] minimum-
ttl [ interface-type interface-number ] command to check the forwarding TTL threshold on
an interface.
----End
Applicable Environment
To plan a network according to the services, the ISP needs to apply the following configuration
policies:
Pre-configuration Tasks
Before configuring control parameters of the multicast forwarding table, complete the following
tasks:
Data Preparation
To configure control parameters of the multicast forwarding table, you need the following data.
No. Data
2 Matching policy, route sequence, and route preference of the multicast routes
Context
CAUTION
Configurations related to VPN instances are applicable only to the PE router. When configuring
the maximum number of entries in the forwarding table for a VPN instance on a PE, perform
the configuration in the VPN instance. In other cases, the maximum number of entries in the
forwarding table is configured in the public network instance.
Procedure
l Public network instance
1. Run:
system-view
----End
Context
CAUTION
This configuration becomes valid only after the reset multicast forwarding-table command is
used. Multicast services are interrupted after you run the reset multicast forwarding-table
command. So, confirm the action before you use the command.
Configurations related to the VPN instance are applicable only to the PE router. When
configuring the maximum number of downstream nodes for a forwarding entry in a VPN instance
on a PE, perform the configuration in the VPN instance. In other cases, the maximum number
of entries in the forwarding table is configured in the public network instance.
Procedure
l Public network instance
1. Run:
system-view
The configured maximum number takes effect only when it is smaller than the default value.
----End
Procedure
l Run the ping multicast -i interface-type interface-number [ -c count | -m time | -p
pattern | -q | -s packetsize | -t timeout | -tos tos-value | -v ] * host command in any view to
ping a reserved group address.
l Run the ping multicast [ -c count | -h ttl-value | -m time | -p pattern | -q | -s packetsize | -
t timeout | -tos tos-value | -v ] * host command in any view to ping a common group address.
----End
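For example, following the syntax above, pinging the common group address 225.1.1.1 (used in this chapter's examples) with five packets would look like this usage sketch:

```
<HUAWEI> ping multicast -c 5 225.1.1.1
```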
Context
NOTE
When checking the RPF path or multicast path from a source to a destination host, run the mtrace query-
policy [ acl-number ] command on the router connected to hosts to configure the filtering policy for queriers.
The ACL defines the address scope of reliable queriers. Based on the ACL, the last-hop router refuses the
IGMP-Tracert-Query messages sent by illegal queriers. Note the following when using this command:
l This command is valid only on the last-hop router, and only when the querier is not the last-hop router.
l This command is used to filter only the IGMP-Tracert-Query message encapsulated in a unicast IP
packet.
l This command is not applicable to the trace that is initiated by the local querier.
When a fault occurs during data transmission, you can run the following commands in any view
to check RPF paths and multicast paths.
Procedure
l Run the mtrace [ -ur resp-dest | -l [ stat-times ] [ -st stat-int ] | -m max-ttl | -q nqueries | -
ts ttl | -tr ttl | -v | -w timeout | -vpn-instance vpn-name ] * source source-address command
in any view to check the RPF path from a source to a querier.
l Run the mtrace -g group [ { -mr | -ur resp-dest } | -l [ stat-times ] [ -st stat-int ] | -m max-
ttl | -q nqueries | -ts ttl | -tr ttl | -v | -w timeout | -vpn-instance vpn-name ] * source source-
address command in any view to check the multicast path from a source to a querier.
l Run the mtrace [ -gw last-hop-router | -d ] -r receiver [ -ur resp-dest | -a source-ip-
address | -l [ stat-times ] [ -st stat-int ] | -m max-ttl | -q nqueries | -ts ttl | -tr ttl | -v | -w
timeout | -vpn-instance vpn-name ] * source source-address command in any view to
check the RPF path from a source to a destination host.
l Run the mtrace [ -gw last-hop-router | -b | -d ] -r receiver -g group [ { -mr | -ur resp-
dest } | -a source-ip-address | -l [ stat-times ] [ -st stat-int ] | -m max-ttl | -q nqueries | -ts
ttl | -tr ttl | -v | -w timeout | -vpn-instance vpn-name ] * source source-address command
in any view to check the multicast path from a source to a destination host.
----End
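For example, following the syntax above, checking the multicast path from the source 10.1.3.2 to the querier for group 225.1.1.1 (addresses reuse this chapter's examples) would look like this usage sketch:

```
<HUAWEI> mtrace -g 225.1.1.1 source 10.1.3.2
```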
Context
CAUTION
The reset command clears the entries in the multicast forwarding table or the multicast routing
table. It may result in abnormal multicast information forwarding. After the routing entries in
the multicast routing table are cleared, the corresponding forwarding entries corresponding to
the public network instance or VPN instance are also cleared. So, confirm the action before you
use the command.
Procedure
l Run the following commands to clear the forwarding entries in the multicast forwarding
table.
– reset multicast [ vpn-instance vpn-instance-name | all-instance ] forwarding-table
all
– reset multicast [ vpn-instance vpn-instance-name | all-instance ] forwarding-table
{ group-address [ mask { group-mask | group-mask-length } ] | source-address
[ mask { source-mask | source-mask-length } ] | incoming-interface { interface-type
interface-number | register } } *
l Run the following commands to clear the routing entries in the multicast routing table.
– reset multicast [ vpn-instance vpn-instance-name | all-instance ] routing-table all
– reset multicast [ vpn-instance vpn-instance-name | all-instance ] routing-table
{ group-address [ mask { group-mask | group-mask-length } ] | source-address
[ mask { source-mask | source-mask-length } ] | incoming-interface { interface-type
interface-number | register } } *
----End
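For example, to clear all forwarding entries in the public network instance (as the caution above notes, this may interrupt multicast forwarding, so confirm the action first):

```
<HUAWEI> reset multicast forwarding-table all
```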
Context
In routine maintenance, you can run the following commands in any view to check the status of
multicast routing and forwarding.
Procedure
l Run the display multicast [ vpn-instance vpn-instance-name | all-instance ] boundary
[ group-address [ mask | mask-length ] ] [ interface interface-type interface-number ]
command in any view to check the multicast boundary configured on an interface.
l Run the display multicast [ vpn-instance vpn-instance-name | all-instance ] forwarding-
table [ group-address [ mask { group-mask | group-mask-length } ] | source-address
[ mask { source-mask | source-mask-length } ] | incoming-interface { interface-type
interface-number | register } | outgoing-interface { { include | exclude | match }
{ interface-type interface-number | register | none } } | statistics | slot slot-number ] *
command in any view to check the multicast forwarding table.
l Run the display multicast [ vpn-instance vpn-instance-name | all-instance ] minimum-
ttl [ interface-type interface-number ] command in any view to check the minimum TTL
value when a multicast data packet is forwarded by an interface.
l Run the following commands in any view to check the multicast routing table.
– display multicast { vpn-instance vpn-instance-name | all-instance } routing-table
[ group-address [ mask { group-mask | group-mask-length } ] | source-address
[ mask { source-mask | source-mask-length } ] | incoming-interface { interface-type
interface-number | register } | outgoing-interface { include | exclude | match }
{ interface-type interface-number | register | none } ] * [ outgoing-interface-
number [ number ] ]
– display multicast routing-table [ group-address [ mask { group-mask | group-mask-
length } ] | source-address [ mask { source-mask | source-mask-length } ] | incoming-
interface { interface-type interface-number | register } | outgoing-interface
Networking Requirements
As shown in Figure 10-1, the network runs PIM-DM, all routers support multicast, and the
receiver can receive information from the multicast source. Router A, Router B, and Router C
run OSPF. It is required to configure a static multicast route so that the multicast path from the
source to the receiver is different from the unicast path between them.
Figure 10-1 Networking diagram for changing static multicast routes to RPF routes
[Topology: Source 8.1.1.2/24 -- GE2/0/0 8.1.1.1/24 on Router A; Router A POS1/0/0 9.1.1.1/24
-- POS1/0/0 9.1.1.2/24 on Router B; Router C POS3/0/0 12.1.1.2/24 connects toward Router A
and POS2/0/0 13.1.1.2/24 connects toward Router B; Router B GE3/0/0 7.1.1.1/24 -- Receiver
7.1.1.2/24]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and enable OSPF on each interface.
2. Enable multicast on each router, PIM-DM on each interface, and IGMP on the interface
connected to hosts.
3. Configure static multicast RPF routes on Router B, and specify Router C as the RPF
neighbor to the source.
Data Preparation
To complete the configuration, you need the following data:
l IP address of the source
l POS 2/0/0 through which Router B connects to Router C
Procedure
Step 1 Configure an IP address and a unicast routing protocol on each router.
# As shown in Figure 10-1, configure IP addresses and masks on the interfaces of each router.
OSPF is run on Router A, Router B, and Router C, and the three routers are able to update routes
among them through the unicast routing protocol. The configuration procedure is not mentioned
here.
Step 2 Enable multicast on each router and PIM-DM on each interface
# Enable multicast on each router, PIM-DM on each interface, and IGMP on the interface
connected to hosts. The configurations on other routers are similar to that of Router B, and are
not mentioned here.
[RouterB] multicast routing-enable
[RouterB] interface pos 1/0/0
[RouterB-Pos1/0/0] pim dm
[RouterB-Pos1/0/0] quit
[RouterB] interface pos 2/0/0
[RouterB-Pos2/0/0] pim dm
[RouterB-Pos2/0/0] quit
[RouterB] interface gigabitethernet 3/0/0
[RouterB-GigabitEthernet3/0/0] pim dm
[RouterB-GigabitEthernet3/0/0] igmp enable
[RouterB-GigabitEthernet3/0/0] quit
# Run the display multicast rpf-info command on Router B to view the RPF information of
the source. You can find that the RPF route is a unicast route, and the RPF neighbor is Router
A. The display is as follows:
<RouterB> display multicast rpf-info 8.1.1.2
VPN-Instance: public net
RPF information about source 8.1.1.2:
RPF interface: Pos1/0/0, RPF neighbor: 9.1.1.1
Referenced route/mask: 8.1.1.0/24
Referenced route type: unicast
Route selection rule: preference-preferred
Load splitting rule: disable
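The static multicast RPF route itself (taken from the configuration file below) is then configured on Router B so that 13.1.1.2 on Router C becomes the RPF neighbor to the source:

```
[RouterB] ip rpf-route-static 8.1.1.0 255.255.255.0 13.1.1.2
```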
----End
Configuration Files
Configuration file of Router B
#
sysname RouterB
#
multicast routing-enable
#
interface pos1/0/0
undo shutdown
link-protocol ppp
ip address 9.1.1.2 255.255.255.0
pim dm
#
interface pos2/0/0
undo shutdown
link-protocol ppp
ip address 13.1.1.1 255.255.255.0
pim dm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 7.1.1.1 255.255.255.0
pim dm
igmp enable
#
ospf 1
area 0.0.0.0
network 7.1.1.0 0.0.0.255
network 9.1.1.0 0.0.0.255
network 13.1.1.0 0.0.0.255
#
ip rpf-route-static 8.1.1.0 255.255.255.0 13.1.1.2
#
return
Networking Requirements
As shown in Figure 10-2, the network runs PIM-DM, all routers support multicast, and the
receiver can receive information from the multicast source Source1. Router B and Router C run
OSPF. There is no unicast route between Router A and Router B. It is required to use a static
multicast route to enable the receiver to receive the information sent by Source2.
Figure 10-2 Networking diagram for connecting the RPF route through static multicast routes
[Topology: PIM-DM runs on the whole network; OSPF runs between Router B and Router C.
Source1 10.1.3.2/24 -- GE2/0/0 10.1.3.1/24 on Router B; Router B GE3/0/0 10.1.4.1/24 --
GE3/0/0 10.1.4.2/24 on Router A; Router A GE1/0/0 10.1.5.1/24 -- Source2 10.1.5.2/24;
Router B GE1/0/0 10.1.2.2/24 -- GE1/0/0 10.1.2.1/24 on Router C; Router C GE2/0/0
10.1.1.1/24 -- Receiver]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and OSPF on each interface.
2. Enable multicast on each router, PIM-DM on each interface, and IGMP on the interface
connected to hosts.
3. Configure a static multicast RPF route between Router B and Router C.
Data Preparation
To complete the configuration, you need the following data:
l IP address of Source 2.
l RPF interface through which Router B connects to Source2 is GE 3/0/0, and the RPF
neighbor is Router A.
l RPF interface through which Router C connects to Source2 is GE 1/0/0, and the RPF
neighbor is Router B.
Procedure
Step 1 Configure an IP address and a unicast routing protocol on each router.
# As shown in Figure 10-2, configure IP addresses and masks of the interfaces on each router.
Router B and Router C belong to the same OSPF area, and the two routers are able to update
routes among them through the unicast routing protocol. The configuration procedure is not
mentioned here.
Step 2 Enable multicast on each router and PIM-DM on each interface
# Enable multicast on each router, enable PIM-DM on each interface, and enable IGMP on the
interface connected to hosts.
[RouterA] multicast routing-enable
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] pim dm
[RouterA-GigabitEthernet1/0/0] quit
[RouterA] interface gigabitethernet 3/0/0
[RouterA-GigabitEthernet3/0/0] pim dm
[RouterB] multicast routing-enable
[RouterB] interface gigabitethernet 1/0/0
[RouterB-GigabitEthernet1/0/0] pim dm
[RouterB-GigabitEthernet1/0/0] quit
[RouterB] interface gigabitethernet 2/0/0
[RouterB-GigabitEthernet2/0/0] pim dm
[RouterB-GigabitEthernet2/0/0] quit
[RouterB] interface gigabitethernet 3/0/0
[RouterB-GigabitEthernet3/0/0] pim dm
[RouterB-GigabitEthernet3/0/0] quit
[RouterC] multicast routing-enable
[RouterC] interface gigabitethernet 1/0/0
[RouterC-GigabitEthernet1/0/0] pim dm
[RouterC-GigabitEthernet1/0/0] quit
[RouterC] interface gigabitethernet 2/0/0
[RouterC-GigabitEthernet2/0/0] pim dm
[RouterC-GigabitEthernet2/0/0] igmp enable
[RouterC-GigabitEthernet2/0/0] quit
# Source 1 (10.1.3.2/24) and Source 2 (10.1.5.2/24) send multicast data to the multicast group
G (225.1.1.1). The receiver joins G. The receiver can then receive the multicast data sent by
Source 1, but cannot receive the multicast data sent by Source 2.
# Run the display multicast rpf-info 10.1.5.2 command on Router B and Router C. If there is
no display, it indicates that Router B and Router C have no RPF route to Source 2.
Step 3 Configure a static multicast route
# Configure a static multicast RPF route on Router B and configure Router A as the RPF neighbor
to Source 2.
[RouterB] ip rpf-route-static 10.1.5.0 255.255.255.0 10.1.4.2
# Configure a static multicast RPF route on Router C, and configure Router B as the RPF
neighbor to Source 2.
[RouterC] ip rpf-route-static 10.1.5.0 255.255.255.0 10.1.2.2
# Run the display pim routing-table command to view the routing table. Router C has the
multicast entry of Source 2. The receiver can receive the multicast data from Source 2.
<RouterC> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 2 (S, G) entry
(*, 225.1.1.1)
Protocol: pim-dm, Flag: WC
UpTime: 03:54:19
Upstream interface: NULL
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-dm, UpTime: 01:38:19, Expires: never
(10.1.3.2, 225.1.1.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:00:44
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: 10.1.2.2
RPF prime neighbor: 10.1.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-dm, UpTime: 00:00:44, Expires: never
(10.1.5.2, 225.1.1.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:00:44
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: 10.1.2.2
RPF prime neighbor: 10.1.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-dm, UpTime: 00:00:44, Expires: never
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.5.1 255.255.255.0
pim dm
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.1.4.2 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 10.1.5.0 0.0.0.255
network 10.1.4.0 0.0.0.255
#
return
Networking Requirements
As shown in Figure 10-3, the network runs PIM-DM. Router A and Router B support multicast,
but Router C does not. Router A, Router B, and Router C run OSPF. It is required that the
receiver can normally receive the information sent by the source; a static multicast route is used
to achieve this.
Figure 10-3 Networking diagram of implementing multicast through a static multicast routed
tunnel
[Topology: Source -- Router A -- Router C (no multicast support) -- Router B -- Receiver; a GRE tunnel is established between Router A and Router B]
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure an IP address for each interface.
# Configure an IP address and mask for each interface as shown in Figure 10-3. The
configuration details are not mentioned here.
Step 2 Establish a GRE tunnel between Router A and Router B.
# Configure Router A.
[RouterA] interface tunnel 1/0/1
[RouterA-Tunnel1/0/1] ip address 20.1.1.1 24
[RouterA-Tunnel1/0/1] tunnel-protocol gre
[RouterA-Tunnel1/0/1] source 10.1.1.1
[RouterA-Tunnel1/0/1] destination 14.1.1.2
[RouterA-Tunnel1/0/1] quit
# Configure Router B.
[RouterB] interface tunnel 1/0/1
[RouterB-Tunnel1/0/1] ip address 20.1.1.2 24
[RouterB-Tunnel1/0/1] tunnel-protocol gre
[RouterB-Tunnel1/0/1] source 14.1.1.2
[RouterB-Tunnel1/0/1] destination 10.1.1.1
[RouterB-Tunnel1/0/1] quit
Step 4 Enable multicast on Router A and Router B and enable PIM-DM on each interface.
# Enable multicast on Router A and Router B, enable PIM-DM on each interface, and enable
IGMP on the interface connected to hosts.
# Configure Router A.
[RouterA] multicast routing-enable
[RouterA] interface gigabitethernet2/0/0
[RouterA-GigabitEthernet2/0/0] pim dm
[RouterA-GigabitEthernet2/0/0] quit
[RouterA] interface pos 1/0/0
[RouterA-Pos1/0/0] pim dm
[RouterA-Pos1/0/0] quit
[RouterA] interface tunnel 1/0/1
[RouterA-Tunnel1/0/1] pim dm
[RouterA-Tunnel1/0/1] quit
# Configure Router B.
[RouterB] multicast routing-enable
[RouterB] interface pos 3/0/0
[RouterB-Pos3/0/0] pim dm
[RouterB-Pos3/0/0] quit
[RouterB] interface gigabitethernet 2/0/0
[RouterB-GigabitEthernet2/0/0] pim dm
[RouterB-GigabitEthernet2/0/0] igmp enable
[RouterB-GigabitEthernet2/0/0] quit
[RouterB] interface tunnel 1/0/1
[RouterB-Tunnel1/0/1] pim dm
[RouterB-Tunnel1/0/1] quit
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
interface Pos1/0/0
link-protocol ppp
ip address 10.1.1.1 255.255.255.0
pim dm
#
interface GigabitEthernet2/0/0
ip address 9.1.1.1 255.255.255.0
pim dm
#
interface Tunnel1/0/1
equal-cost routes for transmission. In addition, it is required to configure multicast load splitting
weights for different interfaces to achieve unbalanced multicast load splitting.
Networking Requirements
Multicast route selection is based on the RPF check, and the route selection policy depends on
unicast routes. The selected unique route is used to guide the forwarding of multicast data. If
the volume of multicast traffic is excessively large on a network, the network may be congested
and multicast services may be affected. Multicast load splitting extends the rules of multicast
route selection and makes multicast route selection not completely depend on the RPF check.
When there are multiple equal-cost optimal routes on a network and the routes can be used to
forward multicast data, multicast load splitting can be performed among the routes for multicast
traffic.
Currently, multicast load splitting is classified into multicast source-based load splitting,
multicast group-based load splitting, and multicast source- and group-based load splitting.
These types of multicast load splitting, however, cannot meet the requirements in all scenarios.
When multicast routing entries and network configurations are stable, RPF interfaces and RPF
neighbors remain unchanged; if the number of entries is small, balanced load splitting cannot
be achieved.
Stable-preferred load splitting complements the preceding types of multicast load splitting. As
shown in Figure 10-4, there are three equal-cost routes between Router E connected to Host A
and the multicast source, and stable-preferred load splitting is configured on Router E. Therefore,
entries can be evenly distributed on the equal-cost routes and balanced load splitting can be
implemented among the equal-cost routes.
If the forwarding capabilities and the severities of traffic congestion of the three equal-cost routes
on Router E are different, balanced load splitting cannot meet network requirements. In this case,
you need to configure unbalanced load splitting on Router E, and set different load splitting
weights on the upstream interfaces of Router E to change the number of entries distributed to
the equal-cost routes. Thus, you can flexibly control the number of entries distributed on the
equal-cost routes.
[Topology: PIM-SM network. Source -- GE1/0/0 on Router A (Loopback0 on Router A is the
C-BSR and C-RP); Router A connects to Router E over three equal-cost paths: POS2/0/1 via
Router B to Router E POS1/0/1, POS2/0/2 via Router C to Router E POS1/0/2, and POS2/0/3
via Router D to Router E POS1/0/3; Router E GE2/0/0 10.110.2.1/24 -- Host A]
Configuration Roadmap
The configuration roadmap is as follows:
l Host A needs to receive data from new multicast groups. Therefore, configure a multicast
load splitting weight for each upstream interface on Router E to achieve unbalanced
multicast load splitting.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each interface on the routers according to Figure 10-4. The detailed
configuration procedure is not mentioned here.
Step 2 Configure IS-IS to implement interworking among routers and ensure that route costs are equal.
The detailed configuration procedure is not mentioned here.
Step 3 Enable multicast on all routers and enable PIM-SM on each interface.
Step 6 Configure the interfaces at the host side of Router E to join the multicast groups statically in
batches.
You can find that (*, G) and (S, G) entries are equally distributed to three equal-cost routes, with
the upstream interfaces being POS 1/0/3, POS 1/0/2, and POS 1/0/1 respectively.
NOTE
The load splitting algorithm processes (*, G) and (S, G) entries separately and the process rules are the
same.
Step 8 Configure multicast load splitting weights for different upstream interfaces of Router E to
achieve unbalanced multicast load splitting.
# Set the multicast load splitting weight on POS 1/0/1 to 2.
[RouterE] interface pos 1/0/1
[RouterE-Pos1/0/1] multicast load-splitting weight 2
[RouterE-Pos1/0/1] quit
Step 9 Configure the interfaces at the host side of Router E to join new multicast groups statically in
batches.
# Configure GE 2/0/0 to join multicast groups from 225.1.1.4 to 225.1.1.9 statically.
[RouterE] interface gigabitethernet 2/0/0
[RouterE-GigabitEthernet2/0/0] igmp static-group 225.1.1.4 inc-step-mask 32 number
6
[RouterE-GigabitEthernet2/0/0] quit
The upstream interfaces of the existing (*, G) and (S, G) entries remain unchanged. Because the
multicast load splitting weight of POS 1/0/1 is higher than that of POS 1/0/2, more newly
generated entries take POS 1/0/1 as the upstream interface than POS 1/0/2. The multicast load
splitting weight of POS 1/0/3 is 0, which indicates that this interface does not take part in load
splitting of new entries.
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
multicast routing-enable
#
isis 1
network-entity 10.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.110.1.2 255.255.255.0
isis enable 1
pim sm
#
interface Pos2/0/1
link-protocol ppp
undo shutdown
ip address 192.168.1.1 255.255.255.0
isis enable 1
pim sm
#
interface Pos2/0/2
link-protocol ppp
undo shutdown
ip address 192.168.2.1 255.255.255.0
isis enable 1
pim sm
#
interface Pos2/0/3
link-protocol ppp
undo shutdown
ip address 192.168.3.1 255.255.255.0
isis enable 1
pim sm
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
isis enable 1
pim sm
#
pim
c-bsr LoopBack0
c-rp LoopBack0
#
return
l Configuration file of Router B
#
sysname RouterB
#
multicast routing-enable
#
isis 1
network-entity 10.0000.0000.0002.00
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 192.168.1.2 255.255.255.0
isis enable 1
pim sm
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.4.1 255.255.255.0
isis enable 1
pim sm
#
return
l Configuration file of Router C
#
sysname RouterC
#
multicast routing-enable
#
isis 1
network-entity 10.0000.0000.0003.00
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 192.168.2.2 255.255.255.0
isis enable 1
pim sm
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.5.1 255.255.255.0
isis enable 1
pim sm
#
return
l Configuration file of Router D
#
sysname RouterD
#
multicast routing-enable
#
isis 1
network-entity 10.0000.0000.0004.00
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ip address 192.168.3.2 255.255.255.0
isis enable 1
pim sm
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ip address 192.168.6.1 255.255.255.0
isis enable 1
pim sm
#
return
Fault Symptom
A router is not configured with any dynamic routing protocol. The physical status and link layer
protocol status of its interfaces are Up; the static multicast route on the router, however, fails.
Fault Analysis
The possible causes are as follows:
l The static route is not configured or updated correctly, so it does not exist in the multicast
routing table.
l A better route has been received, which can also cause the static route to fail.
Troubleshooting
1. Check whether the static route exists in the multicast routing table. Run the display
multicast routing-table static command to check the multicast static routing table and to
ensure that the corresponding route is configured correctly and exists in the multicast
configuration table.
2. Check the interface type of the next hop of the multicast static route. If the interface is not
a point-to-point interface, the next hop must be specified as a next-hop address rather than
an outgoing interface.
3. Check whether a static multicast route matches a specified routing protocol. If the protocol
is specified when the static multicast route is configured, you can run the display ip
routing-table command to check whether the route is added for the specified protocol.
4. Check whether a static multicast route matches a specified routing policy. If the routing
policy is specified when the static multicast route is configured, you can run the display
route-policy command to check the configured routing policy.
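The checks above can be sketched as follows. This is a hedged sketch: the addresses are illustrative, and the ip rpf-route-static syntax shown for the multicast static route is an assumption to be verified against the command reference for your software version.

```
# 1. Check whether the static route exists in the multicast routing table
<Router> display multicast routing-table static
# 2. On a non-point-to-point interface, specify the next hop as an address,
#    not as an outgoing interface (illustrative addresses)
[Router] ip rpf-route-static 10.10.0.0 255.255.0.0 192.168.1.2
# 3./4. If a protocol or routing policy was specified, verify it
<Router> display ip routing-table
<Router> display route-policy
```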
11 MLD Configuration
This chapter describes the MLD fundamentals and configuration steps and maintenance for MLD
functions, along with typical examples.
11.1 MLD Introduction
This section describes the principle and concepts of MLD.
11.2 Configuring Basic MLD Functions
This section describes how to configure MLD.
11.3 Configuring Options of an MLD Packet
This section describes how to configure options of an MLD packet.
11.4 Configuring MLD Query Control
This section describes how to configure an MLD querier.
11.5 Configuring SSM Mapping
This section describes how to configure SSM mapping.
11.6 Configuring the MLD Limit Function
This section describes the application scenario of MLD limit and how to configure the MLD
limit function.
11.7 Maintaining MLD
This section describes how to clear MLD statistics and monitor the running status of MLD.
11.8 Configuration Example
This section provides a configuration example of MLD.
SSM Mapping
For MLDv1 hosts, you can configure SSM mapping to build a multicast network of the SSM
model.
Pre-configuration Tasks
Before configuring basic MLD functions, configure a unicast routing protocol to interconnect
the entire multicast domain.
Data Preparation
To configure basic MLD functions, you need the following data.
No. Data
1 MLD version
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
mld enable
----End
Context
CAUTION
Ensure that all MLD router interfaces in the same network segment are configured with the same
MLD version. Otherwise, faults may occur.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
mld version { 1 | 2 }
----End
Context
Do as follows on the router connected to hosts:
NOTE
This configuration is optional. By default, the interface does not statically join any group.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
mld static-group ipv6-group-address [ inc-step-mask ipv6-group-mask-length number
group-number ] [ source ipv6-source-address ]
The interface is configured to statically join a single multicast group or multiple multicast groups
in batches. After the interface statically joins the multicast groups, the router considers that the
members of the multicast groups exist on the network segment where the interface resides.
----End
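The batch static join described above can be sketched as follows. The interface name, group address, mask length, and group count are illustrative only.

```
[Router] interface gigabitethernet 1/0/0
# Statically join 5 multicast groups in a batch, starting from FF13::101
[Router-GigabitEthernet1/0/0] mld static-group ff13::101 inc-step-mask 64 number 5
```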
Context
Do as follows on the router connected to hosts:
NOTE
This configuration is optional. By default, the interface can join any group.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
mld group-policy { acl6-number | acl6-name acl6-name } [ 1 | 2 ]
----End
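The group policy above can be sketched with a basic ACL6 as follows. The ACL number and group address are illustrative only.

```
[Router] acl ipv6 number 2001
# Permit only the group FF13::101
[Router-acl6-basic-2001] rule permit source ff13::101 128
[Router-acl6-basic-2001] quit
[Router] interface gigabitethernet 1/0/0
# Hosts on this interface can join only groups permitted by ACL 2001
[Router-GigabitEthernet1/0/0] mld group-policy 2001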
Procedure
l Run the display mld interface [ interface-type interface-number ] [ verbose ] command
to check the MLD configuration and running information on an interface.
l Run the display mld group [ ipv6-group-address | interface interface-type interface-
number ] * [ static ] [ verbose ] command to check information on members of an MLD
multicast group.
----End
Applicable Environment
MLD has group-specific and source/group-specific Query messages. Because multicast groups
vary widely, routers cannot join all of them. The Router-Alert option is used to ensure that MLD
packets whose multicast group is not one joined by the upper-layer protocol of the IP layer are
still delivered to the upper-layer protocol for processing.
Pre-configuration Tasks
Before configuring options of an MLD packet, complete the following tasks:
Data Preparation
To configure options of an MLD packet, you need the following data.
No. Data
Context
Do as follows on the router connected to hosts:
This configuration is optional. By default, MLD packets sent by routers carry the Router-Alert
option, but the routers do not check the Router-Alert option. That is, the routers process all the
received MLD packets, including those without the Router-Alert option.
NOTE
Procedure
l Global Configuration
1. Run:
system-view
The Router-Alert option is set in the header of the MLD packet to be sent.
l Configuration on an Interface
1. Run:
system-view
The Router-Alert option is set in the header of the MLD packet to be sent.
----End
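Because the detailed steps are truncated here, the two configuration scopes can be sketched as follows. The send-router-alert keyword is an assumption; verify it against the command reference.

```
# Global configuration: valid for all interfaces
[Router] mld
[Router-mld] send-router-alert
[Router-mld] quit
# Configuration on a single interface
[Router] interface gigabitethernet 1/0/0
[Router-GigabitEthernet1/0/0] mld send-router-alert
```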
Procedure
l Run the display mld interface [ interface-type interface-number | up | down ]
[ verbose ] command to check the MLD configuration and running information on an
interface.
----End
Applicable Environment
CAUTION
A great many MLD interfaces exist in the network, and these MLD interfaces restrict each other.
Ensure that all MLD parameters of all MLD router interfaces on the same network segment are
identical; otherwise, the network may become faulty.
The MLD querier periodically sends MLD Query messages on the shared network connected to
receivers. When receiving a Report message from a member, the routers on the shared network
refresh information about the membership.
If non-queriers do not receive any query message within the Keepalive period of the MLD
querier, the querier is considered faulty, and a new round of the querier election is triggered
automatically.
In ADSL dial-up access, one host corresponds to one port, so a querier corresponds to only one
receiver host. When the receiver host frequently switches among multiple groups, such as TV
channels, you can enable the fast leave mechanism on the querier.
Pre-configuration Tasks
Before configuring MLD query control, complete the following tasks:
Data Preparation
To configure MLD query control, you need the following data.
No. Data
2 Robustness variable
5 The interval for sending MLD group-specific Query messages and group/source-
specific Query messages
Context
CAUTION
In the actual configuration, ensure that the interval for sending general Query messages is greater
than the maximum response time and is smaller than the Keepalive time of the other MLD
queriers.
NOTE
Procedure
l Global Configuration
1. Run:
system-view
The Keepalive period of the other MLD queriers is set.
By default, the formula used to calculate the Keepalive period of the other queriers
is: the Keepalive period of the other MLD queriers = robustness variable x the interval
for sending MLD general query messages + 1/2 x the MLD maximum response time.
When default values of robustness variable, the interval for sending MLD general
query messages, and maximum response time are used, the Keepalive period of the
other MLD queriers is 255 seconds.
7. Run:
lastlistener-queryinterval interval
The interval for sending MLD last listener query messages is set.
8. Run:
mld prompt-leave [ group-policy { basic-acl6-number | acl6-name acl6-
name } ]
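The fast leave mechanism described in the Applicable Environment can be sketched on the user-side interface as follows. The interface name is illustrative.

```
[Router] interface gigabitethernet 1/0/0
# Stop forwarding to the port immediately when a Done message is received,
# without sending a last listener query first
[Router-GigabitEthernet1/0/0] mld prompt-leave
```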
Applicable Environment
In the SSM host network segment, the router can know the specific source when a host joins a
group. The SSM solutions are as follows:
l Hosts and routers run MLDv2. When a host joins a group, the host specifies the source
from which the host wants to receive data.
l If some hosts in the network segment can run only MLDv1, the hosts cannot specify the
sources from which they want to receive data when joining related groups. In this case, you
need to configure SSM mapping and static mapping rules on the router. As a result, the (*,
G) information carried in the Report message is mapped to the (G, INCLUDE, (S1, S2, ...))
information.
Pre-configuration Tasks
Before configuring SSM mapping, complete the following tasks:
Data Preparation
To configure SSM mapping, you need the following data.
No. Data
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
mld enable
MLD is enabled.
Step 4 Run:
mld version 2
To ensure that the hosts that run any MLD version on the network segment can obtain the SSM
services, it is recommended to configure MLDv2 on the router interface.
Step 5 Run:
mld ssm-mapping enable
----End
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
Step 2 Run:
mld
Step 3 Run:
ssm-mapping ipv6-group-address ipv6-group-mask-length ipv6-source-address
You can run the command repeatedly to configure the mapping from a group to multiple sources.
----End
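The repeated mapping described above can be sketched as follows. The group range and source addresses are illustrative only.

```
[Router] mld
# Map (*, G) reports for groups in FF3E::/64 to two sources
[Router-mld] ssm-mapping ff3e::101 64 2001::1
[Router-mld] ssm-mapping ff3e::101 64 2001::2
```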
Procedure
l Run the display mld interface [ interface-type interface-number | up | down ]
[ verbose ] command to check the MLD configuration and running information on an
interface.
l Run the display mld group [ ipv6-group-address | interface interface-type interface-
number ] * ssm-mapping [ verbose ] to check information about a source/group-specific
address.
l Run the display mld ssm-mapping interface [ interface-type interface-number [ group
ipv6-group-address ] ] command to check information about an interface enabled with SSM
mapping.
l Run the display mld ssm-mapping group [ ipv6-group-address ] command to check SSM
mapping rules of a specified group address.
----End
Applicable Environment
To limit the number of IPTV ICPs and the number of users accessing IP core networks, you can
configure the MLD limit function.
The MLD limit function is configured on the last-hop router connected to users. You can perform
the following configurations as required:
If the MLD limit function is required to be configured globally, for a single instance, and for an interface
on the same router, it is recommended that the limits on the number of global MLD group memberships,
the number of MLD group memberships in the single instance, and the number of MLD group memberships
on the interface should be in descending order.
Pre-configuration Tasks
Before configuring the MLD limit function, complete the following task:
Data Preparation
To configure the MLD limit function, you need the following data.
No. Data
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
Step 2 Run:
mld global limit number
----End
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
Step 2 Run:
mld
Step 3 Run:
limit number
----End
Context
Do as follows on the router connected to hosts:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
mld limit number [ except { acl6-number | acl6-name acl6-name } ]
NOTE
If except is not specified in the command, the router is limited by the maximum number of MLD entries
when creating the entries for all the groups or source/groups.
Before specifying except, you need to configure the corresponding ACL. Then, the interface filters the
received MLD Join messages according to the ACL. The number of entries that match the ACL is not
limited by the maximum number of MLD entries.
----End
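The except filtering described in the NOTE can be sketched as follows. The ACL number, group range, and limit value are illustrative only.

```
[Router] acl ipv6 number 2001
# Permit a group range that should not be counted against the limit
[Router-acl6-basic-2001] rule permit source ff13:: 16
[Router-acl6-basic-2001] quit
[Router] interface gigabitethernet 1/0/0
# Entries for groups permitted by ACL 2001 are exempt from the limit of 30
[Router-GigabitEthernet1/0/0] mld limit 30 except 2001
```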
Procedure
l Run the display mld interface [ interface-type interface-number | up | down ]
[ verbose ] command to check the configuration and running of MLD on an interface.
l Run the following commands to check information about the members of an MLD multicast
group.
– display mld group [ ipv6-group-address | interface interface-type interface-number ]
*[ verbose ]
– display mld group [ ipv6-group-address | interface interface-type interface-number ]
*ssm-mapping [ verbose ]
# Run the display mld interface command to view the configuration and running status
of MLD on the router interface. The display information is as follows:
<RouterA> display mld interface gigabitethernet 1/0/0
GigabitEthernet1/0/0(FE80::200:5EFF:FE66:5100):
MLD is enabled
Current MLD version is 1
MLD state: up
MLD group policy: none
MLD limit: 30
Value of query interval for MLD (negotiated): 125 s
Value of query interval for MLD (configured): 125 s
Value of other querier timeout for MLD: 0 s
Value of maximum query response time for MLD: 10 s
Querier for MLD: FE80::200:5EFF:FE66:5100 (this router)
From the display, you can see the maximum number of MLD group members on GE1/0/0
of Router A.
----End
Context
CAUTION
The MLD groups that an interface dynamically joins are deleted after you run the reset mld
group or the reset mld group ssm-mapping command. Receivers may not receive multicast
information normally. So, confirm the action before you use the command.
Procedure
l Run the reset mld explicit-tracking { all | interface interface-type interface-number
[ host ipv6-host-address [ group ipv6-group-address [ source ipv6-source-address ] ] ] }
command in the user view to clear the hosts that join a multicast group through MLD on
an interface.
l Run the following commands in the user view to clear the MLD groups that the interface
dynamically joins (not including the MLDv1 group in the SSM range).
– reset mld group all
– reset mld group interface interface-type interface-number { all | ipv6-group-
address [ ipv6-group-mask-length ] [ ipv6-source-address [ ipv6-source-mask-
length ] ] }
l Run the following commands in the user view to clear MLDv1 groups in the SSM range.
– reset mld group ssm-mapping all
– reset mld group ssm-mapping interface interface-type interface-number { all | ipv6-
group-address [ ipv6-group-mask-length ] }
l Run the reset mld control-message counters [ interface interface-type interface-
number ] [ message-type { query | report } ] command in the user view to delete statistics
of MLD messages.
----End
Context
In routine maintenance, you can run the following commands in any view to check the running
status of MLD.
Procedure
l Run the display mld explicit-tracking [ interface interface-type interface-number [ host-
address ipv6-host-address | group ipv6-group-address source ipv6-source-address ] ]
command in any view to check information about the MLD hosts on an interface.
l Run the display mld group [ ipv6-group-address | interface interface-type interface-
number ] * [ static ] [ verbose ] command in any view to check information about groups
on each interface.
l Run the display mld group [ ipv6-group-address | interface interface-type interface-
number ] * ssm-mapping [ verbose ] command in any view to check information about a
group configured with SSM mapping.
l Run the display mld interface [ interface-type interface-number ] [ verbose ] command
in any view to check the MLD configuration and running information on an interface.
l Run the display mld routing-table [ ipv6-source-address [ ipv6-source-mask-length ] |
ipv6-group-address [ ipv6-group-mask-length ] ] * [ static ] [ outgoing-interface-
number [ number ] ] command in any view to check the MLD routing table.
l Run the display mld ssm-mapping interface [ interface-type interface-number [ group
group-address ] ] command in any view to check information about an interface enabled
with SSM mapping.
l Run the display mld ssm-mapping group [ ipv6-group-address ] command in any view
to check SSM mapping rules of a specified group address.
l Run the display mld control-message counters [ interface interface-type interface-
number ] [ message-type { query | report } ] command in any view to check the statistics
of MLD messages received by interfaces.
----End
Networking Requirements
In the IPv6 network shown in Figure 11-1, unicast routes are normal. You are required to
implement multicast in the network to enable hosts to receive the Video On Demand (VOD)
information.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the MLD version number of the routers and user hosts.
Procedure
Step 1 Enable multicast on routers and MLD and PIM-IPv6-SM on the interface at the host side.
# Enable multicast on Router A, enable MLD and PIM-IPv6-SM on GE 1/0/0, and set the MLD
version to 1.
<RouterA> system-view
[RouterA] multicast ipv6 routing-enable
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] pim ipv6 sm
[RouterA-GigabitEthernet1/0/0] mld enable
[RouterA-GigabitEthernet1/0/0] mld version 1
[RouterA-GigabitEthernet1/0/0] quit
# The configurations of Router B and Router C are the same as the configuration of Router A,
and are not mentioned here.
Step 2 Verify the configuration.
# Run the display mld interface command to view the configuration and running status of MLD
on the router interface. MLD information on GE 1/0/0 of Router B is as follows:
<RouterB> display mld interface gigabitethernet 1/0/0
GigabitEthernet1/0/0(FE80::200:5EFF:FE66:5100):
MLD is enabled
Current MLD version is 1
MLD state: up
MLD group policy: none
Value of query interval for MLD (negotiated): 125 s
Value of query interval for MLD (configured): 125 s
Value of other querier timeout for MLD: 0 s
Value of maximum query response time for MLD: 10 s
Querier for MLD: FE80::200:5EFF:FE66:5100 (this router)
From the display, you can see that Router B is a querier. This is because the IPv6 link-local
address of GE 1/0/0 on Router B is the smallest in the network segment.
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
ipv6
#
multicast ipv6 routing-enable
#
interface Pos2/0/0
undo shutdown
ipv6 enable
ipv6 address 2002::1/64
pim ipv6 sm
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 3000::12/64
pim ipv6 sm
mld enable
mld version 1
#
return
Networking Requirements
When a large number of users watch multiple programs simultaneously, a great deal of device
bandwidth is consumed, which degrades the performance of the devices and lowers the stability
of multicast data reception.
The existing multicast technologies control multicast networks by limiting the number of
multicast forwarding entries or the number of outgoing interfaces of an entry, which cannot meet
the requirements of operators for real-time video services on IPTV networks and flexible
management of network resources.
Configuring MLD limit can enable operators to properly plan network resources and flexibly
control the number of multicast groups that hosts can join. In the network shown in Figure
11-2, multicast services are deployed. The global MLD limit, instance-based MLD limit, and
interface-based MLD limit are configured on Router A, Router B, and Router C connected to
hosts to limit the number of multicast groups that the hosts can join. When the number of
multicast groups that hosts can join reaches the limit, the devices are not allowed to create new
MLD entries. This ensures that users who have joined the related multicast groups can watch
their programs more clearly and stably.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l The version number of MLD of routers and user hosts.
l The maximum number of MLD group memberships.
Procedure
Step 1 Enable multicast on routers and MLD and PIM-IPv6-SM on the interface at the host side.
# Enable multicast on Router A, enable MLD and PIM-IPv6-SM on GE 1/0/0, and set the MLD
version to 1.
<RouterA> system-view
[RouterA] multicast ipv6 routing-enable
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] pim ipv6 sm
[RouterA-GigabitEthernet1/0/0] mld enable
[RouterA-GigabitEthernet1/0/0] mld version 1
[RouterA-GigabitEthernet1/0/0] quit
[RouterA] interface pos 2/0/0
[RouterA-Pos2/0/0] pim ipv6 sm
[RouterA-Pos2/0/0] quit
# The configurations of Router B and Router C are the same as the configuration of Router A,
and are not mentioned here.
Step 2 Limit the number of MLD group memberships on the last-hop router connected to users.
# Configure the maximum number of MLD group memberships to 40 in the public network
instance.
[RouterA] mld
[RouterA-mld] limit 40
[RouterA-mld] quit
# Configurations of Router B and Router C are similar to those of Router A, and are not
mentioned here.
# Run the display mld interface command to view the configuration and running status of MLD
on the router interface. MLD information on GE 1/0/0 of Router B is as follows:
<RouterB> display mld interface gigabitethernet 1/0/0
GigabitEthernet1/0/0(FE80::200:5EFF:FE66:5100):
MLD is enabled
Current MLD version is 1
MLD state: up
MLD group policy: none
MLD limit: 30
Value of query interval for MLD (negotiated): 125 s
Value of query interval for MLD (configured): 125 s
Value of other querier timeout for MLD: 0 s
Value of maximum query response time for MLD: 10 s
Querier for MLD: FE80::200:5EFF:FE66:5100 (this router)
From the display, you can see that the maximum number of MLD group memberships that can
be created on GE 1/0/0 of Router B is 30.
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
ipv6
#
mld global limit 50
#
multicast ipv6 routing-enable
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ipv6 enable
ipv6 address 2001::1/64
pim ipv6 sm
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 3000::12/64
pim ipv6 sm
mld enable
mld version 1
mld limit 30
#
mld
limit 40
#
return
#
mld global limit 50
#
multicast ipv6 routing-enable
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ipv6 enable
ipv6 address 2003::1/64
pim ipv6 sm
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 3001::12/64
pim ipv6 sm
mld enable
mld version 1
mld limit 30
#
mld
limit 40
#
return
This chapter describes the PIM-DM (IPv6) fundamentals and configuration steps and
maintenance for PIM-DM functions, along with typical examples. PIM-DM in this chapter refers
to the IPv6 PIM-DM, unless otherwise specified.
12.1 PIM-IPv6 Introduction
This section describes the principle and concepts of PIM-DM.
12.2 Configuring Basic PIM-DM (IPv6) Functions
This section describes how to configure PIM-DM.
12.3 Adjusting Control Parameters of a Source
This section describes how to control the forwarding of multicast data according to the multicast
source in the PIM-IPv6 network.
12.4 Adjusting Control Parameters for Maintaining Neighbors
This section describes how to configure control parameters of the PIM-DM Hello message.
12.5 Adjusting Control Parameters for Prune
This section describes how to configure control parameters of the PIM-DM Join/Prune message.
12.6 Adjusting Control Parameters for State-Refresh
This section describes how to configure control parameters of the PIM-DM State-Refresh
message.
12.7 Adjusting Control Messages for Graft
This section describes how to configure control parameters of the PIM-DM Graft message.
12.8 Adjusting Control Messages for Assert
This section describes how to configure control parameters of the PIM-DM Assert message.
12.9 Configuring PIM-IPv6 Silent
This section describes how to configure PIM silent to prevent the malicious attack from a host.
12.10 Maintaining PIM-DM
This section describes how to clear statistics of IPv6 PIM control messages and monitor the
running status of IPv6 PIM-DM.
12.11 Configuration Example
In IPv4 multicast applications, the Protocol Independent Multicast (PIM) is used to establish
multicast routes between routers to replicate and forward multicast packets.
In IPv6 multicast applications, routers run PIM-IPv6. The functions and principles of PIM-IPv6
are similar to those of PIM-IPv4.
l This chapter is concerned only with the PIM-DM configuration in the IPv6 network. PIM-DM in this
chapter refers to IPv6 PIM-DM, unless otherwise specified.
l For details of PIM-SM (IPv6), refer to the chapter PIM-SM (IPv6) Configuration in the HUAWEI
NetEngine80E/40E Router Configuration Guide - IP Multicast.
l For details of ASM and SSM models, refer to the chapter "IP Multicast Overview" in the HUAWEI
NetEngine80E/40E Router Feature Description - IP Multicast.
l Neighbor filtering function: An interface sets up neighbor relationships only with the
addresses matching the filtering rules, and deletes neighbors that do not match the filtering
rules.
Pre-configuration Tasks
Before configuring basic PIM-DM functions, complete the following tasks:
l Configuring a unicast routing protocol
Data Preparation
To configure basic PIM-DM functions, you need the following data.
No. Data
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router:
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
pim ipv6 dm
After PIM-DM (IPv6) is enabled on interfaces, routers set up the PIM-IPv6 neighbor relationship
with each other. The routers can then process protocol packets received from PIM-IPv6
neighbors.
You can run the undo pim ipv6 dm command to disable PIM-DM (IPv6) on an interface.
NOTE
PIM-DM (IPv6) and PIM-SM (IPv6) cannot be enabled on an interface simultaneously. The PIM-IPv6
modes on all interfaces that belong to the same instance must be the same. When a router is deployed in a
PIM-DM (IPv6) domain, enable PIM-DM (IPv6) on all non-boundary interfaces.
----End
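Putting the commands named above together, a minimal sketch of enabling PIM-DM (IPv6) is as follows. The interface name is illustrative.

```
<Router> system-view
[Router] multicast ipv6 routing-enable
[Router] interface gigabitethernet 1/0/0
# Enable PIM-DM (IPv6); the router then sets up PIM-IPv6 neighbor
# relationships on this interface
[Router-GigabitEthernet1/0/0] pim ipv6 dm
```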
Procedure
l Run the display pim ipv6 interface [ interface-type interface-number | up | down ]
[ verbose ] command to check information about PIM-IPv6 interfaces.
l Run the display pim ipv6 neighbor [ ipv6-link-local-address | interface interface-type
interface-number | verbose ] * command to check information about PIM-IPv6 neighbors.
l Run the following commands to check the PIM-IPv6 multicast routing table.
– display pim ipv6 routing-table [ ipv6-source-address [ mask mask-length ] | ipv6-
group-address [ mask mask-length ] | flags { act | del | exprune | ext | loc | niif |
nonbr | none | rpt | sg_rcvr | sgjoin | spt | swt | wc | upchg } | fsm | incoming-
interface [ interface-type interface-number| register ] | mode { dm | sm | ssm } |
outgoing-interface { exclude | include | match } [ interface-type interface-number |
none | register ] ] * [ outgoing-interface-number [ number ] ]
– display pim ipv6 routing-table brief [ ipv6-source-address [ mask mask-length ] |
ipv6-group-address [ mask mask-length ] | incoming-interface { interface-type
interface-number | register } ] *
----End
Pre-configuration Tasks
Before adjusting control parameters of a source, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-DM (IPv6) Functions
Data Preparation
To adjust control parameters of a source, you need the following data.
No. Data
Context
Do as follows on the PIM-IPv6 router:
NOTE
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the PIM-IPv6 router:
Procedure
Step 1 Run:
system-view
A filter is configured.
The nearer the filter is to the source, the more obvious the filtering effect.
If the basic ACL is configured, only the packets with the source addresses that pass the filtering
are forwarded.
If the advanced ACL is configured, only the packets with the source addresses and group
addresses that pass the filtering are forwarded.
NOTE
l If acl6-number | acl6-name acl6-name is specified in the source-policy command and ACL rules are
created, only the multicast packets whose source addresses match the ACL rules are permitted.
l If acl6-number | acl6-name acl6-name is specified in the source-policy command but no ACL rule is
created, no multicast packets are forwarded, regardless of their source addresses.
l The source-policy command does not filter the static (S, G) entries.
----End
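Source filtering with an advanced ACL6, per the notes above, can be sketched as follows. The ACL number and addresses are illustrative, and the placement of source-policy in the PIM-IPv6 view is an assumption to verify against the command reference.

```
[Router] acl ipv6 number 3001
# Forward only packets from source 2001::1 to groups in FF1E::/64
[Router-acl6-adv-3001] rule permit ipv6 source 2001::1 128 destination ff1e:: 64
[Router-acl6-adv-3001] quit
[Router] pim ipv6
[Router-pim6] source-policy 3001
```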
NOTE
Pre-configuration Tasks
Before adjusting control parameters for maintaining neighbors, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-DM (IPv6) Functions
Data Preparation
To adjust control parameters for maintaining neighbors, you need the following data.
No. Data
No. Data
Context
Do as follows on the PIM-IPv6 router:
NOTE
The configuration of the control parameters for maintaining PIM-IPv6 neighbors involves the following
cases:
l Global configuration: It is valid on all the interfaces.
l Configuration on an interface: The configuration on an interface takes precedence over the global
configuration. If the configuration on an interface is not set, the global configuration is used.
Procedure
l Global Configuration
1. Run:
system-view
4. Run:
pim ipv6 triggered-hello-delay interval
Context
Do as follows on the PIM-IPv6 router:
NOTE
The configuration of the control parameters for maintaining PIM-IPv6 neighbors involves the following
cases:
l Global configuration: It is valid on each interface.
l Configuration on an interface: The configuration on an interface takes precedence over the global
configuration. If the configuration on an interface is not set, the global configuration is used.
Procedure
l Global Configuration
1. Run:
system-view
If the local router does not receive any Hello message from a neighbor after the timer
times out, the local router considers that the neighbor is unreachable.
----End
Context
A router assigns a random Generation ID to an interface enabled with PIM. The Hello messages
sent by the interface carry the random Generation ID. If the status of the interface changes, the
random Generation ID is updated.
When the Generation ID option in the Hello message received from an upstream neighbor
changes, it indicates that the status of the upstream neighbor changes. If a router does not want
to receive data from an upstream neighbor, the router sends a Prune message after receiving a
data packet from the upstream neighbor.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
pim ipv6 require-genid
----End
Context
To prevent certain routers from being involved in PIM, you need to filter PIM neighbors.
Procedure
Step 1 Run:
system-view
NOTE
When configuring the neighbor filtering function on the interface, you must also configure the neighbor
filtering function correspondingly on the router that sets up the neighbor relationship with the interface.
----End
receiving the Prune message, the upstream interface stops forwarding packets to this network
segment. If other downstream routers exist in this network segment, they must send the Join
message to override the prune action.
Routers can work normally with the default values. On the NE80E/40E, users can adjust related
parameters according to the specific network environment.
Pre-configuration Tasks
Before adjusting control parameters for prune, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-DM (IPv6) Functions
Data Preparation
To adjust control parameters for prune, you need the following data.
No. Data
Context
Do as follows on the PIM-DM router:
NOTE
The configuration of the control parameters of prune involves the following cases:
l Global Configuration: It is valid on all the interfaces.
l Configuration on the interface: The configuration on an interface takes precedence over the global
configuration. If the configuration on an interface is not set, the global configuration is used.
Procedure
l Global Configuration
1. Run:
system-view
3. Run:
holdtime join-prune interval
The period during which the downstream interface is in the Prune state is set.
After the period expires, the pruned interface continues to forward packets. Before
the period expires, the router resets the Holdtime timer after receiving a State-Refresh
message.
l Configuration on an Interface
1. Run:
system-view
The period during which the downstream interface is in the Prune state is set.
After the period expires, the pruned interface continues to forward packets. Before
the period expires, the router resets the Holdtime timer after receiving a State-Refresh
message.
----End
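The two configuration cases described in the note can be sketched as follows. The 210-second holdtime and the interface name are illustrative, and the interface-view form of the command is an assumption:

```text
# Global configuration: valid on all interfaces
<HUAWEI> system-view
[HUAWEI] pim-ipv6
[HUAWEI-pim6] holdtime join-prune 210
[HUAWEI-pim6] quit
# Configuration on an interface: takes precedence over the global value
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] pim ipv6 holdtime join-prune 210
```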
Context
The Hello message carries lan-delay and the override-interval. The relationship between lan-
delay, override-interval, and Prune-Pending Timer (PPT) is that lan-delay + override-interval =
PPT. PPT indicates the delay from the time when a router receives the Prune message from a
downstream interface to the time when the router performs the prune action to suppress the
forwarding on the downstream interface. If the router receives a Join message from a downstream
router within the PPT, the router does not perform the prune action.
Do as follows on the PIM-DM router:
NOTE
The configuration of control parameters for Prune involves the following two cases:
l Global configuration: It is valid on each interface.
l Configuration on an interface: The configuration on an interface takes precedence over the global
configuration. If the configuration on an interface is not done, the global configuration is used.
Procedure
l Global Configuration
1. Run:
system-view
----End
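A hedged sketch of setting the two Hello options that determine the PPT; the hello-option keywords and the millisecond values are assumptions for illustration:

```text
<HUAWEI> system-view
[HUAWEI] pim-ipv6
[HUAWEI-pim6] hello-option lan-delay 500
[HUAWEI-pim6] hello-option override-interval 2500
# Resulting PPT: lan-delay + override-interval = 500 + 2500 = 3000 milliseconds
```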
Context
When a router receives a Prune message through its upstream interface, it indicates that another
router on the LAN has pruned from the multicast group. If the router still requests the multicast
data, it needs to send a Join message to the upstream router within the override-interval period.
NOTE
The configuration of control parameters for Prune involves the following two cases:
l Global configuration: It is valid on each interface.
l Configuration on an interface: The configuration on an interface takes precedence over the global
configuration. If the configuration on an interface is not done, the global configuration is used.
Procedure
l Global Configuration
1. Run:
system-view
12.6.4 Configuring the Period for Receiving the Next State-Refresh Message
12.6.5 Configuring the TTL Value of a State-Refresh Message
12.6.6 Checking the Configuration
Applicable Environment
By default, a PIM-DM interface is in the forwarding state. The pruned interface continues to
forward packets after the prune timer times out. In the PIM-DM network, periodically flooding
State-Refresh messages can update the prune timer and maintain the suppressed state of the
prune interface.
Routers can work normally with the default values. On the NE80E/40E, users can adjust related
parameters according to the specific network environment.
NOTE
Pre-configuration Tasks
Before adjusting control parameters for State-Refresh, complete the following tasks:
Data Preparation
To adjust control parameters for State-Refresh, you need the following data.
No. Data
Context
Do as follows on all routers in the PIM-DM domain:
NOTE
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
undo pim ipv6 state-refresh-capable
State-Refresh is disabled.
The interface on which PIM-DM State-Refresh is disabled cannot forward any State-Refresh
message.
NOTE
You can run the pim state-refresh-capable command to re-enable the PIM-DM State-Refresh on the
interface.
----End
Context
Do as follows on all routers in the PIM-DM domain:
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim-ipv6
Step 3 Run:
state-refresh-interval interval
NOTE
The interval for sending PIM State-Refresh messages should be shorter than the timeout period for keeping
the Prune state. You can run the holdtime join-prune command to set the timeout period for keeping the
Prune state.
----End
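The constraint in the note can be illustrated with a pair of values (both values are illustrative):

```text
<HUAWEI> system-view
[HUAWEI] pim-ipv6
[HUAWEI-pim6] state-refresh-interval 60
# Keep the Prune holdtime longer than the State-Refresh interval
[HUAWEI-pim6] holdtime join-prune 210
```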
Context
Do as follows on all routers in the PIM-DM domain:
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim-ipv6
Step 3 Run:
state-refresh-rate-limit interval
The period for waiting to receive the next State-Refresh message is set.
----End
Context
Do as follows on all routers in the PIM-DM domain:
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim-ipv6
Step 3 Run:
state-refresh-ttl ttl-value
----End
Procedure
l Run the display pim ipv6 interface [ interface-type interface-number | up | down ]
[ verbose ] command to check information about PIM-IPv6 interfaces.
l Run the display pim ipv6 control-message counters interface interface-type interface-
number [ message-type { assert | bsr | graft | graft-ack | hello | join-prune | state-
refresh } ] command to check the number of sent and received PIM-IPv6 control messages.
l Run the following commands to check the PIM-IPv6 multicast routing table.
– display pim ipv6 routing-table [ ipv6-source-address [ mask mask-length ] | ipv6-
group-address [ mask mask-length ] | flags { act | del | exprune | ext | loc | niif |
nonbr | none | rpt | sg_rcvr | sgjoin | spt | swt | wc | upchg } | fsm | incoming-
interface [ interface-type interface-number | register ] | mode { dm | sm | ssm } |
outgoing-interface { exclude | include | match } [ interface-type interface-number |
none | register ] ] * [ outgoing-interface-number [ number ] ]
– display pim ipv6 routing-table brief [ ipv6-source-address [ mask mask-length ] |
ipv6-group-address [ mask mask-length ] | incoming-interface { interface-type
interface-number | register } ] *
----End
Applicable Environment
In the PIM-DM network, if State-Refresh is not enabled, the pruned interface can resume
forwarding packets only after the Prune timer times out. If State-Refresh is enabled, the pruned
interface can be kept in the suppressed state and may never resume forwarding.
To enable new members in the network to quickly receive multicast data, the PIM-DM router
sends a Graft message through an upstream interface. After receiving the Graft message, the
upstream router immediately replies a Graft-Ack message and restores the forwarding of the
interface that receives the Graft message.
Routers can work normally with the default values. On the NE80E/40E, users can adjust
related parameters according to the specific network environment.
NOTE
Pre-configuration Tasks
Before adjusting control parameters for graft, complete the following tasks:
Data Preparation
To adjust control parameters for graft, you need the following data.
No. Data
Context
If the local router does not receive any Graft-Ack message from the upstream router in the
specified time, the local router resends the Graft message.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
pim ipv6 timer graft-retry interval
----End
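The steps above can be sketched end to end; the interface name and the 5-second retry interval are illustrative:

```text
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] pim ipv6 timer graft-retry 5
```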
Procedure
l Run the display pim ipv6 interface [ interface-type interface-number | up | down ]
[ verbose ] command to check information about PIM-IPv6 interfaces.
l Run the display pim ipv6 grafts command to check the unacknowledged PIM-DM Graft
messages.
l Run the display pim ipv6 control-message counters interface interface-type interface-
number [ message-type { assert | bsr | graft | graft-ack | hello | join-prune | state-
refresh } ] command to check the number of sent and received PIM-IPv6 control messages.
l Run the following commands to check the PIM-IPv6 multicast routing table.
– display pim ipv6 routing-table [ ipv6-source-address [ mask mask-length ] | ipv6-
group-address [ mask mask-length ] | flags { act | del | exprune | ext | loc | niif |
nonbr | none | rpt | sg_rcvr | sgjoin | spt | swt | wc | upchg } | fsm | incoming-
interface [ interface-type interface-number | register ] | mode { dm | sm | ssm } |
outgoing-interface { exclude | include | match } [ interface-type interface-number |
none | register ] ] * [ outgoing-interface-number [ number ] ]
– display pim ipv6 routing-table brief [ ipv6-source-address [ mask mask-length ] |
ipv6-group-address [ mask mask-length ] | incoming-interface { interface-type
interface-number | register } ] *
----End
Applicable Environment
When a PIM-DM router receives multicast data through a downstream interface, it indicates
that other upstream routers exist on this network segment. The router then sends an Assert
message through the downstream interface to take part in the assert election.
Routers can work normally with the default values. On the NE80E/40E, users can adjust
related parameters according to the specific network environment.
NOTE
Pre-configuration Tasks
Before adjusting control parameters for Assert, complete the following tasks:
Data Preparation
To adjust control parameters for Assert, you need the following data.
No. Data
Context
Do as follows on the PIM-DM router:
NOTE
Procedure
l Global Configuration
1. Run:
system-view
The router that fails in the election prohibits its downstream interface from forwarding
multicast data.
After the Holdtime timer times out, the downstream interface continues to forward
packets.
----End
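The global procedure above shows only the system view step. A minimal sketch, assuming the Assert holdtime is set with the holdtime assert command in the PIM-IPv6 view (the 180-second value is illustrative):

```text
<HUAWEI> system-view
[HUAWEI] pim-ipv6
[HUAWEI-pim6] holdtime assert 180
```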
interface are deleted. The interface then acts as the static DR, which takes effect immediately.
At the same time, MLD on the interface is not affected.
PIM silent is applicable only to the router interface directly connected to the host network
segment that is connected only to this router.
CAUTION
If PIM-IPv6 silent is enabled on the interface connected to a router, the PIM-IPv6 neighbor
cannot be established and a multicast fault may occur.
If the host network segment is connected to multiple routers and PIM-IPv6 silent is enabled on
multiple router interfaces, the interfaces become static DRs. Therefore, multiple DRs exist in
this network segment, and a multicast fault occurs.
Pre-configuration Tasks
Before configuring PIM-IPv6 silent, complete the following tasks:
l Configuring a unicast routing protocol to make the network layer reachable
l Configuring PIM-DM
l Configuring MLD
Data Preparation
To configure PIM-IPv6 silent, you need the following data.
No. Data
Context
CAUTION
PIM silent is applicable only to the router interface directly connected to the host network
segment that can be connected to only one PIM router.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
pim ipv6 silent
After this function is enabled, attacks launched by malicious hosts through Hello messages are
effectively prevented and the router is protected.
----End
Prerequisite
All the configurations of PIM silent are complete.
Procedure
l Run the display pim ipv6 interface [ interface-type interface-number | up | down ]
[ verbose ] command to check information about PIM-IPv6 interfaces.
----End
Context
CAUTION
The statistics of PIM-IPv6 control messages on an interface cannot be restored after being
cleared. Confirm the action before you run the command.
Procedure
l Run the reset pim ipv6 control-message counters [ interface interface-type interface-
number ] command in the user view to clear statistics of PIM-IPv6 control messages on an
interface.
----End
Context
In routine maintenance, you can run the following commands in any view to check the running
status of PIM-DM.
Procedure
l Run the display pim ipv6 control-message counters interface interface-type interface-
number [ message-type { assert | bsr | graft | graft-ack | hello | join-prune | state-
refresh } ] command in any view to check the statistics of sent and received PIM-IPv6
control messages.
l Run the display pim ipv6 grafts command in any view to check the unacknowledged PIM-
DM Graft messages.
l Run the display pim ipv6 interface [ interface-type interface-number | up | down ]
[ verbose ] command in any view to check information about PIM-IPv6 interfaces.
l Run the display pim ipv6 neighbor [ ipv6-link-local-address | interface interface-type
interface-number | verbose ] * command in any view to check information about PIM-IPv6
neighbors.
l Run the following commands in any view to check the PIM-IPv6 multicast routing table.
– display pim ipv6 routing-table [ ipv6-source-address [ mask mask-length ] | ipv6-
group-address [ mask mask-length ] | flags { act | del | exprune | ext | loc | niif |
nonbr | none | rpt | sg_rcvr | sgjoin | spt | swt | wc | upchg } | fsm | incoming-
interface [ interface-type interface-number | register ] | mode { dm | sm | ssm } |
outgoing-interface { exclude | include | match } [ interface-type interface-number |
none | register ] ] * [ outgoing-interface-number [ number ] ] [ | { begin | exclude |
include } regular-expression ]
– display pim ipv6 routing-table brief [ ipv6-source-address [ mask mask-length ] |
ipv6-group-address [ mask mask-length ] | incoming-interface { interface-type
interface-number | register } ] *
----End
Networking Requirements
As shown in Figure 12-1, multicast is deployed in an experimental network. An integrated
Interior Gateway Protocol (IGP) is deployed in the network, and the unicast routes work
normally. You are required to properly configure the routers in the network so that hosts can
receive Video On Demand (VOD) information in multicast mode.
GE2/0/0
3001::1
POS1/0/0
2002::2 RouterB
POS2/0/0
Source 2002::1
GE1/0/0
2001::5 2001::1
RouterA
POS3/0/0
2003::1
POS1/0/0
2003::2 RouterC
GE2/0/0
4001::1
Host C Host D
Configuration Roadmap
Because the network is a small-scale experimental network, PIM-DM is adopted for multicast.
In this example, the host network segment is connected to only one router; therefore, PIM silent
can be used to prevent Hello message attacks. The configuration roadmap is as
follows:
1. Enable multicast on routers.
2. Enable PIM-DM on interfaces.
3. Enable PIM silent and MLD on the router interface connected to hosts.
Data Preparation
To complete the configuration, you need the following data:
l The address of the group G is FF0E::1.
Procedure
Step 1 Enable IPv6 multicast on all routers and PIM-DM on all interfaces.
# Enable IPv6 multicast on Router A and PIM-DM on all interfaces. The configurations of Router
B and Router C are the same as the configuration of Router A, and are not mentioned here.
[RouterA] multicast ipv6 routing-enable
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] pim ipv6 dm
[RouterA-GigabitEthernet1/0/0] quit
[RouterA] interface pos 2/0/0
[RouterA-Pos2/0/0] pim ipv6 dm
[RouterA-Pos2/0/0] quit
[RouterA] interface pos 3/0/0
[RouterA-Pos3/0/0] pim ipv6 dm
[RouterA-Pos3/0/0] quit
Step 2 Enable PIM silent and MLD on the router interface connected to hosts.
[RouterB] interface gigabitethernet 2/0/0
[RouterB-GigabitEthernet2/0/0] pim ipv6 silent
[RouterB-GigabitEthernet2/0/0] mld enable
[RouterB-GigabitEthernet2/0/0] mld version 1
[RouterB-GigabitEthernet2/0/0] quit
The configuration of Router C is the same as the configuration of Router B, and is not mentioned
here.
Step 3 Verify the configuration.
# Run the display pim ipv6 interface command to view the PIM-IPv6 configuration and running
status on the router interface. The display of the PIM-IPv6 configuration on Router A is as
follows:
<RouterA> display pim ipv6 interface
VPN-Instance: public net
Interface State NbrCnt HelloInt DR-Pri DR-Address
GE1/0/0 up 0 30 1 FE80::200:AFF:FE01:109
(local)
Pos2/0/0 up 1 30 1 FE80::A01:109:1(local)
Pos3/0/0 up 1 30 1 FE80::A01:109:2(local)
# Run the display pim ipv6 neighbor command to check the PIM-IPv6 neighbor relationship
between routers. The display of the PIM-IPv6 neighbor relationship on Router A is as follows:
<RouterA> display pim ipv6 neighbor
VPN-Instance: public net
Total Number of Neighbors = 2
Neighbor Interface Uptime Expires Dr-Priority
FE80::A01:104:1 Pos2/0/0 00:04:16 00:01:29 1
FE80::A01:105:1 Pos3/0/0 00:03:54 00:01:17 1
# Run the display pim ipv6 routing-table command to view the PIM-IPv6 routing table on a
router. Assume that Host A joins multicast group G FF0E::1. Router B generates a (*, G) entry.
When multicast source S 2001::5 sends multicast packets to G, a Shortest Path Tree (SPT) is
generated through flooding-prune. The (S, G) entry exists on each router in the network. The
display is as follows:
<RouterA> display pim ipv6 routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(2001::5, FF0E::1)
Protocol: pim-dm, Flag: LOC ACT
UpTime: 00:01:20
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0
Protocol: pim-dm, UpTime: 00:01:20, Expires: -
<RouterB> display pim ipv6 routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, FF0E::1)
Protocol: pim-dm, Flag: WC
UpTime: 01:46:23
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::A01:109:1
RPF prime neighbor: FE80::A01:109:1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: mld, UpTime: 01:46:23, Expires: never
(2001::5, FF0E::1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:02:19
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::A01:109:1
RPF prime neighbor: FE80::A01:109:1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-dm, UpTime: 00:02:19, Expires: -
<RouterC> display pim ipv6 routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(2001::5, FF0E::1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:02:19
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::A01:109:2
RPF prime neighbor: FE80::A01:109:2
Downstream interface(s) information:
Total number of downstreams: 0
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
ipv6
#
multicast ipv6 routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 dm
#
interface Pos2/0/0
undo shutdown
link-protocol ppp
clock master
ipv6 enable
ipv6 address 2002::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 dm
#
interface Pos3/0/0
undo shutdown
link-protocol ppp
clock master
ipv6 enable
ipv6 address 2003::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 dm
#
ospfv3 1
router-id 1.1.1.1
area 0.0.0.0
#
return
l Configuration file of Router B
#
sysname RouterB
#
ipv6
#
multicast ipv6 routing-enable
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ipv6 address 2002::2/64
ospfv3 1 area 0.0.0.0
pim ipv6 dm
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 address 3001::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 dm
pim ipv6 silent
mld enable
mld version 1
#
ospfv3 1
router-id 2.2.2.2
area 0.0.0.0
#
return
l Configuration file of Router C
#
sysname RouterC
#
ipv6
#
multicast ipv6 routing-enable
#
interface Pos1/0/0
undo shutdown
link-protocol ppp
ipv6 address 2003::2/64
ospfv3 1 area 0.0.0.0
pim ipv6 dm
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 address 4001::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 dm
This chapter describes the PIM-SM (IPv6) and SSM fundamentals and configuration steps, and
maintenance for PIM-SM functions, along with typical examples. PIM-SM in this chapter refers
to the IPv6 PIM-SM, unless otherwise specified.
13.1 PIM-SM (IPv6) Introduction
This section describes the principle and concepts of PIM-SM (IPv6).
13.2 Configuring Basic PIM-SM (IPv6) Functions
This section describes how to configure PIM-SM to implement ASM and SSM models.
13.3 Adjusting Control Parameters of a Source
This section describes how to control the forwarding of multicast data according to the multicast
source in the PIM-IPv6 network.
13.4 Adjusting Control Parameters of a C-RP and a C-BSR
This section describes how to configure control parameters of the C-RP, advertisement
messages, C-BSR, and Bootstrap messages.
13.5 Adjusting Control Parameters for Maintaining Neighbors
This section describes how to configure control parameters of the PIM-SM Hello message.
13.6 Adjusting Control Parameters of Source Registering
This section describes how to configure control parameters of the PIM-SM Register message.
13.7 Adjusting Control Parameters for Forwarding
This section describes how to configure control parameters of the PIM-SM Join/Prune message.
13.8 Configuring Control Parameters for Assert
This section describes how to configure control parameters of the PIM-SM Assert message.
13.9 Adjusting Control Parameters for the SPT Switchover
This section describes how to configure control parameters of the PIM-SM SPT switchover.
13.10 Configuring PIM GR (IPv6)
This section describes how to configure PIM GR (IPv6).
13.11 Configuring PIM-IPv6 Silent
This section describes how to configure PIM silent to prevent attacks from malicious hosts.
13.12 Maintaining PIM-SM
This section describes how to clear IPv6 PIM-SM statistics and monitor the running status of
IPv6 PIM-SM.
13.13 Configuration Example
This section provides the configuration example of PIM-IPv6-SM.
l This chapter describes only the PIM-SM configuration in the IPv6 network. PIM-SM in this chapter
refers to the IPv6 PIM-SM, unless otherwise specified.
l For the configuration of PIM-DM, refer to the chapter PIM-DM (IPv6) Configuration in the
HUAWEI NetEngine80E/40E Router Configuration Guide - IP Multicast.
l For details of ASM and SSM models, refer to the chapter "IP Multicast Overview" in the HUAWEI
NetEngine80E/40E Router Feature Description - IP Multicast.
Embedded-RP
Based on PIM-SM (IPv4), embedded-Rendezvous Point (embedded-RP) is added in PIM-SM
(IPv6). The embedded-RP is a method of obtaining an RP. The function of the embedded-RP is
the same as that of the static RP and the dynamic RP.
Static RP
You can specify the static RP on all routers in the PIM-SM domain. When a dynamic RP exists
in the domain, the dynamic RP is preferred by default, but you can configure the static RP to be
preferred.
Dynamic RP
You can specify a C-RP in a BSR domain, adjust the priority for C-RP election, adjust the lifetime
of the advertisement message on the BSR received from the C-RP, adjust the interval for the C-
RP to advertise advertisement messages, and specify an Access Control List (ACL) to limit the
range of the multicast groups served by the C-RP.
BSR
You can specify a Candidate-BSR (C-BSR) in a BSR domain, adjust the hash mask length used
by the C-RP for C-RP election, adjust the priority for BSR election, and adjust the legal address
range of BSRs.
PIM GR
The NE80E/40E supports the PIM GR function on the router with double MPUs. PIM GR
ensures normal multicast data forwarding during master-slave switchover of the router.
To prevent the preceding case, you can set the router interface to the PIM silent state. When
the interface is in the PIM silent state, it is prohibited from receiving and forwarding any PIM
packet, and all PIM neighbors and PIM state machines on the interface are deleted. The
interface then acts as the static DR, which takes effect immediately. MLD on the interface is
not affected.
Applicable Environment
A PIM-SM network can adopt the ASM and SSM models to provide multicast services for hosts.
The integrated components (including the RP) of the ASM model must be configured in the
network first. The SSM group address range is then adjusted as required.
NOTE
The SSM model needs to be supported by the Multicast Listener Discovery version 2 (MLDv2). If a host
must run MLDv1, configure MLD SSM mapping on the router interface.
Through MLD, a router knows the multicast group G that a user wants to join.
l If G is in the SSM group address range and the source S is specified when the user joins G
through MLDv2, the SSM model is used to provide multicast services.
l If G is in the SSM group address range and the router is configured with the (S, G) SSM
mapping rules, the SSM model is used to provide multicast services.
l If G is not in the SSM group address range, the ASM model is used to provide multicast
services.
In the PIM-SM network, the ASM model supports the following methods used to obtain an RP.
You can select at least one method to obtain an RP.
l Embedded-RP: By default, the embedded-RP is started. The range of groups served by the
embedded-RP is limited.
l Dynamic RP: To obtain a dynamic RP, select several routers in the PIM-SM domain and
configure them as C-RPs and C-BSRs, and then configure the BSR boundary on the
router interface on the boundary of the domain. Each router in the PIM-SM domain can
then automatically obtain the RP.
l Static RP: To obtain a static RP, manually configure the RP on each router in the PIM-SM
domain. For the large-scale PIM-IPv6 network, configuring a static RP is complicated. To
enhance the robustness and the operation management of the multicast network, the static
RP is usually used as the backup of the BSR-RP.
A multicast group may be in the service range of the embedded-RP, the dynamic RP, and the
static RP simultaneously. By default, the sequence used by routers to select an RP is
embedded-RP > dynamic RP > static RP. If static RP precedence is configured, the static RP
is preferred. Compared with mapping all groups to a single RP, mapping different multicast
groups to different RPs reduces the load on a single RP and enhances the robustness of the
network.
Pre-configuration Tasks
Before configuring basic PIM-SM functions, complete the following tasks:
Data Preparation
To configure basic PIM-SM functions, you need the following data.
No. Data
No. Data
7 Timeout period during which a BSR waits to receive the advertisement message from
a C-RP
Context
Do as follows on the PIM router:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the PIM router:
Procedure
Step 1 Run:
system-view
Step 3 Run:
pim ipv6 sm
After PIM-SM (IPv6) is enabled on interfaces, routers set up PIM-IPv6 neighbor relationship
with each other. The routers can then process protocol packets received from PIM-IPv6
neighbors.
You can run the undo pim ipv6 sm command to disable PIM-SM (IPv6) on an interface.
NOTE
PIM-DM (IPv6) and PIM-SM (IPv6) cannot be enabled on an interface simultaneously. The PIM mode on
all interfaces that belong to the same instance must be the same. When a router is deployed in a PIM-SM
(IPv6) domain, enable PIM-SM (IPv6) on all non-boundary interfaces.
----End
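The enabling steps can be sketched end to end, assuming IPv6 multicast routing is enabled globally first (the interface name is illustrative):

```text
<HUAWEI> system-view
[HUAWEI] multicast ipv6 routing-enable
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] pim ipv6 sm
```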
Context
Do as follows on all routers in the PIM-SM (IPv6) domain:
NOTE
This configuration is optional. By default, the embedded-RP is enabled. The group address range of the
embedded-RP is FF7x::/12, where x is 0 or a value from 3 to F.
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim-ipv6
Step 3 Run:
embedded-rp [ basic-acl6-number | acl6-name acl6-name ]
If the address range defined by the ACL6 is wider than the default group address range of the
embedded-RP, the embedded-RP is valid only for the intersection of the two address ranges.
NOTE
The group address scope of the embedded-RP on all routers in the PIM-SM (IPv6) domain must be the
same. You can run the undo embedded-rp command to shut down the embedded-RP.
----End
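A hedged sketch of narrowing the embedded-RP group range with an ACL6; the ACL number and the group prefix are hypothetical, and the prefix must fall within the embedded-RP range:

```text
<HUAWEI> system-view
[HUAWEI] acl ipv6 number 2008
[HUAWEI-acl6-basic-2008] rule permit source ff7e:140:2001:db8:: 64
[HUAWEI-acl6-basic-2008] quit
[HUAWEI] pim-ipv6
[HUAWEI-pim6] embedded-rp 2008
```

Remember that the embedded-RP group address scope must be configured identically on all routers in the PIM-SM (IPv6) domain.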
Context
CAUTION
Configuring a static RP and a BSR-RP in the PIM-SM simultaneously may cause a network
fault. Therefore, confirm the action before you perform this configuration.
If the static RP is not required in this PIM-SM network, this configuration is not necessary.
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim-ipv6
Step 3 Run:
static-rp rp-address [ basic-acl6-number | acl6-name acl6-name ] [ preferred ]
A static RP is specified.
Multiple static RPs can be configured by using this command repeatedly, but the same ACL
cannot correspond to multiple static RPs. If the ACL is not configured, only one static RP can
be configured.
NOTE
The same static-rp command must be used on all routers in the PIM-SM domain.
----End
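An illustration of a static RP serving a restricted group range; the RP address, ACL6 number, and group address are hypothetical:

```text
<HUAWEI> system-view
[HUAWEI] acl ipv6 number 2005
[HUAWEI-acl6-basic-2005] rule permit source ff0e::101 128
[HUAWEI-acl6-basic-2005] quit
[HUAWEI] pim-ipv6
[HUAWEI-pim6] static-rp 2004::100 2005 preferred
```

As the note requires, run the same static-rp command on all routers in the PIM-SM domain.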
CAUTION
This configuration is applicable only to the dynamic RP. If the dynamic RP is not used in this
network, this configuration is not necessary.
Procedure
Step 1 Run:
system-view
In the RP election, the C-RP with the highest priority (with the lowest priority value) succeeds. In case
of the same priority, the hash function is calculated, and the C-RP with the greatest hash value succeeds.
In case of the same priority and the same hash value, the C-RP with the highest IPv6 address succeeds.
l holdtime hold-interval: specifies the timeout period during which the BSR waits to receive
the advertisement message from the C-RP. By default, the value is 150s.
l advertisement-interval adv-interval: specifies the interval during which the C-RP sends
advertisement messages. By default, the value is 60s.
Step 4 Run:
c-bsr ipv6-address [ hash-length ]
A C-BSR is configured.
l ipv6-address: specifies the IPv6 address of the interface where the C-BSR resides. The
interface must be configured with PIM-SM.
l hash-length: specifies the length of the hash mask. According to the group address G, C-RP
address, and the value of hash-length, routers calculate the C-RPs that have the same priority
and want to serve G by operating hash functions, and compare the calculation results. The
C-RP with the greatest calculated value acts as the RP that serves G.
You can use the c-bsr hash-length hash-length command to set the global hash length of the
C-BSRs. The set hash length then applies to all the C-BSRs configured on the router.
----End
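A hedged sketch combining the C-RP and C-BSR configuration described above. The address and parameter values are illustrative, and the exact keyword form of the c-rp command is an assumption based on the parameters listed above:

```text
<HUAWEI> system-view
[HUAWEI] pim-ipv6
[HUAWEI-pim6] c-rp 2002::1 priority 0 holdtime 150 advertisement-interval 60
[HUAWEI-pim6] c-bsr 2002::1 126
```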
Context
Do as follows on all routers in the PIM-SM domain:
NOTE
This configuration is optional. By default, the SSM group address range is FF3x::/32.
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim-ipv6
Step 3 Run:
ssm-policy { basic-acl6-number | acl6-name acl6-name }
NOTE
Ensure that the SSM group address ranges of all routers in the network are identical.
----End
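A hedged sketch of adjusting the SSM group address range; the ACL6 number and the range are hypothetical:

```text
<HUAWEI> system-view
[HUAWEI] acl ipv6 number 2010
[HUAWEI-acl6-basic-2010] rule permit source ff3e:: 64
[HUAWEI-acl6-basic-2010] quit
[HUAWEI] pim-ipv6
[HUAWEI-pim6] ssm-policy 2010
```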
Procedure
l Run the display pim ipv6 bsr-info command to check information about the BSR in the
PIM-SM (IPv6) domain.
l Run the display pim ipv6 interface [ interface-type interface-number | up | down ]
[ verbose ] command to check information about PIM-IPv6 interfaces.
----End
Applicable Environment
All the configurations in this section are applicable to the ASM and SSM models.
PIM-IPv6 routers check the passing multicast data packets. By checking whether the data packets
match the filtering rule, the routers determine whether to forward the packets. That is, the
routers in the PIM domain act as filters. The filters help to control data flows, and limit
information obtained by downstream receivers.
NOTE
Routers can work normally with the default values. On the NE80E/40E, users can adjust related
parameters according to the actual environment. If there is no special requirement from the actual
network, using the default values is recommended.
Pre-configuration Tasks
Before adjusting control parameters of a multicast source, complete the following tasks:
Data Preparation
To adjust control parameters of a multicast source, you need the following data.
No. Data
Context
Do as follows on the PIM-IPv6 router:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the PIM-IPv6 router:
Procedure
Step 1 Run:
system-view
A filter is configured.
If the basic ACL is configured, only the packets with the source addresses that pass the filtering
are forwarded.
If the advanced ACL is configured, only the packets with the source addresses and group
addresses that pass the filtering are forwarded.
NOTE
l If acl6-number | acl6-name acl6-name is specified in the source-policy command and ACL rules are
created, only the multicast packets whose source addresses match the ACL rules are permitted.
l If acl6-number | acl6-name acl6-name is specified in the source-policy command and no ACL rule is
created, no multicast packets are forwarded, regardless of their source addresses.
l The source-policy command does not filter the static (S, G) entries.
----End
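A hedged sketch of filtering multicast data by source address with a basic ACL6; the ACL number is hypothetical and the source address is illustrative:

```text
<HUAWEI> system-view
[HUAWEI] acl ipv6 number 2020
[HUAWEI-acl6-basic-2020] rule permit source 2001::5 128
[HUAWEI-acl6-basic-2020] quit
[HUAWEI] pim-ipv6
[HUAWEI-pim6] source-policy 2020
```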
Applicable Environment
To enhance the performance of an RP or a BSR, you can adjust control parameters of C-RPs
and C-BSRs by using related commands.
NOTE
Routers can work normally with the default values. On the NE80E/40E, users can adjust related
parameters according to the actual environment. If there is no special requirement from the actual
network, using the default values is recommended.
Pre-configuration Tasks
Before adjusting control parameters of a C-RP and a C-BSR, complete the following tasks:
Data Preparation
To adjust control parameters of a C-RP and C-BSR, you need the following data.
No. Data
3 Timeout period during which the BSR waits to receive the advertisement message
from the C-RP
7 Period for keeping the Bootstrap message received from the BSR
Context
Do as follows on the router on which the C-RP is configured:
NOTE
Do as follows on the router that has been configured with a C-RP. This configuration is optional. If there
is no special requirement, default values are recommended.
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim-ipv6
Step 3 Run:
c-rp priority priority
Step 4 Run:
c-rp advertisement-interval interval
The interval at which the C-RP sends advertisement messages is configured.
Step 5 Run:
c-rp holdtime interval
The period for keeping the advertisement message received from the C-RP is configured.
This period must be longer than the interval at which the C-RP sends advertisement messages.
The C-RP periodically sends advertisement messages to the BSR. After receiving the
advertisement messages, the BSR obtains the Holdtime period from the messages. During the
Holdtime period, the advertisement message is valid. When the Holdtime period expires, the C-
RP ages.
----End
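Steps 1 to 5 above can be sketched as follows. The priority 100, the interval of 60 seconds, and the holdtime of 180 seconds are assumed values for illustration; note that the holdtime must exceed the advertisement interval:

```text
<HUAWEI> system-view
[HUAWEI] pim-ipv6
[HUAWEI-pim6] c-rp priority 100
[HUAWEI-pim6] c-rp advertisement-interval 60
[HUAWEI-pim6] c-rp holdtime 180
```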
Context
Do as follows on the router on which the C-BSR is configured:
NOTE
Do as follows on the router that has been configured with a C-BSR. This configuration is optional. If there
is no special requirement, default values are recommended.
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim-ipv6
Step 3 Run:
c-bsr hash-length hash-length
Step 4 Run:
c-bsr priority priority
Step 5 Run:
c-bsr interval interval
The interval at which the C-BSR sends Bootstrap messages is configured.
Step 6 Run:
c-bsr holdtime interval
The period for keeping the Bootstrap message received from the BSR is configured.
The BSR periodically sends a Bootstrap message to the network. After receiving the Bootstrap
message, the C-BSR keeps the message for a certain period. During the period, the BSR election
stops temporarily. If the Holdtime period expires, a new round of BSR election is triggered
among C-BSRs.
NOTE
The period for keeping the Bootstrap message received from the BSR and the interval at which
the BSR sends Bootstrap messages must be the same on all C-BSRs in the PIM-IPv6 domains. The
period for keeping the Bootstrap message must be longer than the interval at which the BSR
sends Bootstrap messages.
----End
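Steps 1 to 6 above can be sketched as follows. The hash length 126, priority 1, interval of 60 seconds, and holdtime of 130 seconds are assumed values; per the note above, the holdtime must exceed the Bootstrap interval, and both values must be identical on all C-BSRs:

```text
<HUAWEI> system-view
[HUAWEI] pim-ipv6
[HUAWEI-pim6] c-bsr hash-length 126
[HUAWEI-pim6] c-bsr priority 1
[HUAWEI-pim6] c-bsr interval 60
[HUAWEI-pim6] c-bsr holdtime 130
```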
Context
Do as follows on the routers that may become BSR boundaries:
NOTE
The BSR boundary is used to divide a PIM-SM (IPv6) domain. Routers outside the BSR boundary do not
take part in the process of multicast forwarding in the PIM-SM (IPv6) domain. This configuration is
optional. By default, all PIM-SM (IPv6) routers in the network can receive BSR messages.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
pim ipv6 bsr-boundary
----End
Context
Do as follows on all routers in the PIM-SM domain:
NOTE
This configuration is optional. By default, source addresses of the received BSR packets are not checked,
and all received BSR packets are accepted.
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim-ipv6
Step 3 Run:
bsr-policy { basic-acl6-number | acl6-name acl6-name }
After receiving an IP packet carrying a Bootstrap message, a router checks the source address
of the IP packet. If the source address is not in the range of legal BSR addresses, the packet is
discarded. BSR spoofing is thus prevented.
basic-acl6-number specifies the number of the basic ACL. The ACL defines the filtering policy
for the source addresses of BSR messages.
----End
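The BSR filtering policy above can be sketched as follows, assuming that 2004::2 is the only legal BSR address. The ACL number 2002 and the address are illustrative assumptions:

```text
<HUAWEI> system-view
[HUAWEI] acl ipv6 number 2002
[HUAWEI-acl6-basic-2002] rule permit source 2004::2 128
[HUAWEI-acl6-basic-2002] quit
[HUAWEI] pim-ipv6
[HUAWEI-pim6] bsr-policy 2002
```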
Context
Do as follows on all C-BSRs in the PIM-SM domain:
NOTE
This configuration is optional. By default, the C-RP address carried in the received advertisement message
and the address of the group that the C-RP serves are not checked, and all received advertisement messages
are accepted and added to the RP-set.
Procedure
Step 1 Run:
system-view
The range of legal C-RP addresses and the range of groups that C-RPs serve are limited. When
a router receives an advertisement message, the router checks the C-RP address and the address
of the group that the C-RP serves in the message. The advertisement message is received and
added to the RP-set only when the C-RP address and the group address are in the legal address
range. C-RP spoofing is thus prevented.
{ advanced-acl6-number | acl6-name acl6-name }: specifies the number of the advanced ACL.
The ACL defines the filtering policy to limit the range of legal C-RP and the range of groups
that the C-RP serves.
----End
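Assuming the command that applies this filtering policy is crp-policy (inferred from the parameter description above; verify against the command reference), a sketch that uses an advanced ACL6 to permit only C-RP 2004::2 serving groups in FF0E::/16 is as follows. The ACL number, C-RP address, and group prefix are illustrative:

```text
<HUAWEI> system-view
[HUAWEI] acl ipv6 number 3001
[HUAWEI-acl6-adv-3001] rule permit ipv6 source 2004::2 128 destination ff0e:: 16
[HUAWEI-acl6-adv-3001] quit
[HUAWEI] pim-ipv6
[HUAWEI-pim6] crp-policy 3001
```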
Applicable Environment
The configuration in this section is applicable to both the ASM model and the SSM model.
The PIM routers send Hello messages to each other to establish the neighbor relationship,
negotiate the control parameters, and elect a DR.
The router can work normally by default. The NE80E/40E allows the users to adjust the
parameters as required.
NOTE
Pre-configuration Tasks
Before adjusting control parameters for maintaining neighbors, complete the following tasks:
Data Preparation
To adjust control parameters for maintaining neighbors, you need the following data.
No. Data
5 DR switching delay, that is, the period for which existing entries remain valid after
an interface changes from a DR to a non-DR
Context
Do as follows on the PIM-SM (IPv6) router:
NOTE
The configuration of the control parameters for maintaining PIM neighbors involves the following cases:
l Global configuration: It is valid on all the interfaces.
l Configuration on an interface: The configuration on an interface takes precedence over the global
configuration. If the configuration on an interface is not done, the global configuration is used.
Procedure
l Global Configuration
1. Run:
system-view
After the maximum delay is set, the conflict caused by multiple PIM-IPv6 routers
simultaneously sending Hello messages is avoided.
5. Run:
pim ipv6 hello-option holdtime interval
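A sketch of the per-interface configuration above follows. The interface name, the Hello interval of 30 seconds, and the holdtime of 105 seconds are assumed values; the holdtime should exceed the Hello interval (conventionally 3.5 times it):

```text
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] pim ipv6 timer hello 30
[HUAWEI-GigabitEthernet1/0/0] pim ipv6 hello-option holdtime 105
```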
Context
Do as follows on the PIM-SM (IPv6) router:
NOTE
The configuration of the control parameters for electing a DR involves the following cases:
l Global configuration: It is valid on all the interfaces.
l Configuration on an interface: The configuration on an interface takes precedence over the global
configuration. If the configuration on an interface is not done, the global configuration is used.
Procedure
l Global Configuration
1. Run:
system-view
l Configuration on an Interface
1. Run:
system-view
The DR switching delay is configured.
When an interface changes from a DR to a non-DR, the existing routing entries are
valid until the maximum delay expires.
----End
Context
Do as follows on the router running IPv6 PIM-SM:
NOTE
Procedure
l Global Configuration
1. Run:
system-view
After this function is enabled, information about the downstream neighbor that has
sent a Join message and whose Join state has not timed out is recorded.
NOTE
The function of tracking downstream neighbors cannot be implemented unless all the routers
running IPv6 PIM-SM in the shared network segment are enabled with this function.
l Configuration on an Interface
1. Run:
system-view
After this function is enabled, information about the downstream neighbor that has
sent a Join message and whose Join state has not timed out is recorded.
NOTE
The function of tracking downstream neighbors cannot be implemented unless all the routers
running IPv6 PIM-SM in the shared network segment are enabled with this function.
----End
Context
To prevent a router from participating in PIM and from becoming the DR, configure
PIM neighbor filtering.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
pim ipv6 neighbor-policy { basic-acl6-number | acl6-name acl6-name }
An interface sets up neighbor relationships only with the addresses that match the filtering
rules and deletes the neighbors that do not match the filtering rules.
NOTE
When configuring the PIM neighbor filtering function on the interface, you must also configure the
neighbor filtering function correspondingly on the router that sets up the neighbor relationship with the
interface.
----End
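The neighbor filtering above can be sketched as follows. Because PIM-IPv6 neighbors are identified by link-local addresses, the ACL should match link-local source addresses; the ACL number and address below are illustrative assumptions:

```text
<HUAWEI> system-view
[HUAWEI] acl ipv6 number 2003
[HUAWEI-acl6-basic-2003] rule permit source fe80::1 128
[HUAWEI-acl6-basic-2003] quit
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] pim ipv6 neighbor-policy 2003
```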
NOTE
Routers under the control of default values can work normally. In the NE80E/40E, users can adjust related
parameters according to the specific network environment. If there is no special requirement, default values
are recommended.
Pre-configuration Tasks
Before adjusting control parameters of source registering, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-SM (IPv6) Functions
Data Preparation
To adjust control parameters of source registering, you need the following data.
No. Data
2 Timeout period for keeping the suppressed state of the source registering
Context
Do as follows on routers that may become RPs:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router that may become the DR at the source side:
Procedure
Step 1 Run:
system-view
The timeout period for keeping the suppressed state of the registering is set.
Step 4 Run:
probe-interval interval
----End
When the last member of a group on the router leaves its group, the router sends the Prune
message through the upstream interface to request the upstream router to perform the prune
action. After receiving the Prune message, the upstream interface stops forwarding packets to
this network segment. If other downstream routers exist in this network segment, they must send
the Join message to override the prune action.
In the ASM model, the routers periodically send Join messages to the RP to prevent the RPT
branch from being deleted upon timeout.
NOTE
Routers under the control of default values can work normally. In the NE80E/40E, users can adjust related
parameters according to the specific network environment. If there is no special requirement, default values
are recommended.
Pre-configuration Tasks
Before adjusting control parameters for forwarding, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-SM (IPv6) Functions
Data Preparation
To adjust control parameters for forwarding, you need the following data.
No. Data
6 Whether to perform neighbor check on the received or sent Join/Prune messages and
Assert messages
Context
Do as follows on the PIM-SM (IPv6) router:
NOTE
The configuration of the control parameters for maintaining the forwarding relationship involves the
following cases:
l Global configuration: It is valid on all the interfaces.
l Configuration on an interface: The configuration on an interface takes precedence over the global
configuration. If the configuration on an interface is not done, the global configuration is used.
Procedure
l Global Configuration
1. Run:
system-view
The value of the Holdtime field in the sent Join/Prune message is set.
The Holdtime period is the period for keeping the Forwarding/Prune state of the
downstream interface.
l Configuration on an Interface
1. Run:
system-view
The value of the Holdtime field in the sent Join/Prune message is set.
The Holdtime period is the period for keeping the Forwarding/Prune state of the
downstream interface.
5. Run:
pim ipv6 require-genid
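A sketch of the interface configuration above follows. The interface name and the Join/Prune holdtime of 210 seconds are assumed values; pim ipv6 require-genid makes the interface accept only Hello messages that carry a Generation ID:

```text
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] pim ipv6 holdtime join-prune 210
[HUAWEI-GigabitEthernet1/0/0] pim ipv6 require-genid
```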
Context
The Hello message carries lan-delay (the delay for transmitting Prune messages) and
override-interval (the interval for overriding a prune). The relationship between lan-delay,
override-interval, and the Prune-Pending Timer (PPT) is: lan-delay + override-interval = PPT.
The PPT is the delay from the time when a router receives a Prune message on a downstream
interface to the time when the router performs the prune action to suppress forwarding on that
interface. If the router receives a Join message from a downstream router within the PPT, the
router does not perform the prune action.
Do as follows on the PIM-SM (IPv6) router:
NOTE
The configuration of the control parameters of prune involves the following cases:
l Global Configuration: It is valid on all the interfaces.
l Configuration on the interface: The configuration on an interface takes precedence over the global
configuration. If the configuration on an interface is not done, the global configuration is used.
Procedure
l Global Configuration
1. Run:
system-view
l Configuration on an Interface
1. Run:
system-view
----End
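Using assumed values, a global sketch of the prune parameters described above follows. A lan-delay of 500 ms and an override-interval of 2500 ms yield PPT = 500 + 2500 = 3000 ms; the command names follow the hello-option family used elsewhere in this chapter and should be verified against the command reference:

```text
<HUAWEI> system-view
[HUAWEI] pim-ipv6
[HUAWEI-pim6] hello-option lan-delay 500
[HUAWEI-pim6] hello-option override-interval 2500
```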
Context
A Join/Prune message received by an interface may contain both join information and prune
information. You can configure the router to filter join information based on ACL6 rules. The
router then creates PIM entries for only the join information matching ACL6 rules, which can
prevent illegal users from accessing the group.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
pim ipv6 join-policy { asm { basic-acl6-number | acl6-name acl6-name } | ssm
{ advanced-acl6-number | acl6-name acl6-name } }
----End
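A sketch of the ASM join filtering above follows. For the asm keyword, the basic ACL6 matches the group addresses carried in the join information; the ACL number, interface name, and group address are illustrative assumptions:

```text
<HUAWEI> system-view
[HUAWEI] acl ipv6 number 2004
[HUAWEI-acl6-basic-2004] rule permit source ff0e::101 128
[HUAWEI-acl6-basic-2004] quit
[HUAWEI] interface gigabitethernet 1/0/0
[HUAWEI-GigabitEthernet1/0/0] pim ipv6 join-policy asm 2004
```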
Context
By default, checking whether Join/Prune and Assert messages are sent to or received from a
PIM neighbor is disabled.
If PIM neighbor checking is required, it is recommended to configure the neighbor checking
function on the devices connected with user devices rather than on the internal devices of the
network. Then, the router checks whether the Join/Prune and Assert messages are sent to or
received from a PIM neighbor. If not, the router drops the messages.
Do as follows on the router enabled with IPv6 PIM-SM:
Procedure
Step 1 Run:
system-view
----End
Applicable Environment
The configurations in this section are applicable to both the ASM model and the SSM model.
When a PIM-SM router receives multicast data through the downstream interface, this indicates
that other upstream routers exist in this network segment. The router sends an Assert message
through the downstream interface to take part in the assert election.
NOTE
Routers under the control of default values can work normally. In the NE80E/40E, you can adjust related
parameters according to the specific network environment. If there is no special requirement, default values
are recommended.
Pre-configuration Tasks
Before adjusting control parameters for assert, complete the following tasks:
Data Preparation
To adjust control parameters for Assert, you need the following data.
No. Data
Context
Do as follows on all routers in the PIM-SM (IPv6) domain:
NOTE
The configuration of control parameters for Assert involves the following cases:
l Global configuration: It is valid on all the interfaces.
l Configuration on an interface: The configuration on an interface takes precedence over the global
configuration. If the configuration on an interface is not done, the global configuration is used.
Procedure
l Global Configuration
1. Run:
system-view
The router that fails in the election prohibits the downstream interface from forwarding
multicast data in this period. After the period expires, the router restores the forwarding
of the downstream interface.
l Configuration on an Interface
1. Run:
system-view
The router that fails in the election prohibits the downstream interface from forwarding
multicast data in this period. After the period expires, the router restores the forwarding
of the downstream interface.
----End
Procedure
l Run the display pim ipv6 interface [ interface-type interface-number | up | down ]
[ verbose ] command to check information about PIM-IPv6 interfaces.
l Run the display pim ipv6 control-message counters interface interface-type interface-
number [ message-type { assert | bsr | graft | graft-ack | hello | join-prune | state-
refresh } ] command to check the number of sent and received PIM-IPv6 control messages.
l Run the following commands to check the PIM-IPv6 multicast routing table.
– display pim ipv6 routing-table [ ipv6-source-address [ mask mask-length ] | ipv6-
group-address [ mask mask-length ] | flags { act | del | exprune | ext | loc | niif |
nonbr | none | rpt | sg_rcvr | sgjoin | spt | swt | wc | upchg } | fsm | incoming-
interface [ interface-type interface-number| register ] | mode { dm | sm | ssm } |
outgoing-interface { exclude | include | match } [ interface-type interface-number |
none | register ] ] * [ outgoing-interface-number [ number ] ]
– display pim ipv6 routing-table brief [ ipv6-source-address [ mask mask-length ] |
ipv6-group-address [ mask mask-length ] | incoming-interface { interface-type
interface-number | register } ] *
----End
source, and establishes a multicast route along the shortest path from the DR at the source
side to the DR at the receiver side. The subsequent packets are forwarded along the path.
NOTE
Routers under the control of default values can work normally. In the NE80E/40E, users can adjust related
parameters according to the specific network environment. If there is no special requirement, default values
are recommended.
Pre-configuration Tasks
Before adjusting control parameters for the SPT switchover, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring Basic PIM-SM (IPv6) Functions
Data Preparation
To adjust control parameters for the SPT switchover, you need the following data.
No. Data
1 Forwarding rate threshold at which the DR at the receiver side switches packets from the
RPT to the SPT
2 Group filtering policy and sequence policy for the RPT-to-SPT switchover
3 Interval for checking the forwarding rate threshold of multicast data before RPT-to-
SPT switchover
Context
Do as follows on the router that may become the DR at the receiver side:
Procedure
Step 1 Run:
system-view
The interval for checking the forwarding rate of multicast data is set.
By default, the DR at the receiver side performs SPT switchover after receiving the first multicast
data packet.
NOTE
Before configuring the timer spt-switch command, run the spt-switch-threshold command to set the
threshold of the rate that will trigger SPT switchover. Otherwise, timer spt-switch does not take effect.
----End
Context
Do as follows on the router that may become the DR at the receiver side:
NOTE
This configuration is optional. By default, the RP and the DR at the receiver side immediately perform the
SPT switchover after receiving the first multicast data packet.
Procedure
Step 1 Run:
system-view
----End
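Per the note above, the rate threshold must be set before the checking interval takes effect. A sketch with assumed values (a threshold of 1024 kbit/s and a checking interval of 15 seconds; both values are illustrative):

```text
<HUAWEI> system-view
[HUAWEI] pim-ipv6
[HUAWEI-pim6] spt-switch-threshold 1024
[HUAWEI-pim6] timer spt-switch 15
```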
Applicable Environment
In some multicast applications, the router may need to perform active/standby switchover. After
active/standby switchover, the new active main control board deletes the forwarding entries on
the interface board and re-learns the PIM routing table and multicast routing table. During this
process, multicast traffic is interrupted.
In the PIM-SM/SSM network, PIM Graceful Restart (GR) can be applied to the router with dual
main control boards to ensure normal multicast traffic forwarding during active/standby
switchover.
The active main control board of the router backs up PIM routing entries and Join/Prune
information to be sent upstream to the standby main control board. The interface board keeps
forwarding entries. Therefore, after active/standby switchover, the router can actively and quickly
send Join messages upstream to maintain the Join state of the upstream. Then, the PIM protocol
sends Hello messages carrying new Generation ID to all routers enabled with PIM-SM. When
the downstream router finds that the Generation ID of its neighbor changes, it sends a Join/Prune
message to the neighbor for re-creating routing entries, thereby ensuring non-stop forwarding
of multicast data on the forwarding plane.
If a dynamic RP is used on the network, after receiving a Hello message with a changed
Generation ID, the DR or candidate DR unicasts a BSM message to the router performing active/
standby switchover, and the router learns and restores RP information from the received BSM
message. If the router has not learnt any RP information from the BSM messages, it obtains the
RP information from the Join/Prune message received from the downstream router and re-creates
the multicast routing table.
NOTE
Pre-configuration Tasks
Before configuring PIM GR, complete the following task:
l Configuring a unicast routing protocol and enabling unicast GR
l Configuring Basic PIM-SM (IPv6) Functions
Data Preparation
To enable PIM GR, you need the following data.
No. Data
1 Unicast GR period
2 PIM-IPv6 GR period
Context
Do as follows on the router enabled with PIM SM (IPv6).
Procedure
Step 1 Run:
system-view
Step 2 Run:
pim-ipv6
Step 3 Run:
graceful-restart
PIM GR is enabled.
----End
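Steps 1 to 3 above, extended with an assumed GR period, can be sketched as follows. The graceful-restart period command and the value of 120 seconds are assumptions based on the "PIM-IPv6 GR period" data item; verify the command against the command reference:

```text
<HUAWEI> system-view
[HUAWEI] pim-ipv6
[HUAWEI-pim6] graceful-restart
[HUAWEI-pim6] graceful-restart period 120
```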
Procedure
Step 1 Run the following commands to check PIM routing table.
l display pim ipv6 routing-table [ ipv6-source-address [ mask mask-length ] | ipv6-group-
address [ mask mask-length ] | flags { act | del | exprune | ext | loc | niif | nonbr | none |
rpt | sg_rcvr | sgjoin | spt | swt | wc | upchg } | fsm | incoming-interface [ interface-type
interface-number| register ] | mode { dm | sm | ssm } | outgoing-interface { exclude |
include | match } [ interface-type interface-number | none | register ] ] * [ outgoing-
interface-number [ number ] ]
----End
Applicable Environment
At the access layer, the router interface directly connected to hosts needs to be enabled with
PIM-IPv6. You can establish the PIM-IPv6 neighbor relationship on the router interface to
process various PIM-IPv6 packets. This configuration, however, has a security vulnerability:
if a host maliciously generates PIM-IPv6 Hello packets and sends them in large quantities,
the router may break down.
To avoid the preceding case, you can set the status of the router interface to PIM-IPv6 silent.
When the interface is in the PIM-IPv6 silent state, the interface is prohibited from receiving and
forwarding any PIM-IPv6 packet. All PIM-IPv6 neighbors and PIM-IPv6 state machines on the
interface are deleted. The interface acts as a static DR, and the setting takes effect
immediately. MLD on the interface is not affected.
To enable PIM-IPv6 silent, the network environment must meet the following conditions:
CAUTION
If PIM-IPv6 silent is enabled on the interface connected to a router, the PIM-IPv6 neighbor
cannot be established and a multicast fault may occur.
If the host network segment is connected to multiple routers and PIM-IPv6 silent is enabled on
multiple router interfaces, these interfaces all become static DRs. Multiple DRs then exist in
this network segment, and a multicast fault occurs.
Pre-configuration Tasks
Before configuring PIM-IPv6 silent, complete the following tasks:
Data Preparation
To configure PIM-IPv6 silent, you need the following data.
No. Data
Context
CAUTION
PIM-IPv6 silent is applicable only to the router interface connected to the host network segment
that can be connected to only one PIM-IPv6 router.
Procedure
Step 1 Run:
system-view
----End
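A sketch of the procedure above follows. The interface name is illustrative; the command matches the one used in the configuration example later in this chapter:

```text
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 2/0/0
[HUAWEI-GigabitEthernet2/0/0] pim ipv6 silent
```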
Procedure
l Run the display pim ipv6 interface [ interface-type interface-number | up | down ]
[ verbose ] command to check information about PIM-IPv6 interfaces.
----End
CAUTION
The statistics of PIM-IPv6 control messages on an interface cannot be restored after being
cleared. Therefore, confirm the action before you run the command.
Procedure
l Run the reset pim ipv6 control-message counters [ interface interface-type interface-
number ] command in the user view to clear statistics of PIM-IPv6 control messages on an
interface.
----End
CAUTION
Clearing PIM status of the downstream interfaces may trigger the sending of corresponding Join/
Prune messages, which affects multicast services.
The following command clears join information about illegal users and clears the PIM status of
the specified interface in a specified entry, such as the PIM Join/Prune status and Assert
status.
The command cannot be used to clear the MLD or static group join status on a specified interface.
Procedure
Step 1 After confirming that PIM status of the specified downstream interfaces of the specified PIM
entry need to be cleared, run the reset pim ipv6 routing-table group ipv6-group-address
mask ipv6-group-mask-length source ipv6-source-address interface interface-type interface-
number command in the user view.
----End
Context
In routine maintenance, you can run the following commands in any view to check the running
status of PIM-SM.
Procedure
l Run the display pim ipv6 control-message counters interface interface-type interface-
number [ message-type { assert | bsr | graft | graft-ack | hello | join-prune | state-
refresh } ] command in any view to check the number of sent and received PIM-IPv6
control messages.
l Run the display pim ipv6 bsr-info command in any view to check information about the
BSR.
l Run the display pim ipv6 claimed-route [ ipv6-source-address ] command in any view
to check unicast routing information used by (S, G) and (*, G) entries in the PIM-SM routing
table.
l Run the display pim ipv6 rp-info [ ipv6-group-address ] command in any view to check
information about the RP to which the multicast group corresponds.
l Run the display pim ipv6 grafts command in any view to check the unacknowledged PIM-
IPv6 Graft messages.
l Run the display pim ipv6 interface [ interface-type interface-number | up | down ]
[ verbose ] command in any view to check information about PIM-IPv6 interfaces.
l Run the display pim ipv6 neighbor [ ipv6-link-local-address | interface interface-type
interface-number | verbose ] * command in any view to check information about PIM-IPv6
neighbors.
l Run the following commands in any view to check the PIM-IPv6 multicast routing table.
– display pim ipv6 routing-table [ ipv6-source-address [ mask mask-length ] | ipv6-
group-address [ mask mask-length ] | flags { act | del | exprune | ext | loc | niif |
nonbr | none | rpt | sg_rcvr | sgjoin | spt | swt | wc | upchg } | fsm | incoming-
interface [ interface-type interface-number| register ] | mode { dm | sm | ssm } |
outgoing-interface { exclude | include | match } [ interface-type interface-number |
none | register ] ] * [ outgoing-interface-number [ number ] ]
– display pim ipv6 routing-table brief [ ipv6-source-address [ mask mask-length ] |
ipv6-group-address [ mask mask-length ] | incoming-interface { interface-type
interface-number | register } ] *
----End
Networking Requirements
As shown in Figure 13-1, multicast services are deployed in the Internet Service Provider (ISP)
network. The integrated Interior Gateway Protocol (IGP) is deployed in the network. The unicast
routes work normally, and the network is connected to the Internet. You are required to properly configure
routers in the network to enable hosts to receive the Video On Demand (VOD) information in
multicast mode.
Figure 13-1 addressing (reconstructed from the topology diagram):
l Router A: GE1/0/0 2001::1 (connected to Source 2001::5), POS2/0/0 2002::1, POS3/0/0 2003::1, POS4/0/0 2006::1 (connected to the Internet)
l Router B: POS1/0/0 2002::2, POS3/0/0 2004::1, GE2/0/0 3001::1 (connected to Host A and Host B)
l Router C: POS1/0/0 2005::2, GE2/0/0 4001::2 (connected to Host C and Host D)
l Router D: POS1/0/0 2003::2, POS2/0/0 2004::2, POS3/0/0 2005::1
Configuration Roadmap
The ISP network is connected to the Internet. To expand services, PIM-SM is adopted to configure
multicast, and the ASM and SSM models are used to provide multicast services. In this example,
the host network segment is connected to only one router, so PIM silent can be used to prevent
the Hello message attack. The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
l The router interface connected to the host network segment runs MLDv2.
l The IPv6 global unicast address of the C-BSR and C-RP is 2004::2.
l The SSM group address range is FF3E::1/64.
Procedure
Step 1 Enable IPv6 multicast on all routers and IPv6 PIM-SM on all interfaces.
# Enable IPv6 multicast on Router A and PIM-SM on all interfaces.
[RouterA] multicast ipv6 routing-enable
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] pim ipv6 sm
[RouterA-GigabitEthernet1/0/0] quit
[RouterA] interface pos 2/0/0
[RouterA-Pos2/0/0] pim ipv6 sm
[RouterA-Pos2/0/0] quit
[RouterA] interface pos 3/0/0
[RouterA-Pos3/0/0] pim ipv6 sm
[RouterA-Pos3/0/0] quit
[RouterA] interface pos 4/0/0
[RouterA-Pos4/0/0] pim ipv6 sm
[RouterA-Pos4/0/0] quit
# The configurations of Router B, Router C, and Router D are the same as the configuration of
Router A, and are not mentioned here.
# Run the display pim ipv6 interface command to check the PIM configuration and the running
information on each interface.
Step 2 Configure the C-BSR and C-RP.
# Configure the C-RP on Router D and specify the range of groups that the C-RP serves.
[RouterD] acl ipv6 number 2001
[RouterD-acl6-basic-2001] rule permit source ff3e::1 64
[RouterD-acl6-basic-2001] quit
[RouterD] pim-ipv6
[RouterD-pim6] c-rp 2004::2 group-policy 2001
# Run the display pim ipv6 bsr-info command on each router to check information about the
BSR election. Take the BSR information on Router B and Router D as an example (on Router D,
information about the C-BSR is also displayed).
# Run the display pim ipv6 rp-info command to check RP information on each router. Take
RP information on Router B as an example.
<RouterB> display pim ipv6 rp-info
VPN-Instance: public net
PIM-SM BSR RP information:
Group/MaskLen: FF3E::1/64
RP: 2004::2
Priority: 192
Uptime: 00:05:19
Expires: 00:02:11
Step 3 On Router A, configure the BSR boundary on the interface connected to the Internet.
[RouterA] interface pos 4/0/0
[RouterA-Pos4/0/0] pim ipv6 bsr-boundary
[RouterA-Pos4/0/0] quit
Step 4 Configure the SSM group address range on all routers in the network.
# Configure the PIM-IPv6 SSM group address range on Router A to FF3E::1/64.
[RouterA] acl ipv6 number 2000
[RouterA-acl6-basic-2000] rule permit source FF3E::1 64
[RouterA-acl6-basic-2000] quit
[RouterA] pim-ipv6
[RouterA-pim6] ssm-policy 2000
[RouterA-pim6] quit
# The configurations of Router B, Router C, and Router D are the same as the configuration of
Router A, and are not mentioned here.
Step 5 Enable PIM silent and MLD on the router interface connected to hosts.
# On Router B, enable PIM silent and MLD on the interface at the host side.
[RouterB] interface gigabitethernet 2/0/0
[RouterB-GigabitEthernet2/0/0] pim ipv6 silent
[RouterB-GigabitEthernet2/0/0] mld enable
[RouterB-GigabitEthernet2/0/0] quit
# On Router C, the configuration of the interface at the host side is similar to the preceding one,
and is not mentioned here.
# Run the display pim ipv6 interface command to check the PIM configuration and the running
information on each interface. Take PIM information on Router B as an example.
<RouterB> display pim ipv6 interface
VPN-Instance: public net
Interface State NbrCnt HelloInt DR-Pri DR-Address
Pos1/0/0 up 1 30 1 FE80::A01:10E:1 (local)
GE2/0/0 up 0 30 1 FE80::200:AFF:FE01:10E
(local)
Pos3/0/0 up 1 30 1 FE80::9D62:0:FDC5:2
# The multicast source S 2001::5 simultaneously sends packets to group FF3E::1 (in the SSM
group address range) and FF0E::1 (not in the SSM group address range). Host A and Host C
need to receive information sent to group FF0E::1. Host B needs to receive information sent by S
2001::5 to group FF3E::1.
<RouterA> display pim ipv6 routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 2 (S, G) entries
(2001::5, FF0E::1)
RP: 2004::2
Protocol: pim-sm, Flag: SPT LOC ACT
UpTime: 00:02:15
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 3
1: Register
Protocol: pim-sm, UpTime: 00:02:15, Expires: -
2: Pos2/0/0
Protocol: pim-sm, UpTime: 00:02:15, Expires: 00:03:15
3: Pos3/0/0
Protocol: pim-sm, UpTime: 00:02:15, Expires: 00:03:15
(2001::5, FF3E::1)
Protocol: pim-ssm, Flag: LOC SG_RCVR
UpTime: 00:00:11
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0
Protocol: pim-ssm, UpTime: 00:00:11, Expires: 00:03:19
<RouterB> display pim ipv6 routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 2 (S, G) entries
(*, FF0E::1)
RP: 2004::2
Protocol: pim-sm, Flag: WC
UpTime: 00:14:44
Upstream interface: Pos3/0/0
Upstream neighbor: FE80::9D62:0:FDC5:2
RPF prime neighbor: FE80::9D62:0:FDC5:2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: mld, UpTime: 00:14:44, Expires: -
(2001::5, FF0E::1)
RP: 2004::2
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:02:42
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::A01:10C:1
RPF prime neighbor: FE80::A01:10C:1
Downstream interface(s) information:
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
ipv6
#
multicast ipv6 routing-enable
#
acl ipv6 number 2000
rule 0 permit source FF3E::1/64
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
interface Pos2/0/0
undo shutdown
ipv6 enable
link-protocol ppp
ipv6 address 2002::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
interface Pos3/0/0
undo shutdown
ipv6 enable
link-protocol ppp
ipv6 address 2003::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
interface Pos4/0/0
undo shutdown
ipv6 enable
link-protocol ppp
ipv6 address 2006::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
pim ipv6 bsr-boundary
#
ospfv3 1
router-id 1.1.1.1
area 0.0.0.0
#
pim-ipv6
ssm-policy 2000
#
return
#
ipv6
#
multicast ipv6 routing-enable
#
acl ipv6 number 2000
rule 0 permit source FF3E::1/64
#
acl ipv6 number 2001
rule 0 permit source FF0E::1/64
#
interface Pos1/0/0
undo shutdown
ipv6 enable
link-protocol ppp
ipv6 address 2003::2/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
interface Pos2/0/0
undo shutdown
ipv6 enable
link-protocol ppp
ipv6 address 2004::2/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
interface Pos3/0/0
undo shutdown
ipv6 enable
link-protocol ppp
ipv6 address 2005::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
ospfv3 1
router-id 4.4.4.4
area 0.0.0.0
#
pim-ipv6
c-bsr 2004::2
c-rp 2004::2 group-policy 2001
ssm-policy 2000
#
return
Networking Requirements
In multicast applications, if a device performs an active/standby switchover, the new active
main control board deletes the multicast forwarding entries on the interface board and rebuilds
the PIM routing table and the PIM forwarding table. During this process, the multicast traffic
of users is interrupted.
Deploying IPv6 PIM GR on an IPTV network protects both core devices and edge devices. When
a device on the IPTV network performs an active/standby switchover, the interface board can
maintain normal forwarding of multicast data, which improves the fault tolerance capability
of the devices on the network.
In the network shown in Figure 13-2, multicast services are deployed and IPv6 PIM GR is
configured on Router C. While Router C forwards multicast data to the receiver, the active
main control board backs up the PIM routing entries and the Join/Prune messages to be sent to
the upstream device to the standby main control board. When Router C performs an
active/standby switchover, the interface board maintains the original forwarding entries,
which ensures smooth forwarding of multicast data. Therefore, the receiver can still receive
multicast data from the multicast source while the device performs the active/standby
switchover.
Figure 13-2 Networking diagram of IPv6 PIM GR configuration (the multicast source connects to GE2/0/0 on Router B over an Ethernet leaf network; Router B and Router C connect to each other through POS1/0/0 interfaces and to Router A through its POS1/0/0 and POS2/0/0 interfaces; the receiver, Host A, connects to GE2/0/0 on Router C over an Ethernet leaf network; PIM-SM runs on the network)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IPv6 addresses for the interfaces and a unicast routing protocol on each router.
2. Enable unicast GR on each router and set the unicast GR period.
3. Enable the multicast function, enable PIM SM (IPv6) on the interfaces of the routers, and
enable MLD on the interface connecting the router to the host.
4. Configure an RP. Configure the same static RP on all routers.
5. On Router C, enable PIM GR and set the PIM GR period.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure IPv6 addresses for the interfaces and a unicast routing protocol on each router.
# Based on Figure 13-2, configure IPv6 addresses and masks for the interfaces on each router.
Configure OSPFv3 as the unicast routing protocol running between routers to ensure IP
interworking between Router A, Router B, and Router C. The detailed configuration is not
mentioned here.
Step 2 Enable unicast GR and set the GR period to 200 seconds on each router. The configurations
on Router A and Router B are similar to those on Router C. The detailed configuration procedures
are not mentioned here.
[RouterC] ospfv3 1
[RouterC-ospfv3-1] graceful-restart
[RouterC-ospfv3-1] graceful-restart period 200
[RouterC-ospfv3-1] quit
Step 3 Enable the multicast function, enable PIM SM (IPv6) on the interfaces of the routers, and
enable MLD on the interface connecting the router to the host.
# Enable the multicast function on all routers, and enable PIM SM (IPv6) on the interfaces of
the routers, and enable MLD on the interface connecting Router C to the host. The configurations
on Router A and Router B are similar to those on Router C. The detailed configuration procedures
are not mentioned here.
[RouterC] multicast ipv6 routing-enable
[RouterC] interface gigabitethernet 2/0/0
[RouterC-GigabitEthernet2/0/0] pim ipv6 sm
[RouterC-GigabitEthernet2/0/0] mld enable
[RouterC-GigabitEthernet2/0/0] quit
[RouterC] interface pos 1/0/0
[RouterC-Pos1/0/0] pim ipv6 sm
[RouterC-Pos1/0/0] quit
Step 4 Configure a static RP.
# Create a loopback interface on Router A and enable PIM SM (IPv6) on the interface.
[RouterA] interface loopback 0
[RouterA-Loopback0] ipv6 enable
[RouterA-Loopback0] ipv6 address 2002:1::1 64
[RouterA-Loopback0] pim ipv6 sm
[RouterA-Loopback0] ospfv3 1 area 0
[RouterA-Loopback0] quit
# Configure the same static RP on each router. The configurations on Router B and Router C are
similar to those on Router A. The detailed configuration procedures are not mentioned here.
[RouterA] pim-ipv6
[RouterA-pim6] static-rp 2002:1::1
[RouterA-pim6] quit
Step 5 On Router C, enable PIM GR and set the PIM GR period to 300 seconds.
[RouterC] pim-ipv6
[RouterC-pim6] graceful-restart
[RouterC-pim6] graceful-restart period 300
[RouterC-pim6] quit
(*, FF2E::1)
RP: 2002:1::1
Protocol: pim-sm, Flag: WC
UpTime: 00:00:53
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::E0:F2C:3C02:1
RPF prime neighbor: FE80::E0:F2C:3C02:1
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0
Protocol: pim-sm, UpTime: 00:00:53, Expires: 00:02:37
(2001:1::1, FF2E::1)
RP: 2002:1::1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:21:24
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::E0:F2C:3C02:1
RPF prime neighbor: FE80::E0:F2C:3C02:1
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0
Protocol: pim-sm, UpTime: 00:00:53, Expires: 00:03:07
<RouterC> display pim ipv6 routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, FF2E::1)
RP: 2002:1::1
Protocol: pim-sm, Flag: WC
UpTime: 00:01:16
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::5463:0:9245:2
RPF prime neighbor: FE80::5463:0:9245:2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: mld, UpTime: 00:01:16, Expires: -
(2001:1::1, FF2E::1)
RP: 2002:1::1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:01:16
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::5463:0:9245:2
RPF prime neighbor: FE80::5463:0:9245:2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-sm, UpTime: 00:01:16, Expires: -
After Router C performs an active/standby switchover, during PIM GR, run the display pim ipv6
routing-table command on Router B and Router C to view the PIM routing tables. The command
output is as follows:
<RouterB> display pim ipv6 routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, FF2E::1)
RP: 2002:1::1
Protocol: pim-sm, Flag: WC
UpTime: 00:02:20
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::E0:F2C:3C02:1
RPF prime neighbor: FE80::E0:F2C:3C02:1
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0
Protocol: pim-sm, UpTime: 00:02:20, Expires: 00:03:10
(2001:1::1, FF2E::1)
RP: 2002:1::1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:02:20
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::E0:F2C:3C02:1
RPF prime neighbor: FE80::E0:F2C:3C02:1
Downstream interface(s) information:
Total number of downstreams: 1
1: Pos2/0/0
Protocol: pim-sm, UpTime: 00:02:20, Expires: 00:03:17
<RouterC> display pim ipv6 routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, FF2E::1)
RP: 2002:1::1
Protocol: pim-sm, Flag: WC
UpTime: 00:02:44
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::5463:0:9245:2
RPF prime neighbor: FE80::5463:0:9245:2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: mld, UpTime: 00:02:44, Expires: -
(2001:1::1, FF2E::1)
RP: 2002:1::1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:02:44
Upstream interface: Pos1/0/0
Upstream neighbor: FE80::5463:0:9245:2
RPF prime neighbor: FE80::5463:0:9245:2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet2/0/0
Protocol: pim-sm, UpTime: 00:02:44, Expires: -
On a normal multicast network, the downstream router periodically sends Join/Prune messages
upstream to refresh the timeout period of the PIM routing entries on the upstream router,
thereby ensuring normal multicast data forwarding.
If the GR function is not configured on Router C, the new active main control board deletes the
multicast forwarding entries on the interface board, receives MLD Report messages from the
host again, and re-creates PIM routing entries. During this process, multicast traffic is
interrupted.
The preceding command output shows that after Router C performs an active/standby switchover,
the downstream interface of Router B remains unchanged. That is, after the switchover, Router C
sends the backed-up Join messages upstream. In this way, multicast forwarding entries can be
maintained during GR to ensure non-stop multicast data forwarding.
While Router C restores its multicast routing entries after the active/standby switchover,
users can receive multicast data normally and services are not interrupted.
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
ipv6
#
multicast ipv6 routing-enable
#
ospfv3 1
router-id 2.2.2.2
graceful-restart period 200
area 0.0.0.0
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ipv6 enable
ipv6 address 2001:2::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:1::2/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
interface LoopBack0
ipv6 enable
ipv6 address 2002:1::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
pim-ipv6
static-rp 2002:1::1
#
return
area 0.0.0.0
#
interface Pos1/0/0
link-protocol ppp
undo shutdown
ipv6 enable
ipv6 address 2001:2::2/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
interface Pos2/0/0
link-protocol ppp
undo shutdown
ipv6 enable
ipv6 address 2001:5::1/64
ospfv3 1 area 0.0.0.0
pim ipv6 sm
#
pim-ipv6
static-rp 2002:1::1
#
return
This chapter describes the basic principles of IPv6 multicast forwarding, the configuration of
forwarding policies, and maintenance commands, and provides configuration examples.
14.1 IPv6 Multicast Routing Management Introduction
This section describes the principle and basic concepts of IPv6 multicast routing and forwarding.
14.2 Configuring the IPv6 Multicast Routing Policy
This section describes how to configure the multicast routing policy.
14.3 Limiting the Range of Multicast Forwarding
This section describes how to configure the range of multicast forwarding.
14.4 Configuring Control Parameters of the IPv6 Multicast Forwarding Table
This section describes how to configure control parameters of the multicast forwarding table.
14.5 Maintaining IPv6 Multicast Routing Management
This section describes how to clear the statistics of multicast routing and forwarding and monitor
the running status of IPv6 multicast forwarding and routing.
14.6 Configuration Examples
This section provides several configuration examples for IPv6 multicast routing management.
On an IPv6 multicast network, PIM-IPv6 uses the Reverse Path Forwarding (RPF) check to create
and update routing entries, including (*, G) entries and (S, G) entries. The RPF check
is implemented on the basis of RPF routes. IPv6 RPF routes are selected from unicast routes.
The multicast routing table records (S, G) entries and delivers the entries to the multicast
forwarding table. The multicast forwarding table directly guides the forwarding of multicast data
packets.
A multicast forwarding entry mainly contains the following information:
l Source address S
l Group address G
l Incoming interface, which indicates that multicast data is received through this interface.
The incoming interface corresponds to the upstream interface of the related routing entry.
l Outgoing interface, which indicates that multicast data is sent out through this interface.
The outgoing interface corresponds to the downstream interface in the downstream
interface list of the related routing entry.
NOTE
IPv6 multicast forwarding and routing are similar to IPv4 multicast forwarding and routing. For details,
refer to the chapter "Multicast Forwarding and Routing" in the HUAWEI NetEngine80E/40E Router
Feature Description - IP Multicast.
Performing multicast load splitting according to different policies can optimize network traffic
transmission in the scenario where multiple multicast data flows exist.
There are five multicast load splitting policies: stable-preferred, balance-preferred, source
address-based, group address-based, and source and group addresses-based. The five load
splitting policies are mutually exclusive. In addition, you can configure IPv6 load splitting
weights on the interfaces to achieve unbalanced multicast load splitting.
Applicable Environment
If multiple unicast routes with the same cost exist when a router enabled with IPv6 multicast
selects the upstream interface, you can use one of the following methods to configure the
router to select the RPF route:
l By default, the router selects the route with the highest next-hop address.
l Load splitting is configured among equal-cost routes. Performing load splitting of multicast
traffic according to different policies can optimize network traffic transmission in the
scenario where multiple multicast data flows exist.
Pre-configuration Tasks
Before configuring the IPv6 multicast routing policy, complete the following tasks:
Data Preparation
To configure the IPv6 multicast routing policy, you need the following data.
No. Data
Context
The IPv6 multicast load splitting function extends the multicast routing rules so that route
selection does not fully depend on the RPF check. If multiple equal-cost optimal routes exist
on the network, all of them can be used for multicast data forwarding. While multicast services
are guaranteed, multicast traffic is load-split among the equal-cost routes.
Procedure
Step 1 Run:
system-view
Step 2 Run:
multicast ipv6 load-splitting { balance-preferred | stable-preferred | source |
group | source-group }
l source-group: indicates source and group addresses-based load splitting. This policy is
applicable to the scenario of multiple sources to multiple groups.
NOTE
It is recommended to adopt a fixed IPv6 multicast load splitting policy based on the actual networking.
The balance-preferred or stable-preferred policy is preferred.
balance-preferred or stable-preferred cannot be configured on the interface enabled with PIM-DM.
----End
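Based on the command syntax above, a minimal example of selecting the stable-preferred policy (Router E in the load splitting example later in this chapter uses this configuration) is:
<RouterE> system-view
[RouterE] multicast ipv6 load-splitting stable-preferred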
Context
When stable-preferred or balance-preferred load splitting is configured, the forwarding
capabilities of the equal-cost routes and the actual loads that they bear may differ. As a
result, balanced load splitting cannot meet network requirements in some scenarios. In such
a case, you can configure an IPv6 multicast load splitting weight on an interface to achieve
unbalanced multicast load splitting.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 3 Run:
multicast ipv6 load-splitting weight weight-value
The greater the load splitting weight of an interface, the more IPv6 multicast routing entries with
this interface being the upstream interface. When the IPv6 multicast load splitting weight on an
interface is 0, it indicates that the routes with this interface being the upstream interface do not
take part in load splitting.
----End
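A brief sketch combining the steps above, matching the weights used by Router E in the load splitting example later in this chapter (POS 1/0/2 keeps its default weight):
<RouterE> system-view
[RouterE] interface pos 1/0/1
[RouterE-Pos1/0/1] multicast ipv6 load-splitting weight 2
[RouterE-Pos1/0/1] quit
[RouterE] interface pos 1/0/3
[RouterE-Pos1/0/3] multicast ipv6 load-splitting weight 0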
Procedure
l Run the display multicast ipv6 routing-table [ ipv6-source-address [ ipv6-source-mask-
length ] | ipv6-group-address [ ipv6-group-mask-length ] | incoming-interface { interface-
type interface-number | register } | outgoing-interface { { exclude | include | match }
{ interface-type interface-number | register | none } } ] * [ outgoing-interface-number
[ number ] ] command to check the IPv6 multicast routing table.
Pre-configuration Tasks
Before limiting the range of multicast forwarding, complete the following tasks:
l Configuring a unicast routing protocol
l Configuring basic multicast functions
Data Preparation
To limit the range of multicast forwarding, you need the following data.
No. Data
1 Group address, mask, and mask length of the IPv6 multicast forwarding boundary
Context
By default, the IPv6 multicast forwarding boundary is not set on an interface.
Do as follows on the multicast router:
Procedure
Step 1 Run:
system-view
----End
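As an illustrative sketch (the interface-view command name is inferred from the display multicast ipv6 boundary command shown later in this chapter; verify the exact syntax and the group address against the command reference), an IPv6 multicast forwarding boundary could be set as follows:
<RouterA> system-view
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] multicast ipv6 boundary ff03:: 16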
Context
Do as follows on the multicast router:
Procedure
Step 1 Run:
system-view
The minimum TTL value for forwarding IPv6 multicast packets is set on the interface.
By default, the TTL threshold for IPv6 multicast forwarding is 1.
----End
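As an illustrative sketch (the interface-view command name is inferred from the display multicast ipv6 minimum-ttl command shown later in this chapter; verify the exact syntax against the command reference), the minimum TTL could be set as follows:
<RouterA> system-view
[RouterA] interface gigabitethernet 1/0/0
[RouterA-GigabitEthernet1/0/0] multicast ipv6 minimum-ttl 10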
Procedure
l Run the display multicast ipv6 routing-table [ ipv6-source-address [ ipv6-source-mask-
length ] | ipv6-group-address [ ipv6-group-mask-length ] | incoming-interface { interface-
type interface-number | register } | outgoing-interface { { exclude | include | match }
{ interface-type interface-number | register | none } } ] * [ outgoing-interface-number
[ number ] ] command to check the IPv6 multicast routing table.
l Run the display multicast ipv6 boundary [ ipv6-group-address [ ipv6-group-mask-
length ] | interface interface-type interface-number ] command to check the IPv6 multicast
boundary configured on the interface.
l Run the display multicast ipv6 minimum-ttl [ interface-type interface-number ]
command to check the TTL threshold for forwarding IPv6 multicast packets on the
interface.
----End
Applicable Environment
When planning a specific network according to network services, the ISP needs to configure
control parameters of the IPv6 multicast forwarding table. Setting these parameters according
to service performance can reduce the processing pressure on a router and control the service
range.
Pre-configuration Tasks
Before configuring control parameters of the IPv6 multicast routing policy, complete the
following tasks:
Data Preparation
To configure control parameters of the IPv6 multicast routing policy, you need the following
data.
No. Data
Context
By default, the maximum value permitted by the system is used.
Procedure
Step 1 Run:
system-view
Step 2 Run:
multicast ipv6 forwarding-table route-limit limit
The maximum number of entries in the IPv6 multicast forwarding table is set.
The configured value is valid only when it is smaller than the default value.
----End
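For example, to limit the IPv6 multicast forwarding table to 1024 entries (1024 is an arbitrary illustrative value; the configured value takes effect only when it is smaller than the system default):
<RouterA> system-view
[RouterA] multicast ipv6 forwarding-table route-limit 1024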
Context
By default, the maximum value permitted by the system is used.
Do as follows on the multicast router:
Procedure
Step 1 Run:
system-view
The maximum number of downstream nodes of a single entry in the IPv6 multicast forwarding
table is set.
The configured value is valid only when it is smaller than the default value.
----End
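As an illustrative sketch (the downstream-limit keyword is an assumption modeled on the route-limit command above; verify the exact syntax against the command reference), the maximum number of downstream nodes of a single entry could be set as follows:
<RouterA> system-view
[RouterA] multicast ipv6 forwarding-table downstream-limit 64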
CAUTION
After you run the reset command to clear information in the multicast forwarding table or in the
multicast routing table, multicast data cannot be transmitted normally. Therefore, confirm the
action before running the command.
Procedure
l Run the reset multicast ipv6 forwarding-table { ipv6-group-address [ ipv6-group-mask-
length ] | ipv6-source-address [ ipv6-source-mask-length ] | all | incoming-interface
{ interface-type interface-number | register } } * command in the user view to clear the
forwarding entry in the IPv6 multicast forwarding table.
l Run the reset multicast ipv6 routing-table { ipv6-source-address [ ipv6-source-mask-
length ] | ipv6-group-address [ ipv6-group-mask-length ] | all | incoming-interface
{ interface-type interface-number | register } } * command in the user view to clear the
routing entry in the IPv6 multicast routing table.
----End
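For example, to clear all routing entries in the IPv6 multicast routing table in the user view, using the syntax above (confirm the action first, as cautioned above):
<RouterA> reset multicast ipv6 routing-table all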
Procedure
l Run the display multicast ipv6 boundary [ ipv6-group-address [ ipv6-group-mask-
length ] | interface interface-type interface-number ] command in any view to check
information about boundaries of all interfaces.
l Run the display multicast ipv6 forwarding-table [ ipv6-source-address [ ipv6-source-
mask-length ] | ipv6-group-address [ ipv6-group-mask-length ] | incoming-interface
{ interface-type interface-number | register } | outgoing-interface { { exclude | include |
match } { interface-type interface-number | register | none } } | slot slot-number |
statistics ] * command in any view to check the IPv6 multicast forwarding table.
l Run the display multicast ipv6 minimum-ttl [ interface-type interface-number ]
command in any view to check the minimum TTL value when a multicast data packet is
forwarded by an interface.
l Run the display multicast ipv6 routing-table [ ipv6-source-address [ ipv6-source-mask-
length ] | ipv6-group-address [ ipv6-group-mask-length ] | incoming-interface { interface-
type interface-number | register } | outgoing-interface { { exclude | include | match }
{ interface-type interface-number | register | none } } ] * [ outgoing-interface-number
[ number ] ] command in any view to check the IPv6 multicast routing table.
Networking Requirements
Multicast route selection is based on the RPF check, and the route selection policy depends on
unicast routes. The selected unique route guides the forwarding of multicast data. If
the volume of multicast traffic on a network is excessively large, the network may be congested
and multicast services may be affected. Multicast load splitting extends the rules of multicast
route selection so that route selection does not completely depend on the RPF check.
When multiple equal-cost optimal routes exist on a network and can be used to
forward multicast data, multicast traffic can be load-split among the routes.
Currently, multicast load splitting is classified into multicast source-based load splitting,
multicast group-based load splitting, and multicast source- and group-based load
splitting. These types of multicast load splitting cannot meet the requirements of load
splitting in all scenarios. When multicast routing entries and network configurations
are stable, RPF interfaces and RPF neighbors remain unchanged; when the number of entries is
excessively small, balanced load splitting cannot be achieved.
Stable-preferred load splitting complements the preceding types of multicast load splitting. As
shown in Figure 14-1, there are three equal-cost routes between Router E, which is connected to
Host A, and the multicast source, and stable-preferred load splitting is configured on Router E.
Therefore, entries can be evenly distributed over the equal-cost routes and balanced load
splitting can be implemented among them.
If the forwarding capabilities and the severities of traffic congestion of the three equal-cost routes
on Router E are different, balanced load splitting cannot meet network requirements. In this case,
you need to configure unbalanced load splitting on Router E, and set different load splitting
weights on the upstream interfaces of Router E to change the number of entries distributed to
the equal-cost routes. Thus, you can flexibly control the number of entries distributed on the
equal-cost routes.
Figure 14-1 Networking diagram of IPv6 multicast load splitting (the source connects to GE1/0/0 on Router A; Router A, which has Loopback0, connects to Router B, Router C, and Router D through POS2/0/1, POS2/0/2, and POS2/0/3; Router B, Router C, and Router D connect through their POS2/0/0 interfaces to POS1/0/1, POS1/0/2, and POS1/0/3 on Router E over three equal-cost routes; Host A connects to GE2/0/0 (3001::1/64) on Router E; PIM-SM runs on the network)
Configuration Roadmap
The configuration roadmap is as follows:
l Host A needs to receive data from new multicast groups. Therefore, configure a multicast
load splitting weight for each upstream interface on Router E to achieve unbalanced IPv6
multicast load splitting.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IPv6 address to each interface on the routers according to Figure 14-1. The detailed
configuration procedure is not mentioned here.
Step 2 Configure IS-IS IPv6 to implement interworking among routers and ensure that route costs are
equal. The detailed configuration procedure is not mentioned here.
Step 3 Enable IPv6 multicast on all routers and enable IPv6 PIM-SM on each interface.
Step 6 Configure the interfaces at the host side of RouterE to join the multicast groups statically in
batches.
Step 7 Verify the configuration about stable-preferred IPv6 multicast load splitting.
# The multicast source (2001::1/64) sends multicast data to multicast groups (FF13::1 to
FF13::3). Host A can receive the multicast data from the source. On Router E, check brief
information about the IPv6 PIM routing table.
<RouterE> display pim ipv6 routing-table brief
VPN-Instance: public net
Total 3 (*, G) entries; 3 (S, G) entries
00001.(*, FF13::1)
Upstream interface:Pos1/0/1
Number of downstream:1
00002.(2001::1, FF13::1)
Upstream interface:Pos1/0/1
Number of downstream:1
00003.(*, FF13::2)
Upstream interface:Pos1/0/2
Number of downstream:1
00004.(2001::1, FF13::2)
Upstream interface:Pos1/0/2
Number of downstream:1
00005.(*, FF13::3)
Upstream interface:Pos1/0/3
Number of downstream:1
00006.(2001::1, FF13::3)
Upstream interface:Pos1/0/3
Number of downstream:1
You can find that (*,G) and (S,G) entries are equally distributed to three equal-cost routes, with
the upstream interfaces being POS 1/0/1, POS 1/0/2, and POS 1/0/3 respectively.
NOTE
The load splitting algorithm processes (*,G) and (S,G) entries separately, and the processing rules are the same.
Step 8 Configure IPv6 multicast load splitting weights for different upstream interfaces of Router E to
achieve unbalanced IPv6 multicast load splitting.
Step 9 Configure the interfaces at the host side of Router E to join new multicast groups statically in
batches.
Step 10 Verify the configuration about unbalanced IPv6 multicast load splitting.
# The multicast source (2001::1/64) sends multicast data to multicast groups (FF13::1 to
FF13::9). Host A can receive the multicast data from the source. On Router E, check brief
information about the IPv6 PIM routing table.
<RouterE> display pim ipv6 routing-table brief
VPN-Instance: public net
Total 9 (*, G) entries; 9 (S, G) entries
00001.(*, FF13::1)
Upstream interface:Pos1/0/1
Number of downstream:1
00002.(2001::1, FF13::1)
Upstream interface:Pos1/0/1
Number of downstream:1
00003.(*, FF13::2)
Upstream interface:Pos1/0/2
Number of downstream:1
00004.(2001::1, FF13::2)
Upstream interface:Pos1/0/2
Number of downstream:1
00005.(*, FF13::3)
Upstream interface:Pos1/0/3
Number of downstream:1
00006.(2001::1, FF13::3)
Upstream interface:Pos1/0/3
Number of downstream:1
00007.(*, FF13::4)
Upstream interface:Pos1/0/1
Number of downstream:1
00008.(2001::1, FF13::4)
Upstream interface:Pos1/0/1
Number of downstream:1
00009.(*, FF13::5)
Upstream interface:Pos1/0/1
Number of downstream:1
00010.(2001::1, FF13::5)
Upstream interface:Pos1/0/1
Number of downstream:1
00011.(*, FF13::6)
Upstream interface:Pos1/0/1
Number of downstream:1
00012.(2001::1, FF13::6)
Upstream interface:Pos1/0/2
Number of downstream:1
00013.(*, FF13::7)
Upstream interface:Pos1/0/2
Number of downstream:1
00014.(2001::1, FF13::7)
Upstream interface:Pos1/0/1
Number of downstream:1
00015.(*, FF13::8)
Upstream interface:Pos1/0/1
Number of downstream:1
00016.(2001::1, FF13::8)
Upstream interface:Pos1/0/1
Number of downstream:1
00017.(*, FF13::9)
Upstream interface:Pos1/0/1
Number of downstream:1
00018.(2001::1, FF13::9)
Upstream interface:Pos1/0/1
Number of downstream:1
The upstream interfaces of the existing (*,G) and (S,G) entries remain unchanged. Because the
IPv6 multicast load splitting weight of POS 1/0/1 is higher than that of POS 1/0/2, more newly
generated entries use POS 1/0/1 as the upstream interface than POS 1/0/2. The IPv6 multicast
load splitting weight of POS 1/0/3 is 0, which indicates that this interface does not take part
in load splitting of new entries.
----End
Configuration Files
l Configuration file of Router A
#
sysname RouterA
#
ipv6
#
multicast ipv6 routing-enable
#
isis 1
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology standard
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::2/64
isis ipv6 enable 1
pim ipv6 sm
#
interface Pos2/0/1
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2002::1/64
isis ipv6 enable 1
pim ipv6 sm
#
interface Pos2/0/2
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2003::1/64
isis ipv6 enable 1
pim ipv6 sm
#
interface Pos2/0/3
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2004::1/64
isis ipv6 enable 1
pim ipv6 sm
#
interface LoopBack0
ipv6 enable
ipv6 address 2000::1/64
isis ipv6 enable 1
pim ipv6 sm
#
pim-ipv6
c-bsr 2000::1
c-rp 2000::1
#
return
#
ipv6
#
multicast ipv6 routing-enable
#
isis 1
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology standard
#
#
interface Pos1/0/0
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2002::2/64
isis ipv6 enable 1
pim ipv6 sm
#
interface Pos2/0/0
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2005::1/64
isis ipv6 enable 1
pim ipv6 sm
#
return
l Configuration file of Router C
#
sysname RouterC
#
ipv6
#
multicast ipv6 routing-enable
#
isis 1
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology standard
#
#
interface Pos1/0/0
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2003::2/64
isis ipv6 enable 1
pim ipv6 sm
#
interface Pos2/0/0
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2006::1/64
isis ipv6 enable 1
pim ipv6 sm
#
return
l Configuration file of Router D
#
sysname RouterD
#
ipv6
#
multicast ipv6 routing-enable
#
isis 1
network-entity 10.0000.0000.0004.00
#
ipv6 enable topology standard
#
#
interface Pos1/0/0
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2004::2/64
isis ipv6 enable 1
pim ipv6 sm
#
interface Pos2/0/0
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2007::1/64
isis ipv6 enable 1
pim ipv6 sm
#
return
l Configuration file of Router E
#
sysname RouterE
#
ipv6
#
multicast ipv6 routing-enable
multicast ipv6 load-splitting stable-preferred
#
isis 1
network-entity 10.0000.0000.0005.00
#
ipv6 enable topology standard
#
#
interface GigabitEthernet2/0/0
ipv6 enable
undo shutdown
ipv6 address 3001::1/64
isis ipv6 enable 1
pim ipv6 sm
mld static-group FF13::1 inc-step-mask 128 number 3
mld static-group FF13::4 inc-step-mask 128 number 6
#
interface Pos1/0/1
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2005::2/64
isis ipv6 enable 1
pim ipv6 sm
multicast ipv6 load-splitting weight 2
#
interface Pos1/0/2
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2006::2/64
isis ipv6 enable 1
pim ipv6 sm
#
interface Pos1/0/3
link-protocol ppp
ipv6 enable
undo shutdown
ipv6 address 2007::2/64
isis ipv6 enable 1
pim ipv6 sm
multicast ipv6 load-splitting weight 0
#
return
This chapter describes the configuration methods and maintenance of the multicast network
management function.
15.1 Multicast Network Management Introduction
This section describes the multicast network management features supported by the NE80E/
40E and the basic principle of multicast network management.
15.2 Configuring Multicast Network Management
This section describes how to configure multicast network management.
15.3 Adjusting the Frequency for Multicast Protocols to Send Trap Messages
This section describes how to adjust the frequency for multicast protocols to send trap messages.
By default, the multicast trap function is disabled. You can enable the trap function for each
module and adjust the frequency of sending certain types of traps through command lines.
Applicable Environment
At present, multicast network management does not support multi-instance. That is, the multicast
MIB cannot send information about multiple multicast instances to the NMS simultaneously.
Therefore, you must bind the multicast MIB to a specific instance (the public network instance or
a VPN instance) on a router.
After multicast network management is enabled, you can bind the multicast MIB to a VPN instance,
or enable the trap function of a specified module as required.
Pre-configuration Tasks
Before configuring multicast network management, complete the following tasks:
l Configuring basic multicast functions
l (Optional) Configuring VPN instances
Data Preparation
To configure multicast network management, you need the following data.
No. Data
Context
Do as follows on the router managed by the NMS:
Procedure
Step 1 Run:
system-view
Step 2 Run:
multicast-mib
The multicast MIB view is displayed and multicast network management is enabled.
After this function is enabled, the multicast MIB is bound to the public network instance.
----End
Context
To manage multicast VPN instances through the NMS, do as follows on the multicast router
managed by the NMS:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router that needs to be managed by the NMS:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router that needs to be managed by the NMS:
Procedure
Step 1 Run:
system-view
Step 2 Run:
snmp-agent trap enable feature-name mld [ trap-name { glblimit | iflimit | inslimit | joingrp | leavegrp } ]
The trap function of the MLD module is enabled.
----End
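For example, to enable traps for MLD group join and leave events only, run the command once per trap name. This is a sketch based on the syntax above; the prompt names are illustrative and the available trap-name keywords may vary by software version:

```text
<HUAWEI> system-view
[HUAWEI] snmp-agent trap enable feature-name mld trap-name joingrp
[HUAWEI] snmp-agent trap enable feature-name mld trap-name leavegrp
```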
Context
Do as follows on the router that needs to be managed by the NMS:
Procedure
Step 1 Run:
system-view
Step 2 Run:
snmp-agent trap enable feature-name mrm [ trap-name { cacglbchn | cacglbchnexceed | cacglbtotal | cacglbtotalexceed | cacoifchn | cacoifchnexceed | cacoiftotal | cacoiftotalexceed } ]
The trap function of the MRM module is enabled.
----End
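For example, to enable only the trap that reports the global CAC channel limit being exceeded, a sketch based on the syntax above (prompt names illustrative; keyword availability may vary by version):

```text
<HUAWEI> system-view
[HUAWEI] snmp-agent trap enable feature-name mrm trap-name cacglbchnexceed
```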
Context
Do as follows on the router that needs to be managed by the NMS:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router that needs to be managed by the NMS:
Procedure
Step 1 Run:
system-view
----End
Context
Do as follows on the router that needs to be managed by the NMS:
Procedure
Step 1 Run:
system-view
----End
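The procedures above list only the system-view step. The remaining modules follow the same command pattern as the MLD and MRM examples. The sketch below assumes that the feature names used by the display commands in the checking procedure (msdp, pim, l2-multicast) also apply to the enable form; consult the command reference for the module-specific trap-name keywords:

```text
<HUAWEI> system-view
[HUAWEI] snmp-agent trap enable feature-name msdp
[HUAWEI] snmp-agent trap enable feature-name pim
[HUAWEI] snmp-agent trap enable feature-name l2-multicast
```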
Procedure
l Run the display current-configuration configuration command to check the VPN
instance to which multicast MIB is bound.
l Run the display snmp-agent trap feature-name igmp all command to check the status
of the IGMP trap function.
l Run the display snmp-agent trap feature-name mld all command to check the status of
the MLD trap function.
l Run the display snmp-agent trap feature-name mrm all command to check the status of
the MRM trap function.
l Run the display snmp-agent trap feature-name msdp all command to check the status
of the MSDP trap function.
l Run the display snmp-agent trap feature-name pim all command to check the status of
the PIM trap function.
l Run the display snmp-agent trap feature-name l2-multicast all command to check the
status of the l2-multicast trap function.
----End
Applicable Environment
By adjusting the frequency at which each multicast protocol sends trap messages, you can keep
track of the status of multicast events on the routers in real time.
Pre-configuration Tasks
Before adjusting the frequency for multicast protocols to send trap messages, complete the
following tasks:
Data Preparation
To adjust the frequency for multicast protocols to send trap messages, you need the following
data.
No. Data
1 Number of trap messages about Join/Leave events sent by IGMP per second
2 Number of trap messages about Join/Leave events sent by MLD per second
4 Interval for sending trap messages about join failure due to multicast CAC
Context
Do as follows on the router managed by the NMS:
Procedure
Step 1 Run:
system-view
Step 2 Run:
multicast-mib
Step 3 Run:
multicast mib-notification join-leave frequency count
The number of trap messages about Join/Leave events sent by IGMP or MLD per second is set.
----End
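For example, to allow IGMP or MLD to send at most five Join/Leave trap messages per second, a sketch of the three steps above (the multicast MIB view prompt shown is illustrative):

```text
<HUAWEI> system-view
[HUAWEI] multicast-mib
[HUAWEI-multicast-mib] multicast mib-notification join-leave 5
```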
Context
After PIM is enabled on the routers, you can configure PIM MIB notifications to set the interval
for sending traps about PIM events, so that you can keep track of the status of those events on
the routers in real time. The PIM events include neighbor loss, neighbor addition, receipt of
invalid Join/Prune messages, RP-mapping changes, election of an interface as the DR, and receipt
of invalid Register messages.
Do as follows on the router managed by the NMS:
Procedure
Step 1 Run:
system-view
----End
15.3.4 Adjusting the Interval for Sending Trap Messages About Join
Failure due to Multicast CAC
Context
Do as follows on the router managed by the NMS:
Procedure
l Multicast CAC limit
1. Run:
system-view
The interval for sending trap messages about IGMP or PIM Join failure due to
multicast CAC is set.
The value of min-interval ranges from 0 to 65535. The default value is 0.
l IGMP entries limit
1. Run:
system-view
The interval for sending trap messages about IGMP join failures caused by the number
of IGMP entries exceeding the limit is set.
The value of min-interval ranges from 0 to 65535. The default value is 0.
----End
A Glossary
A
ASM Any-Source Multicast that is implemented through PIM-DM and PIM-
SM.
Assert A mechanism applicable to PIM-DM and PIM-SM. After receiving
a multicast packet through a downstream interface, a router performs
an RPF check on the packet. If the check fails, other multicast
forwarders exist on the network segment, and the router sends an Assert
message through that downstream interface to join the Assert election.
If the router loses the election, it removes the interface from the
downstream interface list. Assert ensures that only one multicast
forwarder exists on a network segment and that only one copy of each
multicast packet is transmitted by that forwarder.
C
CAC Call Admission Control (CAC) configures policies on the multicast
router connecting to the receiver, the intermediate multicast routers, and
the multicast router connecting to the multicast source to limit the number
of multicast entries that can be created on these routers. In this manner,
the operator can control the number of access users and the IPTV services
available on the IP core network.
D
Downstream Interface The interface that forwards multicast data. The router or receiver
host that the forwarded multicast data reaches is called the downstream
router or downstream host. The network segment where the downstream
interface resides is called the downstream network segment.
DR Designated Router, applicable only to PIM-SM. On the network
segment connecting to the source (S), the DR sends Register messages to
the RP. On the network segment connecting to group members, the DR
sends Join messages to the RP. In SSM mode, the DR at the group member
side sends Join messages directly to S.
F
First-hop Router The PIM router that directly connects to the multicast source and
forwards the multicast data sent by the multicast source.
Flooding A mechanism applicable only to PIM-DM. PIM-DM assumes that
members are densely distributed on the network and that each network
segment may have members. Based on this assumption, the multicast
source floods multicast data to every network segment and then prunes
the segments that have no members. Through periodic flooding and
pruning, PIM-DM establishes and maintains a unidirectional SPT that
connects the multicast source with members.
H
Hash function A function expressed as Value(G, M, C(i)) = (1103515245 *
((1103515245 * (G & M) + 12345) XOR C(i)) + 12345) mod 2^31.
When dynamic RP election is used, PIM-SM applies the hash function
to choose, from the C-RPs, the RP that serves a specific multicast group.
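As a worked illustration (not part of the product documentation), the hash can be computed directly. The function and variable names below are hypothetical, and all addresses are treated as unsigned 32-bit integers:

```python
def rp_hash(group, mask, crp_addr):
    """Value(G, M, C(i)) per the formula above:
    (1103515245 * ((1103515245 * (G & M) + 12345) XOR C(i)) + 12345) mod 2^31.
    Arguments are IPv4 addresses taken as unsigned 32-bit integers."""
    inner = (1103515245 * (group & mask) + 12345) ^ crp_addr
    return (1103515245 * inner + 12345) % (2 ** 31)

def elect_rp(group, mask, crp_addrs):
    """The C-RP with the largest hash value serves the group;
    a tie is broken in favor of the higher C-RP address."""
    return max(crp_addrs, key=lambda c: (rp_hash(group, mask, c), c))
```

Because every router runs the same computation over the same C-RP set, all routers independently elect the same RP for a given group.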
I
IGMP The Internet Group Management Protocol, a signaling mechanism
between hosts and routers used for IP multicast on leaf networks.
Hosts send IGMP messages to join or leave a multicast group. Routers
periodically send IGMP messages to hosts to check whether multicast
members exist.
J
Join The Join message, applicable to PIM-SM and PIM-DM.
l In PIM-SM, when a member exists on a network segment, the DR of
the segment sends a Join message to the RP hop by hop, generating a
multicast route. When the RP starts the SPT switchover, the RP sends
a Join message to the source hop by hop, likewise generating a
multicast route.
l In PIM-DM, the Join message is used to override a prune (prune
rejection).
L
Leaf Router The router that connects to user hosts.
Last-hop router The PIM router that directly connects to a multicast group member and
forwards multicast data to the member.
M
MBGP Multicast BGP, which mainly refers to the application of the
Multiprotocol Border Gateway Protocol (MP-BGP) to multicast. MP-BGP
is the multiprotocol extension of the Border Gateway Protocol (BGP). At
present, BGP4 is applied only to unicast; MP-BGP enables BGP4 to
support multiple network layer protocols, including multicast.
Mesh Group If multiple MSDP peers exist on the network, SA messages may be
flooded among the peers. Configuring multiple MSDP peers to join a
mesh group reduces the number of SA messages transmitted between
them. After receiving an SA message, a member of the mesh group first
checks the source of the message:
l If the SA message comes from an MSDP peer outside the mesh group,
the member performs an RPF check on the message. If the message
passes the check, the member forwards it to the other members of the
mesh group.
l If the SA message comes from a member of the mesh group, the
member accepts the message directly without performing an RPF
check, and does not forward it to the other members of the mesh
group.
MD The Multicast Domain, an implementation mechanism of multicast
VPN. Only PEs need to support multi-instance; an MD is transparent to
CEs and Ps. An MD is the set of all VPN instances on the PEs that can
send and receive multicast packets for a given VPN. Different VPN
instances belong to different MDs. An MD serves a specific VPN, and
all private multicast data of that VPN is transmitted within the MD.
MDT Multicast Distribution Tree. In a PIM multicast domain, a point-to-
multipoint multicast forwarding path is set up for each group. Because
the shape of the forwarding path resembles a tree, it is called a multicast
distribution tree. Its characteristic is that each link carries only one copy
of the multicast data, regardless of the number of members on the
network; the data is copied and distributed at branches as far
downstream as possible.
MLD Multicast Listener Discovery (MLD) is a sub-protocol of the Internet
Control Message Protocol Version 6 (ICMPv6) and applicable to IPv6
networks. MLD, whose function and implementation principle are
similar to those of IGMP, is used to set up a multicast group between
hosts and the nearest multicast router and maintain the relationship of
multicast members.
mping Multicast ping (mping) can be used to detect multicast members on a
network segment or to test the performance of the network in bearing
multicast services.
MSDP The Multicast Source Discovery Protocol, applicable only to PIM-SM
and useful only for Any-Source Multicast (ASM). MSDP peers set up
between the RPs of different PIM-SM domains share multicast source
information by exchanging Source Active (SA) messages, implementing
inter-domain multicast. MSDP peers set up between the RPs of the same
PIM-SM domain share multicast source information in the same way,
implementing Anycast RP.
MT Multicast Tunnel. In an MD, all PEs connect to an MT, which is
equivalent to all PEs connecting to a shared network segment. The PEs
use the MT to transmit private data. The transmission process is as
follows: the local PE encapsulates a VPN data packet into a public
network data packet and forwards it over the public network along the
MDT; after receiving the packet, the remote PE decapsulates it to restore
the VPN data packet.
MTI Multicast Tunnel Interface, the outgoing or incoming interface of an
MT, which is equivalent to the outgoing or incoming interface of the
MD. The local PE sends VPN data through the MTI and the remote PE
receives it from the MTI; the VRP defines the MTI to invoke the entire
MT transmission process. The MTI is the channel between the public
network instance and the VPN instance on a PE. A PE connects to the
MT, that is, to the shared network segment, through the MTI. On each
PE, the VPN instances that belong to the MD set up PIM neighbor
relationships on the MTI.
mtrace Multicast traceroute (mtrace) can be used to trace RPF paths or
multicast paths, and is applicable to routine maintenance and fault
location of multicast services.
P
PIM Protocol Independent Multicast, a multicast routing protocol.
Reachable unicast routes are the basis of PIM forwarding. Based on the
current unicast routing information, PIM performs RPF checks on
multicast packets, creates multicast routing entries, and establishes the
multicast distribution tree. PIM consists of two independent modes,
PIM-DM and PIM-SM.
PIM-DM Protocol Independent Multicast Dense Mode (PIM-DM) is applicable
to small-scale multicast networks with dense members.
PIM-IPv6 PIM-IPv6 is the PIM protocol used on IPv6 networks. Its functions and
implementation principle are similar to those of PIM.
PIM-IPv6-DM PIM-IPv6-DM is the PIM-DM protocol used on IPv6 networks. Its
functions and implementation principle are similar to those of PIM-DM.
PIM-IPv6-SM PIM-IPv6-SM is the PIM-SM protocol used on IPv6 networks. Its
functions and implementation principle are similar to those of PIM-SM.
PIM-SM Protocol Independent Multicast Sparse Mode (PIM-SM) is applicable
to large-scale multicast networks with scattered members.
Prune A mechanism applicable to PIM-DM and PIM-SM. If a router has no
downstream multicast group members, it sends a Prune message to the
upstream node to request that no more data be forwarded to it. After
receiving the Prune message, the upstream node removes the
downstream interface from its downstream interface list.
R
Register A mechanism applicable only to PIM-SM. When an active multicast
source exists on the network, the first-hop router encapsulates the
multicast data into a Register message and unicasts it to the RP, creating
an (S, G) entry on the RP and registering the multicast source
information.
RP The Rendezvous Point, applicable only to PIM-SM. The RP is the
forwarding core of a PIM-SM network. Members send Join messages
to the RP and set up an RPT with the RP as the root. The multicast source
registers with the RP, creating an (S, G) entry on the RP, and sends
multicast packets to members through the RP.
RPF Reverse Path Forwarding, the basis of multicast routing. Routers
perform RPF checks on received packets to create and maintain
multicast routing entries, so that multicast data is forwarded along the
correct path. After receiving a multicast packet, a router searches the
unicast routing table, the MBGP routing table, and the static multicast
routing table according to the packet source to select the RPF route. If
the interface on which the packet arrived is the same as the RPF
interface, the RPF check succeeds; otherwise, it fails.
RPF static route The RPF static route, also called a static multicast route, is not used to
forward data; it affects only the RPF check. By configuring static
multicast routes, users can specify the RPF interface and the RPF
neighbor for a specific packet source on the current router.
RPT The multicast distribution tree that takes the RP as the root and multicast
group members as leaves. The RPT is applicable only to PIM-SM.
S
SA Source Active, a type of MSDP message. An SA message contains
multiple (S, G) entries or a Register message. MSDP peers share
multicast source information by exchanging SA messages.
SA-Cache After receiving an SA message, an MSDP peer stores the (S, G)
information carried in the message in the SA-Cache. When a receiving
requirement arises, the (S, G) information can be obtained from the
SA-Cache. If a router is not configured with the SA-Cache, it cannot
store the (S, G) entries carried in SA messages; to receive the packets
of an (S, G), it has to wait for the SA message sent by its MSDP peer
in the next period.
SA-request When the local MSDP peer has new members but no (S, G) information
that meets their requirements, it sends an SA request to a specified
remote MSDP peer. After receiving the SA request message, the remote
peer responds immediately if it has (S, G) information that meets the
requirements.
Share-Group A VPN instance uniquely specifies the address of a Share-Group. The
VPN data is transparent to the public network: the PE does not
distinguish which multicast group a VPN packet belongs to, or whether
it is a protocol packet or a data packet. The PE uniformly encapsulates
it into a general public network multicast packet with the Share-Group
as its group address and sends it to the public network.
Share-MDT Share-Multicast Distribution Tree. One Share-Group uniquely maps to
an MD and uniquely sets up a Share-Multicast Distribution Tree (Share-
MDT) to guide routers in forwarding packets. All VPN packets are
forwarded along the Share-MDT, regardless of the PE from which they
enter the public network.
SPT The Shortest Path Tree is a multicast distribution tree that takes the
multicast source as root and multicast group members as leaf. The SPT
is applicable to PIM-DM, PIM-SM and PIM-SSM.
SPT switch The SPT switchover, applicable only to PIM-SM. When the rate of
Register packets exceeds the threshold, the RP triggers the switchover:
it sends a Join message toward the multicast source, establishes the
multicast path from the source to the RP, and instructs the DR to stop
sending Register messages. When the packet rate on the RPT exceeds
the threshold, the DR triggers the switchover: it sends a Join message
toward the multicast source, establishes the SPT from the source to the
DR, and switches the multicast data onto the SPT.
SSM The Source-Specific Multicast is the technology that is implemented by
PIM-SM.
static-rpf-peer The static RPF peer is a special application of the MSDP peer. You can
run a command to specify static RPF peers for an MSDP peer. SA
messages sent by a static RPF peer are exempt from the RPF check.
Switch-group-pool One Share-Group uniquely determines a Switch-group-pool, which
defines an address range of multicast groups for the Switch-MDT
switchover. When the Switch-MDT switchover is performed, the least-
used address is chosen from the Switch-group-pool, and all VPN
multicast packets entering the public network from the PE are
encapsulated with this Switch-group address.
Switch-MDT Switch-Multicast Distribution Tree. All PEs on the network monitor the
forwarding rate of the Share-MDT. When the rate of the data sent to the
public network through a PE exceeds the threshold, the PE sends a
switchover notification to the downstream receivers along the Share-
MDT. The notification carries the Switch-group address. A Switch-
MDT with the PE as the source and the Switch-group as the group
address is then set up between the PE and the remote PEs. Compared
with the Share-MDT, the Switch-MDT prunes remote PEs that have no
receiving requirement, implementing multicast on demand. VPN
multicast data entering the public network through the PE is no longer
encapsulated with the Share-Group address; instead, it is encapsulated
into Switch-group packets of the public network and forwarded along
the Switch-MDT.
U
Upstream Interface The interface through which the local router receives multicast data.
The router or the multicast source that forwards multicast data to the
local router is called the upstream router or upstream multicast source.
The network segment where the upstream interface resides is called the
upstream network segment.
This appendix collates frequently used acronyms and abbreviations in this document.
A
ASM Any-Source Multicast
B
BGP Border Gateway Protocol
BSR BootStrap Router
C
C-BSR Candidate-BSR
C-RP Candidate-RP
D
DR Designated Router
I
IGMP Internet Group Management Protocol
ISP Internet Service Provider
L
LSP Label Switched Path
O
OSPF Open Shortest Path First
P
PIM Protocol Independent Multicast
PIM-DM Protocol Independent Multicast Dense Mode
PIM-SM Protocol Independent Multicast Sparse Mode
R
RP Rendezvous Point
RPF Reverse Path Forwarding
RPT RP Tree
S
SA Source Active
SPT Shortest Path Tree
SSM Source-Specific Multicast
V
VPN Virtual Private Network