Issue draft 05
Date 2020-04-09
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees
or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://www.huawei.com
Email: support@huawei.com
Contents
2.2.25.5 Example for Configuring EVPN VPLS over SRv6 TE Policy (Manual Configuration)......................... 900
2.2.25.6 Example for Configuring L3VPNv4 over SRv6 TE Policy Egress Protection...........................................913
2.2.25.7 Example for Configuring Inter-AS L3VPNv4 over SRv6 TE Policy.............................................................930
3 EVPN.......................................................................................................................................946
3.1 EVPN....................................................................................................................................................................................... 946
3.1.1 Overview of EVPN...........................................................................................................................................................946
3.1.2 Understanding EVPN..................................................................................................................................................... 947
3.1.3 EVPN-MPLS....................................................................................................................................................................... 955
3.1.3.1 EVPN Multi-Homing Technology........................................................................................................................... 955
3.1.3.2 EVPN Seamless MPLS Fundamentals................................................................................................................... 958
3.1.3.3 EVPN's Service Modes................................................................................................................................................ 972
3.1.4 EVPN-VXLAN..................................................................................................................................................................... 975
3.1.4.1 EVPN VXLAN Fundamentals.................................................................................................................................... 975
3.1.5 EVPN VPWS....................................................................................................................................................................... 980
3.1.5.1 EVPN VPWS Fundamentals...................................................................................................................................... 980
3.1.6 PBB-EVPN.......................................................................................................................................................................... 987
3.1.6.1 PBB-EVPN Fundamentals..........................................................................................................................................987
3.1.6.2 Migration from an HVPLS Network to a PBB-EVPN....................................................................................... 996
3.1.7 EVPN E-Tree.......................................................................................................................................................................997
3.1.8 MAC Duplication Suppression for EVPN................................................................................................................. 999
3.1.9 EVPN ORF........................................................................................................................................................................ 1002
3.1.10 IGMP Snooping over EVPN MPLS.........................................................................................................................1004
3.1.11 Application Scenarios for EVPN............................................................................................................................ 1011
3.1.11.1 Using EVPN to Interconnect Other Networks...............................................................................................1011
3.1.11.2 EVPN L3VPN HVPN................................................................................................................................................ 1012
3.1.11.3 EVPN 6VPE................................................................................................................................................................ 1017
3.1.11.4 EVPN Splicing........................................................................................................................................................... 1020
3.1.11.5 Inter-AS EVPN Option C....................................................................................................................................... 1025
3.1.11.6 DCI Scenarios........................................................................................................................................................... 1027
3.1.11.7 NFVI Distributed Gateway (SR Tunnels).........................................................................................................1042
3.1.11.8 NFVI Distributed Gateway Function (BGP VPNv4/v6 over E2E SR Tunnels)......................................1056
3.1.11.9 NFVI Distributed Gateway Function (BGP EVPN over E2E SR Tunnels).............................................. 1067
3.1.11.10 Application Scenarios for EVPN E-LAN Accessing L3VPN...................................................................... 1078
3.2 EVPN Configuration........................................................................................................................................................ 1079
3.2.1 Overview of EVPN........................................................................................................................................................ 1080
3.2.2 EVPN Licensing Requirements and Configuration Precautions.................................................................... 1081
3.2.3 Activating EVPN Interface Licenses on a Board................................................................................................. 1104
3.2.4 Configuring Common EVPN Functions................................................................................................................. 1105
3.2.4.1 Configuring an EVPN Instance............................................................................................................................. 1106
3.2.4.2 Configuring an EVPN Source Address................................................................................................................ 1108
3.2.4.3 Binding an Interface to an EVPN Instance....................................................................................................... 1109
3.2.4.4 Configuring an ESI.................................................................................................................................................... 1109
3.2.31.1 Configuring a DCI Scenario with an E2E VXLAN EVPN Deployed on a Gateway............................ 1232
3.2.31.2 Configuring a DCI Scenario with a VLAN Layer 3 Sub-interface Accessing a Common L3VPN. 1233
3.2.31.3 Configuring a DCI Scenario with a VXLAN EVPN L3VPN Accessing a Common L3VPN................1235
3.2.31.4 Configuring a DCI Scenario with a VLAN Base Accessing an MPLS EVPN IRB................................. 1237
3.2.31.5 Configuring a DCI Scenario with a VXLAN EVPN Accessing an MPLS EVPN IRB............................. 1245
3.2.31.6 Verifying the Configuration of DCI Functions...............................................................................................1249
3.2.32 Configuring the NFVI Distributed Gateway Function (SR Tunnels)......................................................... 1250
3.2.32.1 Configuring Route Recursion over SR Tunnels............................................................................................. 1252
3.2.32.2 Configuring Route Advertisement on DC-GWs............................................................................................ 1253
3.2.32.3 Configuring Route Advertisement on L2GW/L3GWs................................................................................. 1256
3.2.32.4 (Optional) Configuring the Load Balancing Function............................................................................... 1259
3.2.32.5 Verifying the Configurations of the NFVI Distributed Gateway Function...........................................1261
3.2.33 Configuring the NFVI Distributed Gateway Function (BGP VPNv4/VPNv6 over E2E SR Tunnels) 1267
3.2.33.1 Configuring Route Recursion over SR Tunnels............................................................................................. 1270
3.2.33.2 Configuring Route Advertisement on PEs...................................................................................................... 1270
3.2.33.3 Configuring Route Advertisement on DC-GWs............................................................................................ 1271
3.2.33.4 Configuring Route Advertisement on L2GW/L3GWs................................................................................. 1274
3.2.33.5 Configuring the Load Balancing Function......................................................................................................1277
3.2.33.6 Verifying the Configurations of the NFVI Distributed Gateway Function...........................................1279
3.2.34 Configuring the NFVI Distributed Gateway Function (BGP EVPN over E2E SR Tunnels)................. 1285
3.2.34.1 Configuring Route Recursion over SR Tunnels............................................................................................. 1288
3.2.34.2 Configuring Route Advertisement on PEs...................................................................................................... 1288
3.2.34.3 Configuring Route Advertisement on DC-GWs............................................................................................ 1290
3.2.34.4 Configuring Route Advertisement on L2GW/L3GWs................................................................................. 1293
3.2.34.5 Configuring the Load Balancing Function......................................................................................................1295
3.2.34.6 Verifying the Configurations of the NFVI Distributed Gateway Function...........................................1297
3.2.35 Configuration Examples for EVPN........................................................................................................................1303
3.2.35.1 Example for Configuring a VPN to Access a Common EVPN E-LAN.................................................... 1303
3.2.35.2 Example for Configuring Eth-Trunk Sub-interfaces to Access a Common EVPN E-LAN in Active-Active Mode.............................................................................................................................. 1316
3.2.35.3 Example for Configuring Eth-Trunk Sub-interfaces to Access a BD EVPN IRB in Active-Active Mode............................................................................................................................................... 1333
3.2.35.4 Example for Configuring EVPN VPWS over MPLS.......................................................................................1349
3.2.35.5 Example for Setting a Redundancy Mode per ESI Instance..................................................................... 1359
3.2.35.6 Example for Configuring EVPN E-Tree............................................................................................................. 1368
3.2.35.7 Example for Configuring EVPN ORF................................................................................................................ 1377
3.2.35.8 Example for Configuring a DCI Scenario with an E2E VXLAN EVPN Deployed on a Gateway... 1390
3.2.35.9 Example for Configuring a DCI Scenario with a VLAN Layer 3 Sub-Interface Accessing a Common L3VPN.................................................................................................................................... 1400
3.2.35.10 Example for Configuring a DCI Scenario with a VXLAN EVPN L3VPN Accessing a Common L3VPN.................................................................................................................................... 1407
3.2.35.11 Example for Configuring MPLS EVPN E-LAN Option B.......................................................................... 1417
3.2.35.12 Example for Configuring an MPLS EVPN L3VPN in E-LAN Option B Mode.................................... 1422
4 VXLAN..................................................................................................................................1860
4.1 VXLAN.................................................................................................................................................................................. 1860
4.1.1 VXLAN Introduction..................................................................................................................................................... 1860
4.1.2 VXLAN Basics..................................................................................................................................................................1862
5.5.5.3.10 (Optional) Configuring Switching Between I-PMSI and S-PMSI Tunnels........................................ 2483
5.5.5.4 Configuring NG MVPN Option C in Inter-AS Seamless MPLS Scenarios............................................... 2485
5.5.5.4.1 Configuring Global MPLS LDP Functions and Enabling MPLS LDP on Interfaces.......................... 2486
5.5.5.4.2 Configuring an Automatic mLDP P2MP Tunnel..........................................................................................2486
5.5.5.4.3 Configuring a Static RP........................................................................................................................................ 2486
5.5.5.4.4 Configuring MP-IBGP Among PEs, ABRs, and ASBRs in the Same AS................................................ 2487
5.5.5.4.5 Configuring MP-EBGP for PEs and ASBRs in Different ASs.....................................................................2487
5.5.5.4.6 Configuring a Routing Policy to Control Label Distribution................................................................... 2488
5.5.5.4.7 Configuring Route Reflection on an ABR...................................................................................................... 2491
5.5.5.4.8 Configuring BGP MVPN Peers........................................................................................................................... 2492
5.5.5.4.9 Configuring a P2MP LSP to Carry Multicast Traffic...................................................................................2492
5.5.5.4.10 (Optional) Configuring Switching Between I-PMSI and S-PMSI Tunnels........................................ 2494
5.5.5.4.11 Configuring PIM................................................................................................................................................... 2496
5.5.5.5 Configuring Inter-Area Seamless NG MVPN................................................................................................... 2497
5.5.5.5.1 Configuring an Automatic mLDP P2MP Tunnel..........................................................................................2498
5.5.5.5.2 Configuring a Static RP........................................................................................................................................ 2498
5.5.5.5.3 Configuring Route Reflection on an ABR...................................................................................................... 2498
5.5.5.5.4 Configuring BGP MVPN Peers........................................................................................................................... 2499
5.5.5.5.5 Configuring a P2MP LSP to Carry Multicast Traffic...................................................................................2500
5.5.5.5.6 (Optional) Configuring Switching Between I-PMSI and S-PMSI Tunnels.......................................... 2502
5.5.5.5.7 Configuring PIM..................................................................................................................................................... 2504
5.5.5.5.8 Configuring Route Exchange Between PEs and CEs.................................................................................. 2504
5.5.6 Configuring NG MVPN Extranet.............................................................................................................................. 2505
5.5.7 Configuring UMH Route Selection Rules............................................................................................................. 2507
5.5.8 Configuring Dual-Root 1+1 Protection................................................................................................................. 2508
5.5.9 Configuration Examples for NG MVPN.................................................................................................................2511
5.5.9.1 Example for Configuring an Intra-AS NG MVPN with an mLDP P2MP LSP.........................................2511
5.5.9.2 Example for Configuring an Intra-AS NG MVPN with an RSVP-TE P2MP LSP.................................... 2527
5.5.9.3 Example for Configuring Dual-Root 1+1 Protection for RSVP-TE P2MP LSPs......................................2544
5.5.9.4 Example for Configuring Dual-Root 1+1 Protection for mLDP P2MP LSPs.......................................... 2561
5.5.9.5 Example for Configuring an NG MVPN (PIM-SM MDT Setup Across the Public Network) with (*, G) Join to Carry Multicast Traffic over an mLDP P2MP LSP.................................................................... 2576
5.5.9.6 Example for Configuring an NG MVPN (PIM-SM MDT Setup Across the Public Network) with (*, G) Join to Carry Multicast Traffic over an RSVP-TE P2MP LSP............................................................... 2590
5.5.9.7 Example for Configuring an NG MVPN (PIM-SM MDT Setup Not Across the Public Network) with (*, G) Join to Carry Multicast Traffic over an mLDP P2MP LSP.............................................................. 2605
5.5.9.8 Example for Configuring an NG MVPN (PIM-SM MDT Setup Not Across the Public Network) with (*, G) Join to Carry Multicast Traffic over an RSVP-TE P2MP LSP..........................................................2616
5.5.9.9 Example for Configuring an Intra-AS NG MVPN with Segmented Tunnels......................................... 2627
5.5.9.10 Example for Configuring Dual-Root 1+1 Protection for Intra-AS Segmented Tunnels.................. 2642
5.5.9.11 Example for Configuring an Inter-AS NG MVPN Option B...................................................................... 2664
5.5.9.12 Example for Configuring an Inter-AS NG MVPN in Option C................................................................. 2678
5.5.9.13 Example for Configuring an Inter-AS Seamless MPLS NG MVPN Option B...................................... 2689
5.5.9.14 Example for Configuring an Inter-AS Seamless MPLS NG MVPN Option C...................................... 2702
5.5.9.15 Example for Configuring an Inter-Area Seamless MPLS NG MVPN..................................................... 2715
5.5.9.16 Example for Configuring an Inter-area NG MVPN......................................................................................2730
5.5.9.17 Example for Configuring NG MVPN Extranet in the Remote Route Leaking Scenario Where a Source VPN Instance Needs to Be Configured on a Receiver PE............................................................2742
5.5.9.18 Example for Configuring NG MVPN Extranet in the Remote Route Leaking Scenario (BAS Multicast)................................................................................................................................................... 2753
5.5.9.19 Example for Configuring NG MVPN Extranet in the Local Cross Scenario Where the Source and Receiver VPN Instances Reside on the Same PE.......................................................................................... 2764
5.5.9.20 Example for Enabling the Highest IP Address to Be Selected as the UMH on an NG MVPN..... 2773
5.6 BIER Configuration.......................................................................................................................................................... 2784
5.6.1 Overview of BIER.......................................................................................................................................................... 2784
5.6.2 BIER Licensing Requirements and Configuration Precautions...................................................................... 2786
5.6.3 Configuring NG MVPN over BIER........................................................................................................................... 2788
5.6.3.1 Configuring BIER on the Public Network.......................................................................................................... 2789
5.6.3.2 Enabling BIER for an IGP........................................................................................................................................ 2790
5.6.3.3 Configuring BGP MVPN Peer Relationships..................................................................................................... 2791
5.6.3.4 Configuring a BIER Tunnel to Carry Multicast Traffic.................................................................................. 2792
5.6.3.5 (Optional) Configuring Switching Between I-PMSI and S-PMSI Tunnels..............................................2794
5.6.3.6 Configuring PIM......................................................................................................................................................... 2796
5.6.3.7 (Optional) Configuring IGMP............................................................................................................................... 2796
5.6.3.8 Verifying the NG MVPN over BIER Configuration......................................................................................... 2797
5.6.4 Configuring the BIER Loopback Board Enhancement Function................................................................... 2798
5.6.5 Configuring a Mode for BIER-MPLS to Process and Adjust TTLs................................................................. 2798
5.6.6 Configuration Examples for NG MVPN over BIER.............................................................................................2799
5.6.6.1 Example for Configuring NG MVPN over BIER............................................................................................... 2799
5.6.6.2 Example for Configuring NG MVPN over BIER Dual-Root 1+1 Protection........................................... 2812
6 Telemetry............................................................................................................................ 2824
6.1 Telemetry............................................................................................................................................................................ 2824
6.1.1 Overview of Telemetry................................................................................................................................................2824
6.1.2 Understanding Telemetry.......................................................................................................................................... 2825
6.1.2.1 Service Process of Telemetry Static Subscription........................................................................................... 2825
6.1.2.2 Service Process of Telemetry Dynamic Subscription..................................................................................... 2827
6.1.2.3 Key Telemetry Technologies.................................................................................................................................. 2829
6.1.3 Application Scenarios for Telemetry...................................................................................................................... 2841
6.1.3.1 Telemetry Applications in a Traffic Adjustment Scenario........................................................................... 2841
6.1.3.2 Telemetry Application in a Microburst Traffic Detection Scenario.......................................................... 2842
6.1.4 Terminology for Telemetry........................................................................................................................................ 2843
6.2 Telemetry Configuration................................................................................................................................................ 2844
6.2.1 Overview of Telemetry................................................................................................................................................2844
6.2.2 Telemetry Licensing Requirements and Configuration Precautions............................................................2845
6.2.3 Configuring Static Telemetry Subscription (IPv4 Collector).......................................................................... 2852
6.2.3.1 Configuring a Destination Collector................................................................................................................... 2853
Definition
Segment Routing (SR) is a protocol designed to forward data packets on a
network based on source routes. Segment Routing MPLS is Segment Routing
based on the MPLS forwarding plane and is referred to simply as Segment
Routing hereafter. Segment Routing divides a network path into several
segments and assigns a segment ID (SID) to each segment and network
forwarding node. The segments and nodes are sequentially arranged into a
segment list to form a forwarding path.
Segment Routing encodes the segment list that identifies a forwarding path
into the data packet header, so the segment IDs travel with the packet. After
receiving a data packet, a node parses the segment list. If the top segment ID
in the segment list identifies the local node, the node removes that segment
ID and proceeds with the follow-up procedure. If the top segment ID does not
identify the local node, the node uses the Equal-Cost Multipath (ECMP)
algorithm to forward the packet to the next node.
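The receive-end processing described above can be sketched as follows. This is a simplified model, not a real forwarding-plane implementation; the function and parameter names are illustrative, and a plain hash stands in for the ECMP algorithm:

```python
def process_packet(segment_list, local_sid, ecmp_next_hops):
    """Simplified receive-end handling of a segment list.

    segment_list: list of SIDs, top of stack first.
    local_sid: SID identifying this node.
    ecmp_next_hops: candidate next hops toward the top SID.
    """
    if not segment_list:
        return "deliver"  # no segments left: packet is at its destination
    top = segment_list[0]
    if top == local_sid:
        # The top segment identifies the local node: remove it and
        # continue processing with the remaining segments.
        segment_list.pop(0)
        return "pop-and-continue"
    # The top segment identifies another node: pick one of the
    # equal-cost next hops (a simple hash models ECMP selection).
    return ecmp_next_hops[hash(top) % len(ecmp_next_hops)]

# A packet carrying segment list [200, 300] arrives at the node whose
# SID is 200: the node pops 200 and continues with [300].
action = process_packet([200, 300], local_sid=200, ecmp_next_hops=["B"])
```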
Purpose
As networks evolve, an increasing variety of services imposes diverse network
requirements. For example, real-time UC&C applications prefer paths with low
delay and low jitter, whereas big data applications prefer high-bandwidth
tunnels with a low packet loss rate. Adapting the network to service growth
rule by rule cannot keep pace with rapid service development and even makes
network deployment more complex and difficult to maintain.
The solution is to allow services to drive network development and to define the
network architecture. Specifically, an application raises requirements (on the delay,
bandwidth, and packet loss rate). A controller collects information, such as
Benefits
Segment Routing offers the following benefits:
● The control plane of the MPLS network is simplified.
A controller or an IGP uniformly computes paths and distributes labels,
without using RSVP-TE or LDP. Segment Routing can be applied directly
to the MPLS architecture without any change to the forwarding plane.
● Efficient topology-independent loop-free alternate (TI-LFA) FRR protection
enables fast path failure recovery.
Combining the Segment Routing technology with the remote loop-free
alternate (RLFA) FRR algorithm yields an efficient TI-LFA FRR algorithm.
TI-LFA FRR supports node and link protection on any topology and
overcomes the drawbacks of conventional tunnel protection.
Basic Concepts
Segment routing involves the following concepts:
● Segment routing domain: a set of SR nodes.
● Segment ID (SID): uniquely identifies a segment. A SID is mapped to an MPLS
label on the forwarding plane.
● Segment routing global block (SRGB): a set of local labels reserved for
segment routing.
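On the forwarding plane, a prefix SID is mapped to an MPLS label by offsetting the SID index into the node's local SRGB. A minimal sketch of this mapping (the SRGB base values are illustrative; they are chosen to match the labels 16100 and 26100 used in the forwarding example later in this section):

```python
def sid_to_label(srgb_base, srgb_size, sid_index):
    """Map a prefix SID index into a local MPLS label using the SRGB.

    Each node computes label = SRGB base + SID index, so the same
    globally advertised index can yield different local labels on
    nodes whose SRGBs start at different bases.
    """
    if not 0 <= sid_index < srgb_size:
        raise ValueError("SID index falls outside the SRGB range")
    return srgb_base + sid_index

# Two nodes with different SRGB bases derive different labels for
# the same advertised SID index 100:
label_a = sid_to_label(srgb_base=16000, srgb_size=8000, sid_index=100)  # 16100
label_b = sid_to_label(srgb_base=26000, srgb_size=8000, sid_index=100)  # 26100
```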
Segment Category
An example of prefix SIDs, adjacency SIDs, and node SIDs is shown in Figure 1-2.
Combining prefix (node) SIDs and adjacency SIDs in sequence can construct any
network path. Each hop on a path identifies the next hop based on the segment
information at the top of the label stack. The segment information is stacked in
sequence at the top of the data header.
● If the segment information at the stack top identifies another node, the
receive end forwards the data packet to the next hop using ECMP.
● If the segment information at the stack top identifies the local node, the
receive end removes the top segment and proceeds with the follow-up procedure.
In actual applications, prefix segments, adjacency segments, and node segments
can be used independently or in combination. The following three main cases are
involved.
Prefix Segment
If several paths have the same cost, they load-balance traffic through ECMP. If
their costs differ, the higher-cost paths serve as link backups. Prefix
segment-based forwarding paths are not fixed, and the ingress cannot control
the entire forwarding path.
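The ECMP-versus-backup behavior described above can be sketched as follows. This is a toy classification, not router logic; the function name and tuple layout are illustrative:

```python
def classify_paths(paths):
    """Split candidate paths toward a prefix SID into ECMP and backup sets.

    paths: list of (next_hop, cost) tuples. Paths sharing the lowest
    cost load-balance traffic (ECMP); higher-cost paths act as backups.
    """
    best = min(cost for _, cost in paths)
    ecmp = [nh for nh, cost in paths if cost == best]
    backup = [nh for nh, cost in paths if cost > best]
    return ecmp, backup

# B and C share the lowest cost and load-balance; D is a backup.
ecmp, backup = classify_paths([("B", 10), ("C", 10), ("D", 20)])
# ecmp == ["B", "C"], backup == ["D"]
```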
Adjacency Segment
SR Forwarding Mechanism
SR can be used directly in the MPLS architecture, where the forwarding
mechanism remains unchanged. SIDs are encoded as MPLS labels, and the segment
list is encoded as a label stack. The segment to be processed is at the stack
top. Once a segment is processed, its label is removed from the label stack.
1.1.2.2 SR-MPLS BE
SR LSPs are established using the segment routing technique and use prefix or
node segments to guide data packet forwarding. Segment Routing Best Effort
(SR-MPLS BE) uses an IGP to run the shortest path first (SPF) algorithm to
compute an optimal SR LSP.
The establishment of SR LSPs and data forwarding over them are similar to
those of LDP LSPs. SR LSPs have no tunnel interfaces.
Creating an SR LSP
Creating an SR LSP involves the following operations:
● Devices report topology information to a controller (if a controller is used
to create a tunnel) and are assigned labels.
SR LSPs are created primarily using prefix labels. A destination node runs an IGP
to advertise prefix SIDs, and forwarders parse them and compute label values
based on local SRGBs. Each node then runs an IGP to collect topology information,
runs the SPF algorithm to calculate a label forwarding path, and delivers the
computed next hop and outgoing label (OuterLabel) to the forwarding table to
guide data packet forwarding.
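The label computation described above can be illustrated with a minimal Python sketch. It assumes the standard SR-MPLS rule that a node derives the outgoing label for a prefix SID from its own SRGB base plus the advertised SID index; the SRGB base and index values used here are illustrative, not taken from a specific configuration.

```python
def outgoing_label(srgb_base: int, prefix_sid_index: int) -> int:
    """Derive the label a node programs for a prefix SID:
    label = local SRGB base + advertised prefix SID index."""
    return srgb_base + prefix_sid_index

# Hypothetical example: the destination advertises prefix SID index 100;
# a transit node whose SRGB starts at 16000 programs label 16100.
print(outgoing_label(16000, 100))  # 16100
```

Because every node applies the same index to its own SRGB base, nodes with different SRGBs still agree on which prefix a label refers to.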
Table 1-2 describes the process of using prefix labels to create an LSP shown in
Figure 1-6.
Data Forwarding
Similar to MPLS, SR-MPLS BE operates on labels by pushing, swapping, or popping
them.
● Push: When a packet enters an SR LSP, the ingress adds a label between the
Layer 2 header and the IP header, or adds a new label stack on top of the
existing label stack.
● Swap: When packets are forwarded in an SR domain, a node searches the
label forwarding table for the label assigned by the next hop and swaps the
label on the top of the label stack with the matching label in each SR packet.
● Pop: When a packet leaves an SR LSP, a node finds the outbound interface
mapped to the label on the top of the label stack and removes the top label.
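The three operations above can be sketched in a few lines of Python. The label values match the swap sequence shown in the data forwarding example that follows (26100 → 36100 → 16100); the stack representation (top of stack at index 0) and the per-node swap tables are illustrative assumptions.

```python
# Minimal sketch of the push/swap/pop label operations (top of stack = index 0).

def push(stack, labels):
    return list(labels) + stack          # ingress prepends a label stack

def swap(stack, swap_table):
    return [swap_table[stack[0]]] + stack[1:]   # replace only the top label

def pop(stack):
    return stack[1:]                     # remove the top label

stack = push([], [26100])                # ingress pushes label 26100
stack = swap(stack, {26100: 36100})      # node B swaps 26100 for 36100
stack = swap(stack, {36100: 16100})      # node C swaps 36100 for 16100
stack = pop(stack)                       # the popping node removes 16100
print(stack)                             # []
```

After the final pop, the packet continues as an unlabeled IP packet, matching the behavior at the end of an SR LSP.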
Table 1-3 describes the data forwarding process on the network shown in Figure
1-7.
2 B Receives the labeled packet, swaps label 26100 for label 36100, and
forwards the packet.
3 C Receives the labeled packet, swaps label 36100 for label 16100, and
forwards the packet.
non-null: PHP is not supported; the egress assigns a common (non-null) label to
the penultimate hop. The MPLS EXP field is reserved, so QoS is supported, and
MPLS TTL processing is normal. Using a non-null label consumes a great number
of resources on the egress and is not recommended; however, the non-null label
helps the egress identify various types of services.
Users are gravitating toward segment routing (SR), a tunneling technique
introduced as a substitute for MPLS. SR simplifies network deployment and
management and reduces capital expenditure (CAPEX).
The SR and LDP interworking technique allows both segment routing and LDP to
work within the same network. This technique connects an SR network to an LDP
network to implement MPLS forwarding.
Since LSPs are unidirectional, SR and LDP interworking involves two directions: SR
to LDP and LDP to SR.
SR to LDP
Figure 1-8 describes the process of creating an E2E SR-to-LDP LSP.
During data forwarding, P2 has no SR label destined for PE2, so it swaps the SR
label for an LDP label based on the mapping between the prefix and the SID.
LDP to SR
Figure 1-9 describes the process of creating an E2E LDP-to-SR LSP.
SR Local Block Sub-TLV: Advertises the label range that an NE reserves for local
SIDs. Carried in the IS-IS Router Capability TLV-242.
Adj-SID Sub-TLV
An Adj-SID Sub-TLV is optional and carries IGP Adjacency SID information. Figure
1-12 shows its format.
SID/Label Sub-TLV
A SID/Label Sub-TLV includes a SID or an MPLS label. The SID/Label Sub-TLV is a
part of the SR-Capabilities Sub-TLV.
Figure 1-15 shows the format of the SID/Label Sub-TLV.
Range (16 bits): Prefix address and range of SIDs associated with the prefix.
In segment routing, each NE must be able to advertise its SR capability and global
SID range (or global label index). To meet this requirement, an SR-Capabilities
Sub-TLV is defined and embedded in the IS-IS Router Capability TLV-242 for
transfer. The SR-Capabilities Sub-TLV can be propagated only within the same
IS-IS level.
SR-Algorithm Sub-TLV
NEs use different algorithms, for example, the SPF algorithm and various SPF
variant algorithms, to compute paths to the other nodes or prefixes. The newly
defined SR-Algorithm Sub-TLV enables an NE to advertise its own algorithm. The
SR-Algorithm Sub-TLV is also carried in the IS-IS Router Capability TLV-242 for
transfer. The SR-Algorithm Sub-TLV can be propagated within the same IS-IS level.
An NE may also assign local labels outside the SRLB. Such labels are assigned
locally and are not advertised in the SRLB. For example, an adjacency SID may be
assigned a local label that is not within the SRLB range.
Devices A through F are deployed in areas of the same level. All Devices run IS-IS.
An SR tunnel originates from Device A and is terminated at Device D.
An SRGB is configured on Device D, and a prefix SID is set on the loopback
interface of Device D. Device D encapsulates the SRGB and prefix SID into a link
state protocol data unit (LSP) (for example, an IS-IS Router Capability TLV-242
containing the SR-Capabilities Sub-TLV) and floods the LSP across the network.
After receiving the SRGB and prefix SID, another device uses them to compute a
forwarding label and then uses IS-IS topology information and the Dijkstra
algorithm to calculate an LSP and generate LSP forwarding entries.
Creating an Inter-Area SR LSP
In Figure 1-22, to establish an inter-area SR LSP, the prefix SID must be advertised
across areas by penetrating these areas. This overcomes the restriction on IS-IS's
flooding scope within each area.
Devices A and D are deployed in different areas, and all devices run IS-IS. An SR
tunnel originates from Device A and is terminated at Device D.
SR-Algorithm TLV
NEs use different algorithms, for example, the SPF algorithm and various SPF
variant algorithms, to compute paths to the other nodes or prefixes. The newly
defined SR-Algorithm TLV allows an NE to advertise an algorithm in use.
Figure 1-23 shows the format of the SR-Algorithm TLV.
SID/Label Sub-TLV
A SID/Label Sub-TLV includes a SID or an MPLS label. Figure 1-26 shows the
format of the SID/Label Sub-TLV.
Adj-SID Sub-TLV
An Adj-SID Sub-TLV is optional and carries IGP Adjacency SID information. Figure
1-29 shows its format.
Weight (8 bits): Weight of the Adj-SID, used for load balancing.
Figure 1-31 shows the format of the LAN-Adj-SID Sub-TLV. Compared with the
Adj-SID Sub-TLV, the LAN Adj-SID Sub-TLV has an additional Neighbor ID field that
represents the router ID of a device that advertises the LAN Adj-SID Sub-TLV.
1.1.2.5 SR-MPLS TE
SR-MPLS TE Advantages
SR-MPLS TE tunnels are capable of meeting the requirements for rapid
development of software-defined networking (SDN), which Resource Reservation
Protocol-TE (RSVP-TE) tunnels are unable to meet. Table 1-20 describes the
comparison between SR-MPLS TE and RSVP-TE.
● Label allocation: With SR-MPLS TE, the extended IGP assigns and distributes
labels. Each link is assigned only a single label, and all LSPs share the label,
which reduces resource consumption and the workload of maintaining label
forwarding tables. With RSVP-TE, MPLS allocates and distributes labels, and
each LSP is assigned a label, which consumes a great number of label
resources and results in a heavy workload of maintaining label forwarding
tables.
● Control plane: SR-MPLS TE uses an IGP, which reduces the number of
protocols to be used. RSVP-TE has a complex control plane.
Related Concepts
Label Stack
A label stack is a set of Adjacency Segment labels in the form of a stack stored in
a packet header. Each Adjacency SID label in the stack identifies an adjacency to a
local node, and the label stack describes all adjacencies along an SR-MPLS TE LSP.
In packet forwarding, a node searches for an adjacency mapped to each Adjacency
Segment label in a packet, removes the label, and forwards the packet. After all
labels are removed from the label stack, the packet is sent out of an SR-MPLS TE
tunnel.
If a label stack depth exceeds that supported by a forwarder, the label stack
cannot carry all adjacency labels on a whole LSP. In this situation, the controller
assigns multiple label stacks to the forwarder. The controller delivers a label stack
to an appropriate node and assigns a special label to associate label stacks to
implement segment-based forwarding. The special label is a stitching label, and
the appropriate node is a stitching node.
The controller places a stitching label at the bottom of the label stack delivered
to the previous segment. After a packet arrives at the stitching node, the stitching
node swaps the stitching label for its associated label stack based on the
label-stack mapping and forwards the packet based on the label stack of the
next segment.
● A forwarder runs IGP to collect network topology information and uses BGP-
LS to report the information to the controller.
● Both the controller and forwarders run IGP. Forwarders flood network
topology information to one another, and each forwarder reports the
information to the controller.
1. P3 runs IGP to apply for a local dynamic label for an adjacency. For example,
P3 assigns adjacency label 9002 to the P3-to-P4 adjacency.
2. P3 runs IGP to advertise the adjacency label and flood it across the network.
3. P3 uses the label to generate a label forwarding table.
4. After the other nodes on the network run IGP to learn the adjacency label
advertised by P3, they do not generate local label forwarding entries for it.
PE1, P1, P2, P3, and P4 assign and advertise adjacency labels in the same way as
P3 does. The label forwarding table is then generated on each node. A node
establishes a BGP-LS neighbor relationship with the controller, generates topology
SR-MPLS TE Tunnel
Segment Routing Traffic Engineering (SR-MPLS TE) runs the SR protocol and uses
TE constraints to create a tunnel.
a. The controller delivers label stack {1005, 1009, 1010} to P1 and assigns a
stitching label of value 100 associated with the label stack. Label 100 is
the bottom label in the label stack on PE1.
b. The controller delivers label stack {1003, 1006, 100} to the ingress PE1.
3. The forwarder uses the delivered tunnel configurations and label stacks to
establish an LSP for an SR-MPLS TE tunnel.
An SR-MPLS TE tunnel does not support MTU negotiation. Therefore, the MTUs configured
on nodes along the SR-MPLS TE tunnel must be the same. For a manually configured SR-
MPLS TE tunnel, you can run the mtu command to configure the MTU on the tunnel
interface. If you do not configure the MTU, the default value of 1500 bytes is used. On a
manually configured SR-MPLS TE tunnel, the smallest of the following values takes effect:
the MTU of the tunnel, the MPLS MTU of the tunnel, the MTU of the outbound interface,
and the MPLS MTU of the outbound interface.
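The effective-MTU rule in the note above reduces to taking the minimum of four values. A minimal sketch, with all four values chosen purely for illustration:

```python
# Effective MTU of a manually configured SR-MPLS TE tunnel is the smallest of
# four configured values (all numbers here are illustrative).
tunnel_mtu = 1500          # MTU of the tunnel (default: 1500 bytes)
tunnel_mpls_mtu = 1500     # MPLS MTU of the tunnel
intf_mtu = 1600            # MTU of the outbound interface
intf_mpls_mtu = 1508       # MPLS MTU of the outbound interface

effective_mtu = min(tunnel_mtu, tunnel_mpls_mtu, intf_mtu, intf_mpls_mtu)
print(effective_mtu)  # 1500
```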
In Figure 1-35, the SR-MPLS TE path calculated by the controller is A -> B -> C ->
D -> E -> F. The path is mapped to two label stacks {1003, 1006, 100} and {1005,
1009, 1010}. The two label stacks are delivered to ingress A and stitching node C,
respectively. Label 100 is a stitching label and is associated with label stack {1005,
1009, 1010}. The other labels are adjacency labels. The process of forwarding
data packets along the SR-MPLS TE tunnel is as follows:
1. The ingress A adds a label stack of {1003, 1006, 100}. The ingress A uses the
outer label of 1003 in the label stack to match against an adjacency and finds
A-B adjacency as an outbound interface. The ingress A strips off label 1003
from the label stack {1003, 1006, 100} and forwards the packet downstream
through A-B outbound interface.
2. Node B uses the outer label of 1006 in the label stack to match against an
adjacency and finds B-C adjacency as an outbound interface. Node B strips off
label 1006 from the label stack {1006, 100}. The packet, carrying the label stack
{100}, travels through the B-to-C adjacency to the downstream node C.
3. After stitching node C receives the packet, it identifies stitching label 100 by
querying the stitching label entries and swaps the label for the associated
label stack {1005, 1009, 1010}. Stitching node C uses the top label 1005 to
search for an outbound interface connected to the C-to-D adjacency and
removes label 1005. Stitching node C then forwards the packet, carrying the
label stack {1009, 1010}, along the C-to-D adjacency to the downstream
node D. For more details about stitching labels and stitching nodes, see
1.1.2.5 SR-MPLS TE.
4. After nodes D and E receive the packet, they treat the packet in the same way
as node B. Node E removes the last label 1010 and forwards the data packet
to node F.
5. Egress F receives the packet without a label and forwards the packet along a
route that is found in a routing table.
The preceding information shows that after adjacency labels are manually
specified, devices strictly forward the data packets hop by hop along the explicit
path designated in the label stack. This forwarding method is also called strict
explicit-path SR-MPLS TE.
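The strict explicit-path forwarding just described, including the stitching-label swap at node C, can be sketched as a small Python simulation. The label values and label stacks come from the example above; the adjacency table that maps each adjacency label to an outbound hop is an illustrative assumption.

```python
# Sketch of strict explicit-path SR-MPLS TE forwarding with a stitching label.
ADJ = {1003: "A->B", 1006: "B->C", 1005: "C->D", 1009: "D->E", 1010: "E->F"}
STITCH = {100: [1005, 1009, 1010]}       # stitching label -> next label stack

def forward(stack):
    hops = []
    while stack:
        top = stack[0]
        if top in STITCH:                # stitching node: swap the stitching
            stack = STITCH[top] + stack[1:]   # label for its associated stack
            continue
        hops.append(ADJ[top])            # top label selects the outbound adjacency
        stack = stack[1:]                # strip the top label and forward
    return hops

print(forward([1003, 1006, 100]))
# ['A->B', 'B->C', 'C->D', 'D->E', 'E->F']
```

The simulation reproduces the hop-by-hop walkthrough: labels are consumed in order, and label 100 is never used for an adjacency lookup, only expanded at the stitching node.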
– Static BFD session: The local and remote discriminators are manually
specified. The local discriminator of the local node must be equal to the
remote discriminator of the remote node. The remote discriminator of
the local node must be equal to the local discriminator of the remote
node. A discriminator inconsistency causes a failure to establish a BFD
session. After the BFD session is established, the interval at which BFD
packets are received and the interval at which BFD packets are sent can
be modified.
– Dynamic BFD session: The local and remote discriminators do not need to
be manually specified. After the SR-MPLS TE tunnel goes Up, a BFD
session is triggered. The devices on both ends of a BFD session to be
established negotiate the local discriminator, remote discriminator,
interval at which BFD packets are received, and interval at which BFD
packets are sent.
A BFD session is bound to an SR-MPLS TE LSP. This means that a BFD session
is established between the ingress and egress. A BFD packet is sent by the
ingress and forwarded to the egress through an LSP. The egress responds to
the BFD packet. A BFD session on the ingress can rapidly detect the status of
the path through which the LSP passes.
If a link fault is detected, the BFD module notifies the forwarding plane of the
fault. The forwarding plane searches for a backup SR-MPLS TE LSP and
switches traffic to the backup SR-MPLS TE LSP.
● BFD for SR-MPLS TE tunnel: BFD for SR-MPLS TE tunnel must be used with
BFD for SR-MPLS TE LSP.
– BFD for SR-MPLS TE LSP controls the status of the primary/backup LSP
switchover. BFD for SR-MPLS TE tunnel checks actual status of tunnels.
▪ If BFD for SR-MPLS TE tunnel is configured and the BFD status is set
to administrative Down, the BFD session does not work, and the
tunnel interface status is unknown.
▪ If BFD for SR-MPLS TE tunnel is configured and the BFD status is not
set to administrative Down, the tunnel interface status is consistent
with the BFD status.
– The interface status of an SR-MPLS TE tunnel keeps consistent with the
status of BFD for SR-MPLS TE tunnel. The BFD session goes Up slowly
because of BFD negotiation. If a new label stack is delivered for a tunnel
in the Down state and the BFD for this tunnel goes Up, the process takes
10 to 20 seconds. As a result, hard tunnel convergence is delayed if no
protection is enabled for the tunnel.
● BFD for SR-MPLS TE (one-arm mode): A Huawei device on the ingress
cannot use BFD for SR-MPLS TE LSP to communicate with a non-Huawei
device on the egress. In this situation, no BFD session can be established. In
this case, one-arm BFD for SR-MPLS TE can be used.
On the ingress, enable BFD and specify the one-arm mode to establish a BFD
session. After the BFD session is established, the ingress sends BFD packets to
the egress through transit nodes along an SR-MPLS TE tunnel. After the
forwarding plane receives BFD packets, it removes MPLS labels and searches
for a route matching the destination IP address of the ingress. The forwarding
plane on the egress loops back the BFD packets to the ingress. The ingress
processes the BFD packets. This process is the one-arm detection mechanism.
In the following example, VPN traffic recurses to an SR-MPLS TE LSP, in the
scenario where BFD for SR-MPLS TE LSP is used.
A, CE1, CE2, and E are deployed on the same VPN, and CE2 advertises a route to E.
PE2 assigns the VPN label to E. PE1 installs the route to E and the VPN label. The
path of the SR-MPLS TE tunnel from PE1 to PE2 is PE1 -> P4 -> P3 -> PE2, and the
label stack is {9004, 9003, 9005}. When A sends a packet destined for E, PE1 finds
the packet's outbound interface based on label 9004 and adds label 9003, label
9005, and the inner VPN label assigned by PE2. Configure BFD to monitor the SR-
MPLS TE tunnel. If BFD enters the DetectDown state, the VPN recurses to another
SR-MPLS TE tunnel.
Figure 1-39) can be assigned to these links. Such an adjacency SID is called a
parallel adjacency label. Like common labels, the parallel adjacency label is also
used to compute a path.
When the data packets carrying the parallel label arrive at node B, node B parses
the parallel adjacency label and performs the hash algorithm to load-balance the
traffic over the three links, which efficiently uses network resources and prevents
network congestion.
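The hash-based spreading of flows over the parallel links can be sketched as follows. The link names, the flow key, and the toy hash function are illustrative assumptions; real devices hash on packet header fields with vendor-specific algorithms. The key property shown is that a given flow always maps to the same member link.

```python
# Sketch of hashing a flow onto one of the parallel links that share a
# parallel adjacency SID (link names and hash are illustrative).
links = ["B-C link1", "B-C link2", "B-C link3"]

def pick_link(flow_key: str) -> str:
    # A deterministic hash keeps all packets of one flow on the same link,
    # preserving packet order while spreading different flows across links.
    digest = sum(flow_key.encode())      # toy hash, for illustration only
    return links[digest % len(links)]

flow = "10.1.1.1->10.2.2.2"
print(pick_link(flow) == pick_link(flow))  # True: a flow sticks to one link
```

This per-flow stickiness is exactly why a BFD session's packets all traverse one member link, as noted below.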
If BFD for SR-MPLS TE is used together with an SR-MPLS TE parallel adjacency
label, BFD packets can be load-balanced, but each BFD packet is hashed onto a
single link. If that link fails, BFD detects a link Down event even if the other links
keep working, which poses a risk of false alarms.
Background
Devices can divert packets to SR-MPLS TE tunnels with matching differentiated
service code points (DSCPs), which is a TE tunnel selection method. Unlike the
traditional method of load balancing services on TE tunnels, DSCP priority-based
forwarding gives higher-priority services higher service quality.
Existing networks face a challenge that they may fail to provide exclusive high-
quality transmission resources for higher-priority services. This is because the
policy for selecting TE tunnels is based on public network routes or VPN routes,
which causes a node to select the same tunnel for services with the same
destination IP or VPN address but with different priorities.
With this function enabled, a PE can forward IP packets to tunnels based on DSCP
values. Class-of-service based tunnel selection (CBTS) supports only eight
priorities: AF1, AF2, AF3, AF4, BE, CS6, CS7, and EF. CBTS maps service traffic to
these priorities based on a configured traffic policy. Compared with CBTS-based
tunneling, DSCP-based tunneling maps the DSCP values of IP traffic to SR-MPLS
TE tunnels and supports more refined priority management (values 0 to 63), so
that SR-MPLS TE can be deployed more flexibly based on services.
Implementation
DSCP values can be specified on the tunnel interface of a tunnel to which services
recurse, so that the tunnel carries services of one or more priorities. Services with
specified priorities can be transmitted only on such tunnels and are not load-
balanced among all the tunnels to which they may recurse. The service class
attribute of a tunnel can also be set to "default" so that the tunnel transmits
mismatching services whose priorities are not specified.
The default DSCP attribute is not a mandatory setting. If the default attribute is not
configured, mismatching services will be transmitted along a tunnel that is assigned
no DSCP attribute. If such a tunnel does not exist, these services will be transmitted
along a tunnel that is assigned the smallest DSCP value.
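The selection rules above (match the DSCP value first, then fall back to the "default" tunnel) can be sketched as follows. The tunnel names and DSCP assignments are illustrative assumptions; the further fallbacks described in the note (a tunnel with no DSCP attribute, then the tunnel with the smallest DSCP value) are noted but not modeled.

```python
# Sketch of DSCP-based tunnel selection (tunnel names/DSCPs are illustrative).
tunnels = {
    "Tunnel1": {10, 20},        # carries services with DSCP 10 or 20
    "Tunnel2": {46},            # carries services with DSCP 46
    "Tunnel3": "default",       # carries mismatching services
}

def select_tunnel(dscp: int):
    for name, attr in tunnels.items():   # first, look for an exact DSCP match
        if isinstance(attr, set) and dscp in attr:
            return name
    for name, attr in tunnels.items():   # otherwise, use the "default" tunnel
        if attr == "default":
            return name
    return None  # further fallbacks (no-DSCP tunnel, smallest DSCP) omitted

print(select_tunnel(46))  # Tunnel2
print(select_tunnel(0))   # Tunnel3
```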
Usage Scenario
● SR-MPLS TE tunnel load balancing on the public network, LDP over SR-MPLS
TE, or SR-MPLS TE tunnels (non-load balancing) are configured on a PE.
● IP/L3VPN, including IPv4 and IPv6 services, is configured on a PE.
● In the VLL, VPLS, and BGP LSP over SR-MPLS TE scenarios, DSCP-based
tunneling for IP packets is not supported.
● This function is not supported on a P.
IGP for SR allocates SIDs only within an autonomous system (AS). Proper SID
orchestration in an AS facilitates the planning of an optimal path in the AS.
However, IGP for SR does not work if paths cross multiple ASs on a large-scale
network. BGP for SR extends BGP to support segment routing. It enables a device
to allocate BGP peer SIDs based on BGP peer information and report the SID
information to a controller. SR-MPLS TE uses BGP peer SIDs for path
orchestration, thereby obtaining an optimal inter-AS E2E SR-MPLS TE path.
BGP for SR involves the BGP egress peer engineering (EPE) extension and BGP-LS
extension.
BGP EPE
BGP EPE allocates BGP peer SIDs to inter-AS paths. BGP-LS advertises the BGP
peer SIDs to the controller. A forwarder that does not establish a BGP-LS peer
relationship with the controller can run BGP-LS to advertise a peer SID to a BGP
peer that has established a BGP-LS peer relationship with the controller. The BGP
peer then runs BGP-LS to advertise the peer SID to the network controller. As
shown in Figure 1-41, BGP EPE can allocate peer node segments (Peer-Node
SIDs), peer adjacency segments (Peer-Adj SIDs), and Peer-Set SIDs to peers.
● A Peer-Node SID identifies a peer node. The peers at both ends of each BGP
session are assigned Peer-Node SIDs. An EBGP peer relationship established
based on loopback interfaces may traverse multiple physical links. In this case,
the Peer-Node SID of a peer is mapped to multiple outbound interfaces.
Traffic can be forwarded through any outbound interface or be load-balanced
among multiple outbound interfaces.
● A Peer-Adj SID identifies an adjacency to a peer. An EBGP peer relationship
established based on loopback interfaces may traverse multiple physical links.
In this case, each adjacency is assigned a Peer-Adj SID. Only a specified link
(mapped to a specified outbound interface) can be used for forwarding.
● A Peer-Set SID identifies a group of peers that are planned as a set. BGP
allocates a Peer-Set SID to the set. Each Peer-Set SID can be mapped to
multiple outbound interfaces during forwarding. Because one peer set consists
of multiple peer nodes and peer adjacencies, the SID allocated to a peer set is
mapped to multiple Peer-Node SIDs and Peer-Adj SIDs.
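The three SID types can be summarized as mappings from SIDs to candidate outbound interfaces. The Peer-Node and Peer-Adj SID values (28001, 18001, 18002) follow the Figure 1-41 example described below; the interface names and the Peer-Set SID value 38001 are illustrative assumptions.

```python
# Sketch of how BGP EPE SIDs map to outbound interfaces.
sid_to_interfaces = {
    28001: ["link1", "link2"],   # Peer-Node SID: any member link may carry
                                 # traffic, or traffic is load-balanced
    18001: ["link1"],            # Peer-Adj SID: one specific link only
    18002: ["link2"],
}

# A Peer-Set SID (38001 is hypothetical) aggregates the outbound interfaces
# of its member peer nodes and peer adjacencies.
members = (28001,)
peer_set = {38001: sorted({i for sid in members for i in sid_to_interfaces[sid]})}

print(sid_to_interfaces[18002])  # ['link2']
print(peer_set[38001])           # ['link1', 'link2']
```

The contrast is visible directly: a Peer-Adj SID pins traffic to one link, while Peer-Node and Peer-Set SIDs offer a set of candidate links.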
On the network shown in Figure 1-41, ASBR1 and ASBR3 are directly connected
through two physical links. An EBGP peer relationship is established between
ASBR1 and ASBR3 through loopback interfaces. ASBR1 runs BGP EPE to assign the
Peer-Node SID 28001 to its peer (ASBR3) and to assign the Peer-Adj SIDs 18001
and 18002 to the physical links. For an EBGP peer relationship established
between directly connected physical interfaces, BGP EPE allocates a Peer-Node SID
rather than a Peer-Adj SID. For example, on the network shown in Figure 1-41,
BGP EPE allocates only Peer-Node SIDs 28002, 28003, and 28004 to the ASBR1-
ASBR5, ASBR2-ASBR4, and ASBR2-ASBR5 peer relationships, respectively.
Peer-Node SIDs and Peer-Adj SIDs are effective only on local devices. Different
devices can have the same Peer-Node SID or Peer-Adj SID. Currently, BGP EPE
supports only EBGP peers. For multi-hop EBGP peers, they must be directly
connected through physical links. This is because transit nodes do not have BGP
peer SID information, resulting in forwarding failures.
BGP EPE allocates SIDs to BGP peers and links, but cannot be used to construct a
forwarding tunnel. BGP peer SIDs must be used with IGP SIDs to establish an E2E
tunnel. SR-MPLS mainly involves SR LSPs and SR-MPLS TE tunnels.
● SR LSPs are dynamically computed by forwarders based on intra-AS IGP SIDs.
Because peer SIDs allocated by BGP EPE cannot be used by an IGP, inter-AS SR
LSPs are not supported.
1. You can select either method as needed. Selecting both of the methods is not allowed.
2. The preceding methods are also supported if all the links corresponding to a BGP Peer-
Set SID fail.
BGP-LS
Inter-AS E2E SR-MPLS TE tunnels can be established by manually specifying
explicit paths, or they can be orchestrated by a controller. In a controller-based
orchestration scenario, intra- and inter-AS SIDs are reported to the controller using
BGP-LS. Inter-AS links must support TE link attribute configuration and reporting
to the controller. This enables the controller to compute primary and backup paths
based on link attributes. BGP EPE discovers network topology information and
allocates SID information. BGP-LS then packages this information into the Link
network layer reachability information (NLRI) field and reports it to the controller.
Figure 1-43 and Figure 1-44 show Link NLRI formats.
Figure 1-43 Link NLRI format for Peer-Node and Peer-Adj SIDs advertised by BGP-
LS
Figure 1-44 Link NLRI format for Peer-Set SIDs advertised by BGP-LS
Field Description
A Peer-Node SID TLV and a Peer-Adj SID TLV have the same format, as shown in
Figure 1-45.
Flags 8 bits Flags field used in a Peer-Adj SID TLV. Figure 1-46
shows the format of this field.
In this field:
● V: value flag. If set, the Adj-SID carries a value. By
default, the flag is set.
● L: local flag. If set, the value or index carried by the
Adj-SID has local significance. By default, the flag
is set.
Binding SIDs are set on forwarders within an AS domain. Each binding SID
represents an intra-AS SR-MPLS TE tunnel. In Figure 1-47, a binding SID is set on
the ingress within an AS domain.
● Set binding SIDs to 6000 and 7000 on CSG1, representing label stacks {102,
203} and {110, 112, 213}, respectively.
● Set a binding SID to 8000 on ASBR3 to represent a label stack {405, 506}.
● Set a binding SID to 9000 on ASBR4 to represent a label stack {415, 516, 660}.
After binding SIDs are generated, the controller can calculate an inter-AS E2E SR-
MPLS TE tunnel using the binding SIDs and BGP peer SIDs. A static explicit path
can be configured so that an inter-AS E2E SR-MPLS TE tunnel is established over
the path. In Figure 1-47, the label stacks of the primary and backup LSPs in an
inter-AS E2E SR-MPLS TE are {6000, 3040, 8000} and {7000, 3140, 9000},
respectively. The complete label stacks are {102, 203, 3040, 405, 506} and {110,
112, 213, 3140, 415, 516, 660}.
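The expansion of the compact E2E stacks into the complete label stacks can be checked with a short sketch. The binding SID values and their associated label stacks are taken from the example above; the expansion logic itself is a simplified model of what the stitching nodes do hop by hop.

```python
# Sketch of expanding binding SIDs in an E2E label stack into complete stacks,
# using the binding SIDs from the example above.
binding = {
    6000: [102, 203],            # CSG1: intra-AS stack of the primary path
    7000: [110, 112, 213],       # CSG1: intra-AS stack of the backup path
    8000: [405, 506],            # ASBR3: intra-AS stack
    9000: [415, 516, 660],       # ASBR4: intra-AS stack
}

def expand(stack):
    out = []
    for label in stack:
        out.extend(binding.get(label, [label]))  # BGP peer SIDs pass through
    return out

print(expand([6000, 3040, 8000]))  # [102, 203, 3040, 405, 506]
print(expand([7000, 3140, 9000]))  # [110, 112, 213, 3140, 415, 516, 660]
```

The two results match the complete primary and backup label stacks stated above, which shows why binding SIDs keep the ingress label stack shallow.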
A binding SID is associated with a local forwarding path by specifying a local
label and is used for NE forwarding and encapsulation. Using binding SIDs
reduces the number of labels in a label stack on an NE, which helps build an
inter-AS E2E SR-MPLS TE network.
b. The controller delivers the E2E SR-MPLS TE tunnel label stack {6000,
3040, 8000} to CSG1 that is the ingress of an inter-AS E2E SR-MPLS TE
tunnel.
4. CSG1 establishes an inter-AS E2E SR-MPLS TE tunnel based on the tunnel
configuration and label stack information delivered by the controller.
A forwarder operates a label in a packet based on the label stack mapped to the
SR-MPLS TE LSP, searches for an outbound interface hop by hop based on the top
label of the label stack, and uses the label to guide the packet to the tunnel
destination address.
In Figure 1-49, the controller calculates an SR-MPLS TE tunnel path CSG1->AGG1-
>ASBR1->ASBR3->P1->PE1. The label stack of the path is {6000, 3040, 8000},
where 6000 and 8000 are binding SID labels, and 3040 is a BGP peer SID.
carries the label stack {203, 3040, 8000} and passes through the CSG1->AGG1
adjacency to the downstream AGG1.
2. After receiving the packet, AGG1 matches an adjacency against label 203 on
the top of the label stack, finds an outbound interface as the AGG1->ASBR1
adjacency, and removes label 203. The packet carries a label stack {3040,
8000} and passes through the AGG1->ASBR1 adjacency to the downstream
ASBR1.
3. After receiving the packet, ASBR1 matches an adjacency against label 3040
on the top of the label stack, finds an outbound interface as the ASBR1->
ASBR3 adjacency, and removes label 3040. The packet carries a label stack
{8000} and passes through the ASBR1->ASBR3 adjacency to the downstream
ASBR3.
4. After receiving the packet, ASBR3 searches the My Local SID table based on
label 8000 on the top of the label stack. 8000 is a binding SID label and
mapped to a label stack {405, 506}. ASBR3 searches for an outbound interface
based on label 405, maps the label to the ASBR3->P1 adjacency, and then
removes label 405. The packet carries a label stack {506} and passes through
the ASBR3->P1 adjacency to the downstream P1.
5. After receiving the packet, P1 matches an adjacency against label 506 on the
top of the label stack, finds an outbound interface as the P1->PE1 adjacency,
and removes label 506. The packet without a label is forwarded to the
destination PE1 through the P1->PE1 adjacency.
E2E SR-MPLS TE does not use a protocol to establish tunnels. Once a label stack is
delivered to an SR-MPLS TE node, the node establishes an SR-MPLS TE LSP. The
LSP does not encounter the protocol Down state, except for the situation when a
label stack is withdrawn. Therefore, BFD must be used to monitor faults in the E2E
SR-MPLS TE LSP. A fault detected by BFD triggers a primary/backup SR-MPLS TE
LSP switchover. BFD for E2E SR-MPLS TE is an E2E rapid detection mechanism that
rapidly detects faults in links of an SR-MPLS TE tunnel. BFD for E2E SR-MPLS TE
modes are as follows:
● One-arm BFD for E2E SR-MPLS TE LSP: When an E2E SR-MPLS TE LSP is
established and a BFD session fails to be negotiated, the SR-MPLS TE LSP
cannot go Up. BFD for E2E SR-MPLS TE LSP rapidly triggers a primary/HSB
LSP switchover if the primary LSP fails.
● BFD for E2E SR-MPLS TE tunnel: The E2E SR-MPLS TE tunnel status is
monitored using both BFD for E2E SR-MPLS TE tunnel and BFD for E2E SR-
MPLS TE LSP.
– BFD for E2E SR-MPLS TE LSP controls the primary/HSB LSP switchover
and switchback status, whereas BFD for E2E SR-MPLS TE tunnel controls
the effective status of a tunnel. If BFD for E2E SR-MPLS TE tunnel is not
configured, the default tunnel status keeps Up, and the effective status
cannot be determined.
– The interface status of an E2E SR-MPLS TE tunnel keeps consistent with
the status of BFD for E2E SR-MPLS TE tunnel. The BFD session goes Up
slowly because of BFD negotiation. If a new label stack is delivered for a
tunnel in the Down state and BFD for this tunnel goes Up, the process
takes more than 10 seconds. As a result, hardware-based tunnel
convergence is delayed if no protection is enabled for the tunnel.
In Figure 1-52, the implementation of one-arm BFD for E2E SR-MPLS TE is as
follows:
1. Enable BFD and specify the one-arm mode to establish a BFD session on the
ingress.
2. Establish a reverse E2E SR-MPLS TE tunnel in advance on the egress and set a
binding SID for the tunnel.
3. Bind the BFD session to the binding SID of the reverse E2E SR-MPLS TE tunnel
on the ingress.
Figure 1-52 One-arm BFD for E2E SR-MPLS TE (BFD loopback packets are
forwarded through a tunnel.)
After the one-arm BFD session is established, the ingress sends a one-arm BFD
packet that carries the binding SID of the reverse tunnel. After the one-arm BFD
packet arrives at the egress through a transit node along the SR-MPLS TE tunnel,
the forwarding plane removes the MPLS label from the BFD packet and associates
it with the reverse SR-MPLS TE tunnel based on the binding SID carried in the
one-arm BFD packet. The reverse BFD packet is attached with a label stack of the
E2E SR-MPLS TE tunnel and looped back to the ingress through the transit node
along the SR-MPLS TE tunnel. The ingress processes the detection packet to
implement the one-arm loopback detection mechanism.
If the egress does not have a reverse E2E SR-MPLS TE tunnel, the egress searches
for an IP route based on the destination address of the BFD packet to loop back
the packet, as shown in Figure 1-53.
Figure 1-53 One-arm BFD for E2E SR-MPLS TE (BFD loopback packets are
forwarded over IP links.)
Theoretically, binding SIDs and BGP peer SIDs can be used to establish an explicit
path across multiple AS domains (greater than or equal to three). AS domains,
however, are subject to management of various organizations. When a path
crosses multiple AS domains, the path also crosses the networks of multiple
management organizations, which may hinder network deployment.
In Figure 1-54, the network is connected to three AS domains. If PE1 establishes
an E2E SR-MPLS TE network over a multi-hop explicit path to PE3, the traffic path
from AS y to AS z can be determined in AS x. AS y and AS z, however, may belong
to different management organizations than AS x. In this situation, the traffic path
may not be accepted by AS y or AS z, and the great depth of the label stack
decreases forwarding efficiency. AS y and AS z may also be connected to a
controller different from the one used by AS x, making it difficult to establish an
E2E SR-MPLS TE network from PE1 to PE3.
Static Route
An SR LSP has no tunnel interface. Therefore, you can configure a static route that
specifies a next hop; the static route then recurses to the SR LSP based on that
next hop.
Static routes on an SR-MPLS TE tunnel work in the same way as common static
routes. When configuring a static route, set the outbound interface of a static
route to an SR-MPLS TE tunnel interface so that traffic transmitted over the route
is directed to the SR-MPLS TE tunnel.
Tunnel Policy
By default, VPN traffic is forwarded through LDP LSPs, not SR LSPs or SR-MPLS TE
tunnels. If the default LDP LSPs cannot meet VPN traffic requirements, a tunnel
policy is used to direct VPN traffic to an SR LSP or an SR-MPLS TE tunnel.
The tunnel policy may be a tunnel type prioritizing policy or a tunnel binding
policy. Select either of the following policies as needed:
● Select-seq mode: This policy changes the type of tunnel selected for VPN
traffic. An SR LSP or SR-MPLS TE tunnel is selected as a public tunnel for VPN
traffic based on the prioritized tunnel types. If no LDP LSPs are available, SR
LSPs are selected by default.
● Tunnel binding mode: This policy defines a specific destination IP address, and
this address is bound to an SR-MPLS TE tunnel for VPN traffic to guarantee
QoS.
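The difference between the two modes can be sketched as follows (a hypothetical Python model for illustration, not device configuration; the tunnel names and types are invented): select-seq picks the first tunnel type in a configured priority sequence that has an available tunnel, while tunnel binding maps a specific destination address directly to a bound tunnel.

```python
# Illustrative sketch of the two tunnel policy modes (hypothetical
# data structures, not Huawei VRP code).

def select_seq(available, seq):
    """Return the first tunnel whose type appears earliest in the
    configured priority sequence and has at least one instance."""
    for tunnel_type in seq:
        if available.get(tunnel_type):
            return available[tunnel_type][0]
    return None

def tunnel_binding(bindings, dest):
    """Return the tunnel explicitly bound to a destination, if any."""
    return bindings.get(dest)

available = {"sr-te": ["Tunnel10"], "ldp": ["LSP-1"]}
# Prefer SR-MPLS TE tunnels over LDP LSPs for VPN traffic.
print(select_seq(available, ["sr-te", "ldp"]))   # Tunnel10
bindings = {"10.2.2.2": "Tunnel10"}
print(tunnel_binding(bindings, "10.2.2.2"))      # Tunnel10
```

In select-seq mode the choice depends only on tunnel type priority; in binding mode it depends only on the destination address, which is what makes per-destination QoS guarantees possible.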
Auto Route
With an auto route, an IGP treats an SR-MPLS TE tunnel as a logical link during
path computation, and the tunnel interface is used as the outbound interface of
the route. According to the network plan, a node determines whether to advertise
an LSP link to its neighbors for packet forwarding. An auto route is configured
using either of the following methods:
● Forwarding shortcut: The node does not advertise an SR-MPLS TE tunnel to its
neighbor nodes. The SR-MPLS TE tunnel can be involved only in local route
calculation, but cannot be used by the other nodes.
● Forwarding adjacency: The node advertises an SR-MPLS TE tunnel to its
neighbor nodes. The SR-MPLS TE tunnel is involved in global route calculation
and can be used by the other nodes.
● Forwarding shortcut and forwarding adjacency are mutually exclusive, and cannot be
used simultaneously.
● When forwarding adjacency is used, a reverse tunnel must be configured so that
the routing protocol can perform a bidirectional check after a node advertises LSP
links to the other nodes. Forwarding adjacency must be enabled for both tunnels
in opposite directions.
Policy-Based Routing
Policy-based routing (PBR) allows a device to select routes based on user-defined
policies, which improves traffic security and balances traffic. If PBR is enabled on
an SR network, IP packets are forwarded over specific LSPs based on PBR rules. If
packets do not match PBR rules, they are forwarded using IP; if they match PBR
rules, they are forwarded over specific tunnels.
In Figure 1-56, the deployment of public network BGP route recursive to an SR-
MPLS TE tunnel is as follows:
● An IGP neighbor relationship is established between each pair of directly
connected devices, and segment routing is configured on PEs and Ps. An SR-
MPLS TE tunnel is established between PEs.
● A BGP peer relationship between PEs is established to enable the PEs to learn
the peer routes.
● A tunnel policy is configured on PE1 so that the BGP service route recurses to
the SR-MPLS TE tunnel.
The network shown in Figure 1-59 consists of inconsecutive L3VPN subnets with a
backbone network in between. PEs establish an SR LSP to forward L3VPN packets.
PEs run BGP to learn VPN routes. The deployment is as follows:
● An IS-IS neighbor relationship is established between each pair of directly
connected devices on the public network to implement route reachability.
● A BGP peer relationship is established between the two PEs to learn peer VPN
routes of each other.
● The PEs establish an IS-IS SR LSP to assign public network labels and compute
a label switched path.
● BGP is used to assign a private network label, for example, label Z, to a VPN
instance.
● VPN routes recurse to the SR LSP.
● PE1 receives an IP packet, adds the private network label and SR public
network label to the packet, and forwards the packet along the label switched
path.
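The encapsulation performed by PE1 in the last step can be sketched as follows (an illustrative model only; "Z" stands for the private network label assigned by BGP, and the SR label value is hypothetical):

```python
# Minimal sketch of L3VPN-over-SR-LSP encapsulation at the ingress PE.
# The inner VPN label is pushed first, then the outer SR public network
# label, so the SR label ends up at the top of the stack.

def encapsulate(ip_packet, vpn_label, sr_label):
    """Return the labeled packet as it leaves the ingress PE."""
    return {"labels": [sr_label, vpn_label], "payload": ip_packet}

packet = encapsulate("IP packet to CE2", vpn_label="Z", sr_label=16100)
print(packet["labels"])  # [16100, 'Z'] -> outer SR label first
```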
The network shown in Figure 1-60 consists of inconsecutive L3VPN subnets with a
backbone network in between. PEs establish an SR-MPLS TE tunnel to forward
L3VPN packets. PEs run BGP to learn VPN routes. The deployment is as follows:
● An IS-IS neighbor relationship is established between each pair of directly
connected devices on the public network to implement route reachability.
● A BGP peer relationship is established between the two PEs to learn peer VPN
routes of each other.
● The PEs establish an IS-IS SR-MPLS TE tunnel to assign public network labels
and compute a label switched path.
● BGP is used to assign a private network label, for example, label Z, to a VPN
instance.
● A tunnel policy is configured on the PE to allow the private network route to
recurse to the SR-MPLS TE tunnel.
● PE1 receives an IP packet, adds the private network label and SR public
network label to the packet, and forwards the packet along the label switched
path.
HVPN
On a growing network with increasing types of services, PEs encounter scalability
problems, such as insufficient access or routing capabilities, which reduces
network performance and scalability. In this situation, VPNs cannot be deployed in
a large scale. In Figure 2, on a hierarchical VPN (HVPN), PEs play different roles
and provide various functions. These PEs form a hierarchical architecture to
provide functions that are provided by one PE on a non-hierarchical VPN. HVPNs
lower the performance requirements for PEs.
● BGP peer relationships are established between the UPE and SPE and
between the SPE and NPE. A segment routing LSP is established between the
UPE and NPE.
● The SPE recurses VPNv4 routes to the SR LSP.
The process of forwarding HVPN packets that CE1 sends to CE2 is as follows:
1. CE1 sends a VPN packet to the UPE.
2. After receiving the packet, the UPE searches its VPN forwarding table for an
LSP to forward the packet based on the destination address of the packet.
Then, the UPE adds an inner label L2 and an outer SR public network label Lv
to the packet and sends the packet to the SPE over the corresponding LSP.
3. After receiving the packet, the SPE replaces the outer SR public network label
Lv with Lu and the inner label L2 with L3. Then, the SPE sends the packet to
the NPE over the corresponding LSP.
4. After receiving the packet, the NPE removes the outer SR public network label
Lu, searches for the VPN instance corresponding to the packet based on the
inner label L3, and removes the inner label L3 after the VPN instance is found.
Then, the NPE searches the VPN forwarding table of this VPN instance for the
outbound interface of the packet based on the destination address of the
packet and sends the packet through this outbound interface to CE2. The
packet sent by the NPE is a pure IP packet with no label.
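The label operations along the path can be sketched as follows (a hypothetical model with the symbolic label names used above; the VPN instance name is invented for illustration):

```python
# Sketch of HVPN label handling: the SPE swaps both labels, and the
# NPE pops both labels before forwarding a plain IP packet.

def spe_swap(packet):
    """The SPE swaps the outer SR label Lv for Lu and the inner VPN
    label for L3."""
    packet["outer"] = {"Lv": "Lu"}[packet["outer"]]
    packet["inner"] = "L3"
    return packet

def npe_pop(packet):
    """The NPE pops the outer label, resolves the VPN instance from
    the inner label, pops it, and emits a label-free IP packet."""
    assert packet["outer"] == "Lu"
    vpn_instance = {"L3": "vpn1"}[packet["inner"]]  # hypothetical VPN name
    return {"vpn": vpn_instance, "payload": packet["payload"]}

pkt = {"outer": "Lv", "inner": "L2", "payload": "IP packet to CE2"}
print(npe_pop(spe_swap(pkt)))  # {'vpn': 'vpn1', 'payload': 'IP packet to CE2'}
```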
VPN FRR
In Figure 1-62, PE1 adds the optimal route advertised by PE3 and less optimal
route advertised by PE4 into a forwarding entry. The optimal route is used to
guide traffic forwarding, and the less optimal route is used as a backup route.
● P1-to-P3 link failure: TI-LFA local protection takes effect, and traffic is
switched to LSP2 in Figure 1-62.
● PE3 node failure: BFD for SR-MPLS TE or SBFD for SR-MPLS detects the PE3
node failure and triggers VPN FRR to switch traffic to LSP3 in Figure 1-62.
SBFD Principles
Figure 1-67 shows SBFD principles. An initiator and a reflector exchange SBFD
control packets to notify each other of SBFD parameters, for example,
discriminators. In link detection, the initiator proactively sends an SBFD Echo
packet, and the reflector loops this packet back. The initiator determines the local
status based on the looped packet.
● The initiator that performs detection runs the SBFD state machine and
mechanism. The state machine has only the Up and Down states. The initiator
sends packets only in the Up or Down state and receives packets only in the
Up or Admin Down state.
The initiator first sends an SBFD packet with the initial state of Down and
destination port number 7784 to the reflector.
● The reflector runs no SBFD state machine or mechanism. It does not
proactively send SBFD Echo packets. The reflector only loops SBFD packets to
the initiator.
The reflector receives SBFD packets sent by the initiator and checks whether
the received SBFD discriminator is the same as the locally configured global
SBFD discriminator. If they do not match, the packets are discarded. If they
match and the reflector is in the working state, the reflector constructs looped
SBFD packets. If they match and the reflector is not in the working state, the
reflector sets the status to Admin Down in packets.
● Initial state: The initiator sets the initial state to Down in an SBFD packet to
be sent to the reflector.
● Status migration: After receiving a looped packet carrying the Up state, the
initiator sets the local status to Up. After the initiator receives a looped packet
carrying the Admin Down state, the initiator sets the local status to Down. If
the initiator does not receive a packet looped by the reflector before the timer
expires, the initiator also sets the local status to Down.
● Status holding: When the initiator is in the Up state and receives a looped
packet carrying the Up state, it keeps the local state Up. When the initiator is
in the Down state and receives a looped packet carrying the Admin Down
state, or receives no packet before the timer expires, it keeps the local state
Down.
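The initiator's state transitions described above can be condensed into a small sketch (an illustrative model; timer expiry is represented as a missing looped packet):

```python
# Minimal model of the SBFD initiator state machine: only Up and Down
# local states, driven by the state carried in looped packets.

def next_state(current, looped_state):
    """looped_state is 'Up', 'AdminDown', or None (no packet before
    the timer expired)."""
    if looped_state == "Up":
        return "Up"
    # Admin Down in the looped packet, or timer expiry, both drive
    # the local state to Down.
    return "Down"

assert next_state("Down", "Up") == "Up"          # status migration
assert next_state("Up", "AdminDown") == "Down"   # status migration
assert next_state("Up", None) == "Down"          # timer expiry
assert next_state("Up", "Up") == "Up"            # status holding
```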
In the following example, VPN traffic recurses to an SR LSP. SBFD for SR LSP is
configured, as shown in Figure 1-69.
Assume that the SRGB scope [16000-16100] is set on each PE and P on the
network shown in Figure 1-69. A, CE1, CE2, and E are deployed on the same VPN,
and CE2 advertises a route to E. PE2 assigns a VPN label to the route to E, and
then advertises it to PE1 through the MP-BGP neighbor. PE1 receives the VPN
route carrying the VPN label and installs the route to the forwarding table.
When A sends packets destined for E and the packets arrive at PE1, PE1 adds a
VPN label into the packets based on the VPN to which the packets belong,
recurses the packets to an SR LSP based on the destination IP address carried in
the packets, adds an SR label of 16100, and forwards the packets hop by hop
along the path PE1->P4->P3->PE2.
After SBFD is configured, PE1 rapidly detects a fault and switches traffic to a
backup SR LSP once a link or P node on the primary LSP fails.
A, CE1, CE2, and E are deployed on the same VPN, and CE2 advertises a route to E.
PE2 assigns a VPN label to the route to E. PE1 installs the route to E along with
the VPN label.
The path of the SR-MPLS TE tunnel from PE1 to PE2 is PE1 -> P4 -> P3 -> PE2, and
the label stack is {9004, 9003, 9005}. When A sends a packet destined for E, PE1
finds the packet's outbound interface based on label 9004 and adds label 9003,
label 9005, and the inner VPN label assigned by PE2. After SBFD is configured, PE1
rapidly detects a fault and switches traffic to a backup SR-MPLS TE LSP once a link
or P on the primary LSP fails.
SBFD for SR-MPLS TE tunnel
SBFD for SR-MPLS TE tunnel must be used with SBFD for SR-MPLS TE LSP.
SBFD for SR-MPLS TE LSP controls primary/backup LSP switchovers, whereas SBFD
for SR-MPLS TE tunnel checks the actual status of tunnels.
● If SBFD for SR-MPLS TE tunnel is not configured, the tunnel status remains Up
by default, and the actual tunnel status cannot be determined.
● If SBFD for SR-MPLS TE tunnel is configured and the SBFD session is
administratively down, the SBFD session does not work, and the tunnel
interface status is unknown.
● If SBFD for SR-MPLS TE tunnel is configured and the SBFD session is not
administratively down, the tunnel interface status is consistent with the SBFD
status.
Related Concepts
P space: the set of nodes reachable from the root node along the SPF tree rooted
at the source node of the protected link, without traversing the protected link.
Extended P space: the set of nodes reachable along the SPF trees rooted at the
neighbors of the protected link's source node, without traversing the protected
link.
Q space: the set of nodes from which the protected link's destination node is
reachable along the reverse SPF tree rooted at that node, without traversing the
protected link.
TI-LFA: In some LFA or RLFA scenarios, the P space and Q space neither share
nodes nor have directly connected neighbors. Consequently, no backup path can
be calculated, which does not meet reliability requirements. In this situation,
TI-LFA can be used. The TI-LFA algorithm computes the P space and Q space
based on the protected path, a post-convergence shortest path tree, and a repair
list. The algorithm establishes a segment routing tunnel between the source node
and the PQ node to provide backup next-hop protection. If the protected link fails,
traffic automatically switches to the backup path, which improves network
reliability.
Background
Conventional LFA requires that at least one neighbor be a loop-free next hop to a
destination. RLFA requires that there be at least one node that connects to the
source and destination nodes along links without passing through any faulty node.
Unlike LFA and RLFA, TI-LFA uses an explicit path to represent a backup path,
which imposes no topology constraints and provides more reliable FRR.
In Figure 1-71, there are packets that need to be sent from Device A to Device F. If
the P space and Q space do not intersect, RLFA requirements fail to be fulfilled,
and RLFA cannot compute a backup path, that is, the Remote LDP LSP. If a fault
occurs on the link between Device B and Device E, Device B forwards data packets
to Device C. Device C is not a Q node and cannot forward packets to the
destination IP address directly. In this situation, Device C has to recompute a path.
The cost of the link between Device C and Device D is 1000. Device C considers
that the optimal path to Device F passes through Device B. Device C loops the
packet to Device B, leading to a loop and resulting in a forwarding failure.
TI-LFA can be used to solve this problem. In Figure 1-72, if a fault occurs on the
link between Device B and Device E, Device B enables TI-LFA FRR backup entries
and adds new path information (node label of Device C and adjacency label for
the C-D adjacency) to the packets to ensure that the data packets can be
forwarded along the backup path.
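The repair performed by Device B can be sketched as follows (an illustrative model; the SID values are hypothetical, chosen only to show the push operation):

```python
# Sketch of the TI-LFA repair at Device B when the B-E link fails:
# the repair list (node SID of Device C plus the C-D adjacency SID)
# is pushed on top of the packet's existing label stack.

NODE_SID_C = 16003    # hypothetical node SID of Device C
ADJ_SID_C_D = 48034   # hypothetical adjacency SID for the C->D link

def apply_repair_list(packet_labels):
    """Prepend the repair list so the packet is steered to Device C
    and then across the C-D adjacency onto the backup path."""
    return [NODE_SID_C, ADJ_SID_C_D] + packet_labels

print(apply_repair_list([16006]))  # [16003, 48034, 16006]
```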
Benefits
Segment routing-based TI-LFA FRR has the following advantages:
1. Meets basic requirements of IP FRR rapid convergence.
2. Theoretically supports all protection scenarios.
3. Uses an algorithm with moderate complexity.
4. Selects a backup path over a converged route and has no intermediate state,
compared with the other FRR techniques.
● A node that does not support segment routing or a node that does not
advertise a prefix or node SID cannot function as a repair node.
In Figure 1-74, Device F is a P node, and Device H is a Q node. The primary next-
hop B fails, which triggers FRR switching. Traffic switches to the backup path.
Device D: Upon receipt of the packet, Device D searches the label forwarding
table based on the outgoing label and finds a matching entry with outgoing
label 120 and next hop Device F. Device D swaps the label for 120 and forwards
the packet to Device F.
Device F: Upon receipt of the packet, Device F searches the label forwarding
table based on the outgoing label and finds a matching entry with label 130, an
empty outgoing label, and next hop Device G. As the egress for this label,
Device F removes label 130 and forwards the packet to Device G.
Device G: Upon receipt of the packet, Device G searches the label forwarding
table based on the outgoing label, removes label 240, and forwards the packet
to Device H.
Device H: Upon receipt of the packet, Device H searches the label forwarding
table based on the outgoing label and finds a matching entry with outgoing
label 510 and next hop Device E. Device H swaps the outer label for 510 and
forwards the packet to Device E. Device E then forwards the packet to Device C,
and the packet travels along the shortest path.
Anycast SID
The anycast SID is the same SID advertised by all routers within a group. On the
network shown in Figure 1-75, Device D and Device E reside on the egress of an
SR area. Traffic can reach the non-SR area through either Device D or Device E.
The two devices can back up each other. In this situation, Device D and Device E
can be configured in the same group and advertise the same prefix SID, the
so-called anycast SID.
The next hop of an anycast SID points to the device with the smallest IGP cost in
the router group, Device D in this example. Device D is called the optimal source
advertising the anycast SID, and the other device in the router group is the
backup source. If the primary next-hop link or a directly connected neighbor of
Device D fails, traffic can still reach an anycast SID device over the other
protection path; this device can be the source sharing the same primary next hop
or another anycast source. When VPN traffic passes through an SR LSP, the same
VPN private network label must be configured for anycast.
Anycast FRR
Anycast FRR allows multiple nodes to advertise the same prefix SID to form FRR.
Common FRR algorithms use the SPT to compute a backup next hop. This applies
to scenarios where a route is advertised by a single node instead of multiple
nodes.
When a route is advertised by multiple nodes, these nodes must be converted to a
single node before a backup next hop is computed for a prefix SID. Anycast FRR
constructs a virtual node to represent the multiple nodes that advertise the same
route and uses the TI-LFA algorithm to compute a backup next hop to the virtual
node. The anycast prefix SID inherits the backup next hop from the created virtual
node. This solution does not involve any modification of the algorithm for
computing the backup next hop. The solution retains the loop-free trait so that no
loop occurs between the computed backup next hop and the primary next hop of
the peripheral node before convergence.
Figure 1-76 IGP FRR networking in a scenario where multiple nodes advertise the
same route
As shown in Figure 1-76 (a), the cost of the Device A-to-Device B link is 5, and
that of the Device A-to-Device C link is 10. Device B and Device C advertise the
same route 10.1.1.0/24 simultaneously. TI-LFA FRR is enabled on Device A.
Because the TI-LFA condition is not met, Device A cannot compute a backup next
hop for the route 10.1.1.0/24. To address this problem, TI-LFA FRR can be used in
scenarios where multiple nodes advertise the same route.
1. A virtual node is constructed between Device B and Device C. The virtual node
is connected to both Device B and Device C. The costs of links from Device B
and Device C to the virtual node are 0. The costs of links from the virtual
node to Device B and Device C are infinite.
2. The virtual node advertises a prefix of 10.1.1.0/24. This means that the route
is advertised by a single node.
3. Device A uses the TI-LFA algorithm to compute a backup next hop to the
virtual node. The route 10.1.1.0/24 inherits the computation result. On the
network shown in Figure 1-76 (b), Device A computes two links to the virtual
node. The primary link is from Device A to Device B, and the backup link is
from Device A to Device C.
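The virtual-node construction in step 1 can be sketched as follows (an illustrative graph model with the link costs from the example; the node and graph representation is hypothetical):

```python
# Sketch of the anycast FRR virtual node: links from the advertising
# nodes to the virtual node cost 0, links in the reverse direction are
# infinite, so the virtual node can be reached but never transited.

INF = float("inf")

def add_virtual_node(graph, advertisers, vnode="V"):
    graph[vnode] = {}
    for node in advertisers:
        graph[node][vnode] = 0     # B->V and C->V cost 0
        graph[vnode][node] = INF   # V->B and V->C are unusable
    return graph

# Costs from the example: A-B is 5, A-C is 10; B and C both advertise
# 10.1.1.0/24, now represented by the single virtual node V.
graph = {"A": {"B": 5, "C": 10}, "B": {"A": 5}, "C": {"A": 10}}
graph = add_virtual_node(graph, ["B", "C"])
# A's primary path to V is A->B->V (cost 5); the backup computed by
# TI-LFA is A->C->V (cost 10).
print(graph["B"]["V"], graph["V"]["B"])  # 0 inf
```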
Anti-Micro-Loop Switchover
In Figure 1-77, if Device B fails, traffic is switched to a TI-LFA FRR backup path.
After Device A completes route convergence, traffic is switched from the TI-LFA
FRR backup path to a converged path. If Devices D and F do not complete route
convergence, they transmit traffic over the path established before convergence is
performed. As a result, a loop emerges between Devices A and F and is broken
after route convergence finishes on Devices D and F.
● During the multi-node route convergence delay, if the source of the route
advertisement is changed, the delay stops.
Anti-Micro-Loop Switchback
In Figure 1-78:
1. Data is transmitted along the backup path before the link between Device B
and Device C recovers.
2. After the link between Device B and Device C recovers, if Device A converges
earlier than Device B, Device A forwards traffic to Device B that does not
finish convergence. Upon receipt of the traffic, Device B forwards traffic along
the original path to Device A, causing a loop.
3. To prevent a micro loop after a traffic switchback is performed on Device A,
configure an explicit path to forward packets. Device A adds E2E path
information (for example, a B-to-C adjacency label) to data packets. Upon
receipt of the data packets, Device B forwards them to Device C based on the
carried path information.
1.1.2.13 SR OAM
The reply message is forwarded over a multi-hop IP path. A ping or tracert test
can specify the source IP address; if no source IP address is specified, the local
MPLS LSR ID is used. The source IP address of the MPLS Echo Request packet is
used as the destination IP address of the MPLS Echo Reply packet.
SR-MPLS BE Ping
On the network shown in Figure 1-79, PE1, P1, P2, and PE2 are all capable of SR.
An SR LSP is established between PE1 and PE2.
SR-MPLS BE Tracert
On the network shown in Figure 1-79, the process of initiating an SR-MPLS BE
IPv4 tracert test from PE1 is as follows:
1. PE1 initiates a tracert test and checks whether the specified tunnel type is SR-
MPLS BE IPv4.
– If the specified tunnel type is not SR-MPLS BE IPv4, PE1 reports an error
message indicating a tunnel type mismatch and stops the tracert test.
– If the specified tunnel type is SR-MPLS BE IPv4, the following operations
are performed:
2. PE1 constructs an MPLS Echo Request packet encapsulating the outer label of
the initiator and carrying destination address 127.0.0.0/8 in the IP header of
the packet.
3. PE1 forwards the packet to P1. After receiving the packet, P1 determines
whether the TTL–1 value in the outer label of the received packet is 0.
– If the TTL–1 value is 0, an MPLS TTL timeout occurs. P1 sends the packet
to the Rx/Tx module for processing and returns a reply packet to PE1.
– If the TTL–1 value is greater than 0, P1 swaps the outer MPLS label of the
packet, searches the forwarding table for the outbound interface, and
forwards the packet to P2.
4. Similar to P1, P2 also performs the following operations:
– If the TTL–1 value is 0, an MPLS TTL timeout occurs. P2 sends the packet
to the Rx/Tx module for processing and returns a reply packet to PE1.
– If the TTL–1 value is greater than 0, P2 swaps the outer MPLS label of the
received packet and determines whether it is the penultimate hop. If yes,
P2 removes the outer label, searches the forwarding table for the
outbound interface, and forwards the packet to PE2.
5. PE2 sends the packet to the Rx/Tx module for processing, returns an MPLS
Echo Reply packet to PE1, and generates the tracert test result.
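The TTL-driven behavior above can be condensed into a sketch (an illustrative simulation, not an MPLS implementation; the hop names match the figure, and the real tracert sends one probe per TTL value):

```python
# Sketch of SR-MPLS BE tracert: PE1 sends MPLS Echo Request packets
# with increasing TTL; the node at which TTL-1 reaches 0 replies.

PATH = ["P1", "P2", "PE2"]  # transit and egress nodes after PE1

def tracert(path):
    """Return the node that replies for each TTL value 1..len(path)."""
    replies = []
    for ttl in range(1, len(path) + 1):
        hop = 0
        # Each transit node decrements the TTL and forwards the packet
        # while TTL-1 is still greater than 0.
        while ttl - 1 > 0 and hop < len(path) - 1:
            ttl -= 1
            hop += 1
        replies.append(path[hop])
    return replies

print(tracert(PATH))  # ['P1', 'P2', 'PE2']
```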
SR-MPLS TE Ping
On the network shown in Figure 1-80, PE1, P1, P2, and PE2 all support SR. An SR-
MPLS TE tunnel is established between PE1 and PE2. The devices assign adjacency
labels as follows:
● PE1 assigns adjacency label 9001 to the PE1-P1 adjacency.
● P1 assigns adjacency label 9002 to the P1-P2 adjacency.
● P2 assigns adjacency label 9005 to the P2-PE2 adjacency.
SR-MPLS TE Tracert
On the network shown in Figure 1-80, the process of initiating an SR-MPLS TE
tracert test from PE1 is as follows:
1. PE1 initiates a tracert test and checks whether the specified tunnel type is SR-
MPLS TE.
– If the specified tunnel type is not SR-MPLS TE, PE1 reports an error
message indicating a tunnel type mismatch and stops the tracert test.
– If the specified tunnel type is SR-MPLS TE, the following operations are
performed:
2. PE1 constructs an MPLS Echo Request packet encapsulating label information
about the entire tunnel and carrying destination address 127.0.0.0/8 in the IP
header of the packet.
3. PE1 forwards the packet to P1. After receiving the packet, P1 determines
whether the TTL-1 value in the outer label is 0.
– If the TTL-1 value is 0, an MPLS TTL timeout occurs. P1 sends the packet
to the Rx/Tx module for processing and returns an MPLS Echo Reply
packet to PE1.
– If the TTL-1 value is greater than 0, P1 removes the outer MPLS label of
the packet, buffers the TTL-1 value, copies the value to the new outer
MPLS label, searches the forwarding table for the outbound interface,
and forwards the packet to P2.
4. Similar to P1, P2 also determines whether the TTL-1 value in the outer label
of the received packet is 0.
– If the TTL-1 value is 0, an MPLS TTL timeout occurs. P2 sends the packet
to the Rx/Tx module for processing and returns an MPLS Echo Reply
packet to P1.
– If the TTL-1 value is greater than 0, P2 removes the outer MPLS label of
the packet, buffers the TTL-1 value, copies the value to the new outer
MPLS label, searches the forwarding table for the outbound interface,
and forwards the packet to PE2.
5. P2 forwards the packet to PE2, and PE2 returns an MPLS Echo Reply packet to
PE1.
The valid candidate path with the highest preference functions as the primary
path of the SR-MPLS TE Policy, and the valid candidate path with the second
highest preference functions as the hot-standby path.
A candidate path can contain multiple segment lists, each of which carries a
Weight attribute. Currently, this attribute cannot be set and defaults to 1. A
segment list is an explicit label stack that instructs a network device to forward
packets, and multiple segment lists can work in load balancing mode.
+------------------+
|   NLRI Length    | 1 octet
+------------------+
|  Distinguisher   | 4 octets
+------------------+
|   Policy Color   | 4 octets
+------------------+
|     Endpoint     | 4 or 16 octets
+------------------+
The meaning of each field is as follows:
● NLRI Length: length of the NLRI
● Distinguisher: distinguisher of the NLRI
● Policy Color: color value of the associated SR-MPLS TE Policy
● Endpoint: endpoint information about the associated SR-MPLS TE Policy
The content of an SR-MPLS TE Policy is encoded using the new Tunnel-Type TLV in
tunnel encapsulation attributes. The encoding format is as follows:
SR-MPLS TE Policy SAFI NLRI: <Distinguisher, Policy Color, Endpoint>
Attributes:
Tunnel Encaps Attribute (23)
Tunnel Type: SR-MPLS TE Policy
Binding SID
Preference
Priority
Policy Name
...
Segment List
Weight
Segment
Segment
...
...
The meaning of each field is as follows:
● Binding SID: binding SID of an SR-MPLS TE Policy.
● Preference: SR-MPLS TE Policy selection preference based on which SR Policies
are selected.
● Priority: SR-MPLS TE Policy convergence priority based on which the sequence
of SR-MPLS TE Policy re-computation is determined in the case of a topology
change.
● Policy Name: SR-MPLS TE Policy name.
● Segment List: list of segments, which contains Weight attribute and segment
information. One SR-MPLS TE Policy can contain multiple segment lists, and
one segment list can contain multiple segments.
Preference sub-TLV
Length (8 bits): packet length, excluding the Type and Length fields. Available
values are as follows:
● 2: No binding SID is contained.
● 6: An IPv4 binding SID is contained.
Length (8 bits): packet length, excluding the Type and Length fields.
BGP itself does not generate SR Policies. Instead, it receives NLRIs containing SR-
MPLS TE Policy information from a controller to generate SR Policies. Upon
reception of an NLRI, BGP must determine whether the NLRI is acceptable based
on the following rules:
● The NLRI must contain Distinguisher, Policy Color, and Endpoint attributes.
● The Update message carrying the NLRI must have either the NO_ADVERTISE
community or at least one route-target extended community in IPv4 address
format.
● The Update message carrying the NLRI must have one Tunnel Encapsulation
Attribute.
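The acceptance rules above can be sketched as a validation function (a hypothetical message representation for illustration, not an actual BGP implementation):

```python
# Sketch of the SR-MPLS TE Policy NLRI acceptance rules: the NLRI must
# carry Distinguisher, Policy Color, and Endpoint; the Update message
# must carry NO_ADVERTISE or an IPv4-format route-target extended
# community, and exactly one Tunnel Encapsulation Attribute.

def nlri_acceptable(msg):
    nlri = msg.get("nlri", {})
    has_keys = all(k in nlri for k in ("distinguisher", "color", "endpoint"))
    has_community = bool(msg.get("no_advertise")
                         or msg.get("ipv4_route_targets"))
    has_tunnel_attr = msg.get("tunnel_encaps_attr_count", 0) == 1
    return has_keys and has_community and has_tunnel_attr

msg = {
    "nlri": {"distinguisher": 1, "color": 100, "endpoint": "10.1.1.1"},
    "ipv4_route_targets": ["1.1.1.1:0"],
    "tunnel_encaps_attr_count": 1,
}
print(nlri_acceptable(msg))  # True
```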
Devices must implement MTU control properly to prevent packets from being
fragmented or discarded during transmission. Currently, no IGP/BGP-oriented
MTU transmission or negotiation mechanism is available. You can configure MTUs
for SR Policies as needed.
Route Coloring
Route coloring is to add the Color Extended Community to a route through a
route-policy, enabling the route to recurse to an SR-MPLS TE Policy based on the
color value and next-hop address in the route.
The route coloring process is as follows:
1. Configure a route-policy and set a specific color value for the desired route.
2. Apply the route-policy to a BGP peer or a VPN instance as an import or export
policy.
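The recursion that follows coloring can be sketched as follows (an illustrative model; the policy names and values are hypothetical): a route recurses to the SR-MPLS TE Policy whose color and endpoint match the route's Color Extended Community and next-hop address.

```python
# Sketch of color-based recursion: SR-MPLS TE Policies are keyed by
# <color, endpoint>, and a route's <color, next hop> selects one.

POLICIES = {(100, "10.1.1.1"): "sr-policy-A",
            (200, "10.1.1.1"): "sr-policy-B"}

def recurse(route):
    """Return the matching SR-MPLS TE Policy, or None if no policy
    matches the route's color and next-hop address."""
    return POLICIES.get((route["color"], route["next_hop"]))

route = {"prefix": "10.99.0.0/16", "color": 100, "next_hop": "10.1.1.1"}
print(recurse(route))  # sr-policy-A
```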
E, and G all remove the outer label in the SR-MPLS TE packet. Finally, device G
forwards the received packet to device B. After receiving the packet, device B
further processes it based on the packet destination address.
3. After receiving the packet, ASBR1 searches for the adjacency matching top
label 3040 in the label stack, finds that the corresponding outbound interface
is the ASBR1->ASBR3 adjacency, and removes the label. The packet carrying
label stack <30028> is forwarded to downstream device ASBR3 through the
ASBR1->ASBR3 adjacency.
4. After receiving the packet, ASBR3 searches the forwarding table based on top
label 30028 in the label stack. 30028 is a binding SID that corresponds to
label stack <405,506>. Then, the device searches for the outbound interface
based on label 405 of the ASBR3->P1 adjacency and removes the label. The
packet carrying label stack <506> is forwarded to downstream device P1
through the ASBR3->P1 adjacency.
5. After receiving the packet, P1 searches for the adjacency matching top label
506 in the label stack, finds that the corresponding outbound interface is the
P1->PE1 adjacency, and removes the label. In this case, the packet does not
carry any label and is forwarded to destination device PE1 through the P1-
>PE1 adjacency. After receiving the packet, PE1 further processes it based on
the packet destination address.
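The binding SID expansion in step 4 can be sketched as follows (an illustrative model using the label values from the example, not a forwarding-plane implementation):

```python
# Sketch of binding SID expansion at ASBR3: top label 30028 is a
# binding SID that expands into the label stack <405, 506> before
# normal per-label forwarding continues.

BINDING_SIDS = {30028: [405, 506]}

def process_top_label(stack):
    """Expand a binding SID at the top of the stack if present, then
    forward over the adjacency of the (new) top label and pop it."""
    if stack and stack[0] in BINDING_SIDS:
        stack = BINDING_SIDS[stack[0]] + stack[1:]
    out_label, rest = stack[0], stack[1:]
    return out_label, rest

label, rest = process_top_label([30028])
print(label, rest)  # 405 [506]
```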
Figure 1-99 shows the SBFD for SR-MPLS TE Policy detection process.
SBFD for SR-MPLS TE Policy checks the connectivity of segment lists. If a segment
list is faulty, an SR-MPLS TE Policy failover is triggered.
On the networks shown in Figure 1-100 and Figure 1-101, the headend and
endpoint of SR-MPLS TE Policy 1 are PE1 and PE2, respectively. In addition, the
headend and endpoint of SR-MPLS TE Policy 2 are PE1 and PE3, respectively. VPN
FRR can be deployed for SR-MPLS TE Policy 1 and SR-MPLS TE Policy 2. Primary
and HSB paths are established for SR-MPLS TE Policy 1. Segment list 1 contains
node labels to P1, P2, and PE2. It can directly use all protection technologies of
SR-MPLS, for example, TI-LFA.
In Figure 1-100:
● If point 1, 3, or 5 is faulty, TI-LFA local protection takes effect only on PE1, P1,
or P2. If the SBFD session corresponding to segment list 1 on PE1 detects a
fault before traffic is restored through local protection, SBFD automatically
sets segment list 1 to down and instructs SR-MPLS TE Policy 1 to switch to
segment list 2.
● If point 2 or 4 is faulty, no local protection is available. SBFD detects node
faults, sets segment list 1 to down, and instructs SR-MPLS TE Policy 1 to
switch to segment list 2.
● If PE2 is faulty and all the candidate paths of SR-MPLS TE Policy 1 are
unavailable, SBFD can detect the fault, set SR-MPLS TE Policy 1 to down, and
trigger a VPN FRR switchover to switch traffic to SR-MPLS TE Policy 2.
The process of initiating a ping test on the SR-MPLS TE Policy from PE1 is as
follows:
1. PE1 constructs an MPLS Echo Request packet. In the IP header of the packet,
the destination IP address is an address from the 127.0.0.0/8 range, and the
source IP address is the MPLS LSR ID of PE1. MPLS labels are encapsulated as
a SID label stack in segment list form.
Note that, if an adjacency SID is configured, headend device PE1 only detects
the FEC of P1. As a result, after the ping packet reaches PE2, the FEC of PE2
fails to be verified. Therefore, to ensure that the FEC verification on the
endpoint device succeeds in this scenario, the MPLS Echo Request packet must
be encapsulated with nil_FEC.
2. PE1 forwards the packet to P1 and P2 hop by hop based on the segment list
label stack of the SR-MPLS TE Policy.
3. P2 removes the outer label from the received packet and forwards the packet
to PE2. In this case, all labels of the packet are removed.
4. PE2 sends the packet to the host transceiver for processing, constructs an
MPLS Echo Reply packet, and sends the packet to PE1. The destination
address of the packet is the MPLS LSR ID of PE1. Because no SR-MPLS TE
Policy is bound to the destination address of the reply packet, IP forwarding is
implemented for the packet.
5. After receiving the reply packet, PE1 generates ping test results.
If there are primary and backup paths with multiple segment lists, the ping test
checks all the segment lists.
The process of initiating a tracert test on the SR-MPLS TE Policy from PE1 is as
follows:
1. PE1 constructs an MPLS Echo Request packet. In the IP header of the packet,
the destination IP address is an address from the 127.0.0.0/8 range, and the
source IP address is an MPLS LSR ID. MPLS labels are encapsulated as a SID
label stack in segment list form.
2. PE1 forwards the packet to P1. After receiving the packet, P1 decrements the
outer TTL by 1 and checks whether the result is zero.
– If the result is zero, an MPLS TTL timeout occurs, and P1 sends the
packet to the host transceiver for processing.
– If the result is greater than zero, P1 removes the outer MPLS label,
copies the decremented TTL value to the new outer MPLS label, searches
the forwarding table for the outbound interface, and forwards the
packet to P2.
3. Similar to P1, P2 decrements the outer TTL by 1 and checks whether the
result is zero.
– If the result is zero, an MPLS TTL timeout occurs, and P2 sends the
packet to the host transceiver for processing.
– If the result is greater than zero, P2 removes the outer MPLS label,
copies the decremented TTL value to the new outer MPLS label, searches
the forwarding table for the outbound interface, and forwards the
packet to PE2.
4. After receiving the packet, PE2 removes the outer MPLS label and sends the
packet to the host transceiver for processing. In addition, PE2 returns an MPLS
Echo Reply packet to PE1, with the destination address of the packet being
the MPLS LSR ID of PE1. Because no SR-MPLS TE Policy is bound to the
destination address of the reply packet, IP forwarding is implemented for the
packet.
5. After receiving the reply packet, PE1 generates tracert test results.
If there are primary and backup paths with multiple segment lists, the tracert test
checks all the segment lists.
MPLS in UDP is a DCN overlay technology that encapsulates MPLS (or SR-MPLS)
packets into UDP packets so that they can traverse networks that do not
support MPLS or SR-MPLS. MPLS in UDP solves the problem of bearer protocol
conversion between the DCN and WAN and unifies the tunnel protocols used on
the DCN and WAN.
To protect UDP packets against attacks, the MPLS-in-UDP technology supports
source IP address verification. The source IP address is the IP address of the
ingress of the MPLS-in-UDP tunnel. A packet is accepted only when its source IP
address is successfully verified; otherwise, it is discarded. In Figure 1-103,
source IP address verification is enabled on DCI-PE1, which is the egress of
the MPLS-in-UDP tunnel. After DCI-PE1 receives MPLS-in-UDP packets, it
verifies the contained source IP addresses, improving transmission security.
The TTL field in MPLS packets transmitted over public SR-MPLS TE and SR-MPLS
BE tunnels is processed in either of the following modes:
● Uniform mode
When an IP packet passes through an SR-MPLS network, the IP TTL decreases
by 1 on the ingress and is mapped to the MPLS TTL field. Then, the MPLS TTL
decreases by 1 each time the packet passes through a node on the SR-MPLS
network. After the packet arrives at the egress, the MPLS TTL decreases by 1
and is compared with the IP TTL value. Then, the smaller value is mapped to
the IP TTL field. Figure 1-105 shows how the TTL is processed in uniform
mode.
● Pipe mode
When an IP packet passes through an SR-MPLS network, the IP TTL decreases
by 1 on the ingress, and the MPLS TTL is a fixed value. Then, the MPLS TTL
decreases by 1 each time the packet passes through a node on the SR-MPLS
network. After the packet arrives at the egress, the IP TTL decreases by 1. To
summarize, in pipe mode, the IP TTL in a packet decreases by 1 only on the
ingress and egress no matter how many hops exist between the ingress and
egress on an SR-MPLS network. Figure 1-106 shows how the TTL is processed
in pipe mode.
An L3VPN within the network carries DCI services in per tenant per VPN mode.
Within a data center (DC), tenants are isolated, and the DC gateway accesses the
L3VPN through a VLAN sub-interface. Tenant VPNs are carried over the primary
and backup SR tunnels.
Service Overview
The future network is oriented to the 5G era, and bearer networks need to
adapt to this trend by simplifying the network, providing low latency, and
implementing software-defined networking (SDN) and network functions
virtualization (NFV). E2E SR-MPLS TE can carry VPN services over a complete
inter-AS tunnel, which greatly reduces networking and O&M costs and meets
carriers' unified O&M requirements.
Networking Description
Figure 1-109 illustrates the typical application of the inter-AS E2E SR-MPLS TE
technology on a bearer network. The overall service model is EVPN over SR-MPLS
TE. An E2E SR-MPLS TE tunnel is used for unified transmission. The data service
model is EVPN. The network is oriented to 5G and features simplified
solutions, protocols, and tunnels as well as a unified reliability solution. It
works with the network controller to quickly respond to upper-layer service
requirements.
Feature Deployment
When an inter-AS E2E SR-MPLS TE network is used to carry EVPN services, the
service deployment is as follows:
● Configure IGP SR within an AS domain to establish an intra-AS SR-MPLS TE
tunnel and BGP EPE between AS domains to assign inter-AS SIDs. Binding
SIDs are used to combine SR-MPLS TE tunnels in multiple AS domains into an
inter-AS E2E SR-MPLS TE tunnel.
● EVPN is deployed to carry various services, including EVPN VPWS, EVPN VPLS,
and EVPN L3VPN. In addition to EVPN, the traditional BGP L3VPN can be
smoothly switched to the E2E SR-MPLS TE tunnel.
● In terms of reliability, intra-AS SR-MPLS TE tunnels are monitored by BFD or
SBFD. TE hot standby (HSB) technology switches traffic between the primary
and HSB TE LSPs. Inter-AS E2E SR-MPLS TE tunnels are monitored by one-arm
BFD. TE HSB is used to switch traffic between the primary and HSB TE LSPs.
VPN FRR is used to switch traffic between the primary and HSB E2E SR-MPLS
TE tunnels.
On the network shown in Figure 1-110, DCI services are carried over L3VPN, with
per tenant per VPN instance as the service model, achieving tenant isolation in the
data center (DC). DC gateways use VLAN sub-interfaces to access the L3VPN. The
VPN services of tenants are carried using different SR Policies.
Terms
SID: segment ID
SR: Segment Routing
TE: traffic engineering
receiving the data packet, the receive end parses the segment list. If the top
segment ID in the segment list identifies the local node, the node removes the
segment ID and proceeds with the follow-up procedure. If the top segment ID
does not identify the local node, the node uses the equal-cost multipath
(ECMP) algorithm to forward the packet to the next node.
Segment Routing offers the following benefits:
● The control plane of the MPLS network is simplified.
A controller or an IGP is used to uniformly compute paths and distribute
labels, without using RSVP-TE or LDP. Segment Routing can be directly applied
to the MPLS architecture without any change to the forwarding plane.
● Efficient topology-independent loop-free alternate (TI-LFA) FRR protection
is provided for fast path failure recovery.
Segment Routing, combined with the remote loop-free alternate (RLFA) FRR
algorithm, forms the efficient TI-LFA FRR algorithm. TI-LFA FRR supports node
and link protection in any topology and overcomes the drawbacks of
conventional tunnel protection.
● Higher network capacity expansion capabilities are provided.
MPLS TE is a connection-oriented technology. To maintain connections, nodes
need to send and process a large number of keepalive packets, placing a heavy
burden on the control plane. Segment Routing controls service paths merely by
operating labels on the ingress, and transit nodes do not have to maintain
path information, which reduces the burden on the control plane.
In addition, the number of Segment Routing labels equals the sum of the
number of network-wide nodes and the number of local adjacencies. The label
quantity is related only to the network scale, not to the number of tunnels or
the service volume.
● Smoother evolution to SDN networks is implemented.
Segment Routing is designed based on the source routing concept. The source
node alone can control the forwarding paths over which packets are
transmitted across a network. Used together with a centralized path
computation module, Segment Routing can flexibly and conveniently control
and adjust paths.
Segment Routing supports both traditional and SDN networks. It is compatible
with existing devices and ensures smooth evolution of existing networks to
SDN networks rather than subverting them.
Configuration Precautions
Restriction: Path verification cannot be enabled on the ingress of an inter-AS
SR-MPLS TE tunnel. If path verification is enabled, LSP creation fails in the
case of a verification failure.
Guidelines: Do not run the mpls te path verification enable command to enable
path verification on the ingress of an inter-AS SR-MPLS TE tunnel. By default,
the function is disabled.
Impact: If path verification is enabled, LSP creation fails in the case of a
verification failure.
Restriction: If you enable SBFD after configuring an SR-MPLS TE Policy under
which the candidate paths and their segment lists are up, the candidate paths
and segment lists stay up, and traffic can be properly steered into the
SR-MPLS TE Policy. If you enable SBFD and then configure an SR-MPLS TE Policy
whose initial state is down, traffic can be steered into the SR-MPLS TE Policy
only after SBFD detects that the SR-MPLS TE Policy is up.
Guidelines: You are advised to enable SBFD and then configure an SR-MPLS TE
Policy.
Impact: If SBFD is not enabled, traffic forwarding may fail even when the
protocol-side tunnel state is up.
Restriction: The SID forwarding entry corresponding to the prefix of the IP
route of the involved process is generated only when the IP route is active.
Ensure that the following requirements are met if multiple processes or
protocols are deployed; otherwise, SR-MPLS BE and SR-MPLS TE traffic may fail
to be forwarded. The IP route prefix used for SID advertisement is not
concurrently advertised through multiple processes or protocols. If an IP
route prefix must be concurrently advertised through multiple processes or
protocols, the IP routes advertised by the process or protocol that advertises
the corresponding SID must have the highest priority among all the routes on
the devices in the SID advertisement scope.
Guidelines: Properly plan the network.
Impact: Service traffic fails to be forwarded.
Restriction: The function of detecting BGP EPE outbound interface faults and
triggering service switching through SBFD for SR-MPLS TE tunnel can be used
only when the BGP EPE outbound interface is an Ethernet main interface,
Ethernet sub-interface, Eth-Trunk main interface, Eth-Trunk sub-interface, or
POS interface.
Guidelines: When this function is deployed, ensure that the BGP EPE outbound
interface is an Ethernet main interface, Ethernet sub-interface, Eth-Trunk
main interface, Eth-Trunk sub-interface, or POS interface.
Impact: If the type of the BGP EPE outbound interface is not supported,
traffic cannot rapidly recover in the case of a BGP EPE outbound interface
fault.
Restriction: The function of detecting BGP EPE outbound interface faults and
triggering service switching through SBFD for SR-MPLS TE tunnel does not
support LPUF-50/LPUF-50-L/LPUI-21-L/LPUI-51-L/LPUF-51/LPUF-51-B/LPUI-51/
LPUI-51-B/LPUS-51/LPUF-101/LPUF-101-B/LPUI-101/LPUI-101-B/LPUS-101 boards.
Guidelines: Do not use LPUF-50/LPUF-50-L/LPUI-21-L/LPUI-51-L/LPUF-51/
LPUF-51-B/LPUI-51/LPUI-51-B/LPUS-51/LPUF-101/LPUF-101-B/LPUI-101/LPUI-101-B/
LPUS-101 boards.
Impact: If the BGP EPE inbound interface resides on an LPUF-50/LPUF-50-L/
LPUI-21-L/LPUI-51-L/LPUF-51/LPUF-51-B/LPUI-51/LPUI-51-B/LPUS-51/LPUF-101/
LPUF-101-B/LPUI-101/LPUI-101-B/LPUS-101 board, the function of detecting BGP
EPE outbound interface faults and triggering service switching through SBFD
for SR-MPLS TE tunnel does not take effect, and traffic cannot rapidly
recover.
Restriction: The function of enabling BGP EPE outbound interface faults to
trigger routing table lookups due to the rollback of public IP unicast traffic
can be used when the outbound interface of a BGP EPE link is an Ethernet,
Eth-Trunk, or POS interface. Other types of interfaces do not support fast
switching.
Guidelines: When this function is deployed, ensure that the BGP EPE outbound
interface is an Ethernet main interface, Ethernet sub-interface, Eth-Trunk
main interface, Eth-Trunk sub-interface, or POS interface.
Impact: If the type of the BGP EPE outbound interface is not supported,
traffic cannot rapidly recover in the case of a BGP EPE outbound interface
fault.
Usage Scenario
Creating an SR-MPLS BE tunnel involves the following operations:
● Devices report topology information to a controller (if the controller is used to
create a tunnel) and are assigned labels.
● The devices compute paths.
Pre-configuration Tasks
Before configuring an SR-MPLS BE tunnel, complete the following tasks:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls lsr-id lsr-id
The LSR ID is set for the local node.
Note the following issues when setting an LSR ID:
● LSR IDs must be set before other MPLS commands are run.
● No default LSR ID is available.
● Using the IP address of a loopback interface as the LSR ID is recommended
for an LSR.
Step 3 Run mpls
The MPLS view is displayed.
Step 4 Run commit
The configuration is committed.
----End
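Taken together, Steps 1 through 4 can be sketched as the following configuration. The LSR ID 1.1.1.9 is a hypothetical value; use the address of a local loopback interface instead.

```
system-view
mpls lsr-id 1.1.1.9
mpls
commit
```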
Context
Basic SR-MPLS BE function configurations involve enabling the global segment
routing capability, configuring the segment routing global block (SRGB), and
setting a prefix segment ID (SID).
Procedure
Step 1 Globally enable the segment routing capability.
1. Run system-view
The system view is displayed.
2. Run segment-routing
Segment routing is globally enabled, and the segment routing view is
displayed.
3. Run commit
The configuration is committed.
----End
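The procedure above can be sketched as follows, using exactly the commands listed in Steps 1 through 3:

```
system-view
segment-routing
commit
```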
Context
After Segment Routing is enabled, a large number of devices may establish
excessive E2E LSPs, wasting resources. To prevent this, a policy for
establishing LSPs can be configured. The policy allows the ingress to use only
the allowed routes to establish SR-LSPs.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run isis
The IS-IS view is displayed.
Step 3 Run segment-routing lsp-trigger { none | host | ip-prefix ip-prefix-name }
A policy for establishing LSPs is configured.
Configure one of the following parameters:
● none: does not allow the ingress to use any routes to establish SR-LSPs.
● host: allows the ingress to use host routes with 32-bit masks to establish SR-
LSPs.
● ip-prefix: allows the ingress to use the routes that match an IP prefix list to
establish SR-LSPs.
Step 4 Run commit
The configuration is committed.
----End
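As a sketch, the following configuration restricts SR-LSP establishment to routes matching an IP prefix list. The prefix-list name sr-prefix is hypothetical, and the prefix list itself must already exist.

```
system-view
isis
segment-routing lsp-trigger ip-prefix sr-prefix
commit
```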
Context
In a tunnel recursion scenario, an LDP tunnel is preferentially selected to forward
traffic by default. To enable a device to preferentially select an SR-MPLS BE
tunnel, increase the SR-MPLS BE tunnel priority so that the SR-MPLS BE tunnel
takes precedence over the LDP tunnel.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run segment-routing
The segment routing view is displayed.
Step 3 Run tunnel-prefer segment-routing
SR-MPLS BE tunnels are configured to take precedence over LDP tunnels.
----End
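A minimal sketch of the procedure above, with a final commit to apply the configuration:

```
system-view
segment-routing
tunnel-prefer segment-routing
commit
```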
Prerequisites
The SR-MPLS BE functions have been configured.
Procedure
After completing the configurations, you can run the following commands to check
the configurations.
● Run the display isis lsdb [ { level-1 | level-2 } | verbose | { local | lsp-id | is-
name symbolic-name } ] * [ process-id | vpn-instance vpn-instance-name ]
command to check IS-IS LSDB information.
● Run the display segment-routing prefix mpls forwarding command to
check the label forwarding table for segment routing.
Usage Scenario
Creating an SR-MPLS BE tunnel involves the following operations:
● Devices report topology information to a controller (if the controller is used to
create a tunnel) and are assigned labels.
● The devices compute paths.
Pre-configuration Tasks
Before configuring an SR-MPLS BE tunnel, complete the following tasks:
● Configure OSPF to implement the connectivity of NEs at the network layer.
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Context
Basic SR-MPLS BE function configurations involve enabling the global segment
routing capability, configuring the segment routing global block (SRGB), and
setting a prefix segment ID (SID).
Procedure
Step 1 Globally enable the segment routing capability.
1. Run system-view
----End
Context
After segment routing is enabled, a large number of devices may establish
excessive E2E LSPs, wasting resources. To prevent this, a policy for
establishing LSPs can be configured. The policy allows the ingress to use only
the allowed routes to establish SR-LSPs.
Procedure
Step 1 Run system-view
----End
Context
In a tunnel recursion scenario, an LDP tunnel is preferentially selected to forward
traffic by default. To enable a device to preferentially select an SR-MPLS BE
tunnel, increase the SR-MPLS BE tunnel priority so that the SR-MPLS BE tunnel
takes precedence over the LDP tunnel.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run segment-routing
The segment routing view is displayed.
Step 3 Run tunnel-prefer segment-routing
SR-MPLS BE tunnels are configured to take precedence over LDP tunnels.
Step 4 Run commit
The configuration is committed.
----End
Prerequisites
All SR-MPLS BE functions have been configured.
Procedure
After completing the configurations, you can run the following commands to
check the configurations.
● Run the display segment-routing prefix mpls forwarding command to view
information about the segment routing label forwarding table.
Usage Scenario
With existing TE tunnels, each LSP route is assigned a label on each node, and a
forwarder both generates paths and establishes tunnels, which consumes a large
number of forwarder resources. To reduce the burden on the forwarder, SR-MPLS
TE can be used. It offers the following benefits:
● A controller generates labels, which reduces the burden for the forwarder.
● The controller calculates a path and assigns a label to each route, which
reduces the burden and resource consumption on the forwarder and helps
improve the forwarder performance since the forwarder can focus on core
forwarding tasks.
Pre-configuration Tasks
Before configuring an SR-MPLS TE tunnel, complete the following tasks:
● Configure IS-IS to implement network layer connectivity for LSRs.
● Configure an IS-IS neighbor between the controller and forwarder. See
Creating an IS-IS Process and Enabling an IS-IS Interface.
A controller must be configured to generate labels and calculate an LSP path for an SR-
MPLS TE tunnel to be established.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls lsr-id lsr-id
The LSR ID is set for the local node.
----End
Context
SR enables a forwarder to assign a label to each route, which reduces resource
consumption on the forwarder. SR must be enabled globally before using the SR
function.
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Context
Before an SR-MPLS TE tunnel is established, a device must assign labels, collect
network topology information, and report the information to the controller so that
the controller uses the information to calculate a path and a label stack for the
path. SR-MPLS TE labels can be assigned by the controller or by the extended
IS-IS protocol on forwarders. After IS-IS collects network topology
information (including the labels assigned by IS-IS), it floods the
information or advertises it to the controller through BGP-LS routes.
Procedure
Step 1 Configure the IS-IS SR-MPLS TE capability.
Perform the following steps on each node of an SR-MPLS TE tunnel to be
established:
1. Run system-view
The system view is displayed.
2. Run isis [ process-id ]
The IS-IS view is displayed.
3. Run cost-style { wide | compatible | wide-compatible }
The IS-IS wide metric function is enabled.
4. Run traffic-eng [ level-1 | level-2 | level-1-2 ]
IS-IS TE is enabled.
If no IS-IS level is specified, the node is a Level-1-2 device that can generate
two TEDBs for communicating with Level-1 and Level-2 devices.
5. Run segment-routing mpls [ level-1 | level-2 | level-1-2 ]
IS-IS SR-MPLS TE is enabled.
----End
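Steps 1 through 5 can be sketched as follows. The process ID 1 and the level-2 parameters are hypothetical; choose values that match the actual IS-IS deployment, and apply the configuration with a final commit.

```
system-view
isis 1
cost-style wide
traffic-eng level-2
segment-routing mpls level-2
commit
```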
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
A tunnel interface is created, and the tunnel interface view is displayed.
Step 3 Run either of the following commands to assign an IP address to the tunnel
interface:
● To configure the IP address of the tunnel interface, run ip address ip-address
{ mask | mask-length } [ sub ]
The secondary IP address of the tunnel interface can be configured only after
the primary IP address is configured.
● To configure the tunnel interface to borrow the IP address of another
interface, run ip address unnumbered interface interface-type interface-
number
The MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP
address for the tunnel interface is not recommended. Use the LSR ID of the ingress as the
tunnel interface's IP address.
A tunnel ID is set.
Path verification for SR-MPLS TE tunnels is enabled. If a label fails, an LSP using
this label is automatically set to Down.
This function does not need to be configured if the controller or BFD is used.
To enable this function globally, run the mpls te path verification enable
command in the MPLS view.
Step 10 (Optional) Run match dscp { ipv4 | ipv6 } { default | { dscp-value1 [ to dscp-
value2 ] } &<1-32> }
A DSCP value is set for IPv4/IPv6 packets that enter an SR-MPLS TE tunnel.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run load-balance unequal-cost enable
UCMP is enabled globally.
Step 3 Run interface tunnel tunnel-number
The SR-MPLS TE tunnel interface view is displayed.
Step 4 Run load-balance unequal-cost weight weight
Sets a weight for an SR-MPLS TE tunnel before UCMP is carried out among
tunnels.
Step 5 Run commit
The configuration is committed.
----End
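A sketch of the UCMP procedure above, assuming a hypothetical tunnel number 10 and weight 60:

```
system-view
load-balance unequal-cost enable
interface tunnel 10
load-balance unequal-cost weight 60
commit
```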
Context
SR is configured on a PCC (forwarder). The PCC delegates LSPs to a controller for
path calculation. After the controller calculates a path, the controller sends a PCEP
message to deliver path information to the PCC (forwarder).
The path information can also be delivered by a third-party adapter to the forwarder. In this
situation, SR does not need to be configured on the PCC, and the following operation can
be skipped.
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Context
An SR-MPLS TE-incapable device on an SR-MPLS TE network can be configured to
simulate an SR-MPLS TE transit node to perform link label-based forwarding. The
function resolves the forwarding issue on the SR-MPLS TE-incapable device.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run sr-te-simulate static-cr-lsp transit lsp-name incoming-interface interface-
type interface-number sid segmentid outgoing-interface interface-type interface-
number nexthop next-hop-address out-label implicit-null
A device is enabled to simulate an SR-MPLS TE transit node to perform link label-
based forwarding.
To modify parameters, except for lsp-name, run the sr-te-simulate static-cr-lsp
transit command. There is no need to run the undo sr-te-simulate static-cr-lsp
transit command before modifying a setting. These parameters can be
dynamically updated.
Step 3 Run commit
The configuration is committed.
----End
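A sketch of the simulation configuration above. All values (LSP name, interfaces, SID, and next-hop address) are hypothetical and must be replaced with values from the actual network.

```
system-view
sr-te-simulate static-cr-lsp transit lsp1 incoming-interface gigabitethernet 1/0/0 sid 153700 outgoing-interface gigabitethernet 1/0/1 nexthop 10.1.1.2 out-label implicit-null
commit
```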
Prerequisites
The SR-MPLS TE tunnel functions have been configured.
Procedure
● Run the following commands to check the IS-IS TE status:
– display isis traffic-eng advertisements [ { level-1 | level-2 | level-1-2 } |
{ lsp-id | local } ] * [ process-id | [ vpn-instance vpn-instance-name ] ]
– display isis traffic-eng statistics [ process-id | [ vpn-instance vpn-
instance-name ] ]
● Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id
session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ]
[ name tunnel-name ] [ { incoming-interface | interface | outgoing-
interface } interface-type interface-number ] [ verbose ] command to check
tunnel information.
● Run the display mpls te tunnel statistics or display mpls sr-te-lsp
command to check LSP statistics.
● Run the display mpls te tunnel-interface [ tunnel tunnel-number ]
command to check information about a tunnel interface on the ingress.
● (Optional) If the label stack depth exceeds the upper limit supported by a
forwarder, the controller needs to divide the label stack of an entire path
into multiple stacks. After the controller assigns a stitching label to a
stitching node, run the display mpls te stitch-label-stack command to check
information about the stitching label stack mapped to the stitching label.
----End
Usage Scenario
With existing TE tunnels, each LSP route is assigned a label on each node, and a
PCC (forwarder) both generates paths and establishes tunnels, which consumes a
large number of forwarder resources. To reduce the burden on the PCC, SR-MPLS
TE can be used. The manual SR-MPLS TE tunnel technique offers the following
benefits:
● A controller generates labels, which reduces the burden for the PCC
(forwarder).
● The controller calculates a path and assigns a label to each route, which
reduces the burden and resource consumption on the forwarder and helps
improve the forwarder performance since the forwarder can focus on core
forwarding tasks.
Pre-configuration Tasks
Before configuring an SR-MPLS TE tunnel, complete the following tasks:
A controller must be configured to generate labels and calculate an LSP path for an SR-
MPLS TE tunnel to be established.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls lsr-id lsr-id
The LSR ID is set for the local node.
Note the following issues when setting an LSR ID:
● LSR IDs must be set before other MPLS commands are run.
● No default LSR ID is available.
● Using the IP address of a loopback interface as the LSR ID is recommended
for an LSR.
Step 3 Run mpls
The MPLS view is displayed.
Step 4 Run mpls te
MPLS TE is enabled globally.
Step 5 (Optional) Enable MPLS TE on the interface.
This step is required when the controller or the headend computes the tunnel
path. In a static explicit path scenario, this step can be skipped.
1. Run quit
The system view is displayed.
2. Run interface interface-type interface-number
The view of the interface on which the MPLS TE tunnel is established is
displayed.
3. Run mpls
MPLS is enabled on an interface.
4. Run mpls te
MPLS TE is enabled on the interface.
----End
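Steps 1 through 5 can be sketched as follows. The LSR ID 1.1.1.9 and the interface name are hypothetical, and a final commit applies the configuration.

```
system-view
mpls lsr-id 1.1.1.9
mpls
mpls te
quit
interface gigabitethernet 1/0/0
mpls
mpls te
commit
```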
Context
SR enables a forwarder to assign a label to each route, which reduces resource
consumption on the forwarder. SR must be enabled globally before using the SR
function.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run segment-routing
SR is enabled globally.
Step 3 Run commit
The configuration is committed.
----End
Context
Before an SR-MPLS TE tunnel is established, a device must assign labels, collect
network topology information, and report the information to the controller so that
the controller uses the information to calculate a path and a label stack for the
path. SR-MPLS TE labels can be assigned by the controller or by the extended
OSPF protocol on forwarders. After OSPF collects network topology
information (including the labels assigned by OSPF), it floods the
information or advertises it to the controller through BGP-LS routes.
Procedure
Step 1 Configure the OSPF SR-MPLS TE capability.
Perform the following steps on each node of an SR-MPLS TE tunnel to be
established:
1. Run system-view
OSPF SR is enabled.
5. Run area area-id
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
A tunnel interface is created, and the tunnel interface view is displayed.
Step 3 Run either of the following commands to assign an IP address to the tunnel
interface:
● To configure the IP address of the tunnel interface, run ip address ip-address
{ mask | mask-length } [ sub ]
The secondary IP address of the tunnel interface can be configured only after
the primary IP address is configured.
● To configure the tunnel interface to borrow the IP address of another
interface, run ip address unnumbered interface interface-type interface-
number
The MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP
address for the tunnel interface is not recommended. Use the LSR ID of the ingress as the
tunnel interface's IP address.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run load-balance unequal-cost enable
UCMP is enabled globally.
Step 3 Run interface tunnel tunnel-number
The SR-MPLS TE tunnel interface view is displayed.
Step 4 Run load-balance unequal-cost weight weight
Sets a weight for an SR-MPLS TE tunnel before UCMP is carried out among
tunnels.
----End
Context
SR is configured on a PCC (forwarder). The PCC delegates LSPs to a controller for
path calculation. After the controller calculates a path, the controller sends a PCEP
message to deliver path information to the PCC (forwarder).
The path information can also be delivered by a third-party adapter to the forwarder. In this
situation, SR does not need to be configured on the PCC, and the following operation can
be skipped.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run pce-client
A PCC is configured, and the PCE client view is displayed.
Step 3 Run capability segment-routing
The Segment Routing capability is enabled.
Step 4 Run connect-server ip-address
A candidate server is specified for the PCC.
Step 5 Run commit
The configuration is committed.
----End
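The PCC configuration above can be sketched as follows, assuming a hypothetical controller address of 10.2.2.2:

```
system-view
pce-client
capability segment-routing
connect-server 10.2.2.2
commit
```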
Context
An SR-MPLS TE-incapable device on an SR-MPLS TE network can be configured to
simulate an SR-MPLS TE transit node to perform link label-based forwarding. The
function resolves the forwarding issue on the SR-MPLS TE-incapable device.
Procedure
Step 1 Run system-view
----End
Prerequisites
The SR-MPLS TE tunnel functions have been configured.
Procedure
Step 1 Run the display ospf [ process-id ] segment-routing routing [ ip-address [ mask
| mask-length ] ] command to check routing table information of OSPF Segment
Routing.
Step 2 Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-
id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name
tunnel-name ] [ { incoming-interface | interface | outgoing-interface }
interface-type interface-number ] [ verbose ] command to check tunnel
information.
Step 3 Run the display mpls te tunnel statistics or display mpls sr-te-lsp command to
check LSP statistics.
----End
Usage Scenario
SR-MPLS TE is a new TE tunnel technology that uses SR as a control protocol. If no
controller is deployed to compute paths, an explicit path can be manually
configured to perform SR-MPLS TE.
Pre-configuration Tasks
Before configuring an IS-IS SR-MPLS TE tunnel, configure IS-IS to implement
network layer connectivity.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls lsr-id lsr-id
The LSR ID is set for the local node.
Note the following points when setting an LSR ID:
● LSR IDs must be set before other MPLS commands are run.
● No default LSR ID is available.
● Using the IP address of a loopback interface as the LSR ID is recommended
for an LSR.
Step 3 Run mpls
The MPLS view is displayed.
Step 4 Run mpls te
MPLS TE is enabled globally.
Step 5 (Optional) Enable the MPLS TE capability of the interface.
In the scenario where the controller or the head node calculates the tunnel path,
the MPLS TE capability of the interface must be configured. In a static explicit
path scenario, this step can be ignored.
1. Run quit
The system view is displayed.
----End
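Assuming the loopback address 1.1.1.1 is used as the LSR ID, the global MPLS TE enablement above can be sketched as:

```
system-view
 mpls lsr-id 1.1.1.1     # Must be set before other MPLS commands;
                         # a loopback interface address is recommended.
 mpls                    # Enter the MPLS view.
  mpls te                # Enable MPLS TE globally.
  commit
```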
Context
SR enables a forwarder to assign a label to each route, which reduces resource
consumption on the forwarder. SR must be enabled globally before using the SR
function.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run segment-routing
SR is enabled globally.
Step 3 Run commit
The configuration is committed.
----End
Context
SR-MPLS TE uses strict and loose explicit paths. Strict explicit paths use adjacency
SIDs, and loose explicit paths use adjacency and node SIDs. Before an SR-MPLS TE
tunnel is configured, the adjacency and node SIDs must be configured.
Procedure
Step 1 Configure an SRGB.
1. Run system-view
The system view is displayed.
2. Run isis [ process-id ]
The IS-IS view is displayed.
3. Run network-entity net
The network entity title (NET) is configured.
4. Run cost-style { wide | compatible | wide-compatible }
The IS-IS wide metric function is enabled.
5. Run traffic-eng [ level-1 | level-2 | level-1-2 ]
IS-IS TE is enabled.
6. Run segment-routing mpls
The IS-IS segment routing function is enabled.
7. Run segment-routing global-block begin-value end-value
An SRGB is configured in an existing IS-IS instance.
8. Run commit
The configuration is committed.
Step 2 Configure an SR prefix label.
1. Run system-view
The system view is displayed.
2. Run interface loopback loopback-number
A loopback interface is created, and the loopback interface view is displayed.
3. Run isis enable [ process-id ]
IS-IS is enabled on the interface.
4. Run ip address ip-address { mask | mask-length }
The IP address is configured for the loopback interface.
5. Run isis prefix-sid { absolute sid-value | index index-value } [ node-disable ]
The prefix SID is set on the loopback interface.
6. Run commit
The configuration is committed.
Step 3 (Optional) Configure an adjacency SID.
After IS-IS SR is enabled, an adjacency SID can be automatically generated. To
disable the adjacency SID function, run the segment-routing auto-adj-sid
disable command. After a device is restarted, the adjacency SID may change. If an
explicit path uses an adjacency SID, this adjacency SID must be reconfigured. You
can also manually configure an adjacency SID for an explicit path.
1. Run system-view
The system view is displayed.
2. Run segment-routing
The segment routing view is displayed.
3. Run ipv4 adjacency local-ip-addr local-ip-address remote-ip-addr remote-
ip-address sid sid-value
An adjacency SID is manually set for SR.
4. Run commit
The configuration is committed.
----End
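Taken together, the three steps can be sketched as follows. All addresses, the SRGB range, and the SID values are hypothetical examples:

```
system-view
 isis 1
  network-entity 10.0000.0000.0001.00
  cost-style wide
  traffic-eng level-2
  segment-routing mpls
  segment-routing global-block 160000 161000   # Hypothetical SRGB.
  commit
#
 interface LoopBack0
  isis enable 1
  ip address 1.1.1.1 255.255.255.255
  isis prefix-sid index 10       # Prefix SID index into the SRGB.
  commit
#
 segment-routing
  ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330000
  commit                         # Manual adjacency SID for use on explicit paths.
```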
Context
An explicit path is a vector path composed of a series of nodes that are arranged
in the configuration sequence. The path through which an SR-MPLS TE LSP passes
can be planned by specifying next-hop labels or next-hop IP addresses on an
explicit path. The IP addresses involved in an explicit path are set to interfaces' IP
addresses. An explicit path in use can be dynamically updated.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run explicit-path path-name
An explicit path is created, and the explicit path view is displayed.
Step 3 Perform either of the following operations:
● To specify a next-hop label for the explicit path, run the next sid label label-
value type { adjacency | prefix | binding-sid } command.
● To specify a next-hop address for the explicit path, perform the following
operations:
a. Run the next hop ip-address [ include [ [ strict | loose ] | [ incoming |
outgoing ] ] * | exclude ] command to specify a next-hop node for the
explicit path.
The include parameter indicates that a tunnel must pass through a
specified node. The exclude parameter indicates that a tunnel does not
pass through a specified node.
b. (Optional) Run the add hop ip-address1 [ include [ [ strict | loose ] |
[ incoming | outgoing ] ] * | exclude ] { after | before } ip-address2
command to add a node to the explicit path.
----End
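As an illustration, an explicit path can mix the two forms. The path name, label values, and IP address below are hypothetical examples:

```
system-view
 explicit-path path1
  next sid label 160010 type prefix       # Node SID: loose hop via a midpoint node.
  next sid label 330000 type adjacency    # Adjacency SID: strict hop over one link.
#
 explicit-path path2
  next hop 10.1.1.2 include strict        # The tunnel must traverse this address.
```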
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
A tunnel interface is created, and the tunnel interface view is displayed.
Step 3 Run either of the following commands to assign an IP address to the tunnel
interface:
● To configure the IP address of the tunnel interface, run the ip address ip-
address { mask | mask-length } [ sub ] command.
The secondary IP address of the tunnel interface can be configured only after
the primary IP address is configured.
● To configure the tunnel interface to borrow the IP address of another
interface, run the ip address unnumbered interface interface-type interface-
number command.
The MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP
address for the tunnel interface is not recommended. Use the LSR ID of the ingress as the
tunnel interface's IP address.
A tunnel ID is set.
The path-name value must be the same as that specified in the explicit-path
path-name command.
Step 9 (Optional) Run mpls te path verification enable
Path verification for SR-MPLS TE tunnels is enabled. If a label fails, an LSP using
this label is automatically set to Down.
This function does not need to be configured if the controller or BFD is used.
To enable this function globally, run the mpls te path verification enable
command in the MPLS view.
Step 10 (Optional) Run match dscp { ipv4 | ipv6 } { default | { dscp-value1 [ to dscp-
value2 ] } &<1-32> }
A DSCP value is set for IPv4/IPv6 packets that enter an SR-MPLS TE tunnel.
----End
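The steps shown above can be sketched on the tunnel interface; the tunnel number and DSCP value are hypothetical, and the other tunnel attributes (such as the destination and signaling protocol) are configured in the intermediate steps:

```
system-view
 interface Tunnel1
  ip address unnumbered interface LoopBack0   # Borrow the ingress LSR ID address.
  mpls te path verification enable            # LSPs go Down if a label fails.
  match dscp ipv4 46                          # Steer DSCP 46 IPv4 packets into
                                              # this tunnel (hypothetical value).
  commit
```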
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run load-balance unequal-cost enable
UCMP is enabled globally.
Step 3 Run interface tunnel tunnel-number
The SR-MPLS TE tunnel interface view is displayed.
Step 4 Run load-balance unequal-cost weight weight
A weight is set for an SR-MPLS TE tunnel so that UCMP can be carried out among
tunnels.
Step 5 Run commit
The configuration is committed.
----End
Prerequisites
The SR-MPLS TE tunnel functions have been configured.
Procedure
● Run the following commands to check the IS-IS TE status:
– display isis traffic-eng advertisements [ { level-1 | level-2 | level-1-2 } |
{ lsp-id | local } ] * [ process-id | [ vpn-instance vpn-instance-name ] ]
– display isis traffic-eng statistics [ process-id | [ vpn-instance vpn-
instance-name ] ]
----End
Usage Scenario
SR-MPLS TE is a new TE tunnel technology that uses SR as a control protocol. If no
controller is deployed to compute paths, an explicit path can be manually
configured to perform SR-MPLS TE.
Pre-configuration Tasks
Before configuring an OSPF SR-MPLS TE tunnel, configure OSPF to implement
connectivity at the network layer.
Procedure
Step 1 Run system-view
In the scenario where the controller or the head node calculates the tunnel path,
the MPLS TE capability of the interface must be configured. In a static explicit
path scenario, this step can be ignored.
1. Run quit
----End
Context
SR enables a forwarder to assign a label to each route, which reduces resource
consumption on the forwarder. SR must be enabled globally before using the SR
function.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run segment-routing
SR is enabled globally.
Step 3 Run commit
The configuration is committed.
----End
Context
SR-MPLS TE uses strict and loose explicit paths. Strict explicit paths use adjacency
SIDs, and loose explicit paths use adjacency and node SIDs. Before an SR-MPLS TE
tunnel is configured, the adjacency and node SIDs must be configured.
Procedure
Step 1 Configure an SRGB.
1. Run system-view
The system view is displayed.
2. Run ospf [ process-id ]
The OSPF view is displayed.
3. Run opaque-capability enable
The Opaque capability is enabled.
4. Run segment-routing mpls
The OSPF segment routing function is enabled.
5. Run segment-routing global-block begin-value end-value
A global OSPF SR label range is set.
6. Run area area-id
The OSPF area view is displayed.
7. Run mpls-te enable [ standard-complying ]
TE is enabled in the OSPF area.
8. Run commit
The configuration is committed.
Step 2 Set a prefix SID.
1. Run system-view
The system view is displayed.
2. Run interface loopback loopback-number
A loopback interface is created, and the loopback interface view is displayed.
3. Run ospf enable [ process-id ] area area-id
OSPF is enabled on the interface.
----End
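The OSPF counterpart of the SRGB and prefix SID configuration can be sketched as follows; the process ID, area, and SRGB range are hypothetical examples:

```
system-view
 ospf 1
  opaque-capability enable                     # Required for OSPF TE/SR LSAs.
  segment-routing mpls
  segment-routing global-block 160000 161000   # Hypothetical SRGB.
  area 0.0.0.0
   mpls-te enable
  commit
#
 interface LoopBack0
  ospf enable 1 area 0.0.0.0
```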
Context
An explicit path is a vector path composed of a series of nodes that are arranged
in the configuration sequence. The path through which an SR-MPLS TE LSP passes
can be planned by specifying next-hop labels or next-hop IP addresses on an
explicit path. The IP addresses involved in an explicit path are set to interfaces' IP
addresses. An explicit path in use can be dynamically updated.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run explicit-path path-name
An explicit path is created, and the explicit path view is displayed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
A tunnel interface is created, and the tunnel interface view is displayed.
Step 3 Run either of the following commands to assign an IP address to the tunnel
interface:
● To configure the IP address of the tunnel interface, run the ip address ip-
address { mask | mask-length } [ sub ] command.
The secondary IP address of the tunnel interface can be configured only after
the primary IP address is configured.
● To configure the tunnel interface to borrow the IP address of another
interface, run the ip address unnumbered interface interface-type interface-
number command.
The MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP
address for the tunnel interface is not recommended. Use the LSR ID of the ingress as the
tunnel interface's IP address.
The path-name value must be the same as that specified in the explicit-path
path-name command.
Step 9 (Optional) Run mpls te path verification enable
Path verification for SR-MPLS TE tunnels is enabled. If a label fails, an LSP using
this label is automatically set to Down.
This function does not need to be configured if the controller or BFD is used.
To enable this function globally, run the mpls te path verification enable
command in the MPLS view.
Step 10 (Optional) Run match dscp { ipv4 | ipv6 } { default | { dscp-value1 [ to dscp-
value2 ] } &<1-32> }
A DSCP value is set for IPv4/IPv6 packets that enter an SR-MPLS TE tunnel.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run load-balance unequal-cost enable
UCMP is enabled globally.
Step 3 Run interface tunnel tunnel-number
The SR-MPLS TE tunnel interface view is displayed.
Step 4 Run load-balance unequal-cost weight weight
A weight is set for an SR-MPLS TE tunnel so that UCMP can be carried out among
tunnels.
Step 5 Run commit
The configuration is committed.
----End
Prerequisites
The SR-MPLS TE tunnel functions have been configured.
Procedure
Step 1 Run the display ospf [ process-id ] segment-routing routing [ ip-address [ mask
| mask-length ] ] command to check routing table information of OSPF Segment
Routing.
Step 2 Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-
id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name
tunnel-name ] [ { incoming-interface | interface | outgoing-interface }
interface-type interface-number ] [ verbose ] command to check tunnel
information.
Step 3 Run the display mpls te tunnel statistics or display mpls sr-te-lsp command to
check LSP statistics.
----End
Usage Scenario
SR-MPLS TE is a new TE tunnel technology that uses SR as a control protocol. If no
controller is deployed to compute paths, CSPF can be configured on the ingress to
perform SR-MPLS TE.
Pre-configuration Tasks
Before configuring an IS-IS SR-MPLS TE tunnel, configure IS-IS to implement
network layer connectivity.
Procedure
Step 1 Run system-view
In the scenario where the controller or the head node calculates the tunnel path,
the MPLS TE capability of the interface must be configured. In a static explicit
path scenario, this step can be ignored.
1. Run quit
----End
Context
SR enables a forwarder to assign a label to each route, which reduces resource
consumption on the forwarder. SR must be enabled globally before using the SR
function.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run segment-routing
SR is enabled globally.
Step 3 Run commit
The configuration is committed.
----End
Context
SR-MPLS TE uses strict and loose explicit paths. Strict explicit paths use adjacency
SIDs, and loose explicit paths use adjacency and node SIDs. Before an SR-MPLS TE
tunnel is configured, the adjacency and node SIDs must be configured.
Procedure
Step 1 Configure an SRGB.
1. Run system-view
The system view is displayed.
2. Run isis [ process-id ]
The IS-IS view is displayed.
3. Run network-entity net
The network entity title (NET) is configured.
4. Run cost-style { wide | compatible | wide-compatible }
The IS-IS wide metric function is enabled.
5. Run traffic-eng [ level-1 | level-2 | level-1-2 ]
IS-IS TE is enabled.
6. Run segment-routing mpls
The IS-IS segment routing function is enabled.
7. Run segment-routing global-block begin-value end-value
An SRGB is configured in an existing IS-IS instance.
8. Run commit
The configuration is committed.
Step 2 Set a prefix SID.
1. Run system-view
The system view is displayed.
2. Run interface loopback loopback-number
A loopback interface is created, and the loopback interface view is displayed.
3. Run isis enable [ process-id ]
IS-IS is enabled on the interface.
4. Run ip address ip-address { mask | mask-length }
The IP address is configured for the loopback interface.
5. Run isis prefix-sid { absolute sid-value | index index-value } [ node-disable ]
The prefix SID is set on the loopback interface.
6. Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls te
MPLS TE is enabled.
Step 4 Run mpls te cspf
CSPF is enabled on the ingress to compute paths.
Step 5 Run commit
The configuration is committed.
----End
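A sketch of enabling CSPF on the ingress:

```
system-view
 mpls
  mpls te
  mpls te cspf      # The ingress computes SR-MPLS TE paths locally with CSPF.
  commit
```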
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
A tunnel interface is created, and the tunnel interface view is displayed.
Step 3 Run either of the following commands to assign an IP address to the tunnel
interface:
● To configure the IP address of the tunnel interface, run the ip address ip-
address { mask | mask-length } [ sub ] command.
The secondary IP address of the tunnel interface can be configured only after
the primary IP address is configured.
● To configure the tunnel interface to borrow the IP address of another
interface, run the ip address unnumbered interface interface-type interface-
number command.
The MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP
address for the tunnel interface is not recommended. Use the LSR ID of the ingress as the
tunnel interface's IP address.
This function does not need to be configured if the controller or BFD is used.
To enable this function globally, run the mpls te path verification enable
command in the MPLS view.
Step 10 (Optional) Run match dscp { ipv4 | ipv6 } { default | { dscp-value1 [ to dscp-
value2 ] } &<1-32> }
A DSCP value is set for IPv4/IPv6 packets that enter an SR-MPLS TE tunnel.
The DSCP setting on an SR-MPLS TE tunnel interface is mutually exclusive with
the service-class command. If both of them are configured, an error message is
displayed.
Step 11 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run load-balance unequal-cost enable
UCMP is enabled globally.
Step 3 Run interface tunnel tunnel-number
The SR-MPLS TE tunnel interface view is displayed.
Step 4 Run load-balance unequal-cost weight weight
A weight is set for an SR-MPLS TE tunnel so that UCMP can be carried out among
tunnels.
Step 5 Run commit
The configuration is committed.
----End
Prerequisites
The SR-MPLS TE tunnel functions have been configured.
Procedure
● Run the following commands to check the IS-IS TE status:
– display isis traffic-eng advertisements [ { level-1 | level-2 | level-1-2 } |
{ lsp-id | local } ] * [ process-id | [ vpn-instance vpn-instance-name ] ]
– display isis traffic-eng statistics [ process-id | [ vpn-instance vpn-
instance-name ] ]
● Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id
session-id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ]
[ name tunnel-name ] [ { incoming-interface | interface | outgoing-
interface } interface-type interface-number ] [ verbose ] command to check
tunnel information.
● Run the display mpls te tunnel statistics or display mpls sr-te-lsp
command to check LSP statistics.
● Run the display mpls te tunnel-interface [ tunnel tunnel-number ]
command to check information about a tunnel interface on the ingress.
● (Optional) If the label stack depth exceeds the upper limit supported by a
forwarder, the controller needs to divide the label stack of an entire path
into multiple stacks. After the controller assigns a stitch label to a stitch
node, run the display mpls te stitch-label-stack command to check
information about the label stack mapped to the stitch label.
----End
Usage Scenario
SR-MPLS TE is a new TE tunnel technology that uses SR as a control protocol. If no
controller is deployed to compute paths, CSPF can be configured on the ingress to
perform SR-MPLS TE.
Pre-configuration Tasks
Before configuring an OSPF SR-MPLS TE tunnel, configure OSPF to implement
connectivity at the network layer.
Procedure
Step 1 Run system-view
In the scenario where the controller or the head node calculates the tunnel path,
the MPLS TE capability of the interface must be configured. In a static explicit
path scenario, this step can be ignored.
1. Run quit
----End
Context
SR enables a forwarder to assign a label to each route, which reduces resource
consumption on the forwarder. SR must be enabled globally before using the SR
function.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run segment-routing
SR is enabled globally.
Step 3 Run commit
The configuration is committed.
----End
Context
SR-MPLS TE uses strict and loose explicit paths. Strict explicit paths use adjacency
SIDs, and loose explicit paths use adjacency and node SIDs. Before an SR-MPLS TE
tunnel is configured, the adjacency and node SIDs must be configured.
Procedure
Step 1 Configure an SRGB.
1. Run system-view
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run mpls
The MPLS view is displayed.
Step 3 Run mpls te
MPLS TE is enabled.
Step 4 Run mpls te cspf
CSPF is enabled on the ingress to compute paths.
Step 5 Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface tunnel tunnel-number
A tunnel interface is created, and the tunnel interface view is displayed.
Step 3 Run either of the following commands to assign an IP address to the tunnel
interface:
● To configure the IP address of the tunnel interface, run the ip address ip-
address { mask | mask-length } [ sub ] command.
The secondary IP address of the tunnel interface can be configured only after
the primary IP address is configured.
● To configure the tunnel interface to borrow the IP address of another
interface, run the ip address unnumbered interface interface-type interface-
number command.
The MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP
address for the tunnel interface is not recommended. Use the LSR ID of the ingress as the
tunnel interface's IP address.
This function does not need to be configured if the controller or BFD is used.
To enable this function globally, run the mpls te path verification enable
command in the MPLS view.
Step 10 (Optional) Run match dscp { ipv4 | ipv6 } { default | { dscp-value1 [ to dscp-
value2 ] } &<1-32> }
A DSCP value is set for IPv4/IPv6 packets that enter an SR-MPLS TE tunnel.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run load-balance unequal-cost enable
UCMP is enabled globally.
Step 3 Run interface tunnel tunnel-number
The SR-MPLS TE tunnel interface view is displayed.
Step 4 Run load-balance unequal-cost weight weight
A weight is set for an SR-MPLS TE tunnel so that UCMP can be carried out among
tunnels.
Step 5 Run commit
The configuration is committed.
----End
Prerequisites
The SR-MPLS TE tunnel functions have been configured.
Procedure
Step 1 Run the display ospf [ process-id ] segment-routing routing [ ip-address [ mask
| mask-length ] ] command to check routing table information of OSPF Segment
Routing.
Step 2 Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-
id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name
tunnel-name ] [ { incoming-interface | interface | outgoing-interface }
interface-type interface-number ] [ verbose ] command to check tunnel
information.
Step 3 Run the display mpls te tunnel statistics or display mpls sr-te-lsp command to
check LSP statistics.
Step 4 Run the display mpls te tunnel-interface [ tunnel tunnel-number ] command to
check information about a tunnel interface on the ingress.
----End
Context
BGP is a dynamic routing protocol used between ASs. BGP SR is a BGP extension
for SR and is used for inter-AS source routing.
BGP SR uses BGP egress peer engineering (EPE) to allocate Peer-Node and Peer-
Adj SIDs to peers or Peer-Set SIDs to peer groups. These SIDs can be reported to
the controller through BGP-LS for orchestrating E2E SR-MPLS TE tunnels.
Procedure
● Configure BGP EPE.
a. Run system-view
The system view is displayed.
b. Run segment-routing
SR is enabled.
c. Run quit
Return to the system view.
d. Run bgp { as-number-plain | as-number-dot }
BGP is enabled, and the BGP view is displayed.
e. Run peer ipv4-address as-number as-number-plain
A BGP peer is created.
Usage Scenario
SR-MPLS TE, a new MPLS TE tunneling technology, has unique advantages in label
distribution, protocol simplification, large-scale expansion, and fast path
adjustment. SR-MPLS TE also works well with SDN.
The SR that is extended through an IGP can implement SR-MPLS TE only within an
AS domain. To implement inter-AS E2E SR-MPLS TE tunnels, BGP EPE needs to be
used to allocate peer SIDs to the adjacencies and nodes between AS domains.
Peer SIDs can be advertised to the network controller using BGP-LS. The controller
uses the explicit paths to orchestrate IGP SIDs and BGP peer SIDs to implement
inter-AS optimal path forwarding on the network shown in Figure 1-111.
In addition, the label depth supported by an ordinary forwarder is limited, whereas
the depth of the label stack of an inter-AS SR-MPLS TE tunnel may exceed the
maximum depth supported by a forwarder. To reduce the number of label stack
layers encapsulated by the forwarder, use binding SIDs. When configuring an
intra-AS SR-MPLS TE tunnel, set a binding SID for the tunnel. The binding SID
identifies an SR-MPLS TE tunnel and replaces the label stack of an SR-MPLS TE
tunnel.
Pre-configuration Tasks
Before configuring an inter-AS E2E SR-MPLS TE tunnel, complete the following
tasks:
● Configure an intra-AS SR-MPLS TE tunnel.
● Configure BGP SR between ASBRs.
Context
To reduce the number of label stack layers encapsulated by an NE, a binding SID
needs to be used. A binding SID can represent the label stack of an intra-AS SR-
MPLS TE tunnel. After binding SIDs and BGP peer SIDs are orchestrated properly,
E2E SR-MPLS TE LSPs can be established.
An SR-MPLS TE tunnel is unidirectional. In the following operations, the binding
SID is set for the unidirectional SR-MPLS TE tunnel only within an AS domain.
● To set a binding SID of a reverse SR-MPLS TE tunnel, perform the
configuration on the ingress of the reverse tunnel.
Procedure
Step 1 Run system-view
----End
Context
An SR-MPLS TE tunnel is unidirectional. To configure a reverse tunnel, perform the
configuration on the ingress of the reverse tunnel.
Procedure
Step 1 Run system-view
An inter-AS E2E tunnel interface is created, and the tunnel interface view is
displayed.
Step 3 Run either of the following commands to assign an IP address to the tunnel
interface:
● To configure an IP address for the tunnel interface, run the ip address ip-
address { mask | mask-length } [ sub ] command.
The secondary IP address of the tunnel interface can be configured only after
the primary IP address is configured.
● To configure the tunnel interface to borrow the IP address of another
interface, run the ip address unnumbered interface interface-type interface-
number command.
The SR-MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP
address for the tunnel interface is not recommended. Use the LSR ID of the ingress as the
tunnel interface's IP address.
A tunnel ID is set.
Step 9 (Optional) Run match dscp { ipv4 | ipv6 } { default | { dscp-value1 [ to dscp-
value2 ] } &<1-32> }
A DSCP value is set for IPv4/IPv6 packets that enter an SR-MPLS TE tunnel.
----End
Context
SR is configured on a PCC (forwarder). The PCC delegates LSPs to a controller for
path calculation. After the controller calculates a path, the controller sends a PCEP
message to deliver path information to the PCC (forwarder).
The path information can also be delivered by a third-party adapter to the forwarder. In this
situation, SR does not need to be configured on the PCC, and the following operation can
be skipped.
Procedure
Step 1 Run system-view
----End
Prerequisites
The inter-AS E2E SR-MPLS TE tunnel has been configured.
Procedure
Step 1 Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-
id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name
tunnel-name ] [ { incoming-interface | interface | outgoing-interface }
interface-type interface-number ] [ verbose ] command to check tunnel
information.
Step 3 Run the display mpls te binding-sid [ label label-value ] command to check the
mapping between binding SIDs and tunnels.
----End
Usage Scenario
SR-MPLS TE, a new MPLS TE tunneling technology, has unique advantages in label
distribution, protocol simplification, large-scale expansion, and fast path
adjustment. SR-MPLS TE also works well with SDN.
The SR that is extended through an IGP can implement SR-MPLS TE only within an
AS domain. To implement inter-AS E2E SR-MPLS TE tunnels, BGP EPE needs to be
used to allocate peer SIDs to the adjacencies and nodes between AS domains. The
controller uses the explicit paths to orchestrate IGP SIDs and BGP peer SIDs to
implement inter-AS optimal path forwarding on the network shown in Figure
1-112.
Pre-configuration Tasks
Before configuring an inter-AS E2E SR-MPLS TE tunnel, complete the following
tasks:
● Configure BGP EPE between ASBRs. For details, see Configuring BGP SR.
Context
To reduce the number of label stack layers encapsulated by an NE, a binding SID
needs to be used. A binding SID can represent the label stack of an intra-AS SR-
MPLS TE tunnel. After binding SIDs and BGP peer SIDs are orchestrated properly,
E2E SR-MPLS TE LSPs can be established.
Procedure
Step 1 Run system-view
----End
Context
An explicit path refers to a vector path on which a series of nodes are arranged in
a configuration sequence. To plan a path over which an SR-MPLS TE LSP is
established, you can specify either next-hop labels or next-hop IP addresses for an
explicit path. An IP address specified on an explicit path is the IP address of an
interface. An explicit path in use can be dynamically updated.
Procedure
Step 1 Run system-view
In the following example, two AS domains are connected. If there are multiple AS
domains, add configurations based on the network topology.
When the first hop of an explicit path is assigned a binding SID, the explicit path supports
a maximum of three hops.
2. Run next sid label label-value type adjacency
An inter-AS adjacency label is specified.
3. Run next sid label label-value type binding-sid
A binding SID is specified for the second AS domain on an explicit path.
In the case of multiple AS domains, this binding SID can be the binding SID of
a new E2E SR-MPLS TE tunnel.
----End
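The orchestration above can be sketched as an explicit path on the ingress of the first AS. All label values are hypothetical, and the first hop is assumed to be the binding SID of the local intra-AS tunnel:

```
system-view
 explicit-path e2e-path
  next sid label 1000 type binding-sid    # Binding SID of the local intra-AS tunnel
                                          # (hypothetical first hop).
  next sid label 330001 type adjacency    # Inter-AS adjacency (BGP peer) SID.
  next sid label 2000 type binding-sid    # Binding SID of the intra-AS tunnel
                                          # in the second AS domain.
```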
Context
An SR-MPLS TE tunnel is unidirectional. To configure a reverse tunnel, perform the
configuration on the ingress of the reverse tunnel.
Procedure
Step 1 Run system-view
An inter-AS E2E tunnel interface is created, and the tunnel interface view is
displayed.
Step 3 Run either of the following commands to assign an IP address to the tunnel
interface:
● To configure an IP address for the tunnel interface, run the ip address ip-
address { mask | mask-length } [ sub ] command.
The secondary IP address of the tunnel interface can be configured only after
the primary IP address is configured.
● To configure the tunnel interface to borrow the IP address of another
interface, run the ip address unnumbered interface interface-type interface-
number command.
The SR-MPLS TE tunnel is unidirectional and does not need a peer IP address. A separate IP
address for the tunnel interface is not recommended. Use the LSR ID of the ingress as the
tunnel interface's IP address.
A tunnel ID is set.
The path-name value must be the same as that specified in the explicit-path
path-name command.
Step 9 (Optional) Run match dscp { ipv4 | ipv6 } { default | { dscp-value1 [ to dscp-
value2 ] } &<1-32> }
A DSCP value is set for IPv4/IPv6 packets that enter an SR-MPLS TE tunnel.
----End
Prerequisites
The inter-AS E2E SR-MPLS TE tunnel has been configured.
Procedure
Step 1 Run the display mpls te tunnel [ destination ip-address ] [ lsp-id lsr-id session-
id local-lsp-id | lsr-role { all | egress | ingress | remote | transit } ] [ name
tunnel-name ] [ { incoming-interface | interface | outgoing-interface }
interface-type interface-number ] [ verbose ] command to check tunnel
information.
Step 3 Run the display mpls te binding-sid [ label label-value ] command to check the
mapping between binding SIDs and tunnels.
----End
Usage Scenario
After an SR-MPLS BE tunnel is configured, traffic needs to be steered into the
tunnel for forwarding. This process is called traffic steering. Currently, SR-MPLS BE
tunnels can be used for various routes and services, such as BGP and static routes
as well as BGP4+ 6PE, BGP L3VPN, and EVPN services. The main traffic steering
modes supported by SR-MPLS BE tunnels are as follows:
● Static route: When configuring a static route, set the next hop of the route to
an SR-MPLS BE tunnel interface so that traffic transmitted over the route is
steered into the SR-MPLS BE tunnel. For details about how to configure a
static route, see Creating IPv4 Static Routes.
● Tunnel policy: The tunnel policy mode is implemented through tunnel selector
or tunnel binding configuration. This mode allows both VPN services and non-
labeled public routes to recurse to SR-MPLS BE tunnels. The configuration
varies according to the service type.
This section describes how to configure routes and services to recurse to SR-MPLS
BE tunnels through tunnel policies.
Pre-configuration Tasks
Before configuring traffic steering into SR-MPLS BE tunnels, complete the
following tasks:
● Configure BGP routes, static routes, BGP4+ 6PE services, BGP L3VPN services,
BGP L3VPNv6 services, and EVPN services correctly.
● Configure a filter, such as an IP prefix list, if you want to restrict the route
recursive to a specified SR-MPLS BE tunnel.
Procedure
Step 1 Configure a tunnel policy.
Select either of the following modes as required. The tunnel selection sequence
mode applies to all scenarios, whereas the tunnel selector mode mainly applies to
inter-AS VPN Option B and inter-AS VPN Option C scenarios.
● Tunnel selection sequence
a. Run system-view
The system view is displayed.
b. Run tunnel-policy policy-name
A tunnel policy is created and the tunnel policy view is displayed.
c. (Optional) Run description description-information
Description information is configured for the tunnel policy.
d. Run tunnel select-seq sr-lsp load-balance-number load-balance-
number [ unmix ]
The tunnel selection sequence and number of tunnels for load balancing
are configured.
e. Run commit
The configuration is committed.
● Tunnel selector
a. Run system-view
The system view is displayed.
b. Run tunnel-selector tunnel-selector-name { permit | deny } node node
A tunnel selector is created and the view of the tunnel selector is
displayed.
c. (Optional) Configure if-match clauses.
d. Run apply tunnel-policy tunnel-policy-name
A specified tunnel policy is applied to the tunnel selector.
e. Run commit
The configuration is committed.
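The steps above can be sketched as follows. This is a minimal illustration of Step 1 that creates a tunnel policy in tunnel selection sequence mode and a tunnel selector that applies it; the policy name, selector name, and node number are illustrative, not mandated by the procedure.

```
system-view
 tunnel-policy srbe-policy
  description steer traffic to SR-MPLS BE tunnels
  tunnel select-seq sr-lsp load-balance-number 2
  quit
 tunnel-selector srbe-selector permit node 10
  apply tunnel-policy srbe-policy
  quit
 commit
```

In a typical deployment, only one of the two modes is configured; the tunnel selector would additionally carry if-match clauses in an inter-AS VPN Option B or Option C scenario.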
Step 2 Configure routes and services to recurse to SR-MPLS BE tunnels.
● Configure non-labeled public BGP routes and static routes to recurse to SR-
MPLS BE tunnels.
a. Run system-view
The system view is displayed.
b. Run route recursive-lookup tunnel [ ip-prefix ip-prefix-name ]
[ tunnel-policy policy-name ]
The function to recurse non-labeled public BGP routes and static routes
to SR-MPLS BE tunnels is enabled.
c. Run commit
The configuration is committed.
● Configure non-labeled public BGP routes to recurse to SR-MPLS BE tunnels.
For details about how to configure a non-labeled public BGP route, see
Configuring Basic BGP Functions.
a. Run system-view
The system view is displayed.
b. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
c. Run unicast-route recursive-lookup tunnel [ tunnel-selector tunnel-
selector-name ]
The function to recurse non-labeled public BGP routes to SR-MPLS BE
tunnels is enabled.
The unicast-route recursive-lookup tunnel command and route
recursive-lookup tunnel command are mutually exclusive. You can select
either of them for configuration.
d. Run commit
The configuration is committed.
● Configure static routes to recurse to SR-MPLS BE tunnels.
For details about how to configure a static route, see Creating IPv4 Static
Routes.
a. Run system-view
The system view is displayed.
b. Run ip route-static recursive-lookup tunnel [ ip-prefix ip-prefix-name ]
[ tunnel-policy policy-name ]
The function to recurse static routes to SR-MPLS BE tunnels for MPLS
forwarding is enabled.
The ip route-static recursive-lookup tunnel command and route
recursive-lookup tunnel command are mutually exclusive. You can select
either of them for configuration.
c. Run commit
The configuration is committed.
● Configure BGP L3VPN services to recurse to SR-MPLS BE tunnels.
For details about how to configure a BGP L3VPN service, see Configuring
Basic BGP/MPLS IP VPN.
a. Run system-view
The system view is displayed.
b. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
c. Run ipv4-family
The VPN instance-specific IPv4 address family view is displayed.
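The L3VPN procedure is cut off above. In the standard BGP/MPLS IP VPN workflow, the next step is to apply a tunnel policy in the address family view using the tnl-policy command. The following sketch assumes that workflow; the VPN instance name, route distinguisher, and policy name are illustrative.

```
system-view
 ip vpn-instance vpna
  ipv4-family
   route-distinguisher 100:1
   tnl-policy srbe-policy
   quit
  quit
 commit
```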
Usage Scenario
After an SR-MPLS TE tunnel is configured, traffic needs to be steered into the
tunnel for forwarding. This process is called traffic steering. Currently, SR-MPLS TE
tunnels can be used for various routes and services, such as BGP and static routes
as well as BGP4+ 6PE, BGP L3VPN, and EVPN services. The main traffic steering
modes supported by SR-MPLS TE tunnels are as follows:
● Static route: When configuring a static route, set the outbound interface of
the route to an SR-MPLS TE tunnel interface so that traffic transmitted over
the route is steered into the SR-MPLS TE tunnel. For details about how to
configure a static route, see Creating IPv4 Static Routes.
● Auto route: An IGP uses an auto route related to an SR-MPLS TE tunnel that
functions as a logical link to compute a path. The tunnel interface is used as
an outbound interface in the auto route. Auto routes support both IGP
shortcut and forwarding adjacency functions. For details about how to
configure the functions, see Configuring the IGP Shortcut and Configuring
Forwarding Adjacency.
● Policy-based routing (PBR): SR-MPLS TE PBR has the same definition as IP
unicast PBR. PBR is implemented by defining a series of matching rules and
behavior. An outbound interface in an apply clause is set to an interface on an
SR-MPLS TE tunnel. If packets do not match PBR rules, they are properly
forwarded using IP; if they match PBR rules, they are forwarded over specific
tunnels. For details about how to configure PBR, see Routing Policy
Configuration.
● Tunnel policy: The tunnel policy mode is implemented through tunnel selector
or tunnel binding configuration. This mode allows both VPN services and non-
Pre-configuration Tasks
Before configuring traffic steering into SR-MPLS TE tunnels, complete the
following tasks:
● Configure BGP routes, static routes, BGP4+ 6PE services, BGP L3VPN services,
BGP L3VPNv6 services, and EVPN services correctly.
● Configure a filter, such as an IP prefix list, if you want to restrict the route
recursive to a specified SR-MPLS TE tunnel.
Procedure
Step 1 Configure a tunnel policy.
Select either of the following modes based on your traffic steering requirements. Unlike the tunnel selection sequence mode, the tunnel binding mode allows you to specify the exact SR-MPLS TE tunnel to be used, facilitating QoS deployment. The tunnel selector mode applies to inter-AS VPN Option B and inter-AS VPN Option C scenarios.
● Tunnel selection sequence
a. Run system-view
The system view is displayed.
b. Run tunnel-policy policy-name
A tunnel policy is created and the tunnel policy view is displayed.
c. (Optional) Run description description-information
Description information is configured for the tunnel policy.
d. Run tunnel select-seq sr-te load-balance-number load-balance-number
[ unmix ]
The tunnel selection sequence and number of tunnels for load balancing
are configured.
e. Run commit
The configuration is committed.
● Tunnel binding
a. Run system-view
The system view is displayed.
b. Run tunnel-policy policy-name
A tunnel policy is created and the tunnel policy view is displayed.
c. (Optional) Run description description-information
Description information is configured for the tunnel policy.
d. Run tunnel binding destination dest-ip-address te { tunnel-name }
&<1-32> [ ignore-destination-check ] [ down-switch ]
----End
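As an illustration of the tunnel binding mode described above, the following sketch binds traffic destined for a given PE to a specific SR-MPLS TE tunnel; the destination address, tunnel name, and policy name are illustrative.

```
system-view
 tunnel-policy te-binding
  tunnel binding destination 3.3.3.3 te Tunnel1 down-switch
  quit
 commit
```

The down-switch keyword allows traffic to fall back to another available tunnel if the bound SR-MPLS TE tunnel goes down.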
Usage Scenario
The SR and LDP interworking technique allows both segment routing and LDP to
work within the same network. This technique connects an SR network to an LDP
network to implement MPLS forwarding.
In Figure 1-113, an SR domain is established between PE1 and the P. The P
functions as a mapping server, is assigned a mapping between a prefix and a SID,
and advertises the mapping to the mapping client. PE1 functions as a mapping
client and receives the mapping advertised by the P. An LDP domain resides
between the P and PE2. PE2 supports LDP only. To allow PE1 and PE2 to access
each other, establish an SR LSP and an LDP LSP and configure mapping between
the SR LSP and LDP LSP on the P.
Pre-configuration Tasks
Before you configure IS-IS SR to communicate with LDP, complete the following
tasks:
● Configure an SR LSP from PE1 to the P. See Configuring an IS-IS SR-MPLS BE
Tunnel.
● Configure an LDP LSP from the P to PE2. See Configuring an LDP LSP.
Procedure
● Configure the mapping server.
a. Run system-view
The system view is displayed.
b. Run segment-routing
Segment routing is globally enabled, and the segment routing view is
displayed.
c. Run mapping-server prefix-sid-mapping ip-address mask-length begin-
value [ range range-value ] [ attached ]
Mapping between the prefix and SID is configured.
d. Run quit
Exit the segment routing view.
e. Run isis [ process-id ]
The IS-IS view is displayed.
f. Run segment-routing mapping-server send
The local node is enabled to advertise the local SID label mapping.
g. (Optional) Run segment-routing mapping-server receive
The local node is enabled to receive SID label mapping messages.
----End
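The mapping server steps above can be sketched as follows on the P. The prefix, mask length, SID begin value, range, and IS-IS process ID are illustrative.

```
system-view
 segment-routing
  mapping-server prefix-sid-mapping 10.2.2.2 32 100 range 10
  quit
 isis 1
  segment-routing mapping-server send
  quit
 commit
```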
Usage Scenario
The SR and LDP interworking technique allows both segment routing and LDP to
work within the same network. This technique connects an SR network to an LDP
network to implement MPLS forwarding.
In Figure 1-114, an SR domain is established between PE1 and the P. The P
functions as a mapping server, is assigned a mapping between a prefix and a SID,
and advertises the mapping to the mapping client. PE1 functions as a mapping
client and receives the mapping advertised by the P. An LDP domain resides
between the P and PE2. PE2 supports LDP only. To allow PE1 and PE2 to access
each other, establish an SR LSP and an LDP LSP and configure mapping between
the SR LSP and LDP LSP on the P.
Pre-configuration Tasks
Before you configure OSPF SR to communicate with LDP, complete the following
tasks:
● Configure an SR LSP from PE1 to the P. See Configuring an OSPF SR-MPLS
BE Tunnel.
● Configure an LDP LSP from the P to PE2. See Configuring an LDP LSP.
Procedure
● Configure the mapping server.
a. Run system-view
The system view is displayed.
b. Run segment-routing
Segment routing is globally enabled, and the segment routing view is
displayed.
c. Run mapping-server prefix-sid-mapping ip-address mask-length begin-
value [ range range-value ] [ attached ]
Mapping between the prefix and SID is configured.
d. Run quit
Exit the segment routing view.
e. Run ospf [ process-id ]
The OSPF view is displayed.
f. Run segment-routing mapping-server send
The local node is enabled to advertise the local SID label mapping.
g. (Optional) Run segment-routing mapping-server receive
The local node is enabled to receive SID label mapping messages.
h. Run commit
The configuration is committed.
● (Optional) Configure the mapping client.
A device that is not configured as a mapping server functions as a mapping
client by default.
a. Run system-view
The system view is displayed.
b. Run ospf [ process-id ]
The OSPF view is displayed.
c. Run segment-routing mapping-server receive
The local node is enabled to receive SID label mapping messages.
d. Run commit
The configuration is committed.
● Configure the devices that connect the LDP area and the SR area.
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run lsp-trigger segment-routing-interworking best-effort host
A policy for triggering backup LDP LSP establishment is configured.
d. Run commit
The configuration is committed.
----End
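Putting the procedure together, the following sketch configures the P as both the mapping server and the stitching node between the SR and LDP areas; the prefix, SID values, and OSPF process ID are illustrative.

```
# On the P (mapping server and stitching node)
system-view
 segment-routing
  mapping-server prefix-sid-mapping 10.2.2.2 32 100 range 10
  quit
 ospf 1
  segment-routing mapping-server send
  quit
 mpls
  lsp-trigger segment-routing-interworking best-effort host
  quit
 commit
```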
Usage Scenario
With the development of networks, VoIP and on-line video services require high-
quality real-time transmission. Nevertheless, if an IS-IS fault occurs, multiple
processes, including fault detection, LSP update, LSP flooding, route calculation,
and FIB entry delivery, must be performed to switch traffic to a new link. As a
result, the traffic interruption time is longer than 50 ms, leading to a failure to
satisfy real-time requirements.
TI-LFA fast reroute (FRR) protects links and nodes on segment routing tunnels. If a
link or node fails, traffic is rapidly switched to a backup path, which minimizes
traffic loss.
In some LFA or RLFA scenarios, the P space and Q space do not share nodes or
have direct neighbors. If a link or node fails, no backup path can be calculated,
causing traffic loss and resulting in a failure to meet reliability requirements. In
this situation, TI-LFA can be used.
Pre-configuration Tasks
Before configuring IS-IS TI-LFA FRR, complete the following tasks:
● Configure IP addresses for interfaces to implement connectivity at the
network layer.
● Configure basic IPv4 IS-IS functions.
● Globally enable the segment routing capability.
● Enable the segment routing capability in an IS-IS process.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run isis [ process-id ]
An IS-IS process is created, and the IS-IS view is displayed.
Step 3 Run frr
The IS-IS FRR view is displayed.
Step 4 Run loop-free-alternate [ level-1 | level-2 | level-1-2 ]
IS-IS LFA is enabled, and LFA links can be generated.
Step 5 Run ti-lfa [ level-1 | level-2 | level-1-2 ]
IS-IS TI-LFA is enabled.
Step 6 (Optional) Run tiebreaker { node-protecting | lowest-cost | non-ecmp | srlg-
disjoint } preference preference [ level-1 | level-2 | level-1-2 ]
The solution of selecting a backup path for IS-IS TI-LFA FRR is set.
By default, the preference value is 40 for the node-protection path first solution,
20 for the smallest-cost path first solution, 10 for the non-ECMP first solution, and
5 for the Shared Risk Link Group (SRLG) disjoint first solution. The larger the
value, the higher the priority.
Before configuring the srlg-disjoint parameter, you need to run the isis srlg srlg-
value command in the IS-IS interface view to configure the IS-IS SRLG function.
Step 7 (Optional) After completing the preceding configuration, IS-IS TI-LFA is enabled
on all IS-IS interfaces. If you do not want to enable IS-IS TI-LFA on some
interfaces, perform the following operations:
1. Run quit
Quit the IS-IS FRR view.
2. Run quit
Quit the IS-IS view.
3. Run interface interface-type interface-number
The interface view is displayed.
4. Run isis [ process-id process-id ] ti-lfa disable [ level-1 | level-2 | level-1-2 ]
IS-IS TI-LFA is disabled on the specified interface.
Step 8 If a network fault occurs or is rectified, an IGP performs route convergence. A
transient forwarding status inconsistency between nodes results in different
convergence rates on devices, which poses the risk of micro loops. To prevent
micro loops, perform the following steps:
1. Run quit
Quit the interface view.
2. Run isis [ process-id ]
The IS-IS process is created, and the IS-IS view is displayed.
3. Run avoid-microloop frr-protected
The IS-IS local anti-microloop is enabled.
4. (Optional) Run avoid-microloop frr-protected rib-update-delay rib-update-
delay
The delay after which IS-IS delivers routes is configured.
5. Run avoid-microloop segment-routing
The IS-IS remote anti-microloop function is enabled.
6. (Optional) Run avoid-microloop segment-routing rib-update-delay rib-
update-delay
The delay in delivering IS-IS routes in a segment routing scenario is set.
Step 9 Run commit
The configuration is committed.
----End
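Steps 1 through 9 above can be sketched as follows; the IS-IS process ID and the levels are illustrative, and the anti-microloop commands correspond to the optional Step 8.

```
system-view
 isis 1
  frr
   loop-free-alternate level-1
   ti-lfa level-1
   quit
  avoid-microloop frr-protected
  avoid-microloop segment-routing
  quit
 commit
```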
Usage Scenario
In some LFA or RLFA scenarios, the P space and Q space do not share nodes or
have direct neighbors. If a link or node fails, no backup path can be calculated,
causing traffic loss and resulting in a failure to meet reliability requirements. In
this situation, TI-LFA can be used.
TI-LFA fast reroute (FRR) protects links and nodes on segment routing tunnels. If a
link or node fails, traffic is rapidly switched to a backup path, which minimizes
traffic loss.
Pre-configuration Tasks
Before configuring OSPF TI-LFA FRR, complete the following tasks:
● Configure IP addresses for interfaces to implement connectivity at the
network layer.
● Configure basic OSPF functions
● Globally enable the segment routing capability.
● Enable the segment routing capability in an OSPF process.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ospf [ process-id ]
The OSPF process is enabled, and the OSPF view is displayed.
Step 3 Run frr
The OSPF FRR view is displayed.
Step 4 Run loop-free-alternate
OSPF LFA is enabled, and LFA links can be generated.
Step 5 Run ti-lfa enable
OSPF TI-LFA is enabled.
Step 6 (Optional) After completing the preceding configuration, OSPF TI-LFA is enabled
on all OSPF interfaces. If you do not want to enable OSPF TI-LFA on some
interfaces, perform the following operations:
1. Run quit
Quit the OSPF FRR view.
2. Run quit
Quit the OSPF view.
3. Run interface interface-type interface-number
The interface view is displayed.
4. To disable OSPF TI-LFA on a specified interface, run either of the
following commands:
----End
Example
After all OSPF TI-LFA FRR configurations are complete, run the following command to verify them.
● Run the display ospf [ process-id ] segment-routing routing [ ip-address
[ mask | mask-length ] ] command to check information about the OSPF
segment routing routing table after OSPF TI-LFA FRR is enabled.
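Steps 1 through 5 of the OSPF TI-LFA procedure can be sketched as follows; the OSPF process ID is illustrative.

```
system-view
 ospf 1
  frr
   loop-free-alternate
   ti-lfa enable
   quit
  quit
 commit
```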
Usage Scenario
If SBFD for SR-MPLS BE detects a fault on the primary tunnel, VPN FRR rapidly
switches traffic, which minimizes the impact on traffic.
Pre-configuration Tasks
Before configuring SBFD for SR-MPLS BE tunnel, complete the following tasks:
● Configure an SR-MPLS BE tunnel.
● Run the mpls lsr-id lsr-id command to set an LSR ID, and ensure that the
peer has a reachable route to the local LSR ID.
Procedure
● Configuring the SBFD Initiator
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is globally enabled.
You can set BFD parameters only after running the bfd command to
enable global BFD.
c. Run quit
The system view is displayed.
d. Run sbfd
SBFD is globally enabled.
e. (Optional) Run destination ipv4 ip-address remote-discriminator
discriminator-value
The mapping between the IP address and discriminator of the SBFD
reflector is configured.
f. Run quit
Return to the system view.
g. Run segment-routing
The Segment Routing view is displayed.
h. Run seamless-bfd enable mode tunnel [ [ filter-policy ip-prefix ip-
prefix-name ] | [ effect-sr-lsp ] ] *
SBFD for SR-MPLS BE tunnel is enabled.
i. (Optional) Run seamless-bfd tunnel { min-rx-interval receive-interval |
min-tx-interval transmit-interval | detect-multiplier multiplier-value } *
SBFD parameters are set.
j. Run commit
The configuration is committed.
● Configuring an SBFD Reflector
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is globally enabled.
You can set BFD parameters only after running the bfd command to
enable global BFD.
c. Run quit
The system view is displayed.
d. Run sbfd
SBFD is globally enabled.
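The initiator steps above can be sketched as follows for SBFD for SR-MPLS BE tunnel; the reflector IP address and discriminator are illustrative.

```
# On the SBFD initiator
system-view
 bfd
 quit
 sbfd
  destination ipv4 10.3.3.3 remote-discriminator 30
  quit
 segment-routing
  seamless-bfd enable mode tunnel
  quit
 commit
```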
Usage Scenario
If SBFD for SR-MPLS TE LSP detects a fault on the primary LSP, traffic is rapidly
switched to the backup LSP, which minimizes the impact on traffic.
Pre-configuration Tasks
Before configuring SBFD for SR-MPLS TE LSP, complete the following task:
● Configure an SR-MPLS TE tunnel.
● Set each LSR ID using the mpls lsr-id lsr-id command. Ensure that the route
to the local lsr-id is reachable.
Procedure
● Configure an SBFD initiator.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally.
You can set BFD parameters only after running the bfd command to
enable global BFD.
c. Run quit
Return to the system view.
d. Run sbfd
SBFD is enabled globally.
e. Run quit
Return to the system view.
----End
Usage Scenario
If SBFD for SR-MPLS TE tunnel detects a fault on the primary tunnel, traffic is
rapidly switched to the backup tunnel, which minimizes the impact on traffic.
Pre-configuration Tasks
Before configuring SBFD for SR-MPLS TE tunnel, complete the following task:
● Configure an SR-MPLS TE tunnel.
● Set each LSR ID using the mpls lsr-id lsr-id command. Ensure that the route
to the local lsr-id is reachable.
Procedure
● Configure an SBFD initiator.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally.
You can set BFD parameters only after running the bfd command to
enable global BFD.
c. Run quit
Return to the system view.
d. Run sbfd
SBFD is enabled globally.
e. Run quit
Return to the system view.
f. Run interface tunnel tunnel-number
The SR-MPLS TE tunnel interface view is displayed.
g. Run mpls te bfd tunnel enable seamless
SBFD for SR-MPLS TE tunnel is enabled.
After the configuration is complete, the SBFD initiator automatically
establishes an SBFD session destined for the destination IP address of an
SR-MPLS TE tunnel.
h. Run commit
The configuration is committed.
● Configure the SBFD reflector.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally.
You can set BFD parameters only after running the bfd command to
enable global BFD.
c. Run quit
Return to the system view.
d. Run sbfd
SBFD is enabled globally.
e. Run reflector discriminator { unsigned-integer-value | ip-address-value }
The discriminator of an SBFD reflector is configured.
f. Run commit
The configuration is committed.
----End
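The initiator and reflector procedures above can be sketched as follows for SBFD for SR-MPLS TE tunnel; the tunnel interface number and reflector discriminator are illustrative.

```
# On the SBFD initiator
system-view
 bfd
 quit
 sbfd
 quit
 interface Tunnel1
  mpls te bfd tunnel enable seamless
  quit
 commit
# On the SBFD reflector
system-view
 bfd
 quit
 sbfd
  reflector discriminator 10.3.3.3
  quit
 commit
```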
Usage Scenario
If BFD for SR LSP detects a fault on the primary tunnel, VPN FRR rapidly switches
traffic, which minimizes the impact on traffic.
Configure BFD for SR LSP on both the ingress and egress of an SR LSP.
Pre-configuration Tasks
Before configuring BFD for SR LSP, complete the following tasks:
● Configure an IS-IS SR-MPLS BE tunnel or configure an OSPF SR-MPLS BE
tunnel.
● Set each LSR ID using the mpls lsr-id lsr-id command. Ensure that the route
to the local lsr-id is reachable.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd
The BFD view is displayed.
Step 3 Run mpls-passive
The egress is enabled to create a BFD session passively.
The egress has to receive an LSP ping request carrying a BFD TLV before creating a
BFD session.
Step 4 Run quit
Return to the system view.
Step 5 Run segment-routing
The segment routing view is displayed.
Step 6 Run bfd enable mode tunnel [ filter-policy ip-prefix ip-prefix-name | effect-sr-
lsp ] *
BFD for SR-MPLS BE tunnels is enabled.
----End
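The BFD for SR LSP procedure above can be sketched as follows; it enables passive session creation on the egress and BFD for SR-MPLS BE tunnels on the ingress.

```
# On the egress
system-view
 bfd
  mpls-passive
  quit
 commit
# On the ingress
system-view
 bfd
 quit
 segment-routing
  bfd enable mode tunnel
  quit
 commit
```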
Usage Scenario
If BFD for SR LSP detects a fault on the primary tunnel when SR communicates
with LDP, VPN FRR rapidly switches traffic, which minimizes the impact on traffic.
Pre-configuration Tasks
Before configuring BFD for SR LSP (in the SR and LDP interworking scenario),
complete the following tasks:
Procedure
● Create a BFD session on the SR side.
a. Run system-view
b. Run bfd
The BFD view is displayed.
c. Run mpls-passive
The egress is enabled to create a BFD session passively.
The egress has to receive an LSP ping request carrying a BFD TLV before
creating a BFD session.
d. Run quit
Return to the system view.
e. Run segment-routing
The segment routing view is displayed.
f. Run bfd enable mode tunnel [ filter-policy ip-prefix ip-prefix-name |
effect-sr-lsp | nil-fec ] *
BFD is enabled for SR-MPLS BE tunnels.
If effect-sr-lsp is specified, the SR LSP is withdrawn when the associated
BFD session goes down.
In an SR and LDP interworking scenario, the ingress node cannot detect
whether LDP LSPs are stitched to SR LSPs in the LDP to SR direction. In
the LSP ping packet triggered by BFD, the encapsulated FEC type is LDP.
When the packet arrives at the egress node (SR node), the FEC type fails
to be verified, preventing BFD from going Up. To resolve this issue,
configure the nil-fec parameter.
g. (Optional) Run bfd tunnel { min-rx-interval receive-interval | min-tx-
interval transmit-interval | detect-multiplier multiplier-value } *
BFD parameters are set for SR tunnels.
h. Run commit
The configuration is committed.
● Create a BFD session on the LDP side.
a. Run system-view
The system view is displayed.
b. Run bfd
The BFD view is displayed.
c. Run mpls-passive
The egress is enabled to create a BFD session passively.
The egress has to receive an LSP ping request carrying a BFD TLV before
creating a BFD session.
d. Run quit
Return to the system view.
e. Run mpls
The MPLS view is displayed.
Usage Scenario
BFD can be used to monitor SR-MPLS TE tunnels. If the primary tunnel fails,
BFD instructs applications such as VPN FRR to quickly switch traffic, minimizing
the impact on services.
Pre-configuration Tasks
Before configuring static BFD for SR-MPLS TE, configure SR-MPLS TE tunnels.
Procedure
Step 1 Enable BFD globally.
1. Run system-view
The system view is displayed.
2. Run bfd
You can set BFD parameters only after running the bfd command to enable
BFD globally.
3. Run commit
The configuration is committed.
5. (Optional) Run min-tx-interval interval
The minimum interval at which the local device sends BFD packets is
changed.
Actual local interval at which BFD packets are sent = MAX { Locally
configured interval at which BFD packets are sent, Remotely configured
interval at which BFD packets are received }
Actual local interval at which BFD packets are received = MAX { Remotely
configured interval at which BFD packets are sent, Locally configured interval
at which BFD packets are received }
Local BFD detection period = Actual local interval at which BFD packets are
received x Remotely configured BFD detection multiplier
– On the remote device, the actual interval between sending BFD packets is
300 ms calculated using the formula MAX {100 ms, 300 ms}, the actual
interval between receiving BFD packets is 600 ms calculated using the
formula MAX {200 ms, 600 ms}, and the actual detection period is 2400
ms calculated by multiplying 600 ms by 4.
6. (Optional) Run min-rx-interval interval
The minimum interval at which the local device receives BFD packets is
changed.
----End
– To check statistics about BFD for MPLS-TE sessions, run the display bfd
statistics session mpls-te interface tunnel { tunnel-id | tunnel-number }
te-lsp command.
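The static BFD for SR-MPLS TE procedure above is partly truncated; the following sketch assumes the standard static BFD for TE workflow. The session name, tunnel interface, discriminators, and intervals are illustrative.

```
system-view
 bfd
 quit
 bfd srte_tunnel bind mpls-te interface Tunnel1
  discriminator local 1
  discriminator remote 2
  min-tx-interval 100
  min-rx-interval 100
  commit
```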
Usage Scenario
BFD detects the connectivity of SR-MPLS TE LSPs. If a BFD session fails to go up
through negotiation, an SR-MPLS TE LSP cannot go up. Static BFD for SR-MPLS TE
LSP is configured to rapidly switch traffic from a primary LSP to a backup LSP if
the primary LSP fails.
Pre-configuration Tasks
Before configuring static BFD for SR-MPLS TE LSP, configure an SR-MPLS TE
tunnel.
Procedure
Step 1 Enable BFD globally.
1. Run system-view
The system view is displayed.
2. Run bfd
BFD is enabled globally, and the BFD view is displayed.
You can set BFD parameters only after running the bfd command to enable
BFD globally.
3. Run commit
The configuration is committed.
Step 2 Configure BFD parameters on the ingress.
1. Run system-view
The system view is displayed.
2. Run bfd session-name bind mpls-te interface interface-type interface-
number te-lsp [ backup ] [ one-arm-echo ]
BFD is configured to monitor the primary or backup LSP that is bound to an
SR-MPLS TE tunnel.
If one-arm-echo is configured, a one-arm BFD echo session is established to
monitor an LSP bound to the SR-MPLS TE tunnel. A Huawei device at the
ingress cannot use BFD for SR-MPLS TE LSP to communicate with a non-
Huawei device at the egress. In this situation, no BFD session can be
established. To establish a BFD session to monitor an LSP bound to the SR-
MPLS TE tunnel, configure a one-arm BFD echo session.
3. Run discriminator local discr-value
The local discriminator of the BFD session is set.
This command does not need to be run if a one-arm BFD echo session is
established.
5. (Optional) Run min-tx-interval interval
The minimum interval at which BFD packets are sent locally is set.
Actual local interval at which BFD packets are sent = MAX { Locally
configured interval at which BFD packets are sent, Remotely configured
interval at which BFD packets are received }
Actual local interval at which BFD packets are received = MAX { Remotely
configured interval at which BFD packets are sent, Locally configured interval
at which BFD packets are received }
Local BFD detection period = Actual local interval at which BFD packets are
received x Remotely configured BFD detection multiplier
6. (Optional) Run min-rx-interval interval
The minimum interval at which BFD packets are received locally is set.
----End
– Run the display bfd session static [ for-ip | for-lsp | for-te ] [ verbose ]
command to check the configurations of static BFD sessions.
● Run the following commands to check BFD statistics:
– Run the display bfd statistics session all [ for-ip | for-lsp | for-te ]
[ verbose ] command to check statistics about all BFD sessions.
– Run the display bfd statistics session static [ discriminator local-
discriminator | for-ip | for-lsp | for-te ] [ verbose ] command to check
statistics about static BFD sessions.
– Run the display bfd statistics session mpls-te interface tunnel
{ tunnel-id | tunnel-number } te-lsp command to check statistics about
BFD sessions that monitor LSPs.
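The static BFD for SR-MPLS TE LSP procedure above can be sketched as follows; the session name, tunnel interface, and discriminators are illustrative. The te-lsp keyword binds the session to the LSP rather than to the tunnel.

```
system-view
 bfd
 quit
 bfd srte_lsp bind mpls-te interface Tunnel1 te-lsp
  discriminator local 10
  discriminator remote 20
  commit
```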
Usage Scenario
BFD detects the connectivity of SR-MPLS TE LSPs. Dynamic BFD for SR-MPLS TE
LSP is configured to rapidly switch traffic from a primary LSP to a backup LSP if
the primary LSP fails. Unlike static BFD for SR-MPLS TE LSP, dynamic BFD for SR-
MPLS TE LSP simplifies the configuration and minimizes manual configuration
errors.
Dynamic BFD can only monitor a part of an SR-MPLS TE tunnel.
Pre-configuration Tasks
Before configuring dynamic BFD for SR-MPLS TE LSP, you should configure an SR-
MPLS TE Tunnel.
Procedure
Step 1 Enable BFD globally.
1. Run system-view
The system view is displayed.
2. Run bfd
BFD is enabled globally, and the BFD view is displayed.
You can set BFD parameters only after running the bfd command to enable
BFD globally.
3. Run commit
The configuration is committed.
Step 2 Enable the ingress to dynamically create a BFD session to monitor SR-MPLS TE
LSPs.
Perform either of the following operations to enable the ingress to dynamically
create a BFD session to monitor SR-MPLS TE LSPs:
● Effective local interval at which BFD packets are sent = MAX { Locally configured
interval at which BFD packets are sent, Remotely configured interval at which BFD
packets are received }
● Effective local interval at which BFD packets are received = MAX { Remotely
configured interval at which BFD packets are sent, Locally configured interval at which
BFD packets are received }
● Effective local BFD detection period = Effective local interval at which BFD packets are
received x Remotely configured BFD detection multiplier
On the egress that passively creates a BFD session, the BFD parameters cannot be adjusted,
because the default values are the smallest values that can be set on the ingress. Therefore,
if BFD for TE is used, the effective BFD detection period on both ends of an SR-MPLS TE
tunnel is as follows:
● Effective detection period on the ingress = Configured interval at which BFD packets are
received on the ingress x 3
● Effective detection period on the egress = Configured interval at which BFD packets are
sent on the ingress x Configured detection multiplier on the ingress
----End
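The enabling step on the ingress is cut off above; the following sketch assumes the interface-level command of the standard dynamic BFD for TE workflow. The tunnel interface number is illustrative.

```
system-view
 bfd
 quit
 interface Tunnel1
  mpls te bfd enable
  quit
 commit
```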
Usage Scenario
If one-arm BFD for inter-AS E2E SR-MPLS TE tunnel detects a fault on the primary
tunnel, a protection application, for example, VPN FRR, rapidly switches traffic,
which minimizes the impact on traffic.
With one-arm BFD for E2E SR-MPLS TE tunnel enabled, if the reflector can
successfully iterate packets to the E2E SR-MPLS TE tunnel using the IP address of
the initiator, the reflector forwards the packets through the E2E SR-MPLS TE
tunnel. Otherwise, the reflector forwards the packets over IP routes.
Pre-configuration Tasks
Before configuring one-arm BFD for E2E SR-MPLS TE tunnel, configure an inter-AS
E2E SR-MPLS TE tunnel.
Procedure
Step 1 Enable BFD globally.
1. Run system-view
The system view is displayed.
2. Run bfd
BFD is enabled.
3. Run commit
The configuration is committed.
Step 2 Enable the ingress to dynamically create a BFD session to monitor E2E SR-MPLS
TE tunnels.
Perform either of the following operations:
● Globally enable the capability if BFD sessions need to be automatically
created for most E2E SR-MPLS TE tunnels on the ingress.
● Enable the capability on a specific tunnel interface if a BFD session needs to
be automatically created for some E2E SR-MPLS TE tunnels on the ingress.
Run the following commands as needed.
● Enable the capability globally.
a. Run system-view
The system view is displayed.
b. Run mpls
The MPLS view is displayed.
c. Run mpls te bfd tunnel enable one-arm-echo
One-arm BFD for E2E SR-MPLS TE tunnel is enabled.
d. Run interface tunnel interface-number
The view of the E2E SR-MPLS TE tunnel interface is displayed.
e. Run mpls te reverse-lsp binding-sid label label-value
A binding SID is set for a reverse LSP in the E2E SR-MPLS TE tunnel.
In one-arm BFD for E2E SR-MPLS TE mode, BFD does not need to be enabled on the peer,
and the min-tx-interval tx-interval parameter of the local end does not take effect.
Therefore, the actual detection period of the ingress equals the configured interval at which
BFD packets are received on the ingress multiplied by the detection multiplier configured
on the ingress.
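Put together, the global enablement steps above can be sketched as follows. The tunnel interface number and binding SID label are illustrative values, not taken from this document:

```
system-view
 bfd                                           # enable BFD globally
 quit
 mpls
  mpls te bfd tunnel enable one-arm-echo       # one-arm BFD for E2E SR-MPLS TE tunnels
 quit
 interface Tunnel10                            # illustrative tunnel interface
  mpls te reverse-lsp binding-sid label 48100  # illustrative binding SID of the reverse LSP
 quit
 commit
```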
----End
Usage Scenario
One-arm BFD monitors inter-AS E2E SR-MPLS TE LSPs. If a specified transit node
on a tunnel fails, traffic is switched to the hot-standby LSP, which reduces the
impact on services.
With one-arm BFD for E2E SR-MPLS TE LSP enabled, if the reflector can
successfully iterate packets to the E2E SR-MPLS TE LSP using the IP address of the
initiator, the reflector forwards the packets through the E2E SR-MPLS TE
LSP. Otherwise, the reflector forwards the packets over IP routes.
Pre-configuration Tasks
Before configuring one-arm BFD for E2E SR-MPLS TE LSP, configure an inter-AS
E2E SR-MPLS TE tunnel.
Procedure
Step 1 Enable global BFD.
1. Run system-view
The system view is displayed.
2. Run bfd
BFD is enabled.
3. Run commit
The configuration is committed.
Step 2 Enable the ingress to dynamically create a one-arm BFD session to monitor E2E
SR-MPLS TE LSPs.
1. Run system-view
The system view is displayed.
2. Run interface tunnel interface-number
The view of the E2E SR-MPLS TE tunnel interface is displayed.
3. Run mpls te bfd enable one-arm-echo [ primary ]
Automatic creation of a one-arm BFD session to monitor E2E SR-MPLS TE
LSPs is triggered.
4. Run mpls te reverse-lsp binding-sid label label-value
A binding SID is specified for a reverse LSP in the E2E SR-MPLS TE tunnel.
Ensure that the mpls te binding-sid label label-value command has been run
on the ingress of the reverse LSP.
5. Run commit
The configuration is committed.
Step 3 (Optional) Adjust BFD parameters on the ingress.
Adjust BFD parameters on the ingress in either of the following modes:
● Adjust BFD parameters globally. This method is used when BFD parameters
for most E2E SR-MPLS TE tunnels need to be adjusted on the ingress.
● Adjust BFD parameters on a specific tunnel interface. If an E2E SR-MPLS TE
tunnel interface needs BFD parameters different from the globally configured
ones, adjust BFD parameters on the specific tunnel interface.
In one-arm BFD for E2E SR-MPLS TE mode, BFD does not need to be enabled on the peer,
and the min-tx-interval tx-interval parameter of the local end does not take effect.
Therefore, the actual detection period of the ingress equals the configured interval at which
BFD packets are received on the ingress multiplied by the detection multiplier configured
on the ingress.
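A minimal per-tunnel sketch of Steps 1 and 2 follows. The interface number and label value are illustrative:

```
system-view
 bfd                                           # enable BFD globally
 quit
 interface Tunnel10                            # illustrative E2E SR-MPLS TE tunnel interface
  mpls te bfd enable one-arm-echo primary      # trigger one-arm BFD sessions for the LSPs
  mpls te reverse-lsp binding-sid label 48100  # must match the binding SID on the reverse LSP ingress
 quit
 commit
```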
----End
Usage Scenario
MPLS in UDP is a DCN overlay technology that encapsulates MPLS (or SR-MPLS)
packets into UDP packets so that they can traverse networks that do not
support MPLS or SR MPLS.
Pre-configuration Tasks
Before configuring a device as the egress of an MPLS-in-UDP tunnel, configure an
SR MPLS tunnel.
Procedure
Step 1 Run system-view
The device is enabled to verify the source IP address mapped to the specified
destination IP address.
----End
Context
This function is used to collect statistics about traffic forwarded over SR-MPLS
BE paths.
Procedure
Step 1 Run system-view
----End
Follow-up Procedure
To clear SR-MPLS BE traffic statistics before re-collection, run the
reset segment-routing traffic-statistics [ ip-address mask-length ] command.
Usage Scenario
An SR-MPLS TE Policy is a set of candidate paths consisting of one or more
segment lists, that is, segment ID (SID) lists. Each SID list identifies an end-to-end
path from the source to the destination, instructing a device to forward traffic
through the path rather than the shortest path computed by an IGP. The header of
a packet steered into an SR-MPLS TE Policy is augmented with an ordered list of
segments associated with that SR-MPLS TE Policy, so that other devices on the
network can execute the instructions encapsulated into the list.
You can configure an SR-MPLS TE Policy through CLI or NETCONF. For a manually
configured SR-MPLS TE Policy, information, such as the endpoint and color
attributes, the preference values of candidate paths, and segment lists, must be
configured. Moreover, the preference values must be unique. The first-hop label of
a segment list can be a node, adjacency, BGP EPE, parallel, or anycast SID, but
cannot be any binding SID. Ensure that the first-hop label is a local incoming
label, so that the forwarding plane can use this label to search the local
forwarding table for the corresponding forwarding entry.
Pre-configuration Tasks
Before configuring an SR-MPLS TE Policy, configure an IGP to ensure that all
nodes can communicate with each other at the network layer.
Context
An SR-MPLS TE Policy may contain multiple candidate paths that are defined
using segment lists. Because adjacency and node SIDs are required for SR-MPLS TE
Policy configuration, you need to configure the IGP SR function. Node SIDs need
to be configured manually, whereas adjacency SIDs can either be generated
dynamically by an IGP or be configured manually.
● In scenarios where SR-MPLS TE Policies are configured manually, if dynamic
adjacency SIDs are used, the adjacency SIDs may change after an IGP restart.
To keep the SR-MPLS TE Policies up, you must adjust them manually. The
adjustment workload is heavy if a large number of SR-MPLS TE Policies are
configured manually. Therefore, you are advised to configure adjacency SIDs
manually and not to use adjacency SIDs generated dynamically.
● In scenarios where SR-MPLS TE Policies are delivered by a controller
dynamically, you are also advised to configure adjacency SIDs manually.
Although the controller can use BGP-LS to detect adjacency SID changes, the
adjacency SIDs dynamically generated by an IGP change randomly, causing
inconvenience in routine maintenance and troubleshooting.
Procedure
Step 1 Enable MPLS.
1. Run system-view
----End
Context
SR-MPLS TE Policies are used to direct traffic to traverse an SR-MPLS TE network.
Each SR-MPLS TE Policy can have multiple candidate paths with different
preferences. The valid candidate path with the highest preference is selected as
the primary path of the SR-MPLS TE Policy, and the valid candidate path with the
second highest preference as the hot-standby path.
Procedure
● Configure a segment list.
a. Run system-view
The system view is displayed.
b. Run segment-routing
SR is enabled globally and the Segment Routing view is displayed.
c. Run segment-list (segment routing view) list-name
A segment list is created for SR-MPLS TE candidate paths and the
segment list view is displayed.
d. Run index index sid label label
A SID (next-hop label) is configured for the segment list.
You can run the command multiple times to configure multiple SIDs. The
system generates a segment list with a label stack containing SIDs that
are placed by index in ascending order. If a candidate path in the SR-
MPLS TE Policy is preferentially selected, traffic is forwarded using the
segment list of the candidate path. A maximum of 10 SIDs can be
configured for each segment list.
e. Run commit
The configuration is committed.
● Configure an SR-MPLS TE Policy.
a. Run system-view
The system view is displayed.
b. Run segment-routing
SR is enabled globally and the Segment Routing view is displayed.
c. (Optional) Run sr-te-policy bgp-ls enable
The SR-MPLS TE Policy is enabled to report information to BGP-LS.
After the sr-te-policy bgp-ls enable command is run, the system reports
path information to BGP-LS at the granularity of SR-MPLS TE Policy
candidate paths.
d. Run sr-te policy policy-name [ endpoint ipv4-address color color-value ]
An SR-MPLS TE Policy is created, the endpoint and color value are
configured for the SR-MPLS TE Policy, and the SR-MPLS TE Policy view is
displayed.
e. (Optional) Run binding-sid label-value
A binding SID is configured for the SR-MPLS TE Policy.
The value of label-value needs to be within the scope defined by the
local-block begin-value end-value command.
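The segment list and policy configuration above can be combined into one sketch. The candidate-path preference command that references the segment list is assumed from related sections of this guide; all names, addresses, and label values are illustrative:

```
system-view
 segment-routing
  segment-list list1
   index 10 sid label 48101          # SIDs are stacked in ascending index order
   index 20 sid label 48102
  quit
  sr-te policy policy1 endpoint 10.1.1.9 color 100
   binding-sid 48200                 # optional; must fall within the SR local block
   candidate-path preference 100     # assumed syntax for binding the segment list
    segment-list list1
 quit
 commit
```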
Context
The route coloring process is as follows:
1. Configure a route-policy and set a specific color value for the desired route.
2. Apply the route-policy to a BGP peer or a VPN instance as an import or export
policy.
Procedure
Step 1 Configure a route-policy.
1. Run system-view
The system view is displayed.
2. Run route-policy route-policy-name { deny | permit } node node
A route-policy node is created and the route-policy view is displayed.
3. (Optional) Configure an if-match clause as a route-policy filter criterion. You
can add or modify the Color Extended Community only for a route-policy that
meets the filter criterion.
Usage Scenario
After an SR-MPLS TE Policy is configured, traffic needs to be steered into the SR-
MPLS TE Policy for forwarding. This process is called traffic steering. Currently, SR-
MPLS TE Policies can be used for various routes, such as BGP, static, BGP4+ 6PE,
BGP L3VPN, and EVPN routes.
Pre-configuration Tasks
Before configuring traffic steering, complete the following tasks:
● Configure a BGP route, static route, BGP4+ 6PE service, BGP L3VPN service,
BGP L3VPNv6 service, or EVPN service correctly.
● Configure an IP prefix list and a tunnel policy if you want to restrict the route
to be recursed to the specified SR-MPLS TE Policy.
Procedure
Step 1 Configure a tunnel policy.
Use either of the following procedures based on the traffic steering mode you
select:
● Color-based traffic steering
a. Run system-view
The system view is displayed.
b. Run tunnel-policy policy-name
A tunnel policy is created and the tunnel policy view is displayed.
c. (Optional) Run description description-information
Description information is configured for the tunnel policy.
d. Run tunnel select-seq sr-te-policy load-balance-number load-balance-
number unmix
The tunnel selection sequence and number of tunnels for load balancing
are configured.
e. Run commit
The configuration is committed.
● DSCP-based traffic steering
a. Run system-view
The system view is displayed.
b. Run segment-routing
The Segment Routing view is displayed.
c. Run sr-te-policy group group-value
An SR-MPLS TE Policy group is created and the SR-MPLS TE Policy group
view is displayed.
d. Run endpoint ipv4-address
An endpoint is configured for the SR-MPLS TE Policy group.
e. Run color color-value match dscp { ipv4 | ipv6 } { { dscp-value1 [ to
dscp-value2 ] } &<1-32> | default }
Mappings between the color values of SR-MPLS TE Policies in an SR-
MPLS TE Policy group and the DSCP values of packets to be steered into
the SR-MPLS TE Policies are configured.
Each SR-MPLS TE Policy in the SR-MPLS TE Policy group has its own color
attribute. To configure mappings between the color values of SR-MPLS TE
Policies in an SR-MPLS TE Policy group and the DSCP values of packets to
be steered into the SR-MPLS TE Policies, run the color match dscp
command. In this way, DSCP values, color values, and SR-MPLS TE
Policies are associated within the SR-MPLS TE Policy group. Based on the
association, packets with the specified DSCP values can be steered into
SR-MPLS TE Policies with the mapping color values.
When using the color match dscp command, pay attention to the
following points:
i. The mappings between color and DSCP values can be separately
configured for an IPv4 or IPv6 address family. However, in the same
IPv4 or IPv6 address family, each DSCP value can be associated with
only one SR-MPLS TE Policy. Only SR-MPLS TE Policies that are up can
be associated with DSCP values.
ii. You can run the color color-value match dscp { ipv4 | ipv6 } default
command to specify a default SR-MPLS TE Policy for an IPv4 or IPv6
address family. When a DSCP value is not associated with any SR-
MPLS TE Policy in an SR-MPLS TE Policy group, the packets with this
DSCP value are forwarded through the default SR-MPLS TE Policy. In
an SR-MPLS TE Policy group, only one default SR-MPLS TE Policy can
be specified for an IPv4 or IPv6 address family.
iii. In scenarios where no default SR-MPLS TE Policy is specified for an
IPv4 or IPv6 address family in an SR-MPLS TE Policy group:
1) If the mappings between color and DSCP values are configured
in the group and only some of the DSCP values are associated
with SR-MPLS TE Policies, the packets with DSCP values that are
not associated with SR-MPLS TE Policies are forwarded through
the SR-MPLS TE Policy associated with the smallest DSCP value
in the address family.
2) If no DSCP value is associated with an SR-MPLS TE Policy in the
group, for example, the mappings between color and DSCP
values are not configured or DSCP values are not successfully
associated with SR-MPLS TE Policies, the default SR-MPLS TE
Policy specified for the other type of address family (IPv4 or
IPv6) is used to forward packets. If no default SR-MPLS TE Policy
is specified for the other type of address family, packets are
forwarded through the SR-MPLS TE Policy associated with the
smallest DSCP value in the local address family.
f. Run quit
Return to the system view.
g. Run tunnel-policy policy-name
A tunnel policy is created and the tunnel policy view is displayed.
h. (Optional) Run description description-information
Description information is configured for the tunnel policy.
i. Run tunnel binding destination dest-ip-address sr-te-policy group sr-
te-policy-group-id [ ignore-destination-check ] [ down-switch ]
A tunnel binding policy is configured to bind the destination IP address
and SR-MPLS TE Policy group.
The ignore-destination-check keyword is used to disable the function to
check the consistency between the destination IP address specified using
destination dest-ip-address and the endpoint of the corresponding SR-
MPLS TE Policy.
j. Run commit
The configuration is committed.
● Configure a BGP L3VPNv6 service to be recursed to an SR-MPLS TE Policy.
For details about how to configure a BGP L3VPNv6 service, see Configuring a
Basic BGP/MPLS IPv6 VPN.
a. Run system-view
The system view is displayed.
b. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
c. Run ipv6-family
The VPN instance IPv6 address family view is displayed.
d. Run tnl-policy policy-name
A tunnel policy is applied to the VPN instance IPv6 address family.
e. Run default-color color-value
The default color used by the L3VPNv6 service when iterating the SR-
MPLS TE Policy tunnel is set.
If a route leaked into the VPN instance does not carry the color extended
community, the default color value is used during recursion to an SR-MPLS
TE Policy tunnel.
f. Run commit
The configuration is committed.
● Configure a BGP4+ 6PE service to be recursed to an SR-MPLS TE Policy.
For details about how to configure a BGP4+ 6PE service, see Configuring
BGP4+ 6PE.
a. Run system-view
The system view is displayed.
b. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
c. Run ipv6-family unicast
The BGP IPv6 unicast address family view is displayed.
d. Run peer ipv4-address enable
A 6PE peer is enabled.
e. Run peer ipv4-address tnl-policy tnl-policy-name
A tunnel policy is applied to the 6PE peer.
f. Run commit
The configuration is committed.
● Configure an EVPN service to be recursed to an SR-MPLS TE Policy.
For details about how to configure an EVPN service, see 3.2.5 Configuring
BD-EVPN Functions.
To apply a tunnel policy to an EVPN L3VPN instance, perform the following
steps:
a. Run system-view
The system view is displayed.
b. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
c. Run ipv4-family or ipv6-family
The VPN instance IPv4 or IPv6 address family view is displayed.
d. Run tnl-policy policy-name evpn
A tunnel policy is applied to the EVPN L3VPN instance.
e. Run default-color color-value evpn
The default color used by the EVPN L3VPN service when iterating the SR-
MPLS TE Policy tunnel is set.
If a route leaked into the VPN instance does not carry the color extended
community, the default color value is used during recursion to an SR-MPLS
TE Policy tunnel.
f. Run commit
The configuration is committed.
To apply a tunnel policy to a BD EVPN instance, perform the following steps:
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name bd-mode
The BD EVPN instance view is displayed.
c. Run tnl-policy policy-name
A tunnel policy is applied to the BD EVPN instance.
d. Run default-color color-value
The default color used by the EVPN service when iterating the SR-MPLS
TE Policy tunnel is set.
If a route leaked into the EVPN instance does not carry the color extended
community, the default color value is used during recursion to an SR-MPLS
TE Policy tunnel.
e. Run commit
The configuration is committed.
To apply a tunnel policy to an EVPN instance that works in EVPN VPWS mode,
perform the following steps:
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name vpws
The view of the EVPN instance that works in EVPN VPWS mode is
displayed.
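The two steering modes described in Step 1 can be sketched as follows. Policy names, the group ID, the endpoint address, and the color and DSCP values are illustrative:

```
# Color-based traffic steering
system-view
 tunnel-policy p1
  tunnel select-seq sr-te-policy load-balance-number 1 unmix
 quit

# DSCP-based traffic steering
 segment-routing
  sr-te-policy group 1
   endpoint 10.1.1.9
   color 100 match dscp ipv4 10 to 20   # DSCP 10-20 -> policy with color 100
   color 200 match dscp ipv4 default    # default policy for unmapped DSCP values
  quit
 quit
 tunnel-policy p2
  tunnel binding destination 10.1.1.9 sr-te-policy group 1
 quit
 commit
```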
----End
Prerequisites
SR-MPLS TE Policies have been configured.
Procedure
Step 1 Run the display sr-te policy [ endpoint ipv4-address color color-value | policy-
name name-value ] command to check SR-MPLS TE Policy details.
Step 2 Run the display sr-te policy statistics command to check SR-MPLS TE Policy
statistics.
Step 3 Run the display sr-te policy status { endpoint ipv4-address color color-value |
policy-name name-value } command to check SR-MPLS TE Policy status to
determine the reason why a specified SR-MPLS TE Policy cannot go up.
Step 4 Run the display sr-te policy last-down-reason [ endpoint ipv4-address color
color-value | policy-name name-value ] command to check records about events
where SR-MPLS TE Policies or segment lists in SR-MPLS TE Policies go down.
Step 5 Run the display ip vpn-instance vpn-instance-name tunnel-info nexthop
nexthopIpv4Addr command to check information about recursion of the routes
with the specified next hop in each address family of the current VPN
instance.
Step 6 Run the display evpn vpn-instance [ name vpn-instance-name ] tunnel-info
command to display information about the tunnels associated with the EVPN
instance.
Step 7 Run the display evpn vpn-instance name vpn-instance-name tunnel-info
nexthop nexthopIpv4Addr command to display information about a tunnel
associated with an EVPN with a specified next-hop address.
----End
Usage Scenario
An SR-MPLS TE Policy can either be manually configured on a forwarder through
CLI or NETCONF, or be delivered to a forwarder after being dynamically generated
by a protocol, such as BGP, on a controller. The dynamic mode facilitates network
deployment.
The candidate paths in an SR-MPLS TE Policy are identified using <Protocol-Origin,
originator, discriminator>. If SR-MPLS TE Policies generated in both modes exist,
the forwarder selects an SR-MPLS TE Policy based on the following rules in
descending order:
● Protocol-Origin: The default value of Protocol-Origin is 20 for a BGP-delivered
SR-MPLS TE Policy and is 30 for a manually configured SR-MPLS TE Policy. A
larger value indicates a higher preference.
● <ASN, node-address> tuple: originator of an SR-MPLS TE Policy's primary
path. ASN indicates an AS number, and node-address indicates the address of
the node where the SR-MPLS TE Policy is generated.
– The ASN and node-address values of a manually configured SR-MPLS TE
Policy are fixed at 0 and 0.0.0.0, respectively.
– For an SR-MPLS TE Policy delivered by a controller through BGP, the ASN
and node-address values are as follows:
i. If a BGP SR-MPLS TE Policy peer relationship is established between
the headend and controller, the ASN and node-address values are
the AS number of the controller and the BGP router ID, respectively.
ii. If the headend receives an SR-MPLS TE Policy through a multi-
segment EBGP peer relationship, the ASN and node-address values
are the AS number and BGP router ID of the original EBGP peer,
respectively.
iii. If the controller sends an SR-MPLS TE Policy to a reflector and the
headend receives the SR-MPLS TE Policy from the reflector through
the IBGP peer relationship, the ASN and node-address values are the
AS number and BGP originator ID of the controller, respectively.
For both ASN and node-address, a smaller value indicates a higher preference.
Pre-configuration Tasks
Before configuring an SR-MPLS TE Policy, complete the following tasks:
● Configure an IGP to ensure that all nodes can communicate with each other
at the network layer.
● Configure a routing protocol between the controller and forwarder to ensure
that they can communicate with each other.
Context
An SR-MPLS TE Policy may contain multiple candidate paths that are defined
using segment lists. Because adjacency and node SIDs are required for SR-MPLS TE
Policy configuration, you need to configure the IGP SR function. Node SIDs need
to be configured manually, whereas adjacency SIDs can either be generated
dynamically by an IGP or be configured manually.
● In scenarios where SR-MPLS TE Policies are configured manually, if dynamic
adjacency SIDs are used, the adjacency SIDs may change after an IGP restart.
To keep the SR-MPLS TE Policies up, you must adjust them manually. The
adjustment workload is heavy if a large number of SR-MPLS TE Policies are
configured manually. Therefore, you are advised to configure adjacency SIDs
manually and not to use adjacency SIDs generated dynamically.
● In scenarios where SR-MPLS TE Policies are delivered by a controller
dynamically, you are also advised to configure adjacency SIDs manually.
Although the controller can use BGP-LS to detect adjacency SID changes, the
adjacency SIDs dynamically generated by an IGP change randomly, causing
inconvenience in routine maintenance and troubleshooting.
Procedure
Step 1 Enable MPLS.
1. Run system-view
The system view is displayed.
adjacency SIDs manually. However, you can also use an IGP to generate adjacency
SIDs dynamically.
● If the IGP is IS-IS, configure IGP SR by referring to 1.2.7.3 Configuring Basic
SR-MPLS TE Functions.
● If the IGP is OSPF, configure IGP SR by referring to 1.2.8.3 Configuring Basic
SR-MPLS TE Functions.
----End
Context
The process for a controller to dynamically generate and deliver an SR-MPLS TE
Policy to a forwarder is as follows:
1. The controller collects information, such as network topology and label
information, through BGP-LS.
2. The controller and headend forwarder establish a BGP peer relationship of the
IPv4 SR-MPLS TE Policy address family.
3. The controller computes an SR-MPLS TE Policy and delivers it to the headend
forwarder through the BGP peer relationship. The headend forwarder then
generates SR-MPLS TE Policy entries.
To implement the preceding operations, you need to establish a BGP-LS peer
relationship and a BGP IPv4 SR-MPLS TE Policy peer relationship between the
controller and the specified forwarder.
Procedure
● Configure a BGP-LS peer relationship.
BGP-LS is used to collect network topology information, making topology
information collection simple and efficient. You must configure a BGP-LS peer
relationship between the controller and forwarder, so that topology
information can be properly reported from the forwarder to the controller.
This example provides the procedure for configuring BGP-LS on the forwarder.
The procedure on the controller is similar to that on the forwarder.
a. Run system-view
The system view is displayed.
b. Run bgp { as-number-plain | as-number-dot }
BGP is enabled and the BGP view is displayed.
c. Run peer { group-name | ipv4-address } as-number { as-number-plain |
as-number-dot }
A BGP peer is created.
recursed to an SR-MPLS TE Policy based on the color value and next-hop address
in the route.
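The BGP-LS peer configuration above can be sketched as follows. The peer address is illustrative, and the link-state-family unicast and ipv4-family sr-te-policy address-family views are assumptions based on typical VRP configurations rather than commands shown in this excerpt:

```
system-view
 bgp 100
  peer 10.2.2.2 as-number 100    # controller address, illustrative
  link-state-family unicast      # assumed BGP-LS address family view
   peer 10.2.2.2 enable
  quit
  ipv4-family sr-te-policy       # assumed IPv4 SR-MPLS TE Policy address family view
   peer 10.2.2.2 enable
  quit
 quit
 commit
```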
Context
The route coloring process is as follows:
1. Configure a route-policy and set a specific color value for the desired route.
2. Apply the route-policy to a BGP peer or a VPN instance as an import or export
policy.
Procedure
Step 1 Configure a route-policy.
1. Run system-view
The system view is displayed.
2. Run route-policy route-policy-name { deny | permit } node node
A route-policy node is created and the route-policy view is displayed.
3. (Optional) Configure an if-match clause as a route-policy filter criterion. You
can add or modify the Color Extended Community only for a route-policy that
meets the filter criterion.
For details about the configuration, see (Optional) Configuring an if-match
Clause.
4. Run apply extcommunity color color
The Color Extended Community is configured.
5. Run commit
The configuration is committed.
Step 2 Apply the route-policy.
● Apply the route-policy to a BGP IPv4 unicast peer.
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run peer { ipv4-address | group-name } as-number { as-number-plain |
as-number-dot }
A BGP peer is created.
d. Run ipv4-family unicast
The IPv4 unicast address family view is displayed.
e. Run peer { ipv4-address | group-name } route-policy route-policy-name
{ import | export }
A BGP import or export route-policy is configured.
f. Run commit
The configuration is committed.
c. Run ipv6-family
The VPN instance IPv6 address family view is displayed.
d. Run import route-policy policy-name
An import route-policy is configured for the VPN instance IPv6 address
family.
e. Run export route-policy policy-name
An export route-policy is configured for the VPN instance IPv6 address
family.
f. Run commit
The configuration is committed.
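Steps 1 and 2 can be sketched together as follows. The route-policy name, prefix list, peer address, and color value are illustrative, and the 0:100 format of the Color Extended Community is an assumption:

```
system-view
 route-policy color100 permit node 10
  if-match ip-prefix prefix1       # optional filter; hypothetical prefix list
  apply extcommunity color 0:100   # tag matched routes with color 100
 quit
 bgp 100
  ipv4-family unicast
   peer 10.2.2.2 route-policy color100 import   # assumes the peer already exists
 quit
 commit
```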
----End
Usage Scenario
After an SR-MPLS TE Policy is configured, traffic needs to be steered into the SR-
MPLS TE Policy for forwarding. This process is called traffic steering. Currently, SR-
MPLS TE Policies can be used for various routes, such as BGP, static, BGP4+ 6PE,
BGP L3VPN, and EVPN routes.
Pre-configuration Tasks
Before configuring traffic steering, complete the following tasks:
● Configure a BGP route, static route, BGP4+ 6PE service, BGP L3VPN service,
BGP L3VPNv6 service, or EVPN service correctly.
● Configure an IP prefix list and a tunnel policy if you want to restrict the route
to be recursed to the specified SR-MPLS TE Policy.
Procedure
Step 1 Configure a tunnel policy.
Use either of the following procedures based on the traffic steering mode you
select:
with SR-MPLS TE Policies, the packets with DSCP values that are
not associated with SR-MPLS TE Policies are forwarded through
the SR-MPLS TE Policy associated with the smallest DSCP value
in the address family.
2) If no DSCP value is associated with an SR-MPLS TE Policy in the
group, for example, the mappings between color and DSCP
values are not configured or DSCP values are not successfully
associated with SR-MPLS TE Policies, the default SR-MPLS TE
Policy specified for the other type of address family (IPv4 or
IPv6) is used to forward packets. If no default SR-MPLS TE Policy
is specified for the other type of address family, packets are
forwarded through the SR-MPLS TE Policy associated with the
smallest DSCP value in the local address family.
f. Run quit
Return to the system view.
g. Run tunnel-policy policy-name
A tunnel policy is created and the tunnel policy view is displayed.
h. (Optional) Run description description-information
Description information is configured for the tunnel policy.
i. Run tunnel binding destination dest-ip-address sr-te-policy group sr-
te-policy-group-id [ ignore-destination-check ] [ down-switch ]
A tunnel binding policy is configured to bind the destination IP address
and SR-MPLS TE Policy group.
The ignore-destination-check keyword is used to disable the function to
check the consistency between the destination IP address specified using
destination dest-ip-address and the endpoint of the corresponding SR-
MPLS TE Policy.
j. Run commit
The configuration is committed.
a. Run system-view
The system view is displayed.
b. Run ip route-static recursive-lookup tunnel [ ip-prefix ip-prefix-name ]
[ tunnel-policy policy-name ]
The function to recurse a static route to an SR-MPLS TE Policy for MPLS
forwarding is enabled.
c. Run commit
The configuration is committed.
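A minimal sketch of static route recursion to an SR-MPLS TE Policy; the prefix list and tunnel policy names are hypothetical:

```
system-view
 ip route-static recursive-lookup tunnel ip-prefix prefix1 tunnel-policy p1
 commit
```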
● Configure a BGP L3VPN service to be recursed to an SR-MPLS TE Policy.
For details about how to configure a BGP L3VPN service, see Configuring a
Basic BGP/MPLS IP VPN.
a. Run system-view
The system view is displayed.
b. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
c. Run ipv4-family
The VPN instance IPv4 address family view is displayed.
d. Run tnl-policy policy-name
A tunnel policy is applied to the VPN instance IPv4 address family.
e. Run default-color color-value
The default color used by the L3VPN service when iterating the SR-MPLS
TE Policy tunnel is set.
If a route leaked into the VPN instance does not carry the color extended
community, the default color value is used during recursion to an SR-MPLS
TE Policy tunnel.
f. Run commit
The configuration is committed.
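The BGP L3VPN steps above can be sketched as follows; the VPN instance name, tunnel policy name, and color value are illustrative:

```
system-view
 ip vpn-instance vpn1    # illustrative VPN instance
  ipv4-family
   tnl-policy p1         # tunnel policy that selects SR-MPLS TE Policies
   default-color 100     # used when a leaked route carries no color
 quit
 commit
```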
● Configure a BGP L3VPNv6 service to be recursed to an SR-MPLS TE Policy.
For details about how to configure a BGP L3VPNv6 service, see Configuring a
Basic BGP/MPLS IPv6 VPN.
a. Run system-view
The system view is displayed.
b. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
c. Run ipv6-family
The VPN instance IPv6 address family view is displayed.
d. Run tnl-policy policy-name
A tunnel policy is applied to the VPN instance IPv6 address family.
e. Run default-color color-value
The default color used by the L3VPNv6 service when iterating the SR-
MPLS TE Policy tunnel is set.
If a route leaked into the VPN instance does not carry the color extended
community, the default color value is used during recursion to an SR-MPLS
TE Policy tunnel.
f. Run commit
The configuration is committed.
● Configure a BGP4+ 6PE service to be recursed to an SR-MPLS TE Policy.
For details about how to configure a BGP4+ 6PE service, see Configuring
BGP4+ 6PE.
a. Run system-view
The system view is displayed.
b. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
c. Run ipv6-family unicast
The BGP IPv6 unicast address family view is displayed.
d. Run peer ipv4-address enable
A 6PE peer is enabled.
e. Run peer ipv4-address tnl-policy tnl-policy-name
A tunnel policy is applied to the 6PE peer.
f. Run commit
The configuration is committed.
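The BGP4+ 6PE steps above can be sketched as follows; the AS number, peer address, and tunnel policy name are illustrative:

```
system-view
 bgp 100
  ipv6-family unicast
   peer 10.2.2.2 enable          # 6PE peer, illustrative address
   peer 10.2.2.2 tnl-policy p1   # apply the tunnel policy to the 6PE peer
 quit
 commit
```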
● Configure an EVPN service to be recursed to an SR-MPLS TE Policy.
For details about how to configure an EVPN service, see 3.2.5 Configuring
BD-EVPN Functions.
To apply a tunnel policy to an EVPN L3VPN instance, perform the following
steps:
a. Run system-view
The system view is displayed.
b. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
c. Run ipv4-family or ipv6-family
The VPN instance IPv4 or IPv6 address family view is displayed.
d. Run tnl-policy policy-name evpn
A tunnel policy is applied to the EVPN L3VPN instance.
e. Run default-color color-value evpn
The default color used by the EVPN L3VPN service when iterating the SR-
MPLS TE Policy tunnel is set.
If a route leaked into the VPN instance does not carry the color extended
community, the default color value is used during recursion to an SR-MPLS
TE Policy tunnel.
f. Run commit
The configuration is committed.
To apply a tunnel policy to a BD EVPN instance, perform the following steps:
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name bd-mode
The BD EVPN instance view is displayed.
c. Run tnl-policy policy-name
A tunnel policy is applied to the BD EVPN instance.
d. Run default-color color-value
The default color used by the EVPN service when iterating the SR-MPLS
TE Policy tunnel is set.
If a route leaked into the EVPN instance does not carry the color extended
community, the default color value is used during recursion to an SR-MPLS
TE Policy tunnel.
e. Run commit
The configuration is committed.
To apply a tunnel policy to an EVPN instance that works in EVPN VPWS mode,
perform the following steps:
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name vpws
The view of the EVPN instance that works in EVPN VPWS mode is
displayed.
c. Run tnl-policy policy-name
A tunnel policy is applied to the EVPN instance that works in EVPN VPWS
mode.
d. Run default-color color-value
The default color used by the EVPN service for recursion to an SR-MPLS TE Policy is set.
After a remote route is leaked to the EVPN instance, if the route does not carry the color extended community attribute, the default color is used for recursion to an SR-MPLS TE Policy.
e. Run commit
The configuration is committed.
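A minimal sketch of these steps, assuming a hypothetical EVPN VPWS instance named evrf2, tunnel policy p1, and color 100 (the view prompts are illustrative):
<HUAWEI> system-view
[~HUAWEI] evpn vpn-instance evrf2 vpws
[*HUAWEI-evpn-instance-evrf2] tnl-policy p1
[*HUAWEI-evpn-instance-evrf2] default-color 100
[*HUAWEI-evpn-instance-evrf2] commit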
To apply a tunnel policy to a basic EVPN instance, perform the following
steps:
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name
The EVPN instance view is displayed.
c. Run tnl-policy policy-name
A tunnel policy is applied to the basic EVPN instance.
d. Run commit
The configuration is committed.
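A minimal sketch of these steps, assuming a hypothetical basic EVPN instance named evrf3 and tunnel policy p1 (the view prompts are illustrative):
<HUAWEI> system-view
[~HUAWEI] evpn vpn-instance evrf3
[*HUAWEI-evpn-instance-evrf3] tnl-policy p1
[*HUAWEI-evpn-instance-evrf3] commit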
----End
Prerequisites
SR-MPLS TE Policies have been configured.
Procedure
Step 1 Run the display sr-te policy [ endpoint ipv4-address color color-value | policy-
name name-value ] command to check SR-MPLS TE Policy details.
Step 2 Run the display sr-te policy statistics command to check SR-MPLS TE Policy
statistics.
Step 3 Run the display sr-te policy status { endpoint ipv4-address color color-value |
policy-name name-value } command to check SR-MPLS TE Policy status to
determine the reason why a specified SR-MPLS TE Policy cannot go up.
Step 4 Run the display sr-te policy last-down-reason [ endpoint ipv4-address color
color-value | policy-name name-value ] command to check records about events
where SR-MPLS TE Policies or segment lists in SR-MPLS TE Policies go down.
----End
Usage Scenario
SBFD for SR-MPLS TE Policy can quickly detect segment list faults. If all the
segment lists of a candidate path are faulty, SBFD triggers a candidate path
switchover to reduce impacts on services.
Pre-configuration Tasks
Before configuring SBFD for SR-MPLS TE Policy, complete the following tasks:
● Run the mpls lsr-id lsr-id command to configure an LSR ID and ensure that
the route from the peer end to the local address specified using lsr-id is
reachable.
Procedure
● Configure an SBFD initiator.
a. Run system-view
The system view is displayed.
b. Run bfd
BFD is enabled globally.
BFD can be configured only after this function is enabled globally using
the bfd command.
c. Run quit
Return to the system view.
d. Run sbfd
SBFD is enabled globally.
e. Run quit
Return to the system view.
f. Run segment-routing
SR is enabled globally and the Segment Routing view is displayed.
g. Configure SBFD for SR-MPLS TE Policy.
SBFD for SR-MPLS TE Policy supports both global configuration and
individual configuration. Select either of the following configuration
modes as needed:
▪ Global configuration
1) Run sr-te-policy seamless-bfd enable
SBFD is enabled for all SR-MPLS TE Policies.
2) (Optional) Run sr-te-policy seamless-bfd { min-rx-interval
receive-interval | min-tx-interval transmit-interval | detect-
multiplier multiplier-value } *
SBFD parameters are set for the SR-MPLS TE Policies.
▪ Individual configuration
1) Run sr-te policy policy-name
The SR-MPLS TE Policy view is displayed.
2) Run seamless-bfd { enable | disable }
SBFD is enabled or disabled for a single SR-MPLS TE Policy.
If an SR-MPLS TE Policy has both global configuration and individual
configuration, the individual configuration takes effect.
h. (Optional) Run sr-te-policy delete-delay delete-delay-value
A delay in deleting SR-MPLS TE Policies is set.
i. Run commit
The configuration is committed.
----End
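For reference, the SBFD initiator configuration above can be consolidated as follows. The global configuration mode is shown, and the SBFD parameter values (100 ms intervals, detection multiplier 3) are hypothetical examples:
<HUAWEI> system-view
[~HUAWEI] bfd
[*HUAWEI-bfd] quit
[*HUAWEI] sbfd
[*HUAWEI-sbfd] quit
[*HUAWEI] segment-routing
[*HUAWEI-segment-routing] sr-te-policy seamless-bfd enable
[*HUAWEI-segment-routing] sr-te-policy seamless-bfd min-rx-interval 100 min-tx-interval 100 detect-multiplier 3
[*HUAWEI-segment-routing] commit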
Usage Scenario
SBFD for SR-MPLS TE Policy can quickly detect segment list faults. If all the
segment lists of the primary path are faulty, SBFD triggers a candidate path
switchover to reduce impacts on services.
Pre-configuration Tasks
Before configuring HSB for SR-MPLS TE Policy, configure one or more SR-MPLS TE
Policies.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run segment-routing
SR is enabled globally and the Segment Routing view is displayed.
Step 3 Enable HSB for SR-MPLS TE Policy.
HSB for SR-MPLS TE Policy supports both global configuration and individual
configuration. Select either of the following configuration modes as needed:
● Global configuration
a. Run sr-te-policy backup hot-standby enable
HSB is enabled for all SR-MPLS TE Policies.
● Individual configuration
a. Run sr-te policy policy-name
The SR-MPLS TE Policy view is displayed.
b. Run backup hot-standby { enable | disable }
HSB is enabled or disabled for a single SR-MPLS TE Policy.
If an SR-MPLS TE Policy has both global configuration and individual
configuration, the individual configuration takes effect.
Step 4 Run commit
The configuration is committed.
----End
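For reference, enabling HSB in global configuration mode can be sketched as follows (for individual configuration, run the per-policy commands shown above instead):
<HUAWEI> system-view
[~HUAWEI] segment-routing
[*HUAWEI-segment-routing] sr-te-policy backup hot-standby enable
[*HUAWEI-segment-routing] commit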
Context
If the number of SR-MPLS TE Policies or segment lists exceeds the upper limit,
adding new SR-MPLS TE Policies or segment lists may affect existing services. To
prevent this, you can configure alarm thresholds for the number of SR-MPLS TE
Policies and the number of segment lists. In this way, when the number of SR-
MPLS TE Policies or segment lists reaches the specified threshold, an alarm is
generated to notify users to handle the problem.
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Context
SR-MPLS TE Policy traffic statistics collection enables you to collect statistics about
the traffic forwarded through a specified or all SR-MPLS TE Policies.
This function supports both global configuration and individual configuration. If
global configuration and individual configuration both exist, the individual
configuration takes effect.
Perform the following steps on the router:
Procedure
● Global configuration
a. Run system-view
The system view is displayed.
b. Run segment-routing
The Segment Routing view is displayed.
c. Run sr-te-policy traffic-statistics enable
Traffic statistics collection is enabled for all SR-MPLS TE Policies.
d. Run commit
The configuration is committed.
● Individual configuration
a. Run system-view
The system view is displayed.
b. Run segment-routing
The Segment Routing view is displayed.
c. Run sr-te policy policy-name
The SR-MPLS TE Policy view is displayed.
d. Run traffic-statistics (SR-MPLS TE Policy view) { enable | disable }
Traffic statistics collection is enabled for a specified SR-MPLS TE Policy.
e. Run commit
The configuration is committed.
----End
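For reference, the individual configuration mode can be sketched as follows, assuming a hypothetical SR-MPLS TE Policy named policy1 (the view prompts are illustrative):
<HUAWEI> system-view
[~HUAWEI] segment-routing
[*HUAWEI-segment-routing] sr-te policy policy1
[*HUAWEI-segment-routing-te-policy-policy1] traffic-statistics enable
[*HUAWEI-segment-routing-te-policy-policy1] commit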
Follow-up Procedure
To clear existing SR-MPLS TE Policy traffic statistics, run the reset sr-te policy
traffic-statistics [ endpoint ipv4-address color color-value | policy-name name-
value | binding-sid binding-sid ] command.
Context
In traditional BGP FlowSpec-based traffic optimization, traffic transmitted over
paths with the same source and destination nodes can be redirected to only one
path, which does not achieve accurate traffic steering. With the function to
redirect a public IPv4 BGP FlowSpec route to an SR-MPLS TE Policy, a device can
redirect traffic transmitted over paths with the same source and destination nodes
to different SR-MPLS TE Policies.
Prerequisites
Before redirecting a public IPv4 BGP FlowSpec route to an SR-MPLS TE Policy in
manual configuration mode, complete the following tasks:
Context
If no controller is deployed, perform the following operations to manually redirect
a public IPv4 BGP FlowSpec route to an SR-MPLS TE Policy:
Procedure
● Configure a BGP FlowSpec route.
a. Run system-view
The system view is displayed.
b. Run flow-route flowroute-name
A static BGP FlowSpec route is created, and the Flow-Route view is
displayed.
c. (Optional) Configure if-match clauses. For details, see Configuring Static
BGP FlowSpec.
d. Run apply redirect ip redirect-ip-rt color colorvalue
The device is enabled to precisely redirect the traffic that matches the if-
match clauses to the SR-MPLS TE Policy.
To enable the device to process the redirection next hop attribute that is received
from a peer and configured using the apply redirect ip redirect-ip-rt color
colorvalue command, run the peer redirect ip command.
e. Run commit
The configuration is committed.
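For reference, steps a through e can be sketched as follows. The flow route name (fr1), redirection address (3.3.3.9), and color (100) are hypothetical values, and if-match clauses are configured as required:
<HUAWEI> system-view
[~HUAWEI] flow-route fr1
[*HUAWEI-flow-route-fr1] apply redirect ip 3.3.3.9 color 100
[*HUAWEI-flow-route-fr1] commit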
● (Optional) Configure a BGP peer relationship in the BGP-Flow address family.
Establish a BGP FlowSpec peer relationship between the ingress of the SR-
MPLS TE Policy and the device on which the BGP FlowSpec route is manually
generated. If the BGP FlowSpec route is manually generated on the ingress of
the SR-MPLS TE Policy, skip this step.
a. Run system-view
Prerequisites
Before redirecting a public IPv4 BGP FlowSpec route to an SR-MPLS TE Policy in
controller-based dynamic delivery mode, complete the following tasks:
● Configure a BGP peer relationship.
● Use a controller to dynamically deliver an SR-MPLS TE Policy.
Context
If a controller is deployed, the controller can be used to configure a public IPv4
BGP FlowSpec route to be redirected to an SR-MPLS TE Policy. The configuration
process is as follows:
1. Establish a BGP-LS peer relationship between the controller and forwarder,
enabling the controller to collect network information, such as topology and
label information, through BGP-LS.
Procedure
Step 1 Establish BGP-LS and BGP IPv4 SR-MPLS TE Policy peer relationships. For details,
see 1.2.33.2 Configuring BGP Peer Relationships Between the Controller and
Forwarder.
Step 2 Establish a BGP-Flow address family peer relationship.
1. Run system-view
The system view is displayed.
2. Run bgp as-number
The BGP view is displayed.
3. Run ipv4-family flow
The BGP-Flow address family view is displayed.
4. Run peer ipv4-address enable
The BGP FlowSpec peer relationship is enabled.
After the BGP FlowSpec peer relationship is established in the BGP-Flow
address family view, the manually generated BGP FlowSpec route is
automatically imported to the BGP-Flow routing table and then sent to each
peer.
5. Run peer ipv4-address redirect ip
The device is enabled to process the BGP FlowSpec route's redirection next
hop attribute that is received from a peer and configured using the apply
redirect ip command.
6. Run redirect ip recursive-lookup tunnel [ tunnel-selector tunnel-selector-
name ]
The device is enabled to recurse the received routes that carry the redirection
next hop attribute to tunnels.
7. Run commit
The configuration is committed.
----End
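For reference, Step 2 can be consolidated as follows. The AS number (100) and peer address (2.2.2.2) are hypothetical values, and the view prompts are illustrative:
<HUAWEI> system-view
[~HUAWEI] bgp 100
[*HUAWEI-bgp] ipv4-family flow
[*HUAWEI-bgp-af-flow] peer 2.2.2.2 enable
[*HUAWEI-bgp-af-flow] peer 2.2.2.2 redirect ip
[*HUAWEI-bgp-af-flow] redirect ip recursive-lookup tunnel
[*HUAWEI-bgp-af-flow] commit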
Networking Requirements
On the network shown in Figure 1-115:
● CE1 and CE2 belong to vpna.
● The VPN-target attribute of vpna is 111:1.
L3VPN services recurse to an IS-IS SR-MPLS BE tunnel to allow users within the
same VPN to securely access each other. Because multiple links exist between the
PEs on the public network, traffic needs to be load-balanced on the public network.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When configuring L3VPN recursion to an IS-IS SR-MPLS BE tunnel, note the
following:
After an interface that connects a PE to a CE is bound to a VPN instance, Layer 3
features on this interface, such as the IP address and routing protocol, are
deleted and must be reconfigured if required.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IS-IS on the backbone network to ensure that PEs interwork with each
other.
2. Configure MPLS and Segment Routing on the backbone network and
establish SR LSPs. Enable TI-LFA FRR.
3. Configure IPv4 address family VPN instances on the PEs and bind each
interface that connects a PE to a CE to a VPN instance.
4. Enable Multi-protocol Extensions for Interior Border Gateway Protocol (MP-
IBGP) on PEs to exchange VPN routing information.
5. Configure External Border Gateway Protocol (EBGP) on the CEs and PEs to
exchange VPN routing information.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the PEs and P
● vpna's VPN-target and RD
● SRGB ranges on the PEs and P
Procedure
Step 1 Configure IP addresses for interfaces.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip address 172.16.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 172.19.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] ip address 172.17.1.2 24
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 172.18.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 172.19.1.1 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] isis enable 1
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] commit
[~P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
Step 4 Configure Segment Routing on the backbone network and enable TI-LFA FRR.
# Configure PE1.
[~PE1] segment-routing
[*PE1-segment-routing] quit
[*PE1] commit
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] segment-routing global-block 16000 23999
The available SRGB range varies according to the device. The range used here is only an example.
[*PE1-isis-1] frr
[*PE1-isis-1-frr] loop-free-alternate level-1
[*PE1-isis-1-frr] ti-lfa level-1
[*PE1-isis-1-frr] quit
[*PE1-isis-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis prefix-sid index 10
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] segment-routing global-block 16000 23999
The available SRGB range varies according to the device. The range used here is only an example.
[*P1-isis-1] frr
[*P1-isis-1-frr] loop-free-alternate level-1
[*P1-isis-1-frr] ti-lfa level-1
[*P1-isis-1-frr] quit
[*P1-isis-1] quit
[*P1] interface loopback 1
[*P1-LoopBack1] isis prefix-sid index 20
[*P1-LoopBack1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] segment-routing global-block 16000 23999
The available SRGB range varies according to the device. The range used here is only an example.
[*PE2-isis-1] frr
[*PE2-isis-1-frr] loop-free-alternate level-1
[*PE2-isis-1-frr] ti-lfa level-1
[*PE2-isis-1-frr] quit
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis prefix-sid index 30
[*PE2-LoopBack1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] segment-routing global-block 16000 23999
The available SRGB range varies according to the device. The range used here is only an example.
[*P2-isis-1] frr
[*P2-isis-1-frr] loop-free-alternate level-1
[*P2-isis-1-frr] ti-lfa level-1
[*P2-isis-1-frr] quit
[*P2-isis-1] quit
[*P2] interface loopback 1
[*P2-LoopBack1] isis prefix-sid index 40
[*P2-LoopBack1] quit
[*P2] commit
# After completing the configuration, run the display tunnel-info all command
on the PEs. The command output shows that SR LSPs have been established between
the PEs. The following uses the command output on PE1 as an example.
[~PE1] display tunnel-info all
Tunnel ID Type Destination Status
----------------------------------------------------------------------------------------
0x000000002900000003 srbe-lsp 4.4.4.9 UP
0x000000002900000004 srbe-lsp 2.2.2.9 UP
0x000000002900000005 srbe-lsp 3.3.3.9 UP
--- FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.9/32 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 5/6/12 ms
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
# After completing the configuration, run the display bgp peer or display bgp
vpnv4 all peer command on the PEs. The command output shows that a BGP peer
relationship has been established between the PEs and is in the Established state.
The following uses the command output on PE1 as an example.
[~PE1] display bgp peer
BGP local router ID : 1.1.1.9
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 2 6 0 00:00:12 Established 0
[~PE1] display bgp vpnv4 all peer
BGP local router ID : 1.1.1.9
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 12 18 0 00:09:38 Established 0
Step 6 Configure VPN instances in the IPv4 address family on each PE and connect each
PE to a CE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet1/0/0] ip address 10.1.1.2 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet1/0/0] ip address 10.3.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
If a PE has multiple interfaces bound to the same VPN instance, you must specify a source
IP address using -a source-ip-address in the ping -vpn-instance vpn-instance-
name -a source-ip-address dest-ip-address command when pinging the CE connected
to the remote PE. Otherwise, the ping fails.
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 2
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] commit
The configuration of CE2 is similar to that of CE1 and is not provided here.
For details, see "Configuration Files".
# Configure PE1.
[~PE1] bgp 100
The procedure for configuring PE2 is similar to the procedure for configuring PE1, and the
detailed configuration is not provided here. For details, see "Configuration Files".
After completing the configuration, run the display bgp vpnv4 vpn-instance peer
command on the PEs. The command output shows that the BGP peer relationships
between the PEs and CEs have been established and are in the Established state.
The following uses the peer relationship between PE1 and CE1 as an example.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 1.1.1.9
Local AS number : 100
CEs within the same VPN can ping each other. For example, CE1 successfully pings
CE2 at 10.22.2.2.
[~CE1] ping -a 10.11.1.1 10.22.2.2
PING 10.22.2.2: 56 data bytes, press CTRL_C to break
Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
--- 10.22.2.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 34/48/72 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy policy1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
segment-routing global-block 16000 23999
frr
loop-free-alternate level-1
ti-lfa level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.18.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.9 enable
#
ipv4-family vpn-instance vpna
peer 10.1.1.1 as-number 65410
#
tunnel-policy policy1
tunnel select-seq sr-lsp load-balance-number 2
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
segment-routing global-block 16000 23999
frr
loop-free-alternate level-1
ti-lfa level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
isis prefix-sid index 20
#
return
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 172.17.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
isis prefix-sid index 30
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpna
peer 10.2.1.1 as-number 65420
#
tunnel-policy policy1
tunnel select-seq sr-lsp load-balance-number 2
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
segment-routing global-block 16000 23999
frr
loop-free-alternate level-1
ti-lfa level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.18.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.19.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
isis prefix-sid index 40
#
return
Networking Requirements
In Figure 1-116:
● CE1 and CE2 belong to vpna.
● The VPN-target attribute of vpna is 111:1.
L3VPN services recurse to an OSPF SR-MPLS BE tunnel to allow users within the
same VPN to securely access each other. Because multiple links exist between the
PEs on the public network, traffic needs to be load-balanced on the public network.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When configuring L3VPN recursion to an OSPF SR-MPLS BE tunnel, note the
following:
After an interface that connects a PE to a CE is bound to a VPN instance, Layer 3
features on this interface, such as the IP address and routing protocol, are
deleted and must be reconfigured if required.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable OSPF on the backbone network to ensure that PEs interwork with
each other.
2. Configure MPLS and segment routing on the backbone network and establish
SR LSPs. Enable TI-LFA FRR.
3. Configure IPv4 address family VPN instances on the PEs and bind each
interface that connects a PE to a CE to a VPN instance.
4. Enable Multi-protocol Extensions for Interior Border Gateway Protocol (MP-
IBGP) on PEs to exchange VPN routing information.
5. Configure External Border Gateway Protocol (EBGP) on the CEs and PEs to
exchange VPN routing information.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the PEs and P
● vpna's VPN-target and RD
● SRGB ranges on the PEs and P
Procedure
Step 1 Configure IP addresses for interfaces.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip address 172.16.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 172.16.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 172.17.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 172.19.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] ip address 172.17.1.2 24
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 172.18.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 172.19.1.1 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
# Configure P1.
[~P1] ospf 1
[*P1-ospf-1] opaque-capability enable
[*P1-ospf-1] area 0
[*P1-ospf-1-area-0.0.0.0] quit
[*P1-ospf-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ospf enable 1 area 0
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ospf enable 1 area 0
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ospf enable 1 area 0
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] ospf 1
[*PE2-ospf-1] opaque-capability enable
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] quit
[*PE2-ospf-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ospf enable 1 area 0
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] ospf enable 1 area 0
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ospf enable 1 area 0
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] ospf 1
[*P2-ospf-1] opaque-capability enable
[*P2-ospf-1] area 0
[*P2-ospf-1-area-0.0.0.0] quit
[*P2-ospf-1] quit
[*P2] commit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
Step 4 Configure segment routing on the backbone network and enable TI-LFA FRR.
# Configure PE1.
[~PE1] segment-routing
[*PE1-segment-routing] quit
[*PE1] commit
[~PE1] ospf 1
[*PE1-ospf-1] segment-routing mpls
[*PE1-ospf-1] segment-routing global-block 16000 23999
The available SRGB range varies according to the device. The range used here is only an example.
[*PE1-ospf-1] frr
[*PE1-ospf-1-frr] loop-free-alternate
[*PE1-ospf-1-frr] ti-lfa enable
[*PE1-ospf-1-frr] quit
[*PE1-ospf-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] ospf prefix-sid index 10
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] quit
[*P1] commit
[~P1] ospf 1
[*P1-ospf-1] segment-routing mpls
[*P1-ospf-1] segment-routing global-block 16000 23999
The available SRGB range varies according to the device. The range used here is only an example.
[*P1-ospf-1] frr
[*P1-ospf-1-frr] loop-free-alternate
[*P1-ospf-1-frr] ti-lfa enable
[*P1-ospf-1-frr] quit
[*P1-ospf-1] quit
[*P1] interface loopback 1
[*P1-LoopBack1] ospf prefix-sid index 20
[*P1-LoopBack1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] ospf 1
[*PE2-ospf-1] segment-routing mpls
[*PE2-ospf-1] segment-routing global-block 16000 23999
The available SRGB range varies according to the device. The range used here is only an example.
[*PE2-ospf-1] frr
[*PE2-ospf-1-frr] loop-free-alternate
[*PE2-ospf-1-frr] ti-lfa enable
[*PE2-ospf-1-frr] quit
[*PE2-ospf-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] ospf prefix-sid index 30
[*PE2-LoopBack1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] quit
[*P2] commit
[~P2] ospf 1
[*P2-ospf-1] segment-routing mpls
[*P2-ospf-1] segment-routing global-block 16000 23999
The available SRGB range varies according to the device. The range used here is only an example.
[*P2-ospf-1] frr
[*P2-ospf-1-frr] loop-free-alternate
[*P2-ospf-1-frr] ti-lfa enable
[*P2-ospf-1-frr] quit
[*P2-ospf-1] quit
[*P2] interface loopback 1
[*P2-LoopBack1] ospf prefix-sid index 40
[*P2-LoopBack1] quit
[*P2] commit
# After completing the configuration, run the display tunnel-info all command
on the PEs to check that SR LSPs have been established between them. The
following example uses the command output on PE1.
[~PE1] display tunnel-info all
Tunnel ID Type Destination Status
----------------------------------------------------------------------------------------
0x000000002900000003 srbe-lsp 4.4.4.9 UP
0x000000002900000004 srbe-lsp 2.2.2.9 UP
0x000000002900000005 srbe-lsp 3.3.3.9 UP
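The labels carried in these SR LSPs can be derived from the SRGB and the configured prefix SID index: a node advertises label = SRGB base + index, so with SRGB 16000-23999, PE1's index 10 yields label 16010 and PE2's index 30 yields label 16030. A minimal Python sketch of this mapping (illustrative only, not a device API):

```python
def prefix_sid_label(srgb_base: int, srgb_end: int, index: int) -> int:
    """Compute the absolute MPLS label for a prefix SID index within an SRGB."""
    label = srgb_base + index
    if not srgb_base <= label <= srgb_end:
        # An index that maps outside the SRGB cannot be installed.
        raise ValueError(f"index {index} falls outside SRGB {srgb_base}-{srgb_end}")
    return label

# With the SRGB 16000-23999 used in this example:
print(prefix_sid_label(16000, 23999, 10))  # PE1 -> 16010
print(prefix_sid_label(16000, 23999, 30))  # PE2 -> 16030
```

This is also why the SRGB range must be planned consistently across the IGP domain: every node resolves the same index against its own SRGB base.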
--- FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.9/32 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 3/54/256 ms
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
# After completing the configuration, run the display bgp peer or display bgp
vpnv4 all peer command on the PEs to check that a BGP peer relationship has
been established between the PEs and is in the Established state. The
following example uses the command output on PE1.
[~PE1] display bgp peer
BGP local router ID : 1.1.1.9
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 5 5 0 00:00:12 Established 0
[~PE1] display bgp vpnv4 all peer
BGP local router ID : 1.1.1.9
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 12 18 0 00:09:38 Established 1
Step 6 Configure VPN instances in the IPv4 address family on each PE and connect each
PE to a CE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet1/0/0] ip address 10.1.1.2 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet1/0/0] ip address 10.3.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
If a PE has multiple interfaces bound to the same VPN instance, you must specify a source
IP address using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-
name -a source-ip-address dest-ip-address command when pinging the CE connected to the
remote PE. Otherwise, the ping fails.
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 2
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
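The tunnel policy above tells PE2 to prefer SR LSPs and to load-balance traffic across up to two of them. Conceptually, a select-seq policy walks the configured tunnel types in order and picks up to load-balance-number tunnels of the first type that has any available. A rough Python sketch of that selection logic (the tunnel table and names are made up for illustration):

```python
# Hypothetical table of available tunnels per type; not a vendor API.
available = {"sr-lsp": ["lsp1", "lsp2", "lsp3"], "ldp": ["ldp1"]}

def select_tunnels(select_seq, load_balance_number):
    """Walk tunnel types in configured order; return up to N tunnels
    of the first type that has any tunnels available."""
    for ttype in select_seq:
        if available.get(ttype):
            return available[ttype][:load_balance_number]
    return []

print(select_tunnels(["sr-lsp"], 2))  # ['lsp1', 'lsp2']
```

With load-balance-number 2, VPN traffic is shared across two SR LSPs when at least two are up.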
The configuration of CE2 is similar to that of CE1 and is not provided here.
For details, see "Configuration Files".
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
The procedure for configuring PE2 is similar to that for PE1 and is not provided here.
For details, see "Configuration Files".
After completing the configuration, run the display bgp vpnv4 vpn-instance peer
command on the PEs to check that BGP peer relationships between the PEs and CEs
have been established and are in the Established state.
In the following example, the peer relationship between PE1 and CE1 is used.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 1.1.1.9
Local AS number : 100
CEs within the same VPN can ping each other. For example, CE1 successfully pings
CE2 at 10.22.2.2.
[~CE1] ping -a 10.11.1.1 10.22.2.2
PING 10.22.2.2: 56 data bytes, press CTRL_C to break
Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=252 time=428 ms
Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=252 time=4 ms
Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=252 time=5 ms
Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=252 time=3 ms
Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=252 time=4 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy policy1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
#
ospf 1
opaque-capability enable
segment-routing mpls
segment-routing global-block 16000 23999
frr
loop-free-alternate
ti-lfa enable
area 0.0.0.0
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.18.1.1 255.255.255.0
ospf enable 1 area 0
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
undo shutdown
route-distinguisher 200:1
tnl-policy policy1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 3.3.3.9
#
mpls
#
segment-routing
#
ospf 1
opaque-capability enable
segment-routing mpls
segment-routing global-block 16000 23999
frr
loop-free-alternate
ti-lfa enable
area 0.0.0.0
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.19.1.2 255.255.255.0
ospf enable 1 area 0
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 172.17.1.2 255.255.255.0
ospf enable 1 area 0
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
ospf enable 1 area 0
ospf prefix-sid index 30
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpna
peer 10.2.1.1 as-number 65420
#
tunnel-policy policy1
tunnel select-seq sr-lsp load-balance-number 2
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.9
#
mpls
#
segment-routing
#
ospf 1
opaque-capability enable
segment-routing mpls
segment-routing global-block 16000 23999
frr
loop-free-alternate
ti-lfa enable
area 0.0.0.0
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.18.1.2 255.255.255.0
ospf enable 1 area 0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.19.1.1 255.255.255.0
ospf enable 1 area 0
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
ospf enable 1 area 0
ospf prefix-sid index 40
#
return
Networking Requirements
If an Internet user sends packets to a carrier network that performs IP forwarding
to access the Internet, the core devices on the forwarding path must learn a large
number of Internet routes. This imposes a heavy load on the core devices and
degrades their performance. To address this problem, a user access device can be
configured to recurse non-labeled public BGP or static routes to a segment
routing (SR) tunnel, so that user packets travel through the SR tunnel to access
the Internet. Recursing routes to the SR tunnel relieves the core carrier devices
of the resulting performance, load, and service transmission problems.
In Figure 1-117, non-labeled public BGP routes are configured to recurse to an
SR-MPLS BE tunnel.
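Conceptually, the recursion works as follows: when a non-labeled BGP route's next hop matches the destination of an up SR-MPLS BE tunnel, traffic for that route is forwarded over the tunnel instead of hop by hop over IP, so core devices only switch MPLS labels. A simplified Python sketch of this lookup (data structures and names are illustrative, not a vendor API):

```python
# Hypothetical tables; the prefix and tunnel entries are invented for illustration.
bgp_routes = {"198.51.100.0/24": {"next_hop": "3.3.3.9"}}       # non-labeled public BGP route
sr_tunnels = {"3.3.3.9": {"type": "srbe-lsp", "status": "UP"}}  # SR-MPLS BE tunnel table

def resolve(prefix: str):
    """Recurse a BGP route to an SR tunnel keyed by its next hop, if one is up."""
    nh = bgp_routes[prefix]["next_hop"]
    tunnel = sr_tunnels.get(nh)
    if tunnel and tunnel["status"] == "UP":
        return ("tunnel", nh, tunnel["type"])  # forward over the SR tunnel
    return ("ip", nh, None)                    # fall back to plain IP forwarding

print(resolve("198.51.100.0/24"))  # ('tunnel', '3.3.3.9', 'srbe-lsp')
```

The tunnel-selector configured later in this example plays the role of this lookup: it decides which routes are allowed to recurse to which tunnel policy.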
Precautions
When configuring non-labeled public BGP route recursion to an SR-MPLS BE
tunnel, note the following:
When establishing a peer, if the specified IP address of the peer is a loopback
interface address or a sub-interface address, you need to run the peer connect-
interface command on the two ends of the peer to ensure that the two ends are
correctly connected.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IS-IS on the backbone network to ensure that PEs interwork with each
other.
2. Configure MPLS and segment routing on the backbone network to establish
SR LSPs.
3. Enable IBGP on PEs to exchange VPN routing information.
4. Enable PEs to recurse non-labeled public BGP routes to the SR-MPLS BE
tunnel.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the PEs and P
● SRGB ranges on the PEs and P
Procedure
Step 1 Configure IP addresses for interfaces.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 172.16.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure the P.
<HUAWEI> system-view
[~HUAWEI] sysname P
[*HUAWEI] commit
[~P] interface loopback 1
[*P-LoopBack1] ip address 2.2.2.9 32
[*P-LoopBack1] quit
[*P] interface gigabitethernet1/0/0
[*P-GigabitEthernet1/0/0] ip address 172.16.1.2 24
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet2/0/0
[*P-GigabitEthernet2/0/0] ip address 172.17.1.2 24
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 172.17.1.1 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] commit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
[*P-isis-1] network-entity 10.0000.0000.0002.00
[*P-isis-1] quit
[*P] commit
[~P] interface loopback 1
[*P-LoopBack1] isis enable 1
[*P-LoopBack1] quit
[*P] interface gigabitethernet1/0/0
[*P-GigabitEthernet1/0/0] isis enable 1
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet2/0/0
[*P-GigabitEthernet2/0/0] isis enable 1
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure the P.
[~P] mpls lsr-id 2.2.2.9
[*P] mpls
[*P-mpls] commit
[~P-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure PE1.
[~PE1] segment-routing
[*PE1-segment-routing] tunnel-prefer segment-routing
[*PE1-segment-routing] quit
[*PE1] commit
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] segment-routing global-block 160000 161000
[*PE1-isis-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis prefix-sid index 10
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure the P.
[~P] segment-routing
[*P-segment-routing] tunnel-prefer segment-routing
[*P-segment-routing] quit
[*P] commit
[~P] isis 1
[*P-isis-1] cost-style wide
[*P-isis-1] segment-routing mpls
[*P-isis-1] segment-routing global-block 160000 161000
[*P-isis-1] quit
[*P] interface loopback 1
[*P-LoopBack1] isis prefix-sid index 20
[*P-LoopBack1] quit
[*P] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] tunnel-prefer segment-routing
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] segment-routing global-block 160000 161000
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis prefix-sid index 30
[*PE2-LoopBack1] quit
[*PE2] commit
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 3.3.3.9 as-number 100
[*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
[*PE1-bgp] commit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] commit
[~PE2-bgp] quit
After completing the configuration, run the display bgp peer command on the
PEs to check that the BGP peer relationship between the PEs has been established
and is in the Established state. The following example uses the command output on PE1.
[~PE1] display bgp peer
BGP local router ID : 1.1.1.9
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 2 6 0 00:00:12 Established 0
Step 6 Enable PEs to recurse non-labeled public BGP routes to the SR-MPLS BE tunnel.
# Configure PE1.
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 1
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
[~PE1] tunnel-selector s1 permit node 10
[*PE1-tunnel-selector] apply tunnel-policy p1
[*PE1-tunnel-selector] quit
[*PE1] bgp 100
[*PE1-bgp] unicast-route recursive-lookup tunnel tunnel-selector s1
[*PE1-bgp] commit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 1
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
[~PE2] tunnel-selector s1 permit node 10
[*PE2-tunnel-selector] apply tunnel-policy p1
[*PE2-tunnel-selector] quit
[*PE2] bgp 100
[*PE2-bgp] unicast-route recursive-lookup tunnel tunnel-selector s1
[*PE2-bgp] commit
[~PE2-bgp] quit
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
segment-routing global-block 160000 161000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
unicast-route recursive-lookup tunnel tunnel-selector s1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
tunnel-policy p1
tunnel select-seq sr-lsp load-balance-number 1
#
tunnel-selector s1 permit node 10
apply tunnel-policy p1
#
return
● P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.9
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
segment-routing global-block 160000 161000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
isis prefix-sid index 20
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 3.3.3.9
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
segment-routing mpls
segment-routing global-block 160000 161000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
isis prefix-sid index 30
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
unicast-route recursive-lookup tunnel tunnel-selector s1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
tunnel-policy p1
tunnel select-seq sr-lsp load-balance-number 1
#
tunnel-selector s1 permit node 10
apply tunnel-policy p1
#
return
Networking Requirements
In Figure 1-118, an SR domain is established between PE1 and the P, and an LDP
domain resides between the P and PE2. PE1 and PE2 need to access each other.
Precautions
When configuring IS-IS SR to communicate with LDP, note that a device in the SR
domain must be able to map LDP prefix information to SR SIDs.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IS-IS on the backbone network to ensure that PEs interwork with each
other.
2. Enable MPLS on the backbone network. Configure segment routing to
establish an SR LSP from PE1 to the P. Configure LDP to establish an LDP LSP
from the P to PE2.
3. Configure the mapping server function on the P to map LDP prefix
information to SR SIDs.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the PEs and P
● SRGB ranges on PE1 and the P
Procedure
Step 1 Configure IP addresses for interfaces.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 172.16.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure the P.
<HUAWEI> system-view
[~HUAWEI] sysname P
[*HUAWEI] commit
[~P] interface loopback 1
[*P-LoopBack1] ip address 2.2.2.9 32
[*P-LoopBack1] quit
[*P] interface gigabitethernet1/0/0
[*P-GigabitEthernet1/0/0] ip address 172.16.1.2 24
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet2/0/0
[*P-GigabitEthernet2/0/0] ip address 172.17.1.1 24
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 172.17.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
[*P-isis-1] network-entity 10.0000.0000.0002.00
[*P-isis-1] quit
[*P] commit
[~P] interface loopback 1
[*P-LoopBack1] isis enable 1
[*P-LoopBack1] quit
[*P] interface gigabitethernet1/0/0
[*P-GigabitEthernet1/0/0] isis enable 1
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet2/0/0
[*P-GigabitEthernet2/0/0] isis enable 1
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
Step 3 Configure the basic MPLS functions on the MPLS backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure the P.
[~P] mpls lsr-id 2.2.2.9
[*P] mpls
[*P-mpls] commit
[~P-mpls] quit
[*P] interface gigabitethernet2/0/0
[*P-GigabitEthernet2/0/0] mpls
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] mpls
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
Step 4 Configure segment routing between PE1 and the P on the backbone network.
# Configure PE1.
[~PE1] segment-routing
[*PE1-segment-routing] tunnel-prefer segment-routing
[*PE1-segment-routing] quit
[*PE1] commit
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] segment-routing global-block 160000 161000
[*PE1-isis-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis prefix-sid index 10
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure the P.
[~P] segment-routing
[*P-segment-routing] tunnel-prefer segment-routing
[*P-segment-routing] quit
[*P] commit
[~P] isis 1
[*P-isis-1] cost-style wide
[*P-isis-1] segment-routing mpls
[*P-isis-1] segment-routing global-block 160000 161000
[*P-isis-1] quit
[*P] interface loopback 1
[*P-LoopBack1] isis prefix-sid index 20
[*P-LoopBack1] quit
[*P] commit
# Configure PE2.
[~PE2] mpls ldp
[*PE2-mpls-ldp] commit
[~PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] mpls ldp
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
Step 6 Configure the mapping server function on the P, and configure the P to allow SR
and LDP to communicate with each other.
# Configure the P.
[~P] segment-routing
[*P-segment-routing] mapping-server prefix-sid-mapping 3.3.3.9 32 22
[*P-segment-routing] quit
[*P] commit
[~P] isis 1
[*P-isis-1] segment-routing mapping-server send
[*P-isis-1] quit
[*P] commit
[~P] mpls
[*P-mpls] lsp-trigger segment-routing-interworking best-effort host
[*P-mpls] commit
[~P-mpls] quit
Total information(s): 1
The command output shows that a forwarding entry exists for the route to 3.3.3.9/32
and that its outbound information maps to LDP, indicating that the P has
successfully stitched the SR LSP to the LDP LSP.
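The stitching performed by the P can be pictured as a label swap driven by the mapping server entry: an incoming SR label equal to SRGB base + mapped index (160000 + 22 = 160022 here) is swapped for the LDP label that PE2 advertised for 3.3.3.9/32. The sketch below illustrates this; the LDP label value is invented for the example and is not from the configuration above:

```python
SRGB_BASE = 160000                 # from the example configuration
PREFIX_SID_MAP = {"3.3.3.9": 22}   # mapping-server prefix-sid-mapping 3.3.3.9 32 22
LDP_LABELS = {"3.3.3.9": 32770}    # hypothetical LDP label learned from PE2

def stitch(incoming_label: int):
    """On the P, swap an incoming SR label for the LDP label of the mapped prefix."""
    for prefix, index in PREFIX_SID_MAP.items():
        if incoming_label == SRGB_BASE + index:
            return ("swap", LDP_LABELS[prefix], prefix)
    return ("no-match", None, None)

print(stitch(160022))  # ('swap', 32770, '3.3.3.9')
```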
# Check that the PEs can ping each other. For example, PE1 pings PE2 at 3.3.3.9.
[~PE1] ping lsp segment-routing ip 3.3.3.9 32 version draft2 remote 3.3.3.9
LSP PING FEC: IPV4 PREFIX 3.3.3.9/32 : 100 data bytes, press CTRL_C to break
Reply from 3.3.3.9: bytes=100 Sequence=1 time=72 ms
Reply from 3.3.3.9: bytes=100 Sequence=2 time=34 ms
Reply from 3.3.3.9: bytes=100 Sequence=3 time=50 ms
Reply from 3.3.3.9: bytes=100 Sequence=4 time=50 ms
Reply from 3.3.3.9: bytes=100 Sequence=5 time=34 ms
--- FEC: IPV4 PREFIX 3.3.3.9 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 34/48/72 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
segment-routing global-block 160000 161000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
return
● P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.9
#
mpls
lsp-trigger segment-routing-interworking best-effort host
#
mpls ldp
#
segment-routing
tunnel-prefer segment-routing
mapping-server prefix-sid-mapping 3.3.3.9 32 22
#
isis 1
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
segment-routing global-block 160000 161000
segment-routing mapping-server send
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
isis prefix-sid index 20
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 3.3.3.9
#
mpls
#
mpls ldp
#
isis 1
cost-style wide
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.17.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
return
Networking Requirements
As shown in Figure 1-119, traffic can reach CE1 through either PE2 or PE3. IS-IS
anycast FRR can be configured to enable PE2 and PE3 to protect each other,
improving network reliability.
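Anycast FRR works because PE2 and PE3 advertise the same loopback address (2.2.2.9 in this example) with the same prefix SID index, so the ingress resolves the SID to the nearest advertiser and keeps the other as a natural backup. A toy Python sketch of that selection (the IGP costs are invented for illustration):

```python
# Both PEs advertise the anycast prefix; costs are hypothetical IGP metrics from PE1.
advertisers = [("PE2", 10), ("PE3", 20)]

def anycast_select(nodes):
    """Pick the nearest advertiser of the anycast prefix; the rest act as backups."""
    ranked = sorted(nodes, key=lambda n: n[1])
    return ranked[0], ranked[1:]

primary, backups = anycast_select(advertisers)
print(primary[0])               # PE2 (lower cost)
print([b[0] for b in backups])  # ['PE3']
```

If the primary advertiser fails, the same SID still resolves, now to the surviving node, which is what gives the mutual protection described above.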
Precautions
None
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IS-IS on the backbone network to ensure that PEs interwork with each
other.
2. Configure MPLS and segment routing on the backbone network to establish
SR LSPs.
3. Enable TI-LFA FRR on PE1 and configure the delayed switchback function.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the PEs and P
● SRGB ranges on the PEs
Procedure
Step 1 Assign an IP address to each interface.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip address 172.16.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 2.2.2.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 172.16.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure PE3.
<HUAWEI> system-view
[~HUAWEI] sysname PE3
[*HUAWEI] commit
[~PE3] interface loopback 0
[*PE3-LoopBack0] ip address 4.4.4.9 32
[*PE3-LoopBack0] quit
[~PE3] interface loopback 1
[*PE3-LoopBack1] ip address 2.2.2.9 32
[*PE3-LoopBack1] quit
[*PE3] interface gigabitethernet1/0/0
[*PE3-GigabitEthernet1/0/0] ip address 172.18.1.2 24
[*PE3-GigabitEthernet1/0/0] quit
[*PE3] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0002.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure PE3.
[~PE3] isis 1
[*PE3-isis-1] is-level level-1
[*PE3-isis-1] network-entity 10.0000.0000.0004.00
[*PE3-isis-1] quit
[*PE3] commit
[~PE3] interface loopback 1
[*PE3-LoopBack1] isis enable 1
[*PE3-LoopBack1] quit
[*PE3] interface gigabitethernet1/0/0
[*PE3-GigabitEthernet1/0/0] isis enable 1
[*PE3-GigabitEthernet1/0/0] quit
[*PE3] commit
Step 3 Configure the basic MPLS functions on the MPLS backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 2.2.2.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure PE3.
[~PE3] mpls lsr-id 4.4.4.9
[*PE3] mpls
[*PE3-mpls] commit
[~PE3-mpls] quit
Step 4 Configure segment routing on the backbone network, and enable TI-LFA FRR
and the anti-microloop function for switchback.
# Configure PE1.
[~PE1] segment-routing
[*PE1-segment-routing] tunnel-prefer segment-routing
[*PE1-segment-routing] quit
[*PE1] commit
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] segment-routing global-block 160000 161000
The SRGB range varies according to the live network. Set the range as needed. The SRGB
setting here is an example only.
[*PE1-isis-1] frr
[*PE1-isis-1-frr] loop-free-alternate level-1
[*PE1-isis-1-frr] ti-lfa level-1
[*PE1-isis-1-frr] quit
[*PE1-isis-1] avoid-microloop segment-routing
[*PE1-isis-1] avoid-microloop segment-routing rib-update-delay 6000
[*PE1-isis-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis prefix-sid index 10
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] tunnel-prefer segment-routing
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] segment-routing global-block 160000 161000
The SRGB range varies according to the live network. Set the range as needed. The SRGB
setting here is an example only.
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis prefix-sid index 20
[*PE2-LoopBack1] quit
[*PE2] commit
# Configure PE3.
[~PE3] segment-routing
[*PE3-segment-routing] tunnel-prefer segment-routing
[*PE3-segment-routing] quit
[*PE3] commit
[~PE3] isis 1
[*PE3-isis-1] cost-style wide
[*PE3-isis-1] segment-routing mpls
[*PE3-isis-1] segment-routing global-block 160000 161000
The SRGB range varies according to the live network. Set the range as needed. The SRGB
setting here is an example only.
[*PE3] interface loopback 1
[*PE3-LoopBack1] isis prefix-sid index 20
[*PE3-LoopBack1] quit
[*PE3] commit
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
segment-routing global-block 160000 161000
avoid-microloop segment-routing
avoid-microloop segment-routing rib-update-delay 6000
frr
loop-free-alternate level-1
ti-lfa level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.18.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 2.2.2.9
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
segment-routing global-block 160000 161000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
isis prefix-sid index 20
#
return
● PE3 configuration file
#
sysname PE3
#
mpls lsr-id 4.4.4.9
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
segment-routing global-block 160000 161000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.18.1.2 255.255.255.0
isis enable 1
#
interface LoopBack0
ip address 4.4.4.9 255.255.255.255
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
Networking Requirements
On the network shown in Figure 1-120, SR-MPLS BE tunnels between public
network PEs are deployed. To improve network reliability, SBFD needs to be
deployed. If SBFD detects a fault on an SR-MPLS BE tunnel, a protection
application, for example, VPN FRR, rapidly switches traffic, which minimizes the
impact on traffic.
Precautions
None
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IS-IS on the backbone network to ensure that PEs interwork with each
other.
2. Configure MPLS and segment routing on the backbone network to establish
SR LSPs. Enable topology-independent loop-free alternate (TI-LFA) FRR.
3. Configure SBFD to establish sessions between PEs to monitor SR-MPLS BE
tunnels.
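The protection behavior described in the roadmap can be summarized as a small state machine: SBFD continuously probes the primary SR-MPLS BE tunnel, and when the session goes down, the protection application (for example, VPN FRR) switches traffic to the backup path, reverting when the primary recovers. A schematic Python sketch of this (tunnel names and the revert behavior are illustrative assumptions, not a vendor implementation):

```python
class TunnelProtection:
    """Minimal sketch of SBFD-driven protection switching."""

    def __init__(self, primary: str, backup: str):
        self.primary, self.backup = primary, backup
        self.active = primary

    def on_sbfd_state(self, tunnel: str, state: str) -> str:
        """React to an SBFD session state change and return the active path."""
        if tunnel == self.active and state == "DOWN":
            self.active = self.backup     # fast switchover to the backup path
        elif tunnel == self.primary and state == "UP":
            self.active = self.primary    # revert once the primary recovers
        return self.active

prot = TunnelProtection("sr-be-to-PE2", "sr-be-backup")
print(prot.on_sbfd_state("sr-be-to-PE2", "DOWN"))  # sr-be-backup
```

The point of SBFD here is detection speed: the switchover itself is performed by the protection application, so traffic loss is bounded by the SBFD detection time.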
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the PEs and Ps
● SRGB ranges on the PEs
Procedure
Step 1 Assign an IP address to each interface.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 172.18.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip address 172.16.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 172.16.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 172.17.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 172.19.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 172.17.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 172.18.1.2 24
[*P2-GigabitEthernet1/0/0] quit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] isis enable 1
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] commit
[~P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
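Each network-entity value configured above is an IS-IS NET: an area address, a 6-byte system ID, and a one-byte selector. A minimal Python sketch (an illustration, not part of the device CLI) of how such a NET string splits into its fields:

```python
def parse_net(net: str):
    """Split an IS-IS NET such as '10.0000.0000.0002.00' into
    (area, system-id, selector). The last byte is the NSAP selector
    (always '00' for a router NET), the preceding three dotted groups
    form the 6-byte system ID, and the leading groups are the area."""
    parts = net.split(".")
    selector = parts[-1]
    system_id = ".".join(parts[-4:-1])
    area = ".".join(parts[:-4])
    return area, system_id, selector

area, sysid, sel = parse_net("10.0000.0000.0002.00")
print(area, sysid, sel)  # 10 0000.0000.0002 00
```

Because all four devices share area 10 and the example uses level-1 routing, the system IDs (0001 through 0004) are the only part that differs between their NETs.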
Step 3 Configure the basic MPLS functions on the MPLS backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
# Configure PE1.
[~PE1] segment-routing
[*PE1-segment-routing] quit
[*PE1] commit
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] segment-routing global-block 160000 161000
The SRGB range varies according to the live network. Set a range as needed. The SRGB
setting here is an example.
[*PE1-isis-1] frr
[*PE1-isis-1-frr] loop-free-alternate level-1
[*PE1-isis-1-frr] ti-lfa level-1
[*PE1-isis-1-frr] quit
[*PE1-isis-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis prefix-sid index 10
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] segment-routing global-block 160000 161000
The SRGB range varies according to the live network. Set a range as needed. The SRGB
setting here is an example.
[*P1-isis-1] frr
[*P1-isis-1-frr] loop-free-alternate level-1
[*P1-isis-1-frr] ti-lfa level-1
[*P1-isis-1-frr] quit
[*P1-isis-1] quit
[*P1] interface loopback 1
[*P1-LoopBack1] isis prefix-sid index 20
[*P1-LoopBack1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] segment-routing global-block 160000 161000
The SRGB range varies according to the live network. Set a range as needed. The SRGB
setting here is an example.
[*PE2-isis-1] frr
[*PE2-isis-1-frr] loop-free-alternate level-1
[*PE2-isis-1-frr] ti-lfa level-1
[*PE2-isis-1-frr] quit
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis prefix-sid index 30
[*PE2-LoopBack1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] segment-routing global-block 160000 161000
The SRGB range varies according to the live network. Set a range as needed. The SRGB
setting here is an example.
[*P2-isis-1] frr
[*P2-isis-1-frr] loop-free-alternate level-1
[*P2-isis-1-frr] ti-lfa level-1
[*P2-isis-1-frr] quit
[*P2-isis-1] quit
[*P2] interface loopback 1
[*P2-LoopBack1] isis prefix-sid index 40
[*P2-LoopBack1] quit
[*P2] commit
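A prefix SID index is combined with the SRGB base to form the actual MPLS label that each node advertises (for example, PE2's index 30 in SRGB 160000-161000 yields label 160030). A short sketch of that arithmetic, with a range check as an assumption about how an out-of-block index would be rejected:

```python
def sid_to_label(srgb_base: int, srgb_upper: int, index: int) -> int:
    """Map a prefix SID index into an SRGB: label = base + index.
    Raises ValueError if the resulting label falls outside the block."""
    label = srgb_base + index
    if not srgb_base <= label <= srgb_upper:
        raise ValueError(f"index {index} outside SRGB {srgb_base}-{srgb_upper}")
    return label

# PE2's loopback (prefix-sid index 30) with SRGB 160000-161000:
print(sid_to_label(160000, 161000, 30))  # 160030
```

This is why every node in the SR domain can compute the same label for a given prefix, provided all nodes advertise their SRGB along with the index.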
# After completing the configuration, run the display tunnel-info all command
on a PE. The command output shows that the SR LSPs have been established. The
following uses the command output on PE1 as an example.
[~PE1] display tunnel-info all
Tunnel ID Type Destination Status
----------------------------------------------------------------------------------------
0x000000002900000003 srbe-lsp 4.4.4.9 UP
0x000000002900000004 srbe-lsp 2.2.2.9 UP
0x000000002900000005 srbe-lsp 3.3.3.9 UP
# Run the ping lsp command on PE1 to check the connectivity of the SR LSP to PE2
(3.3.3.9). For example:
--- FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.9/32 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 5/6/12 ms
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] sbfd
[*PE2-sbfd] reflector discriminator 3.3.3.9
[*PE2-sbfd] commit
[~PE2] quit
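The reflector discriminator is written here in dotted-decimal form (3.3.3.9), which is simply PE2's 32-bit LSR ID reused as the SBFD discriminator. A small sketch (an illustration of the encoding, not device behavior) of converting the dotted form to the 32-bit value carried in SBFD packets:

```python
def dotted_to_discriminator(dotted: str) -> int:
    """Treat a dotted-decimal string such as '3.3.3.9' as a 32-bit
    SBFD discriminator (the same byte layout as an IPv4 address)."""
    value = 0
    for octet in dotted.split("."):
        value = (value << 8) | int(octet)
    return value

print(dotted_to_discriminator("3.3.3.9"))  # 50529033
```

The SBFD initiator on PE1 then uses this value as the remote discriminator when monitoring the SR-MPLS BE tunnel toward PE2.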
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
bfd
#
sbfd
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
seamless-bfd enable mode tunnel
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
segment-routing global-block 160000 161000
frr
loop-free-alternate level-1
ti-lfa level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.18.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
segment-routing global-block 160000 161000
frr
loop-free-alternate level-1
ti-lfa level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
isis prefix-sid index 20
#
return
● PE2 configuration file
#
sysname PE2
#
bfd
#
sbfd
reflector discriminator 3.3.3.9
#
mpls lsr-id 3.3.3.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
segment-routing mpls
segment-routing global-block 160000 161000
frr
loop-free-alternate level-1
ti-lfa level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.19.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
isis prefix-sid index 30
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
segment-routing global-block 160000 161000
frr
loop-free-alternate level-1
ti-lfa level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.18.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.19.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
isis prefix-sid index 40
#
return
Networking Requirements
On the network shown in Figure 1-121, IS-IS is enabled, and the SR-MPLS BE
function is configured. The cost of the link between Device C and Device D is 100,
and the cost of other links is 10. The optimal path from Device A to Device F is
Device A -> Device B -> Device E -> Device F. TI-LFA FRR can be configured on
Device B to provide local protection, enabling traffic to be quickly switched to the
backup path (Device A -> Device B -> Device C -> Device D -> Device E -> Device
F) when the link between Device B and Device E fails.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
In this example, TI-LFA FRR is configured on Device B to protect the link between
Device B and Device E. On a live network, you are advised to configure TI-LFA FRR
on all nodes in the SR domain.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure interface IP addresses.
# Configure Device A.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] interface loopback 1
[*DeviceA-LoopBack1] ip address 1.1.1.9 32
[*DeviceA-LoopBack1] quit
[*DeviceA] interface gigabitethernet1/0/0
[*DeviceA-GigabitEthernet1/0/0] ip address 10.1.1.1 24
[*DeviceA-GigabitEthernet1/0/0] quit
[*DeviceA] commit
Repeat this step for the other devices. For configuration details, see Configuration
Files in this section.
# Configure Device A.
[~DeviceA] isis 1
[*DeviceA-isis-1] is-level level-1
[*DeviceA-isis-1] network-entity 10.0000.0000.0001.00
[*DeviceA-isis-1] cost-style wide
[*DeviceA-isis-1] quit
[*DeviceA] commit
[*DeviceA] interface loopback 1
[*DeviceA-LoopBack1] isis enable 1
[*DeviceA-LoopBack1] quit
[*DeviceA] interface gigabitethernet1/0/0
[*DeviceA-GigabitEthernet1/0/0] isis enable 1
[*DeviceA-GigabitEthernet1/0/0] quit
[*DeviceA] commit
Repeat this step for the other devices. For configuration details, see Configuration
Files in this section.
Step 3 Configure basic MPLS capabilities on the backbone network.
# Configure Device A.
[~DeviceA] mpls lsr-id 1.1.1.9
[*DeviceA] mpls
[*DeviceA-mpls] commit
[~DeviceA-mpls] quit
# Configure Device B.
[~DeviceB] mpls lsr-id 2.2.2.9
[*DeviceB] mpls
[*DeviceB-mpls] commit
[~DeviceB-mpls] quit
# Configure Device C.
[~DeviceC] mpls lsr-id 3.3.3.9
[*DeviceC] mpls
[*DeviceC-mpls] commit
[~DeviceC-mpls] quit
# Configure Device D.
[~DeviceD] mpls lsr-id 4.4.4.9
[*DeviceD] mpls
[*DeviceD-mpls] commit
[~DeviceD-mpls] quit
# Configure Device E.
[~DeviceE] mpls lsr-id 5.5.5.9
[*DeviceE] mpls
[*DeviceE-mpls] commit
[~DeviceE-mpls] quit
# Configure Device F.
[~DeviceF] mpls lsr-id 6.6.6.9
[*DeviceF] mpls
[*DeviceF-mpls] commit
[~DeviceF-mpls] quit
# Configure Device A.
[~DeviceA] segment-routing
[*DeviceA-segment-routing] quit
[*DeviceA] commit
[~DeviceA] isis 1
[*DeviceA-isis-1] segment-routing mpls
[*DeviceA-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceA-isis-1] quit
[*DeviceA] interface loopback 1
[*DeviceA-LoopBack1] isis prefix-sid index 10
[*DeviceA-LoopBack1] quit
[*DeviceA] commit
# Configure Device B.
[~DeviceB] segment-routing
[*DeviceB-segment-routing] quit
[*DeviceB] commit
[~DeviceB] isis 1
[*DeviceB-isis-1] segment-routing mpls
[*DeviceB-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceB-isis-1] quit
[*DeviceB] interface loopback 1
[*DeviceB-LoopBack1] isis prefix-sid index 20
[*DeviceB-LoopBack1] quit
[*DeviceB] commit
# Configure Device C.
[~DeviceC] segment-routing
[*DeviceC-segment-routing] quit
[*DeviceC] commit
[~DeviceC] isis 1
[*DeviceC-isis-1] segment-routing mpls
[*DeviceC-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceC-isis-1] quit
[*DeviceC] interface loopback 1
[*DeviceC-LoopBack1] isis prefix-sid index 30
[*DeviceC-LoopBack1] quit
[*DeviceC] commit
# Configure Device D.
[~DeviceD] segment-routing
[*DeviceD-segment-routing] quit
[*DeviceD] commit
[~DeviceD] isis 1
[*DeviceD-isis-1] segment-routing mpls
[*DeviceD-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceD-isis-1] quit
[*DeviceD] interface loopback 1
[*DeviceD-LoopBack1] isis prefix-sid index 40
[*DeviceD-LoopBack1] quit
[*DeviceD] commit
# Configure Device E.
[~DeviceE] segment-routing
[*DeviceE-segment-routing] quit
[*DeviceE] commit
[~DeviceE] isis 1
[*DeviceE-isis-1] segment-routing mpls
[*DeviceE-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceE-isis-1] quit
[*DeviceE] interface loopback 1
[*DeviceE-LoopBack1] isis prefix-sid index 50
[*DeviceE-LoopBack1] quit
[*DeviceE] commit
# Configure Device F.
[~DeviceF] segment-routing
[*DeviceF-segment-routing] quit
[*DeviceF] commit
[~DeviceF] isis 1
[*DeviceF-isis-1] segment-routing mpls
[*DeviceF-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceF-isis-1] quit
[*DeviceF] interface loopback 1
[*DeviceF-LoopBack1] isis prefix-sid index 60
[*DeviceF-LoopBack1] quit
[*DeviceF] commit
After completing the configurations, run the display isis route [ level-1 | level-2 ]
[ process-id ] [ verbose ] command on Device B. The command output shows IS-
IS TI-LFA FRR backup entries.
[~DeviceB] display isis route level-1 verbose
Route information for ISIS(1)
-----------------------------
Interface : GE2/0/0
NextHop : 10.2.1.2 LsIndex : 0x00000003
Backup Label Stack (Top -> Bottom): {48142}
# Run the tracert command on Device A again to check the connectivity of the SR-
MPLS BE tunnel. For example:
[~DeviceA] tracert lsp segment-routing ip 6.6.6.9 32 version draft2
LSP Trace Route FEC: SEGMENT ROUTING IPV4 PREFIX 6.6.6.9/32 , press CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 10.1.1.2/[16060 ]
1 10.1.1.2 3 ms Transit 10.2.1.2/[16060 ]
2 10.2.1.2 46 ms Transit 10.3.1.2/[16060 ]
3 10.3.1.2 33 ms Transit 10.4.1.2/[16060 ]
4 10.4.1.2 48 ms Transit 10.6.1.2/[3 ]
5 6.6.6.9 4 ms Egress
The preceding command output shows that the SR-MPLS BE tunnel has been
switched to the TI-LFA FRR backup path.
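The switchover follows directly from the link costs: with the B-E link down, the lowest-cost remaining path from Device A to Device F runs through C and D even though the C-D link costs 100. A plain Dijkstra sketch over the example topology (link costs are taken from the Networking Requirements; the adjacency list itself is an assumption about the figure):

```python
import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra over an adjacency dict: returns (cost, node_list)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return None

# Costs from the example: all links 10 except C-D, which is 100.
links = [("A", "B", 10), ("B", "E", 10), ("B", "C", 10),
         ("C", "D", 100), ("D", "E", 10), ("E", "F", 10)]
graph = {}
for a, b, w in links:
    graph.setdefault(a, {})[b] = w
    graph.setdefault(b, {})[a] = w

print(shortest_path(graph, "A", "F"))  # primary path A-B-E-F, cost 30
del graph["B"]["E"], graph["E"]["B"]   # B-E link fails
print(shortest_path(graph, "A", "F"))  # backup path A-B-C-D-E-F, cost 140
```

The backup path the tracert reveals is exactly the post-failure shortest path, which is what TI-LFA precomputes so that traffic can switch before the IGP reconverges.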
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
segment-routing global-block 16000 23999
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
return
● Device C configuration file
#
sysname DeviceC
#
mpls lsr-id 3.3.3.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
segment-routing mpls
segment-routing global-block 16000 23999
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
isis enable 1
isis cost 100
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
isis prefix-sid index 30
#
return
● Device D configuration file
#
sysname DeviceD
#
mpls lsr-id 4.4.4.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
segment-routing global-block 16000 23999
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
isis enable 1
isis cost 100
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
isis prefix-sid index 40
#
return
● Device E configuration file
#
sysname DeviceE
#
mpls lsr-id 5.5.5.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0005.00
segment-routing mpls
segment-routing global-block 16000 23999
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.6.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.5.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
isis enable 1
isis prefix-sid index 50
#
return
Networking Requirements
On the network shown in Figure 1-122:
● CE1 and CE2 belong to vpna.
To ensure secure communication between CE1 and CE2, configure L3VPN over an
SR-MPLS TE tunnel.
Precautions
When you configure L3VPN over an SR-MPLS TE tunnel, note the following:
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs on the PEs and P1
● VPN target and RD of vpna
● SRGB range on the PEs and P1
Procedure
Step 1 Configure an IP address for each interface.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip address 172.16.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 172.16.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 172.17.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 172.17.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] isis enable 1
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-2
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] isis enable 1
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
Step 3 Configure basic MPLS functions and enable MPLS TE on the backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] quit
[*PE1] commit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] quit
[*P1] commit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] quit
[*PE2] commit
Step 4 On the backbone network, configure SR, establish an SR-MPLS TE tunnel, specify
the tunnel IP address, tunnel protocol, and destination IP address, and use explicit
paths for path computation.
# Configure PE1.
[~PE1] segment-routing
[*PE1-segment-routing] quit
[*PE1] commit
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] traffic-eng level-2
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] segment-routing global-block 16000 20000
[*PE1-isis-1] quit
The SRGB value range varies according to the live network. The following example is for
reference only.
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis prefix-sid absolute 16100
[*PE1-LoopBack1] quit
[*PE1] commit
[~PE1] explicit-path pe2
[*PE1-explicit-path-pe2] next sid label 16200 type prefix
[*PE1-explicit-path-pe2] next sid label 16300 type prefix
[*PE1-explicit-path-pe2] quit
[*PE1] interface tunnel1
[*PE1-Tunnel1] ip address unnumbered interface LoopBack1
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 3.3.3.9
[*PE1-Tunnel1] mpls te tunnel-id 1
[*PE1-Tunnel1] mpls te signal-protocol segment-routing
[*PE1-Tunnel1] mpls te path explicit-path pe2
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
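The explicit path above is just an ordered list of prefix SIDs: 16200 (P1's loopback SID) followed by 16300 (PE2's loopback SID), so traffic is steered PE1 -> P1 -> PE2. A hedged sketch of expanding such a SID list into the node sequence (the label-to-node map mirrors the isis prefix-sid absolute values configured in this example):

```python
# Prefix-SID labels configured in this example (isis prefix-sid absolute).
SID_TO_NODE = {16100: "PE1", 16200: "P1", 16300: "PE2"}

def expand_explicit_path(ingress, sid_stack):
    """Turn an explicit-path SID stack into the sequence of nodes the
    SR-MPLS TE tunnel traverses, starting at the ingress node."""
    return [ingress] + [SID_TO_NODE[sid] for sid in sid_stack]

print(expand_explicit_path("PE1", [16200, 16300]))  # ['PE1', 'P1', 'PE2']
```

Because each SID is globally significant within the SR domain, the ingress can impose the whole stack without any per-path signaling on P1 or PE2.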
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] traffic-eng level-2
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] segment-routing global-block 16000 20000
[*P1-isis-1] quit
The SRGB value range varies according to the live network. The following example is for
reference only.
[*P1] interface loopback 1
[*P1-LoopBack1] isis prefix-sid absolute 16200
[*P1-LoopBack1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] traffic-eng level-2
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] segment-routing global-block 16000 20000
[*PE2-isis-1] quit
The SRGB value range varies according to the live network. The following example is for
reference only.
# After the configuration is complete, run the display tunnel-info all command
on each PE. The command output shows that the SR-MPLS TE tunnel has been
established. The command output on PE1 is used as an example.
[~PE1] display tunnel-info all
Tunnel ID Type Destination Status
----------------------------------------------------------------------------------------
1 sr-te 3.3.3.9 UP
# Run the ping command on PE1 to check the connectivity of the SR-MPLS TE
tunnel. For example:
[~PE1] ping lsp segment-routing te Tunnel 1
LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 : 100 data bytes, press
CTRL_C to break
Reply from 3.3.3.9: bytes=100 Sequence=1 time=7 ms
Reply from 3.3.3.9: bytes=100 Sequence=2 time=11 ms
Reply from 3.3.3.9: bytes=100 Sequence=3 time=11 ms
Reply from 3.3.3.9: bytes=100 Sequence=5 time=10 ms
--- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 5/8/11 ms
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp peer or display bgp
vpnv4 all peer command on each PE. The command output shows that the MP-
IBGP peer relationship has been set up and is in the Established state. The
command output on PE1 is used as an example.
[~PE1] display bgp peer
BGP local router ID : 1.1.1.9
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 2 6 0 00:00:12 Established 0
[~PE1] display bgp vpnv4 all peer
BGP local router ID : 1.1.1.9
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 12 18 0 00:09:38 Established 0
Step 6 On each PE, create a VPN instance, enable the IPv4 address family in the VPN
instance, and bind the PE interface connected to a CE to the VPN instance.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
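Routes cross between the two VPN instances because the route's export targets intersect the remote instance's import targets: both sides use VPN target 111:1 here, even though the RDs differ (100:1 and 200:1). A small sketch of that matching rule:

```python
def route_imported(route_export_rts, vrf_import_rts):
    """A VPNv4 route is imported into a VRF if any route target (RT)
    attached to the route matches any import RT configured on the VRF.
    The RD only disambiguates prefixes; it plays no role in import."""
    return bool(set(route_export_rts) & set(vrf_import_rts))

# PE1 exports vpna routes with RT 111:1; PE2's vpna imports 111:1.
print(route_imported({"111:1"}, {"111:1"}))  # True
print(route_imported({"222:2"}, {"111:1"}))  # False
```

This is why the example can safely use different RDs on the two PEs while still forming one VPN.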
If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address
using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a
source-ip-address dest-ip-address command to ping the CE that is connected to the remote
PE. If the source IP address is not specified, the ping operation fails.
Step 7 Configure a tunnel policy on each PE, and specify SR-MPLS TE as the preferred
tunnel.
# Configure PE1.
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 1
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 1
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] commit
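The tunnel select-seq sr-te load-balance-number 1 policy makes each PE prefer up SR-MPLS TE tunnels and use one tunnel at a time. A simplified model of that selection logic (the tunnel records below are invented for illustration):

```python
def select_tunnels(tunnels, select_seq, load_balance_number):
    """Pick up-state tunnels of the first type in select_seq that has
    any candidates, limited to load_balance_number entries -- a rough
    model of a 'tunnel select-seq' policy."""
    for tunnel_type in select_seq:
        candidates = [t for t in tunnels
                      if t["type"] == tunnel_type and t["status"] == "UP"]
        if candidates:
            return candidates[:load_balance_number]
    return []

tunnels = [
    {"name": "Tunnel1", "type": "sr-te", "status": "UP"},
    {"name": "lsp-1",  "type": "ldp",   "status": "UP"},
]
print(select_tunnels(tunnels, ["sr-te"], 1))  # selects Tunnel1 only
```

With load-balance-number 1, VPN traffic is carried over a single SR-MPLS TE tunnel even if several qualify.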
Step 8 Set up EBGP peer relationships between the PEs and CEs.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 10.11.1.1 32
[*CE1-LoopBack1] quit
[*CE1] interface gigabitethernet1/0/0
[*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
[*CE1-GigabitEthernet1/0/0] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.1.1.2 as-number 100
[*CE1-bgp] network 10.11.1.1 32
[*CE1-bgp] quit
[*CE1] commit
Repeat this step on CE2. For configuration details, see "Configuration Files" in this section.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
Repeat this step on PE2. For configuration details, see "Configuration Files" in this section.
After the configuration is complete, run the display bgp vpnv4 vpn-instance peer
command on each PE. The command output shows that the BGP peer
relationships have been established and are in the Established state.
The BGP peer relationship between PE1 and CE1 is used as an example.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 1.1.1.9
Local AS number : 100
The CEs can ping each other. For example, CE1 can ping CE2 (10.22.2.2).
[~CE1] ping -a 10.11.1.1 10.22.2.2
PING 10.22.2.2: 56 data bytes, press CTRL_C to break
Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
--- 10.22.2.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 34/48/72 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
#
explicit-path pe2
next sid label 16200 type prefix
next sid label 16300 type prefix
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0001.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16100
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.9 enable
#
ipv4-family vpn-instance vpna
peer 10.1.1.1 as-number 65410
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path pe2
#
tunnel-policy p1
tunnel select-seq sr-te load-balance-number 1
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16200
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
apply-label per-instance
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
#
explicit-path pe1
next sid label 16200 type prefix
next sid label 16100 type prefix
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0003.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16300
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path pe1
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpna
peer 10.2.1.1 as-number 65420
#
tunnel-policy p1
tunnel select-seq sr-te load-balance-number 1
#
return
Networking Requirements
Interface 1, sub-interface 1.1, interface 2, and sub-interface 2.1 in this example represent
GE 1/0/0, GE 1/0/0.1, GE 2/0/0, and GE 2/0/0.1, respectively.
On the network shown in Figure 1-123, CE1 and CE2 are on the same VPLS
network. They access the MPLS core network through PE1 and PE2 respectively.
OSPF is used as the IGP on the MPLS backbone network.
LDP VPLS and an SR-MPLS TE tunnel must be deployed between PE1 and PE2 to
transmit VPLS services.
Configuration Notes
When configuring LDP VPLS over SR-MPLS TE, note that PEs on the same L2VPN
must be configured with the same VSI ID.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign IP addresses to interfaces.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.10.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure P.
<HUAWEI> system-view
[~HUAWEI] sysname P
[*HUAWEI] commit
[~P] interface loopback 1
[*P-LoopBack1] ip address 2.2.2.9 32
[*P-LoopBack1] quit
[*P] interface gigabitethernet1/0/0
[*P-GigabitEthernet1/0/0] ip address 10.10.1.2 24
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet2/0/0
[*P-GigabitEthernet2/0/0] ip address 10.20.1.1 24
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.20.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure the P.
[~P] mpls lsr-id 2.2.2.9
[*P] mpls
[*P-mpls] mpls te
[*P-mpls] quit
[*P] commit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] quit
[*PE2] commit
# Configure the P.
[~P] ospf 1
[*P-ospf-1] opaque-capability enable
[*P-ospf-1] area 0.0.0.0
[*P-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[*P-ospf-1-area-0.0.0.0] network 10.10.1.0 0.0.0.255
[*P-ospf-1-area-0.0.0.0] network 10.20.1.0 0.0.0.255
[*P-ospf-1-area-0.0.0.0] mpls-te enable
[*P-ospf-1-area-0.0.0.0] quit
[*P-ospf-1] quit
[*P] commit
# Configure PE2.
[~PE2] ospf 1
[*PE2-ospf-1] opaque-capability enable
[*PE2-ospf-1] area 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] network 10.20.1.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] mpls-te enable
[*PE2-ospf-1-area-0.0.0.0] quit
[*PE2-ospf-1] quit
[*PE2] commit
# Configure the P.
[~P] segment-routing
[*P-segment-routing] quit
[*P] ospf 1
[*P-ospf-1] segment-routing mpls
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] ospf 1
[*PE2-ospf-1] segment-routing mpls
[*PE2-ospf-1] segment-routing global-block 16000 47999
[*PE2-ospf-1] quit
# Configure PE2.
[~PE2] explicit-path path2pe1
[*PE2-explicit-path-path2pe1] next sid label 16300 type prefix
[*PE2-explicit-path-path2pe1] next sid label 16200 type prefix
[*PE2-explicit-path-path2pe1] quit
# Configure PE2.
[~PE2] interface Tunnel 10
[*PE2-Tunnel10] ip address unnumbered interface loopback1
[*PE2-Tunnel10] tunnel-protocol mpls te
[*PE2-Tunnel10] destination 1.1.1.9
[*PE2-Tunnel10] mpls te signal-protocol segment-routing
[*PE2-Tunnel10] mpls te tunnel-id 100
[*PE2-Tunnel10] mpls te path explicit-path path2pe1
[*PE2-Tunnel10] mpls te reserved-for-binding
[*PE2-Tunnel10] quit
[*PE2] commit
Run the display tunnel-info all command. The command output shows that a TE
tunnel destined for the peer's MPLS LSR ID exists between the PEs. The following
example uses the command output on PE1.
[~PE1] display tunnel-info all
Tunnel ID Type Destination Status
-----------------------------------------------------------------------------
0x000000000300000001 sr-te 3.3.3.9 UP
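When this check is scripted (for example, over a CLI session), the tunnel table can be parsed programmatically. A minimal sketch in Python, assuming the column layout shown above (illustrative parsing, not an official API):

```python
def parse_tunnel_info(output: str):
    """Parse 'display tunnel-info all' output into
    (tunnel_id, type, destination, status) tuples.
    Assumes the four-column layout shown in the sample output above."""
    tunnels = []
    for line in output.splitlines():
        fields = line.split()
        # Data rows start with a hexadecimal tunnel ID such as 0x000000000300000001.
        if len(fields) == 4 and fields[0].startswith("0x"):
            tunnels.append(tuple(fields))
    return tunnels

sample = """Tunnel ID            Type     Destination  Status
-----------------------------------------------------------------------------
0x000000000300000001 sr-te    3.3.3.9      UP"""

assert parse_tunnel_info(sample) == [
    ("0x000000000300000001", "sr-te", "3.3.3.9", "UP")
]
```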
# Configure PE2.
[~PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] mpls ldp remote-peer 1.1.1.9
[*PE2-mpls-ldp-remote-1.1.1.9] remote-ip 1.1.1.9
[*PE2-mpls-ldp-remote-1.1.1.9] quit
[*PE2] commit
# Configure PE2.
[~PE2] tunnel-policy policy1
[*PE2-tunnel-policy-policy1] tunnel binding destination 1.1.1.9 te Tunnel10
[*PE2-tunnel-policy-policy1] quit
[*PE2] commit
# Configure PE2.
[~PE2] mpls l2vpn
[*PE2] commit
Step 10 Create VSIs on PEs and bind the tunnel policy to the VSIs.
# Configure PE1.
[~PE1] vsi a2
[*PE1-vsi-a2] pwsignal ldp
[*PE1-vsi-a2-ldp] vsi-id 2
[*PE1-vsi-a2-ldp] peer 3.3.3.9 tnl-policy policy1
[*PE1-vsi-a2-ldp] quit
[*PE1] commit
# Configure PE2.
[~PE2] vsi a2
[*PE2-vsi-a2] pwsignal ldp
[*PE2-vsi-a2-ldp] vsi-id 2
[*PE2-vsi-a2-ldp] peer 1.1.1.9 tnl-policy policy1
[*PE2-vsi-a2-ldp] quit
[*PE2] commit
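Step 10 configures the same VSI ID (2) on both PEs, which LDP uses as the PW's VC ID; the PW comes up only if both ends agree on it and on basic PW parameters. A minimal sketch of that matching rule, with hypothetical parameter sets (the encapsulation and MTU values are illustrative assumptions, and real LDP negotiation covers more fields):

```python
def pw_can_come_up(local: dict, remote: dict) -> bool:
    """An LDP PW is established only if both ends agree on the VC ID
    (derived from the VSI ID here) and on basic PW parameters.
    Simplified illustration of the matching rule only."""
    return (local["vsi_id"] == remote["vsi_id"]
            and local["encapsulation"] == remote["encapsulation"]
            and local["mtu"] == remote["mtu"])

# Hypothetical parameter sets for the two VSIs named a2 in this example.
pe1 = {"vsi_id": 2, "encapsulation": "vlan", "mtu": 1500}
pe2 = {"vsi_id": 2, "encapsulation": "vlan", "mtu": 1500}
assert pw_can_come_up(pe1, pe2)
assert not pw_can_come_up(pe1, {**pe2, "vsi_id": 3})
```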
# Configure PE2.
[~PE2] interface gigabitethernet2/0/0.1
[*PE2-GigabitEthernet2/0/0.1] vlan-type dot1q 10
[*PE2-GigabitEthernet2/0/0.1] l2 binding vsi a2
[*PE2-GigabitEthernet2/0/0.1] quit
[*PE2] commit
# Configure CE1.
[~CE1] interface gigabitethernet1/0/0.1
[*CE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
[*CE1-GigabitEthernet1/0/0.1] ip address 10.1.1.1 255.255.255.0
[*CE1-GigabitEthernet1/0/0.1] quit
[*CE1] commit
# Configure CE2.
[~CE2] interface gigabitethernet1/0/0.1
[*CE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
[*CE2-GigabitEthernet1/0/0.1] ip address 10.1.1.2 255.255.255.0
[*CE2-GigabitEthernet1/0/0.1] quit
[*CE2] commit
VSI ID : 2
**PW Information:
*Peer Router ID : 3.3.3.9
primary or secondary : primary
ignore-standby-state : no
VC Label : 18
Peer Type : dynamic
Session : up
Tunnel ID : 0x000000000300000001
Broadcast Tunnel ID : --
Broad BackupTunnel ID : --
Tunnel Policy Name : policy1
CKey : 33
NKey : 1610612843
Stp Enable : 0
PwIndex : 0
Control Word : disable
Run the display vsi pw out-interface vsi a2 command on PE1. The command
output shows that the outbound interface of the PW between 1.1.1.9 and 3.3.3.9
is Tunnel10, the SR-MPLS TE tunnel.
[~PE1] display vsi pw out-interface vsi a2
Total: 1
--------------------------------------------------------------------------------
Vsi Name peer vcid interface
--------------------------------------------------------------------------------
a2 3.3.3.9 2 Tunnel10
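The same table can be consumed programmatically when automating this check. A minimal Python sketch, assuming the four-column layout shown above (illustrative parsing, not an official API):

```python
def parse_pw_out_interfaces(output: str):
    """Map each VSI to its PW peer, VC ID, and out-interface from the
    'display vsi pw out-interface' table shown above.
    Assumes the four-column layout (Vsi Name, peer, vcid, interface)."""
    result = {}
    for line in output.splitlines():
        fields = line.split()
        # Data rows have exactly four fields with a numeric VC ID.
        if len(fields) == 4 and fields[2].isdigit():
            vsi, peer, vcid, interface = fields
            result[vsi] = (peer, int(vcid), interface)
    return result

sample = """Total: 1
--------------------------------------------------------------------------------
Vsi Name            peer            vcid       interface
--------------------------------------------------------------------------------
a2                  3.3.3.9         2          Tunnel10"""

assert parse_pw_out_interfaces(sample) == {"a2": ("3.3.3.9", 2, "Tunnel10")}
```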
----End
Configuration Files
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
return
explicit-path path2pe1
next sid label 16300 type prefix
next sid label 16100 type prefix
#
segment-routing
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
l2 binding vsi a2
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.20.1.2 255.255.255.0
ospf cost 1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
ospf enable 1 area 0.0.0.0
ospf prefix-sid absolute 16200
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path path2pe1
mpls te reserved-for-binding
#
ospf 1
opaque-capability enable
segment-routing mpls
segment-routing global-block 16000 47999
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.20.1.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.9 te Tunnel10
#
return
Networking Requirements
On the network shown in Figure 1-124, the EVPN and VPN functions are
configured to transmit Layer 2 and Layer 3 traffic to allow communication
between different sites on the backbone network. If Site 1 and Site 2 are
connected through the same subnet, create an EVPN instance on each PE to store
EVPN routes. Layer 2 forwarding is based on an EVPN route that matches a MAC
address. If Site 1 and Site 2 are connected through different subnets, create a VPN
instance on each PE to store VPN routes. In this situation, Layer 2 traffic is
terminated, and Layer 3 traffic is forwarded through a Layer 3 gateway. In this
example, PEs transmit service traffic over SR-MPLS TE tunnels.
Interface 1, interface 2, and sub-interface 1.1 in this example represent GE 1/0/0, GE 2/0/0, and
GE 1/0/0.1, respectively.
Configuration Notes
When configuring BD EVPN IRB over SR-MPLS TE, note the following:
● For the same EVPN instance, the export VPN target list of a site must share at
least one VPN target with the import VPN target lists of the other sites, and its
import VPN target list must share at least one VPN target with the export VPN
target lists of the other sites.
● Using the local loopback interface address of a PE as the EVPN source address
is recommended.
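The VPN-target rule in the notes above is effectively a set-intersection check: a PE imports a route only if the route's export RTs intersect the local import RT list. A minimal sketch:

```python
def route_accepted(route_export_rts: set, local_import_rts: set) -> bool:
    """A route is imported into an instance if at least one of its
    export route targets appears in the instance's import list."""
    return bool(route_export_rts & local_import_rts)

# evrf1 uses RT 1:1 on both PE1 and PE2 in this example,
# so routes exported by one PE are imported by the other.
assert route_accepted({"1:1"}, {"1:1"})
assert not route_accepted({"1:1"}, {"2:2"})
```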
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name (evrf1) and VPN instance name (vpn1)
● EVPN instance evrf1's RD (100:1) and RT (1:1) on PE1, EVPN instance evrf1's
RD (200:1) and RT (1:1) on PE2, VPN instance vpn1's RD (100:2) and RT (2:2)
on PE1, and VPN instance vpn1's RD (200:2) and RT (2:2) on PE2
Procedure
Step 1 Configure IP addresses for the interfaces connecting PEs and P2 according to
Figure 1-124.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.1 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P.
<HUAWEI> system-view
[~HUAWEI] sysname P
[*HUAWEI] commit
[~P] interface loopback 1
[*P-LoopBack1] ip address 2.2.2.2 32
[*P-LoopBack1] quit
[*P] interface gigabitethernet1/0/0
[*P-GigabitEthernet1/0/0] ip address 10.1.1.2 24
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet2/0/0
[*P-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.3 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
Step 2 Configure an IGP to allow communication between PE1, PE2, and P. IS-IS is used
as the IGP in this example.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] network-entity 00.1111.1111.1111.00
[*PE1-isis-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface GigabitEthernet 2/0/0
[*PE1-GigabitEthernet2/0/0] isis enable 1
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-2
[*P-isis-1] network-entity 00.1111.1111.2222.00
[*P-isis-1] quit
[*P] interface loopback 1
[*P-LoopBack1] isis enable 1
[*P-LoopBack1] quit
[*P] interface GigabitEthernet 1/0/0
[*P-GigabitEthernet1/0/0] isis enable 1
[*P-GigabitEthernet1/0/0] quit
[*P] interface GigabitEthernet 2/0/0
[*P-GigabitEthernet2/0/0] isis enable 1
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] network-entity 00.1111.1111.3333.00
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface GigabitEthernet 2/0/0
[*PE2-GigabitEthernet2/0/0] isis enable 1
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
After completing the configuration, run the display isis peer command to check
that the IS-IS neighbor relationships between PE1, PE2, and P are Up. Run the
display ip routing-table command to verify that the PEs have learned the routes
to each other's Loopback1.
The following example uses the command output on PE1.
[~PE1] display isis peer
Peer information for ISIS(1)
The next sid label command uses the adjacency label from PE1 to the P, which is
dynamically generated by IS-IS. This adjacency label can be obtained using the
display segment-routing adjacency mpls forwarding command.
[~PE1] display segment-routing adjacency mpls forwarding
Segment Routing Adjacency MPLS Forwarding Information
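Conceptually, the ordered next sid label entries of an explicit path become the MPLS label stack that the ingress pushes, with the first entry on top. A minimal sketch of that ordering, using the adjacency SIDs from PE2's explicit-path pe2tope1 in this example (illustration only, ignoring platform details such as penultimate hop popping):

```python
def build_label_stack(explicit_path: list) -> list:
    """Turn an ordered list of (label, sid_type) entries from an
    explicit path into the MPLS label stack pushed at the ingress:
    the first configured hop ends up on top of the stack."""
    return [label for label, sid_type in explicit_path]

# Adjacency SIDs configured under explicit-path pe2tope1 in this example.
path = [(48120, "adjacency"), (48121, "adjacency")]
assert build_label_stack(path) == [48120, 48121]
```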
# Configure the P.
[~P] mpls lsr-id 2.2.2.2
[*P] mpls
[*P-mpls] mpls te
[*P-mpls] quit
[*P] segment-routing
[*P-segment-routing] quit
[*P] isis 1
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.3
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] quit
[*PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] traffic-eng level-2
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] segment-routing global-block 153616 153800
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2] explicit-path pe2tope1
[*PE2-explicit-path-pe2tope1] next sid label 48120 type adjacency
[*PE2-explicit-path-pe2tope1] next sid label 48121 type adjacency
[*PE2-explicit-path-pe2tope1] quit
[*PE2] interface tunnel1
[*PE2-Tunnel1] ip address unnumbered interface loopback 1
[*PE2-Tunnel1] tunnel-protocol mpls te
[*PE2-Tunnel1] destination 1.1.1.1
[*PE2-Tunnel1] mpls te tunnel-id 1
[*PE2-Tunnel1] mpls te signal-protocol segment-routing
[*PE2-Tunnel1] mpls te path explicit-path pe2tope1
[*PE2-Tunnel1] mpls te reserved-for-binding
[*PE2-Tunnel1] quit
[*PE2] commit
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] route-distinguisher 200:1
[*PE2-evpn-instance-evrf1] vpn-target 1:1
[*PE2-evpn-instance-evrf1] quit
[*PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] ipv4-family
[*PE2-vpn-instance-vpn1-af-ipv4] route-distinguisher 200:2
[*PE2-vpn-instance-vpn1-af-ipv4] vpn-target 2:2 evpn
[*PE2-vpn-instance-vpn1-af-ipv4] quit
[*PE2-vpn-instance-vpn1] evpn mpls routing-enable
[*PE2-vpn-instance-vpn1] quit
[*PE2] bridge-domain 10
[*PE2-bd10] evpn binding vpn-instance evrf1
[*PE2-bd10] quit
[*PE2] commit
# Configure PE2.
[~PE2] evpn source-address 3.3.3.3
[*PE2] commit
Step 6 Configure the Layer 2 Ethernet sub-interfaces connecting PEs and CEs.
# Configure PE1.
[~PE1] interface GigabitEthernet 1/0/0
[*PE1-Gigabitethernet1/0/0] esi 0011.1111.1111.1111.1111
[*PE1-Gigabitethernet1/0/0] quit
[*PE1] interface GigabitEthernet 1/0/0.1 mode l2
[*PE1-GigabitEthernet 1/0/0.1] encapsulation dot1q vid 10
[*PE1-GigabitEthernet 1/0/0.1] rewrite pop single
[*PE1-GigabitEthernet 1/0/0.1] bridge-domain 10
[*PE1-GigabitEthernet 1/0/0.1] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface GigabitEthernet 1/0/0
[*PE2-Gigabitethernet1/0/0] esi 0011.1111.1111.1111.2222
[*PE2-Gigabitethernet1/0/0] quit
[*PE2] interface GigabitEthernet 1/0/0.1 mode l2
[*PE2-GigabitEthernet 1/0/0.1] encapsulation dot1q vid 10
[*PE2-GigabitEthernet 1/0/0.1] rewrite pop single
[*PE2-GigabitEthernet 1/0/0.1] bridge-domain 10
[*PE2-GigabitEthernet 1/0/0.1] quit
[*PE2] commit
Step 7 Configure a vBDIF interface on each PE and bind the vBDIF interface to a VPN
instance.
# Configure PE1.
[~PE1] interface Vbdif10
# Configure PE2.
[~PE2] interface Vbdif10
[*PE2-Vbdif10] ip binding vpn-instance vpn1
[*PE2-Vbdif10] ip address 192.168.2.1 255.255.255.0
[*PE2-Vbdif10] quit
[*PE2] commit
Step 8 Configure and apply a tunnel policy so that EVPN can recurse to SR-MPLS TE
tunnels.
# Configure PE1.
[~PE1] tunnel-policy srte
[*PE1-tunnel-policy-srte] tunnel binding destination 3.3.3.3 te Tunnel1
[*PE1-tunnel-policy-srte] quit
[*PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] tnl-policy srte
[*PE1-evpn-instance-evrf1] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] tnl-policy srte evpn
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy srte
[*PE2-tunnel-policy-srte] tunnel binding destination 1.1.1.1 te Tunnel1
[*PE2-tunnel-policy-srte] quit
[*PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] tnl-policy srte
[*PE2-evpn-instance-evrf1] quit
[*PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] tnl-policy srte evpn
[*PE2-vpn-instance-vpn1] quit
[*PE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 1.1.1.1 as-number 100
[*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 1.1.1.1 enable
[*PE2-bgp-af-evpn] peer 1.1.1.1 advertise irb
[*PE2-bgp-af-evpn] quit
After completing the configuration, run the display bgp evpn peer command to
check that the BGP EVPN peer relationship between the PEs is in the Established
state. The following example uses the command output on PE1.
[~PE1] display bgp evpn peer
# Configure CE2.
[~CE2] interface GigabitEthernet 1/0/0.1 mode l2
[*CE2-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
[*CE2-GigabitEthernet1/0/0.1] rewrite pop single
[*CE2-GigabitEthernet1/0/0.1] quit
EVPN-Instance evrf1:
Number of A-D Routes: 3
Network(ESI/EthTagId) NextHop
*> 0011.1111.1111.1111.1111:0 127.0.0.1
*>i 0011.1111.1111.1111.2222:0 3.3.3.3
*>i 0011.1111.1111.1111.2222:4294967295 3.3.3.3
EVPN-Instance evrf1:
Number of Mac Routes: 2
Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr) NextHop
*>i 0:48:00e0-fc12-3456:32:192.168.2.2 3.3.3.3
*> 0:48:00e0-fc12-7890:32:192.168.1.2 0.0.0.0
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 2
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*> 0:32:1.1.1.1 127.0.0.1
*>i 0:32:3.3.3.3 3.3.3.3
EVPN-Instance evrf1:
Number of ES Routes: 2
Network(ESI) NextHop
*> 0011.1111.1111.1111.1111 127.0.0.1
*>i 0011.1111.1111.1111.2222 3.3.3.3
EVPN-Instance evrf1:
Number of Mac Routes: 1
BGP routing table entry information of 0:48:00e0-fc12-3456:32:192.168.2.2:
Route Distinguisher: 200:1
Remote-Cross route
Label information (Received/Applied): 54011 48182/NULL
From: 3.3.3.3 (10.2.1.2)
Route Duration: 0d20h42m36s
Relay Tunnel Out-Interface: Tunnel1
Original nexthop: 3.3.3.3
Qos information : 0x0
Ext-Community: RT <1 : 1>, Mac Mobility <flag:1 seq:0 res:0>
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255
Route Type: 2 (MAC Advertisement Route)
Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc12-3456/48, IP Address/Len: 0.0.0.0/0, ESI:
0000.0000.0000.0000.0000
Not advertised to any peer yet
[~PE1] display bgp evpn all routing-table prefix-route 0:192.168.2.0:24
BGP local router ID : 10.1.1.1
Local AS number : 100
Total routes of Route Distinguisher(200:2): 1
BGP routing table entry information of 0:192.168.2.0:24:
Label information (Received/Applied): 48185/NULL
From: 3.3.3.3 (10.2.1.2)
Route Duration: 0d20h38m31s
Relay IP Nexthop: 10.1.1.2
Relay Tunnel Out-Interface: GigabitEthernet1/0/0
Original nexthop: 3.3.3.3
Qos information : 0x0
Ext-Community: RT <2 : 2>
AS-path Nil, origin incomplete, MED 0, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP
cost 20
Route Type: 5 (Ip Prefix Route)
Ethernet Tag ID: 0, IP Prefix/Len: 192.168.2.0/24, ESI: 0000.0000.0000.0000.0000, GW IP Address: 0.0.0.0
Not advertised to any peer yet
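The route keys in the outputs above (for example, 0:48:00e0-fc12-3456:32:192.168.2.2 and 0:192.168.2.0:24) encode the EVPN route fields in a colon-separated form. A small parser sketch, assuming the display formats shown above (illustrative, not an official API):

```python
def parse_mac_route_key(key: str):
    """Parse an EVPN Type 2 (MAC/IP advertisement) route key as printed
    above: EthTagId:MacAddrLen:MacAddr:IpAddrLen:IpAddr.
    The MAC address uses hyphens, so splitting on ':' is safe."""
    eth_tag, mac_len, mac, ip_len, ip = key.split(":")
    return int(eth_tag), int(mac_len), mac, int(ip_len), ip

def parse_prefix_route_key(key: str):
    """Parse an EVPN Type 5 (IP prefix) route key as printed above:
    EthTagId:Prefix:PrefixLen."""
    eth_tag, prefix, plen = key.split(":")
    return int(eth_tag), prefix, int(plen)

assert parse_mac_route_key("0:48:00e0-fc12-3456:32:192.168.2.2") == \
    (0, 48, "00e0-fc12-3456", 32, "192.168.2.2")
assert parse_prefix_route_key("0:192.168.2.0:24") == (0, "192.168.2.0", 24)
```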
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
tnl-policy srte
vpn-target 1:1 export-extcommunity
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
peer 3.3.3.3 advertise irb
#
tunnel-policy srte
tunnel binding destination 3.3.3.3 te Tunnel1
#
evpn source-address 1.1.1.1
#
return
● P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.1111.1111.2222.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 153616 153800
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
isis prefix-sid absolute 153710
#
return
● PE2 configuration file
#
sysname PE2
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 200:1
tnl-policy srte
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 200:2
apply-label per-instance
Networking Requirements
On the network shown in Figure 1-125, EVPN L3VPNv6 is configured on the
backbone network to allow IPv6 interworking at Layer 3 between CEs. In this
example, PEs transmit service traffic over SR-MPLS TE tunnels.
Configuration Notes
When configuring EVPN L3VPNv6 over SR-MPLS TE, note the following:
● For the same VPN instance, the export VPN target list of a site shares VPN
targets with the import VPN target lists of the other sites, and the import VPN
target list of a site shares VPN targets with the export VPN target lists of the
other sites.
● Using the local loopback interface address of a PE as the EVPN source address
is recommended.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP to allow communication between PE1, PE2, and P.
2. Configure an SR-MPLS TE tunnel on the backbone network.
3. Configure an IPv6 VPN instance on each PE and bind the IPv6 VPN instance to
an interface.
4. Establish BGP EVPN peer relationships between PEs.
5. Configure and apply a tunnel policy so that EVPN routes can recurse to
SR-MPLS TE tunnels.
6. Establish VPN BGP peer relationships between PEs and CEs.
Data Preparation
To complete the configuration, you need the following data:
● Name of the IPv6 VPN instance: vpn1
● RD (100:1) and RT (1:1) of the IPv6 VPN instance
Procedure
Step 1 Configure interface IP addresses of each device.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.1 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P.
<HUAWEI> system-view
[~HUAWEI] sysname P
[*HUAWEI] commit
[~P] interface loopback 1
[*P-LoopBack1] ip address 2.2.2.2 32
[*P-LoopBack1] quit
[*P] interface gigabitethernet1/0/0
[*P-GigabitEthernet1/0/0] ip address 10.1.1.2 24
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet2/0/0
[*P-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.3 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
Step 2 Configure an IGP to allow communication between PE1, PE2, and P. IS-IS is used
as the IGP in this example.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] network-entity 00.1111.1111.1111.00
[*PE1-isis-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface GigabitEthernet 2/0/0
[*PE1-GigabitEthernet2/0/0] isis enable 1
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-2
[*P-isis-1] network-entity 00.1111.1111.2222.00
[*P-isis-1] quit
[*P] interface loopback 1
[*P-LoopBack1] isis enable 1
[*P-LoopBack1] quit
[*P] interface GigabitEthernet 1/0/0
[*P-GigabitEthernet1/0/0] isis enable 1
[*P-GigabitEthernet1/0/0] quit
[*P] interface GigabitEthernet 2/0/0
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] network-entity 00.1111.1111.3333.00
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface GigabitEthernet 2/0/0
[*PE2-GigabitEthernet2/0/0] isis enable 1
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
After completing the configuration, run the display isis peer command to check
that the IS-IS neighbor relationships between PE1, PE2, and the P are Up. Run the
display ip routing-table command to verify that the PEs have learned the routes
to each other's Loopback1.
The following example uses the command output on PE1.
[~PE1] display isis peer
Peer information for ISIS(1)
Total Peer(s): 1
[~PE1] display ip routing-table
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : _public_
Destinations : 11 Routes : 11
The next sid label command uses the adjacency label from PE1 to the P, which is
dynamically generated by IS-IS. This adjacency label can be obtained using the
display segment-routing adjacency mpls forwarding command.
[~PE1] display segment-routing adjacency mpls forwarding
Segment Routing Adjacency MPLS Forwarding Information
# Configure the P.
[~P] mpls lsr-id 2.2.2.2
[*P] mpls
[*P-mpls] mpls te
[*P-mpls] quit
[*P] segment-routing
[*P-segment-routing] quit
[*P] isis 1
[*P-isis-1] cost-style wide
[*P-isis-1] traffic-eng level-2
[*P-isis-1] segment-routing mpls
[*P-isis-1] segment-routing global-block 153616 153800
[*P-isis-1] quit
[*P] interface loopback 1
[*P-LoopBack1] isis prefix-sid absolute 153710
[*P-LoopBack1] quit
[*P] commit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.3
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] quit
[*PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] traffic-eng level-2
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] segment-routing global-block 153616 153800
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis prefix-sid absolute 153720
[*PE2-LoopBack1] quit
[*PE2] explicit-path pe2tope1
[*PE2-explicit-path-pe2tope1] next sid label 48142 type adjacency
[*PE2-explicit-path-pe2tope1] next sid label 48140 type adjacency
[*PE2-explicit-path-pe2tope1] quit
[*PE2] interface tunnel1
[*PE2-Tunnel1] ip address unnumbered interface loopback 1
[*PE2-Tunnel1] tunnel-protocol mpls te
[*PE2-Tunnel1] destination 1.1.1.1
[*PE2-Tunnel1] mpls te tunnel-id 1
[*PE2-Tunnel1] mpls te signal-protocol segment-routing
[*PE2-Tunnel1] mpls te path explicit-path pe2tope1
[*PE2-Tunnel1] mpls te reserved-for-binding
[*PE2-Tunnel1] quit
[*PE2] commit
Step 4 Configure an IPv6 VPN instance on each PE and bind the IPv6 VPN instance to an
interface.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv6-family
[*PE1-vpn-instance-vpn1-af-ipv6] route-distinguisher 100:1
[*PE1-vpn-instance-vpn1-af-ipv6] vpn-target 1:1 evpn
[*PE1-vpn-instance-vpn1-af-ipv6] evpn mpls routing-enable
[*PE1-vpn-instance-vpn1-af-ipv6] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] interface GigabitEthernet 1/0/0
[*PE1-GigabitEthernet1/0/0] ip binding vpn-instance vpn1
[*PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:DB8:1::1 64
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] ipv6-family
[*PE2-vpn-instance-vpn1-af-ipv6] route-distinguisher 100:1
[*PE2-vpn-instance-vpn1-af-ipv6] vpn-target 1:1 evpn
[*PE2-vpn-instance-vpn1-af-ipv6] evpn mpls routing-enable
[*PE2-vpn-instance-vpn1-af-ipv6] quit
[*PE2-vpn-instance-vpn1] quit
[*PE2] interface GigabitEthernet 1/0/0
[*PE2-GigabitEthernet1/0/0] ip binding vpn-instance vpn1
[*PE2-GigabitEthernet1/0/0] ipv6 enable
[*PE2-GigabitEthernet1/0/0] ipv6 address 2001:DB8:2::1 64
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
Step 5 Configure and apply a tunnel policy so that EVPN routes can recurse to SR-MPLS TE tunnels.
# Configure PE1.
[~PE1] tunnel-policy srte
# Configure PE2.
[~PE2] tunnel-policy srte
[*PE2-tunnel-policy-srte] tunnel binding destination 1.1.1.1 te Tunnel1
[*PE2-tunnel-policy-srte] quit
[*PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] ipv6-family
[*PE2-vpn-instance-vpn1-af-ipv6] tnl-policy srte evpn
[*PE2-vpn-instance-vpn1-af-ipv6] quit
[*PE2-vpn-instance-vpn1] quit
[*PE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 1.1.1.1 as-number 100
[*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 1.1.1.1 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] ipv6-family vpn-instance vpn1
[*PE2-bgp-6-vpn1] import-route direct
[*PE2-bgp-6-vpn1] advertise l2vpn evpn
[*PE2-bgp-6-vpn1] quit
[*PE2-bgp] quit
[*PE2] commit
After completing the configuration, run the display bgp evpn peer command to
check that the BGP EVPN peer relationship between the PEs is in the Established
state. The following example uses the command output on PE1.
[~PE1] display bgp evpn peer
Step 7 Establish VPN BGP peer relationships between PEs and CEs.
# Configure EBGP on PE1.
The configuration for establishing a VPN BGP peer relationship between PE2 and
CE2 is similar to that for establishing one between PE1 and CE1. For configuration
details, see Figure 1-125 in this section.
After completing the configurations, run the display bgp evpn all routing-table
command on PEs to view the EVPN routes sent from the peer PEs. The following
example uses the command output on PE1.
[~PE1] display bgp evpn all routing-table
Local AS number : 100
Run the display ipv6 routing-table vpn-instance vpn1 command on the PEs to
view the VPN instance routing table information. The following example uses the
command output on PE1.
[~PE1] display ipv6 routing-table vpn-instance vpn1
Routing Table : vpn1
Destinations : 6 Routes : 6
Run the display ipv6 routing-table brief command on the CEs to view the IPv6
routing table information. The following example uses the command output on
CE1.
[~CE1] display ipv6 routing-table brief
::FFFF:127.0.0.1/128 Direct
::1 InLoopBack0
2001:DB8:1::/64 Direct
2001:DB8:1::2 GigabitEthernet1/0/0
2001:DB8:1::2/128 Direct
::1 GigabitEthernet1/0/0
2001:DB8:2::/64 EBGP
2001:DB8:1::1 GigabitEthernet1/0/0
2001:DB8:4::4/128 Direct
::1 LoopBack1
2001:DB8:5::5/128 EBGP
2001:DB8:1::1 GigabitEthernet1/0/0
FE80::/10 Direct
:: NULL0
The CEs can ping each other. For example, CE1 can ping CE2 (2001:DB8:5::5).
[~CE1] ping ipv6 2001:DB8:5::5
PING 2001:DB8:5::5 : 56 data bytes, press CTRL_C to break
Reply from 2001:DB8:5::5
bytes=56 Sequence=1 hop limit=61 time=385 ms
Reply from 2001:DB8:5::5
bytes=56 Sequence=2 hop limit=61 time=24 ms
Reply from 2001:DB8:5::5
bytes=56 Sequence=3 hop limit=61 time=25 ms
Reply from 2001:DB8:5::5
bytes=56 Sequence=4 hop limit=61 time=24 ms
Reply from 2001:DB8:5::5
bytes=56 Sequence=5 hop limit=61 time=18 ms
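When scripting this verification, the RTT samples can be pulled out of the ping output programmatically. A minimal sketch in Python, assuming the reply format shown above:

```python
import re

def rtt_stats(ping_output: str):
    """Extract RTT samples (in ms) from ping output in the format shown
    above and return (count, min, max, average)."""
    times = [int(m) for m in re.findall(r"time=(\d+)\s*ms", ping_output)]
    return len(times), min(times), max(times), sum(times) / len(times)

# The five replies from the ping output in this example.
sample = ("Reply from 2001:DB8:5::5 bytes=56 Sequence=1 hop limit=61 time=385 ms\n"
          "Reply from 2001:DB8:5::5 bytes=56 Sequence=2 hop limit=61 time=24 ms\n"
          "Reply from 2001:DB8:5::5 bytes=56 Sequence=3 hop limit=61 time=25 ms\n"
          "Reply from 2001:DB8:5::5 bytes=56 Sequence=4 hop limit=61 time=24 ms\n"
          "Reply from 2001:DB8:5::5 bytes=56 Sequence=5 hop limit=61 time=18 ms\n")
count, lo, hi, avg = rtt_stats(sample)
assert (count, lo, hi) == (5, 18, 385)
```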
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
ipv6-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
#
explicit-path pe1tope2
next sid label 48140 type adjacency
next sid label 48141 type adjacency
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.1111.1111.1111.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 153616 153800
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:1::1/64
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
isis prefix-sid absolute 153700
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te reserved-for-binding
mpls te tunnel-id 1
mpls te path explicit-path pe1tope2
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
ipv6-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
peer 2001:DB8:1::2 as-number 65410
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
#
tunnel-policy srte
tunnel binding destination 3.3.3.3 te Tunnel1
#
return
● P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.1111.1111.2222.00
traffic-eng level-2
segment-routing mpls
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te reserved-for-binding
mpls te tunnel-id 1
mpls te path explicit-path pe2tope1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
ipv6-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
peer 2001:DB8:2::2 as-number 65420
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
#
tunnel-policy srte
tunnel binding destination 1.1.1.1 te Tunnel1
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::2/64
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:4::4/128
#
bgp 65410
router-id 4.4.4.4
peer 2001:DB8:1::1 as-number 100
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
import-route direct
peer 2001:DB8:1::1 enable
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:2::2/64
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:5::5/128
#
bgp 65420
router-id 5.5.5.5
Networking Requirements
In Figure 1-126, an SR-MPLS TE tunnel is to be established from PE1 to PE2.
Segment routing (SR) is used to generate path information and forward data. PE1
is the ingress, and PE2 is the egress. IS-IS neighbor relationships are established
between the PEs and Ps.
● IS-IS assigns labels to each neighbor and collects network topology
information. P1 runs BGP-LS to collect topology information and reports the
information to the controller.
● The controller uses the information to calculate a path and runs PCEP to
deliver path information to ingress PE1.
● The controller sends the tunnel configuration to the ingress node PE1
through NETCONF.
● The ingress node PE1 uses the delivered tunnel configuration and label
stacks to establish an SR-MPLS TE tunnel, and delegates the tunnel to the
controller through PCEP.
Figure 1-126 Example for configuring the controller to run NETCONF to deliver
configurations to create an SR-MPLS TE tunnel
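The controller's path computation over the BGP-LS-collected topology can be sketched as a shortest-path search; the result is then encoded as SIDs and delivered via PCEP. A toy illustration using the node names from the figure, with hypothetical IGP costs:

```python
import heapq

def shortest_path(graph: dict, src: str, dst: str) -> list:
    """Dijkstra over a topology such as one learned via BGP-LS.
    graph maps node -> {neighbor: igp_cost}. Returns the node list
    from src to dst (raises KeyError if dst is unreachable)."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    seen = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk back from dst to src to reconstruct the path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy topology based on the figure: PE1 - P1 - PE2 (costs are assumptions).
topo = {"PE1": {"P1": 10}, "P1": {"PE1": 10, "PE2": 10}, "PE2": {"P1": 10}}
assert shortest_path(topo, "PE1", "PE2") == ["PE1", "P1", "PE2"]
```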
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address and its mask to every interface and configure a loopback
interface address as an LSR ID on every node.
2. Configure LSR IDs and enable MPLS TE globally and on interfaces on each
LSR.
3. Enable SR globally on each node.
4. Configure a label allocation mode and a topology information collection
mode. In this example, the forwarders assign labels.
5. Configure the PCC and segment routing on each forwarder.
6. Configure the PCE server on the controller.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces as shown in Figure 1-126
● IS-IS process ID (1), IS-IS system ID of each node (converted from a
loopback0 IP address), and IS-IS level (level-2)
● BGP-LS peer relationship between the controller and P1, as shown in Figure
1-126.
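A common convention for deriving an IS-IS system ID from a loopback address is to zero-pad each octet to three digits, concatenate the twelve digits, and regroup them into three four-digit blocks. (Note that the NETs used in this example, such as 10.0000.0000.0002.00, are assigned manually rather than derived this way.) A minimal sketch of the conversion:

```python
def loopback_to_sysid(ip: str) -> str:
    """Convert a dotted-quad loopback address to an IS-IS system ID.

    Each octet is zero-padded to three digits, the twelve digits are
    concatenated, then regrouped into three four-digit blocks.
    """
    padded = "".join(f"{int(octet):03d}" for octet in ip.split("."))
    return ".".join(padded[i:i + 4] for i in range(0, 12, 4))

print(loopback_to_sysid("10.21.2.9"))   # 0100.2100.2009
print(loopback_to_sysid("1.1.1.1"))     # 0010.0100.1001
```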
Procedure
Step 1 Assign an IP address and a mask to each interface.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 0
[*PE1-LoopBack0] ip address 1.1.1.1 32
[*PE1-LoopBack0] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.1.2.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 0
[*P1-LoopBack0] ip address 2.2.2.2 32
[*P1-LoopBack0] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.1.2.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.1.3.2 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] interface gigabitethernet3/0/0
[*P1-GigabitEthernet3/0/0] ip address 10.2.1.1 24
[*P1-GigabitEthernet3/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 0
[*PE2-LoopBack0] ip address 3.3.3.3 32
[*PE2-LoopBack0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 10.1.3.1 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
Step 2 Configure IS-IS to advertise the route to each network segment to which each
interface is connected and to advertise the host route to each loopback address
that is used as an LSR ID.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] network-entity 10.0000.0000.0002.00
[*PE1-isis-1] quit
[*PE1] interface loopback 0
[*PE1-LoopBack0] isis enable 1
[*PE1-LoopBack0] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-2
[*P1-isis-1] network-entity 10.0000.0000.0003.00
[*P1-isis-1] quit
[*P1] interface loopback 0
[*P1-LoopBack0] isis enable 1
[*P1-LoopBack0] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] interface gigabitethernet3/0/0
[*P1-GigabitEthernet3/0/0] isis enable 1
[*P1-GigabitEthernet3/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] network-entity 10.0000.0000.0004.00
[*PE2-isis-1] quit
[*PE2] interface loopback 0
[*PE2-LoopBack0] isis enable 1
[*PE2-LoopBack0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] isis enable 1
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
Step 3 Configure PCE on the forwarders and controller. For configuration details, see
Configuration Files in this section.
Step 4 Configure basic MPLS functions and enable MPLS TE.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls te
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
Except for the LSR IDs, the configurations on P1 and PE2 are the same as the
configuration on PE1. The configuration details are not provided.
Step 5 Enable SR globally on each node.
# Configure PE1.
[~PE1] segment-routing
[*PE1] commit
The configurations on P1 and PE2 are the same as the configuration on PE1. The
configuration details are not provided.
Step 6 Configure a label allocation mode and a topology information collection mode. In
this example, the forwarders assign labels.
● Enable IS-IS SR-MPLS TE.
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] traffic-eng level-2
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] bgp-ls enable level-2
[*PE1-isis-1] commit
[~PE1-isis-1] quit
The configurations on P1 and PE2 are the same as the configuration on PE1.
The configuration details are not provided.
● Configure the BGP-LS route advertisement capability on P1.
# Enable BGP-LS on P1 and establish a BGP-LS peer relationship with the
controller.
[~P1] bgp 100
[*P1-bgp] peer 10.2.1.2 as-number 100
[*P1-bgp] link-state-family unicast
[*P1-bgp-af-ls] peer 10.2.1.2 enable
[*P1-bgp-af-ls] quit
[*P1-bgp] quit
[*P1] commit
Step 7 The controller sends the tunnel configuration information to PE1 through
NETCONF.
The detailed tunnel configuration delivered by the controller through NETCONF is
as follows:
[~PE1] interface tunnel1
[*PE1-Tunnel1] ip address unnumbered interface loopback 0
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 3.3.3.3
[*PE1-Tunnel1] mpls te tunnel-id 1
[*PE1-Tunnel1] mpls te signal-protocol segment-routing
[*PE1-Tunnel1] mpls te pce delegate
[*PE1-Tunnel1] quit
[*PE1] commit
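The tunnel configuration above reaches PE1 as a NETCONF &lt;edit-config&gt; payload. The sketch below only illustrates assembling such a payload with Python's xml.etree; the namespace and element names are placeholders, not Huawei's actual YANG model, and the NETCONF transport session itself (typically SSH on port 830) is omitted.

```python
# Sketch: building a NETCONF <edit-config> body carrying the tunnel
# configuration shown above. The "urn:example:huawei-te" namespace and
# element names are hypothetical placeholders; the real YANG module
# names depend on the device model and software version.
import xml.etree.ElementTree as ET

NS = "urn:example:huawei-te"  # hypothetical namespace

def build_tunnel_config(name: str, destination: str, tunnel_id: int) -> str:
    config = ET.Element("config")
    tunnel = ET.SubElement(config, f"{{{NS}}}te-tunnel")
    ET.SubElement(tunnel, f"{{{NS}}}name").text = name
    ET.SubElement(tunnel, f"{{{NS}}}destination").text = destination
    ET.SubElement(tunnel, f"{{{NS}}}tunnel-id").text = str(tunnel_id)
    ET.SubElement(tunnel, f"{{{NS}}}signal-protocol").text = "segment-routing"
    ET.SubElement(tunnel, f"{{{NS}}}pce-delegate").text = "true"
    return ET.tostring(config, encoding="unicode")

payload = build_tunnel_config("Tunnel1", "3.3.3.3", 1)
```

In practice a NETCONF client would send this body in an &lt;edit-config&gt; against the candidate datastore and then commit, mirroring the two-stage commit shown in the CLI output.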
Run the display mpls te tunnel path command on PE1 to view path information
on the SR-MPLS TE tunnel.
[~PE1] display mpls te tunnel path
Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.1 :1 :21
Hop Information
Hop 0 Label 330002 NAI 10.1.2.2
Hop 1 Label 330002 NAI 10.1.3.1
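The hop list in this output can also be consumed programmatically, for example when auditing delegated paths. A minimal parser sketch for output shaped like the sample above (the exact field layout may vary by software version):

```python
import re

# Sample output of "display mpls te tunnel path", as shown above.
SAMPLE = """\
Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.1 :1 :21
Hop Information
Hop 0 Label 330002 NAI 10.1.2.2
Hop 1 Label 330002 NAI 10.1.3.1
"""

HOP_RE = re.compile(r"Hop\s+(\d+)\s+Label\s+(\d+)\s+NAI\s+(\S+)")

def parse_hops(text: str) -> list[tuple[int, int, str]]:
    """Return (hop index, label, next-hop address) triples."""
    return [(int(h), int(label), nai) for h, label, nai in HOP_RE.findall(text)]

print(parse_hops(SAMPLE))
# [(0, 330002, '10.1.2.2'), (1, 330002, '10.1.3.1')]
```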
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
#
pce-client
capability segment-routing
connect-server 10.2.1.2
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
traffic-eng level-2
bgp-ls enable level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.2.1 255.255.255.0
isis enable 1
mpls
mpls te
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack0
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te pce delegate
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0003.00
segment-routing mpls
traffic-eng level-2
bgp-ls enable level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.2.2 255.255.255.0
isis enable 1
mpls
mpls te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.3.2 255.255.255.0
isis enable 1
mpls
mpls te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
bgp 100
peer 10.2.1.2 as-number 100
#
ipv4-family unicast
undo synchronization
peer 10.2.1.2 enable
#
link-state-family unicast
peer 10.2.1.2 enable
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
traffic-eng level-2
bgp-ls enable level-2
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.3.1 255.255.255.0
isis enable 1
mpls
mpls te
#
interface LoopBack0
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
● Controller configuration file
#
sysname Controller
#
pce-server
source-address 10.2.1.2
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0005.00
traffic-eng level-2
#
interface GigabitEthernet3/0/0
undo shutdown
Networking Requirements
On the network shown in Figure 1-127, establish a tunnel and LSP from PE1 to
PE2, and use segment routing (SR) for path generation and data forwarding. PE1
and PE2 are the path's ingress and egress, respectively. P1 collects the network
topology and reports it to the controller over IS-IS. The controller calculates a
label path based on the collected topology information and delivers the path
information to the third-party adapter. The third-party adapter then sends the
path information to PE1.
You do not need to configure a PCE client (PCC) because the third-party adapter delivers
the path information.
If a Huawei device connects to a non-Huawei device but the non-Huawei device does not
support BFD, configure one-arm BFD to monitor the link.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address and a mask to each interface, and configure a loopback
address as an LSR ID on each node.
2. Configure LSR IDs and enable MPLS TE globally and on interfaces on each
LSR.
3. Enable segment routing globally on each node.
4. Configure IS-IS TE on each node.
5. Configure an IS-IS neighbor relationship between P1 and the controller so
that P1 reports the network topology to the controller over IS-IS.
6. Configure a tunnel interface on PE1, and specify an IP address, tunnel
protocol, destination IP address, and tunnel bandwidth.
7. Configure a BFD session on PE1 to monitor the primary SR-MPLS TE tunnel.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface, as shown in Figure 1-127
● IS-IS process ID: 1; system ID: loopback0 address; IS-IS level: level-2
● IS-IS neighbor relationship between P1 and the controller, as shown in Figure
1-127
● Name of a BFD session
● Local and remote discriminators of a BFD session
Procedure
Step 1 Assign an IP address and a mask to each interface.
# Configure PE1.
<HUAWEI> system-view
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 0
[*P1-LoopBack0] ip address 10.31.2.9 32
[*P1-LoopBack0] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.1.23.3 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.20.34.3 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] interface gigabitethernet1/0/1
[*P1-GigabitEthernet1/0/1] ip address 10.7.2.10 24
[*P1-GigabitEthernet1/0/1] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 0
[*PE2-LoopBack0] ip address 10.41.2.9 32
[*PE2-LoopBack0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 10.20.34.4 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
Step 2 Configure IS-IS to advertise the route to each network segment of each interface
and to advertise the host route to each loopback address (used as an LSR ID).
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] network-entity 11.1111.1111.1111.00
[*PE1-isis-1] quit
[*PE1] interface loopback 0
[*PE1-LoopBack0] isis enable 1
[*PE1-LoopBack0] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-2
[*P1-isis-1] network-entity 11.2222.2222.2222.00
[*P1-isis-1] quit
[*P1] interface loopback 0
[*P1-LoopBack0] isis enable 1
[*P1-LoopBack0] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] interface gigabitethernet1/0/1
[*P1-GigabitEthernet1/0/1] isis enable 1
[*P1-GigabitEthernet1/0/1] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] network-entity 11.3333.3333.3333.00
[*PE2-isis-1] quit
[*PE2] interface loopback 0
[*PE2-LoopBack0] isis enable 1
[*PE2-LoopBack0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] isis enable 1
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
# Configure PE1.
[~PE1] mpls lsr-id 10.21.2.9
[~PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] quit
[~PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls te
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
The configurations except the LSR ID configuration on P1 and PE2 are the same as
those on PE1.
# Configure PE1.
[~PE1] segment-routing
[~PE1] commit
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] traffic-eng level-2
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] commit
[~PE1-isis-1] quit
# Configure PE1.
[~PE1] interface tunnel1
[*PE1-Tunnel10] ip address unnumbered interface loopback 0
[*PE1-Tunnel10] tunnel-protocol mpls te
[*PE1-Tunnel10] destination 10.41.2.9
[*PE1-Tunnel10] mpls te tunnel-id 1
[*PE1-Tunnel10] mpls te signal-protocol segment-routing
[*PE1-Tunnel10] commit
[~PE1-Tunnel10] quit
After completing the configuration, run the display interface tunnel command
on PE1. You can check that the tunnel interface is Up.
Run the display mpls te tunnel command on each node to check MPLS TE tunnel
establishment.
[~PE1] display mpls te tunnel
------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/Out Label R Tunnel-name
------------------------------------------------------------------------------
10.21.2.9 10.41.2.9 1 --/20 I Tunnel10
[~PE2] display mpls te tunnel
------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/Out Label R Tunnel-name
------------------------------------------------------------------------------
10.41.2.9 10.21.2.9 1 --/120 I Tunnel10
# After completing the configuration, run the display bfd session { all |
discriminator discr-value | mpls-te interface interface-type interface-number }
[ verbose ] command on PE1 and PE2. You can check that the BFD session is Up.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 10.21.2.9
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 11.1111.1111.1111.00
segment-routing mpls
import-route static
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.23.2 255.255.255.0
isis enable 1
mpls
mpls te
#
interface LoopBack0
ip address 10.21.2.9 255.255.255.255
isis enable 1
#
interface Tunnel10
ip address unnumbered interface LoopBack0
tunnel-protocol mpls te
destination 10.41.2.9
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
#
bfd
#
bfd pe2tope1 bind mpls-te interface Tunnel10
discriminator local 12
discriminator remote 21
min-tx-interval 100
min-rx-interval 100
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 10.31.2.9
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 11.2222.2222.2222.00
segment-routing mpls
import-route static
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.23.3 255.255.255.0
isis enable 1
mpls
mpls te
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.7.2.10 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.20.34.3 255.255.255.0
isis enable 1
mpls
mpls te
#
interface LoopBack0
ip address 10.31.2.9 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 10.41.2.9
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 11.3333.3333.3333.00
segment-routing mpls
import-route static
traffic-eng level-2
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.20.34.4 255.255.255.0
isis enable 1
mpls
mpls te
#
interface LoopBack0
ip address 10.41.2.9 255.255.255.255
isis enable 1
#
interface Tunnel10
ip address unnumbered interface LoopBack0
tunnel-protocol mpls te
destination 10.21.2.9
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
#
bfd
#
bfd pe2tope1 bind mpls-te interface Tunnel10
discriminator local 21
discriminator remote 12
min-tx-interval 100
min-rx-interval 100
#
return
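A static BFD session comes up only when the discriminators mirror each other: each end's local discriminator must equal the peer's remote discriminator. A quick sanity check of this rule, using the values from the configuration files above:

```python
def discriminators_mirrored(end_a: dict, end_b: dict) -> bool:
    """Check that each end's local discriminator equals the peer's remote one."""
    return (end_a["local"] == end_b["remote"]
            and end_a["remote"] == end_b["local"])

pe1 = {"local": 12, "remote": 21}   # from PE1's configuration file
pe2 = {"local": 21, "remote": 12}   # from PE2's configuration file
print(discriminators_mirrored(pe1, pe2))  # True
```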
Networking Requirements
In Figure 1-128, an SR-MPLS TE tunnel with primary and backup LSPs is to be
established from PE1 to PE2. Segment routing (SR) is used to generate path
information and forward data. PE2 collects network topology information and
runs IS-IS to flood the information to the controller. The controller uses the
information to calculate the primary and backup LSPs and delivers the LSP
information to a third-party adapter, which forwards the information to the
ingress PE1.
Hot standby is enabled for the tunnel. If the primary LSP fails, traffic is switched to
the backup LSP. After the primary LSP recovers, traffic is switched back.
There is no need to configure a PCE client (PCC) because the third-party adapter is used to
deliver paths.
If a Huawei device connects to a BFD-incapable non-Huawei device, one-arm BFD can be
configured to monitor links.
Figure 1-128 Networking diagram for configuring dynamic BFD for SR-MPLS TE
LSP
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address and its mask to every interface and configure a loopback
interface address as an LSR ID on every node.
2. Configure LSR IDs and enable MPLS TE globally and on interfaces on each
LSR.
3. Enable SR globally on each node.
4. Configure a label allocation mode and a topology information collection
mode. In this example, the controller collects topology information and
assigns labels to the forwarders.
5. Establish an IS-IS neighbor relationship between PE2 and the controller so
that PE2 can flood network topology information to the controller.
6. Configure a tunnel interface on the ingress PE1 and set a tunnel IP address, a
tunneling protocol, a destination IP address, and the tunnel bandwidth.
7. Configure CR-LSP hot standby.
8. Enable BFD on the ingress PE1, configure BFD for MPLS TE, and set the
minimum intervals at which BFD packets are sent and received and the local
detection multiplier.
9. Enable the egress to passively create a BFD session.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address and a mask to each interface.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 0
[*PE1-LoopBack0] ip address 1.1.1.1 32
[*PE1-LoopBack0] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip address 10.1.2.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 0
[*P1-LoopBack0] ip address 2.2.2.2 32
[*P1-LoopBack0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.1.2.2 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] interface gigabitethernet3/0/0
[*P1-GigabitEthernet3/0/0] ip address 10.1.3.2 24
[*P1-GigabitEthernet3/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 0
[*PE2-LoopBack0] ip address 3.3.3.3 32
[*PE2-LoopBack0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.1.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] ip address 10.1.3.1 24
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
Step 2 Configure IS-IS to advertise the route to each network segment to which each
interface is connected and to advertise the host route to each loopback address
that is used as an LSR ID.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] network-entity 10.0000.0000.0002.00
[*PE1-isis-1] quit
[*PE1] interface loopback 0
[*PE1-LoopBack0] isis enable 1
[*PE1-LoopBack0] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] isis enable 1
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-2
[*P1-isis-1] network-entity 10.0000.0000.0003.00
[*P1-isis-1] quit
[*P1] interface loopback 0
[*P1-LoopBack0] isis enable 1
[*P1-LoopBack0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] interface gigabitethernet3/0/0
[*P1-GigabitEthernet3/0/0] isis enable 1
[*P1-GigabitEthernet3/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] network-entity 10.0000.0000.0004.00
[*PE2-isis-1] quit
[*PE2] interface loopback 0
[*PE2-LoopBack0] isis enable 1
[*PE2-LoopBack0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] isis enable 1
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] isis enable 1
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
Step 3 Configure an IS-IS neighbor relationship between the controller and PE2.
Running IS-IS between the controller and PE2 allows the two devices to
communicate with each other so that PE2 can flood network topology information
to the controller. For configuration details, see Configuration Files in this section.
Step 4 Configure basic MPLS functions and enable MPLS TE.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls te
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
Except for the LSR IDs, the configurations on P1 and PE2 are the same as the
configuration on PE1. The configuration details are not provided.
Step 5 Enable SR globally on each node.
# Configure PE1.
[~PE1] segment-routing
[*PE1] commit
The configurations on P1 and PE2 are the same as the configuration on PE1. The
configuration details are not provided.
Step 6 Configure a label allocation mode and a topology information collection mode. In
this example, the controller collects topology information and assigns labels to
the forwarders.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] traffic-eng level-2
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] commit
[~PE1-isis-1] quit
The configurations on P1 and PE2 are the same as the configuration on PE1. The
configuration details are not provided.
Step 7 Configure a tunnel interface and hot standby on the ingress PE1.
# Configure PE1.
[~PE1] interface tunnel1
[*PE1-Tunnel1] ip address unnumbered interface loopback 0
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 3.3.3.3
[*PE1-Tunnel1] mpls te tunnel-id 1
[*PE1-Tunnel1] mpls te signal-protocol segment-routing
[*PE1-Tunnel1] mpls te pce delegate
[*PE1-Tunnel1] mpls te backup hot-standby
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
Run the display mpls te tunnel command on PE1 to view SR-MPLS TE tunnel
information.
[~PE1] display mpls te tunnel
-------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/OutLabel R Tunnel-name
-------------------------------------------------------------------------------
- - - 101/101 T lsp
1.1.1.1 3.3.3.3 21 -/330000 I Tunnel1
1.1.1.1 3.3.3.3 26 -/330002 I Tunnel1
-------------------------------------------------------------------------------
R: Role, I: Ingress, T: Transit, E: Egress
Run the display mpls te tunnel path command on PE1 to view path information
on the SR-MPLS TE tunnel.
[~PE1] display mpls te tunnel path
Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.1 :1 :21
Hop Information
Lsp ID : 1.1.1.1 :1 :26
Hop 0 Label 330000 NAI 10.1.1.2
Step 9 Enable BFD and configure BFD for MPLS TE on the ingress PE1.
# Enable BFD for MPLS TE on the tunnel interface of PE1. Set the minimum
intervals at which BFD packets are sent and received to 100 ms and the local
detection multiplier to 3.
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] interface tunnel 1
[*PE1-Tunnel1] mpls te bfd enable
[*PE1-Tunnel1] mpls te bfd min-tx-interval 100 min-rx-interval 100 detect-multiplier 3
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
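With both intervals set to 100 ms and a detection multiplier of 3, a path failure is declared after roughly 300 ms. Per RFC 5880, the asynchronous-mode detection time is the remote system's detect multiplier times the negotiated interval, where the negotiated interval is the larger of the local required minimum RX interval and the remote desired minimum TX interval:

```python
def bfd_detection_time_ms(local_min_rx: int, remote_min_tx: int,
                          remote_multiplier: int) -> int:
    """Asynchronous-mode BFD detection time (RFC 5880): the remote
    detect multiplier times the negotiated interval, i.e. the larger
    of the local min RX interval and the remote min TX interval."""
    return remote_multiplier * max(local_min_rx, remote_min_tx)

# With both intervals at 100 ms and a multiplier of 3, failures are
# detected within about 300 ms.
print(bfd_detection_time_ms(100, 100, 3))  # 300
```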
# After completing the configuration, run the display bfd session mpls-te
interface Tunnel command on PE1 and PE2. The BFD session status is Up.
[~PE1] display bfd session mpls-te interface Tunnel 1 te-lsp
(w): State in WTR
(*): State is invalid
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
16399 16386 3.3.3.3 Up D_TE_LSP Tunnel1
--------------------------------------------------------------------------------
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
bfd
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
traffic-eng level-2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
mpls
mpls te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.2.1 255.255.255.0
isis enable 1
mpls
mpls te
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface Tunnel1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
isis enable 1
mpls
mpls te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.1.3.1 255.255.255.0
isis enable 1
mpls
mpls te
#
interface LoopBack0
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
1.2.40.8 Example for Configuring TI-LFA FRR for a Loose SR-MPLS TE Tunnel
Topology-Independent Loop-Free Alternate (TI-LFA) FRR can be configured to
enhance the reliability of a segment routing (SR) network.
Networking Requirements
On the network shown in Figure 1-129, IS-IS is enabled. The cost of the link
between Device C and Device D is 100, and the cost of other links is 10. Based on
node SIDs, an SR-MPLS TE tunnel (Device A -> Device B -> Device E -> Device F) is
established from Device A to Device F through static explicit paths.
The SR-MPLS TE tunnel established based on node SIDs is a loose one that
supports TI-LFA FRR. TI-LFA FRR can be configured on Device B to provide local
protection, enabling traffic to be quickly switched to the backup path (Device A ->
Device B -> Device C -> Device D -> Device E -> Device F) when the link between
Device B and Device E fails.
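The cost asymmetry explains the backup path: with every link at cost 10 except Device C to Device D at 100, the primary path A -> B -> E -> F costs 30, and once the B-E link fails the best remaining path is A -> B -> C -> D -> E -> F at cost 140. A small shortest-path sketch over this example topology (the link set below is inferred from the primary and backup paths described above):

```python
import heapq

def shortest_path(edges, src, dst):
    """Plain Dijkstra over an undirected weighted graph.

    Returns (total cost, node list) for the cheapest src-to-dst path,
    or None if dst is unreachable.
    """
    graph = {}
    for a, b, cost in edges:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

# Link costs from this example: all 10 except C-D, which is 100.
links = [("A", "B", 10), ("B", "C", 10), ("B", "E", 10),
         ("C", "D", 100), ("D", "E", 10), ("E", "F", 10)]

print(shortest_path(links, "A", "F"))   # (30, ['A', 'B', 'E', 'F'])

# After the B-E link fails, traffic follows the TI-LFA backup path.
broken = [l for l in links if {l[0], l[1]} != {"B", "E"}]
print(shortest_path(broken, "A", "F"))  # (140, ['A', 'B', 'C', 'D', 'E', 'F'])
```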
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
In this example, TI-LFA FRR is configured on Device B to protect the link between
Device B and Device E. On a live network, you are advised to configure TI-LFA FRR
on all nodes in the SR domain.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IS-IS on the entire network to implement interworking between
devices.
2. Enable MPLS on the entire network and configure SR.
3. Configure an explicit path on Device A and establish an SR-MPLS TE tunnel.
4. Enable TI-LFA FRR and anti-microloop on Device B.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR ID of each device
● SRGB range of each device
Procedure
Step 1 Configure interface IP addresses.
# Configure Device A.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] interface loopback 1
[*DeviceA-LoopBack1] ip address 1.1.1.9 32
[*DeviceA-LoopBack1] quit
[*DeviceA] interface gigabitethernet1/0/0
[*DeviceA-GigabitEthernet1/0/0] ip address 10.1.1.1 24
[*DeviceA-GigabitEthernet1/0/0] quit
[*DeviceA] commit
Repeat this step for the other devices. For configuration details, see Configuration
Files in this section.
Step 2 Configure an IGP to implement interworking. IS-IS is used as an example.
# Configure Device A.
[~DeviceA] isis 1
[*DeviceA-isis-1] is-level level-1
[*DeviceA-isis-1] network-entity 10.0000.0000.0001.00
[*DeviceA-isis-1] cost-style wide
[*DeviceA-isis-1] quit
[*DeviceA] commit
[*DeviceA] interface loopback 1
[*DeviceA-LoopBack1] isis enable 1
[*DeviceA-LoopBack1] quit
[*DeviceA] interface gigabitethernet1/0/0
[*DeviceA-GigabitEthernet1/0/0] isis enable 1
[*DeviceA-GigabitEthernet1/0/0] quit
[*DeviceA] commit
Repeat this step for the other devices. For configuration details, see Configuration
Files in this section.
Step 3 Configure basic MPLS capabilities on the backbone network.
# Configure Device A.
[~DeviceA] mpls lsr-id 1.1.1.9
[*DeviceA] mpls
[*DeviceA-mpls] mpls te
[*DeviceA-mpls] commit
[~DeviceA-mpls] quit
# Configure Device B.
[~DeviceB] mpls lsr-id 2.2.2.9
[*DeviceB] mpls
[*DeviceB-mpls] mpls te
[*DeviceB-mpls] commit
[~DeviceB-mpls] quit
# Configure Device C.
[~DeviceC] mpls lsr-id 3.3.3.9
[*DeviceC] mpls
[*DeviceC-mpls] mpls te
[*DeviceC-mpls] commit
[~DeviceC-mpls] quit
# Configure Device D.
[~DeviceD] mpls lsr-id 4.4.4.9
[*DeviceD] mpls
[*DeviceD-mpls] mpls te
[*DeviceD-mpls] commit
[~DeviceD-mpls] quit
# Configure Device E.
[~DeviceE] mpls lsr-id 5.5.5.9
[*DeviceE] mpls
[*DeviceE-mpls] mpls te
[*DeviceE-mpls] commit
[~DeviceE-mpls] quit
# Configure Device F.
[~DeviceF] mpls lsr-id 6.6.6.9
[*DeviceF] mpls
[*DeviceF-mpls] mpls te
[*DeviceF-mpls] commit
[~DeviceF-mpls] quit
Step 4 Configure segment routing and prefix SIDs on each device.
# Configure Device A.
[~DeviceA] segment-routing
[*DeviceA-segment-routing] quit
[*DeviceA] commit
[~DeviceA] isis 1
[*DeviceA-isis-1] segment-routing mpls
[*DeviceA-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceA-isis-1] quit
[*DeviceA] interface loopback 1
[*DeviceA-LoopBack1] isis prefix-sid index 10
[*DeviceA-LoopBack1] quit
[*DeviceA] commit
# Configure Device B.
[~DeviceB] segment-routing
[*DeviceB-segment-routing] quit
[*DeviceB] commit
[~DeviceB] isis 1
[*DeviceB-isis-1] segment-routing mpls
[*DeviceB-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceB-isis-1] quit
[*DeviceB] interface loopback 1
[*DeviceB-LoopBack1] isis prefix-sid index 20
[*DeviceB-LoopBack1] quit
[*DeviceB] commit
# Configure Device C.
[~DeviceC] segment-routing
[*DeviceC-segment-routing] quit
[*DeviceC] commit
[~DeviceC] isis 1
[*DeviceC-isis-1] segment-routing mpls
[*DeviceC-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceC-isis-1] quit
[*DeviceC] interface loopback 1
[*DeviceC-LoopBack1] isis prefix-sid index 30
[*DeviceC-LoopBack1] quit
[*DeviceC] commit
# Configure Device D.
[~DeviceD] segment-routing
[*DeviceD-segment-routing] quit
[*DeviceD] commit
[~DeviceD] isis 1
[*DeviceD-isis-1] segment-routing mpls
[*DeviceD-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceD-isis-1] quit
[*DeviceD] interface loopback 1
[*DeviceD-LoopBack1] isis prefix-sid index 40
[*DeviceD-LoopBack1] quit
[*DeviceD] commit
# Configure Device E.
[~DeviceE] segment-routing
[*DeviceE-segment-routing] quit
[*DeviceE] commit
[~DeviceE] isis 1
[*DeviceE-isis-1] segment-routing mpls
[*DeviceE-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceE-isis-1] quit
[*DeviceE] interface loopback 1
[*DeviceE-LoopBack1] isis prefix-sid index 50
[*DeviceE-LoopBack1] quit
[*DeviceE] commit
# Configure Device F.
[~DeviceF] segment-routing
[*DeviceF-segment-routing] quit
[*DeviceF] commit
[~DeviceF] isis 1
[*DeviceF-isis-1] segment-routing mpls
[*DeviceF-isis-1] segment-routing global-block 16000 23999
The SRGB range varies according to the device. The configuration in this example is for
reference only.
[*DeviceF-isis-1] quit
[*DeviceF] interface loopback 1
[*DeviceF-LoopBack1] isis prefix-sid index 60
[*DeviceF-LoopBack1] quit
[*DeviceF] commit
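With the same SRGB of 16000-23999 configured on every node, a prefix SID's label is simply the SRGB base plus the configured index. This is how the explicit-path labels used in this example (16020, 16050, and 16060 in Device A's configuration file) map back to the indexes 20, 50, and 60 configured on the loopbacks of Device B, Device E, and Device F:

```python
SRGB_BASE, SRGB_END = 16000, 23999  # SRGB configured on every node

def prefix_sid_label(index: int) -> int:
    """Map a prefix SID index into the SRGB: label = base + index."""
    label = SRGB_BASE + index
    assert SRGB_BASE <= label <= SRGB_END, "index outside the SRGB"
    return label

# Prefix SID indexes configured on the loopbacks in this example.
indexes = {"DeviceA": 10, "DeviceB": 20, "DeviceC": 30,
           "DeviceD": 40, "DeviceE": 50, "DeviceF": 60}
labels = {name: prefix_sid_label(i) for name, i in indexes.items()}
print(labels["DeviceB"], labels["DeviceE"], labels["DeviceF"])  # 16020 16050 16060
```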
# Configure Device B.
[~DeviceB] isis 1
[*DeviceB-isis-1] avoid-microloop frr-protected
[*DeviceB-isis-1] avoid-microloop frr-protected rib-update-delay 5000
[*DeviceB-isis-1] avoid-microloop segment-routing
[*DeviceB-isis-1] avoid-microloop segment-routing rib-update-delay 5000
[*DeviceB-isis-1] frr
[*DeviceB-isis-1-frr] loop-free-alternate level-1
[*DeviceB-isis-1-frr] ti-lfa level-1
[*DeviceB-isis-1-frr] quit
[*DeviceB-isis-1] quit
[*DeviceB] commit
After completing the configurations, run the display isis route [ level-1 | level-2 ]
[ process-id ] [ verbose ] command on Device B. The command output shows IS-
IS TI-LFA FRR backup entries.
[~DeviceB] display isis route level-1 verbose
Route information for ISIS(1)
-----------------------------
# Run the tracert command on Device A again to check the connectivity of the SR-MPLS TE tunnel. For example:
[~DeviceA] tracert lsp segment-routing te Tunnel 1
LSP Trace Route FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to
break.
TTL Replier Time Type Downstream
0 Ingress 10.1.1.2/[16050 16060 ]
1 10.1.1.2 3 ms Transit 10.2.1.2/[16050 ]
2 10.2.1.2 4 ms Transit 10.3.1.2/[16050 ]
3 10.3.1.2 4 ms Transit 10.4.1.2/[3 ]
4 10.4.1.2 3 ms Transit 10.6.1.2/[3 ]
5 6.6.6.9 5 ms Egress
The preceding command output shows that the SR-MPLS TE tunnel has been
switched to the TI-LFA FRR backup path.
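The shrinking label stack in the tracert output above follows the standard SR-MPLS forwarding model: each segment endpoint pops its own label, and label 3 is the implicit-null label signaled near the end of a segment. A simplified sketch (not device code):

```python
# Simplified model of how the label stack shown in the tracert output is
# consumed along the path: the top label steers the packet to a segment
# endpoint, which pops it and continues on the next label. Label 3 is the
# implicit-null label used by the penultimate hop of a segment.
IMPLICIT_NULL = 3

def pop_at_segment_endpoint(stack):
    """Pop the top label once the packet reaches that segment's endpoint."""
    return stack[1:]

stack = [16050, 16060]              # pushed by the ingress (Device A)
stack = pop_at_segment_endpoint(stack)
assert stack == [16060]             # only the last segment's label remains
stack = pop_at_segment_endpoint(stack)
assert stack == []                  # the egress receives an unlabeled packet
```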
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
#
explicit-path p1
next sid label 16020 type prefix
next sid label 16050 type prefix
next sid label 16060 type prefix
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
segment-routing global-block 16000 23999
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 6.6.6.9
mpls te signal-protocol segment-routing
mpls te path explicit-path p1
#
return
● Device C configuration file
#
sysname DeviceC
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
isis enable 1
isis cost 100
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
isis prefix-sid index 30
#
return
● Device D configuration file
#
sysname DeviceD
#
mpls lsr-id 4.4.4.9
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
segment-routing global-block 16000 23999
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
isis enable 1
isis cost 100
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
isis prefix-sid index 40
#
return
● Device E configuration file
#
sysname DeviceE
#
mpls lsr-id 5.5.5.9
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0005.00
segment-routing mpls
Networking Requirements
On the network shown in Figure 1-130, PE1 and ASBR1 are in AS 100, PE2 and
ASBR2 are in AS 200, and ASBR1 and ASBR2 are directly connected using two
physical links. PE1 needs to establish a bidirectional E2E tunnel to PE2. The
Segment Routing (SR) protocol is used to establish the tunnel and forward data. In the
direction from PE1 to PE2, PE1 is the ingress, and PE2 is the egress. In the direction
from PE2 to PE1, PE2 is the ingress, and PE1 is the egress.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure intra-AS SR-MPLS TE tunnels in AS 100 and AS 200. Set binding
SIDs for the SR-MPLS TE tunnels.
2. Configure an EBGP peer relationship between ASBR1 and ASBR2, enable BGP
EPE and BGP-LS, and enable the devices to generate BGP peer SIDs.
3. Create an E2E SR-MPLS TE tunnel interface on PE1 and PE2. Specify the IP
address, tunneling protocol, and destination address of each tunnel. Explicit
paths are used for path calculation.
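Conceptually, the roadmap stitches the E2E LSP out of three labels: the binding SID of the intra-AS tunnel in AS 100, the BGP EPE peer SID for the inter-AS link, and the binding SID of the intra-AS tunnel in AS 200. A minimal sketch using the SID values configured later in this example (the function name is illustrative):

```python
# Sketch (not device code) of the label stack the ingress pushes for the
# E2E SR-MPLS TE LSP: intra-AS binding SID, inter-AS BGP EPE SID, then the
# remote intra-AS binding SID.
def build_e2e_stack(bsid_as100: int, epe_peer_sid: int, bsid_as200: int) -> list:
    """Return the ordered label stack for the PE1-to-PE2 E2E path."""
    return [bsid_as100, epe_peer_sid, bsid_as200]

# Values used on PE1 in this example: binding SIDs 1000 and 2000, and the
# BGP peer SID 32770 generated through BGP EPE.
assert build_e2e_stack(1000, 32770, 2000) == [1000, 32770, 2000]
```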
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces as shown in Figure 1-130
● IS-IS process ID (1), IS-IS level (Level-2), and IS-IS system IDs
(10.0000.0000.0001.00, 10.0000.0000.0002.00, 10.0000.0000.0003.00, and
10.0000.0000.0004.00)
● AS number (100) of PE1 and ASBR1 and that (200) of PE2 and ASBR2
● SR-MPLS TE tunnel interface names in AS 100 (tunnel 1) and AS 200 (tunnel
2); tunnel interface name of the PE1-to-PE2 E2E SR-MPLS TE tunnel (tunnel
3) and that of the PE2-to-PE1 E2E SR-MPLS TE tunnel (tunnel 3)
Procedure
Step 1 Assign an IP address and a mask to each interface.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.1 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 172.16.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure ASBR1.
<HUAWEI> system-view
[~HUAWEI] sysname ASBR1
[*HUAWEI] commit
[~ASBR1] interface loopback 1
[*ASBR1-LoopBack1] ip address 2.2.2.2 32
[*ASBR1-LoopBack1] quit
[*ASBR1] interface gigabitethernet1/0/0
[*ASBR1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
[*ASBR1-GigabitEthernet1/0/0] quit
[*ASBR1] interface gigabitethernet2/0/0
[*ASBR1-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*ASBR1-GigabitEthernet2/0/0] quit
[*ASBR1] interface gigabitethernet3/0/0
[*ASBR1-GigabitEthernet3/0/0] ip address 10.0.1.2 24
[*ASBR1-GigabitEthernet3/0/0] quit
[*ASBR1] commit
# Configure ASBR2.
<HUAWEI> system-view
[~HUAWEI] sysname ASBR2
[*HUAWEI] commit
[~ASBR2] interface loopback 1
[*ASBR2-LoopBack1] ip address 3.3.3.3 32
[*ASBR2-LoopBack1] quit
[*ASBR2] interface gigabitethernet1/0/0
[*ASBR2-GigabitEthernet1/0/0] ip address 10.1.1.2 24
[*ASBR2-GigabitEthernet1/0/0] quit
[*ASBR2] interface gigabitethernet2/0/0
[*ASBR2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
[*ASBR2-GigabitEthernet2/0/0] quit
[*ASBR2] interface gigabitethernet3/0/0
[*ASBR2-GigabitEthernet3/0/0] ip address 10.9.1.1 24
[*ASBR2-GigabitEthernet3/0/0] quit
[*ASBR2] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 4.4.4.4 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 172.17.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
[*PE1] segment-routing
[*PE1-segment-routing] quit
[*PE1] isis 1
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] traffic-eng level-2
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] quit
[*PE1] commit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] isis prefix-sid absolute 16100
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls te
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
[~PE1] explicit-path path2asbr1
[*PE1-explicit-path-path2asbr1] next sid label 330102 type adjacency
[*PE1-explicit-path-path2asbr1] quit
[*PE1] interface tunnel1
[*PE1-Tunnel1] ip address unnumbered interface loopback 1
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 2.2.2.2
[*PE1-Tunnel1] mpls te tunnel-id 1
[*PE1-Tunnel1] mpls te signal-protocol segment-routing
[*PE1-Tunnel1] mpls te path explicit-path path2asbr1
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
In the preceding step, a PE1-to-ASBR1 adjacency label is used in the next sid label command and is dynamically generated using IS-IS. To obtain the label value, run the display segment-routing adjacency mpls forwarding command. For example:
[~PE1] display segment-routing adjacency mpls forwarding
Segment Routing Adjacency MPLS Forwarding Information
Total information(s): 1
# Configure ASBR1.
[~ASBR1] mpls lsr-id 2.2.2.2
[~ASBR1] mpls
[*ASBR1-mpls] mpls te
[*ASBR1-mpls] quit
[*ASBR1] segment-routing
[*ASBR1-segment-routing] quit
[*ASBR1] isis 1
[*ASBR1-isis-1] is-level level-2
[*ASBR1-isis-1] network-entity 10.0000.0000.0002.00
[*ASBR1-isis-1] cost-style wide
[*ASBR1-isis-1] traffic-eng level-2
[*ASBR1-isis-1] segment-routing mpls
[*ASBR1-isis-1] quit
[*ASBR1] commit
[*ASBR1] interface loopback 1
[*ASBR1-LoopBack1] isis enable 1
[*ASBR1-LoopBack1] isis prefix-sid absolute 16200
[*ASBR1-LoopBack1] quit
[*ASBR1] interface gigabitethernet3/0/0
[*ASBR1-GigabitEthernet3/0/0] isis enable 1
[*ASBR1-GigabitEthernet3/0/0] mpls
[*ASBR1-GigabitEthernet3/0/0] mpls te
[*ASBR1-GigabitEthernet3/0/0] quit
[*ASBR1] commit
[~ASBR1] explicit-path path2pe1
[*ASBR1-explicit-path-path2pe1] next sid label 330201 type adjacency
[*ASBR1-explicit-path-path2pe1] quit
[*ASBR1] interface tunnel1
[*ASBR1-Tunnel1] ip address unnumbered interface loopback 1
[*ASBR1-Tunnel1] tunnel-protocol mpls te
[*ASBR1-Tunnel1] destination 1.1.1.1
[*ASBR1-Tunnel1] mpls te tunnel-id 1
[*ASBR1-Tunnel1] mpls te signal-protocol segment-routing
[*ASBR1-Tunnel1] mpls te path explicit-path path2pe1
[*ASBR1-Tunnel1] commit
[~ASBR1-Tunnel1] quit
In the preceding step, an ASBR1-to-PE1 adjacency label is used in the next sid label command and is dynamically generated using IS-IS. To obtain the label value, run the display segment-routing adjacency mpls forwarding command. For example:
[~ASBR1] display segment-routing adjacency mpls forwarding
Segment Routing Adjacency MPLS Forwarding Information
Total information(s): 1
[*ASBR2-Tunnel2] commit
[~ASBR2-Tunnel2] quit
In the preceding step, an ASBR2-to-PE2 adjacency label is used in the next sid label command and is dynamically generated using IS-IS. To obtain the label value, run the display segment-routing adjacency mpls forwarding command. For example:
[~ASBR2] display segment-routing adjacency mpls forwarding
Segment Routing Adjacency MPLS Forwarding Information
Total information(s): 1
# Configure PE2.
[~PE2] mpls lsr-id 4.4.4.4
[~PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] quit
[*PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] network-entity 10.0000.0000.0004.00
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] traffic-eng level-2
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] quit
[*PE2] commit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] isis prefix-sid absolute 16400
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] mpls
[*PE2-GigabitEthernet1/0/0] mpls te
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
[~PE2] explicit-path path2asbr2
[*PE2-explicit-path-path2asbr2] next sid label 330403 type adjacency
[*PE2-explicit-path-path2asbr2] quit
[*PE2] interface tunnel2
[*PE2-Tunnel2] ip address unnumbered interface loopback 1
[*PE2-Tunnel2] tunnel-protocol mpls te
[*PE2-Tunnel2] destination 3.3.3.3
[*PE2-Tunnel2] mpls te tunnel-id 1
[*PE2-Tunnel2] mpls te signal-protocol segment-routing
[*PE2-Tunnel2] mpls te path explicit-path path2asbr2
[*PE2-Tunnel2] commit
[~PE2-Tunnel2] quit
In the preceding step, a PE2-to-ASBR2 adjacency label is used in the next sid label command and is dynamically generated using IS-IS. To obtain the label value, run the display segment-routing adjacency mpls forwarding command. For example:
[~PE2] display segment-routing adjacency mpls forwarding
Segment Routing Adjacency MPLS Forwarding Information
Total information(s): 1
Step 4 Establish an EBGP peer relationship between ASBRs and enable BGP EPE and BGP-
LS.
In this example, a loopback interface is used to establish a multi-hop EBGP peer
relationship. Before the configuration, ensure that the loopback interfaces of
ASBR1 and ASBR2 are routable to each other.
BGP EPE supports only EBGP peer relationships. Even multi-hop EBGP peers must be directly connected through physical links: if intermediate nodes exist, no BGP peer SID is set on them, which causes forwarding failures.
# Configure ASBR1.
[~ASBR1] ip route-static 3.3.3.3 32 gigabitethernet1/0/0 10.1.1.2 description asbr1toasbr2
[*ASBR1] ip route-static 3.3.3.3 32 gigabitethernet2/0/0 10.2.1.2 description asbr1toasbr2
[~ASBR1] bgp 100
[~ASBR1-bgp] peer 3.3.3.3 as-number 200
[*ASBR1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*ASBR1-bgp] peer 3.3.3.3 ebgp-max-hop 2
[*ASBR1-bgp] peer 3.3.3.3 egress-engineering
[*ASBR1-bgp] link-state-family unicast
[*ASBR1-bgp-af-ls] quit
[*ASBR1-bgp] ipv4-family unicast
[*ASBR1-bgp-af-ipv4] network 2.2.2.2 32
[*ASBR1-bgp-af-ipv4] network 10.1.1.0 24
[*ASBR1-bgp-af-ipv4] network 10.2.1.0 24
[*ASBR1-bgp-af-ipv4] import-route isis 1
[*ASBR1-bgp-af-ipv4] commit
[~ASBR1-bgp-af-ipv4] quit
[~ASBR1-bgp] quit
# Configure ASBR2.
[~ASBR2] ip route-static 2.2.2.2 32 gigabitethernet1/0/0 10.1.1.1 description asbr2toasbr1
[~ASBR2] ip route-static 2.2.2.2 32 gigabitethernet2/0/0 10.2.1.1 description asbr2toasbr1
[~ASBR2] bgp 200
[~ASBR2-bgp] peer 2.2.2.2 as-number 100
[*ASBR2-bgp] peer 2.2.2.2 connect-interface loopback 1
[*ASBR2-bgp] peer 2.2.2.2 ebgp-max-hop 2
[*ASBR2-bgp] peer 2.2.2.2 egress-engineering
[*ASBR2-bgp] link-state-family unicast
[*ASBR2-bgp-af-ls] quit
[*ASBR2-bgp] ipv4-family unicast
[*ASBR2-bgp-af-ipv4] network 3.3.3.3 32
[*ASBR2-bgp-af-ipv4] network 10.1.1.0 24
[*ASBR2-bgp-af-ipv4] network 10.2.1.0 24
[*ASBR2-bgp-af-ipv4] import-route isis 1
[*ASBR2-bgp-af-ipv4] commit
[~ASBR2-bgp-af-ipv4] quit
[~ASBR2-bgp] quit
Step 5 Set binding SIDs for the SR-MPLS TE tunnels within AS domains.
# Configure PE1.
[~PE1] interface tunnel1
[*PE1-Tunnel1] mpls te binding-sid label 1000
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
# Configure ASBR2.
[~ASBR2] interface tunnel2
[*ASBR2-Tunnel2] mpls te binding-sid label 2000
[*ASBR2-Tunnel2] commit
[~ASBR2-Tunnel2] quit
# Configure ASBR1.
[~ASBR1] interface tunnel1
[*ASBR1-Tunnel1] mpls te binding-sid label 4000
[*ASBR1-Tunnel1] commit
[~ASBR1-Tunnel1] quit
Step 6 Configure a bidirectional E2E SR-MPLS TE tunnel between PE1 and PE2.
In the direction from PE1 to PE2:
# Configure PE1. There are multiple links between ASBRs. You can use one of
them. In this example, the link of ASBR1 (GE 1/0/0) -> ASBR2 (GE 1/0/0) is used.
[~PE1] explicit-path path2pe2
[*PE1-explicit-path-path2pe2] next sid label 1000 type binding-sid
[*PE1-explicit-path-path2pe2] next sid label 32770 type adjacency
[*PE1-explicit-path-path2pe2] next sid label 2000 type binding-sid
[*PE1-explicit-path-path2pe2] quit
[*PE1] interface tunnel3
[*PE1-Tunnel3] ip address unnumbered interface loopback 1
[*PE1-Tunnel3] tunnel-protocol mpls te
[*PE1-Tunnel3] destination 4.4.4.4
[*PE1-Tunnel3] mpls te tunnel-id 100
[*PE1-Tunnel3] mpls te signal-protocol segment-routing
[*PE1-Tunnel3] mpls te path explicit-path path2pe2
[*PE1-Tunnel3] commit
[~PE1-Tunnel3] quit
[*PE2-Tunnel3] commit
[~PE2-Tunnel3] quit
ExcludeAny : 0x0
Affinity Prop/Mask : 0x0/0x0 Resv Style : SE
Configured Bandwidth Information:
CT0 Bandwidth(Kbit/sec): 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0
Actual Bandwidth Information:
CT0 Bandwidth(Kbit/sec): 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec): 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec): 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec): 0 CT7 Bandwidth(Kbit/sec): 0
Explicit Path Name : path2pe2 Hop Limit: -
Record Route : Disabled Record Label : Disabled
Route Pinning : Disabled
FRR Flag : Disabled
IdleTime Remain :-
BFD Status : DOWN
Soft Preemption : Disabled
Reroute Flag : Enabled
Pce Flag : Normal
Path Setup Type : CSPF
Create Modify LSP Reason: -
# Check the status on PE2.
[~PE2] display mpls te tunnel-interface tunnel3
Tunnel Name : tunnel3
Signalled Tunnel Name: -
Tunnel State Desc : CR-LSP is Up
Tunnel Attributes :
Active LSP : Primary LSP
Traffic Switch :-
Session ID : 400
Ingress LSR ID : 4.4.4.4 Egress LSR ID: 1.1.1.1
Admin State : UP Oper State : UP
Signaling Protocol : Segment-Routing
FTid : 65
Tie-Breaking Policy : None Metric Type : None
Bfd Cap : None
Reopt : Disabled Reopt Freq : -
Auto BW : Disabled Threshold : -
Current Collected BW: - Auto BW Freq : -
Min BW :- Max BW :-
Offload : Disabled Offload Freq : -
Low Value :- High Value : -
Readjust Value :-
Offload Explicit Path Name: -
Tunnel Group : Primary
Interfaces Protected: -
Excluded IP Address : -
Referred LSP Count : 0
Primary Tunnel :- Pri Tunn Sum : -
Backup Tunnel :-
Group Status : Up Oam Status : None
IPTN InLabel :- Tunnel BFD Status : -
BackUp LSP Type : None BestEffort : Disabled
Secondary HopLimit : -
BestEffort HopLimit : -
Secondary Explicit Path Name: -
Secondary Affinity Prop/Mask: 0x0/0x0
BestEffort Affinity Prop/Mask: 0x0/0x0
IsConfigLspConstraint: -
Hot-Standby Revertive Mode: Revertive
Hot-Standby Overlap-path: Disabled
Hot-Standby Switch State: CLEAR
Bit Error Detection: Disabled
Bit Error Detection Switch Threshold: -
Bit Error Detection Resume Threshold: -
Ip-Prefix Name : -
P2p-Template Name : -
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
network-entity 10.0000.0000.0001.00
cost-style wide
traffic-eng level-2
segment-routing mpls
#
interface loopback 1
ip address 1.1.1.1 255.255.255.255
isis enable 1
isis prefix-sid absolute 16100
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.1.1 255.255.255.0
isis enable 1
mpls
mpls te
#
explicit-path path2asbr1
next sid label 330102 type adjacency
#
explicit-path path2pe2
next sid label 1000 type binding-sid
next sid label 32770 type adjacency
next sid label 2000 type binding-sid
#
interface tunnel1
ip address unnumbered interface loopback 1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te tunnel-id 1
mpls te signal-protocol segment-routing
mpls te path explicit-path path2asbr1
mpls te binding-sid label 1000
#
interface tunnel3
ip address unnumbered interface loopback 1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te tunnel-id 100
mpls te signal-protocol segment-routing
mpls te path explicit-path path2pe2
#
return
#
● ASBR1 configuration file
#
sysname ASBR1
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
network-entity 10.0000.0000.0002.00
cost-style wide
traffic-eng level-2
segment-routing mpls
#
interface loopback 1
ip address 2.2.2.2 255.255.255.255
isis enable 1
isis prefix-sid absolute 16200
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.0.1.2 255.255.255.0
isis enable 1
mpls
mpls te
#
ip route-static 3.3.3.3 32 gigabitethernet1/0/0 10.1.1.2 description asbr1toasbr2
ip route-static 3.3.3.3 32 gigabitethernet2/0/0 10.2.1.2 description asbr1toasbr2
#
bgp 100
peer 3.3.3.3 as-number 200
peer 3.3.3.3 connect-interface loopback 1
peer 3.3.3.3 ebgp-max-hop 2
peer 3.3.3.3 egress-engineering
ipv4-family unicast
network 2.2.2.2 32
network 10.1.1.0 24
network 10.2.1.0 24
import-route isis 1
link-state-family unicast
#
explicit-path path2pe1
next sid label 330201 type adjacency
#
interface tunnel1
ip address unnumbered interface loopback 1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 1
mpls te signal-protocol segment-routing
mpls te path explicit-path path2pe1
mpls te binding-sid label 4000
#
return
#
● ASBR2 configuration file
#
sysname ASBR2
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
network-entity 10.0000.0000.0003.00
cost-style wide
traffic-eng level-2
segment-routing mpls
#
interface loopback 1
ip address 3.3.3.3 255.255.255.255
isis enable 1
isis prefix-sid absolute 16300
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.9.1.1 255.255.255.0
isis enable 1
mpls
mpls te
#
ip route-static 2.2.2.2 32 gigabitethernet1/0/0 10.1.1.1 description asbr2toasbr1
ip route-static 2.2.2.2 32 gigabitethernet2/0/0 10.2.1.1 description asbr2toasbr1
#
bgp 200
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface loopback 1
peer 2.2.2.2 ebgp-max-hop 2
peer 2.2.2.2 egress-engineering
ipv4-family unicast
network 3.3.3.3 32
network 10.1.1.0 24
network 10.2.1.0 24
import-route isis 1
link-state-family unicast
#
explicit-path path2pe2
next sid label 330304 type adjacency
#
interface tunnel2
ip address unnumbered interface loopback 1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te tunnel-id 1
mpls te signal-protocol segment-routing
mpls te path explicit-path path2pe2
mpls te binding-sid label 2000
#
return
#
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
network-entity 10.0000.0000.0004.00
cost-style wide
traffic-eng level-2
segment-routing mpls
#
interface loopback 1
ip address 4.4.4.4 255.255.255.255
isis enable 1
isis prefix-sid absolute 16400
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.9.1.2 255.255.255.0
isis enable 1
mpls
mpls te
#
explicit-path path2asbr2
next sid label 330403 type adjacency
#
explicit-path path2pe1
Networking Requirements
On the network shown in Figure 1-131, CE1 and CE2 belong to the same L3VPN.
They access the public network through PE1 and PE2, respectively. Various types of
services are transmitted between CE1 and CE2. A heavy volume of common
services degrades the transmission of important services. To prevent this
problem, configure the CBTS function, which allows traffic of a specific
service class to be transmitted along a specified tunnel.
In this example, tunnel 1 and tunnel 2 on PE1 transmit important services, and
tunnel 3 transmits other services.
NOTICE
If the CBTS function is configured, you are advised not to configure any of the
following functions:
● Mixed load balancing
● Seamless MPLS
● Dynamic load balancing
Precautions
None
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address and its mask to each interface and configure a loopback
interface address as an LSR ID on each node.
2. Enable IS-IS globally, configure network entity names, specify the cost type,
and enable IS-IS TE. Enable IS-IS on interfaces, including loopback interfaces.
3. Set MPLS LSR IDs and enable MPLS and MPLS TE globally.
4. Enable MPLS and MPLS TE on each interface.
5. Configure the maximum reservable link bandwidth and BC for the outbound
interface of each involved tunnel.
6. Create a tunnel interface on the ingress and configure the IP address, tunnel
protocol, destination IP address, and tunnel bandwidth.
7. Configure multi-field classification on PE1.
8. Configure a VPN instance and apply a tunnel policy on PE1.
Data Preparation
To complete the configuration, you need the following data:
● IS-IS area ID, originating system ID, and IS-IS level on each node
● Maximum reservable link bandwidth of each tunnel
● Tunnel interface number, IP address, destination IP address, tunnel ID, and
tunnel bandwidth
● Traffic classifier name, traffic behavior name, and traffic policy name
Procedure
Step 1 Assign an IP address and its mask to each interface.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.1.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 3.3.3.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 10.2.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 10.3.1.1 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 4.4.4.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.3.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] network-entity 00.0005.0000.0000.0002.00
[*P1-isis-1] is-level level-2
[*P1-isis-1] quit
[*P1] interface gigabitethernet 1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] commit
[~P1-LoopBack1] quit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] network-entity 00.0005.0000.0000.0003.00
[*P2-isis-1] is-level level-2
[*P2-isis-1] quit
[*P2] interface gigabitethernet 1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet 2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] commit
[~P2-LoopBack1] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] network-entity 00.0005.0000.0000.0004.00
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp peer or display bgp
vpnv4 all peer command on the PEs to check whether a BGP peer relationship
has been established between the PEs. If the Established state is displayed in the
command output, the BGP peer relationship has been established successfully. The
following example uses the command output on PE1.
[~PE1] display bgp peer
BGP local router ID : 1.1.1.9
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
4.4.4.9 4 100 2 6 0 00:11:25 Established 0
[~PE1] display bgp vpnv4 all peer
BGP local router ID : 1.1.1.9
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
4.4.4.9 4 100 19 21 0 00:19:43 Established 0
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] quit
[*P1] interface gigabitethernet 1/0/0
[*P1-GigabitEthernet1/0/0] mpls
[*P1-GigabitEthernet1/0/0] mpls te
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 2/0/0
[*P1-GigabitEthernet2/0/0] mpls
[*P1-GigabitEthernet2/0/0] mpls te
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
# Configure P2.
[~P2] mpls lsr-id 3.3.3.9
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] quit
[*P2] interface gigabitethernet 1/0/0
[*P2-GigabitEthernet1/0/0] mpls
[*P2-GigabitEthernet1/0/0] mpls te
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet 2/0/0
[*P2-GigabitEthernet2/0/0] mpls
[*P2-GigabitEthernet2/0/0] mpls te
[*P2-GigabitEthernet2/0/0] commit
[~P2-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 4.4.4.9
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] mpls
[*PE2-GigabitEthernet1/0/0] mpls te
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
# Configure P1.
[~P1] interface gigabitethernet 2/0/0
[~P1-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*P1-GigabitEthernet2/0/0] mpls te bandwidth bc0 100000
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
# Configure P2.
[~P2] interface gigabitethernet 1/0/0
[~P2-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 100000
[*P2-GigabitEthernet1/0/0] mpls te bandwidth bc0 100000
[*P2-GigabitEthernet1/0/0] commit
[~P2-GigabitEthernet1/0/0] quit
# Configure multi-field classification and set a service class for each type of
service packet on PE1.
[~PE1] acl 2001
[*PE1-acl4-basic-2001] rule 10 permit source 10.40.0.0 0.0.255.255
[*PE1-acl4-basic-2001] quit
[*PE1] acl 2002
[*PE1-acl4-basic-2002] rule 20 permit source 10.50.0.0 0.0.255.255
[*PE1-acl4-basic-2002] quit
[*PE1] traffic classifier service1
[*PE1-classifier-service1] if-match acl 2001
[*PE1-classifier-service1] commit
[~PE1-classifier-service1] quit
[~PE1] traffic behavior behavior1
[*PE1-behavior-behavior1] service-class af1 color green
[*PE1-behavior-behavior1] commit
[~PE1-behavior-behavior1] quit
[*PE1] traffic classifier service2
[*PE1-classifier-service2] if-match acl 2002
[*PE1-classifier-service2] commit
[~PE1-classifier-service2] quit
[~PE1] traffic behavior behavior2
[*PE1-behavior-behavior2] service-class af2 color green
[*PE1-behavior-behavior2] commit
[~PE1-behavior-behavior2] quit
[~PE1] traffic policy policy1
[*PE1-trafficpolicy-policy1] classifier service1 behavior behavior1
[*PE1-trafficpolicy-policy1] classifier service2 behavior behavior2
[*PE1-trafficpolicy-policy1] commit
[~PE1-trafficpolicy-policy1] quit
[~PE1] interface gigabitethernet 2/0/0
[~PE1-GigabitEthernet2/0/0] traffic-policy policy1 inbound
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] quit
[*P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] traffic-eng level-2
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] segment-routing global-block 16000 19000
[*P1-isis-1] commit
[~P1-isis-1] quit
[~P1] interface LoopBack 1
[~P1-LoopBack1] isis prefix-sid index 20
[*P1-LoopBack1] commit
[~P1-LoopBack1] quit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] quit
[*P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] traffic-eng level-2
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] segment-routing global-block 16000 19000
[*P2-isis-1] commit
[~P2-isis-1] quit
[~P2] interface LoopBack 1
[~P2-LoopBack1] isis prefix-sid index 30
[*P2-LoopBack1] commit
[~P2-LoopBack1] quit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] traffic-eng level-2
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] segment-routing global-block 16000 19000
[*PE2-isis-1] commit
[~PE2-isis-1] quit
[~PE2] interface LoopBack 1
[~PE2-LoopBack1] isis prefix-sid index 40
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Display the node SID of each node. The following example uses the command
output on PE1.
[~PE1] display segment-routing prefix mpls forwarding
# On the ingress of each tunnel, create a tunnel interface and set the IP address,
tunnel protocol, destination IP address, tunnel ID, dynamic signaling protocol,
tunnel bandwidth, and service classes for packets transmitted on the tunnel.
Run the mpls te service-class { service-class & <1-8> | default } command to set a service
class for packets carried by each tunnel.
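The CBTS behavior that this note describes amounts to a mapping from the service class set by multi-field classification to the tunnel whose mpls te service-class value matches, with unmatched traffic falling back to the tunnel configured with default. A simplified model using this example's tunnel assignments (not device code):

```python
# Simplified model of CBTS tunnel selection in this example: packets marked
# af1 or af2 by multi-field classification take the matching tunnel; all
# other traffic falls back to the tunnel marked "default".
TUNNEL_BY_CLASS = {"af1": "Tunnel1", "af2": "Tunnel2"}
DEFAULT_TUNNEL = "Tunnel3"

def select_tunnel(service_class: str) -> str:
    """Pick the tunnel whose service-class setting matches the packet."""
    return TUNNEL_BY_CLASS.get(service_class, DEFAULT_TUNNEL)

assert select_tunnel("af1") == "Tunnel1"   # important service (ACL 2001)
assert select_tunnel("af2") == "Tunnel2"   # important service (ACL 2002)
assert select_tunnel("be") == "Tunnel3"    # everything else
```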
# Configure PE1.
[~PE1] interface tunnel1
[*PE1-Tunnel1] ip address unnumbered interface loopback 1
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 4.4.4.9
[*PE1-Tunnel1] mpls te tunnel-id 1
[*PE1-Tunnel1] mpls te bandwidth ct0 20000
[*PE1-Tunnel1] mpls te signal-protocol segment-routing
[*PE1-Tunnel1] mpls te path explicit-path pe1_pe2
[*PE1-Tunnel1] mpls te service-class af1
[*PE1-Tunnel1] commit
[~PE1-Tunnel1] quit
[~PE1] interface tunnel2
[*PE1-Tunnel2] ip address unnumbered interface loopback 1
[*PE1-Tunnel2] tunnel-protocol mpls te
[*PE1-Tunnel2] destination 4.4.4.9
[*PE1-Tunnel2] mpls te tunnel-id 2
[*PE1-Tunnel2] mpls te bandwidth ct0 20000
[*PE1-Tunnel2] mpls te signal-protocol segment-routing
[*PE1-Tunnel2] mpls te path explicit-path pe1_pe2
[*PE1-Tunnel2] mpls te service-class af2
[*PE1-Tunnel2] commit
[~PE1-Tunnel2] quit
[~PE1] interface tunnel3
[*PE1-Tunnel3] ip address unnumbered interface loopback 1
[*PE1-Tunnel3] tunnel-protocol mpls te
[*PE1-Tunnel3] destination 4.4.4.9
[*PE1-Tunnel3] mpls te tunnel-id 3
[*PE1-Tunnel3] mpls te bandwidth ct0 20000
[*PE1-Tunnel3] mpls te signal-protocol segment-routing
[*PE1-Tunnel3] mpls te path explicit-path pe1_pe2
[*PE1-Tunnel3] mpls te service-class default
[*PE1-Tunnel3] commit
[~PE1-Tunnel3] quit
[~PE1] tunnel-policy policy1
[*PE1-tunnel-policy-policy1] tunnel select-seq sr-te load-balance-number 3
[*PE1-tunnel-policy-policy1] commit
[~PE1-tunnel-policy-policy1] quit
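The tunnel policy above selects up to three SR-TE tunnels to the same destination (load-balance-number 3) and distributes flows across them; traffic matching a tunnel's service class is steered by CBTS first. The fallback per-flow hashing can be sketched as follows (a simplified illustration, not the device's actual hash algorithm; the flow key format and tunnel names are assumptions):

```python
import hashlib

# Sketch of per-flow load balancing across up to three SR-TE tunnels,
# as selected by "tunnel select-seq sr-te load-balance-number 3".
def pick_tunnel(flow_key: str, candidates: list, max_tunnels: int = 3) -> str:
    pool = candidates[:max_tunnels]                    # load-balance-number 3
    digest = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]                    # same flow always hashes to the same tunnel

tunnels = ["Tunnel1", "Tunnel2", "Tunnel3"]
print(pick_tunnel("10.10.1.2->10.11.1.2", tunnels))
```

Hashing per flow rather than per packet preserves packet order within each flow, which is why real implementations key the hash on header fields.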
# Configure PE2.
[~PE2] interface tunnel1
[*PE2-Tunnel1] ip address unnumbered interface loopback 1
[*PE2-Tunnel1] tunnel-protocol mpls te
[*PE2-Tunnel1] destination 1.1.1.9
[*PE2-Tunnel1] mpls te tunnel-id 1
[*PE2-Tunnel1] mpls te bandwidth ct0 20000
[*PE2-Tunnel1] mpls te signal-protocol segment-routing
[*PE2-Tunnel1] mpls te path explicit-path pe2_pe1
[*PE2-Tunnel1] commit
[~PE2-Tunnel1] quit
[~PE2] tunnel-policy policy1
[*PE2-tunnel-policy-policy1] tunnel select-seq sr-te load-balance-number 3
[*PE2-tunnel-policy-policy1] commit
[~PE2-tunnel-policy-policy1] quit
Step 9 Create a VPN instance on each PE, enable the IPv4 address family for the instance, and bind the instance to the PE interface connected to a CE. The configuration of PE1 is similar to the configuration of PE2. For configuration details, see Configuration Files in this section.
# Configure PE2.
[~PE2] ip vpn-instance vpn2
[*PE2-vpn-instance-vpn2] ipv4-family
[*PE2-vpn-instance-vpn2-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpn2-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpn2-af-ipv4] commit
[~PE2-vpn-instance-vpn2-af-ipv4] quit
[~PE2-vpn-instance-vpn2] quit
[~PE2] interface gigabitethernet 2/0/0
[~PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpn2
[*PE2-GigabitEthernet2/0/0] ip address 10.11.1.1 24
[*PE2-GigabitEthernet2/0/0] commit
Step 10 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[*CE1] interface gigabitethernet1/0/0
[*CE1-GigabitEthernet1/0/0] ip address 10.10.1.2 24
[*CE1-GigabitEthernet1/0/0] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.10.1.1 as-number 100
[*CE1-bgp] import-route direct
[*CE1-bgp] quit
[*CE1] commit
The configuration of CE2 is similar to the configuration of CE1. For configuration details,
see Configuration Files in this section.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] peer 10.10.1.2 as-number 65410
[*PE1-bgp-vpn1] commit
[*PE1-bgp-vpn1] quit
The configuration of PE2 is similar to the configuration of PE1. For configuration details,
see Configuration Files in this section.
After the configuration is complete, run the display bgp vpnv4 vpn-instance peer
command on the PEs to check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
Step 11 Verify the configuration.
# Run the ping command on PE1 to check the connectivity of each SR-MPLS TE
tunnel. For example:
--- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 13/17/20 ms
[~PE1] ping lsp segment-routing te Tunnel 2
LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel2 : 100 data bytes, press
CTRL_C to break
Reply from 4.4.4.9: bytes=100 Sequence=1 time=20 ms
Reply from 4.4.4.9: bytes=100 Sequence=2 time=18 ms
Reply from 4.4.4.9: bytes=100 Sequence=3 time=14 ms
Reply from 4.4.4.9: bytes=100 Sequence=4 time=20 ms
Reply from 4.4.4.9: bytes=100 Sequence=5 time=21 ms
--- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 14/18/21 ms
[~PE1] ping lsp segment-routing te Tunnel 3
LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel3 : 100 data bytes, press
CTRL_C to break
Reply from 4.4.4.9: bytes=100 Sequence=1 time=14 ms
Reply from 4.4.4.9: bytes=100 Sequence=2 time=16 ms
Reply from 4.4.4.9: bytes=100 Sequence=3 time=13 ms
Reply from 4.4.4.9: bytes=100 Sequence=4 time=12 ms
Reply from 4.4.4.9: bytes=100 Sequence=5 time=16 ms
--- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel3 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 12/14/16 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
tnl-policy policy1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
#
explicit-path pe1_pe2
next sid label 16020 type prefix
next sid label 16030 type prefix
next sid label 16040 type prefix
#
acl number 2001
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
traffic-eng level-2
network-entity 00.0005.0000.0000.0004.00
segment-routing mpls
segment-routing global-block 16000 19000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 100000
mpls te bandwidth bc0 100000
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn2
ip address 10.11.1.1 255.255.255.0
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
isis prefix-sid index 40
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te signal-protocol segment-routing
mpls te bandwidth ct0 20000
mpls te tunnel-id 1
mpls te path explicit-path pe2_pe1
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpn2
peer 10.11.1.2 as-number 65420
#
tunnel-policy policy1
tunnel select-seq sr-te load-balance-number 3
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.10.1.2 255.255.255.0
#
bgp 65410
peer 10.10.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 10.10.1.1 enable
#
return
1.2.40.11 Example for Configuring CBTS in an L3VPN over LDP over SR-MPLS
TE Scenario
This section provides an example for configuring class-of-service-based tunnel
selection (CBTS) in an L3VPN over LDP over SR-MPLS TE scenario.
Networking Requirements
On the network shown in Figure 1-132, CE1 and CE2 belong to the same L3VPN.
They access the public network through PE1 and PE2, respectively. Various types of
services are transmitted between CE1 and CE2. If a large volume of common
service traffic is transmitted, the forwarding of important services becomes less
efficient. To prevent this problem, configure the CBTS function, which allows
traffic of a specific service class to be transmitted along a specified tunnel.
This example requires tunnel 1 to transmit important services and tunnel 2 to
transmit other services.
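The selection behavior CBTS provides can be sketched as follows (a simplified model, not the device implementation; the bindings mirror the mpls te service-class settings used later in this example, where tunnel 1 carries af1/af2 and tunnel 2 carries default traffic):

```python
# Simplified CBTS model: each tunnel is bound to service classes via
# "mpls te service-class"; a packet follows the tunnel bound to its class
# and falls back to the tunnel bound to "default" otherwise.
SERVICE_CLASS_MAP = {
    "Tunnel1": {"af1", "af2"},   # important services
    "Tunnel2": {"default"},      # all other services
}

def select_tunnel(service_class: str) -> str:
    for tunnel, classes in SERVICE_CLASS_MAP.items():
        if service_class in classes:
            return tunnel
    # unmatched classes use the tunnel bound to "default"
    return next(t for t, c in SERVICE_CLASS_MAP.items() if "default" in c)

print(select_tunnel("af1"))  # Tunnel1
print(select_tunnel("ef"))   # Tunnel2
```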
Figure 1-132 CBTS networking in an L3VPN over LDP over SR-MPLS TE scenario
Precautions
The destination IP address of a tunnel must be equal to the LSR ID of the egress
of the tunnel.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address and its mask to each interface and configure a loopback
interface address as an LSR ID on each node. In addition, configure an IGP to
advertise routes.
2. Create an SR-MPLS TE tunnel in each TE-capable area, and specify a service
class for packets that can be transmitted by the tunnel.
3. Enable MPLS LDP in the TE-incapable area, and configure a remote LDP peer
for each edge node in the TE-capable area.
4. Configure the TE forwarding adjacency function.
5. Configure multi-field classification on nodes connected to the L3VPN as well
as behavior aggregate classification on LDP over SR-MPLS TE links.
Data Preparation
To complete the configuration, you need the following data:
● IS-IS process ID, level, network entity ID, and cost type
● Policy that is used for triggering LSP establishment
● Name and IP address of each remote LDP peer of P1 and P2
● Link bandwidth attributes of each tunnel
● Tunnel interface number, IP address, destination IP address, tunnel ID, tunnel
signaling protocol, tunnel bandwidth, TE metric value, and link cost on P1 and
P2
● Multi-field classifier name and traffic policy name
Procedure
Step 1 Assign an IP address and its mask to each interface.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.1 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.2 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.1.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.4 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 10.3.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 10.4.1.2 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
# Configure P3.
<HUAWEI> system-view
[~HUAWEI] sysname P3
[*HUAWEI] commit
[~P3] interface loopback 1
[*P3-LoopBack1] ip address 3.3.3.3 32
[*P3-LoopBack1] quit
[*P3] interface gigabitethernet1/0/0
[*P3-GigabitEthernet1/0/0] ip address 10.2.1.2 24
[*P3-GigabitEthernet1/0/0] quit
[*P3] interface gigabitethernet2/0/0
[*P3-GigabitEthernet2/0/0] ip address 10.3.1.1 24
[*P3-GigabitEthernet2/0/0] quit
[*P3] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 5.5.5.5 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.4.1.1 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
Step 2 Enable IS-IS to advertise the route of the network segment connected to each
interface and the host route destined for each LSR ID.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] commit
[~PE1-LoopBack1] quit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] is-level level-2
[*P1-isis-1] quit
[*P1] interface gigabitethernet 1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] commit
[~P1-LoopBack1] quit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] network-entity 10.0000.0000.0003.00
[*P2-isis-1] is-level level-2
[*P2-isis-1] quit
[*P2] interface gigabitethernet 1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet 2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] commit
[~P2-LoopBack1] quit
# Configure P3.
[~P3] isis 1
[*P3-isis-1] network-entity 10.0000.0000.0004.00
[*P3-isis-1] is-level level-2
[*P3-isis-1] quit
[*P3] interface gigabitethernet 1/0/0
[*P3-GigabitEthernet1/0/0] isis enable 1
[*P3-GigabitEthernet1/0/0] quit
[*P3] interface gigabitethernet 2/0/0
[*P3-GigabitEthernet2/0/0] isis enable 1
[*P3-GigabitEthernet2/0/0] quit
[*P3] interface loopback 1
[*P3-LoopBack1] isis enable 1
[*P3-LoopBack1] commit
[~P3-LoopBack1] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] network-entity 10.0000.0000.0005.00
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
Step 3 Establish an MP-IBGP peer relationship between PE1 and PE2.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 5.5.5.5 as-number 100
[*PE1-bgp] peer 5.5.5.5 connect-interface loopback 1
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 5.5.5.5 enable
[*PE1-bgp-af-vpnv4] commit
[~PE1-bgp-af-vpnv4] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.1 as-number 100
[*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1.1.1.1 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp peer or display bgp
vpnv4 all peer command on the PEs to check whether a BGP peer relationship
has been established between the PEs. If the Established state is displayed in the
command output, the BGP peer relationship has been established successfully. The
following example uses the command output on PE1.
[~PE1] display bgp peer
BGP local router ID : 1.1.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
5.5.5.5 4 100 2 6 0 00:00:12 Established 0
[~PE1] display bgp vpnv4 all peer
BGP local router ID : 1.1.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
5.5.5.5 4 100 12 18 0 00:09:38 Established 0
Step 4 Configure basic MPLS functions, and enable LDP between PE1 and P1 and
between PE2 and P2.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] lsp-trigger all
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls ldp
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.2
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] lsp-trigger all
[*P1-mpls] quit
[*P1] mpls ldp
[*P1-mpls-ldp] quit
[*P1] interface gigabitethernet 1/0/0
[*P1-GigabitEthernet1/0/0] mpls
[*P1-GigabitEthernet1/0/0] mpls ldp
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 2/0/0
[*P1-GigabitEthernet2/0/0] mpls
[*P1-GigabitEthernet2/0/0] mpls te
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
# Configure P3.
[~P3] mpls lsr-id 3.3.3.3
[*P3] mpls
[*P3-mpls] mpls te
[*P3-mpls] quit
[*P3] interface gigabitethernet 1/0/0
[*P3-GigabitEthernet1/0/0] mpls
[*P3-GigabitEthernet1/0/0] mpls te
[*P3-GigabitEthernet1/0/0] quit
[*P3] interface gigabitethernet 2/0/0
[*P3-GigabitEthernet2/0/0] mpls
[*P3-GigabitEthernet2/0/0] mpls te
[*P3-GigabitEthernet2/0/0] commit
[~P3-GigabitEthernet2/0/0] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.4
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] lsp-trigger all
[*P2-mpls] quit
[*P2] mpls ldp
[*P2-mpls-ldp] quit
[*P2] interface gigabitethernet 1/0/0
[*P2-GigabitEthernet1/0/0] mpls
[*P2-GigabitEthernet1/0/0] mpls te
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet 2/0/0
[*P2-GigabitEthernet2/0/0] mpls
[*P2-GigabitEthernet2/0/0] mpls ldp
[*P2-GigabitEthernet2/0/0] commit
[~P2-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 5.5.5.5
[*PE2] mpls
[*PE2-mpls] lsp-trigger all
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] mpls
[*PE2-GigabitEthernet1/0/0] mpls ldp
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
After you complete the preceding configurations, local LDP sessions are
successfully established between PE1 and P1 and between PE2 and P2.
# Run the display mpls ldp session command on PE1, P1, P2, or PE2 to view LDP
session information.
[~PE1] display mpls ldp session
LDP Session(s) in Public Network
Codes: LAM(Label Advertisement Mode), SsnAge Unit(DDDD:HH:MM)
A '*' before a session means the session is being deleted.
--------------------------------------------------------------------------
PeerID Status LAM SsnRole SsnAge KASent/Rcv
--------------------------------------------------------------------------
2.2.2.2:0 Operational DU Passive 0000:00:05 23/23
--------------------------------------------------------------------------
TOTAL: 1 Session(s) Found.
# Run the display mpls ldp peer command to view LDP peer information. The
following example uses the command output on PE1.
[~PE1] display mpls ldp peer
LDP Peer Information in Public network
A '*' before a peer means the peer is being deleted.
-------------------------------------------------------------------------
PeerID TransportAddress DiscoverySource
-------------------------------------------------------------------------
2.2.2.2:0 2.2.2.2 GigabitEthernet1/0/0
-------------------------------------------------------------------------
TOTAL: 1 Peer(s) Found.
# Run the display mpls lsp command to view LDP LSP information. The following
example uses the command output on PE1.
[~PE1] display mpls lsp
----------------------------------------------------------------------
LSP Information: LDP LSP
----------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.1/32 3/NULL GE1/0/0/-
2.2.2.2/32 NULL/3 -/GE1/0/0
2.2.2.2/32 1024/3 -/GE1/0/0
10.1.1.0/24 3/NULL GE1/0/0/-
10.2.1.0/24 NULL/3 -/GE1/0/0
10.2.1.0/24 1025/3 -/GE1/0/0
Step 5 Configure a remote LDP peer on each of P1 and P2.
# Configure P1.
[~P1] mpls ldp remote-peer lsrd
[*P1-mpls-ldp-remote-lsrd] remote-ip 4.4.4.4
[*P1-mpls-ldp-remote-lsrd] commit
[~P1-mpls-ldp-remote-lsrd] quit
# Configure P2.
[~P2] mpls ldp remote-peer lsrb
[*P2-mpls-ldp-remote-lsrb] remote-ip 2.2.2.2
[*P2-mpls-ldp-remote-lsrb] commit
[~P2-mpls-ldp-remote-lsrb] quit
After you complete the preceding configurations, a remote LDP session is set up
between P1 and P2. Run the display mpls ldp remote-peer command on P1 or
P2 to view information about the remote session entity. The following example
uses the command output on P1.
[~P1] display mpls ldp remote-peer lsrd
LDP Remote Entity Information
------------------------------------------------------------------------------
Remote Peer Name: P2
Remote Peer IP : 4.4.4.4 LDP ID : 2.2.2.2:0
Transport Address : 2.2.2.2 Entity Status : Active
Step 6 Configure bandwidth attributes for the outbound interface of each TE tunnel.
# Configure P1.
[~P1] interface gigabitethernet 2/0/0
[~P1-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*P1-GigabitEthernet2/0/0] mpls te bandwidth bc0 20000
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
# Configure P3.
[~P3] interface gigabitethernet 1/0/0
[~P3-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*P3-GigabitEthernet1/0/0] mpls te bandwidth bc0 20000
[*P3-GigabitEthernet1/0/0] quit
[*P3] interface gigabitethernet 2/0/0
[*P3-GigabitEthernet2/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*P3-GigabitEthernet2/0/0] mpls te bandwidth bc0 20000
[*P3-GigabitEthernet2/0/0] commit
[~P3-GigabitEthernet2/0/0] quit
# Configure P2.
[~P2] interface gigabitethernet 1/0/0
[~P2-GigabitEthernet1/0/0] mpls te bandwidth max-reservable-bandwidth 20000
[*P2-GigabitEthernet1/0/0] mpls te bandwidth bc0 20000
[*P2-GigabitEthernet1/0/0] commit
[~P2-GigabitEthernet1/0/0] quit
Step 7 Configure L3VPN access on PE1 and PE2 and multi-field classification on the
inbound interface of PE1.
# Configure PE1.
[~PE1] ip vpn-instance VPNA
[*PE1-vpn-instance-VPNA] ipv4-family
[*PE1-vpn-instance-VPNA-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-VPNA-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-VPNA-af-ipv4] commit
[~PE1-vpn-instance-VPNA-af-ipv4] quit
[~PE1-vpn-instance-VPNA] quit
# Configure PE2.
[~PE2] ip vpn-instance VPNB
[*PE2-vpn-instance-VPNB] ipv4-family
[*PE2-vpn-instance-VPNB-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-VPNB-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-VPNB-af-ipv4] quit
[*PE2-vpn-instance-VPNB] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance VPNB
[*PE2-GigabitEthernet2/0/0] ip address 10.11.1.1 24
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
Step 8 Configure behavior aggregate classification on interfaces connecting PE1 and P1.
# Configure PE1.
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] trust upstream default
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
# Configure P1.
[~P1] interface gigabitethernet 1/0/0
[~P1-GigabitEthernet1/0/0] trust upstream default
[*P1-GigabitEthernet1/0/0] commit
[~P1-GigabitEthernet1/0/0] quit
Step 9 Enable segment routing globally and configure basic SR-MPLS TE functions on P1, P2, and P3.
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] quit
[*P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] traffic-eng level-2
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] segment-routing global-block 16000 19000
[*P1-isis-1] commit
[~P1-isis-1] quit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] quit
[*P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] traffic-eng level-2
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] segment-routing global-block 16000 19000
[*P2-isis-1] commit
[~P2-isis-1] quit
# Configure P3.
[~P3] segment-routing
[*P3-segment-routing] quit
[*P3] isis 1
[*P3-isis-1] cost-style wide
[*P3-isis-1] traffic-eng level-2
[*P3-isis-1] segment-routing mpls
[*P3-isis-1] segment-routing global-block 16000 19000
[*P3-isis-1] commit
[~P3-isis-1] quit
# After the configuration is complete, run the display segment-routing
adjacency mpls forwarding command on P1, P2, and P3 to check adjacency label
information. For example:
[~P3] display segment-routing adjacency mpls forwarding
Total information(s): 2
[~P2] display segment-routing adjacency mpls forwarding
Total information(s): 2
# Configure an explicit path on P1.
[~P1] explicit-path p1_p2
[*P1-explicit-path-p1_p2] next sid label 48090 type adjacency
[*P1-explicit-path-p1_p2] next sid label 48091 type adjacency
[*P1-explicit-path-p1_p2] commit
[~P1-explicit-path-p1_p2] quit
Step 10 Configure TE tunnels from P1 to P2 and set a service class for each type of packet
that can be transmitted by the tunnels.
Run the mpls te service-class { service-class & <1-8> | default } command to set a service
class for packets carried by each tunnel.
# On P1, enable the IGP shortcut function on each tunnel interface and adjust the
metric value of the forwarding adjacency function to ensure that traffic destined
for P2 or PE2 passes through the corresponding tunnel.
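With IGP shortcut enabled, the ingress treats each tunnel as a local link to the tunnel destination with the configured absolute metric (1 here), so its own route computation prefers the tunnel while other routers are unaffected. A sketch of that metric comparison (the native path costs below are illustrative assumptions, not values from this example):

```python
# Illustrative route selection on P1 with IGP shortcut:
# the tunnel competes in SPF as a one-hop link of metric 1.
native_path_cost = {"P2": 20, "PE2": 30}   # assumed IGP costs via P3
tunnel_metric = 1                          # mpls te igp metric absolute 1
cost_beyond_tail = {"P2": 0, "PE2": 10}    # assumed cost from the tunnel tail (P2)

def prefers_tunnel(dest: str) -> bool:
    """True if the route to dest is cheaper through the TE tunnel."""
    return tunnel_metric + cost_beyond_tail[dest] < native_path_cost[dest]

print(prefers_tunnel("P2"))   # True: 1 < 20
print(prefers_tunnel("PE2"))  # True: 11 < 30
```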
[~P1] interface tunnel1
[*P1-Tunnel1] ip address unnumbered interface LoopBack1
[*P1-Tunnel1] tunnel-protocol mpls te
[*P1-Tunnel1] destination 4.4.4.4
[*P1-Tunnel1] mpls te tunnel-id 100
[*P1-Tunnel1] mpls te bandwidth ct0 10000
[*P1-Tunnel1] mpls te igp shortcut
[*P1-Tunnel1] mpls te igp metric absolute 1
[*P1-Tunnel1] isis enable 1
[*P1-Tunnel1] mpls te signal-protocol segment-routing
[*P1-Tunnel1] mpls te path explicit-path p1_p2
[*P1-Tunnel1] mpls te service-class af1 af2
[*P1-Tunnel1] quit
[*P1] interface tunnel2
[*P1-Tunnel2] ip address unnumbered interface LoopBack1
[*P1-Tunnel2] tunnel-protocol mpls te
[*P1-Tunnel2] destination 4.4.4.4
[*P1-Tunnel2] mpls te tunnel-id 200
[*P1-Tunnel2] mpls te bandwidth ct0 10000
[*P1-Tunnel2] mpls te igp shortcut
[*P1-Tunnel2] mpls te igp metric absolute 1
[*P1-Tunnel2] isis enable 1
[*P1-Tunnel2] mpls te signal-protocol segment-routing
[*P1-Tunnel2] mpls te path explicit-path p1_p2
[*P1-Tunnel2] mpls te service-class default
[*P1-Tunnel2] quit
[*P1] commit
Step 11 Configure a TE tunnel from P2 to P1.
# On P2, enable the IGP shortcut function on the tunnel interface and adjust the
metric value of the forwarding adjacency function.
[~P2] interface tunnel1
[*P2-Tunnel1] ip address unnumbered interface LoopBack1
[*P2-Tunnel1] tunnel-protocol mpls te
[*P2-Tunnel1] destination 2.2.2.2
[*P2-Tunnel1] mpls te tunnel-id 101
[*P2-Tunnel1] mpls te bandwidth ct0 10000
[*P2-Tunnel1] mpls te igp shortcut
[*P2-Tunnel1] mpls te igp metric absolute 1
[*P2-Tunnel1] isis enable 1
[*P2-Tunnel1] mpls te signal-protocol segment-routing
[*P2-Tunnel1] mpls te path explicit-path p2_p1
[*P2-Tunnel1] quit
[*P2] commit
Step 12 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[*CE1] interface gigabitethernet1/0/0
[*CE1-GigabitEthernet1/0/0] ip address 10.10.1.2 24
[*CE1-GigabitEthernet1/0/0] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.10.1.1 as-number 100
[*CE1-bgp] import-route direct
[*CE1-bgp] quit
[*CE1] commit
The configuration of CE2 is similar to the configuration of CE1. For configuration details,
see Configuration Files in this section.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance VPNA
[*PE1-bgp-VPNA] peer 10.10.1.2 as-number 65410
[*PE1-bgp-VPNA] commit
[*PE1-bgp-VPNA] quit
The configuration of PE2 is similar to the configuration of PE1. For configuration details,
see Configuration Files in this section.
After the configuration is complete, run the display bgp vpnv4 vpn-instance peer
command on the PEs to check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
Step 13 Verify the configuration.
# Run the ping command on P1 to check the connectivity of each SR-MPLS TE
tunnel. For example:
[~P1] ping lsp segment-routing te Tunnel 1
LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 : 100 data bytes, press
CTRL_C to break
Reply from 4.4.4.4: bytes=100 Sequence=1 time=20 ms
Reply from 4.4.4.4: bytes=100 Sequence=2 time=16 ms
Reply from 4.4.4.4: bytes=100 Sequence=3 time=16 ms
Reply from 4.4.4.4: bytes=100 Sequence=4 time=12 ms
Reply from 4.4.4.4: bytes=100 Sequence=5 time=12 ms
--- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel1 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 12/15/20 ms
[~P1] ping lsp segment-routing te Tunnel 2
LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel2 : 100 data bytes, press
CTRL_C to break
Reply from 4.4.4.4: bytes=100 Sequence=1 time=17 ms
Reply from 4.4.4.4: bytes=100 Sequence=2 time=17 ms
Reply from 4.4.4.4: bytes=100 Sequence=3 time=12 ms
Reply from 4.4.4.4: bytes=100 Sequence=4 time=12 ms
Reply from 4.4.4.4: bytes=100 Sequence=5 time=14 ms
--- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 12/14/17 ms
# Check information about LDP LSP establishment on each PE. The following
example uses the command output on PE1.
[~PE1] display mpls lsp
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
Flag after LDP FRR: (L) - Logic FRR LSP
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.1/32 3/NULL -/-
2.2.2.2/32 NULL/3 -/GE2/0/0
2.2.2.2/32 48151/3 -/GE2/0/0
3.3.3.3/32 NULL/48150 -/GE2/0/0
3.3.3.3/32 48152/48150 -/GE2/0/0
4.4.4.4/32 NULL/48153 -/GE2/0/0
4.4.4.4/32 48155/48153 -/GE2/0/0
5.5.5.5/32 NULL/48155 -/GE2/0/0
5.5.5.5/32 48157/48155 -/GE2/0/0
10.1.1.0/24 3/NULL -/-
10.2.1.0/24 NULL/3 -/GE2/0/0
10.2.1.0/24 48153/3 -/GE2/0/0
10.3.1.0/24 NULL/48151 -/GE2/0/0
10.3.1.0/24 48154/48151 -/GE2/0/0
10.4.1.0/24 NULL/48154 -/GE2/0/0
10.4.1.0/24 48156/48154 -/GE2/0/0
-------------------------------------------------------------------------------
LSP Information: BGP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
-/32 48090/NULL -/- VPNA
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance VPNA
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
lsp-trigger all
#
mpls ldp
#
ipv4-family
#
acl number 2001
rule 10 permit source 10.40.0.0 0.0.255.255
#
acl number 2002
rule 20 permit source 10.50.0.0 0.0.255.255
#
traffic classifier service1
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
lsp-trigger all
#
explicit-path p1_p2
next sid label 48090 type adjacency
next sid label 48091 type adjacency
#
mpls ldp
#
ipv4-family
#
mpls ldp remote-peer lsrd
remote-ip 4.4.4.4
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 19000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
trust upstream default
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 20000
mpls te bandwidth bc0 20000
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
mpls te igp shortcut
mpls te igp metric absolute 1
isis enable 1
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te bandwidth ct0 10000
mpls te path explicit-path p1_p2
mpls te service-class af1 af2
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
mpls te igp shortcut
mpls te igp metric absolute 1
isis enable 1
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te bandwidth ct0 10000
mpls te path explicit-path p1_p2
mpls te service-class default
#
return
● P3 configuration file
#
sysname P3
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0004.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 19000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 20000
mpls te bandwidth bc0 20000
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 20000
mpls te bandwidth bc0 20000
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
lsp-trigger all
#
explicit-path p2_p1
next sid label 48090 type adjacency
next sid label 48090 type adjacency
#
mpls ldp
#
ipv4-family
#
mpls ldp remote-peer lsrb
remote-ip 2.2.2.2
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0003.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 19000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls te bandwidth max-reservable-bandwidth 20000
mpls te bandwidth bc0 20000
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
mpls te igp shortcut
mpls te igp metric absolute 1
isis enable 1
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 101
mpls te bandwidth ct0 10000
mpls te path explicit-path p2_p1
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance VPNB
ipv4-family
route-distinguisher 200:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 5.5.5.5
#
mpls
lsp-trigger all
#
mpls ldp
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0005.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 19000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance VPNB
ip address 10.11.1.1 255.255.255.0
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
isis enable 1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpn-instance VPNB
peer 10.11.1.2 as-number 65420
#
return
Networking Requirements
In SR-MPLS TE scenarios, you can configure the class-of-service-based tunnel
selection (CBTS) function to allow traffic of a specific service class to be
transmitted along a specified tunnel. However, as the number of service classes
increases, CBTS can no longer meet service requirements. Similar to CBTS,
DSCP-based IP service recursion steers IPv4 or IPv6 packets into SR-MPLS TE
tunnels based on the DSCP values of the packets.
On the network shown in Figure 1-133:
● CE1 and CE2 belong to vpna. The VPN target used by vpna is 111:1.
● Two SR-MPLS TE tunnels (tunnel 1 and tunnel 2) are created between PE1
and PE2.
There are multiple types of services between CE1 and CE2. To ensure the
forwarding quality of important services, you can configure the services to recurse
to SR-MPLS TE tunnels based on the DSCP values of IP packets.
This example requires tunnel 1 to transmit services with DSCP values ranging from
0 to 10 and tunnel 2 to transmit other services by default.
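The recursion rule this example implements can be sketched as follows (a simplified illustration; the mapping mirrors the requirement that DSCP values 0 through 10 use tunnel 1 and all other values use tunnel 2):

```python
# Sketch of DSCP-based IP service recursion as required in this example.
def select_tunnel_by_dscp(dscp: int) -> str:
    """Map a packet's DSCP value to the SR-MPLS TE tunnel that carries it."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit field (0-63)")
    return "tunnel 1" if dscp <= 10 else "tunnel 2"

print(select_tunnel_by_dscp(10))  # tunnel 1
print(select_tunnel_by_dscp(46))  # tunnel 2 (e.g. EF-marked traffic)
```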
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
If a VPN instance is bound to a PE interface connected to a CE, Layer 3
configurations, such as IP address and routing protocol configurations, on this
interface are automatically deleted. Reconfigure them if needed.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IS-IS on the backbone network for the PEs to communicate.
2. Enable MPLS on the backbone network and configure SR and static adjacency
labels.
3. Configure SR-MPLS TE tunnels between the PEs.
4. Establish an MP-IBGP peer relationship between the PEs.
5. Configure a VPN instance on each PE, enable the IPv4 address family for the
instance, and bind the instance to the PE interface connecting the PE to a CE.
6. Establish an EBGP peer relationship between each pair of a CE and a PE.
7. Configure a tunnel policy on each PE.
8. On each PE, configure tunnel 1 to transmit services with DSCP values ranging
from 0 to 10 and tunnel 2 to transmit other services.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs on the PEs and Ps
● VPN target and RD of vpna
Procedure
Step 1 Configure interface IP addresses.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 2 Configure an IGP on the backbone network for the PEs and Ps to communicate.
The following example uses IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] commit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] isis enable 1
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] isis enable 1
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] commit
[~P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 3 Configure basic MPLS functions and MPLS TE for each device on the backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] mpls te
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] mpls te
[*P2-mpls] commit
[~P2-mpls] quit
Step 4 Configure SR and static adjacency labels for each device on the backbone network.
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] quit
[*P2] commit
Step 5 Configure SR-MPLS TE tunnels between the PEs.
# Configure PE2.
[*PE2] explicit-path path1
[*PE2-explicit-path-path1] next sid label 330000 type adjacency
[*PE2-explicit-path-path1] next sid label 330003 type adjacency
[*PE2-explicit-path-path1] quit
[*PE2] explicit-path path2
[*PE2-explicit-path-path2] next sid label 330001 type adjacency
[*PE2-explicit-path-path2] next sid label 330002 type adjacency
[*PE2-explicit-path-path2] quit
[*PE2] interface Tunnel1
[*PE2-Tunnel1] ip address unnumbered interface LoopBack1
[*PE2-Tunnel1] tunnel-protocol mpls te
[*PE2-Tunnel1] destination 1.1.1.9
[*PE2-Tunnel1] mpls te signal-protocol segment-routing
[*PE2-Tunnel1] mpls te tunnel-id 1
[*PE2-Tunnel1] mpls te path explicit-path path1
[*PE2-Tunnel1] quit
[*PE2] interface Tunnel2
[*PE2-Tunnel2] ip address unnumbered interface LoopBack1
[*PE2-Tunnel2] tunnel-protocol mpls te
[*PE2-Tunnel2] destination 1.1.1.9
[*PE2-Tunnel2] mpls te signal-protocol segment-routing
[*PE2-Tunnel2] mpls te tunnel-id 2
[*PE2-Tunnel2] mpls te path explicit-path path2
[*PE2-Tunnel2] quit
[*PE2] commit
Step 6 Establish an MP-IBGP peer relationship between the PEs.
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
After the configuration is complete, run the display bgp peer or display bgp
vpnv4 all peer command on the PEs and check whether a BGP peer relationship
has been established between the PEs. If the Established state is displayed in the
command output, the BGP peer relationship has been established successfully. The
following example uses the command output on PE1.
[~PE1] display bgp peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 2 6 0 00:00:12 Established 0
[~PE1] display bgp vpnv4 all peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 12 18 0 00:09:38 Established 0
Step 7 Configure a VPN instance on each PE, enable the IPv4 address family for the
instance, and bind the instance to the PE interface connecting the PE to a CE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.3.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address
using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a
source-ip-address dest-ip-address command to ping the CE that is connected to the remote
PE. If the source IP address is not specified, the ping operation fails.
Step 8 Configure a tunnel policy on each PE, so that SR-MPLS TE tunnels are prioritized.
# Configure PE1.
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 1 unmix
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-te load-balance-number 1 unmix
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] commit
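The tunnel select-seq behavior configured above can be pictured as follows: the policy walks the configured type sequence, takes up to load-balance-number tunnels of the first type that has at least one tunnel up, and (with unmix) never mixes tunnel types. A minimal sketch under a simplified tunnel model (the function and tuple layout are assumptions for illustration):

```python
def select_tunnels(tunnels, type_seq, load_balance_number):
    """tunnels: list of (name, type, is_up) tuples (a simplified model).
    Returns the tunnels used for load balancing: up to load_balance_number
    tunnels of the first type in type_seq that has at least one tunnel up.
    Types are never mixed, mirroring the 'unmix' keyword."""
    for ttype in type_seq:
        candidates = [name for name, t, up in tunnels if t == ttype and up]
        if candidates:
            return candidates[:load_balance_number]
    return []

# As in this example: only SR-MPLS TE tunnels qualify, one tunnel is selected.
tunnels = [("Tunnel1", "sr-te", True), ("Tunnel2", "sr-te", True)]
assert select_tunnels(tunnels, ["sr-te"], 1) == ["Tunnel1"]
```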
Step 9 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 10.11.1.1 32
[*CE1-LoopBack1] quit
[*CE1] interface gigabitethernet1/0/0
[*CE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
[*CE1-GigabitEthernet1/0/0] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.1.1.2 as-number 100
[*CE1-bgp] network 10.11.1.1 32
[*CE1-bgp] quit
[*CE1] commit
Repeat this step on CE2. For configuration details, see Configuration Files in this section.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
Repeat this step on PE2. For configuration details, see Configuration Files in this section.
After the configuration is complete, run the display bgp vpnv4 vpn-instance peer
command on the PEs and check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 10.13.1.1
Local AS number : 100
To check the tunnel to which the VPN route recurses, run the display ip routing-table
vpn-instance vpna 10.22.2.2 verbose command on PE1.
[~PE1] display ip routing-table vpn-instance vpna 10.22.2.2 verbose
Destination: 10.22.2.2/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 3.3.3.9 Neighbour: 3.3.3.9
State: Active Adv Relied Age: 00h49m58s
Tag: 0 Priority: low
Label: 48120 QoSInfo: 0x0
IndirectID: 0x10000DA Instance:
RelayNextHop: 0.0.0.0 Interface: Tunnel1
TunnelID: 0x000000000300000001 Flags: RD
The preceding command output shows that the corresponding VPN route has
successfully recursed to an SR-MPLS TE tunnel.
Run the ping command. The command output shows that CEs in the same VPN
can ping each other. For example, CE1 can ping CE2 at 10.22.2.2.
[~CE1] ping -a 10.11.1.1 10.22.2.2
PING 10.22.2.2: 56 data bytes, press CTRL_C to break
Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
Step 10 Configure the SR-MPLS TE tunnels to transmit services with different DSCP values.
# Configure PE1.
[*PE1] interface Tunnel1
[*PE1-Tunnel1] match dscp ipv4 0 to 10
[*PE1-Tunnel1] quit
[*PE1] interface Tunnel2
[*PE1-Tunnel2] match dscp default
[*PE1-Tunnel2] quit
[*PE1] commit
# Configure PE2.
[*PE2] interface Tunnel1
[*PE2-Tunnel1] match dscp ipv4 0 to 10
[*PE2-Tunnel1] quit
[*PE2] interface Tunnel2
[*PE2-Tunnel2] match dscp default
[*PE2-Tunnel2] quit
[*PE2] commit
IP packets have a default DSCP value. You can also use multi-field
classification to set desired DSCP values for IP packets. After the
configuration is complete, tunnel 1 transmits services with DSCP values ranging
from 0 to 10, and tunnel 2 transmits other services by default.
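For reference, the DSCP value that drives this recursion is carried in the upper six bits of the IPv4 ToS byte (or the IPv6 Traffic Class byte); the lower two bits are ECN. A short sketch of how the value is derived:

```python
def dscp_from_tos(tos_byte):
    """DSCP is the upper 6 bits of the IPv4 ToS (or IPv6 Traffic Class)
    byte; the lower 2 bits are the ECN field."""
    return (tos_byte & 0xFF) >> 2

# ToS 0xB8 is the classic EF marking: DSCP 46.
assert dscp_from_tos(0xB8) == 46
# A ToS byte of 0x28 (DSCP 10) would still match tunnel 1 in this example.
assert dscp_from_tos(0x28) == 10
```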
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
#
explicit-path path1
next sid label 330000 type adjacency
next sid label 330002 type adjacency
#
explicit-path path2
next sid label 330001 type adjacency
next sid label 330003 type adjacency
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
#
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
mpls te
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.11.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.12.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
#
explicit-path path1
next sid label 330000 type adjacency
next sid label 330003 type adjacency
#
explicit-path path2
next sid label 330001 type adjacency
next sid label 330002 type adjacency
#
segment-routing
ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.14.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
Networking Requirements
On the network shown in Figure 1-134:
● CE1 and CE2 belong to a VPN instance named vpna.
● The VPN target used by vpna is 111:1.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
If an interface connecting a PE to a CE is bound to a VPN instance, Layer 3
configurations, such as the IP address and routing protocol configuration, on the
interface will be deleted. Reconfigure them if needed.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IS-IS for each device on the backbone network to implement
interworking between PEs and Ps.
2. Enable MPLS and SR for each device on the backbone network, and configure
static adjacency SIDs.
3. Configure an SR-MPLS TE Policy with primary and backup paths on each PE.
4. Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.
5. Establish an MP-IBGP peer relationship between PEs for them to exchange
routing information.
6. Apply an import or export route-policy to a specified VPNv4 peer on each PE,
and set the Color Extended Community. In this example, an import route-
policy with the Color Extended Community is applied.
7. Create a VPN instance and enable the IPv4 address family on each PE. Then,
bind each PE's interface connecting the PE to a CE to the corresponding VPN
instance.
8. Configure a tunnel selection policy on each PE.
9. Establish an EBGP peer relationship between each CE-PE pair for the CE and
PE to exchange routing information.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of PEs and Ps
● VPN target and RD of vpna
Procedure
Step 1 Configure interface IP addresses for each device on the backbone network.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 2 Configure an IGP for each device on the backbone network to implement
interworking between PEs and Ps. In this example, the IGP is IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] commit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] isis enable 1
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] isis enable 1
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] commit
[~P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 3 Configure basic MPLS functions for each device on the backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
Step 4 Enable SR for each device on the backbone network and configure static
adjacency SIDs.
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] quit
[*P2] commit
[*PE1-segment-routing-te-policy] quit
[*PE1-segment-routing] quit
[*PE1] commit
# Configure PE2.
[~PE2] segment-routing
[~PE2-segment-routing] segment-list pe2
[*PE2-segment-routing-segment-list] index 10 sid label 330000
[*PE2-segment-routing-segment-list] index 20 sid label 330003
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] segment-list pe2backup
[*PE2-segment-routing-segment-list] index 10 sid label 330001
[*PE2-segment-routing-segment-list] index 20 sid label 330002
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
[*PE2-segment-routing-te-policy] binding-sid 115
[*PE2-segment-routing-te-policy] mtu 1000
[*PE2-segment-routing-te-policy] candidate-path preference 100
[*PE2-segment-routing-te-policy-path] segment-list pe2backup
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] candidate-path preference 200
[*PE2-segment-routing-te-policy-path] segment-list pe2
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] quit
[*PE2-segment-routing] quit
[*PE2] commit
After completing the configuration, run the display sr-te policy command to
check SR-MPLS TE Policy information. The following example uses the command
output on PE1.
[~PE1] display sr-te policy
PolicyName : policy100
Endpoint : 3.3.3.9 Color : 100
TunnelId : 1 TunnelType : SR-TE Policy
Binding SID : 115 MTU : 1000
Policy State : UP BFD : Disable
Admin State : UP DiffServ-Mode : -
Traffic Statistics : Disable Backup Hot-Standby : Disable
Candidate-path Count : 2
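An SR-MPLS TE Policy installs the valid candidate path with the highest preference as the primary path; once hot standby is enabled, the next-best valid path is pre-installed as the backup. A minimal sketch of that selection (the tuple layout is an assumption for illustration):

```python
def pick_paths(candidate_paths):
    """candidate_paths: list of (preference, segment_list, is_valid).
    Returns (primary, backup) segment-list names: the highest-preference
    valid path is primary, the next-best valid path is the hot-standby
    backup (if any)."""
    valid = sorted((p for p in candidate_paths if p[2]),
                   key=lambda p: p[0], reverse=True)
    primary = valid[0][1] if valid else None
    backup = valid[1][1] if len(valid) > 1 else None
    return primary, backup

# Matches policy100 on PE1: preference 200 wins, preference 100 becomes backup.
paths = [(100, "pe1backup", True), (200, "pe1", True)]
assert pick_paths(paths) == ("pe1", "pe1backup")
```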
Step 6 Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.
# Configure PE1.
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] sbfd
[*PE1-sbfd] reflector discriminator 1.1.1.9
[*PE1-sbfd] quit
[*PE1] segment-routing
[*PE1-segment-routing] sr-te-policy seamless-bfd enable
[*PE1-segment-routing] sr-te-policy backup hot-standby enable
[*PE1-segment-routing] commit
[~PE1-segment-routing] quit
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] sbfd
[*PE2-sbfd] reflector discriminator 3.3.3.9
[*PE2-sbfd] quit
[*PE2] segment-routing
[*PE2-segment-routing] sr-te-policy seamless-bfd enable
[*PE2-segment-routing] sr-te-policy backup hot-standby enable
[*PE2-segment-routing] commit
[~PE2-segment-routing] quit
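With the configuration above, SBFD continuously probes the primary path, and HSB keeps the backup path pre-installed so that a switchover needs no recomputation. The resulting forwarding decision can be sketched as (a deliberately trivial model, not device behavior):

```python
def forwarding_path(primary_up, backup_up):
    """Hot-standby behavior driven by SBFD: traffic switches to the
    pre-installed backup path as soon as SBFD declares the primary down."""
    if primary_up:
        return "primary"
    if backup_up:
        return "hot-standby backup"
    return "down"

assert forwarding_path(True, True) == "primary"
assert forwarding_path(False, True) == "hot-standby backup"
```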
Step 7 Configure a route-policy with the Color Extended Community on each PE.
# Configure PE1.
[~PE1] route-policy color100 permit node 1
[*PE1-route-policy] apply extcommunity color 0:100
[*PE1-route-policy] quit
[*PE1] commit
# Configure PE2.
[~PE2] route-policy color200 permit node 1
[*PE2-route-policy] apply extcommunity color 0:200
[*PE2-route-policy] quit
[*PE2] commit
Step 8 Establish an MP-IBGP peer relationship between PEs, apply an import route-policy
to a specified VPNv4 peer on each PE, and set the Color Extended Community.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 3.3.3.9 as-number 100
[*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
[*PE1-bgp-af-vpnv4] peer 3.3.3.9 route-policy color100 import
[*PE1-bgp-af-vpnv4] commit
[~PE1-bgp-af-vpnv4] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 route-policy color200 import
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
After completing the configuration, run the display bgp peer or display bgp
vpnv4 all peer command on each PE. The following example uses the command
output on PE1. The command output shows that a BGP peer relationship has been
established between the PEs and is in the Established state.
[~PE1] display bgp peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 2 6 0 00:00:12 Established 0
[~PE1] display bgp vpnv4 all peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 12 18 0 00:09:38 Established 0
Step 9 Create a VPN instance and enable the IPv4 address family on each PE. Then, bind
each PE's interface connecting the PE to a CE to the corresponding VPN instance.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.3.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
If a PE has multiple interfaces bound to the same VPN instance, you must specify a source
IP address by specifying -a source-ip-address in the ping -vpn-instance vpn-instance-name
-a source-ip-address dest-ip-address command to ping the CE connected to the remote PE.
Otherwise, the ping operation fails.
Step 10 Configure a tunnel selection policy on each PE for the specified SR-MPLS TE Policy
to be preferentially selected.
# Configure PE1.
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] commit
Step 11 Establish an EBGP peer relationship between each CE-PE pair for the CE and
PE to exchange routing information.
The configuration of CE2 is similar to the configuration of CE1. For configuration details,
see "Configuration Files" in this section.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
The configuration of PE2 is similar to the configuration of PE1. For configuration details,
see "Configuration Files" in this section.
After completing the configuration, run the display bgp vpnv4 vpn-instance peer
command on each PE. The following example uses the peer relationship between
PE1 and CE1. The command output shows that a BGP peer relationship has been
established between the PE and CE and is in the Established state.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 10.13.1.1
Local AS number : 100
To check the SR-MPLS TE Policy to which the VPN route recurses, run the display ip
routing-table vpn-instance vpna 10.22.2.2 verbose command on PE1.
[~PE1] display ip routing-table vpn-instance vpna 10.22.2.2 verbose
Destination: 10.22.2.2/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 3.3.3.9 Neighbour: 3.3.3.9
State: Active Adv Relied Age: 01h18m38s
Tag: 0 Priority: low
Label: 48180 QoSInfo: 0x0
IndirectID: 0x10000B9 Instance:
RelayNextHop: 0.0.0.0 Interface: policy100
TunnelID: 0x000000003200000041 Flags: RD
The command output shows that the VPN route has been successfully recursed to
the specified SR-MPLS TE Policy.
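The recursion shown above is driven by two keys: the Color Extended Community that the import route-policy stamps on the route, and the route's BGP next hop. Together they must match an SR-MPLS TE Policy's color and endpoint. A sketch of that lookup, with hypothetical names:

```python
def resolve_policy(route_color, route_next_hop, policies):
    """policies: dict mapping (color, endpoint) -> policy name.
    A VPN route recurses to the SR-MPLS TE Policy whose (color, endpoint)
    pair equals the route's (color, BGP next hop); with no match, the
    route cannot recurse to a policy."""
    return policies.get((route_color, route_next_hop))

policies = {(100, "3.3.3.9"): "policy100", (200, "1.1.1.9"): "policy200"}
# PE1's import route-policy sets color 0:100 on routes learned from PE2 (3.3.3.9).
assert resolve_policy(100, "3.3.3.9", policies) == "policy100"
assert resolve_policy(300, "3.3.3.9", policies) is None
```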
CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at
10.22.2.2.
[~CE1] ping -a 10.11.1.1 10.22.2.2
PING 10.22.2.2: 56 data bytes, press CTRL_C to break
Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
--- 10.22.2.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
bfd
#
sbfd
reflector discriminator 1.1.1.9
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
segment-list pe1
index 10 sid label 330000
index 20 sid label 330002
segment-list pe1backup
index 10 sid label 330001
index 20 sid label 330003
sr-te policy policy100 endpoint 3.3.3.9 color 100
binding-sid 115
mtu 1000
candidate-path preference 100
segment-list pe1backup
candidate-path preference 200
segment-list pe1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.13.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.11.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
Networking Requirements
On the network shown in Figure 1-135:
● CE1 and CE2 belong to a VPN instance named vpna.
● The VPN target used by vpna is 111:1.
Configure L3VPN routes to be recursed to SR-MPLS TE Policies to ensure secure
communication between users of the same VPN. Because multiple links exist
between PEs on the public network, other links must be able to provide protection
for the primary link.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
If an interface connecting a PE to a CE is bound to a VPN instance, Layer 3
configurations, such as the IP address and routing protocol configuration, on the
interface will be deleted. Reconfigure them if needed.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IS-IS for each device on the backbone network to implement
interworking between PEs and Ps.
2. Enable MPLS and SR for each device on the backbone network, and configure
static adjacency SIDs.
3. Configure an SR-MPLS TE Policy with primary and backup paths on each PE.
4. Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.
5. Establish an MP-IBGP peer relationship between PEs for them to exchange
routing information.
6. Create a VPN instance and enable the IPv4 address family on each PE. Then,
bind each PE's interface connecting the PE to a CE to the corresponding VPN
instance.
7. Configure an SR-MPLS TE Policy group on each PE and define mappings
between the color and DSCP values.
8. Configure a tunnel selection policy on each PE for the specified SR-MPLS TE
Policy group to be preferentially selected.
9. Establish an EBGP peer relationship between each CE-PE pair for the CE and
PE to exchange routing information.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of PEs and Ps
● VPN target and RD of vpna
Procedure
Step 1 Configure interface IP addresses for each device on the backbone network.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
Step 2 Configure an IGP for each device on the backbone network to implement
interworking between PEs and Ps. In this example, the IGP is IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] commit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] isis enable 1
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] isis enable 1
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] commit
[~P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 3 Configure basic MPLS functions for each device on the backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] quit
[*P2] commit
# Configure PE2.
[~PE2] segment-routing
[~PE2-segment-routing] segment-list pe2
[*PE2-segment-routing-segment-list] index 10 sid label 330000
[*PE2-segment-routing-segment-list] index 20 sid label 330003
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] segment-list pe2backup
[*PE2-segment-routing-segment-list] index 10 sid label 330001
[*PE2-segment-routing-segment-list] index 20 sid label 330002
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
[*PE2-segment-routing-te-policy] binding-sid 115
[*PE2-segment-routing-te-policy] mtu 1000
[*PE2-segment-routing-te-policy] diffserv-mode pipe af1 green
[*PE2-segment-routing-te-policy] candidate-path preference 100
[*PE2-segment-routing-te-policy-path] segment-list pe2backup
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] candidate-path preference 200
[*PE2-segment-routing-te-policy-path] segment-list pe2
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] quit
[*PE2-segment-routing] quit
[*PE2] commit
After completing the configuration, run the display sr-te policy command to
check SR-MPLS TE Policy information. The following example uses the command
output on PE1.
[~PE1] display sr-te policy
PolicyName : policy100
Endpoint : 3.3.3.9 Color : 100
TunnelId : 1 TunnelType : SR-TE Policy
Binding SID : 115 MTU : 1000
Policy State : UP BFD : Disable
Admin State : UP DiffServ-Mode : Pipe, AF1, Green
Traffic Statistics : Disable Backup Hot-Standby : Disable
Candidate-path Count : 2
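Candidate-path selection follows a simple rule: among the valid candidate paths of a policy, the one with the highest preference value becomes active, and a lower-preference path (such as pe1backup) takes over only if the preferred path becomes invalid. The rule can be sketched in Python as follows (the data model is illustrative, not device code):

```python
# Sketch: how a headend selects the active candidate path of an SR-TE Policy.
# The dictionary fields below are a hypothetical model of the device state.

def select_active_path(candidate_paths):
    """Return the valid candidate path with the highest preference, or None."""
    valid = [p for p in candidate_paths if p["valid"]]
    if not valid:
        return None
    return max(valid, key=lambda p: p["preference"])

# policy100 on PE1: preference 200 (segment-list pe1) wins over 100 (pe1backup).
paths = [
    {"preference": 100, "segment_list": "pe1backup", "valid": True},
    {"preference": 200, "segment_list": "pe1", "valid": True},
]
active = select_active_path(paths)
print(active["segment_list"])  # pe1
```

If SBFD later declares the preferred path down, the same rule re-runs over the remaining valid paths and selects pe1backup.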
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] sbfd
[*PE2-sbfd] reflector discriminator 3.3.3.9
[*PE2-sbfd] quit
[*PE2] segment-routing
[*PE2-segment-routing] sr-te-policy seamless-bfd enable
[*PE2-segment-routing] sr-te-policy backup hot-standby enable
[*PE2-segment-routing] commit
[~PE2-segment-routing] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
After completing the configuration, run the display bgp peer or display bgp
vpnv4 all peer command on each PE. The following example uses the command
output on PE1. The command output shows that a BGP peer relationship has been
established between the PEs and is in the Established state.
[~PE1] display bgp peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 2 6 0 00:00:12 Established 0
[~PE1] display bgp vpnv4 all peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 12 18 0 00:09:38 Established 0
Step 8 Create a VPN instance and enable the IPv4 address family on each PE. Then, bind
each PE's interface connecting the PE to a CE to the corresponding VPN instance.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.3.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
If a PE has multiple interfaces bound to the same VPN instance, you must specify a source
IP address by specifying -a source-ip-address in the ping -vpn-instance vpn-instance-name
-a source-ip-address dest-ip-address command to ping the CE connected to the remote PE.
Otherwise, the ping operation fails.
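Route exchange between the two VPN instances is controlled by VPN-target matching: the vpn-target 111:1 both command sets 111:1 as both the export and import route target, so each PE imports the VPNv4 routes that the other PE exports. The matching rule can be sketched as follows (a simplified Python model; the RT strings are illustrative):

```python
# Sketch: route-target matching that controls VPNv4 route import.
# "vpn-target 111:1 both" sets 111:1 as export and import RT on each PE.

def accepts_route(import_rts, route_export_rts):
    """A VPN instance imports a route if any of the route's export RTs
    (carried as BGP extended communities) matches one of its import RTs."""
    return bool(set(import_rts) & set(route_export_rts))

pe1_vpna_import = {"111:1"}
print(accepts_route(pe1_vpna_import, {"111:1"}))  # True: route is imported
print(accepts_route(pe1_vpna_import, {"222:2"}))  # False: route is ignored
```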
Step 9 Configure a tunnel selection policy on each PE for the specified SR-MPLS TE Policy
group to be preferentially selected.
# Configure PE1.
[~PE1] segment-routing
[~PE1-segment-routing] sr-te-policy group 1
[*PE1-segment-routing-te-policy-group-1] endpoint 3.3.3.9
[*PE1-segment-routing-te-policy-group-1] color 100 match dscp ipv4 20
[*PE1-segment-routing-te-policy-group-1] quit
[*PE1-segment-routing] quit
[*PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel binding destination 3.3.3.9 sr-te-policy group 1
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
[*PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] commit
# Configure PE2.
[~PE2] segment-routing
[~PE2-segment-routing] sr-te-policy group 1
[*PE2-segment-routing-te-policy-group-1] endpoint 1.1.1.9
[*PE2-segment-routing-te-policy-group-1] color 200 match dscp ipv4 20
[*PE2-segment-routing-te-policy-group-1] quit
[*PE2-segment-routing] quit
[*PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel binding destination 1.1.1.9 sr-te-policy group 1
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] commit
After completing the configuration, run the display sr-te policy group 1
command on each PE to check SR-MPLS TE Policy group status.
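The color 100 match dscp ipv4 20 command maps IPv4 packets carrying DSCP 20 to the policy of color 100 within group 1 (endpoint 3.3.3.9). A minimal Python sketch of this steering model (assumed behavior; packets with unmatched DSCP values are not steered by this group entry):

```python
# Sketch: DSCP-based steering within an SR-TE Policy group.
# From "color 100 match dscp ipv4 20" on PE1 (group 1, endpoint 3.3.3.9).

dscp_to_color = {20: 100}                       # DSCP value -> policy color
policies = {("3.3.3.9", 100): "policy100"}      # (endpoint, color) -> policy

def steer(endpoint, dscp):
    """Return the policy that traffic with this DSCP recurses to, if any."""
    color = dscp_to_color.get(dscp)
    if color is None:
        return None  # no mapping for this DSCP in the group
    return policies.get((endpoint, color))

print(steer("3.3.3.9", 20))  # policy100
```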
The configuration of CE2 is similar to the configuration of CE1. For configuration details,
see "Configuration Files" in this section.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
The configuration of PE2 is similar to the configuration of PE1. For configuration details,
see "Configuration Files" in this section.
After completing the configuration, run the display bgp vpnv4 vpn-instance peer
command on each PE. The following example uses the peer relationship between
PE1 and CE1. The command output shows that a BGP peer relationship has been
established between the PE and CE and is in the Established state.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Run the display ip routing-table vpn-instance vpna verbose command on each
PE to check detailed information about the VPN route toward the remote CE. The
following example uses the command output on PE1.
[~PE1] display ip routing-table vpn-instance vpna 10.22.2.2 verbose
Destination: 10.22.2.2/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 3.3.3.9 Neighbour: 3.3.3.9
State: Active Adv Relied Age: 01h18m38s
Tag: 0 Priority: low
Label: 48180 QoSInfo: 0x0
IndirectID: 0x10000B9 Instance:
RelayNextHop: 0.0.0.0 Interface: SR-TE Policy Group
TunnelID: 0x000000003200000041 Flags: RD
The command output shows that the VPN route has been successfully recursed to
the specified SR-MPLS TE Policy.
CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at
10.22.2.2.
[~CE1] ping -a 10.11.1.1 10.22.2.2
PING 10.22.2.2: 56 data bytes, press CTRL_C to break
Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
--- 10.22.2.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 34/48/72 ms
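The summary lines of the ping output can be reproduced directly from the five echo replies:

```python
# Sketch: recomputing the ping summary from the five RTT samples above.
rtts_ms = [72, 34, 50, 50, 34]
sent, received = 5, len(rtts_ms)

loss_pct = 100.0 * (sent - received) / sent
avg = sum(rtts_ms) // len(rtts_ms)

print(f"{loss_pct:.2f}% packet loss")                               # 0.00% packet loss
print(f"round-trip min/avg/max = {min(rtts_ms)}/{avg}/{max(rtts_ms)} ms")  # 34/48/72 ms
```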
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
bfd
#
sbfd
reflector discriminator 1.1.1.9
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
segment-list pe1
index 10 sid label 330000
index 20 sid label 330002
segment-list pe1backup
index 10 sid label 330001
index 20 sid label 330003
sr-te-policy group 1
endpoint 3.3.3.9
color 100 match dscp ipv4 20
sr-te policy policy100 endpoint 3.3.3.9 color 100
diffserv-mode pipe af1 green
binding-sid 115
mtu 1000
candidate-path preference 100
segment-list pe1backup
candidate-path preference 200
segment-list pe1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.13.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.11.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.9 enable
#
ipv4-family vpn-instance vpna
peer 10.1.1.1 as-number 65410
#
tunnel-policy p1
tunnel binding destination 3.3.3.9 sr-te-policy group 1
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.11.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.12.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
bfd
#
sbfd
reflector discriminator 3.3.3.9
#
mpls lsr-id 3.3.3.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
#
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.13.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.14.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
return
Networking Requirements
On the network shown in Figure 1-136:
● CE1 and CE2 belong to a VPN instance named vpna.
● The VPN target used by vpna is 111:1.
Configure L3VPN routes to be recursed to SR-MPLS TE Policies to ensure secure
communication between users of the same VPN. Because multiple links exist
between PEs on the public network, other links must be able to provide protection
for the primary link.
Manual SR-MPLS TE Policy configuration is complex and cannot adapt to dynamic
network changes. In this example, a controller is deployed. Therefore, the
controller is used to dynamically deliver SR-MPLS TE Policies. Because the
controller is capable of detecting adjacency SID changes through BGP-LS, it is
recommended that an IGP be used to dynamically generate adjacency SIDs.
Interfaces 1 through 4 in this example represent GE 1/0/0, GE 2/0/0, GE 3/0/0, and GE 4/0/0,
respectively.
Precautions
If an interface connecting a PE to a CE is bound to a VPN instance, Layer 3
configurations, such as the IP address and routing protocol configuration, on the
interface will be deleted. Reconfigure them if needed.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IS-IS for each device on the backbone network to implement
interworking between PEs and Ps.
2. Enable MPLS and SR for each device on the backbone network. In addition,
configure IS-IS SR and use IS-IS to dynamically generate adjacency SIDs and
advertise the SIDs to neighbors.
3. Establish a BGP IPv4 SR-MPLS TE Policy address family-specific peer
relationship between each PE and the controller, and configure the PEs to
report network topology and label information to the controller through
BGP-LS. After completing path computation, the controller delivers SR-MPLS
TE Policy routes to the PEs through BGP.
4. Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.
5. Establish an MP-IBGP peer relationship between PEs for them to exchange
routing information.
6. Apply an import or export route-policy to a specified VPNv4 peer on each PE,
and set the Color Extended Community. In this example, an import
route-policy with the Color Extended Community is applied.
7. Create a VPN instance and enable the IPv4 address family on each PE. Then,
bind each PE's interface connecting the PE to a CE to the corresponding VPN
instance.
8. Configure a tunnel selection policy on each PE.
9. Establish an EBGP peer relationship between each CE-PE pair for the CE and
PE to exchange routing information.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of PEs and Ps
● VPN target and RD of vpna
● SRGB ranges on PEs and Ps
Procedure
Step 1 Configure interface IP addresses for each device on the backbone network.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] interface gigabitethernet4/0/0
[*PE1-GigabitEthernet4/0/0] ip address 10.3.1.1 24
[*PE1-GigabitEthernet4/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
# Configure a controller.
<HUAWEI> system-view
[~HUAWEI] sysname Controller
[*HUAWEI] commit
[~Controller] interface gigabitethernet1/0/0
[*Controller-GigabitEthernet1/0/0] ip address 10.3.1.2 24
[*Controller-GigabitEthernet1/0/0] quit
[*Controller] interface gigabitethernet2/0/0
[*Controller-GigabitEthernet2/0/0] ip address 10.4.1.2 24
[*Controller-GigabitEthernet2/0/0] quit
[*Controller] commit
Step 2 Configure an IGP for each device on the backbone network to implement
interworking between PEs and Ps. In this example, the IGP is IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] commit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] isis enable 1
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] isis enable 1
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] commit
[~P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 3 Configure basic MPLS functions for each device on the backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
The SRGB range in this example is for reference only. It varies according to the device and
should be set based on actual requirements.
[*PE1-isis-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis prefix-sid index 10
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] traffic-eng level-1
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] segment-routing global-block 16000 23999
The SRGB range in this example is for reference only. It varies according to the device and
should be set based on actual requirements.
[*P1-isis-1] quit
[*P1] interface loopback 1
[*P1-LoopBack1] isis prefix-sid index 20
[*P1-LoopBack1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] traffic-eng level-1
[*PE2-isis-1] segment-routing global-block 16000 23999
The SRGB range in this example is for reference only. It varies according to the device and
should be set based on actual requirements.
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis prefix-sid index 30
[*PE2-LoopBack1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] traffic-eng level-1
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] segment-routing global-block 16000 23999
The SRGB range in this example is for reference only. It varies according to the device and
should be set based on actual requirements.
[*P2-isis-1] quit
[*P2] interface loopback 1
[*P2-LoopBack1] isis prefix-sid index 40
[*P2-LoopBack1] quit
[*P2] commit
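With IS-IS SR, each node advertises its prefix-SID index, and every device derives the corresponding MPLS label as the SRGB base plus the index. With segment-routing global-block 16000 23999 and the indexes configured above (10 for PE1, 20 for P1, 30 for PE2, 40 for P2), the labels can be computed as follows (a minimal sketch of the standard index-to-label mapping):

```python
# Sketch: prefix-SID index to MPLS label mapping within the SRGB.
# label = SRGB base + index, provided the result stays inside the block.

def prefix_sid_label(srgb_base, srgb_end, index):
    label = srgb_base + index
    if label > srgb_end:
        raise ValueError("prefix-SID index falls outside the SRGB")
    return label

# PE1 (index 10), P1 (20), PE2 (30), P2 (40) with SRGB 16000-23999:
for index in (10, 20, 30, 40):
    print(prefix_sid_label(16000, 23999, index))  # 16010, 16020, 16030, 16040
```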
Step 5 Configure BGP-LS address family-specific and BGP IPv4 SR-MPLS TE Policy address
family-specific peer relationships.
To allow a PE to report topology information to the controller through BGP-LS, you
must enable IS-IS-based topology advertisement on the PE. Generally, only one
device in an IGP domain requires this function. In this example, it is enabled on
both PE1 and PE2 to improve reliability.
# Enable IS-IS SR-MPLS TE on PE1.
[~PE1] isis 1
[*PE1-isis-1] bgp-ls enable level-1
[*PE1-isis-1] commit
[~PE1-isis-1] quit
[*PE2-bgp-af-srpolicy] commit
[~PE2-bgp-af-srpolicy] quit
[~PE2-bgp] quit
After the configuration is complete, the PEs can receive the SR-MPLS TE Policy
routes delivered by the controller and then generate SR-MPLS TE Policies. Run the
display sr-te policy command to check SR-MPLS TE Policy information. The
following example uses the command output on PE1.
[~PE1] display sr-te policy
PolicyName : policy100
Endpoint : 3.3.3.9 Color : 100
TunnelId : 1 TunnelType : SR-TE Policy
Binding SID : 115 MTU : 1000
Policy State : UP BFD : Disable
Admin State : UP DiffServ-Mode : -
Traffic Statistics : Disable Backup Hot-Standby : Disable
Candidate-path Count : 2
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] sbfd
[*PE2-sbfd] reflector discriminator 3.3.3.9
[*PE2-sbfd] quit
[*PE2] segment-routing
[*PE2-segment-routing] sr-te-policy seamless-bfd enable
[*PE2-segment-routing] sr-te-policy backup hot-standby enable
[*PE2-segment-routing] commit
[~PE2-segment-routing] quit
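SBFD keeps the reflector stateless: it answers any SBFD packet whose target discriminator matches one of its configured discriminators, while the initiator on the policy headend sets that discriminator (here, the 32-bit form of the reflector's address 3.3.3.9). A minimal Python sketch of this matching model (simplified; not the wire protocol):

```python
# Sketch: the SBFD reflector matching model (simplified).
# "reflector discriminator 3.3.3.9" on PE2 registers the 32-bit integer form
# of 3.3.3.9; PE1's initiator uses the same value as the remote discriminator.
import ipaddress

def reflector_responds(configured_discriminators, target_discriminator):
    """A stateless reflector loops back probes whose target matches."""
    return target_discriminator in configured_discriminators

pe2_discriminators = {int(ipaddress.IPv4Address("3.3.3.9"))}
probe_target = int(ipaddress.IPv4Address("3.3.3.9"))  # set by PE1's initiator
print(reflector_responds(pe2_discriminators, probe_target))  # True
```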
# Configure PE2.
[~PE2] route-policy color200 permit node 1
[*PE2-route-policy] apply extcommunity color 0:200
[*PE2-route-policy] quit
[*PE2] commit
Step 8 Establish an MP-IBGP peer relationship between PEs, apply an import route-policy
to a specified VPNv4 peer on each PE, and set the Color Extended Community.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 3.3.3.9 as-number 100
[*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 3.3.3.9 enable
[*PE1-bgp-af-vpnv4] peer 3.3.3.9 route-policy color100 import
[*PE1-bgp-af-vpnv4] commit
[~PE1-bgp-af-vpnv4] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
[*PE2-bgp-af-vpnv4] peer 1.1.1.9 route-policy color200 import
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
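The import route-policies above attach a Color Extended Community (0:100 on PE1, 0:200 on PE2) to the received VPNv4 routes. Each route then recurses to the SR-MPLS TE Policy whose endpoint and color match the route's BGP next hop and color. This recursion rule can be sketched as follows (assumed model; policy names are from this example):

```python
# Sketch: color-based route recursion to an SR-TE Policy.
# A VPNv4 route recurses to the policy whose (endpoint, color) equals the
# route's (BGP next hop, Color Extended Community value).

policies = {("3.3.3.9", 100): "policy100"}  # as delivered to PE1

def recurse(next_hop, color):
    """Return the matching SR-TE Policy, or None if no policy matches."""
    return policies.get((next_hop, color))

print(recurse("3.3.3.9", 100))  # policy100
print(recurse("3.3.3.9", 999))  # None: no matching policy for this color
```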
After completing the configuration, run the display bgp peer or display bgp
vpnv4 all peer command on each PE. The following example uses the command
output on PE1. The command output shows that a BGP peer relationship has been
established between the PEs and is in the Established state.
Step 9 Create a VPN instance and enable the IPv4 address family on each PE. Then, bind
each PE's interface connecting the PE to a CE to the corresponding VPN instance.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.2 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.3.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
If a PE has multiple interfaces bound to the same VPN instance, you must specify a source
IP address by specifying -a source-ip-address in the ping -vpn-instance vpn-instance-name
-a source-ip-address dest-ip-address command to ping the CE connected to the remote PE.
Otherwise, the ping operation fails.
Step 10 Configure a tunnel selection policy on each PE for the specified SR-MPLS TE Policy
to be preferentially selected.
# Configure PE1.
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] commit
The configuration of CE2 is similar to the configuration of CE1. For configuration details,
see "Configuration Files" in this section.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
The configuration of PE2 is similar to the configuration of PE1. For configuration details,
see "Configuration Files" in this section.
After completing the configuration, run the display bgp vpnv4 vpn-instance peer
command on each PE. The following example uses the peer relationship between
PE1 and CE1. The command output shows that a BGP peer relationship has been
established between the PE and CE and is in the Established state.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Run the display ip routing-table vpn-instance vpna verbose command on each
PE to check detailed information about the VPN route toward the remote CE. The
following example uses the command output on PE1.
[~PE1] display ip routing-table vpn-instance vpna 10.22.2.2 verbose
Destination: 10.22.2.2/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 3.3.3.9 Neighbour: 3.3.3.9
State: Active Adv Relied Age: 01h18m38s
Tag: 0 Priority: low
Label: 48180 QoSInfo: 0x0
IndirectID: 0x10000B9 Instance:
RelayNextHop: 0.0.0.0 Interface: policy100
TunnelID: 0x000000003200000041 Flags: RD
The command output shows that the VPN route has been successfully recursed to
the specified SR-MPLS TE Policy.
CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at
10.22.2.2.
[~CE1] ping -a 10.11.1.1 10.22.2.2
PING 10.22.2.2: 56 data bytes, press CTRL_C to break
Reply from 10.22.2.2: bytes=56 Sequence=1 ttl=251 time=72 ms
Reply from 10.22.2.2: bytes=56 Sequence=2 ttl=251 time=34 ms
Reply from 10.22.2.2: bytes=56 Sequence=3 ttl=251 time=50 ms
Reply from 10.22.2.2: bytes=56 Sequence=4 ttl=251 time=50 ms
Reply from 10.22.2.2: bytes=56 Sequence=5 ttl=251 time=34 ms
--- 10.22.2.2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 34/48/72 ms
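The summary line follows directly from the five reply times shown above; a quick sanity check in plain Python:

```python
# Reproduce the ping statistics from the individual replies above.
times_ms = [72, 34, 50, 50, 34]              # Sequence=1..5 round-trip times
loss_pct = 100.0 * (1 - len(times_ms) / 5)   # 5 packets sent, 5 received
rtt_min, rtt_max = min(times_ms), max(times_ms)
rtt_avg = sum(times_ms) // len(times_ms)     # 240 / 5 = 48
print(f"{loss_pct:.2f}% packet loss")                          # 0.00% packet loss
print(f"round-trip min/avg/max = {rtt_min}/{rtt_avg}/{rtt_max} ms")  # 34/48/72 ms
```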
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
bfd
#
sbfd
reflector discriminator 1.1.1.9
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
traffic-eng level-1
segment-routing mpls
segment-routing global-block 16000 23999
bgp-ls enable level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.13.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.11.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
peer 10.3.1.2 as-number 100
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
link-state-family unicast
peer 10.3.1.2 enable
#
ipv4-family sr-policy
peer 10.3.1.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.9 enable
peer 3.3.3.9 route-policy color100 import
#
ipv4-family vpn-instance vpna
peer 10.1.1.1 as-number 65410
#
route-policy color100 permit node 1
apply extcommunity color 0:100
#
tunnel-policy p1
tunnel select-seq sr-te-policy load-balance-number 1 unmix
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
#
segment-routing
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
traffic-eng level-1
segment-routing mpls
segment-routing global-block 16000 23999
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.11.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.12.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
isis prefix-sid index 20
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
bfd
#
sbfd
reflector discriminator 3.3.3.9
#
mpls lsr-id 3.3.3.9
#
mpls
#
segment-routing
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
traffic-eng level-1
segment-routing mpls
segment-routing global-block 16000 23999
bgp-ls enable level-1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.14.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.12.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
isis prefix-sid index 30
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
peer 10.4.1.2 as-number 100
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
link-state-family unicast
peer 10.4.1.2 enable
#
ipv4-family sr-policy
peer 10.4.1.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
peer 1.1.1.9 route-policy color200 import
#
ipv4-family vpn-instance vpna
peer 10.2.1.1 as-number 65420
#
route-policy color200 permit node 1
apply extcommunity color 0:200
#
tunnel-policy p1
tunnel select-seq sr-te-policy load-balance-number 1 unmix
#
return
● P2 configuration file
#
sysname P2
#
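In the configuration files above, each node advertises an IS-IS prefix SID index for its loopback address (index 10 on PE1, 20 on P1, 30 on PE2) against the SRGB 16000-23999. The label a node installs for a prefix SID is the SRGB base plus the configured index; a minimal sketch of that derivation:

```python
# Sketch: deriving SR-MPLS labels from the SRGB configured above
# (segment-routing global-block 16000 23999) and the prefix SID indexes.
SRGB_BASE, SRGB_END = 16000, 23999

def prefix_sid_label(index):
    """Label = SRGB base + index; the result must stay inside the SRGB."""
    label = SRGB_BASE + index
    if not SRGB_BASE <= label <= SRGB_END:
        raise ValueError(f"index {index} falls outside the SRGB")
    return label

indexes = {"PE1": 10, "P1": 20, "PE2": 30}   # isis prefix-sid index ...
labels = {node: prefix_sid_label(i) for node, i in indexes.items()}
print(labels)  # {'PE1': 16010, 'P1': 16020, 'PE2': 16030}
```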
Networking Requirements
On the network shown in Figure 1-137:
● CE1 and CE2 belong to a VPN instance named vpna.
● The VPN target used by vpna is 111:1.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
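The VPN target (111:1 in this example) controls route import between the PEs: a received route is installed into a local VPN instance only if one of the route targets it carries matches an import target of that instance. An illustrative sketch of this check (plain Python, not device code):

```python
# Sketch (illustrative only): VPN-target based route import. A received
# VPN route is imported into a local VPN instance if any route target it
# carries matches one of the instance's import targets.
def import_route(route_rts, import_rts):
    return bool(set(route_rts) & set(import_rts))

# Both PEs configure "vpn-target 111:1" for vpna in both directions, so
# routes exported by the remote PE with RT 111:1 are imported locally.
print(import_route({"111:1"}, {"111:1"}))   # True
print(import_route({"222:2"}, {"111:1"}))   # False
```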
Precautions
If an interface connecting a PE to a CE is bound to a VPN instance, Layer 3
configurations, such as the IPv6 address and routing protocol configuration, on the
interface will be deleted. Reconfigure them if needed.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of PEs and Ps
● VPN target and RD of vpna
Procedure
Step 1 Configure interface IP addresses for each device on the backbone network.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
Step 2 Configure an IGP for each device on the backbone network to implement
interworking between PEs and Ps. In this example, the IGP is IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] commit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] isis enable 1
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] isis enable 1
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] commit
[~P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 3 Configure basic MPLS functions for each device on the backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] quit
[*P2] commit
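Each adjacency SID configured above is a label that is valid on one node and forces the packet out over one specific link. The segment lists used later in this example therefore spell out explicit paths hop by hop. The following sketch (plain Python, illustrative only) resolves the two segment lists against the adjacency SIDs configured on P1, PE2, and P2 above, plus those on PE1 shown in its configuration file in this section:

```python
# (node, label) -> (link, next node), mirroring the "ipv4 adjacency"
# commands in this example.
ADJ = {
    ("PE1", 330000): ("10.11.1.1->10.11.1.2", "P1"),
    ("PE1", 330001): ("10.13.1.1->10.13.1.2", "P2"),
    ("P1", 330003):  ("10.11.1.2->10.11.1.1", "PE1"),
    ("P1", 330002):  ("10.12.1.1->10.12.1.2", "PE2"),
    ("P2", 330002):  ("10.13.1.2->10.13.1.1", "PE1"),
    ("P2", 330003):  ("10.14.1.1->10.14.1.2", "PE2"),
    ("PE2", 330000): ("10.12.1.2->10.12.1.1", "P1"),
    ("PE2", 330001): ("10.14.1.2->10.14.1.1", "P2"),
}

def resolve(start, label_stack):
    """Pop each adjacency SID in turn and follow the link it selects."""
    node, path = start, [start]
    for label in label_stack:
        _link, node = ADJ[(node, label)]
        path.append(node)
    return path

# segment-list pe1 on PE1: primary path via P1.
print(resolve("PE1", [330000, 330002]))   # ['PE1', 'P1', 'PE2']
# segment-list pe1backup on PE1: backup path via P2.
print(resolve("PE1", [330001, 330003]))   # ['PE1', 'P2', 'PE2']
```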
# Configure PE2.
[~PE2] segment-routing
[~PE2-segment-routing] segment-list pe2
[*PE2-segment-routing-segment-list] index 10 sid label 330000
[*PE2-segment-routing-segment-list] index 20 sid label 330003
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] segment-list pe2backup
[*PE2-segment-routing-segment-list] index 10 sid label 330001
[*PE2-segment-routing-segment-list] index 20 sid label 330002
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
[*PE2-segment-routing-te-policy] binding-sid 115
[*PE2-segment-routing-te-policy] mtu 1000
[*PE2-segment-routing-te-policy] candidate-path preference 100
[*PE2-segment-routing-te-policy-path] segment-list pe2backup
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] candidate-path preference 200
[*PE2-segment-routing-te-policy-path] segment-list pe2
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] quit
[*PE2-segment-routing] quit
[*PE2] commit
After completing the configuration, run the display sr-te policy command to
check SR-MPLS TE Policy information. The following example uses the command
output on PE1.
[~PE1] display sr-te policy
PolicyName : policy100
Endpoint : 3.3.3.9 Color : 100
TunnelId :1 TunnelType : SR-TE Policy
Binding SID : 115 MTU : 1000
Policy State : UP BFD : Disable
Admin State : UP DiffServ-Mode :-
Traffic Statistics : Disable Backup Hot-Standby : Disable
Candidate-path Count : 2
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] sbfd
[*PE2-sbfd] reflector discriminator 3.3.3.9
[*PE2-sbfd] quit
[*PE2] segment-routing
[*PE2-segment-routing] sr-te-policy seamless-bfd enable
[*PE2-segment-routing] sr-te-policy backup hot-standby enable
[*PE2-segment-routing] commit
[~PE2-segment-routing] quit
# Configure PE2.
[~PE2] route-policy color200 permit node 1
[*PE2-route-policy] apply extcommunity color 0:200
[*PE2-route-policy] quit
[*PE2] commit
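The route-policy above attaches a Color Extended Community ("0:200" is CO-bits:color-value) to imported routes. As a point of reference, the on-the-wire encoding of this community defined in RFC 9012 can be sketched as follows (illustrative only; the function name is invented):

```python
import struct

# Color Extended Community per RFC 9012: type 0x03 (transitive opaque),
# sub-type 0x0B, a 2-byte flags field carrying the CO bits in its two
# most significant bits, then a 4-byte color value.
def color_extcomm(co_bits, color):
    return struct.pack("!BBHI", 0x03, 0x0B, co_bits << 14, color)

# "apply extcommunity color 0:200" -> CO bits 0, color value 200.
ec = color_extcomm(0, 200)
print(ec.hex())   # 030b0000000000c8
```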
Step 8 Establish an MP-IBGP peer relationship between PEs, apply an import route-policy
to a specified VPNv6 peer on each PE, and set the Color Extended Community.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 3.3.3.9 as-number 100
[*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
[*PE1-bgp] ipv6-family vpnv6
[*PE1-bgp-af-vpnv6] peer 3.3.3.9 enable
[*PE1-bgp-af-vpnv6] peer 3.3.3.9 route-policy color100 import
[*PE1-bgp-af-vpnv6] commit
[~PE1-bgp-af-vpnv6] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] ipv6-family vpnv6
[*PE2-bgp-af-vpnv6] peer 1.1.1.9 enable
[*PE2-bgp-af-vpnv6] peer 1.1.1.9 route-policy color200 import
[*PE2-bgp-af-vpnv6] commit
[~PE2-bgp-af-vpnv6] quit
[~PE2-bgp] quit
After completing the configuration, run the display bgp peer or display bgp
vpnv6 all peer command on each PE. The following example uses the command
output on PE1. The command output shows that a BGP peer relationship has been
established between the PEs and is in the Established state.
[~PE1] display bgp peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 2 6 0 00:00:12 Established 0
[~PE1] display bgp vpnv6 all peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 12 18 0 00:09:38 Established 0
Step 9 Create a VPN instance and enable the IPv6 address family on each PE. Then, bind
each PE's interface connecting the PE to a CE to the corresponding VPN instance.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv6-family
[*PE1-vpn-instance-vpna-af-ipv6] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv6] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv6] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ipv6 enable
[*PE1-GigabitEthernet2/0/0] ipv6 address 2001:db8::1:2 96
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv6-family
[*PE2-vpn-instance-vpna-af-ipv6] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv6] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv6] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ipv6 enable
[*PE2-GigabitEthernet2/0/0] ipv6 address 2001:db8::2:2 96
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
If a PE has multiple interfaces bound to the same VPN instance, you must specify a source IPv6 address by specifying -a source-ipv6-address in the ping ipv6 vpn-instance vpn-instance-name -a source-ipv6-address dest-ipv6-address command to ping the CE connected to the remote PE. Otherwise, the ping operation fails.
Step 10 Configure a tunnel selection policy on each PE for the specified SR-MPLS TE Policy
to be preferentially selected.
# Configure PE1.
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv6-family
[*PE1-vpn-instance-vpna-af-ipv6] tnl-policy p1
[*PE1-vpn-instance-vpna-af-ipv6] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv6-family
[*PE2-vpn-instance-vpna-af-ipv6] tnl-policy p1
[*PE2-vpn-instance-vpna-af-ipv6] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] commit
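The tunnel policy above makes VPN routes recurse preferentially to SR-MPLS TE Policies: tunnel types earlier in the select-seq win, load-balance-number caps how many tunnels are used, and unmix keeps the selection to a single tunnel type. A simplified model of that selection (illustrative only; names are invented):

```python
# Sketch of "tunnel select-seq sr-te-policy load-balance-number 1 unmix":
# walk the type sequence in order; the first type with available tunnels
# is used exclusively ("unmix"), capped at load_balance_number tunnels.
def select_tunnels(available, select_seq, load_balance_number):
    for ttype in select_seq:
        matches = [t for t in available if t[0] == ttype]
        if matches:
            return matches[:load_balance_number]
    return []

available = [("sr-te-policy", "policy100"), ("ldp", "lsp-to-3.3.3.9")]
print(select_tunnels(available, ["sr-te-policy"], 1))
# [('sr-te-policy', 'policy100')]
```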
The configuration of CE2 is similar to the configuration of CE1. For configuration details,
see "Configuration Files" in this section.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv6-family vpn-instance vpna
[*PE1-bgp-6-vpna] peer 2001:db8::1:1 as-number 65410
[*PE1-bgp-6-vpna] commit
[*PE1-bgp-6-vpna] quit
[*PE1-bgp] quit
The configuration of PE2 is similar to the configuration of PE1. For configuration details,
see "Configuration Files" in this section.
After completing the configuration, run the display bgp vpnv6 vpn-instance peer
command on each PE. The following example uses the peer relationship between
PE1 and CE1. The command output shows that a BGP peer relationship has been
established between the PE and CE and is in the Established state.
[~PE1] display bgp vpnv6 vpn-instance vpna peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
You can also check VPN route information on each PE to verify that the VPN route has been successfully recursed to the specified SR-MPLS TE Policy.
CEs in the same VPN can ping each other. For example, CE1 can ping CE2 at
2001:db8::22:2.
[~CE1] ping ipv6 -a 2001:db8::11:1 2001:db8::22:2
PING 2001:db8::22:2 : 56 data bytes, press CTRL_C to break
Reply from 2001:db8::22:2
bytes=56 Sequence=1 hop limit=62 time = 170 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv6-family
route-distinguisher 100:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
bfd
#
sbfd
reflector discriminator 1.1.1.9
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
segment-list pe1
index 10 sid label 330000
index 20 sid label 330002
segment-list pe1backup
index 10 sid label 330001
index 20 sid label 330003
sr-te policy policy100 endpoint 3.3.3.9 color 100
binding-sid 115
mtu 1000
candidate-path preference 100
segment-list pe1backup
candidate-path preference 200
segment-list pe1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.13.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ipv6 enable
ipv6 address 2001:db8::1:2 96
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.11.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv6-family vpnv6
policy vpn-target
peer 3.3.3.9 enable
peer 3.3.3.9 route-policy color100 import
#
ipv6-family vpn-instance vpna
peer 2001:db8::1:1 as-number 65410
#
route-policy color100 permit node 1
apply extcommunity color 0:100
#
tunnel-policy p1
tunnel select-seq sr-te-policy load-balance-number 1 unmix
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.11.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.12.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpna
ipv6-family
route-distinguisher 200:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
bfd
#
sbfd
reflector discriminator 3.3.3.9
#
mpls lsr-id 3.3.3.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
segment-list pe2
index 10 sid label 330000
index 20 sid label 330003
segment-list pe2backup
index 10 sid label 330001
index 20 sid label 330002
sr-te policy policy200 endpoint 1.1.1.9 color 200
binding-sid 115
mtu 1000
candidate-path preference 100
segment-list pe2backup
candidate-path preference 200
segment-list pe2
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.14.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ipv6 enable
ipv6 address 2001:db8::2:2 96
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.12.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
bgp 100
● CE1 configuration file
#
 sysname CE1
#
bgp 65410
router-id 10.10.10.10
peer 2001:db8::1:2 as-number 100
#
ipv6-family unicast
undo synchronization
peer 2001:db8::1:2 enable
network 2001:db8::11:1 128
#
return
Networking Requirements
When Internet users access the Internet through a carrier network that performs IP forwarding, core devices on the forwarding path have to learn a large number of Internet routes. This imposes a heavy load on the core devices and degrades their performance. To solve this problem, configure the corresponding access device to recurse non-labeled public routes to SR-MPLS TE Policies, so that packets are forwarded over the SR-MPLS TE Policies.
Figure 1-138 shows an example for configuring non-labeled public BGP routes to
be recursed to manually configured SR-MPLS TE Policies.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When creating a peer, if the IP address of the peer is a loopback interface address
or a sub-interface address, you need to run the peer connect-interface command
on both ends of the peer relationship to ensure that the two ends are correctly
connected.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure interface IP addresses for each device on the backbone network.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 2 Configure an IGP for each device on the backbone network to implement
interworking between PEs and Ps. In this example, the IGP is IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] commit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] isis enable 1
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] isis enable 1
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] commit
[~P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 3 Configure basic MPLS functions for each device on the backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] quit
[*P2] commit
# Configure PE2.
[~PE2] segment-routing
[~PE2-segment-routing] segment-list pe2
[*PE2-segment-routing-segment-list] index 10 sid label 330000
[*PE2-segment-routing-segment-list] index 20 sid label 330003
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] segment-list pe2backup
[*PE2-segment-routing-segment-list] index 10 sid label 330001
[*PE2-segment-routing-segment-list] index 20 sid label 330002
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
[*PE2-segment-routing-te-policy] binding-sid 115
[*PE2-segment-routing-te-policy] mtu 1000
[*PE2-segment-routing-te-policy] candidate-path preference 100
[*PE2-segment-routing-te-policy-path] segment-list pe2backup
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] candidate-path preference 200
[*PE2-segment-routing-te-policy-path] segment-list pe2
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] quit
[*PE2-segment-routing] quit
[*PE2] commit
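In the policy configured above, the valid candidate path with the highest preference becomes the primary path, and with sr-te-policy backup hot-standby enable the next-best path is kept up as a hot-standby backup. A minimal sketch of this selection (plain Python, illustrative only):

```python
# Candidate paths of policy200: preference -> segment list.
paths = {200: "pe2", 100: "pe2backup"}

ordered = sorted(paths, reverse=True)            # highest preference first
primary = paths[ordered[0]]                      # primary path
backup = paths[ordered[1]] if len(ordered) > 1 else None  # hot-standby
print(primary, backup)   # pe2 pe2backup
```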
After completing the configuration, run the display sr-te policy command on each PE to check SR-MPLS TE Policy information.
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] sbfd
[*PE2-sbfd] reflector discriminator 3.3.3.9
[*PE2-sbfd] quit
[*PE2] segment-routing
[*PE2-segment-routing] sr-te-policy seamless-bfd enable
[*PE2-segment-routing] sr-te-policy backup hot-standby enable
[*PE2-segment-routing] commit
[~PE2-segment-routing] quit
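In the SBFD relationship configured above, each PE acts as an initiator for its own SR-MPLS TE Policy and as a reflector for the remote PE: the initiator sends SBFD packets carrying the remote discriminator it expects, and the reflector answers only if that value matches its locally configured reflector discriminator. A minimal sketch of that check (illustrative only; names are invented):

```python
# Sketch: an SBFD reflector loops a packet back only when the remote
# discriminator carried in the packet matches its configured value.
def reflector_responds(packet_remote_discr, local_discr):
    return packet_remote_discr == local_discr

PE1_REFLECTOR = "1.1.1.9"   # "reflector discriminator 1.1.1.9" on PE1
print(reflector_responds("1.1.1.9", PE1_REFLECTOR))   # True
print(reflector_responds("3.3.3.9", PE1_REFLECTOR))   # False
```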
[*PE1-route-policy] quit
[*PE1] commit
# Configure PE2.
[~PE2] route-policy color200 permit node 1
[*PE2-route-policy] apply extcommunity color 0:200
[*PE2-route-policy] quit
[*PE2] commit
Step 8 Establish an IBGP peer relationship between the PEs, apply an import route-policy to the specified peer on each PE, and set the Color Extended Community.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 3.3.3.9 as-number 100
[*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
[*PE1-bgp] peer 3.3.3.9 route-policy color100 import
[*PE1-bgp] commit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] peer 1.1.1.9 route-policy color200 import
[*PE2-bgp] commit
[~PE2-bgp] quit
After completing the configuration, run the display bgp peer command on each
PE. The following example uses the command output on PE1. The command
output shows that a BGP peer relationship has been established between the PEs
and is in the Established state.
[~PE1] display bgp peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
3.3.3.9 4 100 2 6 0 00:00:12 Established 0
Step 9 Configure a tunnel selection policy on each PE for the specified SR-MPLS TE Policy
to be preferentially selected.
# Configure PE1.
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
[~PE1] route recursive-lookup tunnel tunnel-policy p1
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
[~PE2] route recursive-lookup tunnel tunnel-policy p1
[*PE2] commit
# Configure PE2.
[~PE2] interface loopback 2
[*PE2-LoopBack2] ip address 9.9.9.9 32
[*PE2-LoopBack2] quit
[*PE2] bgp 100
[*PE2-bgp] network 9.9.9.9 32
[*PE2-bgp] quit
[*PE2] commit
After completing the configuration, check detailed information about the route to 9.9.9.9/32 on PE1. The command output shows that the non-labeled public route has been successfully recursed to the specified SR-MPLS TE Policy.
Destination: 9.9.9.9/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 3.3.3.9 Neighbour: 3.3.3.9
State: Active Adv Relied Age: 00h19m06s
Tag: 0 Priority: low
Label: NULL QoSInfo: 0x0
IndirectID: 0x10000C7 Instance:
RelayNextHop: 0.0.0.0 Interface: policy100
TunnelID: 0x000000003200000001 Flags: RD
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
bfd
#
sbfd
reflector discriminator 1.1.1.9
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
segment-list pe1
index 10 sid label 330000
index 20 sid label 330002
segment-list pe1backup
index 10 sid label 330001
index 20 sid label 330003
sr-te policy policy100 endpoint 3.3.3.9 color 100
binding-sid 115
mtu 1000
candidate-path preference 100
segment-list pe1backup
candidate-path preference 200
segment-list pe1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.13.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.11.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
interface LoopBack2
ip address 8.8.8.8 255.255.255.255
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
network 8.8.8.8 255.255.255.255
peer 3.3.3.9 enable
peer 3.3.3.9 route-policy color100 import
#
route-policy color100 permit node 1
apply extcommunity color 0:100
#
route recursive-lookup tunnel tunnel-policy p1
#
tunnel-policy p1
tunnel select-seq sr-te-policy load-balance-number 1 unmix
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.11.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.12.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
bfd
#
sbfd
reflector discriminator 3.3.3.9
#
mpls lsr-id 3.3.3.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
segment-list pe2
index 10 sid label 330000
index 20 sid label 330003
segment-list pe2backup
index 10 sid label 330001
index 20 sid label 330002
sr-te policy policy200 endpoint 1.1.1.9 color 200
binding-sid 115
mtu 1000
candidate-path preference 100
segment-list pe2backup
candidate-path preference 200
segment-list pe2
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.14.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.12.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
interface LoopBack2
ip address 9.9.9.9 255.255.255.255
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
network 9.9.9.9 255.255.255.255
peer 1.1.1.9 enable
peer 1.1.1.9 route-policy color200 import
#
route-policy color200 permit node 1
apply extcommunity color 0:200
#
route recursive-lookup tunnel tunnel-policy p1
#
tunnel-policy p1
tunnel select-seq sr-te-policy load-balance-number 1 unmix
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.13.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.14.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
return
Networking Requirements
If an SR network is deployed between two discontinuous IPv6 networks, you can
recurse BGP4+ 6PE routes to SR-MPLS TE Policies so that the IPv6 networks can
communicate with each other.
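Route recursion in this model is driven by the Color Extended Community: a 6PE route recurses to an SR-MPLS TE Policy whose color matches the route's color and whose endpoint matches the route's BGP next hop. The following sketch pulls together the three pieces of configuration involved, using the values defined later in this example (color 0:100, endpoint 3.3.3.9, and tunnel policy p1):
route-policy color100 permit node 1
apply extcommunity color 0:100
#
sr-te policy policy100 endpoint 3.3.3.9 color 100
#
tunnel-policy p1
tunnel select-seq sr-te-policy load-balance-number 1 unmix
A 6PE route imported with color 0:100 and next hop 3.3.3.9 is therefore steered into policy100, provided that the tunnel policy prefers SR-MPLS TE Policies during recursion.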
As shown in Figure 1-139, there is no direct link between the IPv6 network where
CE1 resides and the IPv6 network where CE2 resides. To allow CE1 and CE2 to
communicate with each other through the SR network, establish a 6PE peer
relationship between PE1 and PE2. 6PE peers use MP-BGP to send IPv6 routes
learned from their respective CEs and use SR-MPLS TE Policies to forward IPv6
traffic.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
In the BGP4+ 6PE scenario, no VPN instance needs to be configured.
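Because 6PE exchanges labeled IPv6 routes in the public BGP IPv6 unicast address family, the per-PE BGP configuration reduces to enabling the peer and its labeled-route capability in that address family, as this example does on PE1:
bgp 100
ipv6-family unicast
peer 3.3.3.9 enable
peer 3.3.3.9 label-route-capability
No ip vpn-instance or VPN-target configuration is required on either PE.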
Configuration Roadmap
The configuration roadmap is as follows:
3. Configure an SR-MPLS TE Policy with primary and backup paths on each PE.
4. Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.
5. Establish an MP-IBGP peer relationship between PEs and enable them to
advertise labeled routes.
6. Apply an import or export route-policy to a specified BGP4+ peer on each PE,
and set the Color Extended Community. In this example, an import route-
policy with the Color Extended Community is applied.
7. Configure a tunnel selection policy on each PE.
8. Establish an EBGP peer relationship between each CE-PE pair for the CE and
PE to exchange routing information.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of PEs and Ps
Procedure
Step 1 Configure interface IP addresses for each device on the backbone network.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.13.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip address 10.11.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.14.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] ip address 10.12.1.2 24
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 2 Configure an IGP for each device on the backbone network to implement
interworking between PEs and Ps. In this example, the IGP is IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] commit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] isis enable 1
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] isis enable 1
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] commit
[~P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 3 Configure basic MPLS functions for each device on the backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
Step 4 Enable SR and configure static adjacency SIDs for each device on the backbone
network.
# Configure PE1.
[~PE1] segment-routing
[*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
[*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
[*PE1-segment-routing] quit
[*PE1] commit
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] quit
[*PE1] commit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] quit
[*P2] commit
Step 5 Configure an SR-MPLS TE Policy with primary and backup paths on each PE.
# Configure PE1.
[~PE1] segment-routing
[~PE1-segment-routing] segment-list pe1
[*PE1-segment-routing-segment-list] index 10 sid label 330000
[*PE1-segment-routing-segment-list] index 20 sid label 330002
[*PE1-segment-routing-segment-list] quit
[*PE1-segment-routing] segment-list pe1backup
[*PE1-segment-routing-segment-list] index 10 sid label 330001
[*PE1-segment-routing-segment-list] index 20 sid label 330003
[*PE1-segment-routing-segment-list] quit
[*PE1-segment-routing] sr-te policy policy100 endpoint 3.3.3.9 color 100
[*PE1-segment-routing-te-policy] binding-sid 115
[*PE1-segment-routing-te-policy] mtu 1000
[*PE1-segment-routing-te-policy] candidate-path preference 100
[*PE1-segment-routing-te-policy-path] segment-list pe1backup
[*PE1-segment-routing-te-policy-path] quit
[*PE1-segment-routing-te-policy] candidate-path preference 200
[*PE1-segment-routing-te-policy-path] segment-list pe1
[*PE1-segment-routing-te-policy-path] quit
[*PE1-segment-routing-te-policy] quit
[*PE1-segment-routing] quit
[*PE1] commit
# Configure PE2.
[~PE2] segment-routing
[~PE2-segment-routing] segment-list pe2
[*PE2-segment-routing-segment-list] index 10 sid label 330000
[*PE2-segment-routing-segment-list] index 20 sid label 330003
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] segment-list pe2backup
[*PE2-segment-routing-segment-list] index 10 sid label 330001
[*PE2-segment-routing-segment-list] index 20 sid label 330002
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
[*PE2-segment-routing-te-policy] binding-sid 115
[*PE2-segment-routing-te-policy] mtu 1000
[*PE2-segment-routing-te-policy] candidate-path preference 100
[*PE2-segment-routing-te-policy-path] segment-list pe2backup
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] candidate-path preference 200
[*PE2-segment-routing-te-policy-path] segment-list pe2
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] quit
[*PE2-segment-routing] quit
[*PE2] commit
After completing the configuration, run the display sr-te policy command to
check SR-MPLS TE Policy information. The following example uses the command
output on PE1.
[~PE1] display sr-te policy
PolicyName : policy100
Endpoint : 3.3.3.9 Color : 100
TunnelId :1 TunnelType : SR-TE Policy
Binding SID : 115 MTU : 1000
Policy State : UP BFD : Disable
Admin State : UP DiffServ-Mode :-
Traffic Statistics : Disable Backup Hot-Standby : Disable
Candidate-path Count : 2
Step 6 Configure SBFD and HSB on each PE to enhance SR-MPLS TE Policy reliability.
# Configure PE1.
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] sbfd
[*PE1-sbfd] reflector discriminator 1.1.1.9
[*PE1-sbfd] quit
[*PE1] segment-routing
[*PE1-segment-routing] sr-te-policy seamless-bfd enable
[*PE1-segment-routing] sr-te-policy backup hot-standby enable
[*PE1-segment-routing] commit
[~PE1-segment-routing] quit
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] sbfd
[*PE2-sbfd] reflector discriminator 3.3.3.9
[*PE2-sbfd] quit
[*PE2] segment-routing
[*PE2-segment-routing] sr-te-policy seamless-bfd enable
[*PE2-segment-routing] sr-te-policy backup hot-standby enable
[*PE2-segment-routing] commit
[~PE2-segment-routing] quit
Step 7 Configure a route-policy with the Color Extended Community on each PE.
# Configure PE1.
[~PE1] route-policy color100 permit node 1
[*PE1-route-policy] apply extcommunity color 0:100
[*PE1-route-policy] quit
[*PE1] commit
# Configure PE2.
[~PE2] route-policy color200 permit node 1
[*PE2-route-policy] apply extcommunity color 0:200
[*PE2-route-policy] quit
[*PE2] commit
Step 8 Establish an MP-IBGP peer relationship between PEs and enable them to advertise
labeled routes. Apply an import route-policy to a specified BGP4+ peer on each PE,
and set the Color Extended Community.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 3.3.3.9 as-number 100
[*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
[*PE1-bgp] ipv6-family unicast
[*PE1-bgp-af-ipv6] peer 3.3.3.9 enable
[*PE1-bgp-af-ipv6] peer 3.3.3.9 label-route-capability
[*PE1-bgp-af-ipv6] peer 3.3.3.9 route-policy color100 import
[*PE1-bgp-af-ipv6] commit
[~PE1-bgp-af-ipv6] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] ipv6-family unicast
[*PE2-bgp-af-ipv6] peer 1.1.1.9 enable
[*PE2-bgp-af-ipv6] peer 1.1.1.9 label-route-capability
[*PE2-bgp-af-ipv6] peer 1.1.1.9 route-policy color200 import
[*PE2-bgp-af-ipv6] commit
[~PE2-bgp-af-ipv6] quit
[~PE2-bgp] quit
After completing the configuration, run the display bgp ipv6 peer command on
each PE. The following example uses the command output on PE1. The command
output shows that a BGP peer relationship has been established between the PEs
and is in the Established state.
[~PE1] display bgp ipv6 peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Step 9 Configure a tunnel selection policy on each PE for the specified SR-MPLS TE Policy
to be preferentially selected.
# Configure PE1.
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE1-tunnel-policy-p1] quit
[*PE1] commit
[~PE1] bgp 100
[*PE1-bgp] ipv6-family unicast
[*PE1-bgp-af-ipv6] peer 3.3.3.9 tnl-policy p1
[*PE1-bgp-af-ipv6] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE2-tunnel-policy-p1] quit
[*PE2] commit
[~PE2] bgp 100
[*PE2-bgp] ipv6-family unicast
[*PE2-bgp-af-ipv6] peer 1.1.1.9 tnl-policy p1
[*PE2-bgp-af-ipv6] quit
[*PE2-bgp] quit
[*PE2] commit
Step 10 Establish an EBGP peer relationship between each CE-PE pair for the CE and PE
to exchange routing information.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ipv6 enable
[*CE1-LoopBack1] ipv6 address 2001:db8::11:1 128
[*CE1-LoopBack1] quit
[*CE1] interface gigabitethernet1/0/0
[*CE1-GigabitEthernet1/0/0] ipv6 enable
[*CE1-GigabitEthernet1/0/0] ipv6 address 2001:db8::1:1 96
[*CE1-GigabitEthernet1/0/0] quit
[*CE1] bgp 65410
[*CE1-bgp] router-id 10.10.10.10
[*CE1-bgp] peer 2001:db8::1:2 as-number 100
[*CE1-bgp] ipv6-family
[*CE1-bgp-af-ipv6] peer 2001:db8::1:2 enable
[*CE1-bgp-af-ipv6] network 2001:db8::11:1 128
[*CE1-bgp-af-ipv6] quit
[*CE1-bgp] quit
[*CE1] commit
The configuration of CE2 is similar to the configuration of CE1. For configuration details,
see "Configuration Files" in this section.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 2001:db8::1:1 as-number 65410
[*PE1-bgp] ipv6-family unicast
[*PE1-bgp-af-ipv6] peer 2001:db8::1:1 enable
[*PE1-bgp-af-ipv6] quit
[*PE1-bgp] quit
[*PE1] commit
The configuration of PE2 is similar to the configuration of PE1. For configuration details,
see "Configuration Files" in this section.
After completing the configuration, run the display bgp ipv6 peer command on
each PE. The following example uses the peer relationship between PE1 and CE1.
The command output shows that a BGP4+ peer relationship has been established
between the PE and CE and is in the Established state.
[~PE1] display bgp ipv6 peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 2 Peers in established state : 2
Run the display bgp ipv6 routing-table command on each PE to check BGP4+ route
information. The command output shows that the BGP4+ route has been successfully
recursed to the specified SR-MPLS TE Policy.
CEs can ping each other. For example, CE1 can ping CE2 at 2001:db8::22:2.
[~CE1] ping ipv6 -a 2001:db8::11:1 2001:db8::22:2
PING 2001:db8::22:2 : 56 data bytes, press CTRL_C to break
Reply from 2001:db8::22:2
bytes=56 Sequence=1 hop limit=62 time = 170 ms
Reply from 2001:db8::22:2
bytes=56 Sequence=2 hop limit=62 time = 140 ms
Reply from 2001:db8::22:2
bytes=56 Sequence=3 hop limit=62 time = 150 ms
Reply from 2001:db8::22:2
bytes=56 Sequence=4 hop limit=62 time = 140 ms
Reply from 2001:db8::22:2
bytes=56 Sequence=5 hop limit=62 time = 170 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
bfd
#
sbfd
reflector discriminator 1.1.1.9
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
segment-list pe1
index 10 sid label 330000
index 20 sid label 330002
segment-list pe1backup
index 10 sid label 330001
index 20 sid label 330003
sr-te policy policy100 endpoint 3.3.3.9 color 100
binding-sid 115
mtu 1000
candidate-path preference 100
segment-list pe1backup
candidate-path preference 200
segment-list pe1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.13.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:db8::1:2 96
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.11.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
peer 2001:db8::1:1 as-number 65410
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv6-family unicast
undo synchronization
peer 3.3.3.9 enable
peer 3.3.3.9 route-policy color100 import
peer 3.3.3.9 label-route-capability
peer 3.3.3.9 tnl-policy p1
peer 2001:db8::1:1 enable
#
route-policy color100 permit node 1
apply extcommunity color 0:100
#
tunnel-policy p1
tunnel select-seq sr-te-policy load-balance-number 1 unmix
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.11.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.12.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
bfd
#
sbfd
reflector discriminator 3.3.3.9
#
mpls lsr-id 3.3.3.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
segment-list pe2
index 10 sid label 330000
index 20 sid label 330003
segment-list pe2backup
index 10 sid label 330001
index 20 sid label 330002
sr-te policy policy200 endpoint 1.1.1.9 color 200
binding-sid 115
mtu 1000
candidate-path preference 100
segment-list pe2backup
candidate-path preference 200
segment-list pe2
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.14.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:db8::2:2 96
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.12.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
peer 2001:db8::2:1 as-number 65420
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv6-family unicast
undo synchronization
peer 1.1.1.9 enable
peer 1.1.1.9 route-policy color200 import
peer 1.1.1.9 label-route-capability
peer 1.1.1.9 tnl-policy p1
peer 2001:db8::2:1 enable
#
route-policy color200 permit node 1
apply extcommunity color 0:200
#
tunnel-policy p1
tunnel select-seq sr-te-policy load-balance-number 1 unmix
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.13.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.14.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:db8::1:1 96
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:db8::11:1 128
#
bgp 65410
router-id 10.10.10.10
peer 2001:db8::1:2 as-number 100
#
ipv6-family unicast
undo synchronization
peer 2001:db8::1:2 enable
network 2001:db8::11:1 128
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:db8::2:1 96
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:db8::22:2 128
#
bgp 65420
router-id 10.20.20.20
peer 2001:db8::2:2 as-number 100
#
ipv6-family unicast
undo synchronization
peer 2001:db8::2:2 enable
network 2001:db8::22:2 128
#
return
Networking Requirements
On the network shown in Figure 1-140, PE1, P1, P2, and PE2 are in the same AS
and run IS-IS to implement networking. An SR-MPLS TE Policy is configured on
PE1 and PE2 to carry EVPN services over an SR path.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Configuration Notes
When configuring EVPN route recursion based on an SR-MPLS TE Policy, note the
following:
Configuration Roadmap
The configuration roadmap is as follows:
3. Enable MPLS and SR for each device on the backbone network, and configure
static adjacency SIDs.
4. Configure an SR-MPLS TE Policy on PEs, and configure path information for
the SR-MPLS TE Policy.
5. Configure SBFD and HSB on the PE to enhance the reliability of the SR-MPLS
TE Policy.
6. Establish BGP EVPN peer relationships between PEs, apply an import route-
policy, and add the extended community attribute Color to routes.
7. Configure MP-IBGP on PEs to exchange VPN routing information.
8. Configure IPv4 address family VPN instances on PEs and bind each interface
that connects a PE to a CE to a VPN instance.
9. Configure a tunnel policy on each PE.
10. Configure EBGP between CEs and PEs to exchange VPN routing information.
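A detail worth noting in this roadmap: the import route-policy assigns different colors to different EVPN route types, so MAC routes and IP prefix routes recurse to different SR-MPLS TE Policies. The key nodes, using the values configured later in this example, are:
route-policy color100 permit node 1
if-match route-type evpn mac
apply extcommunity color 0:100
route-policy color100 permit node 2
if-match route-type evpn prefix
apply extcommunity color 0:200
MAC routes (color 0:100) recurse to the SR-MPLS TE Policy with color 100 (policy100), while IP prefix routes (color 0:200) recurse to the policy with color 200 (policy200); a third permit node lets all other routes pass unchanged.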
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of PEs and Ps
● VPN targets and RDs of the EVPN instance named evrf1 and the L3VPN
instance named vpn1
Procedure
Step 1 Assign IP addresses to each device interface.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.11.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip address 10.13.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.11.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.12.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.12.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 10.14.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 10.13.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 10.14.1.1 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 2 Configure an IGP on each device so that PEs and Ps on the backbone network can
communicate with each other. IS-IS is used as an example.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] isis enable 1
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] isis enable 1
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 3 Configure basic MPLS functions for each device on the backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.9
[*PE1] mpls
[*PE1-mpls] commit
[~PE1-mpls] quit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
Step 4 Enable SR for each device on the backbone network, and configure static
adjacency SIDs.
# Configure PE1.
[~PE1] segment-routing
[*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
[*PE1-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
[*PE1-segment-routing] quit
[*PE1] commit
[~PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] quit
[*PE1] commit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.11.1.2 remote-ip-addr 10.11.1.1 sid 330003
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.1 remote-ip-addr 10.12.1.2 sid 330002
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.13.1.2 remote-ip-addr 10.13.1.1 sid 330002
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.14.1.1 remote-ip-addr 10.14.1.2 sid 330003
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] quit
[*P2] commit
Step 5 Configure an SR-MPLS TE Policy on each PE, and configure path information for
the SR-MPLS TE Policy.
# Configure PE1.
[~PE1] segment-routing
[~PE1-segment-routing] segment-list path1
[*PE1-segment-routing-segment-list] index 10 sid label 330000
[*PE1-segment-routing-segment-list] index 20 sid label 330002
[*PE1-segment-routing-segment-list] quit
[*PE1-segment-routing] segment-list path2
[*PE1-segment-routing-segment-list] index 10 sid label 330001
[*PE1-segment-routing-segment-list] index 20 sid label 330003
[*PE1-segment-routing-segment-list] quit
[*PE1-segment-routing] sr-te policy policy100 endpoint 3.3.3.9 color 100
[*PE1-segment-routing-te-policy] binding-sid 115
[*PE1-segment-routing-te-policy] mtu 1000
[*PE1-segment-routing-te-policy] candidate-path preference 100
[*PE1-segment-routing-te-policy-path] segment-list path1
[*PE1-segment-routing-te-policy-path] quit
[*PE1-segment-routing-te-policy] quit
[*PE1-segment-routing] sr-te policy policy200 endpoint 3.3.3.9 color 200
[*PE1-segment-routing-te-policy] binding-sid 215
[*PE1-segment-routing-te-policy] mtu 1000
[*PE1-segment-routing-te-policy] candidate-path preference 100
[*PE1-segment-routing-te-policy-path] segment-list path2
[*PE1-segment-routing-te-policy-path] quit
[*PE1-segment-routing-te-policy] quit
[*PE1-segment-routing] quit
[*PE1] commit
# Configure PE2.
[~PE2] segment-routing
[~PE2-segment-routing] segment-list path1
[*PE2-segment-routing-segment-list] index 10 sid label 330000
[*PE2-segment-routing-segment-list] index 20 sid label 330003
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] segment-list path2
[*PE2-segment-routing-segment-list] index 10 sid label 330001
[*PE2-segment-routing-segment-list] index 20 sid label 330002
[*PE2-segment-routing-segment-list] quit
[*PE2-segment-routing] sr-te policy policy100 endpoint 1.1.1.9 color 100
[*PE2-segment-routing-te-policy] binding-sid 115
[*PE2-segment-routing-te-policy] mtu 1000
[*PE2-segment-routing-te-policy] candidate-path preference 100
[*PE2-segment-routing-te-policy-path] segment-list path1
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] quit
[*PE2-segment-routing] sr-te policy policy200 endpoint 1.1.1.9 color 200
[*PE2-segment-routing-te-policy] binding-sid 215
[*PE2-segment-routing-te-policy] mtu 1000
[*PE2-segment-routing-te-policy] candidate-path preference 100
[*PE2-segment-routing-te-policy-path] segment-list path2
[*PE2-segment-routing-te-policy-path] quit
[*PE2-segment-routing-te-policy] quit
[*PE2-segment-routing] quit
[*PE2] commit
Step 6 Configure SBFD and HSB on each PE to enhance the reliability of the SR-MPLS
TE Policy.
# Configure PE1.
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] sbfd
[*PE1-sbfd] reflector discriminator 1.1.1.9
[*PE1-sbfd] quit
[*PE1] segment-routing
[*PE1-segment-routing] sr-te-policy seamless-bfd enable
[*PE1-segment-routing] sr-te-policy backup hot-standby enable
[*PE1-segment-routing] commit
[~PE1-segment-routing] quit
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] sbfd
[*PE2-sbfd] reflector discriminator 3.3.3.9
[*PE2-sbfd] quit
[*PE2] segment-routing
[*PE2-segment-routing] sr-te-policy seamless-bfd enable
[*PE2-segment-routing] sr-te-policy backup hot-standby enable
[*PE2-segment-routing] commit
[~PE2-segment-routing] quit
Step 7 Configure a route-policy with the Color Extended Community on each PE.
# Configure PE1.
[~PE1] route-policy color100 permit node 1
[*PE1-route-policy] if-match route-type evpn mac
[*PE1-route-policy] apply extcommunity color 0:100
[*PE1-route-policy] quit
[*PE1] route-policy color100 permit node 2
[*PE1-route-policy] if-match route-type evpn prefix
[*PE1-route-policy] apply extcommunity color 0:200
[*PE1-route-policy] quit
[*PE1] route-policy color100 permit node 3
[*PE1-route-policy] quit
[*PE1] commit
# Configure PE2.
[~PE2] route-policy color100 permit node 1
[*PE2-route-policy] if-match route-type evpn mac
[*PE2-route-policy] apply extcommunity color 0:100
[*PE2-route-policy] quit
[*PE2] route-policy color100 permit node 2
[*PE2-route-policy] if-match route-type evpn prefix
[*PE2-route-policy] apply extcommunity color 0:200
[*PE2-route-policy] quit
[*PE2] route-policy color100 permit node 3
[*PE2-route-policy] quit
[*PE2] commit
Step 8 Establish BGP EVPN peer relationships between PEs, apply an import route-policy,
and add the extended community attribute Color to routes.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 3.3.3.9 as-number 100
[*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 3.3.3.9 enable
[*PE1-bgp-af-evpn] peer 3.3.3.9 route-policy color100 import
[*PE1-bgp-af-evpn] peer 3.3.3.9 advertise irb
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1.1.1.9 as-number 100
[*PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 1.1.1.9 enable
[*PE2-bgp-af-evpn] peer 1.1.1.9 route-policy color100 import
[*PE2-bgp-af-evpn] peer 1.1.1.9 advertise irb
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
After completing the configurations, run the display bgp evpn peer command on
PEs to check that the BGP EVPN peer relationship has been established between
the PEs and is in the Established state. The following example uses the command
output on PE1.
[~PE1] display bgp evpn peer
BGP local router ID : 10.13.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Step 9 Configure EVPN instances and L3VPN instances on PEs, and connect sites to the
PEs.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 evpn
[*PE1-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 200:1
[*PE1-evpn-instance-evrf1] vpn-target 2:2
[*PE1-evpn-instance-evrf1] quit
[*PE1] evpn source-address 1.1.1.9
[*PE1] bridge-domain 10
[*PE1-bd10] evpn binding vpn-instance evrf1
[*PE1-bd10] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] esi 0011.1001.1001.1001.1001
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] interface gigabitethernet3/0/0.1 mode l2
[*PE1-GigabitEthernet3/0/0.1] encapsulation dot1q vid 10
[*PE1-GigabitEthernet3/0/0.1] rewrite pop single
[*PE1-GigabitEthernet3/0/0.1] bridge-domain 10
[*PE1-GigabitEthernet3/0/0.1] quit
[*PE1] interface Vbdif10
[*PE1-Vbdif10] ip binding vpn-instance vpn1
[*PE1-Vbdif10] ip address 10.1.1.1 255.255.255.0
[*PE1-Vbdif10] arp collect host enable
[*PE1-Vbdif10] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] import-route direct
[*PE1-bgp-vpn1] advertise l2vpn evpn
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] ipv4-family
[*PE2-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*PE2-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 evpn
[*PE2-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
[*PE2-vpn-instance-vpn1-af-ipv4] quit
[*PE2-vpn-instance-vpn1] quit
[*PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] route-distinguisher 200:1
[*PE2-evpn-instance-evrf1] vpn-target 2:2
[*PE2-evpn-instance-evrf1] quit
[*PE2] evpn source-address 3.3.3.9
[*PE2] bridge-domain 10
[*PE2-bd10] evpn binding vpn-instance evrf1
[*PE2-bd10] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] esi 0011.1001.1001.1001.1002
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] interface gigabitethernet3/0/0.1 mode l2
[*PE2-GigabitEthernet3/0/0.1] encapsulation dot1q vid 10
[*PE2-GigabitEthernet3/0/0.1] rewrite pop single
[*PE2-GigabitEthernet3/0/0.1] bridge-domain 10
[*PE2-GigabitEthernet3/0/0.1] quit
[*PE2] interface Vbdif10
[*PE2-Vbdif10] ip binding vpn-instance vpn1
[*PE2-Vbdif10] ip address 10.2.1.1 255.255.255.0
[*PE2-Vbdif10] arp collect host enable
[*PE2-Vbdif10] quit
[*PE2] bgp 100
[*PE2-bgp] ipv4-family vpn-instance vpn1
[*PE2-bgp-vpn1] import-route direct
[*PE2-bgp-vpn1] advertise l2vpn evpn
[*PE2-bgp-vpn1] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE2.
[~PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq sr-te-policy load-balance-number 1 unmix
[*PE2-tunnel-policy-p1] quit
[*PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] ipv4-family
[*PE2-vpn-instance-vpn1-af-ipv4] tnl-policy p1 evpn
[*PE2-vpn-instance-vpn1-af-ipv4] quit
[*PE2-vpn-instance-vpn1] quit
[*PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] tnl-policy p1
[*PE2-evpn-instance-evrf1] quit
[*PE2] commit
Run the display bgp evpn all routing-table prefix-route 0:10.2.1.0:24 command
on PE1 to view information about the IP prefix routes sent from PE2.
[~PE1] display bgp evpn all routing-table prefix-route 0:10.2.1.0:24
BGP local router ID : 10.13.1.1
Local AS number : 100
Total routes of Route Distinguisher(100:1): 1
BGP routing table entry information of 0:10.2.1.0:24:
Label information (Received/Applied): 330000/NULL
From: 3.3.3.9 (10.12.1.2)
Route Duration: 0d00h09m04s
Relay IP Nexthop: 10.11.1.2
Relay IP Out-Interface:GigabitEthernet1/0/0
Relay Tunnel Out-Interface:
Original nexthop: 3.3.3.9
Qos information : 0x0
Ext-Community: RT <1 : 1>, Color <0 : 200>
AS-path Nil, origin incomplete, MED 0, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20
Route Type: 5 (Ip Prefix Route)
Ethernet Tag ID: 0, IP Prefix/Len: 10.2.1.0/24, ESI: 0000.0000.0000.0000.0000, GW IP Address: 0.0.0.0
Not advertised to any peer yet
Run the display bgp evpn all routing-table mac-route command on PE1 to view
information about the MAC routes sent from PE2.
[~PE1] display bgp evpn all routing-table mac-route
Local AS number : 100
EVPN-Instance evrf1:
Number of Mac Routes: 4
Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr) NextHop
*>i 0:48:38ba-2e49-fa04:0:0.0.0.0 3.3.3.9
*>i 0:48:38ba-2e49-fa04:32:10.2.1.1 3.3.3.9
*> 0:48:38ba-d70d-3801:0:0.0.0.0 0.0.0.0
*> 0:48:38ba-d70d-3801:32:10.1.1.1 0.0.0.0
EVPN-Instance evrf1:
Number of Mac Routes: 1
BGP routing table entry information of 0:48:38ba-2e49-fa04:0:0.0.0.0:
Route Distinguisher: 200:1
Remote-Cross route
Label information (Received/Applied): 330000/NULL
From: 3.3.3.9 (10.12.1.2)
Route Duration: 2d02h23m40s
Relay Tunnel Out-Interface: policy100(srtepolicy)
Original nexthop: 3.3.3.9
Qos information : 0x0
Ext-Community: RT <2 : 2>, Color <0 : 100>, Mac Mobility <flag:1 seq:0 res:0>
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255
Route Type: 2 (MAC Advertisement Route)
Ethernet Tag ID: 0, MAC Address/Len: 38ba-2e49-fa04/48, IP Address/Len: 0.0.0.0/0, ESI:
0000.0000.0000.0000.0000
Not advertised to any peer yet
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 200:1
tnl-policy p1
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
tnl-policy p1 evpn
evpn mpls routing-enable
#
bfd
#
sbfd
reflector discriminator 1.1.1.9
#
mpls lsr-id 1.1.1.9
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
segment-routing
ipv4 adjacency local-ip-addr 10.11.1.1 remote-ip-addr 10.11.1.2 sid 330000
ipv4 adjacency local-ip-addr 10.13.1.1 remote-ip-addr 10.13.1.2 sid 330001
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
segment-list path1
index 10 sid label 330000
index 20 sid label 330002
segment-list path2
index 10 sid label 330001
index 20 sid label 330003
sr-te policy policy100 endpoint 3.3.3.9 color 100
binding-sid 115
mtu 1000
candidate-path preference 100
segment-list path1
sr-te policy policy200 endpoint 3.3.3.9 color 200
binding-sid 215
mtu 1000
candidate-path preference 100
segment-list path2
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.11.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.13.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet3/0/0
undo shutdown
esi 0011.1001.1001.1001.1001
#
interface GigabitEthernet3/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.13.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.14.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 200:1
tnl-policy p1
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
tnl-policy p1 evpn
evpn mpls routing-enable
#
bfd
#
sbfd
reflector discriminator 3.3.3.9
#
mpls lsr-id 3.3.3.9
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
segment-routing
ipv4 adjacency local-ip-addr 10.12.1.2 remote-ip-addr 10.12.1.1 sid 330000
ipv4 adjacency local-ip-addr 10.14.1.2 remote-ip-addr 10.14.1.1 sid 330001
sr-te-policy backup hot-standby enable
sr-te-policy seamless-bfd enable
segment-list path1
index 10 sid label 330000
index 20 sid label 330003
segment-list path2
index 10 sid label 330001
index 20 sid label 330002
sr-te policy policy100 endpoint 1.1.1.9 color 100
binding-sid 115
mtu 1000
candidate-path preference 100
segment-list path1
sr-te policy policy200 endpoint 1.1.1.9 color 200
binding-sid 215
mtu 1000
candidate-path preference 200
segment-list path2
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
segment-routing mpls
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.12.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.14.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet3/0/0
undo shutdown
esi 0011.1001.1001.1001.1002
#
interface GigabitEthernet3/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.9 enable
peer 1.1.1.9 route-policy color100 import
peer 1.1.1.9 advertise irb
#
route-policy color100 permit node 1
if-match route-type evpn mac
apply extcommunity color 0:100
#
route-policy color100 permit node 2
if-match route-type evpn prefix
apply extcommunity color 0:200
#
route-policy color100 permit node 3
#
tunnel-policy p1
tunnel select-seq sr-te-policy load-balance-number 1 unmix
#
evpn source-address 3.3.3.9
#
return
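In the configuration files above, the route-policy color100 attaches color 0:100 to EVPN MAC routes and 0:200 to IP prefix routes, and the tunnel policy p1 then steers each colored route into the SR-TE policy whose color and endpoint match the route's color and original next hop. A minimal sketch of that matching logic (illustrative only, not Huawei VRP code):

```python
# Illustrative model of color-based steering into SR-TE policies.
# Policies are keyed by (color, endpoint), mirroring
# "sr-te policy ... endpoint 3.3.3.9 color 100" in the configuration above.
SR_TE_POLICIES = {
    (100, "3.3.3.9"): "policy100",  # carries EVPN MAC routes (color 0:100)
    (200, "3.3.3.9"): "policy200",  # carries IP prefix routes (color 0:200)
}

def select_policy(route_color, next_hop):
    """Return the SR-TE policy a colored route resolves to, or None."""
    return SR_TE_POLICIES.get((route_color, next_hop))

# A MAC route colored 0:100 with next hop 3.3.3.9 resolves to policy100;
# an IP prefix route colored 0:200 resolves to policy200.
assert select_policy(100, "3.3.3.9") == "policy100"
assert select_policy(200, "3.3.3.9") == "policy200"
```

The tunnel policy p1 (`tunnel select-seq sr-te-policy load-balance-number 1 unmix`) restricts the selection sequence to SR-TE policies.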
1.2.41.8 Example for Redirecting Public IPv4 BGP FlowSpec Routes to SR-
MPLS TE Policies
This section provides an example for redirecting public IPv4 BGP FlowSpec routes
to SR-MPLS TE Policies to meet the steering requirements of different services.
Networking Requirements
In traditional BGP FlowSpec-based traffic optimization, traffic transmitted over
paths with the same source and destination nodes can be redirected to only one
path, which does not achieve accurate traffic steering. With the function to
redirect a public IPv4 BGP FlowSpec route to an SR-MPLS TE Policy, a device can
redirect traffic transmitted over paths with the same source and destination nodes
to different SR-MPLS TE Policies.
On the network shown in Figure 1-141, there are two SR-MPLS TE Policies in the
direction from PE1 to PE2. PE2 is connected to two local networks (10.8.8.8/32
and 10.9.9.9/32). It is required that traffic destined for different networks be
redirected to different SR-MPLS TE Policies.
Figure 1-141 Networking for redirecting public IPv4 BGP FlowSpec routes to SR-
MPLS TE Policies
Precautions
None
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IP addresses for interfaces and an IGP on the backbone network to ensure connectivity between the PEs and Ps.
2. Enable MPLS on the backbone network and configure SR and static adjacency
labels.
3. Configure two SR-MPLS TE Policies from PE1 to PE2.
4. Configure static FlowSpec routes on PE1 and redirect the routes to the SR-
MPLS TE Policies.
5. Establish a BGP FlowSpec peer relationship between the PEs.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs on the PEs and Ps
● Adjacency labels on the PEs and Ps
Procedure
Step 1 Configure interface IP addresses.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.9 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip address 10.3.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P1.
<HUAWEI> system-view
[~HUAWEI] sysname P1
[*HUAWEI] commit
[~P1] interface loopback 1
[*P1-LoopBack1] ip address 2.2.2.9 32
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] ip address 10.1.1.2 24
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.9 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] ip address 10.2.1.2 24
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 10.4.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
# Configure P2.
<HUAWEI> system-view
[~HUAWEI] sysname P2
[*HUAWEI] commit
[~P2] interface loopback 1
[*P2-LoopBack1] ip address 4.4.4.9 32
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] ip address 10.3.1.2 24
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] ip address 10.4.1.1 24
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
Step 2 Configure an IGP on the backbone network for the PEs and Ps to communicate.
The following example uses IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] quit
[*PE1] commit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] isis enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] isis enable 1
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] quit
[*P1] commit
[~P1] interface loopback 1
[*P1-LoopBack1] isis enable 1
[*P1-LoopBack1] quit
[*P1] interface gigabitethernet1/0/0
[*P1-GigabitEthernet1/0/0] isis enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet2/0/0
[*P1-GigabitEthernet2/0/0] isis enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] quit
[*PE2] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] isis enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] isis enable 1
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] quit
[*P2] commit
[~P2] interface loopback 1
[*P2-LoopBack1] isis enable 1
[*P2-LoopBack1] quit
[*P2] interface gigabitethernet1/0/0
[*P2-GigabitEthernet1/0/0] isis enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet2/0/0
[*P2-GigabitEthernet2/0/0] isis enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] commit
# Configure P1.
[~P1] mpls lsr-id 2.2.2.9
[*P1] mpls
[*P1-mpls] commit
[~P1-mpls] quit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.9
[*PE2] mpls
[*PE2-mpls] commit
[~PE2-mpls] quit
# Configure P2.
[~P2] mpls lsr-id 4.4.4.9
[*P2] mpls
[*P2-mpls] commit
[~P2-mpls] quit
# Configure P1.
[~P1] segment-routing
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.1.1.2 remote-ip-addr 10.1.1.1 sid 330003
[*P1-segment-routing] ipv4 adjacency local-ip-addr 10.2.1.1 remote-ip-addr 10.2.1.2 sid 330002
[*P1-segment-routing] quit
[*P1] commit
[~P1] isis 1
[*P1-isis-1] cost-style wide
[*P1-isis-1] segment-routing mpls
[*P1-isis-1] quit
[*P1] commit
# Configure PE2.
[~PE2] segment-routing
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.2.1.2 remote-ip-addr 10.2.1.1 sid 330000
[*PE2-segment-routing] ipv4 adjacency local-ip-addr 10.4.1.2 remote-ip-addr 10.4.1.1 sid 330001
[*PE2-segment-routing] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] quit
[*PE2] commit
# Configure P2.
[~P2] segment-routing
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.3.1.2 remote-ip-addr 10.3.1.1 sid 330002
[*P2-segment-routing] ipv4 adjacency local-ip-addr 10.4.1.1 remote-ip-addr 10.4.1.2 sid 330003
[*P2-segment-routing] quit
[*P2] commit
[~P2] isis 1
[*P2-isis-1] cost-style wide
[*P2-isis-1] segment-routing mpls
[*P2-isis-1] quit
[*P2] commit
# Configure PE1.
[~PE1] segment-routing
[*PE1-segment-routing] segment-list pe1
[*PE1-segment-routing-segment-list] index 10 sid label 330000
[*PE1-segment-routing-segment-list] index 20 sid label 330002
[*PE1-segment-routing-segment-list] quit
[*PE1-segment-routing] segment-list pe1backup
[*PE1-segment-routing-segment-list] index 10 sid label 330001
[*PE1-segment-routing-segment-list] index 20 sid label 330003
[*PE1-segment-routing-segment-list] quit
[*PE1-segment-routing] sr-te policy policy100 endpoint 3.3.3.9 color 100
[*PE1-segment-routing-te-policy] binding-sid 115
[*PE1-segment-routing-te-policy] mtu 1000
[*PE1-segment-routing-te-policy] candidate-path preference 200
[*PE1-segment-routing-te-policy-path] segment-list pe1
[*PE1-segment-routing-te-policy-path] quit
[*PE1-segment-routing-te-policy] quit
[*PE1-segment-routing] sr-te policy policy200 endpoint 3.3.3.9 color 200
[*PE1-segment-routing-te-policy] binding-sid 125
[*PE1-segment-routing-te-policy] mtu 1000
[*PE1-segment-routing-te-policy] candidate-path preference 200
[*PE1-segment-routing-te-policy-path] segment-list pe1backup
[*PE1-segment-routing-te-policy-path] quit
[*PE1-segment-routing-te-policy] quit
[*PE1-segment-routing] quit
[*PE1] commit
After the configuration is complete, you can run the display sr-te policy
command to check SR-MPLS TE Policy information.
PolicyName : policy200
Endpoint : 3.3.3.9 Color : 200
TunnelId : 8194 TunnelType : SR-TE Policy
Binding SID : 125 MTU : 1000
Policy State : UP BFD : Disable
Admin State : UP DiffServ-Mode :-
Traffic Statistics : Disable Backup Hot-Standby : Disable
Candidate-path Count : 1
Step 6 Establish a BGP peer relationship between the PEs and import local network
routes to PE2.
In this example, loopback addresses 10.8.8.8/32 and 10.9.9.9/32 are used to
simulate two local networks on PE2.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 3.3.3.9 as-number 100
[*PE1-bgp] peer 3.3.3.9 connect-interface loopback 1
[*PE1-bgp] commit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] interface loopback 2
[*PE2-LoopBack2] ip address 10.8.8.8 32
[*PE2-LoopBack2] quit
[*PE2] interface loopback 3
[*PE2-LoopBack3] ip address 10.9.9.9 32
[*PE2-LoopBack3] quit
[*PE2] commit
After the configuration is complete, run the display bgp peer command on the
PEs and check whether a BGP peer relationship has been established between the
PEs. If the Established state is displayed in the command output, the BGP peer
relationship has been established successfully. The following example uses the
command output on PE1.
[~PE1] display bgp peer
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] ipv4-family flow
[*PE1-bgp-af-ipv4-flow] peer 3.3.3.9 enable
[*PE1-bgp-af-ipv4-flow] commit
[~PE1-bgp-af-ipv4-flow] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] ipv4-family flow
[*PE2-bgp-af-ipv4-flow] peer 1.1.1.9 enable
[*PE2-bgp-af-ipv4-flow] commit
[~PE2-bgp-af-ipv4-flow] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp flow peer command on
the PEs and check whether a BGP FlowSpec peer relationship has been established
between the PEs. If the Established state is displayed in the command output, the
BGP FlowSpec peer relationship has been established successfully. The following
example uses the command output on PE1.
[~PE1] display bgp flow peer
In this example, the traffic destined for 10.8.8.8/32 needs to be redirected to the
SR-MPLS TE Policy named policy100, and the traffic destined for 10.9.9.9/32 needs
to be redirected to the SR-MPLS TE Policy named policy200.
# Configure PE1.
[~PE1] flow-route PE1toPE2
[*PE1-flow-route] if-match destination 10.8.8.8 255.255.255.255
[*PE1-flow-route] apply redirect ip 3.3.3.9:0 color 0:100
[*PE1-flow-route] quit
[*PE1] flow-route PE1toPE2b
[*PE1-flow-route] if-match destination 10.9.9.9 255.255.255.255
[*PE1-flow-route] apply redirect ip 3.3.3.9:0 color 0:200
[*PE1-flow-route] commit
[~PE1-flow-route] quit
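The two flow routes above form a simple match/action table: each matches a destination prefix and redirects matching traffic to next hop 3.3.3.9 with a color that selects the SR-MPLS TE Policy. A minimal sketch of that classification (a hypothetical representation, not the device's internal one):

```python
# Illustrative model of the two BGP FlowSpec redirect rules configured above.
import ipaddress

FLOW_RULES = [
    # (match: destination prefix, action: redirect next hop and color)
    (ipaddress.ip_network("10.8.8.8/32"), ("3.3.3.9", 100)),  # -> policy100
    (ipaddress.ip_network("10.9.9.9/32"), ("3.3.3.9", 200)),  # -> policy200
]

def redirect_target(dst_ip):
    """Return the (next_hop, color) of the first matching rule, else None."""
    addr = ipaddress.ip_address(dst_ip)
    for prefix, action in FLOW_RULES:
        if addr in prefix:
            return action
    return None

assert redirect_target("10.8.8.8") == ("3.3.3.9", 100)
assert redirect_target("10.9.9.9") == ("3.3.3.9", 200)
assert redirect_target("10.1.1.1") is None  # unmatched traffic: no redirect
```

The returned color is what the `apply redirect ip ... color` action attaches to the matching traffic; the color in turn selects the SR-MPLS TE Policy with the same color and endpoint.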
# Display the traffic redirection information carried in a single BGP FlowSpec route
based on the ReIndex value shown in the preceding command output.
[~PE1] display bgp flow routing-table 24577
BGP local router ID : 10.3.1.1
Local AS number : 100
ReIndex : 24577
Dissemination Rules :
Destination IP : 10.8.8.8/32
Match action :
apply redirect ip 3.3.3.9:0 color 0:100
Route Duration: 0d03h11m39s
AS-path Nil, origin igp, MED 0, pref-val 0, valid, local, best, pre 255
Advertised to such 1 peers:
3.3.3.9
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
mpls lsr-id 1.1.1.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.1.1.1 remote-ip-addr 10.1.1.2 sid 330000
ipv4 adjacency local-ip-addr 10.3.1.1 remote-ip-addr 10.3.1.2 sid 330001
segment-list pe1
index 10 sid label 330000
index 20 sid label 330002
segment-list pe1backup
index 10 sid label 330001
index 20 sid label 330003
sr-te policy policy100 endpoint 3.3.3.9 color 100
binding-sid 115
mtu 1000
candidate-path preference 200
segment-list pe1
sr-te policy policy200 endpoint 3.3.3.9 color 200
binding-sid 125
mtu 1000
candidate-path preference 200
segment-list pe1backup
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv4-family flow
undo shutdown
ip address 10.4.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
interface LoopBack2
ip address 10.8.8.8 255.255.255.255
#
interface LoopBack3
ip address 10.9.9.9 255.255.255.255
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
network 10.8.8.8 255.255.255.255
network 10.9.9.9 255.255.255.255
peer 1.1.1.9 enable
#
ipv4-family flow
peer 1.1.1.9 enable
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.9
#
mpls
#
segment-routing
ipv4 adjacency local-ip-addr 10.3.1.2 remote-ip-addr 10.3.1.1 sid 330002
ipv4 adjacency local-ip-addr 10.4.1.1 remote-ip-addr 10.4.1.2 sid 330003
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
segment-routing mpls
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
isis enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
return
Definition
Segment Routing IPv6 (SRv6) is a protocol designed to forward IPv6 data packets
on a network using the source routing model. IPv6 forwarding plane-based SRv6
enables the ingress to add a segment routing header (SRH) to an IPv6 packet and
push an explicit IPv6 address stack into the SRH. After receiving the packet, each
transit node updates the IPv6 destination IP address in the packet and offsets the
address stack to implement hop-by-hop forwarding.
Purpose
Future networks will be 5G oriented. Transport networks also need to be adapted
to this and face the trends in simplifying networks, providing low latency, and
implementing software-defined networking (SDN) and network functions
virtualization (NFV).
To develop 5G networks, customers hope to use IPv6 addresses to more easily
implement VPNs. The SRv6 technology uses existing IPv6 forwarding techniques
and extends the IPv6 header to implement label forwarding-like processing. Some
IPv6 addresses are defined as instantiated segment IDs (SIDs), each with its
own explicit function. By operating on these SIDs, devices can implement
simplified VPNs and flexibly plan paths.
Benefits
SRv6 offers the following benefits to users:
● Streamlines network configurations to more easily implement VPNs.
SRv6 does not use MPLS techniques and is fully compatible with existing IPv6
networks. Nodes only need to support IPv6 forwarding instead of MPLS
forwarding. Transit nodes do not even need to be SRv6-capable: they forward
IPv6 packets carrying the SRH as ordinary IPv6 packets, based on routes.
● Provides topology independent-loop-free alternate (TI-LFA), which improves
FRR protection.
SRv6, in combination with the remote loop-free alternate (RLFA) FRR
algorithm, supports any topology in theory and overcomes drawbacks in
conventional tunnel protection.
● Facilitates traffic optimization on IPv6 forwarding paths.
SIDs with various service types are used to flexibly plan explicit paths on the
ingress to adjust service traffic.
SRH
An IPv6 packet consists of a standard IPv6 header, extension header (0...n), and
payload. A segment routing header (SRH) is added as an extension header to
implement segment routing IPv6 (SRv6) based on the IPv6 forwarding plane. The
SRH specifies an IPv6 explicit path and stores IPv6 segment lists that function in
the same way as segment lists in SR-MPLS.
The ingress adds an SRH to each IPv6 packet, so that transit nodes can forward
the packets based on path information contained in the SRH. Figure 2-1 shows
the SRH format.
Hdr Ext Len (8 bits): Length of the SRH spanning from Segment List [0] to Segment List [n].
Routing Type (8 bits): Route header type. SRHs have a routing type value of 4.
Segment List [n] (128 bits × n, where n is an integer): Segment list that is numbered from the last segment of a path. A segment list is in the format of an IPv6 address.
According to SRv6 standards, segment list information can also be expressed in inverse
order, for example, (Segment List [2], Segment List [1], Segment List [0]). This format is
equivalent to <Segment List [0], Segment List [1], Segment List [2]>. Note the difference
between the <> and () symbols:
● <Segment List [0], Segment List [1], Segment List [2]> represents a SID list where
Segment List [0] is the first SID and Segment List [2] is the last SID to be processed.
● (Segment List [2], Segment List [1], Segment List [0]) represents the same SID list but
encoded in an inverse mode where the rightmost SID is the first SID and the leftmost
SID is the last SID to be processed.
For SRH expression of IPv6 packets, the (Segment List [2], Segment List [1], Segment List
[0]) format is more convenient.
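The equivalence between the two notations can be checked mechanically: the on-wire () encoding is simply the processing-order <> list reversed. For example, with three hypothetical SIDs:

```python
# Processing order <Segment List [0], [1], [2]>: first SID to process first.
processing_order = ["A1::1", "A2::1", "A3::1"]

# On-wire encoding (Segment List [2], [1], [0]): same SIDs in inverse order,
# so the first SID to be processed sits at the end of the encoded list.
srh_encoding = list(reversed(processing_order))

assert srh_encoding == ["A3::1", "A2::1", "A1::1"]
assert srh_encoding[-1] == processing_order[0]  # first SID to be processed
```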
In SRv6, every time a packet passes through an SRv6-capable node, the value of
the Segments Left (SL) field decreases by 1 and the IPv6 DA changes, as shown in
Figure 2-3. The SL and Segment List fields are both used to determine an IPv6
DA: the active segment index is n – SL.
● If the SL field value is n, the IPv6 DA value is equal to the Segment List [0] value.
● If the SL field value is n – 1, the IPv6 DA value is equal to the Segment List [1] value.
● If the SL field value is n – 2, the IPv6 DA value is equal to the Segment List [2] value.
● ...
● If the SL field value is 0, the IPv6 DA value is equal to the Segment List [n] value.
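Under this numbering, where Segment List [0] is the first SID to be processed, the active DA is simply Segment List [n – SL]. A small sketch with hypothetical SID values:

```python
def active_da(segment_list, sl):
    """Return the IPv6 DA for a given Segments Left (SL) value.

    segment_list is in processing order: Segment List [0] is the first SID
    to be processed and Segment List [n] the last; SL counts down from n to 0.
    """
    n = len(segment_list) - 1
    return segment_list[n - sl]

path = ["A1::1", "A2::1", "A3::1"]  # n = 2
assert active_da(path, 2) == "A1::1"  # SL = n -> Segment List [0]
assert active_da(path, 1) == "A2::1"  # SL = n - 1 -> Segment List [1]
assert active_da(path, 0) == "A3::1"  # SL = 0 -> Segment List [n]
```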
An SRv6 segment ID (SID) is an IPv6 address. It consists of two parts: Locator and
Function, expressed in the Locator:Function format, as shown in Figure 2-4. The
Locator part occupies the most significant bits and the Function part occupies the
remaining bits.
The Locator part provides the location function. Therefore, in general, each locator
value must be unique in an SR domain. However, in special scenarios, such as
anycast protection, multiple devices can be configured with the same locator
value. After a locator value is configured for a node, the system generates a
locator route and propagates the route in the SR domain through an IGP, allowing
other nodes to locate the node based on the route information received. In
addition, all SRv6 segments advertised by the node can be reached through the
route. The Function part identifies an instruction bound to the node that
generates the SRv6 SID. A function may require additional arguments that can be
placed after the Function part. In this case, the SRv6 SID is expressed in the
Locator:Function:Arguments format. The Arguments part occupies the least
significant bits of the IPv6 address and is used to define packet flow and service
information. The Function and Arguments parts can both be defined, resulting in
an SRv6 SID structure that improves network programmability.
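The Locator:Function split can be illustrated with Python's ipaddress module: the high-order locator bits of the 128-bit SID form the locator, and the remaining bits encode the function (and any arguments). The SID value and 64-bit locator length below are hypothetical:

```python
import ipaddress

def split_sid(sid, locator_bits):
    """Split a 128-bit SRv6 SID into (locator, function) integer fields."""
    value = int(ipaddress.IPv6Address(sid))
    func_bits = 128 - locator_bits
    locator = value >> func_bits               # most significant bits
    function = value & ((1 << func_bits) - 1)  # remaining bits
    return locator, function

# Hypothetical SID A2:1::B100 under a 64-bit locator A2:1::/64.
locator, function = split_sid("A2:1::B100", 64)
assert locator == int(ipaddress.IPv6Address("A2:1::")) >> 64
assert function == 0xB100
```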
End SID
Endpoint SID that identifies the prefix of a destination address. It is similar to a
prefix SID in SR MPLS.
After an End SID is generated on a node, the node propagates the SID to all other
nodes in the SRv6 domain through an IGP. All nodes in the SRv6 domain know
how to implement the instruction bound to the SID.
End.X SID
End.X SID is a layer-3 cross-connect endpoint SID that identifies a link. It is similar
to an adjacency SID in SR MPLS.
After an End.X SID is generated on a node, the node propagates the SID to all the
other nodes in the SRv6 domain through an IGP. Although the SID can be
obtained by all nodes in the SRv6 domain, only the node generating the SID
knows how to implement the instruction bound to the SID.
End.DT4 SID
End.DT4 SID is a provider edge (PE)-specific endpoint SID that identifies an IPv4
VPN instance. The instruction bound to the End.DT4 SID is to decapsulate packets
and search the routing table of an IPv4 VPN instance for packet forwarding. The
End.DT4 SID, which is similar to an IPv4 VPN label, is used in VPN scenarios.
End.DT6 SID
End.DT6 SID is a PE-specific endpoint SID that identifies an IPv6 VPN instance. The
instruction bound to the End.DT6 SID is to decapsulate packets and search the
routing table of an IPv6 VPN instance for packet forwarding. The End.DT6 SID,
which is similar to an IPv6 VPN label, is used in VPNv6 scenarios.
End.DX4 SID
End.DX4 SID is a provider edge (PE)-specific layer-3 cross-connect endpoint SID
that identifies an IPv4 CE. The instruction bound to the End.DX4 SID is to
decapsulate packets and forward the decapsulated IPv4 packet to the Layer 3
interface bound to the SID. The End.DX4 SID, which is similar to an IPv4 CE label,
is used in VPN scenarios.
End.DX6 SID
End.DX6 SID is a provider edge (PE)-specific layer-3 cross-connect endpoint SID
that identifies an IPv6 CE. The instruction bound to the End.DX6 SID is to
decapsulate packets and forward the decapsulated IPv6 packet to the Layer 3
interface bound to the SID. The End.DX6 SID, which is similar to an IPv6 CE label,
is used in VPNv6 scenarios.
End.DT2U SID
End.DT2U SID is a layer-2 cross-connect endpoint SID to which unicast MAC table
lookup is bound. It also identifies an endpoint. The instruction bound to the
End.DT2U SID is to pop the IPv6 header and its extension headers, search the MAC
address table based on the exposed destination MAC address, and forward the
remaining packet content to the outbound interface bound to the SID. The
End.DT2U SID is used in EVPN VPLS unicast scenarios.
If a bypass tunnel exists on the network, an End.DT2UL SID is generated
automatically and used to send unicast traffic over the bypass tunnel when a CE is
dual-homed to two PEs.
End.DT2M SID
End.DT2M SID is a layer-2 cross-connect endpoint SID to which broadcast-based
flooding is bound. It also identifies an endpoint. The instruction bound to the
End.DT2M SID is to pop the IPv6 header and its extension headers and then
broadcast the remaining packet content in the bridge domain (BD). The End.DT2M
SID is used in EVPN VPLS broadcast, unknown-unicast, multicast (BUM) scenarios.
End.OP SID
End.OP SID is an OAM SID that specifies the punt behavior to be implemented for
an OAM packet in ping and tracert scenarios. On the network shown in Figure
2-14, if ingress node A attempts to ping End.X SID A4:4::45 between nodes D and
E through End.X SID A2:2::1, node D needs to process and respond to the ICMPv6
Echo Request packet sent by node A. When constructing the ping packet, node A
needs to insert End.OP SID A4:4::1 of node D into the packet. After receiving the
packet, node D finds that the destination address of the packet is its own End.OP
SID. Then, node D checks whether A4:4::45 is its local SID. If yes, node D returns a
ping success packet. If not, node D reports an error indicating that the SID is not
its local SID.
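The End.OP processing described above reduces to two lookups in node D's My Local SID Table: is the packet's destination address our End.OP SID, and is the SID under test a local SID? A simplified sketch using the SID values from the scenario (not device code):

```python
# Simplified model of End.OP handling on node D, per the scenario above.
MY_LOCAL_SID_TABLE = {
    "A4:4::1": "End.OP",   # node D's OAM SID
    "A4:4::45": "End.X",   # SID under test (link between nodes D and E)
}

def handle_ping(dest_sid, target_sid):
    """Return node D's action for an ICMPv6 Echo Request carrying an End.OP SID."""
    if MY_LOCAL_SID_TABLE.get(dest_sid) != "End.OP":
        return "forward"                # not punted: DA is not our End.OP SID
    if target_sid in MY_LOCAL_SID_TABLE:
        return "ping success"           # target SID is local: reply success
    return "error: SID is not local"    # target SID unknown on this node

assert handle_ping("A4:4::1", "A4:4::45") == "ping success"
```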
SRv6 Nodes
SRv6 nodes perform multiple roles, which are described as follows:
● Source SRv6 node: a source node that encapsulates packets into SRv6 packets
● Transit node: an IPv6 node that forwards SRv6 packets but does not perform
SRv6 processing
● SRv6 segment endpoint node: a node that receives and processes SRv6
packets in which the destination IPv6 address is the node's local SID or local
interface address
A node's role is determined by the task it performs in SRv6 packet forwarding. A
node can play different roles. For example, a node can be the source node on one
SRv6 path while also being a transit or endpoint node on another SRv6 path.
On the network shown in Figure 2-15, the SRv6 source node encapsulates packets
into SRv6 packets, the transit node processes and forwards the packets as
common IPv6 packets, and the endpoint nodes process SRv6 SIDs and SRHs in the
packets.
Transit series instructions are triggered by IPv6 addresses that are neither local
interface addresses nor local SRv6 SIDs advertised by nodes.
Flavors
Flavors are optional behaviors defined to enhance the End series instructions and serve as supplements to endpoint node behaviors. Once used, they modify the corresponding End series instructions to meet richer service requirements. Table 2-3 lists the common flavors and their functions.
● PSP: penultimate segment pop of the SRH. This function pops the SRH from a packet on the penultimate endpoint node and is similar to penultimate hop popping (PHP) in MPLS. Because the SRH is no longer needed on the last-hop endpoint node, PSP reduces that node's processing burden. If PSP is disabled, the SRH is removed only on the egress.
● USP: ultimate segment pop of the SRH. This function pops the SRH from a packet on the last endpoint node.
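As a conceptual sketch of the difference PSP makes, the following toy model shows an endpoint executing an End behavior with the PSP flavor. Field names (segments_left, segment_list) follow RFC 8754 terminology; this is illustrative pseudologic, not vendor code.

```python
# Conceptual sketch of End-behavior processing with and without the PSP
# flavor. This is illustrative, not vendor code.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SRH:
    segment_list: List[str]   # SIDs stored in reverse order of traversal
    segments_left: int        # index of the next SID to process

@dataclass
class Packet:
    dst: str                  # IPv6 destination address
    srh: Optional[SRH]

def process_end_sid(pkt: Packet, flavor: str = "") -> Packet:
    """Endpoint processing for an End SID, optionally with the PSP flavor."""
    srh = pkt.srh
    srh.segments_left -= 1
    pkt.dst = srh.segment_list[srh.segments_left]  # next SID becomes the IPv6 DA
    if flavor == "PSP" and srh.segments_left == 0:
        pkt.srh = None  # penultimate endpoint pops the SRH (like PHP in MPLS)
    return pkt

pkt = Packet(dst="A3:3::1", srh=SRH(["A4:4::1", "A3:3::1"], segments_left=1))
pkt = process_end_sid(pkt, flavor="PSP")
print(pkt.dst, pkt.srh)  # A4:4::1 None -> last hop receives a plain IPv6 packet
```

With USP instead, the SRH would stay in place until the last endpoint node itself removes it.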
2.1.2.4 SRv6 BE
Figure 2-18 describes route advertisement and data forwarding from CE2 to CE1
in an L3VPNv4 over SRv6 BE scenario.
Figure 2-18 Route advertisement and data forwarding in an L3VPNv4 over SRv6
BE scenario
5. PE2 searches My Local SID Table based on A2:1::B100 and finds an End.DT4
SID. According to the instruction specified by the SID, PE2 removes the IPv6
packet header and searches the routing table of the VPN instance identified
by the End.DT4 SID for packet forwarding. At this time, the packet is restored
to a common IPv4 packet.
preferential route selection. If PE1 determines to install the route, this route is
associated with an SRv6 VPN SID.
7. PE1 advertises the route to CE1.
CE1 learns the route from PE1 using a static route or using RIP, OSPF, IS-IS, or
BGP. This process is similar to that from CE2 to PE2.
Figure 2-20 shows route advertisement and data forwarding in an EVPN L3VPNv4
over SRv6 BE scenario.
Figure 2-20 Route advertisement and data forwarding in an EVPN L3VPNv4 over
SRv6 BE scenario
destination IPv4 address prefix of the packet. After finding the associated
SRv6 VPN SID and next hop, PE1 encapsulates the packet into an IPv6 packet
using the SRv6 VPN SID A2:1::C100 as the destination address.
3. PE1 finds the route A2:1::/64 based on the longest match rule and forwards
the packet to the P device over the shortest path.
4. Likewise, the P device finds the route A2:1::/64 based on the longest match
rule and forwards the packet to PE2 over the shortest path.
5. PE2 searches My Local SID Table based on A2:1::C100 and hits an End.DT4
SID. According to the instruction specified by the SID, PE2 removes the IPv6
packet header and searches the routing table of the VPN instance identified
by the End.DT4 SID for packet forwarding.
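Steps 3 through 5 above can be condensed into a minimal Python sketch: transit devices forward on the longest matching locator route, and the egress PE matches the VPN SID in My Local SID Table. The table contents mirror the example values; the lookup logic is deliberately simplified and not device code.

```python
# Toy model of locator-route forwarding plus the End.DT4 lookup on the
# egress PE. Illustrative only; values follow the document's example.

import ipaddress

# Locator route advertised by PE2 (A2:1::/64) plus a default route.
fib = {
    ipaddress.ip_network("a2:1::/64"): "to-PE2",
    ipaddress.ip_network("::/0"): "default",
}

def longest_match(dst: str) -> str:
    """Return the next hop of the longest matching prefix for dst."""
    addr = ipaddress.ip_address(dst)
    best = max((n for n in fib if addr in n), key=lambda n: n.prefixlen)
    return fib[best]

# My Local SID Table on PE2: VPN SID -> (behavior, VPN instance).
my_local_sid_table = {"a2:1::c100": ("End.DT4", "vpn1")}

dst = "a2:1::c100"
print(longest_match(dst))            # to-PE2: transit devices need only A2:1::/64
behavior, vpn = my_local_sid_table[dst]
print(behavior, vpn)                 # End.DT4 vpn1: decapsulate, look up VPN table
```

The key point the sketch shows is that the P device never needs the full 128-bit SID: the locator prefix alone steers the packet to PE2.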
5. PE2 advertises the EVPN route to its BGP EVPN peer PE1 through a BGP
Update message, which also carries RT and SRv6 VPN SID attributes.
6. PE1 receives the EVPN route.
If the next hop carried in the EVPN route is reachable and the route matches
the BGP route import policy, PE1 performs a series of actions to determine
whether to install the route in its IPv6 routing table of the EVPN instance.
These actions include route leaking, route recursion to an SRv6 BE path, and
preferential route selection. If PE1 determines to install the route, this route is
associated with an SRv6 VPN SID.
7. PE1 advertises the route to CE1.
CE1 learns the route from PE1 using a static route or using RIPng, OSPFv3, IS-
IS, or BGP4+. This process is similar to that from CE2 to PE2.
Figure 2-22 shows route advertisement and data forwarding in an EVPN L3VPNv6
over SRv6 BE scenario.
Figure 2-22 Route advertisement and data forwarding in an EVPN L3VPNv6 over
SRv6 BE scenario
5. After receiving the EVPN route, PE1 leaks the route to its VPNv6 routing table,
converts it into a common IPv6 route, and advertises it to CE1.
In the data forwarding phase:
1. CE1 sends a common IPv6 packet to PE1.
2. After receiving the packet through the interface to which a VPN instance is
bound, PE1 searches the IPv6 routing table of the VPN instance against the
destination IPv6 address prefix of the packet. After finding the associated
SRv6 VPN SID and next hop, PE1 encapsulates the packet into an IPv6 packet
using the SRv6 VPN SID A2:1::D100 as the destination address.
3. PE1 finds the route A2:1::/64 based on the longest match rule and forwards
the packet to the P device over the shortest path.
4. Likewise, the P device finds the route A2:1::/64 based on the longest match
rule and forwards the packet to PE2 over the shortest path.
5. PE2 searches My Local SID Table based on A2:1::D100 and hits an End.DT6
SID. According to the instruction specified by the SID, PE2 removes the IPv6
packet header and searches the routing table of the VPN instance identified
by the End.DT6 SID for packet forwarding.
The data forwarding process in the EVPN VPWS over SRv6 BE dual-homing active-
active scenario is similar to that in the single-homing scenario. For details, see
related description in the single-homing scenario.
The route advertisement process in the EVPN VPWS over SRv6 BE dual-homing
single-active scenario is similar to that in the dual-homing active-active scenario.
The only difference is that the all-active mode information carried by PE2 and PE3
is different in the two scenarios. In Figure 2-26, the master/backup status of PE2
and PE3 is determined by E-Trunk negotiation. Through negotiation, PE2 stays in
the master state and PE3 in the backup state.
After receiving the Ethernet A-D per EVI routes from PE2 and PE3, PE1 selects the
route advertised by PE2 as the preferred route and preferentially forwards user
traffic to PE2.
If a fault occurs on PE2 or its AC-side link, PE2 sends a BGP Withdraw message to its BGP EVPN peer PE1 to withdraw
the advertised Ethernet A-D per EVI route. After receiving the message, PE1 selects
the Ethernet A-D per EVI route received from PE3 as the preferred route and
forwards subsequent data traffic to PE3.
Figure 2-28 MAC address learning in an EVPN VPLS over SRv6 BE scenario
Figure 2-29 Unicast data forwarding in an EVPN VPLS over SRv6 BE scenario
1. CE1 sends a common Layer 2 packet to PE1. In the packet, the destination
MAC address is CE3's MAC address MAC3.
2. After receiving the packet through the BD interface, PE1 searches its MAC
address table and finds the SRv6 VPN SID and the next hop associated with
MAC3. PE1 then encapsulates the packet into an IPv6 packet using the SRv6
SID A3:1::B300 as the destination address.
3. PE1 finds the route A3:1::/96 based on the longest match rule and forwards
the packet to the P device over the shortest path.
4. Similarly, the P device finds the route A3:1::/96 based on the longest match
rule and forwards the packet to PE3 over the shortest path.
5. PE3 searches My Local SID Table based on the destination address A3:1::B300
and finds an End.DT2U SID. According to the instruction specified by the SID,
PE3 removes the IPv6 packet header and finds the BD corresponding to the
End.DT2U SID. PE3 then forwards the original Layer 2 packet to CE3 based on
the destination MAC address MAC3.
Figure 2-30 MDT for BUM traffic in an EVPN VPLS over SRv6 BE scenario
Figure 2-31 shows how BUM traffic is forwarded in an EVPN VPLS over SRv6 BE
scenario.
Figure 2-31 BUM traffic forwarding in an EVPN VPLS over SRv6 BE scenario
1. CE1 sends a Layer 2 broadcast packet to PE1. In this packet, the destination
MAC address is a broadcast address.
2. After receiving the packet through the BD interface, PE1 searches its MDT for
leaf node information. After encapsulating the BUM traffic into IPv6 packets
using the SRv6 VPN SIDs of the leaf nodes as the destination addresses, PE1
replicates the packets to all the involved leaf nodes in the MDT. The SRv6 VPN
SID is A2:1::B201 for BUM traffic from CE1 to CE2, and is A3:1::B301 for BUM
traffic from CE1 to CE3.
3. PE1 finds the desired routes based on the longest match rule and forwards
the packets to the P device over the shortest path. The route is A2:1::/96 for
BUM traffic from CE1 to CE2, and is A3:1::/96 for BUM traffic from CE1 to
CE3.
4. The P device finds the desired routes based on the longest match rule and
forwards the packets to PE2 and PE3 over the shortest path.
5. PE2 and PE3 search their My Local SID Table based on SRv6 VPN SIDs and
find End.DT2M SIDs. According to the instruction specified by the End.DT2M
SIDs, PE2 and PE3 remove the IPv6 packet headers and find the BDs
corresponding to the End.DT2M SIDs. PE2 and PE3 then flood the original
Layer 2 packet through their AC interfaces to receivers.
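The ingress replication in step 2 can be sketched as follows, using the SID values from the example. The MDT structure is simplified to a flat list of leaf nodes; this is a conceptual model, not device code.

```python
# Sketch of ingress replication for BUM traffic: the ingress PE copies the
# packet once per MDT leaf, each copy using that leaf's End.DT2M SID as the
# IPv6 destination address. SID values follow the document's example.

mdt_leaves = {            # leaf PE -> End.DT2M SID
    "PE2": "A2:1::B201",
    "PE3": "A3:1::B301",
}

def replicate_bum(payload: bytes):
    """Return one IPv6-encapsulated copy per MDT leaf."""
    return [{"ipv6_dst": sid, "payload": payload} for sid in mdt_leaves.values()]

copies = replicate_bum(b"broadcast-frame")
for c in copies:
    print(c["ipv6_dst"])
# A2:1::B201
# A3:1::B301
```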
● SRv6 End.X SID sub-TLV: advertises SRv6 SIDs on a point-to-point (P2P) network. Carried in IS-IS TLVs of types 22, 23, 141, 222, and 223.
● SRv6 LAN End.X SID sub-TLV: advertises SRv6 SIDs on a local area network (LAN). Carried in IS-IS TLVs of types 22, 23, 141, 222, and 223.
● Flags (8 bits): flag bits. Currently, only the D bit is defined. When a SID is leaked from Level-2 to Level-1, the D bit must be set. SIDs with the D bit set must not be leaked from Level-1 back to Level-2, preventing routing loops.
● Sub-TLVs (variable length): included sub-TLVs, for example, the SRv6 End SID sub-TLV.
SRv6 locators can be advertised using the SRv6 Locator TLV. After receiving the
TLV, SRv6-capable IS-IS devices install corresponding locator route entries to their
local forwarding tables, but SRv6-incapable IS-IS devices do not.
SRv6 locators can also be advertised using the Prefix Reachability TLV 236 or 237.
In this advertisement mode, SRv6-incapable devices can install corresponding
locator route entries to their local forwarding tables, thereby achieving
interworking with SRv6-capable devices. In cases where a locator advertisement is
received in both a Prefix Reachability TLV and an SRv6 Locator TLV, the Prefix
Reachability TLV is preferentially used.
capabilities supported by the local node. Figure 2-34 shows the format of the
SRv6 Capabilities sub-TLV.
● Flags (8 bits): flag bits. Figure 2-37 shows the format of this field.
● Weight (8 bits): weight of the End.X SID, used for load balancing.
Compared with the SRv6 End.X SID sub-TLV, the SRv6 LAN End.X SID sub-TLV
simply has an additional System ID field. Table 2-10 describes the fields in this
sub-TLV.
End (with PSP): Y N N
End (with USP): Y N N
End.X (with PSP): N Y Y
End.X (with USP): N Y Y
End.X (with PSP & USP): N Y Y
End.DT4: Y N N
End.DT6: Y N N
End.DX4: N Y Y
End.DX6: N Y Y
End.OP: Y N N
IGP for SR allocates SIDs only within an autonomous system (AS). Proper SID
orchestration in an AS facilitates the planning of an optimal path in the AS.
However, IGP for SR does not work if paths cross multiple ASs on a large-scale
network. BGP for SRv6 is an extension of BGP for Segment Routing. This function
enables a device to allocate BGP peer SIDs based on BGP peer information and
report the SID information to a controller. An SRv6 TE Policy uses BGP peer SIDs
for path orchestration, thereby obtaining the optimal inter-AS path. BGP for SRv6
mainly involves the BGP egress peer engineering (EPE) extension and BGP-LS
extension.
BGP EPE
BGP EPE allocates BGP peer SIDs to inter-AS paths. BGP-LS advertises the BGP
peer SIDs to the controller. A forwarder that does not establish a BGP-LS peer
relationship with the controller can run BGP-LS to advertise a peer SID to a BGP
peer that has established a BGP-LS peer relationship with the controller. The BGP
peer then runs BGP-LS to advertise the peer SID to the network controller. As
shown in Figure 2-40, BGP EPE can allocate peer node segments (Peer-Node SIDs)
and peer adjacency segments (Peer-Adj SIDs).
● A Peer-Node SID identifies a peer node. The peers at both ends of each BGP
session are assigned Peer-Node SIDs. An EBGP peer relationship established
based on loopback interfaces may traverse multiple physical links. In this case,
the Peer-Node SID of a peer is mapped to multiple outbound interfaces.
Traffic can be forwarded through any outbound interface or be load-balanced
among multiple outbound interfaces.
● A Peer-Adj SID identifies an adjacency to a peer. An EBGP peer relationship
established based on loopback interfaces may traverse multiple physical links.
In this case, each adjacency is assigned a Peer-Adj SID. Only a specified link
(mapped to a specified outbound interface) can be used for forwarding.
On the network shown in Figure 2-40, ASBR1 and ASBR3 are directly connected
through two physical links. An EBGP peer relationship is established between
ASBR1 and ASBR3 through loopback interfaces. ASBR1 runs BGP EPE to assign the
Peer-Node SID 2001:db8::1 to its peer (ASBR3) and to assign the Peer-Adj SIDs
2001:db8:1::2 and 2001:db8:1::3 to the physical links. For an EBGP peer
relationship established between directly connected physical interfaces, BGP EPE
allocates a Peer-Node SID rather than a Peer-Adj SID. For example, on the
network shown in Figure 2-40, BGP EPE allocates only Peer-Node SIDs
2001:db8::4, 2001:db8::5, and 2001:db8::6 to the ASBR1-ASBR5, ARBR2-ASBR4, and
ASBR2-ASBR5 peer relationships, respectively.
Peer-Node SIDs and Peer-Adj SIDs are effective only on local devices. Different
devices can have the same Peer-Node SID or Peer-Adj SID. Currently, BGP EPE
supports only EBGP peers. Multi-hop EBGP peers must be directly connected through physical links, because transit nodes do not have BGP peer SID information, which would otherwise result in forwarding failures.
BGP EPE allocates SIDs to BGP peers and links, but cannot be used to construct a
forwarding tunnel. BGP peer SIDs must be used with IGP SIDs to establish an E2E
tunnel.
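The point that BGP peer SIDs must be combined with IGP SIDs can be sketched as follows. The Peer-Node SID 2001:db8::1 is the ASBR1-to-ASBR3 value from the Figure 2-40 example; the intra-AS End/End.X SIDs are hypothetical placeholders introduced only for illustration.

```python
# Sketch: a controller stitches intra-AS IGP SID lists together with a BGP
# EPE Peer-Node SID at the AS boundary to form one E2E segment list.
# The intra-AS SIDs below are assumed values, not from the document.

igp_sids_as1 = ["A1:1::1", "A1:2::1"]   # hypothetical End/End.X SIDs in AS1
peer_node_sid = "2001:db8::1"           # ASBR1 -> ASBR3 Peer-Node SID (from text)
igp_sids_as2 = ["A3:1::1"]              # hypothetical SID in AS2

# The EPE SID alone cannot form a tunnel; it only bridges the two ASs.
e2e_segment_list = igp_sids_as1 + [peer_node_sid] + igp_sids_as2
print(e2e_segment_list)
# ['A1:1::1', 'A1:2::1', '2001:db8::1', 'A3:1::1']
```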
BGP-LS
Inter-AS SRv6 TE policies can be established by manually specifying explicit paths,
or they can be orchestrated by a controller. In a controller-based orchestration
scenario, intra- and inter-AS SIDs are reported to the controller using BGP-LS.
Inter-AS links must support TE link attribute configuration and reporting to the
controller. This enables the controller to compute primary and backup paths based
on the link attributes. BGP EPE discovers network topology information and
allocates SIDs. BGP-LS reports the topology and SID information to the controller
through the Link network layer reachability information (NLRI) field and SRv6 SID.
Figure 2-41 and Figure 2-42 show Link NLRI formats.
Figure 2-42 Format of SRv6 SID information (Peer-Node SIDs) advertised by BGP-
LS
● Flags (8 bits): flags used in the SRv6 BGP Peer-Node SID TLV. Figure 2-44 shows the format of this field. In this field:
– B: backup flag. If set, the SID is eligible for protection.
– S: set flag. If set, the SID refers to a set of BGP peering sessions.
– P: persistent flag. If set, the SID is persistently allocated.
● Weight (8 bits): weight of the SID, used for load balancing (not currently supported).
Definition
SRv6 TE Policy is a tunneling technology developed based on SRv6. An SRv6 TE
Policy is a set of candidate paths consisting of one or more segment lists, that is,
segment ID (SID) lists. Each SID list identifies an end-to-end path from the source
to the destination, instructing a device to forward traffic through the path rather
than the shortest path computed using an IGP. The header of a packet steered
into an SRv6 TE Policy is augmented with an ordered list of segments associated
with that SRv6 TE Policy, so that other devices on the network can execute the
instructions encapsulated into the list.
A candidate path can contain multiple segment lists, each of which carries a
Weight attribute. Each segment list is an explicit SID stack that instructs a network
device to forward packets, and multiple segment lists can work in load balancing
mode.
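The weight-based load balancing just described can be sketched as follows. Weighted flow hashing is an assumed mechanism here; the real hash inputs and algorithm are implementation-specific, and the SID values are illustrative.

```python
# Sketch: mapping flows onto the segment lists of one candidate path in
# proportion to their Weight attributes. Assumed mechanism, not vendor code.

import hashlib

segment_lists = [
    {"sids": ["A2:2::1", "A4:4::1"], "weight": 2},   # carries ~2/3 of flows
    {"sids": ["A3:3::1", "A4:4::1"], "weight": 1},   # carries ~1/3 of flows
]

def pick_segment_list(flow_key: str):
    """Hash a flow key into the weighted range and pick a segment list."""
    total = sum(sl["weight"] for sl in segment_lists)
    h = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16) % total
    for sl in segment_lists:
        if h < sl["weight"]:
            return sl
        h -= sl["weight"]

sl = pick_segment_list("src=10.1.1.1,dst=10.2.2.2")
print(sl["sids"])   # one of the two SID lists; ~2:1 split over many flows
```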
An SRv6 TE Policy can be created using End SIDs, End.X SIDs, anycast SIDs, binding
SIDs, or a combination of them.
Figure 2-46 shows an example of manually configuring an SRv6 TE Policy through
CLI or NETCONF. For a manually configured SRv6 TE Policy, information, such as
the endpoint and color attributes, the preference values of candidate paths, and
segment lists, must be configured. Moreover, the preference values must be
unique. The first-hop SID of a segment list can be an End SID or End.X SID, but
cannot be any binding SID.
Figure 2-47 shows the process in which a controller dynamically generates and
delivers an SRv6 TE Policy to a forwarder. The process is as follows:
1. The controller collects information, such as network topology and SID
information, through BGP-LS.
2. The controller and headend forwarder establish a BGP peer relationship in the
IPv6 SR Policy address family.
3. The controller computes an SRv6 TE Policy and delivers it to the headend
forwarder through the BGP peer relationship. The headend forwarder then
generates SRv6 TE Policy entries.
If an SRv6 TE Policy is manually configured, you can specify a binding SID in the
static locator range for the SRv6 TE Policy. If a controller delivers an SRv6 TE Policy
to a forwarder, the SRv6 TE Policy itself does not carry any binding SID when
being delivered. After receiving the SRv6 TE Policy, the forwarder randomly
allocates a binding SID in the dynamic locator range to the SRv6 TE Policy and
then reports the policy's status information carrying the binding SID to the
controller through BGP-LS. In so doing, the controller can obtain the binding SID
of the SRv6 TE Policy and use the binding SID to orchestrate an SRv6 path.
Based on the VPN SID 4::100, PE2 searches its "My Local SID Table" for a
matching End.DT4 SID. According to the instruction bound to the SID, PE2
decapsulates the packet, pops the SRH and IPv6 header, searches the routing
table of the VPN instance corresponding to the VPN SID 4::100 based on the
destination address in the packet payload, and forwards the packet to CE2.
SBFD Implementation
Figure 2-50 shows how SBFD is implemented through the communication
between an initiator and a reflector. Before link detection, the initiator and
reflector exchange SBFD control packets to advertise information, such as the
SBFD discriminator. In link detection, the initiator proactively sends an SBFD echo
packet, and the reflector loops this packet back. The initiator then determines the
local status based on the looped packet.
● The initiator is responsible for detection and runs an SBFD state machine and
a detection mechanism. The state machine only has up and down states, so
the initiator can only send packets in the up or down state and only receive
packets in the up or admin down state.
The initiator first sends an SBFD packet with an initial down state and a
destination port number of 7784 to the reflector.
● The reflector does not run any SBFD state machine or detection mechanism.
Therefore, it does not proactively send any SBFD echo packet, but constructs
an SBFD packet to be looped back instead.
After receiving the SBFD packet sent by the initiator, the reflector checks
whether the SBFD discriminator carried by the packet matches the locally
configured global SBFD discriminator. If they do not match, the packet is
discarded. If they match and the reflector is in the working state, the reflector
constructs an SBFD packet to be looped back. If they match and the reflector
is not in the working state, the reflector sets the packet state to admin down.
● Initial state: Before an SBFD packet is sent from the initiator to the reflector,
the initial state on the initiator is down.
● State transition: After receiving a looped packet that is in the up state, the
initiator sets the local state to up. If the initiator receives a looped packet that
is in the admin down state, it sets the local state to down. However, if the
initiator does not receive any looped packet before the timer expires, it also
sets the local state to down.
● State retention: If the initiator is in the up state and receives a looped packet
that is also in the up state, the initiator remains in the local up state. If the
initiator is in the down state and receives a looped packet that is in the admin
down state or receives no packet after the timer expires, the initiator remains
in the local down state.
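The initiator behavior above (initial state, state transition, and state retention) can be summarized in a small state machine. This is a conceptual sketch, not an implementation.

```python
# Sketch of the SBFD initiator state machine: only up/down states, driven
# by looped packets ("up", "admin-down") or by a detection timeout.

class SbfdInitiator:
    def __init__(self):
        self.state = "down"        # initial state before any looped packet

    def on_looped_packet(self, pkt_state: str):
        if pkt_state == "up":
            self.state = "up"      # looped packet in up state -> local up
        elif pkt_state == "admin-down":
            self.state = "down"    # reflector not working -> local down

    def on_timeout(self):
        self.state = "down"        # no looped packet before timer expiry

i = SbfdInitiator()
print(i.state)                 # down (initial state)
i.on_looped_packet("up")
print(i.state)                 # up (state transition)
i.on_looped_packet("admin-down")
print(i.state)                 # down
i.on_timeout()
print(i.state)                 # down (state retention)
```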
Figure 2-52 shows the SBFD for SRv6 TE Policy detection process.
1. After SBFD for SRv6 TE Policy is enabled on the headend, the mapping
between the destination IPv6 address and discriminator is configured on the
headend. If multiple segment lists exist in the SRv6 TE Policy, the remote
discriminators of the corresponding SBFD sessions are the same.
2. The headend sends an SBFD packet encapsulated with a SID stack
corresponding to the SRv6 TE Policy.
3. After the endpoint device receives the SBFD packet, it returns an SBFD reply
through the shortest IPv6 link.
4. If the headend receives the SBFD reply, it considers that the corresponding
segment list in the SRv6 TE Policy is normal. Otherwise, it considers that the
segment list is faulty. If all the segment lists referenced by a candidate path
are faulty, SBFD triggers a candidate path switchover.
SBFD return packets are forwarded over IPv6. If the primary paths of multiple
SRv6 TE Policies between two nodes differ due to different path constraints but
SBFD return packets are transmitted over the same path, a fault in the return path
may cause all involved SBFD sessions to go down. As a result, all the SRv6 TE
Policies between the two nodes go down. The SBFD sessions of multiple segment
lists in the same SRv6 TE Policy also have this problem.
By default, if HSB protection is not enabled for an SRv6 TE Policy, SBFD detects all
the segment lists only in the candidate path with the highest preference in the
SRv6 TE Policy. With HSB protection enabled, SBFD can detect all the segment lists
referenced by candidate paths with the highest and second highest priorities in the
SRv6 TE Policy. If all the segment lists referenced by the candidate path with the
highest preference are faulty, a switchover to the HSB path is triggered.
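The switchover rule just described can be sketched with a toy data model: the highest-preference candidate path carries traffic as long as at least one of its SBFD-monitored segment lists is up; when all of them fail, the HSB (second-preference) path takes over.

```python
# Sketch of SBFD-driven candidate-path selection in an SRv6 TE Policy.
# Toy data model, not vendor code.

candidate_paths = [
    {"preference": 200, "seglists_up": [False, False]},  # primary, all faulty
    {"preference": 100, "seglists_up": [True]},          # HSB backup
]

def active_preference():
    """Return the preference of the candidate path currently carrying traffic."""
    for cp in sorted(candidate_paths, key=lambda c: -c["preference"]):
        if any(cp["seglists_up"]):   # usable while any segment list passes SBFD
            return cp["preference"]
    return None                      # policy down: no usable candidate path

print(active_preference())  # 100: all primary segment lists faulty -> HSB path
```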
1 or 8: When the BGP peer relationships between the controller and all forwarders go down, the SRv6 TE Policy on PE1 is deleted. As a result, the corresponding VPNv4 route cannot recurse to the SRv6 TE Policy. Because route recursion is still required, the VPNv4 route recurses to an SRv6 BE path (if any exists on the network). In this hard convergence scenario, packet loss occurs when traffic is switched to the SRv6 BE path.
Egress Protection
Services that recurse to an SRv6 TE Policy must be strictly forwarded through the
path defined using the segment lists in the SRv6 TE Policy. Considering that the
egress of the SRv6 TE Policy is also fixed, if a single point of failure occurs on the
egress, service forwarding may fail. To prevent this problem, configure egress
protection for the SRv6 TE Policy.
Figure 2-54 shows a typical SRv6 egress protection scenario where an SRv6 TE
Policy is deployed between PE3 and PE1. PE1 is the egress of the SRv6 TE Policy,
and PE2 provides protection for PE1 to enhance reliability.
In scenarios where public IP routes are recursed to SRv6 TE Policies, TTLs are
processed in either of the following modes:
● Uniform mode: The headend reduces the TTL value in an inner packet by 1
and maps it to the IPv6 TTL field. The TTL is then processed on the IPv6
network in a standard way. The endpoint reduces the IPv6 TTL value by 1 and
maps it to the TTL field in the inner packet.
● Pipe mode: The TTL value in an inner packet is reduced by 1 only on the
headend and endpoint, with the entire SRv6 TE Policy being treated as one
hop.
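As a toy illustration of the two modes (reading the text literally: the inner TTL is decremented once on the headend and once on the endpoint in both modes), the difference is whether the transit hops inside the policy are visible in the inner packet's TTL. Values are illustrative.

```python
# Sketch of uniform vs. pipe TTL handling for routes recursed to an SRv6
# TE Policy. Conceptual model following the text, not device behavior.

def traverse(inner_ttl: int, mode: str, transit_hops: int) -> int:
    """Inner-packet TTL after the endpoint decapsulates."""
    if mode == "uniform":
        hop_limit = inner_ttl - 1   # headend: decrement, map to IPv6 hop limit
        hop_limit -= transit_hops   # standard per-hop decrement on the IPv6 network
        return hop_limit - 1        # endpoint: decrement, map back to inner TTL
    # pipe: decremented only on headend and endpoint; transit hops invisible
    return inner_ttl - 2

print(traverse(64, "uniform", 3))  # 59: transit hops show up in the inner TTL
print(traverse(64, "pipe", 3))     # 62: the whole policy hides its interior
```

In pipe mode, a tracert from the VPN side therefore cannot see the P devices inside the policy, whereas in uniform mode it can.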
IPv6 TI-LFA
In Figure 2-55, if the P space and Q space do not intersect, RLFA requirements fail
to be fulfilled, and RLFA cannot compute a backup path. If a fault occurs on the
link between DeviceB and DeviceE, DeviceB forwards data packets to DeviceC.
DeviceC is not a Q node and has no FRR entry directly to the destination IP
address of DeviceF. In this situation, DeviceC has to recompute a path. The cost of
the link between DeviceC and DeviceD is 1000. DeviceC considers that the optimal
path to DeviceF passes through DeviceB. Therefore, DeviceC transmits the packet
back to DeviceB, leading to a loop and resulting in a forwarding failure.
TI-LFA FRR protects links and nodes on SRv6 tunnels. Figure 2-56 shows how SRv6
TI-LFA FRR is implemented.
If a fault occurs on the link between DeviceB and DeviceE, DeviceB directly
enables TI-LFA FRR backup entries and adds new path information (End.X SIDs of
DeviceC and DeviceD) to the packets to ensure that the data packets can be
forwarded along the backup path.
Microloops are transient loops that could occur on a network following non-
simultaneous convergence of IGP-running network nodes because IGPs use
distributed link state databases (LSDBs). This type of loop disappears after all the
nodes on the forwarding path complete convergence. Microloops result in a series
of issues, including packet loss, jitter, and out-of-order packets, and therefore must
be avoided.
To prevent the microloop, after DeviceB fails, DeviceA delays its convergence until
DeviceF finishes convergence and switches traffic to a TI-LFA backup path. After
DeviceA completes convergence, it switches traffic from the TI-LFA backup path to
the converged path.
To prevent the microloop, after DeviceA performs a traffic switchback, it adds E2E
explicit path information (for example, End.X SID of DeviceB) to packets and
forwards the packets to DeviceB. After receiving the packets, DeviceB forwards
them to DeviceC based on SRH information in the packets.
After DeviceB converges, DeviceA stops adding an SRH to packets and forwards
the packets to DeviceC in IPv6 forwarding mode.
If the destination address is an SRv6 SID, ICMPv6 ping requires the use of SRv6
OAM extensions to check the reachability to the address. If this requirement is not
met, ICMPv6 ping does not work in this case. Currently, SRv6 OAM extensions are
implemented by setting the O-bit (OAM bit) in the SRH or by inserting End.OP
SIDs at an appropriate place in the SRH.
SRH O-bit
The O-bit is located in the Flags field of the SRH shown in Figure 2-1. Figure 2-59 shows the format of the Flags field.
The O-bit indicates that the packet is an OAM packet. If the O-bit is set, each SRv6 endpoint node copies the received data packet, adds a timestamp to the copy, and sends the copy to the control plane for further processing, for example, analysis by an analyzer. To prevent the same packet from being processed repeatedly, the control plane does not respond to upper-layer IPv6 protocol operations, such as ICMPv6 ping.
Because each SRv6 endpoint node processes packets carrying the O-bit, segment-by-segment detection can be implemented based on the O-bit.
End.OP SID
End.OP SID is a variant of the End function. It instructs a data packet to be sent to
the control plane for OAM processing. For details about the function of the
End.OP SID, see End.OP SID.
Only the SRv6 endpoint node that generates an End.OP SID can process ICMPv6
packets. Therefore, end-to-end detection can be implemented based on the
End.OP SID.
Overview
On an SRv6 network, if data forwarding fails over SRv6, the control plane for SRv6
tunnel establishment cannot detect this failure, making network maintenance
difficult. SRv6 SID ping and tracert can be used to detect such faults. Both SRv6
SID ping and tracert check network connectivity and host reachability. The
difference is that SRv6 SID tracert can also locate faulty points.
Assume that an SRv6 SID ping operation is initiated on Device A. The ping
operation process is as follows:
If no SRv6 SID is assigned to the last hop, Device A must specify Device C's SRv6
SID as the destination IPv6 address of the SRv6 path.
b. After receiving the ICMPv6 request packet, Device B sends an ICMPv6
response packet to Device A and forwards the ICMPv6 request packet to
Device C.
▪ If the next-layer SID is Device C's SRv6 SID, the check is successful.
Device C then sends an ICMPv6 response packet to Device A. In this
case, you can view detailed information about the ping operation on
Device A.
▪ If the next-layer SID is not Device C's SRv6 SID, the check fails.
Device C then discards the ICMPv6 request packet. In this case, a
timeout message is displayed on Device A.
If no SRv6 SID is assigned to the last hop, Device A must specify Device C's SRv6
SID as the destination IPv6 address of the SRv6 tunnel.
b. After receiving the UDP packet, Device B changes the value of the TTL
field to 63, sends an ICMPv6 Port Unreachable packet to Device A, and
forwards the UDP packet to Device C.
▪ If the next-layer SID is Device C's SRv6 SID, the check is successful.
Device C then sends an ICMPv6 Port Unreachable packet to Device A.
In this case, you can view detailed information about the tracert
operation on Device A.
▪ If the next-layer SID is not Device C's SRv6 SID, the check fails.
Device C then discards the UDP packet. In this case, a timeout
message is displayed on Device A.
Overview
On an SRv6 network, if data forwarding fails over SRv6, the control plane for SRv6
tunnel establishment cannot detect this failure, making network maintenance
difficult. SRv6 TE Policy ping and tracert can be used to detect such faults. SRv6 TE
Policy ping checks network connectivity and host reachability, and SRv6 TE Policy
tracert checks network connectivity and locates fault points.
Assume that an SRv6 TE Policy ping operation is initiated on Device A. The ping
operation process is as follows:
1. Device A obtains the bottom SID from the SID stack of a segment list to
match an End.OP SID. Device A then encapsulates the matching End.OP SID
and segment list into an SRH, and constructs and sends an ICMPv6 request
packet to Device B.
If the bottom SID of the segment list is an End.X SID or binding SID, you must
manually specify an End.OP SID when initiating a ping operation to the SRv6 TE Policy.
– If SRv6 is enabled on Device B, Device B forwards the ICMPv6 request
packet based on the segment routing header (SRH).
– If SRv6 is disabled on Device B, Device B forwards the ICMPv6 request
packet based on the route.
2. After receiving the ICMPv6 request packet, Device C checks whether the SID
type is End.OP SID.
– If the SID type is End.OP SID, Device C sends an ICMPv6 response packet
to Device A. In this case, you can view detailed information about the
ping operation on Device A.
– If the SID type is not End.OP SID, Device C discards the ICMPv6 request
packet. In this case, a message is displayed on Device A, indicating that
the ping operation times out.
If the bottom SID of the segment list is an End.X SID or binding SID, you must
manually specify an End.OP SID when initiating a tracert operation to the SRv6 TE
Policy.
2. After receiving the UDP packet, Device B changes the value of the TTL field to
0 and sends an ICMPv6 timeout packet to Device A.
3. After receiving the ICMPv6 timeout packet, Device A increases the value of
the TTL field by 1 (the value now becomes 2) and continues to send the UDP
packet.
4. After receiving the UDP packet, Device B changes the value of the TTL field to
1 and sends the packet to Device C.
– If SRv6 is enabled on Device B, Device B forwards the UDP packet based
on the SRH.
– If SRv6 is disabled on Device B, Device B forwards the UDP packet based
on the route.
5. After receiving the UDP packet, Device C changes the value of the TTL field to
0 and checks whether the SID type is End.OP SID.
– If the SID type is End.OP SID, Device C sends an ICMPv6 port unreachable
packet to Device A. In this case, you can view detailed information about
the tracert operation on Device A.
– If the SID type is not End.OP SID, Device C discards the UDP packet. In
this case, a message is displayed on Device A, indicating that the tracert
operation times out.
6. If the SRv6 TE Policy contains multiple segment lists, steps 1 through 5 are repeated for each segment list in turn, because the segment lists are tested serially.
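The probing loop in steps 1 through 5 can be condensed into a toy model. Device names follow the text; the real exchange involves UDP probes and SRH encapsulation, which are omitted here.

```python
# Sketch of the tracert probing loop: the initiator raises the hop limit by
# 1 per probe until the End.OP-capable endpoint answers port-unreachable.
# Conceptual model only.

path = ["DeviceB", "DeviceC"]   # DeviceC holds the End.OP SID

def probe(ttl: int) -> str:
    """Return which device answers a probe sent with this hop limit."""
    if ttl <= len(path) - 1:
        # hop limit expires on an intermediate device
        return f"ICMPv6 time exceeded from {path[ttl - 1]}"
    # probe reaches the device owning the End.OP SID
    return f"ICMPv6 port unreachable from {path[-1]}"

for ttl in range(1, 3):
    print(probe(ttl))
# ICMPv6 time exceeded from DeviceB
# ICMPv6 port unreachable from DeviceC
```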
Terms
● SID: segment ID
● SR: Segment Routing
● TE: traffic engineering
Configuration Precautions
● Restriction: The SRv6 egress does not support deep load balancing. Guidelines: Properly plan services. Impact: Unexpected load imbalance may occur.
● Restriction: Only an SRH whose Next Header field is set to an IPv6 header is supported, and the SRH must be the first IPv6 extension header rather than follow other extension headers. Guidelines: None. Impact: Other packets are not identified and therefore cannot be forwarded through IPv6.
● Restriction: Nodes that use SRv6 binding SIDs for forwarding do not support packet fragmentation. Guidelines: Plan the SRv6 MTU properly. Impact: Packets with a length exceeding the IPv6 MTU are discarded.
Pre-configuration Tasks
Before you configure SRv6 services on a CM board, run the active port-srv6
command to activate SRv6 licenses in batches.
Procedure
Step 1 Run system-view
Step 3 Run active port-srv6 slot slot-id card cardid port port-list
----End
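The activation command can be issued as follows; the slot, card, and port values are hypothetical examples and must match the actual CM board position:

```
<HUAWEI> system-view
[~HUAWEI] active port-srv6 slot 1 card 1 port 0
```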
Usage Scenario
L3VPNv4 over SRv6 BE allows SRv6 BE paths on public networks to carry L3VPNv4
data. The implementation of L3VPNv4 over SRv6 BE involves establishing SRv6 BE
paths, advertising VPN routes, and forwarding data.
As shown in Figure 2-62, PE1 and PE2 communicate through an IPv6 public
network. The VPN is a traditional IPv4 network. SRv6 BE paths can be deployed on
the IPv6 public network to carry L3VPNv4 services on the VPN.
Pre-configuration Tasks
Before configuring L3VPNv4 over SRv6 BE, complete the following tasks:
● Configure a link layer protocol.
● Configure IP addresses for interfaces to ensure that neighboring devices are
reachable at the network layer.
Procedure
Step 1 Configure IPv6 IS-IS on each PE and P. For details, see Configuring Basic IPv6 IS-IS
Functions.
Step 2 Configure a VPN instance on each PE and enable the IPv4 address family for the
instance.
1. Run system-view
The system view is displayed.
2. Run ip vpn-instance vpn-instance-name
A VPN instance is created, and its view is displayed.
3. Run ipv4-family
The VPN instance IPv4 address family is enabled, and its view is displayed.
4. Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv4 address family.
5. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-
extcommunity ]
A VPN target is configured for the VPN instance IPv4 address family.
6. Run commit
The configuration is committed.
7. Run quit
Exit the VPN instance IPv4 address family view.
8. Run quit
Exit the VPN instance view.
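The substeps above can be combined into the following sketch on a PE; the instance name, RD, and VPN target values are hypothetical examples:

```
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 1:1 both
[*PE1-vpn-instance-vpna-af-ipv4] commit
[~PE1-vpn-instance-vpna-af-ipv4] quit
[~PE1-vpn-instance-vpna] quit
```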
Running the ip binding vpn-instance command deletes Layer 3 (including IPv4 and
IPv6) configurations, such as IP address and routing protocol configurations, on the
involved interface. If needed, reconfigure them after running the command.
11. Run ip address ip-address { mask | mask-length }
Some Layer 3 functions, such as route exchange between the PE and CE, can
be configured only after an IP address is configured for the VPN interface
on the PE.
12. Run commit
Step 3 Configure IPv4 route exchange between the PE and CE. For details, see
Configuring Route Exchange Between PEs and CEs.
A router ID is configured.
3. Run peer ipv6-address as-number { as-number-plain | as-number-dot }
The function to exchange VPN-IPv4 routes with the BGP peer is enabled.
7. Run commit
9. Run quit
Exit the BGP view.
Step 5 Establish an SRv6 BE path between PEs.
1. Run segment-routing ipv6
SRv6 is enabled, and the SRv6 view is displayed.
2. Run encapsulation source-address ipv6-address [ ip-ttl ttl-value ]
A source address is specified for SRv6 EVPN encapsulation.
3. Run locator locator-name [ ipv6-prefix ipv6-address prefix-length [ static
static-length | args args-length ] * ]
A SID node route locator is configured.
4. Run opcode func-opcode end-dt4 vpn-instance vpn-instance-name
A static SID's operation code (opcode) is configured.
5. Run quit
Exit the SRv6 locator view.
6. Run quit
Exit the SRv6 view.
7. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
8. Run ipv4-family vpnv4
The BGP VPNv4 address family view is displayed.
9. Run peer ipv6-address prefix-sid
The device is enabled to exchange IPv4 prefix SIDs with a specified IPv6 peer.
10. Run quit
Exit the BGP VPNv4 address family view.
11. Run ipv4-family vpn-instance vpn-instance-name
The BGP VPN instance IPv4 address family view is displayed.
12. Run segment-routing ipv6 best-effort
The device is enabled to perform private network route recursion based on
the SIDs carried in routes.
13. Run segment-routing ipv6 locator locator-name [ auto-sid-disable ]
VPN routes are enabled to carry the SID attribute.
If auto-sid-disable is not specified, dynamic SID allocation is supported. If
static SIDs are configured for the locator specified by locator-name, the
static SIDs are used; otherwise, dynamically allocated SIDs are used.
14. Run commit
The configuration is committed.
Step 6 Enable IS-IS SRv6 on PEs.
----End
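The SRv6 BE path establishment in Step 5 can be sketched as follows on PE1. All addresses, the AS number, the locator name, and the opcode value are hypothetical examples, and the BGP VPNv4 peer is assumed to have been configured in the earlier steps:

```
[~PE1] segment-routing ipv6
[*PE1-segment-routing-ipv6] encapsulation source-address 2001:db8:1::1
[*PE1-segment-routing-ipv6] locator test1 ipv6-prefix 2001:db8:100:: 64 static 32
[*PE1-segment-routing-ipv6-locator] opcode ::100 end-dt4 vpn-instance vpna
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 2001:db8:2::2 prefix-sid
[*PE1-bgp-af-vpnv4] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 best-effort
[*PE1-bgp-vpna] segment-routing ipv6 locator test1
[*PE1-bgp-vpna] commit
```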
Usage Scenario
EVPN L3VPN over SRv6 BE uses public SRv6 BE to carry EVPN L3VPN services. The
implementation of EVPN L3VPN over SRv6 BE involves deploying SRv6 BE,
implementing EVPN route interworking, and forwarding data.
As shown in Figure 2-63, PE1 and PE2 communicate through an IPv6 public
network. SRv6 BE is deployed on the public IPv6 network to carry EVPN L3VPN
services.
Pre-configuration Tasks
Before configuring EVPN L3VPN over SRv6 BE, complete the following tasks:
Procedure
Step 1 Configure IPv6 IS-IS on each PE and P. For details, see Configuring Basic IPv6 IS-IS
Functions.
2. Run ipv6-family
The VPN instance IPv6 address family is enabled, and the view of this address
family is displayed.
3. Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv6 address family.
4. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-
extcommunity ] evpn
VPN targets are added to the VPN target list of the VPN instance IPv6 address
family, so that routes of the VPN instance can be added to the routing table
of the EVPN instance configured with a matching VPN target.
5. (Optional) Run tnl-policy policy-name evpn
The EVPN routes imported to the VPN instance IPv6 address family are
enabled to be associated with a tunnel policy.
6. (Optional) Run import route-policy policy-name evpn
The VPN instance IPv6 address family is associated with an import route-
policy that is used to filter routes imported from the EVPN instance to the
VPN instance IPv6 address family. To control route import more precisely,
specify an import route-policy to filter routes and set route attributes for
routes that meet the filter criteria.
7. (Optional) Run export route-policy policy-name evpn
The VPN instance IPv6 address family is associated with an export route-
policy that is used to filter routes advertised from the VPN instance IPv6
address family to the EVPN instance. To control route advertisement more
precisely, specify an export route-policy to filter routes and set route
attributes for routes that meet the filter criteria.
8. Run quit
Exit the VPN instance IPv6 address family view.
9. Run quit
Exit the VPN instance view.
Step 3 Bind the L3VPN instance to an access-side interface.
1. Run interface interface-type interface-number
The interface view is displayed.
2. Run ip binding vpn-instance vpn-instance-name
The L3VPN instance is bound to the interface.
3. (Optional) Run ipv6 enable
IPv6 is enabled on the interface.
4. Run ip address ip-address { mask | mask-length } or ipv6 address { ipv6-
address prefix-length | ipv6-address/prefix-length }
An IPv4/IPv6 address is configured for the interface.
5. Run quit
Exit the interface view.
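The interface binding in Step 3 can be sketched as follows; the interface number, instance name, and addresses are hypothetical examples:

```
[~PE1] interface GigabitEthernet0/1/0
[*PE1-GigabitEthernet0/1/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet0/1/0] ipv6 enable
[*PE1-GigabitEthernet0/1/0] ipv6 address 2001:db8:10::1 64
[*PE1-GigabitEthernet0/1/0] quit
[*PE1] commit
```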
Step 4 Configure BGP EVPN peer relationships.
If a BGP RR needs to be configured on the network, establish BGP EVPN peer relationships
between all the PEs and the RR.
1. Run bgp { as-number-plain | as-number-dot }
If loopback interfaces are used to establish a BGP connection, you are advised to run
the peer connect-interface command on both ends to ensure the connectivity. If this
command is run on only one end, the BGP connection may fail to be established.
4. Run ipv4-family vpn-instance vpn-instance-name or ipv6-family vpn-
instance vpn-instance-name
The device is enabled to advertise IP prefix routes. IP prefix routes are used to
advertise host IP routes as well as routes on the network segment where the
host resides.
The device is enabled to exchange EVPN routes with the specified peer or
peer group.
The device is enabled to send packets over a default route when the recursive
next-hop IP address is unavailable.
A value is set for the Next Header field in an SRv6 extension header.
By default, the Next Header field value is 143 in an SRv6 extension header. To
be compatible with earlier versions, you can perform this step to change the
value to 59.
2. Run segment-routing ipv6
In EVPN scenarios, end-dx4 and end-dx6 are used to allocate SIDs based on
interfaces to which VPN instances are bound, and end-dt4 and end-dt6 are
used to allocate SIDs based on VPN instances.
6. Run quit
Exit the SRv6 locator view.
7. Run quit
Exit the SRv6 view.
8. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
9. Run l2vpn-family evpn
The BGP EVPN address family view is displayed.
10. Run peer { ipv6-address | group-name } advertise encap-type srv6
The device is enabled to send EVPN routes carrying the SRv6 encapsulation
attribute to peers.
11. Run quit
Exit the BGP EVPN address family view.
12. Run ipv4-family vpn-instance vpn-instance-name or ipv6-family vpn-
instance vpn-instance-name
The BGP-VPN instance IPv4/IPv6 address family view is displayed.
13. Run segment-routing ipv6 locator locator-name evpn
The device is enabled to add the SID attribute to VPN routes before sending
the routes to EVPN.
14. Run segment-routing ipv6 best-effort evpn
The device is enabled to perform route recursion based on the SID attribute
carried in EVPN routes.
15. (Optional) Configure the device to allocate SIDs to BGP VPN routes based on
next hops, thereby implementing SID-based load balancing.
To configure the device to allocate SIDs to all BGP VPN routes:
– For EVPN L3VPNv4 services, run the segment-routing ipv6 apply-sid all-
nexthop evpn command in the BGP-VPN instance IPv4 address family
view.
– For EVPN L3VPNv6 services, run the segment-routing ipv6 apply-sid all-
nexthop evpn command in the BGP-VPN instance IPv6 address family
view.
To configure the device to allocate a SID to a BGP VPN route with a specified
next hop:
– For EVPN L3VPNv4 services, run the segment-routing ipv6 apply-sid
specify-nexthop evpn command in the BGP-VPN instance IPv4 address
family view, and then run the nexthop <nexthop-address> interface
{ <interface-name> | <interfaceType> <interfaceNum> } command to
specify a next hop and an outbound interface.
----End
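The SRv6-specific BGP EVPN steps above can be sketched as follows on PE1. The AS number, peer address, instance name, and locator name are hypothetical examples, and the EVPN peer relationship and locator are assumed to already exist:

```
[~PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:db8:2::2 advertise encap-type srv6
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 locator test1 evpn
[*PE1-bgp-vpna] segment-routing ipv6 best-effort evpn
[*PE1-bgp-vpna] commit
```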
Usage Scenario
EVPN VPWS over SRv6 BE uses public SRv6 BE to carry EVPN services. The
implementation of EVPN VPWS over SRv6 BE involves establishing SRv6 BE paths,
advertising EVPN routes, and forwarding data.
As shown in Figure 2-64, PE1 and PE2 communicate through an IPv6 public
network. SRv6 BE is deployed on the public IPv6 network to carry Layer 2 EVPN
services.
Pre-configuration Tasks
Before configuring EVPN VPWS over SRv6 BE, complete the following tasks:
● Configure a link layer protocol.
● Configure IP addresses for interfaces to ensure that neighboring devices are
reachable at the network layer.
Procedure
Step 1 Configure IPv6 IS-IS on each PE and P. For details, see Configuring Basic IPv6 IS-IS
Functions.
Step 2 Configure EVPN and EVPL instances on each PE.
1. Run system-view
The system view is displayed.
2. Run evpn source-address ip-address
An EVPN source address is configured.
In scenarios where a CE is dual-homed or multi-homed to PEs, you need to
configure an EVPN source address on each PE to generate route distinguishers
(RDs) for Ethernet segment routes and Ethernet auto-discovery per ES routes.
3. Run evpn vpn-instance vpn-instance-name vpws
An EVPN instance that works in VPWS mode is created.
4. Run route-distinguisher route-distinguisher
An RD is configured for the EVPN instance.
An EVPN instance takes effect only after an RD is configured for it. The RDs of
different EVPN instances on a PE must be different.
VPN targets are BGP extended community attributes used to control the
receiving and advertisement of EVPN routes. A maximum of eight VPN targets
can be configured using the vpn-target command. To configure more VPN
targets for an EVPN instance address family, run the vpn-target command
multiple times.
An RT of an Ethernet segment route is generated using the middle six bytes of an ESI.
For example, if the ESI is 0011.1001.1001.1001.1002, the Ethernet segment route uses
11.1001.1001.10 as its RT.
6. Run quit
A specified EVPN instance that works in VPWS mode is bound to the current
EVPL instance.
9. Run local-service-id service-id remote-service-id service-id
The packets of the current EVPL instance are configured to carry the local and
remote service IDs.
10. (Optional) Run mtu-match ignore
The MTU matching check is ignored for the EVPL instance. In scenarios where
a Huawei device interworks with a non-Huawei device through an EVPN
VPWS, if the non-Huawei device does not support any MTU matching check
for an EVPL instance working in SRv6 mode, run the mtu-match ignore
command to ignore the MTU matching check.
11. (Optional) Run load-balancing ignore-esi
The device is disabled from checking ESI validity during EVPL instance load
balancing.
Before running this command, ensure that the Layer 2 interface on which a Layer 2
sub-interface is to be created does not have the port link-type dot1q-tunnel
command configuration. If this configuration exists, run the undo port link-type
command to delete the configuration.
In addition to a Layer 2 sub-interface, an Ethernet main interface, Layer 3 sub-
interface, or Eth-Trunk interface can also function as an AC interface.
2. Run evpl instance evpl-id
The interface used to set up a TCP connection with the specified BGP peer is
specified.
5. Run l2vpn-family evpn
The device is enabled to exchange EVPN routes with the specified peer.
7. Run quit
----End
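The EVPN and EVPL instance configuration in Steps 2 and onward can be sketched as follows on PE1; the source address, instance names, RD, VPN target, and service IDs are hypothetical examples, and intermediate steps not shown in this section (such as binding the EVPN instance to the EVPL instance) are omitted:

```
[~PE1] evpn source-address 10.1.1.1
[*PE1] evpn vpn-instance evpn1 vpws
[*PE1-evpn-instance-evpn1] route-distinguisher 100:1
[*PE1-evpn-instance-evpn1] vpn-target 1:1 both
[*PE1-evpn-instance-evpn1] quit
[*PE1] evpl instance 1
[*PE1-evpl-1] local-service-id 100 remote-service-id 200
[*PE1-evpl-1] commit
```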
● Run the display bgp evpn evpl command to check all EVPL instance
information.
● Run the display bgp evpn { all | route-distinguisher route-distinguisher |
vpn-instance vpn-instance-name } routing-table [ { ad-route | es-route |
inclusive-route | mac-route | prefix-route } prefix ] command to check BGP
EVPN routing information.
● Run the display segment-routing ipv6 local-sid end-dx2 evpl-instance
evpl-id forwarding command to check information about the SRv6 BE local
SID table.
Usage Scenario
EVPN VPLS uses the EVPN E-LAN model to carry multipoint-to-multipoint (MP2MP) VPLS services. EVPN VPLS over
SRv6 BE uses public SRv6 BE paths to carry EVPN E-LAN services. The
implementation of EVPN VPLS over SRv6 BE involves establishing SRv6 BE paths,
advertising EVPN routes, and forwarding data.
As shown in Figure 2-65, PE1 and PE2 communicate through an IPv6 public
network. SRv6 BE is deployed on the public IPv6 network to carry EVPN E-LAN
services.
Pre-configuration Tasks
Before configuring EVPN VPLS over SRv6 BE, complete the following tasks:
● Configure basic IPv6 IS-IS functions on PEs and the P.
● Configure BD-EVPN functions on PEs.
Procedure
Step 1 Establish a BGP EVPN peer relationship between PEs.
1. Run system-view
The system view is displayed.
2. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
3. (Optional) Run router-id ipv4-address
A BGP router ID is configured.
4. Run peer ipv6-address as-number { as-number-plain | as-number-dot }
The remote PE is configured as a peer.
5. Run peer ipv6-address connect-interface loopback interface-number
The interface on which a TCP connection to the specified BGP peer is
established is specified.
6. Run l2vpn-family evpn
The BGP EVPN address family view is displayed.
7. Run peer ipv6-address enable
The device is enabled to exchange EVPN routes with the specified peer.
8. Run peer { ipv6-address | group-name } advertise encap-type srv6
The device is enabled to send EVPN routes carrying SRv6-encapsulated
attributes to a specified peer or peer group.
9. Run quit
Exit the BGP EVPN address family view.
10. Run quit
Exit the BGP view.
11. Run commit
The configuration is committed.
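The BGP EVPN peer relationship in Step 1 can be sketched as follows on PE1; the AS number, router ID, peer address, and loopback interface number are hypothetical examples:

```
[~PE1] bgp 100
[*PE1-bgp] router-id 10.1.1.1
[*PE1-bgp] peer 2001:db8:2::2 as-number 100
[*PE1-bgp] peer 2001:db8:2::2 connect-interface loopback 0
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:db8:2::2 enable
[*PE1-bgp-af-evpn] peer 2001:db8:2::2 advertise encap-type srv6
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
```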
Step 2 Deploy SRv6 BE between PEs.
1. (Optional) Run evpn srv6 next-header-field { 59 | 143 }
A value is set for the Next Header field in an SRv6 extension header.
By default, the Next Header field value is 143 in an SRv6 extension header. To
be compatible with earlier versions, you can perform this step to change the
value to 59.
----End
Usage Scenario
SRv6 TI-LFA FRR protects links and nodes on SR tunnels. If a link or node fails,
traffic is rapidly switched to a backup path, minimizing traffic loss.
In some LFA or remote LFA (RLFA) scenarios, the P space and Q space neither
share nodes nor have directly connected neighbors. If a link or node fails, no
backup path can be calculated, causing traffic loss and a failure to meet
reliability requirements. In this situation, TI-LFA can be used to resolve the issue.
Pre-configuration Tasks
Before configuring IS-IS SRv6 TI-LFA FRR, enable IS-IS SRv6.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run isis [ process-id ]
An IS-IS process is created, and the IS-IS view is displayed.
Step 3 Run ipv6 frr
The IPv6 IS-IS FRR view is displayed.
Step 4 Run loop-free-alternate [ level-1 | level-2 | level-1-2 ]
IPv6 IS-IS FRR is enabled, so that a loop-free backup link is generated using the
LFA algorithm.
Step 5 Run ti-lfa [ level-1 | level-2 | level-1-2 ]
IS-IS SRv6 TI-LFA is enabled.
Step 6 (Optional) Run tiebreaker { node-protecting | lowest-cost | non-ecmp | srlg-
disjoint } preference preference [ level-1 | level-2 | level-1-2 ]
The solution of selecting a backup path for IS-IS SRv6 TI-LFA FRR is set.
By default, the preference value is 40 for the node-protection path first solution,
20 for the smallest-cost path first solution, 10 for the non-ECMP path first
solution, and 5 for the shared risk link group (SRLG) disjoint path first solution. A
larger value indicates a higher priority.
Before configuring the srlg-disjoint parameter, you need to run the isis srlg
srlg-value command in the IS-IS interface view to configure the IS-IS SRLG function.
Step 7 (Optional) After the preceding configurations are complete, IS-IS SRv6 TI-LFA is
enabled on all IPv6 IS-IS interfaces. If you do not want to enable IS-IS SRv6 TI-LFA
on some interfaces, perform the following steps:
1. Run quit
Exit the IS-IS FRR view.
2. Run quit
Exit the IS-IS view.
3. Run interface interface-type interface-number
The interface view is displayed.
4. Run isis ipv6 ti-lfa disable [ level-1 | level-2 | level-1-2 ]
IPv6 IS-IS TI-LFA is disabled on the interface.
Step 8 If a network fault occurs or is rectified, an IGP performs route convergence. A
transient forwarding status inconsistency between nodes results in different
convergence rates on devices, posing the risk of microloops. To prevent
microloops, perform the following steps:
1. Run quit
Exit the interface view.
----End
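Steps 1 through 6 can be sketched as follows; the IS-IS process ID, level, and tiebreaker preference value are hypothetical examples:

```
[~PE1] isis 1
[*PE1-isis-1] ipv6 frr
[*PE1-isis-1-ipv6-frr] loop-free-alternate level-1
[*PE1-isis-1-ipv6-frr] ti-lfa level-1
[*PE1-isis-1-ipv6-frr] tiebreaker node-protecting preference 50 level-1
[*PE1-isis-1-ipv6-frr] quit
[*PE1-isis-1] quit
[*PE1] commit
```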
Pre-configuration Tasks
Before configuring an SRv6 TE Policy, configure an IGP to implement network
layer connectivity.
Context
An SRv6 TE Policy is a set of candidate paths consisting of one or more segment
lists, that is, segment ID (SID) lists. Each SID list identifies an end-to-end path
from the source to the destination, instructing a device to forward traffic through
the path rather than the shortest path computed using an IGP. The header of a
packet steered into an SRv6 TE Policy is augmented with an ordered list of
segments associated with that SR Policy, so that other devices on the network can
execute the instructions encapsulated into the list.
An SRv6 TE Policy consists of the following parts:
● Headend: node where an SRv6 TE Policy is generated.
● Color: extended community attribute of an SRv6 TE Policy. A BGP route can
recurse to an SRv6 TE Policy if the route has the same color attribute as the
policy.
Context
An SRv6 TE Policy can contain multiple candidate paths defined using segment
lists. SIDs, such as End SID and End.X SID, are mandatory for SRv6 TE Policies. The
SIDs can either be configured manually or generated dynamically using an IGP.
● In scenarios where SRv6 TE Policies are configured manually, if dynamic SIDs
are used for the SRv6 TE Policies, the SIDs may change after an IGP restart. In
this case, you need to manually adjust the SRv6 TE Policies for them to
remain up. This complicates maintenance and therefore dynamic SIDs cannot
be used on a large scale. You are advised to configure SIDs manually and not
to use dynamic SIDs.
● In scenarios where SRv6 TE Policies are delivered dynamically by a controller,
you are also advised to configure SIDs manually. Although the controller can
detect SID changes through BGP-LS, SIDs that are generated dynamically
using an IGP change randomly, complicating routine maintenance and fault
locating.
Procedure
Step 1 Enable SRv6.
1. Run system-view
A value is set for the Next Header field in an SRv6 extension header.
By default, the Next Header field value is 143 in an SRv6 extension header. To
be compatible with earlier versions, you can perform this step to change the
value to 59.
3. Run segment-routing ipv6
SRv6 is enabled.
4. Run commit
SRv6 tunnels are established based on SIDs. The SIDs can either be configured
manually or generated dynamically using IS-IS. Use either of the following
methods to configure SRv6 SIDs:
----End
Context
SRv6 TE Policies are used to direct traffic to traverse an SRv6 TE network. Each
SRv6 TE Policy can have multiple candidate paths with different preferences. A
valid candidate path with the highest preference is selected as the primary path of
the SRv6 TE Policy, and a valid candidate path with the second highest preference
as the hot-standby path.
Procedure
Step 1 Configure a segment list.
1. Run system-view
A segment list is created for SRv6 TE Policy candidate paths, and the segment
list view is displayed.
4. Run index index sid ipv6 ipv6-address
You can run this command multiple times to configure multiple SIDs. The
system generates a SID stack by arranging the SIDs in ascending order of
their index values. If a candidate path in an SRv6 TE Policy is preferentially
selected, traffic is forwarded using the segment list of that candidate path. A
maximum of 10 SIDs can be configured for each segment list.
5. Run commit
----End
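Assuming a segment list named list1 has already been created under the SRv6 view (its creation command is not shown in this excerpt), the SID stack might be built as follows, with hypothetical SID values:

```
[*PE1-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 2001:db8:100::101
[*PE1-segment-routing-ipv6-segment-list-list1] index 20 sid ipv6 2001:db8:200::102
[*PE1-segment-routing-ipv6-segment-list-list1] commit
```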
A BGP route can recurse to an SRv6 TE Policy based on the color value and next-hop address in the
route.
Context
The route coloring process is as follows:
1. Configure a route-policy and set a specific color value for the desired route.
2. Apply the route-policy to a BGP peer or a VPN instance as an import or export
policy.
Procedure
Step 1 Configure a route-policy.
1. Run system-view
The system view is displayed.
2. Run route-policy route-policy-name { deny | permit } node node
A route-policy is created and the route-policy view is displayed.
3. (Optional) Configure an if-match clause as a route-policy filter criterion. You
can add or modify the Color Extended Community only for a route-policy that
meets the filter criterion.
For details about the configuration, see (Optional) Configuring an if-match
Clause.
4. Run apply extcommunity color color
The Color Extended Community is configured.
5. Run commit
The configuration is committed.
Step 2 Apply the route-policy.
● Apply the route-policy to a BGP IPv4 unicast peer.
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run peer { ipv4-address | group-name } as-number as-number
A BGP peer is created.
d. Run ipv4-family unicast
The IPv4 unicast address family view is displayed.
e. Run peer { ipv4-address | group-name } route-policy route-policy-name
{ import | export }
A BGP import or export route-policy is configured.
f. Run commit
The configuration is committed.
● Apply the route-policy to a BGP IPv6 unicast peer.
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run peer { ipv6-address | group-name } as-number as-number
A BGP peer is created.
d. Run ipv6-family unicast
The IPv6 unicast address family view is displayed.
e. Run peer { ipv6-address | group-name } route-policy route-policy-name
{ import | export }
A BGP import or export route-policy is configured.
f. Run commit
The configuration is committed.
● Apply the route-policy to a BGP VPNv4 peer.
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run peer { ipv6-address | group-name } as-number as-number
A BGP peer is created.
d. Run ipv4-family vpnv4
The BGP VPNv4 address family view is displayed.
e. Run peer { ipv6-address | group-name } enable
The BGP VPNv4 peer relationship is enabled.
f. Run peer { ipv6-address | group-name } route-policy route-policy-name
{ import | export }
A BGP import or export route-policy is configured.
g. Run commit
The configuration is committed.
● Apply the route-policy to a BGP EVPN peer.
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The BGP view is displayed.
c. Run peer { ipv6-address | group-name } as-number as-number
A BGP peer is created.
d. Run l2vpn-family evpn
The BGP EVPN address family view is displayed.
e. Run peer { ipv6-address | group-name } enable
The BGP EVPN peer relationship is enabled.
----End
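Taking the BGP IPv6 unicast case as an example, route coloring can be sketched as follows; the route-policy name, node number, color value, AS number, and peer address are hypothetical examples:

```
[~PE1] route-policy color100 permit node 10
[*PE1-route-policy] apply extcommunity color 0:100
[*PE1-route-policy] quit
[*PE1] bgp 100
[*PE1-bgp] ipv6-family unicast
[*PE1-bgp-af-ipv6] peer 2001:db8:2::2 route-policy color100 export
[*PE1-bgp-af-ipv6] commit
```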
Context
After an SRv6 TE Policy is configured, traffic needs to be steered into the SRv6 TE
Policy for forwarding. This process is called traffic steering. Currently, SRv6 TE
Policies can be used for various services, such as BGP L3VPN and EVPN services.
Procedure
Step 1 Configure a tunnel policy.
1. Run system-view
The system view is displayed.
2. Run tunnel-policy policy-name
A tunnel policy is created and the tunnel policy view is displayed.
3. (Optional) Run description description-information
A description is configured for the tunnel policy.
4. Run tunnel select-seq ipv6 srv6-te-policy load-balance-number load-
balance-number
The tunnel selection sequence and number of tunnels for load balancing are
configured.
5. Run commit
The configuration is committed.
Step 2 Configure a service to recurse to an SRv6 TE Policy.
● Configure a BGP L3VPN service to recurse to an SRv6 TE Policy.
For details about how to configure a BGP L3VPN service, see Configuring a
Basic BGP/MPLS IP VPN.
a. Run system-view
The system view is displayed.
b. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
c. Run ipv4-family
The VPN instance IPv4 address family view is displayed.
d. Run tnl-policy policy-name
A tunnel policy is applied to the VPN instance IPv4 address family.
e. Run commit
The configuration is committed.
● Configure a BGP L3VPNv6 service to recurse to an SRv6 TE Policy.
For details about how to configure a BGP L3VPNv6 service, see Configuring a
Basic BGP/MPLS IPv6 VPN.
a. Run system-view
The system view is displayed.
b. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
c. Run ipv6-family
The VPN instance IPv6 address family view is displayed.
d. Run tnl-policy policy-name
A tunnel policy is applied to the VPN instance IPv6 address family.
e. Run commit
The configuration is committed.
● Configure an EVPN service to recurse to an SRv6 TE Policy.
For details about how to configure an EVPN service, see 3.2.5 Configuring
BD-EVPN Functions.
To apply a tunnel policy to an EVPN L3VPN instance, perform the following
steps:
a. Run system-view
The system view is displayed.
b. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
c. Run ipv4-family or ipv6-family
The VPN instance IPv4 or IPv6 address family view is displayed.
d. Run tnl-policy policy-name evpn
A tunnel policy is applied to the EVPN L3VPN instance.
e. Run commit
The configuration is committed.
To apply a tunnel policy to a BD EVPN instance, perform the following steps:
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name bd-mode
The BD EVPN instance view is displayed.
c. Run tnl-policy policy-name
A tunnel policy is applied to the BD EVPN instance.
d. Run commit
The configuration is committed.
To apply a tunnel policy to an EVPN instance that works in EVPN VPWS mode,
perform the following steps:
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name vpws
The view of the EVPN instance that works in EVPN VPWS mode is
displayed.
c. Run tnl-policy policy-name
A tunnel policy is applied to the EVPN instance that works in EVPN VPWS
mode.
d. Run commit
----End
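Taking the BGP L3VPN case as an example, the tunnel policy configuration and its application can be sketched as follows; the policy name, load-balancing number, and VPN instance name are hypothetical examples:

```
[~PE1] tunnel-policy srv6policy
[*PE1-tunnel-policy-srv6policy] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE1-tunnel-policy-srv6policy] quit
[*PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] tnl-policy srv6policy
[*PE1-vpn-instance-vpna-af-ipv4] commit
```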
Prerequisites
SRv6 TE Policies have been configured.
Procedure
Step 1 Run the display srv6-te policy [ endpoint endpoint-ip color color-value | policy-
name name-value | binding-sid binding-sid ] command to check SRv6 TE Policy
details.
Step 2 Run the display srv6-te policy statistics command to check SRv6 TE Policy
statistics.
Step 3 Run the display srv6-te policy status { endpoint endpoint-ip color color-value |
policy-name name-value | binding-sid binding-sid } command to check SRv6 TE
Policy status to determine the reason why a specified SRv6 TE Policy cannot go up.
Step 4 Run the display srv6-te policy last-down-reason { endpoint endpoint-ip color
color-value | policy-name name-value | binding-sid binding-sid } command to
check records about events where SRv6 TE Policies or segment lists in SRv6 TE
Policies go down.
----End
Pre-configuration Tasks
Before configuring an SRv6 TE Policy, configure an IGP to implement network
layer connectivity.
Context
An SRv6 TE Policy can either be manually configured on a forwarder through CLI
or NETCONF, or be delivered to a forwarder after being dynamically generated
through a protocol, such as BGP, on a controller. The dynamic mode facilitates
network deployment.
The process for a controller to dynamically generate and deliver an SRv6 TE Policy
to a forwarder is as follows:
Context
An SRv6 TE Policy can contain multiple candidate paths defined using segment
lists. SIDs, such as End SID and End.X SID, are mandatory for SRv6 TE Policies. The
SIDs can either be configured manually or generated dynamically using an IGP.
● In scenarios where SRv6 TE Policies are configured manually, if dynamic SIDs
are used for the SRv6 TE Policies, the SIDs may change after an IGP restart. In
this case, you need to manually adjust the SRv6 TE Policies for them to
remain up. This complicates maintenance and therefore prevents dynamic
SIDs from being used on a large scale. You are advised to configure SIDs
manually and not to use dynamic SIDs.
● In scenarios where SRv6 TE Policies are delivered dynamically by a controller,
you are also advised to configure SIDs manually. Although the controller can
detect SID changes through BGP-LS, SIDs that are generated dynamically
using an IGP change randomly, complicating routine maintenance and fault
locating.
Procedure
Step 1 Enable SRv6.
1. Run system-view
A value is set for the Next Header field in an SRv6 extension header.
By default, the Next Header field value is 143 in an SRv6 extension header. To
be compatible with earlier versions, you can perform this step to change the
value to 59.
3. Run te ipv6-router-id ipv6-address
SRv6 tunnels are established based on SIDs. The SIDs can either be configured
manually or generated dynamically using IS-IS. Use either of the following
methods to configure SRv6 SIDs:
----End
Context
TE link attributes are as follows:
● Link bandwidth
This attribute is mandatory if you want to limit the bandwidth of a path of an
SRv6 TE Policy.
● Link-specific TE metric
The IGP metric or link-specific TE metric can be used for SRv6 TE Policy
computation by a controller. This makes SRv6 TE Policy computation more
independent of the IGP, facilitating path control.
● Administrative group and affinity attribute
The affinity attribute of an SRv6 TE Policy determines the link attributes used
by the policy. The administrative group and affinity attribute are used
together to determine the links that can be used by the policy.
● SRLG
A shared risk link group (SRLG) is a group of links that share a public physical
resource, such as an optical fiber. Links in an SRLG are at the same risk of
faults. If one of the links fails, other links in the SRLG also fail.
An SRLG enhances SRv6 TE Policy reliability on a network with CR-LSP hot
standby or TE FRR enabled. Links that share the same physical resource have
the same risk. For example, links on an interface and its sub-interfaces are in
an SRLG. The interface and its sub-interfaces have the same risk. If the
interface goes down, its sub-interfaces will also go down. If the link of the
primary path of an SRv6 TE Policy and the links of the backup paths of the
SRv6 TE Policy are in the same SRLG and the primary path goes down, the backup
paths will most likely also go down.
Procedure
● Enable TE attributes.
a. Run system-view
a. Run system-view
● The maximum reservable link bandwidth cannot be higher than the physical
link bandwidth. You are advised to set the maximum reservable link
bandwidth to be less than or equal to 80% of the physical link bandwidth.
● The BC0 bandwidth cannot be higher than the maximum reservable link
bandwidth.
e. Run commit
The configuration is committed.
● (Optional) Configure a TE metric value for a link.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view of a TE link is displayed.
c. Run te metric value
A TE metric value is configured for the link.
d. Run commit
The configuration is committed.
● (Optional) Configure the administrative group and affinity attribute in
hexadecimal format.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view of a TE link is displayed.
c. Run te link administrative group group-value
A TE link administrative group is configured.
d. Run commit
The configuration is committed.
● (Optional) Configure the administrative group and affinity attribute through
affinity names and the group name.
a. Run system-view
The system view is displayed.
b. Run path-constraint affinity-mapping
An affinity name mapping template is configured, and the template view
is displayed.
This template must be configured on each node that is involved in
computing an SRv6 TE Policy, and the same mappings between affinity
names and values must be configured on each node.
c. Run attribute bit-name bit-sequence bit-number
To delete the SRLG configuration from all interfaces on a device, run the undo te srlg
all-config command.
d. Run commit
The configuration is committed.
----End
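The optional TE attribute steps above can be sketched as follows on one node. The interface, metric value, administrative group value, and affinity names are illustrative; the affinity name mapping template must be configured identically on every node involved in SRv6 TE Policy computation.

```
<PE1> system-view
[~PE1] interface GigabitEthernet0/1/0
[*PE1-GigabitEthernet0/1/0] te metric 20
[*PE1-GigabitEthernet0/1/0] te link administrative group 10001
[*PE1-GigabitEthernet0/1/0] quit
[*PE1] path-constraint affinity-mapping
[*PE1-pc-af-map] attribute green bit-sequence 1
[*PE1-pc-af-map] attribute red bit-sequence 5
[*PE1-pc-af-map] commit
```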
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run isis [ process-id ]
The IS-IS view is displayed.
----End
Context
The process for a controller to dynamically generate and deliver an SRv6 TE Policy
to a forwarder is as follows:
1. The controller collects information, such as network topology and SID
information, through BGP-LS.
2. The controller and headend forwarder establish an IPv6 SR Policy address
family-specific BGP peer relationship.
3. The controller computes an SRv6 TE Policy and delivers it to the headend
forwarder through the BGP peer relationship. The headend forwarder then
generates SRv6 TE Policy entries.
To implement the preceding operations, you need to establish a BGP-LS peer
relationship and a BGP IPv6 SR Policy peer relationship between the controller and
the specified forwarder.
Procedure
Step 1 Configure a BGP-LS peer relationship.
BGP-LS is used to collect network topology information, making topology
information collection simpler and more efficient. You must configure a BGP-LS
peer relationship between the controller and forwarder, so that topology
information can be properly reported from the forwarder to the controller. This
example provides the procedure for configuring BGP-LS on the forwarder. The
procedure on the controller is similar to that on the forwarder.
1. Run system-view
The system view is displayed.
This example provides the procedure for configuring a BGP IPv6 SR Policy peer
relationship on the forwarder. The procedure on the controller is similar to that on
the forwarder.
1. Run system-view
----End
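On the forwarder, the two BGP peer relationships described above might be configured as follows. This is a sketch under the assumption that the controller's address is 2001:DB8::10 and both devices are in AS 100; the address family view names follow common NE-series syntax and may differ across software versions.

```
<PE1> system-view
[~PE1] bgp 100
[*PE1-bgp] peer 2001:DB8::10 as-number 100
[*PE1-bgp] peer 2001:DB8::10 connect-interface LoopBack0
[*PE1-bgp] link-state-family unicast
[*PE1-bgp-af-ls] peer 2001:DB8::10 enable
[*PE1-bgp-af-ls] quit
[*PE1-bgp] ipv6-family sr-policy
[*PE1-bgp-af-ipv6-srpolicy] peer 2001:DB8::10 enable
[*PE1-bgp-af-ipv6-srpolicy] commit
```

The BGP-LS peer carries topology and SID information to the controller; the IPv6 SR Policy peer carries computed SRv6 TE Policies back to the headend.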
Procedure
Step 1 Run system-view
----End
Context
The route coloring process is as follows:
1. Configure a route-policy and set a specific color value for the desired route.
2. Apply the route-policy to a BGP peer or a VPN instance as an import or export
policy.
Procedure
Step 1 Configure a route-policy.
1. Run system-view
The system view is displayed.
2. Run route-policy route-policy-name { deny | permit } node node
A route-policy is created and the route-policy view is displayed.
3. (Optional) Configure an if-match clause as a route-policy filter criterion. You
can add or modify the Color Extended Community only for a route-policy that
meets the filter criterion.
For details about the configuration, see (Optional) Configuring an if-match
Clause.
4. Run apply extcommunity color color
The Color Extended Community is configured.
5. Run commit
The configuration is committed.
Step 2 Apply the route-policy.
● Apply the route-policy to a BGP IPv4 unicast peer.
a. Run system-view
The system view is displayed.
----End
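The route coloring steps above can be sketched as follows, here applying the route-policy as an import policy to a BGP IPv4 unicast peer. The route-policy name, node number, color value, and peer address are illustrative.

```
<PE1> system-view
[~PE1] route-policy color100 permit node 10
[*PE1-route-policy] apply extcommunity color 0:100
[*PE1-route-policy] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family unicast
[*PE1-bgp-af-ipv4] peer 10.1.1.2 route-policy color100 import
[*PE1-bgp-af-ipv4] commit
```

Routes that match the policy then carry the Color Extended Community 0:100, which is used to recurse them to an SRv6 TE Policy with the same color.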
Context
After an SRv6 TE Policy is configured, traffic needs to be steered into the SRv6 TE
Policy for forwarding. This process is called traffic steering. Currently, SRv6 TE
Policies can be used for various services, such as BGP L3VPN and EVPN services.
Procedure
Step 1 Configure a tunnel policy.
1. Run system-view
The system view is displayed.
2. Run tunnel-policy policy-name
A tunnel policy is created and the tunnel policy view is displayed.
3. (Optional) Run description description-information
A description is configured for the tunnel policy.
4. Run tunnel select-seq ipv6 srv6-te-policy load-balance-number load-
balance-number
The tunnel selection sequence and number of tunnels for load balancing are
configured.
5. Run commit
The configuration is committed.
Step 2 Configure a service to recurse to an SRv6 TE Policy.
● Configure a BGP L3VPN service to recurse to an SRv6 TE Policy.
For details about how to configure a BGP L3VPN service, see Configuring a
Basic BGP/MPLS IP VPN.
a. Run system-view
The system view is displayed.
b. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
c. Run ipv4-family
d. Run commit
The configuration is committed.
To apply a tunnel policy to an EVPN instance that works in EVPN VPWS mode,
perform the following steps:
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name vpws
The view of the EVPN instance that works in EVPN VPWS mode is
displayed.
c. Run tnl-policy policy-name
A tunnel policy is applied to the EVPN instance that works in EVPN VPWS
mode.
d. Run commit
The configuration is committed.
To apply a tunnel policy to a basic EVPN instance, perform the following
steps:
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name
The basic EVPN instance view is displayed.
c. Run tnl-policy policy-name
A tunnel policy is applied to the basic EVPN instance.
d. Run commit
The configuration is committed.
----End
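For example, a tunnel policy that prefers SRv6 TE Policies can be created and applied to the IPv4 address family of a VPN instance as follows. The policy and instance names are illustrative.

```
<PE1> system-view
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE1-tunnel-policy-p1] quit
[*PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE1-vpn-instance-vpna-af-ipv4] commit
```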
Prerequisites
SRv6 TE Policies have been configured.
Procedure
Step 1 Run the display srv6-te policy [ endpoint endpoint-ip color color-value | policy-
name name-value | binding-sid binding-sid ] command to check SRv6 TE Policy
details.
Step 2 Run the display srv6-te policy statistics command to check SRv6 TE Policy
statistics.
Step 3 Run the display srv6-te policy status { endpoint endpoint-ip color color-value |
policy-name name-value | binding-sid binding-sid } command to check SRv6 TE
Policy status to determine the reason why a specified SRv6 TE Policy cannot go up.
Step 4 Run the display srv6-te policy last-down-reason { endpoint endpoint-ip color
color-value | policy-name name-value | binding-sid binding-sid } command to
check records about events where SRv6 TE Policies or segment lists in SRv6 TE
Policies go down.
----End
Usage Scenario
L3VPNv4 over SRv6 TE Policy uses public SRv6 TE Policies to carry L3VPNv4 data.
The implementation of L3VPNv4 over SRv6 TE Policy involves establishing SRv6 TE
Policies, advertising VPN routes, and forwarding data.
As shown in Figure 2-66, PE1 and PE2 communicate through an IPv6 public
network. The VPN is a traditional IPv4 network. SRv6 TE Policies can be deployed
on the IPv6 public network to carry L3VPNv4 services on the VPN.
Pre-configuration Tasks
Before configuring L3VPNv4 over SRv6 TE Policy, complete the following tasks:
Procedure
Step 1 Configure IPv6 IS-IS on each PE and P. For details, see Configuring Basic IPv6 IS-IS
Functions.
Step 2 Configure a VPN instance on each PE and enable the IPv4 address family for the
instance.
1. Run system-view
The system view is displayed.
2. Run ip vpn-instance vpn-instance-name
A VPN instance is created, and the VPN instance view is displayed.
3. Run ipv4-family
The VPN instance IPv4 address family is enabled, and the view of this address
family is displayed.
4. Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv4 address family.
5. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-
extcommunity ]
VPN targets are configured for the VPN instance IPv4 address family.
6. (Optional) Run default-color color-value
The default color value is specified for the L3VPN service to recurse to an
SRv6 TE Policy.
If a remote VPN route that does not carry the Color Extended Community is
leaked to a local VPN instance, the default color value is used for the
recursion.
7. Run commit
The configuration is committed.
8. Run quit
Exit the VPN instance IPv4 address family view.
9. Run quit
Exit the VPN instance view.
10. Run interface interface-type interface-number
The view of the interface to which the VPN instance needs to be bound is
displayed.
11. Run ip binding vpn-instance vpn-instance-name
The VPN instance is bound to the interface.
Running the ip binding vpn-instance command deletes Layer 3 (including IPv4 and
IPv6) configurations, such as IP address and routing protocol configurations, on the
involved interface. If needed, reconfigure them after running the command.
12. Run ip address ip-address { mask | mask-length }
An IP address is configured for the interface.
Some Layer 3 features such as route exchange between the PE and CE can be
configured only after an IP address is configured for the VPN interface on the
PE.
----End
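Step 2 above can be sketched as follows on a PE. The instance name, RD, VPN target, interface, and addresses are illustrative; note that ip binding vpn-instance removes existing Layer 3 configurations from the interface, so the IP address is configured afterwards.

```
<PE1> system-view
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface GigabitEthernet0/1/0
[*PE1-GigabitEthernet0/1/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet0/1/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet0/1/0] commit
```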
the route with the specified next hop recurses in each address family of the
current VPN instance.
● Run the ping srv6-te policy { policy-name <policyname> | endpoint-ip
<endpointipv6> color <colorid> | binding-sid <bsid> } [ end-op <endop> ]
[ -a <sourceaddr6> | -c <count> | -m <interval> | -s <packetsize> | -t <timeout> |
-tc <tc> | -h <hoplimit> ] * command with the policy-name <policyname>,
endpoint-ip <endpointipv6> color <colorid>, or binding-sid <bsid> parameter
to initiate a ping operation on the specified SRv6 TE Policy for connectivity
check.
● Run the ping command to check the connectivity between CEs.
Usage Scenario
EVPN L3VPNv4 over SRv6 TE Policy uses public SRv6 TE Policies to carry EVPN
L3VPNv4 services. As shown in Figure 2-67, PE1 and PE2 communicate through an
IPv6 public network. An SRv6 TE Policy is configured on the network to carry
EVPN L3VPNv4 services.
Pre-configuration Tasks
Before configuring EVPN L3VPNv4 over SRv6 TE Policy, configure an SRv6 TE
Policy manually or use a controller to dynamically deliver an SRv6 TE Policy.
Procedure
Step 1 Configure an L3VPN instance.
1. Run ip vpn-instance vpn-instance-name
A VPN instance is created, and the VPN instance view is displayed.
2. Run ipv4-family
The VPN instance IPv4 address family is enabled, and the view of this address
family is displayed.
3. Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv4 address family.
4. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-
extcommunity ] evpn
VPN targets are added to the VPN target list of the VPN instance IPv4 address
family, so that routes of the VPN instance can be added to the routing table
of the EVPN instance configured with a matching VPN target.
5. (Optional) Run default-color color-value evpn
The default color value is specified for the L3VPN service to recurse to an
SRv6 TE Policy.
If a remote EVPN route that does not carry the Color Extended Community is
leaked to a local VPN instance, the default color value is used for the
recursion.
6. (Optional) Run import route-policy policy-name evpn
The VPN instance IPv4 address family is associated with an import route-
policy that is used to filter routes imported from the EVPN instance to the
VPN instance IPv4 address family. To control route import more precisely,
specify an import route-policy to filter routes and set route attributes for
routes that meet the filter criteria.
7. (Optional) Run export route-policy policy-name evpn
The VPN instance IPv4 address family is associated with an export route-
policy that is used to filter routes advertised from the VPN instance IPv4
address family to the EVPN instance. To control route advertisement more
precisely, specify an export route-policy to filter routes and set route
attributes for routes that meet the filter criteria.
8. Run quit
Exit the VPN instance IPv4 address family view.
9. Run quit
Exit the VPN instance view.
If a BGP RR needs to be configured on the network, establish BGP EVPN peer relationships
between all the PEs and the RR.
1. Run bgp { as-number-plain | as-number-dot }
If loopback interfaces are used to establish a BGP connection, you are advised to run
the peer connect-interface command on both ends to ensure the connectivity. If this
command is run on only one end, the BGP connection may fail to be established.
4. Run ipv4-family vpn-instance vpn-instance-name
The device is enabled to advertise IP prefix routes. IP prefix routes are used to
advertise host IP routes as well as routes on the network segment where the
host resides.
The device is enabled to exchange EVPN routes with a specified peer or peer
group.
10. Run peer { ipv6-address | group-name } advertise encap-type srv6
----End
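The EVPN L3VPNv4 configuration above might be sketched as follows on a PE. The names and values are illustrative, and the IPv4 variants of import-route and advertise l2vpn evpn are shown by analogy with the IPv6 procedure later in this document; verify the exact syntax against your software version.

```
<PE1> system-view
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 1:1 both evpn
[*PE1-vpn-instance-vpna-af-ipv4] default-color 101 evpn
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] advertise l2vpn evpn
[*PE1-bgp-vpna] quit
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8::20 enable
[*PE1-bgp-af-evpn] peer 2001:DB8::20 advertise encap-type srv6
[*PE1-bgp-af-evpn] commit
```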
Usage Scenario
EVPN L3VPNv6 over SRv6 TE Policy uses public SRv6 TE Policies to carry EVPN
L3VPNv6 services. As shown in Figure 2-68, PE1 and PE2 communicate through an
IPv6 public network. An SRv6 TE Policy is configured on the network to carry
EVPN L3VPNv6 services.
Pre-configuration Tasks
Before configuring EVPN L3VPNv6 over SRv6 TE Policy, configure an SRv6 TE
Policy manually or use a controller to dynamically deliver an SRv6 TE Policy.
Procedure
Step 1 Configure an L3VPN instance.
1. Run ip vpn-instance vpn-instance-name
A VPN instance is created, and the VPN instance view is displayed.
2. Run ipv6-family
A VPN instance IPv6 address family is enabled, and the view of this address
family is displayed.
3. Run route-distinguisher route-distinguisher
A route distinguisher (RD) is configured for the VPN instance IPv6 address
family.
4. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-
extcommunity ] evpn
VPN targets are added to the VPN target list of the VPN instance IPv6 address
family, so that routes of the VPN instance can be added to the routing table
of the EVPN instance configured with a matching VPN target.
If a BGP RR needs to be configured on the network, establish BGP EVPN peer relationships
between all the PEs and the RR.
1. Run bgp { as-number-plain | as-number-dot }
If loopback interfaces are used to establish a BGP connection, you are advised to run
the peer connect-interface command on both ends to ensure the connectivity. If this
command is run on only one end, the BGP connection may fail to be established.
4. Run ipv6-family vpn-instance vpn-instance-name
The BGP-VPN instance IPv6 address family view is displayed.
5. Run import-route { direct | isis process-id | ospf process-id | rip process-id |
static | ospfv3 process-id | ripng process-id } [ med med | route-policy route-
policy-name ] *
Other protocol routes are imported to the BGP-VPN instance IPv6 address
family. To advertise host IP routes, configure import of direct routes. To
advertise routes on the network segment where the host resides, use a
dynamic routing protocol (such as OSPF) to advertise routes on the network
segment and configure the dynamic routing protocol to import routes.
6. Run advertise l2vpn evpn [ import-route-multipath ]
The device is enabled to advertise IP prefix routes. IP prefix routes are used to
advertise host IP routes as well as routes on the network segment where the
host resides.
To implement SID-based load balancing, specify the import-route-multipath
parameter for the device to advertise all IP prefix routes with the same
destination address.
7. Run quit
Exit the BGP-VPN instance IPv6 address family view.
8. Run l2vpn-family evpn
The BGP EVPN address family view is displayed.
9. Run peer { ipv6-address | group-name } enable
The device is enabled to exchange EVPN routes with the specified peer or
peer group.
10. Run peer { ipv6-address | group-name } advertise encap-type srv6
The device is enabled to send EVPN routes carrying SRv6-encapsulated
attributes to the specified peer or peer group.
11. Run quit
Exit the BGP EVPN address family view.
12. Run commit
The configuration is committed.
Step 4 On each PE, configure EVPN L3VPNv6 services to recurse to an SRv6 TE Policy.
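The BGP portion of the EVPN L3VPNv6 procedure can be sketched as follows. The AS number, instance name, and peer address are illustrative.

```
<PE1> system-view
[~PE1] bgp 100
[*PE1-bgp] ipv6-family vpn-instance vpna
[*PE1-bgp-6-vpna] import-route direct
[*PE1-bgp-6-vpna] advertise l2vpn evpn import-route-multipath
[*PE1-bgp-6-vpna] quit
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8::20 enable
[*PE1-bgp-af-evpn] peer 2001:DB8::20 advertise encap-type srv6
[*PE1-bgp-af-evpn] commit
```

Here import-route-multipath is included to illustrate SID-based load balancing; omit it if all IP prefix routes with the same destination address do not need to be advertised.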
Usage Scenario
EVPN VPWS over SRv6 TE Policy uses public SRv6 TE Policies to carry EVPN
VPWSs. As shown in Figure 2-69, PE1 and PE2 communicate through an IPv6
Pre-configuration Tasks
Before configuring EVPN VPWS over SRv6 TE Policy, configure an SRv6 TE Policy
manually or use a controller to dynamically deliver an SRv6 TE Policy.
SRv6 TE Policy configuration requires you to apply a tunnel policy to a specified
EVPN instance that works in VPWS mode when you configure traffic steering.
Procedure
Step 1 Configure EVPN and EVPL instances on each PE.
1. Run system-view
The system view is displayed.
2. Run evpn source-address ip-address
An EVPN source address is configured.
In scenarios where a CE is dual-homed or multi-homed to PEs, you need to
configure an EVPN source address on each PE to generate route distinguishers
(RDs) for Ethernet segment routes and Ethernet auto-discovery per ES routes.
3. Run evpn vpn-instance vpn-instance-name vpws
An EVPN instance that works in VPWS mode is created.
4. Run route-distinguisher route-distinguisher
An RD is configured for the EVPN instance.
An EVPN instance takes effect only after an RD is configured for it. The RDs of
different EVPN instances on a PE must be different.
VPN targets are BGP extended community attributes used to control the
receiving and advertisement of EVPN routes. A maximum of eight VPN targets
can be configured using the vpn-target command. To configure more VPN
targets for an EVPN instance address family, run the vpn-target command
multiple times.
An RT of an Ethernet segment route is generated using the middle six bytes of an ESI.
For example, if the ESI is 0011.1001.1001.1001.1002, the Ethernet segment route uses
11.1001.1001.10 as its RT.
6. (Optional) Run default-color color-value
The default color value is specified for the EVPN service to recurse to an SRv6
TE Policy.
A specified EVPN instance that works in VPWS mode is bound to the current
EVPL instance.
10. Run local-service-id service-id remote-service-id service-id
The packets of the current EVPL instance are configured to carry the local and
remote service IDs.
11. (Optional) Run mtu-match ignore
The MTU matching check is ignored for the EVPL instance. In scenarios where
a Huawei device interworks with a non-Huawei device through an EVPN
VPWS, if the non-Huawei device does not support any MTU matching check
for an EVPL instance working in SRv6 mode, run the mtu-match ignore
command to ignore the MTU matching check.
12. (Optional) Run load-balancing ignore-esi
The device is disabled from checking ESI validity during EVPL instance load
balancing.
Before running this command, ensure that the Layer 2 interface on which a Layer 2
sub-interface is to be created does not have the port link-type dot1q-tunnel
command configuration. If this configuration exists, run the undo port link-type
command to delete the configuration.
In addition to a Layer 2 sub-interface, an Ethernet main interface, Layer 3 sub-
interface, or Eth-Trunk interface can also function as an AC interface.
2. Run evpl instance evpl-id
A specified EVPL instance is bound to the Layer 2 sub-interface.
3. (Optional) Run evpn-vpws ignore-ac-state
The interface is enabled to ignore the AC status.
On a network with primary and backup links, if CFM is associated with an AC
interface, run this command to ensure EVPN VPWS continuity. When the AC
status of the interface becomes down, a primary/backup link switchover is
triggered. As the interface has been enabled to ignore the AC status using this
command, the EVPN VPWS does not need to be re-established during the link
switchover.
4. Run quit
Exit the Layer 2 sub-interface view.
Step 3 Establish a BGP EVPN peer relationship between PEs.
1. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
2. Run router-id ipv4-address
A BGP router ID is configured.
3. Run peer ipv6-address as-number { as-number-plain | as-number-dot }
The remote PE is configured as a peer.
4. Run peer ipv6-address connect-interface loopback interface-number
An interface used to set up a TCP connection with the specified BGP peer is
specified.
5. Run l2vpn-family evpn
The BGP EVPN address family view is displayed.
6. Run peer ipv6-address enable
The device is enabled to exchange EVPN routes with the specified peer.
7. Run peer ipv6-address advertise encap-type srv6
The device is enabled to send EVPN routes carrying SRv6-encapsulated
attributes to the specified peer.
8. Run quit
Exit the BGP EVPN address family view.
9. Run quit
Exit the BGP view.
10. Run commit
The configuration is committed.
Step 4 On each PE, configure EVPN VPWSs to recurse to an SRv6 TE Policy.
1. Run evpl instance evpl-id srv6-mode
The view of an EVPL instance that works in SRv6 mode is displayed.
2. Run segment-routing ipv6 locator locator-name
The device is enabled to add the SID attribute to EVPN routes to be sent.
If static SIDs are configured for the SID node route locator named using
locator-name, use the static SIDs. Otherwise, use dynamically allocated SIDs.
3. Run quit
Exit the view of the EVPL instance that works in SRv6 mode.
4. Run evpn vpn-instance vpn-instance-name vpws
The view of the EVPN instance that works in VPWS mode is displayed.
5. Run segment-routing ipv6 traffic-engineer [ best-effort ]
The device is enabled to recurse EVPN VPWSs to SRv6 TE Policies.
If an SRv6 BE path exists on the network, you can set the best-effort
parameter, allowing the SRv6 BE path to function as a best-effort path in the
case of an SRv6 TE Policy fault.
6. Run quit
Return to the system view.
7. Run commit
The configuration is committed.
Step 5 (Optional) Verify EVPN VPWS connectivity.
1. Configure an End.OP SID on the remote PE.
a. Run segment-routing ipv6
The SRv6 view is displayed.
b. Run locator locator-name
The locator view is displayed.
c. Run opcode func-opcode end-op
An End.OP SID opcode is configured.
d. Run commit
The configuration is committed.
2. Perform the following steps on the local PE:
a. Run segment-routing ipv6
The SRv6 view is displayed.
----End
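Step 4 above, which makes EVPN VPWSs recurse to an SRv6 TE Policy, can be sketched as follows. The EVPL instance ID, locator name, and EVPN instance name are illustrative.

```
<PE1> system-view
[~PE1] evpl instance 1 srv6-mode
[*PE1-evpl-srv6-1] segment-routing ipv6 locator srv6_locator
[*PE1-evpl-srv6-1] quit
[*PE1] evpn vpn-instance evrf1 vpws
[*PE1-evpn-instance-evrf1] segment-routing ipv6 traffic-engineer best-effort
[*PE1-evpn-instance-evrf1] quit
[*PE1] commit
```

The best-effort parameter lets an SRv6 BE path carry traffic if the SRv6 TE Policy fails; omit it if no SRv6 BE fallback path exists.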
● Run the display bgp evpn evpl command to check all EVPL instance
information.
● Run the display bgp evpn { all | route-distinguisher route-distinguisher |
vpn-instance vpn-instance-name } routing-table [ { ad-route | es-route |
inclusive-route | mac-route | prefix-route } prefix ] command to check BGP
EVPN route information. The command output shows that the value of Relay
Tunnel Out-Interface is an SRv6 TE Policy.
● Run the display evpn vpn-instance [ name vpn-instance-name ] tunnel-info
command to check information about the tunnel associated with a specified
EVPN instance.
● Run the display evpn vpn-instance name vpn-instance-name tunnel-info
nexthop nexthopIpv6Addr command to check information about the tunnel
that is associated with a specified EVPN instance and matches a specified next
hop.
Usage Scenario
EVPN VPLS uses the EVPN E-LAN model to carry MP2MP VPLSs. EVPN VPLS over
SRv6 TE Policy uses public SRv6 TE Policies to carry EVPN VPLSs. As shown in
Figure 2-70, PE1 and PE2 communicate through an IPv6 public network. An SRv6
TE Policy is configured on the network to carry EVPN VPLSs.
Pre-configuration Tasks
Before configuring EVPN VPLS over SRv6 TE Policy, complete the following tasks:
Procedure
Step 1 Establish a BGP EVPN peer relationship between PEs.
1. Run system-view
The device is enabled to exchange EVPN routes with the specified peer.
8. Run peer { ipv6-address | group-name } advertise encap-type srv6
The device is enabled to send EVPN routes carrying SRv6-encapsulated attributes to the specified peer or peer group.
If an SRv6 BE path exists on the network, you can set the best-effort
parameter, allowing the SRv6 BE path to function as a best-effort path in the
case of an SRv6 TE Policy fault.
4. (Optional) Run default-color color-value
The default color value is specified for the EVPN service to recurse to an SRv6
TE Policy.
----End
Context
In traditional BGP FlowSpec-based traffic optimization, traffic transmitted over
paths with the same source and destination nodes can be redirected to only one
path, which does not achieve accurate traffic steering. With the function to
redirect a public IPv4 BGP FlowSpec route to an SRv6 TE Policy, a device can
redirect traffic transmitted over paths with the same source and destination nodes
to different SRv6 TE Policies.
Prerequisites
Before redirecting a public IPv4 BGP FlowSpec route to an SRv6 TE Policy in
manual configuration mode, complete the following tasks:
Context
If no controller is deployed, perform the following operations to manually redirect
a public IPv4 BGP FlowSpec route to an SRv6 TE Policy:
Procedure
● Configure a public End.DT4 SID.
a. Run system-view
The system view is displayed.
b. Run segment-routing ipv6
SRv6 is enabled, and the SRv6 view is displayed.
c. Run encapsulation source-address ipv6-address [ ip-ttl ttl-value ]
A source address is specified for SRv6 VPN encapsulation.
d. Run locator locator-name [ ipv6-prefix ipv6-address prefix-length
[ static static-length | args args-length ] * ]
A SID node route locator is configured.
e. Run opcode func-opcode end-dt4
A public End.DT4 SID is configured.
f. Run quit
Exit the SRv6 locator view.
g. Run quit
Exit the SRv6 view.
h. Run commit
The configuration is committed.
● Configure a BGP FlowSpec route.
a. Run system-view
The system view is displayed.
b. Run flow-route flowroute-name
A static BGP FlowSpec route is created, and the Flow-Route view is
displayed.
c. (Optional) Configure if-match clauses. For details, see Configuring Static
BGP FlowSpec.
d. Run apply redirect ipv6 redirect-ipv6-rt color colorvalue prefix-sid
prefix-sid-value
The device is enabled to precisely redirect the traffic that matches the if-
match clauses to the SRv6 TE Policy.
e. Run commit
The configuration is committed.
● (Optional) Configure a BGP peer relationship in the BGP-Flow address family.
Establish a BGP FlowSpec peer relationship between the ingress of the SRv6
TE Policy and the device on which the BGP FlowSpec route is manually
generated. If the BGP FlowSpec route is manually generated on the ingress of
the SRv6 TE Policy, skip this step.
a. Run system-view
The system view is displayed.
b. Run bgp as-number
The device is enabled to recurse received routes with the redirection next-
hop IPv6 address, color, and prefix SID attributes to SRv6 TE Policies.
g. Run commit
----End
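The manual redirection procedure above might be sketched as follows. The flow-route name, if-match clause, redirection next-hop address, color, and prefix SID are all illustrative assumptions; the redirection next hop and color must match the endpoint and color of the target SRv6 TE Policy.

```
<PE1> system-view
[~PE1] flow-route fr1
[*PE1-flow-route] if-match destination 10.2.2.0 24
[*PE1-flow-route] apply redirect ipv6 2001:DB8::100 color 101 prefix-sid 2001:DB8:100::100
[*PE1-flow-route] commit
```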
Prerequisites
Before redirecting a public IPv4 BGP FlowSpec route to an SRv6 TE Policy in
controller-based dynamic delivery mode, complete the following tasks:
Context
If a controller is deployed, the controller can be used to configure a public IPv4
BGP FlowSpec route to be redirected to an SRv6 TE Policy. The configuration
process is as follows:
Procedure
Step 1 Establish BGP-LS and BGP IPv6 SR-Policy peer relationships. For details, see
2.2.10.4 Configuring BGP Peer Relationships Between the Controller and
Forwarder.
The device is enabled to process the redirection next-hop IPv6 address, color,
and prefix SID attributes carried in BGP FlowSpec routes received from a peer.
6. Run redirect tunnelv6 tunnel-selector tunnel-selector-name
The device is enabled to recurse received routes with the redirection next-hop
IPv6 address, color, and prefix SID attributes to SRv6 TE Policies.
7. Run commit
----End
Pre-configuration Tasks
Before configuring SRv6 egress protection, configure an IGP to implement network
layer connectivity.
Context
Services that recurse to an SRv6 TE Policy must be strictly forwarded through the
path defined using specified segment lists, with the egress of the SRv6 TE Policy
being fixed. If a single point of failure occurs on the egress, service forwarding
may fail. To prevent this problem, configure egress protection for the SRv6 TE
Policy.
Figure 2-71 shows a typical SRv6 egress protection scenario where an SRv6 TE
Policy is deployed between PE3 and PE1. PE1 is the egress of the SRv6 TE Policy,
and PE2 provides protection for PE1 to enhance reliability.
1. Locators A1::/64 and A2::/64 are configured on PE1 and PE2, respectively.
2. An IPv6 VPNv4 peer relationship is established between PE3 and PE1 as well
as between PE2 and PE1. A VPN instance VPN1 and SRv6 VPN SIDs are
configured on both PE1 and PE2. In addition, IPv4 prefix SID advertisement is
enabled on the two PEs.
3. After receiving the VPN route advertised by CE2, PE1 encapsulates the route
as a VPNv4 route and sends the route to PE3. The route carries the VPN SID,
RT, RD, and color information.
4. A mirror SID is configured on PE2 to protect PE1, generating a <Mirror SID,
Locator> mapping entry, for example, <A2::100, A1::>.
5. PE2 propagates the mirror SID through an IGP and generates a local SID
table. After receiving the mirror SID, P1 generates an FRR entry with the next-
hop address being PE2 and the action being Push mirror SID. In addition, P1
generates a low-priority route that does not support recursion.
6. BGP subscribes to mirror SID configuration. After receiving the VPN route
from PE1, PE2 leaks the route into the routing table of VPN1 based on the RT,
and matches the VPN SID carried by the route with the local <Mirror SID,
Locator> table. If a matching locator is found according to the "longest
match" rule, PE2 generates a <Remote SRv6 VPN SID, VPN> entry.
1. In general, PE3 forwards traffic to the private network through the PE3-P1-
PE1-CE2 path. If PE1 fails, P1 detects that the next hop of PE1 is unreachable
and switches traffic to the FRR path.
2. P1 pushes the mirror SID into the packet header and forwards the packet to
PE2. After parsing the received packet, PE2 obtains the mirror SID, queries the
local SID table, and finds that the instruction bound to the mirror SID is to
query the remote SRv6 VPN SID table. As instructed, PE2 queries the remote
SRv6 VPN SID table based on the VPN SID in the packet and finds the
corresponding VPN instance. Then, PE2 searches the VPN routing table and
forwards the packet to CE2.
If PE1 fails, the peer relationship between PE2 and PE1 is interrupted. As a result,
the VPN route received by PE2 from PE1 is deleted, causing the <Remote SRv6
VPN SID, VPN> entry to be deleted. Therefore, you need to enable GR between
PE2 and PE1 to maintain routes or enable delayed deletion for the <Remote SRv6
VPN SID, VPN> entry on PE2.
Procedure
Step 1 Configure an SRv6 TE Policy along the PE3-P1-PE1 path.
For details about the configuration, see 2.2.9 Configuring an SRv6 TE Policy
(Manual Configuration) and 2.2.10 Configuring an SRv6 TE Policy (Dynamic
Delivery by a Controller).
Supported service types are BGP L3VPN, EVPN L3VPN, EVPN VPWS, and EVPN
VPLS. For details about the configuration, see 2.2.13 Configuring EVPN L3VPNv6
over SRv6 TE Policy and 2.2.14 Configuring EVPN VPWS over SRv6 TE Policy.
opcode func-opcode must be configured within the local locator range. The
locator to be protected is specified using mirror-locator ipv6-address prefix-
length.
5. Run commit
----End
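On the protecting PE (PE2 in the figure), the mirror SID configuration described above might look as follows. This is a sketch using the locators from the example; the SID type keyword (shown here as end-m) and exact opcode syntax are assumptions that depend on the software version.

```
<PE2> system-view
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] locator mirror_loc ipv6-prefix A2:: 64 static 32
[*PE2-segment-routing-ipv6-locator] opcode ::100 end-m mirror-locator A1:: 64
[*PE2-segment-routing-ipv6-locator] commit
```

This generates the <A2::100, A1::> mapping entry that allows PE2 to translate PE1's VPN SIDs after a switchover.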
Context
In a scenario where a public IP route recurses to an SRv6 TE Policy, perform the
following operations at both ends of the policy to configure a TTL processing
mode for the policy.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run segment-routing ipv6
SRv6 is enabled, and the SRv6 view is displayed.
Step 3 Run ttl-mode { pipe | uniform }
A TTL processing mode is configured for the SRv6 TE Policy.
Step 4 Run commit
The configuration is committed.
----End
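Putting Steps 1 through 4 together, a minimal configuration sketch follows. The pipe mode is chosen purely for illustration; run the same configuration on both ends of the SRv6 TE Policy.
<HUAWEI> system-view
[~HUAWEI] segment-routing ipv6
[*HUAWEI-segment-routing-ipv6] ttl-mode pipe
[*HUAWEI-segment-routing-ipv6] commit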
Context
If the number of SRv6 TE Policies or segment lists exceeds the upper limit, adding
new SRv6 TE Policies or segment lists may affect existing services. To prevent this,
you can configure alarm thresholds for the number of SRv6 TE Policies and the
number of segment lists. When either number reaches the specified threshold, an
alarm is generated, notifying users to take action.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run segment-routing ipv6
SRv6 is enabled, and the SRv6 view is displayed.
Step 3 Run srv6-te-policy segment-list-number threshold-alarm upper-limit
upperLimitValue lower-limit lowerLimitValue
An alarm threshold is configured for the number of segment lists.
Step 4 Run srv6-te-policy policy-number threshold-alarm upper-limit upperLimitValue
lower-limit lowerLimitValue
An alarm threshold is configured for the number of SRv6 TE Policies.
Step 5 Run commit
The configuration is committed.
----End
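The two threshold commands from Steps 3 and 4 can be combined as in the following sketch. The upper-limit and lower-limit values of 80 and 60 are placeholders; set them according to your network scale.
<HUAWEI> system-view
[~HUAWEI] segment-routing ipv6
[*HUAWEI-segment-routing-ipv6] srv6-te-policy segment-list-number threshold-alarm upper-limit 80 lower-limit 60
[*HUAWEI-segment-routing-ipv6] srv6-te-policy policy-number threshold-alarm upper-limit 80 lower-limit 60
[*HUAWEI-segment-routing-ipv6] commit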
Context
Transit nodes cannot fragment IPv6 packets during forwarding. If the size of
an IPv6 packet entering a transit node exceeds the IPv6 MTU of the
outbound interface, the node discards the packet. Conversely, if IPv6 packets
are much smaller than the MTU configured for an SRv6 BE path or an SRv6 TE
Policy, the bandwidth of the associated link is not fully utilized. Therefore, an
appropriate SRv6 path MTU needs to be configured to prevent packet loss and
maximize link bandwidth utilization.
In TI-LFA protection or binding SID scenarios, an SRH that has segment lists
associated with TI-LFA or the binding SID needs to be inserted into IPv6 packets.
This increases the packet size and must be taken into account when an SRv6 path
MTU is configured.
SRv6 path MTU configuration takes effect for packets forwarded through either an
SRv6 BE path or an SRv6 TE Policy. You need to configure a proper SRv6 path MTU
according to the size of SRv6-encapsulated packets.
Procedure
● Configure a path MTU for an SRv6 BE path.
a. Run system-view
The system view is displayed.
b. Run segment-routing ipv6
SRv6 is enabled, and the SRv6 view is displayed.
c. Run path-mtu mtu-value
A global path MTU is configured for an SRv6 BE path.
d. Run commit
The configuration is committed.
● Configure a path MTU for an SRv6 TE Policy.
a. Run system-view
The system view is displayed.
b. Run segment-routing ipv6
SRv6 is enabled, and the SRv6 view is displayed.
c. Run path-mtu mtu-value
A global path MTU is configured for an SRv6 TE Policy.
d. Run commit
The configuration is committed.
----End
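Because the same path-mtu command applies to both SRv6 BE paths and SRv6 TE Policies, a single sketch covers both cases. The MTU value 1500 is a placeholder; choose a value that accounts for the extra SRH overhead in TI-LFA or binding SID scenarios.
<HUAWEI> system-view
[~HUAWEI] segment-routing ipv6
[*HUAWEI-segment-routing-ipv6] path-mtu 1500
[*HUAWEI-segment-routing-ipv6] commit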
Context
SRv6 TE Policy traffic statistics collection enables you to collect statistics about the
traffic forwarded through a specified SRv6 TE Policy or through all SRv6 TE Policies.
This function supports both global configuration and single-policy configuration. If
global configuration and single-policy configuration both exist, the single-policy
configuration takes effect.
Procedure
Step 1 Global configuration
1. Run system-view
The system view is displayed.
2. Run segment-routing ipv6
The SRv6 view is displayed.
3. Run srv6-te-policy traffic-statistics enable
Traffic statistics collection is enabled for all SRv6 TE Policies.
4. Run commit
The configuration is committed.
Step 2 Single-policy configuration
1. Run system-view
The system view is displayed.
2. Run segment-routing ipv6
The SRv6 view is displayed.
3. Run srv6-te policy policy-name
The SRv6 TE Policy view is displayed.
4. Run traffic-statistics (SRv6 TE Policy view) { enable | disable }
Traffic statistics collection is enabled for a specified SRv6 TE Policy.
5. Run commit
The configuration is committed.
----End
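The following sketch combines both configuration levels to show how a single-policy setting overrides the global one. The policy name policy1 and the policy-view prompt are illustrative.
<HUAWEI> system-view
[~HUAWEI] segment-routing ipv6
[*HUAWEI-segment-routing-ipv6] srv6-te-policy traffic-statistics enable
[*HUAWEI-segment-routing-ipv6] srv6-te policy policy1
[*HUAWEI-srv6-te-policy-policy1] traffic-statistics disable
[*HUAWEI-srv6-te-policy-policy1] commit
In this sketch, traffic statistics collection is enabled globally but disabled for policy1, because the single-policy configuration takes precedence.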
Follow-up Procedure
To clear existing SRv6 TE Policy traffic statistics, run the reset srv6-te policy
traffic-statistics [ endpoint ipv6-address color color-value | policy-name name-
value | binding-sid binding-sid ] command.
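For example, to clear the statistics of one policy rather than all policies (the policy name policy1 is illustrative):
<HUAWEI> reset srv6-te policy traffic-statistics policy-name policy1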
Context
BGP is a dynamic routing protocol used between ASs. BGP SRv6 is a BGP extension
for SR and is used for inter-AS source routing.
BGP SRv6 uses BGP egress peer engineering (EPE) to allocate Peer-Node and Peer-
Adj SIDs to peers. These SIDs can be reported to the controller through BGP-LS for
orchestrating E2E SRv6 TE Policy tunnels.
Procedure
● Configure BGP EPE.
a. Run system-view
The system view is displayed.
c. Run quit
If all of the preceding three commands are run, only the last command takes
effect.
Table 2-17 describes the effect of each format of the peer ipv6-address
egress-engineering srv6 [ locator locator-name | static-sid { psp psp-sid
| no-psp-usp no-psp-usp-sid }* ] command, depending on whether the
segment-routing ipv6 egress-engineering locator command has also been
run. (The first four column headings of the table indicate the functions of
the corresponding command formats.) The segment-routing ipv6
egress-engineering locator locator-name command specifies a locator to be
used for SID allocation to all BGP EPE peers.
g. Run commit
The configuration is committed.
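Based on the command formats described above, a minimal BGP EPE sketch might look as follows. The AS number, peer address, and locator name are hypothetical; whether to specify a per-peer locator or rely on the global egress-engineering locator depends on your deployment.
<HUAWEI> system-view
[~HUAWEI] bgp 100
[*HUAWEI-bgp] segment-routing ipv6 egress-engineering locator as1
[*HUAWEI-bgp] peer 2001:DB8::2 egress-engineering srv6
[*HUAWEI-bgp] commit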
● Configure BGP-LS.
BGP-LS is used to collect network topology information, making topology
information collection simpler and more efficient. You must configure a BGP-
LS peer relationship between the controller and forwarder, so that topology
information can be properly reported from the forwarder to the controller.
This example provides the procedure for configuring BGP-LS on the forwarder.
The procedure on the controller is similar to that on the forwarder.
a. Run system-view
The system view is displayed.
b. Run bgp { as-number-plain | as-number-dot }
BGP is enabled, and the BGP view is displayed.
c. Run peer ipv6-address as-number as-number-plain
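Continuing the procedure, a hedged sketch of the remaining BGP-LS steps follows. The peer address is hypothetical, and the link-state-family unicast address family is an assumption based on common VRP usage; check the command reference for your product version.
[*HUAWEI-bgp] link-state-family unicast
[*HUAWEI-bgp-af-ls] peer 2001:DB8::100 enable
[*HUAWEI-bgp-af-ls] commit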
Context
After configuring SRv6, you can perform the following configurations in any view
of a client.
Procedure
● Specify SIDs to test the connectivity of an SRv6 network.
To test the connectivity of an SRv6 network, run the ping ipv6-sid [ -a
source-ipv6-address | -c echo-number | -m wait-time | -s packetsize | -t
timeout | -tc traffic-class-value ] * [ segment-by-segment ] sid & <1-11>
command on the ingress to specify SRv6 SIDs to initiate a ping test to the
egress.
<HUAWEI> ping ipv6-sid A2::C31 A4::C50 A4::C52
PING ipv6-sid A2::C31 A4::C50 A4::C52 : 56 data bytes, press CTRL_C to break
Reply from A4::C52
bytes=56 Sequence=1 hop limit=64 time=2 ms
Reply from A4::C52
bytes=56 Sequence=2 hop limit=64 time=1 ms
Reply from A4::C52
bytes=56 Sequence=3 hop limit=64 time=1 ms
Reply from A4::C52
bytes=56 Sequence=4 hop limit=64 time=1 ms
Reply from A4::C52
----End
Context
After configuring SRv6, you can perform the following configurations in any view
of a client.
Procedure
● Specify SIDs to test the path information of an SRv6 network or locate the
fault point on the path.
To test the fault point on an SRv6 network, run the tracert ipv6-sid [ -f first-
hop-limit | -m max-hop-limit | -p port-number | -q probes | -w timeout | -s
packetsize | -a source-ipv6-address ] * [ overlay ] sid & <1-11> command on
the ingress to specify SRv6 SIDs to initiate a tracert test to the egress.
<HUAWEI> tracert ipv6-sid A2::C31 A4::C50 A4::C52
traceroute ipv6-sid A2::C31 A4::C50 A4::C52 30 hops max,60 bytes packet
● Test the path over which an SRv6 TE Policy is established or locate the fault
point on the path.
----End
Networking Requirements
On the network shown in Figure 2-72:
● PE1, the P, and PE2 are in the same AS and run IS-IS to implement IPv6
network connectivity.
● PE1, the P, and PE2 are Level-1 devices in area 1.
It is required that a bidirectional SRv6 BE path be deployed between PE1 and PE2
to carry L3VPNv4 services.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure IS-IS SRv6 on each device.
4. Configure VPN instances on PE1 and PE2.
5. Establish an EBGP peer relationship between each PE and its connected CE.
6. Establish an MP-IBGP peer relationship between PEs.
7. Configure SRv6 on PE1 and PE2.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface. The following example uses the configuration of PE1. The configuration
of other devices is similar to the configuration of PE1. For configuration details,
see Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001::1 96
[*PE1-GigabitEthernet1/0/0] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
[*P-isis-1] cost-style wide
[*P-isis-1] network-entity 10.0000.0000.0002.00
[*P-isis-1] ipv6 enable topology ipv6
[*P-isis-1] quit
[*P] interface gigabitethernet 1/0/0
[*P-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P-GigabitEthernet1/0/0] commit
[~P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet 2/0/0
[*P-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P-GigabitEthernet2/0/0] commit
[~P-GigabitEthernet2/0/0] quit
[*P] interface loopback1
[*P-LoopBack1] isis ipv6 enable 1
[*P-LoopBack1] commit
[~P-LoopBack1] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] commit
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
Total Peer(s): 1
# Display IS-IS routing table information. The following example uses the
command output on PE1.
[~PE1] display isis route
Route information for ISIS(1)
-----------------------------
Step 3 Configure a VPN instance on each PE, enable the IPv4 address family for the
instance, and bind the interface that connects a PE to a CE to the VPN instance on
that PE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*PE2-GigabitEthernet2/0/0] commit
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address
using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a
source-ip-address dest-ip-address command to ping the CE that is connected to the remote
PE. If the source IP address is not specified, the ping operation fails.
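For example, pinging remote CE2 from PE1 with the source address explicitly specified (the addresses follow this example's addressing plan and are otherwise illustrative):
<PE1> ping -vpn-instance vpna -a 10.1.1.1 10.2.1.2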
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 11.11.11.11 32
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.1.1.1 as-number 100
[*CE1-bgp] network 11.11.11.11 32
[*CE1-bgp] quit
[*CE1] commit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.2 as-number 65410
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface loopback 1
[*CE2-LoopBack1] ip address 22.22.22.22 32
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] peer 10.2.1.1 as-number 100
[*CE2-bgp] network 22.22.22.22 32
[*CE2-bgp] quit
[*CE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 2.2.2.2
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 10.2.1.2 as-number 65420
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] commit
[*PE2-bgp-vpna] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 vpn-instance vpn-instance-name peer
command on the PEs and check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 1.1.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
10.1.1.2 4 65410 11 9 0 00:06:37 Established 1
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1::1 as-number 100
[*PE2-bgp] peer 1::1 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 all peer command
on the PEs and check whether BGP peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP peer relationships have been established successfully. The following example
uses the command output on PE1.
[~PE1] display bgp vpnv4 all peer
[*PE1-isis-1] commit
[~PE1-isis-1] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] encapsulation source-address 3::3
[*PE2-segment-routing-ipv6] locator as1 ipv6-prefix 30::1 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] bgp 100
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 prefix-sid
[~PE2-bgp-af-vpnv4] quit
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] segment-routing ipv6 best-effort
[*PE2-bgp-vpna] segment-routing ipv6 locator as1
[*PE2-bgp-vpna] commit
[~PE2-bgp-vpna] quit
[~PE2-bgp] quit
[~PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator as1
[*PE2-isis-1] commit
[~PE2-isis-1] quit
Total Locator(s): 1
Total SID(s): 1
Run the display ip routing-table vpn-instance vpna verbose command on each
PE to check detailed VPN routing information, including whether VPN routes
have recursed to the SRv6 BE path. The following example uses the command
output on PE1.
[~PE1] display ip routing-table vpn-instance vpna 22.22.22.22 verbose
Destination: 22.22.22.22/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 30::1:0:3C Neighbour: 3::3
State: Active Adv Relied Age: 00h08m50s
Tag: 0 Priority: low
Label: 3 QoSInfo: 0x0
IndirectID: 0x1000133 Instance:
RelayNextHop: 30::1:0:3C Interface: SRv6 BE
TunnelID: 0x0 Flags: RD
Run the ping command on a CE. The command output shows that CEs belonging
to the same VPN can ping each other. For example:
[~CE1] ping -a 11.11.11.11 22.22.22.22
PING 22.22.22.22: 56 data bytes, press CTRL_C to break
Reply from 22.22.22.22: bytes=56 Sequence=1 ttl=253 time=7 ms
Reply from 22.22.22.22: bytes=56 Sequence=2 ttl=253 time=5 ms
Reply from 22.22.22.22: bytes=56 Sequence=3 ttl=253 time=4 ms
Reply from 22.22.22.22: bytes=56 Sequence=4 ttl=253 time=5 ms
Reply from 22.22.22.22: bytes=56 Sequence=5 ttl=253 time=5 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 1::1
locator as1 ipv6-prefix 10::1 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::1/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 3::3 as-number 100
peer 3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 3::3 enable
peer 3::3 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 best-effort
peer 10.1.1.2 as-number 65410
#
return
● P configuration file
#
sysname P
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2002::1/96
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2::2/64
isis ipv6 enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 3::3
locator as1 ipv6-prefix 30::1 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2002::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 3::3/64
isis ipv6 enable 1
#
bgp 100
router-id 2.2.2.2
peer 1::1 as-number 100
peer 1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 1::1 enable
peer 1::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 best-effort
peer 10.2.1.2 as-number 65420
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface LoopBack1
ip address 11.11.11.11 255.255.255.255
#
bgp 65410
peer 10.1.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
network 11.11.11.11 255.255.255.255
peer 10.1.1.1 enable
#
return
Networking Requirements
On the network shown in Figure 2-73:
● PE1, PE2, P1, and P2 are in the same AS and run IS-IS to implement IPv6
network connectivity.
● PE1, PE2, P1, and P2 are Level-2 devices in area 1.
It is required that a bidirectional SRv6 BE path be deployed between PE1 and PE2
to carry L3VPNv4 services. In addition, to maximize network resource utilization, it
is required that ECMP be performed for L3VPNv4 services carried by the SRv6 BE
path between PE1 and PE2.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When configuring IPv4 VPN over SRv6 BE ECMP, note the following:
● SRv6 BE ECMP depends on equal-cost routes on the network. To ensure
successful configuration, you need to properly plan IGP costs for links. In this
example, each link uses the default cost value 10.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device. In addition, configure dynamic BFD for IPv6 IS-IS.
3. Configure IS-IS SRv6 on PE1 and PE2.
4. Configure VPN instances on PE1 and PE2.
5. Establish an EBGP peer relationship between each PE and its connected CE.
6. Establish an MP-IBGP peer relationship between PEs.
7. Configure SRv6 on PE1 and PE2.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each interface on PE1, PE2, P1, and P2
● Area ID of PE1, PE2, P1, and P2
● Level on PE1, PE2, P1, and P2
● VPN instance name, RD, and RT on PE1 and PE2
Procedure
Step 1 Enable IPv6 forwarding and configure an IPv6 address for each interface. The
following example uses the configuration of PE1. The configuration of other
devices is similar. For configuration details, see Configuration Files in this section.
When configuring IS-IS, enable dynamic BFD to speed up IS-IS convergence in the
case of a link status change.
# Configure PE1.
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] isis 1
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] ipv6 enable topology ipv6
[*PE1-isis-1] ipv6 bfd all-interfaces enable
[*PE1-isis-1] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] isis ipv6 enable 1
[*PE1-GigabitEthernet2/0/0] commit
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] interface loopback1
[*PE1-LoopBack1] isis ipv6 enable 1
[*PE1-LoopBack1] commit
[~PE1-LoopBack1] quit
# Configure P1.
[~P1] bfd
[*P1-bfd] quit
[*P1] isis 1
[*P1-isis-1] is-level level-2
[*P1-isis-1] cost-style wide
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] ipv6 enable topology ipv6
[*P1-isis-1] ipv6 bfd all-interfaces enable
[*P1-isis-1] quit
[*P1] interface gigabitethernet 1/0/0
[*P1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P1-GigabitEthernet1/0/0] commit
[~P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 2/0/0
[*P1-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
[*P1] interface loopback1
[*P1-LoopBack1] isis ipv6 enable 1
[*P1-LoopBack1] commit
[~P1-LoopBack1] quit
# Configure P2.
[~P2] bfd
[*P2-bfd] quit
[*P2] isis 1
[*P2-isis-1] is-level level-2
[*P2-isis-1] cost-style wide
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] ipv6 enable topology ipv6
[*P2-isis-1] ipv6 bfd all-interfaces enable
[*P2-isis-1] quit
[*P2] interface gigabitethernet 1/0/0
[*P2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P2-GigabitEthernet1/0/0] commit
[~P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet 2/0/0
[*P2-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P2-GigabitEthernet2/0/0] commit
[~P2-GigabitEthernet2/0/0] quit
[*P2] interface loopback1
[*P2-LoopBack1] isis ipv6 enable 1
[*P2-LoopBack1] commit
[~P2-LoopBack1] quit
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] ipv6 bfd all-interfaces enable
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] commit
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet2/0/0] commit
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
Total Peer(s): 2
# Display IS-IS routing table information. The following example uses the
command output on PE1.
[~PE1] display isis route
Route information for ISIS(1)
-----------------------------
Step 3 Configure a VPN instance on each PE, enable the IPv4 address family for the
instance, and bind the interface that connects a PE to a CE to the VPN instance on
that PE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet 3/0/0
[*PE1-GigabitEthernet3/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet3/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet 3/0/0
[*PE2-GigabitEthernet3/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet3/0/0] ip address 10.2.1.1 24
[*PE2-GigabitEthernet3/0/0] commit
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address
using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a
source-ip-address dest-ip-address command to ping the CE that is connected to the remote
PE. If the source IP address is not specified, the ping operation fails.
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 11.11.11.11 32
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.1.1.1 as-number 100
[*CE1-bgp] network 11.11.11.11 32
[*CE1-bgp] quit
[*CE1] commit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.2 as-number 65410
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface loopback 1
[*CE2-LoopBack1] ip address 22.22.22.22 32
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] peer 10.2.1.1 as-number 100
[*CE2-bgp] network 22.22.22.22 32
[*CE2-bgp] quit
[*CE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 2.2.2.2
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 10.2.1.2 as-number 65420
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] commit
[*PE2-bgp-vpna] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 vpn-instance vpn-instance-name peer
command on the PEs and check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 1.1.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
10.1.1.2 4 65410 11 9 0 00:06:37 Established 1
[*PE1-bgp-af-vpnv4] commit
[~PE1-bgp-af-vpnv4] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1::1 as-number 100
[*PE2-bgp] peer 1::1 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 all peer command
on the PEs and check whether BGP peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP peer relationships have been established successfully.
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] encapsulation source-address 1::1
[*PE1-segment-routing-ipv6] locator as1 ipv6-prefix 10::1 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 3::3 prefix-sid
[~PE1-bgp-af-vpnv4] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 best-effort
[*PE1-bgp-vpna] segment-routing ipv6 locator as1
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
[~PE1-bgp] quit
[~PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator as1
[*PE1-isis-1] commit
[~PE1-isis-1] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] encapsulation source-address 3::3
[*PE2-segment-routing-ipv6] locator as1 ipv6-prefix 30::1 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] bgp 100
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 prefix-sid
[~PE2-bgp-af-vpnv4] quit
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] segment-routing ipv6 best-effort
[*PE2-bgp-vpna] segment-routing ipv6 locator as1
[*PE2-bgp-vpna] commit
[~PE2-bgp-vpna] quit
[~PE2-bgp] quit
[~PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator as1
[*PE2-isis-1] commit
[~PE2-isis-1] quit
Destination: 22.22.22.22/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 30::1:0:20 Neighbour: 3::3
State: Active Adv Relied Age: 00h01m42s
Tag: 0 Priority: low
Label: 3 QoSInfo: 0x0
IndirectID: 0x10000C7 Instance:
RelayNextHop: 30::1:0:20 Interface: SRv6 BE
TunnelID: 0x0 Flags: RD
RelayNextHop: 30::1:0:21 Interface: SRv6 BE
TunnelID: 0x0 Flags: RD
The preceding command output shows that the VPN route 22.22.22.22/32 has two
outbound interfaces. The interfaces work in ECMP mode during packet forwarding.
Run the ping command on a CE. The command output shows that CEs belonging
to the same VPN can ping each other. For example:
[~CE1] ping -a 11.11.11.11 22.22.22.22
PING 22.22.22.22: 56 data bytes, press CTRL_C to break
Reply from 22.22.22.22: bytes=56 Sequence=1 ttl=253 time=7 ms
Reply from 22.22.22.22: bytes=56 Sequence=2 ttl=253 time=5 ms
Reply from 22.22.22.22: bytes=56 Sequence=3 ttl=253 time=4 ms
Reply from 22.22.22.22: bytes=56 Sequence=4 ttl=253 time=5 ms
Reply from 22.22.22.22: bytes=56 Sequence=5 ttl=253 time=5 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
bfd
#
segment-routing ipv6
encapsulation source-address 1::1
locator as1 ipv6-prefix 10::1 64 static 32
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
ipv6 bfd all-interfaces enable
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::1/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:3::1/96
isis ipv6 enable 1
#
interface GigabitEthernet3/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 3::3 as-number 100
peer 3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 3::3 enable
peer 3::3 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 best-effort
peer 10.1.1.2 as-number 65410
#
return
● P1 configuration file
#
sysname P1
#
bfd
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::2/96
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
#
interface LoopBack1
ip address 22.22.22.22 255.255.255.255
#
bgp 65420
peer 10.2.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
network 22.22.22.22 255.255.255.255
peer 10.2.1.1 enable
#
return
2.2.24.3 Example for Configuring VPN FRR for L3VPNv4 over SRv6 BE
This section provides an example for configuring VPN FRR for L3VPNv4 over SRv6
BE.
Networking Requirements
On the network shown in Figure 2-74:
● PE1, PE2, and PE3 are in the same AS and run IS-IS to implement IPv6
network connectivity.
● The PEs are Level-2 devices in area 1.
It is required that a bidirectional SRv6 BE path be deployed between PE1 and PE2
as well as between PE1 and PE3 to carry L3VPNv4 services. In addition, VPN FRR
needs to be configured on PE1 to improve network reliability.
Figure 2-74 Networking of VPN FRR configuration for L3VPNv4 over SRv6 BE
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When configuring VPN FRR for IPv4 VPN over SRv6 BE, note the following:
● In a VPN FRR scenario, traffic switches back to the primary path after that path recovers. Because the nodes along the path complete IGP convergence in a different order, packet loss may occur during the switchback. To resolve this problem, run the route-select delay delay-value command to configure a route selection delay so that traffic switches back only after the forwarding entries on the devices along the primary path have been updated. The appropriate delay-value depends on various factors, such as the number of routes on the devices. Set a proper delay as needed.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure IS-IS SRv6 on PE1 and PE2.
4. Configure VPN instances on PE1 and PE2.
5. Establish an EBGP peer relationship between each PE and its connected CE.
6. Establish an MP-IBGP peer relationship between PEs.
7. Configure SRv6 on PE1 and PE2.
8. Enable VPN FRR on PE1 and configure BFD to detect locator route reachability, accelerating VPN FRR switching.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each PE interface
● Area ID of each PE
● Level of each PE
● VPN instance name, RD, and RT on each PE
Procedure
Step 1 Enable IPv6 forwarding and configure an IPv6 address for each interface. The
following example uses the configuration of PE1. The configuration of other
devices is similar to the configuration of PE1. For configuration details, see
Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:db8:1::1 96
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] ipv6 enable
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0002.00
[*PE2-isis-1] ipv6 bfd all-interfaces enable
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] commit
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Configure PE3.
[~PE3] isis 1
[*PE3-isis-1] is-level level-2
[*PE3-isis-1] cost-style wide
[*PE3-isis-1] network-entity 10.0000.0000.0004.00
[*PE3-isis-1] ipv6 enable topology ipv6
[*PE3-isis-1] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE3-GigabitEthernet1/0/0] commit
[*PE3-GigabitEthernet1/0/0] quit
[*PE3] interface loopback1
[*PE3-LoopBack1] isis ipv6 enable 1
[*PE3-LoopBack1] commit
[~PE3-LoopBack1] quit
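Each network-entity value above is an IS-IS NET consisting of an area address, a 6-byte system ID, and a 1-byte NSEL that must be 00 on a router. As a quick sanity check (an illustrative Python sketch, not part of the device configuration), the fields can be split as follows:

```python
def parse_net(net: str):
    """Split an IS-IS NET (dotted hex) into area, system ID, and NSEL.

    The last byte is the NSEL (00 on a router), the preceding six bytes
    are the system ID, and everything before that is the area address.
    """
    digits = net.replace(".", "")
    nsel = digits[-2:]           # 1 byte
    system_id = digits[-14:-2]   # 6 bytes
    area = digits[:-14]
    return area, system_id, nsel

# network-entity 10.0000.0000.0001.00 -> area 10, system ID 0000.0000.0001
print(parse_net("10.0000.0000.0001.00"))  # -> ('10', '000000000001', '00')
```

Because PE1, PE2, and PE3 share the area address (10) but have unique system IDs, they form adjacencies in the same area.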
# Display IS-IS routing table information. The following example uses the
command output on PE1.
[~PE1] display isis route
Route information for ISIS(1)
-----------------------------
Step 3 Configure a VPN instance on each PE, enable the IPv4 address family for the
instance, and bind the interface that connects a PE to a CE to the VPN instance on
that PE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet 3/0/0
[*PE1-GigabitEthernet3/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet3/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*PE2-GigabitEthernet2/0/0] commit
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
# Configure PE3.
[~PE3] ip vpn-instance vpna
[*PE3-vpn-instance-vpna] ipv4-family
[*PE3-vpn-instance-vpna-af-ipv4] route-distinguisher 300:1
[*PE3-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE3-vpn-instance-vpna-af-ipv4] quit
[*PE3-vpn-instance-vpna] quit
If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address
using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a
source-ip-address dest-ip-address command to ping the CE that is connected to the remote
PE. If the source IP address is not specified, the ping operation fails.
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 11.11.11.11 32
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.1.1.1 as-number 100
[*CE1-bgp] network 11.11.11.11 32
[*CE1-bgp] quit
[*CE1] commit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.2 as-number 65410
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface loopback 1
[*CE2-LoopBack1] ip address 22.22.22.22 32
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] peer 10.2.1.1 as-number 100
[*CE2-bgp] peer 10.3.1.1 as-number 100
[*CE2-bgp] network 22.22.22.22 32
[*CE2-bgp] quit
[*CE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 2.2.2.2
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 10.2.1.2 as-number 65420
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] commit
[*PE2-bgp-vpna] quit
[~PE2-bgp] quit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] router-id 3.3.3.3
[*PE3-bgp] ipv4-family vpn-instance vpna
[*PE3-bgp-vpna] peer 10.3.1.2 as-number 65420
[*PE3-bgp-vpna] import-route direct
[*PE3-bgp-vpna] commit
[*PE3-bgp-vpna] quit
[~PE3-bgp] quit
After the configuration is complete, run the display bgp vpnv4 vpn-instance peer
command on the PEs and check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 1.1.1.1
Local AS number : 100
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1::1 as-number 100
[*PE2-bgp] peer 1::1 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
# Configure PE3.
[~PE3] bgp 100
[~PE3-bgp] peer 1::1 as-number 100
[*PE3-bgp] peer 1::1 connect-interface loopback 1
[*PE3-bgp] ipv4-family vpnv4
[*PE3-bgp-af-vpnv4] peer 1::1 enable
[*PE3-bgp-af-vpnv4] commit
[~PE3-bgp-af-vpnv4] quit
[~PE3-bgp] quit
After the configuration is complete, run the display bgp vpnv4 all peer command
on the PEs and check whether BGP peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP peer relationships have been established successfully.
Step 6 Establish an SRv6 BE path between PEs.
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] encapsulation source-address 1::1
[*PE1-segment-routing-ipv6] locator as1 ipv6-prefix 10::1 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 2::2 prefix-sid
[*PE1-bgp-af-vpnv4] peer 3::3 prefix-sid
[~PE1-bgp-af-vpnv4] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 best-effort
[*PE1-bgp-vpna] segment-routing ipv6 locator as1
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
[~PE1-bgp] quit
[~PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator as1
[*PE1-isis-1] commit
[~PE1-isis-1] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] encapsulation source-address 2::2
[*PE2-segment-routing-ipv6] locator as1 ipv6-prefix 20::1 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] bgp 100
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 prefix-sid
[~PE2-bgp-af-vpnv4] quit
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] segment-routing ipv6 best-effort
[*PE2-bgp-vpna] segment-routing ipv6 locator as1
[*PE2-bgp-vpna] commit
[~PE2-bgp-vpna] quit
[~PE2-bgp] quit
[~PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator as1
[*PE2-isis-1] commit
[~PE2-isis-1] quit
# Configure PE3.
[~PE3] segment-routing ipv6
[~PE3-segment-routing-ipv6] encapsulation source-address 3::3
[*PE3-segment-routing-ipv6] locator as1 ipv6-prefix 30::1 64 static 32
[*PE3-segment-routing-ipv6-locator] quit
[*PE3-segment-routing-ipv6] quit
[*PE3] bgp 100
[*PE3-bgp] ipv4-family vpnv4
[*PE3-bgp-af-vpnv4] peer 1::1 prefix-sid
[~PE3-bgp-af-vpnv4] quit
[*PE3-bgp] ipv4-family vpn-instance vpna
[*PE3-bgp-vpna] segment-routing ipv6 best-effort
[*PE3-bgp-vpna] segment-routing ipv6 locator as1
[*PE3-bgp-vpna] commit
[~PE3-bgp-vpna] quit
[~PE3-bgp] quit
[~PE3] isis 1
[*PE3-isis-1] segment-routing ipv6 locator as1
[*PE3-isis-1] commit
[~PE3-isis-1] quit
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] bfd pe2tope1 bind peer-ipv6 10::
[*PE2-bfd-session-pe2tope1] discriminator local 200
[*PE2-bfd-session-pe2tope1] discriminator remote 100
[*PE2-bfd-session-pe2tope1] commit
[~PE2-bfd-session-pe2tope1] quit
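For a static BFD session to come up, each end's local discriminator must equal the peer's remote discriminator, as in the pe1tope2/pe2tope1 pair above (100/200 on PE1, 200/100 on PE2). A minimal sketch of that pairing rule (hypothetical helper for illustration, not a device command):

```python
def bfd_pair_ok(a: dict, b: dict) -> bool:
    """Check that two static BFD session configs cross-reference each other:
    each side's local discriminator must equal the other side's remote one."""
    return a["local"] == b["remote"] and b["local"] == a["remote"]

pe1 = {"name": "pe1tope2", "local": 100, "remote": 200}  # configured on PE1
pe2 = {"name": "pe2tope1", "local": 200, "remote": 100}  # configured on PE2
print(bfd_pair_ok(pe1, pe2))  # -> True
```

If either side's discriminators were mismatched, the session would never reach the Up state.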
Destination: 22.22.22.22/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 20::1:0:1E Neighbour: 2::2
State: Active Adv Relied Age: 00h10m54s
Tag: 0 Priority: low
Label: 3 QoSInfo: 0x0
IndirectID: 0x10000CC Instance:
RelayNextHop: 20::1:0:1E Interface: SRv6 BE
TunnelID: 0x0 Flags: RD
BkNextHop: 20::1:0:1F BkInterface: SRv6 BE
BkLabel: 3 SecTunnelID: 0x0
BkPETunnelID: 0x0 BkPESecTunnelID: 0x0
BkIndirectID: 0x10000D2
The preceding command output shows that the VPN route 22.22.22.22/32 has a
backup outbound interface and a VPN FRR routing entry has been generated.
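The primary and backup next hops shown above (20::1:0:1E and 20::1:0:1F) are SRv6 SIDs drawn from PE2's locator prefix, so the IS-IS locator route covers both of them. This can be verified with Python's ipaddress module (a sketch for illustration only):

```python
import ipaddress

# PE2's locator as configured: locator as1 ipv6-prefix 20::1 64 static 32
# The prefix 20::1 masked to /64 yields the locator route 20::/64.
locator = ipaddress.ip_network("20::/64")

for sid in ("20::1:0:1E", "20::1:0:1F"):
    # Both next-hop SIDs fall inside the locator prefix.
    print(sid, ipaddress.ip_address(sid) in locator)  # -> True for both
```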
Run the ping command on a CE. The command output shows that CEs belonging
to the same VPN can ping each other. For example:
[~CE1] ping -a 11.11.11.11 22.22.22.22
PING 22.22.22.22: 56 data bytes, press CTRL_C to break
Reply from 22.22.22.22: bytes=56 Sequence=1 ttl=253 time=7 ms
Reply from 22.22.22.22: bytes=56 Sequence=2 ttl=253 time=5 ms
Reply from 22.22.22.22: bytes=56 Sequence=3 ttl=253 time=4 ms
Reply from 22.22.22.22: bytes=56 Sequence=4 ttl=253 time=5 ms
Reply from 22.22.22.22: bytes=56 Sequence=5 ttl=253 time=5 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
vpn frr
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
bfd
#
segment-routing ipv6
encapsulation source-address 1::1
locator as1 ipv6-prefix 10::1 64 static 32
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::1/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:3::1/96
isis ipv6 enable 1
#
interface GigabitEthernet3/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 1::1/64
isis ipv6 enable 1
#
bfd pe1tope2 bind peer-ipv6 20::
discriminator local 100
discriminator remote 200
#
bgp 100
router-id 1.1.1.1
peer 2::2 as-number 100
peer 2::2 connect-interface LoopBack1
peer 3::3 as-number 100
peer 3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 2::2 enable
peer 2::2 prefix-sid
peer 3::3 enable
peer 3::3 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
route-select delay 300
segment-routing ipv6 locator as1
segment-routing ipv6 best-effort
peer 10.1.1.2 as-number 65410
#
return
interface LoopBack1
ipv6 enable
ipv6 address 2::2/64
isis ipv6 enable 1
#
bfd pe2tope1 bind peer-ipv6 10::
discriminator local 200
discriminator remote 100
#
bgp 100
router-id 2.2.2.2
peer 1::1 as-number 100
peer 1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 1::1 enable
peer 1::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 best-effort
peer 10.2.1.2 as-number 65420
#
return
● PE3 configuration file
#
sysname PE3
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 300:1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 3::3
locator as1 ipv6-prefix 30::1 64 static 32
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0004.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:3::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.3.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 3::3/64
Networking Requirements
On the network shown in Figure 2-75, PE1, the P, and PE2 are in the same AS and
run IS-IS to implement IPv6 network connectivity. It is required that a bidirectional
SRv6 BE path be deployed between PE1 and PE2 to carry EVPN L3VPNv4 services.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure an IPv4 L3VPN instance on each PE and bind the IPv4 L3VPN
instance to an access-side interface.
4. Establish a BGP EVPN peer relationship between PEs.
5. Establish an EBGP peer relationship between each PE and its connected CE.
6. Configure SRv6 BE on PEs.
Data Preparation
To complete the configuration, you need the following data:
● L3VPN instance name: vpn1
● RD and RT values of the L3VPN instance: 100:1 and 1:1
Procedure
Step 1 Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface. The following example uses the configuration of PE1. The configuration
of other devices is similar to the configuration of PE1. For configuration details,
see Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:DB8:10::1 64
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface LoopBack 1
[*PE1-LoopBack1] ipv6 enable
[*PE1-LoopBack1] ipv6 address 2001:DB8:1::1 64
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
[*P-isis-1] cost-style wide
[*P-isis-1] network-entity 10.0000.0000.0002.00
[*P-isis-1] ipv6 enable topology ipv6
[*P-isis-1] quit
[*P] interface gigabitethernet 1/0/0
[*P-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet 2/0/0
[*P-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P-GigabitEthernet2/0/0] quit
[*P] interface loopback1
[*P-LoopBack1] isis ipv6 enable 1
[*P-LoopBack1] commit
[*P-LoopBack1] quit
[*P] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] quit
[*PE2] commit
Step 3 Configure an IPv4 L3VPN instance on each PE and bind the IPv4 L3VPN instance
to an access-side interface.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 evpn
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpn1
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] import-route direct
[*PE1-bgp-vpn1] advertise l2vpn evpn
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] ipv4-family
[*PE2-vpn-instance-vpn1-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 evpn
[*PE2-vpn-instance-vpn1-af-ipv4] quit
[*PE2-vpn-instance-vpn1] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpn1
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] bgp 100
[*PE2-bgp] ipv4-family vpn-instance vpn1
[*PE2-bgp-vpn1] import-route direct
[*PE2-bgp-vpn1] advertise l2vpn evpn
[*PE2-bgp-vpn1] quit
[*PE2-bgp] quit
[*PE2] commit
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 11.11.11.11 32
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.1.1.1 as-number 100
[*CE1-bgp] network 11.11.11.11 32
[*CE1-bgp] quit
[*CE1] commit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] peer 10.1.1.2 as-number 65410
[*PE1-bgp-vpn1] import-route direct
[*PE1-bgp-vpn1] commit
[*PE1-bgp-vpn1] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface loopback 1
[*CE2-LoopBack1] ip address 22.22.22.22 32
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] peer 10.2.1.1 as-number 100
[*CE2-bgp] network 22.22.22.22 32
[*CE2-bgp] quit
[*CE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 3.3.3.3
[*PE2-bgp] ipv4-family vpn-instance vpn1
[*PE2-bgp-vpn1] peer 10.2.1.2 as-number 65420
[*PE2-bgp-vpn1] import-route direct
[*PE2-bgp-vpn1] commit
[*PE2-bgp-vpn1] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 vpn-instance peer
command on the PEs and check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv4 vpn-instance vpn1 peer
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 2001:DB8:1::1 as-number 100
[*PE2-bgp] peer 2001:DB8:1::1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
After the configuration is complete, run the display bgp evpn peer command on
the PEs to check whether BGP EVPN peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP EVPN peer relationships have been established successfully.
Step 6 Deploy SRv6 BE between PEs.
# Configure PE1.
[~PE1] segment-routing ipv6
[*PE1-segment-routing-ipv6] encapsulation source-address 2001:DB8:1::1
[*PE1-segment-routing-ipv6] locator PE1 ipv6-prefix 2001:DB8:100::10 64 static 32
[*PE1-segment-routing-ipv6-locator] opcode ::100 end-dt4 vpn-instance vpn1 evpn
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator PE1
[*PE1-isis-1] quit
[*PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 advertise encap-type srv6
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] segment-routing ipv6 locator PE1 evpn
[*PE1-bgp-vpn1] segment-routing ipv6 best-effort evpn
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] encapsulation source-address 2001:DB8:3::3
[*PE2-segment-routing-ipv6] locator PE2 ipv6-prefix 2001:DB8:130::3 64 static 32
[*PE2-segment-routing-ipv6-locator] opcode ::200 end-dt4 vpn-instance vpn1 evpn
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator PE2
[*PE2-isis-1] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 advertise encap-type srv6
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] ipv4-family vpn-instance vpn1
Run the display ip routing-table vpn-instance vpn1 command on each PE. The
command output shows the VPN route sent from the peer end. The following
example uses the command output on PE1.
[~PE1] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 8 Routes : 8
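An End.DT4 SID is typically formed by combining the locator prefix with the configured opcode. For the configuration above, locator prefix 2001:DB8:100::/64 with opcode ::100 yields the SID 2001:DB8:100::100. A sketch of that composition (illustrative only; the actual SID is allocated by the device):

```python
import ipaddress

def srv6_sid(locator_prefix: str, opcode: str) -> str:
    """Compose an SRv6 SID by OR-ing an opcode into a locator prefix."""
    net = ipaddress.ip_network(locator_prefix, strict=False)
    sid = int(net.network_address) | int(ipaddress.ip_address(opcode))
    return str(ipaddress.ip_address(sid))

# locator PE1 ipv6-prefix 2001:DB8:100::10 64 + opcode ::100 end-dt4
print(srv6_sid("2001:DB8:100::10/64", "::100"))  # -> 2001:db8:100::100
```

The same rule applies to PE2: locator 2001:DB8:130::/64 with opcode ::200 gives 2001:DB8:130::200.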
Run the ping command on a CE. The command output shows that CEs belonging
to the same VPN can ping each other. For example:
[~CE1] ping -a 11.11.11.11 22.22.22.22
PING 22.22.22.22: 56 data bytes, press CTRL_C to break
Reply from 22.22.22.22: bytes=56 Sequence=1 ttl=253 time=127 ms
Reply from 22.22.22.22: bytes=56 Sequence=2 ttl=253 time=19 ms
Reply from 22.22.22.22: bytes=56 Sequence=3 ttl=253 time=26 ms
Reply from 22.22.22.22: bytes=56 Sequence=4 ttl=253 time=17 ms
Reply from 22.22.22.22: bytes=56 Sequence=5 ttl=253 time=16 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator PE1 ipv6-prefix 2001:DB8:100::10 64 static 32
opcode ::100 end-dt4 vpn-instance vpn1 evpn
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::1/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:1::1/128
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:3::3 as-number 100
peer 2001:DB8:3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
segment-routing ipv6 locator PE1 evpn
segment-routing ipv6 best-effort evpn
peer 10.1.1.2 as-number 65410
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:3::3 enable
peer 2001:DB8:3::3 advertise encap-type srv6
#
return
● P configuration file
#
sysname P
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::2/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::1/64
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:2::2/128
isis ipv6 enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 200:1
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::3
Networking Requirements
On the network shown in Figure 2-76, PE1, the P, and PE2 are in the same AS and
run IS-IS to implement IPv6 network connectivity. It is required that a bidirectional
SRv6 BE path be deployed between PE1 and PE2 to carry EVPN L3VPNv6 services.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure an IPv6 L3VPN instance on each PE and bind the IPv6 L3VPN
instance to an access-side interface.
4. Establish a BGP EVPN peer relationship between PEs.
5. Configure SRv6 BE on PEs.
Data Preparation
To complete the configuration, you need the following data:
● L3VPN instance name: vpn1
● RD and RT values of the L3VPN instance: 100:1 and 1:1
Procedure
Step 1 Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface. The following example uses the configuration of PE1. The configuration
of other devices is similar to the configuration of PE1. For configuration details,
see Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:DB8:10::1 64
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface LoopBack 1
[*PE1-LoopBack1] ipv6 enable
[*PE1-LoopBack1] ipv6 address 2001:DB8:1::1 64
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure the P.
[~P] isis 1
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] quit
[*PE2] commit
Step 3 Configure an IPv6 L3VPN instance on each PE and bind the IPv6 L3VPN instance
to an access-side interface.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv6-family
[*PE1-vpn-instance-vpn1-af-ipv6] route-distinguisher 100:1
[*PE1-vpn-instance-vpn1-af-ipv6] vpn-target 1:1 evpn
[*PE1-vpn-instance-vpn1-af-ipv6] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpn1
[*PE1-GigabitEthernet2/0/0] ipv6 enable
[*PE1-GigabitEthernet2/0/0] ipv6 address 2001:DB8:11::1 64
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] bgp 100
[*PE1-bgp] ipv6-family vpn-instance vpn1
[*PE1-bgp-6-vpn1] import-route direct
[*PE1-bgp-6-vpn1] advertise l2vpn evpn
[*PE1-bgp-6-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] ipv6-family
[*PE2-vpn-instance-vpn1-af-ipv6] route-distinguisher 200:1
[*PE2-vpn-instance-vpn1-af-ipv6] vpn-target 1:1 evpn
[*PE2-vpn-instance-vpn1-af-ipv6] quit
[*PE2-vpn-instance-vpn1] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpn1
[*PE2-GigabitEthernet2/0/0] ipv6 enable
[*PE2-GigabitEthernet2/0/0] ipv6 address 2001:DB8:22::1 64
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] bgp 100
[*PE2-bgp] ipv6-family vpn-instance vpn1
[*PE2-bgp-6-vpn1] import-route direct
[*PE2-bgp-6-vpn1] advertise l2vpn evpn
[*PE2-bgp-6-vpn1] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] router-id 3.3.3.3
[*PE2-bgp] peer 2001:DB8:1::1 as-number 100
[*PE2-bgp] peer 2001:DB8:1::1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
After the configuration is complete, run the display bgp evpn peer command on
the PEs and check whether BGP EVPN peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP EVPN peer relationships have been established successfully.
Step 5 Deploy SRv6 BE between PEs.
# Configure PE1.
[~PE1] segment-routing ipv6
[*PE1-segment-routing-ipv6] encapsulation source-address 2001:DB8:1::1
[*PE1-segment-routing-ipv6] locator PE1 ipv6-prefix 2001:DB8:100::10 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator PE1
[*PE1-isis-1] quit
[*PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] encapsulation source-address 2001:DB8:3::3
[*PE2-segment-routing-ipv6] locator PE2 ipv6-prefix 2001:DB8:130::3 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator PE2
[*PE2-isis-1] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 advertise encap-type srv6
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] ipv6-family vpn-instance vpn1
[*PE2-bgp-6-vpn1] segment-routing ipv6 locator PE2 evpn
[*PE2-bgp-6-vpn1] segment-routing ipv6 best-effort evpn
[*PE2-bgp-6-vpn1] quit
[*PE2-bgp] quit
[*PE2] commit
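Unlike the preceding example, no static opcode is configured under the locators here, so the End.DT6 SIDs for vpn1 are allocated dynamically from the locator. The static 32 argument reserves a 32-bit static opcode space within the 64 function bits left by the /64 locator prefix. A sketch of that split (illustrative arithmetic under these assumptions, not a guarantee of device behavior):

```python
LOCATOR_PREFIX_LEN = 64  # locator PE2 ipv6-prefix 2001:DB8:130::3 64 ...
STATIC_LEN = 32          # ... static 32

function_bits = 128 - LOCATOR_PREFIX_LEN  # bits available for SID functions
static_opcodes = 2 ** STATIC_LEN          # size of the static opcode space

print(function_bits)   # -> 64
print(static_opcodes)  # -> 4294967296
```

Dynamic SIDs, such as the End.DT6 SIDs in this example, are drawn from the portion of the function space outside the static range.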
Run the display ipv6 routing-table vpn-instance vpn1 command on each PE.
The command output shows the VPN route sent from the peer end. The following
example uses the command output on PE1.
[~PE1] display ipv6 routing-table vpn-instance vpn1
Routing Table : vpn1
Destinations : 4 Routes : 4
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
ipv6-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator PE1 ipv6-prefix 2001:DB8:100::10 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::1/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:11::1/64
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:1::1/128
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:3::3 as-number 100
peer 2001:DB8:3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
segment-routing ipv6 locator PE1 evpn
segment-routing ipv6 best-effort evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:3::3 enable
peer 2001:DB8:3::3 advertise encap-type srv6
#
return
● P configuration file
#
sysname P
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::2/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::1/64
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:2::2/128
isis ipv6 enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpn1
ipv6-family
route-distinguisher 200:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::3
locator PE2 ipv6-prefix 2001:DB8:130::3 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE2
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::2/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:22::1/64
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:3::3/128
isis ipv6 enable 1
#
bgp 100
router-id 3.3.3.3
peer 2001:DB8:1::1 as-number 100
peer 2001:DB8:1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
segment-routing ipv6 locator PE2 evpn
segment-routing ipv6 best-effort evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:1::1 enable
peer 2001:DB8:1::1 advertise encap-type srv6
#
return
Networking Requirements
On the network shown in Figure 2-73:
● PE1, PE2, P1, and P2 are in the same AS and run IS-IS to implement IPv6
network connectivity.
● PE1, PE2, P1, and P2 are Level-2 devices in area 1.
It is required that a bidirectional SRv6 BE path be deployed between PE1 and PE2
to carry EVPN L3VPNv6 services. In addition, to maximize network resource
utilization, it is required that ECMP be performed for EVPN L3VPNv6 services
carried by the SRv6 BE path between PE1 and PE2.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When configuring EVPN L3VPNv6 over SRv6 BE ECMP, note the following:
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure a level, and specify a network entity on each device. In
addition, configure dynamic BFD for IPv6 IS-IS.
3. Configure IS-IS SRv6 on PE1 and PE2.
4. Configure an IPv6 L3VPN instance on each PE and bind the IPv6 L3VPN
instance to an access-side interface.
5. Establish an EBGP peer relationship between each PE and its connected CE.
6. Establish a BGP EVPN peer relationship between PEs.
7. Configure SRv6 on PE1 and PE2.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Enable IPv6 forwarding and configure an IPv6 address for each interface. The
following example uses the configuration of PE1. The configurations of other
devices are similar to the configuration of PE1. For configuration details, see
Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:db8:1::1 96
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] ipv6 enable
[*PE1-GigabitEthernet2/0/0] ipv6 address 2001:db8:3::1 96
[*PE1-GigabitEthernet2/0/0] commit
Step 2 Configure IS-IS and dynamic BFD for IPv6 IS-IS.
# Configure P1.
[~P1] bfd
[*P1-bfd] quit
[*P1] isis 1
[*P1-isis-1] is-level level-2
[*P1-isis-1] cost-style wide
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] ipv6 enable topology ipv6
[*P1-isis-1] ipv6 bfd all-interfaces enable
[*P1-isis-1] quit
[*P1] interface gigabitethernet 1/0/0
[*P1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P1-GigabitEthernet1/0/0] commit
[~P1-GigabitEthernet1/0/0] quit
[*P1] interface gigabitethernet 2/0/0
[*P1-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
[*P1] interface loopback1
[*P1-LoopBack1] isis ipv6 enable 1
[*P1-LoopBack1] commit
[~P1-LoopBack1] quit
# Configure P2.
[~P2] bfd
[*P2-bfd] quit
[*P2] isis 1
[*P2-isis-1] is-level level-2
[*P2-isis-1] cost-style wide
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] ipv6 enable topology ipv6
[*P2-isis-1] ipv6 bfd all-interfaces enable
[*P2-isis-1] quit
[*P2] interface gigabitethernet 1/0/0
[*P2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P2-GigabitEthernet1/0/0] commit
[~P2-GigabitEthernet1/0/0] quit
[*P2] interface gigabitethernet 2/0/0
[*P2-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P2-GigabitEthernet2/0/0] commit
[~P2-GigabitEthernet2/0/0] quit
[*P2] interface loopback1
[*P2-LoopBack1] isis ipv6 enable 1
[*P2-LoopBack1] commit
[~P2-LoopBack1] quit
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] ipv6 bfd all-interfaces enable
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Display IS-IS neighbor information. The following example uses the command
output on PE1.
[~PE1] display isis peer
Total Peer(s): 2
# Display IS-IS routing table information. The following example uses the
command output on PE1.
[~PE1] display isis route
Route information for ISIS(1)
-----------------------------
Step 3 Configure an IPv6 L3VPN instance on each PE and bind the IPv6 L3VPN instance
to an access-side interface.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv6-family
[*PE1-vpn-instance-vpna-af-ipv6] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv6] vpn-target 111:1 evpn
[*PE1-vpn-instance-vpna-af-ipv6] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet3/0/0] ipv6 enable
[*PE1-GigabitEthernet3/0/0] ipv6 address 2001:DB8:11::1 64
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] bgp 100
[*PE1-bgp] ipv6-family vpn-instance vpna
[*PE1-bgp-6-vpna] import-route direct
[*PE1-bgp-6-vpna] advertise l2vpn evpn
[*PE1-bgp-6-vpna] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv6-family
[*PE2-vpn-instance-vpna-af-ipv6] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv6] vpn-target 111:1 evpn
[*PE2-vpn-instance-vpna-af-ipv6] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet3/0/0
[*PE2-GigabitEthernet3/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet3/0/0] ipv6 enable
[*PE2-GigabitEthernet3/0/0] ipv6 address 2001:DB8:22::1 64
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] bgp 100
[*PE2-bgp] ipv6-family vpn-instance vpna
[*PE2-bgp-6-vpna] import-route direct
[*PE2-bgp-6-vpna] advertise l2vpn evpn
[*PE2-bgp-6-vpna] quit
[*PE2-bgp] quit
[*PE2] commit
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface LoopBack1
[*CE1-LoopBack1] ipv6 enable
[*CE1-LoopBack1] ipv6 address 11::11 128
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] router-id 10.11.1.1
[*CE1-bgp] peer 2001:DB8:11::1 as-number 100
[*CE1-bgp] ipv6-family unicast
[*CE1-bgp-af-ipv6] import-route direct
[*CE1-bgp-af-ipv6] peer 2001:DB8:11::1 enable
[*CE1-bgp-af-ipv6] commit
[~CE1-bgp-af-ipv6] quit
[~CE1-bgp] quit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv6-family vpn-instance vpna
[*PE1-bgp-6-vpna] peer 2001:DB8:11::2 as-number 65410
[*PE1-bgp-6-vpna] import-route direct
[*PE1-bgp-6-vpna] commit
[*PE1-bgp-6-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface LoopBack1
[*CE2-LoopBack1] ipv6 enable
[*CE2-LoopBack1] ipv6 address 22::22 128
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] router-id 10.12.1.1
[*CE2-bgp] peer 2001:DB8:22::1 as-number 100
[*CE2-bgp] ipv6-family unicast
[*CE2-bgp-af-ipv6] import-route direct
[*CE2-bgp-af-ipv6] peer 2001:DB8:22::1 enable
[*CE2-bgp-af-ipv6] commit
[~CE2-bgp-af-ipv6] quit
[~CE2-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 2.2.2.2
[*PE2-bgp] ipv6-family vpn-instance vpna
[*PE2-bgp-6-vpna] peer 2001:DB8:22::2 as-number 65420
[*PE2-bgp-6-vpna] import-route direct
[*PE2-bgp-6-vpna] commit
[*PE2-bgp-6-vpna] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv6 vpn-instance peer
command on the PEs and check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv6 vpn-instance vpna peer
BGP local router ID : 1.1.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Step 5 Establish a BGP EVPN peer relationship between the PEs.
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 1::1 as-number 100
[*PE2-bgp] peer 1::1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 1::1 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
After the configuration is complete, run the display bgp evpn peer command on
the PEs and check whether BGP EVPN peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP EVPN peer relationships have been established successfully. The following
example uses the command output on PE1.
[~PE1] display bgp evpn peer
The preceding command output shows that the VPN route 22::22/128 has two
outbound interfaces. The interfaces work in ECMP mode during packet forwarding.
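The per-flow load balancing behavior described above can be sketched as follows. This is a minimal Python model for illustration only, not actual device behavior: the hash function and interface names are assumptions, and real forwarding hardware typically hashes additional header fields (such as ports or the IPv6 flow label).

```python
import hashlib

def select_ecmp_next_hop(src_ip: str, dst_ip: str, next_hops: list) -> str:
    """Pick one of several equal-cost next hops by hashing the flow key.

    A per-flow hash keeps all packets of one flow on the same path
    (preserving packet order) while spreading different flows across
    the available links.
    """
    flow_key = f"{src_ip}|{dst_ip}".encode()
    digest = hashlib.sha256(flow_key).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

# Two equal-cost SRv6 BE paths from PE1 toward PE2 (via P1 and via P2).
# The interface names here are illustrative.
paths = ["GigabitEthernet1/0/0", "GigabitEthernet2/0/0"]
chosen = select_ecmp_next_hop("11::11", "22::22", paths)
assert chosen in paths
# The same flow always maps to the same outbound interface.
assert chosen == select_ecmp_next_hop("11::11", "22::22", paths)
```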
Run the ping command on a CE. The command output shows that CEs belonging
to the same VPN can ping each other. For example:
[~CE1] ping ipv6 -a 11::11 22::22
PING 22::22 : 56 data bytes, press CTRL_C to break
Reply from 22::22
bytes=56 Sequence=1 hop limit=62 time=690 ms
Reply from 22::22
bytes=56 Sequence=2 hop limit=62 time=5 ms
Reply from 22::22
bytes=56 Sequence=3 hop limit=62 time=4 ms
Reply from 22::22
bytes=56 Sequence=4 hop limit=62 time=6 ms
Reply from 22::22
bytes=56 Sequence=5 hop limit=62 time=3 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv6-family
route-distinguisher 100:1
vpn-target 111:1 export-extcommunity evpn
vpn-target 111:1 import-extcommunity evpn
#
bfd
#
segment-routing ipv6
encapsulation source-address 1::1
locator as1 ipv6-prefix 10::1 64 static 32
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
ipv6 bfd all-interfaces enable
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::1/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:3::1/96
isis ipv6 enable 1
#
interface GigabitEthernet3/0/0
undo shutdown
ip binding vpn-instance vpna
ipv6 enable
ipv6 address 2001:DB8:11::1/64
#
interface LoopBack1
ipv6 enable
ipv6 address 1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 3::3 as-number 100
peer 3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv6-family vpn-instance vpna
import-route direct
advertise l2vpn evpn
segment-routing ipv6 locator as1 evpn
segment-routing ipv6 best-effort evpn
peer 2001:DB8:11::2 as-number 65410
#
l2vpn-family evpn
undo policy vpn-target
peer 3::3 enable
peer 3::3 advertise encap-type srv6
#
return
● P1 configuration file
#
sysname P1
#
bfd
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:2::1/96
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2::2/64
isis ipv6 enable 1
#
return
● P2 configuration file
#
sysname P2
#
bfd
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0004.00
#
ipv6 enable topology ipv6
ipv6 bfd all-interfaces enable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:3::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
2.2.24.7 Example for Configuring VPN FRR for EVPN L3VPNv6 over SRv6 BE
This section provides an example for configuring VPN FRR for EVPN L3VPNv6 over
SRv6 BE.
Networking Requirements
On the network shown in Figure 2-78:
● PE1, PE2, and PE3 are in the same AS and run IS-IS to implement IPv6
network connectivity.
● The PEs are Level-2 devices in area 1.
It is required that a bidirectional SRv6 BE path be deployed between PE1 and PE2
as well as between PE1 and PE3 to carry EVPN L3VPNv6 services. In addition, VPN
FRR needs to be configured on PE1 to improve network reliability.
Figure 2-78 Networking of VPN FRR configuration for EVPN L3VPNv6 over SRv6
BE
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When configuring VPN FRR for EVPN L3VPNv6 over SRv6 BE, note the following:
● In a VPN FRR scenario, after the primary path recovers, traffic switches back
to this path. Because the order in which nodes undergo IGP convergence
differs, packet loss may occur during the switchback. To resolve this problem,
run the route-select delay delay-value command to configure a route
selection delay so that traffic is switched back only after forwarding entries on
the devices along the primary path are updated. The delay specified using
delay-value depends on various factors, such as the number of routes on the
devices. Set a proper delay as needed.
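The effect of the route selection delay on the switchback can be sketched with a simplified model. This is an illustrative Python sketch under stated assumptions, not device behavior: it reduces the problem to comparing the configured delay against the slowest node's convergence time.

```python
def switchback_loss_window(convergence_times_s, route_select_delay_s):
    """Model the packet-loss window when traffic switches back to a
    recovered primary path before every node along it has converged.

    convergence_times_s: assumed per-node IGP convergence times (s)
        after the primary path recovers.
    route_select_delay_s: delay configured with route-select delay.
    Returns the loss window in seconds (0.0 means a hitless switchback).
    """
    slowest = max(convergence_times_s)
    # Traffic is held on the backup path until the delay expires.
    return max(0.0, slowest - route_select_delay_s)

# Nodes along the primary path converge at different speeds.
times = [0.8, 2.5, 1.2]
assert switchback_loss_window(times, 0) == 2.5     # immediate switchback: loss
assert switchback_loss_window(times, 3.0) == 0.0   # delayed switchback: hitless
```

The sketch shows why the proper delay depends on how long the slowest device needs to update its forwarding entries.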
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure a level, and specify a network entity on each device.
3. Configure IS-IS SRv6 on PE1 and PE2.
4. Configure IPv6 VPN instances on PE1 and PE2, and connect PE1 and PE2 to
CEs.
5. Establish an EBGP peer relationship between each PE and its connected CE.
6. Establish a BGP EVPN peer relationship between PEs.
7. Configure SRv6 on PE1 and PE2.
8. Enable VPN FRR on PE1 and configure BFD to detect peer locator route
reachability to speed up VPN FRR switching.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each PE interface
● Area ID of each PE
● Level of each PE
● VPN instance name, RD, and RT on each PE
Procedure
Step 1 Enable IPv6 forwarding and configure an IPv6 address for each interface. The
following example uses the configuration of PE1. The configurations of other
devices are similar to the configuration of PE1. For configuration details, see
Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:db8:1::1 96
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] ipv6 enable
[*PE1-GigabitEthernet2/0/0] ipv6 address 2001:db8:3::1 96
[*PE1-GigabitEthernet2/0/0] commit
Step 2 Configure IS-IS.
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0002.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Configure PE3.
[~PE3] isis 1
[*PE3-isis-1] is-level level-2
[*PE3-isis-1] cost-style wide
[*PE3-isis-1] network-entity 10.0000.0000.0004.00
[*PE3-isis-1] ipv6 enable topology ipv6
[*PE3-isis-1] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE3-GigabitEthernet1/0/0] commit
[~PE3-GigabitEthernet1/0/0] quit
[*PE3] interface loopback1
[*PE3-LoopBack1] isis ipv6 enable 1
[*PE3-LoopBack1] commit
[~PE3-LoopBack1] quit
# Display IS-IS neighbor information. The following example uses the command
output on PE1.
[~PE1] display isis peer
Total Peer(s): 2
# Display IS-IS routing table information. The following example uses the
command output on PE1.
[~PE1] display isis route
Route information for ISIS(1)
-----------------------------
Step 3 Configure an IPv6 L3VPN instance on each PE and bind the IPv6 L3VPN instance
to an access-side interface.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv6-family
[*PE1-vpn-instance-vpna-af-ipv6] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv6] vpn-target 111:1 evpn
[*PE1-vpn-instance-vpna-af-ipv6] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet3/0/0
[*PE1-GigabitEthernet3/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet3/0/0] ipv6 enable
[*PE1-GigabitEthernet3/0/0] ipv6 address 2001:DB8:11::1 64
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] bgp 100
[*PE1-bgp] ipv6-family vpn-instance vpna
[*PE1-bgp-6-vpna] import-route direct
[*PE1-bgp-6-vpna] advertise l2vpn evpn
[*PE1-bgp-6-vpna] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv6-family
[*PE2-vpn-instance-vpna-af-ipv6] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv6] vpn-target 111:1 evpn
[*PE2-vpn-instance-vpna-af-ipv6] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ipv6 enable
[*PE2-GigabitEthernet2/0/0] ipv6 address 2001:DB8:22::1 64
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] bgp 100
[*PE2-bgp] ipv6-family vpn-instance vpna
[*PE2-bgp-6-vpna] import-route direct
[*PE2-bgp-6-vpna] advertise l2vpn evpn
[*PE2-bgp-6-vpna] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] ip vpn-instance vpna
[*PE3-vpn-instance-vpna] ipv6-family
[*PE3-vpn-instance-vpna-af-ipv6] route-distinguisher 300:1
[*PE3-vpn-instance-vpna-af-ipv6] vpn-target 111:1 evpn
[*PE3-vpn-instance-vpna-af-ipv6] quit
[*PE3-vpn-instance-vpna] quit
[*PE3] interface gigabitethernet2/0/0
[*PE3-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE3-GigabitEthernet2/0/0] ipv6 enable
[*PE3-GigabitEthernet2/0/0] ipv6 address 2001:DB8:33::1 64
[*PE3-GigabitEthernet2/0/0] quit
[*PE3] bgp 100
[*PE3-bgp] ipv6-family vpn-instance vpna
[*PE3-bgp-6-vpna] import-route direct
[*PE3-bgp-6-vpna] advertise l2vpn evpn
[*PE3-bgp-6-vpna] quit
[*PE3-bgp] quit
[*PE3] commit
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface LoopBack1
[*CE1-LoopBack1] ipv6 enable
[*CE1-LoopBack1] ipv6 address 11::11 128
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] router-id 10.11.1.1
[*CE1-bgp] peer 2001:DB8:11::1 as-number 100
[*CE1-bgp] ipv6-family unicast
[*CE1-bgp-af-ipv6] import-route direct
[*CE1-bgp-af-ipv6] peer 2001:DB8:11::1 enable
[*CE1-bgp-af-ipv6] commit
[~CE1-bgp-af-ipv6] quit
[~CE1-bgp] quit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv6-family vpn-instance vpna
[*PE1-bgp-6-vpna] peer 2001:DB8:11::2 as-number 65410
[*PE1-bgp-6-vpna] import-route direct
[*PE1-bgp-6-vpna] commit
[*PE1-bgp-6-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface LoopBack1
[*CE2-LoopBack1] ipv6 enable
[*CE2-LoopBack1] ipv6 address 22::22 128
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] router-id 10.12.1.1
[*CE2-bgp] peer 2001:DB8:22::1 as-number 100
[*CE2-bgp] peer 2001:DB8:33::1 as-number 100
[*CE2-bgp] ipv6-family unicast
[*CE2-bgp-af-ipv6] import-route direct
[*CE2-bgp-af-ipv6] peer 2001:DB8:22::1 enable
[*CE2-bgp-af-ipv6] peer 2001:DB8:33::1 enable
[*CE2-bgp-af-ipv6] commit
[~CE2-bgp-af-ipv6] quit
[~CE2-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 2.2.2.2
[*PE2-bgp] ipv6-family vpn-instance vpna
[*PE2-bgp-6-vpna] peer 2001:DB8:22::2 as-number 65420
[*PE2-bgp-6-vpna] import-route direct
[*PE2-bgp-6-vpna] commit
[*PE2-bgp-6-vpna] quit
[~PE2-bgp] quit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] router-id 3.3.3.3
[*PE3-bgp] ipv6-family vpn-instance vpna
[*PE3-bgp-6-vpna] peer 2001:DB8:33::2 as-number 65420
[*PE3-bgp-6-vpna] import-route direct
[*PE3-bgp-6-vpna] commit
[*PE3-bgp-6-vpna] quit
[~PE3-bgp] quit
After the configuration is complete, run the display bgp vpnv6 vpn-instance peer
command on the PEs and check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv6 vpn-instance vpna peer
BGP local router ID : 1.1.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Step 5 Establish a BGP EVPN peer relationship between the PEs.
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 1::1 as-number 100
[*PE2-bgp] peer 1::1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 1::1 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] peer 1::1 as-number 100
[*PE3-bgp] peer 1::1 connect-interface loopback 1
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 1::1 enable
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
After the configuration is complete, run the display bgp evpn peer command on
the PEs and check whether BGP EVPN peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP EVPN peer relationships have been established successfully. The following
example uses the command output on PE1.
[~PE1] display bgp evpn peer
Step 6 Configure SRv6 on the PEs.
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] encapsulation source-address 2::2
[*PE2-segment-routing-ipv6] locator as1 ipv6-prefix 20::1 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator as1
[*PE2-isis-1] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 1::1 advertise encap-type srv6
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] ipv6-family vpn-instance vpna
[*PE2-bgp-6-vpna] segment-routing ipv6 locator as1 evpn
[*PE2-bgp-6-vpna] segment-routing ipv6 best-effort evpn
[*PE2-bgp-6-vpna] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] segment-routing ipv6
[*PE3-segment-routing-ipv6] encapsulation source-address 3::3
[*PE3-segment-routing-ipv6] locator as1 ipv6-prefix 30::1 64 static 32
[*PE3-segment-routing-ipv6-locator] quit
[*PE3-segment-routing-ipv6] quit
[*PE3] isis 1
[*PE3-isis-1] segment-routing ipv6 locator as1
[*PE3-isis-1] quit
[*PE3] bgp 100
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 1::1 advertise encap-type srv6
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] ipv6-family vpn-instance vpna
[*PE3-bgp-6-vpna] segment-routing ipv6 locator as1 evpn
[*PE3-bgp-6-vpna] segment-routing ipv6 best-effort evpn
[*PE3-bgp-6-vpna] quit
[*PE3-bgp] quit
[*PE3] commit
Step 7 Configure VPN FRR and use BFD to detect locator route reachability. If the
locator route becomes unreachable, VPN FRR is performed, triggering a traffic switchover.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[~PE1-vpn-instance-vpna] ipv6-family
[~PE1-vpn-instance-vpna-af-ipv6] vpn frr
[*PE1-vpn-instance-vpna-af-ipv6] route-select delay 300
[*PE1-vpn-instance-vpna-af-ipv6] commit
[~PE1-vpn-instance-vpna-af-ipv6] quit
[~PE1-vpn-instance-vpna] quit
[~PE1] bgp 100
[~PE1-bgp] ipv6-family vpn-instance vpna
[~PE1-bgp-6-vpna] route-select delay 300
[*PE1-bgp-6-vpna] commit
[~PE1-bgp-6-vpna] quit
[~PE1-bgp] quit
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] bfd pe1tope2 bind peer-ipv6 20::
[*PE1-bfd-session-pe1tope2] discriminator local 100
[*PE1-bfd-session-pe1tope2] discriminator remote 200
[*PE1-bfd-session-pe1tope2] commit
[~PE1-bfd-session-pe1tope2] quit
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] bfd pe2tope1 bind peer-ipv6 10::
[*PE2-bfd-session-pe2tope1] discriminator local 200
[*PE2-bfd-session-pe2tope1] discriminator remote 100
[*PE2-bfd-session-pe2tope1] commit
[~PE2-bfd-session-pe2tope1] quit
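Conceptually, VPN FRR works because both a primary and a backup next hop are preinstalled in the forwarding table, and BFD shortens the time needed to detect that the primary is down. The following Python sketch illustrates this under stated assumptions; the next-hop labels are illustrative and the detection-time formula is the standard asynchronous-mode BFD relationship, not a specific device implementation.

```python
def bfd_detection_time_ms(min_interval_ms: int, detect_multiplier: int) -> int:
    """Asynchronous-mode BFD declares a session down after
    detect_multiplier consecutive intervals pass without receiving
    a control packet from the peer."""
    return min_interval_ms * detect_multiplier

def active_next_hop(primary_up: bool, primary: str, backup: str) -> str:
    """VPN FRR sketch: forwarding flips to the preinstalled backup
    next hop as soon as BFD reports the primary locator route down,
    without waiting for BGP to reconverge."""
    return primary if primary_up else backup

# Illustrative next hops matching this example's topology.
primary, backup = "PE2 (locator 20::/64)", "PE3 (locator 30::/64)"
assert active_next_hop(True, primary, backup) == primary
assert active_next_hop(False, primary, backup) == backup
# With a 10 ms interval and a multiplier of 3, failures are
# detected within about 30 ms.
assert bfd_detection_time_ms(10, 3) == 30
```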
The preceding command output shows that the VPN route 22::22/128 has a
backup outbound interface and a VPN FRR routing entry has been generated.
Run the ping command on a CE. The command output shows that CEs belonging
to the same VPN can ping each other. For example:
[~CE1] ping ipv6 -a 11::11 22::22
PING 22::22 : 56 data bytes, press CTRL_C to break
Reply from 22::22
bytes=56 Sequence=1 hop limit=62 time=57 ms
Reply from 22::22
bytes=56 Sequence=2 hop limit=62 time=5 ms
Reply from 22::22
bytes=56 Sequence=3 hop limit=62 time=5 ms
Reply from 22::22
bytes=56 Sequence=4 hop limit=62 time=5 ms
Reply from 22::22
bytes=56 Sequence=5 hop limit=62 time=4 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv6-family
route-distinguisher 100:1
vpn frr
#
interface GigabitEthernet3/0/0
undo shutdown
ip binding vpn-instance vpna
ipv6 enable
ipv6 address 2001:DB8:11::1/64
#
interface LoopBack1
ipv6 enable
ipv6 address 1::1/64
isis ipv6 enable 1
#
bfd pe1tope2 bind peer-ipv6 20::
discriminator local 100
discriminator remote 200
#
bgp 100
router-id 1.1.1.1
peer 2::2 as-number 100
peer 2::2 connect-interface LoopBack1
peer 3::3 as-number 100
peer 3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family vpn-instance vpna
import-route direct
route-select delay 300
advertise l2vpn evpn
segment-routing ipv6 locator as1 evpn
segment-routing ipv6 best-effort evpn
peer 2001:DB8:11::2 as-number 65410
#
l2vpn-family evpn
undo policy vpn-target
peer 2::2 enable
peer 2::2 advertise encap-type srv6
peer 3::3 enable
peer 3::3 advertise encap-type srv6
#
return
● PE2 configuration file
#
sysname PE2
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ipv6 enable
ipv6 address 2001:DB8:22::1/64
#
interface LoopBack1
ipv6 enable
ipv6 address 2::2/64
isis ipv6 enable 1
#
bfd pe2tope1 bind peer-ipv6 10::
discriminator local 200
discriminator remote 100
#
bgp 100
router-id 2.2.2.2
peer 1::1 as-number 100
peer 1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv6-family vpn-instance vpna
import-route direct
advertise l2vpn evpn
segment-routing ipv6 locator as1 evpn
segment-routing ipv6 best-effort evpn
peer 2001:DB8:22::2 as-number 65420
#
l2vpn-family evpn
undo policy vpn-target
peer 1::1 enable
peer 1::1 advertise encap-type srv6
#
return
● PE3 configuration file
#
sysname PE3
#
ip vpn-instance vpna
ipv6-family
route-distinguisher 300:1
vpn-target 111:1 export-extcommunity evpn
vpn-target 111:1 import-extcommunity evpn
#
segment-routing ipv6
encapsulation source-address 3::3
locator as1 ipv6-prefix 30::1 64 static 32
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0004.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:22::2/64
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:33::2/64
#
interface LoopBack1
ipv6 enable
ipv6 address 22::22/128
#
bgp 65420
router-id 10.12.1.1
peer 2001:DB8:22::1 as-number 100
peer 2001:DB8:33::1 as-number 100
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
import-route direct
peer 2001:DB8:22::1 enable
peer 2001:DB8:33::1 enable
#
return
Networking Requirements
On the network shown in Figure 2-79, PE1, the P, and PE2 are in the same AS and
run IS-IS to implement IPv6 network connectivity. It is required that a bidirectional
SRv6 BE path be deployed between PE1 and PE2 to carry EVPN VPWS services.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each router and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
router.
3. Configure EVPN VPWS and EVPL instances on each PE and bind the EVPL
instance to an access-side sub-interface.
4. Establish a BGP EVPN peer relationship between PEs.
5. Configure SRv6 BE on PEs.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name: evrf1
● RD and RT values of the EVPN instance: 100:1 and 1:1
Procedure
Step 1 Enable IPv6 forwarding on each router and configure an IPv6 address for each
interface. The following example uses the configuration of PE1. The configuration
of other routers is similar to the configuration of PE1. For configuration details,
see Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:DB8:10::1 64
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface LoopBack 1
[*PE1-LoopBack1] ip address 1.1.1.1 32
[*PE1-LoopBack1] ipv6 enable
[*PE1-LoopBack1] ipv6 address 2001:DB8:1::1 64
[*PE1-LoopBack1] quit
[*PE1] commit
Configure an IPv4 address for the loopback interface because the EVPN source
address needs to be an IPv4 address. The following example uses the
configuration of PE1. The configuration of other devices is similar to the
configuration of PE1. For configuration details, see Configuration Files in this
section.
Step 2 Configure IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] ipv6 enable topology ipv6
[*PE1-isis-1] segment-routing ipv6 locator PE1
[*PE1-isis-1] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[*PE1] interface loopback1
[*PE1-LoopBack1] isis ipv6 enable 1
[*PE1-LoopBack1] commit
[~PE1-LoopBack1] quit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
[*P-isis-1] cost-style wide
[*P-isis-1] network-entity 10.0000.0000.0002.00
[*P-isis-1] ipv6 enable topology ipv6
[*P-isis-1] quit
[*P] interface gigabitethernet 1/0/0
[*P-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P-GigabitEthernet1/0/0] commit
[~P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet 2/0/0
[*P-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P-GigabitEthernet2/0/0] commit
[~P-GigabitEthernet2/0/0] quit
[*P] interface loopback1
[*P-LoopBack1] isis ipv6 enable 1
[*P-LoopBack1] commit
[~P-LoopBack1] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] segment-routing ipv6 locator PE2
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Display IS-IS neighbor information. The following example uses the command
output on PE1.
[~PE1] display isis peer
Total Peer(s): 1
# Display IS-IS routing table information. The following example uses the
command output on PE1.
[~PE1] display isis route
Route information for ISIS(1)
-----------------------------
--------------------------------
Step 3 Configure EVPN and EVPL instances on each PE and bind the EVPL instance to an
access-side sub-interface.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] evpn vpn-instance evrf1 vpws
[*PE1-vpws-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-vpws-evpn-instance-evrf1] vpn-target 1:1
[*PE1-vpws-evpn-instance-evrf1] quit
[*PE1] evpl instance 1 srv6-mode
[*PE1-evpl-srv6-1] evpn binding vpn-instance evrf1
[*PE1-evpl-srv6-1] local-service-id 100 remote-service-id 200
[*PE1-evpl-srv6-1] quit
[*PE1] interface gigabitethernet 2/0/0.1 mode l2
[*PE1-GigabitEthernet2/0/0.1] encapsulation dot1q vid 1
[*PE1-GigabitEthernet2/0/0.1] evpl instance 1
[*PE1-GigabitEthernet2/0/0.1] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 3.3.3.3
[*PE2] evpn vpn-instance evrf1 vpws
[*PE2-vpws-evpn-instance-evrf1] route-distinguisher 100:1
[*PE2-vpws-evpn-instance-evrf1] vpn-target 1:1
[*PE2-vpws-evpn-instance-evrf1] quit
[*PE2] evpl instance 1 srv6-mode
[*PE2-evpl-srv6-1] evpn binding vpn-instance evrf1
[*PE2-evpl-srv6-1] local-service-id 200 remote-service-id 100
[*PE2-evpl-srv6-1] quit
[*PE2] interface gigabitethernet 2/0/0.1 mode l2
[*PE2-GigabitEthernet2/0/0.1] encapsulation dot1q vid 1
[*PE2-GigabitEthernet2/0/0.1] evpl instance 1
[*PE2-GigabitEthernet2/0/0.1] quit
[*PE2] commit
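In EVPN VPWS, the service IDs on the two PEs must mirror each other: PE1's local-service-id (100) is PE2's remote-service-id, and vice versa. The following Python sketch checks that pairing (the dictionaries are hand-written from the configuration in this step, not output retrieved from the devices):

```python
# Sanity-check that EVPL service IDs are mirrored between the two PEs.
# Values are copied from the evpl instance configuration above.
pe1 = {"local_service_id": 100, "remote_service_id": 200}
pe2 = {"local_service_id": 200, "remote_service_id": 100}

def service_ids_mirrored(a, b):
    """The EVPN VPWS connection comes up only if each PE's local ID
    equals the peer's remote ID, in both directions."""
    return (a["local_service_id"] == b["remote_service_id"]
            and a["remote_service_id"] == b["local_service_id"])

print(service_ids_mirrored(pe1, pe2))  # True for the configuration above
```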
Step 4 Establish a BGP EVPN peer relationship between the PEs.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] peer 2001:DB8:3::3 as-number 100
[*PE1-bgp] peer 2001:DB8:3::3 connect-interface loopback 1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 3.3.3.3
[*PE2-bgp] peer 2001:DB8:1::1 as-number 100
[*PE2-bgp] peer 2001:DB8:1::1 connect-interface loopback 1
After the configuration is complete, run the display bgp evpn peer command on
the PEs and check whether BGP EVPN peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP EVPN peer relationships have been established successfully.
Step 5 Deploy SRv6 BE between PEs.
# Configure PE1.
[~PE1] segment-routing ipv6
[*PE1-segment-routing-ipv6] encapsulation source-address 2001:DB8:1::1
[*PE1-segment-routing-ipv6] locator PE1 ipv6-prefix 2001:DB8:11::11 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 advertise encap-type srv6
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] evpl instance 1 srv6-mode
[*PE1-evpl-srv6-1] segment-routing ipv6 locator PE1
[*PE1-evpl-srv6-1] quit
[*PE1] evpn vpn-instance evrf1 vpws
[*PE1-vpws-evpn-instance-evrf1] segment-routing ipv6 best-effort
[*PE1-vpws-evpn-instance-evrf1] quit
[*PE1] commit
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] encapsulation source-address 2001:DB8:3::3
[*PE2-segment-routing-ipv6] locator PE2 ipv6-prefix 2001:DB8:30::30 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 advertise encap-type srv6
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] evpl instance 1 srv6-mode
[*PE2-evpl-srv6-1] segment-routing ipv6 locator PE2
[*PE2-evpl-srv6-1] quit
[*PE2] evpn vpn-instance evrf1 vpws
[*PE2-vpws-evpn-instance-evrf1] segment-routing ipv6 best-effort
[*PE2-vpws-evpn-instance-evrf1] quit
[*PE2] commit
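The locator command used here, for example locator PE1 ipv6-prefix 2001:DB8:11::11 64 static 32, splits the 128-bit SID space into a 64-bit locator prefix, a 32-bit static opcode segment, and a dynamically allocated remainder. The Python sketch below illustrates that split (an interpretation of the command parameters, not device behavior):

```python
import ipaddress

# How a locator such as
#   locator PE1 ipv6-prefix 2001:DB8:11::11 64 static 32
# partitions the 128-bit SID space (illustrative interpretation):
#   bits 0-63   -> locator prefix, advertised into IS-IS as a route
#   bits 64-95  -> static opcode space (the "static 32" part)
#   bits 96-127 -> dynamically allocated opcode space
prefix = ipaddress.IPv6Network("2001:DB8:11::/64")  # the /64 containing the configured value
static_bits = 32
dynamic_bits = 128 - prefix.prefixlen - static_bits

print(f"locator route advertised: {prefix}")
print(f"static opcodes available: 2**{static_bits}")
print(f"dynamic opcode space: {dynamic_bits} bits of function ID")
```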
EVPL ID : 1
State : up
EVPL Type : srv6-mode
Interface : GigabitEthernet2/0/0.1
Local MTU : 1500
Local Control Word : false
Local Redundancy Mode : all-active
Run the display bgp evpn all routing-table command on each PE. The command
output shows the EVPN route sent from the peer end. The following example uses
the command output on PE1.
[~PE1] display bgp evpn all routing-table
Local AS number : 100
EVPN-Instance evrf1:
Number of A-D Routes: 2
Network(ESI/EthTagId) NextHop
*> 0000.0000.0000.0000.0000:100 127.0.0.1
*>i 0000.0000.0000.0000.0000:200 2001:DB8:3::3
Run the display bgp evpn all routing-table ad-route command on PE1. The
command output shows the EVPN routes sent from the peer end.
[~PE1] display bgp evpn all routing-table ad-route 0000.0000.0000.0000.0000:200
EVPN-Instance evrf1:
Number of A-D Routes: 1
BGP routing table entry information of 0000.0000.0000.0000.0000:200:
Route Distinguisher: 100:1
Remote-Cross route
Label information (Received/Applied): 3/NULL
From: 2001:DB8:3::3 (3.3.3.3)
Route Duration: 0d00h03m44s
Relay IP Nexthop: FE80::3AC2:67FF:FE31:307
Relay IP Out-Interface: GigabitEthernet1/0/0
Relay Tunnel Out-Interface:
Original nexthop: 2001:DB8:3::3
Qos information : 0x0
Ext-Community: RT <1 : 1>, EVPN L2 Attributes <MTU:1500 C:0 P:1 B:0>
Prefix-sid: 2001:DB8:30::1:0:3C
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20
Route Type: 1 (Ethernet Auto-Discovery (A-D) route)
ESI: 0000.0000.0000.0000.0000, Ethernet Tag ID: 200
Not advertised to any peer yet
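The Route Type field in the output above identifies the EVPN NLRI type. A quick lookup table of the types defined in RFC 7432 and RFC 9136 (type 1 appears in this VPWS example; type 3 appears in the E-LAN example):

```python
# EVPN route types as defined in RFC 7432 (types 1-4) and RFC 9136 (type 5).
EVPN_ROUTE_TYPES = {
    1: "Ethernet Auto-Discovery (A-D) route",
    2: "MAC/IP Advertisement route",
    3: "Inclusive Multicast Ethernet Tag route",
    4: "Ethernet Segment route",
    5: "IP Prefix route",
}

# The display output above reports Route Type: 1.
print(EVPN_ROUTE_TYPES[1])
```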
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 vpws
route-distinguisher 100:1
segment-routing ipv6 best-effort
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpl instance 1 srv6-mode
evpn binding vpn-instance evrf1
local-service-id 100 remote-service-id 200
segment-routing ipv6 locator PE1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator PE1 ipv6-prefix 2001:DB8:11::11 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE1
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::1/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0.1 mode l2
#
● PE2 configuration file
#
sysname PE2
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::3
locator PE2 ipv6-prefix 2001:DB8:30::30 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE2
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::2/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 1
evpl instance 1
#
interface LoopBack1
ipv6 enable
ip address 3.3.3.3 255.255.255.255
ipv6 address 2001:DB8:3::3/64
isis ipv6 enable 1
#
bgp 100
router-id 3.3.3.3
peer 2001:DB8:1::1 as-number 100
peer 2001:DB8:1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:1::1 enable
peer 2001:DB8:1::1 advertise encap-type srv6
#
evpn source-address 3.3.3.3
#
return
Networking Requirements
On the network shown in Figure 2-80, PE1, the P, and PE2 are in the same AS and
run IS-IS to implement IPv6 network connectivity. It is required that a bidirectional
SRv6 BE path be deployed between PE1 and PE2 to carry EVPN E-LAN services.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure an EVPN instance in BD mode on each PE and bind the EVPN
instance to an access-side sub-interface.
4. Establish a BGP EVPN peer relationship between PEs.
5. Configure SRv6 BE on PEs.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name: evrf1
● RD and RT values of the EVPN instance: 100:1 and 1:1
● BD ID: 100
● Names of the SRv6 locators on PE1: PE1_ARG and PE1; names of the SRv6 locators on PE2: PE2_ARG and PE2; dynamically generated opcodes
● Args length of the locators PE1_ARG and PE2_ARG: 10 (as specified in the args parameter)
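The args 10 parameter reserves the low-order bits of each SID under the PE1_ARG and PE2_ARG locators as an argument field; End.DT2M SIDs use this field for split-horizon filtering of BUM traffic in EVPN E-LAN. A Python sketch of the resulting Locator:Function:Arguments layout follows (field widths come from the commands in this example; the sample opcode and argument values are illustrative only):

```python
# SID layout implied by: locator PE1_ARG ipv6-prefix 2001:DB8:12:: 64 args 10
# An SRv6 SID is Locator:Function:Arguments; here the low-order 10 bits
# form the argument field.
LOCATOR_BITS = 64
ARGS_BITS = 10
FUNCTION_BITS = 128 - LOCATOR_BITS - ARGS_BITS  # 54 bits

def pack_sid(locator: int, function: int, args: int) -> int:
    """Assemble a 128-bit SID from its three fields (illustrative helper)."""
    assert function < 2 ** FUNCTION_BITS and args < 2 ** ARGS_BITS
    return (locator << (128 - LOCATOR_BITS)) | (function << ARGS_BITS) | args

locator = 0x20010DB800120000   # 2001:DB8:12::/64 as a 64-bit integer
sid = pack_sid(locator, function=0x3C, args=5)  # sample opcode and argument
print(hex(sid))
```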
Procedure
Step 1 Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:DB8:10::1 64
[*PE1-GigabitEthernet1/0/0] quit
Configure an IPv4 address for the loopback interface because the EVPN source
address needs to be an IPv4 address. The following example uses the
configuration of PE1. The configuration of other devices is similar to the
configuration of PE1. For configuration details, see Configuration Files in this
section.
Step 2 Configure IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] ipv6 enable topology ipv6
[*PE1-isis-1] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[*PE1] interface loopback1
[*PE1-LoopBack1] isis ipv6 enable 1
[*PE1-LoopBack1] commit
[~PE1-LoopBack1] quit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
[*P-isis-1] cost-style wide
[*P-isis-1] network-entity 10.0000.0000.0002.00
[*P-isis-1] ipv6 enable topology ipv6
[*P-isis-1] quit
[*P] interface gigabitethernet 1/0/0
[*P-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P-GigabitEthernet1/0/0] commit
[~P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet 2/0/0
[*P-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P-GigabitEthernet2/0/0] commit
[~P-GigabitEthernet2/0/0] quit
[*P] interface loopback1
[*P-LoopBack1] isis ipv6 enable 1
[*P-LoopBack1] commit
[~P-LoopBack1] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] commit
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Display IS-IS routing table information. The following example uses the
command output on PE1.
[~PE1] display isis route
Route information for ISIS(1)
-----------------------------
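Each network-entity value configured in Step 2 is an IS-IS NET: an area address, a 6-byte system ID, and a 1-byte selector (always 00 on a router). A small Python sketch that splits a NET into these fields:

```python
def parse_net(net: str):
    """Split an IS-IS NET into (area, system ID, selector).

    The last dotted group is the NSEL (00 on a router), the preceding
    three 4-digit groups form the 6-byte system ID, and everything
    before them is the area address.
    """
    parts = net.split(".")
    nsel = parts[-1]
    system_id = ".".join(parts[-4:-1])
    area = ".".join(parts[:-4])
    return area, system_id, nsel

# NET configured on PE1 in this example
print(parse_net("10.0000.0000.0001.00"))
```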
Step 3 Configure an EVPN instance in BD mode on each PE and bind the EVPN instance
to an access-side sub-interface.
# Configure PE1.
[*PE1] evpn source-address 1.1.1.1
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-evpn-instance-evrf1] vpn-target 1:1
[*PE1-evpn-instance-evrf1] quit
[*PE1] bridge-domain 100
[*PE1-bd100] evpn binding vpn-instance evrf1
[*PE1-bd100] quit
[*PE1] interface gigabitethernet 2/0/0.1 mode l2
[*PE1-GigabitEthernet2/0/0.1] encapsulation dot1q vid 1
[*PE1-GigabitEthernet2/0/0.1] rewrite pop single
[*PE1-GigabitEthernet2/0/0.1] bridge-domain 100
[*PE1-GigabitEthernet2/0/0.1] quit
[*PE1] commit
# Configure PE2.
[*PE2] evpn source-address 3.3.3.3
[~PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] route-distinguisher 100:1
[*PE2-evpn-instance-evrf1] vpn-target 1:1
[*PE2-evpn-instance-evrf1] quit
[*PE2] bridge-domain 100
[*PE2-bd100] evpn binding vpn-instance evrf1
[*PE2-bd100] quit
[*PE2] interface gigabitethernet 2/0/0.1 mode l2
[*PE2-GigabitEthernet2/0/0.1] encapsulation dot1q vid 1
[*PE2-GigabitEthernet2/0/0.1] rewrite pop single
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 3.3.3.3
[*PE2-bgp] peer 2001:DB8:1::1 as-number 100
[*PE2-bgp] peer 2001:DB8:1::1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 enable
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 advertise encap-type srv6
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
After the configuration is complete, run the display bgp evpn peer command on
the PEs and check whether BGP EVPN peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP EVPN peer relationships have been established successfully.
Step 5 Deploy SRv6 BE between PEs.
# Configure PE1.
[~PE1] segment-routing ipv6
[*PE1-segment-routing-ipv6] encapsulation source-address 2001:DB8:1::1
[*PE1-segment-routing-ipv6] locator PE1 ipv6-prefix 2001:DB8:11:: 64
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] locator PE1_ARG ipv6-prefix 2001:DB8:12:: 64 args 10
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator PE1
[*PE1-isis-1] segment-routing ipv6 locator PE1_ARG auto-sid-disable
[*PE1-isis-1] quit
[*PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] segment-routing ipv6 locator PE1_ARG unicast-locator PE1
[*PE1-evpn-instance-evrf1] segment-routing ipv6 best-effort
[*PE1-evpn-instance-evrf1] quit
[*PE1] commit
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] encapsulation source-address 2001:DB8:3::3
[*PE2-segment-routing-ipv6] locator PE2 ipv6-prefix 2001:DB8:21:: 64
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] locator PE2_ARG ipv6-prefix 2001:DB8:22:: 64 args 10
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator PE2
[~PE1] display segment-routing ipv6 local-sid end-dt2u forwarding
My Local-SID End.DT2U Forwarding Table
--------------------------------------
Total SID(s): 1
Run the display bgp evpn all routing-table command on each PE. The command
output shows the EVPN route sent from the peer end. The following example uses
the command output on PE1.
[~PE1] display bgp evpn all routing-table
Local AS number : 100
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 2
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*> 0:32:1.1.1.1 127.0.0.1
*>i 0:32:3.3.3.3 2001:DB8:3::3
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
● P configuration file
#
sysname P
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::2/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::1/64
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ip address 2.2.2.2 255.255.255.255
ipv6 address 2001:DB8:2::2/64
isis ipv6 enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
segment-routing ipv6 best-effort
segment-routing ipv6 locator PE2_ARG unicast-locator PE2
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
bridge-domain 100
evpn binding vpn-instance evrf1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::3
locator PE2 ipv6-prefix 2001:DB8:21:: 64
locator PE2_ARG ipv6-prefix 2001:DB8:22:: 64 args 10
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE2
segment-routing ipv6 locator PE2_ARG auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::2/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 1
rewrite pop single
bridge-domain 100
#
interface LoopBack1
ipv6 enable
ip address 3.3.3.3 255.255.255.255
Networking Requirements
On the network shown in Figure 2-81:
● PE1 and ASBR1 belong to AS 100, and PE2 and ASBR2 belong to AS 200.
Intra-AS IPv6 connectivity needs to be achieved for AS 100 and AS 200
through IS-IS.
● PE1 and ASBR1 belong to IS-IS process 1, and PE2 and ASBR2 belong to IS-IS
process 10. PE1, ASBR1, PE2, and ASBR2 are all Level-1 devices.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure IS-IS SRv6 on each device.
4. Configure VPN instances on PE1 and PE2.
5. Establish an EBGP peer relationship between each PE and its connected CE.
6. Configure locator route import between the ASBRs.
7. Establish an MP-EBGP peer relationship between the PEs.
8. Configure the PEs to exchange BGP VPNv4 routes carrying SIDs through this
peer relationship.
9. Configure SRv6 forwarding on the PEs.
Data Preparation
To complete the configuration, you need the following data:
● Interface IPv6 addresses on each device
● AS numbers of the PEs and ASBRs
● IS-IS process IDs, levels, and network entity names of the PEs and ASBRs
● VPN instance name, RD, and RT on PE1 and PE2
Procedure
Step 1 Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface. The following example uses the configuration of PE1. The configuration
of other devices is similar to the configuration of PE1. For configuration details,
see Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001::1 96
[*PE1-GigabitEthernet1/0/0] commit
# Configure ASBR1.
[~ASBR1] isis 1
[*ASBR1-isis-1] is-level level-1
[*ASBR1-isis-1] cost-style wide
[*ASBR1-isis-1] network-entity 10.0000.0000.0002.00
[*ASBR1-isis-1] ipv6 enable topology ipv6
[*ASBR1-isis-1] quit
[*ASBR1] interface gigabitethernet 1/0/0
[*ASBR1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*ASBR1-GigabitEthernet1/0/0] commit
[~ASBR1-GigabitEthernet1/0/0] quit
[*ASBR1] interface loopback1
[*ASBR1-LoopBack1] isis ipv6 enable 1
[*ASBR1-LoopBack1] commit
[~ASBR1-LoopBack1] quit
# Configure ASBR2.
[~ASBR2] isis 10
[*ASBR2-isis-10] is-level level-1
[*ASBR2-isis-10] cost-style wide
[*ASBR2-isis-10] network-entity 10.0000.0000.0003.00
[*ASBR2-isis-10] ipv6 enable topology ipv6
[*ASBR2-isis-10] quit
[*ASBR2] interface gigabitethernet 1/0/0
[*ASBR2-GigabitEthernet1/0/0] isis ipv6 enable 10
[*ASBR2-GigabitEthernet1/0/0] commit
[~ASBR2-GigabitEthernet1/0/0] quit
[*ASBR2] interface loopback1
[*ASBR2-LoopBack1] isis ipv6 enable 10
[*ASBR2-LoopBack1] commit
[~ASBR2-LoopBack1] quit
# Configure PE2.
[~PE2] isis 10
[*PE2-isis-10] is-level level-1
[*PE2-isis-10] cost-style wide
[*PE2-isis-10] network-entity 10.0000.0000.0004.00
[*PE2-isis-10] ipv6 enable topology ipv6
[*PE2-isis-10] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 10
[*PE2-GigabitEthernet1/0/0] commit
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 10
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Display IS-IS neighbor information. The following example uses the command
output on PE1.
[~PE1] display isis peer
Total Peer(s): 1
Step 3 Configure a VPN instance on each PE, enable the IPv4 address family for the
instance, and bind the interface that connects a PE to a CE to the VPN instance on
that PE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*PE2-GigabitEthernet2/0/0] commit
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
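The RDs differ between the PEs (100:1 and 200:1) while the RT 111:1 is shared, which is what allows vpna on each PE to import the other's routes. On the wire, an RT such as 111:1 is an 8-byte BGP extended community; the helper below sketches the two-octet-AS encoding from RFC 4360 (illustrative only, not part of the device configuration):

```python
import struct

def encode_rt(asn: int, assigned: int) -> bytes:
    """Two-octet-AS route-target extended community:
    type 0x00 (transitive, two-octet AS specific), subtype 0x02 (route target),
    followed by a 2-byte AS number and a 4-byte assigned number."""
    return struct.pack("!BBHI", 0x00, 0x02, asn, assigned)

# Both PEs export and import RT 111:1, so the byte strings match,
# and routes cross between the vpna instances.
rt_export_pe1 = encode_rt(111, 1)
rt_import_pe2 = encode_rt(111, 1)
print(rt_export_pe1.hex(), rt_export_pe1 == rt_import_pe2)
```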
# Assign an IP address to each interface on the CEs, as shown in Figure 2-81. For
configuration details, see Configuration Files in this section.
After the configuration is complete, run the display ip vpn-instance verbose
command on the PEs to check VPN instance configurations. The command output
shows that each PE can successfully ping its connected CE.
If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address
using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a
source-ip-address dest-ip-address command to ping the CE that is connected to the remote
PE. If the source IP address is not specified, the ping operation fails.
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 11.11.11.11 32
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.1.1.1 as-number 100
[*CE1-bgp] network 11.11.11.11 32
[*CE1-bgp] quit
[*CE1] commit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.2 as-number 65410
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
# Configure PE2.
[~PE2] bgp 200
[*PE2-bgp] router-id 4.4.4.4
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 10.2.1.2 as-number 65420
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] commit
[*PE2-bgp-vpna] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 vpn-instance peer
command on the PEs to check whether BGP peer relationships have been
established between the PEs and CEs. If the Established state is displayed in the
command output, the BGP peer relationships have been established successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] encapsulation source-address 1::1
[*PE1-segment-routing-ipv6] locator as1 ipv6-prefix 10::1 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator as1
[*PE1-isis-1] commit
[~PE1-isis-1] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] encapsulation source-address 4::4
[*PE2-segment-routing-ipv6] locator as1 ipv6-prefix 40::1 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] isis 10
[*PE2-isis-10] segment-routing ipv6 locator as1
[*PE2-isis-10] commit
[~PE2-isis-10] quit
Step 6 Configure locator route import between the ASBRs.
Configure locator route import in the PE1 -> PE2 direction. Specifically, configure
BGP on ASBR1 to import the locator routes advertised by PE1 through IS-IS and
advertise the routes to ASBR2. In addition, configure IS-IS on ASBR2 to import BGP
routes, thereby also advertising the locator routes of PE1 to PE2 through IS-IS.
Then, configure locator route import in the PE2 -> PE1 direction in a similar way.
# Configure ASBR1.
[~ASBR1] bgp 100
[*ASBR1-bgp] router-id 2.2.2.2
[*ASBR1-bgp] peer 2020::2 as-number 200
[*ASBR1-bgp] peer 2020::2 ebgp-max-hop 255
[*ASBR1-bgp] ipv6-family unicast
[*ASBR1-bgp-af-ipv6] network 10:: 64
[*ASBR1-bgp-af-ipv6] quit
[*ASBR1-bgp] quit
[*ASBR1] ip ipv6-prefix p1 permit 10:: 64
[*ASBR1] route-policy rp1 permit node 10
[*ASBR1-route-policy] if-match ipv6 address prefix-list p1
[*ASBR1-route-policy] quit
[*ASBR1] isis 1
[*ASBR1-isis-1] import-route bgp route-policy rp1
[*ASBR1-isis-1] quit
[*ASBR1] commit
# Configure ASBR2.
[~ASBR2] bgp 200
[*ASBR2-bgp] router-id 3.3.3.3
[*ASBR2-bgp] peer 2020::1 as-number 100
[*ASBR2-bgp] peer 2020::1 ebgp-max-hop 255
[*ASBR2-bgp] ipv6-family unicast
[*ASBR2-bgp-af-ipv6] network 40:: 64
[*ASBR2-bgp-af-ipv6] quit
[*ASBR2-bgp] quit
[*ASBR2] ip ipv6-prefix p1 permit 40:: 64
[*ASBR2] route-policy rp1 permit node 10
[*ASBR2-route-policy] if-match ipv6 address prefix-list p1
[*ASBR2-route-policy] quit
[*ASBR2] isis 10
[*ASBR2-isis-10] import-route bgp route-policy rp1
[*ASBR2-isis-10] quit
[*ASBR2] commit
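The route-policy on ASBR1 imports only PE1's locator route into IS-IS: the ipv6-prefix list p1 matches exactly 10::/64, so no other BGP routes leak into the IGP. A Python sketch of that filter logic (the helper names are illustrative, not device commands):

```python
import ipaddress

# ip ipv6-prefix p1 permit 10:: 64 matches exactly the prefix 10::/64
# (no greater-equal/less-equal range), so an exact comparison models it.
PERMITTED = [ipaddress.IPv6Network("10::/64")]

def policy_permits(route: str) -> bool:
    """Model of route-policy rp1: permit if the route matches prefix list p1."""
    net = ipaddress.IPv6Network(route)
    return any(net == p for p in PERMITTED)

print(policy_permits("10::/64"))  # PE1's locator route: imported into IS-IS
print(policy_permits("40::/64"))  # PE2's locator route: filtered on ASBR1
```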
Step 7 Establish an MP-EBGP peer relationship between the PEs, configure the PEs to
exchange BGP VPNv4 routes carrying SIDs through this peer relationship, and
enable SRv6 BE forwarding on the PEs.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 4::4 as-number 200
[*PE1-bgp] peer 4::4 ebgp-max-hop 255
[*PE1-bgp] peer 4::4 connect-interface loopback 1
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 4::4 enable
[*PE1-bgp-af-vpnv4] peer 4::4 prefix-sid
[*PE1-bgp-af-vpnv4] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 best-effort
[*PE1-bgp-vpna] segment-routing ipv6 locator as1
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 200
[~PE2-bgp] peer 1::1 as-number 100
[*PE2-bgp] peer 1::1 ebgp-max-hop 255
[*PE2-bgp] peer 1::1 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
After the configuration is complete, run the display bgp vpnv4 all peer command
on the PEs to check whether BGP peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP peer relationships have been established successfully. The following example
uses the command output on PE1.
[~PE1] display bgp vpnv4 all peer
Run the ping command on a CE. The command output shows that CEs belonging
to the same VPN can ping each other. For example:
[~CE1] ping -a 11.11.11.11 22.22.22.22
PING 22.22.22.22: 56 data bytes, press CTRL_C to break
Reply from 22.22.22.22: bytes=56 Sequence=1 ttl=253 time=22 ms
Reply from 22.22.22.22: bytes=56 Sequence=2 ttl=253 time=13 ms
Reply from 22.22.22.22: bytes=56 Sequence=3 ttl=253 time=14 ms
Reply from 22.22.22.22: bytes=56 Sequence=4 ttl=253 time=15 ms
Reply from 22.22.22.22: bytes=56 Sequence=5 ttl=253 time=34 ms
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 1::1
locator as1 ipv6-prefix 10::1 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::1/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 4::4 as-number 200
#
#
● PE2 configuration file
#
sysname PE2
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2002::2/96
isis ipv6 enable 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 4::4/64
isis ipv6 enable 10
#
bgp 200
router-id 4.4.4.4
peer 1::1 as-number 100
peer 1::1 ebgp-max-hop 255
peer 1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 1::1 enable
peer 1::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 best-effort
peer 10.2.1.2 as-number 65420
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface LoopBack1
ip address 11.11.11.11 255.255.255.255
#
bgp 65410
peer 10.1.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
network 11.11.11.11 255.255.255.255
peer 10.1.1.1 enable
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
Networking Requirements
On the network shown in Figure 2-82:
● PE1, the P, and PE2 are in the same AS and run IS-IS to implement IPv6
network connectivity.
● PE1, the P, and PE2 are Level-1 devices in area 1.
It is required that a bidirectional SRv6 TE Policy be deployed between PE1 and PE2
to carry L3VPNv4 services.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure IS-IS SRv6 on each device.
4. Configure VPN instances on PE1 and PE2.
5. Establish an EBGP peer relationship between each PE and its connected CE.
6. Establish an MP-IBGP peer relationship between PEs.
7. Configure an SRv6 TE Policy on PE1 and PE2.
8. Configure a tunnel policy on PE1 and PE2 to import VPN traffic.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each interface on PE1, the P, and PE2
● IS-IS process ID of PE1, the P, and PE2
● IS-IS level of PE1, the P, and PE2
● VPN instance name, RD, and RT on PE1 and PE2
Procedure
Step 1 Enable the IPv6 forwarding capability and configure IPv6 addresses for interfaces.
# Configure PE1. The configuration of other devices is similar to the configuration
of PE1. For configuration details, see Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001::1 96
[*PE1-GigabitEthernet1/0/0] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] commit
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Display IS-IS neighbor information. The following example uses the command
output on PE1.
[~PE1] display isis peer
Total Peer(s): 1
# Display IS-IS routing table information. The following example uses the
command output on PE1.
[~PE1] display isis route
Route information for ISIS(1)
-----------------------------
Step 3 Configure a VPN instance on each PE, enable the IPv4 address family for the
instance, and bind the interface that connects a PE to a CE to the VPN instance on
that PE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*PE2-GigabitEthernet2/0/0] commit
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address
using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a
source-ip-address dest-ip-address command to ping the CE that is connected to the remote
PE. If the source IP address is not specified, the ping operation fails.
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 11.11.11.11 32
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.1.1.1 as-number 100
[*CE1-bgp] network 11.11.11.11 32
[*CE1-bgp] quit
[*CE1] commit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
# Configure CE2.
[~CE2] interface loopback 1
[*CE2-LoopBack1] ip address 22.22.22.22 32
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] peer 10.2.1.1 as-number 100
[*CE2-bgp] network 22.22.22.22 32
[*CE2-bgp] quit
[*CE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 2.2.2.2
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 10.2.1.2 as-number 65420
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] commit
[*PE2-bgp-vpna] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 vpn-instance
vpn-instance-name peer command on the PEs to check whether BGP peer relationships
have been established between the PEs and CEs. If Established is displayed in the
State field of the command output, the BGP peer relationships have been established
successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 1.1.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
Peer V AS MsgRcvd MsgSent OutQ Up/Down State PrefRcv
10.1.1.2 4 65410 11 9 0 00:06:37 Established 1
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 3::3 as-number 100
[*PE1-bgp] peer 3::3 connect-interface loopback 1
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 3::3 enable
[*PE1-bgp-af-vpnv4] commit
[~PE1-bgp-af-vpnv4] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1::1 as-number 100
[*PE2-bgp] peer 1::1 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 all peer command
on the PEs and check whether BGP peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP peer relationships have been established successfully.
The following example uses the command output on PE1.
[~PE1] display bgp vpnv4 all peer
Step 6 Configure SRv6 SIDs. On PEs, configure VPN routes to carry the SID attribute.
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] encapsulation source-address 1::1
[*PE1-segment-routing-ipv6] locator as1 ipv6-prefix 10::1 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 3::3 prefix-sid
[~PE1-bgp-af-vpnv4] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 traffic-engineer best-effort
[*PE1-bgp-vpna] segment-routing ipv6 locator as1
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
[~PE1-bgp] quit
[~PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator as1
[*PE1-isis-1] commit
[~PE1-isis-1] quit
# Configure the P.
[~P] segment-routing ipv6
[~P-segment-routing-ipv6] encapsulation source-address 2::2
[*P-segment-routing-ipv6] locator as1 ipv6-prefix 20::1 64 static 32
[*P-segment-routing-ipv6-locator] quit
[*P-segment-routing-ipv6] quit
[~P] isis 1
[*P-isis-1] segment-routing ipv6 locator as1
[*P-isis-1] commit
[~P-isis-1] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] encapsulation source-address 3::3
[*PE2-segment-routing-ipv6] locator as1 ipv6-prefix 30::1 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] bgp 100
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 prefix-sid
[~PE2-bgp-af-vpnv4] quit
Total Locator(s): 1
Total SID(s): 2
[~PE2] display segment-routing ipv6 local-sid end forwarding
My Local-SID End Forwarding Table
---------------------------------
Total SID(s): 2
[~P] display segment-routing ipv6 local-sid end forwarding
My Local-SID End Forwarding Table
---------------------------------
Total SID(s): 2
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] segment-list list1
[*PE1-segment-routing-ipv6-segment-list-list1] index 5 sid ipv6 20::1:0:0
[*PE1-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 30::1:0:0
[*PE1-segment-routing-ipv6-segment-list-list1] commit
[~PE1-segment-routing-ipv6-segment-list-list1] quit
[~PE1-segment-routing-ipv6] srv6-te-policy locator as1
[*PE1-segment-routing-ipv6] srv6-te policy policy1 endpoint 3::3 color 101
[*PE1-segment-routing-ipv6-policy-policy1] binding-sid 10::100
[*PE1-segment-routing-ipv6-policy-policy1] candidate-path preference 100
[*PE1-segment-routing-ipv6-policy-policy1-path] segment-list list1
[*PE1-segment-routing-ipv6-policy-policy1-path] commit
[~PE1-segment-routing-ipv6-policy-policy1-path] quit
[~PE1-segment-routing-ipv6-policy-policy1] quit
[~PE1-segment-routing-ipv6] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] segment-list list1
[*PE2-segment-routing-ipv6-segment-list-list1] index 5 sid ipv6 20::1:0:0
[*PE2-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 10::1:0:0
[*PE2-segment-routing-ipv6-segment-list-list1] commit
[~PE2-segment-routing-ipv6-segment-list-list1] quit
[~PE2-segment-routing-ipv6] srv6-te-policy locator as1
[*PE2-segment-routing-ipv6] srv6-te policy policy1 endpoint 1::1 color 101
[*PE2-segment-routing-ipv6-policy-policy1] binding-sid 30::300
[*PE2-segment-routing-ipv6-policy-policy1] candidate-path preference 100
[*PE2-segment-routing-ipv6-policy-policy1-path] segment-list list1
[*PE2-segment-routing-ipv6-policy-policy1-path] commit
[~PE2-segment-routing-ipv6-policy-policy1-path] quit
[~PE2-segment-routing-ipv6-policy-policy1] quit
[~PE2-segment-routing-ipv6] quit
After the configuration is complete, run the display srv6-te policy command to
check SRv6 TE Policy information.
# Configure PE2.
[~PE2] route-policy p1 permit node 10
[*PE2-route-policy] apply extcommunity color 0:101
[*PE2-route-policy] quit
[*PE2] bgp 100
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 route-policy p1 import
[*PE2-bgp-af-vpnv4] quit
[*PE2-bgp] quit
[*PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE2-tunnel-policy-p1] quit
[*PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE2-vpn-instance-vpna-af-ipv4] commit
[~PE2-vpn-instance-vpna-af-ipv4] quit
[~PE2-vpn-instance-vpna] quit
Summary Count : 1
Destination: 22.22.22.22/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 3::3 Neighbour: 3::3
State: Active Adv Relied Age: 00h03m15s
Tag: 0 Priority: low
Label: 3 QoSInfo: 0x0
IndirectID: 0x10000E0 Instance:
RelayNextHop: 0.0.0.0 Interface: policy1
TunnelID: 0x000000003400000001 Flags: RD
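At this point, CE1 and CE2 can access each other. As an illustrative check using this example's addresses, run the following command on CE1 to ping CE2's loopback address:
<CE1> ping -a 11.11.11.11 22.22.22.22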
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 1::1
locator as1 ipv6-prefix 10::1 64 static 32
srv6-te-policy locator as1
segment-list list1
index 5 sid ipv6 20::1:0:0
index 10 sid ipv6 30::1:0:0
srv6-te policy policy1 endpoint 3::3 color 101
binding-sid 10::100
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::1/96
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 3::3 as-number 100
peer 3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 3::3 enable
peer 3::3 route-policy p1 import
peer 3::3 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer best-effort
peer 10.1.1.2 as-number 65410
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
return
● P configuration file
#
sysname P
#
segment-routing ipv6
encapsulation source-address 2::2
locator as1 ipv6-prefix 20::1 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2002::1/96
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2::2/64
isis ipv6 enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 3::3
locator as1 ipv6-prefix 30::1 64 static 32
srv6-te-policy locator as1
segment-list list1
index 5 sid ipv6 20::1:0:0
index 10 sid ipv6 10::1:0:0
srv6-te policy policy1 endpoint 1::1 color 101
binding-sid 30::300
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2002::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 3::3/64
isis ipv6 enable 1
#
bgp 100
router-id 2.2.2.2
peer 1::1 as-number 100
peer 1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 1::1 enable
peer 1::1 route-policy p1 import
peer 1::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer best-effort
peer 10.2.1.2 as-number 65420
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
return
Networking Requirements
On the network shown in Figure 2-83, PE1, the P, and PE2 are in the same AS and
run IS-IS to implement IPv6 network connectivity. It is required that a bidirectional
SRv6 TE Policy be deployed between PE1 and PE2 to carry EVPN L3VPNv4 services.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure IS-IS SRv6 on each device.
4. Configure an IPv4 L3VPN instance on each PE and bind the IPv4 L3VPN
instance to an access-side interface.
5. Establish a BGP EVPN peer relationship between PEs.
6. Establish an EBGP peer relationship between each PE and its connected CE.
7. Configure an SRv6 TE Policy on PE1 and PE2.
8. Configure a tunnel policy on PE1 and PE2 to import VPN traffic.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each interface on PE1, the P, and PE2
● Area number of PE1, the P, and PE2
● IS-IS level of PE1, the P, and PE2
● VPN instance name, RD, and RT on PE1 and PE2
Procedure
Step 1 Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
# Configure PE1. The configuration of other devices is similar to the configuration
of PE1. For configuration details, see Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:DB8:10::1 64
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface LoopBack 1
[*PE1-LoopBack1] ipv6 enable
[*PE1-LoopBack1] ipv6 address 2001:DB8:1::1 64
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
[*P-isis-1] cost-style wide
[*P-isis-1] network-entity 10.0000.0000.0002.00
[*P-isis-1] ipv6 enable topology ipv6
[*P-isis-1] quit
[*P] interface gigabitethernet 1/0/0
[*P-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet 2/0/0
[*P-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] quit
[*PE2] commit
Total Peer(s): 1
Step 3 Configure an IPv4 L3VPN instance on each PE and bind the IPv4 L3VPN instance
to an access-side interface.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 1:1 evpn
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.11.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] advertise l2vpn evpn
[*PE1-bgp-vpna] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 1:1 evpn
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet2/0/0
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 11.1.1.1 32
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] router-id 11.1.1.1
[*CE1-bgp] peer 10.11.1.1 as-number 100
[*CE1-bgp] import-route direct
[*CE1-bgp] quit
[*CE1] commit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.11.1.2 as-number 65410
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface loopback 1
[*CE2-LoopBack1] ip address 22.2.2.2 32
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] router-id 22.2.2.2
[*CE2-bgp] peer 10.22.1.1 as-number 100
[*CE2-bgp] import-route direct
[*CE2-bgp] quit
[*CE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 3.3.3.3
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 10.22.1.2 as-number 65420
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] commit
[*PE2-bgp-vpna] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 vpn-instance
vpn-instance-name peer command on the PEs to check whether BGP peer relationships
have been established between the PEs and CEs. If Established is displayed in the
State field of the command output, the BGP peer relationships have been established
successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
BGP local router ID : 1.1.1.1
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] router-id 3.3.3.3
[*PE2-bgp] peer 2001:DB8:1::1 as-number 100
[*PE2-bgp] peer 2001:DB8:1::1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
After the configuration is complete, run the display bgp evpn peer command on
the PEs to check whether BGP EVPN peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP EVPN peer relationships have been established successfully.
The following example uses the command output on PE1.
[~PE1] display bgp evpn peer
Step 6 Configure SRv6 SIDs. On PEs, configure VPN routes to carry the SID attribute.
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] encapsulation source-address 2001:DB8:1::1
[*PE1-segment-routing-ipv6] locator PE1 ipv6-prefix 2001:DB8:100::10 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 advertise encap-type srv6
[~PE1-bgp-af-evpn] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 traffic-engineer best-effort evpn
[*PE1-bgp-vpna] segment-routing ipv6 locator PE1 evpn
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
[~PE1-bgp] quit
[~PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator PE1
[*PE1-isis-1] commit
[~PE1-isis-1] quit
# Configure the P.
[~P] segment-routing ipv6
[~P-segment-routing-ipv6] encapsulation source-address 2001:DB8:2::2
[*P-segment-routing-ipv6] locator P ipv6-prefix 2001:DB8:120::10 64 static 32
[*P-segment-routing-ipv6-locator] quit
[*P-segment-routing-ipv6] quit
[~P] isis 1
[*P-isis-1] segment-routing ipv6 locator P
[*P-isis-1] commit
[~P-isis-1] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] encapsulation source-address 2001:DB8:3::3
[*PE2-segment-routing-ipv6] locator PE2 ipv6-prefix 2001:DB8:130::3 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 advertise encap-type srv6
[~PE2-bgp-af-evpn] quit
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] segment-routing ipv6 traffic-engineer best-effort evpn
[*PE2-bgp-vpna] segment-routing ipv6 locator PE2 evpn
[*PE2-bgp-vpna] commit
[~PE2-bgp-vpna] quit
[~PE2-bgp] quit
[~PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator PE2
[*PE2-isis-1] commit
[~PE2-isis-1] quit
Total Locator(s): 1
Total SID(s): 2
[~PE2] display segment-routing ipv6 local-sid end forwarding
My Local-SID End Forwarding Table
---------------------------------
Total SID(s): 2
[~P] display segment-routing ipv6 local-sid end forwarding
My Local-SID End Forwarding Table
---------------------------------
Total SID(s): 2
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] segment-list list1
[*PE2-segment-routing-ipv6-segment-list-list1] index 5 sid ipv6 2001:DB8:120::1:0:0
[*PE2-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 2001:DB8:100::1:0:1E
[*PE2-segment-routing-ipv6-segment-list-list1] commit
[~PE2-segment-routing-ipv6-segment-list-list1] quit
[~PE2-segment-routing-ipv6] srv6-te-policy locator PE2
[*PE2-segment-routing-ipv6] srv6-te policy policy1 endpoint 2001:DB8:1::1 color 101
[*PE2-segment-routing-ipv6-policy-policy1] binding-sid 2001:DB8:130::350
[*PE2-segment-routing-ipv6-policy-policy1] candidate-path preference 100
[*PE2-segment-routing-ipv6-policy-policy1-path] segment-list list1
[*PE2-segment-routing-ipv6-policy-policy1-path] commit
[~PE2-segment-routing-ipv6-policy-policy1-path] quit
[~PE2-segment-routing-ipv6-policy-policy1] quit
[~PE2-segment-routing-ipv6] quit
After the configuration is complete, run the display srv6-te policy command to
check SRv6 TE Policy information.
The following example uses the command output on PE1.
[~PE1] display srv6-te policy
PolicyName : policy1
Color : 101 Endpoint : 2001:DB8:3::3
TunnelId :1 Binding SID : 2001:DB8:100::450
TunnelType : SRv6-TE Policy DelayTimerRemain :-
Policy State : Active State Change Time : 2019-12-30 07:02:14
Admin State : UP Traffic Statistics : Disable
Candidate-path Count : 1
# Configure PE2.
[~PE2] route-policy p1 permit node 10
[*PE2-route-policy] apply extcommunity color 0:101
[*PE2-route-policy] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 route-policy p1 import
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE2-tunnel-policy-p1] quit
[*PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1 evpn
[*PE2-vpn-instance-vpna-af-ipv4] commit
[~PE2-vpn-instance-vpna-af-ipv4] quit
[~PE2-vpn-instance-vpna] quit
After the configuration is complete, run the display ip routing-table vpn-
instance vpna command to check the IPv4 routing table of the VPN instance. The
command output shows that the VPN route has successfully recursed to the SRv6
TE Policy.
The following example uses the command output on PE1.
[~PE1] display ip routing-table vpn-instance vpna
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpna
Destinations : 8 Routes : 8
Destination: 22.2.2.2/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 2001:DB8:3::3 Neighbour: 2001:DB8:3::3
State: Active Adv Relied Age: 00h49m52s
Tag: 0 Priority: low
Label: NULL QoSInfo: 0x0
IndirectID: 0x10000DF Instance:
RelayNextHop: :: Interface: policy1
TunnelID: 0x000000003400000001 Flags: RD
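At this point, CE1 and CE2 can access each other over the SRv6 TE Policy. As an illustrative check using this example's addresses, run the following command on CE1 to ping CE2's loopback address:
<CE1> ping -a 11.1.1.1 22.2.2.2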
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
tnl-policy p1 evpn
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator PE1 ipv6-prefix 2001:DB8:100::10 64 static 32
srv6-te-policy locator PE1
segment-list list1
index 5 sid ipv6 2001:DB8:120::1:0:0
index 10 sid ipv6 2001:DB8:130::1:0:0
srv6-te policy policy1 endpoint 2001:DB8:3::3 color 101
binding-sid 2001:DB8:100::450
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE1
#
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.11.1.1 255.255.255.0
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::1/64
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:1::1/128
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:3::3 as-number 100
peer 2001:DB8:3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv4-family vpn-instance vpna
import-route direct
advertise l2vpn evpn
segment-routing ipv6 locator PE1 evpn
segment-routing ipv6 traffic-engineer best-effort evpn
peer 10.11.1.2 as-number 65410
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:3::3 enable
peer 2001:DB8:3::3 route-policy p1 import
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE2
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::2/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.22.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:3::3/128
isis ipv6 enable 1
#
bgp 100
router-id 3.3.3.3
peer 2001:DB8:1::1 as-number 100
peer 2001:DB8:1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv4-family vpn-instance vpna
import-route direct
advertise l2vpn evpn
segment-routing ipv6 locator PE2 evpn
segment-routing ipv6 traffic-engineer best-effort evpn
peer 10.22.1.2 as-number 65420
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:1::1 enable
peer 2001:DB8:1::1 route-policy p1 import
peer 2001:DB8:1::1 advertise encap-type srv6
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.11.1.2 255.255.255.0
#
interface LoopBack1
ip address 11.1.1.1 32
#
bgp 65410
router-id 11.1.1.1
peer 10.11.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 10.11.1.1 enable
#
return
Networking Requirements
On the network shown in Figure 2-84, PE1, the P, and PE2 are in the same AS and
run IS-IS to implement IPv6 network connectivity. It is required that a bidirectional
SRv6 TE Policy be deployed between PE1 and PE2 to carry EVPN L3VPNv6 services.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure IS-IS SRv6 on each device.
4. Configure an IPv6 L3VPN instance on each PE and bind the IPv6 L3VPN
instance to an access-side interface.
5. Establish a BGP EVPN peer relationship between PEs.
6. Establish an EBGP peer relationship between each PE and its connected CE.
7. Configure an SRv6 TE Policy on PE1 and PE2.
8. Configure a tunnel policy on PE1 and PE2 to import VPN traffic.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each interface on PE1, the P, and PE2
● Area number of PE1, the P, and PE2
● IS-IS level of PE1, the P, and PE2
● VPN instance name, RD, and RT on PE1 and PE2
Procedure
Step 1 Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] ipv6 enable topology ipv6
[*PE1-isis-1] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface loopback1
[*PE1-LoopBack1] isis ipv6 enable 1
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
[*P-isis-1] cost-style wide
[*P-isis-1] network-entity 10.0000.0000.0002.00
[*P-isis-1] ipv6 enable topology ipv6
[*P-isis-1] quit
[*P] interface gigabitethernet 1/0/0
[*P-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet 2/0/0
[*P-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P-GigabitEthernet2/0/0] quit
[*P] interface loopback1
[*P-LoopBack1] isis ipv6 enable 1
[*P-LoopBack1] commit
[*P-LoopBack1] quit
[*P] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] quit
[*PE2] commit
Total Peer(s): 1
Step 3 Configure an IPv6 L3VPN instance on each PE and bind the IPv6 L3VPN instance
to an access-side interface.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv6-family
[*PE1-vpn-instance-vpna-af-ipv6] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv6] vpn-target 1:1 evpn
[*PE1-vpn-instance-vpna-af-ipv6] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ipv6 enable
[*PE1-GigabitEthernet2/0/0] ipv6 address 2001:DB8:11::1 64
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] bgp 100
[*PE1-bgp] ipv6-family vpn-instance vpna
[*PE1-bgp-6-vpna] import-route direct
[*PE1-bgp-6-vpna] advertise l2vpn evpn
[*PE1-bgp-6-vpna] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv6-family
[*PE2-vpn-instance-vpna-af-ipv6] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv6] vpn-target 1:1 evpn
[*PE2-vpn-instance-vpna-af-ipv6] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ipv6 enable
[*PE2-GigabitEthernet2/0/0] ipv6 address 2001:DB8:22::1 64
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] bgp 100
[*PE2-bgp] ipv6-family vpn-instance vpna
[*PE2-bgp-6-vpna] import-route direct
[*PE2-bgp-6-vpna] advertise l2vpn evpn
[*PE2-bgp-6-vpna] quit
[*PE2-bgp] quit
[*PE2] commit
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ipv6 enable
[*CE1-LoopBack1] ipv6 address 2001:DB8:111::111 128
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 2001:DB8:11::1 as-number 100
[*CE1-bgp] ipv6-family unicast
[*CE1-bgp-af-ipv6] peer 2001:DB8:11::1 enable
[*CE1-bgp-af-ipv6] import-route direct
[*CE1-bgp-af-ipv6] quit
[*CE1-bgp] quit
[*CE1] commit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv6-family vpn-instance vpna
[*PE1-bgp-6-vpna] peer 2001:DB8:11::2 as-number 65410
[*PE1-bgp-6-vpna] import-route direct
[*PE1-bgp-6-vpna] commit
[*PE1-bgp-6-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface loopback 1
[*CE2-LoopBack1] ipv6 enable
[*CE2-LoopBack1] ipv6 address 2001:DB8:222::222 128
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] router-id 10.22.2.2
[*CE2-bgp] peer 2001:DB8:22::1 as-number 100
[*CE2-bgp] ipv6-family unicast
[*CE2-bgp-af-ipv6] peer 2001:DB8:22::1 enable
[*CE2-bgp-af-ipv6] import-route direct
[*CE2-bgp-af-ipv6] quit
[*CE2-bgp] quit
[*CE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 3.3.3.3
[*PE2-bgp] ipv6-family vpn-instance vpna
[*PE2-bgp-6-vpna] peer 2001:DB8:22::2 as-number 65420
[*PE2-bgp-6-vpna] import-route direct
[*PE2-bgp-6-vpna] commit
[*PE2-bgp-6-vpna] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv6 vpn-instance
vpn-instance-name peer command on the PEs to check whether BGP peer relationships
have been established between the PEs and CEs. If Established is displayed in the
State field of the command output, the BGP peer relationships have been established
successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv6 vpn-instance vpna peer
BGP local router ID : 1.1.1.1
Local AS number : 100
Total number of peers : 1 Peers in established state : 1
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] router-id 3.3.3.3
[*PE2-bgp] peer 2001:DB8:1::1 as-number 100
[*PE2-bgp] peer 2001:DB8:1::1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
After the configuration is complete, run the display bgp evpn peer command on
the PEs and check whether BGP EVPN peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP EVPN peer relationships have been established successfully.
The following example uses the command output on PE1.
[~PE1] display bgp evpn peer
Step 6 Configure SRv6 SIDs. On PEs, configure VPN routes to carry the SID attribute.
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] encapsulation source-address 2001:DB8:1::1
[*PE1-segment-routing-ipv6] locator PE1 ipv6-prefix 2001:DB8:100::10 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 advertise encap-type srv6
[~PE1-bgp-af-evpn] quit
[*PE1-bgp] ipv6-family vpn-instance vpna
[*PE1-bgp-6-vpna] segment-routing ipv6 traffic-engineer best-effort evpn
[*PE1-bgp-6-vpna] segment-routing ipv6 locator PE1 evpn
[*PE1-bgp-6-vpna] commit
[~PE1-bgp-6-vpna] quit
[~PE1-bgp] quit
[~PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator PE1
[*PE1-isis-1] commit
[~PE1-isis-1] quit
# Configure the P.
[~P] segment-routing ipv6
[~P-segment-routing-ipv6] encapsulation source-address 2001:DB8:2::2
[*P-segment-routing-ipv6] locator P ipv6-prefix 2001:DB8:120::10 64 static 32
[*P-segment-routing-ipv6-locator] quit
[*P-segment-routing-ipv6] quit
[~P] isis 1
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] encapsulation source-address 2001:DB8:3::3
[*PE2-segment-routing-ipv6] locator PE2 ipv6-prefix 2001:DB8:130::3 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 advertise encap-type srv6
[~PE2-bgp-af-evpn] quit
[*PE2-bgp] ipv6-family vpn-instance vpna
[*PE2-bgp-6-vpna] segment-routing ipv6 traffic-engineer best-effort evpn
[*PE2-bgp-6-vpna] segment-routing ipv6 locator PE2 evpn
[*PE2-bgp-6-vpna] commit
[~PE2-bgp-6-vpna] quit
[~PE2-bgp] quit
[~PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator PE2
[*PE2-isis-1] commit
[~PE2-isis-1] quit
After the configuration is complete, run the display segment-routing ipv6 locator and display segment-routing ipv6 local-sid end forwarding commands on each device to check locator and local End SID forwarding information. (The detailed command output is omitted here.)
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] segment-list list1
[*PE1-segment-routing-ipv6-segment-list-list1] index 5 sid ipv6 2001:DB8:120::1:0:0
[*PE1-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 2001:DB8:130::1:0:0
[*PE1-segment-routing-ipv6-segment-list-list1] commit
[~PE1-segment-routing-ipv6-segment-list-list1] quit
[~PE1-segment-routing-ipv6] srv6-te-policy locator PE1
[*PE1-segment-routing-ipv6] srv6-te policy policy1 endpoint 2001:DB8:3::3 color 101
[*PE1-segment-routing-ipv6-policy-policy1] binding-sid 2001:DB8:100::450
[*PE1-segment-routing-ipv6-policy-policy1] candidate-path preference 100
[*PE1-segment-routing-ipv6-policy-policy1-path] segment-list list1
[*PE1-segment-routing-ipv6-policy-policy1-path] commit
[~PE1-segment-routing-ipv6-policy-policy1-path] quit
[~PE1-segment-routing-ipv6-policy-policy1] quit
[~PE1-segment-routing-ipv6] quit
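Each segment-list entry above is an (index, SID) pair: when the path is instantiated, the SIDs are taken in ascending index order, and the SRH encodes them in reverse order with Segments Left pointing at the first SID to visit (per RFC 8754). This ordering can be sketched in Python (illustrative only, not device behavior):

```python
# Sketch: how an indexed segment list yields the ordered SID list for the SRH.
# Indexes only define travel order; the SRH stores the segments reversed.

entries = {5: "2001:DB8:120::1:0:0", 10: "2001:DB8:130::1:0:0"}  # index -> SID

def sid_path(segment_list):
    """SIDs in travel order: ascending index."""
    return [sid for _, sid in sorted(segment_list.items())]

def srh_segments(segment_list):
    """SIDs as encoded in the SRH: reverse of travel order (RFC 8754)."""
    return list(reversed(sid_path(segment_list)))

print(sid_path(entries))      # P's End SID first, then PE2's
print(srh_segments(entries))  # same SIDs, reversed for SRH encoding
```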
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] segment-list list1
[*PE2-segment-routing-ipv6-segment-list-list1] index 5 sid ipv6 2001:DB8:120::1:0:0
[*PE2-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 2001:DB8:100::1:0:0
[*PE2-segment-routing-ipv6-segment-list-list1] commit
[~PE2-segment-routing-ipv6-segment-list-list1] quit
[~PE2-segment-routing-ipv6] srv6-te-policy locator PE2
[*PE2-segment-routing-ipv6] srv6-te policy policy1 endpoint 2001:DB8:1::1 color 101
[*PE2-segment-routing-ipv6-policy-policy1] binding-sid 2001:DB8:130::350
[*PE2-segment-routing-ipv6-policy-policy1] candidate-path preference 100
[*PE2-segment-routing-ipv6-policy-policy1-path] segment-list list1
[*PE2-segment-routing-ipv6-policy-policy1-path] commit
[~PE2-segment-routing-ipv6-policy-policy1-path] quit
[~PE2-segment-routing-ipv6-policy-policy1] quit
[~PE2-segment-routing-ipv6] quit
After the configuration is complete, run the display srv6-te policy command to check SRv6 TE Policy information. (The detailed command output is omitted here.)
# Configure PE2.
[~PE2] route-policy p1 permit node 10
[*PE2-route-policy] apply extcommunity color 0:101
[*PE2-route-policy] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 route-policy p1 import
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE2-tunnel-policy-p1] quit
[*PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv6-family
[*PE2-vpn-instance-vpna-af-ipv6] tnl-policy p1 evpn
[*PE2-vpn-instance-vpna-af-ipv6] commit
[~PE2-vpn-instance-vpna-af-ipv6] quit
[~PE2-vpn-instance-vpna] quit
After the configuration is complete, run the display ipv6 routing-table vpn-
instance vpna command to check the routing table of the VPN instance. The
command output shows that the VPN route has successfully recursed to the SRv6
TE Policy.
The following example uses the command output on PE1.
[~PE1] display ipv6 routing-table vpn-instance vpna
Routing Table : vpna
Destinations : 6 Routes : 6
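The recursion works as follows: the import route-policy attaches the color extended community 0:101 to the VPN route, and the route then recurses to the SRv6 TE Policy whose (color, endpoint) pair matches the route's color and BGP next hop. A minimal Python sketch of this matching logic (names are hypothetical, not a device API):

```python
# Hypothetical sketch of color-based recursion of a VPN route to an
# SRv6 TE Policy: the (color, endpoint) of the policy must match the
# route's color extended community and BGP next-hop address.

policies = {
    # (color, endpoint) -> policy name; values from this example
    (101, "2001:DB8:3::3"): "policy1",
}

def resolve_tunnel(color, next_hop):
    """Return the SRv6 TE Policy the VPN route recurses to, or None."""
    return policies.get((color, next_hop))

print(resolve_tunnel(101, "2001:DB8:3::3"))  # policy1
print(resolve_tunnel(102, "2001:DB8:3::3"))  # None: no policy with that color
```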
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv6-family
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
tnl-policy p1 evpn
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator PE1 ipv6-prefix 2001:DB8:100::10 64 static 32
srv6-te-policy locator PE1
segment-list list1
index 5 sid ipv6 2001:DB8:120::1:0:0
index 10 sid ipv6 2001:DB8:130::1:0:0
srv6-te policy policy1 endpoint 2001:DB8:3::3 color 101
binding-sid 2001:DB8:100::450
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE1
#
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ipv6 enable
ipv6 address 2001:DB8:11::1/64
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::1/64
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:1::1/128
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:3::3 as-number 100
peer 2001:DB8:3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
2.2.25.4 Example for Configuring EVPN VPWS over SRv6 TE Policy (Manual
Configuration)
This section provides an example for configuring an SRv6 TE Policy to carry EVPN
VPWSs.
Networking Requirements
On the network shown in Figure 2-85, PE1, the P, and PE2 are in the same AS and
run IS-IS to implement IPv6 network connectivity. It is required that a bidirectional
SRv6 TE Policy be deployed between PE1 and PE2 to carry EVPN VPWSs.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure IS-IS SRv6 on each device.
4. Configure EVPN VPWS and EVPL instances on each PE and bind the EVPL
instance to an access-side sub-interface.
5. Establish a BGP EVPN peer relationship between PEs.
6. Configure an SRv6 TE Policy on PE1 and PE2.
7. Configure a tunnel policy on PE1 and PE2 to import VPN traffic.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
Configure an IPv4 address for the loopback interface because the EVPN source address must be an IPv4 address. The following example uses the configuration of PE1. The configuration of the other devices is similar. For configuration details, see "Configuration Files" in this section.
Step 2 Configure IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] ipv6 enable topology ipv6
[*PE1-isis-1] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface loopback1
[*PE1-LoopBack1] isis ipv6 enable 1
[*PE1-LoopBack1] quit
[*PE1] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
[*P-isis-1] cost-style wide
[*P-isis-1] network-entity 10.0000.0000.0002.00
[*P-isis-1] ipv6 enable topology ipv6
[*P-isis-1] quit
[*P] interface gigabitethernet 1/0/0
[*P-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet 2/0/0
[*P-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P-GigabitEthernet2/0/0] quit
[*P] interface loopback1
[*P-LoopBack1] isis ipv6 enable 1
[*P-LoopBack1] commit
[*P-LoopBack1] quit
[*P] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] quit
[*PE2] commit
After the configuration is complete, run the display isis peer command to check IS-IS neighbor information. The command output shows that the neighbor 0000.0000.0002 is Up.
Step 3 Configure EVPN VPWS and EVPL instances on each PE and bind the EVPL instance
to an access-side sub-interface. In addition, configure a VLAN on each CE.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] evpn vpn-instance evrf1 vpws
[*PE1-vpws-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-vpws-evpn-instance-evrf1] vpn-target 1:1
[*PE1-vpws-evpn-instance-evrf1] quit
[*PE1] evpl instance 1 srv6-mode
[*PE1-evpl-srv6-1] evpn binding vpn-instance evrf1
[*PE1-evpl-srv6-1] local-service-id 100 remote-service-id 200
[*PE1-evpl-srv6-1] quit
[*PE1] interface gigabitethernet 2/0/0.1 mode l2
[*PE1-GigabitEthernet2/0/0.1] encapsulation dot1q vid 1
[*PE1-GigabitEthernet2/0/0.1] evpl instance 1
[*PE1-GigabitEthernet2/0/0.1] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 3.3.3.3
[*PE2] evpn vpn-instance evrf1 vpws
[*PE2-vpws-evpn-instance-evrf1] route-distinguisher 100:1
[*PE2-vpws-evpn-instance-evrf1] vpn-target 1:1
[*PE2-vpws-evpn-instance-evrf1] quit
[*PE2] evpl instance 1 srv6-mode
[*PE2-evpl-srv6-1] evpn binding vpn-instance evrf1
[*PE2-evpl-srv6-1] local-service-id 200 remote-service-id 100
[*PE2-evpl-srv6-1] quit
[*PE2] interface gigabitethernet 2/0/0.1 mode l2
[*PE2-GigabitEthernet2/0/0.1] encapsulation dot1q vid 1
[*PE2-GigabitEthernet2/0/0.1] evpl instance 1
[*PE2-GigabitEthernet2/0/0.1] quit
[*PE2] commit
# Configure CE1.
<CE1> system-view
[~CE1] vlan 1
[*CE1-vlan1] quit
[*CE1] interface gigabitethernet 1/0/0
[*CE1-GigabitEthernet1/0/0] portswitch
[*CE1-GigabitEthernet1/0/0] undo shutdown
[*CE1-GigabitEthernet1/0/0] port link-type access
[*CE1-GigabitEthernet1/0/0] port default vlan 1
[*CE1-GigabitEthernet1/0/0] commit
[~CE1-GigabitEthernet1/0/0] quit
# Configure CE2.
<CE2> system-view
[~CE2] vlan 1
[*CE2-vlan1] quit
[*CE2] interface gigabitethernet 1/0/0
[*CE2-GigabitEthernet1/0/0] portswitch
[*CE2-GigabitEthernet1/0/0] undo shutdown
[*CE2-GigabitEthernet1/0/0] port link-type access
[*CE2-GigabitEthernet1/0/0] port default vlan 1
[*CE2-GigabitEthernet1/0/0] commit
[~CE2-GigabitEthernet1/0/0] quit
Step 4 Establish a BGP EVPN peer relationship between the PEs.
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] router-id 3.3.3.3
[*PE2-bgp] peer 2001:DB8:1::1 as-number 100
[*PE2-bgp] peer 2001:DB8:1::1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
After the configuration is complete, run the display bgp evpn peer command on
the PEs and check whether BGP EVPN peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP EVPN peer relationships have been established successfully.
The following example uses the command output on PE1.
[~PE1] display bgp evpn peer
Step 5 Configure SRv6 SIDs. On PEs, configure VPN routes to carry the SID attribute.
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] encapsulation source-address 2001:DB8:1::1
[*PE1-segment-routing-ipv6] locator PE1 ipv6-prefix 2001:DB8:100::10 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
# Configure the P.
[~P] segment-routing ipv6
[~P-segment-routing-ipv6] encapsulation source-address 2001:DB8:2::2
[*P-segment-routing-ipv6] locator P ipv6-prefix 2001:DB8:120::10 64 static 32
[*P-segment-routing-ipv6-locator] quit
[*P-segment-routing-ipv6] quit
[~P] isis 1
[*P-isis-1] segment-routing ipv6 locator P
[*P-isis-1] commit
[~P-isis-1] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] encapsulation source-address 2001:DB8:3::3
[*PE2-segment-routing-ipv6] locator PE2 ipv6-prefix 2001:DB8:130::3 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 advertise encap-type srv6
[~PE2-bgp-af-evpn] quit
[~PE2-bgp] quit
[~PE2] evpl instance 1 srv6-mode
[*PE2-evpl-srv6-1] segment-routing ipv6 locator PE2
[*PE2-evpl-srv6-1] quit
[*PE2] evpn vpn-instance evrf1 vpws
[*PE2-vpws-evpn-instance-evrf1] segment-routing ipv6 traffic-engineer best-effort
[*PE2-vpws-evpn-instance-evrf1] quit
[*PE2] commit
[~PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator PE2
[*PE2-isis-1] commit
[~PE2-isis-1] quit
After the configuration is complete, run the display segment-routing ipv6 local-sid end forwarding command on each device to check local End SID forwarding information. (The detailed command output is omitted here.)
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] segment-list list1
[*PE2-segment-routing-ipv6-segment-list-list1] index 5 sid ipv6 2001:DB8:120::1:0:0
[*PE2-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 2001:DB8:100::1:0:0
[*PE2-segment-routing-ipv6-segment-list-list1] commit
[~PE2-segment-routing-ipv6-segment-list-list1] quit
[~PE2-segment-routing-ipv6] srv6-te-policy locator PE2
[*PE2-segment-routing-ipv6] srv6-te policy policy1 endpoint 2001:DB8:1::1 color 101
[*PE2-segment-routing-ipv6-policy-policy1] binding-sid 2001:DB8:130::350
[*PE2-segment-routing-ipv6-policy-policy1] candidate-path preference 100
[*PE2-segment-routing-ipv6-policy-policy1-path] segment-list list1
[*PE2-segment-routing-ipv6-policy-policy1-path] commit
[~PE2-segment-routing-ipv6-policy-policy1-path] quit
[~PE2-segment-routing-ipv6-policy-policy1] quit
[~PE2-segment-routing-ipv6] quit
After the configuration is complete, run the display srv6-te policy command to
check SRv6 TE Policy information.
# Configure PE1.
[~PE1] route-policy p1 permit node 10
[*PE1-route-policy] apply extcommunity color 0:101
[*PE1-route-policy] quit
[*PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 route-policy p1 import
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE1-tunnel-policy-p1] quit
[*PE1] evpn vpn-instance evrf1 vpws
[*PE1-vpws-evpn-instance-evrf1] tnl-policy p1
[*PE1-vpws-evpn-instance-evrf1] commit
[~PE1-vpws-evpn-instance-evrf1] quit
# Configure PE2.
[~PE2] route-policy p1 permit node 10
[*PE2-route-policy] apply extcommunity color 0:101
[*PE2-route-policy] quit
The following shows EVPL instance information on PE1:
EVPL ID : 1
State : up
Evpl Type : srv6-mode
Interface : GigabitEthernet2/0/0.1
Ignore AcState : disable
Local MTU : 1500
Local Control Word : false
Local Redundancy Mode : all-active
Local DF State : primary
Local ESI : 0000.0000.0000.0000.0000
Remote Redundancy Mode : all-active
Remote Primary DF Number : 1
Remote Backup DF Number : 0
Remote None DF Number : 0
Peer IP : 2001:DB8:3::3
Origin Nexthop IP : 2001:DB8:3::3
DF State : primary
Eline Role : primary
Remote MTU : 1500
Remote Control Word : false
Remote ESI : 0000.0000.0000.0000.0000
Tunnel info : 1 tunnels
NO.0 Tunnel Type : srv6te-policy, Tunnel ID : 0x000000003400000001
Last Interface UP Timestamp : 2019-8-14 3:21:34:196
Last Designated Primary Timestamp : 2019-8-14 3:23:45:839
Last Designated Backup Timestamp : --
Run the display bgp evpn all routing-table command on each PE. The command
output shows the EVPN route sent from the peer end.
The following example uses the command output on PE1.
[~PE1] display bgp evpn all routing-table
Network(ESI/EthTagId) NextHop
*> 0000.0000.0000.0000.0000:100 127.0.0.1
*>i 0000.0000.0000.0000.0000:200 2001:DB8:3::3
EVPN-Instance evrf1:
Number of A-D Routes: 2
Network(ESI/EthTagId) NextHop
*> 0000.0000.0000.0000.0000:100 127.0.0.1
*>i 0000.0000.0000.0000.0000:200 2001:DB8:3::3
Run the display bgp evpn all routing-table ad-route command on PE1. The
command output shows the EVPN routes sent from the peer end.
[~PE1] display bgp evpn all routing-table ad-route 0000.0000.0000.0000.0000:200
EVPN-Instance evrf1:
Number of A-D Routes: 1
BGP routing table entry information of 0000.0000.0000.0000.0000:200:
Route Distinguisher: 100:1
Remote-Cross route
Label information (Received/Applied): 3/NULL
From: 2001:DB8:3::3 (3.3.3.3)
Route Duration: 0d00h02m41s
Relay Tunnel Out-Interface: policy1(srv6tepolicy)
Original nexthop: 2001:DB8:3::3
Qos information : 0x0
Ext-Community: RT <1 : 1>, Color <0 : 101>, EVPN L2 Attributes <MTU:1500 C:0 P:1 B:0>
Prefix-sid: 2001:DB8:130::1:0:5A
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255
Route Type: 1 (Ethernet Auto-Discovery (A-D) route)
ESI: 0000.0000.0000.0000.0000, Ethernet Tag ID: 200
Not advertised to any peer yet
----End
Configuration Files
● PE1 configuration file
sysname PE1
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
evpn source-address 1.1.1.1
#
return
● P configuration file
sysname P
#
segment-routing ipv6
encapsulation source-address 2001:DB8:2::2
locator P ipv6-prefix 2001:DB8:120::10 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator P
#
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::1/64
isis ipv6 enable 1
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::2/64
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:2::2/128
isis ipv6 enable 1
#
return
● PE2 configuration file
sysname PE2
#
evpn vpn-instance evrf1 vpws
route-distinguisher 100:1
segment-routing ipv6 traffic-engineer best-effort
tnl-policy p1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpl instance 1 srv6-mode
evpn binding vpn-instance evrf1
local-service-id 200 remote-service-id 100
segment-routing ipv6 locator PE2
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::3
locator PE2 ipv6-prefix 2001:DB8:130::3 64 static 32
srv6-te-policy locator PE2
segment-list list1
index 5 sid ipv6 2001:DB8:120::1:0:0
index 10 sid ipv6 2001:DB8:100::1:0:0
srv6-te policy policy1 endpoint 2001:DB8:1::1 color 101
binding-sid 2001:DB8:130::350
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE2
#
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 1
evpl instance 1
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:22::1/64
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ip address 3.3.3.3 255.255.255.255
ipv6 address 2001:DB8:3::3/128
isis ipv6 enable 1
#
bgp 100
router-id 3.3.3.3
peer 2001:DB8:1::1 as-number 100
peer 2001:DB8:1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:1::1 enable
peer 2001:DB8:1::1 route-policy p1 import
peer 2001:DB8:1::1 advertise encap-type srv6
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
evpn source-address 3.3.3.3
#
return
● CE1 configuration file
sysname CE1
#
vlan batch 1
#
interface GigabitEthernet1/0/0
portswitch
undo shutdown
port link-type access
port default vlan 1
#
return
● CE2 configuration file
sysname CE2
#
vlan batch 1
#
interface GigabitEthernet1/0/0
portswitch
undo shutdown
port link-type access
port default vlan 1
#
return
2.2.25.5 Example for Configuring EVPN VPLS over SRv6 TE Policy (Manual
Configuration)
This section provides an example for configuring an SRv6 TE Policy to carry EVPN
E-LAN services.
Networking Requirements
On the network shown in Figure 2-86, PE1, the P, and PE2 are in the same AS and
run IS-IS to implement IPv6 network connectivity. It is required that a bidirectional
SRv6 TE Policy be deployed between PE1 and PE2 to carry EVPN E-LAN services.
Interface 1, interface 2, sub-interface 1.1, and sub-interface 2.1 in this example represent GE
1/0/0, GE 2/0/0, GE 1/0/0.1, and GE 2/0/0.1, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure an EVPN instance in BD mode on each PE and bind the EVPN
instance to an access-side sub-interface.
4. Establish a BGP EVPN peer relationship between PEs.
5. Configure an SRv6 TE Policy on PE1 and PE2.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name: evrf1
● RD and RT values of the EVPN instance: 100:1 and 1:1
● BD ID: 100
● Locator names on PE1: PE1 and PE1_ARG; locator names on PE2: PE2 and PE2_ARG; SID opcodes are dynamically generated
● Args length of the locators PE1_ARG and PE2_ARG: 10 (specified using the args parameter)
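The args parameter reserves the low-order bits of each SID under the locator for an argument field (used, for example, for ESI filtering by End.DT2M SIDs in EVPN E-LAN scenarios). Conceptually, a 128-bit SRv6 SID splits into locator, function (opcode), and argument fields. The following Python sketch applies this example's widths (64-bit locator, 10-bit args); the field split is purely illustrative, not a device API:

```python
# Sketch: splitting a 128-bit SRv6 SID into locator / function / args fields.
# Widths follow this example: 64-bit locator, 10-bit args; the function
# (opcode) occupies the remaining bits.

import ipaddress

def split_sid(sid, locator_len=64, args_len=10):
    v = int(ipaddress.IPv6Address(sid))
    func_len = 128 - locator_len - args_len
    args = v & ((1 << args_len) - 1)                  # low-order args bits
    func = (v >> args_len) & ((1 << func_len) - 1)    # opcode bits
    locator = v >> (args_len + func_len)              # high-order locator bits
    return locator, func, args

# SID from the PE2_ARG locator (2001:DB8:22::/64) seen later in this example
loc, func, args = split_sid("2001:DB8:22::400:0:7800")
print(hex(loc))  # 0x20010db800220000
```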
Procedure
Step 1 Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:DB8:10::1 64
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface LoopBack 1
[*PE1-LoopBack1] ip address 1.1.1.1 32
[*PE1-LoopBack1] ipv6 enable
[*PE1-LoopBack1] ipv6 address 2001:DB8:1::1 64
[*PE1-LoopBack1] quit
[*PE1] commit
Configure an IPv4 address for the loopback interface because the EVPN source address must be an IPv4 address. The following example uses the configuration of PE1. The configuration of the other devices is similar. For configuration details, see Configuration Files in this section.
Step 2 Configure IS-IS.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] ipv6 enable topology ipv6
[*PE1-isis-1] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[*PE1] interface loopback1
[*PE1-LoopBack1] isis ipv6 enable 1
[*PE1-LoopBack1] commit
[~PE1-LoopBack1] quit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] commit
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Display IS-IS neighbor information. The following example uses the command
output on PE1.
[~PE1] display isis peer
Peer information for ISIS(1)
Total Peer(s): 1
# Display IS-IS routing table information. The following example uses the
command output on PE1.
[~PE1] display isis route
Route information for ISIS(1)
-----------------------------
Step 3 Configure an EVPN instance in BD mode on each PE and bind the EVPN instance
to an access-side sub-interface.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-evpn-instance-evrf1] vpn-target 1:1
[*PE1-evpn-instance-evrf1] quit
[*PE1] bridge-domain 100
[*PE1-bd100] evpn binding vpn-instance evrf1
[*PE1-bd100] quit
[*PE1] interface gigabitethernet 2/0/0.1 mode l2
[*PE1-GigabitEthernet2/0/0.1] encapsulation dot1q vid 1
[*PE1-GigabitEthernet2/0/0.1] rewrite pop single
[*PE1-GigabitEthernet2/0/0.1] bridge-domain 100
[*PE1-GigabitEthernet2/0/0.1] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 3.3.3.3
[*PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] route-distinguisher 100:1
[*PE2-evpn-instance-evrf1] vpn-target 1:1
[*PE2-evpn-instance-evrf1] quit
[*PE2] bridge-domain 100
[*PE2-bd100] evpn binding vpn-instance evrf1
[*PE2-bd100] quit
[*PE2] interface gigabitethernet 2/0/0.1 mode l2
[*PE2-GigabitEthernet2/0/0.1] encapsulation dot1q vid 1
[*PE2-GigabitEthernet2/0/0.1] rewrite pop single
[*PE2-GigabitEthernet2/0/0.1] bridge-domain 100
[*PE2-GigabitEthernet2/0/0.1] quit
[*PE2] commit
Step 4 Establish a BGP EVPN peer relationship between the PEs.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] peer 2001:DB8:3::3 as-number 100
[*PE1-bgp] peer 2001:DB8:3::3 connect-interface loopback 1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 enable
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 advertise encap-type srv6
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 3.3.3.3
[*PE2-bgp] peer 2001:DB8:1::1 as-number 100
[*PE2-bgp] peer 2001:DB8:1::1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 enable
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 advertise encap-type srv6
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
After the configuration is complete, run the display bgp evpn peer command on
the PEs to check whether BGP EVPN peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP EVPN peer relationships have been established successfully. The following
example uses the command output on PE1.
[~PE1] display bgp evpn peer
Step 5 Configure SRv6 SIDs. On PEs, configure VPN routes to carry the SID attribute.
# Configure PE1.
[~PE1] segment-routing ipv6
[*PE1-segment-routing-ipv6] encapsulation source-address 2001:DB8:1::1
[*PE1-segment-routing-ipv6] locator PE1 ipv6-prefix 2001:DB8:11:: 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] locator PE1_ARG ipv6-prefix 2001:DB8:12:: 64 static 32 args 10
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator PE1
[*PE1-isis-1] segment-routing ipv6 locator PE1_ARG auto-sid-disable
[*PE1-isis-1] quit
[*PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] segment-routing ipv6 locator PE1_ARG unicast-locator PE1
[*PE1-evpn-instance-evrf1] segment-routing ipv6 traffic-engineer best-effort
[*PE1-evpn-instance-evrf1] quit
[*PE1] commit
# Configure the P.
[~P] segment-routing ipv6
[~P-segment-routing-ipv6] encapsulation source-address 2001:DB8:2::2
[*P-segment-routing-ipv6] locator P ipv6-prefix 2001:DB8:120::10 64 static 32
[*P-segment-routing-ipv6-locator] quit
[*P-segment-routing-ipv6] quit
[~P] isis 1
[*P-isis-1] segment-routing ipv6 locator P
[*P-isis-1] commit
[~P-isis-1] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] encapsulation source-address 2001:DB8:3::3
[*PE2-segment-routing-ipv6] locator PE2 ipv6-prefix 2001:DB8:21:: 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] locator PE2_ARG ipv6-prefix 2001:DB8:22:: 64 static 32 args 10
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator PE2
[*PE2-isis-1] segment-routing ipv6 locator PE2_ARG auto-sid-disable
[*PE2-isis-1] quit
[*PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] segment-routing ipv6 locator PE2_ARG unicast-locator PE2
[*PE2-evpn-instance-evrf1] segment-routing ipv6 traffic-engineer best-effort
[*PE2-evpn-instance-evrf1] quit
[*PE2] commit
After the configuration is complete, run the display segment-routing ipv6 locator and display segment-routing ipv6 local-sid end forwarding commands on each device to check locator and local End SID forwarding information. (The detailed command output is omitted here.)
Step 6 Configure an SRv6 TE Policy on PE1 and PE2.
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] segment-list list1
[*PE1-segment-routing-ipv6-segment-list-list1] index 5 sid ipv6 2001:DB8:120::1:0:0
[*PE1-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 2001:DB8:21::1:0:1E
[*PE1-segment-routing-ipv6-segment-list-list1] commit
[~PE1-segment-routing-ipv6-segment-list-list1] quit
[~PE1-segment-routing-ipv6] srv6-te-policy locator PE1
[*PE1-segment-routing-ipv6] srv6-te policy policy1 endpoint 2001:DB8:3::3 color 101
[*PE1-segment-routing-ipv6-policy-policy1] binding-sid 2001:DB8:11::450
[*PE1-segment-routing-ipv6-policy-policy1] candidate-path preference 100
[*PE1-segment-routing-ipv6-policy-policy1-path] segment-list list1
[*PE1-segment-routing-ipv6-policy-policy1-path] commit
[~PE1-segment-routing-ipv6-policy-policy1-path] quit
[~PE1-segment-routing-ipv6-policy-policy1] quit
[~PE1-segment-routing-ipv6] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] segment-list list1
[*PE2-segment-routing-ipv6-segment-list-list1] index 5 sid ipv6 2001:DB8:120::1:0:0
[*PE2-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 2001:DB8:11::1:0:1E
[*PE2-segment-routing-ipv6-segment-list-list1] commit
[~PE2-segment-routing-ipv6-segment-list-list1] quit
[~PE2-segment-routing-ipv6] srv6-te-policy locator PE2
[*PE2-segment-routing-ipv6] srv6-te policy policy1 endpoint 2001:DB8:1::1 color 101
[*PE2-segment-routing-ipv6-policy-policy1] binding-sid 2001:DB8:21::350
[*PE2-segment-routing-ipv6-policy-policy1] candidate-path preference 100
[*PE2-segment-routing-ipv6-policy-policy1-path] segment-list list1
[*PE2-segment-routing-ipv6-policy-policy1-path] commit
[~PE2-segment-routing-ipv6-policy-policy1-path] quit
[~PE2-segment-routing-ipv6-policy-policy1] quit
[~PE2-segment-routing-ipv6] quit
After the configuration is complete, run the display srv6-te policy command to check SRv6 TE Policy information. (The detailed command output is omitted here.)
# Configure PE1.
[~PE1] route-policy p1 permit node 10
[*PE1-route-policy] apply extcommunity color 0:101
[*PE1-route-policy] quit
[*PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 route-policy p1 import
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE1-tunnel-policy-p1] quit
[*PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] tnl-policy p1
[*PE1-evpn-instance-evrf1] commit
[~PE1-evpn-instance-evrf1] quit
# Configure PE2.
[~PE2] route-policy p1 permit node 10
[*PE2-route-policy] apply extcommunity color 0:101
[*PE2-route-policy] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 route-policy p1 import
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE2-tunnel-policy-p1] quit
[*PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] tnl-policy p1
[*PE2-evpn-instance-evrf1] commit
[~PE2-evpn-instance-evrf1] quit
# Configure CE1.
[~CE1] interface gigabitethernet1/0/0
[~CE1-GigabitEthernet1/0/0] undo shutdown
[*CE1-GigabitEthernet1/0/0] quit
[*CE1] interface gigabitethernet1/0/0.1
[*CE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
[*CE1-GigabitEthernet1/0/0.1] ip address 10.1.1.1 255.255.255.0
[*CE1-GigabitEthernet1/0/0.1] quit
[*CE1] commit
# Configure CE2.
[~CE2] interface gigabitethernet1/0/0
[~CE2-GigabitEthernet1/0/0] undo shutdown
[*CE2-GigabitEthernet1/0/0] quit
[*CE2] interface gigabitethernet1/0/0.1
[*CE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
[*CE2-GigabitEthernet1/0/0.1] ip address 10.1.1.2 255.255.255.0
[*CE2-GigabitEthernet1/0/0.1] quit
[*CE2] commit
Run the display segment-routing ipv6 local-sid end-dt2u forwarding command on PE1 to check End.DT2U SID forwarding information. (The detailed command output is omitted here.)
Run the display bgp evpn all routing-table command on each PE. The command
output shows the EVPN routes sent from the peer end. The following example
uses the command output on PE1.
[~PE1] display bgp evpn all routing-table
Local AS number : 100
EVPN-Instance evrf1:
Number of Mac Routes: 3
Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr)
NextHop
*> 0:48:38c2-6721-0300:0:0.0.0.0 0.0.0.0
*i 2001:DB8:3::3
*> 0:48:38c2-6761-0300:0:0.0.0.0 0.0.0.0
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 2
Network(EthTagId/IpAddrLen/OriginalIp)
NextHop
*> 0:32:1.1.1.1 127.0.0.1
*>i 0:32:3.3.3.3 2001:DB8:3::3
[~PE1] display bgp evpn all routing-table inclusive-route 0:32:3.3.3.3
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 1
BGP routing table entry information of 0:32:3.3.3.3:
Route Distinguisher: 100:1
Remote-Cross route
Label information (Received/Applied): 3/NULL
From: 2001:DB8:3::3 (3.3.3.3)
Route Duration: 0d00h06m19s
Relay IP Nexthop: FE80::3AC2:67FF:FE51:302
Relay IP Out-Interface: GigabitEthernet1/0/0
Relay Tunnel Out-Interface: policy1(srv6tepolicy)
Original nexthop: 2001:DB8:3::3
Qos information : 0x0
Ext-Community: RT <1 : 1>, Color <0 : 101>
Prefix-sid: 2001:DB8:22::400:0:7800
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255
PMSI: Flags 0, Ingress Replication, Label 0:0:0(3), Tunnel Identifier: 3.3.3.3
Route Type: 3 (Inclusive Multicast Route)
Ethernet Tag ID: 0, Originator IP:3.3.3.3/32
Not advertised to any peer yet
Run the ping command on the CEs. The command output shows that the CEs
belonging to the same VPN can ping each other. For example:
[~CE1] ping 10.1.1.2
PING 10.1.1.2: 56 data bytes, press CTRL_C to break
Reply from 10.1.1.2: bytes=56 Sequence=1 ttl=255 time=3 ms
Reply from 10.1.1.2: bytes=56 Sequence=2 ttl=255 time=4 ms
Reply from 10.1.1.2: bytes=56 Sequence=3 ttl=255 time=5 ms
Reply from 10.1.1.2: bytes=56 Sequence=4 ttl=255 time=4 ms
Reply from 10.1.1.2: bytes=56 Sequence=5 ttl=255 time=8 ms
----End
Configuration Files
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.1 255.255.255.0
#
return
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
segment-routing ipv6 traffic-engineer best-effort
segment-routing ipv6 locator PE1_ARG unicast-locator PE1
tnl-policy p1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
bridge-domain 100
evpn binding vpn-instance evrf1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator PE1 ipv6-prefix 2001:DB8:11:: 64 static 32
locator PE1_ARG ipv6-prefix 2001:DB8:12:: 64 static 32 args 10
srv6-te-policy locator PE1
segment-list list1
index 5 sid ipv6 2001:DB8:120::1:0:0
index 10 sid ipv6 2001:DB8:21::1:0:1E
srv6-te policy policy1 endpoint 2001:DB8:3::3 color 101
binding-sid 2001:DB8:11::450
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE1
segment-routing ipv6 locator PE1_ARG auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::1/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 1
rewrite pop single
bridge-domain 100
#
interface LoopBack1
ipv6 enable
ip address 1.1.1.1 255.255.255.255
ipv6 address 2001:DB8:1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:3::3 as-number 100
peer 2001:DB8:3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
l2vpn-family evpn
policy vpn-target
peer 2001:DB8:3::3 enable
peer 2001:DB8:3::3 route-policy p1 import
peer 2001:DB8:3::3 advertise encap-type srv6
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
evpn source-address 1.1.1.1
#
return
● P configuration file
#
sysname P
#
segment-routing ipv6
encapsulation source-address 2001:DB8:2::2
locator P ipv6-prefix 2001:DB8:120::10 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator P
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::2/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::1/64
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ip address 2.2.2.2 255.255.255.255
ipv6 address 2001:DB8:2::2/64
isis ipv6 enable 1
#
return
tnl-policy p1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
bridge-domain 100
evpn binding vpn-instance evrf1
#
segment-routing ipv6
encapsulation source-address 2001:db8:3::3
locator PE2 ipv6-prefix 2001:DB8:21:: 64 static 32
locator PE2_ARG ipv6-prefix 2001:DB8:22:: 64 static 32 args 10
srv6-te-policy locator PE2
segment-list list1
index 5 sid ipv6 2001:DB8:120::1:0:0
index 10 sid ipv6 2001:DB8:11::1:0:1E
srv6-te policy policy1 endpoint 2001:DB8:1::1 color 101
binding-sid 2001:DB8:21::350
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE2
segment-routing ipv6 locator PE2_ARG auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::2/64
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 1
rewrite pop single
bridge-domain 100
#
interface LoopBack1
ipv6 enable
ip address 3.3.3.3 255.255.255.255
ipv6 address 2001:DB8:3::3/64
isis ipv6 enable 1
#
bgp 100
router-id 3.3.3.3
peer 2001:DB8:1::1 as-number 100
peer 2001:DB8:1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
l2vpn-family evpn
policy vpn-target
peer 2001:DB8:1::1 enable
peer 2001:DB8:1::1 route-policy p1 import
peer 2001:DB8:1::1 advertise encap-type srv6
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
evpn source-address 3.3.3.3
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip address 10.1.1.2 255.255.255.0
#
return
Networking Requirements
On the network shown in Figure 2-87:
● PE1, the P, PE2, and PE3 are in the same AS and run IS-IS to implement IPv6
network connectivity.
● PE1, the P, PE2, and PE3 are Level-1 devices in area 1.
It is required that SRv6 TE Policies be deployed in both directions between PE1 and PE2
to carry L3VPNv4 services. In this example, PE2 is the egress of the SRv6 TE Policy,
and PE3 provides protection for PE2 to enhance reliability.
Figure 2-87 Networking for L3VPNv4 over SRv6 TE Policy egress protection
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure IS-IS SRv6 on each device.
4. Configure VPN instances on PE1 and PE2.
5. Establish an EBGP peer relationship between each PE and its connected CE.
6. Establish an MP-IBGP peer relationship between PEs.
7. Configure an SRv6 TE Policy on PE1 and PE2.
8. Configure a tunnel policy on PE1 and PE2 to import VPN traffic.
9. Configure egress protection on PE3.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each interface on PE1, the P, PE2, and PE3
● IS-IS process ID of PE1, the P, PE2, and PE3
● IS-IS level of PE1, the P, PE2, and PE3
● VPN instance name, RD, and RT on PE1, PE2, and PE3
Procedure
Step 1 Enable IPv6 forwarding and configure IPv6 addresses for interfaces.
# Configure PE1. The configuration of other devices is similar to the configuration
of PE1. For configuration details, see Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001::1 96
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-LoopBack1] quit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-1
[*P-isis-1] cost-style wide
[*P-isis-1] network-entity 10.0000.0000.0002.00
[*P-isis-1] ipv6 enable topology ipv6
[*P-isis-1] quit
[*P] interface gigabitethernet 1/0/0
[*P-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P-GigabitEthernet1/0/0] commit
[~P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet 2/0/0
[*P-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P-GigabitEthernet2/0/0] commit
[~P-GigabitEthernet2/0/0] quit
[*P] interface loopback1
[*P-LoopBack1] isis ipv6 enable 1
[*P-LoopBack1] commit
[~P-LoopBack1] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet 3/0/0
[*PE2-GigabitEthernet3/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet3/0/0] commit
[~PE2-GigabitEthernet3/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Configure PE3.
[~PE3] isis 1
[*PE3-isis-1] is-level level-1
[*PE3-isis-1] cost-style wide
[*PE3-isis-1] network-entity 10.0000.0000.0004.00
[*PE3-isis-1] ipv6 enable topology ipv6
[*PE3-isis-1] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE3-GigabitEthernet1/0/0] commit
[~PE3-GigabitEthernet1/0/0] quit
[*PE3] interface gigabitethernet 3/0/0
[*PE3-GigabitEthernet3/0/0] isis ipv6 enable 1
[*PE3-GigabitEthernet3/0/0] commit
[~PE3-GigabitEthernet3/0/0] quit
[*PE3] interface loopback1
[*PE3-LoopBack1] isis ipv6 enable 1
[*PE3-LoopBack1] commit
[~PE3-LoopBack1] quit
# Display IS-IS routing table information. The following example uses the
command output on PE1.
[~PE1] display isis route
Route information for ISIS(1)
-----------------------------
Step 3 Configure a VPN instance on each PE, enable the IPv4 address family for the
instance, and bind the interface that connects a PE to a CE to the VPN instance on
that PE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
# Configure PE3.
[~PE3] ip vpn-instance vpna
[*PE3-vpn-instance-vpna] ipv4-family
[*PE3-vpn-instance-vpna-af-ipv4] route-distinguisher 300:1
[*PE3-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE3-vpn-instance-vpna-af-ipv4] quit
[*PE3-vpn-instance-vpna] quit
[*PE3] interface gigabitethernet 2/0/0
[*PE3-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE3-GigabitEthernet2/0/0] ip address 10.3.1.1 24
[*PE3-GigabitEthernet2/0/0] commit
[~PE3-GigabitEthernet2/0/0] quit
[*PE3] commit
If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address
using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a
source-ip-address dest-ip-address command to ping the CE that is connected to the remote
PE. If the source IP address is not specified, the ping operation fails.
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 11.11.11.11 32
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] peer 10.1.1.1 as-number 100
[*CE1-bgp] network 11.11.11.11 32
[*CE1-bgp] quit
[*CE1] commit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.2 as-number 65410
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface loopback 1
[*CE2-LoopBack1] ip address 22.22.22.22 32
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] peer 10.2.1.1 as-number 100
[*CE2-bgp] peer 10.3.1.1 as-number 100
[*CE2-bgp] network 22.22.22.22 32
[*CE2-bgp] quit
[*CE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 2.2.2.2
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 10.2.1.2 as-number 65420
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] commit
[*PE2-bgp-vpna] quit
[~PE2-bgp] quit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] router-id 3.3.3.3
[*PE3-bgp] ipv4-family vpn-instance vpna
[*PE3-bgp-vpna] peer 10.3.1.2 as-number 65420
[*PE3-bgp-vpna] import-route direct
[*PE3-bgp-vpna] commit
[*PE3-bgp-vpna] quit
[~PE3-bgp] quit
After the configuration is complete, run the display bgp vpnv4 vpn-instance
vpn-instance-name peer command on the PEs to check whether BGP peer relationships
have been established between the PEs and CEs. If the Established state is displayed
in the command output, the BGP peer relationships have been established successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
# Configure PE2.
[~PE2] bgp 100
[~PE2-bgp] peer 1::1 as-number 100
[*PE2-bgp] peer 1::1 connect-interface loopback 1
[~PE2-bgp] peer 4::4 as-number 100
[*PE2-bgp] peer 4::4 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 enable
[*PE2-bgp-af-vpnv4] peer 4::4 enable
[*PE2-bgp-af-vpnv4] commit
[~PE2-bgp-af-vpnv4] quit
[~PE2-bgp] quit
# Configure PE3.
[~PE3] bgp 100
[~PE3-bgp] peer 1::1 as-number 100
[*PE3-bgp] peer 1::1 connect-interface loopback 1
[~PE3-bgp] peer 3::3 as-number 100
[*PE3-bgp] peer 3::3 connect-interface loopback 1
[*PE3-bgp] ipv4-family vpnv4
[*PE3-bgp-af-vpnv4] peer 1::1 enable
[*PE3-bgp-af-vpnv4] peer 3::3 enable
[*PE3-bgp-af-vpnv4] commit
[~PE3-bgp-af-vpnv4] quit
[~PE3-bgp] quit
After the configuration is complete, run the display bgp vpnv4 all peer command
on the PEs and check whether BGP peer relationships have been established
between the PEs. If the Established state is displayed in the command output, the
BGP peer relationships have been established successfully.
The following example uses the command output on PE1.
[~PE1] display bgp vpnv4 all peer
Step 6 Configure SRv6 SIDs, and configure VPN routes on the PEs to carry the SID attribute.
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] encapsulation source-address 1::1
[*PE1-segment-routing-ipv6] locator as1 ipv6-prefix 10::1 64 static 32
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 3::3 prefix-sid
[*PE1-bgp-af-vpnv4] peer 4::4 prefix-sid
[~PE1-bgp-af-vpnv4] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 traffic-engineer best-effort
[*PE1-bgp-vpna] segment-routing ipv6 locator as1
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
[~PE1-bgp] quit
[~PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator as1
[*PE1-isis-1] commit
[~PE1-isis-1] quit
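Each static SID configured above is carved out of a /64 locator block: the high-order 64 bits identify the node (the locator), and the low-order bits carry the function opcode. The following Python sketch illustrates this decomposition; it is a conceptual model only, and the helper name and opcode values are assumptions, not device behavior:

```python
import ipaddress

def build_sid(locator_prefix: str, opcode: int) -> str:
    """Combine a locator prefix with a function opcode into a full SRv6 SID.

    Illustrative sketch: real devices allocate opcodes from the static or
    dynamic range configured on the locator.
    """
    net = ipaddress.IPv6Network(locator_prefix, strict=False)
    return str(ipaddress.IPv6Address(int(net.network_address) | opcode))

# Locator as1 on PE1 is 10::1/64, so a static SID with opcode 0x100
# falls inside the 10::/64 block.
print(build_sid("10::1/64", 0x100))  # 10::100
```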
# Configure the P.
[~P] segment-routing ipv6
[~P-segment-routing-ipv6] encapsulation source-address 2::2
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] encapsulation source-address 3::3
[*PE2-segment-routing-ipv6] locator as1 ipv6-prefix 30::1 64 static 32
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] bgp 100
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 prefix-sid
[*PE2-bgp-af-vpnv4] peer 4::4 prefix-sid
[~PE2-bgp-af-vpnv4] quit
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] segment-routing ipv6 traffic-engineer best-effort
[*PE2-bgp-vpna] segment-routing ipv6 locator as1
[*PE2-bgp-vpna] commit
[~PE2-bgp-vpna] quit
[~PE2-bgp] quit
[~PE2] isis 1
[*PE2-isis-1] segment-routing ipv6 locator as1
[*PE2-isis-1] commit
[~PE2-isis-1] quit
# Configure PE3.
[~PE3] segment-routing ipv6
[~PE3-segment-routing-ipv6] encapsulation source-address 4::4
[*PE3-segment-routing-ipv6] locator as1 ipv6-prefix 40::1 64 static 32
[*PE3-segment-routing-ipv6-locator] quit
[*PE3-segment-routing-ipv6] quit
[*PE3] bgp 100
[*PE3-bgp] ipv4-family vpnv4
[*PE3-bgp-af-vpnv4] peer 1::1 prefix-sid
[*PE3-bgp-af-vpnv4] peer 3::3 prefix-sid
[~PE3-bgp-af-vpnv4] quit
[*PE3-bgp] ipv4-family vpn-instance vpna
[*PE3-bgp-vpna] segment-routing ipv6 traffic-engineer best-effort
[*PE3-bgp-vpna] segment-routing ipv6 locator as1
[*PE3-bgp-vpna] commit
[~PE3-bgp-vpna] quit
[~PE3-bgp] quit
[~PE3] isis 1
[*PE3-isis-1] segment-routing ipv6 locator as1
[*PE3-isis-1] commit
[~PE3-isis-1] quit
[~PE2] display segment-routing ipv6 local-sid end forwarding
My Local-SID End Forwarding Table
---------------------------------
Total SID(s): 2
[~P] display segment-routing ipv6 local-sid end forwarding
My Local-SID End Forwarding Table
---------------------------------
Total SID(s): 2
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] segment-list list1
After the configuration is complete, run the display srv6-te policy command to
check SRv6 TE Policy information.
The following example uses the command output on PE1.
[~PE1] display srv6-te policy
PolicyName : policy1
Color : 101 Endpoint : 3::3
TunnelId : 1 Binding SID : 10::100
TunnelType : SRv6-TE Policy DelayTimerRemain : -
Policy State : Active State Change Time : 2019-04-08 09:13:35
Admin State : UP Traffic Statistics : Disable
Candidate-path Count : 1
Weight : 1
SID :
20::1:0:0
30::1:0:0
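The binding SID shown in the output (10::100) acts as a single-SID handle for the whole policy: traffic whose outer IPv6 destination matches the binding SID is steered into the policy's segment list. A hypothetical sketch of that lookup follows; the table contents mirror policy1's configuration in this example, and the function name is an illustration only:

```python
# Hypothetical BSID steering table: binding SID -> segment list of the policy.
# Values mirror policy1 on PE1 (binding-sid 10::100, segment-list list1).
bsid_table = {"10::100": ["20::1:0:0", "30::1:0:0"]}

def steer_by_bsid(outer_dst: str):
    """Return the segment list to impose if the destination is a binding SID."""
    return bsid_table.get(outer_dst)

print(steer_by_bsid("10::100"))  # ['20::1:0:0', '30::1:0:0']
```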
# Configure PE2.
[~PE2] route-policy p1 permit node 10
[*PE2-route-policy] apply extcommunity color 0:101
[*PE2-route-policy] quit
[*PE2] bgp 100
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 route-policy p1 import
[*PE2-bgp-af-vpnv4] quit
[*PE2-bgp] quit
[*PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE2-tunnel-policy-p1] quit
[*PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE2-vpn-instance-vpna-af-ipv4] commit
[~PE2-vpn-instance-vpna-af-ipv4] quit
[~PE2-vpn-instance-vpna] quit
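The route-policy and tunnel policy above implement color-based steering: a VPN route that carries the color 101 extended community and whose BGP next hop equals a policy endpoint is recursed to that SRv6 TE Policy. The simplified model below illustrates the matching rule; the dictionary and function are assumptions for illustration, with values taken from this example:

```python
# Hypothetical color-based steering: an SRv6 TE Policy is keyed by
# <color, endpoint>; a VPN route matches by <color community, next hop>.
policies = {(101, "1::1"): "policy1"}  # PE2's policy toward PE1

def select_policy(route_color: int, next_hop: str):
    """Return the SRv6 TE Policy a colored route recurses to, if any."""
    return policies.get((route_color, next_hop))

print(select_policy(101, "1::1"))  # policy1
print(select_policy(101, "2::2"))  # None: next hop is not a policy endpoint
```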
Destination: 22.22.22.22/32
Protocol: IBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 3::3 Neighbour: 3::3
State: Active Adv Relied Age: 00h03m15s
Tag: 0 Priority: low
Label: 3 QoSInfo: 0x0
IndirectID: 0x10000E0 Instance:
RelayNextHop: 0.0.0.0 Interface: policy1
TunnelID: 0x000000003400000001 Flags: RD
# Configure PE3.
[~PE3] segment-routing ipv6
[*PE3-segment-routing-ipv6] locator as1 ipv6-prefix 40::1 64 static 32
[*PE3-segment-routing-ipv6-locator] opcode ::999 end-m mirror-locator 30::1 64
[*PE3-segment-routing-ipv6-locator] quit
[*PE3-segment-routing-ipv6] quit
[*PE3] commit
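The End.M SID configured above makes PE3 a protector for PE2: PE3 mirrors PE2's locator (30::1/64), so when FRR on the P reroutes traffic to PE3 after PE2 fails, PE3 recognizes SIDs from the mirrored range and terminates the VPN traffic on PE2's behalf. The membership check at the heart of this can be sketched as follows (illustrative Python only; the table and function are assumptions):

```python
import ipaddress

# Hypothetical mirror table on PE3: locators of protected egress nodes.
mirror_locators = [ipaddress.IPv6Network("30::1/64", strict=False)]  # PE2's locator

def can_protect(sid: str) -> bool:
    """True if the SID belongs to a mirrored locator, i.e. PE3 can
    terminate it on behalf of the failed egress PE."""
    return any(ipaddress.IPv6Address(sid) in net for net in mirror_locators)

print(can_protect("30::1:0:0"))  # True: a SID from PE2's locator
print(can_protect("40::1:0:0"))  # False: PE3's own locator, not mirrored
```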
# Configure the P.
[~P] isis 1
[*P-isis-1] ipv6 frr
[*P-isis-1-ipv6-frr] loop-free-alternate level-1
[*P-isis-1-ipv6-frr] quit
[*P-isis-1] quit
[*P] commit
After the configuration is complete, run the display segment-routing ipv6
local-sid end-m forwarding command to check SRv6 local SID table information.
[~PE3] display segment-routing ipv6 local-sid end-m forwarding
Total SID(s): 1
Run the ping command again. The command output shows that CEs belonging to
the same VPN can still ping each other. For example:
[~CE1] ping -a 11.11.11.11 22.22.22.22
PING 22.22.22.22: 56 data bytes, press CTRL_C to break
Reply from 22.22.22.22: bytes=56 Sequence=1 ttl=253 time=7 ms
Reply from 22.22.22.22: bytes=56 Sequence=2 ttl=253 time=5 ms
Reply from 22.22.22.22: bytes=56 Sequence=3 ttl=253 time=4 ms
Reply from 22.22.22.22: bytes=56 Sequence=4 ttl=253 time=5 ms
Reply from 22.22.22.22: bytes=56 Sequence=5 ttl=253 time=5 ms
The preceding verification results indicate that PE3 can provide egress protection
for the SRv6 TE Policy between PE1 and PE2.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 1::1
locator as1 ipv6-prefix 10::1 64 static 32
srv6-te-policy locator as1
segment-list list1
index 5 sid ipv6 20::1:0:0
index 10 sid ipv6 30::1:0:0
srv6-te policy policy1 endpoint 3::3 color 101
binding-sid 10::100
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::1/96
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 3::3 as-number 100
peer 3::3 connect-interface LoopBack1
peer 4::4 as-number 100
peer 4::4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 3::3 enable
peer 3::3 route-policy p1 import
peer 3::3 prefix-sid
peer 4::4 enable
peer 4::4 route-policy p1 import
peer 4::4 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer best-effort
peer 10.1.1.2 as-number 65410
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
return
● P configuration file
#
sysname P
#
segment-routing ipv6
encapsulation source-address 2::2
locator as1 ipv6-prefix 20::1 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
ipv6 frr
loop-free-alternate level-1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2002::1/96
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 4::4/64
isis ipv6 enable 1
#
bgp 100
router-id 3.3.3.3
peer 1::1 as-number 100
peer 1::1 connect-interface LoopBack1
peer 3::3 as-number 100
peer 3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 1::1 enable
peer 1::1 prefix-sid
peer 3::3 enable
peer 3::3 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
peer 10.3.1.2 as-number 65420
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface LoopBack1
ip address 11.11.11.11 255.255.255.255
#
bgp 65410
peer 10.1.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
network 11.11.11.11 255.255.255.255
peer 10.1.1.1 enable
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
#
interface LoopBack1
Networking Requirements
On the network shown in Figure 2-88:
● PE1 and ASBR1 belong to AS 100, and PE2 and ASBR2 belong to AS 200. IS-IS is
used to implement intra-AS IPv6 connectivity in AS 100 and AS 200.
● PE1 and ASBR1 belong to IS-IS process 1, and PE2 and ASBR2 belong to IS-IS
process 10. PE1, ASBR1, PE2, and ASBR2 are all Level-1 devices.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity on each
device.
3. Configure VPN instances on PE1 and PE2.
4. Establish an EBGP peer relationship between each PE and its connected CE.
5. Configure IS-IS to advertise SRv6 locator routes on the PEs and allocate intra-
AS static End.X SIDs to the PEs.
6. Configure BGP EPEv6 on ASBR1 and ASBR2 and allocate inter-AS static End.X
SIDs.
7. Establish an MP-EBGP peer relationship between the PEs, and configure the
PEs to exchange BGP VPNv4 routes carrying SIDs through this peer
relationship.
8. Configure an SRv6 TE Policy between PE1 and PE2.
9. Configure a tunnel policy on PE1 and PE2 to import VPN traffic.
Precautions
When configuring inter-AS L3VPNv4 over SRv6 TE Policy, note the following:
● When configuring BGP EPEv6, you need to enable the BGP-LS address family
peer relationship so that BGP EPEv6 can take effect.
Data Preparation
To complete the configuration, you need the following data:
● Interface IPv6 addresses on each device
● AS numbers of the PEs and ASBRs
● IS-IS process IDs, levels, and network entity names of the PEs and ASBRs
● VPN instance name, RD, and RT on PE1 and PE2
Procedure
Step 1 Enable IPv6 forwarding on each device and configure an IPv6 address for each
interface. The following example uses the configuration of PE1. The configuration
of other devices is similar to the configuration of PE1. For configuration details,
see Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001::1 96
[*PE1-GigabitEthernet1/0/0] commit
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] ipv6 enable topology ipv6
[*PE1-isis-1] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[*PE1] interface loopback1
[*PE1-LoopBack1] isis ipv6 enable 1
[*PE1-LoopBack1] commit
[~PE1-LoopBack1] quit
# Configure ASBR1.
[~ASBR1] isis 1
[*ASBR1-isis-1] is-level level-1
[*ASBR1-isis-1] cost-style wide
[*ASBR1-isis-1] network-entity 10.0000.0000.0002.00
[*ASBR1-isis-1] ipv6 enable topology ipv6
[*ASBR1-isis-1] quit
[*ASBR1] interface gigabitethernet 1/0/0
[*ASBR1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*ASBR1-GigabitEthernet1/0/0] commit
[~ASBR1-GigabitEthernet1/0/0] quit
[*ASBR1] interface loopback1
[*ASBR1-LoopBack1] isis ipv6 enable 1
[*ASBR1-LoopBack1] commit
[~ASBR1-LoopBack1] quit
# Configure ASBR2.
[~ASBR2] isis 10
[*ASBR2-isis-10] is-level level-1
[*ASBR2-isis-10] cost-style wide
[*ASBR2-isis-10] network-entity 10.0000.0000.0003.00
[*ASBR2-isis-10] ipv6 enable topology ipv6
[*ASBR2-isis-10] quit
[*ASBR2] interface gigabitethernet 1/0/0
[*ASBR2-GigabitEthernet1/0/0] isis ipv6 enable 10
[*ASBR2-GigabitEthernet1/0/0] commit
[~ASBR2-GigabitEthernet1/0/0] quit
[*ASBR2] interface loopback1
[*ASBR2-LoopBack1] isis ipv6 enable 10
[*ASBR2-LoopBack1] commit
[~ASBR2-LoopBack1] quit
# Configure PE2.
[~PE2] isis 10
[*PE2-isis-10] is-level level-1
[*PE2-isis-10] cost-style wide
[*PE2-isis-10] network-entity 10.0000.0000.0004.00
[*PE2-isis-10] ipv6 enable topology ipv6
[*PE2-isis-10] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 10
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 10
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Display IS-IS neighbor information. The following example uses the command
output on PE1.
[~PE1] display isis peer
Total Peer(s): 1
Step 3 Configure a VPN instance on each PE, enable the IPv4 address family for the
instance, and bind the interface that connects a PE to a CE to the VPN instance on
that PE.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
# Assign an IP address to each interface on the CEs, as shown in Figure 2-88. For
configuration details, see Configuration Files in this section.
After the configuration is complete, run the display ip vpn-instance verbose
command on the PEs to check VPN instance configurations. Then check that each
PE can successfully ping its connected CE.
If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address
using the -a source-ip-address parameter in the ping -vpn-instance vpn-instance-name -a
source-ip-address dest-ip-address command to ping the CE that is connected to the remote
PE. If the source IP address is not specified, the ping operation fails.
Step 4 Establish an EBGP peer relationship between each PE and its connected CE.
# Configure CE1.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 11.11.11.11 32
[*CE1-LoopBack1] quit
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.2 as-number 65410
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] commit
[*PE1-bgp-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface loopback 1
[*CE2-LoopBack1] ip address 22.22.22.22 32
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] peer 10.2.1.1 as-number 100
[*CE2-bgp] network 22.22.22.22 32
[*CE2-bgp] quit
[*CE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 2.2.2.2
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 10.2.1.2 as-number 65420
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] commit
[*PE2-bgp-vpna] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 vpn-instance
vpn-instance-name peer command on the PEs to check whether BGP peer relationships
have been established between the PEs and CEs. If the Established state is displayed
in the command output, the BGP peer relationships have been established successfully.
The following example uses the command output on PE1 to show that a BGP peer
relationship has been established between PE1 and CE1.
[~PE1] display bgp vpnv4 vpn-instance vpna peer
Step 5 Configure IS-IS to advertise SRv6 locator routes on the PEs and allocate intra-AS
static End.X SIDs to the PEs.
# Configure PE1.
[~PE1] segment-routing ipv6
[~PE1-segment-routing-ipv6] encapsulation source-address 1::1
[*PE1-segment-routing-ipv6] locator as1 ipv6-prefix 10::1 64 static 32
[*PE1-segment-routing-ipv6-locator] opcode ::100 end-x interface GigabitEthernet1/0/0 nexthop 2001::2 no-psp
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] isis 1
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] encapsulation source-address 4::4
[*PE2-segment-routing-ipv6] locator as1 ipv6-prefix 40::1 64 static 32
[*PE2-segment-routing-ipv6-locator] opcode ::200 end-x interface GigabitEthernet1/0/0 nexthop 2002::1
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] isis 10
[*PE2-isis-10] ipv6 enable topology ipv6
[*PE2-isis-10] segment-routing ipv6 locator as1 auto-sid-disable
[*PE2-isis-10] commit
[~PE2-isis-10] quit
Total Locator(s): 1
Total SID(s): 1
Step 6 Configure BGP EPEv6 on ASBR1 and ASBR2 and allocate inter-AS static End.X SIDs.
NOTICE
When configuring BGP EPEv6, you need to enable the BGP-LS address family peer
relationship so that BGP EPEv6 can take effect.
# Configure ASBR1.
[~ASBR1] segment-routing ipv6
[~ASBR1-segment-routing-ipv6] encapsulation source-address 2::2
[*ASBR1-segment-routing-ipv6] locator as1 ipv6-prefix 20::1 64 static 32
# Configure ASBR2.
[~ASBR2] segment-routing ipv6
[~ASBR2-segment-routing-ipv6] encapsulation source-address 3::3
[*ASBR2-segment-routing-ipv6] locator as1 ipv6-prefix 30::1 64 static 32
[*ASBR2-segment-routing-ipv6-locator] opcode ::100 end-x interface GigabitEthernet1/0/0 nexthop 2002::2
[*ASBR2-segment-routing-ipv6-locator] quit
[*ASBR2-segment-routing-ipv6] quit
[*ASBR2] isis 10
[*ASBR2-isis-10] segment-routing ipv6 locator as1 auto-sid-disable
[*ASBR2-isis-10] commit
[~ASBR2-isis-10] quit
[~ASBR2] bgp 200
[*ASBR2-bgp] router-id 3.3.3.3
[*ASBR2-bgp] segment-routing ipv6 egress-engineering locator as1
[*ASBR2-bgp] peer 2020::1 as-number 100
[*ASBR2-bgp] peer 2020::1 ebgp-max-hop 255
[*ASBR2-bgp] peer 2020::1 egress-engineering srv6 static-sid no-psp-usp 30::200
[*ASBR2-bgp] ipv6-family unicast
[*ASBR2-bgp-af-ipv6] peer 2020::1 enable
[*ASBR2-bgp-af-ipv6] quit
[*ASBR2-bgp] link-state-family unicast
[*ASBR2-bgp-af-ls] peer 2020::1 enable
[*ASBR2-bgp-af-ls] quit
[*ASBR2-bgp] quit
[*ASBR2] commit
Nexthop : 2020::2
Out Interface : GigabitEthernet2/0/0
Step 7 Establish an MP-EBGP peer relationship between the PEs, and configure the PEs to
exchange BGP VPNv4 routes carrying SIDs through this peer relationship.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] peer 4::4 as-number 200
[*PE1-bgp] peer 4::4 ebgp-max-hop 255
[*PE1-bgp] peer 4::4 connect-interface loopback 1
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 4::4 enable
[*PE1-bgp-af-vpnv4] peer 4::4 prefix-sid
[~PE1-bgp-af-vpnv4] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 traffic-engineer
[*PE1-bgp-vpna] segment-routing ipv6 locator as1
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 200
[~PE2-bgp] peer 1::1 as-number 100
[*PE2-bgp] peer 1::1 ebgp-max-hop 255
[*PE2-bgp] peer 1::1 connect-interface loopback 1
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 enable
[*PE2-bgp-af-vpnv4] peer 1::1 prefix-sid
[~PE2-bgp-af-vpnv4] quit
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] segment-routing ipv6 traffic-engineer
[*PE2-bgp-vpna] segment-routing ipv6 locator as1
[*PE2-bgp-vpna] commit
[~PE2-bgp-vpna] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 all peer command
on the PEs to check whether a BGP peer relationship has been established
between the PEs. If the Established state is displayed in the command output, the
BGP peer relationship has been established successfully. The following example
uses the command output on PE1.
[~PE1] display bgp vpnv4 all peer
# Configure PE2.
[~PE2] segment-routing ipv6
[~PE2-segment-routing-ipv6] segment-list list1
[*PE2-segment-routing-ipv6-segment-list-list1] index 5 sid ipv6 40::200
[*PE2-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 30::200
[*PE2-segment-routing-ipv6-segment-list-list1] index 15 sid ipv6 20::200
[*PE2-segment-routing-ipv6-segment-list-list1] commit
[~PE2-segment-routing-ipv6-segment-list-list1] quit
[~PE2-segment-routing-ipv6] srv6-te-policy locator as1
[*PE2-segment-routing-ipv6] srv6-te policy policy1 endpoint 1::1 color 101
[*PE2-segment-routing-ipv6-policy-policy1] binding-sid 40::1000
[*PE2-segment-routing-ipv6-policy-policy1] candidate-path preference 100
[*PE2-segment-routing-ipv6-policy-policy1-path] segment-list list1
[*PE2-segment-routing-ipv6-policy-policy1-path] commit
[~PE2-segment-routing-ipv6-policy-policy1-path] quit
[~PE2-segment-routing-ipv6-policy-policy1] quit
[~PE2-segment-routing-ipv6] quit
After the configuration is complete, run the display srv6-te policy command to
check SRv6 TE Policy information.
Weight :1
SID :
10::100
20::100
30::100
# Configure PE1.
[~PE1] route-policy p1 permit node 10
[*PE1-route-policy] apply extcommunity color 0:101
[*PE1-route-policy] quit
[*PE1] bgp 100
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 4::4 route-policy p1 import
[*PE1-bgp-af-vpnv4] quit
[*PE1-bgp] quit
[*PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE1-tunnel-policy-p1] quit
[*PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE1-vpn-instance-vpna-af-ipv4] commit
[~PE1-vpn-instance-vpna-af-ipv4] quit
[~PE1-vpn-instance-vpna] quit
# Configure PE2.
[~PE2] route-policy p1 permit node 10
[*PE2-route-policy] apply extcommunity color 0:101
[*PE2-route-policy] quit
[*PE2] bgp 200
[*PE2-bgp] ipv4-family vpnv4
[*PE2-bgp-af-vpnv4] peer 1::1 route-policy p1 import
[*PE2-bgp-af-vpnv4] quit
[*PE2-bgp] quit
[*PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
[*PE2-tunnel-policy-p1] quit
[*PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1
[*PE2-vpn-instance-vpna-af-ipv4] commit
[~PE2-vpn-instance-vpna-af-ipv4] quit
[~PE2-vpn-instance-vpna] quit
Destination: 22.22.22.22/32
Protocol: EBGP Process ID: 0
Preference: 255 Cost: 0
NextHop: 4::4 Neighbour: 4::4
State: Active Adv Relied Age: 00h00m01s
Tag: 0 Priority: low
Label: 3 QoSInfo: 0x0
IndirectID: 0x10000E5 Instance:
RelayNextHop: :: Interface: policy1
TunnelID: 0x000000003400000001 Flags: RD
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 1::1
locator as1 ipv6-prefix 10::1 64 static 32
opcode ::100 end-x interface GigabitEthernet1/0/0 nexthop 2001::2 no-psp
srv6-te-policy locator as1
segment-list list1
index 5 sid ipv6 10::100
index 10 sid ipv6 20::100
index 15 sid ipv6 30::100
srv6-te policy policy1 endpoint 4::4 color 101
binding-sid 10::1000
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::1/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 4::4 as-number 200
peer 4::4 ebgp-max-hop 255
peer 4::4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 4::4 enable
peer 4::4 route-policy p1 import
peer 4::4 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer
peer 10.1.1.2 as-number 65410
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
return
● ASBR1 configuration file
#
sysname ASBR1
#
segment-routing ipv6
locator as1 ipv6-prefix 20::1 64 static 32
opcode ::200 end-x interface GigabitEthernet1/0/0 nexthop 2001::1 no-psp
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2020::1/96
#
interface LoopBack1
ipv6 enable
ipv6 address 2::2/64
isis ipv6 enable 1
#
bgp 100
router-id 2.2.2.2
segment-routing ipv6 egress-engineering locator as1
peer 2020::2 as-number 200
peer 2020::2 ebgp-max-hop 255
peer 2020::2 egress-engineering srv6 static-sid no-psp-usp 20::100
#
ipv4-family unicast
undo synchronization
#
link-state-family unicast
peer 2020::2 enable
#
ipv6-family unicast
undo synchronization
network 10:: 64
import-route direct
peer 2020::2 enable
#
return
● ASBR2 configuration file
#
sysname ASBR2
#
segment-routing ipv6
locator as1 ipv6-prefix 30::1 64 static 32
opcode ::100 end-x interface GigabitEthernet1/0/0 nexthop 2002::2
#
isis 10
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
import-route bgp
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2002::1/96
#
return
● PE2 configuration file
#
sysname PE2
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2002::2/96
isis ipv6 enable 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 4::4/64
isis ipv6 enable 10
#
bgp 200
router-id 4.4.4.4
peer 1::1 as-number 100
peer 1::1 ebgp-max-hop 255
peer 1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 1::1 enable
peer 1::1 route-policy p1 import
peer 1::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer
peer 10.2.1.2 as-number 65420
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface LoopBack1
ip address 11.11.11.11 255.255.255.255
#
bgp 65410
peer 10.1.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
network 11.11.11.11 255.255.255.255
peer 10.1.1.1 enable
#
return
3 EVPN
3.1 EVPN
Definition
Ethernet virtual private network (EVPN) is used for Layer 2 internetworking. EVPN
is similar to BGP/MPLS IP VPN. Using extended BGP reachability information,
EVPN implements MAC address learning and advertisement between Layer 2
networks at different sites on the control plane instead of on the data plane.
Purpose
As services grow rapidly, different sites have an increasingly strong need for Layer
2 interworking. VPLS, which is generally used for such a purpose, has the following
shortcomings:
● Lack of support for load balancing: VPLS does not support traffic load
balancing in multi-homing networking scenarios.
● High network resource usage: Interworking between sites requires all PEs
serving these sites on the ISP backbone network to be fully meshed, with PWs
established between any two PEs. If a large number of PEs exist, PW
establishment will consume a significant amount of network resources. In
addition, a large number of ARP messages must be transmitted for MAC
address learning. These ARP messages not only consume network bandwidth
but may also consume CPU resources on remote sites that do not need to
learn the MAC addresses carried in them.
EVPN addresses these problems as follows:
● EVPN does not require PEs on the ISP backbone network to be fully meshed.
PEs on an EVPN use BGP to communicate, and BGP provides the route
reflection function. PEs can establish BGP peer relationships only with RRs
deployed on the ISP backbone network, with RRs reflecting EVPN routes. This
implementation significantly reduces network complexity and minimizes the
number of network signaling messages.
● EVPN enables PEs to learn local MAC addresses through ARP and to learn
remote MAC addresses, together with their corresponding IP addresses,
through MAC/IP advertisement routes, storing all of this information locally.
When a PE then receives an ARP request, it searches the locally cached MAC
and IP address information based on the destination IP address in the
request. If a matching entry is found, the PE returns an ARP reply directly.
This prevents ARP request packets from being broadcast to other PEs, thereby
reducing network resource consumption.
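The local ARP reply behavior described above can be sketched as a simple cache lookup. The following is an illustrative Python sketch, not the device implementation; the cache structure and function names (arp_cache, handle_arp_request) are assumptions.

```python
# Illustrative sketch of EVPN ARP suppression on a PE (names are assumed).
# The PE caches MAC/IP bindings learned from remote MAC/IP advertisement
# routes and answers ARP requests locally instead of broadcasting them.

arp_cache = {}  # destination IP -> MAC address

def learn_mac_ip_route(ip, mac):
    """Cache a binding learned from a remote PE's MAC/IP advertisement route."""
    arp_cache[ip] = mac

def handle_arp_request(target_ip):
    """Reply locally if the binding is cached; otherwise flood as before."""
    mac = arp_cache.get(target_ip)
    if mac is not None:
        return ("reply", mac)   # ARP reply returned directly to the requester
    return ("flood", None)      # unknown target: broadcast to other PEs

learn_mac_ip_route("10.1.1.2", "00e0-fc12-3456")
```

Only destinations absent from the cache are still flooded, which is what reduces broadcast ARP traffic between PEs.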
Benefits
EVPN offers the following benefits:
● Improved link usage and transmission efficiency: EVPN supports load
balancing, fully utilizing network resources and reducing network congestion.
● Reduced network resource consumption: By deploying RRs on the public
network, EVPN decreases the number of logical connections required between
PEs on the public network. In addition, EVPN enables PEs to use locally stored
MAC addresses to respond to ARP Request messages from connected sites,
minimizing the number of broadcast ARP Request messages.
EVPN Networking
An EVPN has a similar network structure to a BGP/MPLS IP VPN. In EVPN
networking, CEs at each site connect to PEs on the ISP backbone network. These
PEs have EVPN instances configured and establish BGP EVPN peer relationships
and MPLS/SR tunnels with each other. Unlike a BGP/MPLS IP VPN, an EVPN has its
sites on Layer 2 networks. Therefore, the PEs learn MAC addresses but not IP
routes from the CEs, and then advertise the learned MAC addresses to other sites
using EVPN routes.
In EVPN networking, a CE can be single-homed to one PE or multi-homed to
several PEs. On the network shown in Figure 3-1, CE1, CE2, and CE4 use the
single-homing mode, whereas CE3 uses the multi-homing mode. Load balancing
can be implemented in CE multi-homing networking.
EVPN defines Ethernet segment identifiers (ESIs) to identify links between PEs and
CEs. Links connecting multiple PEs to the same CE have the same ESI, and links
connecting multiple PEs to different CEs have different ESIs. PEs exchange routes
that carry ESIs, so that a PE can discover other PEs connecting to the same CE as
itself.
A PE can use both IPv4 and IPv6 addresses to establish EVPN peer relationships with other
PEs. An MPLS, VXLAN, or SR tunnel can be established between IPv4 EVPN peers to carry
services, while an SRv6 tunnel needs to be established between IPv6 EVPN peers to carry
services. The EVPN routes that carry the SID can be sent only to IPv6 EVPN peers. The EVPN
routes that do not carry the SID can be sent only to IPv4 EVPN peers.
EVPN Routes
To enable sites to learn MAC addresses from each other, EVPN defines a new type
of BGP network layer reachability information (NLRI), called the EVPN NLRI. EVPN
NLRI includes the following types of EVPN routes:
● Ethernet A-D route: carries the reachability of the local PE to the MAC
addresses of its connected sites. PEs advertise Ethernet A-D routes after
establishing a BGP EVPN peer relationship. Ethernet A-D routes can be
classified as Ethernet A-D Per ES routes or Ethernet A-D Per EVI routes.
Ethernet A-D Per ES routes are used in fast convergence, redundancy mode,
and split horizon scenarios. Ethernet A-D Per EVI routes are used in alias
scenarios. Figure 3-2 shows the NLRI packet format of Ethernet A-D routes.
According to the standard, the MPLS Label field in a per-ES route is all 0s. By
default, the value of the MPLS Label field in a per-ES route on the current device
is equal to an ESI label value. After the peer esad-route-compatible enable
command is run, the MPLS Label field is changed to all 0s.
● MAC/IP advertisement route: carries EVPN instance RD, ESI, and label
information on the local PE. A PE uses MAC advertisement routes to advertise
unicast MAC address reachability information to other PEs. For details, see
Unicast MAC Address Transmission. Figure 3-3 shows the format of an
EVPN NLRI specific to the MAC/IP advertisement route. In addition to the NLRI,
a MAC/IP advertisement route also carries the RT of the EVPN instance and the
next-hop address.
An ARP route carries host MAC and IP addresses and a Layer 2 VNI. An IRB route
carries host MAC and IP addresses, a Layer 2 VNI, and a Layer 3 VNI. Therefore,
IRB routes carry ARP routes and can be used to advertise IP routes as well as ARP
entries.
An ND route carries the following valid information: host MAC address, host IPv6
address, and Layer 2 VNI. An IRBv6 route carries the following valid information:
host MAC address, host IPv6 address, Layer 2 VNI, and Layer 3 VNI. An IRBv6
route includes information about an ND route and therefore can be used to
advertise both a host IPv6 route and host ND entry.
● Inclusive multicast route: After a BGP peer relationship is established between
PEs, the PEs transmit inclusive multicast routes to each other. An inclusive
multicast route carries the RD and RTs of the EVPN instance on the local PE,
source IP address (usually the loopback address of the local PE), and Provider
Multicast Service Interface (PMSI). The PMSI is used to carry the tunnel type
(ingress replication or mLDP) and tunnel label information in multicast
packets. The PMSI and RTs are carried in the route attribute information, and
the RD and source IP address are carried in the NLRI. Figure 3-4 shows the
format of the EVPN NLRI specific to an inclusive multicast route. In this
situation, EVPN involves broadcast, unknown unicast, and multicast (BUM)
traffic. A PE forwards the BUM traffic that it receives to other PEs in P2MP
mode. A tunnel is established to transmit BUM traffic between PEs through
inclusive multicast routes. For details, see BUM Packet Transmission.
– Ethernet Tag ID: The value of this field is all zeros except that it is the
same as the local service ID in an EVPN VPWS scenario or the same as
the BD tag value in BD EVPN access in VLAN-aware mode.
– IP Address Length: a 1-byte field representing the length of the source IP
address configured on the local PE.
– Originating Router's IP Address: a 4-byte or 16-byte field representing the
source IP address configured on the local PE.
● Ethernet segment route: carries ESI information, source IP address and RD
(source IP:0) on the local PE. PEs connecting to the same CE use Ethernet
segment routes to discover each other. This type of route is used in
Designated forwarder election. Figure 3-5 shows the format of an EVPN
NLRI specific to the Ethernet segment route.
In the case where a CE is dual-homed to two PEs, an EVPN ESI label will be encapsulated
into the BUM packets exchanged between the two PEs to prevent loops.
3.1.3 EVPN-MPLS
Related Concepts
● Designated forwarder (DF) election
On the network shown in Figure 3-10, CE1 is dual-homed to PE1 and PE2,
and CE2 sends BUM traffic to PE1 and PE2. In this scenario, CE1 receives the
same copy of traffic from both PE1 and PE2, wasting network resources. To
solve this problem, EVPN allows one PE to be elected to forward BUM traffic.
The elected PE is referred to as the DF. If PE1 is elected, it becomes the
primary DF, with PE2 functioning as the backup DF. The primary DF forwards
BUM traffic from CE2 to CE1.
If a PE interface connecting to a CE is Down, the PE functions as a backup DF.
If a PE interface connecting to a CE is Up, the PE and other PEs with Up
interfaces elect a primary DF using the following procedure:
a. The PEs establish BGP EVPN peer relationships with each other and then
exchange Ethernet segment routes.
b. Upon receipt of the Ethernet segment routes, each PE generates a
multi-homing PE list based on the ESIs carried in the Ethernet segment routes.
An Ethernet segment may have multiple VLANs configured. In this case, the
smallest VLAN ID is used as the V value.
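The election over the multi-homing PE list can be sketched as follows, assuming the standard modulo rule from RFC 7432 (the DF is the PE at index V mod N, where V is the smallest VLAN ID on the ES and N is the number of PEs). The function name is an assumption for illustration.

```python
# Illustrative sketch of per-ES DF election (RFC 7432-style modulo rule).
# Every PE builds the same ordered multi-homing PE list from Ethernet
# segment routes, so all PEs compute the same DF independently.

def elect_df(pe_ips, vlan_ids):
    """Return the source IP of the elected DF for this Ethernet segment."""
    ordered = sorted(pe_ips)   # real devices order addresses numerically;
                               # string ordering suffices for this sketch
    v = min(vlan_ids)          # smallest VLAN ID configured on the ES
    return ordered[v % len(ordered)]
```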
● Split horizon
On the network shown in Figure 3-11, CE1 is dual-homed to PE1 and PE2
and has load balancing enabled. If PE1 and PE2 have established a BGP EVPN
peer relationship with each other, after PE1 receives BUM traffic from CE1,
PE1 forwards the BUM traffic to PE2. If PE2 forwards BUM traffic to CE1, a
loop will occur. To prevent this problem, EVPN uses split horizon. After PE1
forwards the BUM traffic to PE2, PE2 checks the EVPN ESI label carried in the
traffic. If the ESI carried in the label equals the ESI for the link between PE2
and CE1, PE2 does not forward the traffic to CE1, preventing a loop.
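The split horizon check described above amounts to a single comparison. The following Python sketch is illustrative only; the function and parameter names are assumptions.

```python
# Illustrative sketch of EVPN split horizon. A PE forwards BUM traffic to
# a CE only if the ESI label carried in the traffic differs from the ESI
# of its own link to that CE; a match means the traffic originated on the
# same Ethernet segment, and forwarding it back would create a loop.

def should_forward_to_ce(received_esi_label, local_ce_link_esi):
    """False when the packet came from the same Ethernet segment."""
    return received_esi_label != local_ce_link_esi
```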
Background
The use of MPLS networks increases requirements for service scalability of
network architecture. Different MANs of a service provider or collaborative
backbone networks of different service providers often span multiple ASs.
Implementation
Through seamless MPLS networking, all services (with inter-AS Option C
support) are carried over an end-to-end LSP signaled only by access nodes, and
all network-side faults are rectified using the same transport-layer
convergence technology without affecting service transmission.
Usage Scenario
Seamless MPLS supports the following networking solutions:
● Intra-AS seamless MPLS: The access, aggregation, and core layers are
deployed within a single AS. This solution mainly applies to mobile bearer
networks.
● Inter-AS seamless MPLS: The access and aggregation layers are deployed
within one AS, whereas the core layer is deployed in another AS. This
solution mainly applies to enterprise services.
Network Description
Deployment
Reliability
Seamless MPLS network reliability can be improved using various functions. If a
network fault occurs, a device immediately detects the fault and switches traffic
to a standby link.
Figure 3-21 Traffic protection triggered by a fault on the link between the
CSG and AGG on the inter-AS seamless MPLS network
● A fault occurs on the link between an AGG ASBR and a core ASBR.
On the inter-AS seamless MPLS network shown in Figure 3-25, BFD for
interface is configured on AGG ASBR1 and core ASBR1. If the BFD module
detects a fault on the link between AGG ASBR1 and core ASBR1, the BFD
module triggers BGP Auto FRR. BGP auto FRR switches both upstream and
downstream traffic from the primary path to backup paths.
Figure 3-27 Traffic protection from a link fault in a core area on the inter-AS
seamless MPLS network
Port-based Mode
In port-based mode, an interface is used to access a user service. Specifically, the
physical interface connected to a user network is directly bound to a common EVI
(neither an EVI in BD mode nor an EVI in VPWS mode) and has no sub-interfaces
created. This service mode is used only to carry Layer 2 services.
VLAN-based Mode
On the network shown in Figure 3-30, in VLAN-based mode, the physical
interfaces connected to user networks each have different sub-interfaces created.
Each sub-interface is associated with a unique VLAN and added to a specific
bridge domain (BD). Each BD is bound to a specific EVI. In this service mode, the
sub-interface, VLAN, BD, and EVI are exclusive for a user to access a network, and
a separate MAC forwarding table is used on the forwarding plane for each user.
Therefore, this mode effectively ensures service isolation. However, an EVI is
required per user, consuming numerous EVI resources. This service mode is used to
carry Layer 2 or Layer 3 services.
VLAN Bundle
On the network shown in Figure 3-31, in VLAN bundle mode, an EVI connects to
multiple users, who are divided by VLAN, and the EVI is bound to a BD. In this
service mode, the users connected to the same EVI share a MAC forwarding table,
requiring each user on the network to have a unique MAC address. This service
mode is used to carry Layer 2 or Layer 3 services.
In a VLAN bundle scenario, only termination EVC sub-interfaces support both Layer 2 and
Layer 3 interfaces. Non-termination EVC sub-interfaces support only Layer 2 services.
VLAN-Aware Bundle
On the network shown in Figure 3-32, in VLAN-aware bundle mode, an EVI
connects to multiple users, who are divided by VLAN. Additionally, the EVI can be
bound to multiple BDs, in which case, the EVI must have different BD tags
configured. When EVPN peers send routes to each other, a BD tag is encapsulated
into the Ethernet Tag ID field of an Ethernet auto-discovery route, MAC/IP
advertisement route, and inclusive multicast route. In this service mode, users
connected to the same EVI use separate forwarding entries. During traffic
forwarding, the system uses the BD tag carried in user packets to locate the
corresponding MAC forwarding table and searches the table for a forwarding entry
based on a MAC address.
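The BD-tag-based lookup described above can be sketched as a two-level table. This Python sketch is illustrative; the table structure and names are assumptions.

```python
# Illustrative sketch of VLAN-aware bundle forwarding: one EVI bound to
# several BDs, with a separate MAC forwarding table per BD tag. The BD
# tag in the packet selects the table; the destination MAC selects the
# entry, so the same MAC can appear independently in different BDs.

evi_mac_tables = {}  # BD tag -> {MAC address: outbound interface}

def add_entry(bd_tag, mac, interface):
    evi_mac_tables.setdefault(bd_tag, {})[mac] = interface

def lookup(bd_tag, mac):
    """Return the outbound interface, or None if no entry exists."""
    return evi_mac_tables.get(bd_tag, {}).get(mac)

add_entry(10, "00e0-fc00-0001", "GE1/0/0.1")
add_entry(20, "00e0-fc00-0001", "GE1/0/0.2")  # same MAC, different BD
```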
Unlike the other service modes, in VLAN-aware bundle mode, load balancing,
designated forwarder (DF), host migration, and route re-origination are
implemented based on a BD:
● Load balancing: In VLAN-aware bundle mode, load balancing can be
implemented only if a MAC/IP advertisement route and Ethernet auto-
discovery route have the same Ethernet segment identifier (ESI) and the same
BD tag. If the BD tags are inconsistent, the routes belong to different BDs,
preventing load balancing from being implemented.
● DF election:
– For interface-based DF election, the system chooses the first interface to
go Up in a BD for DF election.
3.1.4 EVPN-VXLAN
Introduction
Ethernet virtual private network (EVPN) is a VPN technology used for Layer 2
internetworking. EVPN is similar to BGP/MPLS IP VPN. EVPN defines a new type of
BGP network layer reachability information (NLRI), called the EVPN NLRI. The
EVPN NLRI defines new BGP EVPN routes to implement MAC address learning and
advertisement between Layer 2 networks at different sites.
VXLAN does not provide a control plane, and VTEP discovery and host information
(IP and MAC addresses, VNIs, and gateway VTEP IP address) learning are
implemented by traffic flooding on the data plane, resulting in high traffic
volumes on DC networks. To address this problem, VXLAN uses EVPN as the
control plane. EVPN allows VTEPs to exchange BGP EVPN routes to implement
automatic VTEP discovery and host information advertisement, preventing
unnecessary traffic flooding.
In summary, EVPN introduces several new types of BGP EVPN routes through BGP
extension to advertise VTEP addresses and host information. In this way, EVPN
applied to VXLAN networks enables VTEP discovery and host information learning
on the control plane instead of on the data plane.
– Ethernet Segment Identifier: a unique ID defining the connection between the
local and remote devices.
An ARP route carries host MAC and IP addresses and an L2VNI. An IRB route carries
host MAC and IP addresses, an L2VNI, and an L3VNI. Therefore, IRB routes carry ARP
routes and can be used to advertise IP routes as well as ARP entries.
● Host IPv6 route advertisement
In a distributed gateway scenario, to implement Layer 3 communication
between hosts on different subnets, the VTEPs (functioning as Layer 3
gateways) must learn host IPv6 routes from each other. To achieve this, VTEPs
functioning as BGP EVPN peers exchange MAC/IP routes to advertise host
IPv6 routes to each other. The IP Address Length and IP Address fields
carried in the MAC/IP routes indicate the destination addresses of host IPv6
routes, and the MPLS Label2 field must carry an L3VNI. MAC/IP routes in this
case are also called IRBv6 routes.
An ND route carries the following valid information: host MAC address, host IPv6
address, and L2VNI. An IRBv6 route carries the following valid information: host MAC
address, host IPv6 address, L2VNI, and L3VNI. It can be seen that an IRBv6 route
includes information about an ND route and therefore can be used to advertise both a
host IPv6 route and host ND entry.
– Ethernet Tag ID: VLAN ID. The value is all 0s in this type of route.
– Flags: indicates whether leaf node information is required for the tunnel.
This field is inapplicable in VXLAN scenarios.
Inclusive multicast routes are used on the VXLAN control plane for automatic
VTEP discovery and dynamic VXLAN tunnel establishment. VTEPs that function as
BGP EVPN peers transmit L2VNIs and VTEPs' IP addresses through inclusive
multicast routes. The Originating Router's IP Address field identifies the local
VTEP's IP address; the MPLS Label field identifies an L2VNI. If the remote VTEP's
IP address is reachable at Layer 3, a VXLAN tunnel to the remote VTEP is
established. At the same time, an ingress replication list based on VNI is created
for subsequent BUM packet forwarding, and the remote VTEP's IP address will be
added to this list.
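The route processing described above can be sketched as follows. Each inclusive multicast route contributes one remote VTEP to a per-VNI ingress replication list; this Python sketch is illustrative, and the names are assumptions.

```python
# Illustrative sketch of building per-VNI ingress replication lists from
# inclusive multicast routes. The Originating Router's IP Address field
# supplies the remote VTEP IP; the MPLS Label field supplies the L2VNI.

replication_lists = {}  # L2VNI -> set of remote VTEP IPs

def process_inclusive_multicast_route(l2vni, remote_vtep_ip):
    """Record the remote VTEP for later BUM replication in this VNI."""
    replication_lists.setdefault(l2vni, set()).add(remote_vtep_ip)

def bum_targets(l2vni):
    """VTEPs that receive a copy of each BUM packet in this VNI."""
    return sorted(replication_lists.get(l2vni, set()))
```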
– Ethernet Segment Identifier: a unique ID defining the connection between the
local and remote devices.
Overview
Ethernet virtual private network (EVPN) virtual private wire service (VPWS)
provides a P2P L2VPN service solution based on the EVPN service architecture.
This solution simplifies the EVPN technology by using MPLS tunnels over a
backbone network to provide Layer 2 packet forwarding between access circuits
(ACs) without needing to search MAC address entries.
As shown in Figure 3-36, the basic architecture of EVPN VPWS consists of the
following parts:
● AC: is an independent link or circuit that connects a CE to a PE. An AC
interface can be a physical or logical interface. AC attributes include the
encapsulation type, maximum transmission unit (MTU), and interface
parameters of a specified link type.
● EVPL instance: An EVPL instance maps to an AC. Each EVPL instance has a
service ID. The EVPL instance of the local PE maps to that of the peer PE. PEs
exchange EVPN routes carrying a service ID to construct forwarding entries
that are used to forward or receive service traffic from different ESs, achieving
point-to-point interworking.
● EVPN VPWS instance: An EVPN VPWS instance is deployed on an edge PE and
contains services that have the same access-side or network-side attributes.
Routes are transmitted based on the RD and RT configured in each EVPN
VPWS instance in a BGP EVPN address family.
● Tunnel: indicates a network-side MPLS tunnel or SR tunnel.
Compared with the traditional L2VPN VPWS solution, the EVPN VPWS solution
simplifies the control and data models and uses BGP as the control plane where
the BGP route selection and the BGP next hop recursion are used to choose traffic
paths over backbone networks. This eliminates the need to specify PWs.
▪ Ethernet Tag ID: indicates the local service ID of the EVPL instance
on the local PE.
▪ MPLS Label: For a non-SRv6 tunnel, this field is the EVPL label
assigned based on each EVI Ethernet Auto-Discovery route. For an
SRv6 tunnel, this field is the SRv6 SID of this tunnel.
In addition to NLRI, an EVI Ethernet Auto-Discovery route carries the
Layer 2 extended community attribute that includes the following control
fields:
2. PE1 and PE2 each send EVI Ethernet AD routes to the peer device. The EVI
Ethernet AD routes carry the RD, RT, next-hop information, local service ID,
and EVPL label or SRv6 SID.
3. PE1 and PE2 each receive EVI Ethernet AD routes from the peer device and
match the RTs of the corresponding EVPN VPWS instance. PE1 and PE2 then
select an MPLS or SRv4 tunnel to perform traffic recursion based on the next-
hop information or select an SRv6 tunnel to perform traffic recursion based
on SRv6 SIDs. If the service ID in the received routes is the same as the
remote service ID configured for the local EVPL instance, the forwarding
entries indicating the association between the MPLS or SRv4/v6 tunnel and
local EVPL instance are generated.
destined for PE1 are the master entries, and the entries destined for PE2 are
the backup entries.
7. PE1 and PE2 each receive EVI Ethernet AD routes from the peer device and
match the RTs of the corresponding EVPN VPWS instance. PE1 and PE2 then
select an MPLS or SRv4 tunnel to perform traffic recursion based on the next-
hop information or select an SRv6 tunnel to perform traffic recursion based
on SRv6 SIDs. If the service ID in the received routes is the same as the
remote service ID configured for the local EVPL instance, the bypass entries
indicating the association between the MPLS or SRv4/v6 tunnel and local
EVPL instance are generated.
By default, the PE with a smaller source IP address is elected as the master DF.
This, however, causes all service traffic to travel through the same PE, which may
lead to unbalanced network load. To address this problem, enable the service ID-
based DF election. Taking Figure 3-41 as an example, the service ID-based DF
election process is as follows:
1. PE1 and PE2 send each other the ES routes that carry the RD, RT, ESI, and
source IP address.
2. Upon receipt of ES routes, PE1 and PE2 construct PE lists based on different
ESIs. The PEs in a PE list are ordered by the source IP address in ascending
order, and the system assigns an index starting at 0 to each PE in ascending
order.
3. Each ES corresponds to a local service ID. The system calculates the DF
election result of each ES based on the formula "service ID mod N", where N
indicates the number of PEs. As shown in Figure 3-41, the service ID
corresponding to ESx is 100 and the number of PEs (N) is 2, so the result for
ESx is 100 mod 2 = 0. The system searches the PE list of ESx for the PE with
this index and finds that the DF election result of ESx is PE1. Similarly, the
DF election result of ESy is PE2. This allows traffic from different ESs to be
transmitted over different PEs.
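The "service ID mod N" rule above can be sketched as follows. This Python sketch is illustrative; the function name is an assumption.

```python
# Illustrative sketch of service ID-based DF election. PEs are ordered by
# source IP and indexed from 0; the DF for an ES is the PE whose index
# equals (service ID mod N). Different service IDs can therefore pin
# different ESs to different PEs, balancing the load.

def elect_df_by_service_id(pe_ips, service_id):
    ordered = sorted(pe_ips)  # every PE derives the same ordering
    return ordered[service_id % len(ordered)]
```

With two PEs, service ID 100 selects the PE at index 0 and service ID 101 selects the PE at index 1.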
The DF election result determines the P control field in EVI Ethernet AD routes.
remote service ID. After the configuration, the local PE generates the
forwarding entries indicating the association between the AC interface and
EVPL instance. PE1 and PE2 are configured to work in all-active mode, and the
access-side interfaces of PE1 and PE2 are configured with the same ESI.
2. PE1 and PE2 send ES routes that carry the RD, RT, ESI, and source IP address.
Upon receipt of ES routes, PE1 and PE2 do not trigger DF election. Both PE1
and PE2 are in the master DF state.
3. PE1 and PE2 send PE3 the ES Ethernet AD routes that carry the RD, RT, next-
hop information, and all-active mode information.
4. The PEs send each other the EVI Ethernet AD routes that carry the RD, RT,
next-hop information, local service ID, EVPL label or SRv6 SID, and master/
backup role.
5. Upon receipt of EVI Ethernet AD routes from PE3, PE1 and PE2 match RTs of
the corresponding EVPN VPWS instance and select an MPLS or SRv4 tunnel to
perform traffic recursion based on the next-hop information or select an SRv6
tunnel to perform traffic recursion based on SRv6 SIDs. If the service ID in the
received routes is the same as the remote service ID configured for the local
EVPL instance, the forwarding entries indicating the association between the
MPLS or SRv4/v6 tunnel and local EVPL instance are generated.
6. Upon receipt of EVI Ethernet AD routes from PE1 and PE2, PE3 matches RTs
of the corresponding EVPN VPWS instance and selects an MPLS or SRv4 tunnel
to perform traffic recursion based on the next-hop information or select an
SRv6 tunnel to perform traffic recursion based on SRv6 SIDs. If the service ID
in the received routes is the same as the remote service ID configured for the
local EVPL instance, load balancing entries of the MPLS or SRv4/v6 tunnel and
local EVPL instance are generated.
7. PE1 and PE2 each receive EVI Ethernet AD routes from the peer device and
match the RTs of the corresponding EVPN VPWS instance. PE1 and PE2 then
select an MPLS or SRv4 tunnel to perform traffic recursion based on the next-
hop information or select an SRv6 tunnel to perform traffic recursion based
on SRv6 SIDs. If the service ID in the received routes is the same as the
remote service ID configured for the local EVPL instance, the bypass entries
indicating the association between the MPLS or SRv4/v6 tunnel and local
EVPL instance are generated.
The data packets sent from AC-side interfaces are forwarded to the peer PE over
the corresponding MPLS tunnel based on the forwarding entries indicating the
association between tunnels and EVPL instances. Upon receipt of packets, the peer
PE searches for the association entries based on the label encapsulated in the
packets and then forwards the packets to the corresponding AC interface based on
the association entries.
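The service-ID matching described above can be sketched as follows. This is a simplified illustration only, not device behavior; the names `EvplInstance` and `EviAdRoute` are hypothetical.

```python
# Hypothetical sketch of EVPN VPWS service-ID matching: a received EVI
# Ethernet AD route is associated with a local EVPL instance only when
# the service ID it carries equals the remote service ID configured
# for that instance.

from dataclasses import dataclass

@dataclass
class EvplInstance:
    local_service_id: int
    remote_service_id: int

@dataclass
class EviAdRoute:
    service_id: int   # local service ID advertised by the remote PE
    next_hop: str

def match_route(instance: EvplInstance, route: EviAdRoute):
    """Return a forwarding association (tunnel next hop, local service ID)
    if the route's service ID matches the configured remote service ID."""
    if route.service_id == instance.remote_service_id:
        return (route.next_hop, instance.local_service_id)
    return None

evpl = EvplInstance(local_service_id=100, remote_service_id=200)
print(match_route(evpl, EviAdRoute(service_id=200, next_hop="PE2")))  # association formed
print(match_route(evpl, EviAdRoute(service_id=300, next_hop="PE2")))  # no match -> None
```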
3.1.6 PBB-EVPN
PBB-EVPN Networking
PBB-EVPN is an L2VPN technology implemented based on MPLS and Ethernet
technologies. PBB-EVPN uses BGP to exchange MAC address information between
PEs on the control plane and controls the exchange of data packets among
different sites across the MPLS network.
As shown in Figure 3-43, a PBB-EVPN has an architecture similar to that of an
EVPN.
Compared with EVPN, PBB-EVPN introduces the following concepts:
● PBB: a technique defined in IEEE 802.1ah. PBB prepends B-MAC addresses to
the C-MAC addresses in a packet to completely separate the user network from
the carrier network. This implementation enhances network stability and
eases the pressure on the capacity of PEs' MAC forwarding tables.
● I-EVPN: accesses the user network by being bound to a PE interface
connecting to a CE. After an I-EVPN instance receives a data packet from the
user network, the I-EVPN instance encapsulates a PBB header into the packet.
● B-EVPN: accesses the backbone network. A B-EVPN instance manages EVPN
routes received from other PEs.
● I-SID: uniquely identifies a broadcast domain. One I-EVPN instance
corresponds to one I-SID. If two PEs share the same I-SID, the two PEs belong
to the same BUM group.
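The PBB encapsulation performed by an I-EVPN instance, as described above, can be sketched as follows. This is an illustrative data model, not a real frame encoder; the function name `pbb_encapsulate` is hypothetical.

```python
# Hypothetical sketch of PBB (IEEE 802.1ah) encapsulation: the I-EVPN
# instance prepends a backbone header (B-MAC addresses plus I-SID) to
# the customer frame, so core devices never see C-MAC addresses.

def pbb_encapsulate(frame: dict, b_smac: str, b_dmac: str, i_sid: int) -> dict:
    """Wrap a customer frame (with C-MAC addresses) in a PBB header."""
    return {
        "b_dmac": b_dmac,   # backbone destination MAC
        "b_smac": b_smac,   # backbone source MAC
        "i_sid": i_sid,     # identifies the broadcast domain
        "payload": frame,   # original frame, C-MACs untouched
    }

customer_frame = {"c_dmac": "00-aa-00-00-00-01", "c_smac": "00-bb-00-00-00-02"}
pbb_frame = pbb_encapsulate(customer_frame, "b-mac-pe1", "b-mac-pe2", i_sid=5000)
# Core devices forward on B-MACs only; C-MACs stay inside the payload.
print(pbb_frame["i_sid"])                      # 5000
print(pbb_frame["payload"] is customer_frame)  # True
```

This is why PBB eases the pressure on PEs' MAC forwarding tables: the core only needs to learn B-MAC addresses, not every customer C-MAC.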
PBB-EVPN Routes
On a PBB-EVPN, PEs exchange the following types of routes:
● MAC advertisement route: carries B-EVPN instance RD, B-MAC address, and
VPN label information on the local PE. Figure 3-44 shows the prefix format of
a MAC advertisement route packet. A PE uses MAC advertisement routes to
advertise B-MAC address reachability information to other PEs. When network
topology changes due to a CE node failure or CE-PE link failure, the
corresponding PE sends MAC advertisement routes to instruct other PEs to
refresh C-MAC addresses corresponding to the specified B-MAC address,
thereby achieving fast convergence.
Other Concepts
● Fast convergence
On the network shown in Figure 3-47, if the link between CE1 and PE1 fails,
PE1 will send a MAC advertisement route that carries the MAC mobility
extended community attribute to PE3, notifying PE3 that C-MAC addresses at
Site1 are unreachable. Upon receipt of the route, PE3 sends traffic to Site1
only through PE2, implementing fast convergence.
● DF election
On the network shown in Figure 3-48, CE1 is dual-homed to PE1 and PE2,
and CE2 sends BUM traffic to PE1 and PE2. In this scenario, CE1 receives
duplicate copies of the traffic from PE1 and PE2, wasting network resources. To
solve this problem, EVPN elects one PE as the DF to forward BUM traffic. If
PE1 is elected, it becomes the primary DF, with PE2 functioning as the backup
DF. The primary DF forwards BUM traffic from CE2 to CE1.
If a PE interface connecting to a CE goes Down, the PE functions as a backup
DF. If a PE interface connecting to a CE goes Up, the PE and other PEs with Up
interfaces elect a primary DF using the following procedure:
a. The PEs establish EVPN BGP peer relationships with each other and then
exchange Ethernet segment routes.
b. Upon receipt of the Ethernet segment routes, each PE generates a
multi-homing PE list based on the ESIs carried in Ethernet segment
routes. Each multi-homing PE list contains information about all PEs
connecting to the same CE.
c. Each PE then sequences the PEs in each multi-homing PE list based on
the source IP addresses carried in Ethernet segment routes. The PEs are
numbered from 0.
d. The primary DF is elected based on I-SIDs. Specifically, PBB-EVPN uses
the formula of "I-SID modulo Number of PEs in the PE list corresponding
to the I-SID" to calculate a number and then elects the PE with the same
number as the calculated one as the primary DF.
● Split horizon
On the network shown in Figure 3-49, CE1 is dual-homed to PE1 and PE2. If
PE1 and PE2 have established an EVPN BGP peer relationship with each other,
after PE1 receives BUM traffic from CE1, it forwards the BUM traffic to PE2. If
PE2 forwards BUM traffic to CE1, a loop will occur. To prevent this problem,
EVPN uses split horizon. After PE1 forwards the BUM traffic to PE2, PE2
checks the B-SMAC address carried in the traffic. If the B-SMAC address
equals the B-MAC address configured on PE2, PE2 drops the traffic,
preventing a routing loop.
● Redundancy mode
If a CE is multi-homed to several PEs, a redundancy mode can be configured
to specify the redundancy mode of PEs connecting to the same CE. The
redundancy mode determines whether load balancing is implemented for
unicast traffic in CE multi-homing scenarios. On the network shown in Figure
3-50, if PE1 and PE2 are both configured to work in All-Active mode, after
PE1 and PE2 send MAC advertisement routes carrying the same B-MAC
address to PE3, PE3 sends unicast traffic destined for CE1 to both PE1 and PE2
in load balancing mode.
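The DF election formula described under "DF election" above (I-SID modulo the number of PEs in the list) can be sketched in a few lines. This is a simplified illustration, not device code; the function name `elect_df` is hypothetical.

```python
# Sketch of PBB-EVPN DF election: PEs sharing an ESI are ordered by
# source IP address and numbered from 0; the primary DF for an I-SID is
# the PE whose number equals (I-SID mod number of PEs in the list).

import ipaddress

def elect_df(pe_source_ips, i_sid: int) -> str:
    """Return the source IP of the primary DF for the given I-SID."""
    # Sequence the multi-homing PE list by source IP, numbering from 0.
    ordered = sorted(pe_source_ips, key=lambda ip: ipaddress.ip_address(ip))
    return ordered[i_sid % len(ordered)]

pes = ["10.1.1.2", "10.1.1.1", "10.1.1.3"]
print(elect_df(pes, i_sid=5000))  # 5000 % 3 == 2 -> "10.1.1.3" after sorting
```

Because every PE runs the same deterministic computation over the same inputs, all PEs in the list agree on the primary DF without further negotiation.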
3. PE1 and PE2 exchange MAC advertisement routes that carry B-MAC
addresses, next hops, and EVPN instance extended community attributes
(such as RTs).
4. PE1 and PE2 construct B-EVPN instance forwarding entries based on the RTs
carried in received MAC advertisement routes.
Use the network shown in Figure 3-50 as an example. If PE1 and PE2 both work in Single-
Active mode, the bidirectional BUM traffic between CE2 and CE1 will be dropped by the
backup DF. If PE1 and PE2 both work in All-Active mode, only the BUM traffic from CE2 to
CE1 will be dropped by the backup DF.
EVPN BGP peers and the SPE will learn the B-MAC addresses of the entire
network.
5. Change the existing VSI on each UPE to be an I-EVPN instance, bind the I-
EVPN instance to the previously configured B-EVPN instance, and bind the AC
interface on each UPE to the I-EVPN instance on that UPE. After all
configurations are complete, the network becomes a PBB-EVPN.
Figure 3-55 Packet format of the extended community attribute used by EVPN E-
Tree
Take the network shown in Figure 3-56 as an example. Known unicast traffic is
isolated through the following process:
1. PE1 and PE2 transmit AC-side MAC addresses to each other through MAC
routes. Take the MAC address (MAC1) of the AC interface on CE2 as an
example. Because the AC interface has the leaf attribute, a MAC route
carrying the MAC1 address also carries the extended community attribute of
EVPN E-Tree. All bits in the Leaf Label field of the attribute are set to 0, and
the L bit in the Flags field is set to 1. PE1 then sends this MAC route to PE2.
2. Upon receipt, PE2 checks the L bit in the Flags field. Because this bit is set to
1, PE2 marks the entry corresponding to MAC1 in the local MAC routing table.
3. When PE2 receives traffic destined for CE2 from its own leaf AC interface, PE2
determines that the traffic needs to be sent to the remote leaf AC interface
based on the flag in the local MAC routing table and discards the traffic. In
this way, known unicast traffic is isolated between leaf AC interfaces.
In the preceding example, BUM traffic is isolated through the following process:
1. After EVPN E-Tree is configured on the network, PE1 and PE2 send a special
Ethernet A-D per ES route to each other. A regular Ethernet A-D per ES route
carries the ESI attribute. However, the ESI field in the Ethernet A-D per ES
route used by EVPN E-Tree is set to all zeros, and the route carries the
extended community attribute of EVPN E-Tree. The Leaf Label field of this
attribute uses a label value, and the L bit in the Flags field is set to 0.
2. After PE1 receives the Ethernet A-D per ES route, it determines that the route
is used to transmit the leaf label because the ESI field value is all zeros. PE1
then saves the label.
3. When PE1 needs to send BUM traffic from its leaf AC interface to PE2, PE1
encapsulates the saved leaf label into the BUM packets and then sends them
to PE2.
4. Upon receipt, PE2 finds the locally allocated leaf label in the BUM packets.
Therefore, PE2 does not send the traffic to CE4 and CE5. Instead, PE2 only
sends the traffic to CE3, implementing BUM traffic isolation between leaf AC
interfaces.
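The two EVPN E-Tree isolation checks described above can be sketched together. This is an illustrative simplification, not device behavior; the function names are hypothetical, and the CE names follow the example in the text.

```python
# Hypothetical sketch of EVPN E-Tree isolation: known unicast destined
# for a leaf-marked MAC that arrived from a local leaf AC is dropped
# (the L bit was set on the MAC route), and BUM traffic carrying the
# receiver's own leaf label is withheld from all leaf AC interfaces.

def forward_unicast(dst_is_remote_leaf: bool, ingress_is_leaf_ac: bool) -> bool:
    """Leaf-to-leaf known unicast is discarded; everything else passes."""
    return not (dst_is_remote_leaf and ingress_is_leaf_ac)

def bum_targets(packet_leaf_label, local_leaf_label, leaf_acs, root_acs):
    """BUM carrying the locally allocated leaf label skips all leaf ACs."""
    if packet_leaf_label == local_leaf_label:
        return list(root_acs)            # e.g. only CE3 in the example
    return list(root_acs) + list(leaf_acs)

print(forward_unicast(dst_is_remote_leaf=True, ingress_is_leaf_ac=True))  # False: dropped
print(bum_targets(300, 300, leaf_acs=["CE4", "CE5"], root_acs=["CE3"]))   # ['CE3']
```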
EVPN E-Tree supports the following types of AC interfaces: main interfaces bound to
common EVPN instances, EVC Layer 2 sub-interfaces associated with BDs, and VLAN sub-
interfaces.
In a CE dual-homing scenario, ensure that the same root or leaf attribute is set for the
same VLAN sub-interface in the same broadcast domain on two PEs. If the leaf attribute is
set on both PEs, the Leaf label can replace the ESI label to implement split horizon.
Different root or leaf attributes can be set for a PE's interfaces or sub-interfaces that
connect to different CEs.
On the network shown in Figure 3-57, EVPN runs between PE1 and PE2. CE1 and
CE2 access PE1 and PE2 respectively in one of the following ways: VLAN, QinQ,
static or dynamic PW, or static VXLAN. PE1 and PE2 can communicate with each
other both through network-side and access-side links, which induces a BUM
traffic loop:
1. After PE1 receives BUM traffic from CE1, PE1 first replicates it, and then
forwards it to both the network-side and access-side links (traffic 1 in Figure
3-57).
2. PE2 forwards the BUM traffic received from PE1 through the network-side link
to the access-side link (traffic 2 in Figure 3-57). Equally, PE2 forwards the
BUM traffic received from the access side to the network side (traffic 3 in
Figure 3-57).
3. PE1 forwards the BUM traffic received from PE2 through the network-side link
to the access-side link (traffic 4 in Figure 3-57). Equally, PE1 forwards the
BUM traffic received from the access side to the network side (traffic 5 in
Figure 3-57).
4. As steps 2 and 3 are repeated, BUM traffic is continuously transmitted
between PE1 and PE2.
On the network shown in Figure 3-58, in addition to a BUM traffic loop, route
flapping also occurs:
1. After PE1 receives BUM traffic from CE1, PE1 learns CE1's MAC address (M1)
from the source MAC address of the traffic. PE1 sends a MAC route with a
destination address of M1 to PE2 by means of EVPN.
2. Upon receipt, PE2 matches the RT of the MAC route, imports the MAC route
into a matching EVPN instance, generates a MAC entry, and iterates the MAC
route to the network-side VXLAN or MPLS tunnel to PE1.
3. PE2 can also receive BUM traffic from PE1 through the direct access-side link
between them. Upon receipt, PE2 also generates a MAC route to M1 based on
the source MAC address of the traffic. In this case, PE2 considers M1 to have
moved to its own access network. PE2 preferentially selects the MAC route
learned from the local access side. PE2 therefore sends the MAC route
destined for M1 to PE1. This route carries the MAC Mobility extended
community attribute. The mobility sequence number is Seq0+1.
4. Upon receipt, PE1 matches the RT of the MAC route, and imports the MAC
route into a matching EVPN instance. PE1 preferentially selects the MAC route
received from PE2 because this route has a larger mobility sequence number.
PE1 then generates a MAC entry and iterates the MAC route to the network-
side VXLAN or MPLS tunnel to PE2. PE1 then sends a MAC Withdraw message
to PE2.
5. After PE1 receives BUM traffic again from the access-side link, PE1 generates
another MAC route to M1 and considers M1 to have moved to its own access
network. PE1 preferentially selects the local MAC route to M1 and sends it to
PE2. This route carries the MAC Mobility extended community attribute. The
mobility sequence number is Seq0+2.
6. Upon receipt, PE2 matches the RT of the MAC route and imports the MAC
route into a matching EVPN instance. PE2 preferentially selects the MAC route
received from PE1 because this route has a larger mobility sequence number.
PE2 then generates a MAC entry and iterates the MAC route to the network-
side VXLAN or MPLS tunnel to PE1. PE2 then sends a MAC Withdraw message
to PE1.
7. After PE2 receives BUM traffic again from PE1 through the direct access-side
link between them, PE2 generates another MAC route to M1 and considers
M1 to have moved to its own access network. PE2 preferentially selects the
local MAC route and sends the MAC route destined for M1 to PE1. This route
carries the MAC Mobility extended community attribute. The mobility
sequence number is Seq0+3.
8. As steps 3 to 7 are repeated, the mobility sequence number of the MAC route
is incremented by 1 continuously, causing route flapping on the network.
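The sequence-number comparison that drives the flapping described in the steps above can be sketched as follows. This is a minimal illustration of the MAC Mobility preference rule, not device code; the function name `preferred` is hypothetical.

```python
# Sketch of the MAC Mobility comparison behind the flapping above: the
# MAC route with the larger mobility sequence number is preferred, so
# each spurious access-side relearn bumps the number by 1 and flips the
# preference between the two PEs.

def preferred(local_seq: int, received_seq: int) -> str:
    """Return which MAC route wins: the one with the larger sequence number."""
    return "received" if received_seq > local_seq else "local"

seq = 0
for _ in range(3):   # each spurious access-side relearn (steps 3, 5, 7)
    seq += 1         # increments the mobility sequence number
print(seq)                                     # 3 -> Seq0+3
print(preferred(local_seq=1, received_seq=2))  # 'received'
```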
To prevent traffic loops and route flapping, the system starts the process of MAC
duplication suppression. The system checks the number of times a MAC entry
flaps within a detection period. If the number of MAC flaps exceeds the upper
threshold, the system considers MAC route flapping to be occurring on the
network and consequently suppresses the flapping MAC routes. The suppressed
MAC routes cannot be sent to a remote PE through a BGP EVPN peer relationship.
In addition to suppressing MAC route flapping, you can also configure black-hole
MAC routing and AC interface blocking:
● After black-hole MAC routing has been configured, the system sets the
suppressed MAC routes to black-hole routes. If a PE receives traffic with the
same source or destination MAC address as the MAC address of a black-hole
MAC route, the PE discards the traffic.
● If AC interface blocking is also configured, that is, if the traffic comes from a
local AC interface and the source MAC address of the traffic is the same as
the MAC address of a black-hole MAC route, the AC interface is blocked. In
this way, a loop can be removed quickly. Only BD-EVPN instances support AC
interface blocking.
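The MAC duplication suppression and black-hole behavior described above can be sketched as follows. This is a simplified illustration under assumed parameters (a flap threshold and a detection period), not the device implementation; the class name `MacFlapDetector` is hypothetical.

```python
# Hypothetical sketch of MAC duplication suppression: count flaps of a
# MAC entry within a detection period and suppress the route once the
# count exceeds the upper threshold; with black-hole MAC routing
# configured, traffic whose source or destination MAC matches a
# suppressed entry is dropped.

class MacFlapDetector:
    def __init__(self, threshold: int, period_s: float):
        self.threshold = threshold
        self.period_s = period_s
        self.flaps = {}          # mac -> list of flap timestamps
        self.suppressed = set()  # MACs turned into black-hole routes

    def record_flap(self, mac: str, now: float) -> None:
        # Keep only flaps that fall inside the detection period.
        window = [t for t in self.flaps.get(mac, []) if now - t <= self.period_s]
        window.append(now)
        self.flaps[mac] = window
        if len(window) > self.threshold:
            self.suppressed.add(mac)   # stop advertising; black-hole it

    def should_drop(self, src_mac: str, dst_mac: str) -> bool:
        return src_mac in self.suppressed or dst_mac in self.suppressed

det = MacFlapDetector(threshold=3, period_s=180.0)
for i in range(4):                         # 4 flaps inside one period
    det.record_flap("00-11-22", now=float(i))
print("00-11-22" in det.suppressed)              # True: exceeded threshold
print(det.should_drop("00-11-22", "ff-ff-ff"))   # True: black-hole drop
```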
Implementation
After EVPN ORF is configured, each PE on the EVPN sends the import VPN target
(IRT) and original AS number of the local EVPN instance to the other PEs or BGP
EVPN RRs that function as BGP-EVPN peers. The information is sent through ORF
routes. Upon receipt, the peers construct export route policies based on these
routes so that the local PE only receives the expected routes, which reduces the
receiving pressure on the local PE.
Figure 3-59 shows the basic EVPN ORF network on which each device supports
EVPN ORF. PE1, PE2, and PE3 establish BGP-EVPN peer relationships with the RR,
and are also clients of the RR. An EVPN instance with a specific RT is configured
on each PE.
Before EVPN ORF is enabled, the RR advertises all the routes received from PE1's
EVPN instances to PE2 and PE3. However, PE2 only needs routes with an export
VPN target (ERT) of 1:1, whereas PE3 only needs routes with an ERT of 2:2. As a
result, PE2 and PE3 discard unwanted routes upon receipt, which wastes device
resources.
After EVPN ORF is enabled on all devices and BGP-EVPN peer relationships are
established between the PEs and RR in the BGP-VT address family view, the BGP-
EVPN peers negotiate the EVPN ORF capability. Each device sends the IRT of its
local EVPN instance to the BGP-EVPN peers in the form of ORF routes. Each device
then constructs an export route policy based on the received ORF routes. Upon
construction, PE1 only sends EVPN1's and EVPN4's routes to the RR. The RR then
only sends routes with an ERT of 1:1 to PE2 and those with an ERT of 2:2 to PE3.
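The export-policy construction described above can be sketched as follows. This is an illustrative simplification of EVPN ORF, not BGP code; the function name `build_export_policy` and the route dictionaries are hypothetical.

```python
# Hypothetical sketch of EVPN ORF: each peer advertises the IRTs of its
# local EVPN instances via ORF routes, and the sender builds an export
# policy that only releases routes whose ERT matches an IRT the peer
# announced.

def build_export_policy(peer_irts):
    """Export filter: keep a route only if its ERT is wanted by the peer."""
    wanted = set(peer_irts)
    return lambda route: route["ert"] in wanted

routes = [{"prefix": "EVPN1", "ert": "1:1"},
          {"prefix": "EVPN4", "ert": "2:2"},
          {"prefix": "EVPN3", "ert": "3:3"}]

# PE2 announced IRT 1:1 and PE3 announced IRT 2:2 (via the RR), so the
# sender never transmits the unwanted route with ERT 3:3 to either peer.
policy_for_pe2 = build_export_policy(["1:1"])
print([r["prefix"] for r in routes if policy_for_pe2(r)])  # ['EVPN1']
```

Filtering at the sender is what saves bandwidth and receiver CPU: unwanted routes are never advertised instead of being advertised and then discarded.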
The BGP-VT address family obtains the IRT configured on the local device
regardless of the type of the instance that the IRT comes from. If EVPN ORF is
enabled on a network that consists of devices that do not support EVPN ORF, the
EVPN service cannot run properly. However, the BGP-VT address family can resolve
this problem.
On the network shown in Figure 3-59, PE1, PE2, and PE3 establish BGP-EVPN peer
relationships with the RR. PE1, PE2, and PE3 are clients of the RR. Suppose that
PE1, PE2, and the RR all support EVPN ORF but that PE3 does not, as it is running
an early version. If EVPN ORF is enabled on the network and the BGP-VT peer
relationships are established, PE3 does not send ORF routes to the RR, which
means that PE1 does not receive the ORF routes carrying an IRT of 2:2 from the RR.
As a result, PE1 does not send EVPN4's routes to the RR, thereby compromising
the services between EVPN4 and EVPN2. Because the BGP-VT address family does
not differentiate the type of instance the IRT belongs to, you can configure an
L3VPN instance on PE3 and set both IRT and ERT to 2:2. This configuration allows
PE3 to advertise an ORF route with an IRT of 2:2 to the RR, which then advertises
this route to PE1. Upon receipt, PE1 modifies its export route policy so that it can
advertise EVPN4's routes to the other PEs.
In addition to configuring an L3VPN instance, you can also configure the RR to advertise
default ORF routes to PE1 and PE3 and delete the BGP-VT peer relationship between the RR
and PE3. After the configuration is complete, PE1, PE2, and PE3 advertise all routes to the
RR. The RR then advertises routes with ERTs of 1:1 and 2:2 to PE1, routes with an ERT of 1:1
to PE2, and all routes to PE3.
If both EVPN and L3VPN services are deployed on the network in Figure 3-59, the
preceding two methods cannot be used. If you use either of them, the L3VPN service
cannot run properly. On the network shown in Figure 3-60, only PE3 does not
support EVPN ORF. After EVPN ORF is enabled on the network, the EVPN service
cannot run properly. If an L3VPN instance is created, the new L3VPN instance
receives the other PEs' L3VPN routes from the RR, which compromises the L3VPN
service. To resolve this issue, you can disable the RR from filtering routes based on
the IRT for PE3, thereby ensuring that both EVPN and L3VPN services can run
properly.
Figure 3-60 An EVPN ORF network carrying both EVPN and L3VPN services
Benefits
● Bandwidth consumption is lowered (because the number of routes being
advertised is smaller).
● System resources such as CPU and memory are saved.
entries are generated on devices. Multicast data packets from a multicast source
are advertised only to the devices that need these packets, saving network
bandwidth resources.
For details about EVPN routes used by IGMP snooping over EVPN MPLS, see
Related Routes. For details about route advertisement and traffic forwarding, see
Route Advertisement and Traffic Forwarding.
Related Routes
EVPN routes used by IGMP snooping over EVPN MPLS include Selective Multicast
Ethernet Tag (SMET) routes and IGMP Join Synch routes.
● SMET route
SMET routes are used to transmit multicast group information between BGP
EVPN peers. A device that receives an SMET route can construct local (*, G) or
(S, G) entries based on the routing information. As shown in Figure 3-61, the
fields in the routing information are described as follows:
– Route Distinguisher: route distinguisher (RD) set in an EVPN instance.
– Ethernet Tag ID: This field is set to 0 when the VLAN-based or VLAN
bundle service mode is used to access a user network.
– Multicast Source Length: length of a multicast source address. This field is
set to 0 for any multicast source.
– Multicast Source Address: address of a multicast source. Packets do not
contain this field for any multicast source.
– Multicast Group Length: length of a multicast group address.
– Multicast Group Address: address of a multicast group.
– Originator Router Length: address length of the device that generated the
SMET route.
– Originator Router Address: address of the device that generated the
SMET route.
– Flags: This field contains eight bits. The first four most significant bits are
reserved, and the last three least significant bits are used to identify the
IGMP version. For example, if bit 5 is set to 1, the IGMP version of the
multicast entry carried in the route is IGMPv3. Only one of the last three
least significant bits can be set to 1. Bit 4 indicates the filtering mode of
group records in IGMPv3. The values 0 and 1 indicate Include and
Exclude, respectively.
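The Flags field layout described above can be sketched as a small parser. This is an illustrative decoding under the bit numbering used in the text (bits numbered 0 to 7 from the most significant bit), not device code; the function name `parse_smet_flags` is hypothetical.

```python
# Sketch of parsing the SMET route Flags field: bits 0-3 (the four most
# significant bits) are reserved, bits 5-7 select the IGMP version
# (only one may be set), and bit 4 gives the IGMPv3 filtering mode
# (0 = Include, 1 = Exclude).

def parse_smet_flags(flags: int):
    v3 = bool(flags & 0b0000_0100)   # bit 5 -> IGMPv3
    v2 = bool(flags & 0b0000_0010)   # bit 6 -> IGMPv2
    v1 = bool(flags & 0b0000_0001)   # bit 7 -> IGMPv1
    version = 3 if v3 else 2 if v2 else 1 if v1 else None
    mode = "Exclude" if flags & 0b0000_1000 else "Include"  # bit 4
    return version, mode

print(parse_smet_flags(0b0000_0100))  # (3, 'Include')
print(parse_smet_flags(0b0000_1100))  # (3, 'Exclude')
print(parse_smet_flags(0b0000_0010))  # (2, 'Include')
```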
Figure 3-63 shows single-homing access for IGMP snooping over EVPN MPLS.
Configure an EVPN instance on PE1, PE2, and PE3, and bind a BD to the EVPN
instance. Establish BGP EVPN peer relationships between the PEs, and deploy
EVPN IGMP proxy on each PE. Deploy PE1 as a sender PE, and deploy PE2 and PE3
as receiver PEs. Configure IGMP snooping and IGMP proxy on BD1 bound to the
EVPN instance on PE1, PE2, and PE3. Connect BD1 on PE1, PE2, and PE3 to CE1,
CE2, and CE3 through VLAN dot1q sub-interfaces, respectively. Configure PIM-SM
and IGMP on CE1's interface connected to PE1.
3. After receiving the corresponding IGMP Report messages, PE2 and PE3
establish (S, G) and (*, G) entries of IGMP snooping in BD1 and add the
interfaces connected to CE2 and CE3 as outbound interfaces, respectively.
4. PE2 sends a BGP EVPN SMET route to other PEs through BGP EVPN peer
relationships. The route carries (S, G) entries, and the Flags field in the route
is set to IGMPv3 and Include.
5. PE3 sends a BGP EVPN SMET route to other PEs through BGP EVPN peer
relationships. The route carries (*, G) entries, and the Flags field in the route is
set to IGMPv2.
6. After receiving the corresponding BGP EVPN SMET routes, PE1 establishes (S,
G) and (*, G) entries of IGMP snooping in BD1 and adds the mLDP tunnel
interfaces of the corresponding EVPN instances as outbound interfaces.
7. PE1 sends IGMPv3 (S, G) Report and IGMPv2 (*, G) Report messages to CE1.
CE1 establishes IGMP and PIM entries and forwards multicast traffic to PE1.
8. After receiving the multicast traffic, PE1 forwards the traffic to PE2 and PE3
through the mLDP tunnel interfaces based on the (S, G) and (*, G) entries in
BD1.
9. After receiving the multicast traffic, PE2 and PE3 forward the traffic to
Receiver A and Receiver B based on the (S, G) and (*, G) entries, respectively.
Figure 3-63 Single-homing access for IGMP snooping over EVPN MPLS
Dual-homing access for IGMP snooping over EVPN MPLS on the multicast
source side
Figure 3-64 shows dual-homing access for IGMP snooping over EVPN MPLS on
the multicast source side. Configure an EVPN instance on PE1, PE2, PE3, and PE4,
and bind a BD to the EVPN instance. Establish BGP EVPN peer relationships
between the PEs, and deploy EVPN IGMP proxy on each PE. Deploy PE1 and PE2
as sender PEs, and deploy PE3 and PE4 as receiver PEs. Connect BD1 on PE3 and
PE4 to CE3 and CE4 through VLAN dot1q sub-interfaces, respectively. Connect CE1
to PE1 and PE2 through Eth-Trunk interfaces, and configure PIM-SM and IGMP on
the interfaces. Bind the Eth-Trunk interfaces of CE1 to an E-Trunk on PE1 and PE2.
Configure static router interfaces, and set the same ESI. Configure the E-Trunk to
work in dual-active mode, and ensure that the Eth-Trunk interfaces on PE1 and
PE2 are both Up.
The process of IGMP snooping over EVPN MPLS (dual-homing access on the
multicast source side) is described as follows:
1. PE3 and PE4 periodically send IGMP Query messages to the access side in
BD1.
2. Receiver A and Receiver B send IGMP Report messages to CE3 and CE4,
respectively. For example, Receiver A sends an IGMPv3 (S, G) Report message,
and Receiver B sends an IGMPv2 (*, G) Report message.
3. After receiving the corresponding IGMP Report messages, PE3 and PE4
establish (S, G) and (*, G) entries of IGMP snooping in BD1 and add the
interfaces connected to CE3 and CE4 as outbound interfaces, respectively.
4. PE3 sends a BGP EVPN SMET route to other PEs through BGP EVPN peer
relationships. The route carries (S, G) entries, and the Flags field in the route
is set to IGMPv3 and Include.
5. PE4 sends a BGP EVPN SMET route to other PEs through BGP EVPN peer
relationships. The route carries (*, G) entries, and the Flags field in the route is
set to IGMPv2.
6. After receiving the corresponding BGP EVPN SMET routes, PE1 and PE2
establish (S, G) and (*, G) entries of IGMP snooping in BD1 and add the
mLDP tunnel interfaces of the corresponding EVPN instances as outbound
interfaces.
7. The Eth-Trunk interface of CE1 periodically sends IGMP Query messages to
BD1 of PE1 or PE2 based on hash rules. PE1 or PE2 periodically sends IGMP
Report messages to CE1.
8. After receiving an IGMP Report message, CE1 creates IGMP and PIM entries
and forwards multicast traffic to PE1 or PE2.
9. CE1 forwards the multicast traffic from the multicast source to BD1 of PE1 or
PE2 based on hash rules. PE1 or PE2 forwards the multicast traffic to PE3 and
PE4 through the mLDP tunnel interfaces based on the (*, G) and (S, G) entries
of BD1.
10. After receiving the multicast traffic, PE3 and PE4 forward the traffic to
Receiver A and Receiver B based on the (S, G) and (*, G) entries, respectively.
Figure 3-64 Dual-homing access for IGMP snooping over EVPN MPLS on the
multicast source side
Dual-homing access for IGMP snooping over EVPN MPLS on the access side
Figure 3-65 shows dual-homing access for IGMP snooping over EVPN MPLS on
the access side. Configure an EVPN instance on PE1, PE2, and PE3, and bind a BD
to the EVPN instance. Establish BGP EVPN peer relationships between the PEs, and
deploy EVPN IGMP proxy on each PE. Deploy PE1 as a sender PE, and deploy PE2
and PE3 as receiver PEs. Configure IGMP snooping and IGMP proxy on BD1 bound
to the EVPN instance on PE1, PE2, and PE3. Connect BD1 on PE1, PE2, and PE3 to
CE1, CE2, and CE3 through VLAN dot1q sub-interfaces, respectively. Configure
PIM-SM and IGMP on CE1's interface connected to PE1, and connect CE2 to PE2
and PE3 through Eth-Trunk interfaces. Bind the Eth-Trunk interfaces of CE2 to an
E-Trunk and configure the same ESI on PE2 and PE3. Configure the E-Trunk on PE2
and PE3 to work in single-active mode, select PE2 as the master device, and
ensure that the Eth-Trunk interface of PE2 is Up.
The process of IGMP snooping over EVPN MPLS (dual-homing access on the
access side) is described as follows:
1. PE2 periodically sends IGMP Query messages to the access side in BD1.
2. The receiver sends an IGMP Report message, for example, IGMPv2 (*, G)
Report message, to CE2.
3. After receiving an IGMP Report message, PE2 establishes (*, G) entries of
IGMP snooping, adds the Eth-Trunk interface to CE2 as the outbound
interface, and sends a BGP EVPN IGMP Join Synch route to other PEs. The
route carries the access-side ESI of PE2 and indicates the IGMP version
(IGMPv2) and the corresponding source filtering mode.
4. After receiving the IGMP Join Synch route, PE3 creates the corresponding (*,
G) entries of IGMP snooping in BD1. PE3 does not need to send a BGP EVPN
SMET route, because it is a non-DF. Additionally, PE3 does not add the Eth-
Trunk interface connected to CE2 as an outbound interface.
Figure 3-65 Dual-homing access for IGMP snooping over EVPN MPLS on the
access side
● Create EVPN instances on PEs. Configure the same RT values for the PEs to
allow EVPN route cross.
● Configure PE redundancy. If all PEs connecting to the same CE are configured
to work in All-Active mode, these PEs load-balance traffic destined for the CE.
EVPN L3VPN HVPN is classified as either EVPN L3VPN HoVPN or EVPN L3VPN H-VPN:
● EVPN L3VPN HoVPN: An SPE advertises only default routes or summarized
routes to UPEs. UPEs do not have specific routes to NPEs and can only send
service data to SPEs over default routes. As a result, route isolation is
implemented. An EVPN L3VPN HoVPN can use devices with relatively poor
route management capabilities as UPEs, reducing network deployment costs.
● EVPN L3VPN H-VPN: SPEs advertise specific routes to UPEs. UPEs function as
RR clients to receive the specific routes reflected by SPEs functioning as RRs.
This mechanism facilitates route management and traffic forwarding control.
As L3VPN HoVPN evolves towards EVPN L3VPN HoVPN, the following splicing
scenarios occur:
● Splicing between EVPN L3VPN HoVPN and common L3VPN: EVPN L3VPN
HoVPN is deployed between the UPEs and SPE, and L3VPN is deployed
between the SPE and NPE. The SPE advertises only default routes or
summarized routes to the UPEs. After receiving specific routes (EVPN routes)
from the UPEs, the SPE encapsulates these routes into VPNv4 routes and
advertises them to the NPE.
● Splicing between L3VPN HoVPN and BD EVPN L3VPN: L3VPN HoVPN is
deployed between the UPEs and SPE, and BD EVPN L3VPN is deployed
between the SPE and NPE. The SPE advertises only default routes or
summarized routes to the UPEs. After receiving specific routes (L3VPN routes)
from the UPEs, the SPE encapsulates these routes into EVPN routes and
advertises them to the NPE.
– Using RR: Configure the SPE as an RR so that the RR directly reflects the
received IP prefix route to the NPE, and change the next hop of the route
to the SPE. An EVPN L3VPN H-VPN supports only this mode.
– Using re-encapsulation: The SPE re-encapsulates the IP prefix route into a
new IP prefix route with the next hop being the SPE. Then the SPE
advertises the new route to the NPE through a BGP-EVPN peer
relationship.
4. After receiving the IP prefix route, the NPE imports the route into its VRF
table under the condition that the route's next hop is reachable.
5. The NPE advertises the IPv4 route to Device 1 using the IP protocol.
Figure 3-70 Route advertisement from Device 1 to CE1 on an EVPN L3VPN H-VPN
Currently, EVPN supports the 6VPE function. The processing mechanism of EVPN
6VPE is similar to that of EVPN L3VPN. On the network shown in Figure 3-72, PEs
can exchange host routes and IP prefix routes to transmit IPv6 route information
of different sites in a VPN. The fields contained in the host routes and IP prefix
routes are as follows:
● Fields in host routes:
– Route Distinguisher (RD): Specifies the RD value set for an EVPN
instance.
– Ethernet Segment Identifier (ESI): Uniquely identifies a connection
between a PE and a CE.
– Ethernet Tag ID: In VLAN-aware BD EVPN access scenarios, the value
is the BD tag. In other scenarios, the value is all 0s.
– MAC Address Length: Specifies the MAC address length in an ND entry.
– MAC Address: Specifies a MAC address in an ND entry.
– IP Address Length: Specifies the IPv6 address length in an ND entry.
– IP Address: Specifies an IPv6 address in an ND entry.
– MPLS Label1: Specifies the VPN label of EVPN.
– MPLS Label2: Specifies the label used for Layer 3 traffic forwarding.
● Fields in IP prefix routes:
– RD: Specifies the RD value set for an EVPN instance.
– ESI: Uniquely identifies a connection between a PE and a CE.
– Ethernet Tag ID: Currently, the value of this field contains all 0s.
– IP Address Length: Specifies the IPv6 address length in an ND entry.
– IP Address: Specifies an IPv6 address in an ND entry.
– GW IP Address: Specifies the default gateway address.
– MPLS Label: Specifies the VPN label of EVPN.
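The route fields listed above can be modeled as plain data structures for illustration. The field names follow the text; the class names `HostRoute` and `IpPrefixRoute` are hypothetical, and this is not a real route encoder.

```python
# Illustrative data model of the EVPN IPv6 route fields listed above.

from dataclasses import dataclass

@dataclass
class HostRoute:                 # host route built from an ND entry
    rd: str                      # RD set for the EVPN instance
    esi: str                     # identifies the PE-CE connection
    eth_tag_id: int              # BD tag in VLAN-aware BD scenarios, else 0
    mac_address: str             # MAC address from the ND entry
    ip_address: str              # IPv6 address from the ND entry
    mpls_label1: int             # EVPN VPN label
    mpls_label2: int             # label used for Layer 3 forwarding

@dataclass
class IpPrefixRoute:
    rd: str
    esi: str
    eth_tag_id: int = 0          # currently always all 0s
    ip_address: str = ""         # IPv6 prefix
    gw_ip_address: str = ""      # default gateway address
    mpls_label: int = 0          # EVPN VPN label

r = IpPrefixRoute(rd="100:1", esi="0", ip_address="2001:db8::/64")
print(r.eth_tag_id)  # 0
```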
In Figure 3-73, the network has been deployed with the IP RAN solution and a
large number of IPv4-related services. If new services need to be deployed, IPv6
addresses may be used in consideration of IPv4 address exhaustion. In this case,
deploy EVPN L3VPNv6 HVPN in the 6VPE model to carry IPv6 services. The
processing mechanism of EVPN 6VPE is similar to that of L3VPN HVPN.
Background
The current MAN is evolving toward EVPN. However, because there are a large
number of devices at the aggregation layer, the MAN cannot evolve to EVPN
all at once. To allow traditional L3VPN, VPWS, or VPLS to still be used at
the aggregation layer while the core layer evolves to EVPN first, splicing
between EVPN and the traditional network must be supported.
Figure 3-76 Single-homing scenario 1 for VLL accessing EVPN (per NID per
PW)
Additionally, VLL accessing EVPN allows multiple NIDs to share a PW. In this
scenario, multiple NIDs are aggregated to a switch, which then accesses a PW
on a UPE.
Figure 3-77 Single-homing scenario 2 for VLL accessing EVPN (multiple NIDs
per PW)
● Dual-homing scenario
A UPE is dual-homed to the master and slave NPEs through the primary and
secondary PWs, respectively, to improve access reliability. On the EVPN side, the
NPE1-NPE3 link and the NPE2-NPE3 link can be configured to work in single-
active mode or in all-active mode, the latter of which allows for load balancing.
Traffic received by PE2 is forwarded to PE1 through the bypass VXLAN tunnel and
then sent to the PE-AGG through the primary PW. Traffic from the PE-AGG to the
server is transmitted along the reverse paths.
Figure 3-80 Splicing primary and secondary PWs with an anycast VXLAN tunnel in
an EVPN active-active scenario
As shown in Figure 3-81, VPLS is deployed between CSGs and ASGs, and CSGs are
connected to ASGs through the primary and secondary PWs. EVPN is deployed
between ASGs and RSGs. On ASG1 and ASG2, a BD is configured and bound to a
VSI and an EVPN instance. In this manner, all PWs in the VSI can be connected to
the EVPN through BDs. On the CSG dual-homed to ASG1 and ASG2, the same ESI
is configured for the primary and secondary PW interfaces. The procedure for
traffic forwarding is as follows:
1. Because an ESI is configured on ASG1's PW interface and the PW is in the Up
state, ASG1 sends Ethernet A-D routes to the RSG.
2. After the Layer 2 packets sent by Site 1 reach ASG1 through the CSG, ASG1
generates MAC routes for the EVPN based on the MAC address of Site 1 in
the Layer 2 packets. Such MAC routes are sent to the RSG based on the BGP
EVPN peer relationship, and the RSG generates MAC forwarding entries based
on the received Ethernet A-D routes. Similarly, the RSG sends MAC routes that
carry the MAC address of Site 2 to the ASGs, which generate the corresponding
MAC forwarding entries.
3. After the forwarding entries are successfully set up, they can guide the
forwarding of unicast and BUM traffic. Taking the unicast
traffic sent from Site 1 to Site 2 as an example, upon receipt of the traffic
from the primary PW, ASG1 forwards the traffic to the RSG based on the MAC
routes sent by the RSG. The RSG then forwards the traffic to Site 2.
Although ASG1 and ASG2 transmit Ethernet Segment routes to each other, DF election
between ASG1 and ASG2 is implemented based on the PW status. The device (ASG1)
connected to the primary PW is the primary DF, and the device (ASG2) connected to the
secondary PW is the backup DF.
In BUM traffic forwarding scenarios, because the network is deployed with split horizon and
the backup DF blocks traffic, loops or extra packets do not occur on the network.
Background
With the wide application of MPLS VPN solutions, different MANs of a service
provider or collaborative backbone networks of different service providers often
span multiple ASs. Similar to L3VPN services, EVPN services running on an MPLS
network must also have the capability of spanning ASs.
Implementation
By advertising labeled routes between PEs, end-to-end BGP LSPs can be
established to carry Layer 2 traffic both within a BGP AS (including across IGP
areas) and between BGP ASs. Only Option C supports this mode.
Figure 3-82 Inter-AS EVPN Option C networking where PEs advertise labeled
EVPN routes
To improve scalability, you can specify a route reflector (RR) in each AS. An RR
stores all EVPN routes and exchanges EVPN routes with PEs in the AS. RRs in two
ASs establish MP-EBGP connections with each other and advertise EVPN routes.
Benefits
● EVPN routes are directly exchanged between an ingress PE and egress PE. The
routes do not have to be stored and forwarded by intermediate devices.
● Only PEs exchange EVPN routing information. Ps and ASBRs forward packets
only. The intermediate devices need to support only MPLS forwarding rather
than MPLS VPN services. In such a case, ASBRs are no longer the performance
bottlenecks. Inter-AS EVPN Option C, therefore, is suitable for an EVPN that
spans multiple ASs.
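The division of labor above can be illustrated with the label stack that the ingress PE pushes in inter-AS Option C (a conceptual sketch; the function name and label values are made up for illustration):

```python
def build_option_c_stack(igp_label, bgp_lsp_label, vpn_label):
    """Label stack pushed by the ingress PE in inter-AS Option C.

    Outermost first: the intra-AS IGP/LDP tunnel label reaches the local
    ASBR, the BGP LSP label carries the packet across ASs to the remote
    PE, and the innermost VPN label identifies the EVPN service at the
    egress PE. Ps and ASBRs only swap or pop the outer labels (plain
    MPLS forwarding); they never inspect the VPN label, which is why
    they do not need to support MPLS VPN services.
    """
    return [igp_label, bgp_lsp_label, vpn_label]
```

Because only the two endpoints ever interpret the innermost label, adding more VPN instances changes nothing on the Ps and ASBRs, which is the scalability property the benefits above describe.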
On the network shown in Figure 3-84, gateways in the DCs (DC-GW1 and DC-
GW2) can access the carrier's network edge devices (DCI-PE1 and DCI-PE2) in
EVPN-VXLAN or VLAN mode. The L3VPN or EVPN-MPLS function can be deployed
on the DCI backbone network to transmit Layer 2 or Layer 3 service traffic. When
DC A and DC B exchange their tenant host IP addresses or MAC addresses, EVPN
integrated routing and bridging (IRB) routes, EVPN IP prefix routes, BGP VPNv4
routes, EVPN MAC routes, or ARP routes are used. For details about these routes,
see Table 3-9.
Advertisement Process
● L3VPN (VXLAN access), Layer 3 services:
– DC-GW1 to DCI-PE1: DC-GW1 sends a tenant's host IP address to DCI-PE1
through an IRB route or IP prefix route. DCI-PE1 parses the tenant's host IP
route from the received EVPN route. Then the system imports the tenant's
route into the IP VPN instance based on RT matching between the EVPN
route and the IP VPN instance and delivers information about VXLAN tunnel
recursion to the VPN forwarding table.
– DCI-PE1 to DCI-PE2: DCI-PE1 re-encapsulates the EVPN route received from
DC-GW1 into a BGP VPNv4 route, applying the following changes: it changes
the next hop to the local device's IP address used to establish a BGP VPNv4
peer relationship, replaces the RD and RT values of the EVPN route with
those of an L3VPN instance, and applies for and encapsulates a VPN label.
After re-encapsulation, DCI-PE1 sends the route to DCI-PE2.
– DCI-PE2 to DC-GW2: Upon receipt, DCI-PE2 imports the BGP VPNv4 route
into the local IP VPN instance based on the route RT and delivers
information about MPLS tunnel recursion to the VPN forwarding table.
DCI-PE2 re-encapsulates the received BGP VPNv4 route into an IP prefix
route, applying the following changes: it changes the next hop to the VTEP
address of DCI-PE2, replaces the RD and RT values of the BGP VPNv4 route
with those of the L3VPN instance, and pads the route with an L3VNI. After
re-encapsulation, DCI-PE2 sends the IP prefix route to DC-GW2.
● EVPN-MPLS (VLAN access), Layer 3 services:
– DC-GW1 to DCI-PE1: DC-GW1 sends routes destined for the network
segment on which a tenant's host IP address resides to DCI-PE1 through an
IGP or BGP route. Upon receipt, DCI-PE1 delivers these routes to the VPN
forwarding table.
– DCI-PE1 to DCI-PE2: DCI-PE1 re-encapsulates the VPN route into an IP
prefix route, applying the following changes: it changes the next hop to the
local device's IP address used to establish a BGP EVPN peer relationship,
adds the RD and RT attributes to the EVPN route, and applies for and
encapsulates a VPN label. After re-encapsulation, DCI-PE1 sends the route
to DCI-PE2.
– DCI-PE2 to DC-GW2: After receiving the EVPN route, DCI-PE2 imports the
route into the local IP VPN instance based on the RT of the EVPN route,
generates a VPN route forwarding entry, and advertises the route to
DC-GW2 through an IGP or BGP VPN peer relationship.
● EVPN-MPLS (VLAN access), Layer 2 services:
– DC-GW1 to DCI-PE1: DCI-PE1 learns the source MAC address of service
traffic received from DC-GW1. Then DCI-PE1 generates a local MAC
forwarding entry and an EVPN MAC route.
– DCI-PE1 to DCI-PE2: DCI-PE1 generates an EVPN MAC route, applying the
following changes: it changes the next hop to the local device's IP address
used to establish a BGP EVPN peer relationship, adds the RD and RT
attributes to the EVPN route, and applies for and encapsulates a VPN label.
After re-encapsulation, DCI-PE1 sends the route to DCI-PE2.
– DCI-PE2 to DC-GW2: Upon receipt, DCI-PE2 imports the MAC/IP
advertisement route into the local EVPN instance based on the route RT and
generates a local Layer 2 forwarding entry accordingly.
● EVPN-MPLS (VXLAN access), Layer 3 services:
– DC-GW1 to DCI-PE1: DC-GW1 sends a tenant's host IP address to DCI-PE1
through an IRB route or IP prefix route. DCI-PE1 parses the tenant's host IP
route from the received EVPN route. Then the system imports the tenant's
route into the IP VPN instance based on RT matching between the local
EVPN instance and the IP VPN instance and delivers information about
VXLAN tunnel recursion to the VPN forwarding table.
– DCI-PE1 to DCI-PE2: DCI-PE1 re-encapsulates the route into an IRB or IP
prefix route, changing the encapsulation mode from VXLAN to MPLS: it
changes the next hop to the local device's IP address used to establish a
BGP EVPN peer relationship, adds the RD and RT attributes to the EVPN
route, and applies for and encapsulates a VPN label. After re-encapsulation,
DCI-PE1 sends the route to DCI-PE2.
– DCI-PE2 to DC-GW2: Upon receipt, DCI-PE2 imports the IRB or IP prefix
route into the IP VPN instance and delivers information about MPLS tunnel
recursion to the VPN forwarding table. DCI-PE2 changes the L2 and L3 VPN
labels in the route to L2 and L3 VNIs, re-encapsulates the route into an IRB
or IP prefix route, and then sends the route to DC-GW2.
● EVPN-MPLS (VXLAN access), Layer 2 services:
– DC-GW1 to DCI-PE1: DC-GW1 sends a tenant's host MAC address to
DCI-PE1 through a MAC/IP advertisement route. DCI-PE1 imports the
MAC/IP advertisement route into the local EVPN instance based on RT
matching and generates a MAC forwarding entry.
– DCI-PE1 to DCI-PE2: DCI-PE1 re-encapsulates the EVPN routes and changes
the next-hop IP address to the IP address of the locally established EVPN
peer. The RD and RT attributes in the EVPN routes that carry the VXLAN
encapsulation attribute are replaced with the RD and RT of the local EVPN
instance, and an MPLS label is requested. The re-encapsulated MAC/IP
advertisement routes are then advertised to DCI-PE2.
– DCI-PE2 to DC-GW2: Upon receipt, DCI-PE2 imports the MAC/IP
advertisement route into the local EVPN instance based on RT matching.
DCI-PE2 re-encapsulates the EVPN route by changing the next hop to its
own VTEP address, replacing the RD and RT values of the EVPN route with
those of the local EVPN instance, and padding the route with an L2VNI.
Then DCI-PE2 sends the re-encapsulated MAC advertisement route to
DC-GW2.
Forwarding Process
● L3VPN (VXLAN access), Layer 3 services:
– DC-GW2 to DCI-PE2: DC-GW2 sends a data packet to DCI-PE2 through the
VXLAN tunnel.
– DCI-PE2 to DCI-PE1: DCI-PE2 parses the VXLAN data packet to obtain the
VNI and the data packet. Based on the VNI, DCI-PE2 finds the corresponding
VPN instance and, based on the tenant's host IP address, searches the
corresponding VPN instance forwarding table for the MPLS tunnel to
DCI-PE1. After encapsulating a VPN label and a public MPLS tunnel label
into the data packet, DCI-PE2 sends the packet to DCI-PE1 through the
MPLS tunnel.
– DCI-PE1 to DC-GW1: Upon receipt, DCI-PE1 removes the public MPLS
tunnel label and, based on the VPN label, finds the corresponding VPN
instance. Then, based on the tenant's host IP address, DCI-PE1 searches the
corresponding VPN instance forwarding table for the VXLAN tunnel to
DC-GW1. DCI-PE1 encapsulates the data packet with a VXLAN header and
then sends the VXLAN packet to DC-GW1.
● EVPN-MPLS (VLAN access), Layer 3 services:
– DC-GW2 to DCI-PE2: DC-GW2 sends a data packet to DCI-PE2 through VPN
forwarding.
– DCI-PE2 to DCI-PE1: Upon receipt, DCI-PE2 searches the forwarding table of
the VPN instance bound to the interface that receives the data packet and,
based on the destination address of the data packet, finds the MPLS tunnel
to DCI-PE1. After encapsulating a VPN label and a public MPLS tunnel label
into the data packet, DCI-PE2 sends the packet to DCI-PE1 through the
MPLS tunnel.
– DCI-PE1 to DC-GW1: DCI-PE1 removes the public MPLS tunnel label and,
based on the VPN label, finds the corresponding VPN instance. Based on the
tenant's host IP address, DCI-PE1 searches the corresponding VPN instance
forwarding table for the outbound interface to DC-GW1. Then, DCI-PE1
sends the data packet to DC-GW1 through the outbound interface.
● EVPN-MPLS (VLAN access), Layer 2 services:
– DC-GW2 to DCI-PE2: DC-GW2 sends a data packet to DCI-PE2 through
Layer 2 forwarding on the data plane.
– DCI-PE2 to DCI-PE1: Upon receipt, DCI-PE2 searches the forwarding table of
the EVPN instance bound to the interface that receives the data packet and,
based on the destination address of the data packet, finds the MPLS tunnel
to DCI-PE1. After encapsulating a VPN label and a public MPLS tunnel label
into the data packet, DCI-PE2 sends the packet to DCI-PE1 through the
MPLS tunnel.
– DCI-PE1 to DC-GW1: DCI-PE1 removes the public MPLS tunnel label and,
based on the VPN label, finds the corresponding EVPN instance. Based on
the MAC forwarding entry for the broadcast domain bound to the EVPN
instance, DCI-PE1 finds the corresponding outbound interface and sends the
data packet to DC-GW1 through the outbound interface.
● EVPN-MPLS (VXLAN access), Layer 3 services:
– DC-GW2 to DCI-PE2: DC-GW2 sends a data packet to DCI-PE2 through the
VXLAN tunnel.
– DCI-PE2 to DCI-PE1: DCI-PE2 parses the VXLAN data packet to obtain the
VNI and the data packet. Based on the VNI, DCI-PE2 finds the corresponding
VPN instance and, based on the tenant's host IP address, searches the
corresponding VPN instance forwarding table for the MPLS tunnel to
DCI-PE1. After encapsulating a VPN label and a public MPLS tunnel label
into the data packet, DCI-PE2 sends the packet to DCI-PE1 through the
MPLS tunnel.
– DCI-PE1 to DC-GW1: Upon receipt, DCI-PE1 removes the public MPLS
tunnel label and, based on the VPN label, finds the corresponding VPN
instance. Then, based on the tenant's host IP address, DCI-PE1 searches the
corresponding VPN instance forwarding table for the VXLAN tunnel to
DC-GW1. DCI-PE1 encapsulates the data packet with a VXLAN header and
then sends the VXLAN packet to DC-GW1.
● EVPN-MPLS (VXLAN access), Layer 2 services:
– DC-GW2 to DCI-PE2: DC-GW2 sends a data packet to DCI-PE2 through the
VXLAN tunnel.
– DCI-PE2 to DCI-PE1: DCI-PE2 parses the VXLAN data packet to obtain the
VNI and the data packet. Based on the VNI, DCI-PE2 finds the corresponding
broadcast domain and, based on the broadcast domain, finds the forwarding
table of the corresponding EVPN instance. DCI-PE2 then searches for the
forwarding information corresponding to the destination address of the data
packet, that is, information about the MPLS tunnel to DCI-PE1. After
encapsulating a VPN label and a public MPLS tunnel label into the data
packet, DCI-PE2 sends the packet to DCI-PE1 through the MPLS tunnel.
– DCI-PE1 to DC-GW1: Upon receipt, DCI-PE1 removes the public MPLS
tunnel label and, based on the VPN label and BD ID, finds the corresponding
broadcast domain. Then, based on the tenant's host destination MAC
address, DCI-PE1 searches the broadcast domain for the VXLAN tunnel to
DC-GW1. DCI-PE1 encapsulates the data packet with a VXLAN header and
then sends the VXLAN packet to DC-GW1.
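The re-encapsulation steps that recur in the advertisement processes described above (change the next hop, replace the RD/RT with the local instance's values, apply a new label) can be sketched as a single transformation (a simplified model; the dictionary keys and function name are illustrative, not an implementation of the routing software):

```python
def reencapsulate(route, local_next_hop, local_rd, local_rt, new_label):
    """Rewrite a route at a DCI-PE boundary, as in the processes above.

    `route` is modeled as a dict with 'next_hop', 'rd', 'rt', and
    'label' keys. The DCI-PE changes the next hop to its own peer (or
    VTEP) address, replaces the RD and RT with those of the local
    instance, and applies for and encapsulates a new VPN label. All
    other attributes (such as the prefix) are carried over unchanged.
    """
    return {**route,
            "next_hop": local_next_hop,
            "rd": local_rd,
            "rt": local_rt,
            "label": new_label}
```

The same rewrite happens twice end to end: once when DCI-PE1 turns a DC-side route into a backbone-side route, and once when DCI-PE2 turns the backbone-side route back into a DC-side route for DC-GW2.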
large amount of mobile phone traffic is sent to virtual unified gateways (vUGWs)
and virtual multiservice engines (vMSEs) on the DCN. After being processed by the
vUGWs and vMSEs, the IPv4 or IPv6 mobile phone traffic is forwarded over the
DCN to destination devices on the Internet. The destination devices send traffic to
mobile phones in similar ways. To achieve these functions and ensure traffic load
balancing on the DCN, you need to deploy the NFVI distributed gateway function.
A vUGW is a unified gateway developed for Huawei's CloudEdge solution, which can be
used for 3GPP access in GPRS, UMTS, and LTE modes. A vUGW can be a GGSN, an S-GW, or
a P-GW, which meets the networking requirements of carriers in different stages and
operations scenarios.
A vMSE is a virtualized MSE. The current carrier network includes multiple function
boxes, such as the firewall box, video acceleration box, header enhancement box, and URL
filtering box. All of these functions are enabled through patch installation, which makes the
network increasingly complex and makes service provisioning and maintenance difficult. To
address these problems, vMSEs incorporate the functions of these boxes, manage these
functions uniformly, and implement value-added service processing for the data services
initiated by users.
The NFVI distributed gateway function supports service traffic transmission over
SR or VXLAN tunnels. SR tunnels are classified as segmented SR tunnels or E2E
SR tunnels. This section describes the implementation of the NFVI distributed
gateway function for traffic transmission over segmented SR tunnels.
Networking Introduction
Figure 3-85 shows the networking of an NFVI distributed gateway (SR tunnels).
DC-GWs, which are the border gateways of the DCN, exchange Internet routes
with external devices over PEs. L2GW/L3GW1 and L2GW/L3GW2 are connected to
VNFs. VNF1 and VNF2 that function as virtualized NEs are deployed to implement
the vUGW functions and vMSE functions, respectively. VNF1 and VNF2 are each
connected to L2GW/L3GW1 and L2GW/L3GW2 through IPUs.
Function Deployment
In NFVI distributed gateway networking (SR tunnels) shown in Figure 3-85, users
need to plan the number of bridge domains (BDs) based on the number of
network segments corresponding to each IPU. For example, assume that the four
IP addresses planned for the four IPUs belong to four network segments. In this case,
four BDs need to be planned. You need to configure the BDs and the
corresponding VBDIF interfaces on all DC-GWs and L2GW/L3GWs and bind all the
VBDIF interfaces to the same L3VPN instance. In addition, the following functions
need to be deployed on the DCN:
● Establish BGP VPN peer relationships between VNFs and DC-GWs so that the
VNFs can advertise mobile phone routes (UE IP) to DC-GWs.
● On L2GW/L3GW1 and L2GW/L3GW2, configure static VPN routes with the IP
addresses of VNFs as the destination addresses and the IP addresses of IPUs
as next-hop addresses.
● Establish BGP EVPN peer relationships between any DC-GW and L2GW/L3GW.
Based on the BGP EVPN peer relationships, the L2GW/L3GW can flood the
static routes destined for VNFs to other devices, and the DC-GW can then
advertise local loopback routes and default routes to the L2GW/L3GW.
● Establish BGP VPNv4/v6 peer relationships between DC-GWs and PEs. DC-
GWs can then advertise mobile phone routes to PEs and receive the Internet
routes sent by external devices based on the BGP VPNv4/v6 peer
relationships.
● Deploy SR tunnels between PEs and DC-GWs and between DC-GWs and
L2GW/L3GWs to carry service traffic.
● The traffic transmitted between mobile phones and the Internet over VNFs is
north-south traffic. The traffic transmitted between VNF1 and VNF2 is east-
west traffic. To achieve load balancing of east-west traffic and north-south
traffic, deploy the load balancing function on DC-GWs and L2GW/L3GWs.
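The BD-planning rule stated above (one BD, and one VBDIF interface, per IPU network segment) can be sketched with the standard `ipaddress` module; the IPU addresses and the /24 prefix length are made-up examples, not values from the document:

```python
import ipaddress

def plan_bds(ipu_addrs, prefix_len=24):
    """Group IPU addresses by network segment; each segment needs one BD.

    Returns the distinct network segments; the number of BDs (and VBDIF
    interfaces) to configure equals the number of segments.
    """
    segments = {ipaddress.ip_network(f"{a}/{prefix_len}", strict=False)
                for a in ipu_addrs}
    return sorted(str(s) for s in segments)

# Four IPUs on four different segments -> four BDs must be planned.
bds = plan_bds(["10.1.1.10", "10.1.2.10", "10.1.3.10", "10.1.4.10"])
```

If two IPUs shared a segment, they would map to the same BD, so the BD count follows the segment count rather than the IPU count.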
2. L2GW/L3GWs learn the MAC addresses and ARP information of IPUs through
the data plane. Such information is advertised to DC-GWs through EVPN
routes and can be used for establishment of ARP entries and MAC entries for
Layer 2 forwarding.
– The destination MAC addresses in the MAC entries on L2GW/L3GWs are
IPUs' MAC addresses. For the IPU directly connected to an L2GW/L3GW,
this IPU is used as the outbound interface in MAC entries. For the IPUs
(such as IPU3 and IPU4 corresponding to L2GW/L3GW2 in Figure 3-85)
connecting to non-direct-link L2GW/L3GWs, MAC entries include the
3. After static VPN routes are configured on L2GW/L3GWs, these static VPN
routes are imported to the BGP EVPN routing table and sent to DC-GWs as IP
prefix routes based on BGP EVPN peer relationships.
Because multiple links and static routes exist between L2GW/L3GWs and VNFs, the
Add-Path function needs to be enabled when the static routes are imported to the
BGP EVPN routing table so that load balancing can be achieved.
4. By default, the next-hop address of the IP prefix routes received by DC-GWs is
the IP address of an L2GW/L3GW and routes are recursed over SR tunnels.
5. To establish BGP VPN peer relationships between DC-GWs and VNFs, DC-GWs
need to advertise the routes destined for loopback addresses to L2GW/L3GWs.
After BGP VPN peer relationships are established between DC-GWs and VNFs,
VNFs can send DC-GWs the mobile phone routes whose next-hop address is a
VNF's IP address.
6. Devices on the DCN do not need to be aware of external routes. Therefore, a
route-policy needs to be configured on DC-GWs so that, besides the loopback
routes, DC-GWs send only default routes to L2GW/L3GWs.
7. DC-GWs function as the border gateways of the DCN and can exchange
Internet route information, such as the Internet server address, with PEs.
8. PEs can obtain mobile phone routes and the routes carrying VNFs' IP
addresses from DC-GWs based on BGP VPNv4/v6 peer relationships. The next-
hop addresses of these routes are DC-GWs' IP addresses, and the outbound
interface is an SR tunnel connecting to a DC-GW.
9. To achieve load balancing of east-west traffic and north-south traffic, the load
balancing function and Add-Path function need to be deployed on DC-GWs
and L2GW/L3GWs.
– Load balancing of north-south traffic: Taking DC-GW1 in Figure 3-85 as
an example, DC-GW1 can receive the EVPN routes destined for VNF2
from L2GW/L3GW1 and L2GW/L3GW2. By default, after the load
balancing function is configured, DC-GW1 sends half of the traffic
destined for VNF2 through L2GW/L3GW1 and the other half of the traffic
through L2GW/L3GW2. However, L2GW/L3GW1 connects to VNF2 over
one link and L2GW/L3GW2 connects to VNF2 over two links, preventing the
load balancing function from achieving the desired effect. Therefore, the
Add-Path function needs to be deployed on L2GW/L3GWs. After the Add-
Path function is deployed on L2GW/L3GWs, L2GW/L3GW2 sends two
routes with the same destination address to DC-GW1, achieving load
balancing.
– East-west traffic load balancing: Taking L2GW/L3GW1 in Figure 3-85 as
an example, because the Add-Path function is deployed on L2GW/
Figure 3-93 North-south traffic forwarding from mobile phones to the Internet
As shown in Figure 3-94, the procedure for forwarding north-south traffic from
the Internet to mobile phones over VNFs is as follows:
1. Devices on the Internet send mobile phones the reply packets whose
destination IP addresses are the IP addresses of mobile phones. This is
because mobile phone routes are advertised by VNFs to DC-GWs based on
BGP VPN peer relationships and then advertised to the Internet by DC-GWs
through PEs. Therefore, reply packets must be first transmitted to VNFs.
2. Upon receipt of reply packets, PEs search the VPN routing table for the
forwarding entries corresponding to mobile phone routes whose next-hop
addresses are DC-GWs' IP addresses and the outbound interface is the one for
SR tunnels. The reply packets are sent to DC-GWs after being encapsulated
with a VPN label and an SR label.
3. Upon receipt of the reply packets, DC-GWs search the VPN routing table for
the forwarding entries corresponding to the mobile phone routes based on
the BGP VPN peer relationships between DC-GWs and VNFs. These routes are
recursed to one or more VBDIF interfaces, and traffic is load balanced to these
VBDIF interfaces. The VBDIF interfaces look for ARP information and MAC
forwarding entries. Based on the MAC entries, the reply packets are forwarded
to L2GW/L3GWs over SR tunnels after being encapsulated with an EVPN
label.
4. Upon receipt of the reply packets, L2GW/L3GWs find the corresponding BDs
based on the EVPN label and search for the MAC forwarding entries. The
outbound interface information is then obtained based on the MAC
forwarding entries. The reply packets are then forwarded to VNFs.
5. Upon receipt of the reply packets, VNFs search for the base stations
corresponding to the destination IP addresses of mobile phones and
encapsulate tunnel information in which the destination IP address is that of a
base station. The reply packets are then forwarded to L2GW/L3GWs based on
default gateways.
6. Upon receipt of the reply packets, L2GW/L3GWs search the VPN routing table,
and the default routes advertised by DC-GWs to L2GW/L3GWs are hit. The
reply packets are then forwarded to DC-GWs over SR tunnels after being
encapsulated with a VPN label.
7. Upon receipt of the reply packets, DC-GWs find the corresponding VPN
forwarding table based on the VPN label, and the default routes or specific
routes are hit. The reply packets are then forwarded to PEs over SR tunnels
and then from PEs to base stations. After being encapsulated by base stations,
the reply packets are sent to the corresponding mobile phones.
Figure 3-94 North-south traffic forwarding from the Internet to mobile phones
Upon receipt of user packets, a VNF finds that the packets need to be sent to
another VNF for value-added service processing. In this case, east-west traffic
occurs. On the network shown in Figure 3-95, the difference between forwarding
east-west traffic and forwarding north-south traffic lies in the processing of
packets after they arrive at VNF1.
1. Upon receipt of user packets, VNF1 finds that the packets need to be
processed by VNF2 and then adds a tunnel label with the destination IP
address as VNF2's IP address. The user packets are sent to L2GW/L3GWs
based on default routes.
2. Upon receipt of user packets, an L2GW/L3GW searches the VPN forwarding
table and finds that multiple load-balancing forwarding entries exist. In some
A vUGW is a unified gateway developed for Huawei's CloudEdge solution, which can be
used for 3GPP access in GPRS, UMTS, and LTE modes. A vUGW can be a GGSN, an S-GW, or
a P-GW, which meets the networking requirements of carriers in different stages and
operations scenarios.
A vMSE is a virtualized MSE. The current carrier network includes multiple function
boxes, such as the firewall box, video acceleration box, header enhancement box, and URL
filtering box. All of these functions are enabled through patch installation, which makes the
network increasingly complex and makes service provisioning and maintenance difficult. To
address these problems, vMSEs incorporate the functions of these boxes, manage these
functions uniformly, and implement value-added service processing for the data services
initiated by users.
The NFVI distributed gateway function supports service traffic transmission over
SR or VXLAN tunnels. SR tunnels are classified as segmented SR tunnels or E2E SR
tunnels. In E2E SR tunnel scenarios, PEs use BGP VPNv4/VPNv6 or BGP EVPN to
connect to a DCN. The control-plane implementation varies according to the
protocol used. This section uses BGP VPNv4/VPNv6 as an example.
Networking Introduction
Figure 3-96 shows the networking of an NFVI distributed gateway (BGP VPNv4/
VPNv6 over E2E SR tunnels). DC-GWs, which are the border gateways of the DCN,
exchange Internet routes with external devices over PEs. L2GW/L3GW1 and
L2GW/L3GW2 are connected to VNFs. VNF1 and VNF2 that function as virtualized
NEs are deployed to implement the vUGW functions and vMSE functions,
respectively. VNF1 and VNF2 are each connected to L2GW/L3GW1 and L2GW/
L3GW2 through IPUs.
Function Deployment
On the network shown in Figure 3-96, the number of BDs needs to be planned
based on the number of network segments corresponding to each IPU. An
example assumes that the four IP addresses planned for four IPUs belong to four
network segments. In this case, four BDs need to be planned. You need to
configure the BDs and the corresponding VBDIF interfaces on all L2GW/L3GWs
and bind all the VBDIF interfaces to the same L3VPN instance. In addition, the
following functions need to be deployed on the DCN:
● Establish BGP VPN peer relationships between VNFs and DC-GWs so that the
VNFs can advertise mobile phone routes (UE IP) to DC-GWs.
● On L2GW/L3GW1 and L2GW/L3GW2, configure static VPN routes with the IP
addresses of VNFs as the destination addresses and the IP addresses of IPUs
as next-hop addresses.
● Establish BGP EVPN peer relationships between any DC-GW and L2GW/L3GW.
The DC-GW can then advertise local loopback routes and default routes to
2. After static VPN routes are configured on L2GW/L3GWs, these static VPN
routes are imported to the BGP EVPN routing table and sent to DC-GWs as IP
prefix routes based on BGP EVPN peer relationships. A route-policy needs to
be configured on L2GW/L3GWs so that L2GW/L3GWs send PEs only the routes
destined for VNFs.
4. DC-GWs can receive the IP prefix routes with a VNF's IP address as the
destination IP address from L2GW/L3GWs. However, DC-GWs do not have
VBDIF interfaces and therefore cannot recurse routes to the VBDIF interface
based on the Gateway IP attribute. DC-GWs need to recurse routes to SR
tunnels based on IP prefix routes. Therefore, DC-GWs must be enabled with
the function to ignore the Gateway IP attribute to send BGP packets to VNFs
over SR tunnels based on next-hop addresses.
5. To establish BGP VPN peer relationships between DC-GWs and VNFs, DC-GWs
need to advertise the routes destined for loopback addresses to L2GW/L3GWs.
After BGP VPN peer relationships are established between VNFs and DC-GWs,
VNFs send mobile phone routes to DC-GWs, and DC-GWs send mobile phone
routes to L2GW/L3GWs based on the BGP EVPN peer relationships. A route-
policy needs to be configured on DC-GWs to add the Gateway IP attribute to
the routes. The Gateway IP attribute is the IP address of a VNF. A route-policy
also needs to be configured on L2GW/L3GWs to allow L2GW/L3GWs to
receive only the mobile phone routes generated by the directly connected
VNFs. This prevents L2GW/L3GWs from sending a large number of repetitive
routes to PEs. Upon receipt of mobile phone routes from DC-GWs, L2GW/
L3GWs generate VPN forwarding entries and re-encapsulate EVPN routes as
BGP VPNv4/v6 routes before sending the routes to PEs. PEs then generate
VPN forwarding entries with the IP address of mobile phone routes as the
destination address, an L2GW/L3GW's IP address as the next-hop address, and
the interface for SR tunnels as the outbound interface.
6. Devices on the DCN do not need to be aware of external routes. Therefore,
route-policies need to be configured on PEs to allow PEs to send only default
routes to L2GW/L3GWs.
7. PEs can exchange information about Internet routes, such as the Internet
server address, with external devices.
8. To achieve load balancing of east-west traffic and north-south traffic, the load
balancing function and Add-Path function need to be deployed on PEs and
L2GW/L3GWs.
– Load balancing of north-south traffic: Taking PE1 in Figure 3-96 as an
example, PE1 can receive the VPN routes destined for VNF2 from L2GW/
L3GW1 and L2GW/L3GW2. By default, after the load balancing function
is configured, PE1 sends half of the traffic destined for VNF2 through
L2GW/L3GW1 and the other half of the traffic through L2GW/L3GW2.
However, L2GW/L3GW1 connects to VNF2 over one link and L2GW/
L3GW2 connects to VNF2 over two links, preventing the load balancing
function from achieving the desired effect. Therefore, the Add-Path
function needs to be deployed on L2GW/L3GWs. After the Add-Path
function is deployed on L2GW/L3GWs, L2GW/L3GW2 sends two routes
with the same destination address to DC-GW1, achieving load balancing.
– East-west traffic load balancing: Taking L2GW/L3GW1 in Figure 3-96 as
an example, because the Add-Path function is deployed on L2GW/
L3GW2, L2GW/L3GW1 receives two EVPN routes from L2GW/L3GW2 and
L2GW/L3GW1 has a static route with the next-hop address as IPU3's IP
address. The destination addresses of all these routes are VNF2's IP
address. Therefore, the load balancing function needs to be configured to
balance traffic over static routes and EVPN routes.
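The load-balancing effect described above can be sketched numerically (an illustrative model only: it assumes the receiver splits traffic equally across all received routes, and the gateway and path names are placeholders):

```python
from collections import Counter

def ecmp_share(advertised_routes):
    """Traffic share per gateway, assuming an equal split per received route.

    `advertised_routes` is a list of (gateway, path_id) pairs, one per
    route the receiving device installed for the same destination.
    """
    counts = Counter(gw for gw, _ in advertised_routes)
    total = len(advertised_routes)
    return {gw: n / total for gw, n in counts.items()}

# Without Add-Path: each gateway advertises only its best route, so the
# receiver splits traffic 1:1 even though the gateways have 1 and 2 links.
without = ecmp_share([("L2GW/L3GW1", "path1"), ("L2GW/L3GW2", "path1")])

# With Add-Path on L2GW/L3GW2 (two links, two routes): the split becomes
# 1/3 vs 2/3, matching the actual link counts toward VNF2.
with_addpath = ecmp_share([("L2GW/L3GW1", "path1"),
                           ("L2GW/L3GW2", "path1"),
                           ("L2GW/L3GW2", "path2")])
```

This is exactly why Add-Path is required in both the north-south and east-west cases: it lets a gateway advertise one route per usable link instead of a single best route.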
Figure 3-102 North-south traffic forwarding from mobile phones to the Internet
As shown in Figure 3-103, the procedure for forwarding north-south traffic from
the Internet to mobile phones over VNFs is as follows:
1. Devices on the Internet send mobile phones reply packets whose
destination IP addresses are the IP addresses of the mobile phones. Mobile
phone routes are advertised by VNFs to DC-GWs over BGP VPNv4/v6 peer
relationships and then advertised to the Internet by PEs. Therefore, the reply
packets are first transmitted to PEs.
2. Upon receipt of the reply packets, PEs search the VPN routing table for the
forwarding entries corresponding to the mobile phone routes. The next-hop
addresses of these routes are L2GW/L3GWs' IP addresses, and the outbound
interfaces are SR tunnel interfaces. The reply packets are encapsulated with a
VPN label and an SR label and then sent to L2GW/L3GWs.
3. Upon receipt of reply packets, L2GW/L3GWs find the VPN forwarding entries
based on the VPN label and the VBDIF interface based on the VPN forwarding
entries. The packets are then forwarded to VNFs based on the MAC entries
corresponding to the VBDIF interface.
4. Upon receipt of the reply packets, VNFs search for the base stations
corresponding to the mobile phones' destination IP addresses and encapsulate
the packets with tunnel information in which the destination IP address is a
base station's IP address. The reply packets are then forwarded to L2GW/
L3GWs through the default gateways.
5. Upon receipt of the reply packets, L2GW/L3GWs search the VPN routing table,
and the default routes advertised by DC-GWs to L2GW/L3GWs are hit. The
reply packets are then forwarded to DC-GWs over SR tunnels after being
encapsulated with a VPN label.
6. Upon receipt of reply packets, PEs use the VPN forwarding table to forward
the packets to base stations based on the VPN label. The base stations
decapsulate the packets before forwarding them to mobile phones.
Figure 3-103 North-south traffic forwarding from the Internet to mobile phones
Upon receipt of user packets, a VNF finds that the packets need to be sent to
another VNF for value-added service processing. In this case, east-west traffic
occurs. On the network shown in Figure 3-104, the difference between forwarding
east-west traffic and forwarding north-south traffic lies in the processing of
packets after they arrive at VNF1.
1. Upon receipt of user packets, VNF1 finds that the packets need to be
processed by VNF2 and then adds a tunnel label with the destination IP
A vUGW is a unified gateway developed for Huawei's CloudEdge solution and can
be used for 3GPP access in GPRS, UMTS, and LTE modes. A vUGW can function as
a GGSN, an S-GW, or a P-GW, meeting carriers' networking requirements in
different phases and operation scenarios.
A vMSE is a virtualized MSE. A typical carrier network includes multiple function
boxes, such as the firewall box, video acceleration box, header enhancement box,
and URL filter box. These functions are enabled through patch installation, which
makes the network increasingly complex and complicates service provisioning and
maintenance. To address these problems, vMSEs incorporate the functions of these
boxes, manage them uniformly, and implement value-added service processing for
the data services initiated by users.
The NFVI distributed gateway function supports service traffic transmission over
SR or VXLAN tunnels. SR tunnels are classified as segmented SR tunnels or E2E SR
tunnels. In E2E SR tunnel scenarios, PEs use BGP VPNv4/VPNv6 or BGP EVPN to
connect to a DCN. The control-plane implementation varies according to the
protocol used. This section describes the implementation principles in BGP EVPN
scenarios.
Networking Introduction
Figure 3-105 shows the networking of an NFVI distributed gateway (BGP EVPN
over E2E SR tunnels). DC-GWs, which are the border gateways of the DCN,
exchange Internet routes with external devices over PEs. L2GW/L3GW1 and
L2GW/L3GW2 are connected to VNFs. VNF1 and VNF2 that function as virtualized
NEs are deployed to implement the vUGW functions and vMSE functions,
respectively. VNF1 and VNF2 are each connected to L2GW/L3GW1 and L2GW/
L3GW2 through IPUs.
Function Deployment
On the network shown in Figure 3-105, the number of BDs needs to be planned
based on the number of network segments corresponding to each IPU. An
example assumes that the four IP addresses planned for four IPUs belong to four
network segments. In this case, four BDs need to be planned. You need to
configure the BDs and the corresponding VBDIF interfaces on all L2GW/L3GWs
and bind all the VBDIF interfaces to the same L3VPN instance. In addition, the
following functions need to be deployed on the DCN:
● Establish BGP VPN peer relationships between VNFs and DC-GWs so that the
VNFs can advertise mobile phone routes (UE IP) to DC-GWs.
● On L2GW/L3GW1 and L2GW/L3GW2, configure static VPN routes with the IP
addresses of VNFs as the destination addresses and the IP addresses of IPUs
as next-hop addresses.
● Deploy EVPN RRs, which can be either standalone devices or DC-GWs. In this
section, BGP EVPN peer relationships are established among all L2GW/
L3GWs, PEs, and DC-GWs; DC-GWs are deployed as RRs to reflect EVPN
routes; and L2GW/L3GWs function as RR clients. L2GW/L3GWs can use EVPN
RRs to synchronize MAC or ARP routes as well as the IP prefix routes carrying
a VNF address as the destination address with IPUs.
● Configure static default routes on PEs and use the EVPN RRs to reflect the
static default routes to L2GW/L3GWs.
● Deploy SR tunnels between PEs and L2GW/L3GWs and between DC-GWs and
L2GW/L3GWs to carry service traffic.
● The traffic transmitted between mobile phones and the Internet over VNFs is
north-south traffic. The traffic transmitted between VNF1 and VNF2 is east-
west traffic. To achieve load balancing of east-west traffic and north-south
traffic, deploy the load balancing function on DC-GWs and L2GW/L3GWs.
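The static VPN routes toward the VNFs described in the second bullet can be sketched as follows. All names and addresses here are assumptions for illustration, not values taken from this example.

```
# Hypothetical sketch on an L2GW/L3GW (assumed VPN instance and addresses).
# Two static VPN routes to VNF1, one per connected IPU, enabling ECMP.
ip route-static vpn-instance vpn1 172.16.1.1 32 192.168.1.10   # next hop: IPU1
ip route-static vpn-instance vpn1 172.16.1.1 32 192.168.2.10   # next hop: IPU2
# Allow multiple paths in the VPN instance for load balancing.
bgp 100
 ipv4-family vpn-instance vpn1
  maximum load-balancing 8
```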
2. After static VPN routes destined for VNFs are configured on L2GW/L3GWs,
these static VPN routes are imported into the BGP EVPN routing table, and IP
prefix routes are generated. These routes are sent to DC-GWs and PEs over
BGP EVPN peer relationships. Because these routes carry gateway addresses,
DC-GWs and PEs (which do not have VBDIF interfaces) may attempt to
recurse them to VBDIF interfaces. To prevent this, configure an import
route-policy on DC-GWs and PEs to delete gateway addresses from the
received routes.
4. To establish BGP VPN peer relationships between DC-GWs and VNFs, DC-GWs
need to advertise the routes destined for loopback addresses to L2GW/L3GWs.
After BGP VPN peer relationships are established between VNFs and DC-GWs,
VNFs can send mobile phone routes to DC-GWs, and DC-GWs send the
mobile phone routes as IP prefix routes to L2GW/L3GWs based on the BGP
VPN peer relationships. The Gateway IP attribute, which is the IP address of a
VNF, is added to the IP prefix routes based on a route-policy. Upon receipt of
IP prefix routes, L2GW/L3GWs generate VPN forwarding entries. L2GW/
L3GWs then send EVPN routes to PEs based on EBGP EVPN peer relationships
so that PEs can generate VPN forwarding entries with the destination address
as the IP address of mobile phone routes and the next-hop address as a VNF's
IP address. The routes are then recursed to VNFs, and the outbound interface
is an SR tunnel.
5. Devices on the DCN do not need to be aware of external routes. Therefore,
route-policies need to be configured on PEs so that the PEs advertise only
default routes to L2GW/L3GWs.
6. PEs can exchange information about Internet routes, such as the Internet
server address, with external devices.
7. To achieve load balancing of east-west traffic and north-south traffic, the load
balancing function and Add-Path function need to be deployed on PEs and
L2GW/L3GWs.
– Load balancing of north-south traffic: Taking PE1 in Figure 3-105 as an
example, PE1 can receive the VPN routes destined for VNF2 from L2GW/
L3GW1 and L2GW/L3GW2. By default, after the load balancing function
is configured, PE1 sends half of the traffic destined for VNF2 through
L2GW/L3GW1 and the other half of the traffic through L2GW/L3GW2.
However, L2GW/L3GW1 connects to VNF2 over one link, whereas L2GW/
L3GW2 connects to VNF2 over two links, preventing the load balancing
function from achieving the desired effect. Therefore, the Add-Path
function needs to be deployed on L2GW/L3GWs. After the Add-Path
function is deployed on L2GW/L3GWs, L2GW/L3GW2 sends two routes
with the same destination address to DC-GW1, achieving load balancing.
– East-west traffic load balancing: Taking L2GW/L3GW1 in Figure 3-105 as
an example, because the Add-Path function is deployed on L2GW/
L3GW2, L2GW/L3GW1 receives two EVPN routes from L2GW/L3GW2 and
Figure 3-111 North-south traffic forwarding from mobile phones to the Internet
As shown in Figure 3-112, the procedure for forwarding north-south traffic from
the Internet to mobile phones over VNFs is as follows:
1. Devices on the Internet send mobile phones reply packets whose
destination IP addresses are the IP addresses of the mobile phones. Mobile
phone routes are advertised by VNFs to DC-GWs over BGP VPNv4/v6 peer
relationships and then advertised to the Internet by PEs. Therefore, the reply
packets are first transmitted to PEs.
2. Upon receipt of the reply packets, PEs search the VPN routing table for the
forwarding entries corresponding to the mobile phone routes. The next-hop
addresses of these routes are L2GW/L3GWs' IP addresses, and the outbound
interfaces are SR tunnel interfaces. The reply packets are encapsulated with a
VPN label and an SR label and then sent to L2GW/L3GWs.
3. Upon receipt of reply packets, L2GW/L3GWs find the VPN forwarding entries
based on the VPN label and the VBDIF interface based on the VPN forwarding
entries. The packets are then forwarded to VNFs based on the MAC entries
corresponding to the VBDIF interface.
4. Upon receipt of the reply packets, VNFs search for the base stations
corresponding to the mobile phones' destination IP addresses and encapsulate
the packets with tunnel information in which the destination IP address is a
base station's IP address. The reply packets are then forwarded to L2GW/
L3GWs through the default gateways.
5. Upon receipt of the reply packets, L2GW/L3GWs search the VPN routing table,
and the default routes advertised by DC-GWs to L2GW/L3GWs are hit. The
reply packets are then forwarded to DC-GWs over SR tunnels after being
encapsulated with a VPN label.
6. Upon receipt of reply packets, PEs use the VPN forwarding table to forward
the packets to base stations based on the VPN label. The base stations
decapsulate the packets before forwarding them to mobile phones.
Figure 3-112 North-south traffic forwarding from the Internet to mobile phones
Upon receipt of user packets, a VNF finds that the packets need to be sent to
another VNF for value-added service processing. In this case, east-west traffic
occurs. On the network shown in Figure 3-113, the difference between forwarding
east-west traffic and forwarding north-south traffic lies in the processing of
packets after they arrive at VNF1.
1. Upon receipt of user packets, VNF1 finds that the packets need to be
processed by VNF2 and then adds a tunnel label with the destination IP
Service Overview
If point-to-multipoint Layer 2 services are carried using EVPN E-LAN and Layer 3
services are carried using MPLS L3VPN, EVPN E-LAN accessing L3VPN must be
configured on the devices where Layer 2 and Layer 3 services overlap.
Networking Description
On the network shown in Figure 3-114, the first layer is the user-side access
network where Layer 2 services are carried between CPEs and UPEs to implement
VLAN forwarding. The second layer is the aggregation network where services are
carried using EVPN E-LAN and BD-based EVPN MPLS is deployed between UPEs
and NPEs. The third layer is the core network where services are carried using
MPLS L3VPN. In this case, NPEs must be deployed with EVPN E-LAN accessing
L3VPN. Specifically, configure an L2VE sub-interface to access the BD EVPN on an
NPE and then configure an L3VE sub-interface to access the L3VPN. In this manner,
when the public network packets of the EVPN E-LAN reach the L2VE sub-interface
on the NPE, the corresponding tag is encapsulated into the packets based on the
tag processing behavior configured on the L2VE sub-interface. After the packets are
looped back to the L3VE interface, the corresponding L3VE sub-interface is
matched based on the tag carried in the packets, and the packets are forwarded
using the corresponding L3VPN instance.
Feature Deployment
Before deploying EVPN E-LAN accessing L3VPN, complete the following tasks on
an NPE:
● Create an L2VE interface and an L3VE interface and bind them to the same
VE group.
● Create an L2VE sub-interface and add it to the BD to which the EVPN
instance is bound.
● Create an L3VE sub-interface, specify the associated VLAN, set the VLAN
encapsulation mode (configure the Tag), and bind it to the corresponding
L3VPN instance.
Purpose
As services grow rapidly, different sites have an increasingly strong need for Layer
2 interworking. VPLS, which is generally used for such a purpose, has the following
shortcomings:
● Lack of support for load balancing: VPLS does not support traffic load
balancing in multi-homing networking scenarios.
● High network resource usage: Interworking between sites requires all PEs
serving these sites on the ISP backbone network to be fully meshed, with PWs
established between any two PEs. If a large number of PEs exist, PW
establishment will consume a significant amount of network resources. In
addition, a large number of ARP messages must be transmitted for MAC
address learning. These ARP messages not only consume network bandwidth
but may also consume CPU resources on remote sites that do not need to
learn the MAC addresses carried in them.
EVPN solves the preceding problems with the following characteristics:
● EVPN uses extended BGP to implement MAC address learning and
advertisement on the control plane instead of on the data plane. This function
allows a device to manage MAC addresses in the same way as it manages
routes, implementing load balancing between EVPN routes with the same
destination MAC address but different next hops.
● EVPN does not require PEs on the ISP backbone network to be fully meshed.
PEs on an EVPN use BGP to communicate, and BGP provides the route
reflection function. PEs can establish BGP peer relationships only with RRs
deployed on the ISP backbone network, with RRs reflecting EVPN routes. This
implementation significantly reduces network complexity and minimizes the
number of network signaling messages.
● EVPN enables PEs to use ARP to learn the local MAC addresses and use
MAC/IP address advertisement routes to learn remote MAC addresses and IP
addresses corresponding to these MAC addresses, and store them locally.
When a PE then receives an ARP request, it searches the locally cached MAC
and IP address information based on the destination IP address in the request.
If the corresponding information is found, the PE returns an ARP reply. This
prevents the ARP request from being broadcast to other PEs, thereby reducing
network resource consumption.
Benefits
EVPN offers the following benefits:
● Improved link usage and transmission efficiency: EVPN supports load
balancing, fully utilizing network resources and reducing network congestion.
Configuration Precautions
Restrictions: The cloud VPN and DCI scenarios support co-existence of different
scenarios.
ARP/IRB routes: When vBDIF interfaces are configured on a cloud VPN in an
L2+L3 scenario, the following situations may occur, and a route-policy needs to be
configured to prevent routes from being re-originated.
1. Layer 2 services belong to different EVPN instances, and Layer 3 services belong
to the same EVPN instance but are on different network segments. In this case, a
route filter policy can be configured for the network segments.
2. Layer 2 services belong to different EVPN instances, and Layer 3 services belong
to different VPN instances. In this case, a route filter policy must be configured for
the routes originated from the DC side.
IP prefix routes: Configure a route-policy to prevent routes from being
re-originated. For details, see the description of situations 1 and 2 for ARP/IRB
routes.
Guidelines: Configure route-policies as described in the restrictions to prevent
routes from being re-originated.
Impact: The re-originated routes on the DCI network are advertised to the cloud
VPN, which affects service traffic planning.

Restrictions: EVPN does not support the following route re-origination scenarios:
● Common L3VPN over SRv6 and EVPN L3VPN over MPLS
● Common L3VPN over SRv6 and EVPN L3VPN over SRv6
● EVPN L3VPN over MPLS and EVPN L3VPN over SRv6
● EVPN L3VPN over MPLS and EVPN L3VPN over MPLS
● EVPN L3VPN over SRv6 and EVPN L3VPN over SRv6
● EVPN VPLS over MPLS and EVPN VPLS over SRv6
● EVPN over VXLAN and EVPN over SRv6
Guidelines: Properly plan route re-origination scenarios.
Impact: The listed route re-origination scenarios do not take effect.

Restrictions: PW-VE interfaces cannot be bound to EVPL instances in SRv6 mode.
Guidelines: Properly plan VPN instance types.
Impact: EVPN VPWS over SRv6 splicing with VPWS is not supported.

Restrictions: When a VLL is connected to an EVPN, the loopback scheme is used
in the downstream direction. In this case, CAR rate limiting is inaccurate on a
PW-VE sub-interface in the downstream direction.
Guidelines: Run the evpn access vll convergence separate disable command.
Impact: CAR rate limiting is inaccurate on a PW-VE sub-interface in the
downstream direction.

Restrictions: If an instance receives multiple IP prefix routes with the same prefix
but different label attributes, the routes to be preferentially selected are
determined based on the tunnel type in the instance. If the instance is enabled
with SRv6 but routes do not carry the Prefix-SID attribute, route import fails. If
the instance is not enabled with SRv6 but routes carry the Prefix-SID attribute,
route import fails.
Guidelines: Properly plan the tunnel types and RT imports for VPN instances.
Impact: The same VPN instance cannot be configured with different types (SRv6
and other types such as MPLS and VXLAN) of tunnel recursion and tunnel backup.
When multiple tunnels co-exist, traffic is recursed only to the tunnels enabled in
the instance.

Restrictions: SRv6 SIDs and MPLS cannot co-exist in a VPN instance.
Guidelines: Properly plan the tunnel type for VPN instances.
Impact: The same service instance cannot be configured with different types
(SRv6 and other types such as MPLS and VXLAN) of tunnel recursion and tunnel
backup.

Restrictions: The EVPN switch to which a user device is connected is replaced
with a VPLS network switch, and the user device is reconnected to the EVPN
switch before the MAC entry aging time expires. As a result, services are
interrupted until the VPLS network ages the MAC entry for the user device.
Guidelines: When the VPLS network is reconstructed into the EVPN, manually
shut down the network device interface connecting to the switch and then
re-enable the interface after the network reconstruction.
Impact: If the VPLS network is reconstructed into the EVPN before the MAC entry
for the user device is aged, user services are interrupted until the VPLS network
ages the MAC entry for the user device.

Restrictions: In an EVPN active-active scenario where a CE is dual-homed to PEs,
if a BGP connection between the dual-homing PEs goes down, both PEs enter the
DF state. As a result, the BUM traffic doubles, and the dual-homing active-active
mechanism fails.
Guidelines: Do not reset the BGP connection between the dual-homing PEs.
Impact: The BUM traffic doubles.

Restrictions: For the EVPN E-LAN service, the control word can only be statically
configured, and the same control word must be configured on devices. For the
EVPN VPWS service, a device checks whether the control word carried in a
received route matches the local control word. A control word inconsistency
causes a failure to create an EVPN VPWS service. Control word self-negotiation
(configured using the preferred control-word command) is not supported. In
EVPN VPWS dual-homing networking, a bypass path needs to be established.
Therefore, the control word configuration must be consistent on all involved PEs
(when a bypass path is established between a single-homing PE and each of the
dual-homing PEs and between the dual-homing PEs). A control word
inconsistency causes a failure to establish bypass paths.
Guidelines: Configure the same control word function on the PEs at both ends of
a path. In dual-homing scenarios, configure the same control word function on all
involved PEs, including the single-homing and dual-homing PEs.
Impact: A control word inconsistency causes a failure to create an EVPN VPWS
service or to establish bypass paths.

Restrictions: When routes over which VLL VPN QoS traffic travels recurse to BGP
LSPs, the BGP entropy label function is not supported. In the basic VPLS
forwarding, VPLS VPN QoS, or PBB VPLS scenario, or when BGP LSPs function as
public network tunnels for VPLS, the BGP entropy label function is not supported.
When the decoupling between the public network and private network is not
enabled for both VPLS and MPLS in the scenario of VPLS routes recursive to BGP
LSPs, the BGP entropy label function is not supported for known unicast traffic.
Guidelines: This problem cannot be prevented in the VLL VPN QoS scenario. In
basic VPLS forwarding scenarios, you must run both the mpls vpls convergence
separate enable and mpls multistep separate enable commands so that the
BGP entropy label function can be supported.
Impact: Basic VPLS forwarding with BGP LSPs used as public network tunnels is
supported, but the entropy label function is not supported, and P node-based
load balancing is not supported.
Pre-configuration Tasks
Before activating EVPN interface licenses on a board, ensure that the following tasks have been completed:
● The license file for the master main control board has been activated using
the license active file-name command.
● The interface-specific basic hardware licenses for the board have been
activated in batches using the active port-basic slot slot-id card card-id port
port-list command.
● The interface-specific basic hardware licenses for the board have been
activated in batches using the active port-basic slot slot-id card card-id
command.
Context
In VS mode, this chapter applies only to the admin VS.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run license
The license view is displayed.
Step 3 Run active port-evpn slot slot-id card card-id port port-list
EVPN interface licenses are activated on the board.
This command applies only to CM boards on which the EVPN function is license-
specific.
Step 4 Run commit
The configuration is committed.
----End
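The procedure above can be consolidated into the following command sequence; the slot, card, and port values are placeholders, and the port-list format depends on the hardware.

```
system-view
 license
  active port-evpn slot 1 card 0 port 0   # placeholder slot/card/port values
  commit
```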
Usage Scenario
EVPN is used for Layer 2 internetworking.
On the network shown in Figure 3-115, to allow Layer 2 networks at different
sites to communicate, configure EVPN. Specifically:
● Configure an EVPN instance on each PE and bind the EVPN instance on each
PE to the interface that connects the PE to a site.
● Configure EVPN source IP addresses to identify PEs in the EVPN networking.
● Configure ESIs for PE interfaces connecting to CEs. PE interfaces connecting to
the same CE have the same ESI.
● Configure BGP EVPN peer relationships between PEs on the backbone
network to allow MAC addresses to be advertised over routes.
● Configure RRs to decrease the number of BGP EVPN peer relationships
required.
Pre-configuration Tasks
Before configuring common EVPN, complete the following tasks:
● Configure an IGP on the backbone network to ensure IP connectivity.
● Configure MPLS LDP or TE tunnels on the backbone network.
● Configure Layer 2 connections between CEs and PEs.
Context
EVPN instances isolate EVPN routes from public network routes, and the routes of
EVPN instances from each other. EVPN instances are required in all EVPN
networking solutions.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run evpn vpn-instance vpn-instance-name
An EVPN instance is created, and its view is displayed.
Step 3 (Optional) Run description description-information
A description is configured for the EVPN instance.
Similar to a host name or an interface description, an EVPN instance description
helps you memorize the EVPN instance.
Step 4 Run route-distinguisher route-distinguisher
An RD is configured for the EVPN instance.
An EVPN instance takes effect only after the RD is configured. The RDs of different
EVPN instances on a PE must be different.
After being configured, an RD cannot be modified, but can be deleted. After you delete the
RD of an EVPN instance, the VPN targets of the EVPN instance will also be deleted.
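Steps 1 to 4 can be consolidated as follows; the instance name, description, and RD value are hypothetical.

```
system-view
 evpn vpn-instance evpn1         # hypothetical EVPN instance name
  description EVPN-for-site-A    # optional description
  route-distinguisher 100:1      # the instance takes effect only after the RD is set
  commit
```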
The RT used by an Ethernet segment route is generated based on the middle six bytes of
the ESI. For example, if the ESI is 0011.1001.1001.1001.1002, then the Ethernet segment
route uses 11.1001.1001.10 as its RT.
An import routing policy must also be configured for precise EVPN route control.
An import routing policy filters routes that are received from other PEs.
After a device learns a large number of MAC addresses, system performance may
deteriorate when the device is busy processing services. This is because MAC
addresses consume system resources. To improve system security and reliability,
run the mac limit command to configure the maximum number of MAC
addresses allowed by an EVPN instance. If the number of MAC addresses learned
by an EVPN instance exceeds the maximum number, the system displays an alarm
message, instructing you to check the validity of MAC addresses in the EVPN
instance.
After you configure the maximum number of MAC addresses allowed by an EVPN
instance, you can run the mac threshold-alarm upper-limit upper-limit-value
lower-limit lower-limit-value command to configure the upper and lower
thresholds for triggering MAC address alarms. This command enables you to learn
MAC address usage based on MAC address alarm reporting and clearing.
When users who use the same service are bound to the same EVPN instance,
configuring forwarding isolation in the EVPN instance prevents the users from
accessing each other.
In load balancing mode, packet disorder may occur during the in-depth parsing of
MPLS packets. To resolve this problem, run the control-word enable command to
enable the EVPN control word function so that MPLS packets can be reassembled
using the control word.
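The MAC limit, alarm thresholds, and control word function described above can be sketched in the EVPN instance view. The instance name and numeric values are assumptions, and exact parameter syntax may differ by software version.

```
system-view
 evpn vpn-instance evpn1                              # hypothetical instance name
  mac limit 1000                                      # assumed maximum number of MAC addresses
  mac threshold-alarm upper-limit 80 lower-limit 70   # alarm at 80% usage, clear at 70%
  control-word enable                                 # reassemble MPLS packets using the control word
  commit
```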
----End
Context
The EVPN source address, which can be used to identify a PE on an EVPN, is part
of EVPN route information. Configuring EVPN source addresses is a mandatory
task for EVPN configuration.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run evpn source-address ip-address
An EVPN source address is configured.
Step 3 Run commit
The configuration is committed.
----End
Context
After an EVPN instance is configured on a PE, an interface that belongs to the
EVPN must be bound to the EVPN instance. Otherwise, the interface functions as
a public network interface and cannot forward EVPN traffic.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The interface view is displayed. In a CE dual-homing scenario, the interface must
be an Eth-Trunk interface.
Step 3 Run evpn binding vpn-instance vpn-instance-name
The interface is bound to an EVPN instance.
Step 4 Run commit
The configuration is committed.
----End
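The source address and interface binding tasks can be sketched together; the address, interface number, and instance name are assumptions.

```
system-view
 evpn source-address 10.1.1.1       # assumed local PE address
 interface Eth-Trunk 10             # Eth-Trunk required in CE dual-homing scenarios
  evpn binding vpn-instance evpn1   # bind the interface to the hypothetical instance
  commit
```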
Context
PEs connecting to the same CE must have the same ESI configured. PEs exchange
routes that carry ESIs, so that a PE can discover other PEs connecting to the same
CE as itself. This helps implement load balancing.
Before configuring an ESI on an interface, ensure that:
● The interface has been bound to an EVPN instance using the evpn binding
vpn-instance command.
● An ESI-configured interface must be Up. If the interface is Down, Ethernet
segment routes cannot be generated. When a CE is dual-homed to PEs, Eth-
Trunk interfaces have to be configured on the CE and PEs so that they can
access each other. To ensure fast convergence, configuring an E-Trunk
between the two PEs is recommended.
An ESI can be either statically configured or dynamically generated on an
interface.
Static configuration is recommended. Compared with dynamic ESI generation,
static configuration allows EVPN to implement faster traffic switching during a DF
election in a dual-homing scenario with active-active PEs.
Functions such as rapid convergence, split horizon, and DF election, which are required in
EVPN dual-homing scenarios, do not take effect in single-homing scenarios. In such
scenarios, configuring an ESI on a single-homing PE is optional.
A dynamically generated ESI is in the format of xxxx.xxxx.xxxx.xxxx.xxxx, where x is a
hexadecimal number. Such an ESI starts with 01, followed by the LACP system MAC address
and the port key, with the remaining bytes padded with 0s.
Procedure
● (Optional) Configure E-Trunk.
a. Run system-view
The system view is displayed.
b. Run e-trunk e-trunk-id
E-Trunk is configured, and the E-Trunk view is displayed.
c. Run priority priority
A priority is configured for the E-Trunk.
d. Run peer-address peer-ip-address source-address source-ip-address
IP addresses are configured for the local and peer ends of the E-Trunk.
e. Run quit
The system view is displayed.
f. Run interface eth-trunk trunk-id
The Eth-Trunk interface view is displayed.
g. Run e-trunk e-trunk-id
The Eth-Trunk interface is added to the E-Trunk.
h. (Optional) Run lacp e-trunk system-id mac-address
The LACP system ID is configured. The LACP system IDs in one E-Trunk
mechanism must be the same.
i. (Optional) Run lacp e-trunk priority priority
The LACP system priority is configured. The LACP system priorities in one
E-Trunk mechanism must be the same.
j. Run commit
The configuration is committed.
● Configure a static ESI.
a. Run system-view
The system view is displayed.
b. Run interface interface-type interface-number
The interface view is displayed.
c. Run esi esi-value
An ESI is configured.
d. Run commit
The configuration is committed.
----End
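The E-Trunk and ESI configuration can be sketched as follows. The IDs, priority, addresses, and ESI value are assumptions; the same ESI must be configured on both PEs connecting to the same CE.

```
system-view
 e-trunk 1
  priority 10                                     # assumed E-Trunk priority
  peer-address 10.1.1.2 source-address 10.1.1.1   # assumed peer/local addresses
 interface Eth-Trunk 10
  e-trunk 1                                       # add the Eth-Trunk to the E-Trunk
  esi 0011.1001.1001.1001.1002                    # assumed static ESI value
  commit
```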
Context
In EVPN networking, PEs need to have BGP EVPN peer relationships established
before they can exchange EVPN route information and implement communication
between EVPN instances.
Perform the following steps on each PE.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp as-number
The BGP view is displayed.
Step 3 Run peer ipv4-address as-number as-number
A BGP EVPN peer IP address is specified.
Step 4 Run peer ipv4-address connect-interface loopback interface-number
The interface on which a TCP connection to the specified peer is to be established
is specified.
A PE must use a loopback interface address with a 32-bit mask to set up an MP-IBGP peer
relationship with the peer PE, so that VPN routes can be relayed to tunnels. The routes to
the local loopback interface are advertised to the peer PE using an IGP on the MPLS
backbone network.
timer df-delay command to set a greater DF election delay. This ensures that the
network remains stable.
In an EVPN dual-homing scenario where interface-based DF election is enabled,
you need to run this command to set the delay interval for DF election to 0s to
prevent the long-time existence of dual backup devices during a switchback from
causing a traffic interruption.
Step 9 (Optional) Run peer { group-name | ipv4-address } mac-limit number
[ percentage ] [ alert-only | idle-forever | idle-timeout times ]
The maximum number of MAC advertisement routes that can be received from
each peer is configured.
An EVPN instance may import many invalid MAC advertisement routes from
peers, and these routes may occupy a large proportion of the total MAC
advertisement routes. If the number of received MAC advertisement routes
exceeds the specified maximum, the system displays an alarm, instructing users to
check the validity of the MAC advertisement routes received in the EVPN instance.
Step 10 (Optional) Perform the following operations to enable the function to advertise
the routes carrying the large-community attribute to BGP EVPN peers:
The large-community attribute includes a 2-byte or 4-byte AS number and two 4-
byte LocalData fields, allowing the administrator to flexibly use route-policies.
Before enabling the function to advertise the routes carrying the large-community
attribute to BGP EVPN peers, configure the route-policy related to the large-
community attribute and use the route-policy to set the large-community
attribute.
1. Run peer { ipv4-address | group-name } route-policy route-policy-name
export
The outbound route-policy of the BGP EVPN peer is configured.
2. Run peer { ipv4-address | group-name } advertise-large-community
The device is enabled to advertise the routes carrying the large-community
attribute to BGP EVPN peers or peer groups.
If the routes carrying the large-community attribute do not need to be
advertised to a particular BGP EVPN peer in the peer group, run the peer ipv4-
address advertise-large-community disable command.
Step 11 Run commit
The configuration is committed.
----End
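As a sketch of the peer configuration above on one PE: the AS number, peer address, and loopback interface number are assumptions for illustration, and the l2vpn-family evpn and peer enable steps follow the usual syntax shown elsewhere in this guide.

```
system-view
bgp 100
peer 10.3.3.3 as-number 100
peer 10.3.3.3 connect-interface loopback 1
l2vpn-family evpn
peer 10.3.3.3 enable
peer 10.3.3.3 advertise-large-community
commit
```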
Context
By default, EVPN PEs work in All-Active mode. If a CE is multi-homed to several
EVPN PEs, these PEs will load-balance traffic. If you do not want an EVPN PE to
work with other EVPN PEs in load-balancing mode, change its global redundancy
mode to Single-Active. In Single-Active mode, the master PE used to transmit
traffic is determined based on DF election or the active/standby status of access-
side Eth-Trunk interfaces.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run evpn redundancy-mode single-active
The Single-Active redundancy mode is configured.
Step 3 Run commit
The configuration is committed.
----End
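For example, to make a PE work in Single-Active mode globally, the configuration reduces to a single command:

```
system-view
evpn redundancy-mode single-active
commit
```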
Context
In an AS where a router serves as an RR, other routers can serve as RR clients. The
clients establish BGP EVPN peer relationships with the RR. The RR and its clients
form a cluster. The RR reflects routes among the clients, and therefore the clients
do not need to establish IBGP connections.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp as-number
The BGP view is displayed.
Step 3 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 4 Run peer { ipv4-address | group-name } reflect-client
An RR and its clients are configured.
The device where the peer reflect-client command is run serves as the RR and
the specified peers or peer groups serve as clients.
Step 5 (Optional) Run undo reflect between-clients
Route reflection between clients through the RR is disabled.
If the clients of an RR have established full-mesh connections with each other, you
can run the undo reflect between-clients command to disable route reflection
between clients through the RR to reduce the link cost. The undo reflect
between-clients command can only be run on an RR.
If a cluster has multiple RRs, you can use this command to set the same cluster ID
for these RRs to prevent routing loops.
----End
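A minimal sketch of the RR-side configuration, assuming AS 100 and two clients at 10.1.1.1 and 10.2.2.2 (illustrative addresses); run undo reflect between-clients only if the clients are fully meshed with each other.

```
system-view
bgp 100
l2vpn-family evpn
peer 10.1.1.1 reflect-client
peer 10.2.2.2 reflect-client
undo reflect between-clients
commit
```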
Context
On the network shown in Figure 3-116, CE1 and CE2 are each dual-homed to PE1
and PE2. The PE interfaces that connect to CE1 have ESI1 (shown in green), and
the PE interfaces that connect to CE2 have ESI2 (shown in blue). If you want
unicast traffic from CE3 -> CE1 and from CE3 -> CE2 to be transmitted in load
balancing and non-load balancing mode, respectively, you can set a redundancy
mode per ESI instance. Specifically, you can set the redundancy mode of the ESI1
instance to all-active and that of the ESI2 instance to single-active.
If users want the BUM traffic sent from CE3 to CE2 to be forwarded through
PE1, configure the DF precedence by ESI on PE1 and PE2 and specify PE1 as the
master DF.
Procedure
● Set a redundancy mode based on a statically configured ESI.
a. Run system-view
The system view is displayed.
b. Run evpn
The global EVPN configuration view is displayed.
c. Run esi esi
A static ESI instance with a specific name is created. The value of the esi
variable must be the same as the statically configured ESI.
d. Run evpn redundancy-mode { single-active | all-active }
Context
In a CE dual-homing scenario, to speed up primary/backup DF switching if an
access link fails, you can create a BFD session between the two PEs, specify an
access-side Eth-Trunk or PW-VE interface as the interface to be monitored by the
BFD session, and then associate the interface with the BFD session. After the
configuration is complete, if the access link connected to the PE on which the
master DF resides becomes faulty, BFD can rapidly detect the fault and notify the
other PE of it through the BFD session. This allows the backup DF to
quickly become the master DF.
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Context
Currently, EVPN, VPLS, L2MC, and L3MC services use the internal GRE reserved
interfaces to implement public-network-and-private-network decoupling in the
BUM forwarding process. When a loopback board is removed, the BUM traffic is
interrupted for a short time. In addition, when any board is inserted or removed,
transient packet loss or extra packet generation may occur on the leaf nodes of
the other boards.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run evpn reserve-interface enhancement
Board selection for internal loopback on a main control board is enabled when
packets enter a public network from an EVPN or BD EVPN for BUM forwarding.
When packets enter a public network from an EVPN or BD EVPN for BUM
forwarding, board selection for internal loopback is performed based on interface
boards by default. To enable board selection for internal loopback on a main
control board, run the evpn reserve-interface enhancement command. This
configuration allows primary and backup leaf nodes to be delivered to different
boards for protection. After a board is removed, the PST Down event on the
reserved interface triggers a primary/backup leaf node switchover, improving the
link switching performance.
Step 3 Run commit
The configuration is committed.
----End
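The configuration itself is a single global command, for example:

```
system-view
evpn reserve-interface enhancement
commit
```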
Prerequisites
EVPN has been configured.
Procedure
● Run the display default-parameter evpn command to check default EVPN
configurations during EVPN initialization.
● Run the display evpn vpn-instance [ name vpn-instance-name ] command
to check EVPN instance information.
● Run the display evpn vpn-instance name vpn-instance-name df result [ esi
esi ] command to check the DF election result of an EVPN instance.
● Run the display evpn vpn-instance name vpn-instance-name df-timer state
command to check the DF timer status of an EVPN instance.
● Run the display bgp evpn { all | vpn-instance vpn-instance-name } esi [ esi ]
command to check information about the ESIs of a specified or all EVPN
instances.
● Run the display bgp evpn { all | route-distinguisher route-distinguisher |
vpn-instance vpn-instance-name } routing-table [ { ad-route | es-route |
inclusive-route | mac-route | prefix-route } prefix ] command to check
information about EVPN routes.
● Run the display bgp evpn all routing-table statistics command to check
statistics about EVPN routes.
● Run the display evpn mac routing-table command to check MAC route
information about EVPN instances.
● Run the display evpn mac routing-table limit command to check MAC
address limits of EVPN instances.
● Run the display evpn mac routing-table statistics command to check MAC
route statistics of EVPN instances.
● Run the display arp broadcast-suppress user bridge-domain bd-id
command to check the ARP broadcast suppression table of a specified BD.
● Run the display arp packet statistics bridge-domain bd-id command to
check statistics about the ARP packets in a specified BD.
----End
Usage Scenario
This section describes how to configure BD-EVPN functions. An EVPN in non-BD
mode can only carry Layer 2 services, whereas a BD-EVPN can carry both Layer 2
and Layer 3 services.
Pre-configuration Tasks
Before configuring a BD-EVPN, complete the following tasks:
Context
EVPN instances are used to isolate EVPN routes from public routes and isolate the
routes of EVPN instances from each other. EVPN instances are required in all EVPN
networking solutions.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run evpn vpn-instance vpn-instance-name bd-mode
An EVPN instance in BD mode is created, and the EVPN instance view is displayed.
Step 3 (Optional) Run description description-information
A description is configured for the EVPN instance.
Similar to the description of a host name or an interface, the EVPN instance
description helps users memorize the EVPN instance.
Step 4 Run route-distinguisher route-distinguisher
An RD is configured for the EVPN instance.
An EVPN instance takes effect only after an RD is configured for it. The RDs of
different EVPN instances on the same PE must be different.
An RD cannot be modified after being configured but can be deleted. If the RD of an EVPN
instance is deleted, VPN targets configured in the EVPN instance are also deleted.
An RT of an Ethernet segment route is generated using the middle six bytes of an ESI. For
example, if the ESI is 0011.1001.1001.1001.1002, the Ethernet segment route uses
11.1001.1001.10 as its RT.
The maximum number of MAC addresses allowed is set for the EVPN instance.
After the maximum number of MAC addresses allowed by an EVPN instance is set,
you can run the mac threshold-alarm upper-limit upper-limit-value lower-limit
lower-limit-value command to set the upper and lower thresholds for triggering
MAC address alarms. The alarm generation and clearance help you detect
threshold-crossing events of MAC addresses.
EVPN routes that can be imported into the VPN instance IPv4 address family are
associated with a tunnel policy.
By default, after the bypass-vxlan enable command is globally run, the device
can be deployed only with EVPN VXLAN-related functions. This step needs to be
performed if users want to configure EVPN MPLS-related functions for some BD-
EVPN instances.
In load balancing mode, packet disorder may occur during the in-depth parsing of
MPLS packets. To resolve this problem, run the control-word enable command to
enable the EVPN control word function so that MPLS packets can be reassembled
using the control word.
----End
Context
The EVPN source address, which can be used to identify a PE on an EVPN, is part
of EVPN route information. Configuring EVPN source addresses is a mandatory
task for EVPN configuration.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run evpn source-address ip-address
An EVPN source address is configured.
Step 3 Run commit
The configuration is committed.
----End
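For example, if the loopback address 10.1.1.9 identifies this PE (an assumed value):

```
system-view
evpn source-address 10.1.1.9
commit
```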
Context
PEs connecting to the same CE must have the same ESI configured. In this way,
the PEs exchange routes that carry ESIs, so that a PE can discover other PEs
connecting to the same CE as itself. This helps implement load balancing or FRR.
ESI-configured interfaces must be Up. If the interface is Down, Ethernet segment
routes cannot be generated. When a CE is dual-homed to PEs, Eth-Trunk interfaces
have to be configured on the CE and PEs so that they can access each other. In
this case, however, one of the Eth-Trunk interfaces that connect the CE to the PEs
is Down. To ensure that both Eth-Trunk interfaces are Up, configure an E-Trunk
between the two PEs.
An ESI can be obtained for an interface in the following ways:
● Statically configured
● Dynamically generated
Static configuration is recommended. Compared with dynamic ESI generation,
static configuration allows EVPN to implement faster traffic switching during a DF
election in a dual-homing scenario with active-active PEs.
The features required in an EVPN dual-homing scenario, such as fast convergence, split horizon,
and DF election, all become invalid in a single-homing scenario. Therefore, configuring an ESI on
a single-homed PE is optional.
Procedure
● (Optional) Configure an E-Trunk.
a. Run system-view
The system view is displayed.
b. Run e-trunk e-trunk-id
a. Run system-view
An ESI is configured.
d. Run commit
Perform this configuration if VXLAN is used to access an EVPN and the EVPN
transmits ARP or MAC routes.
a. Run system-view
An ESI is configured.
d. Run commit
----End
Context
An EVPN instance can be bound to a BD using a VXLAN Network Identifier (VNI)
or using MPLS.
Procedure
Step 1 Run system-view
A VNI is created and associated with the BD, and forwarding in split horizon mode
is enabled.
An EVPN instance is bound to the BD. By specifying different bd-tag values, you
can bind multiple BDs with different VLANs to the same EVPN instance and isolate
services in the BDs.
Before running this command, ensure that the Layer 2 interface on which the Layer 2 sub-
interface is to be created does not have the port link-type dot1q-tunnel command
configuration. If this configuration exists, run the undo port link-type command to delete
the configuration.
The traffic behavior is set to pop so that the Ethernet sub-interface removes VLAN
tags from received packets.
If the encapsulation type of packets has been set to QinQ, specify double in this
step to remove double VLAN tags from the received packets.
----End
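As a sketch of the binding described above, the following uses an assumed BD ID, EVPN instance name, and dot1q Layer 2 sub-interface. The exact interface naming, the availability of the bd-tag keyword, and the rewrite pop syntax depend on your product version; verify them against the command reference.

```
system-view
bridge-domain 10
evpn binding vpn-instance evrf1 bd-tag 100
quit
interface gigabitethernet 0/1/0.1 mode l2
encapsulation dot1q vid 100
rewrite pop single
bridge-domain 10
commit
```

For QinQ-encapsulated packets, specify double instead of single in the rewrite pop step to remove both VLAN tags.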
Context
To enable an EVPN to transmit Layer 3 services, configure an L3VPN instance and
bind it to a VBDIF interface. After this configuration, the L3VPN instance can
manage host routes received from the VBDIF interface.
Procedure
Step 1 Run system-view
9. Run quit
Exit from the VPN instance IPv4 address family view.
10. Run quit
Exit from the VPN instance view.
----End
Context
In EVPN networking, PEs need to have BGP EVPN peer relationships established
before they can exchange EVPN route information and implement communication
between EVPN instances.
Perform the following steps on each PE.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp as-number
The BGP view is displayed.
Step 3 Run peer ipv4-address as-number as-number
A BGP EVPN peer IP address is specified.
Step 4 Run peer ipv4-address connect-interface loopback interface-number
A PE must use a loopback interface address with a 32-bit mask to set up an MP-IBGP peer
relationship with the peer PE, so that VPN routes can be relayed to tunnels. The routes to
the local loopback interface are advertised to the peer PE using an IGP on the MPLS
backbone network.
Step 12 (Optional) Perform the following operations to enable the function to advertise
the routes carrying the large-community attribute to BGP EVPN peers:
The maximum duration from the time the local device finds that the peer device is
restarted to the time a BGP EVPN session is re-established is set.
● To set the maximum wait time for re-establishing all BGP peer relationships,
run the graceful-restart timer restart command in the BGP view. The
maximum wait time can be set to 3600s at most.
● To set the maximum wait time for re-establishing a specified BGP-EVPN peer
relationship, run the peer graceful-restart static-timer command in the BGP
EVPN view. The maximum wait time can be set to a value greater than 3600s.
If both the graceful-restart timer restart time and peer graceful-restart static-
timer commands are run, the latter configuration takes effect.
This step can be performed only after GR has been enabled using the graceful-restart
command in the BGP view.
----End
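For instance, to allow up to 400 seconds for re-establishing all BGP peer relationships but 4000 seconds for one BGP EVPN peer (the timer values and peer address are illustrative):

```
system-view
bgp 100
graceful-restart
graceful-restart timer restart 400
l2vpn-family evpn
peer 10.3.3.3 graceful-restart static-timer 4000
commit
```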
Context
By default, EVPN PEs work in All-Active mode. If a CE is multi-homed to several
EVPN PEs, these PEs will load-balance traffic. If you do not want an EVPN PE to
work with other EVPN PEs in load-balancing mode, change its global redundancy
mode to Single-Active. In Single-Active mode, the master PE used to transmit
traffic is determined based on DF election or the active/standby status of access-
side Eth-Trunk interfaces.
Procedure
Step 1 Run system-view
----End
Context
On the network shown in Figure 3-117, CE1 and CE2 are each dual-homed to PE1
and PE2. The PE interfaces that connect to CE1 have ESI1 (shown in green), and
the PE interfaces that connect to CE2 have ESI2 (shown in blue). If you want
unicast traffic from CE3 -> CE1 and from CE3 -> CE2 to be transmitted in load
balancing and non-load balancing mode, respectively, you can set a redundancy
mode per ESI instance. Specifically, you can set the redundancy mode of the ESI1
instance to all-active and that of the ESI2 instance to single-active.
If users want the BUM traffic sent from CE3 to CE2 to be forwarded through
PE1, configure the DF precedence by ESI on PE1 and PE2 and specify PE1 as the
master DF.
Procedure
● Set a redundancy mode based on a statically configured ESI.
a. Run system-view
The system view is displayed.
b. Run evpn
The global EVPN configuration view is displayed.
c. Run esi esi
A static ESI instance with a specific name is created. The value of the esi
variable must be the same as the statically configured ESI.
d. Run evpn redundancy-mode { single-active | all-active }
A redundancy mode is set for the static ESI instance.
e. (Optional) Run df-election type preference-based
The preference-based DF election function is enabled.
f. (Optional) Run preference preference
A preference is configured. A larger value indicates a higher priority.
g. (Optional) Run non-revertive
The non-revertive function is enabled. As shown in Figure 3-117, CE2 is
dual-homed to PE1 and PE2. In this example, PE1 is elected as the master
DF for traffic forwarding because of a higher precedence. If PE4 that has
a higher DF precedence than PE1 is deployed on the network, PE4
becomes the master DF and traffic is switched to PE4 for forwarding. If
users do not want traffic switching, run the non-revertive command on
PE1 to enable the non-revertive function.
h. Run commit
The configuration is committed.
● Set a redundancy mode based on a dynamically generated ESI.
a. Run system-view
----End
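Combining the steps above, the ESI2 instance from Figure 3-117 might be configured on PE1 as follows. The ESI value and preference are illustrative, and non-revertive is included only for the case where traffic must not switch back to a later, higher-precedence DF.

```
system-view
evpn
esi 0011.1001.1001.1001.1002
evpn redundancy-mode single-active
df-election type preference-based
preference 200
non-revertive
commit
```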
Context
In an AS where a router serves as an RR, other routers can serve as RR clients. The
clients establish BGP EVPN peer relationships with the RR. The RR and its clients
form a cluster. The RR reflects routes among the clients, and therefore the clients
do not need to establish IBGP connections.
Procedure
Step 1 Run system-view
The device where the peer reflect-client command is run serves as the RR and
the specified peers or peer groups serve as clients.
If the clients of an RR have established full-mesh connections with each other, you
can run the undo reflect between-clients command to disable route reflection
between clients through the RR to reduce the link cost. The undo reflect
between-clients command can only be run on an RR.
If a cluster has multiple RRs, you can use this command to set the same cluster ID
for these RRs to prevent routing loops.
----End
Context
On an EVPN MPLS network, after a device receives an ARP request packet, it
broadcasts the packet within a BD. If the device receives a large number of ARP
request packets within a specified period and broadcasts these packets, excessive
ARP request packets are forwarded on the EVPN MPLS network, consuming
excessive network resources and causing network congestion. As a result, the
network performance deteriorates, and user services are affected.
To address this problem, configure proxy ARP on the device. Proxy ARP allows a
device to listen to a received ARP packet and generate an ARP snooping entry to
record the source user information, including the packet's source IP address,
source MAC address, and inbound interface. If proxy ARP is enabled on a device
and an ARP request packet is received, the device preferentially responds to the
request if an ARP snooping entry matches the user information in the ARP
request.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 (Optional) Run arp host ip-conflict-check period period-value retry-times retry-
times-value
Host IP address conflict check is configured.
Step 3 Run bgp as-number
The BGP view is displayed.
Step 4 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 5 Run peer { ipv4-address | group-name } advertise arp
ARP route advertisement is configured.
Step 6 Run quit
The system view is displayed.
Step 7 Run bridge-domain bd-id
The BD view is displayed.
Step 8 Run arp l2-proxy enable
Proxy ARP is enabled.
Step 9 (Optional) Run arp l2-proxy timeout expire-time
An aging time is configured for ARP snooping entries.
Each ARP snooping entry has a life cycle, which is called the aging time. If an ARP
snooping entry is not updated before its aging time expires, the entry will be
deleted. If the corresponding ARP snooping entries are not released after a user
goes offline, CPU resources are wasted and ARP snooping entries for new users
cannot be properly generated. To resolve this problem, perform this step to set an
aging time so that ARP snooping entries are updated regularly.
----End
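A condensed sketch of the proxy ARP steps above, with an assumed AS number, peer address, BD ID, and aging time:

```
system-view
bgp 100
l2vpn-family evpn
peer 10.3.3.3 advertise arp
quit
quit
bridge-domain 10
arp l2-proxy enable
arp l2-proxy timeout 1200
commit
```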
Prerequisites
EVPN has been configured.
Procedure
● Run the display default-parameter evpn command to check default EVPN
configurations during EVPN initialization.
● Run the display evpn vpn-instance [ name vpn-instance-name ] command
to check EVPN instance information.
● Run the display evpn vpn-instance name vpn-instance-name df result [ esi
esi ] command to check the DF election result of an EVPN instance.
● Run the display evpn vpn-instance name vpn-instance-name df-timer state
command to check the DF timer status of an EVPN instance.
● Run the display bgp evpn { all | vpn-instance vpn-instance-name } esi [ esi ]
command to check information about the ESIs of a specified or all EVPN
instances.
● Run the display bgp evpn { all | route-distinguisher route-distinguisher |
vpn-instance vpn-instance-name } routing-table [ { ad-route | es-route |
inclusive-route | mac-route | prefix-route } prefix ] command to check
information about EVPN routes.
● Run the display bgp evpn all routing-table statistics command to check
statistics about EVPN routes.
● Run the display evpn mac routing-table command to check MAC route
information about EVPN instances.
● Run the display evpn mac routing-table limit command to check MAC
address limits of EVPN instances.
● Run the display evpn mac routing-table statistics command to check MAC
route statistics of EVPN instances.
● Run the display arp broadcast-suppress user bridge-domain bd-id
command to check the ARP broadcast suppression table of a specified BD.
● Run the display arp packet statistics bridge-domain bd-id command to
check statistics about the ARP packets in a specified BD.
----End
Usage Scenario
EVPN VPWS provides a P2P L2VPN service solution based on the EVPN service
architecture. In this solution, a P2P MPLS tunnel is established between
PEs and traverses the backbone network. By binding the AC interface on the user
side to the P2P MPLS tunnel on the network side, traffic can be transmitted
between the AC interface and the P2P MPLS tunnel. As a result, traffic that enters
the AC interface is forwarded directly to the peer PE through the P2P MPLS tunnel.
This solution provides a simple Layer 2 packet forwarding mode for the connection
between AC interfaces at both ends, avoiding the need to search MAC address
entries. This service solution is named Ethernet Line (E-Line).
The basic EVPN VPWS architecture has the following components:
● AC: access circuit. An AC is an independent link or circuit that connects a CE
to a PE. An AC interface can be a physical interface or a logical interface. AC
attributes include the encapsulation type, maximum transmission unit (MTU),
and interface parameters of a specified link type.
● VPWS instance: virtual private wire service instance. Each VPWS instance
corresponds to an AC interface, indicating an Ethernet private line (EPL) or
Ethernet virtual private line (EVPL) access.
● EVI: EVPN instance. Deployed on an edge PE, an EVI contains services that
have the same access-side or network-side attributes. Routes are transmitted
based on the RD and RTs configured in each EVI.
● Tunnel: tunnel on the network side.
Pre-configuration Tasks
Before configuring EVPN VPWS over MPLS, enable route reachability on an IPv4
network.
Configuration Procedures
To configure EVPN functions, see 3.2.4 Configuring Common EVPN Functions.
Procedure
Step 1 Run system-view
An EVPN instance takes effect only after the RD is configured. The RDs of different
EVPN instances on a PE must be different.
After being configured, an RD cannot be modified, but can be deleted. After you delete the
RD of an EVPN instance, the VPN targets of the EVPN instance will also be deleted.
In load balancing mode, packet disorder may occur during the in-depth parsing of
MPLS packets. To resolve this problem, run the control-word enable command to
enable the EVPL control word function so that MPLS packets can be reassembled
using the control word.
----End
Procedure
Step 1 Run system-view
Before running this command, ensure that the Layer 2 main interface does not have the
port link-type dot1q-tunnel command configuration. If the configuration has existed, run
the undo port link-type command to delete it.
In a primary/secondary link networking scenario where the AC interface is
associated with CFM, this step can be performed to ensure EVPN VPWS service
continuity. If a primary/secondary link switchover is triggered after the AC status
of the interface changes to Down, the AC status of the interface is ignored, and
the EVPN VPWS service does not need to be re-established during the switchover.
----End
Procedure
Step 1 Run system-view
NOTICE
The undo mpls command deletes all MPLS configurations, including the
established LDP sessions and LSPs.
Disabling MPLS LDP on an interface interrupts all LDP sessions on the interface
and deletes all LSPs established over these sessions.
----End
Context
The following figure shows the EVPN VPWS multi-homing scenario. CE1 is dual-
homed to PE1 and PE2, and MPLS LDP tunnels are established between PE1 and
PE3 and between PE2 and PE3 so that CE1 and CE2 can communicate with each
other. When PEs work in single-active mode and no E-Trunk is configured, to
prevent CE1 from receiving duplicate traffic from both PE1 and PE2, DF election
must be enabled on PE1 and PE2 so that a primary DF is selected to forward BUM
traffic. This helps save network resources.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run evpn
The global EVPN configuration view is created and displayed.
Step 3 Run vpws-df-election type service-id
Service ID-based DF election is configured.
----End
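For example, to enable service ID-based DF election globally:

```
system-view
evpn
vpws-df-election type service-id
commit
```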
Usage Scenario
The following figure shows the EVPN VPWS over MPLS scenario where CE1 is
dual-homed to PE1 and PE2 in single-active mode. PE3 forwards traffic only to the
primary PE according to the primary/backup DF status of PE1 and PE2. If PE1 is
the primary DF, the path marked red between CE1 and CE2 is the primary path,
and the path marked blue is the backup path. To prevent a fault on the primary
path from causing traffic loss, FRR must be configured.
In normal conditions, downstream traffic on PE3 is sent to PE1. After local and
remote FRR functions are configured on PE1 and PE2, if PE1 detects a link fault
between itself and CE1, PE1 forwards traffic to PE2, and then PE2 forwards traffic
to CE1. After remote FRR is enabled on PE3, if PE3 detects a link fault between
itself and PE1, PE3 quickly switches traffic to PE2, and then PE2 forwards traffic to
CE1.
Procedure
● Configure local and remote FRR functions on PE1 and PE2.
a. Run system-view
The system view is displayed.
In addition to enabling local and remote FRR functions in an EVPN instance, you
can enable local and remote FRR functions globally using the local-remote
vpws-frr enable command in the global EVPN view.
d. Run commit
The configuration is committed.
● Configure remote FRR on PE3.
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name vpws
The view of an EVPN instance for the VPWS is displayed.
c. Run remote frr { enable | disable }
Remote FRR is enabled in the EVPN instance.
In addition to enabling remote FRR in the EVPN instance view, you can also
enable remote FRR globally using the remote vpws-frr enable command in the
global EVPN view. By default, if the remote frr enable command is not run in
the VPWS-EVPN instance view, the remote vpws-frr enable command
configuration in the global view takes effect. If both the remote frr enable
command and the remote vpws-frr enable command are run, the remote frr
enable command configuration takes effect.
d. Run commit
The configuration is committed.
----End
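On PE3, the remote FRR part of the configuration might look as follows; the VPWS-EVPN instance name is an assumption for illustration.

```
system-view
evpn vpn-instance evrf1 vpws
remote frr enable
commit
```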
Prerequisites
EVPN VPWS over MPLS has been configured.
Procedure
● Run the display bgp evpn evpl brief command to check brief information
about all EVPL instances.
● Run the display bgp evpn evpl instance-id instance-id command to check
information about a specified EVPL instance.
----End
Usage Scenario
On a traditional network, the BGP/MPLS IP VPN function is used to carry Layer 3
services. To additionally carry Layer 2 services, users have to deploy an L2VPN over
the existing network, which increases deployment and O&M costs. To address this
problem, users can deploy an EVPN to carry Layer 3 services. To additionally carry
Layer 2 services, users only need to add some EVPN configurations, so that the
network carries both Layer 2 and Layer 3 services.
EVPN can replace BGP/MPLS IP VPN in the following scenarios to carry Layer 3
services:
● Intra-AS mutual VPN communication
On the network shown in Figure 3-121, the VPNs at Site 1 and Site 2 need to
communicate with each other through a public MPLS network. To implement
this communication, perform the following configurations:
a. Configure an L3VPN instance on each PE to manage VPN routes.
b. Establish a BGP EVPN peer relationship between the PEs to transmit
EVPN routes carrying VPN routes.
c. Establish an IGP neighbor relationship or BGP peer relationship between
each PE and CE at the access side to mutually transmit VPN routes.
● DCI network
The EVPN function applies to traditional DCs that interconnect through a DCI
network. On the network shown in Figure 3-123, DC-GWs and DCI-PEs are
separately deployed. The DCI-PEs consider the connected DC-GWs as CEs,
receive VM IP routes from the DCs through a routing protocol, and save and
maintain the received routes. Deploying an EVPN over the DCI backbone
network allows VM IP routes to be transmitted between DCs, implementing
inter-DC VM communication. To implement this communication, perform the
following configurations:
a. Configure an L3VPN instance on each PE to manage VM IP routes.
b. Establish an IBGP EVPN peer relationship between the PEs to transmit
EVPN routes carrying VM IP routes.
c. Establish an IGP neighbor relationship or BGP peer relationship between
each PE and DC-GW to mutually transmit VM IP routes.
Pre-configuration Tasks
Before configuring an EVPN to carry Layer 3 services, ensure Layer 3 route
reachability on the IPv4 network.
Procedure
Step 1 Run system-view
----End
Procedure
● Configure BGP EVPN peers.
If loopback interfaces are used for an EBGP EVPN connection, the peer ebgp-
max-hop command must be run, with the hop-count value greater than or equal
to 2. If this configuration is absent, the EBGP EVPN connection fails to be
established.
e. Run ipv4-family vpn-instance vpn-instance-name or ipv6-family vpn-
instance vpn-instance-name
The BGP-VPN instance IPv4/IPv6 address family view is displayed.
f. Run import-route { direct | isis process-id | ospf process-id | rip process-
id | static | ospfv3 process-id | ripng process-id } [ med med | route-
policy route-policy-name ] *
The device is enabled to import non-BGP routing protocol routes into the
BGP-VPN instance IPv4/IPv6 address family. To advertise host IP routes, you
only need to enable the device to import direct routes. To advertise the routes
of the network segment where a host resides, configure a dynamic routing
protocol (such as OSPF) to advertise the network segment routes, and then
enable the device to import routes of that routing protocol.
g. Run advertise l2vpn evpn
The BGP device is enabled to advertise IP prefix routes to the BGP peer.
This configuration allows the BGP device to advertise both host IP routes
and routes of the network segment where the host resides.
h. Run quit
Exit from the BGP-VPN instance IPv4/IPv6 address family view.
i. Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
j. Run peer { ipv4-address | group-name } enable
The local BGP device is enabled to exchange EVPN routes with a peer or
peer group.
k. Run quit
Exit from the BGP view.
l. Run commit
The configuration is committed.
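Steps e through l above can be combined into the following configuration sketch. The instance name vpn1, the AS number 100, and the peer address 10.2.2.2 are example values only, not taken from this document:

```
system-view
bgp 100
 ipv4-family vpn-instance vpn1
  import-route direct
  advertise l2vpn evpn
  quit
 l2vpn-family evpn
  peer 10.2.2.2 enable
  quit
commit
```

In this sketch, only direct routes are imported, which advertises host IP routes; replace import-route direct with the routing protocol actually used at the access side if network segment routes must be advertised instead.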
Procedure
Step 1 Configure a dynamic routing protocol or static routes. For configuration details,
see Configuring Route Exchange Between PEs and CEs or Configuring Route
Exchange Between PEs and CEs (IPv6).
----End
3.2.7.4 (Optional) Re-Encapsulating IRB (v6) Routes into the Desired Routes
If you want to convert the IRB routes carrying the network segment address of a
tenant host that are received by a device into host IP prefix routes or ARP routes,
or convert the IRBv6 routes carrying the IPv6 network segment address of a
tenant host that are received by a device into host IPv6 prefix routes or ND routes,
you must enable the device to re-encapsulate IRB or IRBv6 routes into the desired
routes.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run evpn
The global EVPN configuration view is created and displayed.
Step 3 Run irb-reoriginated compatible
The device is enabled to re-encapsulate IRB routes into IP prefix routes and ARP
routes, or re-encapsulate IRBv6 routes into IPv6 prefix routes and ND routes.
Step 4 Run commit
The configuration is committed.
----End
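The preceding steps amount to the following minimal sketch:

```
system-view
evpn
 irb-reoriginated compatible
commit
```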
Prerequisites
EVPN functions have been configured.
Procedure
● Run the display bgp evpn { all | route-distinguisher route-distinguisher |
vpn-instance vpn-instance-name } routing-table [ { ad-route | es-route |
inclusive-route | mac-route | prefix-route } prefix ] command to check
information about BGP EVPN routes.
● Run the display ip routing-table vpn-instance vpn-instance-name or display
ipv6 routing-table vpn-instance vpn-instance-name command on the local
PE to check information about VPN routes received from the remote PE.
----End
Context
On the network shown in Figure 3-124, an L3VPN is already deployed. If a user
wants to deploy an EVPN L3VPN over the network, splicing must be configured so
that the EVPN L3VPN and the existing L3VPN can co-exist.
Pre-configuration Tasks
Before configuring splicing between L3VPN and EVPN L3VPN, complete the
following tasks:
Procedure
● Configure NPE1 and NPE2 to generate and advertise IP prefix routes.
Context
On the network shown in Figure 3-125, PE1 and PE2 are egress devices of the
data center network. PE1 and PE2 work in active-active mode with a bypass
VXLAN tunnel deployed between them. They use an anycast VTEP address to
establish a VXLAN tunnel with the TOR. In this manner, PE1, PE2, and the TOR can
communicate with each other. PE1 and PE2 communicate with the external
network through the VPLS network, on which PW redundancy is configured.
Specifically, the PE-AGG connects to PE1 and PE2 through primary and secondary
PWs, respectively.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run vsi vsi-name bd-mode
A VSI in BD mode is created.
Step 3 Run pwsignal ldp
LDP is configured as the PW signaling protocol, and the VSI-LDP view is displayed.
Step 4 Run vsi-id vsi-id
A VSI ID is configured.
Step 5 Run peer peer-address [ negotiation-vc-id vc-id ] [ encapsulation { ethernet |
vlan } ] [ tnl-policy policy-name ] ac-mode
A PW is set to AC mode.
Step 6 Run commit
----End
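Steps 1 through 6 can be combined as follows. The VSI name vsi1, the VSI ID 100, and the peer address 10.1.1.1 are example values only:

```
system-view
vsi vsi1 bd-mode
 pwsignal ldp
  vsi-id 100
  peer 10.1.1.1 ac-mode
commit
```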
Usage Scenario
Multicast services, such as IPTV, multimedia conferencing, and Massive Multiplayer
Online Role Playing Game (MMORPG) services, are becoming more and more
common. As a result, the number of multicast services carried over EVPNs is
growing. On the network shown in Figure 3-126, PE1 is the root node, and PE2
and PE3 are leaf nodes. The access side is the multicast source and the receiver. By
default, an EVPN sends multicast service traffic from PE1 to PE2 and PE3 by
means of ingress replication. Specifically, PE1 replicates a multicast packet into
two copies and sends them to the P. The P then sends one copy to PE2 and the
other copy to PE3. For each additional receiver, an additional copy of the multicast
packet is sent. This increases the volume of traffic on the link between PE1 and
the P, consuming bandwidth resources. To conserve bandwidth resources, you can
configure EVPN to use an mLDP P2MP tunnel to transmit multicast services. After
the configuration is complete, PE1 sends only one copy of multicast traffic to the
P. The P replicates the multicast traffic into copies and sends them to the leaf
nodes, reducing the volume of traffic between PE1 and P.
Pre-configuration Tasks
Before configuring EVPN E-LAN over mLDP P2MP tunnels, complete the following
tasks:
● Configure BD-based EVPN functions over the EVPN.
Procedure
● Perform the following steps on the root node:
a. Run system-view
The system view is displayed.
b. Run evpn vpn-instance vpn-instance-name bd-mode
The view of a BD-EVPN instance is displayed.
c. Run inclusive-provider-tunnel
The EVI I-PMSI view is created and displayed.
d. Run root
The current device is specified as the root node for the multicast EVPN,
and the EVI I-PMSI root view is displayed.
e. Run mldp p2mp
An mLDP P2MP tunnel is specified for the BD-EVPN instance to carry
multicast services, and the EVI I-PMSI root mLDP view is displayed.
f. Run root-ip ip-address
An IP address is specified for the root node of the mLDP P2MP tunnel.
g. (Optional) Run data-delay-time delay-time
A hold-off time for the mLDP P2MP tunnel to go Up is set. If you want
the mLDP P2MP tunnel to go Up after all leaf nodes are configured, run
this command to set a hold-off time for the mLDP P2MP tunnel to go Up.
h. (Optional) Run data-switch disable
Multicast traffic is disabled from being forwarded through a P2P tunnel if
the mLDP P2MP tunnel goes Down.
Before an mLDP P2MP tunnel carries multicast services, you can establish
a bypass tunnel to provide mLDP P2MP FRR protection for the primary
mLDP P2MP tunnel. The bypass tunnel is a P2P tunnel. If both the
primary P2MP tunnel and bypass P2P tunnel go Down, the backup mLDP
P2MP tunnel carries multicast services. After the bypass P2P tunnel for
the primary mLDP P2MP tunnel goes Up, the P2P tunnel carries multicast
services. Because the primary mLDP P2MP tunnel remains Down, a leaf
node also receives multicast traffic from the backup mLDP P2MP tunnel.
As a result, the leaf node receives and forwards duplicate copies of traffic.
To prevent this issue, run this command to disable multicast traffic from
being forwarded through a P2P tunnel when the mLDP P2MP tunnel goes
Down.
i. Run commit
The configuration is committed.
The current device is specified as a leaf node for the multicast EVPN.
e. (Optional) Run root-ip root-ip use-next-hop
----End
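For the root node, steps a through f (with the optional steps omitted) can be combined into the following sketch. The instance name evpn1 and the root address 10.1.1.1 are example values only:

```
system-view
evpn vpn-instance evpn1 bd-mode
 inclusive-provider-tunnel
  root
   mldp p2mp
    root-ip 10.1.1.1
commit
```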
Context
An EVPN route-policy is used to match specified EVPN route information or some
attributes in EVPN route information and to change these attributes as required.
In addition, an EVPN route-policy can be deployed to control the routing table
size, thereby conserving system resources.
A route-policy can consist of multiple nodes, and each node can comprise the
following clauses:
● if-match clause
Defines the matching rules that EVPN routes meet. The matching objects are
some attributes of the EVPN routes.
● apply clause
Procedure
● Create a route-policy.
a. Run system-view
The system view is displayed.
b. Run route-policy route-policy-name { permit | deny } node node
A route-policy node is created, and the route-policy view is displayed.
c. Run commit
The configuration is committed.
● Configure filter criteria (if-match clause).
Filter Criteria Configuration Procedure
----End
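A minimal route-policy sketch follows. The policy name evpn-rp, the node number 10, and the specific if-match and apply clauses are example values only; the clauses actually available are listed in the filter criteria configuration procedure above:

```
system-view
route-policy evpn-rp permit node 10
 if-match ip-prefix host-routes
 apply local-preference 200
commit
```

This sketch assumes that an IP prefix list named host-routes has already been configured using the ip ip-prefix command.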
Result
● Run the display bgp evpn all routing-table command to check the EVPN
route information after the EVPN route-policy is applied.
● Run the display ip routing-table vpn-instance vpn-instance-name or display
ipv6 routing-table vpn-instance vpn-instance-name command to check the
VPN IP route information after the EVPN route-policy is applied.
Usage Scenario
BGP EVPN soft reset performs a soft reset on BGP EVPN connections, which
triggers BGP EVPN peers to send EVPN routes to a local device without tearing
down the BGP EVPN connections and allows the local device to apply a new
filtering policy and refresh the BGP EVPN routing table.
Pre-configuration Tasks
Before configuring BGP EVPN soft reset, configure EVPN functions.
Procedure
● Run refresh bgp evpn { all | peer-address | group group-name } { export |
import }
----End
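For example, to perform an inbound soft reset on all BGP EVPN connections so that a new import policy takes effect without tearing down the connections:

```
refresh bgp evpn all import
```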
Usage Scenario
In CE dual-homing networking, the following conditions must be met to balance
BUM traffic:
1. An Eth-Trunk sub-interface is bound to an EVPN instance created on each PE.
2. VLAN-based DF election is configured.
For details on how to meet the first condition, see Eth-Trunk Interface
Configuration. This section describes how to configure VLAN-based DF election.
Pre-configuration Tasks
Before configuring VLAN-based DF election, bind an Eth-Trunk sub-interface to an
EVPN instance and configure EVPN Functions.
Perform the following steps on each PE with the EVPN instance created.
Procedure
Step 1 Run system-view
----End
Application Scenarios
Reliability Function Scenario
Configuration
Pre-configuration Tasks
Before configuring EVPN reliability functions, configure EVPN functions.
Procedure
● Configure a PE to monitor the BGP EVPN peer status.
a. Enter the system view.
system-view
b. Enter the global EVPN configuration view.
evpn
c. Enable the PE to allow the extended VLAN community attribute to be
carried in local routes before the routes are advertised to peers.
----End
Usage Scenario
On the EVPN E-LAN shown in Figure 3-129, the two PEs may be interconnected
both through network-side and access-side links. If this is the case, a BUM traffic
loop and MAC route flapping both occur, preventing devices from working
properly. By default, MAC duplication suppression is enabled on a device. Also by
default, the system checks the number of times a MAC entry flaps within a
detection period. If the number of MAC flaps exceeds the upper threshold, the
system considers MAC route flapping to be occurring on the network and
consequently suppresses the flapping MAC routes. The suppressed MAC routes
cannot be sent to a remote PE through a BGP EVPN peer relationship.
Pre-configuration Tasks
Before configuring MAC duplication suppression, 3.2.5 Configuring BD-EVPN
Functions or 3.2.4 Configuring Common EVPN Functions must have been
performed on the network.
Procedure
● Configure MAC duplication suppression for all EVPN instances.
a. Run system-view
The system view is displayed.
b. Run evpn
The global EVPN configuration view is displayed.
c. Run mac-duplication
The EVPN-MAC-duplication view is displayed.
d. Run detect loop-times loop-times detect-cycle detect-cycle-time
Loop detection parameters for MAC duplication suppression are
configured, including the detection period and the threshold for the
number of MAC entry flaps within a detection period. The default
detection period is 180s, and the default threshold is 5.
e. Run retry-cycle retry-times
A hold-off time to unsuppress MAC routes is configured.
f. Run commit
The configuration is committed.
● Configure MAC duplication suppression for a specified EVPN instance.
a. Run system-view
The system view is displayed.
b. Run either of the following commands:
The flapping MAC routes are set to black-hole MAC routes. If the source
or destination MAC address of the forwarded traffic is the same as the
MAC address of the black-hole MAC route, the traffic is discarded.
----End
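For all EVPN instances, the steps above can be combined as follows. The detect values shown are the stated defaults, and the retry-cycle value 30 is an example only:

```
system-view
evpn
 mac-duplication
  detect loop-times 5 detect-cycle 180
  retry-cycle 30
commit
```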
Follow-up Procedure
When a MAC route to a specific MAC address or MAC routes in a specific BD have
stopped flapping and you want to restore them before the configured hold-off
timer expires, run the reset evpn vpn-instance mac-duplication [ bridge-
domain bd-id ] [ mac-address mac-address ] command. This allows you to
manually clear the suppression state of the MAC routes.
Usage Scenario
As the number of services carried on an EVPN increases, the number of user MAC
addresses managed by the EVPN is also increasing. The user MAC addresses are
flooded on the network through EVPN routes. As a result, all interfaces in the
same broadcast domain can communicate with each other at Layer 2. However,
broadcast, unknown unicast, multicast (BUM), and unicast traffic cannot be
isolated for users who do not need to communicate with each other. To isolate
interfaces that do not need to communicate with each other in the same
broadcast domain, you can deploy the EVPN E-Tree function on the network.
EVPN E-Tree implements the E-Tree model defined by the Metro Ethernet Forum
(MEF) by setting the root or leaf attribute for AC interfaces. An AC interface with
the leaf attribute is a leaf AC interface, and an AC interface with the root attribute
is a root AC interface.
● A leaf AC interface and a root AC interface can send traffic to each other.
However, flows between leaf AC interfaces are isolated from each other.
● A root AC interface can communicate with other root AC interfaces and with
leaf AC interfaces.
Pre-configuration Tasks
Before configuring EVPN E-Tree, 3.2.5 Configuring BD-EVPN Functions or 3.2.4
Configuring Common EVPN Functions has been performed on the network.
Procedure
● Configure EVPN E-Tree for a BD-EVPN.
a. Run system-view
The view of the interface to which a basic EVPN instance will be bound is
displayed.
f. Run evpn binding vpn-instance vpn-instance-name
----End
Usage Scenario
The growing number of services over EVPNs has triggered a proliferation of new
users. As a result, BGP-EVPN peers on an EVPN send vast quantities of EVPN
routes to each other. Even if the remote peer does not have an RT-matching EVPN
instance, the local PE still sends it EVPN routes. To reduce network load, each PE
needs to receive only desired routes. If a separate export route policy is configured
for each user, the cost of O&M goes up. To address this issue, EVPN ORF can be
deployed.
After this function is configured, each PE on the EVPN sends the IRT and original
AS number of locally desired routes to the other PEs or BGP EVPN RRs that
function as BGP-EVPN peers. The information is sent through ORF routes. Upon
receipt, the peers construct export route policies based on these routes so that the
local PE only receives the expected routes, which reduces the receiving pressure on
the local PE.
Pre-configuration Tasks
Before configuring EVPN ORF, ensure that one of the following EVPN functions
has been configured: PBB-EVPN, BD-EVPN Functions, Common EVPN Functions, or
EVPN VPWS over MPLS Functions.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
Step 3 Run ipv4-family vpn-target
The BGP-VT address family view is displayed.
Step 4 Run peer ipv4-address enable
ORF route exchange with a specified peer or peer group is enabled.
Step 5 (Optional) Run peer { group-name | ipv4-address } reflect-client
RR is enabled, and the peer is specified as an RR. Even on an EVPN where an RR is
deployed, perform this step on the RR to enable the RR function in the BGP-VT
address family view.
Step 6 Run quit
Exit from the BGP-VT address family view.
----End
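Steps 1 through 6 can be combined into the following sketch. The AS number 100 and the peer address 10.2.2.2 are example values only:

```
system-view
bgp 100
 ipv4-family vpn-target
  peer 10.2.2.2 enable
  quit
commit
```

If the device is an RR, also run the peer reflect-client command in the BGP-VT address family view as described in Step 5.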
Usage Scenario
As shown in Figure 3-130, Device A and Device D belong to AS100, and Device B,
Device C, Device E, and Device F belong to AS200 in a BGP EVPN FRR switchover
scenario. Device B selects the optimal route that is learned from the EBGP EVPN
peer Device A and the sub-optimal route that is learned from the IBGP EVPN peer
Device E. In normal situations, Device B preferentially selects a labeled BGP route
learned from EBGP EVPN peer Device A and advertises the route to the IBGP EVPN
peer Device C. Device B delivers an incoming label map (ILM) entry and next hop
label forwarding entry (NHLFE), and Device C also delivers an NHLFE entry. If
Device A is restarted due to a fault, Device B withdraws the route learned from
Device A, selects the sub-optimal route learned from IBGP EVPN peer Device E,
and reflects the route to Device C through the RR. Then, Device B releases the
applied label and deletes the ILM entry mapped to the label. If traffic arrives at
Device C before Device C updates the NHLFE entry, Device C still uses this entry to
forward traffic to Device B. Because Device B has released the label and deleted
the ILM entry, packets are lost a second time.
Figure 3-130 Networking for packet loss during a BGP EVPN FRR switchover
To prevent traffic loss during a BGP EVPN FRR switchover, configure the local
device to delay deleting the ILM entry.
Pre-configuration Tasks
Before setting a delay in releasing an obtained label in a BGP EVPN FRR
switchover scenario, complete the following tasks:
Procedure
Step 1 Run system-view
----End
Context
As the MAN evolves into an EVPN, a large number of devices at the aggregation
layer still use traditional VLLs, making one-off E2E evolution into EVPN difficult.
However, the core network has already evolved into an EVPN. To implement
communication between the different network layers, splicing a VLL with a
common EVPN must be supported.
On the network shown in Figure 3-131, CE1 with two sites attached is connected
to a UPE through a NID (switch). The UPE and NPEs are connected through an
MPLS network at the aggregation layer, and services are carried using a VLL.
NPE1, NPE2, and NPE3 are connected through an MPLS network at the core layer,
and services are carried through an EVPN.
Pre-configuration Tasks
Before splicing a VLL with a common EVPN E-LAN, complete the following tasks:
● Configure interfaces and their IP addresses on the UPE, NPE1, NPE2, and
NPE3.
● Configure an IGP on the UPE, NPE1, NPE2, and NPE3 to implement route
connectivity.
● Configure PW redundancy in master/slave mode on the UPE.
● Configure EVPN functions on NPE3.
● Configure basic MPLS functions on NPE1, NPE2, and NPE3.
Procedure
Step 1 Configure basic EVPN functions on NPE1 and NPE2.
1. Configure an EVPN instance.
2. Configure an EVPN source address.
3. Configure a BGP EVPN peer relationship.
Step 2 (Optional) Run evpn access vll convergence separate disable
The EVPN-accessing-VLL coupling flag is enabled.
Step 3 Run mpls l2vpn
MPLS L2VPN is enabled on NPE1 and NPE2.
Step 4 Configure PW VE interfaces and sub-interfaces on NPE1 and NPE2. Bind the VLL to
the PW VE interfaces and bind EVPN instances to the PW VE sub-interfaces.
1. Run interface interface-type interface-number
The PW VE interface view is displayed.
2. Run mpls l2vc { ip-address | pw-template template-name } * vc-id [ tunnel-
policy policy-name | [ control-word | no-control-word ] | [ raw | tagged ] |
access-port | ignore-standby-state ] *
A VPWS connection is created.
3. Run esi esi
An ESI is configured.
4. Run quit
Return to the system view.
5. Run interface interface-type interface-number [ .subinterface-number ]
The PW VE sub-interface view is displayed.
6. Run encapsulation qinq-termination [ local-switch | rt-protocol ]
The encapsulation type of the sub-interface is set to QinQ VLAN tag
termination.
7. Run qinq termination pe-vid pe-vid [ to high-pe-vid ] ce-vid ce-vid [ to
high-ce-vid ]
The sub-interface is configured as a QinQ VLAN tag termination sub-
interface.
8. Run evpn binding vpn-instance vpn-instance-name
----End
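Steps 3 and 4 can be combined into the following sketch. The interface numbers, peer address 10.3.3.3, VC ID 100, VLAN IDs, ESI value, and instance name evpn1 are example values only; use the PW VE interface type and numbering supported by your device:

```
system-view
mpls l2vpn
interface Virtual-Ethernet0/1/0
 mpls l2vc 10.3.3.3 100
 esi 0011.1111.1111.1111.1111
 quit
interface Virtual-Ethernet0/1/0.1
 encapsulation qinq-termination
 qinq termination pe-vid 100 ce-vid 10
 evpn binding vpn-instance evpn1
commit
```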
Context
As the MAN evolves into an EVPN, a large number of devices at the aggregation
layer still use traditional VLLs, making one-off E2E evolution into EVPN difficult.
However, the core network has already evolved into an EVPN. To implement
communication between the different network layers, VLL access to an EVPN must
be supported. EVPN-VPWS services involve the E-Line and E-LAN models. This
section describes how to splice a VLL with an MPLS EVPN E-Line.
On the network shown in Figure 3-132, CE1 with two sites attached is connected
to a UPE through a NID (switch). The UPE and NPEs are connected through an
MPLS network at the aggregation layer, and services are carried using a VLL.
NPE1, NPE2, and NPE3 are connected through an MPLS network at the core layer,
and services are carried through an EVPN.
Pre-configuration Tasks
Before splicing a VLL with an MPLS EVPN E-Line, complete the following tasks:
● Configure interfaces and their IP addresses on NPE1, NPE2, NPE3, and the
UPE.
● Configure an IGP on NPE1, NPE2, NPE3, and the UPE to implement Layer 3
reachability.
● Configure PW redundancy in master/slave mode on the UPE.
● Configuring EVPN VPWS over MPLS Functions on NPE1, NPE2, and NPE3.
● Configure basic MPLS LDP functions on NPE1, NPE2, NPE3, and the UPE.
● Configure an EVPL instance on each of NPE1, NPE2, and NPE3.
Procedure
Step 1 Run system-view
An ESI is configured.
Before running this command, ensure that the Layer 2 interface on which the PW-VE sub-
interface is to be created does not have the port link-type dot1q-tunnel command
configuration. If this configuration exists, run the undo port link-type command to delete
the configuration.
In addition to a Layer 2 sub-interface, an Ethernet main interface, Layer 3 sub-interface, or
Eth-Trunk interface can also function as an AC interface.
▪ If you do not configure a matching policy, the dot1q VLAN tag termination
sub-interface terminates the VLAN tags of packets carrying the specified
VLAN ID. If you configure a matching policy, the sub-dot1q VLAN tag
termination sub-interface terminates the VLAN tags of packets carrying the
specified VLAN ID+802.1p value/DSCP value/EthType.
Context
During IP RAN evolution towards EVPN, if VPLS is still commonly used on access-
layer devices, it is difficult to implement E2E evolution towards MPLS EVPN.
However, the aggregation network has already evolved to MPLS EVPN. In this
case, the IP RAN needs to be configured with VPLS splicing with MPLS EVPN E-
LAN.
As shown in Figure 3-133, CSGs and ASGs are connected over the access network
where the VPLS function is deployed to carry services, and ASGs and RSGs are
connected over the aggregation network where the BD EVPN function is deployed
to carry services. On ASGs, a BD needs to be configured and bound to a BD EVPN
instance and a VSI to achieve VPLS splicing with MPLS EVPN E-LAN.
Pre-configuration Tasks
Before splicing VPLS with MPLS EVPN E-LAN, complete the following tasks:
● Configure LDP VPLS between CSGs and ASGs.
The VSIs for BDs must be configured using the vsi bd-mode command.
● Configure BD EVPN between ASGs and RSGs.
Perform the following operations on each ASG:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run evpn
The global EVPN configuration view is displayed.
Step 3 Run esi esi
The name of a static ESI instance is configured.
Step 4 Run evpn redundancy-mode single-active
The redundancy mode of the static ESI instance is set to single-active.
Step 5 Run quit
Return to the global EVPN configuration view.
Step 6 Run quit
Return to the system view.
Step 7 Run vsi vsi-name bd-mode
The VSI view is displayed.
Step 8 Run pwsignal ldp
LDP is configured as the PW signaling protocol, and the VSI-LDP view is displayed.
Step 9 Run vsi-id vsi-id
A VSI ID is configured.
Step 10 Configure a CSG as the UPE.
● Run the peer peer-address [ negotiation-vc-id vc-id ] [ tnl-policy policy-
name ] upe command to specify a CSG as the UPE.
● Run the peer peer-address [ negotiation-vc-id vc-id ] [ tnl-policy policy-
name ] static-npe trans transmit-label recv receive-label command to
configure a static VPLS PW.
Step 11 Run peer peer-address [ negotiation-vc-id vc-id ] pw pw-name
A PW is created, and the VSI-LDP-PW view is displayed.
----End
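On each ASG, the steps above can be combined into the following sketch. The ESI value, VSI name vsi1, VSI ID 100, CSG address 10.1.1.1, and PW name pw1 are example values only:

```
system-view
evpn
 esi 0011.1111.1111.1111.1111
  evpn redundancy-mode single-active
  quit
 quit
vsi vsi1 bd-mode
 pwsignal ldp
  vsi-id 100
  peer 10.1.1.1 upe
  peer 10.1.1.1 pw pw1
commit
```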
Usage Scenario
On the network shown in the figure, the first layer is the user-side access network where
Layer 2 services are carried between CPEs and UPEs to implement VLAN
forwarding. The second layer is the aggregation network where services are
carried using EVPN E-LAN and BD-based EVPN MPLS is deployed between UPEs
and NPEs. The third layer is the core network where services are carried using
MPLS L3VPN. In this case, EVPN E-LAN accessing L3VPN must be configured on
NPEs.
Pre-configuration Tasks
Before configuring EVPN E-LAN accessing L3VPN, complete the following tasks:
● Configuring BD-EVPN Functions on UPEs and NPEs.
● Configuring a Basic BGP/MPLS IP VPN or Configuring a Basic BGP/MPLS IPv6
VPN on NPEs.
Context
Perform the following steps on an NPE:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface virtual-ethernet interface-number or interface global-ve
interface-number
A VE interface or a global VE interface is created, and the VE interface view or
global VE interface view is displayed.
● A VE group contains only one L2VE interface and one L3VE interface. Two VE interfaces
in a VE group cannot reside on different boards.
● Two global VE interfaces in a VE group can reside on different boards.
----End
Context
Perform the following steps on an NPE:
Procedure
Step 1 Run system-view
● A VE group contains only one L2VE interface and one L3VE interface. Two VE interfaces
in a VE group cannot reside on different boards.
● Two global VE interfaces in a VE group can reside on different boards.
----End
3.2.22.3 (Optional) Configuring MAC Addresses for L2VE and L3VE Interfaces
When the MAC address of an L2VE or L3VE interface conflicts with the MAC
address of another type of interface, you can change the MAC address of the L2VE
or L3VE interface to allow normal communication.
Context
Perform the following steps on an NPE:
Procedure
Step 1 Run system-view
You can modify the MAC address of an L2VE interface on an NPE if this MAC
address is the same as that of the UPE interface that connects to the NPE. An
L2VE interface has only one MAC address.
----End
Context
Perform the following steps on an NPE:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface virtual-ethernet interface-number.subinterface-number or
interface global-ve interface-number.subinterface-number
The L2VE sub-interface or global L2VE sub-interface view is displayed.
Step 3 Run encapsulation { dot1q [ vid low-pe-vid [ to high-pe-vid ] ] | untag | qinq
[ vid pe-vid ce-vid { low-ce-vid [ to high-ce-vid ] | default } ] }
An encapsulation type of packets allowed to pass through the Layer 2 sub-
interface is specified.
Step 4 Run rewrite pop { single | double }
The traffic behavior is set to pop so that the Ethernet sub-interface removes VLAN
tags from received packets.
If the received packets each carry a single VLAN tag, specify single.
If the encapsulation type of packets has been set to QinQ, specify double in this
step to remove double VLAN tags from the received packets.
Step 5 Run bridge-domain bd-id
The Layer 2 sub-interface is added to a BD so that the sub-interface can transmit
data packets only through this BD.
Step 6 Run commit
The configuration is committed.
----End
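The steps above can be combined into the following sketch. The sub-interface number, VLAN ID 10, and BD ID 10 are example values only:

```
system-view
interface Virtual-Ethernet0/1/0.1
 encapsulation dot1q vid 10
 rewrite pop single
 bridge-domain 10
commit
```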
Context
Perform the following steps on an NPE:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface virtual-ethernet interface-number.subinterface-number or
interface global-ve interface-number.subinterface-number
The L3VE sub-interface or global L3VE sub-interface view is displayed.
Step 3 Run vlan-type dot1q vlan-id
----End
Prerequisites
All configurations for EVPN E-LAN accessing L3VPN are complete.
Procedure
Step 1 Run the display virtual-ethernet ve-group [ ve-group-id | slot slot-id ] command
to check the binding relationship between VE interfaces and the VE group.
----End
Context
On an HVPLS-deployed network, if users want to evolve the VPLS function to the
EVPN function, they can deploy the EVPN HVPN function as a replacement for
HVPLS to carry Layer 2 services.
As shown in Figure 3-135, UPEs and SPEs are connected over the access network,
and SPEs and NPEs are connected over the aggregation network. An IGP needs to
be deployed on both the access network and aggregation network to achieve
route communication at each layer. The BD EVPN function needs to be deployed
between neighboring devices. In addition, the EVPN RR function can be deployed
on SPEs to allow CE1 and CE2 to transmit information, such as the MAC address.
Pre-configuration Tasks
Before configuring EVPN HVPN to carry Layer 2 services, complete the following
tasks:
● Configure BD EVPN between SPEs and NPEs.
Perform the following operations on each SPE to implement the EVPN RR
function:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
Step 3 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 4 Specify a UPE and an RR client.
● Run the peer { ipv4-address | group-name } upe command to configure a
peer as the UPE.
● Specify an RR client.
a. Run the peer { ipv4-address | group-name } reflect-client command to
configure the RR function and specify the UPE as an RR client.
b. Run the peer { ipv4-address | group-name } next-hop-local command to
configure a device to use its own IP address as the next-hop IP address
during route advertisement.
To allow an SPE to use its own IP address as the next-hop IP address
when advertising routes to UPEs and NPEs, run the peer next-hop-local
command on the SPE twice with different parameters specified for UPEs
and NPEs.
Step 5 (Optional) Run apply-label per-nexthop
The next-hop-based label distribution function is enabled for EVPN routes.
In an EVPN HVPN scenario, if an SPE needs to send a large number of EVPN
routes but the MPLS labels are insufficient, next-hop-based label distribution can
be configured on the SPE to conserve MPLS label resources.
NOTICE
----End
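On an SPE, steps 1 through 5 can be combined into the following sketch. The AS number 100, the UPE address 10.1.1.1, and the NPE address 10.2.2.2 are example values only:

```
system-view
bgp 100
 l2vpn-family evpn
  peer 10.1.1.1 upe
  peer 10.1.1.1 reflect-client
  peer 10.1.1.1 next-hop-local
  peer 10.2.2.2 next-hop-local
  apply-label per-nexthop
commit
```

Here next-hop-local is run twice, once for the UPE peer and once for the NPE peer, so that the SPE uses its own address as the next hop in both directions.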
Follow-up Procedure
On NPEs, configure the MAC-based load balancing function to improve network
resource utilization.
1. Run the system-view command to enter the system view.
2. Enter the EVPN instance view or BD-EVPN instance view.
– Run the evpn vpn-instance vpn-instance-name command to enter the
EVPN instance view.
– Run the evpn vpn-instance vpn-instance-name bd-mode command to
enter the BD-EVPN instance view.
3. Run the mac load-balancing command to enable MAC-based load balancing.
4. Run the commit command to commit the configuration.
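The follow-up procedure amounts to the following sketch. The instance name evpn1 is an example value only:

```
system-view
evpn vpn-instance evpn1 bd-mode
 mac load-balancing
commit
```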
Context
In a MetroFabric scenario with CU separation deployed, as shown in Figure 3-136,
MPLS EVPN runs on the metro network, and static VXLAN tunnels are established
between the MEF devices and vBRAS-UPs. Therefore, the MEF1 and MEF2 devices
must support MPLS EVPN splicing with static VXLAN to establish an E2E
forwarding path for traffic.
MPLS EVPN splicing with static VXLAN supports the following key functions:
● MEF1 and MEF2 learn MAC routes on the static VXLAN tunnel side.
● A VXLAN EVPN peer relationship is established between MEF1 and MEF2.
Both MEF1 and MEF2 can re-originate the MAC routes learned on the static
VXLAN tunnel side and flood the MAC routes through BGP EVPN.
● An MPLS EVPN peer relationship is established between MEF1 and MEF2.
MEF1 and MEF2 flood ES routes to each other and support DF election.
● MEF1 and MEF2 advertise the MAC routes (maybe the re-originated MAC
routes) learned on the static VXLAN tunnel side to MEF3 (the BGP EVPN peer
on the MPLS EVPN side) through EVPN. MAC load balancing is implemented
on MEF3.
● MAC mobility is supported between MEF1 and MEF2.
Pre-configuration Tasks
Before configuring MPLS EVPN splicing with static VXLAN, complete the following
tasks:
● Configure BD EVPN between MEF1 and MEF3 and between MEF2 and MEF3.
Specifically, configure BD-based MPLS EVPN. To allow MEF1 and MEF2 to
implement load balancing, configure the same ESI in the BDs on MEF1 and
MEF2.
● Configure VXLAN tunnels between MEF1, MEF2, and vBRAS-UPs. MEF1 and
MEF2 function as aggregation devices, and vBRAS-UPs function as BRASs.
Currently, only static VXLAN tunnels can be established.
MEF1 and MEF2 form an anycast relationship. Therefore, the same source VTEP IP
address must be configured on both MEF1 and MEF2.
After the pre-configuration tasks are complete, perform the following operations.
Procedure
Step 1 Configure a BGP EVPN peer relationship between MEF1 and MEF2.
1. Run bgp as-number
BGP is enabled, and the BGP view is displayed.
2. (Optional) Run router-id ipv4-address
A BGP router ID is configured.
3. Run peer ipv4-address as-number as-number
The remote device is specified as the BGP peer.
4. (Optional) Run peer ipv4-address connect-interface interface-type interface-
number [ ipv4-source-address ]
The source interface and source IP address are specified for the TCP
connection to be set up between BGP peers.
If loopback interfaces are used to establish a BGP connection, running the peer
connect-interface command on both ends is recommended to ensure the
connectivity. If this command is run on only one end, the BGP connection may fail to
be established.
5. (Optional) Run peer ipv4-address ebgp-max-hop [ hop-count ]
The maximum number of allowed hops is set for an EBGP EVPN connection.
Generally, EBGP EVPN peers are directly connected. If they are not directly
connected, run the peer ebgp-max-hop command to allow the EBGP EVPN
peers to establish a multi-hop TCP connection.
If loopback interfaces are used for an EBGP EVPN connection, the peer ebgp-max-
hop command must be run, with the hop-count value greater than or equal to 2. If
this configuration is absent, the EBGP EVPN connection fails to be established.
6. Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
7. Run peer { ipv4-address | group-name } enable
The function to exchange EVPN routes with a peer or peer group is enabled.
8. Run peer { ipv4-address | group-name } advertise encap-type vxlan
The device is enabled to advertise EVPN routes carrying the VXLAN
encapsulation attribute to peers.
9. (Optional) Run peer { group-name | ipv4-address } route-policy route-policy-
name { import | export }
A route-policy is specified for the routes received from or to be advertised to a
BGP EVPN peer or peer group.
After the route-policy is applied, the routes received from or to be advertised
to a specified BGP EVPN peer or peer group will be filtered, ensuring that only
desired routes are imported or advertised. This configuration helps manage
routes and reduce required routing entries and system resources.
10. (Optional) Run peer { group-name | ipv4-address } next-hop-invariable
The device is enabled to advertise routes with the next hop unchanged to an
EBGP EVPN peer. If an EBGP VPN peer relationship is established between a
spine node and a gateway, run the peer next-hop-invariable command on
the spine node to ensure that the next hop of the routes received by this
gateway is another gateway.
11. (Optional) Run peer { group-name | ipv4-address } mac-limit number
[ percentage ] [ alert-only | idle-forever | idle-timeout times ]
The maximum number of MAC advertisement routes that can be received from
the BGP EVPN peer or peer group is set.
Step 2 Configure the function to re-originate EVPN routes between MEF1 and MEF2.
1. Run peer { ipv4-address | group-name } import reoriginate
The function to re-originate the routes received from BGP EVPN peers is
enabled.
2. Run peer { ipv4-address | group-name } advertise route-reoriginated evpn
mac
The function to re-originate MAC routes through EVPN and send the re-
originated MAC routes to a specified BGP EVPN peer is enabled.
3. Run quit
Step 3 Enable MEF1 and MEF2 to advertise the MAC routes (maybe the re-originated
MAC routes) learned on the static VXLAN tunnel side to the BGP EVPN peer on
the MPLS EVPN side through EVPN.
1. Run evpn
The device is enabled to advertise the MAC routes on the static VXLAN tunnel
side to the BGP EVPN peers on the MPLS EVPN side through EVPN.
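Putting Steps 1 and 2 together, a minimal configuration sketch for MEF1 might look as follows (the AS number 100 and the MEF2 peer address 2.2.2.9 are hypothetical):

```
[~MEF1] bgp 100
[*MEF1-bgp] peer 2.2.2.9 as-number 100
[*MEF1-bgp] l2vpn-family evpn
[*MEF1-bgp-af-evpn] peer 2.2.2.9 enable
[*MEF1-bgp-af-evpn] peer 2.2.2.9 advertise encap-type vxlan
[*MEF1-bgp-af-evpn] peer 2.2.2.9 import reoriginate
[*MEF1-bgp-af-evpn] peer 2.2.2.9 advertise route-reoriginated evpn mac
[*MEF1-bgp-af-evpn] quit
[*MEF1-bgp] commit
```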
----End
Usage Scenario
When a DC in which an EVPN VXLAN is deployed interconnects with an enterprise
campus network through an MPLS L2VPN, the VXLAN EVPN must be spliced with the VPLS.
On the network shown in Figure 3-137, the TOR, which is a DC's gateway,
accesses the backbone network through the egress routers PE1 and PE2 on the DC
network. PE3, which is the egress router on the campus network, interconnects to
PE1 and PE2 through the MPLS VPLS network. Splicing VXLAN and VPLS is
configured on PE1 and PE2 to implement communication between the DC and
campus network.
Pre-configuration Tasks
Before splicing a VXLAN EVPN with a VPLS, complete the following tasks:
● Configure interfaces and IP addresses for the egress device PE3 on the campus
network, egress devices PE1 and PE2 on the DC network, and the TOR.
● Configure an IGP on PE3, PE1, PE2, and the TOR to implement IP connectivity
on the backbone network.
● Enable MPLS on PE3, PE1, and PE2. Configure MPLS LDP LSPs between PE3
and PE1 and between PE3 and PE2.
● Configure Layer 2 connections between CEs and PEs.
Procedure
● Configure PE1 and PE2.
a. Run system-view
The system view is displayed.
b. Run vsi vsi-name [ static | auto ] bd-mode
A VSI in BD mode is created.
c. Run pwsignal { ldp | bgp }
LDP is configured as the VSI signaling protocol, and the VSI-LDP view is
displayed.
d. Run vsi-id vsi-id
An ID is configured for the VSI.
e. Run peer peer-address [ negotiation-vc-id vc-id ] pw pw-name
PE3 is specified as the VSI peer.
f. Run commit
The configuration is committed.
● Configure PE3.
a. Run system-view
The system view is displayed.
b. Run vsi vsi-name [ static | auto ] bd-mode
A VSI in BD mode is created.
c. Run pwsignal { ldp | bgp }
The VSI-LDP view is displayed.
d. Run vsi-id vsi-id
An ID is configured for the VSI.
e. Run peer peer-address [ negotiation-vc-id vc-id ] pw pw-name
PE1 and PE2 are specified as the VSI peers of PE3.
f. Run protect-group group-name
A PW protection group is created, and the PW protection group view is
displayed.
g. Run protect-mode pw-redundancy { master | independent }
The PW redundancy mode of the current PW protection group is
specified.
Network protection depends on the VPLS network's protection solution.
It is recommended that PW redundancy in master mode, rather than in
independent mode, be configured on PE3.
h. Run peer peer-address [ negotiation-vc-id vc-id ] preference preference-
value
The PWs connected to PE1 and PE2 are added to the PW protection
group, and priorities of the PWs are specified. The smaller the value, the
higher the priority. The PW with a higher priority serves as the primary
PW.
i. Run quit
Exit from the PW protection group view.
j. Run quit
Exit from the VSI-LDP view.
k. Run pw-redundancy mac-withdraw rfc-compatible
PE3 is enabled to instruct the peer PEs to clear the MAC addresses of the
PWs if their primary/secondary status is changed.
When the master/backup status of PE1 and PE2 changes, PE3 performs a
primary/secondary PW switchover. In this case, PE3 must notify PE1 and
PE2 of the current PW status and instruct them to clear the MAC
addresses.
l. Run commit
The configuration is committed.
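A configuration sketch for PE3, assuming a hypothetical VSI name company1, PW names pw1 and pw2, and peer addresses 1.1.1.9 (PE1) and 2.2.2.9 (PE2); the prompts are illustrative:

```
[~PE3] vsi company1 bd-mode
[*PE3-vsi-company1] pwsignal ldp
[*PE3-vsi-company1-ldp] vsi-id 10
[*PE3-vsi-company1-ldp] peer 1.1.1.9 pw pw1
[*PE3-vsi-company1-ldp] peer 2.2.2.9 pw pw2
[*PE3-vsi-company1-ldp] protect-group group1
[*PE3-protect-group-group1] protect-mode pw-redundancy master
[*PE3-protect-group-group1] peer 1.1.1.9 preference 1
[*PE3-protect-group-group1] peer 2.2.2.9 preference 2
[*PE3-protect-group-group1] quit
[*PE3-vsi-company1-ldp] quit
[*PE3-vsi-company1] pw-redundancy mac-withdraw rfc-compatible
[*PE3-vsi-company1] commit
```

The PW connected to PE1 gets the smaller preference value and therefore serves as the primary PW.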
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run evpn vpn-instance vpn-instance-name bd-mode
An EVPN instance in BD mode is created, and the EVPN instance view is displayed.
Step 3 Run route-distinguisher route-distinguisher
An RD is configured for the EVPN instance.
Step 4 Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-
extcommunity ]
VPN targets (RTs) are configured for the EVPN instance.
Step 5 Run quit
Exit from the EVPN instance view.
Step 6 Run interface nve nve-number
The view of a Network Virtualization Edge (NVE) interface is displayed.
Step 7 Run source ip-address
An IP address is configured for the source virtual tunnel end point (VTEP).
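Taken together, the steps above can be sketched as follows (the instance name evrf1, RD 100:1, VPN target 100:1, and VTEP address 3.3.3.9 are hypothetical):

```
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-evpn-instance-evrf1] vpn-target 100:1 both
[*PE1-evpn-instance-evrf1] quit
[*PE1] interface nve 1
[*PE1-Nve1] source 3.3.3.9
[*PE1-Nve1] commit
```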
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp as-number
BGP is enabled, and the BGP view is displayed.
Step 3 Run peer ipv4-address as-number as-number
The TOR is specified as the BGP peer.
Step 4 Run peer ipv4-address ebgp-max-hop [ hop-count ]
The maximum number of hops allowed for a BGP peer session is specified.
Step 5 Run peer ipv4-address connect-interface loopback interface-number
The source interface for sending BGP messages is specified.
Step 6 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 7 Run undo policy vpn-target
The function of filtering received EVPN routes based on RTs is disabled.
Step 8 Run peer { ipv4-address | group-name } enable
The capability to exchange EVPN routes with the specified peer or peer group is
enabled.
Step 9 Run peer { ipv4-address | group-name } advertise encap-type vxlan
The device is configured to advertise EVPN routes carrying the VXLAN
encapsulation attribute to the TOR.
Step 10 Run quit
Return to the BGP view.
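The BGP EVPN peering with the TOR can be sketched as follows (the AS numbers and the TOR address 10.10.10.9 are hypothetical):

```
[~PE1] bgp 100
[*PE1-bgp] peer 10.10.10.9 as-number 200
[*PE1-bgp] peer 10.10.10.9 ebgp-max-hop 10
[*PE1-bgp] peer 10.10.10.9 connect-interface loopback 0
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] undo policy vpn-target
[*PE1-bgp-af-evpn] peer 10.10.10.9 enable
[*PE1-bgp-af-evpn] peer 10.10.10.9 advertise encap-type vxlan
[*PE1-bgp-af-evpn] commit
```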
----End
Procedure
● Configure PE1 and PE2.
a. Run bridge-domain bd-id
The BD view is displayed.
b. Run vxlan vni vni-id split-horizon-mode
A VNI is created and associated with the BD, and forwarding in split
horizon mode is enabled.
c. Run evpn binding vpn-instance vpn-instance-name [ bd-tag bd-tag ]
A specified EVPN instance is bound to the BD. By specifying different bd-
tag values, you can bind multiple BDs with different VLANs to the same
EVPN instance and isolate services in the BDs.
d. Run l2 binding vsi vsi-name
A specified VSI is bound to the BD.
On a node that splices VXLAN and VPLS, the BD-bound VSI does not support pw-tag,
and the PW configured in the VSI must be in Raw mode rather than in Tag mode.
e. Run quit
Return to the system view.
f. Run commit
The configuration is committed.
● The BD bound to an EVPN instance and a VSI does not support AC interface access.
● A node that splices VXLAN and VPLS can have only one VSI and one EVPN
instance bound to its BD. Specifically, one BD cannot be bound to multiple VSIs or
multiple EVPN instances, and a VSI or an EVPN instance cannot be bound to
multiple BDs.
● Bind the AC interface to the BD on PE3.
a. Run system-view
The system view is displayed.
b. Run bridge-domain bd-id
The BD view is displayed.
The Layer 2 sub-interface and the VSI are bound to the same BD.
g. Run quit
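For the splicing nodes PE1 and PE2, the BD-binding steps above can be sketched as follows (the BD ID 10, VNI 5000, EVPN instance evrf1, and VSI company1 are hypothetical):

```
[~PE1] bridge-domain 10
[*PE1-bd10] vxlan vni 5000 split-horizon-mode
[*PE1-bd10] evpn binding vpn-instance evrf1
[*PE1-bd10] l2 binding vsi company1
[*PE1-bd10] quit
[*PE1] commit
```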
----End
Context
On the network shown in Figure 3-138, PE3 is dual-homed to PE1 and PE2, and
an MPLS L2VPN is deployed between the PEs, with PW connections configured. To
accelerate PW fault detection, static BFD for VPLS PW can be configured on PE1,
PE2, and PE3. The configuration allows fast switching of upper-layer applications.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bfd
BFD is enabled globally, and the global BFD view is displayed.
Step 3 Run quit
Return to the system view.
Step 4 Run bfd session-name bind pw vsi vsi-name peer peer-address [ vc-id vc-id ]
[ remote-peer remote-peer-address pw-ttl { auto-calculate | ttl-number } ]
BFD configuration items are created.
Step 5 Run the following commands to configure BFD session discriminators:
● Run the discriminator local discr-value command to set the local
discriminator.
● Run the discriminator remote discr-value command to set the remote
discriminator.
The local discriminator on one end must be the remote discriminator on the other end.
● You must simultaneously configure or cancel BFD for PW on both PEs. Otherwise, the
PW status on both ends may be inconsistent.
● To modify parameters of a created BFD session, run the min-tx-interval, min-rx-
interval, and detect-multiplier commands as needed.
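A sketch of the static BFD for VPLS PW configuration on PE3 (the session name, VSI name, peer address, and discriminators are hypothetical; a mirrored session with the discriminators swapped would be needed on the remote PE):

```
[~PE3] bfd
[*PE3-bfd] quit
[*PE3] bfd pw1bfd bind pw vsi company1 peer 1.1.1.9
[*PE3-bfd-session-pw1bfd] discriminator local 100
[*PE3-bfd-session-pw1bfd] discriminator remote 200
[*PE3-bfd-session-pw1bfd] commit
```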
----End
Prerequisites
All configurations of splicing a VXLAN EVPN with a VPLS have been completed.
Procedure
● Check the VPLS configurations.
– Run the display vsi [ name vsi-name ] [ verbose ] command to check
VSI information of the VPLS.
Usage Scenario
If an ASBR can manage BGP-EVPN routes but does not have enough interfaces for
all inter-AS EVPNs, MPLS EVPN E-LAN Option B can be used. MPLS EVPN E-LAN
Option B requires ASBRs to help maintain and advertise EVPN routes, but you
do not need to create EVPN instances on the ASBRs. In the basic networking of
inter-AS VPN Option B, an ASBR cannot play other roles, such as the PE or RR, and
an RR is not required in each AS.
Pre-configuration Tasks
Before configuring MPLS EVPN E-LAN Option B, complete the following tasks:
● Configure an IGP for the MPLS backbone network of each AS to ensure IP
connectivity of the backbone network in each AS.
● Configure MPLS and MPLS LDP both globally and per interface on each node
of the MPLS backbone network in each AS and establish an LDP LSP or TE
tunnel between MP-IBGP peers.
● Configure an EVPN instance on the PE connected to the CE. For details, see
Configuring an EVPN Instance.
● Configure an EVPN source address on the PE connected to the CE. For details,
see Configuring an EVPN Source Address.
● Bind an interface to the EVPN instance on the PE connected to the CE. For
details, see Binding an Interface to an EVPN Instance.
● Configure an ESI on the PE connected to the CE. For details, see Configuring
an ESI.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp as-number
The BGP view is displayed.
Step 3 Run peer peer-address as-number as-number
The IBGP peer relationship is set up between the PE and ASBR in the same AS.
Step 4 Run peer peer-address connect-interface loopback interface-number
The loopback interface is specified as the outbound interface of the BGP session.
Step 5 Run ipv4-family unicast
The BGP-IPv4 unicast address family view is displayed.
Step 6 Run peer peer-address enable
The function to exchange IPv4 routes between the PE and ASBR is enabled.
Step 7 Run quit
Return to the system view.
Step 8 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 9 Run peer peer-address enable
The function to exchange EVPN routes between the PE and ASBR is enabled.
----End
Context
In inter-AS EVPN Option B, you do not need to create EVPN instances on ASBRs.
The ASBR does not filter the EVPN routes received from the PE in the same AS
based on VPN targets. Instead, it advertises the received routes to the peer ASBR
through MP-EBGP.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The interface view is displayed.
Step 3 Run ip address ip-address { mask | mask-length }
An IP address is configured for the interface.
Step 4 Run mpls
The MPLS capability is enabled.
Step 5 Run quit
Return to the system view.
Step 6 Run bgp as-number
The BGP view is displayed.
Step 7 Run peer peer-address as-number as-number
The peer ASBR is specified as an EBGP peer.
Step 8 Run ipv4-family unicast
The BGP-IPv4 unicast address family view is displayed.
Step 9 Run peer peer-address enable
The function to exchange IPv4 routes with the peer ASBR is enabled.
Step 10 Run quit
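The inter-ASBR link and EBGP peering above can be sketched as follows (the interface name, addresses, and AS numbers are hypothetical):

```
[~ASBR1] interface GigabitEthernet 0/1/0
[*ASBR1-GigabitEthernet0/1/0] ip address 192.168.1.1 255.255.255.0
[*ASBR1-GigabitEthernet0/1/0] mpls
[*ASBR1-GigabitEthernet0/1/0] quit
[*ASBR1] bgp 100
[*ASBR1-bgp] peer 192.168.1.2 as-number 200
[*ASBR1-bgp] ipv4-family unicast
[*ASBR1-bgp-af-ipv4] peer 192.168.1.2 enable
[*ASBR1-bgp-af-ipv4] quit
[*ASBR1-bgp] commit
```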
----End
3.2.26.3 Configuring ASBRs Not to Filter EVPN Routes Based on VPN Targets
In inter-AS EVPN Option B mode, ASBRs do not have EVPN instances. If you want
ASBRs to keep received EVPN routes, configure the ASBRs not to filter EVPN routes
based on VPN targets.
Context
By default, an ASBR filters received EVPN routes based on VPN targets. Routes
that pass the filtering are added to the routing table; the other routes are
discarded. Therefore, if no EVPN instance is configured on the ASBR or no
VPN target is configured for the EVPN instance, the ASBR discards all received
EVPN routes.
In inter-AS EVPN Option B mode, the ASBR does not need to store EVPN instance
information, but must store all the EVPN routing information and advertise the
routing information to the peer ASBR. In this situation, the ASBR needs to import
all the received EVPN routes without filtering them based on VPN targets.
Procedure
Step 1 Run system-view
The system view of the ASBR is displayed.
Step 2 Run bgp as-number
The BGP view is displayed.
Step 3 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 4 Run undo policy vpn-target
The function to filter EVPN routes based on VPN targets is disabled.
Step 5 Run commit
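On the ASBR, the procedure reduces to a short sketch (the AS number is hypothetical):

```
[~ASBR1] bgp 100
[*ASBR1-bgp] l2vpn-family evpn
[*ASBR1-bgp-af-evpn] undo policy vpn-target
[*ASBR1-bgp-af-evpn] commit
```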
----End
Context
In an inter-AS EVPN Option B scenario, after one-label-per-next-hop label
distribution is configured on an ASBR, the ASBR assigns only one label to EVPN
routes that share the same next hop and outgoing label. Compared with one-label-
per-route label distribution, one-label-per-next-hop label distribution significantly
conserves label resources.
Procedure
Step 1 Run system-view
NOTICE
After one-label-per-next-hop label distribution is enabled or disabled on an ASBR,
the labels assigned by the ASBR to routes change. As a result, temporary packet
loss may occur.
----End
Context
On an intra-AS EVPN Option B network that has protection switching enabled, if a
link or node fails, traffic switches to a backup path, which implements
uninterrupted traffic transmission.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
Step 3 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 4 Run auto-frr
BGP Auto fast reroute (FRR) is enabled.
Step 5 (Optional) Run bestroute nexthop-resolved tunnel
Configure BGP EVPN routes that recurse to LSPs to participate in route selection.
Step 6 Run commit
The configuration is committed.
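A sketch of the BGP Auto FRR configuration (the AS number is hypothetical; the bestroute step is optional as noted above):

```
[~PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] auto-frr
[*PE1-bgp-af-evpn] bestroute nexthop-resolved tunnel
[*PE1-bgp-af-evpn] commit
```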
----End
Context
In inter-AS EVPN Option B mode, if multiple PEs exist in an AS, you can configure
an ASBR as an RR to reduce the number of MP-IBGP connections needed between PEs.
Procedure
Step 1 Run system-view
The ASBR1 system view is displayed.
Step 2 Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
Step 3 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 4 Run peer peer-ipv4-address reflect-client
The ASBR is configured as an RR, and the PE is configured as a client. To
configure multiple PEs as clients, run this command for each PE.
Step 5 Run peer peer-ipv4-address next-hop-local
The ASBR is configured to change the next hop address of a route to the device's
own IP address before the device advertises the route to an IBGP peer.
Step 6 (Optional) Run rr-filter extended-list-number
A reflection policy is configured for the RR. Only IBGP routes whose VPN targets
meet the matching rules can be reflected. This allows load balancing among RRs.
Step 7 (Optional) Run reflect change-path-attribute
You can enable the RR to modify BGP route attributes using an export policy.
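A sketch of the RR configuration on ASBR1 (the AS number and the PE client address 1.1.1.9 are hypothetical):

```
[~ASBR1] bgp 100
[*ASBR1-bgp] l2vpn-family evpn
[*ASBR1-bgp-af-evpn] peer 1.1.1.9 reflect-client
[*ASBR1-bgp-af-evpn] peer 1.1.1.9 next-hop-local
[*ASBR1-bgp-af-evpn] commit
```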
----End
Prerequisites
MPLS EVPN E-LAN Option B has been configured.
Procedure
● Run the display bgp evpn all peer command on the PE or ASBR to check the
status of all BGP peer relationships.
● Run the display bgp evpn all routing-table command on the PE or ASBR to
check information about EVPN routes.
----End
Usage Scenario
At present, the IP bearer network uses L2VPN and L3VPN (HVPN) to carry Layer 2
and Layer 3 services, respectively. The protocols are complex. EVPN can carry both
Layer 2 and Layer 3 services. To simplify service bearer protocols, many IP bearer
networks will evolve to EVPN. Specifically, L3VPN HVPN, which carries Layer 3
services, needs to evolve to EVPN L3VPN HVPN.
Figure 3-140 shows the basic architecture of an HVPN that consists of the
following device roles:
● UPE: A UPE, also called an underlayer PE or a user-end PE, is a device that is
directly connected to users. UPEs provide access services for users.
● SPE: An SPE is a superstratum PE or service provider-end PE, which is
connected to UPEs and located at the core of a network. An SPE manages
and advertises VPN routes.
● NPE: An NPE is a network provider-end PE that is connected to SPEs and
located at the network side.
The UPEs and SPE are connected at the access layer. The SPE and NPE are
connected at the aggregation layer. An EVPN L3VPN HVPN can be deployed only
after separate IGPs are deployed at the access and aggregation layers to
implement interworking.
EVPN L3VPN HVPN is classified into EVPN L3VPN HoVPN or EVPN L3VPN H-VPN:
● EVPN L3VPN HoVPN: An SPE advertises only default routes or summarized
routes to UPEs. UPEs do not have specific routes to NPEs and can only send
service data to SPEs over default routes. As a result, route isolation is
implemented. An EVPN L3VPN HoVPN can use devices with relatively poor
route management capabilities as UPEs, reducing network deployment costs.
● EVPN L3VPN H-VPN: SPEs advertise specific routes to UPEs. UPEs function as
RR clients to receive the specific routes reflected by SPEs functioning as RRs.
This mechanism facilitates route management and traffic forwarding control.
As L3VPN HoVPN evolves towards EVPN L3VPN HoVPN, the following splicing
scenarios occur:
● Splicing between EVPN L3VPN HoVPN and common L3VPN: EVPN L3VPN
HoVPN is deployed between the UPEs and SPE, and L3VPN is deployed
between the SPE and NPE. The SPE advertises only default routes or
summarized routes to the UPEs. After receiving specific routes (EVPN routes)
from the UPEs, the SPE encapsulates these routes into VPNv4 routes and
advertises them to the NPE.
● Splicing between L3VPN HoVPN and EVPN L3VPN: L3VPN HoVPN is deployed
between the UPEs and SPE, and EVPN L3VPN is deployed between the SPE
and NPE. The SPE advertises only default routes or summarized routes to the
UPEs. After receiving specific routes (L3VPN routes) from the UPEs, the SPE
encapsulates these routes into EVPN routes and advertises them to the NPE.
Pre-configuration Tasks
Before configuring an EVPN L3VPN HVPN, complete the following tasks:
Configure an IGP on each network layer (between the UPEs and SPE, and between
the SPE and NPE) to implement interworking. Different IGPs can be deployed at
the access and aggregation layers, or the same IGP with different process IDs can
be deployed at the different layers.
Context
On an EVPN L3VPN HoVPN, the following configurations must be performed:
1. Create VPN instances on UPEs, SPEs, and NPEs, and bind the VPN instances
(or IPv6 VPN instances) to AC interfaces on the UPEs and NPEs.
According to relevant standards, the VPN instance status obtained from an NMS is Up
only if at least one interface bound to the VPN instance is Up. On an HoVPN, VPN
instances on SPEs are not bound to interfaces. As a result, the VPN instance status
obtained from an NMS is always Down. To solve this problem, run the transit-vpn
command in the VPN instance view or VPN instance IPv4 address family view of an
SPE. Then, the VPN instance status obtained from the NMS is always Up, regardless of
whether the VPN instance is bound to interfaces.
2. Configure BGP-EVPN peer relationships between UPEs and SPEs and between
SPEs and NPEs. For details, see 3.2.4.5 Configuring a BGP EVPN Peer
Relationship.
3. Configure routing protocols for NPEs and UPEs to exchange routes with CEs.
This configuration is similar to configuring PEs and CEs to exchange routes on
a BGP/MPLS VPN. For more information, see Configuring Route Exchange
Between PEs and CEs or Configuring IPv6 Route Exchange Between PEs and
CEs.
4. Configure SPEs to advertise only default or summarized routes to UPEs. For
details, see Procedure.
Procedure
Step 1 Configure an SPE to advertise only default or summarized routes to a UPE.
● Configure the SPE to advertise default routes to the UPE.
a. Run system-view
The system view is displayed.
b. Run ip route-static vpn-instance vpn-instance-name 0.0.0.0 { 0.0.0.0 |
0 } { nexthop-address | interface-type interface-number [ nexthop-
address ] } or ipv6 route-static vpn-instance vpn-instance-name 0::0 0
{ nexthop-ipv6-address | interface-type interface-number [ nexthop-ipv6-
address ] }
A default IPv4/IPv6 static route is created for the VPN instance.
c. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
d. Run ipv4-family vpn-instance vpn-instance-name or ipv6-family vpn-
instance vpn-instance-name
The BGP-VPN instance IPv4/IPv6 address family view is displayed.
Configure a route policy on the NPE to prevent the NPE from receiving default routes.
● Configure the SPE to advertise summarized routes to the UPE.
a. Run system-view
The system view is displayed.
b. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
c. Run ipv4-family vpn-instance vpn-instance-name or ipv6-family vpn-
instance vpn-instance-name
The BGP-VPN instance IPv4/IPv6 address family view is displayed.
d. Run aggregate ipv4-address { mask | mask-length } [ as-set | attribute-
policy route-policy-name1 | detail-suppressed | origin-policy route-
policy-name2 | suppress-policy route-policy-name3 ] * or aggregate
ipv6-address prefix-length [ as-set | attribute-policy route-policy-name1
| detail-suppressed | origin-policy route-policy-name2 | suppress-policy
route-policy-name3 ] *
A summarized route is created.
e. Run advertise l2vpn evpn
The SPE is enabled to advertise IP prefix routes.
f. Run quit
Exit from the BGP-VPN instance IPv4 address family view.
g. Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
h. Run peer { ipv4-address | group-name } upe
The UPE is specified as a lower-level PE of the SPE.
i. Run quit
Exit from the BGP-EVPN address family view.
j. Run quit
Exit from the BGP view.
k. Run commit
The configuration is committed.
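Taking the summarized-route option as an example, a configuration sketch on the SPE (the AS number, VPN instance vpn1, summary prefix, and UPE address 1.1.1.9 are hypothetical):

```
[~SPE] bgp 100
[*SPE-bgp] ipv4-family vpn-instance vpn1
[*SPE-bgp-vpn1] aggregate 10.1.0.0 16 detail-suppressed
[*SPE-bgp-vpn1] advertise l2vpn evpn
[*SPE-bgp-vpn1] quit
[*SPE-bgp] l2vpn-family evpn
[*SPE-bgp-af-evpn] peer 1.1.1.9 upe
[*SPE-bgp-af-evpn] quit
[*SPE-bgp] commit
```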
----End
Context
On an EVPN L3VPN H-VPN, the following configurations must be performed:
1. Create VPN instances on UPEs and NPEs, and bind the VPN instances (or
IPv6 VPN instances) to AC interfaces on the UPEs and NPEs.
2. Configure BGP-EVPN peer relationships between UPEs and SPEs and between
SPEs and NPEs. For details, see 3.2.4.5 Configuring a BGP EVPN Peer
Relationship.
3. Configure routing protocols for NPEs and UPEs to exchange routes with CEs.
This configuration is similar to configuring PEs and CEs to exchange routes on
a BGP/MPLS VPN. For more information, see Configuring Route Exchange
Between PEs and CEs or Configuring IPv6 Route Exchange Between PEs and
CEs.
4. Configure SPEs as RRs, and specify UPEs and NPEs as RR clients. For details,
see Procedure.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
Step 3 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 4 Run peer { ipv4-address | group-name } reflect-client
The SPE is configured as an RR, and UPEs are specified as its clients.
Step 5 Run peer { ipv4-address | group-name } next-hop-local
The SPE is configured to use its own IP address as the next hop of routes when
advertising these routes.
To enable an SPE to use its own IP address as the next hop of routes when
advertising these routes to UPEs and NPEs, run the peer next-hop-local
command with different parameters specified on the SPE for each UPE and each
NPE.
Step 6 (Optional) Run apply-label per-nexthop
One-label-per-next-hop label distribution is enabled on the SPE.
On an EVPN L3VPN H-VPN, if an SPE needs to send a large number of EVPN routes
but the MPLS labels are insufficient, configure one-label-per-next-hop label
distribution on the SPE to conserve label resources.
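A sketch of the SPE configuration for an EVPN L3VPN H-VPN (the AS number, UPE client address 1.1.1.9, and NPE address 3.3.3.9 are hypothetical):

```
[~SPE] bgp 100
[*SPE-bgp] l2vpn-family evpn
[*SPE-bgp-af-evpn] peer 1.1.1.9 reflect-client
[*SPE-bgp-af-evpn] peer 1.1.1.9 next-hop-local
[*SPE-bgp-af-evpn] peer 3.3.3.9 next-hop-local
[*SPE-bgp-af-evpn] apply-label per-nexthop
[*SPE-bgp-af-evpn] commit
```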
----End
Context
On a network with an EVPN L3VPN HoVPN and a common L3VPN spliced, the
following configurations must be performed:
● Create VPN instances on UPEs and SPEs and bind the VPN instances to AC
interfaces or bind the IPv6 VPN instances to AC interfaces on the UPEs.
According to relevant standards, the VPN instance status obtained from an NMS is Up
only if at least one interface bound to the VPN instance is Up. On an HoVPN, VPN
instances on SPEs are not bound to interfaces. As a result, the VPN instance status
obtained from an NMS is always Down. To solve this problem, run the transit-vpn
command in the VPN instance view or VPN instance IPv4 address family view of an
SPE. Then, the VPN instance status obtained from the NMS is always Up, regardless of
whether the VPN instance is bound to interfaces.
● For IPv4 services, configure VPN instances and bind them to AC interfaces on
NPEs. For IPv6 services, configure IPv6 VPN instances and bind them to AC
interfaces on NPEs.
● Configure the default static VPN routes on the SPEs. For details, see Creating
IPv4 Static Routes or Creating IPv6 Static Routes.
● For IPv4 services, configure a BGP VPNv4 peer relationship between each
SPE and NPE. This configuration is similar to configuring a BGP VPNv4
peer relationship between PEs on a BGP/MPLS IPv4 VPN. For more
information, see Establishing MP-IBGP Peer Relationships Between PEs. For
IPv6 services, configure a BGP VPNv6 peer relationship between each
SPE and NPE. This configuration is similar to configuring a BGP VPNv6
relationship between PEs on a BGP/MPLS IPv6 VPN. For more information, see
Establishing MP-IBGP Peer Relationships Between PEs.
● Configure BGP-EVPN peer relationships between UPEs and SPEs. For details,
see 3.2.4.5 Configuring a BGP EVPN Peer Relationship.
● Configure SPEs to advertise only default or summarized routes to UPEs. For
details, see 3.2.27.1 Configuring an EVPN L3VPN HoVPN.
● Configure routing protocols for NPEs and UPEs to exchange routes with CEs.
This configuration is similar to configuring PEs and CEs to exchange routes on
a BGP/MPLS VPN. For more information, see Configuring Route Exchange
Between PEs and CEs or Configuring IPv6 Route Exchange Between PEs and
CEs.
● Configure SPEs to advertise re-encapsulated VPNv4 routes to NPEs. For
details, see Procedure.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
Step 3 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 4 Run peer { ipv4-address | group-name } import reoriginate
The SPE is enabled to add the regeneration flag to the routes received from the
UPE.
Step 5 Run quit
Return to the BGP view.
Step 6 Run ipv4-family vpnv4 or ipv6-family vpnv6
The BGP-VPNv4/VPNv6 address family view is displayed.
Step 7 Run peer { ipv4-address | group-name } advertise route-reoriginated evpn { ip |
ipv6 }
The SPE is configured to re-encapsulate the EVPN routes received from the UPE
into BGP VPNv4/VPNv6 routes and then send them to the NPE.
Step 8 Run commit
The configuration is committed.
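For IPv4 services, the splicing configuration on the SPE can be sketched as follows (the AS number, UPE address 1.1.1.9, and NPE address 3.3.3.9 are hypothetical):

```
[~SPE] bgp 100
[*SPE-bgp] l2vpn-family evpn
[*SPE-bgp-af-evpn] peer 1.1.1.9 import reoriginate
[*SPE-bgp-af-evpn] quit
[*SPE-bgp] ipv4-family vpnv4
[*SPE-bgp-af-vpnv4] peer 3.3.3.9 advertise route-reoriginated evpn ip
[*SPE-bgp-af-vpnv4] commit
```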
----End
Context
On a network with an L3VPN HoVPN and an EVPN L3VPN spliced, the following
configurations must be performed:
● Configure VPN instances on SPEs and NPEs and bind the VPN instances to the
NPE interfaces that connect to CEs. For details, see 3.2.5.5 Creating an
L3VPN Instance and Binding It to a VBDIF Interface.
According to relevant standards, the VPN instance status obtained from an NMS is Up
only if at least one interface bound to the VPN instance is Up. On an HoVPN, VPN
instances on SPEs are not bound to interfaces. As a result, the VPN instance status
obtained from an NMS is always Down. To solve this problem, run the transit-vpn
command in the VPN instance view or VPN instance IPv4 address family view of an
SPE. Then, the VPN instance status obtained from the NMS is always Up, regardless of
whether the VPN instance is bound to interfaces.
● Configure VPN instances and bind the VPN instances to AC interfaces on
UPEs.
● Configure BGP-EVPN peer relationships between SPEs and NPEs. For details,
see 3.2.4.5 Configuring a BGP EVPN Peer Relationship.
● Configure a BGP VPNv4 peer relationship between each SPE and UPE. This
configuration is similar to configuring a BGP VPNv4 peer relationship between
PEs on a BGP/MPLS VPN. For more information, see Establishing MP-IBGP
Peer Relationships Between PEs.
● Configure routing protocols for NPEs and UPEs to exchange routes with CEs.
This configuration is similar to configuring PEs and CEs to exchange routes on
a BGP/MPLS IP VPN. For details, see Configuring IPv4 Route Exchange
Between PEs and CEs or Configuring IPv6 Route Exchange Between PEs and
CEs.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
Step 3 Run ipv4-family vpnv4
The BGP-VPNv4 address family view is displayed.
Step 4 Run peer { ipv4-address | group-name } import reoriginate
The SPE is enabled to add the regeneration flag to the routes received from the
UPE.
Step 5 Run quit
Return to the BGP view.
Step 6 Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
Step 7 Run peer { ipv4-address | group-name } advertise route-reoriginated vpnv4
The SPE is configured to re-encapsulate the BGP VPNv4 routes received from the
UPE into EVPN routes and then send them to the NPE.
Step 8 Run commit
The configuration is committed.
----End
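The procedure above can be condensed into the following SPE-side sketch. The AS number, the UPE-facing VPNv4 peer address (10.1.1.1), and the NPE-facing EVPN peer address (10.2.2.2) are illustrative assumptions; both peers are assumed to have already been enabled in their respective address families.

```
system-view
bgp 100
 ipv4-family vpnv4
  peer 10.1.1.1 import reoriginate
 quit
 l2vpn-family evpn
  peer 10.2.2.2 advertise route-reoriginated vpnv4
commit
```

With this configuration, BGP VPNv4 routes learned from the UPE are flagged for regeneration and then re-advertised to the NPE as EVPN routes.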
Prerequisites
All EVPN L3VPN HVPN configurations are complete.
Procedure
● Run the display ip routing-table vpn-instance or display ipv6 routing-table
vpn-instance vpn-instance-name command on a UPE or an NPE to check the
VPN routing table for default or specific routes sent by the remote end.
Usage Scenario
By default, when the EVPN function is deployed on a network to carry Layer 2
multicast services, multicast data packets are broadcast on the network. The
devices that do not need to receive the multicast data packets also receive these
packets, which wastes network bandwidth resources. To resolve this issue, deploy
IGMP snooping over EVPN MPLS. After IGMP snooping over EVPN MPLS is
deployed, IGMP snooping packets are transmitted on the network through EVPN
routes, and multicast forwarding entries are generated on devices. Multicast data
packets from a multicast source are advertised only to the devices that need these
packets, saving network bandwidth resources.
Pre-configuration Tasks
Before configuring IGMP snooping over EVPN MPLS, complete the following task:
● Configure BD-based EVPN functions.
● Configure EVPN E-LAN over mLDP P2MP tunnels.
Procedure
Step 1 Run system-view
IGMP snooping can process IGMPv1, IGMPv2, and IGMPv3 messages. By default,
IGMP snooping can process IGMPv1 and IGMPv2 messages but cannot process
IGMPv3 messages.
● If number is set to 1, only IGMPv1 messages can be processed.
● If number is set to 2, both IGMPv1 and IGMPv2 messages can be processed.
● If number is set to 3, IGMPv1, IGMPv2, and IGMPv3 messages can all be
processed.
----End
Context
The configurations required for various scenarios are as follows:
● A CE is single-homed to a source PE or dual-homed to source PEs on the
multicast source side. In this scenario, no additional configuration is required.
● A CE is single-homed to a receiver PE on the access side. In this scenario, you
must configure IGMP signaling synchronization based on BGP EVPN IGMP
Join Synch routes in a BD.
● A CE is dual-homed to receiver PEs on the access side. In this scenario, you
must configure IGMP signaling synchronization in the BD and also enable the
DF status ignoring function in the BD of the non-DF node.
Procedure
● A CE is single-homed to a receiver PE on the access side.
a. Run system-view
The system view is displayed.
b. Run bridge-domain bd-id
The bridge domain view is displayed.
c. Run igmp-snooping signal-synch enable
IGMP signaling synchronization based on BGP EVPN IGMP Join Synch
routes is enabled in the BD.
d. (Optional) Run evi vpn-target
The EVI-RT extended community attributes of IGMP Join Synch routes are
associated with the BD.
e. Run commit
The configuration is committed.
● A CE is dual-homed to receiver PEs on the access side.
a. Run system-view
The system view is displayed.
b. Run bridge-domain bd-id
The bridge domain view is displayed.
c. Run igmp-snooping signal-synch enable
IGMP signaling synchronization based on BGP EVPN IGMP Join Synch
routes is enabled in the BD.
d. Run igmp-snooping signal-ignore-df enable
The DF status ignoring function is enabled in the BD of the non-DF node.
e. Run evi vpn-target vrfRt [ vrfRtType ]
The EVI-RT extended community attributes of IGMP Join Synch routes are
associated with the BD.
f. Run commit
The configuration is committed.
PIM-SM is enabled.
g. Run igmp enable
IGMP is enabled.
h. Run quit
The EVI-RT extended community attributes of IGMP Join Synch routes are
associated with the BD.
m. Run commit
----End
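For the dual-homed receiver-side case, steps a through f above map to a configuration along the following lines; the BD ID (10) and the EVI RT value (100:1) are illustrative assumptions.

```
system-view
bridge-domain 10
 igmp-snooping signal-synch enable
 igmp-snooping signal-ignore-df enable
 evi vpn-target 100:1
commit
```

The igmp-snooping signal-ignore-df enable command is needed only in the BD of the non-DF node; the single-homed receiver case omits it.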
Prerequisites
IGMP snooping over EVPN MPLS has been configured.
Procedure
● Run the display bgp evpn { vpn-instance evpn-name | route-distinguisher
route-distinguisher } routing-table { smet-route | join-route } [ prefix ] or
display bgp evpn all routing-table [ peer peer-address advertised-routes ]
{ smet-route | join-route } [ prefix ] command to check information about
BGP EVPN SMET or IGMP Join routes.
----End
Context
On the network shown in Figure 3-142, area A is a newly built network deployed
with the EVPN L3VPN over SRv6 service, and area B is the legacy network
deployed with the common L3VPN over MPLS service. To ensure normal
communication between the two networks, splice the EVPN L3VPN over SRv6 with
the common L3VPN over MPLS on border leaves.
Figure 3-142 Networking of splicing an EVPN L3VPN over SRv6 with a common
L3VPN over MPLS
Pre-configuration Tasks
Before splicing an EVPN L3VPN over SRv6 with a common L3VPN over MPLS,
complete the following tasks:
● On the network in area A, configure EVPN L3VPN over SRv6 BE. For details,
see 2.2.5 Configuring EVPN L3VPN over SRv6 BE.
● On the network in area B, Configuring a Basic BGP/MPLS IP VPN or
Configuring a Basic BGP/MPLS IPv6 VPN.
Procedure
Step 1 Configure a border leaf to advertise the routes that are re-originated in the BGP-
EVPN address family to a BGP VPNv4/VPNv6 peer (PE).
1. Run system-view
The system view is displayed.
2. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
3. Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
4. Run peer ipv6-address import reoriginate
The function to re-originate the routes received from a specified BGP EVPN
IPv6 peer is enabled.
5. Run quit
The BGP view is displayed.
6. Run ipv4-family vpnv4 or ipv6-family vpnv6
The BGP-VPNv4/VPNv6 address family view is displayed.
7. Run peer { ipv4-address | group-name } advertise route-reoriginated evpn
{ ip | ipv6 }
The function to advertise the routes re-originated in the BGP-EVPN address
family to a BGP VPNv4/VPNv6 peer or peer group is enabled.
After this step is performed, the border leaf re-originates the SRv6-
encapsulated EVPN IPv4/IPv6 prefix routes that are received from access leaf
nodes and then advertises the MPLS-encapsulated BGP VPNv4/VPNv6 routes
to the BGP VPNv4/VPNv6 peer (PE).
Step 2 Configure the border leaf to advertise the routes that are re-originated in the BGP-
VPNv4/VPNv6 address family to a BGP EVPN peer (access leaf).
1. Run peer { ipv4-address | group-name } import reoriginate
The device is enabled to re-originate the routes received from a BGP VPNv4
peer.
2. Run quit
The BGP view is displayed.
3. Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
4. Run peer ipv6-address advertise route-reoriginated { vpnv4 | vpnv6 }
The border leaf is enabled to advertise the routes re-originated in the BGP
VPNv4/VPNv6 address family to the BGP VPN IPv6 peer.
After this step is performed, the border leaf re-originates the MPLS-
encapsulated VPNv4/VPNv6 routes that are received from the BGP VPNv4/
VPNv6 peer (PE) and then advertises SRv6-encapsulated EVPN routes to the
BGP EVPN peer (access leaf).
5. Run quit
Step 3 Enable EVPN route advertisement in the BGP-VPN instance address family
view.
1. Run ipv4-family vpn-instance vpn-instance-name or ipv6-family vpn-
instance vpn-instance-name
The BGP-VPN instance IPv4 or IPv6 address family view is displayed.
2. Run advertise l2vpn evpn
The device is enabled to advertise the host IPv4/IPv6 routes in the VPN
instance as EVPN IPv4/IPv6 prefix routes.
----End
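Steps 1 through 3 can be sketched as the following border-leaf configuration. The AS number, the access-leaf EVPN IPv6 peer address (2001:DB8::1), the PE-side VPNv4 peer address (10.1.1.1), and the VPN instance name (vpnA) are illustrative assumptions; advertise l2vpn evpn is assumed to be the command behind Step 3, consistent with its use elsewhere in this chapter.

```
system-view
bgp 100
 l2vpn-family evpn
  peer 2001:DB8::1 import reoriginate
 quit
 ipv4-family vpnv4
  peer 10.1.1.1 advertise route-reoriginated evpn ip
  peer 10.1.1.1 import reoriginate
 quit
 l2vpn-family evpn
  peer 2001:DB8::1 advertise route-reoriginated vpnv4
 quit
 ipv4-family vpn-instance vpnA
  advertise l2vpn evpn
commit
```

The first import/advertise pair handles the SRv6-to-MPLS direction; the second pair handles the reverse MPLS-to-SRv6 direction.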
Context
In Figure 3-143, network A is newly deployed and runs the EVPN L3VPN over
SRv6 TE Policy service. Network B is a legacy network and runs the common
L3VPN over MPLS service. Alternatively, network A runs the EVPN L3VPNv6 over
SRv6 TE Policy service, and network B runs the common L3VPNv6 over MPLS
service. To ensure that these two types of networks can communicate with each
other and services are running properly, configure each border leaf node to splice
the EVPN L3VPN over SRv6 TE Policy with the common L3VPN over MPLS.
Figure 3-143 Networking diagram for configuring a border leaf node to splice an
EVPN L3VPN over SRv6 TE Policy with a common L3VPN over MPLS
Pre-configuration Tasks
Before configuring a border leaf node to splice an EVPN L3VPN over SRv6 TE
Policy with a common L3VPN over MPLS, complete the following tasks:
● Configure EVPN L3VPN over SRv6 TE Policy or 2.2.13 Configuring EVPN
L3VPNv6 over SRv6 TE Policy on the network that contains area A.
● Configure a basic BGP/MPLS IPv4 VPN and configure a basic BGP/MPLS IPv6
VPN on network B.
Procedure
Step 1 Configure each border leaf node to advertise the re-originated BGP-EVPN address
family routes to a specified VPNv4 or VPNv6 peer (PE) or peer group.
1. Run system-view
The system view is displayed.
2. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
3. Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
4. Run peer ipv6-address import reoriginate
The border leaf node is configured to re-originate the routes received from a
specified BGP EVPN IPv6 peer.
5. Run quit
Return to the BGP view.
6. Run ipv4-family vpnv4 or ipv6-family vpnv6
The BGP-VPNv4 or BGP-VPNv6 address family view is displayed.
7. Run peer { ipv4-address | group-name } advertise route-reoriginated evpn
{ ip | ipv6 }
The function to advertise the routes re-originated in the BGP-EVPN address
family to a BGP VPNv4/VPNv6 peer or peer group is enabled.
----End
● Run the display bgp vpnv4 all routing-table command to check information
about VPNv4 routes re-originated after the border leaf node receives EVPN IP
prefix routes encapsulated in SRv6 packets from access leaf nodes.
● Run the display bgp vpnv6 all routing-table command to check information
about VPNv6 routes re-originated after the border leaf node receives EVPN IP
prefix routes encapsulated in SRv6 packets from access leaf nodes.
Background
To meet the requirements of cross-region operation, user access, and inter-city
disaster recovery that arise during enterprise development, an increasing number
of enterprises have deployed data centers in multiple regions and across carrier
networks. Currently, leased fibers or leased lines are commonly used to
interconnect cross-region data centers, causing the following disadvantages:
● For enterprises, leased fibers or leased lines are costly.
● For carriers, service exploration is difficult, and resource utilization is low.
To cope with these disadvantages, a DCI network that is characterized by high
security and reliability and flexible scheduling needs to be constructed and
operated. Data Center Interconnect (DCI) provides solutions to interconnect data
centers. Using Virtual Extensible Local Area Network (VXLAN), Ethernet Virtual
Private Network (EVPN), and BGP/MPLS IP VPN technologies, DCI solutions allow
packets that are exchanged between data centers to be transmitted securely and
reliably over carrier networks, allowing VMs in different data centers to
communicate with each other.
Pre-configuration Tasks
Before configuring DCI functions, configure MPLS tunnels over the DCI backbone
network.
Context
GWs and DCI-PEs are separately deployed. DCI-PEs function as edge devices on
the underlay network and ensure VTEPs in data centers are reachable through
routes, without saving data center tenant and host information.
In Figure 3-145, data center gateways GW1 and GW2 are connected to the
backbone network. BGP/MPLS IP VPN functions are deployed on the DCI
backbone network to transmit VTEP IP information between GW1 and GW2. A
VXLAN tunnel is established between GW1 and GW2 for inter-data center E2E
VXLAN packet encapsulation and VM communication.
Figure 3-145 Configuring a DCI Scenario with an E2E VXLAN EVPN Deployed on a
Gateway
Procedure
Step 1 Configure basic L3VPN functions on the DCI backbone network. For configuration
details, see Configuring a Basic BGP/MPLS IP VPN.
Step 2 Establish a VXLAN tunnel to GW1 on GW2. For configuration details, see
Configuring Device-specific VXLAN.
----End
Context
An underlay VLAN can access a DCI network through a Layer 3 gateway when
traditional DCs are connected through the DCI network.
GWs and DCI-PEs are separately deployed. Each DCI-PE considers the GW of a
data center as a CE, uses a Layer 3 VPN routing protocol to receive VM host routes
from the data center, and saves and maintains the routes.
If VXLAN is deployed in the data center, the solution of Underlay VLAN Layer 3
access to DCI can be used. In Figure 3-146, VXLAN tunnels are established within
Procedure
Step 1 Configure basic L3VPN functions on the DCI backbone network. For configuration
details, see Configuring a Basic BGP/MPLS IP VPN.
Step 2 Configure a dot1q VLAN tag termination sub-interface and bind the sub-interface
to a VPN instance.
1. Run interface interface-type interface-number.subinterface-number
----End
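Step 2 might look as follows, assuming sub-interface GigabitEthernet0/1/0.1, VLAN 10, VPN instance vpnA, and address 10.1.1.1/24; the dot1q termination vid and arp broadcast enable commands are the usual companions of a dot1q VLAN tag termination sub-interface but are not spelled out in this excerpt.

```
system-view
interface GigabitEthernet0/1/0.1
 ip binding vpn-instance vpnA
 dot1q termination vid 10
 arp broadcast enable
 ip address 10.1.1.1 255.255.255.0
commit
```

Note that the sub-interface must be bound to the VPN instance before an IP address is configured on it.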
Context
GWs and DCI-PEs are separately deployed. EVPN is used as a control plane
protocol to dynamically establish VXLAN tunnels. A DCI-PE runs EVPN to learn
VM host IP routes from a DC and uses VPNv4/VPNv6 to send the received host
IP routes to the peer DCI-PE, so that packets of VM hosts can be forwarded at
Layer 3.
In Figure 3-147, data center gateway devices GW1 and GW2 are connected to the
DCI backbone network. To allow inter-data center VM communication, BGP/MPLS
IPv4/IPv6 VPN functions are deployed on the DCI backbone network. In addition,
EVPN and a VXLAN tunnel are deployed between the GW and DCI-PE to transmit
VM host routes so that VMs in different data centers can communicate with each
other.
Figure 3-147 Configuring a DCI scenario with a VXLAN EVPN L3VPN accessing a
common L3VPN
Procedure
Step 1 Configure a VXLAN tunnel between each DCI PE and the corresponding GW. For
configuration details, see Configuring VXLAN.
Step 2 Configure basic L3VPN functions on the DCI backbone network. For configuration
details, see Configuring a Basic BGP/MPLS IP VPN or Configuring a Basic BGP/
MPLS IPv6 VPN.
Step 3 Configure the DCI-PE to send the routes that are regenerated in the EVPN address
family to a VPNv4/VPNv6 peer.
1. Run bgp as-number
The BGP view is displayed.
2. Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
3. Run peer { ipv4-address | group-name } import reoriginate
The DCI-PE is configured to add the regeneration flag to the routes received
from a BGP EVPN peer.
4. Run quit
The BGP view is displayed.
5. Run ipv4-family vpnv4 or ipv6-family vpnv6
The BGP-VPNv4/VPNv6 address family view is displayed.
6. Run peer { ipv4-address | group-name } advertise route-reoriginated evpn
{ mac-ip | ip | mac-ipv6 | ipv6 }
The DCI-PE is configured to send the routes that are regenerated in the EVPN
address family to a VPNv4/VPNv6 peer.
After the peer { ipv4-address | group-name } advertise route-reoriginated
evpn { mac-ip | ip | mac-ipv6 | ipv6 } command is run and EVPN routes that
are received from the DC side and carry the VXLAN encapsulation attribute
are regenerated on the DCI-PE, the DCI-PE advertises VPNv4/VPNv6 routes
that carry the MPLS encapsulation attribute to the VPNv4/VPNv6 peer on the
DCI backbone network.
Step 4 Configure the DCI-PE to send the routes that are regenerated in the VPNv4/VPNv6
address family to a BGP EVPN peer.
1. Run bgp as-number
The BGP view is displayed.
2. Run ipv4-family vpnv4 or ipv6-family vpnv6
The BGP-VPNv4/VPNv6 address family view is displayed.
3. Run peer { ipv4-address | group-name } import reoriginate
The DCI-PE is configured to add the regeneration flag to the routes received
from a VPNv4/VPNv6 peer.
4. Run quit
The BGP view is displayed.
----End
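Steps 3 and 4 together give the DCI-PE the following shape. The AS number, the GW-facing EVPN peer (10.3.3.3), and the remote-DCI-PE-facing VPNv4 peer (10.4.4.4) are illustrative assumptions, and the final advertise route-reoriginated vpnv4 command (the mirror of Step 3, cut off in this excerpt) is inferred from the parallel procedures earlier in this chapter.

```
system-view
bgp 100
 l2vpn-family evpn
  peer 10.3.3.3 import reoriginate
 quit
 ipv4-family vpnv4
  peer 10.4.4.4 advertise route-reoriginated evpn ip
  peer 10.4.4.4 import reoriginate
 quit
 l2vpn-family evpn
  peer 10.3.3.3 advertise route-reoriginated vpnv4
commit
```

In effect, VXLAN-encapsulated EVPN routes from the DC side are regenerated as MPLS-encapsulated VPNv4 routes toward the backbone, and vice versa.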
Context
A VXLAN tunnel can be established in each DC to implement interworking
between VMs within the DC. To achieve Layer 2 or Layer 3 service communication
between VMs in different DCs, associate Ethernet sub-interfaces with VLANs on
PEs in the DCI backbone network, create an L3VPN or EVPN instance, and enable
the EVPN IRB function. Such a network can be deployed in either of the following
modes:
● Centralized deployment mode: As shown in Figure 3-148, the DC gateway
and the PE on the DCI backbone network are the same device (DCI-PE-GW).
Specifically, the PE also functions as the DC gateway to access the DC
network.
Pre-configuration Tasks
Before configuring a DCI scenario with a VLAN base accessing an MPLS EVPN IRB,
ensure that routes on the IPv4 network are reachable.
Procedure
Step 1 Configure BGP EVPN peers.
If a BGP RR needs to be configured on the network, establish BGP EVPN peer relationships
between all the PEs and the RR.
1. Run bgp { as-number-plain | as-number-dot }
BGP is enabled, and the BGP view is displayed.
2. Run peer ipv4-address as-number { as-number-plain | as-number-dot }
The remote PE is specified as the BGP peer.
3. (Optional) Run peer ipv4-address connect-interface interface-type interface-
number [ ipv4-source-address ]
A source interface and a source IP address are specified to set up a TCP
connection between the BGP peers.
VPN targets are configured for the EVPN instance. The export RT of the local
EVPN instance must be the same as the import RT of the remote EVPN
instance. Similarly, the import RT of the local EVPN instance must be the
same as the export RT of the remote EVPN instance.
4. (Optional) Run import route-policy policy-name
To strictly control the import of routes into the EVPN instance, specify an
import route policy to filter routes and set route attributes for routes that
meet the filter criteria.
5. (Optional) Run export route-policy policy-name
To strictly control the export of routes from the EVPN instance, specify an
export route policy to filter the routes to be advertised and set route
attributes for routes that meet the filter criteria.
The maximum number of MAC addresses allowable is set for the EVPN
instance.
Step 5 (Optional) Configure an RR. To minimize the number of BGP EVPN peers on the
network, deploy an RR so that the PEs establish BGP EVPN peer relationships only
with the RR.
1. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
2. Run l2vpn-family evpn
The BGP-EVPN address family view is displayed.
3. Run peer { ipv4-address | group-name } reflect-client
The local device is configured as an RR, and a peer or peer group is specified
as the RR client.
The router where the peer reflect-client command is run functions as the RR,
and the specified peer or peer group functions as a client.
4. (Optional) Run undo reflect between-clients
Route reflection between clients is disabled. If the clients of the RR are fully
meshed, run this command to reduce unnecessary route advertisements.
5. (Optional) Run reflector cluster-id cluster-id
If a cluster has multiple RRs, run this command to set the same cluster ID for
these RRs to prevent routing loops.
----End
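On the RR, the peer and reflector configuration above might look as follows; the AS number, the client address (10.1.1.1), and the source interface are illustrative assumptions, and peer enable in the EVPN address family is assumed to be required as usual for BGP EVPN peers.

```
system-view
bgp 100
 peer 10.1.1.1 as-number 100
 peer 10.1.1.1 connect-interface LoopBack0
 l2vpn-family evpn
  peer 10.1.1.1 enable
  peer 10.1.1.1 reflect-client
commit
```

Each PE then needs only one BGP EVPN peer relationship, with the RR, instead of a full mesh.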
Context
DC-GWs and DCI-PEs are separately deployed, and EVPN is used as the control
plane protocol to establish VXLAN tunnels. A DCI-PE runs EVPN to learn a VM's IP
route from a DC and sends the learned host IP route to the peer DCI-PE through a
BGP EVPN peer relationship to implement Layer 3 service forwarding between
VMs.
On the network shown in Figure 3-150, the DC-GWs GW1 and GW2 are
connected to the DCI backbone network with BGP EVPN configured. After BGP
EVPN peer relationships and VXLAN tunnels are established between the DC-GWs
and the DCI-PEs, host IP routes can be exchanged between different DCs,
implementing communication between VMs in different DCs.
Figure 3-150 Configuring a DCI Scenario with a VXLAN EVPN Accessing an MPLS
EVPN IRB
Pre-configuration Tasks
Before configuring a DCI scenario with a VXLAN EVPN accessing an MPLS EVPN
IRB, ensure Layer 3 route reachability on the IPv4 network.
Procedure
Step 1 Configure an IGP on the DCI backbone network to ensure IP connectivity.
Step 2 Configure VXLAN tunnels on the DCI-PEs destined for the DC-GWs. For
configuration details, see 4.2 VXLAN Configuration.
Step 3 Configure VPN instances to exchange routes with EVPN instances.
For IPv4 services, configure an IPv4 L3VPN instance.
1. Run ip vpn-instance vpn-instance-name
A VPN instance is created, and the VPN instance view is displayed.
2. Run ipv4-family
The IPv4 address family is enabled for the VPN instance, and the VPN
instance IPv4 address family view is displayed.
3. Run route-distinguisher route-distinguisher
An RD is configured for the VPN instance IPv4 address family.
4. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-
extcommunity ]
VPN targets are configured for the VPN instance IPv4 address family to
mutually import routes with the remote PE's L3VPN instance.
5. Run vpn-target vpn-target &<1-8> [ both | export-extcommunity | import-
extcommunity ] evpn
VPN targets are configured for the VPN instance IPv4 address family to
mutually import routes with the local EVPN instance.
6. Run evpn mpls routing-enable
EVPN is enabled to generate and advertise IP prefix routes and IRB routes.
7. (Optional) Run tnl-policy policy-name evpn
EVPN routes that can be imported into the VPN instance IPv4 address family
are associated with a tunnel policy.
8. (Optional) Run import route-policy policy-name evpn
The VPN instance IPv4 address family of the current VPN instance is
associated with an import routing policy to filter routes imported from the
EVPN instance. To control route import more precisely, perform this step to
associate the VPN IPv4 address family with an import routing policy and set
attributes for eligible routes.
9. (Optional) Run export route-policy policy-name evpn
The VPN instance IPv4 address family of the current VPN instance is
associated with an export routing policy to filter routes to be advertised to the
EVPN instance. To control route export more precisely, perform this step to
associate the VPN IPv4 address family with an export routing policy and set
attributes for eligible routes.
10. Run quit
Step 4 On the local DCI-PE, establish a BGP EVPN peer relationship with the
remote DCI-PE, and enable the local DCI-PE to advertise the routes regenerated
in the EVPN address family to the BGP EVPN peer.
1. Run system-view
The system view is displayed.
2. Run bgp { as-number-plain | as-number-dot }
BGP is enabled, and the BGP view is displayed.
3. (Optional) Run router-id ipv4-address
A BGP router ID is configured.
4. Run peer ipv4-address as-number { as-number-plain | as-number-dot }
The remote DCI-PE is specified as the BGP peer.
5. (Optional) Run peer ipv4-address connect-interface interface-type interface-
number [ ipv4-source-address ]
A source interface and a source IP address are specified to set up a TCP
connection between the BGP peers.
– If you want the network to carry both Layer 2 and Layer 3 services,
perform the following configurations:
i. Run the peer { ipv4-address | group-name } advertise route-
reoriginated evpn { mac | mac-ip | ip | mac-ipv6 | ipv6 } command
to configure the device to regenerate EVPN routes and advertise
them to the BGP EVPN peer.
ii. Run the peer { ipv4-address | group-name } advertise { irb | irbv6 }
command to configure the device to advertise IRB/IRBv6 routes.
----End
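Step 3 (the IPv4 case) corresponds to a VPN-instance configuration along these lines; the instance name, RD, and RT values are illustrative assumptions.

```
system-view
ip vpn-instance vpnA
 ipv4-family
  route-distinguisher 100:1
  vpn-target 1:1 both
  vpn-target 2:2 both evpn
  evpn mpls routing-enable
commit
```

The first vpn-target exchanges routes with the remote PE's L3VPN instance, the second (with the evpn keyword) exchanges routes with the local EVPN instance, and evpn mpls routing-enable lets the instance generate and advertise IP prefix and IRB routes.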
Prerequisites
A DCI solution has been configured.
Procedure
● Run the display ip vpn-instance vpn-instance-name command to check brief
information about a specified VPN instance.
● Run the display ip vpn-instance verbose vpn-instance-name command to
check detailed information about a specified VPN instance, including
information in the IPv4 address family of the VPN instance.
● Run the display ip vpn-instance [ vpn-instance-name ] interface command
to view information about the interfaces bound to a specified VPN instance.
● Run the display evpn vpn-instance [ vpn-instance-name ] command to
check EVPN instance information.
● Run the display vxlan tunnel [ tunnel-id ] [ verbose ] command to check
VXLAN tunnel information.
● Run the display evpn mac routing-table { all-evpn-instance | mac-address
mac-address } command to check information about MAC routes of a
specified EVPN instance.
● Run the display bgp evpn peer [ [ ipv4-address ] verbose ] command to
check information about BGP EVPN peers.
● Run the display bgp evpn { all | route-distinguisher route-distinguisher |
vpn-instance vpn-instance-name } routing-table mac-route command to
check MAC route information.
● Run the display bgp vpnv4 { all | route-distinguisher route-distinguisher |
vpn-instance vpn-instance-name } routing-table command to check BGP
VPNv4 route information.
Usage Scenario
The NFVI telco cloud solution uses the data center interconnect (DCI)+DCN
networking. A large amount of mobile phone traffic is sent to virtual unified
gateways (vUGWs) and virtual multiservice engines (vMSEs) on the DCN. After
being processed by the vUGWs and vMSEs, the mobile phone traffic is forwarded
over the DCN to destination devices on the Internet. The destination devices send
traffic to mobile phones in similar ways. To achieve these functions and ensure
traffic load balancing on the DCN, you need to deploy the NFVI distributed
gateway function.
Figure 3-151 shows the networking of an NFVI distributed gateway (SR tunnels).
DC-GWs, which are the border gateways of the DCN, exchange Internet routes
with external devices over PEs. L2GW/L3GW1 and L2GW/L3GW2 are connected to
virtualized network function (VNF) devices. VNF1 and VNF2 that function as
virtualized NEs are deployed to implement the vUGW functions and vMSE
functions, respectively. VNF1 and VNF2 are connected to L2GW/L3GW1 and
L2GW/L3GW2 through interface process units (IPUs).
In NFVI distributed gateway networking (SR tunnels), the number of bridge
domains (BDs) needs to be planned based on the number of network segments
corresponding to each IPU. For example, if the five IP addresses planned for the
five IPUs belong to four network segments, four BDs need to be planned. You
need to configure the BDs and the corresponding VBDIF
interfaces on all DC-GWs and L2GW/L3GWs and bind all the VBDIF interfaces to
the same L3VPN instance. In addition, the following functions need to be deployed
on the DCN:
● Establish BGP VPN peer relationships between VNFs and DC-GWs so that the
VNFs can advertise mobile phone routes (UE IP) to DC-GWs.
● On L2GW/L3GW1 and L2GW/L3GW2, configure static VPN routes with the IP
addresses of VNFs as the destination addresses and the IP addresses of IPUs
as next-hop addresses.
● Establish BGP EVPN peer relationships between any DC-GW and L2GW/L3GW.
L2GW/L3GWs can then flood the static routes destined for VNFs to other
devices based on the BGP EVPN peer relationships. DC-GWs can then
advertise local loopback routes and default routes to L2GW/L3GWs.
● Establish BGP VPNv4/v6 peer relationships between DC-GWs and PEs. DC-
GWs can then advertise mobile phone routes to PEs and receive the Internet
routes advertised by the PEs.
The NFVI distributed gateway function supports both IPv4 and IPv6 services. If a step does
not differentiate IPv4 and IPv6 services, this step applies to both IPv4 and IPv6 services.
Pre-configuration Tasks
Before configuring the NFVI distributed gateway function, complete the following
tasks:
● Ensure that there are reachable routes between PEs and DC-GWs and
between DC-GWs and L2GW/L3GWs.
● Deploy SR tunnels between PEs and DC-GWs and between DC-GWs and
L2GW/L3GWs.
● Configure the BD EVPN function on DC-GWs and L2GW/L3GWs. The
configuration includes creating EVPN instances and L3VPN instances,
establishing BGP EVPN peer relationships, and configuring VBDIF interfaces.
● Deploy the EBGP VPNv4 or EBGP VPNv6 function between PEs and DC-GWs.
● Configure the static routes destined for VNF1 and VNF2 on L2GW/L3GWs by
referring to Static VPN IPv4 Routes or Static VPN IPv6 Routes.
Context
You can configure a tunnel policy or an SR-MPLS BE tunnel-preferred policy to
recurse routes over SR tunnels.
Procedure
● Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs and apply the
tunnel policy. You can choose to recurse routes over SR-MPLS BE or SR-MPLS
TE tunnels. For configuration details, see Configuring and Applying a Tunnel
Policy.
● If both the LDP and SR-MPLS BE tunnels are deployed and users want route
recursion over SR-MPLS BE tunnels, configure an SR-MPLS BE tunnel-preferred
policy on each device.
a. Run system-view
----End
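A tunnel policy that prefers SR-MPLS BE tunnels for route recursion can be sketched as follows; the policy name, VPN instance name, and the tunnel select-seq syntax follow the referenced "Configuring and Applying a Tunnel Policy" task and should be treated as assumptions here.

```
system-view
tunnel-policy p1
 tunnel select-seq sr-lsp load-balance-number 1
quit
ip vpn-instance vpnA
 ipv4-family
  tnl-policy p1
commit
```

Applying tnl-policy p1 in the VPN instance makes VPN routes recurse to SR-MPLS BE tunnels even when LDP tunnels are also available.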
Procedure
Step 1 Configure DC-GWs to advertise default static VPN routes and VPN loopback routes
through EVPN.
1. Run system-view
The system view is displayed.
2. Run interface Loopback interface-number
The loopback interface view is displayed.
3. Run ip binding vpn-instance vpn-instance-name
The loopback interface is bound to an L3VPN instance.
4. (Optional) Run ipv6 enable
IPv6 is enabled on the loopback interface. This step is mandatory when the
loopback interface is configured with an IPv6 address.
5. Configure an IPv4 or IPv6 address for the loopback interface.
– Run the ip address ip-address { mask | mask-length } command to
configure an IPv4 address for the loopback interface.
– Run the ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-
length } command to configure an IPv6 address for the loopback
interface.
6. Run quit
Exit from the loopback interface view.
7. Configure default static VPN routes.
– Run the ip route-static vpn-instance vpn-instance-name 0.0.0.0 { 0.0.0.0
| 0 } { nexthop-address | interface-type interface-number [ nexthop-
address ] } [ tag tag ] command to create a default static VPN IPv4
route.
– Run the ip route-static vpn-instance vpn-instance-name :: 0 { nexthop-
ipv6–address | interface-type interface-number [ nexthop-ipv6-address ] }
[ tag tag ] command to create a default static VPN IPv6 route.
8. Create a route-policy so that default static VPN routes and VPN loopback
routes of the L3VPN instance can be filtered out. For configuration details, see
Configuring a Route-Policy.
9. Run ip vpn-instance vpn-instance-name
The VPN instance view is displayed.
10. Enter the IPv4 or IPv6 address family view of the VPN instance.
– Run the ipv4-family command to enter the IPv4 address family view of
the VPN instance.
– Run the ipv6-family command to enter the IPv6 address family view of
the VPN instance.
VPN loopback routes are imported to the IPv4 or IPv6 address family of the
BGP-VPN instance.
21. Run network { 0.0.0.0 0 | :: 0 }
Default static VPN routes are imported to the IPv4 or IPv6 address family of
the BGP-VPN instance.
22. Run advertise l2vpn evpn
The function to advertise IP routes from the VPN instance to the EVPN
instance is enabled.
23. Run quit
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
24. Run quit
Exit from the BGP view.
Step 2 Establish BGP VPN peer relationships between DC-GWs and VNFs.
1. Run route-policy route-policy-name deny node node
A route-policy that denies all routes is created.
2. Run quit
Exit from the route-policy view.
3. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
4. Enter the IPv4 or IPv6 address family view of a BGP-VPN instance.
– Run the ipv4-family vpn-instance vpn-instance-name command to enter
the IPv4 address family view of a BGP-VPN instance.
– Run the ipv6-family vpn-instance vpn-instance-name command to enter
the IPv6 address family view of a BGP-VPN instance.
5. Run peer { ipv4-address | ipv6-address | group-name } as-number { as-
number-plain | as-number-dot }
A BGP VPN peer relationship is established.
6. Run peer { ipv4-address | ipv6-address | group-name } connect-interface
interface-type interface-number [ ipv4-source-address ]
The source interface and source IP address are specified for the TCP
connection to be set up between BGP peers.
7. Run peer { ipv4-address | ipv6-address | group-name } route-policy route-
policy-name export
The route-policy is applied so that DC-GWs do not advertise BGP VPN routes
to VNFs. This prevents route loops.
8. Run quit
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
Step 3 Configure DC-GWs to generate ARP or ND entries for Layer 2 forwarding based on
the ARP or ND information in EVPN routes.
1. Run interface vbdif bd-id
The VBDIF interface view is displayed.
2. Run anycast-gateway enable
The distributed gateway function is enabled.
3. Configure DC-GWs to generate ARP or ND entries for Layer 2 forwarding
based on the ARP or ND information in EVPN routes.
– Run the arp generate-rd-table enable command to allow DC-GWs to
generate ARP entries for Layer 2 forwarding based on the ARP
information in EVPN routes.
– Run the ipv6 nd generate-rd-table enable command to allow DC-GWs
to generate ND entries for Layer 2 forwarding based on the ND
information in EVPN routes.
4. (Optional) Run ipv6 nd dad attempts 0
Duplicate address detection (DAD) is prohibited. This command is mandatory
in IPv6 scenarios to prevent service interruptions that occur when the system
detects that another device's IPv6 address is the same as the VBDIF
interface's IP address.
5. Run quit
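A minimal sketch of this step in an IPv4 scenario (the BD ID 10 and device name DCGW1 are assumed for illustration):
[~DCGW1] interface vbdif 10
[*DCGW1-Vbdif10] anycast-gateway enable
[*DCGW1-Vbdif10] arp generate-rd-table enable
[*DCGW1-Vbdif10] commit
In an IPv6 scenario, run the ipv6 nd generate-rd-table enable command instead, and run the ipv6 nd dad attempts 0 command to disable DAD.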
Step 4 (Optional) Configure the asymmetric mode for IRB routes. If L2GW/L3GWs are
configured to advertise IRB or IRBv6 routes to DC-GWs, the asymmetric IRB
function needs to be configured on DC-GWs.
1. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
----End
Procedure
Step 1 Configure L2GW/L3GWs to generate ARP or ND entries for Layer 2 forwarding
based on the ARP or ND information in EVPN routes.
1. Run system-view
Static routes are imported to the IPv4 or IPv6 address family of the BGP-VPN
instance.
10. Run advertise l2vpn evpn [ import-route-multipath ]
The function to advertise IP routes from the VPN instance to the EVPN
instance is enabled.
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
----End
Procedure
● Configure PEs.
a. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
b. Enter the IPv4 or IPv6 address family view of a BGP-VPN instance.
Prerequisites
The NFVI distributed gateway function has been configured.
Procedure
Step 1 Run the display bgp { vpnv4 | vpnv6 } vpn-instance vpn-instance-name peer
command on DC-GWs to check whether the VPN peer relationships have been
established between DC-GWs and VNFs.
Step 2 Run the display bgp vpnv4 vpn-instance vpn-instance-name routing-table or
display bgp vpnv6 vpn-instance vpn-instance-name routing-table command on
DC-GWs to check whether DC-GWs can receive mobile phone routes from VNFs
and whether the next-hop addresses of the mobile phone routes are VNF
addresses.
Step 3 Run the display ip routing-table vpn-instance vpn-instance-name or display
ipv6 routing-table vpn-instance vpn-instance-name command on DC-GWs to
check whether the VPN routing tables of DC-GWs include mobile phone routes
and whether the outbound interfaces of the mobile phone routes are VBDIF
interfaces.
----End
Example
If the configuration succeeds, the display bgp vpnv4 vpn-instance vpn-instance-
name peer or display bgp vpnv6 vpn-instance vpn-instance-name peer
command output shows that VPN peer relationships have been established
between DC-GWs and VNFs.
# In IPv4 NFVI distributed gateway scenarios:
<DCGW> display bgp vpnv4 vpn-instance vpn1 peer
# In IPv6 NFVI distributed gateway scenarios (in this example, the destination IPv6
address of mobile phone routes is 2001:DB8:10::10):
<DCGW> display bgp vpnv6 vpn-instance vpn1 routing-table
Destination : :: PrefixLength : 0
NextHop : :: Preference : 60
Cost :0 Protocol : Static
RelayNextHop : :: TunnelID : 0x0
Interface : NULL0 Flags : DB
Usage Scenario
The NFVI telco cloud solution uses the DCI+DCN networking. A large amount of
mobile phone traffic is sent to vUGWs and vMSEs on the DCN. After being
processed by the vUGWs and vMSEs, the mobile phone traffic is forwarded over
the DCN to destination devices on the Internet. The destination devices send
traffic to mobile phones in similar ways. To achieve these functions and ensure
traffic load balancing on the DCN, you need to deploy the NFVI distributed
gateway function.
Figure 3-152 shows the networking of an NFVI distributed gateway (BGP VPNv4/
VPNv6 over E2E SR tunnels). DC-GWs, which are the border gateways of the DCN,
exchange Internet routes with external devices through PEs. L2GW/L3GW1 and
L2GW/L3GW2 are connected to VNFs. VNF1 and VNF2, which function as virtualized
NEs, are deployed to implement the vUGW functions and vMSE functions,
respectively. VNF1 and VNF2 are each connected to L2GW/L3GW1 and L2GW/
L3GW2 through IPUs.
In NFVI distributed gateway networking (BGP VPNv4/VPNv6 over E2E SR tunnels),
the number of BDs needs to be planned based on the number of network
segments corresponding to each IPU. For example, if the four IP addresses
planned for the four IPUs belong to four different network segments, four BDs
need to be planned. You need to configure the BDs and the
corresponding VBDIF interfaces on all L2GW/L3GWs and bind all the VBDIF
interfaces to the same L3VPN instance. In addition, the following functions need
to be deployed on the DCN:
● Establish BGP VPN peer relationships between VNFs and DC-GWs so that the
VNFs can advertise mobile phone routes (UE IP) to DC-GWs.
● On L2GW/L3GW1 and L2GW/L3GW2, configure static VPN routes with the IP
addresses of VNFs as the destination addresses and the IP addresses of IPUs
as next-hop addresses.
● Establish BGP EVPN peer relationships between any DC-GW and L2GW/L3GW.
The DC-GW can then advertise local loopback routes and default routes to
the L2GW/L3GW. A route-policy needs to be configured on DC-GWs so that
the routes sent by DC-GWs to L2GW/L3GWs carry gateway addresses and the
next hops of the mobile phone routes received by L2GW/L3GWs from DC-
GWs are the VNF addresses. In addition, the BGP EVPN peer relationships
established between DC-GWs and L2GW/L3GWs can be used to advertise the
routes carrying the IP addresses used for establishing BGP VPN peer
relationships, and the BGP EVPN peer relationships established between
L2GW/L3GWs can be used to synchronize the MAC or ARP routes and the IP
prefix routes carrying gateway addresses with IPUs.
● Deploy EVPN RRs, each of which can be either a standalone device or a DC-GW. In this
section, DC-GWs function as EVPN RRs, and L2GW/L3GWs function as RR
clients. L2GW/L3GWs can use EVPN RRs to synchronize MAC or ARP routes as
well as the IP prefix routes carrying a VNF address as the destination address
with IPUs.
● Establish BGP VPNv4/v6 peer relationships between L2GW/L3GWs and PEs.
L2GW/L3GWs advertise mobile phone routes to PEs based on BGP VPNv4/v6
peer relationships. DC-GWs send mobile phone routes to L2GW/L3GWs based
on BGP EVPN peer relationships. Therefore, mobile phone routes need to be
re-encapsulated as BGP VPNv4/v6 routes on L2GW/L3GWs before being
advertised to PEs.
● Configure static default routes on PEs and configure the PEs to send static
default routes to L2GW/L3GWs based on BGP VPNv4/v6 peer relationships.
● Deploy SR tunnels between PEs and L2GW/L3GWs and between DC-GWs and
L2GW/L3GWs to carry service traffic.
● The traffic transmitted between mobile phones and the Internet over VNFs is
north-south traffic. The traffic transmitted between VNF1 and VNF2 is east-
west traffic. To achieve load balancing of east-west traffic and north-south
traffic, deploy the load balancing function on DC-GWs and L2GW/L3GWs.
The NFVI distributed gateway function supports both IPv4 and IPv6 services. If a step does
not differentiate IPv4 and IPv6 services, this step applies to both IPv4 and IPv6 services.
Pre-configuration Tasks
Before configuring the NFVI distributed gateway function, complete the following
tasks:
● Ensure that routes are reachable between PEs and DC-GWs and between
DC-GWs and L2GW/L3GWs.
● Deploy SR tunnels between PEs and L2GW/L3GWs and between DC-GWs and
L2GW/L3GWs.
● Configure the BD EVPN function on DC-GWs and L2GW/L3GWs. The
configuration includes creating EVPN instances and L3VPN instances,
establishing BGP EVPN peer relationships, and configuring VBDIF interfaces.
On DC-GWs, the configuration involves only creating L3VPN instances and
establishing BGP EVPN peer relationships.
● Deploy L3VPN instances on PEs, and deploy EBGP VPNv4 or EBGP VPNv6
peer relationships between PEs and L2GW/L3GWs.
● Configure the static routes destined for VNF1 and VNF2 on L2GW/L3GWs by
referring to Static VPN IPv4 Routes or Static VPN IPv6 Routes.
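For example, on L2GW/L3GW1, static VPN routes to VNF1 can be sketched as follows (the VNF1 address 5.5.5.5, the IPU next-hop addresses 10.1.1.10 and 10.2.1.10, the instance name vpn1, and the device name L2GW1 are assumed for illustration):
[~L2GW1] ip route-static vpn-instance vpn1 5.5.5.5 32 10.1.1.10
[*L2GW1] ip route-static vpn-instance vpn1 5.5.5.5 32 10.2.1.10
[*L2GW1] commit
Configuring one static route per IPU next hop allows traffic destined for the VNF to be load-balanced.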
Context
You can configure a tunnel policy or an SR-MPLS BE tunnel-preferred policy to
recurse routes over SR tunnels.
Procedure
● Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs and apply the
tunnel policy. You can choose to recurse routes over SR-MPLS BE or SR-MPLS
TE tunnels. For configuration details, see Configuring and Applying a Tunnel
Policy.
● Configure an SR-MPLS BE tunnel-preferred policy on PEs, DC-GWs, and
L2GW/L3GWs so that users can choose to recurse routes over SR-MPLS BE
tunnels.
a. Run system-view
----End
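A tunnel policy that prefers SR-MPLS BE tunnels can be sketched as follows and then applied to the L3VPN instance (the policy name p1, instance name vpn1, and device name PE1 are assumed for illustration; sr-lsp refers to SR-MPLS BE LSPs):
[~PE1] tunnel-policy p1
[*PE1-tunnel-policy-p1] tunnel select-seq sr-lsp load-balance-number 1
[*PE1-tunnel-policy-p1] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] tnl-policy p1
[*PE1-vpn-instance-vpn1-af-ipv4] commit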
Procedure
Step 1 Run system-view
----End
Procedure
Step 1 Configure DC-GWs to advertise the VPN loopback routes and the mobile phone
routes received from VNFs through EVPN.
1. Run system-view
The system view is displayed.
2. Run interface Loopback interface-number
The loopback interface view is displayed.
3. Run ip binding vpn-instance vpn-instance-name
The loopback interface is bound to an L3VPN instance.
4. (Optional) Run ipv6 enable
IPv6 is enabled on the loopback interface. This step is mandatory when the
interface is configured with an IPv6 address.
5. Configure an IPv4 or IPv6 address for the loopback interface.
VPN loopback routes are imported to the IPv4 or IPv6 address family of the
BGP-VPN instance.
21. Run advertise l2vpn evpn
The function to advertise IP routes from the VPN instance to the EVPN
instance is enabled.
22. Run quit
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
23. Run quit
Exit from the BGP view.
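Steps 1 through 5 can be sketched as follows (the loopback interface number 1, instance name vpn1, loopback address 33.33.33.33, and device name DCGW1 are assumed for illustration; the interface must be bound to the VPN instance before the address is configured, because binding clears any existing address):
[~DCGW1] interface LoopBack1
[*DCGW1-LoopBack1] ip binding vpn-instance vpn1
[*DCGW1-LoopBack1] ip address 33.33.33.33 32
[*DCGW1-LoopBack1] commit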
Step 2 Establish BGP VPN peer relationships between DC-GWs and VNFs.
1. Run route-policy route-policy-name deny node node
A route-policy that denies all routes is created.
2. Run quit
Exit from the route-policy view.
3. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
4. Enter the IPv4 or IPv6 address family view of a BGP-VPN instance.
– Run the ipv4-family vpn-instance vpn-instance-name command to enter
the IPv4 address family view of a BGP-VPN instance.
– Run the ipv6-family vpn-instance vpn-instance-name command to enter
the IPv6 address family view of a BGP-VPN instance.
5. Run peer { ipv4-address | ipv6-address | group-name } as-number { as-
number-plain | as-number-dot }
A BGP VPN peer relationship is established.
----End
Procedure
Step 1 Configure L2GW/L3GWs to generate ARP or ND entries for Layer 2 forwarding
based on the ARP or ND information in EVPN routes.
1. Run system-view
The system view is displayed.
2. Run interface vbdif bd-id
A VBDIF interface is created, and the VBDIF interface view is displayed.
3. Configure L2GW/L3GWs to generate ARP or ND entries for Layer 2 forwarding
based on the ARP or ND information in EVPN routes.
– Run the arp generate-rd-table enable command to allow L2GW/L3GWs
to generate ARP entries for Layer 2 forwarding based on the ARP
information in EVPN routes.
Static routes are imported to the IPv4 or IPv6 address family of the BGP-VPN
instance.
----End
Procedure
● Configure PEs.
a. Run bgp { as-number-plain | as-number-dot }
The BGP view is displayed.
b. Enter the IPv4 or IPv6 address family view of a BGP-VPN instance.
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
e. Run l2vpn-family evpn
BGP Add-Path is enabled, and the number of preferred routes that can be
selected is configured.
g. Run peer { ipv4-address | group-name } capability-advertise add-path
{ send | receive | both }
----End
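The Add-Path substeps can be sketched as follows (the AS number 100, peer address 3.3.3.3, path number 2, and device name PE1 are assumed for illustration):
[~PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] bestroute add-path path-number 2
[*PE1-bgp-af-evpn] peer 3.3.3.3 capability-advertise add-path both
[*PE1-bgp-af-evpn] commit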
Prerequisites
The NFVI distributed gateway function has been configured.
Procedure
Step 1 Run the display bgp { vpnv4 | vpnv6 } vpn-instance vpn-instance-name peer
command on DC-GWs to check whether the VPN peer relationships have been
established between DC-GWs and VNFs.
Step 2 Run the display bgp vpnv4 vpn-instance vpn-instance-name routing-table or
display bgp vpnv6 vpn-instance vpn-instance-name routing-table command on
DC-GWs to check whether DC-GWs can receive mobile phone routes from VNFs
and whether the next-hop addresses of the mobile phone routes are VNF
addresses.
Step 3 Run the display ip routing-table vpn-instance vpn-instance-name or display
ipv6 routing-table vpn-instance vpn-instance-name command on DC-GWs to
check whether the VPN routing tables of DC-GWs include mobile phone routes
and whether the outbound interfaces of the mobile phone routes are VBDIF
interfaces.
----End
Example
If the configuration succeeds, the display bgp vpnv4 vpn-instance vpn-instance-
name peer or display bgp vpnv6 vpn-instance vpn-instance-name peer
command output shows that VPN peer relationships have been established
between DC-GWs and VNFs.
# In IPv4 NFVI distributed gateway scenarios:
<DCGW> display bgp vpnv4 vpn-instance vpn1 peer
# In IPv4 NFVI distributed gateway scenarios (in this example, the destination IPv4
address of mobile phone routes is 10.10.10.10):
<DCGW> display bgp vpnv4 vpn-instance vpn1 routing-table
# In IPv6 NFVI distributed gateway scenarios (in this example, the destination IPv6
address of mobile phone routes is 2001:DB8:10::10):
<DCGW> display bgp vpnv6 vpn-instance vpn1 routing-table
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:2:: PrefixLen : 64
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:2::1 PrefixLen : 128
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:3:: PrefixLen : 64
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:3::1 PrefixLen : 128
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:4:: PrefixLen : 64
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:4::1 PrefixLen : 128
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*>i Network : 2001:DB8:5::5 PrefixLen : 128
NextHop : ::FFFF:1.1.1.1 LocPrf : 100
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*i
NextHop : ::FFFF:1.1.1.1 LocPrf : 100
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*>i Network : 2001:DB8:6::6 PrefixLen : 128
NextHop : ::FFFF:1.1.1.1 LocPrf : 100
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*i
NextHop : ::FFFF:2.2.2.2 LocPrf : 100
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*i
NextHop : ::FFFF:2.2.2.2 LocPrf : 100
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:10::10 PrefixLen : 128
NextHop : 2001:DB8:5::5 LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:33::33 PrefixLen : 128
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*>i Network : 2001:DB8:44::44 PrefixLen : 128
Destination : :: PrefixLength : 0
NextHop : :: Preference : 60
Cost :0 Protocol : Static
RelayNextHop : :: TunnelID : 0x0
Interface : NULL0 Flags : DB
Usage Scenario
The NFVI telco cloud solution uses the DCI+DCN networking. A large amount of
mobile phone traffic is sent to vUGWs and vMSEs on the DCN. After being
processed by the vUGWs and vMSEs, the mobile phone traffic is forwarded over
the DCN to destination devices on the Internet. The destination devices send
traffic to mobile phones in similar ways. To achieve these functions and ensure
traffic load balancing on the DCN, you need to deploy the NFVI distributed
gateway function.
Figure 3-153 shows the networking of an NFVI distributed gateway (BGP EVPN
over E2E SR tunnels). DC-GWs, which are the border gateways of the DCN,
exchange Internet routes with external devices through PEs. L2GW/L3GW1 and
L2GW/L3GW2 are connected to VNFs. VNF1 and VNF2, which function as virtualized
NEs, are deployed to implement the vUGW functions and vMSE functions,
respectively. VNF1 and VNF2 are each connected to L2GW/L3GW1 and L2GW/
L3GW2 through IPUs.
In NFVI distributed gateway networking (BGP EVPN over E2E SR tunnels), the
number of BDs needs to be planned based on the number of network segments
corresponding to each IPU. For example, if the four IP addresses planned for
the four IPUs belong to four different network segments, four BDs need to be
planned. You need to configure the BDs and the corresponding VBDIF
interfaces on all L2GW/L3GWs and bind all the VBDIF interfaces to the same
L3VPN instance. In addition, the following functions need to be deployed on the
DCN:
● Establish BGP VPN peer relationships between VNFs and DC-GWs so that the
VNFs can advertise mobile phone routes (UE IP) to DC-GWs.
● On L2GW/L3GW1 and L2GW/L3GW2, configure static VPN routes with the IP
addresses of VNFs as the destination addresses and the IP addresses of IPUs
as next-hop addresses.
● Deploy EVPN RRs, each of which can be either a standalone device or a DC-GW. In this
section, BGP EVPN peer relationships are established between all L2GW/
L3GWs, PEs, and DC-GWs, DC-GWs are deployed as RRs to reflect EVPN
routes, and other devices function as RR clients. The functions of a BGP EVPN
RR are as follows:
– DC-GWs can reflect the mobile phone routes learned by VNFs to L2GW/
L3GWs and PEs so that mobile phone routes can be transmitted outside
the DCN and the traffic sent to mobile phone users can be introduced to
the DCN. Route-policies need to be configured on DC-GWs so that the
mobile phone routes sent by DC-GWs to L2GW/L3GWs and PEs carry the
gateway addresses which are VNFs' loopback addresses.
– DC-GWs receive the IP prefix routes destined for VNFs from an L2GW/
L3GW based on BGP EVPN peer relationships and reflect the IP prefix
routes to PEs and other L2GWs/L3GWs. The EVPN RR can also be used to
synchronize the MAC or ARP routes of IPUs and the IP prefix routes
destined for VNFs between L2GW/L3GWs.
● Configure static default routes on PEs and use the EVPN RRs to reflect the
static default routes to L2GW/L3GWs.
● Deploy SR tunnels between PEs and L2GW/L3GWs and between DC-GWs and
L2GW/L3GWs to carry service traffic.
● The traffic transmitted between mobile phones and the Internet over VNFs is
north-south traffic. The traffic transmitted between VNF1 and VNF2 is east-
west traffic. To achieve load balancing of east-west traffic and north-south
traffic, deploy the load balancing function on DC-GWs and L2GW/L3GWs.
The NFVI distributed gateway function supports both IPv4 and IPv6 services. If a step does
not differentiate IPv4 and IPv6 services, this step applies to both IPv4 and IPv6 services.
Pre-configuration Tasks
Before configuring the NFVI distributed gateway function, complete the following
tasks:
● Ensure that routes are reachable between PEs and DC-GWs and between
DC-GWs and L2GW/L3GWs.
● Deploy SR tunnels between PEs and L2GW/L3GWs and between DC-GWs and
L2GW/L3GWs.
● Configure the BD EVPN function on DC-GWs and L2GW/L3GWs. The
configuration includes creating EVPN instances and L3VPN instances,
establishing BGP EVPN peer relationships, and configuring VBDIF interfaces.
On DC-GWs, the configuration involves only creating L3VPN instances and
establishing BGP EVPN peer relationships.
● Configure the static routes destined for VNF1 and VNF2 on L2GW/L3GWs by
referring to Static VPN IPv4 Routes or Static VPN IPv6 Routes.
Context
You can configure a tunnel policy or an SR-MPLS BE tunnel-preferred policy to
recurse routes over SR tunnels.
Procedure
● Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs and apply the
tunnel policy. You can choose to recurse routes over SR-MPLS BE or SR-MPLS
TE tunnels. For configuration details, see Configuring and Applying a Tunnel
Policy.
● Configure an SR-MPLS BE tunnel-preferred policy on PEs, DC-GWs, and
L2GW/L3GWs so that users can choose to recurse routes over SR-MPLS BE
tunnels.
a. Run system-view
----End
Procedure
Step 1 Run system-view
Step 2 Create a route-policy to filter and modify the advertised and received routes. For
configuration details, see Configuring a Route-Policy. You can run the apply
gateway-ip none or apply ipv6 gateway-ip none command to delete gateway
addresses from the VNF routes received from L2GW/L3GWs so that the routes
sent from PEs to VNFs can be recursed over SR tunnels, instead of being
forwarded based on gateway addresses.
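Such a route-policy can be sketched as follows (the policy name rp-in, node number 10, and device name PE1 are assumed for illustration):
[~PE1] route-policy rp-in permit node 10
[*PE1-route-policy] apply gateway-ip none
[*PE1-route-policy] commit
In IPv6 scenarios, the apply ipv6 gateway-ip none command is used instead.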
Step 5 Enter the IPv4 or IPv6 address family view of the VPN instance.
● Run the ipv4-family command to enter the IPv4 address family view of the
VPN instance.
● Run the ipv6-family command to enter the IPv6 address family view of the
VPN instance.
Exit from the IPv4 or IPv6 address family view of the VPN instance.
Step 10 Enter the IPv4 or IPv6 address family view of a BGP-VPN instance.
● Run the ipv4-family vpn-instance vpn-instance-name command to enter the
IPv4 address family view of a BGP-VPN instance.
● Run the ipv6-family vpn-instance vpn-instance-name command to enter the
IPv6 address family view of a BGP-VPN instance.
Default static VPN routes are imported to the IPv4 or IPv6 address family of the
BGP-VPN instance.
The function to advertise IP routes from the VPN instance to the EVPN instance is
enabled.
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
----End
Procedure
Step 1 Configure DC-GWs to advertise the VPN loopback routes and the mobile phone
routes received from VNFs through EVPN.
1. Run system-view
The system view is displayed.
2. Run interface Loopback interface-number
The loopback interface view is displayed.
3. Run ip binding vpn-instance vpn-instance-name
The loopback interface is bound to an L3VPN instance.
4. (Optional) Run ipv6 enable
IPv6 is enabled on the loopback interface. This step is mandatory when the
loopback interface is configured with an IPv6 address.
5. Configure an IPv4 or IPv6 address for the loopback interface.
– Run the ip address ip-address { mask | mask-length } command to
configure an IPv4 address for the loopback interface.
– Run the ipv6 address { ipv6-address prefix-length | ipv6-address/prefix-
length } command to configure an IPv6 address for the loopback
interface.
6. Run quit
Exit from the loopback interface view.
7. Create a route-policy to filter and modify the advertised and received routes.
For configuration details, see Configuring a Route-Policy. A route-policy must
support the following functions:
– Export route-policy: An if-match clause configured in the route-policy can
be used to filter VPN loopback routes and mobile phone routes in the
L3VPN instance. You can run the apply gateway-ip { origin-nexthop |
ipv4-address } or apply ipv6 gateway-ip { origin-nexthop | ipv6-
address } command to configure the original next-hop address of mobile
phone routes as a gateway address.
– Import route-policy: You can run the apply gateway-ip none or apply
ipv6 gateway-ip none command to delete gateway addresses from the
routes received from L2GW/L3GWs to ensure that the BGP VPN packets
sent by DC-GWs to VNFs can be recursed over SR tunnels, instead of
being forwarded through VBDIF interfaces based on gateway addresses
(DC-GWs do not have VBDIF interfaces).
VPN loopback routes are imported to the IPv4 or IPv6 address family of the
BGP-VPN instance.
The function to advertise IP routes from the VPN instance to the EVPN
instance is enabled.
22. Run quit
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
23. Run quit
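The export and import route-policies described in this procedure can be sketched as follows (the prefix list name phones, the mobile phone route 10.10.10.10/32, the policy names rp-out and rp-in, and the device name DCGW1 are assumed for illustration):
[~DCGW1] ip ip-prefix phones index 10 permit 10.10.10.10 32
[*DCGW1] route-policy rp-out permit node 10
[*DCGW1-route-policy] if-match ip-prefix phones
[*DCGW1-route-policy] apply gateway-ip origin-nexthop
[*DCGW1-route-policy] quit
[*DCGW1] route-policy rp-in permit node 10
[*DCGW1-route-policy] apply gateway-ip none
[*DCGW1-route-policy] commit
rp-out attaches the original next hop of matched mobile phone routes as the gateway address on advertisement; rp-in strips gateway addresses from routes received from L2GW/L3GWs.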
Step 2 Establish BGP VPN peer relationships between DC-GWs and VNFs.
1. Run route-policy route-policy-name deny node node
A route-policy that denies all routes is created.
9. Run quit
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
Step 3 Configure RRs in BGP EVPN so that EVPN routes can be synchronized between
L2GW/L3GWs and the EVPN routes sent by L2GW/L3GWs can be reflected to PEs.
1. Run l2vpn-family evpn
DC-GWs are configured not to modify the next hop during route
advertisement to PEs and L2GW/L3GWs, so that the next-hop address is still
the IP address of an L2GW/L3GW when a PE receives routes from a VNF, and
the next-hop addresses are the IP addresses of PEs when L2GW/L3GWs
receive default routes. In this way, routes are recursed over E2E SR-MPLS TE
tunnels.
----End
Procedure
Step 1 Configure L2GW/L3GWs to generate ARP or ND entries for Layer 2 forwarding
based on the ARP or ND information in EVPN routes.
1. Run system-view
Step 2 Configure an L3VPN instance to advertise the static VPN routes destined for VNFs
to an EVPN instance.
1. Create a route-policy so that the static VPN routes destined for VNFs can be
filtered out in the L3VPN instance. For configuration details, see Configuring a
Route-Policy. During configuration of an apply clause, run the apply
gateway-ip { origin-nexthop | ipv4-address } or apply ipv6 gateway-ip
{ origin-nexthop | ipv6-address } command to specify the original next-hop
address of a static VPN route as a gateway address.
2. Run ip vpn-instance vpn-instance-name
Exit from the IPv4 or IPv6 address family view of the VPN instance.
6. Run quit
Static routes are imported to the IPv4 or IPv6 address family of the BGP-VPN
instance.
10. Run advertise l2vpn evpn [ import-route-multipath ]
The function to advertise IP routes from the VPN instance to the EVPN
instance is enabled. If the load balancing function needs to be deployed on
the network, specify the import-route-multipath parameter so that the VPN
instance can advertise all the routes with the same destination address to the
EVPN instance.
11. (Optional) Run irb asymmetric
The asymmetric IRB mode is enabled. Perform this step so that L2GW/L3GWs
do not generate IP prefix routes. This prevents route loops.
12. Run quit
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
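For example, on an L2GW/L3GW with load balancing deployed (the AS number 100, instance name vpn1, and device name L2GW1 are assumed for illustration):
[~L2GW1] bgp 100
[*L2GW1-bgp] ipv4-family vpn-instance vpn1
[*L2GW1-bgp-vpn1] advertise l2vpn evpn import-route-multipath
[*L2GW1-bgp-vpn1] commit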
----End
Procedure
● Configure PEs.
a. Run bgp { as-number-plain | as-number-dot }
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
e. Run l2vpn-family evpn
● Configure DC-GWs.
a. Run bgp { as-number-plain | as-number-dot }
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
e. Run l2vpn-family evpn
Exit from the IPv4 or IPv6 address family view of the BGP-VPN instance.
Prerequisites
The NFVI distributed gateway function has been configured.
Procedure
Step 1 Run the display bgp { vpnv4 | vpnv6 } vpn-instance vpn-instance-name peer
command on DC-GWs to check whether the VPN peer relationships have been
established between DC-GWs and VNFs.
Step 2 Run the display bgp vpnv4 vpn-instance vpn-instance-name routing-table or
display bgp vpnv6 vpn-instance vpn-instance-name routing-table command on
DC-GWs to check whether DC-GWs can receive mobile phone routes from VNFs
and whether the next-hop addresses of the mobile phone routes are VNF
addresses.
Step 3 Run the display ip routing-table vpn-instance vpn-instance-name or display
ipv6 routing-table vpn-instance vpn-instance-name command on DC-GWs to
check whether the VPN routing tables of DC-GWs include mobile phone routes
and whether the outbound interfaces of the mobile phone routes are VBDIF
interfaces.
----End
Example
If the configuration succeeds, the display bgp vpnv4 vpn-instance vpn-instance-
name peer or display bgp vpnv6 vpn-instance vpn-instance-name peer
command output shows that VPN peer relationships have been established
between DC-GWs and VNFs.
# In IPv4 NFVI distributed gateway scenarios:
<DCGW> display bgp vpnv4 vpn-instance vpn1 peer
*i 2.2.2.2 0 100 0 ?
*> 10.1.1.0/24 0.0.0.0 0 0 ?
*i 5.5.5.5 0 100 0 ?
*> 10.1.1.1/32 0.0.0.0 0 0 ?
*> 10.2.1.0/24 0.0.0.0 0 0 ?
*i 5.5.5.5 0 100 0 ?
*> 10.2.1.1/32 0.0.0.0 0 0 ?
*> 10.3.1.0/24 0.0.0.0 0 0 ?
*> 10.3.1.1/32 0.0.0.0 0 0 ?
*> 10.4.1.0/24 0.0.0.0 0 0 ?
*> 10.4.1.1/32 0.0.0.0 0 0 ?
*>i 10.10.10.10/32 5.5.5.5 0 100 0 ?
*> 33.33.33.33/32 0.0.0.0 0 0 ?
*>i 44.44.44.44/32 9.9.9.9 0 100 0 ?
*> 127.0.0.0/8 0.0.0.0 0 0 ?
# In IPv6 NFVI distributed gateway scenarios (in this example, the destination IPv6
address of mobile phone routes is 2001:DB8:10::10):
<DCGW> display bgp vpnv6 vpn-instance vpn1 routing-table
Path/Ogn : ?
*> Network : 2001:DB8:4:: PrefixLen : 64
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:4::1 PrefixLen : 128
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*>i Network : 2001:DB8:5::5 PrefixLen : 128
NextHop : ::FFFF:1.1.1.1 LocPrf : 100
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*i
NextHop : ::FFFF:1.1.1.1 LocPrf : 100
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*>i Network : 2001:DB8:6::6 PrefixLen : 128
NextHop : ::FFFF:1.1.1.1 LocPrf : 100
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*i
NextHop : ::FFFF:2.2.2.2 LocPrf : 100
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*i
NextHop : ::FFFF:2.2.2.2 LocPrf : 100
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:10::10 PrefixLen : 128
NextHop : 2001:DB8:5::5 LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*> Network : 2001:DB8:33::33 PrefixLen : 128
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
*>i Network : 2001:DB8:44::44 PrefixLen : 128
NextHop : ::FFFF:9.9.9.9 LocPrf : 100
MED :0 PrefVal : 0
Label : 200/NULL
Path/Ogn : ?
*> Network : FE80:: PrefixLen : 10
NextHop : :: LocPrf :
MED :0 PrefVal : 0
Label :
Path/Ogn : ?
Destination : :: PrefixLength : 0
NextHop : :: Preference : 60
Cost :0 Protocol : Static
RelayNextHop : :: TunnelID : 0x0
Interface : NULL0 Flags : DB
Networking Requirements
On the network shown in Figure 3-154, Site 1 and Site 2 reside on Layer 2
networks. To allow Site 1 and Site 2 to communicate over the backbone network,
configure EVPN. Specifically, create EVPN instances on the PEs to store EVPN
routes sent from CEs or remote PEs and configure an RR to reflect EVPN routes. To
ensure high transmission efficiency, configure the All-Active redundancy mode on
PE1 and PE2 to implement load balancing.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0 respectively.
Functions such as rapid convergence, split horizon, and DF election that are required in the
EVPN dual-homing scenario do not take effect in a single-homing scenario. In such a scenario,
configuring an ESI is optional.
Precautions
When configuring a VPN to access a common EVPN E-LAN, note the following:
● On the same EVPN, the export VPN target list of a site shares VPN targets
with the import VPN target lists of the other sites; the import VPN target list
of a site shares VPN targets with the export VPN target lists of the other sites.
● It is recommended that you configure a PE's local loopback interface address
as the EVPN source address.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on the backbone network to allow PEs and the RR to
communicate.
2. Configure basic MPLS functions and MPLS LDP, and establish MPLS LSPs on
the backbone network.
3. Configure an EVPN instance on each PE.
4. Configure an EVPN source address on each PE.
5. Bind each PE interface that connects to a CE to the EVPN instance.
6. Configure an ESI for each PE interface that connects to a CE.
7. Configure BGP EVPN peer relationships between PEs and the RR, and
configure the PEs as RR clients.
8. Configure a redundancy mode on PE1 and PE2.
9. Configure CEs and PEs to communicate.
10. Associate BFD sessions with the AC interfaces on PE1 and PE2 to accelerate
DF switching during an AC link fault.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name (evpna)
● EVPN instance RDs (100:1, 200:1, and 300:1) and RTs (1:1) on PEs
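Using this data, the EVPN instance configuration on PE1 takes the following shape (shown here for orientation; PE2 and PE3 use RDs 200:1 and 300:1 with the same RT):
[~PE1] evpn vpn-instance evpna
[*PE1-evpn-instance-evpna] route-distinguisher 100:1
[*PE1-evpn-instance-evpna] vpn-target 1:1
[*PE1-evpn-instance-evpna] commit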
Procedure
Step 1 Assign an IP address to each interface on the RR and PEs according to Figure
3-154. For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the backbone network to allow PEs and the RR to
communicate. OSPF is used as the IGP in this example.
# Configure PE1.
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit
[~PE1-ospf-1] quit
# Configure PE2.
[~PE2] ospf 1
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] commit
[~PE2-ospf-1-area-0.0.0.0] quit
[~PE2-ospf-1] quit
# Configure PE3.
[~PE3] ospf 1
[*PE3-ospf-1] area 0
[*PE3-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[*PE3-ospf-1-area-0.0.0.0] network 4.4.4.4 0.0.0.0
[*PE3-ospf-1-area-0.0.0.0] commit
[~PE3-ospf-1-area-0.0.0.0] quit
[~PE3-ospf-1] quit
After the configuration is complete, PE1, PE2, and PE3 can establish OSPF
neighbor relationships with the RR. Run the display ospf peer command. The
command output shows that State is Full. Run the display ip routing-table
command. The command output shows that the RR and PEs have learned the
routes to Loopback1 of each other.
The following example uses the command output on PE1.
[~PE1] display ospf peer
OSPF Process 1 with Router ID 10.1.1.1
Neighbors
Step 3 Configure basic MPLS functions, enable MPLS LDP, and establish LDP LSPs on the
MPLS backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] mpls
[*PE1-GigabitEthernet2/0/0] mpls ldp
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 2.2.2.2
[*PE2] mpls
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] mpls
[*PE2-GigabitEthernet2/0/0] mpls ldp
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
# Configure PE3.
[~PE3] mpls lsr-id 4.4.4.4
[*PE3] mpls
[*PE3-mpls] quit
[*PE3] mpls ldp
[*PE3-mpls-ldp] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] mpls
[*PE3-GigabitEthernet1/0/0] mpls ldp
[*PE3-GigabitEthernet1/0/0] commit
[~PE3-GigabitEthernet1/0/0] quit
After the configuration is complete, LDP sessions are established between PEs and
the RR. Run the display mpls ldp session command. The command output shows
that Status is Operational. Run the display mpls ldp lsp command. The
command output shows LDP LSP configurations.
Step 4 Configure an EVPN instance on each PE.
# Configure PE1.
[~PE1] evpn vpn-instance evpna
[*PE1-evpn-instance-evpna] route-distinguisher 100:1
[*PE1-evpn-instance-evpna] vpn-target 1:1
[*PE1-evpn-instance-evpna] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn vpn-instance evpna
[*PE2-evpn-instance-evpna] route-distinguisher 200:1
[*PE2-evpn-instance-evpna] vpn-target 1:1
[*PE2-evpn-instance-evpna] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evpna
[*PE3-evpn-instance-evpna] route-distinguisher 300:1
[*PE3-evpn-instance-evpna] vpn-target 1:1
[*PE3-evpn-instance-evpna] quit
[*PE3] commit
Step 5 Configure an EVPN source address on each PE.
# Configure PE2.
[~PE2] evpn source-address 2.2.2.2
[*PE2] commit
# Configure PE3.
[~PE3] evpn source-address 4.4.4.4
[*PE3] commit
Step 6 Configure an ESI for the PE1 and PE2 interfaces that connect to CE1. (In this
example, the ESI is dynamically generated. For details on how to configure a static
ESI, see Configuring an ESI.)
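The dynamically generated ESI comes from the LACP parameters configured below. RFC 7432 defines a Type 1 ESI as the byte 0x01, followed by the LACP system MAC address (6 bytes), the port key (2 bytes), and a zero pad byte. The ESI 0138.ba56.b790.0201.2100 shown in the verification output later in this section decodes this way (system MAC 38ba.56b7.9002, key 0x0121). The sketch below assumes this Type 1 layout; the helper name and the sample key values are illustrative:

```python
def lacp_auto_esi(system_mac: str, port_key: int) -> str:
    """Build an RFC 7432 Type 1 ESI:
    0x01 | LACP system MAC (6 bytes) | port key (2 bytes) | 0x00 pad."""
    mac_bytes = bytes.fromhex(system_mac.replace("-", "").replace(".", ""))
    assert len(mac_bytes) == 6, "LACP system MAC must be 6 bytes"
    esi = bytes([0x01]) + mac_bytes + port_key.to_bytes(2, "big") + b"\x00"
    # Render the 10-byte ESI in the dotted 5-group format used in display output.
    hexstr = esi.hex()
    return ".".join(hexstr[i:i + 4] for i in range(0, 20, 4))

# The ESI seen in this section's verification output decodes as:
print(lacp_auto_esi("38ba.56b7.9002", 0x0121))  # 0138.ba56.b790.0201.2100
# With the configured E-Trunk system ID and a hypothetical key of 1:
print(lacp_auto_esi("00E0-FC00-0000", 0x0001))  # 0100.e0fc.0000.0000.0100
```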
# Configure PE1.
[~PE1] lacp e-trunk priority 1
[*PE1] lacp e-trunk system-id 00E0-FC00-0000
[*PE1] e-trunk 1
[*PE1-e-trunk-1] priority 10
[*PE1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
[*PE1-e-trunk-1] quit
[*PE1] interface eth-trunk 10
[*PE1-Eth-Trunk10] mode lacp-static
[*PE1-Eth-Trunk10] e-trunk 1
[*PE1-Eth-Trunk10] quit
[*PE1] commit
# Configure PE2.
[~PE2] lacp e-trunk priority 1
[*PE2] lacp e-trunk system-id 00E0-FC00-0000
[*PE2] e-trunk 1
[*PE2-e-trunk-1] priority 10
[*PE2-e-trunk-1] peer-address 1.1.1.1 source-address 2.2.2.2
[*PE2-e-trunk-1] quit
[*PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] mode lacp-static
[*PE2-Eth-Trunk10] e-trunk 1
[*PE2-Eth-Trunk10] quit
[*PE2] commit
Step 7 Bind each PE interface that connects to a CE to the EVPN instance.
# Configure PE2.
[~PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] e-trunk mode force-master
[*PE2-Eth-Trunk10] evpn binding vpn-instance evpna
[*PE2-Eth-Trunk10] commit
[~PE2-Eth-Trunk10] quit
[~PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] eth-trunk 10
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
# Configure PE3.
[~PE3] interface gigabitethernet 2/0/0
[*PE3-GigabitEthernet2/0/0] evpn binding vpn-instance evpna
[*PE3-GigabitEthernet2/0/0] commit
[~PE3-GigabitEthernet2/0/0] quit
Step 8 Configure BGP EVPN peer relationships between PEs and the RR, and configure
the PEs as RR clients.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 3.3.3.3 as-number 100
[*PE2-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 3.3.3.3 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] peer 3.3.3.3 as-number 100
[*PE3-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 3.3.3.3 enable
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
After the configuration is complete, run the display bgp evpn peer command on
the RR. The command output shows that BGP peer relationships have been
established between the PEs and RR and are in the Established state.
[~RR] display bgp evpn peer
BGP local router ID : 3.3.3.3
Local AS number : 100
Total number of peers : 3 Peers in established state : 3
If PE1 and PE2 work in Single-Active redundancy mode, GE 1/0/0 and GE 2/0/0 on CE1
must join different Eth-Trunk interfaces.
# Configure CE2.
[~CE2] interface Eth-Trunk1
[*CE2-Eth-Trunk1] quit
[*CE2] interface gigabitethernet1/0/0
[*CE2-GigabitEthernet1/0/0] eth-trunk 1
[*CE2-GigabitEthernet1/0/0] commit
[~CE2-GigabitEthernet1/0/0] quit
Step 10 Associate BFD sessions with the AC interfaces on PE1 and PE2 to accelerate DF
switching during an AC link fault.
# Configure PE1.
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] bfd bfd1 bind peer-ip 2.2.2.2 track-interface interface Eth-Trunk10
[*PE1-bfd-session-bfd1] discriminator local 10
[*PE1-bfd-session-bfd1] discriminator remote 20
[*PE1-bfd-session-bfd1] quit
[*PE1] interface eth-trunk 10
[*PE1-Eth-Trunk10] es track bfd bfd1
[*PE1-Eth-Trunk10] quit
[*PE1] commit
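Note the mirrored discriminators: PE1 uses local 10/remote 20, so PE2 (configured next) must use local 20/remote 10, or the static BFD session never comes up. A minimal check of this pairing rule (the dictionary structure is illustrative):

```python
def discriminators_cross_match(a: dict, b: dict) -> bool:
    """A static BFD session is established only if each end's local
    discriminator equals the peer's remote discriminator, and vice versa."""
    return a["local"] == b["remote"] and a["remote"] == b["local"]

pe1_bfd1 = {"local": 10, "remote": 20}   # bfd1 on PE1
pe2_bfd1 = {"local": 20, "remote": 10}   # bfd1 on PE2
print(discriminators_cross_match(pe1_bfd1, pe2_bfd1))  # True
```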
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] bfd bfd1 bind peer-ip 1.1.1.1 track-interface interface Eth-Trunk10
[*PE2-bfd-session-bfd1] discriminator local 20
[*PE2-bfd-session-bfd1] discriminator remote 10
[*PE2-bfd-session-bfd1] quit
[*PE2] interface eth-trunk 10
EVPN-Instance evpna:
Number of Mac Routes: 2
Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr) NextHop
*>i 1:48:00e0-fc12-3456:0:0.0.0.0 1.1.1.1
*i 2.2.2.2
Run the display bgp evpn all routing-table mac-route command on PE3. The
command output shows that MAC/IP advertisement routes destined for CE1 work
in load balancing mode.
[~PE3] display bgp evpn all routing-table mac-route 1:48:00e0-fc12-3456:0:0.0.0.0
BGP local router ID : 4.4.4.4
Local AS number : 100
EVPN-Instance evpna:
BGP routing table entry information of 1:48:00e0-fc12-3456:0:0.0.0.0:
Remote-Cross route
Label information (Received/Applied): 32831/NULL
From: 3.3.3.3 (3.3.3.3)
Route Duration: 0d00h26m51s
Relay Tunnel Out-Interface: LDP LSP
Original nexthop: 1.1.1.1
Qos information : 0x0
Ext-Community:RT <1 : 1>
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255
Originator: 10.1.1.1
Cluster list: 3.3.3.3
Route Type: 2 (MAC Advertisement Route)
Ethernet Tag ID: 1, MAC Address/Len: 00e0-fc12-3456/48, IP Address/Len: 0.0.0.0/0, ESI:
0138.ba56.b790.0201.2100
Not advertised to any peer yet
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
lacp e-trunk system-id 00e0-fc00-0000
lacp e-trunk priority 1
#
evpn vpn-instance evpna
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
bfd
#
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
evpn binding vpn-instance evpna
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.3.1.0 0.0.0.255
#
evpn source-address 4.4.4.4
#
return
● RR configuration file
#
sysname RR
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls ldp
#
ipv4-family
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
bgp 100
Networking Requirements
On the network shown in Figure 3-155, Site 1 and Site 2 reside on Layer 2
networks. To allow Site 1 and Site 2 to communicate over the backbone network,
configure EVPN functions. Specifically, create EVPN instances on the PEs to store
EVPN routes received from CEs or remote PEs, and configure a route reflector (RR)
to reflect EVPN routes. To load-balance BUM traffic over the links between CE1
and PE1 and between CE1 and PE2, configure Eth-Trunk sub-interfaces on PE1
and PE2 to connect to CE1, and then configure VLAN-based DF election.
To improve EVPN reliability, also configure the following functions in this
example:
● Function that the AC status influences DF election
● Delay after which ES routes are advertised
● Per-ES AD route division based on RTs
Interfaces 1 through 3 in this example represent Ethernet 1/0/0, Ethernet 2/0/0, and Ethernet
3/0/0, respectively.
In a single-homing scenario, fast convergence, split horizon, and DF election required in a dual-
homing scenario do not take effect. Therefore, configuring an ESI in a single-homing scenario is
optional.
Precautions
When configuring Eth-Trunk sub-interfaces to access a common EVPN E-LAN in
active-active mode, note the following:
● On the same EVPN, the export VPN target list of a site shares VPN targets
with the import VPN target lists of the other sites; the import VPN target list
of a site shares VPN targets with the export VPN target lists of the other sites.
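The VPN-target rule above can be stated compactly: a PE imports a remote route into a local EVPN instance only if the RTs carried by the route (the sender's export list) intersect the local import list. A sketch, assuming simple set semantics:

```python
def accepts(route_export_rts: set, local_import_rts: set) -> bool:
    """A route is imported into an EVPN instance iff at least one
    export RT carried by the route matches a local import RT."""
    return bool(route_export_rts & local_import_rts)

# All sites in this example use vpn-target 1:1 for both import and export,
# so every PE imports every other PE's routes for instance evpna:
print(accepts({"1:1"}, {"1:1"}))   # True
print(accepts({"2:2"}, {"1:1"}))   # False: the RT lists do not intersect
```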
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on the backbone network to allow PEs and the RR to
communicate.
2. Configure basic MPLS functions and MPLS LDP, and establish MPLS LSPs on
the backbone network.
3. Configure EVPN instances on the PEs.
4. Configure a source address on each PE.
5. Configure each PE's sub-interface connecting to a CE.
6. Bind each PE's sub-interface connecting to a CE to an EVPN instance on each
PE.
7. Configure an ESI for each PE interface connecting to a CE.
8. Configure EVPN BGP peer relationships between PEs and the RR, and
configure the PEs as RR clients.
9. Configure CEs and PEs to communicate.
10. Configure VLAN-based DF election and the function that the AC status
influences DF election.
11. Associate BFD sessions with the AC interfaces on PE1 and PE2 to accelerate
DF switching during an AC link fault.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance names (evpna and evpnb)
● EVPN instance evpna's RDs (100:1, 200:1, and 300:1) and RTs (1:1) on PEs
● EVPN instance evpnb's RDs (100:2, 200:2, and 300:2) and RTs (2:2) on PEs
Procedure
Step 1 Assign an IP address to each interface on the RR and PEs according to Figure
3-155. For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the backbone network to allow PEs and the RR to
communicate. OSPF is used as the IGP in this example.
# Configure PE1.
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit
[~PE1-ospf-1] quit
# Configure PE2.
[~PE2] ospf 1
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] commit
[~PE2-ospf-1-area-0.0.0.0] quit
[~PE2-ospf-1] quit
# Configure PE3.
[~PE3] ospf 1
[*PE3-ospf-1] area 0
[*PE3-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[*PE3-ospf-1-area-0.0.0.0] network 4.4.4.4 0.0.0.0
[*PE3-ospf-1-area-0.0.0.0] commit
[~PE3-ospf-1-area-0.0.0.0] quit
[~PE3-ospf-1] quit
After completing the configurations, PE1, PE2, and PE3 can establish OSPF
neighbor relationships with the RR. Run the display ospf peer command. The
command output shows that State is Full. Run the display ip routing-table
command. The command output shows that the RR and PEs have learned the
routes to Loopback1 of each other.
The following example uses the command output on PE1.
[~PE1] display ospf peer
(M) Indicates MADJ neighbor
Step 3 Configure basic MPLS functions, enable MPLS LDP, and establish LDP LSPs on the
MPLS backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface ethernet 2/0/0
[*PE1-Ethernet2/0/0] mpls
[*PE1-Ethernet2/0/0] mpls ldp
[*PE1-Ethernet2/0/0] commit
[~PE1-Ethernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 2.2.2.2
[*PE2] mpls
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface ethernet 2/0/0
[*PE2-Ethernet2/0/0] mpls
[*PE2-Ethernet2/0/0] mpls ldp
[*PE2-Ethernet2/0/0] commit
[~PE2-Ethernet2/0/0] quit
# Configure PE3.
[~PE3] mpls lsr-id 4.4.4.4
[*PE3] mpls
[*PE3-mpls] quit
[*PE3] mpls ldp
[*PE3-mpls-ldp] quit
[*PE3] interface ethernet 1/0/0
[*PE3-Ethernet1/0/0] mpls
[*PE3-Ethernet1/0/0] mpls ldp
[*PE3-Ethernet1/0/0] commit
[~PE3-Ethernet1/0/0] quit
After the configurations are complete, LDP sessions are established between PEs
and the RR. Run the display mpls ldp session command. The command output
shows that Status is Operational. Run the display mpls ldp lsp command. The
command output shows LDP LSP configurations.
The following example uses the command output on PE1.
[~PE1] display mpls ldp session
LDP Session(s) in Public Network
Codes: LAM(Label Advertisement Mode), SsnAge Unit(DDDD:HH:MM)
An asterisk (*) before a session means the session is being deleted.
--------------------------------------------------------------------------
PeerID Status LAM SsnRole SsnAge KASent/Rcv
--------------------------------------------------------------------------
3.3.3.3:0 Operational DU Passive 0000:00:05 22/22
--------------------------------------------------------------------------
TOTAL: 1 Session(s) Found.
[~PE1] display mpls ldp lsp
LDP LSP Information
-------------------------------------------------------------------------------
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
-------------------------------------------------------------------------------
DestAddress/Mask In/OutLabel UpstreamPeer NextHop OutInterface
-------------------------------------------------------------------------------
1.1.1.1/32 3/NULL 3.3.3.3 127.0.0.1 Loop1
*1.1.1.1/32 Liberal/32828 DS/3.3.3.3
2.2.2.2/32 NULL/32829 - 10.1.1.2 Eth2/0/0
2.2.2.2/32 32829/32829 3.3.3.3 10.1.1.2 Eth2/0/0
3.3.3.3/32 NULL/3 - 10.1.1.2 Eth2/0/0
3.3.3.3/32 32828/3 3.3.3.3 10.1.1.2 Eth2/0/0
4.4.4.4/32 NULL/32830 - 10.1.1.2 Eth2/0/0
4.4.4.4/32 32830/32830 3.3.3.3 10.1.1.2 Eth2/0/0
-------------------------------------------------------------------------------
TOTAL: 7 Normal LSP(s) Found.
TOTAL: 1 Liberal LSP(s) Found.
TOTAL: 0 FRR LSP(s) Found.
An asterisk (*) before an LSP means the LSP is not established
An asterisk (*) before a Label means the USCB or DSCB is stale
An asterisk (*) before an UpstreamPeer means the session is stale
An asterisk (*) before a DS means the session is stale
An asterisk (*) before a NextHop means the LSP is FRR LSP
# Configure PE2.
[~PE2] evpn vpn-instance evpna
[*PE2-evpn-instance-evpna] route-distinguisher 200:1
[*PE2-evpn-instance-evpna] vpn-target 1:1
[*PE2-evpn-instance-evpna] quit
[*PE2] evpn vpn-instance evpnb
[*PE2-evpn-instance-evpnb] route-distinguisher 200:2
[*PE2-evpn-instance-evpnb] vpn-target 2:2
[*PE2-evpn-instance-evpnb] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evpna
# Configure PE2.
[~PE2] evpn source-address 2.2.2.2
[*PE2] commit
# Configure PE3.
[~PE3] evpn source-address 4.4.4.4
[*PE3] commit
# Configure PE2.
[~PE2] e-trunk 1
[*PE2-e-trunk-1] peer-address 1.1.1.1 source-address 2.2.2.2
[*PE2-e-trunk-1] quit
[*PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] e-trunk 1
[*PE2-Eth-Trunk10] e-trunk mode force-master
[*PE2-Eth-Trunk10] quit
[*PE2] interface eth-trunk 10.1
[*PE2-Eth-Trunk10.1] vlan-type dot1q 1
[*PE2-Eth-Trunk10.1] quit
[*PE2] interface eth-trunk 10.2
[*PE2-Eth-Trunk10.2] vlan-type dot1q 2
[*PE2-Eth-Trunk10.2] quit
[*PE2] interface ethernet 1/0/0
[*PE2-Ethernet1/0/0] eth-trunk 10
[*PE2-Ethernet1/0/0] quit
[*PE2] commit
# Configure PE3.
# Configure PE2.
[~PE2] interface eth-trunk 10.1
[*PE2-Eth-Trunk10.1] evpn binding vpn-instance evpna
[*PE2-Eth-Trunk10.1] quit
[*PE2] interface eth-trunk 10.2
[*PE2-Eth-Trunk10.2] evpn binding vpn-instance evpnb
[*PE2-Eth-Trunk10.2] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface eth-trunk 10.1
[*PE3-Eth-Trunk10.1] evpn binding vpn-instance evpna
[*PE3-Eth-Trunk10.1] quit
[*PE3] interface eth-trunk 10.2
[*PE3-Eth-Trunk10.2] evpn binding vpn-instance evpnb
[*PE3-Eth-Trunk10.2] quit
[*PE3] commit
# Configure PE2.
[~PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] esi 0000.1111.2222.1111.1111
[*PE2-Eth-Trunk10] quit
[*PE2] commit
Step 9 Configure EVPN BGP peer relationships between PEs and the RR, and configure
the PEs as RR clients.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 3.3.3.3 as-number 100
[*PE2-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 3.3.3.3 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] peer 3.3.3.3 as-number 100
[*PE3-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 3.3.3.3 enable
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
After completing the configurations, run the display bgp evpn peer command on
the RR. The command output shows that BGP peer relationships have been
established between the PEs and RR and are in the Established state.
[~RR] display bgp evpn peer
[*CE1-Eth-Trunk20] portswitch
[*CE1-Eth-Trunk20] port link-type trunk
[*CE1-Eth-Trunk20] port trunk allow-pass vlan 1 to 2
[*CE1-Eth-Trunk20] quit
[*CE1] interface ethernet1/0/0
[*CE1-Ethernet1/0/0] eth-trunk 20
[*CE1-Ethernet1/0/0] quit
[*CE1] interface ethernet2/0/0
[*CE1-Ethernet2/0/0] eth-trunk 20
[*CE1-Ethernet2/0/0] quit
[*CE1] commit
# Configure CE2.
[~CE2] vlan batch 1 2
[*CE2] interface Eth-Trunk 10
[*CE2-Eth-Trunk10] portswitch
[*CE2-Eth-Trunk10] port link-type trunk
[*CE2-Eth-Trunk10] port trunk allow-pass vlan 1 to 2
[*CE2-Eth-Trunk10] quit
[*CE2] interface ethernet1/0/0
[*CE2-Ethernet1/0/0] eth-trunk 10
[*CE2-Ethernet1/0/0] quit
[*CE2] commit
Step 11 Configure VLAN-based DF election and the function that the AC status influences
DF election on PE1 and PE2.
Run the display evpn vpn-instance name evpna df result and display evpn vpn-
instance name evpnb df result commands on PE1 to check the DF election
result.
[~PE1] display evpn vpn-instance name evpna df result
ESI Count: 1
ESI: 0000.1111.2222.1111.1111
Eth-Trunk10.1:
Current State: IFSTATE_UP
DF Result : Primary
[~PE1] display evpn vpn-instance name evpnb df result
ESI Count: 1
ESI: 0000.1111.2222.1111.1111
Eth-Trunk10.2:
Current State: IFSTATE_UP
DF Result : Primary
The preceding command output shows that PE1 is elected as the primary DF in
both evpna and evpnb. Therefore, PE1 forwards all the BUM traffic.
# Configure PE1.
[~PE1] evpn
[*PE1-evpn] df-election type vlan
[*PE1-evpn] df-election ac-influence enable
[*PE1-evpn] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn
[*PE2-evpn] df-election type vlan
[*PE2-evpn] df-election ac-influence enable
[*PE2-evpn] quit
[*PE2] commit
Run the display evpn vpn-instance name evpna df result and display evpn vpn-
instance name evpnb df result commands on PE1 to check the DF election
result.
[~PE1] display evpn vpn-instance name evpna df result
ESI Count: 1
ESI: 0000.1111.2222.1111.1111
Eth-Trunk10.1:
Current State: IFSTATE_UP
DF Result : Backup
[~PE1] display evpn vpn-instance name evpnb df result
ESI Count: 1
ESI: 0000.1111.2222.1111.1111
Eth-Trunk10.2:
Current State: IFSTATE_UP
DF Result : Primary
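The reversal in the output above follows from the service-carving algorithm that VLAN-based DF election implements (defined in RFC 7432): the PEs attached to an ES are ordered by originator IP address, and for each VLAN V the DF is the PE at position V mod N. With PE1 (1.1.1.1) and PE2 (2.2.2.2) on the ES, VLAN 1 maps to PE2, so PE1 becomes the backup DF for evpna, while VLAN 2 maps to PE1, so PE1 stays the primary DF for evpnb. The sketch below reproduces the ordinal rule; it is a simplification of the device's actual election:

```python
import ipaddress

def df_for_vlan(pe_ips, vlan_id):
    """RFC 7432 service carving: order the PEs on the ES by originator
    IP address, then pick the PE at index (vlan_id mod N) as the DF."""
    ordered = sorted(pe_ips, key=lambda ip: int(ipaddress.IPv4Address(ip)))
    return ordered[vlan_id % len(ordered)]

pes = ["1.1.1.1", "2.2.2.2"]        # PE1 and PE2 share the ES
print(df_for_vlan(pes, 1))  # 2.2.2.2 -> PE2 is DF for VLAN 1; PE1 is Backup in evpna
print(df_for_vlan(pes, 2))  # 1.1.1.1 -> PE1 is DF for VLAN 2; PE1 is Primary in evpnb
```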
# Configure PE2.
[~PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] timer es-recovery 30
[*PE2-Eth-Trunk10] quit
[*PE2] commit
Step 13 Associate BFD sessions with the AC interfaces on PE1 and PE2 to accelerate DF
switching during an AC link fault.
# Configure PE1.
[~PE1] bfd
[*PE1-bfd] quit
[*PE1] bfd bfd1 bind peer-ip 2.2.2.2 track-interface interface Eth-Trunk10
[*PE1-bfd-session-bfd1] discriminator local 10
[*PE1-bfd-session-bfd1] discriminator remote 20
[*PE1-bfd-session-bfd1] quit
[*PE1] bfd bfd2 bind peer-ip 2.2.2.2
[*PE1-bfd-session-bfd2] discriminator local 30
[*PE1-bfd-session-bfd2] discriminator remote 40
[*PE1-bfd-session-bfd2] quit
[*PE1] interface eth-trunk 10
[*PE1-Eth-Trunk10] es track bfd bfd2
[*PE1-Eth-Trunk10] quit
[*PE1] commit
# Configure PE2.
[~PE2] bfd
[*PE2-bfd] quit
[*PE2] bfd bfd1 bind peer-ip 1.1.1.1
[*PE2-bfd-session-bfd1] discriminator local 20
[*PE2-bfd-session-bfd1] discriminator remote 10
EVPN-Instance evpna:
Number of A-D Routes: 3
Network(ESI/EthTagId) NextHop
*>i 0000.1111.2222.1111.1111:0 1.1.1.1
*>i 0000.1111.2222.1111.1111:4294967295 1.1.1.1
*i 2.2.2.2
EVPN-Instance evpnb:
Number of A-D Routes: 3
Network(ESI/EthTagId) NextHop
*>i 0000.1111.2222.1111.1111:0 1.1.1.1
*>i 0000.1111.2222.1111.1111:4294967295 1.1.1.1
*i 2.2.2.2
EVPN-Instance evpna:
Number of Inclusive Multicast Routes: 3
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*>i 0:32:1.1.1.1 1.1.1.1
*>i 0:32:2.2.2.2 2.2.2.2
*> 0:32:4.4.4.4 127.0.0.1
EVPN-Instance evpnb:
Number of Inclusive Multicast Routes: 3
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*>i 0:32:1.1.1.1 1.1.1.1
*>i 0:32:2.2.2.2 2.2.2.2
*> 0:32:4.4.4.4 127.0.0.1
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn
df-election type vlan
df-election ac-influence enable
#
evpn vpn-instance evpna
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpn vpn-instance evpnb
route-distinguisher 100:2
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
bfd
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
e-trunk 1
peer-address 2.2.2.2 source-address 1.1.1.1
#
interface Eth-Trunk10
e-trunk 1
e-trunk mode force-master
esi 0000.1111.2222.1111.1111
es track bfd bfd2
timer es-recovery 30
#
interface Eth-Trunk10.1
vlan-type dot1q 1
evpn binding vpn-instance evpna
#
interface Eth-Trunk10.2
vlan-type dot1q 2
mpls ldp
#
e-trunk 1
peer-address 1.1.1.1 source-address 2.2.2.2
#
interface Eth-Trunk10
e-trunk 1
e-trunk mode force-master
esi 0000.1111.2222.1111.1111
es track bfd bfd1
timer es-recovery 30
#
interface Eth-Trunk10.1
vlan-type dot1q 1
evpn binding vpn-instance evpna
#
interface Eth-Trunk10.2
vlan-type dot1q 2
evpn binding vpn-instance evpnb
#
interface Ethernet1/0/0
undo shutdown
port-tx-enabling-delay 30000
carrier up-hold-time 30000
eth-trunk 10
#
interface Ethernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
bfd bfd1 bind peer-ip 1.1.1.1
discriminator local 20
discriminator remote 10
#
bfd bfd2 bind peer-ip 1.1.1.1 track-interface interface Eth-Trunk10
discriminator local 40
discriminator remote 30
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.2.1.0 0.0.0.255
#
evpn source-address 2.2.2.2
#
return
● PE3 configuration file
#
sysname PE3
#
evpn redundancy-mode single-active
#
interface Ethernet1/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface Ethernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface Ethernet3/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 2.2.2.2 enable
peer 2.2.2.2 reflect-client
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
return
● CE1 configuration file
#
sysname CE1
#
vlan batch 1 to 2
#
interface Eth-Trunk20
portswitch
port link-type trunk
port trunk allow-pass vlan 1 to 2
#
interface Ethernet1/0/0
undo shutdown
eth-trunk 20
#
interface Ethernet2/0/0
undo shutdown
eth-trunk 20
#
return
Networking Requirements
On the network shown in Figure 3-156, to allow Site 1 and Site 2 to communicate
over the backbone network, configure the EVPN and VPN functions to transmit
both Layer 2 and Layer 3 traffic. If Site 1 and Site 2 are connected through the
same subnet, create an EVPN instance on each PE to store EVPN routes. Layer 2
forwarding is based on an EVPN route that matches a MAC address. If Site 1 and
Site 2 are connected through different subnets, create a VPN instance on each PE
to store VPN routes. In this situation, Layer 2 traffic is terminated, and Layer 3
traffic is forwarded through a Layer 3 gateway. A route reflector (RR) is configured
to reflect both EVPN and VPN routes. To balance BUM traffic along the links
between CE1 and PE1 and between CE1 and PE2, configure Eth-Trunk sub-
interfaces on PE1 and PE2 to connect to Site 1.
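The same-subnet test that separates the two forwarding paths can be illustrated with Python's ipaddress module; the addresses and prefix length below are hypothetical, and the function is a conceptual sketch rather than device logic:

```python
import ipaddress

def forwarding_path(src: str, dst: str, prefix_len: int) -> str:
    """Same subnet -> Layer 2 bridging against EVPN MAC routes;
    different subnets -> Layer 3 routing via the IRB gateway (VPN route)."""
    src_net = ipaddress.ip_interface(f"{src}/{prefix_len}").network
    dst_net = ipaddress.ip_interface(f"{dst}/{prefix_len}").network
    return "L2 (EVPN MAC route)" if src_net == dst_net else "L3 (VPN route via gateway)"

print(forwarding_path("192.168.1.10", "192.168.1.20", 24))  # L2 (EVPN MAC route)
print(forwarding_path("192.168.1.10", "192.168.2.20", 24))  # L3 (VPN route via gateway)
```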
Precautions
When you configure Eth-Trunk sub-interfaces to access a BD EVPN IRB in active-
active mode (carrying both Layer 2 and Layer 3 services), note the following:
● For the same EVPN instance, the export VPN target list of a site shares VPN
targets with the import VPN target lists of the other sites; the import VPN
target list of a site shares VPN targets with the export VPN target lists of the
other sites.
● Using the local loopback interface address of a PE as the source address is
recommended.
Configuration Roadmap
The configuration roadmap is as follows:
10. Enable PE1 and PE2 to send ARP packets at a constant rate to limit the rate
at which ARP request packets are broadcast. This ensures a rapid traffic
switchover after a CE fault occurs.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance named evpna and VPN instance named vpnb
● EVPN instance evpna's RDs (100:1, 200:1, and 300:1) and RTs (1:1) on PEs
● VPN instance vpnb's RDs (100:2, 200:2, and 300:2) and RTs (2:2) on PEs
Procedure
Step 1 Assign an IP address to each interface on the RR and PEs according to Figure
3-156. For configuration details, see "Configuration Files" in this section.
Step 2 Configure an IGP on the backbone network to allow PEs and the RR to
communicate. OSPF is used in this example.
# Configure PE1.
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit
[~PE1-ospf-1] quit
# Configure PE2.
[~PE2] ospf 1
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] commit
[~PE2-ospf-1-area-0.0.0.0] quit
[~PE2-ospf-1] quit
# Configure PE3.
[~PE3] ospf 1
[*PE3-ospf-1] area 0
[*PE3-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[*PE3-ospf-1-area-0.0.0.0] network 4.4.4.4 0.0.0.0
[*PE3-ospf-1-area-0.0.0.0] commit
[~PE3-ospf-1-area-0.0.0.0] quit
[~PE3-ospf-1] quit
After the configurations are complete, PE1, PE2, and PE3 can establish OSPF
neighbor relationships with the RR. Run the display ospf peer command. The
command output shows that State is Full. Run the display ip routing-table
command. The command output shows that the RR and PEs have learned the
routes to Loopback 1 of each other.
The following example uses the command output on PE1.
[~PE1] display ospf peer
(M) Indicates MADJ neighbor
Step 3 Configure basic MPLS functions, enable MPLS LDP, and establish LDP LSPs on the
MPLS backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] mpls
[*PE1-GigabitEthernet2/0/0] mpls ldp
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 2.2.2.2
[*PE2] mpls
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] mpls
[*PE2-GigabitEthernet2/0/0] mpls ldp
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
# Configure PE3.
[~PE3] mpls lsr-id 4.4.4.4
[*PE3] mpls
[*PE3-mpls] quit
[*PE3] mpls ldp
[*PE3-mpls-ldp] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] mpls
[*PE3-GigabitEthernet1/0/0] mpls ldp
[*PE3-GigabitEthernet1/0/0] commit
[~PE3-GigabitEthernet1/0/0] quit
After the configurations are complete, LDP sessions are established between the
PEs and RR. Run the display mpls ldp session command. The command output
shows that Status is Operational. Run the display mpls ldp lsp command. The
command output shows LDP LSP configurations.
The following example uses the command output on PE1.
[~PE1] display mpls ldp session
LDP Session(s) in Public Network
Codes: LAM(Label Advertisement Mode), SsnAge Unit(DDDD:HH:MM)
An asterisk (*) before a session means the session is being deleted.
--------------------------------------------------------------------------
PeerID Status LAM SsnRole SsnAge KASent/Rcv
--------------------------------------------------------------------------
3.3.3.3:0 Operational DU Passive 0000:00:05 22/22
--------------------------------------------------------------------------
TOTAL: 1 Session(s) Found.
[~PE1] display mpls ldp lsp
LDP LSP Information
-------------------------------------------------------------------------------
Flag after Out IF: (I) - RLFA Iterated LSP, (I*) - Normal and RLFA Iterated LSP
-------------------------------------------------------------------------------
DestAddress/Mask In/OutLabel UpstreamPeer NextHop OutInterface
-------------------------------------------------------------------------------
1.1.1.1/32 3/NULL 3.3.3.3 127.0.0.1 Loop1
*1.1.1.1/32 Liberal/32828 DS/3.3.3.3
2.2.2.2/32 NULL/32829 - 10.1.1.2 GE2/0/0
2.2.2.2/32 32829/32829 3.3.3.3 10.1.1.2 GE2/0/0
3.3.3.3/32 NULL/3 - 10.1.1.2 GE2/0/0
3.3.3.3/32 32828/3 3.3.3.3 10.1.1.2 GE2/0/0
4.4.4.4/32 NULL/32830 - 10.1.1.2 GE2/0/0
4.4.4.4/32 32830/32830 3.3.3.3 10.1.1.2 GE2/0/0
-------------------------------------------------------------------------------
TOTAL: 7 Normal LSP(s) Found.
TOTAL: 1 Liberal LSP(s) Found.
TOTAL: 0 FRR LSP(s) Found.
An asterisk (*) before an LSP means the LSP is not established
An asterisk (*) before a Label means the USCB or DSCB is stale
An asterisk (*) before an UpstreamPeer means the session is stale
An asterisk (*) before a DS means the session is stale
An asterisk (*) before a NextHop means the LSP is FRR LSP
# Configure PE2.
[~PE2] evpn vpn-instance evpna bd-mode
[*PE2-evpn-instance-evpna] route-distinguisher 200:1
[*PE2-evpn-instance-evpna] vpn-target 1:1
[*PE2-evpn-instance-evpna] quit
[*PE2] ip vpn-instance vpnb
[*PE2-vpn-instance-vpnb] ipv4-family
[*PE2-vpn-instance-vpnb-af-ipv4] route-distinguisher 200:2
[*PE2-vpn-instance-vpnb-af-ipv4] vpn-target 2:2 evpn
[*PE2-vpn-instance-vpnb-af-ipv4] evpn mpls routing-enable
[*PE2-vpn-instance-vpnb-af-ipv4] quit
[*PE2-vpn-instance-vpnb] quit
[*PE2] bridge-domain 10
[*PE2-bd10] evpn binding vpn-instance evpna
[*PE2-bd10] quit
[*PE2] evpn
[*PE2-evpn] vlan-extend private enable
[*PE2-evpn] vlan-extend redirect enable
[*PE2-evpn] local-remote frr enable
[*PE2-evpn] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evpna bd-mode
[*PE3-evpn-instance-evpna] route-distinguisher 300:1
[*PE3-evpn-instance-evpna] vpn-target 1:1
[*PE3-evpn-instance-evpna] quit
[*PE3] ip vpn-instance vpnb
[*PE3-vpn-instance-vpnb] ipv4-family
[*PE3-vpn-instance-vpnb-af-ipv4] route-distinguisher 300:2
[*PE3-vpn-instance-vpnb-af-ipv4] vpn-target 2:2 evpn
[*PE3-vpn-instance-vpnb-af-ipv4] evpn mpls routing-enable
[*PE3-vpn-instance-vpnb-af-ipv4] quit
[*PE3-vpn-instance-vpnb] quit
[*PE3] bridge-domain 10
[*PE3-bd10] evpn binding vpn-instance evpna
[*PE3-bd10] quit
[*PE3] commit
# Configure PE2.
[~PE2] evpn source-address 2.2.2.2
[*PE2] commit
# Configure PE3.
[~PE3] evpn source-address 4.4.4.4
[*PE3] commit
# Configure PE2.
[~PE2] e-trunk 1
[*PE2-e-trunk-1] peer-address 1.1.1.1 source-address 2.2.2.2
[*PE2-e-trunk-1] quit
[*PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] e-trunk 1
[*PE2-Eth-Trunk10] e-trunk mode force-master
[*PE2-Eth-Trunk10] quit
[*PE2] interface eth-trunk 10.1 mode l2
[*PE2-Eth-Trunk10.1] encapsulation dot1q vid 2
[*PE2-Eth-Trunk10.1] rewrite pop single
[*PE2-Eth-Trunk10.1] bridge-domain 10
[*PE2-Eth-Trunk10.1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] eth-trunk 10
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface eth-trunk 10
[*PE3-Eth-Trunk10] quit
[*PE3] interface eth-trunk 10.1 mode l2
[*PE3-Eth-Trunk10.1] encapsulation dot1q vid 2
[*PE3-Eth-Trunk10.1] rewrite pop single
[*PE3-Eth-Trunk10.1] bridge-domain 10
[*PE3-Eth-Trunk10.1] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] eth-trunk 10
[*PE3-GigabitEthernet1/0/0] quit
[*PE3] commit
Step 7 Configure a VBDIF interface on each PE and bind it to the VPN instance.
# Configure PE1.
[~PE1] interface Vbdif10
[*PE1-Vbdif10] ip binding vpn-instance vpnb
[*PE1-Vbdif10] ip address 192.168.1.1 255.255.255.0
[*PE1-Vbdif10] arp distribute-gateway enable
[*PE1-Vbdif10] arp collect host enable
[*PE1-Vbdif10] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface Vbdif10
[*PE2-Vbdif10] ip binding vpn-instance vpnb
[*PE2-Vbdif10] ip address 192.168.2.1 255.255.255.0
[*PE2-Vbdif10] arp distribute-gateway enable
[*PE2-Vbdif10] arp collect host enable
[*PE2-Vbdif10] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface Vbdif10
[*PE3-Vbdif10] ip binding vpn-instance vpnb
[*PE3-Vbdif10] ip address 192.168.3.1 255.255.255.0
[*PE3-Vbdif10] arp distribute-gateway enable
[*PE3-Vbdif10] arp collect host enable
[*PE3-Vbdif10] quit
[*PE3] commit
Step 8 Configure an ESI for the Eth-Trunk interface on PE2 and PE3.
# Configure PE2.
[~PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] esi 0000.1111.2222.1111.1111
[*PE2-Eth-Trunk10] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface eth-trunk 10
[*PE3-Eth-Trunk10] esi 0000.1111.3333.4444.5555
[*PE3-Eth-Trunk10] quit
[*PE3] commit
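An ESI such as the ones configured above is a 10-octet value written as five dot-separated groups of four hexadecimal digits. A small hypothetical helper (for scripting, not a device command) that validates and decodes this notation:

```python
# Sketch: validate/decode the dotted ESI notation used above.
# Hypothetical helper for automation scripts, not a device command.
import re

ESI_PATTERN = re.compile(r"^([0-9A-Fa-f]{4}\.){4}[0-9A-Fa-f]{4}$")

def esi_to_bytes(esi: str) -> bytes:
    """Return the 10-octet ESI value, or raise on malformed input."""
    if not ESI_PATTERN.match(esi):
        raise ValueError(f"malformed ESI: {esi!r}")
    return bytes.fromhex(esi.replace(".", ""))

esi = esi_to_bytes("0000.1111.2222.1111.1111")
```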
Step 9 Configure EVPN BGP peer relationships between the PEs and RR, and configure
the PEs as RR clients.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.3 as-number 100
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 3.3.3.3 as-number 100
[*PE2-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 3.3.3.3 enable
[*PE2-bgp-af-evpn] peer 3.3.3.3 advertise irb
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] ipv4-family vpn-instance vpnb
[*PE2-bgp-vpnb] import-route direct
[*PE2-bgp-vpnb] advertise l2vpn evpn
[*PE2-bgp-vpnb] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] peer 3.3.3.3 as-number 100
[*PE3-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 3.3.3.3 enable
[*PE3-bgp-af-evpn] peer 3.3.3.3 advertise irb
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] ipv4-family vpn-instance vpnb
[*PE3-bgp-vpnb] import-route direct
[*PE3-bgp-vpnb] advertise l2vpn evpn
[*PE3-bgp-vpnb] quit
[*PE3-bgp] quit
[*PE3] commit
After the configurations are complete, run the display bgp evpn peer command
on the RR. The command output shows that the BGP peer relationships between
the PEs and the RR are in the Established state.
Step 10 Configure CE2.
# Configure CE2.
[~CE2] interface Eth-Trunk 10
[*CE2-Eth-Trunk10] quit
[*CE2] bridge-domain 10
[*CE2-bd10] quit
[*CE2] interface Eth-Trunk 10.1 mode l2
[*CE2-Eth-Trunk10.1] encapsulation dot1q vid 2
[*CE2-Eth-Trunk10.1] bridge-domain 10
[*CE2-Eth-Trunk10.1] quit
[*CE2] interface gigabitethernet1/0/0
[*CE2-GigabitEthernet1/0/0] eth-trunk 10
[*CE2-GigabitEthernet1/0/0] quit
[*CE2] commit
Step 11 Enable PE1 and PE2 to send ARP packets at a constant rate to limit the rate at
which ARP request packets are broadcast. This ensures a rapid traffic switchover
if a CE fault occurs.
# Configure PE1.
[~PE1] arp constant-send enable
[*PE1] arp constant-send maximum 1
[*PE1] commit
# Configure PE2.
[~PE2] arp constant-send enable
[*PE2] arp constant-send maximum 1
[*PE2] commit
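The idea behind constant-speed sending is pacing: spacing transmissions evenly instead of letting them burst. A purely illustrative sketch of that concept (the device's internal implementation of arp constant-send is not documented here, and the parameter names are assumptions):

```python
# Sketch: constant-rate pacing of packet transmissions.
# Illustrative only; names and parameters are assumptions.

def paced_schedule(n_packets, rate_pps, start=0.0):
    """Return evenly spaced send times for n_packets at rate_pps pps."""
    interval = 1.0 / rate_pps
    return [start + i * interval for i in range(n_packets)]

# At 1 packet per second, sends are exactly one second apart:
times = paced_schedule(4, rate_pps=1)
```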
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
arp constant-send enable
arp constant-send maximum 1
#
evpn
vlan-extend private enable
vlan-extend redirect enable
local-remote frr enable
#
evpn vpn-instance evpna bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
ip vpn-instance vpnb
ipv4-family
route-distinguisher 100:2
apply-label per-instance
vpn-target 2:2 export-extcommunity evpn
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evpna
#
mpls ldp
#
e-trunk 1
peer-address 2.2.2.2 source-address 1.1.1.1
#
interface Vbdif10
ip binding vpn-instance vpnb
ip address 192.168.1.1 255.255.255.0
arp distribute-gateway enable
arp collect host enable
#
interface Eth-Trunk10
e-trunk 1
e-trunk mode force-master
esi 0000.1111.2222.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 2
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpnb
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
peer 3.3.3.3 advertise irb
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
evpn source-address 1.1.1.1
#
return
● PE2 configuration file
#
sysname PE2
#
arp constant-send enable
arp constant-send maximum 1
#
evpn
vlan-extend private enable
vlan-extend redirect enable
local-remote frr enable
#
mac-duplication
#
evpn vpn-instance evpna bd-mode
route-distinguisher 200:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
ip vpn-instance vpnb
ipv4-family
route-distinguisher 200:2
apply-label per-instance
vpn-target 2:2 export-extcommunity evpn
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 2.2.2.2
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evpna
#
mpls ldp
#
e-trunk 1
peer-address 1.1.1.1 source-address 2.2.2.2
#
interface Vbdif10
ip binding vpn-instance vpnb
ip address 192.168.2.1 255.255.255.0
arp distribute-gateway enable
arp collect host enable
#
interface Eth-Trunk10
e-trunk 1
e-trunk mode force-master
esi 0000.1111.2222.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 2
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpnb
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
peer 3.3.3.3 advertise irb
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.2.1.0 0.0.0.255
#
evpn source-address 2.2.2.2
#
return
● PE3 configuration file
#
sysname PE3
#
evpn vpn-instance evpna bd-mode
route-distinguisher 300:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
ip vpn-instance vpnb
ipv4-family
route-distinguisher 300:2
apply-label per-instance
vpn-target 2:2 export-extcommunity evpn
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 4.4.4.4
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evpna
#
mpls ldp
#
interface Vbdif10
ip binding vpn-instance vpnb
ip address 192.168.3.1 255.255.255.0
arp distribute-gateway enable
arp collect host enable
#
interface Eth-Trunk10
esi 0000.1111.3333.4444.5555
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 2
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
eth-trunk 10
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpnb
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
peer 3.3.3.3 advertise irb
#
ospf 1
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.3.1.0 0.0.0.255
#
evpn source-address 4.4.4.4
#
return
● RR configuration file
#
sysname RR
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 advertise irb
peer 1.1.1.1 reflect-client
peer 2.2.2.2 enable
peer 2.2.2.2 advertise irb
peer 2.2.2.2 reflect-client
peer 4.4.4.4 enable
peer 4.4.4.4 advertise irb
peer 4.4.4.4 reflect-client
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
return
● CE1 configuration file
#
sysname CE1
#
bridge-domain 10
#
interface Eth-Trunk20
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 2
bridge-domain 10
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 20
#
interface GigabitEthernet2/0/0
undo shutdown
eth-trunk 20
#
return
Networking Requirements
On the network shown in Figure 3-157, PE1, PE2, RR, and PE3 belong to the same
AS and use OSPF to communicate with each other. An MPLS tunnel is deployed to
carry EVPN private line services.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on the backbone network to allow PEs and the RR to
communicate.
2. Configure basic MPLS functions and MPLS LDP, and establish MPLS LSPs on
the backbone network.
3. Configure an EVPN VPWS instance and an EVPL instance on each PE and bind
the EVPL instance to an access-side sub-interface.
4. Configure EVPN BGP peer relationships between PEs and the RR, and
configure the PEs as RR clients.
5. Configure the FRR function on PEs.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name: evrf1
● RDs (100:1, 100:2, and 100:3) and RT (1:1) of the EVPN instance
Procedure
Step 1 Assign an IP address to each interface on the RR and PEs according to Figure
3-157. For configuration details, see "Configuration Files" in this section.
Step 2 Configure an IGP on the backbone network to allow PEs and the RR to
communicate. OSPF is used in this example.
# Configure PE1.
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit
[~PE1-ospf-1] quit
# Configure PE2.
[~PE2] ospf 1
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] commit
[~PE2-ospf-1-area-0.0.0.0] quit
[~PE2-ospf-1] quit
# Configure PE3.
[~PE3] ospf 1
[*PE3-ospf-1] area 0
[*PE3-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[*PE3-ospf-1-area-0.0.0.0] network 4.4.4.4 0.0.0.0
[*PE3-ospf-1-area-0.0.0.0] commit
[~PE3-ospf-1-area-0.0.0.0] quit
[~PE3-ospf-1] quit
Step 3 Configure basic MPLS functions, enable MPLS LDP, and establish LDP LSPs on the
MPLS backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] mpls
[*PE1-GigabitEthernet2/0/0] mpls ldp
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 2.2.2.2
[*PE2] mpls
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] mpls
[*PE2-GigabitEthernet2/0/0] mpls ldp
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
# Configure PE3.
[~PE3] mpls lsr-id 4.4.4.4
[*PE3] mpls
[*PE3-mpls] quit
[*PE3] mpls ldp
[*PE3-mpls-ldp] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] mpls
[*PE3-GigabitEthernet1/0/0] mpls ldp
[*PE3-GigabitEthernet1/0/0] commit
[~PE3-GigabitEthernet1/0/0] quit
Step 4 Configure an EVPN VPWS instance and an EVPL instance on each PE and bind the
EVPL instance to an access-side sub-interface.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 vpws
[*PE1-vpws-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-vpws-evpn-instance-evrf1] vpn-target 1:1
[*PE1-vpws-evpn-instance-evrf1] quit
[*PE1] evpl instance 1 mpls-mode
[*PE1-evpl-mpls1] evpn binding vpn-instance evrf1
[*PE1-evpl-mpls1] local-service-id 100 remote-service-id 200
[*PE1-evpl-mpls1] quit
[*PE1] interface gigabitethernet 0/1/0
[*PE1-GigabitEthernet0/1/0] esi 0001.0002.0003.0004.0005
[*PE1-GigabitEthernet0/1/0] quit
[*PE1] interface gigabitethernet 0/1/0.1 mode l2
[*PE1-GigabitEthernet0/1/0.1] encapsulation dot1q vid 1
[*PE1-GigabitEthernet0/1/0.1] evpl instance 1
[*PE1-GigabitEthernet0/1/0.1] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 vpws
[*PE2-vpws-evpn-instance-evrf1] route-distinguisher 100:2
[*PE2-vpws-evpn-instance-evrf1] vpn-target 1:1
[*PE2-vpws-evpn-instance-evrf1] quit
[*PE2] evpl instance 1 mpls-mode
[*PE2-evpl-mpls1] evpn binding vpn-instance evrf1
[*PE2-evpl-mpls1] local-service-id 100 remote-service-id 200
[*PE2-evpl-mpls1] quit
[*PE2] interface gigabitethernet 0/1/0
[*PE2-GigabitEthernet0/1/0] esi 0001.0002.0003.0004.0005
[*PE2-GigabitEthernet0/1/0] quit
[*PE2] interface gigabitethernet 0/1/0.1 mode l2
[*PE2-GigabitEthernet0/1/0.1] encapsulation dot1q vid 1
[*PE2-GigabitEthernet0/1/0.1] evpl instance 1
[*PE2-GigabitEthernet0/1/0.1] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evrf1 vpws
[*PE3-vpws-evpn-instance-evrf1] route-distinguisher 100:3
[*PE3-vpws-evpn-instance-evrf1] vpn-target 1:1
[*PE3-vpws-evpn-instance-evrf1] quit
[*PE3] evpl instance 1 mpls-mode
[*PE3-evpl-mpls1] evpn binding vpn-instance evrf1
[*PE3-evpl-mpls1] local-service-id 200 remote-service-id 100
[*PE3-evpl-mpls1] quit
[*PE3] interface gigabitethernet 2/0/0.1 mode l2
[*PE3-GigabitEthernet2/0/0.1] encapsulation dot1q vid 1
[*PE3-GigabitEthernet2/0/0.1] evpl instance 1
[*PE3-GigabitEthernet2/0/0.1] quit
[*PE3] commit
Step 5 Configure EVPN BGP peer relationships between PEs and the RR, and configure
the PEs as RR clients.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 3.3.3.3 as-number 100
[*PE2-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 3.3.3.3 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] peer 3.3.3.3 as-number 100
[*PE3-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 3.3.3.3 enable
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
After completing the configurations, run the display bgp evpn peer command on
the RR. The command output shows that the BGP peer relationships between the
PEs and the RR are in the Established state.
[~RR] display bgp evpn peer
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 vpws
[*PE2-vpws-evpn-instance-evrf1] local-remote frr enable
[*PE2-vpws-evpn-instance-evrf1] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evrf1 vpws
[*PE3-vpws-evpn-instance-evrf1] remote frr enable
[*PE3-vpws-evpn-instance-evrf1] quit
[*PE3] commit
EVPN-Instance evrf1:
Number of A-D Routes: 3
Network(ESI/EthTagId) NextHop
*>i 0001.0002.0003.0004.0005:100 1.1.1.1
*i 2.2.2.2
*> 0000.0000.0000.0000.0000:200 127.0.0.1
After completing the configurations, run the display bgp evpn all routing-table
ad-route 0001.0002.0003.0004.0005:100 command on PE3. The command output
displays details about the EVPN A-D routes sent from PE1 and PE2, as well as the
label information about the bypass tunnel after the FRR function is configured.
[~PE3] display bgp evpn all routing-table ad-route 0001.0002.0003.0004.0005:100
BGP local router ID : 4.4.4.4
Local AS number : 100
Total routes of Route Distinguisher(100:1): 1
BGP routing table entry information of 0001.0002.0003.0004.0005:100:
Label information (Received/Applied): 48123/NULL
From: 3.3.3.3 (3.3.3.3)
Route Duration: 0d00h21m09s
Relay IP Nexthop: 10.3.1.1
Relay IP Out-Interface:GigabitEthernet1/0/0
Relay Tunnel Out-Interface: LDP LSP
Original nexthop: 1.1.1.1
Qos information : 0x0
Ext-Community: RT <1 : 1>, EVPN L2 Attributes <MTU:1500 C:0 P:1 B:0>, Bypass Label<0 : 0 : 48124>
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 2
Originator: 1.1.1.1
Cluster list: 3.3.3.3
Route Type: 1 (Ethernet Auto-Discovery (A-D) route)
ESI: 0001.0002.0003.0004.0005, Ethernet Tag ID: 100
Not advertised to any peer yet
EVPN-Instance evrf1:
Number of A-D Routes: 2
BGP routing table entry information of 0001.0002.0003.0004.0005:100:
Route Distinguisher: 100:1
Remote-Cross route
Label information (Received/Applied): 48123/NULL
From: 3.3.3.3 (3.3.3.3)
Route Duration: 0d00h21m10s
Relay Tunnel Out-Interface: LDP LSP
Original nexthop: 1.1.1.1
Qos information : 0x0
Ext-Community: RT <1 : 1>, EVPN L2 Attributes <MTU:1500 C:0 P:1 B:0>, Bypass Label<0 : 0 : 48124>
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 2
Originator: 1.1.1.1
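The Bypass Label in the output above is what makes the FRR switchover fast: the ingress pre-installs a backup forwarding entry and only flips to it when the primary next hop fails. A simplified, illustrative resolver follows (label values are taken from the example output; the logic is a sketch, not the device's actual forwarding code):

```python
# Sketch: primary vs. bypass label selection for an FRR-protected
# EVPN VPWS entry. Label values come from the example output above;
# the switchover logic itself is illustrative only.

def resolve(entry, primary_up):
    """Return (label, next hop) depending on primary next-hop state."""
    if primary_up:
        return entry["primary_label"], entry["primary_nexthop"]
    return entry["bypass_label"], entry["backup_nexthop"]

entry = {
    "primary_label": 48123, "primary_nexthop": "1.1.1.1",
    "bypass_label": 48124, "backup_nexthop": "2.2.2.2",
}
normal = resolve(entry, primary_up=True)
failover = resolve(entry, primary_up=False)
```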
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 vpws
route-distinguisher 100:1
local-remote frr enable
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpl instance 1 mpls-mode
evpn binding vpn-instance evrf1
local-service-id 100 remote-service-id 200
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet0/1/0
undo shutdown
esi 0001.0002.0003.0004.0005
#
interface GigabitEthernet0/1/0.1 mode l2
encapsulation dot1q vid 1
evpl instance 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
● PE2 configuration file
#
sysname PE2
#
evpn vpn-instance evrf1 vpws
route-distinguisher 100:2
local-remote frr enable
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpl instance 1 mpls-mode
evpn binding vpn-instance evrf1
local-service-id 100 remote-service-id 200
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls ldp
#
interface GigabitEthernet0/1/0
undo shutdown
esi 0001.0002.0003.0004.0005
#
interface GigabitEthernet0/1/0.1 mode l2
encapsulation dot1q vid 1
evpl instance 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.2.1.0 0.0.0.255
#
return
● RR configuration file
#
sysname RR
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 2.2.2.2 enable
peer 2.2.2.2 reflect-client
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
return
Networking Requirements
On the network shown in Figure 3-158, a user wants to deploy an EVPN to
transmit services. In this case, you must configure an EVPN instance (BD-EVPN
instance in this example) on each PE and configure each PE to establish a BGP-
EVPN peer relationship with its neighboring PE. CE1 and CE2 are each dual-homed
to PE1 and PE2. The PE interfaces that connect to CE1 use a dynamically generated
ESI (ESI1), and those that connect to CE2 use a statically configured ESI (ESI2). If
you want unicast traffic from CE3 -> CE1 and from CE3 -> CE2 to be transmitted
in load balancing and non-load balancing mode, respectively, you can set a
redundancy mode per ESI instance. Specifically, you can set the redundancy mode
of the ESI1 instance to all-active and that of the ESI2 instance to single-active.
Interfaces 1 through 4 in this example represent GE 1/0/0, GE 2/0/0, GE 3/0/0, and GE 4/0/0,
respectively.
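The per-ESI redundancy mode determines how many of the dual-homing PEs forward traffic toward a CE. A simplified sketch of that behavior (the election rule used here, lowest address wins, is a simplifying assumption, not the actual EVPN DF election algorithm):

```python
# Sketch: which dual-homed PEs forward known unicast traffic toward a
# CE, by ESI redundancy mode. In all-active mode every PE on the ESI
# can forward (load balancing); in single-active mode only the elected
# PE forwards. The election here (lowest address wins) is a simplifying
# assumption, not the real EVPN DF election.

def forwarding_pes(pe_addresses, redundancy_mode):
    if redundancy_mode == "all-active":
        return sorted(pe_addresses)
    if redundancy_mode == "single-active":
        return [min(pe_addresses)]  # simplified election
    raise ValueError(f"unknown mode: {redundancy_mode}")

esi1 = forwarding_pes(["1.1.1.1", "2.2.2.2"], "all-active")    # CE1 side
esi2 = forwarding_pes(["1.1.1.1", "2.2.2.2"], "single-active")  # CE2 side
```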
Precautions
When you set a redundancy mode per ESI instance, note the following:
● For the same EVPN instance, the export VPN target list of one site shares VPN
targets with the import VPN target lists of the other sites. Conversely, the
import VPN target list of one site shares VPN targets with the export VPN
target lists of the other sites.
● Using the local loopback interface address of each PE as the source address is
recommended.
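The first precaution above is the standard VPN-target matching rule: a route is accepted by a local instance only when the route's export RT list intersects the local import RT list. A minimal sketch using RT values from this example:

```python
# Sketch: VPN-target (RT) matching as described in the precautions.
# A route is imported only if one of its export RTs appears in the
# local instance's import RT list.

def route_imported(route_export_rts, local_import_rts):
    return bool(set(route_export_rts) & set(local_import_rts))

# RT 11:1 is shared by evrf1 on all PEs in this example, so it matches;
# an unrelated RT does not.
matches = route_imported(["11:1"], ["11:1", "22:2"])
no_match = route_imported(["33:3"], ["11:1", "22:2"])
```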
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each PE interface, including the loopback interfaces.
2. Configure a routing protocol on each PE to ensure Layer 3 communication.
OSPF is used in this example.
3. Configure MPLS LDP on each PE.
4. Create a BD-EVPN instance and a BD on each PE, and bind the BD to the
EVPN instance.
5. Configure each PE interface that connects to a CE and configure an ESI on
each PE interface.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance names: evrf1 and evrf2
● RDs of evrf1 on PE1, PE2, and PE3: 10:1, 10:2, and 10:3, respectively; RT of
evrf1: 11:1
● RDs of evrf2 on PE1, PE2, and PE3: 20:1, 20:2, and 20:3, respectively; RT of
evrf2: 22:2
Procedure
Step 1 Assign an IP address to each PE interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
Step 2 Configure a routing protocol on each PE to ensure Layer 3 communication. OSPF
is used in this example.
For configuration details, see Configuration Files in this section.
Step 3 Configure MPLS LDP on each PE.
For configuration details, see Configuration Files in this section.
Step 4 Create a BD-EVPN instance and a BD on each PE, and bind the BD to the EVPN
instance.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 10:1
[*PE1-evpn-instance-evrf1] vpn-target 11:1
[*PE1-evpn-instance-evrf1] quit
[*PE1] evpn vpn-instance evrf2 bd-mode
[*PE1-evpn-instance-evrf2] route-distinguisher 20:1
[*PE1-evpn-instance-evrf2] vpn-target 22:2
[*PE1-evpn-instance-evrf2] quit
[*PE1] bridge-domain 10
[*PE1-bd10] evpn binding vpn-instance evrf1
[*PE1-bd10] quit
[*PE1] bridge-domain 20
[*PE1-bd20] evpn binding vpn-instance evrf2
[*PE1-bd20] quit
[*PE1] commit
Repeat this step for PE2 and PE3. For configuration details, see Configuration
Files in this section.
Step 5 Configure each PE interface that connects to a CE and configure an ESI on each PE
interface.
# Configure PE1.
[~PE1] lacp e-trunk system-id 00e0-fc00-0000
[*PE1] e-trunk 1
[*PE1-e-trunk-1] priority 10
[*PE1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
[*PE1-e-trunk-1] quit
[*PE1] e-trunk 2
[*PE1-e-trunk-2] priority 10
[*PE1-e-trunk-2] peer-address 2.2.2.2 source-address 1.1.1.1
[*PE1-e-trunk-2] quit
[*PE1] commit
# Configure PE2.
[~PE2] lacp e-trunk system-id 00e0-fc00-0000
[*PE2] e-trunk 1
[*PE2-e-trunk-1] priority 20
[*PE2-e-trunk-1] peer-address 1.1.1.1 source-address 2.2.2.2
[*PE2-e-trunk-1] quit
[*PE2] e-trunk 2
[*PE2-e-trunk-2] priority 20
[*PE2-e-trunk-2] peer-address 1.1.1.1 source-address 2.2.2.2
[*PE2-e-trunk-2] quit
[*PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] e-trunk 1
[*PE2-Eth-Trunk10] e-trunk mode force-master
[*PE2-Eth-Trunk10] quit
[*PE2] interface eth-trunk 10.1 mode l2
[*PE2-Eth-Trunk10.1] encapsulation dot1q vid 10
[*PE2-Eth-Trunk10.1] rewrite pop single
[*PE2-Eth-Trunk10.1] bridge-domain 10
[*PE2-Eth-Trunk10.1] quit
[*PE2] interface eth-trunk 20
[*PE2-Eth-Trunk20] e-trunk 2
[*PE2-Eth-Trunk20] e-trunk mode force-master
[*PE2-Eth-Trunk20] quit
[*PE2] interface eth-trunk 20.1 mode l2
[*PE2-Eth-Trunk20.1] encapsulation dot1q vid 20
[*PE2-Eth-Trunk20.1] rewrite pop single
[*PE2-Eth-Trunk20.1] bridge-domain 20
[*PE2-Eth-Trunk20.1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] eth-trunk 10
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface gigabitethernet 4/0/0
[*PE2-GigabitEthernet4/0/0] eth-trunk 20
[*PE2-GigabitEthernet4/0/0] quit
[*PE2] commit
Repeat this step for PE3. For configuration details, see Configuration Files in
this section.
Step 6 Configure a source address on each PE.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 2.2.2.2
[*PE2] commit
# Configure PE3.
[~PE3] evpn source-address 3.3.3.3
[*PE3] commit
Step 7 Establish a BGP-EVPN peer relationship between each PE and its neighboring PE.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 2.2.2.2 as-number 100
[*PE1-bgp] peer 2.2.2.2 connect-interface loopback 1
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2.2.2.2 enable
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2 and PE3. For configuration details, see Configuration
Files in this section.
Step 8 Set a redundancy mode per ESI instance on each PE.
# Configure PE1.
[~PE1] evpn
[*PE1-evpn] esi 0001.0002.0003.0004.0005
[*PE1-evpn-esi-0001.0002.0003.0004.0005] evpn redundancy-mode single-active
[*PE1-evpn-esi-0001.0002.0003.0004.0005] quit
[*PE1] esi dynamic-name esi1
[*PE1-esi-dynamic-esi1] evpn redundancy-mode all-active
[*PE1-esi-dynamic-esi1] quit
[*PE1] interface Eth-Trunk 10
[*PE1-Eth-Trunk10] esi dynamic esi1
[*PE1-Eth-Trunk10] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
Run the display bgp evpn all esi command on PE1. The command output shows
information about the two ESIs and their configured redundancy modes.
[~PE1] display bgp evpn all esi
Number of ESI for EVPN address family: 2
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
lacp e-trunk system-id 00e0-fc00-0000
#
evpn
#
esi 0001.0002.0003.0004.0005
evpn redundancy-mode single-active
#
esi dynamic-name esi1
evpn redundancy-mode all-active
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:1
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 20:1
vpn-target 22:2 export-extcommunity
vpn-target 22:2 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
mpls ldp
#
e-trunk 1
priority 10
peer-address 2.2.2.2 source-address 1.1.1.1
#
e-trunk 2
priority 10
peer-address 2.2.2.2 source-address 1.1.1.1
#
interface Eth-Trunk10
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi dynamic esi1
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk20
e-trunk 2
e-trunk mode force-master
esi 0001.0002.0003.0004.0005
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet4/0/0
undo shutdown
eth-trunk 20
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
evpn source-address 1.1.1.1
#
return
● PE2 configuration file
#
sysname PE2
#
lacp e-trunk system-id 00e0-fc00-0000
#
evpn
#
esi 0001.0002.0003.0004.0005
evpn redundancy-mode single-active
#
esi dynamic-name esi1
evpn redundancy-mode all-active
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:2
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 20:2
vpn-target 22:2 export-extcommunity
vpn-target 22:2 import-extcommunity
#
mpls lsr-id 2.2.2.2
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
mpls ldp
#
e-trunk 1
priority 20
peer-address 1.1.1.1 source-address 2.2.2.2
#
e-trunk 2
priority 20
peer-address 1.1.1.1 source-address 2.2.2.2
#
interface Eth-Trunk10
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi dynamic esi1
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk20
e-trunk 2
e-trunk mode force-master
esi 0001.0002.0003.0004.0005
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 20
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet4/0/0
undo shutdown
eth-trunk 10
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
evpn source-address 2.2.2.2
#
return
● PE3 configuration file
#
sysname PE3
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:3
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 20:3
vpn-target 22:2 export-extcommunity
vpn-target 22:2 import-extcommunity
#
mpls lsr-id 3.3.3.3
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet3/0/0.2 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 2.2.2.2 enable
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
evpn source-address 3.3.3.3
#
return
Networking Requirements
A user wants to deploy an EVPN on the network shown in Figure 3-159 to
transmit services. Specifically, an EVPN instance (BD-EVPN instance in this
example) is configured on each PE, and a BGP EVPN peer relationship is
established between every two PEs. To improve network security, PE2 and PE3 must
communicate only with PE1 and must not send traffic to each other. To
implement this, the user can deploy EVPN E-Tree on the network.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When you configure EVPN E-Tree, note the following:
● For the same EVPN instance, the export VPN target list of one site shares VPN
targets with the import VPN target lists of the other sites. Conversely, the
import VPN target list of one site shares VPN targets with the export VPN
target lists of the other sites.
● Using the local loopback interface address of each PE as the source address is
recommended.
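The VPN-target matching rule above can be sketched with one EVPN instance at two sites, where site 1 exports RT 100:1 and imports RT 100:2, and site 2 does the reverse. The instance name, RDs, and RT values below are illustrative only and are not taken from this example:

```
evpn vpn-instance evrfA bd-mode
 route-distinguisher 1:1
 vpn-target 100:1 export-extcommunity
 vpn-target 100:2 import-extcommunity
#
evpn vpn-instance evrfA bd-mode
 route-distinguisher 1:2
 vpn-target 100:2 export-extcommunity
 vpn-target 100:1 import-extcommunity
```

Because site 1's export RT (100:1) appears in site 2's import list and vice versa, each site accepts the EVPN routes the other advertises. This example uses the simpler symmetric form, in which every PE imports and exports the same RT (11:1).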
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each PE interface, including the loopback interfaces.
2. Configure a routing protocol on each PE to ensure Layer 3 communication.
OSPF is used in this example.
3. Configure MPLS LDP on each PE.
4. Create a BD-EVPN instance and a BD on each PE, and bind the BD to the
EVPN instance.
5. Configure each PE interface that connects to a CE.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name: evrf1
● EVPN instance evrf1's RD (10:1) and RT (11:1) on each PE
Procedure
Step 1 Assign an IP address to each PE interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
Step 2 Configure a routing protocol on each PE to ensure Layer 3 communication. OSPF
is used in this example.
For configuration details, see Configuration Files in this section.
Step 3 Configure MPLS LDP on each PE.
For configuration details, see Configuration Files in this section.
Step 4 Create a BD-EVPN instance and a BD on each PE, and bind the BD to the EVPN
instance.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 10:1
[*PE1-evpn-instance-evrf1] vpn-target 11:1
[*PE1-evpn-instance-evrf1] quit
[*PE1] bridge-domain 10
[*PE1-bd10] evpn binding vpn-instance evrf1
[*PE1-bd10] quit
[*PE1] commit
Repeat this step for PE2 and PE3. For configuration details, see Configuration
Files in this section.
Step 5 Configure each PE interface that connects to a CE.
# Configure PE1.
[~PE1] interface gigabitethernet 1/0/0.1 mode l2
[*PE1-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
[*PE1-GigabitEthernet1/0/0.1] rewrite pop single
[*PE1-GigabitEthernet1/0/0.1] bridge-domain 10
[*PE1-GigabitEthernet1/0/0.1] quit
[*PE1] commit
Repeat this step for PE2 and PE3. For configuration details, see Configuration
Files in this section.
Step 6 Configure a source address on each PE.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 2.2.2.2
[*PE2] commit
# Configure PE3.
[~PE3] evpn source-address 3.3.3.3
[*PE3] commit
Step 7 Establish a BGP EVPN peer relationship between every two PEs.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 2.2.2.2 as-number 100
[*PE1-bgp] peer 2.2.2.2 connect-interface loopback 1
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2.2.2.2 enable
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2 and PE3. For configuration details, see Configuration
Files in this section.
Step 8 Configure the AC interfaces on PE2 and PE3 as leaf AC interfaces.
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] etree enable
[*PE2-evpn-instance-evrf1] quit
[*PE2] interface gigabitethernet1/0/0.1 mode l2
[*PE2-GigabitEthernet1/0/0.1] evpn e-tree-leaf
[*PE2-GigabitEthernet1/0/0.1] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evrf1 bd-mode
[*PE3-evpn-instance-evrf1] etree enable
[*PE3-evpn-instance-evrf1] quit
[*PE3] interface gigabitethernet3/0/0.1 mode l2
[*PE3-GigabitEthernet3/0/0.1] evpn e-tree-leaf
[*PE3-GigabitEthernet3/0/0.1] quit
[*PE3] commit
Step 9 Verify the configuration.
# Run the display bgp evpn all routing-table command on PE1 to check EVPN routes. The following is sample output:
EVPN-Instance evrf1:
Number of A-D Routes: 2
Network(ESI/EthTagId) NextHop
*>i 0000.0000.0000.0000.0000:4294967295 2.2.2.2
*i 3.3.3.3
EVPN-Instance evrf1:
Number of Mac Routes: 6
Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr) NextHop
*> 0:48:00e0-fc00-0001:0:0.0.0.0 0.0.0.0
*>i 0:48:00e0-fc00-0005:0:0.0.0.0 2.2.2.2
*>i 0:48:00e0-fc00-0004:0:0.0.0.0 3.3.3.3
*>i 0:48:00e0-fc00-0002:0:0.0.0.0 2.2.2.2
*>i 0:48:00e0-fc00-0003:0:0.0.0.0 3.3.3.3
*> 0:48:00e0-fc00-0006:0:0.0.0.0 0.0.0.0
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 3
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*> 0:32:1.1.1.1 127.0.0.1
*>i 0:32:2.2.2.2 2.2.2.2
*>i 0:32:3.3.3.3 3.3.3.3
[~PE1] display bgp evpn all routing-table ad-route 0000.0000.0000.0000.0000:4294967295
BGP local router ID : 10.2.1.2
Local AS number : 100
Total routes of Route Distinguisher(2.2.2.2:0): 1
BGP routing table entry information of 0000.0000.0000.0000.0000:4294967295:
From: 2.2.2.2 (2.2.2.2)
Route Duration: 0d01h27m52s
Relay IP Nexthop: 10.2.1.1
Relay Tunnel Out-Interface: LDP LSP
Original nexthop: 2.2.2.2
Qos information : 0x0
Ext-Community: RT <11 : 1>, E-Tree <0 : 0 : 32915>
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 1
Route Type: 1 (Ethernet Auto-Discovery (A-D) route)
ESI: 0000.0000.0000.0000.0000, Ethernet Tag ID: 4294967295
Not advertised to any peer yet
EVPN-Instance evrf1:
Number of A-D Routes: 2
BGP routing table entry information of 0000.0000.0000.0000.0000:4294967295:
Route Distinguisher: 2.2.2.2:0
Remote-Cross route
From: 2.2.2.2 (2.2.2.2)
Route Duration: 0d01h27m52s
Relay Tunnel Out-Interface: LDP LSP
Original nexthop: 2.2.2.2
Qos information : 0x0
Ext-Community: RT <11 : 1>, E-Tree <0 : 0 : 32915>
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 1
Route Type: 1 (Ethernet Auto-Discovery (A-D) route)
ESI: 0000.0000.0000.0000.0000, Ethernet Tag ID: 4294967295
Not advertised to any peer yet
EVPN-Instance evrf1:
Number of Mac Routes: 1
BGP routing table entry information of 0:48:00e0-fc00-0005:0:0.0.0.0:
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:1
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
mpls ldp
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
evpn source-address 1.1.1.1
#
return
● PE2 configuration file
#
sysname PE2
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:1
etree enable
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
mpls lsr-id 2.2.2.2
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
mpls ldp
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
evpn e-tree-leaf
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
evpn source-address 2.2.2.2
#
return
● PE3 configuration file
#
sysname PE3
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:1
etree enable
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
mpls lsr-id 3.3.3.3
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
evpn e-tree-leaf
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 2.2.2.2 enable
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
evpn source-address 3.3.3.3
#
return
Networking Requirements
On the network shown in Figure 3-160, to enable different sites to communicate
with each other over the backbone network, EVPN is deployed on the network so
that PEs can exchange EVPN routes to transmit service traffic. Two EVPN instances
named evrf1 and evrf2 are configured on PE1. The EVPN instance named evrf1 is
configured on PE2, and the EVPN instance named evrf2 is configured on PE3. To
allow each PE to receive only desired routes and minimize system resource
consumption in processing unwanted routes, EVPN ORF can be configured.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When you configure EVPN ORF, note the following:
● For the same EVPN instance, the export VPN target list of a site shares VPN
targets with the import VPN target lists of the other sites; the import VPN
target list of a site shares VPN targets with the export VPN target lists of the
other sites.
● Using the local loopback interface address of each PE as the source address is
recommended.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each PE interface, including the loopback interfaces.
2. Configure a routing protocol on each PE to ensure Layer 3 communication.
OSPF is used in this example.
3. Configure MPLS LDP on each PE.
4. Configure EVPN instances on the PEs and bind each EVPN instance to a BD.
5. Configure a source address on each PE.
6. Configure each PE's sub-interface that connects to a CE.
7. Configure an ESI for each PE interface that connects to a CE.
8. Configure the CEs and PEs to communicate.
9. Configure BGP-EVPN peer relationships between the PEs and RR, and
configure the PEs as RR clients.
10. Configure EVPN ORF on each device.
Data Preparation
To complete the configuration, you need the following data:
● PE1's EVPN instance names: evrf1 and evrf2; PE2's EVPN instance name: evrf1;
PE3's EVPN instance name: evrf2
● evrf1's RD (100:1) and RT (1:1); evrf2's RD (100:2) and RT (2:2)
Procedure
Step 1 Assign an IP address to each PE interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
Step 2 Configure a routing protocol on each PE to ensure Layer 3 communication. OSPF
is used in this example.
For configuration details, see Configuration Files in this section.
Step 3 Configure MPLS LDP on each PE.
For configuration details, see Configuration Files in this section.
Step 4 Configure EVPN instances on the PEs and bind each EVPN instance to a BD.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-evpn-instance-evrf1] vpn-target 1:1
[*PE1-evpn-instance-evrf1] quit
[*PE1] evpn vpn-instance evrf2 bd-mode
[*PE1-evpn-instance-evrf2] route-distinguisher 100:2
[*PE1-evpn-instance-evrf2] vpn-target 2:2
[*PE1-evpn-instance-evrf2] quit
[*PE1] bridge-domain 10
[*PE1-bd10] evpn binding vpn-instance evrf1
[*PE1-bd10] quit
[*PE1] bridge-domain 20
[*PE1-bd20] evpn binding vpn-instance evrf2
[*PE1-bd20] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] route-distinguisher 100:1
[*PE2-evpn-instance-evrf1] vpn-target 1:1
[*PE2-evpn-instance-evrf1] quit
[*PE2] bridge-domain 10
[*PE2-bd10] evpn binding vpn-instance evrf1
[*PE2-bd10] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evrf2 bd-mode
[*PE3-evpn-instance-evrf2] route-distinguisher 100:2
[*PE3-evpn-instance-evrf2] vpn-target 2:2
[*PE3-evpn-instance-evrf2] quit
[*PE3] bridge-domain 20
[*PE3-bd20] evpn binding vpn-instance evrf2
[*PE3-bd20] quit
[*PE3] commit
Step 5 Configure a source address on each PE.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 2.2.2.2
[*PE2] commit
# Configure PE3.
[~PE3] evpn source-address 4.4.4.4
[*PE3] commit
Step 6 Configure each PE's sub-interface that connects to a CE.
# Configure PE1.
[~PE1] interface gigabitethernet1/0/0.1 mode l2
[*PE1-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
[*PE1-GigabitEthernet1/0/0.1] rewrite pop single
[*PE1-GigabitEthernet1/0/0.1] bridge-domain 10
[*PE1-GigabitEthernet1/0/0.1] quit
[*PE1] interface gigabitethernet1/0/0.2 mode l2
[*PE1-GigabitEthernet1/0/0.2] encapsulation dot1q vid 20
[*PE1-GigabitEthernet1/0/0.2] rewrite pop single
[*PE1-GigabitEthernet1/0/0.2] bridge-domain 20
[*PE1-GigabitEthernet1/0/0.2] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface gigabitethernet1/0/0.1 mode l2
[*PE2-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
[*PE2-GigabitEthernet1/0/0.1] rewrite pop single
[*PE2-GigabitEthernet1/0/0.1] bridge-domain 10
[*PE2-GigabitEthernet1/0/0.1] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface gigabitethernet2/0/0.1 mode l2
[*PE3-GigabitEthernet2/0/0.1] encapsulation dot1q vid 20
[*PE3-GigabitEthernet2/0/0.1] rewrite pop single
[*PE3-GigabitEthernet2/0/0.1] bridge-domain 20
[*PE3-GigabitEthernet2/0/0.1] quit
[*PE3] commit
Step 7 Configure an ESI for each PE interface that connects to a CE.
# Configure PE1.
[~PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] esi 0000.1111.1111.1111.1111
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface gigabitethernet1/0/0
[*PE2-GigabitEthernet1/0/0] esi 0000.1111.2222.2222.2222
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface gigabitethernet2/0/0
[*PE3-GigabitEthernet2/0/0] esi 0000.1111.3333.3333.3333
[*PE3-GigabitEthernet2/0/0] quit
[*PE3] commit
Step 8 Configure the CEs and PEs to communicate.
# Configure CE1.
[~CE1] interface gigabitethernet1/0/0.1 mode l2
[*CE1-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
[*CE1-GigabitEthernet1/0/0.1] rewrite pop single
[*CE1-GigabitEthernet1/0/0.1] bridge-domain 10
[*CE1-GigabitEthernet1/0/0.1] quit
[*CE1] interface gigabitethernet1/0/0.2 mode l2
[*CE1-GigabitEthernet1/0/0.2] encapsulation dot1q vid 20
[*CE1-GigabitEthernet1/0/0.2] rewrite pop single
[*CE1-GigabitEthernet1/0/0.2] bridge-domain 20
[*CE1-GigabitEthernet1/0/0.2] quit
[*CE1] commit
# Configure CE2.
[~CE2] interface gigabitethernet1/0/0.1 mode l2
[*CE2-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
[*CE2-GigabitEthernet1/0/0.1] rewrite pop single
[*CE2-GigabitEthernet1/0/0.1] bridge-domain 10
[*CE2-GigabitEthernet1/0/0.1] quit
[*CE2] commit
# Configure CE3.
[~CE3] interface gigabitethernet1/0/0.1 mode l2
[*CE3-GigabitEthernet1/0/0.1] encapsulation dot1q vid 20
[*CE3-GigabitEthernet1/0/0.1] rewrite pop single
[*CE3-GigabitEthernet1/0/0.1] bridge-domain 20
[*CE3-GigabitEthernet1/0/0.1] quit
[*CE3] commit
Step 9 Configure BGP-EVPN peer relationships between the PEs and RR, and configure
the PEs as RR clients.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 3.3.3.3 as-number 100
[*PE2-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 3.3.3.3 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] peer 3.3.3.3 as-number 100
[*PE3-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 3.3.3.3 enable
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
# Run the display bgp evpn peer command on the RR. The command output shows
that the BGP peer relationships between the RR and the PEs are in the
Established state.
[~RR] display bgp evpn peer
# Run the display bgp evpn all routing-table peer 2.2.2.2 advertised-routes
and display bgp evpn all routing-table peer 4.4.4.4 advertised-routes
commands on the RR to view routes advertised to PE2 and PE3.
[~RR] display bgp evpn all routing-table peer 2.2.2.2 advertised-routes
Local AS number : 100
The command output shows that the RR reflects all routes to PE2 and PE3.
However, PE2 and PE3 do not need to receive all of these routes. To resolve this
issue, configure EVPN ORF on each device.
Step 10 Configure EVPN ORF on each device.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv4-family vpn-target
[*PE1-bgp-af-vpn-target] peer 3.3.3.3 enable
[*PE1-bgp-af-vpn-target] quit
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] vpn-orf enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] ipv4-family vpn-target
[*PE2-bgp-af-vpn-target] peer 3.3.3.3 enable
[*PE2-bgp-af-vpn-target] quit
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] vpn-orf enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] ipv4-family vpn-target
[*PE3-bgp-af-vpn-target] peer 3.3.3.3 enable
[*PE3-bgp-af-vpn-target] quit
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] vpn-orf enable
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
Run the display bgp evpn all routing-table peer 2.2.2.2 advertised-routes and
display bgp evpn all routing-table peer 4.4.4.4 advertised-routes commands on
the RR again. The command output shows that the RR has advertised only
requested routes to PE2 and PE3.
[~RR] display bgp evpn all routing-table peer 2.2.2.2 advertised-routes
Local AS number : 100
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 100:2
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
esi 0000.1111.1111.1111.1111
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/0.2 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
vpn-orf enable
peer 3.3.3.3 enable
#
ipv4-family vpn-target
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
evpn source-address 1.1.1.1
#
return
● PE2 configuration file
#
sysname PE2
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 2.2.2.2
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
esi 0000.1111.2222.2222.2222
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
vpn-orf enable
peer 3.3.3.3 enable
#
ipv4-family vpn-target
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.2.1.0 0.0.0.255
#
evpn source-address 2.2.2.2
#
return
● PE3 configuration file
#
sysname PE3
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 100:2
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
mpls lsr-id 4.4.4.4
#
mpls
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
esi 0000.1111.3333.3333.3333
#
bridge-domain 10
#
return
3.2.35.8 Example for Configuring a DCI Scenario with an E2E VXLAN EVPN
Deployed on a Gateway
This section provides an example for configuring a DCI scenario with an E2E
VXLAN EVPN deployed on a gateway. In this example, an E2E VXLAN tunnel is
established between DC-GWs, and an L3VPN is deployed over the DCI backbone
network to transmit VXLAN packets.
Networking Requirements
In Figure 3-161, data center gateway devices GW1 and GW2 are connected to the
DCI backbone network. To allow inter-data center VM communication (for
example, VMa1 and VMb2 communication), BGP/MPLS IP VPN functions must be
deployed on the DCI backbone network, and a VXLAN tunnel must be established
between GW1 and GW2.
LoopBack1 1.1.1.1/32
LoopBack1 2.2.2.2/32
LoopBack1 3.3.3.3/32
LoopBack1 4.4.4.4/32
LoopBack1 7.7.7.7/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a VXLAN tunnel between the distributed gateways GW1 and GW2.
2. Enable OSPF on the DCI backbone network for DCI-PEs to communicate with
each other.
3. Configure an MPLS TE tunnel on the DCI backbone network.
4. Configure a VPN instance on each DCI-PE and bind the interface connected to
a GW to the VPN instance.
5. Establish an MP-IBGP peer relationship between DCI-PEs for them to
exchange VPNv4 routes.
6. Establish an EBGP peer relationship between each DCI-PE and its connected
GW for them to exchange VPNv4 routes.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the DCI-PEs and P
● Route distinguisher (RD) of a VPN instance
● VPN target
Procedure
Step 1 Assign an IP address to each interface on each node, and configure loopback
interface addresses.
For configuration details, see Configuration Files in this section.
Step 2 Configure a VXLAN tunnel between the distributed gateways GW1 and GW2.
The following uses the current product model as an example. For configuration
details, see Configuration Files in this section. For other product models, see
the related product documentation.
Step 3 Configure an IGP on the DCI backbone network. OSPF is used as an IGP in this
example.
For configuration details, see Configuration Files in this section.
Step 4 Configure an MPLS TE tunnel on the DCI backbone network.
For configuration details, see Configuration Files in this section.
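Although the detailed TE configuration appears only in the configuration files, the key commands on a DCI-PE can be sketched as follows. This sketch assembles the global TE settings shown for the P node and the Tunnel10 settings shown in the configuration files, using DCI-PE1's LSR ID (1.1.1.1) and its tunnel to DCI-PE2 (destination 3.3.3.3, tunnel ID 100):

```
mpls lsr-id 1.1.1.1
#
mpls
 mpls te
 mpls te cspf
 mpls rsvp-te
#
interface Tunnel10
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 3.3.3.3
 mpls te tunnel-id 100
```

In addition, mpls, mpls te, and mpls rsvp-te must be enabled on each physical interface along the tunnel path, as shown in the P configuration file.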
Step 5 Configure VPN instances on DCI-PEs, connect GWs to the DCI-PEs, and apply a
tunnel policy.
# Configure DCI-PE1.
[~DCI-PE1] tunnel-policy te-lsp1
[*DCI-PE1-tunnel-policy-te-lsp1] tunnel select-seq cr-lsp load-balance-number 1
[*DCI-PE1-tunnel-policy-te-lsp1] quit
[*DCI-PE1] ip vpn-instance vpn1
[*DCI-PE1-vpn-instance-vpn1] ipv4-family
[*DCI-PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*DCI-PE1-vpn-instance-vpn1-af-ipv4] tnl-policy te-lsp1
[*DCI-PE1-vpn-instance-vpn1-af-ipv4] vpn-target 111:1 both
[*DCI-PE1-vpn-instance-vpn1-af-ipv4] quit
[*DCI-PE1-vpn-instance-vpn1] quit
[*DCI-PE1] interface gigabitethernet 1/0/0
[*DCI-PE1-GigabitEthernet1/0/0] ip binding vpn-instance vpn1
[*DCI-PE1-GigabitEthernet1/0/0] ip address 192.168.20.1 24
[*DCI-PE1-GigabitEthernet1/0/0] quit
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] tunnel-policy te-lsp1
[*DCI-PE2-tunnel-policy-te-lsp1] tunnel select-seq cr-lsp load-balance-number 1
[*DCI-PE2-tunnel-policy-te-lsp1] quit
[*DCI-PE2] ip vpn-instance vpn1
[*DCI-PE2-vpn-instance-vpn1] ipv4-family
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] route-distinguisher 200:1
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] tnl-policy te-lsp1
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] vpn-target 111:1 both
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] quit
[*DCI-PE2-vpn-instance-vpn1] quit
[*DCI-PE2] interface gigabitethernet 1/0/0
[*DCI-PE2-GigabitEthernet1/0/0] ip binding vpn-instance vpn1
[*DCI-PE2-GigabitEthernet1/0/0] ip address 192.168.30.1 24
[*DCI-PE2-GigabitEthernet1/0/0] quit
[*DCI-PE2] commit
Step 6 Set up an EBGP peer relationship between each DCI-PE and its connected GW.
# Configure DCI-PE1.
[~DCI-PE1] bgp 100
[*DCI-PE1-bgp] ipv4-family vpn-instance vpn1
[*DCI-PE1-bgp-vpn1] peer 192.168.20.2 as-number 65410
[*DCI-PE1-bgp-vpn1] quit
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] bgp 100
[*DCI-PE2-bgp] ipv4-family vpn-instance vpn1
[*DCI-PE2-bgp-vpn1] peer 192.168.30.2 as-number 65420
[*DCI-PE2-bgp-vpn1] quit
[*DCI-PE2] commit
Step 7 Establish an MP-IBGP peer relationship between the DCI-PEs for them to
exchange VPNv4 routes.
# Configure DCI-PE1.
[~DCI-PE1] bgp 100
[*DCI-PE1-bgp] peer 3.3.3.3 as-number 100
[*DCI-PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*DCI-PE1-bgp] ipv4-family vpnv4
[*DCI-PE1-bgp-af-vpnv4] peer 3.3.3.3 enable
[*DCI-PE1-bgp-af-vpnv4] quit
[*DCI-PE1-bgp] quit
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] bgp 100
[*DCI-PE2-bgp] peer 1.1.1.1 as-number 100
[*DCI-PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
[*DCI-PE2-bgp] ipv4-family vpnv4
[*DCI-PE2-bgp-af-vpnv4] peer 1.1.1.1 enable
[*DCI-PE2-bgp-af-vpnv4] quit
[*DCI-PE2-bgp] quit
[*DCI-PE2] commit
----End
Configuration Files
● GW1 configuration file
#
sysname GW1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 11:11
apply-label per-instance
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
vxlan vni 100
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:1
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
bridge-domain 10
vxlan vni 10
evpn binding vpn-instance evrf1
#
interface Vbdif10
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
arp collect host enable
vxlan anycast-gateway enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.20.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet3/0/0
undo shutdown
#
interface GigabitEthernet3/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
interface Nve1
source 4.4.4.4
vni 10 head-end peer-list protocol bgp
#
bgp 65410
peer 192.168.20.1 as-number 100
peer 7.7.7.7 as-number 65420
peer 7.7.7.7 connect-interface LoopBack1
peer 7.7.7.7 ebgp-max-hop 255
#
ipv4-family unicast
peer 192.168.20.1 enable
network 4.4.4.4 255.255.255.255
#
l2vpn-family evpn
policy vpn-target
peer 7.7.7.7 enable
peer 7.7.7.7 advertise encap-type vxlan
peer 7.7.7.7 advertise irb
#
return
● Device3 configuration file
#
sysname Device3
#
vlan batch 10
#
interface GigabitEthernet1/0/0
portswitch
undo shutdown
port link-type access
port default vlan 10
#
interface GigabitEthernet2/0/0
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
return
● DCI-PE1 configuration file
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 100
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
peer 3.3.3.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpn1
peer 192.168.20.2 as-number 65410
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 192.168.1.0 0.0.0.255
mpls-te enable
#
tunnel-policy te-lsp1
tunnel select-seq cr-lsp load-balance-number 1
#
return
● P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 192.168.1.0 0.0.0.255
network 192.168.10.0 0.0.0.255
mpls-te enable
#
return
3.2.35.9 Example for Configuring a DCI Scenario with a VLAN Layer 3 Sub-
Interface Accessing a Common L3VPN
This section provides an example for configuring a DCI scenario with a VLAN Layer
3 sub-interface accessing a common L3VPN. In this example, a data center
gateway is connected to the DCI network through a VLAN Layer 3 sub-interface,
and a common L3VPN is deployed over the DCI network to implement data center
interconnection.
Networking Requirements
When traditional DCs are interconnected through a DCI network, an underlay
VLAN can access the DCI network through a Layer 3 gateway.
GWs and DCI-PEs are separately deployed. Each DCI-PE considers the GW of a
data center as a CE, uses a Layer 3 VPN routing protocol to receive VM host routes
from the data center, and saves and maintains the routes.
If VXLAN is deployed in the data centers, the underlay VLAN Layer 3 access
solution can be used. In Figure 3-162, to allow intra-data center VM
communication, a VXLAN tunnel must be established within each data center. To
allow inter-data center VM communication (for example, VMa1 and VMb2
communication), BGP/MPLS IP VPN functions must be deployed on the DCI
backbone network, and a Layer 3 Ethernet sub-interface must be configured on
each DCI-PE, added to the same VLAN, and bound to the VPN instance of each
DCI-PE.
Interface 1, interface 2, and sub-interface 1.1 in this example represent GE 1/0/0, GE 2/0/0,
and GE 1/0/0.1, respectively.
LoopBack1 1.1.1.1/32
LoopBack1 2.2.2.2/32
LoopBack1 3.3.3.3/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable OSPF on the DCI backbone network for DCI-PEs to communicate with
each other.
2. Configure an MPLS TE tunnel on the DCI backbone network.
3. Configure a VPN instance on each DCI-PE, bind the VLAN Layer 3 sub-interface connected to the GW to the VPN instance, and apply a tunnel policy.
4. Set up an EBGP peer relationship between each DCI-PE and its connected GW.
5. Set up an MP-IBGP peer relationship between the DCI-PEs to exchange VPNv4 routes.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the DCI-PEs and P
● RD of a VPN instance
● VPN target
Procedure
Step 1 Assign an IP address to each interface on each node, and configure loopback
interface addresses.
For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the DCI backbone network. OSPF is used as an IGP in this
example.
For configuration details, see Configuration Files in this section.
Step 3 Configure an MPLS TE tunnel on the DCI backbone network.
For configuration details, see Configuration Files in this section.
Step 4 Configure VPN instances on DCI-PEs, connect GWs to the DCI-PEs, and apply a
tunnel policy.
# Configure DCI-PE1.
[~DCI-PE1] tunnel-policy te-lsp1
[*DCI-PE1-tunnel-policy-te-lsp1] tunnel select-seq cr-lsp load-balance-number 1
[*DCI-PE1-tunnel-policy-te-lsp1] quit
[*DCI-PE1] ip vpn-instance vpn1
[*DCI-PE1-vpn-instance-vpn1] ipv4-family
[*DCI-PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*DCI-PE1-vpn-instance-vpn1-af-ipv4] tnl-policy te-lsp1
[*DCI-PE1-vpn-instance-vpn1-af-ipv4] vpn-target 111:1 both
[*DCI-PE1-vpn-instance-vpn1-af-ipv4] quit
[*DCI-PE1-vpn-instance-vpn1] quit
[*DCI-PE1] interface gigabitethernet 1/0/0.1
[*DCI-PE1-GigabitEthernet1/0/0.1] vlan-type dot1q 10
[*DCI-PE1-GigabitEthernet1/0/0.1] ip binding vpn-instance vpn1
[*DCI-PE1-GigabitEthernet1/0/0.1] ip address 192.168.20.1 24
[*DCI-PE1-GigabitEthernet1/0/0.1] quit
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] tunnel-policy te-lsp1
[*DCI-PE2-tunnel-policy-te-lsp1] tunnel select-seq cr-lsp load-balance-number 1
[*DCI-PE2-tunnel-policy-te-lsp1] quit
[*DCI-PE2] ip vpn-instance vpn1
[*DCI-PE2-vpn-instance-vpn1] ipv4-family
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] route-distinguisher 200:1
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] tnl-policy te-lsp1
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] vpn-target 111:1 both
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] quit
[*DCI-PE2-vpn-instance-vpn1] quit
[*DCI-PE2] interface gigabitethernet 1/0/0.1
[*DCI-PE2-GigabitEthernet1/0/0.1] vlan-type dot1q 10
[*DCI-PE2-GigabitEthernet1/0/0.1] ip binding vpn-instance vpn1
[*DCI-PE2-GigabitEthernet1/0/0.1] ip address 192.168.30.1 24
[*DCI-PE2-GigabitEthernet1/0/0.1] quit
[*DCI-PE2] commit
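Two values configured in this step do the heavy lifting: the route distinguisher (RD) keeps overlapping customer prefixes globally unique, and the VPN target (a BGP extended community) controls which VPN instances import a route. The following Python sketch (hypothetical helper names, not any device API) illustrates both rules:

```python
import struct

def encode_rd_type0(asn: int, assigned: int) -> bytes:
    """Type-0 route distinguisher (RFC 4364): type field, 2-byte AS number,
    4-byte assigned value, 8 bytes in total."""
    return struct.pack("!HHI", 0, asn, assigned)

def vrf_imports(route_rts: set, vrf_import_rts: set) -> bool:
    """A VPNv4 route enters a VPN instance if it carries at least one route
    target that the instance imports ('vpn-target 111:1 import-extcommunity')."""
    return bool(route_rts & vrf_import_rts)

# RDs 100:1 and 200:1 keep the two PEs' routes distinct even for overlapping
# customer prefixes ...
rd1, rd2 = encode_rd_type0(100, 1), encode_rd_type0(200, 1)
print(rd1 != rd2)                          # True
# ... while the shared VPN target 111:1 lets each PE import the other's routes.
print(vrf_imports({"111:1"}, {"111:1"}))   # True
print(vrf_imports({"222:1"}, {"111:1"}))   # False
```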
Step 5 Set up an EBGP peer relationship between each DCI-PE and its connected GW.
# Configure DCI-PE1.
[~DCI-PE1] bgp 100
[*DCI-PE1-bgp] ipv4-family vpn-instance vpn1
[*DCI-PE1-bgp-vpn1] peer 192.168.20.2 as-number 65410
[*DCI-PE1-bgp-vpn1] quit
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] bgp 100
[*DCI-PE2-bgp] ipv4-family vpn-instance vpn1
[*DCI-PE2-bgp-vpn1] peer 192.168.30.2 as-number 65420
[*DCI-PE2-bgp-vpn1] quit
[*DCI-PE2] commit
Step 6 Set up an MP-IBGP peer relationship between the DCI-PEs to exchange VPNv4
routes. The following shows DCI-PE2; configure DCI-PE1 in the same way (peer
3.3.3.3). For details, see Configuration Files in this section.
# Configure DCI-PE2.
[~DCI-PE2] bgp 100
[*DCI-PE2-bgp] peer 1.1.1.1 as-number 100
[*DCI-PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
[*DCI-PE2-bgp] ipv4-family vpnv4
[*DCI-PE2-bgp-af-vpnv4] peer 1.1.1.1 enable
[*DCI-PE2-bgp-af-vpnv4] quit
[*DCI-PE2-bgp] quit
[*DCI-PE2] commit
----End
Configuration Files
● DCI-PE1 configuration file
#
sysname DCI-PE1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy te-lsp1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip binding vpn-instance vpn1
ip address 192.168.20.1 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 100
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
peer 3.3.3.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpn1
peer 192.168.20.2 as-number 65410
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 192.168.1.0 0.0.0.255
mpls-te enable
#
tunnel-policy te-lsp1
tunnel select-seq cr-lsp load-balance-number 1
#
return
● P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 192.168.1.0 0.0.0.255
network 192.168.10.0 0.0.0.255
mpls-te enable
#
return
● DCI-PE2 configuration file
#
sysname DCI-PE2
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 200:1
apply-label per-instance
tnl-policy te-lsp1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls te cspf
mpls rsvp-te
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1
undo shutdown
vlan-type dot1q 10
ip binding vpn-instance vpn1
ip address 192.168.30.1 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.10.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 100
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
peer 1.1.1.1 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpn-instance vpn1
peer 192.168.30.2 as-number 65420
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 192.168.10.0 0.0.0.255
mpls-te enable
#
tunnel-policy te-lsp1
tunnel select-seq cr-lsp load-balance-number 1
#
return
3.2.35.10 Example for Configuring a DCI Scenario with a VXLAN EVPN L3VPN Accessing a Common L3VPN
Networking Requirements
In Figure 3-163, data center gateway devices GW1 and GW2 are connected to the
DCI backbone network. To allow inter-data center VM communication (for
example, VMa1 and VMb2 communication), BGP/MPLS IP VPN functions must be
deployed on the DCI backbone network, and EVPN and VXLAN tunnels must be
deployed between the GW and DCI-PE to transmit VM host IP route information.
Figure 3-163 Configuring a DCI scenario with a VXLAN EVPN L3VPN accessing a
common L3VPN
(In Figure 3-163, DCI-PE1 uses LoopBack2 11.11.11.11/32, the RR uses LoopBack1 2.2.2.2/32, and DCI-PE2 uses LoopBack2 33.33.33.33/32.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable OSPF on the DCI backbone network for DCI-PEs to communicate with
each other.
2. Configure an MPLS TE tunnel on the DCI backbone network.
3. Configure static routes on the DCI-PEs destined for the loopback interface
addresses of the DC-GWs.
4. Configure an EVPN instance and a BD on each DCI-PE.
5. Configure a source address on each DCI-PE.
6. Configure VXLAN tunnels between DCI-PEs and GWs.
7. Configure a VPN instance on each DCI-PE and bind the interface connected to
a GW to the VPN instance.
8. Configure an MP-IBGP peer relationship between each DCI-PE and RR to
exchange VPNv4 routes and configure RR in the figure as a route reflector.
9. Configure the route regeneration function on each DCI-PE.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the DCI-PEs and RR
● Name (evrf1), RDs (10:1 and 20:1), and VPN target (11:1) of the EVPN instance on each DCI-PE
● BD ID (10) and VXLAN VNIs (5010, 5020, and 555)
● Name (vpn1), RDs (11:11 and 22:22), and VPN targets (1:1 and 11:1) of the VPN instance on each DCI-PE
Procedure
Step 1 Assign an IP address to each interface on each node, and configure loopback
interface addresses.
For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the DCI backbone network. OSPF is used as an IGP in this
example.
For configuration details, see Configuration Files in this section.
Step 3 Configure an MPLS TE tunnel on the DCI backbone network.
For configuration details, see Configuration Files in this section.
Step 4 On each DCI-PE, configure a static route destined for the loopback interface of the
connected DC-GW.
For configuration details, see Configuration Files in this section.
Step 5 Configure an EVPN instance and a BD on each DCI-PE.
# Configure DCI-PE1.
[~DCI-PE1] evpn vpn-instance evrf1 bd-mode
[*DCI-PE1-evpn-instance-evrf1] route-distinguisher 10:1
[*DCI-PE1-evpn-instance-evrf1] vpn-target 11:1 both
[*DCI-PE1-evpn-instance-evrf1] quit
[*DCI-PE1] bridge-domain 10
[*DCI-PE1-bd10] vxlan vni 5010 split-horizon-mode
[*DCI-PE1-bd10] evpn binding vpn-instance evrf1
[*DCI-PE1-bd10] esi 0000.1111.1111.4444.5555
[*DCI-PE1-bd10] quit
[*DCI-PE1] interface GigabitEthernet 1/0/0.1 mode l2
[*DCI-PE1-GigabitEthernet1/0/0.1] encapsulation qinq
[*DCI-PE1-GigabitEthernet1/0/0.1] bridge-domain 10
[*DCI-PE1-GigabitEthernet1/0/0.1] quit
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] evpn vpn-instance evrf1 bd-mode
[*DCI-PE2-evpn-instance-evrf1] route-distinguisher 20:1
[*DCI-PE2-evpn-instance-evrf1] vpn-target 11:1 both
[*DCI-PE2-evpn-instance-evrf1] quit
[*DCI-PE2] bridge-domain 10
[*DCI-PE2-bd10] vxlan vni 5020 split-horizon-mode
[*DCI-PE2-bd10] evpn binding vpn-instance evrf1
[*DCI-PE2-bd10] esi 0000.1111.3333.4444.5555
[*DCI-PE2-bd10] quit
[*DCI-PE2] interface GigabitEthernet 1/0/0.1 mode l2
[*DCI-PE2-GigabitEthernet1/0/0.1] encapsulation qinq
[*DCI-PE2-GigabitEthernet1/0/0.1] bridge-domain 10
[*DCI-PE2-GigabitEthernet1/0/0.1] quit
[*DCI-PE2] commit
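The esi command under each BD sets the 10-byte Ethernet segment identifier that EVPN carries in its routes. A small Python sketch (hypothetical parser, not device code) of the VRP dotted format:

```python
def parse_esi(dotted: str) -> bytes:
    """Parse a VRP dotted ESI (five groups of four hex digits) into the
    10-byte Ethernet Segment Identifier carried in EVPN routes (RFC 7432)."""
    groups = dotted.split(".")
    if len(groups) != 5 or any(len(g) != 4 for g in groups):
        raise ValueError("expected xxxx.xxxx.xxxx.xxxx.xxxx")
    return bytes.fromhex("".join(groups))

esi = parse_esi("0000.1111.1111.4444.5555")  # ESI configured on DCI-PE1
print(len(esi))   # 10
print(esi[0])     # 0 -> ESI type 0 (manually configured)
```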
# Configure an EVPN source address on DCI-PE2. Configure DCI-PE1 with source
address 1.1.1.1 in the same way; for details, see Configuration Files in this
section.
[~DCI-PE2] evpn source-address 3.3.3.3
[*DCI-PE2] commit
# Establish a BGP EVPN peer relationship between DCI-PE2 and its GW (5.5.5.5).
Configure DCI-PE1 and its GW (4.4.4.4) in the same way; for details, see
Configuration Files in this section.
[~DCI-PE2] bgp 100
[*DCI-PE2-bgp] peer 5.5.5.5 as-number 65420
[*DCI-PE2-bgp] peer 5.5.5.5 ebgp-max-hop 255
[*DCI-PE2-bgp] peer 5.5.5.5 connect-interface loopback 2
[*DCI-PE2-bgp] l2vpn-family evpn
[*DCI-PE2-bgp-af-evpn] peer 5.5.5.5 enable
[*DCI-PE2-bgp-af-evpn] peer 5.5.5.5 advertise encap-type vxlan
[*DCI-PE2-bgp-af-evpn] quit
[*DCI-PE2-bgp] quit
[*DCI-PE2] commit
# Configure a VPN instance on DCI-PE2 and bind VBDIF10 to it. Configure
DCI-PE1 in the same way; for details, see Configuration Files in this section.
[~DCI-PE2] ip vpn-instance vpn1
[*DCI-PE2-vpn-instance-vpn1] vxlan vni 555
[*DCI-PE2-vpn-instance-vpn1] ipv4-family
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] route-distinguisher 22:22
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 both
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 both evpn
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] quit
[*DCI-PE2-vpn-instance-vpn1] quit
[*DCI-PE2] interface Vbdif10
[*DCI-PE2-Vbdif10] ip binding vpn-instance vpn1
[*DCI-PE2-Vbdif10] ip address 10.20.10.1 255.255.255.0
[*DCI-PE2-Vbdif10] arp collect host enable
[*DCI-PE2-Vbdif10] quit
[*DCI-PE2] commit
# Configure DCI-PE1.
[~DCI-PE1] interface nve 1
[*DCI-PE1-Nve1] source 11.11.11.11
[*DCI-PE1-Nve1] quit
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] interface nve 1
[*DCI-PE2-Nve1] source 33.33.33.33
[*DCI-PE2-Nve1] quit
[*DCI-PE2] commit
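The source addresses configured under Nve1 are the local VTEP addresses of the VXLAN tunnels. For reference, here is a Python sketch (illustrative only) of the 8-byte VXLAN header defined in RFC 7348, which these tunnels prepend to each Ethernet frame:

```python
import struct

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags byte with the
    VNI-valid bit (0x08), 3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!BBBBI", 0x08, 0, 0, 0, vni << 8)

def unpack_vni(header: bytes) -> int:
    flags, _, _, _, tail = struct.unpack("!BBBBI", header)
    assert flags & 0x08, "VNI-valid flag not set"
    return tail >> 8

hdr = pack_vxlan_header(5010)   # VNI bound to BD 10 on DCI-PE1
print(unpack_vni(hdr))          # 5010
```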
# Configure a tunnel policy on DCI-PE2 and apply it to the VPN instance.
Configure DCI-PE1 in the same way; for details, see Configuration Files in this
section.
[~DCI-PE2] tunnel-policy te-lsp1
[*DCI-PE2-tunnel-policy-te-lsp1] tunnel select-seq cr-lsp load-balance-number 1
[*DCI-PE2-tunnel-policy-te-lsp1] quit
[*DCI-PE2] ip vpn-instance vpn1
[*DCI-PE2-vpn-instance-vpn1] ipv4-family
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] tnl-policy te-lsp1
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] quit
[*DCI-PE2-vpn-instance-vpn1] quit
[*DCI-PE2] commit
Step 9 Configure an MP-IBGP peer relationship between each DCI-PE and RR to exchange
VPNv4 routes and configure RR in the figure as a route reflector.
# Configure DCI-PE1.
[~DCI-PE1] bgp 100
[*DCI-PE1-bgp] peer 2.2.2.2 as-number 100
[*DCI-PE1-bgp] peer 2.2.2.2 connect-interface loopback 1
[*DCI-PE1-bgp] ipv4-family vpnv4
[*DCI-PE1-bgp-af-vpnv4] peer 2.2.2.2 enable
[*DCI-PE1-bgp-af-vpnv4] quit
[*DCI-PE1-bgp] quit
[*DCI-PE1] commit
# Configure RR.
[~RR] bgp 100
[*RR-bgp] peer 1.1.1.1 as-number 100
[*RR-bgp] peer 1.1.1.1 connect-interface loopback 1
[*RR-bgp] peer 3.3.3.3 as-number 100
[*RR-bgp] peer 3.3.3.3 connect-interface loopback 1
[*RR-bgp] ipv4-family vpnv4
[*RR-bgp-af-vpnv4] peer 1.1.1.1 enable
[*RR-bgp-af-vpnv4] peer 1.1.1.1 reflect-client
[*RR-bgp-af-vpnv4] peer 3.3.3.3 enable
[*RR-bgp-af-vpnv4] peer 3.3.3.3 reflect-client
[*RR-bgp-af-vpnv4] quit
[*RR-bgp] quit
[*RR] commit
# Configure DCI-PE2.
[~DCI-PE2] bgp 100
[*DCI-PE2-bgp] peer 2.2.2.2 as-number 100
[*DCI-PE2-bgp] peer 2.2.2.2 connect-interface loopback 1
[*DCI-PE2-bgp] ipv4-family vpnv4
[*DCI-PE2-bgp-af-vpnv4] peer 2.2.2.2 enable
[*DCI-PE2-bgp-af-vpnv4] quit
[*DCI-PE2-bgp] quit
[*DCI-PE2] commit
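Because each DCI-PE peers only with the RR in the VPNv4 address family, the RR must reflect routes between them. A Python sketch of the RFC 4456 reflection rule (simplified; cluster-ID loop checks are omitted):

```python
def reflect_targets(sender: str, sender_is_client: bool, peers: dict) -> list:
    """RFC 4456 sketch: a route learned from a reflect-client is reflected to
    all other IBGP peers; a route from a non-client goes only to clients."""
    others = {p: is_client for p, is_client in peers.items() if p != sender}
    if sender_is_client:
        return sorted(others)
    return sorted(p for p, is_client in others.items() if is_client)

# Both DCI-PEs are configured as reflect-clients of the RR.
peers = {"1.1.1.1": True, "3.3.3.3": True}
print(reflect_targets("1.1.1.1", True, peers))  # ['3.3.3.3']
```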
Step 10 Configure each DCI-PE to send regenerated EVPN routes to VPNv4 peers and to
send regenerated VPNv4 routes to EVPN peers.
# Configure DCI-PE1.
[~DCI-PE1] bgp 100
[*DCI-PE1-bgp] l2vpn-family evpn
[*DCI-PE1-bgp-af-evpn] peer 4.4.4.4 import reoriginate
[*DCI-PE1-bgp-af-evpn] peer 4.4.4.4 advertise route-reoriginated vpnv4
[*DCI-PE1-bgp-af-evpn] quit
[*DCI-PE1-bgp] ipv4-family vpnv4
[*DCI-PE1-bgp-af-vpnv4] peer 2.2.2.2 import reoriginate
[*DCI-PE1-bgp-af-vpnv4] peer 2.2.2.2 advertise route-reoriginated evpn mac-ip
[*DCI-PE1-bgp-af-vpnv4] peer 2.2.2.2 advertise route-reoriginated evpn ip
[*DCI-PE1-bgp-af-vpnv4] quit
[*DCI-PE1-bgp] ipv4-family vpn-instance vpn1
[*DCI-PE1-bgp-vpn1] advertise l2vpn evpn
[*DCI-PE1-bgp-vpn1] quit
[*DCI-PE1-bgp] quit
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] bgp 100
[*DCI-PE2-bgp] l2vpn-family evpn
[*DCI-PE2-bgp-af-evpn] peer 5.5.5.5 import reoriginate
[*DCI-PE2-bgp-af-evpn] peer 5.5.5.5 advertise route-reoriginated vpnv4
[*DCI-PE2-bgp-af-evpn] quit
[*DCI-PE2-bgp] ipv4-family vpnv4
[*DCI-PE2-bgp-af-vpnv4] peer 2.2.2.2 import reoriginate
[*DCI-PE2-bgp-af-vpnv4] peer 2.2.2.2 advertise route-reoriginated evpn mac-ip
[*DCI-PE2-bgp-af-vpnv4] peer 2.2.2.2 advertise route-reoriginated evpn ip
[*DCI-PE2-bgp-af-vpnv4] quit
[*DCI-PE2-bgp] ipv4-family vpn-instance vpn1
[*DCI-PE2-bgp-vpn1] advertise l2vpn evpn
[*DCI-PE2-bgp-vpn1] quit
[*DCI-PE2-bgp] quit
[*DCI-PE2] commit
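Conceptually, route regeneration (import reoriginate plus advertise route-reoriginated) rebuilds a route received in one address family for advertisement in the other, stamping it with the DCI-PE's own RD, route targets, and next hop. A simplified Python sketch (the Route class, field names, and the host prefix are illustrative, not any real API):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Route:
    family: str      # "evpn" or "vpnv4"
    prefix: str
    rd: str
    rts: tuple
    next_hop: str

def reoriginate_to_vpnv4(evpn_route: Route, local_rd: str,
                         local_rts: tuple, local_addr: str) -> Route:
    """Sketch of route regeneration on a DCI-PE: the host route learned from
    the GW over EVPN is rebuilt in the VPNv4 family with the DCI-PE's own
    RD and route targets, and the DCI-PE itself as next hop."""
    return replace(evpn_route, family="vpnv4", rd=local_rd,
                   rts=local_rts, next_hop=local_addr)

# Hypothetical VM host route learned by DCI-PE1 from GW1 (4.4.4.4).
host = Route("evpn", "10.10.10.100/32", "10:1", ("11:1",), "4.4.4.4")
vpnv4 = reoriginate_to_vpnv4(host, "11:11", ("1:1",), "1.1.1.1")
print(vpnv4.family, vpnv4.rd, vpnv4.next_hop)  # vpnv4 11:11 1.1.1.1
```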
Run the display vxlan tunnel command on DCI-PEs to check information about
the VXLAN tunnel. The following example uses the command output on DCI-PE1.
[~DCI-PE1] display vxlan tunnel
Number of vxlan tunnel : 1
Tunnel ID Source Destination State Type Uptime
-----------------------------------------------------------------------------------
4026531843 11.11.11.11 4.4.4.4 up dynamic 00:51:23
----End
Configuration Files
● DCI-PE1 configuration file
#
sysname DCI-PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:1
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
tnl-policy te-lsp1
vpn-target 1:1 export-extcommunity
vpn-target 11:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity
vpn-target 11:1 import-extcommunity evpn
vxlan vni 555
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
bridge-domain 10
vxlan vni 5010 split-horizon-mode
evpn binding vpn-instance evrf1
esi 0000.1111.1111.4444.5555
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.10.10.1 255.255.255.0
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.20.1 255.255.255.0
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation qinq
bridge-domain 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface LoopBack2
ip address 11.11.11.11 255.255.255.255
#
interface Nve1
source 11.11.11.11
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 100
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 65410
peer 4.4.4.4 ebgp-max-hop 255
peer 4.4.4.4 connect-interface LoopBack2
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
peer 2.2.2.2 import reoriginate
peer 2.2.2.2 advertise route-reoriginated evpn mac-ip
peer 2.2.2.2 advertise route-reoriginated evpn ip
#
ipv4-family vpn-instance vpn1
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 4.4.4.4 enable
peer 4.4.4.4 advertise encap-type vxlan
peer 4.4.4.4 import reoriginate
peer 4.4.4.4 advertise route-reoriginated vpnv4
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 192.168.1.0 0.0.0.255
mpls-te enable
#
ip route-static 4.4.4.4 255.255.255.255 192.168.20.2
#
tunnel-policy te-lsp1
tunnel select-seq cr-lsp load-balance-number 1
#
evpn source-address 1.1.1.1
#
return
● RR configuration file
#
sysname RR
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 3.3.3.3 enable
peer 3.3.3.3 reflect-client
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 192.168.1.0 0.0.0.255
network 192.168.10.0 0.0.0.255
mpls-te enable
#
return
● DCI-PE2 configuration file
#
sysname DCI-PE2
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 20:1
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 22:22
apply-label per-instance
tnl-policy te-lsp1
vpn-target 1:1 export-extcommunity
vpn-target 11:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity
vpn-target 11:1 import-extcommunity evpn
vxlan vni 555
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
bridge-domain 10
vxlan vni 5020 split-horizon-mode
evpn binding vpn-instance evrf1
esi 0000.1111.3333.4444.5555
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.20.10.1 255.255.255.0
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.30.1 255.255.255.0
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation qinq
bridge-domain 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.10.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
interface LoopBack2
ip address 33.33.33.33 255.255.255.255
#
interface Nve1
source 33.33.33.33
#
interface Tunnel10
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 100
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 5.5.5.5 as-number 65420
peer 5.5.5.5 ebgp-max-hop 255
peer 5.5.5.5 connect-interface LoopBack2
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 5.5.5.5 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
peer 2.2.2.2 import reoriginate
peer 2.2.2.2 advertise route-reoriginated evpn mac-ip
peer 2.2.2.2 advertise route-reoriginated evpn ip
#
ipv4-family vpn-instance vpn1
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 5.5.5.5 enable
peer 5.5.5.5 advertise encap-type vxlan
peer 5.5.5.5 import reoriginate
peer 5.5.5.5 advertise route-reoriginated vpnv4
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 192.168.10.0 0.0.0.255
mpls-te enable
#
ip route-static 5.5.5.5 255.255.255.255 192.168.30.2
#
tunnel-policy te-lsp1
tunnel select-seq cr-lsp load-balance-number 1
#
evpn source-address 3.3.3.3
#
return
Networking Requirements
On the network shown in Figure 3-164, CE1 and CE2 belong to the same EVPN.
CE1 is connected to PE1 in AS100, and CE2 is connected to PE2 in AS200. A single-
hop MP-EBGP peer relationship is set up between the ASBRs to transmit all inter-
AS EVPN routing information.
Configuration Notes
When configuring inter-AS EVPN Option B with basic networking, ensure that an
MP-EBGP peer relationship is established between ASBR1 and ASBR2, and the
ASBRs do not filter received EVPN routes based on VPN targets.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 On the MPLS backbone networks in AS100 and AS200, configure an IGP to
interconnect the PE and ASBR on each network.
This example uses OSPF as the IGP. For configuration details, see Configuration
Files in this section.
The 32-bit IP address of the loopback interface that functions as the LSR ID needs to be
advertised by using OSPF.
After the configurations are complete, the OSPF neighbor relationship can be
established between the ASBR and PE in the same AS. Run the display ospf peer
command. The command output shows that the neighbor relationship is in the
Full state.
The ASBR and PE in the same AS can learn and successfully ping the IP address of
each other's loopback interface.
Step 2 Configure MPLS and MPLS LDP both globally and per interface on each node of
the MPLS backbone networks in AS100 and AS200 and set up LDP LSPs.
For configuration details, see Configuration Files in this section.
Step 3 Configure EVPN functions on PE1 and PE2.
The VPN targets of the EVPN instances on PE1 and PE2 must match.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf_1
route-distinguisher 100:1
Networking Requirements
On the network shown in Figure 3-165, CE1 and CE2 belong to the same VPN.
CE1 is connected to PE1 in AS100, and CE2 is connected to PE2 in AS200. An
EBGP-EVPN peer relationship is configured between ASBRs 1 and 2 to exchange
EVPN routes so that an inter-AS Option B EVPN can carry Layer 3 service traffic.
Precautions
When configuring an MPLS EVPN L3VPN in E-LAN Option B mode, ensure that an
EBGP-EVPN peer relationship is configured between ASBR1 and ASBR2 and that the
ASBRs do not filter received EVPN routes based on VPN targets.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure an IGP on the MPLS backbone networks of AS100 and AS200, so that
the PE and ASBR on each MPLS backbone network can communicate with each other.
In this example, OSPF is used in AS100, and IS-IS is used in AS200. For
configuration details, see Configuration Files in this section.
Step 2 Configure basic MPLS functions and MPLS LDP on the MPLS backbone networks
of AS100 and AS200 to establish LDP LSPs.
For configuration details, see Configuration Files in this section.
Step 3 Configure a VPN instance on each PE.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 evpn
[*PE1-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
Step 4 Establish a BGP-EVPN peer relationship between the ASBR and PE in each AS.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 2.2.2.9 as-number 100
[*PE1-bgp] peer 2.2.2.9 connect-interface LoopBack1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2.2.2.9 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] commit
Repeat this step for ASBR1, ASBR2, and PE2. For configuration details, see
Configuration Files in this section.
Step 5 Establish a VPN BGP peer relationship between each PE and CE.
# Configure PE1.
[~PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] import-route direct
[*PE1-bgp-vpn1] advertise l2vpn evpn
[*PE1-bgp-vpn1] peer 192.168.1.2 as-number 65410
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure CE1.
[~CE1] bgp 65410
[*CE1-bgp] peer 192.168.1.1 as-number 100
[*CE1-bgp] import-route direct
[*CE1-bgp] quit
[*CE1] commit
Repeat this step for CE2. For configuration details, see Configuration Files in this
section.
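The CE sites use AS numbers 65410 and 65420, which fall in the 16-bit private AS range reserved by RFC 6996. A one-line Python check (illustrative only):

```python
def is_private_asn(asn: int) -> bool:
    """RFC 6996 reserves 64512-65534 as the 16-bit private AS range;
    the CE sites in this example (65410, 65420) draw from it."""
    return 64512 <= asn <= 65534

print(is_private_asn(65410), is_private_asn(65420))  # True True
print(is_private_asn(100))                           # False: AS100 is public
```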
Step 6 Enable MPLS on the interfaces that connect the ASBRs to each other. Establish
an EBGP-EVPN peer relationship between the ASBRs, and prevent the ASBRs from
filtering received EVPN routes based on VPN targets.
# Configure ASBR1.
[~ASBR1] bgp 100
[*ASBR1-bgp] peer 10.2.1.2 as-number 200
[*ASBR1-bgp] l2vpn-family evpn
[*ASBR1-bgp-af-evpn] undo policy vpn-target
[*ASBR1-bgp-af-evpn] peer 10.2.1.2 enable
[*ASBR1-bgp-af-evpn] quit
[*ASBR1-bgp] quit
[*ASBR1] interface GigabitEthernet2/0/0
[*ASBR1-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*ASBR1-GigabitEthernet2/0/0] mpls
[*ASBR1-GigabitEthernet2/0/0] quit
[*ASBR1] commit
Repeat this step for ASBR2. For configuration details, see Configuration Files in
this section.
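The reason the ASBRs must not filter by VPN target: an Option B ASBR has no local VPN instances, so with the default VPN-target policy every received EVPN route would be discarded before it could be relabeled and forwarded. A minimal Python sketch of that decision (hypothetical helper, not device code):

```python
def asbr_keeps_route(route_rts: set, local_import_rts: set,
                     vpn_target_filter: bool) -> bool:
    """Sketch of why 'undo policy vpn-target' is needed on Option B ASBRs:
    with filtering on, a route survives only if its route targets intersect
    the locally imported ones, and an ASBR imports none."""
    if not vpn_target_filter:
        return True                        # keep all routes, relabel, forward
    return bool(route_rts & local_import_rts)

# An ASBR with no VPN instances has an empty import set.
print(asbr_keeps_route({"1:1"}, set(), vpn_target_filter=True))   # False
print(asbr_keeps_route({"1:1"}, set(), vpn_target_filter=False))  # True
```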
Step 7 Verify the configuration.
# Run the display ip routing-table command on each CE. The command output
shows the route to the remote CE. The following example uses the command
output on CE1.
[~CE1] display ip routing-table
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : _public_
Destinations : 10 Routes : 10
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.1.1 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
bgp 100
peer 2.2.2.9 as-number 100
peer 2.2.2.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.9 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
peer 192.168.1.2 as-number 65410
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.9 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 10.1.1.0 0.0.0.255
#
evpn source-address 1.1.1.9
#
return
● ASBR1 configuration file
#
sysname ASBR1
#
mpls lsr-id 2.2.2.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
peer 10.2.1.2 as-number 200
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
peer 10.2.1.2 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.9 enable
peer 10.2.1.2 enable
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
● ASBR2 configuration file
#
sysname ASBR2
#
mpls lsr-id 3.3.3.9
#
mpls
#
mpls ldp
#
isis 1
network-entity 00.1111.1111.1111.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
bgp 200
peer 4.4.4.9 as-number 200
peer 4.4.4.9 connect-interface LoopBack1
peer 10.2.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
peer 4.4.4.9 enable
peer 10.2.1.1 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 4.4.4.9 enable
peer 10.2.1.1 enable
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 200:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 4.4.4.9
#
mpls
#
mpls ldp
#
isis 1
network-entity 00.1111.1111.2222.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.2.1 255.255.255.0
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
bgp 200
peer 3.3.3.9 as-number 200
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
peer 192.168.2.2 as-number 65420
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.9 enable
#
evpn source-address 4.4.4.9
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
#
interface LoopBack1
ip address 10.10.10.10 255.255.255.255
#
bgp 65410
peer 192.168.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 192.168.1.1 enable
#
return
Networking Requirements
On the network shown in Figure 3-166, Site1 and Site2 belong to the same VPN.
Site1 accesses an MPLS backbone network over PE1 in AS100, and Site2 accesses
another MPLS backbone network over PE2 in AS200. Inter-AS EVPN Option C is
configured. Specifically, MPLS LDP and inter-AS BGP LSPs are configured to
construct a tunnel between PEs, and an EBGP EVPN peer relationship is
established between the PEs so that the PEs can carry Layer 2 and Layer 3 EVPN
services over the same tunnel.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure IP addresses for all interfaces on the PEs and ASBRs as well as the
loopback interface address.
2. Configure an IGP on the MPLS backbone networks in AS100 and AS200 so
that the PE and ASBR on the same MPLS backbone network can
communicate with each other.
3. Configure basic MPLS functions and MPLS LDP on the MPLS backbone
networks in AS100 and AS200 to establish MPLS LDP LSPs.
4. Configure an IBGP peer relationship between PE1 and ASBR1 and between
PE2 and ASBR2. Configure an EBGP peer relationship between ASBRs. Enable
these devices to exchange labeled routes.
5. Configure and apply a Route-Policy on each ASBR.
6. Configure a VPN instance and an EVPN instance on each PE.
7. Configure an EVPN source address on each PE.
8. Establish an EBGP EVPN peer relationship between PEs in different ASs.
9. Configure access-side interfaces on PEs.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the PEs and ASBRs: 1.1.1.1, 2.2.2.2, 3.3.3.3, and 4.4.4.4
● Name (vpn1), RD (100:1), and export and import VPN targets (1:1) of the
VPN instance on each PE
● Name (evrf1), RD (200:1), and export and import VPN targets (2:2) of the
EVPN instance on each PE
● Name of the routing policy configured on ASBR1: policy1; name of the routing
policy configured on ASBR2: policy2
Procedure
Step 1 Configure IP addresses for all interfaces on the PEs and ASBRs as well as the
loopback interface address.
For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the MPLS backbone networks in AS100 and AS200 so that
the PE and ASBR on the same MPLS backbone network can communicate with
each other.
For configuration details, see Configuration Files in this section.
Configure OSPF to advertise the 32-bit loopback interface addresses used as LSR IDs.
Step 3 Configure basic MPLS functions and MPLS LDP on the MPLS backbone networks in
AS100 and AS200 to establish MPLS LDP LSPs.
For configuration details, see Configuration Files in this section.
Step 4 Configure an IBGP peer relationship between PE1 and ASBR1 and between PE2
and ASBR2. Configure an EBGP peer relationship between ASBRs. Enable these
devices to exchange labeled routes.
For configuration details, see Configuration Files in this section.
Step 5 Configure and apply a Route-Policy on each ASBR.
# Configure ASBR1: Enable MPLS on GE 2/0/0 connecting ASBR1 to ASBR2.
[~ASBR1] interface gigabitethernet 2/0/0
[*ASBR1-GigabitEthernet2/0/0] mpls
[*ASBR1-GigabitEthernet2/0/0] quit
[*ASBR1] commit
# Configure ASBR1: Apply the routing policy to the routes advertised to PE1 and
enable ASBR1 to exchange labeled IPv4 routes with PE1.
[~ASBR1] bgp 100
[*ASBR1-bgp] peer 1.1.1.1 route-policy policy2 export
# Configure ASBR1: Apply the routing policy to the routes advertised to ASBR2
and enable ASBR1 to exchange labeled IPv4 routes with ASBR2.
[*ASBR1-bgp] peer 10.2.1.2 route-policy policy1 export
# Configure ASBR1: Advertise the loopback routes from PE1 to ASBR2 and then to
PE2.
[*ASBR1-bgp] network 1.1.1.1 32
[*ASBR1-bgp] network 10.1.1.0 24
[*ASBR1-bgp] quit
[*ASBR1] commit
Repeat this step for PE2 and ASBR2. For configuration details, see Configuration
Files in this section.
Step 6 Configure a VPN instance and an EVPN instance on each PE.
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 200:1
[*PE1-evpn-instance-evrf1] vpn-target 2:2
[*PE1-evpn-instance-evrf1] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpn1-ipv4] vpn-target 1:1
[*PE1-vpn-instance-vpn1-ipv4] quit
[*PE1-vpn-instance-vpn1] evpn mpls routing-enable
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
Step 7 Configure an EVPN source address on each PE.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 4.4.4.4
[*PE2] commit
Step 8 Establish an EBGP EVPN peer relationship between PEs in different ASs.
[~PE1] bgp 100
[*PE1-bgp] peer 4.4.4.4 as-number 200
[*PE1-bgp] peer 4.4.4.4 ebgp-max-hop 255
[*PE1-bgp] peer 4.4.4.4 connect-interface LoopBack1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 4.4.4.4 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
Step 9 Configure BGP to advertise EVPN routes for the VPN instance, and configure the
access interfaces connecting the PEs to CEs.
[~PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] import-route direct
[*PE1-bgp-vpn1] advertise l2vpn evpn
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] interface GigabitEthernet2/0/0.1 mode l2
[*PE1-GigabitEthernet2/0/0.1] encapsulation dot1q vid 10
[*PE1-GigabitEthernet2/0/0.1] rewrite pop single
[*PE1-GigabitEthernet2/0/0.1] bridge-domain 10
[*PE1-GigabitEthernet2/0/0.1] quit
[*PE1] bridge-domain 10
[*PE1-bd10] evpn binding vpn-instance evrf1
[*PE1-bd10] quit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
After completing the configurations, run the display bgp evpn all routing-table
command to view the EVPN routes and IP private network routes sent from the
peer PE.
EVPN-Instance evrf1:
Number of Mac Routes: 3
Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr) NextHop
*> 0:48:00e0-fc12-3456:0:0.0.0.0 0.0.0.0
*> 0:48:00e0-fc12-3456:32:192.168.1.1 0.0.0.0
*> 0:48:00e0-fc12-7890:0:0.0.0.0 4.4.4.4
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 2
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*> 0:32:1.1.1.1 127.0.0.1
*> 0:32:4.4.4.4 4.4.4.4
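The MAC route keys in the preceding output follow the pattern
EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr, printed with colon separators. The
following is a small illustrative sketch (Python, not device code; the function name is
hypothetical) of how such a key can be split into its fields:

```python
def parse_mac_route_key(key: str):
    """Split an EVPN MAC/IP route key as printed by the device:
    EthTagId:MacAddrLen:MacAddr:IpAddrLen:IpAddr.
    The MAC uses '-' and the IP uses '.', so ':' only separates fields."""
    eth_tag, mac_len, mac, ip_len, ip = key.split(":")
    return {"eth_tag": int(eth_tag), "mac_len": int(mac_len),
            "mac": mac, "ip_len": int(ip_len), "ip": ip}
```

For example, `0:48:00e0-fc12-3456:32:192.168.1.1` carries a 48-bit MAC plus a /32 host IP,
whereas an IpAddrLen of 0 indicates a MAC-only route.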
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 200:1
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
mpls ldp
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 192.168.1.1 255.255.255.0
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface LoopBack1
Networking Requirements
On the network shown in Figure 3-167, PE3 and the TOR are each dual-homed to
PE1 and PE2. An MPLS L2VPN is deployed between the PEs, with PW connections
configured. An EVPN VXLAN is deployed in the DC, and PE1 and PE2 are the DC's
egress devices.
Figure callouts: PE1 Loopback 0 1.1.1.1/32; PE2 Loopback 0 2.2.2.2/32; PE3
Loopback 0 3.3.3.3/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on each device to ensure Layer 3 connectivity.
2. Configure basic MPLS functions and MPLS LDP on PE1, PE2, and PE3.
Establish LDP LSPs between PE3 and PE1 and between PE3 and PE2.
3. Configure an EVPN instance on each of PE1, PE2, and the TOR.
4. Configure MPLS VPLS between PE3 and PE1 and between PE3 and PE2 for
interconnection.
5. Configure VXLAN between the TOR and PE1 and between the TOR and PE2
for interconnection.
6. Create an EVPN instance and a VSI and bind them to the same BD on each of
PE1 and PE2 to splice the VXLAN and VPLS networks.
Data Preparation
To complete the configuration, you need the following data:
● Interfaces and their IP addresses
● MPLS LSR IDs of PEs
● Names, RDs, and VPN targets of the EVPN instances of PE1, PE2, and the TOR
● Names and IDs of the VSIs on the PEs
● IP addresses of peers and tunnel policies used for setting up peer relationships
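The RD values prepared for this example use the two common BGP route distinguisher
formats: 2-byte-AS:number (such as 100:1) and IPv4-address:number (such as
1.1.1.100:10). As an illustrative sketch only (Python, hypothetical helper, not device
code), the two formats can be distinguished like this:

```python
def parse_rd(rd: str):
    """Classify an RD string as Type 0 (2-byte AS:number) or
    Type 1 (IPv4-address:number), the two formats used in this example."""
    admin, _, assigned = rd.rpartition(":")
    if not admin or not assigned.isdigit():
        raise ValueError(f"malformed RD: {rd!r}")
    if "." in admin:
        octets = admin.split(".")
        if len(octets) != 4 or not all(o.isdigit() and int(o) < 256 for o in octets):
            raise ValueError(f"bad IPv4 administrator in RD: {rd!r}")
        return ("type1", admin, int(assigned))
    return ("type0", int(admin), int(assigned))
```

Using an address-based RD (as on the PEs and TOR here) keeps RDs unique per device
even when the assigned-number part is the same.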
Procedure
Step 1 Configure IP addresses for interfaces on the PEs and TOR and configure an IGP.
OSPF is used in this example.
For configuration details, see Configuration Files in this section.
Step 2 Configure basic MPLS functions and MPLS LDP on PE1, PE2, and PE3. Establish
LDP LSPs between PE3 and PE1 and between PE3 and PE2.
For configuration details, see Configuration Files in this section.
Step 3 Configure EVPN instances on PE1 and PE2.
# Configure PE1.
[~PE1] evpn vpn-instance tor bd-mode
[*PE1-evpn-instance-tor] route-distinguisher 1.1.1.100:10
[*PE1-evpn-instance-tor] vpn-target 10:10 export-extcommunity
[*PE1-evpn-instance-tor] vpn-target 10:10 import-extcommunity
[*PE1-evpn-instance-tor] quit
[*PE1] commit
Repeat this step for the TOR and PE2. For configuration details, see Configuration
Files in this section.
Step 4 Configure a BGP-EVPN peer relationship between the TOR and each of PE1 and
PE2.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 4.4.4.100 as-number 65001
[*PE1-bgp] peer 4.4.4.100 ebgp-max-hop 255
[*PE1-bgp] peer 4.4.4.100 connect-interface LoopBack100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] policy vpn-target
[*PE1-bgp-af-evpn] peer 4.4.4.100 enable
[*PE1-bgp-af-evpn] peer 4.4.4.100 advertise encap-type vxlan
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for the TOR and PE2. For configuration details, see Configuration
Files in this section.
Step 5 Configure a VSI on each of PE1, PE2, and PE3.
# Configure PE1.
[~PE1] vsi cpe bd-mode
[*PE1-vsi-cpe] pwsignal ldp
[*PE1-vsi-cpe-ldp] vsi-id 10
[*PE1-vsi-cpe-ldp] peer 3.3.3.3
[*PE1-vsi-cpe-ldp] quit
[*PE1-vsi-cpe] quit
[*PE1] commit
# Configure PE2.
[~PE2] vsi cpe bd-mode
[*PE2-vsi-cpe] pwsignal ldp
[*PE2-vsi-cpe-ldp] vsi-id 10
[*PE2-vsi-cpe-ldp] peer 3.3.3.3
[*PE2-vsi-cpe-ldp] quit
[*PE2-vsi-cpe] quit
[*PE2] commit
# Configure PE3.
[~PE3] vsi cpe bd-mode
[*PE3-vsi-cpe] pw-redundancy mac-withdraw rfc-compatible
[*PE3-vsi-cpe] pwsignal ldp
[*PE3-vsi-cpe-ldp] vsi-id 10
[*PE3-vsi-cpe-ldp] peer 1.1.1.1
[*PE3-vsi-cpe-ldp] peer 2.2.2.2
[*PE3-vsi-cpe-ldp] protect-group 10
[*PE3-vsi-cpe-ldp-protect-group-10] protect-mode pw-redundancy master
[*PE3-vsi-cpe-ldp-protect-group-10] reroute delay 60
[*PE3-vsi-cpe-ldp-protect-group-10] peer 1.1.1.1 preference 1
[*PE3-vsi-cpe-ldp-protect-group-10] peer 2.2.2.2 preference 2
[*PE3-vsi-cpe-ldp-protect-group-10] quit
[*PE3-vsi-cpe-ldp] quit
[*PE3-vsi-cpe] quit
[*PE3] commit
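In the preceding protect group, PE3 prefers the PW to the peer with the lower
preference value (peer 1.1.1.1, preference 1) and fails over to the other peer when that
PW's session goes down. The selection logic can be sketched as a toy model (Python,
illustrative only; it ignores details such as the reroute delay):

```python
def select_primary_pw(peers):
    """peers: list of (peer_ip, preference, session_up) tuples.
    Among peers whose session is up, the lowest preference value wins,
    mirroring pw-redundancy master mode; returns None if none are up."""
    up = [p for p in peers if p[2]]
    if not up:
        return None
    return min(up, key=lambda p: p[1])[0]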
Step 6 Bind the VSI and EVPN instance to the same BD on each of PE1 and PE2.
# Configure PE1.
[~PE1] bridge-domain 10
[*PE1-bd10] vxlan vni 10 split-horizon-mode
[*PE1-bd10] evpn binding vpn-instance tor
[*PE1-bd10] l2 binding vsi cpe
[*PE1-bd10] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
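The vxlan vni 10 command above maps the BD to VNI 10; on the wire, that VNI is
carried in the 8-byte VXLAN header defined in RFC 7348. A minimal sketch of the
header encapsulation (Python, illustrative only):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): the 0x08 flags byte marks
    the VNI field as valid; the 24-bit VNI occupies bytes 4-6, and the
    final byte is reserved."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08000000, vni << 8)
```

The header is prepended to the original Ethernet frame and carried over UDP between
the NVE source and destination addresses shown later in the display vxlan tunnel output.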
Step 7 Verify the configuration.
Run the display vsi name cpe verbose command on each PE to view the PW and
VSI status. The following example uses the command output on PE1.
[~PE1] display vsi name cpe verbose
***VSI Name : cpe
Work Mode : bd-mode
Administrator VSI : no
Isolate Spoken : disable
VSI Index :1
PW Signaling : ldp
Member Discovery Style : --
Bridge-domain Mode : enable
PW MAC Learn Style : qualify
Encapsulation Type : vlan
MTU : 1500
Diffserv Mode : uniform
Service Class : --
Color : --
DomainId : 255
Domain Name :
Ignore AcState : disable
P2P VSI : disable
Multicast Fast Switch : disable
Create Time : 0 days, 3 hours, 24 minutes, 44 seconds
VSI State : up
Resource Status : --
VSI ID : 10
*Peer Router ID : 3.3.3.3
Negotiation-vc-id : 10
Encapsulation Type : vlan
primary or secondary : primary
ignore-standby-state : no
VC Label : 48123
Peer Type : dynamic
Session : up
Tunnel ID : 0x0000000001004c4b44
Broadcast Tunnel ID : --
Broad BackupTunnel ID : --
CKey :1
NKey : 16777348
Stp Enable :0
PwIndex :1
Control Word : disable
BFD for PW : unavailable
**PW Information:
Run the display vxlan tunnel command on each PE. The command output shows
that the VXLAN tunnel is Up. The following example uses the command output on
PE1.
[~PE1] display vxlan tunnel
Number of vxlan tunnel : 1
Tunnel ID Source Destination State Type Uptime
-----------------------------------------------------------------------------------
4026531841 1.1.1.100 4.4.4.100 up dynamic 00:18:05
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance tor bd-mode
route-distinguisher 1.1.1.100:10
vpn-target 10:10 export-extcommunity
vpn-target 10:10 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls l2vpn
#
vsi cpe bd-mode
pwsignal ldp
vsi-id 10
peer 3.3.3.3
#
bridge-domain 10
vxlan vni 10 split-horizon-mode
evpn binding vpn-instance tor
esi 0000.1111.1111.1111.2222
l2 binding vsi cpe
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 192.168.14.1 255.255.255.0
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
interface LoopBack100
ip address 1.1.1.100 255.255.255.255
#
interface Nve1
source 1.1.1.100
vni 10 head-end peer-list protocol bgp
#
bgp 100
peer 4.4.4.100 as-number 65001
peer 4.4.4.100 ebgp-max-hop 255
peer 4.4.4.100 connect-interface LoopBack100
#
ipv4-family unicast
undo synchronization
peer 4.4.4.100 enable
#
l2vpn-family evpn
policy vpn-target
peer 4.4.4.100 enable
peer 4.4.4.100 advertise encap-type vxlan
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
ospf 100
area 0.0.0.1
network 1.1.1.100 0.0.0.0
network 192.168.14.0 0.0.0.255
#
evpn source-address 1.1.1.1
#
return
● PE2 configuration file
#
sysname PE2
#
evpn vpn-instance tor bd-mode
route-distinguisher 2.2.2.100:10
vpn-target 10:10 export-extcommunity
vpn-target 10:10 import-extcommunity
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls l2vpn
#
vsi cpe bd-mode
pwsignal ldp
vsi-id 10
peer 3.3.3.3
#
bridge-domain 10
vxlan vni 10 split-horizon-mode
evpn binding vpn-instance tor
esi 0000.1111.1111.1111.3333
l2 binding vsi cpe
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 192.168.24.1 255.255.255.0
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
#
interface LoopBack100
ip address 2.2.2.100 255.255.255.255
#
interface Nve1
source 2.2.2.100
vni 10 head-end peer-list protocol bgp
#
bgp 100
peer 4.4.4.100 as-number 65001
peer 4.4.4.100 ebgp-max-hop 255
peer 4.4.4.100 connect-interface LoopBack100
#
ipv4-family unicast
undo synchronization
peer 4.4.4.100 enable
#
l2vpn-family evpn
policy vpn-target
peer 4.4.4.100 enable
peer 4.4.4.100 advertise encap-type vxlan
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.2.1.0 0.0.0.255
#
ospf 100
area 0.0.0.1
network 2.2.2.100 0.0.0.0
network 192.168.24.0 0.0.0.255
#
evpn source-address 2.2.2.2
#
return
● PE3 configuration file
#
sysname PE3
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls l2vpn
#
vsi cpe bd-mode
pw-redundancy mac-withdraw rfc-compatible
pwsignal ldp
vsi-id 10
peer 1.1.1.1
peer 2.2.2.2
protect-group 10
protect-mode pw-redundancy master
reroute delay 60
peer 1.1.1.1 preference 1
peer 2.2.2.2 preference 2
#
bridge-domain 10
l2 binding vsi cpe
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1 mode l2
bridge-domain 10
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.1.3 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.2.1.3 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 3.3.3.3 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
return
● TOR configuration file
#
sysname TOR
#
evpn vpn-instance tor bd-mode
route-distinguisher 4.4.4.100:10
vpn-target 10:10 export-extcommunity
vpn-target 10:10 import-extcommunity
#
bridge-domain 10
vxlan vni 10 split-horizon-mode
3.2.35.15 Example for Configuring EVPN E-LAN over mLDP P2MP Tunnels
On a network where an EVPN carries multicast services, to reduce redundant
traffic and conserve bandwidth resources, configure an EVPN E-LAN over mLDP
P2MP tunnel for service transmission.
Networking Requirements
On the network shown in Figure 3-168, EVPN is configured on the PEs and used
to carry multicast services. PE1 is the root node, and PE2 and PE3 are leaf nodes.
Multicast sources and receivers reside on the access side. By default, an EVPN sends
multicast service traffic from PE1 to PE2 and PE3 by means of ingress replication.
Specifically, PE1 replicates a multicast packet into two copies and sends them to
the P functioning as an RR. The P then sends one copy to PE2 and the other copy
to PE3. For each additional receiver, an additional copy of the multicast packet is
sent. This increases the volume of traffic on the link between PE1 and the P,
consuming bandwidth resources. To conserve bandwidth resources, you can
configure EVPN to use an mLDP P2MP tunnel to transmit multicast services. After
the configuration is complete, PE1 sends only one copy of multicast traffic to the
P. The P replicates the multicast traffic into copies and sends them to the leaf
nodes, reducing the volume of traffic between PE1 and P.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
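The bandwidth saving described above can be expressed as a toy model (Python,
illustrative only, not device behavior): with ingress replication the root PE sends one
copy per leaf over the PE1-P link, whereas with an mLDP P2MP tunnel it sends a single
copy and the P replicates downstream.

```python
def copies_on_root_link(leaves: int, mode: str) -> int:
    """Toy model: number of copies of one multicast packet that PE1
    must send toward the P for a given number of leaf PEs."""
    if mode == "ingress-replication":
        return leaves              # one copy per leaf PE
    if mode == "mldp-p2mp":
        return 1 if leaves else 0  # the P replicates toward the leaves
    raise ValueError(f"unknown mode: {mode}")
```

With the two leaves (PE2 and PE3) in this example, ingress replication puts two copies
on the PE1-P link, while the mLDP P2MP tunnel puts only one.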
Precautions
When you configure EVPN to use an mLDP P2MP tunnel for service transmission,
note the following:
● For the same EVPN instance, the export VPN target list of a site shares VPN
targets with the import VPN target lists of the other sites. Conversely, the
import VPN target list of a site shares VPN targets with the export VPN target
lists of the other sites.
● Using the local loopback interface address of each PE as the source address is
recommended.
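The VPN-target matching rule in the first precaution can be sketched as a simple set
check (Python, illustrative only; the function name is hypothetical): a route exported
by one site is imported by another only if the two target lists intersect.

```python
def route_accepted(local_import_rts, remote_export_rts) -> bool:
    """A received EVPN route is imported only if at least one RT attached
    by the advertising site's export list also appears in the receiving
    site's import list."""
    return bool(set(local_import_rts) & set(remote_export_rts))
```

In this example every PE uses RT 1:1 for both import and export, so all sites accept
one another's routes.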
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on the backbone network to allow PEs and the RR to
communicate.
2. Configure MPLS and mLDP P2MP both globally and per interface on each
node of the backbone network.
3. Create an EVPN instance in BD mode and a BD on each PE, and bind the BD
to the EVPN instance on each PE.
4. Configure a source address on each PE.
5. Configure each PE's sub-interface connecting to a CE.
6. Configure an ESI for each PE interface that connects to a CE.
7. Configure BGP EVPN peer relationships between the PEs and RR, and
configure the PEs as RR clients.
8. Configure EVPN to use an mLDP P2MP tunnel for service transmission on
each PE.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name: evrf1
● EVPN instance evrf1's RD (100:1) and RT (1:1) on each PE
Procedure
Step 1 Assign an IP address to each interface on the RR and PEs according to Figure
3-168. For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the backbone network to allow PEs and the RR to
communicate. OSPF is used in this example.
# Configure PE1.
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit
[~PE1-ospf-1] quit
# Configure PE2.
[~PE2] ospf 1
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] commit
[~PE2-ospf-1-area-0.0.0.0] quit
[~PE2-ospf-1] quit
# Configure PE3.
[~PE3] ospf 1
[*PE3-ospf-1] area 0
[*PE3-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[*PE3-ospf-1-area-0.0.0.0] network 4.4.4.4 0.0.0.0
[*PE3-ospf-1-area-0.0.0.0] commit
[~PE3-ospf-1-area-0.0.0.0] quit
[~PE3-ospf-1] quit
Step 3 Configure MPLS and mLDP P2MP both globally and per interface on each node of
the backbone network and set up an mLDP P2MP tunnel.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] mldp p2mp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] mpls
[*PE1-GigabitEthernet2/0/0] mpls ldp
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 2.2.2.2
[*PE2] mpls
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] mldp p2mp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] mpls
[*PE2-GigabitEthernet2/0/0] mpls ldp
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
# Configure PE3.
[~PE3] mpls lsr-id 4.4.4.4
[*PE3] mpls
[*PE3-mpls] quit
[*PE3] mpls ldp
[*PE3-mpls-ldp] mldp p2mp
[*PE3-mpls-ldp] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] mpls
[*PE3-GigabitEthernet1/0/0] mpls ldp
[*PE3-GigabitEthernet1/0/0] commit
[~PE3-GigabitEthernet1/0/0] quit
Step 4 Create an EVPN instance in BD mode on each PE, create a BD, and bind the BD to
the EVPN instance.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-evpn-instance-evrf1] vpn-target 1:1
[*PE1-evpn-instance-evrf1] quit
[*PE1] bridge-domain 10
[*PE1-bd10] evpn binding vpn-instance evrf1
[*PE1-bd10] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] route-distinguisher 100:1
[*PE2-evpn-instance-evrf1] vpn-target 1:1
[*PE2-evpn-instance-evrf1] quit
[*PE2] bridge-domain 10
[*PE2-bd10] evpn binding vpn-instance evrf1
[*PE2-bd10] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evrf1 bd-mode
[*PE3-evpn-instance-evrf1] route-distinguisher 100:1
[*PE3-evpn-instance-evrf1] vpn-target 1:1
[*PE3-evpn-instance-evrf1] quit
[*PE3] bridge-domain 10
[*PE3-bd10] evpn binding vpn-instance evrf1
[*PE3-bd10] quit
[*PE3] commit
Step 5 Configure a source address on each PE.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 2.2.2.2
[*PE2] commit
# Configure PE3.
[~PE3] evpn source-address 4.4.4.4
[*PE3] commit
Step 6 Configure each PE's sub-interface connecting to a CE and add the sub-interface to
the BD.
# Configure PE1.
[~PE1] interface eth-trunk 10
[*PE1-Eth-Trunk10] quit
[*PE1] interface eth-trunk 10.1 mode l2
[*PE1-Eth-Trunk10.1] encapsulation dot1q vid 100
[*PE1-Eth-Trunk10.1] bridge-domain 10
[*PE1-Eth-Trunk10.1] quit
[*PE1] interface gigabitethernet 1/0/0
[*PE1-GigabitEthernet1/0/0] eth-trunk 10
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] quit
[*PE2] interface eth-trunk 10.1 mode l2
[*PE2-Eth-Trunk10.1] encapsulation dot1q vid 100
[*PE2-Eth-Trunk10.1] bridge-domain 10
[*PE2-Eth-Trunk10.1] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] eth-trunk 10
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface eth-trunk 10
[*PE3-Eth-Trunk10] quit
[*PE3] interface eth-trunk 10.1 mode l2
[*PE3-Eth-Trunk10.1] encapsulation dot1q vid 100
[*PE3-Eth-Trunk10.1] bridge-domain 10
[*PE3-Eth-Trunk10.1] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] eth-trunk 10
[*PE3-GigabitEthernet1/0/0] quit
[*PE3] commit
Step 7 Configure an ESI for each PE interface that connects to a CE.
# Configure PE1.
[~PE1] interface eth-trunk 10
[*PE1-Eth-Trunk10] esi 0000.1111.1111.4444.5555
[*PE1-Eth-Trunk10] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] esi 0000.1111.2222.4444.5555
[*PE2-Eth-Trunk10] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface eth-trunk 10
[*PE3-Eth-Trunk10] esi 0000.1111.3333.4444.5555
[*PE3-Eth-Trunk10] quit
[*PE3] commit
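Each ESI configured above, such as 0000.1111.1111.4444.5555, is the dotted notation of
the 10-octet Ethernet Segment Identifier defined in RFC 7432. As an illustrative sketch
(Python, hypothetical helper), the dotted string maps to its on-wire bytes as follows:

```python
def esi_to_bytes(esi: str) -> bytes:
    """Convert a dotted ESI string (five groups of four hex digits, e.g.
    '0000.1111.1111.4444.5555') into its 10-byte on-wire form; RFC 7432
    defines the ESI as a 10-octet value."""
    groups = esi.split(".")
    if len(groups) != 5 or any(len(g) != 4 for g in groups):
        raise ValueError(f"malformed ESI: {esi!r}")
    return bytes.fromhex("".join(groups))
```

The first octet is the ESI type; the all-zero leading octet used here corresponds to an
arbitrarily assigned (type 0) ESI.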
Step 8 Configure BGP EVPN peer relationships between the PEs and RR, and configure
the PEs as RR clients.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 3.3.3.3 as-number 100
[*PE2-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 3.3.3.3 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] peer 3.3.3.3 as-number 100
[*PE3-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 3.3.3.3 enable
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
After the configurations are complete, run the display bgp evpn peer command
on the RR. The command output shows information about BGP peer relationships.
In the following example, the output shows that BGP peer relationships are
established between the PEs and RR and that they are in the Established state.
[~RR] display bgp evpn peer
Step 9 Configure EVPN to use an mLDP P2MP tunnel for service transmission on each PE.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[~PE1-evpn-instance-evrf1] inclusive-provider-tunnel
[*PE1-evpn-instance-evrf1-inclusive] root
[*PE1-evpn-instance-evrf1-inclusive-root] mldp p2mp
[*PE1-evpn-instance-evrf1-inclusive-root-mldpp2mp] root-ip 1.1.1.1
[*PE1-evpn-instance-evrf1-inclusive-root-mldpp2mp] quit
[*PE1-evpn-instance-evrf1-inclusive-root] quit
[*PE1-evpn-instance-evrf1-inclusive] quit
[*PE1-evpn-instance-evrf1] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 bd-mode
[~PE2-evpn-instance-evrf1] inclusive-provider-tunnel
[*PE2-evpn-instance-evrf1-inclusive] leaf
[*PE2-evpn-instance-evrf1-inclusive-leaf] quit
[*PE2-evpn-instance-evrf1] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evrf1 bd-mode
[~PE3-evpn-instance-evrf1] inclusive-provider-tunnel
[*PE3-evpn-instance-evrf1-inclusive] leaf
[*PE3-evpn-instance-evrf1-inclusive-leaf] quit
[*PE3-evpn-instance-evrf1] quit
[*PE3] commit
After the configuration is complete, you can view the PMSI tunnel information of EVPN
instance evrf1 on PE1. The following is an excerpt of the command output:
Bridge-domain : 10
Ingress provider tunnel
Egress provider tunnel
Egress PMSI count: 1
*PMSI type : P2MP mLDP
Root ip : 1.1.1.1
Opaque value : 01000400008001
State : up
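The opaque value 01000400008001 in the preceding output is an mLDP FEC opaque TLV.
Assuming the Generic LSP Identifier encoding of RFC 6388 (1-byte type 0x01, 2-byte
length, then a 32-bit LSP ID), it can be decoded with a small sketch (Python,
illustrative only):

```python
def decode_mldp_opaque(hex_str: str):
    """Decode an mLDP opaque value TLV, assuming the RFC 6388 Generic LSP
    Identifier format: type (1 byte), length (2 bytes), value (length bytes,
    here a 32-bit LSP ID). Returns (type, length, value)."""
    raw = bytes.fromhex(hex_str)
    tlv_type = raw[0]
    length = int.from_bytes(raw[1:3], "big")
    value = int.from_bytes(raw[3:3 + length], "big")
    return tlv_type, length, value
```

Decoding the value shown above yields type 1 (generic LSP identifier) with a 4-byte
LSP ID of 32769.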
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
inclusive-provider-tunnel
root
mldp p2mp
root-ip 1.1.1.1
#
mpls lsr-id 1.1.1.1
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
mpls ldp
mldp p2mp
#
interface Eth-Trunk10
esi 0000.1111.1111.4444.5555
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 100
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
evpn source-address 1.1.1.1
#
return
● PE2 configuration file
#
sysname PE2
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
inclusive-provider-tunnel
leaf
#
mpls lsr-id 2.2.2.2
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
mpls ldp
mldp p2mp
#
interface Eth-Trunk10
esi 0000.1111.2222.4444.5555
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 100
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.2.1.0 0.0.0.255
#
evpn source-address 2.2.2.2
#
return
● PE3 configuration file
#
sysname PE3
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
inclusive-provider-tunnel
leaf
#
mpls lsr-id 4.4.4.4
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
mpls ldp
mldp p2mp
#
interface Eth-Trunk10
esi 0000.1111.3333.4444.5555
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 100
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 4.4.4.4 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.3.1.0 0.0.0.255
#
evpn source-address 4.4.4.4
#
return
● RR configuration file
#
sysname RR
#
bridge-domain 10
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
return
Networking Requirements
On the network shown in the following figure, a VLL (or VPWS) network is
deployed between the UPE and NPE1, and an EVPN is deployed between NPE1
and NPE2. To implement VLL accessing EVPN, a PW-VE interface and its sub-
interface must be configured on NPE1. Specifically, the VLL configurations are
performed on the PW-VE interface, and the EVPN configurations are performed on
the PW-VE sub-interface. The PW-VE sub-interface is configured as a QinQ VLAN
tag termination sub-interface.
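A QinQ VLAN tag termination sub-interface terminates double-tagged frames whose
outer tag matches the configured pe-vid and whose inner tag falls within the configured
ce-vid range. This matching rule can be sketched as a toy check (Python, illustrative
only, not device code):

```python
def qinq_terminates(pe_vid: int, ce_vids, pkt_outer: int, pkt_inner: int) -> bool:
    """Toy model of QinQ termination: a double-tagged frame is handled by
    the sub-interface only if its outer VLAN tag equals pe-vid and its
    inner tag is within the ce-vid range."""
    return pkt_outer == pe_vid and pkt_inner in ce_vids
```

For instance, with pe-vid 100 and ce-vid 1 to 2 (as configured later in this example),
a frame tagged 100/2 is terminated, while 100/3 or 200/1 is not.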
Figure callouts: NPE1 Loopback 0 1.1.1.1/32; UPE Loopback 0 2.2.2.2/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on each device. Because the VLL network (between the UPE
and NPE1) and EVPN (between NPE1 and NPE2) reside at different layers, use
different IGP processes to implement route communication.
2. Configure basic MPLS functions on the UPE, NPE1, and NPE2.
3. Configure VPWS connections on the UPE and NPE1.
4. Configure EVPN functions on NPE1 and NPE2 and establish an MPLS tunnel
between them.
5. On NPE1, configure a VPWS connection on the PW-VE interface and bind the
EVPN instance to the PW-VE sub-interface.
Data Preparation
To complete the configuration, you need the following data:
● Interface names and IP addresses of the interfaces on NPE1, NPE2, and the
UPE
● MPLS LSR IDs on the UPE, NPE1, and NPE2
● Names, RDs, and VPN targets of the EVPN instances created on NPE1 and
NPE2
Procedure
Step 1 Configure IP addresses for interfaces on the UPE, NPE1, and NPE2 and configure
an IGP. OSPF is used in this example.
For configuration details, see "Configuration Files" in this section.
Step 2 Configure basic MPLS functions and MPLS LDP on the UPE and NPE1.
For detailed configurations, see "Configuration Files" in this section.
Step 3 Configure VPWS connections on the UPE and NPE1.
For detailed configurations, see "Configuration Files" in this section.
Step 4 Configure basic EVPN functions on NPE2.
For detailed configurations, see "Configuration Files" in this section.
Step 5 Configure BGP on NPE1 and NPE2 and establish an EVPN peer relationship
between them.
For detailed configurations, see "Configuration Files" in this section.
Step 6 Configure an EVPN instance on NPE1.
# Configure NPE1.
[~NPE1] evpn vpn-instance evpna
[*NPE1-evpn-instance-evpna] route-distinguisher 1.1.1.100:10
[*NPE1-evpn-instance-evpna] vpn-target 10:10 export-extcommunity
[*NPE1-evpn-instance-evpna] vpn-target 10:10 import-extcommunity
[*NPE1-evpn-instance-evpna] quit
[*NPE1] evpn source-address 1.1.1.100
[*NPE1] commit
Step 7 On NPE1, configure the PW-VE interface and bind the EVPN instance to the PW-
VE sub-interface.
# Configure NPE1.
[~NPE1] mpls
----End
Configuration Files
● NPE1 configuration file
#
sysname NPE1
#
evpn vpn-instance evpna
route-distinguisher 1.1.1.100:10
vpn-target 10:10 export-extcommunity
vpn-target 10:10 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls l2vpn
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 192.168.14.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
interface LoopBack100
ip address 1.1.1.100 255.255.255.255
#
interface PW-VE1
mpls l2vc 2.2.2.2 1
#
interface PW-VE1.1
encapsulation qinq-termination
qinq termination pe-vid 100 ce-vid 1 to 2
evpn binding vpn-instance evpna
#
bgp 100
peer 2.2.2.100 as-number 65001
peer 2.2.2.100 ebgp-max-hop 255
peer 2.2.2.100 connect-interface LoopBack100
#
ipv4-family unicast
undo synchronization
peer 2.2.2.100 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.100 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
ospf 100
area 0.0.0.1
network 1.1.1.100 0.0.0.0
network 192.168.14.0 0.0.0.255
#
evpn source-address 1.1.1.100
#
return
● UPE configuration file
#
sysname UPE
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls l2vpn
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
mpls l2vc 1.1.1.1 1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.1.3 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
● NPE2 configuration file
#
sysname NPE2
#
evpn vpn-instance evpna
route-distinguisher 2.2.2.100:10
vpn-target 10:10 export-extcommunity
vpn-target 10:10 import-extcommunity
#
mpls lsr-id 2.2.2.100
#
mpls
#
mpls l2vpn
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
evpn binding vpn-instance evpna
#
interface GigabitEthernet1/0/1
undo portswitch
ip address 192.168.14.4 255.255.255.0
mpls
mpls ldp
#
interface LoopBack100
ip address 2.2.2.100 255.255.255.255
#
bgp 65001
peer 1.1.1.100 as-number 100
peer 1.1.1.100 ebgp-max-hop 255
peer 1.1.1.100 connect-interface LoopBack100
#
ipv4-family unicast
undo synchronization
peer 1.1.1.100 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.100 enable
#
ospf 100
area 0.0.0.1
network 2.2.2.100 0.0.0.0
network 192.168.14.0 0.0.0.255
#
evpn source-address 2.2.2.100
#
return
Networking Requirements
On the network shown in Figure 3-170, a VLL (or VPWS) network is deployed
between the UPE and NPE1, and an EVPN is deployed between NPE1 and NPE2.
To implement VLL accessing EVPN, a PW-VE interface and its sub-interface must
be configured on NPE1. Specifically, the VLL configurations are performed on the
PW-VE interface, and the EVPN configurations are performed on the PW-VE sub-
interface. An EVPL instance corresponding to an EVPN instance is bound to the
PW-VE sub-interface, which is configured as a QinQ VLAN tag termination sub-
interface.
Figure callouts: NPE1 Loopback 0 1.1.1.1/32; UPE Loopback 0 2.2.2.2/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on each device. Because the VLL network (between the UPE
and NPE1) and EVPN (between NPE1 and NPE2) reside at different layers, use
different IGP processes to implement route communication.
2. Configure basic MPLS LDP functions on the UPE, NPE1, and NPE2.
3. Configure VPWS connections on the UPE and NPE1.
4. Configure EVPN functions on NPE1 and NPE2, including creating EVPN
instances and establishing a BGP EVPN peer relationship between the devices.
5. Configure EVPL functions on NPE1 and NPE2.
6. Bind the VSI to the PW-VE interface on NPE1; bind the EVPL instances to the
PW-VE sub-interfaces of NPE1.
Data Preparation
To complete the configuration, you need the following data:
● Interface names and IP addresses of the interfaces on NPE1, NPE2, and the
UPE
● MPLS LSR IDs on the UPE, NPE1, and NPE2
● Names, RDs, and VPN targets of the EVPN instances created on NPE1 and
NPE2
Procedure
Step 1 Configure IP addresses for interfaces on the UPE, NPE1, and NPE2 and configure
an IGP. OSPF is used in this example.
For detailed configurations, see "Configuration Files" in this section.
Step 2 Configure basic MPLS functions and MPLS LDP on NPE1, NPE2, and the UPE.
For detailed configurations, see "Configuration Files" in this section.
Step 3 Configure VPWS connections on the UPE and NPE1.
For detailed configurations, see "Configuration Files" in this section.
Step 4 Configure an EVPN instance on NPE1.
# Configure NPE1.
[~NPE1] evpn vpn-instance evpna vpws
[*NPE1-vpws-evpn-instance-evpna] route-distinguisher 1:1
[*NPE1-vpws-evpn-instance-evpna] vpn-target 10:10 export-extcommunity
[*NPE1-vpws-evpn-instance-evpna] vpn-target 10:10 import-extcommunity
[*NPE1-vpws-evpn-instance-evpna] quit
[*NPE1] bgp 100
[*NPE1-bgp] peer 2.2.2.100 as-number 100
[*NPE1-bgp] peer 2.2.2.100 connect-interface LoopBack100
[*NPE1-bgp] l2vpn-family evpn
[*NPE1-bgp-af-evpn] peer 2.2.2.100 enable
[*NPE1-bgp-af-evpn] quit
[*NPE1-bgp] quit
[*NPE1] commit
The configuration on NPE2 is similar to that on NPE1 and is not provided here. For
configuration details, see "Configuration Files" in this section.
Step 5 Configure EVPL functions on NPE1 and NPE2.
# Configure NPE1.
[~NPE1] evpl instance 1 mpls-mode
[*NPE1-evpl-mpls1] evpn binding vpn-instance evpna
[*NPE1-evpl-mpls1] local-service-id 100 remote-service-id 200
[*NPE1-evpl-mpls1] quit
[*NPE1] commit
The configuration on NPE2 is similar to that on NPE1 and is not provided here. For
configuration details, see "Configuration Files" in this section.
Step 6 On NPE1, bind the VSI to the PW-VE interface and the EVPN instance to the PW-
VE sub-interface.
# Configure NPE1.
[~NPE1] mpls
[*NPE1-mpls] mpls l2vpn
[*NPE1-l2vpn] quit
[*NPE1] interface PW-VE 1
[*NPE1-PW-VE1] esi 0011.1111.0000.0000.0000
[*NPE1-PW-VE1] mpls l2vc 2.2.2.2 1
[*NPE1-PW-VE1] quit
[*NPE1] interface PW-VE 1.1
[*NPE1-PW-VE1.1] encapsulation qinq-termination
[*NPE1-PW-VE1.1] qinq termination pe-vid 100 ce-vid 100
[*NPE1-PW-VE1.1] evpl instance 1
[*NPE1-PW-VE1.1] commit
Run the display bgp evpn all routing-table command on NPE2. The command
output shows the EVI AD routes received from NPE1.
[~NPE2] display bgp evpn all routing-table
Local AS number : 100
EVPN-Instance evpna:
Number of A-D Routes: 3
Network(ESI/EthTagId) NextHop
i 0011.1111.0000.0000.0000:100 1.1.1.100
i 0011.1111.0000.0000.0000:4294967295 1.1.1.100
*> 0011.1111.0000.0000.1111:200 127.0.0.1
EVPN-Instance evpna:
Number of ES Routes: 2
Network(ESI) NextHop
i 0011.1111.0000.0000.0000 1.1.1.100
*> 0011.1111.0000.0000.1111 127.0.0.1
----End
Configuration Files
● NPE1 configuration file
#
sysname NPE1
#
evpn vpn-instance evpna vpws
route-distinguisher 1:1
vpn-target 10:10 export-extcommunity
vpn-target 10:10 import-extcommunity
#
evpl instance 1 mpls-mode
evpn binding vpn-instance evpna
#
mpls l2vpn
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
mpls l2vc 1.1.1.1 1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.1.3 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
● NPE2 configuration file
#
sysname NPE2
#
evpn vpn-instance evpna vpws
route-distinguisher 2.2.2.100:10
vpn-target 10:10 export-extcommunity
vpn-target 10:10 import-extcommunity
#
evpl instance 1 mpls-mode
evpn binding vpn-instance evpna
local-service-id 200 remote-service-id 100
#
mpls lsr-id 2.2.2.100
#
mpls
#
mpls l2vpn
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
esi 0011.1111.0000.0000.1111
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation qinq vid 100 ce-vid 100
evpl instance 1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.14.4 255.255.255.0
mpls
mpls ldp
#
interface LoopBack100
ip address 2.2.2.100 255.255.255.255
#
bgp 100
peer 1.1.1.100 as-number 100
peer 1.1.1.100 connect-interface LoopBack100
#
ipv4-family unicast
undo synchronization
peer 1.1.1.100 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.100 enable
#
ospf 100
area 0.0.0.1
network 2.2.2.100 0.0.0.0
network 192.168.14.0 0.0.0.255
#
evpn source-address 2.2.2.100
#
return
Networking Requirements
On the network shown in Figure 3-171, Layer 2 traffic is transmitted within Site 1
and Site 2 separately. To allow Site 1 and Site 2 to communicate over the
backbone network, configure the EVPN function to transmit both Layer 2 and
Layer 3 traffic. If the sites belong to the same subnet, an EVPN instance is created
on each PE to store EVPN routes, which are used for Layer 2 forwarding by
matching MAC addresses. A route reflector (RR) is configured to reflect EVPN
routes. To balance BUM traffic along the links between CE1 and PE1 and between
CE1 and PE2, configure Eth-Trunk sub-interfaces on PE1 and PE2 to connect to Site
1. To allow different VLANs configured on a physical interface to access the same
EVI and isolate the BDs to which the VLAN-configured sub-interfaces belong,
configure the VLAN-aware bundle mode for the access of a CE to the PEs.
Figure 3-171 Accessing a BD EVPN E-LAN over MPLS tunnel in VLAN-Aware mode
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When you configure the VLAN-aware bundle access mode, note the following:
● For the same EVPN instance, the export VPN target list of a site shares VPN
targets with the import VPN target lists of the other sites; the import VPN
target list of a site shares VPN targets with the export VPN target lists of the
other sites.
● Using the local loopback interface address of each PE as the source address is
recommended.
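For example, the simplest way to satisfy the first precaution is to use a single shared VPN target on every PE of the EVPN instance (the values below are illustrative):
evpn vpn-instance evrf1 bd-mode
 route-distinguisher 100:1
 vpn-target 1:1 export-extcommunity
 vpn-target 1:1 import-extcommunity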
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on the backbone network to allow PEs and the RR to
communicate.
2. Configure basic MPLS functions, enable MPLS LDP, and establish LDP LSPs on
the MPLS backbone network.
3. Create an EVPN instance in BD mode and a BD and bind the BD to the EVPN
instance with a BD tag set on each PE.
4. Configure a source address on each PE.
5. Configure each PE's sub-interface connecting to a CE.
6. Configure an ESI for each PE interface that connects to a CE.
7. Configure BGP EVPN peer relationships between the PEs and RR, and
configure the PEs as RR clients.
8. Configure CEs and PEs to communicate.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of the interfaces on the PEs and RR
● Name (evrf1), RD (100:1), and VPN target (1:1) of the EVPN instance created on each PE
Procedure
Step 1 Assign an IP address to each interface on the RR and PEs according to Figure
3-171. For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the backbone network to allow PEs and the RR to
communicate. OSPF is used in this example.
# Configure PE1.
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit
[~PE1-ospf-1] quit
# Configure PE2.
[~PE2] ospf 1
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] commit
[~PE2-ospf-1-area-0.0.0.0] quit
[~PE2-ospf-1] quit
# Configure PE3.
[~PE3] ospf 1
[*PE3-ospf-1] area 0
[*PE3-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[*PE3-ospf-1-area-0.0.0.0] network 4.4.4.4 0.0.0.0
[*PE3-ospf-1-area-0.0.0.0] commit
[~PE3-ospf-1-area-0.0.0.0] quit
[~PE3-ospf-1] quit
After the configurations are complete, PE1, PE2, and PE3 can establish OSPF
neighbor relationships with the RR. Run the display ospf peer command. The
command output shows that State is Full. Run the display ip routing-table
command. The command output shows that the RR and PEs have learned the
routes destined for each other's loopback interfaces.
The following example uses the command output on PE1.
[~PE1] display ospf peer
(M) Indicates MADJ neighbor
Step 3 Configure basic MPLS functions, enable MPLS LDP, and establish LDP LSPs on the
MPLS backbone network.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] mpls
[*PE1-GigabitEthernet2/0/0] mpls ldp
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 2.2.2.2
[*PE2] mpls
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] mpls
[*PE2-GigabitEthernet2/0/0] mpls ldp
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
# Configure the RR.
[~RR] mpls lsr-id 3.3.3.3
[*RR] mpls
[*RR-mpls] quit
[*RR] mpls ldp
[*RR-mpls-ldp] quit
[*RR] interface gigabitethernet 1/0/0
[*RR-GigabitEthernet1/0/0] mpls
[*RR-GigabitEthernet1/0/0] mpls ldp
[*RR-GigabitEthernet1/0/0] quit
[*RR] interface gigabitethernet 2/0/0
[*RR-GigabitEthernet2/0/0] mpls
[*RR-GigabitEthernet2/0/0] mpls ldp
[*RR-GigabitEthernet2/0/0] quit
[*RR] interface gigabitethernet 3/0/0
[*RR-GigabitEthernet3/0/0] mpls
[*RR-GigabitEthernet3/0/0] mpls ldp
[*RR-GigabitEthernet3/0/0] commit
[~RR-GigabitEthernet3/0/0] quit
# Configure PE3.
[~PE3] mpls lsr-id 4.4.4.4
[*PE3] mpls
[*PE3-mpls] quit
[*PE3] mpls ldp
[*PE3-mpls-ldp] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] mpls
[*PE3-GigabitEthernet1/0/0] mpls ldp
[*PE3-GigabitEthernet1/0/0] commit
[~PE3-GigabitEthernet1/0/0] quit
After the configurations are complete, LDP sessions are established between the
PEs and RR. Run the display mpls ldp session command. The command output
shows that Status is Operational. Run the display mpls ldp lsp command. The
command output shows LDP LSP configurations.
Step 4 Create an EVPN instance in BD mode on each PE, create BDs, and bind the BDs to
the EVPN instance with BD tags.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-evpn-instance-evrf1] vpn-target 1:1
[*PE1-evpn-instance-evrf1] quit
[*PE1] bridge-domain 10
[*PE1-bd10] evpn binding vpn-instance evrf1 bd-tag 100
[*PE1-bd10] quit
[*PE1] bridge-domain 20
[*PE1-bd20] evpn binding vpn-instance evrf1 bd-tag 200
[*PE1-bd20] quit
[*PE1] evpn
[*PE1-evpn] vlan-extend private enable
[*PE1-evpn] vlan-extend redirect enable
[*PE1-evpn] local-remote frr enable
[*PE1-evpn] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] route-distinguisher 100:1
[*PE2-evpn-instance-evrf1] vpn-target 1:1
[*PE2-evpn-instance-evrf1] quit
[*PE2] bridge-domain 10
[*PE2-bd10] evpn binding vpn-instance evrf1 bd-tag 100
[*PE2-bd10] quit
[*PE2] bridge-domain 20
[*PE2-bd20] evpn binding vpn-instance evrf1 bd-tag 200
[*PE2-bd20] quit
[*PE2] evpn
[*PE2-evpn] vlan-extend private enable
[*PE2-evpn] vlan-extend redirect enable
[*PE2-evpn] local-remote frr enable
[*PE2-evpn] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evrf1 bd-mode
[*PE3-evpn-instance-evrf1] route-distinguisher 100:1
[*PE3-evpn-instance-evrf1] vpn-target 1:1
[*PE3-evpn-instance-evrf1] quit
[*PE3] bridge-domain 10
[*PE3-bd10] evpn binding vpn-instance evrf1 bd-tag 100
[*PE3-bd10] quit
[*PE3] bridge-domain 20
[*PE3-bd20] evpn binding vpn-instance evrf1 bd-tag 200
[*PE3-bd20] quit
[*PE3] commit
Step 5 Configure a source address on each PE.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 2.2.2.2
[*PE2] commit
# Configure PE3.
[~PE3] evpn source-address 4.4.4.4
[*PE3] commit
# Configure PE2.
[~PE2] e-trunk 1
[*PE2-e-trunk-1] peer-address 1.1.1.1 source-address 2.2.2.2
[*PE2-e-trunk-1] quit
[*PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] e-trunk 1
[*PE2-Eth-Trunk10] e-trunk mode force-master
[*PE2-Eth-Trunk10] quit
[*PE2] interface eth-trunk 10.1 mode l2
[*PE2-Eth-Trunk10.1] encapsulation dot1q vid 100
[*PE2-Eth-Trunk10.1] bridge-domain 10
[*PE2-Eth-Trunk10.1] quit
[*PE2] interface eth-trunk 10.2 mode l2
[*PE2-Eth-Trunk10.2] encapsulation dot1q vid 200
[*PE2-Eth-Trunk10.2] bridge-domain 20
[*PE2-Eth-Trunk10.2] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] eth-trunk 10
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface eth-trunk 10
[*PE3-Eth-Trunk10] quit
[*PE3] interface eth-trunk 10.1 mode l2
[*PE3-Eth-Trunk10.1] encapsulation dot1q vid 100
[*PE3-Eth-Trunk10.1] bridge-domain 10
[*PE3-Eth-Trunk10.1] quit
[*PE3] interface eth-trunk 10.2 mode l2
[*PE3-Eth-Trunk10.2] encapsulation dot1q vid 200
[*PE3-Eth-Trunk10.2] bridge-domain 20
[*PE3-Eth-Trunk10.2] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] eth-trunk 10
[*PE3-GigabitEthernet1/0/0] quit
[*PE3] commit
Step 7 Configure an ESI for each PE interface that connects to a CE.
# Configure PE1.
[~PE1] interface eth-trunk 10
[*PE1-Eth-Trunk10] esi 0000.1111.2222.1111.1111
[*PE1-Eth-Trunk10] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] esi 0000.1111.2222.1111.1111
[*PE2-Eth-Trunk10] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface eth-trunk 10
[*PE3-Eth-Trunk10] esi 0000.1111.3333.4444.5555
[*PE3-Eth-Trunk10] quit
[*PE3] commit
Step 8 Configure BGP EVPN peer relationships between the PEs and RR, and configure
the PEs as RR clients.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 3.3.3.3 as-number 100
[*PE2-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 3.3.3.3 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] peer 3.3.3.3 as-number 100
[*PE3-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 3.3.3.3 enable
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
# Configure the RR.
[~RR] bgp 100
[*RR-bgp] peer 1.1.1.1 as-number 100
[*RR-bgp] peer 1.1.1.1 connect-interface loopback 0
[*RR-bgp] peer 2.2.2.2 as-number 100
[*RR-bgp] peer 2.2.2.2 connect-interface loopback 0
[*RR-bgp] peer 4.4.4.4 as-number 100
[*RR-bgp] peer 4.4.4.4 connect-interface loopback 0
[*RR-bgp] l2vpn-family evpn
[*RR-bgp-af-evpn] undo policy vpn-target
[*RR-bgp-af-evpn] peer 1.1.1.1 enable
[*RR-bgp-af-evpn] peer 1.1.1.1 reflect-client
[*RR-bgp-af-evpn] peer 2.2.2.2 enable
[*RR-bgp-af-evpn] peer 2.2.2.2 reflect-client
[*RR-bgp-af-evpn] peer 4.4.4.4 enable
[*RR-bgp-af-evpn] peer 4.4.4.4 reflect-client
[*RR-bgp-af-evpn] quit
[*RR-bgp] quit
[*RR] commit
After the configurations are complete, run the display bgp evpn peer command
on the RR. The command output shows that BGP peer relationships are
established between the PEs and RR and are in the Established state.
[~RR] display bgp evpn peer
# Configure CE2.
[~CE2] interface Eth-Trunk 10
[*CE2-Eth-Trunk10] quit
[*CE2] bridge-domain 10
[*CE2-bd10] quit
[*CE2] interface Eth-Trunk 10.1 mode l2
[*CE2-Eth-Trunk10.1] encapsulation dot1q vid 100
[*CE2-Eth-Trunk10.1] bridge-domain 10
[*CE2-Eth-Trunk10.1] quit
[*CE2] interface Eth-Trunk 10.2 mode l2
[*CE2-Eth-Trunk10.2] encapsulation dot1q vid 200
[*CE2-Eth-Trunk10.2] bridge-domain 20
[*CE2-Eth-Trunk10.2] quit
[*CE2] interface gigabitethernet1/0/0
[*CE2-GigabitEthernet1/0/0] eth-trunk 10
[*CE2-GigabitEthernet1/0/0] quit
[*CE2] commit
Run the display bgp evpn all routing-table command on PE3 to view the EVPN routes.
[~PE3] display bgp evpn all routing-table
EVPN-Instance evrf1:
Number of A-D Routes: 6
Network(ESI/EthTagId) NextHop
*>i 0000.1111.2222.1111.1111:100 1.1.1.1
*>i 0000.1111.2222.1111.1111:200 1.1.1.1
*>i 0000.1111.2222.1111.1111:4294967295 1.1.1.1
*i 2.2.2.2
*> 0000.1111.3333.4444.5555:100 127.0.0.1
*> 0000.1111.3333.4444.5555:200 127.0.0.1
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 6
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*>i 100:32:1.1.1.1 1.1.1.1
*>i 100:32:2.2.2.2 2.2.2.2
*> 100:32:4.4.4.4 127.0.0.1
*>i 200:32:1.1.1.1 1.1.1.1
*>i 200:32:2.2.2.2 2.2.2.2
*> 200:32:4.4.4.4 127.0.0.1
EVPN-Instance evrf1:
Number of ES Routes: 1
Network(ESI) NextHop
*> 0000.1111.3333.4444.5555 127.0.0.1
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn
vlan-extend private enable
vlan-extend redirect enable
local-remote frr enable
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1 bd-tag 100
#
bridge-domain 20
evpn binding vpn-instance evrf1 bd-tag 200
#
mpls ldp
#
e-trunk 1
peer-address 2.2.2.2 source-address 1.1.1.1
#
interface Eth-Trunk10
e-trunk 1
e-trunk mode force-master
esi 0000.1111.2222.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 100
bridge-domain 10
#
interface Eth-Trunk10.2 mode l2
encapsulation dot1q vid 200
bridge-domain 20
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
bgp 100
● PE2 configuration file
#
sysname PE2
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
#
interface NULL0
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.2.1.0 0.0.0.255
#
return
● PE3 configuration file
#
sysname PE3
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 4.4.4.4
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1 bd-tag 100
#
bridge-domain 20
evpn binding vpn-instance evrf1 bd-tag 200
#
mpls ldp
#
interface Eth-Trunk10
esi 0000.1111.3333.4444.5555
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 100
bridge-domain 10
#
interface Eth-Trunk10.2 mode l2
encapsulation dot1q vid 200
bridge-domain 20
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
eth-trunk 10
#
interface LoopBack0
ip address 4.4.4.4 255.255.255.255
#
bgp 100
VLAN-aware bundle mode allows different VLANs configured on a physical interface to
access the same EVPN instance (EVI) while isolating the BDs to which the
VLAN-configured sub-interfaces belong.
Networking Requirements
On the network shown in Figure 3-172, Layer 2 traffic is transmitted within Site 1
and Site 2 separately. To allow Site 1 and Site 2 to communicate over the
backbone network, configure the EVPN function to transmit service traffic. When
the sites belong to the same subnet, an EVI is configured on each PE to store
EVPN routes. A route reflector (RR) is configured to reflect EVPN routes. A VXLAN
tunnel is set up between each pair of PEs to carry service traffic. To balance BUM
traffic along the links between CE1 and PE1 and between CE1 and PE2, configure
Eth-Trunk sub-interfaces on PE1 and PE2 to connect to Site 1. To allow different
VLANs configured on a physical interface to access the same EVI and isolate the
BDs to which the VLAN-configured sub-interfaces belong, configure the VLAN-
aware bundle mode for the access of a CE to the PEs.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When you configure the VLAN-aware bundle mode, note the following:
● For the same EVPN instance, the export VPN target list of one site shares VPN
targets with the import VPN target lists of the other sites. Conversely, the
import VPN target list of one site shares VPN targets with the export VPN
target lists of the other sites.
● Using the local loopback interface address of each PE as the source address is
recommended.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on the backbone network to allow PEs and the RR to
communicate.
2. Create a BD-EVPN instance and a BD and bind the BD to the BD-EVPN
instance with a BD tag set on each PE.
3. Configure each PE's sub-interface that connects to a CE.
4. Configure an ESI for each PE interface that connects to a CE.
5. Configure BGP EVPN peer relationships between the PEs and RR, and
configure the PEs as RR clients.
6. Configure CEs and PEs to communicate.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name: evrf1
● EVPN instance evrf1's RD (100:1) and RT (1:1) on each PE
Procedure
Step 1 Assign an IP address to each interface on the RR and PEs according to Figure
3-172. For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the backbone network to allow PEs and the RR to
communicate. OSPF is used in this example.
# Configure PE1.
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit
[~PE1-ospf-1] quit
# Configure PE2.
[~PE2] ospf 1
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] commit
[~PE2-ospf-1-area-0.0.0.0] quit
[~PE2-ospf-1] quit
# Configure PE3.
[~PE3] ospf 1
[*PE3-ospf-1] area 0
[*PE3-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[*PE3-ospf-1-area-0.0.0.0] network 4.4.4.4 0.0.0.0
[*PE3-ospf-1-area-0.0.0.0] commit
[~PE3-ospf-1-area-0.0.0.0] quit
[~PE3-ospf-1] quit
# Configure the RR.
[~RR] ospf 1
[*RR-ospf-1] area 0
[*RR-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*RR-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*RR-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[*RR-ospf-1-area-0.0.0.0] network 3.3.3.3 0.0.0.0
[*RR-ospf-1-area-0.0.0.0] commit
[~RR-ospf-1-area-0.0.0.0] quit
[~RR-ospf-1] quit
After the configurations are complete, PE1, PE2, and PE3 can establish OSPF
neighbor relationships with the RR. Run the display ospf peer command. The
command output shows that State is Full. Run the display ip routing-table
command. The command output shows that the RR and PEs have learned the
routes destined for each other's loopback interfaces.
Step 3 Create a BD-EVPN instance and BDs on each PE, and bind the BDs to the instance
with BD tags.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-evpn-instance-evrf1] vpn-target 1:1
[*PE1-evpn-instance-evrf1] quit
[*PE1] bridge-domain 10
[*PE1-bd10] vxlan vni 11 split-horizon-mode
[*PE1-bd10] evpn binding vpn-instance evrf1 bd-tag 100
[*PE1-bd10] quit
[*PE1] bridge-domain 20
[*PE1-bd20] vxlan vni 22 split-horizon-mode
[*PE1-bd20] evpn binding vpn-instance evrf1 bd-tag 200
[*PE1-bd20] quit
[*PE1] evpn
[*PE1-evpn] vlan-extend private enable
[*PE1-evpn] vlan-extend redirect enable
[*PE1-evpn] local-remote frr enable
[*PE1-evpn] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] route-distinguisher 200:1
[*PE2-evpn-instance-evrf1] vpn-target 1:1
[*PE2-evpn-instance-evrf1] quit
[*PE2] bridge-domain 10
[*PE2-bd10] vxlan vni 11 split-horizon-mode
[*PE2-bd10] evpn binding vpn-instance evrf1 bd-tag 100
[*PE2-bd10] quit
[*PE2] bridge-domain 20
[*PE2-bd20] vxlan vni 22 split-horizon-mode
[*PE2-bd20] evpn binding vpn-instance evrf1 bd-tag 200
[*PE2-bd20] quit
[*PE2] evpn
[*PE2-evpn] vlan-extend private enable
[*PE2-evpn] vlan-extend redirect enable
[*PE2-evpn] local-remote frr enable
[*PE2-evpn] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evrf1 bd-mode
[*PE3-evpn-instance-evrf1] route-distinguisher 400:1
[*PE3-evpn-instance-evrf1] vpn-target 1:1
[*PE3-evpn-instance-evrf1] quit
[*PE3] bridge-domain 10
[*PE3-bd10] vxlan vni 11 split-horizon-mode
[*PE3-bd10] evpn binding vpn-instance evrf1 bd-tag 100
[*PE3-bd10] quit
[*PE3] bridge-domain 20
[*PE3-bd20] vxlan vni 22 split-horizon-mode
[*PE3-bd20] evpn binding vpn-instance evrf1 bd-tag 200
[*PE3-bd20] quit
[*PE3] commit
# Configure PE2.
[~PE2] e-trunk 1
[*PE2-e-trunk-1] peer-address 1.1.1.1 source-address 2.2.2.2
[*PE2-e-trunk-1] quit
[*PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] e-trunk 1
[*PE2-Eth-Trunk10] e-trunk mode force-master
[*PE2-Eth-Trunk10] quit
[*PE2] interface eth-trunk 10.1 mode l2
[*PE2-Eth-Trunk10.1] encapsulation dot1q vid 100
[*PE2-Eth-Trunk10.1] bridge-domain 10
[*PE2-Eth-Trunk10.1] quit
[*PE2] interface eth-trunk 10.2 mode l2
[*PE2-Eth-Trunk10.2] encapsulation dot1q vid 200
[*PE2-Eth-Trunk10.2] bridge-domain 20
[*PE2-Eth-Trunk10.2] quit
[*PE2] interface gigabitethernet 1/0/0
[*PE2-GigabitEthernet1/0/0] eth-trunk 10
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface eth-trunk 10
[*PE3-Eth-Trunk10] quit
[*PE3] interface eth-trunk 10.1 mode l2
[*PE3-Eth-Trunk10.1] encapsulation dot1q vid 100
[*PE3-Eth-Trunk10.1] bridge-domain 10
[*PE3-Eth-Trunk10.1] quit
[*PE3] interface eth-trunk 10.2 mode l2
[*PE3-Eth-Trunk10.2] encapsulation dot1q vid 200
[*PE3-Eth-Trunk10.2] bridge-domain 20
[*PE3-Eth-Trunk10.2] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] eth-trunk 10
[*PE3-GigabitEthernet1/0/0] quit
[*PE3] commit
Step 5 Configure an ESI for each PE interface that connects to a CE.
# Configure PE1.
[~PE1] interface eth-trunk 10
[*PE1-Eth-Trunk10] esi 0000.1111.2222.1111.1111
[*PE1-Eth-Trunk10] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] esi 0000.1111.2222.1111.1111
[*PE2-Eth-Trunk10] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface eth-trunk 10
[*PE3-Eth-Trunk10] esi 0000.1111.3333.4444.5555
[*PE3-Eth-Trunk10] quit
[*PE3] commit
Step 6 Configure BGP EVPN peer relationships between the PEs and RR, and configure
the PEs as RR clients.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 3.3.3.3 as-number 100
[*PE2-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 3.3.3.3 enable
[*PE2-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] peer 3.3.3.3 as-number 100
[*PE3-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 3.3.3.3 enable
[*PE3-bgp-af-evpn] peer 3.3.3.3 advertise encap-type vxlan
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
After the configurations are complete, run the display bgp evpn peer command
on the RR. The command output shows information about BGP peer relationships.
In the following example, the output shows that BGP peer relationships are
established between the PEs and RR and that they are in the Established state.
[~RR] display bgp evpn peer
# Configure CE2.
[~CE2] interface Eth-Trunk 10
[*CE2-Eth-Trunk10] quit
[*CE2] bridge-domain 10
[*CE2-bd10] quit
[*CE2] interface Eth-Trunk 10.1 mode l2
[*CE2-Eth-Trunk10.1] encapsulation dot1q vid 100
[*CE2-Eth-Trunk10.1] bridge-domain 10
[*CE2-Eth-Trunk10.1] quit
[*CE2] interface Eth-Trunk 10.2 mode l2
[*CE2-Eth-Trunk10.2] encapsulation dot1q vid 200
[*CE2-Eth-Trunk10.2] bridge-domain 20
[*CE2-Eth-Trunk10.2] quit
[*CE2] interface gigabitethernet1/0/0
[*CE2-GigabitEthernet1/0/0] eth-trunk 10
[*CE2-GigabitEthernet1/0/0] quit
[*CE2] commit
# Configure PE1.
[~PE1] interface Nve 1
[*PE1-Nve1] source 1.1.1.1
[*PE1-Nve1] vni 11 head-end peer-list protocol bgp
[*PE1-Nve1] vni 22 head-end peer-list protocol bgp
[*PE1-Nve1] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface Nve 1
[*PE2-Nve1] source 2.2.2.2
[*PE2-Nve1] vni 11 head-end peer-list protocol bgp
[*PE2-Nve1] vni 22 head-end peer-list protocol bgp
[*PE2-Nve1] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface Nve 1
[*PE3-Nve1] source 4.4.4.4
[*PE3-Nve1] vni 11 head-end peer-list protocol bgp
[*PE3-Nve1] vni 22 head-end peer-list protocol bgp
[*PE3-Nve1] quit
[*PE3] commit
Run the display bgp evpn all routing-table command on PE3. The command
output shows that EVPN routes carrying Ethernet tag IDs are received from the
remote PEs.
[~PE3] display bgp evpn all routing-table
Local AS number : 100
EVPN-Instance evrf1:
Number of A-D Routes: 8
Network(ESI/EthTagId) NextHop
*>i 0000.1111.2222.1111.1111:100 1.1.1.1
*i 2.2.2.2
*>i 0000.1111.2222.1111.1111:200 1.1.1.1
*i 2.2.2.2
*>i 0000.1111.2222.1111.1111:4294967295 1.1.1.1
*i 2.2.2.2
*> 0000.1111.3333.4444.5555:100 0.0.0.0
*> 0000.1111.3333.4444.5555:200 0.0.0.0
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 6
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*>i 100:32:2.2.2.2 2.2.2.2
*> 100:32:4.4.4.4 0.0.0.0
*> 100:32:4.4.4.4 127.0.0.1
*>i 200:32:1.1.1.1 1.1.1.1
*> 200:32:4.4.4.4 0.0.0.0
*> 200:32:4.4.4.4 127.0.0.1
EVPN-Instance evrf1:
Number of ES Routes: 1
Network(ESI) NextHop
*> 0000.1111.3333.4444.5555 0.0.0.0
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn
vlan-extend private enable
vlan-extend redirect enable
local-remote frr enable
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
bridge-domain 10
vxlan vni 11 split-horizon-mode
evpn binding vpn-instance evrf1 bd-tag 100
#
bridge-domain 20
vxlan vni 22 split-horizon-mode
evpn binding vpn-instance evrf1 bd-tag 200
#
e-trunk 1
peer-address 2.2.2.2 source-address 1.1.1.1
#
interface Eth-Trunk10
e-trunk 1
e-trunk mode force-master
esi 0000.1111.2222.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 100
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk10.2 mode l2
encapsulation dot1q vid 200
rewrite pop single
bridge-domain 20
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
interface Nve1
source 1.1.1.1
vni 11 head-end peer-list protocol bgp
vni 22 head-end peer-list protocol bgp
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
peer 3.3.3.3 advertise encap-type vxlan
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
● PE2 configuration file
#
sysname PE2
#
evpn
vlan-extend private enable
vlan-extend redirect enable
local-remote frr enable
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 200:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
bridge-domain 10
vxlan vni 11 split-horizon-mode
evpn binding vpn-instance evrf1 bd-tag 100
#
bridge-domain 20
vxlan vni 22 split-horizon-mode
evpn binding vpn-instance evrf1 bd-tag 200
#
e-trunk 1
peer-address 1.1.1.1 source-address 2.2.2.2
#
interface Eth-Trunk10
e-trunk 1
#
interface Eth-Trunk10.2 mode l2
encapsulation dot1q vid 200
bridge-domain 20
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
return
Networking Requirements
As shown in the following figure, a VPLS network is deployed between the CSG
and ASGs, and an EVPN is deployed between the RSG and ASGs. Access-side PW
interfaces can be configured on the ASGs to splice the VPLS network with the
MPLS EVPN E-LAN network.
Figure 3-173 Networking of configuring VPLS splicing with MPLS EVPN E-LAN
Interfaces 1 through 3 in this example represent GE 1/0/1, GE 1/0/2, and GE 1/0/3, respectively.
Loopback 1 addresses: CSG 1.1.1.1/32, ASG1 2.2.2.2/32, ASG2 3.3.3.3/32, and RSG 4.4.4.4/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on each device. Because the VPLS network (between the
CSG and ASGs) and the EVPN (between the RSG and ASGs) reside at different
network layers, use different IGP processes to ensure route reachability within each layer.
2. Configure basic MPLS LDP functions on the CSG, RSG, and ASGs.
3. Configure the EVPN function on the RSG and ASGs.
4. Configure PW connections between the CSG and ASGs.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of device interfaces
● MPLS LSR IDs of devices
● Names, RDs, and VPN targets of the EVPN instances created on the RSG and
ASGs
Procedure
Step 1 Configure an IGP on each device. Because the VPLS network (between the CSG
and ASGs) and the EVPN (between the RSG and ASGs) reside at different network
layers, use different IGP processes to ensure route reachability within each layer.
For configuration details, see Configuration Files in this section.
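For reference, the IGP split on ASG1 might be sketched as follows. This is a sketch only: the process IDs and the 10.10.1.0/10.20.1.0 network segments are hypothetical placeholders, while the loopback address 2.2.2.2 matches the rest of this example.
[~ASG1] ospf 1
[*ASG1-ospf-1] area 0.0.0.0
[*ASG1-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*ASG1-ospf-1-area-0.0.0.0] network 10.10.1.0 0.0.0.255
[*ASG1-ospf-1-area-0.0.0.0] quit
[*ASG1-ospf-1] quit
[*ASG1] ospf 2
[*ASG1-ospf-2] area 0.0.0.0
[*ASG1-ospf-2-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*ASG1-ospf-2-area-0.0.0.0] network 10.20.1.0 0.0.0.255
[*ASG1-ospf-2-area-0.0.0.0] quit
[*ASG1-ospf-2] quit
[*ASG1] commit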
Step 2 Configure basic MPLS LDP functions on the CSG, RSG, and ASGs.
For configuration details, see Configuration Files in this section.
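As a sketch (the interface name is an assumption based on the figure note; the LSR ID matches ASG1's loopback address), basic MPLS and MPLS LDP on ASG1 might be configured as follows:
[~ASG1] mpls lsr-id 2.2.2.2
[*ASG1] mpls
[*ASG1-mpls] quit
[*ASG1] mpls ldp
[*ASG1-mpls-ldp] quit
[*ASG1] interface gigabitethernet 1/0/1
[*ASG1-GigabitEthernet1/0/1] mpls
[*ASG1-GigabitEthernet1/0/1] mpls ldp
[*ASG1-GigabitEthernet1/0/1] quit
[*ASG1] commit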
Step 3 Configure the EVPN function on the RSG and ASGs.
# Configure ASG1.
[~ASG1] evpn vpn-instance evrf1 bd-mode
[*ASG1-evpn-instance-evrf1] route-distinguisher 100:1
[*ASG1-evpn-instance-evrf1] vpn-target 1:1
[*ASG1-evpn-instance-evrf1] quit
[*ASG1] evpn source-address 2.2.2.2
[*ASG1] bgp 100
[*ASG1-bgp] peer 3.3.3.3 as-number 100
[*ASG1-bgp] peer 3.3.3.3 connect-interface LoopBack 1
[*ASG1-bgp] peer 4.4.4.4 as-number 100
[*ASG1-bgp] peer 4.4.4.4 connect-interface LoopBack 1
[*ASG1-bgp] l2vpn-family evpn
[*ASG1-bgp-af-evpn] peer 3.3.3.3 enable
[*ASG1-bgp-af-evpn] peer 4.4.4.4 enable
[*ASG1-bgp-af-evpn] quit
[*ASG1-bgp] quit
[*ASG1] bridge-domain 10
[*ASG1-bd10] evpn binding vpn-instance evrf1
[*ASG1-bd10] quit
[*ASG1] commit
Repeat this step for ASG2. For configuration details, see Configuration Files in
this section.
# Configure the RSG.
[~RSG] evpn vpn-instance evrf1 bd-mode
[*RSG-evpn-instance-evrf1] route-distinguisher 100:3
[*RSG-evpn-instance-evrf1] vpn-target 1:1
[*RSG-evpn-instance-evrf1] quit
[*RSG] evpn source-address 4.4.4.4
[*RSG] bgp 100
[*RSG-bgp] peer 2.2.2.2 as-number 100
[*RSG-bgp] peer 2.2.2.2 connect-interface LoopBack 1
[*RSG-bgp] peer 3.3.3.3 as-number 100
[*RSG-bgp] peer 3.3.3.3 connect-interface LoopBack 1
[*RSG-bgp] l2vpn-family evpn
[*RSG-bgp-af-evpn] peer 2.2.2.2 enable
[*RSG-bgp-af-evpn] peer 3.3.3.3 enable
[*RSG-bgp-af-evpn] quit
[*RSG-bgp] quit
[*RSG] bridge-domain 10
[*RSG-bd10] evpn binding vpn-instance evrf1
[*RSG-bd10] quit
[*RSG] interface GigabitEthernet 1/0/3.1 mode l2
[*RSG-GigabitEthernet1/0/3.1] encapsulation dot1q vid 100
[*RSG-GigabitEthernet1/0/3.1] bridge-domain 10
[*RSG-GigabitEthernet1/0/3.1] quit
[*RSG] commit
# Configure ASG1.
[~ASG1] mpls l2vpn
[*ASG1-l2vpn] quit
[*ASG1] evpn
[*ASG1-evpn] esi 0000.1111.1111.1111.1111
[*ASG1-evpn-esi-0000.1111.1111.1111.1111] evpn redundancy-mode single-active
[*ASG1-evpn-esi-0000.1111.1111.1111.1111] quit
[*ASG1-evpn] quit
[*ASG1] vsi vsi1 bd-mode
[*ASG1-vsi-vsi1] pwsignal ldp
[*ASG1-vsi-vsi1-ldp] vsi-id 1
[*ASG1-vsi-vsi1-ldp] peer 1.1.1.1 upe
[*ASG1-vsi-vsi1-ldp] peer 1.1.1.1 pw pw1
[*ASG1-vsi-vsi1-ldp-pw-pw1] esi 0000.1111.1111.1111.1111
[*ASG1-vsi-vsi1-ldp-pw-pw1] quit
[*ASG1-vsi-vsi1-ldp] quit
[*ASG1-vsi-vsi1] quit
[*ASG1] bridge-domain 10
[*ASG1-bd10] l2 binding vsi vsi1
[*ASG1-bd10] quit
[*ASG1] commit
# Configure ASG2.
[~ASG2] mpls l2vpn
[*ASG2-l2vpn] quit
[*ASG2] evpn
[*ASG2-evpn] esi 0000.1111.1111.1111.1111
[*ASG2-evpn-esi-0000.1111.1111.1111.1111] evpn redundancy-mode single-active
[*ASG2-evpn-esi-0000.1111.1111.1111.1111] quit
[*ASG2-evpn] quit
[*ASG2] vsi vsi1 bd-mode
[*ASG2-vsi-vsi1] pwsignal ldp
[*ASG2-vsi-vsi1-ldp] vsi-id 1
[*ASG2-vsi-vsi1-ldp] peer 1.1.1.1 upe
[*ASG2-vsi-vsi1-ldp] peer 1.1.1.1 pw pw1
[*ASG2-vsi-vsi1-ldp-pw-pw1] esi 0000.1111.1111.1111.1111
[*ASG2-vsi-vsi1-ldp-pw-pw1] quit
[*ASG2-vsi-vsi1-ldp] quit
[*ASG2-vsi-vsi1] quit
[*ASG2] bridge-domain 10
[*ASG2-bd10] l2 binding vsi vsi1
[*ASG2-bd10] quit
[*ASG2] commit
VSI ID : 1
**PW Information:
*Peer Router ID : 1.1.1.1
Negotiation-vc-id : 1
Encapsulation Type : vlan
primary or secondary : primary
ignore-standby-state : no
VC Label : 48123
Peer Type : dynamic
Session : up
Tunnel ID : 0x0000000001004c4b42
Broadcast Tunnel ID : --
Broad BackupTunnel ID : --
CKey : 1
NKey : 16777393
Stp Enable : 0
PwIndex : 1
Control Word : disable
BFD for PW : unavailable
Run the display bgp evpn all routing-table command on ASG1 to view EVPN
routes. The command output contains information about the MAC routes on
VPLS-side interfaces and the ES routes and ES-AD routes corresponding to the ESI
on the PW.
[~ASG1] display bgp evpn all routing-table
Local AS number : 100
EVPN-Instance evrf1:
Number of A-D Routes: 3
Network(ESI/EthTagId) NextHop
*> 0000.1111.1111.1111.1111:0 127.0.0.1
*i 3.3.3.3
*>i 0000.1111.1111.1111.1111:4294967295 3.3.3.3
EVPN-Instance evrf1:
Number of Mac Routes: 1
Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr) NextHop
*> 0:48:00e0-fc12-3456:0:0.0.0.0 0.0.0.0
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 3
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*> 0:32:2.2.2.2 127.0.0.1
*>i 0:32:3.3.3.3 3.3.3.3
*>i 0:32:4.4.4.4 4.4.4.4
EVPN-Instance evrf1:
Number of ES Routes: 2
Network(ESI) NextHop
*> 0000.1111.1111.1111.1111 127.0.0.1
*i 3.3.3.3
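In the A-D route list above, the same ESI appears with Ethernet Tag IDs 0 and 4294967295. The value 4294967295 (0xFFFFFFFF) is the reserved MAX-ET tag that RFC 7432 assigns to per-ES Ethernet A-D routes, while per-EVI A-D routes carry a regular tag. A small Python sketch (illustrative only, not a device tool) shows the distinction:

```python
MAX_ET = 4294967295  # 0xFFFFFFFF, reserved Ethernet Tag ID (RFC 7432)

def ad_route_granularity(eth_tag_id: int) -> str:
    """Classify an EVPN Type-1 (Ethernet A-D) route by its Ethernet
    Tag ID: per-ES routes carry the reserved MAX-ET value, per-EVI
    routes carry an ordinary tag such as 0."""
    return "per-ES" if eth_tag_id == MAX_ET else "per-EVI"

# Matches the display output above:
# 0000.1111.1111.1111.1111:0          -> per-EVI A-D route
# 0000.1111.1111.1111.1111:4294967295 -> per-ES A-D route
print(ad_route_granularity(0))           # per-EVI
print(ad_route_granularity(4294967295))  # per-ES
```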
The route details include information about the associated PW. The following uses the command output of a MAC route as an example.
[~ASG1] display bgp evpn all routing-table mac-route 0:48:00e0-fc12-3456:0:0.0.0.0
BGP local router ID : 2.2.2.2
Local AS number : 100
Total routes of Route Distinguisher(100:1): 1
BGP routing table entry information of 0:48:00e0-fc12-3456:0:0.0.0.0:
Imported route.
From: 0.0.0.0 (0.0.0.0)
Route Duration: 0d13h48m24s
Original nexthop: 0.0.0.0
Qos information : 0x0
Ext-Community: RT <1 : 1>
AS-path Nil, origin incomplete, pref-val 0, valid, local, best, select, pre 255
Prefix-sid: ::
Route Type: 2 (MAC Advertisement Route)
Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc12-3456/48, IP Address/Len: 0.0.0.0/0, ESI:
0000.1111.1111.1111.1111
Advertised to such 2 peers:
3.3.3.3
4.4.4.4
EVPN-Instance evrf1:
Number of Mac Routes: 1
BGP routing table entry information of 0:48:00e0-fc12-3456:0:0.0.0.0:
Route Distinguisher: 100:1
Imported route.
From: 0.0.0.0 (0.0.0.0)
Route Duration: 0d13h48m26s
Direct Out-interface: PW<peerip:1.1.1.1 vcid:1 vctype:vlan>
Original nexthop: 0.0.0.0
Qos information : 0x0
AS-path Nil, origin incomplete, pref-val 0, valid, local, best, select, pre 255
Prefix-sid: ::
Route Type: 2 (MAC Advertisement Route)
Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc12-3456/48, IP Address/Len: 0.0.0.0/0, ESI:
0000.1111.1111.1111.1111
Not advertised to any peer yet
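The route key 0:48:00e0-fc12-3456:0:0.0.0.0 in the output above packs the Ethernet Tag ID, MAC address length and value, and IP address length and value into one colon-separated string. A minimal Python sketch (a hypothetical helper, not part of the device software) splits such a key back into its fields:

```python
def parse_mac_route_key(key: str) -> dict:
    """Split an EVPN Type-2 route key of the form
    EthTagId:MacAddrLen:MacAddr:IpAddrLen:IpAddr, as shown in the
    display bgp evpn output, e.g. 0:48:00e0-fc12-3456:0:0.0.0.0."""
    eth_tag, mac_len, mac, ip_len, ip = key.split(":")
    return {
        "eth_tag_id": int(eth_tag),
        "mac_addr_len": int(mac_len),  # in bits; 48 for a full MAC
        "mac_addr": mac,
        "ip_addr_len": int(ip_len),    # 0 means no IP is advertised
        "ip_addr": ip,
    }

route = parse_mac_route_key("0:48:00e0-fc12-3456:0:0.0.0.0")
print(route["mac_addr"])      # 00e0-fc12-3456
print(route["ip_addr_len"])   # 0
```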
----End
Configuration Files
● CSG configuration file
#
sysname CSG
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls l2vpn
#
vsi vsi1 bd-mode
pwsignal ldp
vsi-id 1
peer 2.2.2.2
peer 3.3.3.3
protect-group vsi1
protect-mode pw-redundancy master
peer 2.2.2.2 preference 1
peer 3.3.3.3 preference 2
#
bridge-domain 10
l2 binding vsi vsi1
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.2.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.1.3.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/3.1 mode l2
encapsulation dot1q vid 100
bridge-domain 10
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 100
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.2.0 0.0.0.255
network 10.1.3.0 0.0.0.255
#
return
● ASG1 configuration file
#
sysname ASG1
#
evpn
#
mac-duplication
#
esi 0000.1111.1111.1111.1111
evpn redundancy-mode single-active
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls l2vpn
#
vsi vsi1 bd-mode
pwsignal ldp
vsi-id 1
peer 1.1.1.1 upe
peer 1.1.1.1 pw pw1
esi 0000.1111.1111.1111.1111
#
● ASG2 configuration file
#
sysname ASG2
#
evpn
esi 0000.1111.1111.1111.1111
evpn redundancy-mode single-active
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:2
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls l2vpn
#
vsi vsi1 bd-mode
pwsignal ldp
vsi-id 1
peer 1.1.1.1 upe
peer 1.1.1.1 pw pw1
esi 0000.1111.1111.1111.1111
#
bridge-domain 10
l2 binding vsi vsi1
evpn binding vpn-instance evrf1
#
mpls ldp
#
isis 1
is-level level-1
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.3.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.2.3.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.3.4.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
l2vpn-family evpn
undo policy vpn-target
Networking Requirements
On the network shown in Figure 3-174, a VPLS service is deployed. A user wants to deploy EVPN on PE1 and PE3, that is, to use BGP EVPN to carry the service between PE1 and PE3. To meet this requirement, configure an EVPN instance on each of PE1 and PE3, bind it to the bridge domain (BD) on each PE, and then establish a BGP EVPN peer relationship between PE1 and PE3.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When you configure co-existence of VPLS and EVPN, note the following:
● For the same EVPN instance, the export VPN target list of a site must share at least one VPN target with the import VPN target lists of the other sites, and the import VPN target list of a site must share at least one VPN target with the export VPN target lists of the other sites.
● Using the local loopback interface address of each PE as the source address is
recommended.
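The first precaution can be modeled as a set-intersection check: a route exported by one site is imported by another only if their RT lists overlap. A short illustrative Python sketch (the function name and data shapes are assumptions, not device behavior):

```python
def evpn_routes_accepted(exporter_rts: set, importer_rts: set) -> bool:
    """A receiving site imports an EVPN route only when at least one
    export RT attached by the advertising site matches one of the
    receiver's import RTs (modeled here as set intersection)."""
    return bool(exporter_rts & importer_rts)

# In this example both PEs use RT 1:1, so routes cross-import:
print(evpn_routes_accepted({"1:1"}, {"1:1"}))  # True
# A mismatched RT pair would leave the routes unimported:
print(evpn_routes_accepted({"1:1"}, {"2:2"}))  # False
```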
Configuration Roadmap
The configuration roadmap is as follows:
1. Create an EVPN instance in BD mode and a BD on each of PE1 and PE3, and
bind the BD to the EVPN instance on each PE.
2. Configure a source address on each of PE1 and PE3.
3. Establish a BGP EVPN peer relationship between PE1 and PE3.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name: evrf1
● EVPN instance evrf1's RD (100:1) and RT (1:1) on each PE
Procedure
Step 1 Ensure that an EVC has been configured to carry a VPLS service. For configuration
details, see Configuration Files in this section.
Run the display vsi name e1 verbose command on PE1. The command output
shows that VSI e1 has PWs to PE2 and PE3 established separately, and the VSI
status and the PW status are both Up.
[~PE1] display vsi name e1 verbose
***VSI Name : e1
Work Mode : bd-mode
Administrator VSI : no
Isolate Spoken : disable
VSI Index :2
PW Signaling : bgp
Member Discovery Style : --
Bridge-domain Mode : enable
PW MAC Learn Style : qualify
Encapsulation Type : vlan
MTU : 1500
Diffserv Mode : uniform
Service Class : --
Color : --
DomainId : 255
Domain Name :
Ignore AcState : disable
P2P VSI : disable
Create Time : 0 days, 0 hours, 50 minutes, 49 seconds
VSI State : up
Resource Status : --
BGP RD : 100:1
SiteID/Range/Offset : 1/10/0
Import vpn target : 1:1
Export vpn target : 1:1
Remote Label Block : 294928/8/0 294928/8/0
Local Label Block : 0/294928/8/0
**PW Information:
# Configure PE3.
[~PE3] evpn vpn-instance evrf1 bd-mode
[*PE3-evpn-instance-evrf1] route-distinguisher 100:1
[*PE3-evpn-instance-evrf1] vpn-target 1:1
[*PE3-evpn-instance-evrf1] quit
[*PE3] bridge-domain 10
[*PE3-bd10] evpn binding vpn-instance evrf1
[*PE3-bd10] quit
[*PE3] commit
# Configure PE3.
[~PE3] evpn source-address 3.3.3.3
[*PE3] commit
Step 4 Establish a BGP EVPN peer relationship between PE1 and PE3.
# Configure PE1.
[~PE1] bgp 100
[~PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE3.
[~PE3] bgp 100
[~PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 1.1.1.1 enable
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
Run the display bgp evpn all routing-table command on PE1. The command
output shows the inclusive multicast route received from PE3.
[~PE1] display bgp evpn all routing-table
Local AS number : 100
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 2
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*> 0:32:1.1.1.1 127.0.0.1
*>i 0:32:3.3.3.3 3.3.3.3
Run the display alarm active root verbose command on PE1. The command
output shows information about the alarm triggered when the VPLS VC between PE1 and PE3 went down.
Run the display vsi name e1 verbose command on PE1. The command output
shows that only the PW to PE2 is available and the PW status is Up.
[~PE1] display vsi name e1 verbose
***VSI Name : e1
Work Mode : bd-mode
Administrator VSI : no
Isolate Spoken : disable
VSI Index :2
PW Signaling : bgp
Member Discovery Style : --
Bridge-domain Mode : enable
PW MAC Learn Style : qualify
Encapsulation Type : vlan
MTU : 1500
Diffserv Mode : uniform
Service Class : --
Color : --
DomainId : 255
Domain Name :
Ignore AcState : disable
P2P VSI : disable
Create Time : 0 days, 1 hours, 0 minutes, 52 seconds
VSI State : up
Resource Status : --
BGP RD : 100:1
SiteID/Range/Offset : 1/10/0
Import vpn target : 1:1
Export vpn target : 1:1
Remote Label Block : 294928/8/0 294928/8/0
Local Label Block : 0/294928/8/0
**PW Information:
Mac Flapping :0
PW Last Up Time : 2018/03/23 11:38:42
PW Total Up Time : 0 days, 0 hours, 11 minutes, 4 seconds
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls l2vpn
#
vsi e1 bd-mode
pwsignal bgp
route-distinguisher 100:1
vpn-target 1:1 import-extcommunity
vpn-target 1:1 export-extcommunity
site 1 range 10 default-offset 0
#
bridge-domain 10
l2 binding vsi e1
evpn binding vpn-instance evrf1
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation dot1q vid 10
bridge-domain 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack0
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
l2vpn-ad-family
policy vpn-target
signaling vpls
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
evpn source-address 1.1.1.1
#
return
● PE2 configuration file
#
sysname PE2
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls l2vpn
#
vsi e1 bd-mode
pwsignal bgp
route-distinguisher 100:1
vpn-target 1:1 import-extcommunity
vpn-target 1:1 export-extcommunity
site 2 range 10 default-offset 0
#
bridge-domain 10
l2 binding vsi e1
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack0
peer 3.3.3.3 as-number 100
Networking Requirements
On the network shown in Figure 3-175, the DC-GWs GW1 and GW2 are
connected to the DCI backbone network with BGP EVPN configured. After BGP
EVPN peer relationships and VXLAN tunnels are established between the DC-GWs
and the DCI-PEs, host IP routes can be exchanged between different DCs,
implementing communication between DC A and DC B (for example,
communication between VMa1 and VMb2).
Figure 3-175 Configuring a DCI scenario with a VXLAN EVPN accessing an MPLS
EVPN IRB
(Addresses from the figure: DCI-PE1 Loopback2 11.11.11.11/32; P Loopback1 2.2.2.2/32; DCI-PE2 Loopback2 33.33.33.33/32)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure OSPF on the DCI backbone network to implement communication
between DCI-PEs.
2. Configure an MPLS TE tunnel on the DCI backbone network.
3. Configure static routes on the DCI-PEs destined for the loopback interface
addresses of the DC-GWs.
4. Configure an EVPN instance and a BD on each DCI-PE.
5. Configure a source address on each DCI-PE.
6. Configure the DCI-PEs to establish a BGP EVPN peer relationship with each
other and BGP EVPN peer relationships with the DC-GWs.
7. Configure a VPN instance on each DCI-PE.
8. Configure VXLAN tunnels between the DCI-PEs and DC-GWs.
9. Apply a tunnel policy.
10. Configure route regeneration on each of the DCI-PEs.
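Steps 8 and 9 of the roadmap make the DCI-PEs encapsulate traffic toward the DC-GWs in VXLAN using the configured VNIs (5010 and 5020 in this example). As a reference point, the 8-byte VXLAN header that carries such a VNI is defined in RFC 7348; a minimal Python sketch builds it:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348: flags word with
    the I (valid-VNI) bit set, then the 24-bit VNI shifted left past
    8 reserved bits. VNIs 5010/5020 come from this example's BDs."""
    assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5010)
print(hdr.hex())  # 0800000000139200 (5010 = 0x001392)
```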
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the DCI-PEs and P
● RDs of the VPN and EVPN instances
● Import and export VPN targets of the VPN and EVPN instances
Procedure
Step 1 Assign an IP address to each node interface, including the loopback interfaces.
Step 2 Configure an IGP on the DCI backbone network. OSPF is used in this example.
Step 4 On each DCI-PE, configure a static route destined for the loopback interface of the
connected DC-GW.
# Configure DCI-PE1.
[~DCI-PE1] evpn vpn-instance evrf1 bd-mode
[*DCI-PE1-evpn-instance-evrf1] route-distinguisher 10:1
[*DCI-PE1-evpn-instance-evrf1] vpn-target 11:1 both
[*DCI-PE1-evpn-instance-evrf1] quit
[*DCI-PE1] bridge-domain 10
[*DCI-PE1-bd10] vxlan vni 5010 split-horizon-mode
[*DCI-PE1-bd10] evpn binding vpn-instance evrf1
[*DCI-PE1-bd10] esi 0000.1111.1111.4444.5555
[*DCI-PE1-bd10] quit
[*DCI-PE1] interface GigabitEthernet 1/0/0.1 mode l2
[*DCI-PE1-GigabitEthernet1/0/0.1] encapsulation qinq
[*DCI-PE1-GigabitEthernet1/0/0.1] bridge-domain 10
[*DCI-PE1-GigabitEthernet1/0/0.1] quit
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] evpn vpn-instance evrf1 bd-mode
[*DCI-PE2-evpn-instance-evrf1] route-distinguisher 10:1
[*DCI-PE2-evpn-instance-evrf1] vpn-target 11:1 both
[*DCI-PE2-evpn-instance-evrf1] quit
[*DCI-PE2] bridge-domain 10
[*DCI-PE2-bd10] vxlan vni 5020 split-horizon-mode
[*DCI-PE2-bd10] evpn binding vpn-instance evrf1
[*DCI-PE2-bd10] esi 0000.1111.3333.4444.5555
[*DCI-PE2-bd10] quit
[*DCI-PE2] interface GigabitEthernet 1/0/0.1 mode l2
[*DCI-PE2-GigabitEthernet1/0/0.1] encapsulation qinq
[*DCI-PE2-GigabitEthernet1/0/0.1] bridge-domain 10
[*DCI-PE2-GigabitEthernet1/0/0.1] quit
[*DCI-PE2] commit
# Configure DCI-PE1.
[~DCI-PE1] evpn source-address 11.11.11.11
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] evpn source-address 33.33.33.33
[*DCI-PE2] commit
Step 7 Configure the DCI-PEs to establish a BGP EVPN peer relationship with each other
and BGP EVPN peer relationships with the DC-GWs.
# Configure DCI-PE1.
[~DCI-PE1] bgp 100
[*DCI-PE1-bgp] peer 3.3.3.3 as-number 100
[*DCI-PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*DCI-PE1-bgp] peer 4.4.4.4 as-number 65410
[*DCI-PE1-bgp] peer 4.4.4.4 ebgp-max-hop 255
[*DCI-PE1-bgp] peer 4.4.4.4 connect-interface loopback 1
[*DCI-PE1-bgp] l2vpn-family evpn
[*DCI-PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*DCI-PE1-bgp-af-evpn] peer 4.4.4.4 enable
[*DCI-PE1-bgp-af-evpn] peer 4.4.4.4 advertise encap-type vxlan
[*DCI-PE1-bgp-af-evpn] quit
[*DCI-PE1-bgp] quit
[*DCI-PE1] commit
# Configure DCI-PE2.
[~DCI-PE2] bgp 100
[*DCI-PE2-bgp] peer 1.1.1.1 as-number 100
[*DCI-PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
[*DCI-PE2-bgp] peer 5.5.5.5 as-number 65420
[*DCI-PE2-bgp] peer 5.5.5.5 ebgp-max-hop 255
[*DCI-PE2-bgp] peer 5.5.5.5 connect-interface loopback 1
[*DCI-PE2-bgp] l2vpn-family evpn
[*DCI-PE2-bgp-af-evpn] peer 1.1.1.1 enable
[*DCI-PE2-bgp-af-evpn] peer 5.5.5.5 enable
[*DCI-PE2-bgp-af-evpn] peer 5.5.5.5 advertise encap-type vxlan
[*DCI-PE2-bgp-af-evpn] quit
[*DCI-PE2-bgp] quit
[*DCI-PE2] commit
# Configure DCI-PE2.
[~DCI-PE2] ip vpn-instance vpn1
[*DCI-PE2-vpn-instance-vpn1] ipv4-family
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 both
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 both evpn
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] quit
[*DCI-PE2-vpn-instance-vpn1] vxlan vni 555
[*DCI-PE2-vpn-instance-vpn1] quit
[*DCI-PE2] interface Vbdif10
[*DCI-PE2-Vbdif10] ip binding vpn-instance vpn1
[*DCI-PE2-Vbdif10] ip address 10.20.10.1 255.255.255.0
# Configure DCI-PE2.
[~DCI-PE2] interface nve 1
[*DCI-PE2-Nve1] source 33.33.33.33
[*DCI-PE2-Nve1] quit
[*DCI-PE2] commit
# Configure DCI-PE2.
[~DCI-PE2] tunnel-policy te-lsp1
[*DCI-PE2-tunnel-policy-te-lsp1] tunnel select-seq cr-lsp load-balance-number 1
[*DCI-PE2-tunnel-policy-te-lsp1] quit
[*DCI-PE2] ip vpn-instance vpn1
[*DCI-PE2-vpn-instance-vpn1] ipv4-family
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] tnl-policy te-lsp1 evpn
[*DCI-PE2-vpn-instance-vpn1-af-ipv4] quit
[*DCI-PE2-vpn-instance-vpn1] quit
[*DCI-PE2] evpn vpn-instance evrf1 bd-mode
[*DCI-PE2-evpn-instance-evrf1] tnl-policy te-lsp1
[*DCI-PE2-evpn-instance-evrf1] quit
[*DCI-PE2] commit
Step 11 Enable each DCI-PE to advertise regenerated routes to each BGP EVPN peer.
# Configure DCI-PE1.
[~DCI-PE1] bgp 100
[*DCI-PE1-bgp] ipv4-family vpn-instance vpn1
[*DCI-PE1-bgp-vpn1] import-route direct
[*DCI-PE1-bgp-vpn1] advertise l2vpn evpn
[*DCI-PE1-bgp-vpn1] quit
[*DCI-PE1-bgp] l2vpn-family evpn
[*DCI-PE1-bgp-af-evpn] peer 3.3.3.3 import reoriginate
[*DCI-PE1-bgp-af-evpn] peer 3.3.3.3 advertise route-reoriginated evpn mac-ip
[*DCI-PE1-bgp-af-evpn] peer 3.3.3.3 advertise route-reoriginated evpn mac
[*DCI-PE1-bgp-af-evpn] peer 3.3.3.3 advertise route-reoriginated evpn ip
[*DCI-PE1-bgp-af-evpn] peer 3.3.3.3 advertise irb
[*DCI-PE1-bgp-af-evpn] peer 4.4.4.4 import reoriginate
[*DCI-PE1-bgp-af-evpn] peer 4.4.4.4 advertise route-reoriginated evpn mac-ip
[*DCI-PE1-bgp-af-evpn] peer 4.4.4.4 advertise route-reoriginated evpn ip
# Configure DCI-PE2.
[~DCI-PE2] bgp 100
[*DCI-PE2-bgp] ipv4-family vpn-instance vpn1
[*DCI-PE2-bgp-vpn1] import-route direct
[*DCI-PE2-bgp-vpn1] advertise l2vpn evpn
[*DCI-PE2-bgp-vpn1] quit
[*DCI-PE2-bgp] l2vpn-family evpn
[*DCI-PE2-bgp-af-evpn] peer 1.1.1.1 import reoriginate
[*DCI-PE2-bgp-af-evpn] peer 1.1.1.1 advertise route-reoriginated evpn mac-ip
[*DCI-PE2-bgp-af-evpn] peer 1.1.1.1 advertise route-reoriginated evpn mac
[*DCI-PE2-bgp-af-evpn] peer 1.1.1.1 advertise route-reoriginated evpn ip
[*DCI-PE2-bgp-af-evpn] peer 1.1.1.1 advertise irb
[*DCI-PE2-bgp-af-evpn] peer 5.5.5.5 import reoriginate
[*DCI-PE2-bgp-af-evpn] peer 5.5.5.5 advertise route-reoriginated evpn mac-ip
[*DCI-PE2-bgp-af-evpn] peer 5.5.5.5 advertise route-reoriginated evpn ip
[*DCI-PE2-bgp-af-evpn] peer 5.5.5.5 advertise irb
[*DCI-PE2-bgp-af-evpn] quit
[*DCI-PE2-bgp] quit
[*DCI-PE2] commit
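The import reoriginate and advertise route-reoriginated commands above make each DCI-PE regenerate routes learned from one domain before advertising them into the other, replacing attributes such as the next hop with its own values. A simplified Python model of that behavior (field names here are illustrative assumptions, not a device API):

```python
def reoriginate(route: dict, local_nexthop: str, local_rd: str) -> dict:
    """Model of 'import reoriginate' + 'advertise route-reoriginated':
    the border node rewrites the next hop and RD of a received EVPN
    route to its own values before re-advertising it into the other
    domain, so the far side forwards traffic via the border node."""
    regenerated = dict(route)
    regenerated["next_hop"] = local_nexthop
    regenerated["rd"] = local_rd
    return regenerated

# A route received from a DC-GW, regenerated by DCI-PE1 (11.11.11.11):
r = {"prefix": "10.20.10.0/24", "next_hop": "4.4.4.4", "rd": "20:1"}
print(reoriginate(r, "11.11.11.11", "10:1")["next_hop"])  # 11.11.11.11
```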
----End
Configuration Files
● DCI-PE1 configuration file
#
sysname DCI-PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:1
tnl-policy te-lsp1
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 11:1 export-extcommunity evpn
● P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 192.168.1.0 0.0.0.255
network 192.168.10.0 0.0.0.255
mpls-te enable
#
return
#
sysname DCI-PE2
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:1
tnl-policy te-lsp1
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 11:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity
vpn-target 11:1 import-extcommunity evpn
tnl-policy te-lsp1 evpn
evpn mpls routing-enable
vxlan vni 555
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
bridge-domain 10
vxlan vni 5020 split-horizon-mode
evpn binding vpn-instance evrf1
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.20.10.1 255.255.255.0
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.30.1 255.255.255.0
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation qinq
bridge-domain 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.10.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
interface LoopBack2
ip address 33.33.33.33 255.255.255.255
#
interface Nve1
source 33.33.33.33
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 100
#
bgp 100
peer 1.1.1.1 as-number 100
Networking Requirements
A DC-GW and a DCI-PE are the same device, which is directly connected to a DC
device. On the network shown in Figure 3-176, a DC-PE-GW functions as both a
DC-GW and a DCI-PE. The DC-PE-GW is connected to the P on the DCI backbone
network on one side and directly connected to a DC device on the other side. A
VXLAN tunnel is established in each DC to implement intra-DC VM
communication. To implement inter-DC VM communication, create L3VPN
instances and EVPN instances on the DCI-PE-GWs and establish a BGP EVPN peer
relationship between the DCI-PE-GWs.
Figure 3-176 Configuring underlay VLAN access to DCI (using EVPN-MPLS as the
bearer and PE as a GW)
Interface 1, interface 2, and sub-interface 1.1 in this example represent GE 1/0/0, GE 2/0/0,
and GE 1/0/0.1, respectively.
DCI-PE1-GW1: GE 1/0/0.1 -, GE 2/0/0 192.168.1.1/24, Loopback1 1.1.1.1/32
P: GE 1/0/0 192.168.1.2/24, GE 2/0/0 192.168.10.1/24, Loopback1 2.2.2.2/32
DCI-PE2-GW2: GE 1/0/0.1 -, GE 2/0/0 192.168.10.2/24, Loopback1 3.3.3.3/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure OSPF on the DCI backbone network to implement communication
between DCI-PEs.
2. Configure an MPLS TE tunnel on the DCI backbone network.
3. Configure a VPN instance on each DCI-PE-GW and apply a tunnel policy to
the VPN instance.
4. Create a VBDIF interface on each DCI-PE-GW and bind the VPN instance to
the VBDIF interface.
5. Configure each DCI-PE-GW to advertise IP prefix routes.
6. Configure an EVPN instance on each DCI-PE-GW and establish a BGP EVPN
peer relationship between the DCI-PE-GWs, and configure each DCI-PE-GW to
advertise IRB routes.
7. Configure a source address on each DCI-PE-GW.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the DCI-PE-GWs and P
● RD of a VPN instance
● Import and export VPN targets of the VPN instance
Procedure
Step 1 Assign an IP address to each node interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the DCI backbone network. OSPF is used in this example.
For configuration details, see Configuration Files in this section.
Step 3 Configure an MPLS TE tunnel on the DCI backbone network.
For configuration details, see Configuration Files in this section.
Step 4 Configure a VPN instance on each DCI-PE-GW and apply a tunnel policy to the
VPN instance.
# Configure DCI-PE1-GW1.
[~DCI-PE1-GW1] tunnel-policy te-lsp1
[*DCI-PE1-GW1-tunnel-policy-te-lsp1] tunnel select-seq cr-lsp load-balance-number 1
[*DCI-PE1-GW1-tunnel-policy-te-lsp1] quit
[*DCI-PE1-GW1] ip vpn-instance vpn1
[*DCI-PE1-GW1-vpn-instance-vpn1] ipv4-family
[*DCI-PE1-GW1-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
[*DCI-PE1-GW1-vpn-instance-vpn1-af-ipv4] tnl-policy te-lsp1 evpn
[*DCI-PE1-GW1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 both evpn
[*DCI-PE1-GW1-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
[*DCI-PE1-GW1-vpn-instance-vpn1-af-ipv4] quit
[*DCI-PE1-GW1-vpn-instance-vpn1] quit
[*DCI-PE1-GW1] commit
# Configure DCI-PE2-GW2.
[~DCI-PE2-GW2] tunnel-policy te-lsp1
[*DCI-PE2-GW2-tunnel-policy-te-lsp1] tunnel select-seq cr-lsp load-balance-number 1
[*DCI-PE2-GW2-tunnel-policy-te-lsp1] quit
[*DCI-PE2-GW2] ip vpn-instance vpn1
[*DCI-PE2-GW2-vpn-instance-vpn1] ipv4-family
[*DCI-PE2-GW2-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
[*DCI-PE2-GW2-vpn-instance-vpn1-af-ipv4] tnl-policy te-lsp1 evpn
[*DCI-PE2-GW2-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 both evpn
[*DCI-PE2-GW2-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
[*DCI-PE2-GW2-vpn-instance-vpn1-af-ipv4] quit
[*DCI-PE2-GW2-vpn-instance-vpn1] quit
[*DCI-PE2-GW2] commit
# Configure DCI-PE2-GW2.
[~DCI-PE2-GW2] bgp 100
[*DCI-PE2-GW2-bgp] ipv4-family vpn-instance vpn1
[*DCI-PE2-GW2-bgp-vpn1] import-route direct
[*DCI-PE2-GW2-bgp-vpn1] advertise l2vpn evpn
[*DCI-PE2-GW2-bgp-vpn1] quit
[*DCI-PE2-GW2] commit
Step 6 Configure an EVPN instance on each DCI-PE-GW and establish a BGP EVPN peer
relationship between the DCI-PE-GWs, and configure each DCI-PE-GW to advertise
IRB routes.
# Configure DCI-PE1-GW1.
[~DCI-PE1-GW1] evpn vpn-instance evrf1 bd-mode
# Configure DCI-PE2-GW2.
[~DCI-PE2-GW2] evpn vpn-instance evrf1 bd-mode
[*DCI-PE2-GW2-evpn-instance-evrf1] route-distinguisher 10:1
[*DCI-PE2-GW2-evpn-instance-evrf1] vpn-target 11:1
[*DCI-PE2-GW2-evpn-instance-evrf1] tnl-policy te-lsp1
[*DCI-PE2-GW2-evpn-instance-evrf1] quit
[*DCI-PE2-GW2] bridge-domain 10
[*DCI-PE2-GW2-bd10] evpn binding vpn-instance evrf1
[*DCI-PE2-GW2-bd10] quit
[*DCI-PE2-GW2] bgp 100
[*DCI-PE2-GW2-bgp] peer 1.1.1.1 as-number 100
[*DCI-PE2-GW2-bgp] peer 1.1.1.1 connect-interface loopback 1
[*DCI-PE2-GW2-bgp] l2vpn-family evpn
[*DCI-PE2-GW2-bgp-af-evpn] peer 1.1.1.1 enable
[*DCI-PE2-GW2-bgp-af-evpn] peer 1.1.1.1 advertise irb
[*DCI-PE2-GW2-bgp-af-evpn] quit
[*DCI-PE2-GW2-bgp] quit
[*DCI-PE2-GW2] commit
# Configure DCI-PE2-GW2.
[~DCI-PE2-GW2] interface gigabitethernet 1/0/0.1 mode l2
[*DCI-PE2-GW2-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
[*DCI-PE2-GW2-GigabitEthernet1/0/0.1] rewrite pop single
[*DCI-PE2-GW2-GigabitEthernet1/0/0.1] bridge-domain 10
[*DCI-PE2-GW2-GigabitEthernet1/0/0.1] quit
[*DCI-PE2-GW2] interface Vbdif10
[*DCI-PE2-GW2-Vbdif10] ip binding vpn-instance vpn1
[*DCI-PE2-GW2-Vbdif10] ip address 10.2.1.1 255.255.255.0
[*DCI-PE2-GW2-Vbdif10] arp collect host enable
[*DCI-PE2-GW2-Vbdif10] quit
[*DCI-PE2-GW2] commit
# Configure DCI-PE2-GW2.
[~DCI-PE2-GW2] evpn source-address 3.3.3.3
[*DCI-PE2-GW2] commit
EVPN-Instance evrf1:
Number of Mac Routes: 4
Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr) NextHop
*>i 0:48:00e0-fc12-3456:0:0.0.0.0 3.3.3.3
*>i 0:48:00e0-fc12-3456:32:20.1.1.1 3.3.3.3
*> 0:48:00e0-fc12-7890:0:0.0.0.0 0.0.0.0
*> 0:48:00e0-fc12-7890:32:10.1.1.1 0.0.0.0
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 2
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*> 0:32:1.1.1.1 127.0.0.1
*>i 0:32:3.3.3.3 3.3.3.3
----End
Configuration Files
● DCI-PE1-GW1 configuration file
#
sysname DCI-PE1-GW1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:1
tnl-policy te-lsp1
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
tnl-policy te-lsp1 evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 100
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
peer 3.3.3.3 advertise irb
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 192.168.1.0 0.0.0.255
mpls-te enable
#
tunnel-policy te-lsp1
tunnel select-seq cr-lsp load-balance-number 1
#
evpn source-address 1.1.1.1
#
return
● P configuration file
#
sysname P
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.10.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 192.168.1.0 0.0.0.255
network 192.168.10.0 0.0.0.255
mpls-te enable
#
return
● DCI-PE2-GW2 configuration file
#
sysname DCI-PE2-GW2
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 10:1
tnl-policy te-lsp1
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
tnl-policy te-lsp1 evpn
evpn mpls routing-enable
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 192.168.10.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Pos3/1/3
link-protocol ppp
undo shutdown
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 100
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 advertise irb
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 192.168.10.0 0.0.0.255
mpls-te enable
#
tunnel-policy te-lsp1
tunnel select-seq cr-lsp load-balance-number 1
#
evpn source-address 3.3.3.3
#
return
Networking Requirements
An L3VPN is deployed over the MAN and is being replaced with an EVPN. Because a large number of devices are deployed on the MAN, the end-to-end replacement cannot be completed all at once, so the L3VPN and the EVPN co-exist during the network reconstruction. On the network shown in Figure 3-177, an
L3VPN is deployed between the UPE and NPE1, and an EVPN is deployed between
NPE1 and NPE2. To allow communication between the L3VPN and EVPN,
configure the border device NPE1 between the two networks.
Loopback 1 1.1.1.1/32
Loopback 1 2.2.2.2/32
Loopback 1 3.3.3.3/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Deploy IGPs on the UPE, NPE1, and NPE2. In this example, OSPF runs
between the UPE and NPE1, and IS-IS runs between NPE1 and NPE2.
2. Configure MPLS LDP on the UPE, NPE1, and NPE2.
3. Configure L3VPN instances on the UPE and NPE1, and establish a BGP VPNv4
connection between them.
4. Establish a BGP EVPN connection between NPE1 and NPE2.
5. Configure an L3VPN instance on NPE2 and bind it to an interface to access
Layer 3 services.
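The VPN-target matching used throughout these examples can be modeled briefly. The following Python sketch is a simplified, hypothetical model (not device code) of how export route targets attached by one PE are matched against the import route targets configured on another:

```python
# Minimal sketch of BGP VPN-target (route-target) filtering.
# A route is imported into a VPN instance if any RT attached at export
# time matches any import RT configured on the receiving instance.

def export_route(prefix, export_rts):
    """Attach export RTs to a route when it leaves the VPN instance."""
    return {"prefix": prefix, "rts": set(export_rts)}

def should_import(route, import_rts):
    """Import the route if the export and import RT sets intersect."""
    return bool(route["rts"] & set(import_rts))

# The UPE and NPE1 both use 'vpn-target 1:1 both', so routes are exchanged.
route = export_route("192.168.20.0/24", ["1:1"])
print(should_import(route, ["1:1"]))   # matching import RT
print(should_import(route, ["2:2"]))   # non-matching import RT
```

Because NPE1 uses separate RTs for its VPNv4 side (1:1) and its EVPN side (11:1 in the configuration file above), the same intersection test runs independently per address family.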
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the UPE and NPEs
● RDs of the L3VPN instances
● Import and export VPN targets for the L3VPN instances
Procedure
Step 1 Assign an IP address to each device interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
Step 2 Deploy IGPs on the UPE, NPE1, and NPE2. In this example, OSPF runs between the
UPE and NPE1, and IS-IS runs between NPE1 and NPE2.
For configuration details, see Configuration Files in this section.
Step 3 Configure MPLS LDP on the UPE, NPE1, and NPE2.
For configuration details, see Configuration Files in this section.
Step 4 Configure L3VPN instances on the UPE and NPE1, and establish a BGP VPNv4
connection between them.
# Configure the UPE.
[~UPE] ip vpn-instance vpn1
[*UPE-vpn-instance-vpn1] ipv4-family
[*UPE-vpn-instance-vpn1-af-ipv4] route-distinguisher 10:1
[*UPE-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 both
[*UPE-vpn-instance-vpn1-af-ipv4] quit
[*UPE-vpn-instance-vpn1] quit
[*UPE] interface GigabitEthernet 2/0/0
[*UPE-GigabitEthernet2/0/0] ip binding vpn-instance vpn1
[*UPE-GigabitEthernet2/0/0] ip address 192.168.20.1 255.255.255.0
[*UPE-GigabitEthernet2/0/0] quit
[*UPE] bgp 100
[*UPE-bgp] peer 2.2.2.2 as-number 100
[*UPE-bgp] peer 2.2.2.2 connect-interface LoopBack1
[*UPE-bgp] ipv4-family vpnv4
[*UPE-bgp-af-vpnv4] peer 2.2.2.2 enable
[*UPE-bgp-af-vpnv4] quit
[*UPE-bgp] ipv4-family vpn-instance vpn1
[*UPE-bgp-vpn1] import-route direct
[*UPE-bgp-vpn1] quit
[*UPE] commit
# Configure NPE1.
[~NPE1] ip vpn-instance vpn1
[*NPE1-vpn-instance-vpn1] ipv4-family
[*NPE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
[*NPE1-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 both
[*NPE1-vpn-instance-vpn1-af-ipv4] quit
[*NPE1-vpn-instance-vpn1] quit
[*NPE1] bgp 100
[*NPE1-bgp] peer 1.1.1.1 as-number 100
[*NPE1-bgp] peer 1.1.1.1 connect-interface LoopBack1
[*NPE1-bgp] ipv4-family vpnv4
[*NPE1-bgp-af-vpnv4] peer 1.1.1.1 enable
[*NPE1-bgp-af-vpnv4] quit
[*NPE1] commit
# Configure NPE2.
[~NPE2] bgp 100
[*NPE2-bgp] peer 2.2.2.2 as-number 100
[*NPE2-bgp] peer 2.2.2.2 connect-interface LoopBack1
[*NPE2-bgp] l2vpn-family evpn
[*NPE2-bgp-af-evpn] peer 2.2.2.2 enable
[*NPE2-bgp-af-evpn] quit
[*NPE2] commit
[*NPE1] commit
EVPN-Instance evrf1:
Number of Mac Routes: 5
Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr) NextHop
*> 0:48:00e0-fc12-3456:0:0.0.0.0 0.0.0.0
* 0.0.0.0
*> 0:48:00e0-fc12-3456:32:192.168.30.2 0.0.0.0
*> 0:48:00e0-fc12-7890:0:0.0.0.0 0.0.0.0
*> 0:48:00e0-fc12-7890:32:192.168.30.1 0.0.0.0
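The route keys in the output above pack several fields into one colon-separated string (EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr). The following parser is an illustrative sketch only, to make that structure explicit:

```python
def parse_mac_ip_key(key):
    """Split an EVPN MAC/IP route key of the form
    EthTagId:MacAddrLen:MacAddr:IpAddrLen:IpAddr into named fields."""
    eth_tag, mac_len, mac, ip_len, ip = key.split(":")
    return {
        "eth_tag": int(eth_tag),
        "mac_len": int(mac_len),   # MAC address length in bits (48)
        "mac": mac,
        "ip_len": int(ip_len),     # IP prefix length; 0 means MAC-only route
        "ip": ip,
    }

r = parse_mac_ip_key("0:48:00e0-fc12-3456:32:192.168.30.2")
print(r["mac"], r["ip"])   # 00e0-fc12-3456 192.168.30.2
```

A key with IpAddrLen 0 (such as 0:48:00e0-fc12-3456:0:0.0.0.0) is a MAC-only advertisement; a nonzero IpAddrLen carries the host IP as well.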
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 10:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.20.1 255.255.255.0
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
peer 192.168.20.2 as-number 65410
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
apply-label per-instance
vpn-target 2:2 export-extcommunity evpn
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.30.1 255.255.255.0
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.2 enable
peer 2.2.2.2 advertise irb
#
return
Networking Requirements
On the network shown in Figure 3-178, PE1 and PE2 are egress devices of the
data center network. PE1 and PE2 work in active-active mode with a bypass
VXLAN tunnel deployed between them. They use an anycast VTEP address to
establish a VXLAN tunnel with the TOR. In this manner, PE1, PE2, and the TOR can
communicate with each other. PE1 and PE2 communicate with the external
network through the VPLS network, on which PW redundancy is configured.
Specifically, the PE-AGG connects to PE1 and PE2 through primary and secondary
PWs, respectively.
Loopback 1 1.1.1.1/32
Loopback 2 1.1.1.100/32
Loopback 3 1.1.1.20/32
Loopback 1 2.2.2.2/32
Loopback 2 2.2.2.100/32
Loopback 3 1.1.1.20/32
Loopback 1 3.3.3.3/32
Loopback 1 4.4.4.100/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure interface IP addresses, an IGP, and MPLS functions on each device.
2. Configure BGP EVPN on PE1 and PE2.
3. Configure a VXLAN tunnel between PE1 and PE2.
4. Configure primary and secondary PWs on the PE-AGG.
5. Configure PWs on PE1 and PE2 and set the PWs to AC mode.
Data Preparation
To complete the configuration, you need the following data:
● Interfaces and their IP addresses
● EVPN instance name
● RD and RT of the EVPN instance
Procedure
Step 1 Configure interface IP addresses, an IGP, and MPLS functions on each device.
For configuration details, see Configuration Files in this section.
Step 3 Configure a VXLAN tunnel between PE1 and PE2.
1. Configure an EVPN instance and bind it to a BD on each PE.
# Configure PE1.
[~PE1] evpn vpn-instance evpn1 bd-mode
[*PE1-evpn-instance-evpn1] route-distinguisher 11:11
[*PE1-evpn-instance-evpn1] vpn-target 1:1 export-extcommunity
[*PE1-evpn-instance-evpn1] vpn-target 1:1 import-extcommunity
[*PE1-evpn-instance-evpn1] quit
[*PE1] bridge-domain 10
[*PE1-bd10] vxlan vni 10 split-horizon-mode
[*PE1-bd10] evpn binding vpn-instance evpn1
[*PE1-bd10] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in
this section.
2. Configure an ingress replication list on each PE.
# Configure PE1.
[~PE1] interface nve 1
[*PE1-Nve1] source 1.1.1.20
[*PE1-Nve1] bypass source 1.1.1.100
[*PE1-Nve1] mac-address 00e0-fc12-3456
[*PE1-Nve1] vni 10 head-end peer-list protocol bgp
[*PE1-Nve1] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in
this section.
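The head-end peer list configured above drives ingress replication: each PE makes one VXLAN-encapsulated copy of a BUM (broadcast, unknown-unicast, multicast) frame per remote VTEP learned through BGP. A minimal sketch of that replication, with hypothetical data structures:

```python
def flood_bum(frame, local_vtep, peer_list):
    """Ingress replication: produce one VXLAN-encapsulated copy of a
    BUM frame per remote VTEP in the head-end peer list."""
    return [{"outer_src": local_vtep, "outer_dst": peer, "payload": frame}
            for peer in peer_list]

# Remote VTEPs learned via BGP EVPN; 1.1.1.20 is the anycast source.
peers = ["2.2.2.100", "4.4.4.100"]
copies = flood_bum("broadcast-ARP", "1.1.1.20", peers)
print(len(copies))   # one copy per remote VTEP
```

The split-horizon mode configured on the BD prevents a copy received over the bypass tunnel from being flooded back toward the VXLAN side, which this sketch does not model.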
Step 4 Configure primary and secondary PWs on the PE-AGG.
# Configure the PE-AGG.
[~PE-AGG] mpls l2vpn
[*PE-AGG-l2vpn] quit
[*PE-AGG] vsi vsi1 bd-mode
Step 5 Configure PWs on PE1 and PE2 and set the PWs to AC mode.
# Configure PE1.
[~PE1] mpls l2vpn
[*PE1-l2vpn] quit
[*PE1] vsi vsi1 bd-mode
[*PE1-vsi1] pwsignal ldp
[*PE1-vsi1-ldp] vsi-id 1
[*PE1-vsi1-ldp] peer 3.3.3.3 ac-mode
[*PE1-vsi1-ldp] quit
[*PE1-vsi1] quit
[*PE1] bridge-domain 10
[*PE1-bd10] l2 binding vsi vsi1
[*PE1-bd10] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
Run the display vxlan tunnel command on PE1 and check information about the
VXLAN tunnels.
[~PE1] display vxlan tunnel
Number of vxlan tunnel : 2
Tunnel ID Source Destination State Type Uptime
-----------------------------------------------------------------------------------
4026531842 1.1.1.100 2.2.2.100 up dynamic 01:31:05
4026531843 1.1.1.20 4.4.4.100 up dynamic 00:32:51
Run the display vsi command on PE1 and check the VSI status.
[~PE1] display vsi
Total VSI number is 1, 1 is up, 0 is down, 1 is LDP mode, 0 is BGP mode, 0 is BGPAD mode, 0 is mixed
mode, 0 is unspecified mode
--------------------------------------------------------------------------
Vsi Mem PW Mac Encap Mtu Vsi
Name Disc Type Learn Type Value State
--------------------------------------------------------------------------
vsi1 -- ldp qualify vlan 1500 up
Run the display vsi name vsi1 protect-group 10 command on the PE-AGG and
check information about the PW protection group in the VSI.
[~PE-AGG] display vsi name vsi1 protect-group 10
Protect-group: 10
-------------------------------------------------------------------------------
PeerIp:VcId Pref Active
-------------------------------------------------------------------------------
1.1.1.1:1 1 Active
2.2.2.2:1 2 Inactive
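As the protect-group output shows, the PE-AGG marks the PW with the lower preference value as active and the other as inactive. A simplified sketch of that selection (function and tuple layout are illustrative, not the device's internal representation):

```python
def select_active_pw(pws):
    """Given (peer_ip, vc_id, pref) tuples, mark the PW with the
    lowest preference value Active and the others Inactive."""
    best = min(pws, key=lambda pw: pw[2])
    return [(ip, vc, pref, "Active" if (ip, vc, pref) == best else "Inactive")
            for ip, vc, pref in pws]

# PWs to PE1 (pref 1) and PE2 (pref 2), as in the display output above.
group = [("1.1.1.1", 1, 1), ("2.2.2.2", 1, 2)]
for ip, vc, pref, state in select_active_pw(group):
    print(f"{ip}:{vc}  {pref}  {state}")
```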
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn
bypass-vxlan enable
#
evpn vpn-instance evpn1 bd-mode
route-distinguisher 11:11
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls l2vpn
#
vsi vsi1 bd-mode
pwsignal ldp
vsi-id 1
peer 3.3.3.3 ac-mode
#
bridge-domain 10
vxlan vni 10 split-horizon-mode
evpn binding vpn-instance evpn1
l2 binding vsi vsi1
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0001.00
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.14.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.1.13.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface LoopBack2
ip address 1.1.1.100 255.255.255.255
isis enable 1
#
interface LoopBack3
ip address 1.1.1.20 255.255.255.255
isis enable 1
#
interface Nve1
source 1.1.1.20
bypass source 1.1.1.100
mac-address 00e0-fc12-3456
vni 10 head-end peer-list protocol bgp
#
bgp 100
peer 2.2.2.100 as-number 100
peer 2.2.2.100 connect-interface LoopBack2
peer 4.4.4.100 as-number 100
peer 4.4.4.100 connect-interface LoopBack2
#
ipv4-family unicast
undo synchronization
peer 2.2.2.100 enable
peer 4.4.4.100 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.100 enable
peer 2.2.2.100 advertise encap-type vxlan
peer 4.4.4.100 enable
peer 4.4.4.100 advertise encap-type vxlan
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.13.0 0.0.0.255
#
return
● PE2 configuration file
#
sysname PE2
#
evpn
bypass-vxlan enable
#
evpn vpn-instance evpn1 bd-mode
route-distinguisher 11:11
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls l2vpn
#
vsi vsi1 bd-mode
pwsignal ldp
vsi-id 1
peer 3.3.3.3 ac-mode
#
bridge-domain 10
vxlan vni 10 split-horizon-mode
evpn binding vpn-instance evpn1
l2 binding vsi vsi1
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0002.00
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.2.14.1 255.255.255.0
isis enable 1
#
interface GigabitEthernet1/0/2
undo shutdown
Networking Requirements
At present, the IP bearer network uses L2VPN and L3VPN (HVPN) to carry Layer 2
and Layer 3 services, respectively, which makes the protocol suite complex.
Because EVPN can carry both Layer 2 and Layer 3 services, many IP bearer
networks will evolve to EVPN to simplify service bearing. Specifically, the
L3VPN HVPN that carries Layer 3 services needs to evolve to an EVPN L3VPN
HVPN. On the network shown in Figure 3-179, the UPE and SPE are connected at
the access layer, and the SPE and NPE are connected at the aggregation layer.
Before an EVPN L3VPN HoVPN is deployed to implement E2E interworking, separate
IGPs must be deployed at the access and aggregation layers to implement
interworking at each layer. On an EVPN L3VPN HoVPN, a UPE does not have
specific routes to NPEs and can only send service data to SPEs over default
routes, thereby implementing route isolation. An EVPN L3VPN HoVPN can
therefore use devices with limited route management capabilities as UPEs,
reducing network deployment costs.
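The forwarding consequence of holding only a default route on the UPE can be sketched with a longest-prefix-match lookup. This is a simplified model of FIB behavior, not device code:

```python
import ipaddress

def lookup(fib, dst):
    """Longest-prefix-match lookup: return the next hop of the most
    specific matching prefix, or None if nothing matches."""
    best = None
    for prefix, nexthop in fib:
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dst) in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, nexthop)
    return best[1] if best else None

# UPE FIB in vpn1: only a default route pointing at the SPE.
upe_fib = [("0.0.0.0/0", "SPE")]
print(lookup(upe_fib, "192.168.30.1"))   # all traffic goes to the SPE
```

Because the default route matches every destination, the UPE never needs the specific routes held on the SPE and NPE, which is exactly the route isolation the HoVPN model relies on.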
Configuration Roadmap
The configuration roadmap is as follows:
1. Deploy IGPs on the UPE, SPE, and NPE. In this example, OSPF runs between
the UPE and SPE, and IS-IS runs between the SPE and NPE.
2. Configure MPLS LDP on the UPE, SPE, and NPE.
3. Create a VPN instance on each of the UPE, SPE, and NPE.
4. Bind the VPN instances to the AC interfaces on the UPE and NPE.
5. Configure a default static route for the VPN instance on the SPE.
6. Configure a route policy on the NPE to prevent the NPE from receiving default
routes.
7. Configure BGP-EVPN peer relationships between the UPE and SPE, and
between the SPE and NPE, and specify the UPE as a lower-level PE of the SPE.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the UPE (1.1.1.1), SPE (2.2.2.2), and NPE (3.3.3.3)
● VPN instance name (vpn1) and RD (100:1)
● VPN targets 2:2 for EVPN
Procedure
Step 1 Assign an IP address to each device interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
Step 2 Deploy IGPs on the UPE, SPE, and NPE. In this example, OSPF runs between the
UPE and SPE, and IS-IS runs between the SPE and NPE.
For configuration details, see Configuration Files in this section.
Step 3 Configure MPLS LDP on the UPE, SPE, and NPE.
Step 5 Bind the VPN instances to the AC interfaces on the UPE and NPE.
# Configure the UPE.
[~UPE] interface GigabitEthernet 2/0/0
[*UPE-GigabitEthernet2/0/0] ip binding vpn-instance vpn1
[*UPE-GigabitEthernet2/0/0] ip address 192.168.20.1 255.255.255.0
[*UPE-GigabitEthernet2/0/0] quit
[*UPE] commit
Step 7 Configure a route policy on the NPE to prevent the NPE from receiving default
routes.
[~NPE] ip ip-prefix default index 10 permit 0.0.0.0 0
[*NPE] route-policy SPE deny node 10
[*NPE-route-policy] if-match ip-prefix default
[*NPE-route-policy] quit
[*NPE] route-policy SPE permit node 20
[*NPE-route-policy] quit
[*NPE] ip vpn-instance vpn1
[*NPE-vpn-instance-vpn1] ipv4-family
[*NPE-vpn-instance-vpn1-af-ipv4] import route-policy SPE evpn
[*NPE-vpn-instance-vpn1-af-ipv4] quit
[*NPE-vpn-instance-vpn1] quit
[*NPE] commit
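The route-policy configured above denies the default route at node 10 and permits everything else at node 20. A brief sketch of that two-node evaluation (simplified: a real route-policy matches against the ip-prefix list, which here contains only 0.0.0.0/0):

```python
def apply_route_policy(prefix):
    """Evaluate a two-node route-policy: node 10 denies the default
    route (matched by ip-prefix 'default'), node 20 permits the rest."""
    # Node 10: deny if the prefix matches 0.0.0.0/0 exactly.
    if prefix == "0.0.0.0/0":
        return False   # deny -> route not imported
    # Node 20: permit with no match condition -> matches everything else.
    return True

print(apply_route_policy("0.0.0.0/0"))        # default route filtered
print(apply_route_policy("192.168.20.0/24"))  # specific route accepted
```

Without node 20, routes that fail node 10's match would fall off the end of the policy and be denied, so the trailing permit node is what lets all non-default routes through.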
Step 8 Configure BGP-EVPN peer relationships between the UPE and SPE, and between
the SPE and NPE, and specify the UPE as a lower-level PE of the SPE.
# Configure the UPE.
[~UPE] bgp 100
[*UPE-bgp] peer 2.2.2.2 as-number 100
[*UPE-bgp] peer 2.2.2.2 connect-interface LoopBack1
[*UPE-bgp] l2vpn-family evpn
[*UPE-bgp-af-evpn] peer 2.2.2.2 enable
[*UPE-bgp-af-evpn] quit
[*UPE-bgp] ipv4-family vpn-instance vpn1
[*UPE-bgp-vpn1] advertise l2vpn evpn
[*UPE-bgp-vpn1] import-route direct
[*UPE-bgp-vpn1] quit
[*UPE-bgp] quit
[*UPE] commit
Run the display ip routing-table vpn-instance vpn1 command on the NPE. The
command output shows the VPN routes.
[~NPE] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 5 Routes : 5
Run the display bgp evpn all routing-table command on the UPE. The command
output shows the default EVPN routes received from the SPE.
[~UPE] display bgp evpn all routing-table
Local AS number : 100
Run the display ip routing-table vpn-instance vpn1 command on the UPE. The
command output shows the default VPN routes.
[~UPE] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 5 Routes : 5
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 2:2 export-extcommunity evpn
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.20.1 255.255.255.0
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
policy vpn-target
peer 2.2.2.2 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
#
mpls
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.30.1 255.255.255.0
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
policy vpn-target
peer 2.2.2.2 enable
#
route-policy SPE deny node 10
if-match ip-prefix default
#
route-policy SPE permit node 20
#
ip ip-prefix default index 10 permit 0.0.0.0 0
#
return
Networking Requirements
At present, the IP bearer network uses L2VPN and L3VPN (HVPN) to carry Layer 2
and Layer 3 services, respectively, which makes the protocol suite complex.
Because EVPN can carry both Layer 2 and Layer 3 services, many IP bearer
networks will evolve to EVPN to simplify service bearing. Specifically, the
L3VPN HVPN that carries Layer 3 services needs to evolve to an EVPN L3VPN
HVPN. On the network shown in Figure 3-180, the UPE and SPE are connected at
the access layer, and the SPE and NPE are connected at the aggregation layer.
Before an EVPN L3VPN H-VPN is deployed to implement E2E interworking, separate
IGPs must be deployed at the access and aggregation layers to implement
interworking at each layer. On an EVPN L3VPN H-VPN, UPEs function as RR
clients and receive the specific routes reflected by SPEs functioning as RRs.
This mechanism facilitates route management and traffic forwarding control.
Configuration Roadmap
The configuration roadmap is as follows:
1. Deploy IGPs on the UPE, SPE, and NPE. In this example, OSPF runs between
the UPE and SPE, and IS-IS runs between the SPE and NPE.
2. Configure MPLS LDP on the UPE, SPE, and NPE.
3. Create a VPN instance on each of the UPE and NPE.
4. Bind the VPN instances to the AC interfaces on the UPE and NPE.
5. Configure BGP-EVPN peer relationships between the UPE and SPE, and
between the SPE and NPE.
6. On the SPE, specify the UPE as the BGP-EVPN RR client, and configure the
SPE to use its own IP address as the next hop of BGP EVPN routes being
advertised to the peer.
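Roadmap item 6 turns the SPE into a route reflector that also rewrites the next hop to its own address. A minimal, hypothetical model of reflecting a route between RR clients with next-hop-local:

```python
def reflect(route, from_peer, clients, rr_address):
    """Reflect a route received from an RR client to all other clients,
    rewriting the next hop to the RR's own address (next-hop-local)."""
    out = []
    for peer in clients:
        if peer == from_peer:
            continue  # never reflect a route back to its sender
        out.append({**route, "next_hop": rr_address, "to": peer})
    return out

clients = ["1.1.1.1", "3.3.3.3"]   # UPE and NPE as RR clients
route = {"prefix": "192.168.30.0/24", "next_hop": "3.3.3.3"}
for r in reflect(route, "3.3.3.3", clients, "2.2.2.2"):
    print(r["to"], r["next_hop"])   # 1.1.1.1 2.2.2.2
```

Rewriting the next hop matters here because the access and aggregation layers run separate IGPs: the UPE has no route to the NPE's loopback, but it can always reach the SPE.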
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the UPE (1.1.1.1), SPE (2.2.2.2), and NPE (3.3.3.3)
● VPN instance name (vpn1) and RD (100:1)
● VPN targets 2:2 for EVPN
Procedure
Step 1 Assign an IP address to each device interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
Step 2 Deploy IGPs on the UPE, SPE, and NPE. In this example, OSPF runs between the
UPE and SPE, and IS-IS runs between the SPE and NPE.
For configuration details, see Configuration Files in this section.
Step 3 Configure MPLS LDP on the UPE, SPE, and NPE.
For configuration details, see Configuration Files in this section.
Step 4 Create a VPN instance on each of the UPE and NPE.
# Configure the UPE.
[~UPE] ip vpn-instance vpn1
[*UPE-vpn-instance-vpn1] ipv4-family
[*UPE-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*UPE-vpn-instance-vpn1-af-ipv4] vpn-target 2:2 both evpn
[*UPE-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
[*UPE-vpn-instance-vpn1-af-ipv4] quit
[*UPE-vpn-instance-vpn1] quit
[*UPE] commit
Step 5 Bind the VPN instances to the AC interfaces on the UPE and NPE.
# Configure the UPE.
[~UPE] interface GigabitEthernet 2/0/0
[*UPE-GigabitEthernet2/0/0] ip binding vpn-instance vpn1
[*UPE-GigabitEthernet2/0/0] ip address 192.168.20.1 255.255.255.0
[*UPE-GigabitEthernet2/0/0] quit
[*UPE] commit
Step 6 Configure BGP-EVPN peer relationships between the UPE and SPE, and between
the SPE and NPE.
# Configure the UPE.
[~UPE] bgp 100
[*UPE-bgp] peer 2.2.2.2 as-number 100
[*UPE-bgp] peer 2.2.2.2 connect-interface LoopBack1
[*UPE-bgp] l2vpn-family evpn
[*UPE-bgp-af-evpn] peer 2.2.2.2 enable
[*UPE-bgp-af-evpn] quit
[*UPE-bgp] ipv4-family vpn-instance vpn1
[*UPE-bgp-vpn1] advertise l2vpn evpn
[*UPE-bgp-vpn1] import-route direct
[*UPE-bgp-vpn1] quit
[*UPE-bgp] quit
[*UPE] commit
Step 7 On the SPE, specify the UPE as the BGP-EVPN RR client, and configure the SPE to
use its own IP address as the next hop of BGP EVPN routes being advertised to the
peer.
# Configure the SPE.
[~SPE] bgp 100
[*SPE-bgp] l2vpn-family evpn
[*SPE-bgp-af-evpn] peer 1.1.1.1 reflect-client
[*SPE-bgp-af-evpn] peer 1.1.1.1 next-hop-local
[*SPE-bgp-af-evpn] peer 3.3.3.3 reflect-client
[*SPE-bgp-af-evpn] peer 3.3.3.3 next-hop-local
[*SPE-bgp-af-evpn] quit
[*SPE-bgp] quit
[*SPE] commit
Run the display ip routing-table vpn-instance vpn1 command on the NPE and
UPE. The command output shows the VPN routes received from the remote end.
The following example uses the command output on the NPE.
[~NPE] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 5 Routes : 5
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 2:2 export-extcommunity evpn
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.20.1 255.255.255.0
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
Networking Requirements
At present, the IP bearer network uses L2VPN and L3VPN (HVPN) to carry Layer 2
and Layer 3 services, respectively, which makes the protocol suite complex.
Because EVPN can carry both Layer 2 and Layer 3 services, many IP bearer
networks will evolve to EVPN to simplify service bearing. Specifically, the
L3VPN HVPN that carries Layer 3 services needs to evolve to an EVPN L3VPN
HVPN. During evolution, if many devices are deployed on the network,
end-to-end evolution may not be possible all at once, so the L3VPN and EVPN
co-exist. On the network shown in Figure 3-181, the UPE and SPE are connected
at the access layer, and the SPE and NPE are connected at the aggregation
layer. Separate IGPs are deployed at the access and aggregation layers to
implement interworking at each layer. An EVPN L3VPN HoVPN is deployed between
the UPE and SPE, and a common L3VPN is deployed between the SPE and NPE. The
SPE advertises only default EVPN routes to the UPE. After receiving specific
routes (EVPN routes) from the UPE, the SPE re-encapsulates these routes as
VPNv4 routes and advertises them to the NPE.
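The splicing behavior on the SPE (EVPN routes from the UPE re-advertised toward the NPE as VPNv4 routes) can be sketched as a re-origination step. Field names below are illustrative only, not the device's internal representation:

```python
def reoriginate_evpn_to_vpnv4(evpn_route, spe_rd, spe_address):
    """Model 'peer import reoriginate' plus 'advertise
    route-reoriginated evpn ip': rebuild a received EVPN IP route as a
    VPNv4 route carrying the SPE's RD, with the SPE as next hop."""
    return {
        "family": "vpnv4",
        "rd": spe_rd,
        "prefix": evpn_route["prefix"],
        "next_hop": spe_address,  # SPE splices the two tunnel segments
    }

evpn_route = {"family": "evpn", "rd": "100:1",
              "prefix": "192.168.20.0/24", "next_hop": "1.1.1.1"}
v4 = reoriginate_evpn_to_vpnv4(evpn_route, "100:1", "2.2.2.2")
print(v4["family"], v4["next_hop"])
```

Setting itself as the next hop is what lets the SPE stitch the UPE-side and NPE-side label switched paths, since neither endpoint has a route to the other's loopback.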
Figure 3-181 Splicing between an EVPN L3VPN HoVPN and a common L3VPN
Configuration Roadmap
The configuration roadmap is as follows:
1. Deploy IGPs on the UPE, SPE, and NPE. In this example, OSPF runs between
the UPE and SPE, and IS-IS runs between the SPE and NPE.
2. Configure MPLS LDP on the UPE, SPE, and NPE.
3. Create a VPN instance on each of the UPE, SPE, and NPE.
4. Bind the VPN instances to the AC interfaces on the UPE and NPE.
5. Configure a default static route for the VPN instance on the SPE.
6. Configure a route policy on the NPE to prevent the NPE from receiving default
routes.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the UPE (1.1.1.1), SPE (2.2.2.2), and NPE (3.3.3.3)
● VPN instance name (vpn1) and RD (100:1)
● VPN targets 1:1 (import and export) of vpn1 and 2:2 for EVPN
Procedure
Step 1 Assign an IP address to each device interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
Step 2 Deploy IGPs on the UPE, SPE, and NPE. In this example, OSPF runs between the
UPE and SPE, and IS-IS runs between the SPE and NPE.
For configuration details, see Configuration Files in this section.
Step 3 Configure MPLS LDP on the UPE, SPE, and NPE.
For configuration details, see Configuration Files in this section.
Step 4 Create a VPN instance on each of the UPE, SPE, and NPE.
# Configure the UPE.
[~UPE] ip vpn-instance vpn1
[*UPE-vpn-instance-vpn1] ipv4-family
[*UPE-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:1
[*UPE-vpn-instance-vpn1-af-ipv4] vpn-target 1:1 both
[*UPE-vpn-instance-vpn1-af-ipv4] vpn-target 2:2 both evpn
[*UPE-vpn-instance-vpn1-af-ipv4] evpn mpls routing-enable
[*UPE-vpn-instance-vpn1-af-ipv4] quit
[*UPE-vpn-instance-vpn1] quit
[*UPE] commit
[*NPE-vpn-instance-vpn1] quit
[*NPE] commit
Step 5 Bind the VPN instances to the AC interfaces on the UPE and NPE.
Step 7 Configure a route policy on the NPE to prevent the NPE from receiving default
routes.
[~NPE] ip ip-prefix default index 10 permit 0.0.0.0 0
[*NPE] route-policy SPE deny node 10
[*NPE-route-policy] if-match ip-prefix default
[*NPE-route-policy] quit
[*NPE] route-policy SPE permit node 20
[*NPE-route-policy] quit
[*NPE] ip vpn-instance vpn1
[*NPE-vpn-instance-vpn1] ipv4-family
[*NPE-vpn-instance-vpn1-af-ipv4] import route-policy SPE
[*NPE-vpn-instance-vpn1-af-ipv4] quit
[*NPE-vpn-instance-vpn1] quit
[*NPE] commit
Step 8 Configure a BGP-VPNv4 peer relationship between the SPE and NPE.
Step 9 Configure a BGP-EVPN peer relationship between the UPE and SPE, specify the
UPE as a lower-level PE of the SPE, and configure the UPE to import the default
VPN route.
Run the display ip routing-table vpn-instance vpn1 command on the NPE. The
command output shows the VPN routes.
[~NPE] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 5 Routes : 5
Run the display bgp evpn all routing-table command on the UPE. The command
output shows the default EVPN routes received from the SPE.
[~UPE] display bgp evpn all routing-table
Local AS number : 100
Run the display ip routing-table vpn-instance vpn1 command on the UPE. The
command output shows the default VPN routes received from the SPE.
[~UPE] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 5 Routes : 5
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 2:2 export-extcommunity evpn
vpn-target 1:1 import-extcommunity
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.20.1 255.255.255.0
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
policy vpn-target
peer 2.2.2.2 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
● SPE configuration file
#
sysname SPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 2:2 export-extcommunity evpn
vpn-target 1:1 import-extcommunity
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0002.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
undo policy vpn-target
peer 3.3.3.3 enable
peer 3.3.3.3 advertise route-reoriginated evpn ip
#
ipv4-family vpn-instance vpn1
network 0.0.0.0
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 upe
peer 1.1.1.1 import reoriginate
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
#
ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0
#
return
● NPE configuration file
#
sysname NPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
import route-policy SPE
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 3.3.3.3
#
mpls
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.30.1 255.255.255.0
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
route-policy SPE deny node 10
if-match ip-prefix default
#
route-policy SPE permit node 20
#
ip ip-prefix default index 10 permit 0.0.0.0 0
#
return
Networking Requirements
At present, the IP bearer network uses L2VPN and L3VPN (HVPN) to carry Layer 2
and Layer 3 services, respectively, which makes the protocol suite complex.
Because EVPN can carry both Layer 2 and Layer 3 services, many IP bearer
networks will evolve to EVPN to simplify service bearing. Specifically, the
L3VPN HVPN that carries Layer 3 services needs to evolve to an EVPN L3VPN
HVPN. During evolution, if many devices are deployed on the network,
end-to-end evolution may not be possible all at once, so the L3VPN and BD
EVPN L3VPN co-exist. On the network shown in Figure 3-182, the UPE and SPE
are connected at the access layer, and the SPE and NPE are connected at the
aggregation layer. Separate IGPs are deployed at the access and aggregation
layers to implement interworking at each layer. An L3VPN HoVPN is deployed
between the UPE and SPE, and an EVPN L3VPN is deployed between the SPE and
NPE. The SPE advertises only default L3VPN routes to the UPE. After receiving
specific routes (L3VPN routes) from the UPE, the SPE re-encapsulates these
routes as EVPN routes and advertises them to the NPE.
Configuration Roadmap
The configuration roadmap is as follows:
1. Deploy IGPs on the UPE, SPE, and NPE. In this example, OSPF runs between
the UPE and SPE, and IS-IS runs between the SPE and NPE.
2. Configure MPLS LDP on the UPE, SPE, and NPE.
3. Create a VPN instance on each of the UPE, SPE, and NPE.
4. Bind the VPN instances to the AC interfaces on the UPE and NPE.
5. Configure a default static route for the VPN instance on the SPE.
6. Configure a route policy on the NPE to prevent the NPE from receiving default
routes.
7. Configure a BGP-EVPN peer relationship between the SPE and NPE.
8. Configure a BGP-VPNv4 peer relationship between the UPE and SPE, specify
the UPE as the lower-level PE of the SPE, and configure the SPE to advertise
the default VPN route to the UPE.
9. Configure route regeneration on the SPE.
Data Preparation
To complete the configuration, you need the following data:
● MPLS LSR IDs of the UPE (1.1.1.1), SPE (2.2.2.2), and NPE (3.3.3.3)
● VPN instance name (vpn1) and RD (100:1)
● VPN targets 1:1 (import and export) of vpn1 and 2:2 for EVPN
Procedure
Step 1 Assign an IP address to each device interface, including the loopback interfaces.
For configuration details, see Configuration Files in this section.
Step 2 Deploy IGPs on the UPE, SPE, and NPE. In this example, OSPF runs between the
UPE and SPE, and IS-IS runs between the SPE and NPE.
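The CLI for this step is not shown here. A sketch of the UPE side, consistent with the configuration files at the end of this section, is as follows (the SPE runs both OSPF 1 and IS-IS 1, and the NPE runs IS-IS 1, with the parameters shown in their configuration files):
# Configure the UPE.
[~UPE] ospf 1
[*UPE-ospf-1] area 0
[*UPE-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*UPE-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*UPE-ospf-1-area-0.0.0.0] quit
[*UPE-ospf-1] quit
[*UPE] commit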
Step 5 Bind the VPN instances to the AC interfaces on the UPE and NPE.
# Configure the UPE.
[~UPE] interface GigabitEthernet 2/0/0
[*UPE-GigabitEthernet2/0/0] ip binding vpn-instance vpn1
[*UPE-GigabitEthernet2/0/0] ip address 192.168.20.1 255.255.255.0
[*UPE-GigabitEthernet2/0/0] quit
[*UPE] commit
Step 7 Configure a route policy on the NPE to prevent the NPE from receiving default
routes.
[~NPE] ip ip-prefix default index 10 permit 0.0.0.0 0
[*NPE] route-policy SPE deny node 10
[*NPE-route-policy] if-match ip-prefix default
[*NPE-route-policy] quit
[*NPE] route-policy SPE permit node 20
[*NPE-route-policy] quit
[*NPE] commit
Step 8 Configure a BGP-EVPN peer relationship between the SPE and NPE.
# Configure the SPE.
[~SPE] bgp 100
[*SPE-bgp] peer 3.3.3.3 as-number 100
[*SPE-bgp] peer 3.3.3.3 connect-interface LoopBack1
[*SPE-bgp] l2vpn-family evpn
[*SPE-bgp-af-evpn] undo policy vpn-target
[*SPE-bgp-af-evpn] peer 3.3.3.3 enable
[*SPE-bgp-af-evpn] quit
[*SPE-bgp] quit
[*SPE] commit
Step 9 Configure a BGP-VPNv4 peer relationship between the UPE and SPE, specify the
UPE as the lower-level PE of the SPE, and configure the SPE to advertise the
default VPN route to the UPE.
# Configure the UPE.
[~UPE] bgp 100
[*UPE-bgp] peer 2.2.2.2 as-number 100
[*UPE-bgp] peer 2.2.2.2 connect-interface LoopBack1
[*UPE-bgp] ipv4-family vpnv4
[*UPE-bgp-af-vpnv4] peer 2.2.2.2 enable
[*UPE-bgp-af-vpnv4] quit
[*UPE-bgp] ipv4-family vpn-instance vpn1
[*UPE-bgp-vpn1] import-route direct
[*UPE-bgp-vpn1] quit
[*UPE-bgp] quit
[*UPE] commit
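The SPE side of this step, including the route regeneration configuration described in the configuration roadmap, is not shown above. A sketch consistent with the SPE configuration file at the end of this section is as follows:
# Configure the SPE.
[~SPE] bgp 100
[*SPE-bgp] peer 1.1.1.1 as-number 100
[*SPE-bgp] peer 1.1.1.1 connect-interface LoopBack1
[*SPE-bgp] ipv4-family vpnv4
[*SPE-bgp-af-vpnv4] undo policy vpn-target
[*SPE-bgp-af-vpnv4] peer 1.1.1.1 enable
[*SPE-bgp-af-vpnv4] peer 1.1.1.1 upe
[*SPE-bgp-af-vpnv4] peer 1.1.1.1 import reoriginate
[*SPE-bgp-af-vpnv4] quit
[*SPE-bgp] l2vpn-family evpn
[*SPE-bgp-af-evpn] peer 3.3.3.3 advertise route-reoriginated vpnv4
[*SPE-bgp-af-evpn] quit
[*SPE-bgp] quit
[*SPE] commit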
Run the display ip routing-table vpn-instance vpn1 command on the NPE. The
command output shows the VPN routes received from the UPE.
[~NPE] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 5 Routes : 5
Run the display ip routing-table vpn-instance vpn1 command on the UPE. The
command output shows the default VPN routes.
[~UPE] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 5 Routes : 5
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.20.1 255.255.255.0
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
● SPE configuration file
#
sysname SPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 2:2 export-extcommunity evpn
vpn-target 1:1 import-extcommunity
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 2.2.2.2
#
mpls
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0002.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 upe
peer 1.1.1.1 import reoriginate
#
ipv4-family vpn-instance vpn1
network 0.0.0.0
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.3 enable
peer 3.3.3.3 advertise route-reoriginated vpnv4
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
#
ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0
#
return
Networking Requirements
On the network shown in Figure 3-183, EVPN is configured on the PEs to carry
Layer 2 multicast services. PE1 is the root node, and its access side is the multicast
source. PE2 and PE3 are leaf nodes, and their access sides are receivers. By default,
when the EVPN function is deployed on a network to carry Layer 2 multicast
services, multicast data packets are broadcast on the network. The devices that do
not need to receive the multicast data packets also receive these packets, which
wastes network bandwidth resources. To resolve this issue, deploy IGMP snooping
over EVPN MPLS. After IGMP snooping over EVPN MPLS is deployed, IGMP
snooping packets are transmitted on the network through EVPN routes, and
multicast forwarding entries are generated on devices. Multicast data packets
from a multicast source are advertised only to the devices that need these packets,
saving network bandwidth resources.
Interfaces 1 through 3 in this example represent GE 1/0/0, GE 2/0/0, and GE 3/0/0, respectively.
Precautions
When you configure IGMP snooping over EVPN MPLS, note the following:
● For the same EVPN instance, the export VPN target list of a site shares VPN
targets with the import VPN target lists of the other sites, and the import VPN
target list of a site shares VPN targets with the export VPN target lists of the
other sites.
● The local loopback interface address is recommended as the source address
on each PE.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IGP on the backbone network to allow PEs and the RR to
communicate.
2. Configure MPLS and mLDP P2MP both globally and per interface on each
node of the backbone network and set up an mLDP P2MP tunnel.
3. Create an EVPN instance in BD mode and a BD on each PE, and bind the BD
to the EVPN instance on each PE.
4. Configure a source address on each PE.
5. Configure each PE's sub-interface connected to a CE.
6. Configure an ESI for each PE's interface connected to a CE.
7. Configure BGP EVPN peer relationships between the PEs and RR, and on the
RR, specify the PEs as RR clients.
8. Configure EVPN to use an mLDP P2MP tunnel for service transmission on
each PE.
9. Configure IGMP snooping over EVPN MPLS on each PE.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name: evrf1
● EVPN instance evrf1's RD on each PE: 100:1; RT: 1:1
Procedure
Step 1 Assign an IP address to each interface on the RR and PEs according to Figure
3-183. For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the backbone network to allow PEs and the RR to
communicate. OSPF is used as an IGP in this example.
# Configure PE1.
[~PE1] ospf 1
[*PE1-ospf-1] area 0
[*PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[*PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[*PE1-ospf-1-area-0.0.0.0] commit
[~PE1-ospf-1-area-0.0.0.0] quit
[~PE1-ospf-1] quit
# Configure PE2.
[~PE2] ospf 1
[*PE2-ospf-1] area 0
[*PE2-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[*PE2-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[*PE2-ospf-1-area-0.0.0.0] commit
[~PE2-ospf-1-area-0.0.0.0] quit
[~PE2-ospf-1] quit
# Configure PE3.
[~PE3] ospf 1
[*PE3-ospf-1] area 0
[*PE3-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[*PE3-ospf-1-area-0.0.0.0] network 4.4.4.4 0.0.0.0
[*PE3-ospf-1-area-0.0.0.0] commit
[~PE3-ospf-1-area-0.0.0.0] quit
[~PE3-ospf-1] quit
Step 3 Configure MPLS and mLDP P2MP both globally and per interface on each node of
the backbone network and set up an mLDP P2MP tunnel.
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] mldp p2mp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet 2/0/0
[*PE1-GigabitEthernet2/0/0] mpls
[*PE1-GigabitEthernet2/0/0] mpls ldp
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] mpls lsr-id 2.2.2.2
[*PE2] mpls
[*PE2-mpls] quit
[*PE2] mpls ldp
[*PE2-mpls-ldp] mldp p2mp
[*PE2-mpls-ldp] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] mpls
[*PE2-GigabitEthernet2/0/0] mpls ldp
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
# Configure PE3.
[~PE3] mpls lsr-id 4.4.4.4
[*PE3] mpls
[*PE3-mpls] quit
[*PE3] mpls ldp
[*PE3-mpls-ldp] mldp p2mp
[*PE3-mpls-ldp] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] mpls
[*PE3-GigabitEthernet1/0/0] mpls ldp
[*PE3-GigabitEthernet1/0/0] commit
[~PE3-GigabitEthernet1/0/0] quit
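The next step creates an EVPN instance in BD mode and a BD on each PE. The PE1 configuration is omitted above; a sketch that mirrors the PE2 and PE3 configurations below, using the RD and RT values listed in Data Preparation, is:
Step 4 Create an EVPN instance in BD mode and a BD on each PE, and bind the BD to the EVPN instance.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-evpn-instance-evrf1] vpn-target 1:1
[*PE1-evpn-instance-evrf1] quit
[*PE1] bridge-domain 10
[*PE1-bd10] evpn binding vpn-instance evrf1
[*PE1-bd10] quit
[*PE1] commit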
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] route-distinguisher 100:1
[*PE2-evpn-instance-evrf1] vpn-target 1:1
[*PE2-evpn-instance-evrf1] quit
[*PE2] bridge-domain 10
[*PE2-bd10] evpn binding vpn-instance evrf1
[*PE2-bd10] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evrf1 bd-mode
[*PE3-evpn-instance-evrf1] route-distinguisher 100:1
[*PE3-evpn-instance-evrf1] vpn-target 1:1
[*PE3-evpn-instance-evrf1] quit
[*PE3] bridge-domain 10
[*PE3-bd10] evpn binding vpn-instance evrf1
[*PE3-bd10] quit
[*PE3] commit
Step 5 Configure a source address on each PE.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 2.2.2.2
[*PE2] commit
# Configure PE3.
[~PE3] evpn source-address 4.4.4.4
[*PE3] commit
Step 6 Configure each PE's sub-interface connected to a CE.
# Configure PE2.
[~PE2] interface eth-trunk 10
[*PE2-Eth-Trunk10] quit
[*PE2] interface eth-trunk 10.1 mode l2
[*PE2-Eth-Trunk10.1] encapsulation dot1q vid 100
[*PE2-Eth-Trunk10.1] rewrite pop single
[*PE2-Eth-Trunk10.1] bridge-domain 10
[*PE2-Eth-Trunk10.1] quit
[*PE2] interface gigabitethernet 2/0/0
[*PE2-GigabitEthernet2/0/0] eth-trunk 10
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] e-trunk 1
[*PE2-e-trunk-1] peer-address 4.4.4.4 source-address 2.2.2.2
[*PE2-e-trunk-1] quit
[*PE2] interface eth-trunk 20
[*PE2-Eth-Trunk20] e-trunk 1
[*PE2-Eth-Trunk20] e-trunk mode force-master
[*PE2-Eth-Trunk20] quit
[*PE2] interface eth-trunk 20.1 mode l2
[*PE2-Eth-Trunk20.1] encapsulation dot1q vid 100
[*PE2-Eth-Trunk20.1] rewrite pop single
[*PE2-Eth-Trunk20.1] bridge-domain 10
[*PE2-Eth-Trunk20.1] quit
[*PE2] interface gigabitethernet 3/0/0
[*PE2-GigabitEthernet3/0/0] eth-trunk 20
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] commit
# Configure PE3.
[~PE3] interface eth-trunk 10
[*PE3-Eth-Trunk10] quit
[*PE3] interface eth-trunk 10.1 mode l2
[*PE3-Eth-Trunk10.1] encapsulation dot1q vid 100
[*PE3-Eth-Trunk10.1] rewrite pop single
[*PE3-Eth-Trunk10.1] bridge-domain 10
[*PE3-Eth-Trunk10.1] quit
[*PE3] interface gigabitethernet 1/0/0
[*PE3-GigabitEthernet1/0/0] eth-trunk 10
[*PE3-GigabitEthernet1/0/0] quit
[*PE3] e-trunk 1
[*PE3-e-trunk-1] peer-address 2.2.2.2 source-address 4.4.4.4
[*PE3-e-trunk-1] quit
Step 7 Configure an ESI for each PE's interface connected to a CE.
# Configure PE3.
[~PE3] interface eth-trunk 10
[*PE3-Eth-Trunk10] esi 0000.1111.3333.4444.5555
[*PE3-Eth-Trunk10] quit
[*PE3] interface eth-trunk 20
[*PE3-Eth-Trunk20] esi 0000.2222.3333.3333.4444
[*PE3-Eth-Trunk20] quit
[*PE3] commit
Step 8 Configure BGP EVPN peer relationships between the PEs and RR, and on the RR,
specify the PEs as RR clients.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 3.3.3.3 as-number 100
[*PE2-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 3.3.3.3 enable
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] commit
# Configure PE3.
[~PE3] bgp 100
[*PE3-bgp] peer 3.3.3.3 as-number 100
[*PE3-bgp] peer 3.3.3.3 connect-interface loopback 0
[*PE3-bgp] l2vpn-family evpn
[*PE3-bgp-af-evpn] peer 3.3.3.3 enable
[*PE3-bgp-af-evpn] quit
[*PE3-bgp] quit
[*PE3] commit
After completing the configurations, run the display bgp evpn peer command on
the RR. The command output shows that BGP peer relationships have been
established between the PEs and RR and are in the Established state.
[~RR] display bgp evpn peer
Step 9 Configure EVPN to use an mLDP P2MP tunnel for service transmission on each PE.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[~PE1-evpn-instance-evrf1] inclusive-provider-tunnel
[*PE1-evpn-instance-evrf1-inclusive] root
[*PE1-evpn-instance-evrf1-inclusive-root] mldp p2mp
[*PE1-evpn-instance-evrf1-inclusive-root-mldpp2mp] root-ip 1.1.1.1
[*PE1-evpn-instance-evrf1-inclusive-root-mldpp2mp] quit
[*PE1-evpn-instance-evrf1-inclusive-root] quit
[*PE1-evpn-instance-evrf1-inclusive] quit
[*PE1-evpn-instance-evrf1] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 bd-mode
[~PE2-evpn-instance-evrf1] inclusive-provider-tunnel
[*PE2-evpn-instance-evrf1-inclusive] leaf
[*PE2-evpn-instance-evrf1-inclusive-leaf] quit
[*PE2-evpn-instance-evrf1] quit
[*PE2] commit
# Configure PE3.
[~PE3] evpn vpn-instance evrf1 bd-mode
[~PE3-evpn-instance-evrf1] inclusive-provider-tunnel
[*PE3-evpn-instance-evrf1-inclusive] leaf
[*PE3-evpn-instance-evrf1-inclusive-leaf] quit
[*PE3-evpn-instance-evrf1] quit
[*PE3] commit
Step 10 Configure IGMP snooping over EVPN MPLS on each PE.
# Configure PE1.
[~PE1] igmp-snooping enable
[*PE1] bridge-domain 10
[*PE1-bd10] igmp-snooping enable
[*PE1-bd10] igmp-snooping proxy
[*PE1-bd10] quit
[*PE1] commit
# Configure PE2.
[~PE2] igmp-snooping enable
[*PE2] bridge-domain 10
[*PE2-bd10] igmp-snooping enable
[*PE2-bd10] igmp-snooping proxy
[*PE2-bd10] igmp-snooping signal-synch enable
[*PE2-bd10] evi vpn-target 100:1
[*PE2-bd10] quit
[*PE2] commit
# Configure PE3.
[~PE3] igmp-snooping enable
[*PE3] bridge-domain 10
[*PE3-bd10] igmp-snooping enable
[*PE3-bd10] igmp-snooping proxy
[*PE3-bd10] igmp-snooping signal-synch enable
[*PE3-bd10] evi vpn-target 100:1
[*PE3-bd10] quit
[*PE3] commit
Run the display bgp evpn all routing-table join-route command on a PE. The
command output shows information about Join routes. The following example
uses the command output on PE1.
[~PE1] display bgp evpn all routing-table join-route
Local AS number : 100
EVPN-Instance evrf1:
Number of IGMP Join Synch Routes: 1
Network(ESI/EthTagId/IpAddrLen/SAddr/IpAddrLen/GAddr/IpAddrLen/OAddr) NextHop
*> 0000.1111.2222.4444.5555:0:0:0.0.0.0:32:225.0.0.1:32:1.1.1.1 127.0.0.1
*> 0000.1111.3333.4444.5555:0:0:0.0.0.0:32:225.0.0.1:32:1.1.1.1 127.0.0.1
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
igmp-snooping enable
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
inclusive-provider-tunnel
root
mldp p2mp
root-ip 1.1.1.1
#
mpls lsr-id 1.1.1.1
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
igmp-snooping enable
igmp-snooping proxy
#
● PE3 configuration file
#
sysname PE3
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
inclusive-provider-tunnel
leaf
#
mpls lsr-id 4.4.4.4
#
mpls
#
bridge-domain 10
evpn binding vpn-instance evrf1
igmp-snooping enable
igmp-snooping proxy
igmp-snooping signal-synch enable
evi vpn-target 100:1 export-extcommunity
evi vpn-target 100:1 import-extcommunity
#
mpls ldp
mldp p2mp
#
e-trunk 1
peer-address 2.2.2.2 source-address 4.4.4.4
#
interface Eth-Trunk10
esi 0000.1111.3333.4444.5555
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 100
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk20
esi 0000.2222.3333.3333.4444
e-trunk 1
e-trunk mode force-master
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 100
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/0
undo shutdown
eth-trunk 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface gigabitethernet 3/0/0
eth-trunk 20
#
interface LoopBack0
ip address 4.4.4.4 255.255.255.255
#
bgp 100
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 3.3.3.3 enable
#
l2vpn-family evpn
Networking Requirements
On the network shown in Figure 3-184, the EVPN and VPN functions are
configured to transmit Layer 2 and Layer 3 traffic to allow communication
between different sites on the backbone network. If Site 1 and Site 2 are
connected through the same subnet, create an EVPN instance on each PE to store
EVPN routes. Layer 2 forwarding is based on an EVPN route that matches a MAC
address. If Site 1 and Site 2 are connected through different subnets, create a VPN
instance on each PE to store VPN routes. In this situation, Layer 2 traffic is
terminated, and Layer 3 traffic is forwarded through a Layer 3 gateway. In this
example, PEs transmit service traffic over SR-MPLS TE tunnels.
Interface 1, interface 2, and sub-interface 1.1 in this example represent GE 1/0/0, GE 2/0/0, and
GE 1/0/0.1, respectively.
Configuration Notes
When configuring BD EVPN IRB over SR-MPLS TE, note the following:
● On the same EVPN instance, the export VPN target list of a site shares VPN
targets with the import VPN target lists of the other sites, and the import VPN
target list of a site shares VPN targets with the export VPN target lists of the
other sites.
● Using the local loopback interface address of a PE as the EVPN source address
is recommended.
Configuration Roadmap
The configuration roadmap is as follows:
7. Configure and apply a tunnel policy so that EVPN can recurse to SR-MPLS TE
tunnels.
8. Establish BGP EVPN peer relationships between PEs.
9. Configure CEs to communicate with PEs.
Data Preparation
To complete the configuration, you need the following data:
● EVPN instance name (evrf1) and VPN instance name (vpn1)
● EVPN instance evrf1's RD (100:1) and RT (1:1) on PE1, EVPN instance evrf1's
RD (200:1) and RT (1:1) on PE2, VPN instance vpn1's RD (100:2) and RT (2:2)
on PE1, and VPN instance vpn1's RD (200:2) and RT (2:2) on PE2
Procedure
Step 1 Configure IP addresses for the interfaces connecting the PEs and the P according to
Figure 3-184.
# Configure PE1.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface loopback 1
[*PE1-LoopBack1] ip address 1.1.1.1 32
[*PE1-LoopBack1] quit
[*PE1] interface gigabitethernet2/0/0
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure P.
<HUAWEI> system-view
[~HUAWEI] sysname P
[*HUAWEI] commit
[~P] interface loopback 1
[*P-LoopBack1] ip address 2.2.2.2 32
[*P-LoopBack1] quit
[*P] interface gigabitethernet1/0/0
[*P-GigabitEthernet1/0/0] ip address 10.1.1.2 24
[*P-GigabitEthernet1/0/0] quit
[*P] interface gigabitethernet2/0/0
[*P-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
<HUAWEI> system-view
[~HUAWEI] sysname PE2
[*HUAWEI] commit
[~PE2] interface loopback 1
[*PE2-LoopBack1] ip address 3.3.3.3 32
[*PE2-LoopBack1] quit
[*PE2] interface gigabitethernet2/0/0
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.2 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
Step 2 Configure an IGP to allow communication between PE1, PE2, and the P. IS-IS is
used as the IGP in this example.
# Configure PE1.
[~PE1] isis 1
[*PE1-isis-1] is-level level-2
[*PE1-isis-1] network-entity 00.1111.1111.1111.00
[*PE1-isis-1] quit
[*PE1] interface loopback 1
[*PE1-LoopBack1] isis enable 1
[*PE1-LoopBack1] quit
[*PE1] interface GigabitEthernet 2/0/0
[*PE1-GigabitEthernet2/0/0] isis enable 1
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure the P.
[~P] isis 1
[*P-isis-1] is-level level-2
[*P-isis-1] network-entity 00.1111.1111.2222.00
[*P-isis-1] quit
[*P] interface loopback 1
[*P-LoopBack1] isis enable 1
[*P-LoopBack1] quit
[*P] interface GigabitEthernet 1/0/0
[*P-GigabitEthernet1/0/0] isis enable 1
[*P-GigabitEthernet1/0/0] quit
[*P] interface GigabitEthernet 2/0/0
[*P-GigabitEthernet2/0/0] isis enable 1
[*P-GigabitEthernet2/0/0] quit
[*P] commit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-2
[*PE2-isis-1] network-entity 00.1111.1111.3333.00
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis enable 1
[*PE2-LoopBack1] quit
[*PE2] interface GigabitEthernet 2/0/0
[*PE2-GigabitEthernet2/0/0] isis enable 1
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
After completing the configuration, run the display isis peer command to check
that the IS-IS neighbor relationships between PE1, PE2, and the P are Up. Run the
display ip routing-table command to check that the PEs have learned the route
to each other's Loopback1.
The following example uses the command output on PE1.
[~PE1] display isis peer
Peer information for ISIS(1)
The next sid label command uses the adjacency label from PE1 to the P, which is dynamically
generated by IS-IS. This adjacency label can be obtained using the display segment-routing
adjacency mpls forwarding command.
[~PE1] display segment-routing adjacency mpls forwarding
Segment Routing Adjacency MPLS Forwarding Information
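PE1's SR-MPLS TE base configuration is not shown above. A sketch that mirrors the P and PE2 configurations below is as follows; PE1's tunnel to PE2 would additionally require an explicit path whose adjacency labels are taken from the display output above:
# Configure PE1.
[~PE1] mpls lsr-id 1.1.1.1
[*PE1] mpls
[*PE1-mpls] mpls te
[*PE1-mpls] quit
[*PE1] segment-routing
[*PE1-segment-routing] quit
[*PE1] isis 1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] traffic-eng level-2
[*PE1-isis-1] segment-routing mpls
[*PE1-isis-1] segment-routing global-block 153616 153800
[*PE1-isis-1] quit
[*PE1] commit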
# Configure the P.
[~P] mpls lsr-id 2.2.2.2
[*P] mpls
[*P-mpls] mpls te
[*P-mpls] quit
[*P] segment-routing
[*P-segment-routing] quit
[*P] isis 1
[*P-isis-1] cost-style wide
[*P-isis-1] traffic-eng level-2
[*P-isis-1] segment-routing mpls
[*P-isis-1] segment-routing global-block 153616 153800
[*P-isis-1] quit
[*P] commit
# Configure PE2.
[~PE2] mpls lsr-id 3.3.3.3
[*PE2] mpls
[*PE2-mpls] mpls te
[*PE2-mpls] quit
[*PE2] segment-routing
[*PE2-segment-routing] quit
[*PE2] isis 1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] traffic-eng level-2
[*PE2-isis-1] segment-routing mpls
[*PE2-isis-1] segment-routing global-block 153616 153800
[*PE2-isis-1] quit
[*PE2] interface loopback 1
[*PE2-LoopBack1] isis prefix-sid absolute 153720
[*PE2-LoopBack1] quit
[*PE2] explicit-path pe2tope1
[*PE2-explicit-path-pe2tope1] next sid label 48120 type adjacency
[*PE2-explicit-path-pe2tope1] next sid label 48121 type adjacency
[*PE2-explicit-path-pe2tope1] quit
[*PE2] interface tunnel1
[*PE2-Tunnel1] ip address unnumbered interface loopback 1
[*PE2-Tunnel1] tunnel-protocol mpls te
[*PE2-Tunnel1] destination 1.1.1.1
[*PE2-Tunnel1] mpls te tunnel-id 1
[*PE2-Tunnel1] mpls te signal-protocol segment-routing
[*PE2-Tunnel1] mpls te path explicit-path pe2tope1
[*PE2-Tunnel1] mpls te reserved-for-binding
[*PE2-Tunnel1] quit
[*PE2] commit
Secondary HopLimit : -
BestEffort HopLimit : -
Secondary Explicit Path Name: -
Secondary Affinity Prop/Mask: 0x0/0x0
BestEffort Affinity Prop/Mask: 0x0/0x0
IsConfigLspConstraint: -
Hot-Standby Revertive Mode: Revertive
Hot-Standby Overlap-path: Disabled
Hot-Standby Switch State: CLEAR
Bit Error Detection: Disabled
Bit Error Detection Switch Threshold: -
Bit Error Detection Resume Threshold: -
Ip-Prefix Name : -
P2p-Template Name : -
PCE Delegate : No LSP Control Status : Local control
Path Verification : No
Entropy Label : None
Associated Tunnel Group ID: - Associated Tunnel Group Type: -
Auto BW Remain Time : - Reopt Remain Time :-
Segment-Routing Remote Label : -
Binding Sid :- Reverse Binding Sid : -
Step 4 Create an EVPN instance and a VPN instance on each PE, and bind a BD to the EVPN instance.
# Configure PE1.
[~PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] route-distinguisher 100:1
[*PE1-evpn-instance-evrf1] vpn-target 1:1
[*PE1-evpn-instance-evrf1] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 100:2
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 2:2 evpn
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] evpn mpls routing-enable
[*PE1-vpn-instance-vpn1] quit
[*PE1] bridge-domain 10
[*PE1-bd10] evpn binding vpn-instance evrf1
[*PE1-bd10] quit
[*PE1] commit
# Configure PE2.
[~PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] route-distinguisher 200:1
[*PE2-evpn-instance-evrf1] vpn-target 1:1
[*PE2-evpn-instance-evrf1] quit
[*PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] ipv4-family
[*PE2-vpn-instance-vpn1-af-ipv4] route-distinguisher 200:2
[*PE2-vpn-instance-vpn1-af-ipv4] vpn-target 2:2 evpn
[*PE2-vpn-instance-vpn1-af-ipv4] quit
[*PE2-vpn-instance-vpn1] evpn mpls routing-enable
[*PE2-vpn-instance-vpn1] quit
[*PE2] bridge-domain 10
[*PE2-bd10] evpn binding vpn-instance evrf1
[*PE2-bd10] quit
[*PE2] commit
Step 5 Configure an EVPN source address on each PE.
# Configure PE1.
[~PE1] evpn source-address 1.1.1.1
[*PE1] commit
# Configure PE2.
[~PE2] evpn source-address 3.3.3.3
[*PE2] commit
Step 6 Configure the Layer 2 Ethernet sub-interfaces connecting PEs and CEs.
# Configure PE1.
[~PE1] interface GigabitEthernet 1/0/0
[*PE1-Gigabitethernet1/0/0] esi 0011.1111.1111.1111.1111
[*PE1-Gigabitethernet1/0/0] quit
[*PE1] interface GigabitEthernet 1/0/0.1 mode l2
[*PE1-GigabitEthernet 1/0/0.1] encapsulation dot1q vid 10
[*PE1-GigabitEthernet 1/0/0.1] rewrite pop single
[*PE1-GigabitEthernet 1/0/0.1] bridge-domain 10
[*PE1-GigabitEthernet 1/0/0.1] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface GigabitEthernet 1/0/0
[*PE2-Gigabitethernet1/0/0] esi 0011.1111.1111.1111.2222
[*PE2-Gigabitethernet1/0/0] quit
[*PE2] interface GigabitEthernet 1/0/0.1 mode l2
[*PE2-GigabitEthernet 1/0/0.1] encapsulation dot1q vid 10
[*PE2-GigabitEthernet 1/0/0.1] rewrite pop single
[*PE2-GigabitEthernet 1/0/0.1] bridge-domain 10
[*PE2-GigabitEthernet 1/0/0.1] quit
[*PE2] commit
Step 7 Configure a vBDIF interface on each PE and bind the vBDIF interface to a VPN
instance.
# Configure PE1.
[~PE1] interface Vbdif10
[*PE1-Vbdif10] ip binding vpn-instance vpn1
[*PE1-Vbdif10] ip address 192.168.1.1 255.255.255.0
[*PE1-Vbdif10] quit
[*PE1] commit
# Configure PE2.
[~PE2] interface Vbdif10
[*PE2-Vbdif10] ip binding vpn-instance vpn1
[*PE2-Vbdif10] ip address 192.168.2.1 255.255.255.0
[*PE2-Vbdif10] quit
[*PE2] commit
Step 8 Configure and apply a tunnel policy so that EVPN can recurse to SR-MPLS TE
tunnels.
# Configure PE1.
[~PE1] tunnel-policy srte
[*PE1-tunnel-policy-srte] tunnel binding destination 3.3.3.3 te Tunnel1
[*PE1-tunnel-policy-srte] quit
[*PE1] evpn vpn-instance evrf1 bd-mode
[*PE1-evpn-instance-evrf1] tnl-policy srte
[*PE1-evpn-instance-evrf1] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] tnl-policy srte evpn
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
# Configure PE2.
[~PE2] tunnel-policy srte
[*PE2-tunnel-policy-srte] tunnel binding destination 1.1.1.1 te Tunnel1
[*PE2-tunnel-policy-srte] quit
[*PE2] evpn vpn-instance evrf1 bd-mode
[*PE2-evpn-instance-evrf1] tnl-policy srte
[*PE2-evpn-instance-evrf1] quit
[*PE2] ip vpn-instance vpn1
[*PE2-vpn-instance-vpn1] tnl-policy srte evpn
[*PE2-vpn-instance-vpn1] quit
[*PE2] commit
Step 9 Establish BGP EVPN peer relationships between PEs.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 3.3.3.3 enable
[*PE1-bgp-af-evpn] peer 3.3.3.3 advertise irb
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] import-route direct
[*PE1-bgp-vpn1] advertise l2vpn evpn
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 1.1.1.1 as-number 100
[*PE2-bgp] peer 1.1.1.1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 1.1.1.1 enable
[*PE2-bgp-af-evpn] peer 1.1.1.1 advertise irb
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] ipv4-family vpn-instance vpn1
[*PE2-bgp-vpn1] import-route direct
[*PE2-bgp-vpn1] advertise l2vpn evpn
[*PE2-bgp-vpn1] quit
[*PE2-bgp] quit
[*PE2] commit
After completing the configuration, run the display bgp evpn peer command to
check that BGP peer relationships have been established between PEs and are in
the Established state. The following example uses the command output on PE1.
[~PE1] display bgp evpn peer
Step 10 Configure CEs to communicate with PEs.
# Configure CE2.
[~CE2] interface GigabitEthernet 1/0/0.1 mode l2
[*CE2-GigabitEthernet1/0/0.1] encapsulation dot1q vid 10
[*CE2-GigabitEthernet1/0/0.1] rewrite pop single
[*CE2-GigabitEthernet1/0/0.1] quit
EVPN-Instance evrf1:
Number of A-D Routes: 3
Network(ESI/EthTagId) NextHop
*> 0011.1111.1111.1111.1111:0 127.0.0.1
*>i 0011.1111.1111.1111.2222:0 3.3.3.3
*>i 0011.1111.1111.1111.2222:4294967295 3.3.3.3
EVPN-Instance evrf1:
Number of Mac Routes: 2
Network(EthTagId/MacAddrLen/MacAddr/IpAddrLen/IpAddr) NextHop
*>i 0:48:00e0-fc12-3456:32:192.168.2.2 3.3.3.3
*> 0:48:00e0-fc12-7890:32:192.168.1.2 0.0.0.0
EVPN-Instance evrf1:
Number of Inclusive Multicast Routes: 2
Network(EthTagId/IpAddrLen/OriginalIp) NextHop
*> 0:32:1.1.1.1 127.0.0.1
*>i 0:32:3.3.3.3 3.3.3.3
EVPN-Instance evrf1:
Number of ES Routes: 2
Network(ESI) NextHop
*> 0011.1111.1111.1111.1111 127.0.0.1
*>i 0011.1111.1111.1111.2222 3.3.3.3
EVPN-Instance evrf1:
Number of Mac Routes: 1
BGP routing table entry information of 0:48:00e0-fc12-3456:32:192.168.2.2:
Route Distinguisher: 200:1
Remote-Cross route
Label information (Received/Applied): 54011 48182/NULL
From: 3.3.3.3 (10.2.1.2)
Route Duration: 0d20h42m36s
Relay Tunnel Out-Interface: Tunnel1
Original nexthop: 3.3.3.3
Qos information : 0x0
Ext-Community: RT <1 : 1>, Mac Mobility <flag:1 seq:0 res:0>
AS-path Nil, origin incomplete, localpref 100, pref-val 0, valid, internal, best, select, pre 255
Route Type: 2 (MAC Advertisement Route)
Ethernet Tag ID: 0, MAC Address/Len: 00e0-fc12-3456/48, IP Address/Len: 0.0.0.0/0, ESI: 0000.0000.0000.0000.0000
Not advertised to any peer yet
[~PE1] display bgp evpn all routing-table prefix-route 0:192.168.2.0:24
BGP local router ID : 10.1.1.1
Local AS number : 100
Total routes of Route Distinguisher(200:2): 1
BGP routing table entry information of 0:192.168.2.0:24:
Label information (Received/Applied): 48185/NULL
From: 3.3.3.3 (10.2.1.2)
Route Duration: 0d20h38m31s
Relay IP Nexthop: 10.1.1.2
Relay Tunnel Out-Interface: GigabitEthernet1/0/0
Original nexthop: 3.3.3.3
Qos information : 0x0
Ext-Community: RT <2 : 2>
AS-path Nil, origin incomplete, MED 0, localpref 100, pref-val 0, valid, internal, best, select, pre 255, IGP cost 20
Route Type: 5 (Ip Prefix Route)
Ethernet Tag ID: 0, IP Prefix/Len: 192.168.2.0/24, ESI: 0000.0000.0000.0000.0000, GW IP Address: 0.0.0.0
Not advertised to any peer yet
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 100:1
tnl-policy srte
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:2
apply-label per-instance
#
mpls
mpls te
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
explicit-path pe2tope1
next sid label 48120 type adjacency
next sid label 48121 type adjacency
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.1111.1111.3333.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 153616 153800
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 192.168.2.1 255.255.255.0
#
interface GigabitEthernet1/0/0
undo shutdown
esi 0011.1111.1111.1111.2222
#
interface GigabitEthernet1/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
isis enable 1
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
isis enable 1
isis prefix-sid absolute 153720
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te reserved-for-binding
mpls te tunnel-id 1
mpls te path explicit-path pe2tope1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 advertise irb
#
tunnel-policy srte
tunnel binding destination 1.1.1.1 te Tunnel1
#
evpn source-address 3.3.3.3
#
return
Networking Requirements
The NFVI telco cloud solution uses DCI+DCN networking. A large amount of
IPv4 mobile phone traffic is sent to the vUGWs and vMSEs on the DCN. After being
processed by the vUGWs and vMSEs, this traffic is forwarded over the DCN to
destination devices on the Internet. The destination devices return traffic to the
mobile phones in a similar manner. To implement these functions and ensure
traffic load balancing on the DCN, deploy the NFVI distributed gateway function.
Figure 3-185 shows the networking of an NFVI distributed gateway (SR tunnels).
DC-GWs, the border gateways of the DCN, exchange Internet routes with external
devices through PEs. L2GW/L3GW1 and L2GW/L3GW2 are connected to VNFs.
VNF1 and VNF2, which function as virtualized NEs, are deployed to provide the
vUGW and vMSE functions, respectively. VNF1 and VNF2 are each connected to
L2GW/L3GW1 and L2GW/L3GW2 through IPUs. SR tunnels are established
between the PEs and DC-GWs and between the DC-GWs and L2GW/L3GWs to
carry IPv4 service traffic.
[Figure 3-185 interface plan: the PEs use LoopBack1 addresses 7.7.7.7/32 and
8.8.8.8/32; the DC-GWs use LoopBack1 3.3.3.3/32 and 4.4.4.4/32 plus LoopBack2
33.33.33.33/32 and 44.44.44.44/32; the L2GW/L3GWs use LoopBack1 1.1.1.1/32
and 2.2.2.2/32, with GigabitEthernet 1/0/3 through 1/0/6 serving as Eth-Trunk
member interfaces without IP addresses.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a routing protocol between PEs, DC-GWs, and L2GW/L3GWs to
ensure Layer 3 interworking between ASs.
2. Establish EBGP peer relationships between PEs and DC-GWs and IBGP peer
relationships between DC-GWs and L2GW/L3GWs.
3. Configure SR-MPLS TE tunnels between PEs and DC-GWs and between DC-
GWs and L2GW/L3GWs.
4. Configure EVPN instances on DC-GWs and L2GW/L3GWs and bind the EVPN
instances to BDs.
5. Configure L3VPN instances on DC-GWs and L2GW/L3GWs and bind the
L3VPN instances to VBDIF interfaces.
6. Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs.
7. Configure the BGP EVPN function on DC-GWs and L2GW/L3GWs.
8. Configure the BGP VPNv4 function on DC-GWs and PEs.
9. Configure Layer 2 sub-interfaces on DC-GWs and L2GW/L3GWs and configure
the static VPN routes destined for VNFs on L2GW/L3GWs.
10. Configure L2GW/L3GWs to import static VPN routes through BGP EVPN.
Configure and apply a route-policy to L3VPN instances so that the static VPN
routes can retain the original next hops.
11. Configure static default VPN routes and loopback addresses on DC-GWs. The
loopback addresses are used to establish BGP VPN peer relationships with
VNFs. Configure and apply a route-policy to L3VPN instances so that DC-GWs
can advertise static default VPN routes and VPN loopback routes only through
BGP EVPN.
12. Establish BGP VPN peer relationships between DC-GWs and VNFs.
13. Configure the load balancing function on PEs, DC-GWs, and L2GW/L3GWs.
Procedure
Step 1 Configure IP addresses for all interfaces, including loopback interfaces, on PEs, DC-
GWs, and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
Step 2 Configure a routing protocol between PEs, DC-GWs, and L2GW/L3GWs to ensure
Layer 3 interworking between ASs.
For configuration details, see Configuration Files in this section.
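As a reference for this step, the DC-side devices in this example run OSPF with segment routing (their loopback configurations carry ospf prefix-sid commands), while the PEs run IS-IS process 1. A minimal OSPF sketch for one DC-side device follows; the process ID, area, and network statements are assumptions, not the full configuration:

```
# Sketch only (process ID, area, and networks assumed): OSPF with
# TE opaque LSAs and SR-MPLS enabled for the DC-side IGP
ospf 1
 opaque-capability enable
 segment-routing mpls
 area 0.0.0.0
  network 1.1.1.1 0.0.0.0
  network 10.6.2.0 0.0.0.255
```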
Step 3 Establish EBGP peer relationships between PEs and DC-GWs and IBGP peer
relationships between DC-GWs and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
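The peer relationships in this step can be sketched as follows, using the loopback addresses listed above. Because the EBGP sessions between the PEs (AS 200) and DC-GWs (AS 100) are built on loopback interfaces, ebgp-max-hop is required, as the PE configuration files later confirm. This is a sketch of one device, not the complete configuration:

```
# Sketch: peer relationships on DC-GW1 (3.3.3.3, AS 100)
bgp 100
 peer 1.1.1.1 as-number 100                 # IBGP to L2GW/L3GW1
 peer 1.1.1.1 connect-interface LoopBack1
 peer 7.7.7.7 as-number 200                 # EBGP to PE1
 peer 7.7.7.7 ebgp-max-hop 255
 peer 7.7.7.7 connect-interface LoopBack1
```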
Step 4 Configure SR-MPLS TE tunnels between PEs and DC-GWs and between DC-GWs
and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
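Each SR-MPLS TE tunnel in this example pairs an explicit path of adjacency-SID labels with a TE tunnel interface, as the configuration files later show. For instance, DC-GW1 reaches L2GW/L3GW1 (1.1.1.1) through:

```
explicit-path DtoL31
 next sid label 48003 type adjacency
#
interface Tunnel1
 ip address unnumbered interface LoopBack1
 tunnel-protocol mpls te
 destination 1.1.1.1
 mpls te signal-protocol segment-routing
 mpls te tunnel-id 1
 mpls te path explicit-path DtoL31
```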
Step 5 Configure EVPN instances on DC-GWs and L2GW/L3GWs and bind the EVPN
instances to BDs.
# Configure DC-GW1.
[~DCGW1] evpn vpn-instance evrf1 bd-mode
[*DCGW1-evpn-instance-evrf1] route-distinguisher 1:1
[*DCGW1-evpn-instance-evrf1] vpn-target 1:1
[*DCGW1-evpn-instance-evrf1] quit
[*DCGW1] evpn vpn-instance evrf2 bd-mode
[*DCGW1-evpn-instance-evrf2] route-distinguisher 2:2
[*DCGW1-evpn-instance-evrf2] vpn-target 2:2
[*DCGW1-evpn-instance-evrf2] quit
[*DCGW1] evpn vpn-instance evrf3 bd-mode
[*DCGW1-evpn-instance-evrf3] route-distinguisher 3:3
[*DCGW1-evpn-instance-evrf3] vpn-target 3:3
[*DCGW1-evpn-instance-evrf3] quit
[*DCGW1] evpn vpn-instance evrf4 bd-mode
[*DCGW1-evpn-instance-evrf4] route-distinguisher 4:4
[*DCGW1-evpn-instance-evrf4] vpn-target 4:4
[*DCGW1-evpn-instance-evrf4] quit
[*DCGW1] bridge-domain 10
[*DCGW1-bd10] evpn binding vpn-instance evrf1
[*DCGW1-bd10] quit
[*DCGW1] bridge-domain 20
[*DCGW1-bd20] evpn binding vpn-instance evrf2
[*DCGW1-bd20] quit
[*DCGW1] bridge-domain 30
[*DCGW1-bd30] evpn binding vpn-instance evrf3
[*DCGW1-bd30] quit
[*DCGW1] bridge-domain 40
[*DCGW1-bd40] evpn binding vpn-instance evrf4
[*DCGW1-bd40] quit
[*DCGW1] commit
Repeat this step for DC-GW2 and L2GW/L3GWs. For configuration details, see
Configuration Files in this section.
Step 6 Configure L3VPN instances on DC-GWs and L2GW/L3GWs and bind them to
VBDIF interfaces.
# Configure DC-GW1.
[~DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv4-family
[*DCGW1-vpn-instance-vpn1-af-ipv4] route-distinguisher 33:33
[*DCGW1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*DCGW1-vpn-instance-vpn1-af-ipv4] quit
[*DCGW1-vpn-instance-vpn1] evpn mpls routing-enable
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] interface vbdif10
[*DCGW1-Vbdif10] ip binding vpn-instance vpn1
[*DCGW1-Vbdif10] ip address 10.1.1.1 24
[*DCGW1-Vbdif10] arp generate-rd-table enable
[*DCGW1-Vbdif10] anycast-gateway enable
[*DCGW1-Vbdif10] mac-address 00e0-fc00-0003
[*DCGW1-Vbdif10] quit
[*DCGW1] interface vbdif20
[*DCGW1-Vbdif20] ip binding vpn-instance vpn1
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv4-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] quit
[*L2GW/L3GW1-vpn-instance-vpn1] evpn mpls routing-enable
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] interface vbdif10
[*L2GW/L3GW1-Vbdif10] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif10] ip address 10.1.1.1 24
[*L2GW/L3GW1-Vbdif10] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif10] anycast-gateway enable
[*L2GW/L3GW1-Vbdif10] mac-address 00e0-fc00-0003
[*L2GW/L3GW1-Vbdif10] arp collect host enable
[*L2GW/L3GW1-Vbdif10] quit
[*L2GW/L3GW1] interface vbdif20
[*L2GW/L3GW1-Vbdif20] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif20] ip address 10.2.1.1 24
[*L2GW/L3GW1-Vbdif20] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif20] anycast-gateway enable
[*L2GW/L3GW1-Vbdif20] mac-address 00e0-fc00-0004
[*L2GW/L3GW1-Vbdif20] arp collect host enable
[*L2GW/L3GW1-Vbdif20] quit
[*L2GW/L3GW1] interface vbdif30
[*L2GW/L3GW1-Vbdif30] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif30] ip address 10.3.1.1 24
[*L2GW/L3GW1-Vbdif30] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif30] anycast-gateway enable
[*L2GW/L3GW1-Vbdif30] mac-address 00e0-fc00-0001
[*L2GW/L3GW1-Vbdif30] arp collect host enable
[*L2GW/L3GW1-Vbdif30] quit
[*L2GW/L3GW1] interface vbdif40
[*L2GW/L3GW1-Vbdif40] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif40] ip address 10.4.1.1 24
[*L2GW/L3GW1-Vbdif40] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif40] anycast-gateway enable
[*L2GW/L3GW1-Vbdif40] mac-address 00e0-fc00-0005
[*L2GW/L3GW1-Vbdif40] arp collect host enable
[*L2GW/L3GW1-Vbdif40] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 7 Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs.
# Configure PE1.
[~PE1] tunnel-policy srte
[*PE1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*PE1-tunnel-policy-srte] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] tnl-policy srte
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] tunnel-policy srte
[*DCGW1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*DCGW1-tunnel-policy-srte] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv4-family
[*DCGW1-vpn-instance-vpn1-af-ipv4] tnl-policy srte evpn
[*DCGW1-vpn-instance-vpn1-af-ipv4] tnl-policy srte
[*DCGW1-vpn-instance-vpn1-af-ipv4] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] tunnel-policy srte
[*L2GW/L3GW1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*L2GW/L3GW1-tunnel-policy-srte] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv4-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] tnl-policy srte evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 8 Configure the BGP EVPN function on DC-GWs and L2GW/L3GWs.
# Configure DC-GW1.
[~DCGW1] ip ip-prefix uIP index 10 permit 10.10.10.10 32
[*DCGW1] route-policy stopuIP deny node 10
[*DCGW1-route-policy] if-match ip-prefix uIP
[*DCGW1-route-policy] quit
[*DCGW1] route-policy stopuIP permit node 20
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 enable
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 enable
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 enable
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 route-policy stopuIP export
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 advertise arp
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise arp
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise arp
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] evpn source-address 1.1.1.1
[*L2GW/L3GW1] evpn
[*L2GW/L3GW1-evpn] vlan-extend private enable
[*L2GW/L3GW1-evpn] vlan-extend redirect enable
[*L2GW/L3GW1-evpn] local-remote frr enable
[*L2GW/L3GW1-evpn] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 9 Configure the BGP VPNv4 function on DC-GWs and PEs.
# Configure DC-GW1.
[~DCGW1] ip vpn-instance vpn1
[~DCGW1-vpn-instance-vpn1] ipv4-family
[~DCGW1-vpn-instance-vpn1-af-ipv4] vpn-target 10:1
[*DCGW1-vpn-instance-vpn1-af-ipv4] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpnv4
[*DCGW1-bgp-af-vpnv4] peer 7.7.7.7 enable
[*DCGW1-bgp-af-vpnv4] peer 8.8.8.8 enable
[*DCGW1-bgp-af-vpnv4] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 77:77
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 10:1
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] interface GigabitEthernet1/0/3
[*PE1-GigabitEthernet1/0/3] ip binding vpn-instance vpn1
[*PE1-GigabitEthernet1/0/3] ip address 10.7.1.1 255.255.255.0
[*PE1-GigabitEthernet1/0/3] quit
[*PE1] bgp 200
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 3.3.3.3 enable
[*PE1-bgp-af-vpnv4] peer 4.4.4.4 enable
[*PE1-bgp-af-vpnv4] quit
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] import-route direct
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
Step 10 Configure Layer 2 sub-interfaces on DC-GWs and L2GW/L3GWs and configure the
static VPN routes destined for VNFs on L2GW/L3GWs.
# Configure DC-GW1.
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
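On the DC-GWs, this step consists only of dot1q Layer 2 sub-interfaces that bind the interconnecting links to the BDs, as the configuration files show for DC-GW1. For example:

```
interface GigabitEthernet1/0/1.1 mode l2
 encapsulation dot1q vid 20
 rewrite pop single
 bridge-domain 20
```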
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] lacp e-trunk system-id 00e0-fc00-0002
[*L2GW/L3GW1] lacp e-trunk priority 1
[*L2GW/L3GW1] e-trunk 1
[*L2GW/L3GW1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
[*L2GW/L3GW1-e-trunk-1] priority 5
[*L2GW/L3GW1-e-trunk-1] quit
[*L2GW/L3GW1] interface Eth-Trunk 10
[*L2GW/L3GW1-Eth-Trunk10] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk10] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk10] e-trunk mode force-master
[*L2GW/L3GW1-Eth-Trunk10] esi 0000.1111.1111.1111.1111
[*L2GW/L3GW1-Eth-Trunk10] quit
[*L2GW/L3GW1] interface Eth-Trunk 20
[*L2GW/L3GW1-Eth-Trunk20] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk20] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk20] e-trunk mode force-master
[*L2GW/L3GW1-Eth-Trunk20] esi 0000.2222.2222.2222.2222
[*L2GW/L3GW1-Eth-Trunk20] quit
[*L2GW/L3GW1] interface Eth-Trunk 30
[*L2GW/L3GW1-Eth-Trunk30] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk30] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk30] e-trunk mode force-master
[*L2GW/L3GW1-Eth-Trunk30] esi 0000.3333.3333.3333.3333
[*L2GW/L3GW1-Eth-Trunk30] quit
[*L2GW/L3GW1] interface Eth-Trunk 40
[*L2GW/L3GW1-Eth-Trunk40] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk40] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk40] e-trunk mode force-master
[*L2GW/L3GW1-Eth-Trunk40] esi 0000.4444.4444.4444.4444
[*L2GW/L3GW1-Eth-Trunk40] quit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
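The static VPN routes destined for the VNFs are not reproduced in this step. Assuming VNF service addresses such as 10.10.10.10/32 reached through IPU next hops on the VBDIF subnets (the next-hop addresses below are illustrative only), they would carry a tag that the route-policy in Step 11 matches:

```
# Sketch (next-hop addresses assumed): tag 1000 is what
# route-policy sp in Step 11 matches
ip route-static vpn-instance vpn1 10.10.10.10 32 10.1.1.10 tag 1000
ip route-static vpn-instance vpn1 10.10.10.10 32 10.2.1.10 tag 1000
```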
Step 11 Configure L2GW/L3GWs to import static VPN routes through BGP EVPN. Configure
and apply a route-policy for L3VPN instances so that the static VPN routes can
retain the original next hops.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv4-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-vpn1] import-route static
[*L2GW/L3GW1-bgp-vpn1] advertise l2vpn evpn import-route-multipath
[*L2GW/L3GW1-bgp-vpn1] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] route-policy sp permit node 10
[*L2GW/L3GW1-route-policy] if-match tag 1000
[*L2GW/L3GW1-route-policy] apply gateway-ip origin-nexthop
[*L2GW/L3GW1-route-policy] quit
[*L2GW/L3GW1] route-policy sp deny node 20
[*L2GW/L3GW1-route-policy] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv4-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] export route-policy sp evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 12 Configure static default VPN routes and loopback addresses on DC-GWs. The
loopback addresses are used to establish BGP VPN peer relationships with VNFs.
Configure and apply a route-policy for L3VPN instances so that DC-GWs can
advertise static default VPN routes and VPN loopback routes only through BGP
EVPN.
# Configure DC-GW1.
[~DCGW1] ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000
[*DCGW1] interface LoopBack2
[*DCGW1-LoopBack2] ip binding vpn-instance vpn1
[*DCGW1-LoopBack2] ip address 33.33.33.33 255.255.255.255
[*DCGW1-LoopBack2] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] advertise l2vpn evpn
[*DCGW1-bgp-vpn1] import-route direct
[*DCGW1-bgp-vpn1] network 0.0.0.0 0
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] ip ip-prefix lp index 10 permit 33.33.33.33 32
[*DCGW1] route-policy dp permit node 10
[*DCGW1-route-policy] if-match tag 2000
[*DCGW1-route-policy] quit
[*DCGW1] route-policy dp permit node 15
[*DCGW1-route-policy] if-match ip-prefix lp
[*DCGW1-route-policy] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv4-family
[*DCGW1-vpn-instance-vpn1-af-ipv4] export route-policy dp evpn
[*DCGW1-vpn-instance-vpn1-af-ipv4] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
Step 13 Establish BGP VPN peer relationships between DC-GWs and VNFs.
# Configure DC-GW1.
[~DCGW1] route-policy p1 deny node 10
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] peer 5.5.5.5 as-number 100
[*DCGW1-bgp-vpn1] peer 5.5.5.5 connect-interface LoopBack2
[*DCGW1-bgp-vpn1] peer 5.5.5.5 route-policy p1 export
[*DCGW1-bgp-vpn1] peer 6.6.6.6 as-number 100
[*DCGW1-bgp-vpn1] peer 6.6.6.6 connect-interface LoopBack2
[*DCGW1-bgp-vpn1] peer 6.6.6.6 route-policy p1 export
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
# Configure DC-GW2.
[~DCGW2] route-policy p1 deny node 10
[*DCGW2-route-policy] quit
[*DCGW2] bgp 100
[*DCGW2-bgp] ipv4-family vpn-instance vpn1
[*DCGW2-bgp-vpn1] peer 5.5.5.5 as-number 100
[*DCGW2-bgp-vpn1] peer 5.5.5.5 connect-interface LoopBack2
[*DCGW2-bgp-vpn1] peer 5.5.5.5 route-policy p1 export
Step 14 Configure the load balancing function on PEs, DC-GWs, and L2GW/L3GWs.
# Configure PE1.
[~PE1] bgp 200
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] maximum load-balancing 16
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] maximum load-balancing 16
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv4-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-vpn1] maximum load-balancing 16
[*L2GW/L3GW1-bgp-vpn1] quit
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] bestroute add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 15 Verify the configuration.
After completing the configurations, run the display ip routing-table vpn-
instance vpn1 command on PEs to view the mobile phone route information and
VNF route information in VPN routing tables.
[~PE1] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 11 Routes : 14
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 77:77
apply-label per-instance
tnl-policy srte
vpn-target 10:1 export-extcommunity
vpn-target 10:1 import-extcommunity
#
mpls lsr-id 7.7.7.7
#
mpls
mpls te
#
explicit-path PtoD74
next sid label 48092 type adjacency
next sid label 48091 type adjacency
#
explicit-path PtoD73
next sid label 48090 type adjacency
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0007.00
traffic-eng level-2
segment-routing mpls
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.7.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.5.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpn1
ip address 10.7.1.1 255.255.255.0
#
interface LoopBack1
ip address 7.7.7.7 255.255.255.255
isis enable 1
isis prefix-sid absolute 16100
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path PtoD73
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path PtoD74
#
bgp 200
peer 3.3.3.3 as-number 100
peer 3.3.3.3 ebgp-max-hop 255
peer 3.3.3.3 connect-interface LoopBack1
peer 3.3.3.3 egress-engineering
peer 4.4.4.4 as-number 100
peer 4.4.4.4 ebgp-max-hop 255
peer 4.4.4.4 connect-interface LoopBack1
peer 4.4.4.4 egress-engineering
#
ipv4-family unicast
undo synchronization
network 7.7.7.7 255.255.255.255
network 10.6.5.0 255.255.255.0
network 10.6.7.0 255.255.255.0
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
link-state-family unicast
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
ipv4-family vpn-instance vpn1
import-route direct
import-route static
maximum load-balancing 16
#
ip route-static 3.3.3.3 255.255.255.255 10.6.5.1
ip route-static 4.4.4.4 255.255.255.255 10.6.7.2
ip route-static 8.8.8.8 255.255.255.255 10.6.7.2
ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000
#
tunnel-policy srte
tunnel select-seq sr-te load-balance-number 3
#
return
● PE2 configuration file
#
sysname PE2
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 88:88
apply-label per-instance
tnl-policy srte
vpn-target 10:1 export-extcommunity
vpn-target 10:1 import-extcommunity
#
mpls lsr-id 8.8.8.8
#
mpls
mpls te
#
explicit-path PtoD83
next sid label 48092 type adjacency
next sid label 48090 type adjacency
#
explicit-path PtoD84
next sid label 48091 type adjacency
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0008.00
traffic-eng level-2
segment-routing mpls
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.7.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.6.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpn1
ip address 10.8.1.1 255.255.255.0
#
interface LoopBack1
ip address 8.8.8.8 255.255.255.255
isis enable 1
isis prefix-sid absolute 16200
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path PtoD83
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path PtoD84
#
bgp 200
peer 3.3.3.3 as-number 100
tnl-policy srte
export route-policy dp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 10:1 export-extcommunity
vpn-target 11:1 import-extcommunity evpn
vpn-target 10:1 import-extcommunity
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
bridge-domain 30
evpn binding vpn-instance evrf3
#
bridge-domain 40
evpn binding vpn-instance evrf4
#
explicit-path DtoL31
next sid label 48003 type adjacency
#
explicit-path DtoL32
next sid label 48002 type adjacency
next sid label 48003 type adjacency
#
explicit-path DtoP37
next sid label 48121 type adjacency
#
explicit-path DtoP38
next sid label 48002 type adjacency
next sid label 48121 type adjacency
#
segment-routing
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0003
anycast-gateway enable
#
interface Vbdif20
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0004
anycast-gateway enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ip address 10.3.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0001
anycast-gateway enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ip address 10.4.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0005
anycast-gateway enable
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.1.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/1.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface GigabitEthernet1/0/1.2 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.2.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/2.2 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.6.5.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
ospf prefix-sid index 30
#
interface LoopBack2
ip binding vpn-instance vpn1
ip address 33.33.33.33 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path DtoL31
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path DtoL32
#
interface Tunnel7
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 7
tnl-policy srte
export route-policy dp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 10:1 export-extcommunity
vpn-target 11:1 import-extcommunity evpn
vpn-target 10:1 import-extcommunity
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
bridge-domain 30
evpn binding vpn-instance evrf3
#
bridge-domain 40
evpn binding vpn-instance evrf4
#
explicit-path DtoL41
next sid label 48020 type adjacency
next sid label 48021 type adjacency
#
explicit-path DtoL42
next sid label 48021 type adjacency
#
explicit-path DtoP47
next sid label 48002 type adjacency
next sid label 48121 type adjacency
#
explicit-path DtoP48
next sid label 48121 type adjacency
#
segment-routing
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0003
anycast-gateway enable
#
interface Vbdif20
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0004
anycast-gateway enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ip address 10.3.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0001
anycast-gateway enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ip address 10.4.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0005
anycast-gateway enable
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.1.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/1.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/1.2 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.3.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface GigabitEthernet1/0/2.2 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.6.6.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
ospf prefix-sid index 40
#
interface LoopBack2
ip binding vpn-instance vpn1
ip address 44.44.44.44 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path DtoL41
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path DtoL42
#
interface Tunnel7
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 7
interface Vbdif40
ip binding vpn-instance vpn1
ip address 10.4.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0005
anycast-gateway enable
arp collect host enable
#
interface Eth-Trunk10
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.1111.1111.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk20
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.2222.2222.2222.2222
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface Eth-Trunk30
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.3333.3333.3333.3333
#
interface Eth-Trunk30.1 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface Eth-Trunk40
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.4444.4444.4444.4444
#
interface Eth-Trunk40.1 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.4.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.2.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
eth-trunk 10
#
interface GigabitEthernet1/0/4
undo shutdown
eth-trunk 20
#
interface GigabitEthernet1/0/5
undo shutdown
eth-trunk 30
#
interface GigabitEthernet1/0/6
undo shutdown
eth-trunk 40
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
ospf prefix-sid index 10
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path LtoD13
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path LtoD14
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 3
mpls te path explicit-path LtoL
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
ipv4-family vpn-instance vpn1
import-route direct
import-route static
maximum load-balancing 16
advertise l2vpn evpn import-route-multipath
#
l2vpn-family evpn
undo policy vpn-target
bestroute add-path path-number 16
peer 2.2.2.2 enable
peer 2.2.2.2 advertise arp
peer 3.3.3.3 enable
peer 3.3.3.3 advertise arp
peer 3.3.3.3 capability-advertise add-path both
peer 3.3.3.3 advertise add-path path-number 16
peer 4.4.4.4 enable
ip vpn-instance vpn1
ipv4-family
route-distinguisher 22:22
apply-label per-instance
export route-policy sp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
bridge-domain 30
evpn binding vpn-instance evrf3
#
bridge-domain 40
evpn binding vpn-instance evrf4
#
explicit-path LtoD23
next sid label 48004 type adjacency
next sid label 48002 type adjacency
#
explicit-path LtoD24
next sid label 48003 type adjacency
#
explicit-path LtoL
next sid label 48004 type adjacency
#
e-trunk 1
priority 5
peer-address 1.1.1.1 source-address 2.2.2.2
#
segment-routing
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0003
anycast-gateway enable
arp collect host enable
#
interface Vbdif20
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0004
anycast-gateway enable
arp collect host enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ip address 10.3.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0001
anycast-gateway enable
arp collect host enable
#
interface Vbdif40
ip binding vpn-instance vpn1
#
interface GigabitEthernet1/0/5
undo shutdown
eth-trunk 30
#
interface GigabitEthernet1/0/6
undo shutdown
eth-trunk 40
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
ospf prefix-sid index 20
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path LtoD23
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path LtoD24
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 3
mpls te path explicit-path LtoL
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
ipv4-family vpn-instance vpn1
import-route direct
import-route static
maximum load-balancing 16
advertise l2vpn evpn import-route-multipath
#
l2vpn-family evpn
undo policy vpn-target
bestroute add-path path-number 16
peer 1.1.1.1 enable
peer 1.1.1.1 advertise arp
peer 3.3.3.3 enable
peer 3.3.3.3 advertise arp
peer 3.3.3.3 capability-advertise add-path both
peer 3.3.3.3 advertise add-path path-number 16
peer 4.4.4.4 enable
peer 4.4.4.4 advertise arp
peer 4.4.4.4 capability-advertise add-path both
Networking Requirements
The NFVI telco cloud solution uses DCI+DCN networking. A large amount of
IPv6 mobile phone traffic is sent to vUGWs and vMSEs on the DCN. After being
processed by the vUGWs and vMSEs, the mobile phone traffic is forwarded over
the DCN to destination devices on the Internet. The destination devices send
traffic back to the mobile phones in a similar way. To achieve these functions
and to load-balance traffic on the DCN, you need to deploy the NFVI distributed
gateway function.
Figure 3-186 shows the networking of an NFVI distributed gateway (SR tunnels).
DC-GWs, the border gateways of the DCN, exchange Internet routes with
external devices through PEs. L2GW/L3GW1 and L2GW/L3GW2 are connected to
VNFs. VNF1 and VNF2, which function as virtualized NEs, implement the vUGW
and vMSE functions, respectively. Each VNF is connected to both L2GW/L3GW1
and L2GW/L3GW2 through IPUs. SR tunnels are established between PEs and
DC-GWs and between DC-GWs and L2GW/L3GWs to carry IPv6 service traffic.
Interface and IP address assignments (device names recovered from the configuration files):
PE1:
  GigabitEthernet1/0/3  2001:DB8:7::1/64
  LoopBack1             7.7.7.7
PE2:
  GigabitEthernet1/0/3  2001:DB8:8::1/64
  LoopBack1             8.8.8.8
DC-GW1:
  GigabitEthernet1/0/3  10.6.5.1/24
  LoopBack1             3.3.3.3/32
  LoopBack2             2001:DB8:33::33/128
DC-GW2:
  GigabitEthernet1/0/3  10.6.6.1/24
  LoopBack1             4.4.4.4/32
  LoopBack2             2001:DB8:44::44/128
L2GW/L3GW1:
  GigabitEthernet1/0/3  -
  GigabitEthernet1/0/4  -
  GigabitEthernet1/0/5  -
  GigabitEthernet1/0/6  -
  LoopBack1             1.1.1.1/32
L2GW/L3GW2:
  GigabitEthernet1/0/3  -
  GigabitEthernet1/0/4  -
  GigabitEthernet1/0/5  -
  GigabitEthernet1/0/6  -
  LoopBack1             2.2.2.2/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a routing protocol between PEs, DC-GWs, and L2GW/L3GWs to
ensure Layer 3 interworking between ASs.
2. Establish EBGP peer relationships between PEs and DC-GWs and IBGP peer
relationships between DC-GWs and L2GW/L3GWs.
3. Configure SR-MPLS TE tunnels between PEs and DC-GWs and between DC-
GWs and L2GW/L3GWs.
4. Configure EVPN instances on DC-GWs and L2GW/L3GWs and bind the EVPN
instances to BDs.
5. Configure L3VPN instances on DC-GWs and L2GW/L3GWs and bind the
L3VPN instances to VBDIF interfaces.
6. Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs.
7. Configure the BGP EVPN function on DC-GWs and L2GW/L3GWs.
8. Configure the BGP VPNv6 function on DC-GWs and PEs.
9. Configure Layer 2 sub-interfaces on DC-GWs and L2GW/L3GWs and configure
the static VPN routes destined for VNFs on L2GW/L3GWs.
10. Configure L2GW/L3GWs to import static VPN routes through BGP EVPN.
Configure and apply a route-policy to L3VPN instances so that the static VPN
routes can retain the original next hops.
11. Configure static default VPN routes and loopback addresses on DC-GWs. The
loopback addresses are used to establish BGP VPN peer relationships with
VNFs. Configure and apply a route-policy to L3VPN instances so that DC-GWs
can advertise static default VPN routes and VPN loopback routes only through
BGP EVPN.
12. Establish BGP VPN peer relationships between DC-GWs and VNFs.
13. Configure the load balancing function on PEs, DC-GWs, and L2GW/L3GWs.
Procedure
Step 1 Configure IP addresses for all interfaces, including loopback interfaces, on PEs, DC-
GWs, and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
Step 2 Configure a routing protocol between PEs, DC-GWs, and L2GW/L3GWs to ensure
Layer 3 interworking between ASs.
For configuration details, see Configuration Files in this section.
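Taking DC-GW1 as an example, the intra-DC devices run OSPF with Segment Routing (the configuration files assign OSPF prefix SIDs to loopback interfaces and import OSPF process 100 into BGP). The following is a sketch only: the area and network statements are illustrative assumptions, not taken from this example.

```
[~DCGW1] ospf 100
[*DCGW1-ospf-100] opaque-capability enable
[*DCGW1-ospf-100] segment-routing mpls
[*DCGW1-ospf-100] area 0.0.0.0
[*DCGW1-ospf-100-area-0.0.0.0] network 3.3.3.3 0.0.0.0
[*DCGW1-ospf-100-area-0.0.0.0] network 10.6.2.0 0.0.0.255
[*DCGW1-ospf-100-area-0.0.0.0] quit
[*DCGW1-ospf-100] quit
[*DCGW1] commit
```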
Step 3 Establish EBGP peer relationships between PEs and DC-GWs and IBGP peer
relationships between DC-GWs and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
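For example, the EBGP peer relationship between PE1 (AS 200) and DC-GW1 (AS 100) is established between loopback addresses, with ebgp-max-hop allowing the multi-hop session and a static route providing reachability to the peer's loopback address. These commands mirror the PE1 configuration file:

```
[~PE1] ip route-static 3.3.3.3 255.255.255.255 10.6.5.1
[*PE1] bgp 200
[*PE1-bgp] peer 3.3.3.3 as-number 100
[*PE1-bgp] peer 3.3.3.3 ebgp-max-hop 255
[*PE1-bgp] peer 3.3.3.3 connect-interface LoopBack1
[*PE1-bgp] peer 3.3.3.3 egress-engineering
[*PE1-bgp] quit
[*PE1] commit
```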
Step 4 Configure SR-MPLS TE tunnels between PEs and DC-GWs and between DC-GWs
and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
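For example, on PE1 the tunnel to DC-GW1 is built from an explicit path of adjacency SIDs, which is then referenced by an SR-MPLS TE tunnel interface. The commands mirror the PE1 configuration file:

```
[~PE1] explicit-path PtoD73
[*PE1-explicit-path-PtoD73] next sid label 48090 type adjacency
[*PE1-explicit-path-PtoD73] quit
[*PE1] interface Tunnel1
[*PE1-Tunnel1] ip address unnumbered interface LoopBack1
[*PE1-Tunnel1] tunnel-protocol mpls te
[*PE1-Tunnel1] destination 3.3.3.3
[*PE1-Tunnel1] mpls te signal-protocol segment-routing
[*PE1-Tunnel1] mpls te tunnel-id 100
[*PE1-Tunnel1] mpls te path explicit-path PtoD73
[*PE1-Tunnel1] quit
[*PE1] commit
```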
Step 5 Configure EVPN instances on DC-GWs and L2GW/L3GWs and bind the EVPN
instances to BDs.
# Configure DC-GW1.
[~DCGW1] evpn vpn-instance evrf1 bd-mode
[*DCGW1-evpn-instance-evrf1] route-distinguisher 1:1
[*DCGW1-evpn-instance-evrf1] vpn-target 1:1
[*DCGW1-evpn-instance-evrf1] quit
[*DCGW1] evpn vpn-instance evrf2 bd-mode
[*DCGW1-evpn-instance-evrf2] route-distinguisher 2:2
[*DCGW1-evpn-instance-evrf2] vpn-target 2:2
[*DCGW1-evpn-instance-evrf2] quit
[*DCGW1] evpn vpn-instance evrf3 bd-mode
[*DCGW1-evpn-instance-evrf3] route-distinguisher 3:3
[*DCGW1-evpn-instance-evrf3] vpn-target 3:3
[*DCGW1-evpn-instance-evrf3] quit
[*DCGW1] evpn vpn-instance evrf4 bd-mode
[*DCGW1-evpn-instance-evrf4] route-distinguisher 4:4
[*DCGW1-evpn-instance-evrf4] vpn-target 4:4
[*DCGW1-evpn-instance-evrf4] quit
[*DCGW1] bridge-domain 10
[*DCGW1-bd10] evpn binding vpn-instance evrf1
[*DCGW1-bd10] quit
[*DCGW1] bridge-domain 20
[*DCGW1-bd20] evpn binding vpn-instance evrf2
[*DCGW1-bd20] quit
[*DCGW1] bridge-domain 30
[*DCGW1-bd30] evpn binding vpn-instance evrf3
[*DCGW1-bd30] quit
[*DCGW1] bridge-domain 40
[*DCGW1-bd40] evpn binding vpn-instance evrf4
[*DCGW1-bd40] quit
[*DCGW1] commit
Repeat this step for DC-GW2 and L2GW/L3GWs. For configuration details, see
Configuration Files in this section.
Step 6 Configure L3VPN instances on DC-GWs and L2GW/L3GWs and bind the L3VPN instances to VBDIF interfaces.
# Configure DC-GW1.
[~DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv6-family
[*DCGW1-vpn-instance-vpn1-af-ipv6] route-distinguisher 11:11
[*DCGW1-vpn-instance-vpn1-af-ipv6] vpn-target 11:1 evpn
[*DCGW1-vpn-instance-vpn1-af-ipv6] evpn mpls routing-enable
[*DCGW1-vpn-instance-vpn1-af-ipv6] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] interface vbdif10
[*DCGW1-Vbdif10] ip binding vpn-instance vpn1
[*DCGW1-Vbdif10] ipv6 enable
[*DCGW1-Vbdif10] ipv6 address 2001:db8:1::1 64
[*DCGW1-Vbdif10] ipv6 nd generate-rd-table enable
[*DCGW1-Vbdif10] anycast-gateway enable
[*DCGW1-Vbdif10] mac-address 00e0-fc00-0003
[*DCGW1-Vbdif10] ipv6 nd dad attempts 0
[*DCGW1-Vbdif10] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv6-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] route-distinguisher 11:11
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] vpn-target 11:1 evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] evpn mpls routing-enable
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] interface vbdif10
[*L2GW/L3GW1-Vbdif10] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif10] ipv6 enable
[*L2GW/L3GW1-Vbdif10] ipv6 address 2001:db8:1::1 64
[*L2GW/L3GW1-Vbdif10] ipv6 nd generate-rd-table enable
[*L2GW/L3GW1-Vbdif10] anycast-gateway enable
[*L2GW/L3GW1-Vbdif10] mac-address 00e0-fc00-0003
[*L2GW/L3GW1-Vbdif10] ipv6 nd collect host enable
[*L2GW/L3GW1-Vbdif10] ipv6 nd dad attempts 0
[*L2GW/L3GW1-Vbdif10] quit
[*L2GW/L3GW1] interface vbdif20
[*L2GW/L3GW1-Vbdif20] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif20] ipv6 enable
[*L2GW/L3GW1-Vbdif20] ipv6 address 2001:db8:2::1 64
[*L2GW/L3GW1-Vbdif20] ipv6 nd generate-rd-table enable
[*L2GW/L3GW1-Vbdif20] anycast-gateway enable
[*L2GW/L3GW1-Vbdif20] mac-address 00e0-fc00-0004
[*L2GW/L3GW1-Vbdif20] ipv6 nd collect host enable
[*L2GW/L3GW1-Vbdif20] ipv6 nd dad attempts 0
[*L2GW/L3GW1-Vbdif20] quit
[*L2GW/L3GW1] interface vbdif30
[*L2GW/L3GW1-Vbdif30] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif30] ipv6 enable
[*L2GW/L3GW1-Vbdif30] ipv6 address 2001:db8:3::1 64
[*L2GW/L3GW1-Vbdif30] ipv6 nd generate-rd-table enable
[*L2GW/L3GW1-Vbdif30] anycast-gateway enable
[*L2GW/L3GW1-Vbdif30] mac-address 00e0-fc00-0001
[*L2GW/L3GW1-Vbdif30] ipv6 nd collect host enable
[*L2GW/L3GW1-Vbdif30] ipv6 nd dad attempts 0
[*L2GW/L3GW1-Vbdif30] quit
[*L2GW/L3GW1] interface vbdif40
[*L2GW/L3GW1-Vbdif40] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif40] ipv6 enable
[*L2GW/L3GW1-Vbdif40] ipv6 address 2001:db8:4::1 64
[*L2GW/L3GW1-Vbdif40] ipv6 nd generate-rd-table enable
[*L2GW/L3GW1-Vbdif40] anycast-gateway enable
[*L2GW/L3GW1-Vbdif40] mac-address 00e0-fc00-0005
[*L2GW/L3GW1-Vbdif40] ipv6 nd collect host enable
[*L2GW/L3GW1-Vbdif40] ipv6 nd dad attempts 0
[*L2GW/L3GW1-Vbdif40] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 7 Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs.
# Configure PE1.
[~PE1] tunnel-policy srte
[*PE1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*PE1-tunnel-policy-srte] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv6-family
[*PE1-vpn-instance-vpn1-af-ipv6] tnl-policy srte
[*PE1-vpn-instance-vpn1-af-ipv6] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] tunnel-policy srte
[*DCGW1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*DCGW1-tunnel-policy-srte] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv6-family
[*DCGW1-vpn-instance-vpn1-af-ipv6] tnl-policy srte evpn
[*DCGW1-vpn-instance-vpn1-af-ipv6] tnl-policy srte
[*DCGW1-vpn-instance-vpn1-af-ipv6] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] tunnel-policy srte
[*L2GW/L3GW1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*L2GW/L3GW1-tunnel-policy-srte] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv6-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] tnl-policy srte evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 8 Configure the BGP EVPN function on DC-GWs and L2GW/L3GWs.
# Configure DC-GW1.
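The DC-GW1 commands follow the same pattern as those shown for L2GW/L3GW1 below: enable the EVPN peers in the BGP EVPN address family. The following is a sketch; DC-GW1's peer set is inferred from the configuration files.

```
[~DCGW1] bgp 100
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 enable
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 enable
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 enable
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
```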
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 2.2.2.2 advertise nd
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise nd
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise nd
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] evpn source-address 1.1.1.1
[*L2GW/L3GW1] evpn
[*L2GW/L3GW1-evpn] vlan-extend private enable
[*L2GW/L3GW1-evpn] vlan-extend redirect enable
[*L2GW/L3GW1-evpn] local-remote frr enable
[*L2GW/L3GW1-evpn] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 9 Configure the BGP VPNv6 function on DC-GWs and PEs.
# Configure DC-GW1.
[~DCGW1] ip vpn-instance vpn1
[~DCGW1-vpn-instance-vpn1] ipv6-family
[~DCGW1-vpn-instance-vpn1-af-ipv6] vpn-target 10:1
[*DCGW1-vpn-instance-vpn1-af-ipv6] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv6-family vpnv6
[*DCGW1-bgp-af-vpnv6] peer 7.7.7.7 enable
[*DCGW1-bgp-af-vpnv6] peer 8.8.8.8 enable
[*DCGW1-bgp-af-vpnv6] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv6-family
[*PE1-vpn-instance-vpn1-af-ipv6] route-distinguisher 77:77
[*PE1-vpn-instance-vpn1-af-ipv6] vpn-target 10:1
[*PE1-vpn-instance-vpn1-af-ipv6] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
Step 10 Configure Layer 2 sub-interfaces on DC-GWs and L2GW/L3GWs and configure the
static VPN routes destined for VNFs on L2GW/L3GWs.
# Configure DC-GW1.
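The Layer 2 sub-interfaces on DC-GW1 mirror those in the DC-GW2 configuration file: each dot1q sub-interface is bound to its bridge domain. A sketch for the first two VLANs (the remaining VLANs follow the same pattern):

```
[~DCGW1] interface GigabitEthernet1/0/1.1 mode l2
[*DCGW1-GigabitEthernet1/0/1.1] encapsulation dot1q vid 10
[*DCGW1-GigabitEthernet1/0/1.1] rewrite pop single
[*DCGW1-GigabitEthernet1/0/1.1] bridge-domain 10
[*DCGW1-GigabitEthernet1/0/1.1] quit
[*DCGW1] interface GigabitEthernet1/0/2.1 mode l2
[*DCGW1-GigabitEthernet1/0/2.1] encapsulation dot1q vid 20
[*DCGW1-GigabitEthernet1/0/2.1] rewrite pop single
[*DCGW1-GigabitEthernet1/0/2.1] bridge-domain 20
[*DCGW1-GigabitEthernet1/0/2.1] quit
[*DCGW1] commit
```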
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] lacp e-trunk system-id 00e0-fc00-0002
[*L2GW/L3GW1] lacp e-trunk priority 1
[*L2GW/L3GW1] e-trunk 1
[*L2GW/L3GW1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
[*L2GW/L3GW1-e-trunk-1] priority 5
[*L2GW/L3GW1-e-trunk-1] quit
[*L2GW/L3GW1] interface Eth-Trunk 10
[*L2GW/L3GW1-Eth-Trunk10] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk10] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk10] e-trunk mode force-master
[*L2GW/L3GW1-Eth-Trunk10] esi 0000.1111.1111.1111.1111
[*L2GW/L3GW1-Eth-Trunk10] quit
[*L2GW/L3GW1] interface Eth-Trunk 20
[*L2GW/L3GW1-Eth-Trunk20] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk20] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk20] e-trunk mode force-master
[*L2GW/L3GW1-Eth-Trunk20] esi 0000.2222.2222.2222.2222
[*L2GW/L3GW1-Eth-Trunk20] quit
[*L2GW/L3GW1] interface Eth-Trunk 30
[*L2GW/L3GW1-Eth-Trunk30] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk30] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk30] e-trunk mode force-master
[*L2GW/L3GW1-Eth-Trunk30] esi 0000.3333.3333.3333.3333
[*L2GW/L3GW1-Eth-Trunk30] quit
[*L2GW/L3GW1] interface Eth-Trunk 40
[*L2GW/L3GW1-Eth-Trunk40] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk40] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk40] e-trunk mode force-master
[*L2GW/L3GW1-Eth-Trunk40] esi 0000.4444.4444.4444.4444
[*L2GW/L3GW1-Eth-Trunk40] quit
[*L2GW/L3GW1] interface Eth-Trunk 10.1 mode l2
[*L2GW/L3GW1-Eth-Trunk10.1] encapsulation dot1q vid 10
[*L2GW/L3GW1-Eth-Trunk10.1] rewrite pop single
[*L2GW/L3GW1-Eth-Trunk10.1] bridge-domain 10
[*L2GW/L3GW1-Eth-Trunk10.1] quit
[*L2GW/L3GW1] interface Eth-Trunk 20.1 mode l2
[*L2GW/L3GW1-Eth-Trunk20.1] encapsulation dot1q vid 20
[*L2GW/L3GW1-Eth-Trunk20.1] rewrite pop single
[*L2GW/L3GW1-Eth-Trunk20.1] bridge-domain 20
[*L2GW/L3GW1-Eth-Trunk20.1] quit
[*L2GW/L3GW1] interface Eth-Trunk 30.1 mode l2
[*L2GW/L3GW1-Eth-Trunk30.1] encapsulation dot1q vid 30
[*L2GW/L3GW1-Eth-Trunk30.1] rewrite pop single
[*L2GW/L3GW1-Eth-Trunk30.1] bridge-domain 30
[*L2GW/L3GW1-Eth-Trunk30.1] quit
[*L2GW/L3GW1] interface Eth-Trunk 40.1 mode l2
[*L2GW/L3GW1-Eth-Trunk40.1] encapsulation dot1q vid 40
[*L2GW/L3GW1-Eth-Trunk40.1] rewrite pop single
[*L2GW/L3GW1-Eth-Trunk40.1] bridge-domain 40
[*L2GW/L3GW1-Eth-Trunk40.1] quit
[~L2GW/L3GW1] interface GigabitEthernet1/0/3
[*L2GW/L3GW1-GigabitEthernet1/0/3] eth-trunk 10
[*L2GW/L3GW1-GigabitEthernet1/0/3] quit
[*L2GW/L3GW1] interface GigabitEthernet1/0/4
[*L2GW/L3GW1-GigabitEthernet1/0/4] eth-trunk 20
[*L2GW/L3GW1-GigabitEthernet1/0/4] quit
[~L2GW/L3GW1] interface GigabitEthernet1/0/5
[*L2GW/L3GW1-GigabitEthernet1/0/5] eth-trunk 30
[*L2GW/L3GW1-GigabitEthernet1/0/5] quit
[*L2GW/L3GW1] interface GigabitEthernet1/0/6
[*L2GW/L3GW1-GigabitEthernet1/0/6] eth-trunk 40
[*L2GW/L3GW1-GigabitEthernet1/0/6] quit
[*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:DB8:5::5 128 2001:DB8:1::2 preference 255
tag 1000 inter-protocol-ecmp
[*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:DB8:5::5 128 2001:DB8:2::2 preference 255
tag 1000 inter-protocol-ecmp
[*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:DB8:6::6 128 2001:DB8:3::2 preference 255
tag 1000 inter-protocol-ecmp
[*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:DB8:6::6 128 2001:DB8:4::2 preference 255
tag 1000 inter-protocol-ecmp
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 11 Configure L2GW/L3GWs to import static VPN routes through BGP EVPN. Configure
and apply a route-policy to L3VPN instances so that the static VPN routes can
retain the original next hops.
# Configure L2GW/L3GW1.
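A sketch of the L2GW/L3GW1 commands, consistent with the export route-policy sp evpn line in the configuration file and the tag 1000 set on the static VPN routes. The apply gateway-ip origin-nexthop clause, intended to keep the original next hop when the static routes are advertised through BGP EVPN, is an assumption and may differ by software version.

```
[~L2GW/L3GW1] route-policy sp permit node 10
[*L2GW/L3GW1-route-policy] if-match tag 1000
[*L2GW/L3GW1-route-policy] apply gateway-ip origin-nexthop
[*L2GW/L3GW1-route-policy] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv6-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] export route-policy sp evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv6-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-6-vpn1] import-route static
[*L2GW/L3GW1-bgp-6-vpn1] advertise l2vpn evpn
[*L2GW/L3GW1-bgp-6-vpn1] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
```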
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 12 Configure static default VPN routes and loopback addresses on DC-GWs. The
loopback addresses are used to establish BGP VPN peer relationships with VNFs.
Configure and apply a route-policy to L3VPN instances so that DC-GWs can
advertise static default VPN routes and VPN loopback routes only through BGP
EVPN.
# Configure DC-GW1.
[~DCGW1] ipv6 route-static vpn-instance vpn1 :: 0 NULL0 tag 2000
[*DCGW1] interface LoopBack2
[*DCGW1-LoopBack2] ip binding vpn-instance vpn1
[*DCGW1-LoopBack2] ipv6 enable
[*DCGW1-LoopBack2] ipv6 address 2001:db8:33::33 128
[*DCGW1-LoopBack2] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv6-family vpn-instance vpn1
[*DCGW1-bgp-6-vpn1] advertise l2vpn evpn
[*DCGW1-bgp-6-vpn1] import-route direct
[*DCGW1-bgp-6-vpn1] network :: 0
[*DCGW1-bgp-6-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] ip ipv6-prefix lp index 10 permit 2001:db8:33::33 128
[*DCGW1] route-policy dp permit node 10
[*DCGW1-route-policy] if-match tag 2000
[*DCGW1-route-policy] quit
[*DCGW1] route-policy dp permit node 15
[*DCGW1-route-policy] if-match ipv6 address prefix-list lp
[*DCGW1-route-policy] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv6-family
[*DCGW1-vpn-instance-vpn1-af-ipv6] export route-policy dp evpn
[*DCGW1-vpn-instance-vpn1-af-ipv6] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
Step 13 Establish BGP VPN peer relationships between DC-GWs and VNFs.
# Configure DC-GW1.
[~DCGW1] route-policy p1 deny node 10
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv6-family vpn-instance vpn1
[*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 as-number 100
[*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 connect-interface LoopBack2
[*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 route-policy p1 export
[*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 as-number 100
[*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 connect-interface LoopBack2
[*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 route-policy p1 export
[*DCGW1-bgp-6-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
# Configure DC-GW2.
[~DCGW2] route-policy p1 deny node 10
[*DCGW2-route-policy] quit
[*DCGW2] bgp 100
[*DCGW2-bgp] ipv6-family vpn-instance vpn1
[*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 as-number 100
[*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 connect-interface LoopBack2
[*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 route-policy p1 export
[*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 as-number 100
[*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 connect-interface LoopBack2
[*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 route-policy p1 export
[*DCGW2-bgp-6-vpn1] quit
[*DCGW2-bgp] quit
[*DCGW2] commit
Step 14 Configure the load balancing function on PEs, DC-GWs, and L2GW/L3GWs.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] ipv6-family vpn-instance vpn1
[*PE1-bgp-6-vpn1] maximum load-balancing 16
[*PE1-bgp-6-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] bgp 100
[*DCGW1-bgp] ipv6-family vpn-instance vpn1
[*DCGW1-bgp-6-vpn1] maximum load-balancing 16
[*DCGW1-bgp-6-vpn1] quit
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv6-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-6-vpn1] maximum load-balancing 16
[*L2GW/L3GW1-bgp-6-vpn1] quit
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] bestroute add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 15 Verify the configuration.
After completing the configurations, run the display ipv6 routing-table vpn-
instance vpn1 command on PEs to view the mobile phone route information and
VNF route information in VPN routing tables.
[~PE1] display ipv6 routing-table vpn-instance vpn1
Routing Table : vpn1
Destinations : 9 Routes : 12
Destination : :: PrefixLength : 0
NextHop : :: Preference : 60
Cost : 0 Protocol : Static
RelayNextHop : :: TunnelID : 0x0
Interface : NULL0 Flags : DB
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
ipv6-family
route-distinguisher 77:77
apply-label per-instance
tnl-policy srte
vpn-target 10:1 export-extcommunity
vpn-target 10:1 import-extcommunity
#
mpls lsr-id 7.7.7.7
#
mpls
mpls te
#
explicit-path PtoD74
next sid label 48092 type adjacency
next sid label 48091 type adjacency
#
explicit-path PtoD73
next sid label 48090 type adjacency
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0007.00
traffic-eng level-2
segment-routing mpls
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.7.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.5.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:7::1/64
#
interface LoopBack1
ip address 7.7.7.7 255.255.255.255
isis enable 1
isis prefix-sid absolute 16100
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path PtoD73
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path PtoD74
#
bgp 200
peer 3.3.3.3 as-number 100
peer 3.3.3.3 ebgp-max-hop 255
peer 3.3.3.3 connect-interface LoopBack1
peer 3.3.3.3 egress-engineering
peer 4.4.4.4 as-number 100
peer 4.4.4.4 ebgp-max-hop 255
peer 4.4.4.4 connect-interface LoopBack1
peer 4.4.4.4 egress-engineering
#
ipv4-family unicast
undo synchronization
network 7.7.7.7 255.255.255.255
network 10.6.5.0 255.255.255.0
network 10.6.7.0 255.255.255.0
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
link-state-family unicast
#
ipv6-family vpnv6
policy vpn-target
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
ipv6-family vpn-instance vpn1
import-route direct
import-route static
maximum load-balancing 16
#
ip route-static 3.3.3.3 255.255.255.255 10.6.5.1
ip route-static 4.4.4.4 255.255.255.255 10.6.7.2
ip route-static 8.8.8.8 255.255.255.255 10.6.7.2
#
ipv6 route-static vpn-instance vpn1 :: 0 NULL0 tag 2000
#
tunnel-policy srte
tunnel select-seq sr-te load-balance-number 3
#
return
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path DtoL32
#
interface Tunnel7
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 7
mpls te path explicit-path DtoP37
#
interface Tunnel8
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 8.8.8.8
mpls te signal-protocol segment-routing
mpls te tunnel-id 8
mpls te path explicit-path DtoP38
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
peer 7.7.7.7 as-number 200
peer 7.7.7.7 ebgp-max-hop 255
peer 7.7.7.7 connect-interface LoopBack1
peer 7.7.7.7 egress-engineering
peer 8.8.8.8 as-number 200
peer 8.8.8.8 ebgp-max-hop 255
peer 8.8.8.8 connect-interface LoopBack1
peer 8.8.8.8 egress-engineering
#
ipv4-family unicast
undo synchronization
network 10.6.1.0 255.255.255.0
network 10.6.5.0 255.255.255.0
import-route static
import-route ospf 100
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 4.4.4.4 enable
peer 7.7.7.7 enable
peer 8.8.8.8 enable
#
link-state-family unicast
#
ipv6-family vpnv6
policy vpn-target
peer 7.7.7.7 enable
peer 7.7.7.7 advertise route-reoriginated evpn ipv6
peer 8.8.8.8 enable
peer 8.8.8.8 advertise route-reoriginated evpn ipv6
#
ipv6-family vpn-instance vpn1
network :: 0
import-route direct
maximum load-balancing 16
advertise l2vpn evpn
peer 2001:DB8:5::5 as-number 100
peer 2001:DB8:5::5 connect-interface LoopBack2
ipv6 enable
ipv6 address 2001:DB8:2::1/64
mac-address 00e0-fc00-0004
ipv6 nd dad attempts 0
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:3::1/64
mac-address 00e0-fc00-0001
ipv6 nd dad attempts 0
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:4::1/64
mac-address 00e0-fc00-0005
ipv6 nd dad attempts 0
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.1.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/1.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/1.2 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.3.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface GigabitEthernet1/0/2.2 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.6.6.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
ospf prefix-sid index 40
#
interface LoopBack2
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:44::44/128
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path DtoL41
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path DtoL42
#
interface Tunnel7
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 7
mpls te path explicit-path DtoP47
#
interface Tunnel8
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 8.8.8.8
mpls te signal-protocol segment-routing
mpls te tunnel-id 8
mpls te path explicit-path DtoP48
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 7.7.7.7 as-number 200
peer 7.7.7.7 ebgp-max-hop 255
peer 7.7.7.7 connect-interface LoopBack1
peer 7.7.7.7 egress-engineering
peer 8.8.8.8 as-number 200
peer 8.8.8.8 ebgp-max-hop 255
peer 8.8.8.8 connect-interface LoopBack1
peer 8.8.8.8 egress-engineering
#
ipv4-family unicast
undo synchronization
network 10.6.1.0 255.255.255.0
network 10.6.5.0 255.255.255.0
import-route static
import-route ospf 100
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 3.3.3.3 enable
peer 7.7.7.7 enable
peer 8.8.8.8 enable
#
link-state-family unicast
#
ipv6-family vpnv6
policy vpn-target
peer 7.7.7.7 enable
peer 7.7.7.7 advertise route-reoriginated evpn ipv6
#
evpn
vlan-extend private enable
vlan-extend redirect enable
local-remote frr enable
#
mac-duplication
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 1:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 2:2
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 3:3
vpn-target 3:3 export-extcommunity
vpn-target 3:3 import-extcommunity
#
evpn vpn-instance evrf4 bd-mode
route-distinguisher 4:4
vpn-target 4:4 export-extcommunity
vpn-target 4:4 import-extcommunity
#
ip vpn-instance vpn1
ipv6-family
route-distinguisher 11:11
apply-label per-instance
export route-policy sp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
bridge-domain 30
evpn binding vpn-instance evrf3
#
bridge-domain 40
evpn binding vpn-instance evrf4
#
explicit-path LtoD13
next sid label 48002 type adjacency
#
explicit-path LtoD14
next sid label 48003 type adjacency
next sid label 48003 type adjacency
#
explicit-path LtoL
next sid label 48003 type adjacency
#
e-trunk 1
priority 5
peer-address 2.2.2.2 source-address 1.1.1.1
#
segment-routing
#
interface Vbdif10
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:1::1/64
mac-address 00e0-fc00-0003
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif20
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:2::1/64
mac-address 00e0-fc00-0004
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:3::1/64
mac-address 00e0-fc00-0001
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:4::1/64
mac-address 00e0-fc00-0005
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Eth-Trunk10
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.1111.1111.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk20
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.2222.2222.2222.2222
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface Eth-Trunk30
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.3333.3333.3333.3333
#
interface Eth-Trunk30.1 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface Eth-Trunk40
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.4444.4444.4444.4444
#
interface Eth-Trunk40.1 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.4.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.2.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
eth-trunk 10
#
interface GigabitEthernet1/0/4
undo shutdown
eth-trunk 20
#
interface GigabitEthernet1/0/5
undo shutdown
eth-trunk 30
#
interface GigabitEthernet1/0/6
undo shutdown
eth-trunk 40
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
ospf prefix-sid index 10
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path LtoD13
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path LtoD14
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
#
interface Eth-Trunk30
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.3333.3333.3333.3333
#
interface Eth-Trunk30.1 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface Eth-Trunk40
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.4444.4444.4444.4444
#
interface Eth-Trunk40.1 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.4.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.3.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
eth-trunk 10
#
interface GigabitEthernet1/0/4
undo shutdown
eth-trunk 20
#
interface GigabitEthernet1/0/5
undo shutdown
eth-trunk 30
#
interface GigabitEthernet1/0/6
undo shutdown
eth-trunk 40
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
ospf prefix-sid index 20
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path LtoD23
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
#
evpn source-address 2.2.2.2
#
return
Networking Requirements
The NFVI telco cloud solution uses DCI+DCN networking. A large amount of
IPv4 mobile phone traffic is sent to vUGWs and vMSEs on the DCN. After being
processed by the vUGWs and vMSEs, the traffic is forwarded over the DCN to
destination devices on the Internet. The destination devices send traffic back
to the mobile phones in a similar manner. To implement these functions and
ensure traffic load balancing on the DCN, you need to deploy the NFVI
distributed gateway function.
Figure 3-187 shows the networking of an NFVI distributed gateway (BGP VPNv4
over E2E SR tunnels). DC-GWs, which are the border gateways of the DCN,
exchange Internet routes with external devices through PEs. L2GW/L3GW1 and
L2GW/L3GW2 are connected to VNFs. VNF1 and VNF2, which function as virtualized
NEs, are deployed to implement the vUGW and vMSE functions, respectively.
Each VNF is connected to both L2GW/L3GW1 and L2GW/L3GW2 through IPUs. E2E SR
tunnels are established between the PEs and the L2GW/L3GWs to carry IPv4
service traffic.
Interface addresses (from Figure 3-187):
PE1: LoopBack1 7.7.7.7
PE2: LoopBack1 8.8.8.8
DC-GW1: LoopBack1 3.3.3.3/32, LoopBack2 33.33.33.33/32
DC-GW2: LoopBack1 4.4.4.4/32, LoopBack2 44.44.44.44/32
L2GW/L3GW1: GigabitEthernet1/0/3 through 1/0/6 - (Eth-Trunk member interfaces), LoopBack1 1.1.1.1/32
L2GW/L3GW2: GigabitEthernet1/0/3 through 1/0/6 - (Eth-Trunk member interfaces), LoopBack1 2.2.2.2/32
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a routing protocol between PEs, DC-GWs, and L2GW/L3GWs to
ensure Layer 3 interworking between ASs.
2. Establish EBGP peer relationships between PEs and DC-GWs and IBGP peer
relationships between DC-GWs and L2GW/L3GWs.
3. Configure SR-MPLS TE tunnels between PEs and L2GW/L3GWs and between
DC-GWs and L2GW/L3GWs.
4. Configure EVPN instances on L2GW/L3GWs and bind the EVPN instances to
BDs.
5. Configure L3VPN instances on PEs, DC-GWs, and L2GW/L3GWs and bind the
L3VPN instances on L2GW/L3GWs to VBDIF interfaces.
6. Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs.
7. Configure the BGP EVPN function on DC-GWs and L2GW/L3GWs and
configure DC-GWs as RRs.
8. Configure the BGP VPNv4 function on L2GW/L3GWs and PEs.
9. Configure the Eth-Trunk Layer 2 sub-interfaces connecting L2GW/L3GWs to
VNFs and the static VPN routes destined for VNFs.
10. Configure L2GW/L3GWs to import static VPN routes through BGP EVPN.
Configure and apply a route-policy to L3VPN instances so that the static VPN
routes can retain the original next hops.
11. On DC-GWs, configure the loopback addresses used to establish BGP VPN
peer relationships with VNFs. Configure and apply route-policies to the L3VPN
instances so that the DC-GWs advertise only VPN loopback routes and mobile
phone routes through BGP EVPN, add the GW IP attribute to the mobile phone
routes, and delete the GW IP attribute from routes received from EVPN peers.
12. Establish BGP VPN peer relationships between DC-GWs and VNFs and
configure DC-GWs to function as RRs.
13. Configure the load balancing function on PEs, DC-GWs, and L2GW/L3GWs.
Procedure
Step 1 Configure IP addresses for all interfaces, including loopback interfaces, on PEs, DC-
GWs, and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
Step 2 Configure a routing protocol between PEs, DC-GWs, and L2GW/L3GWs to ensure
Layer 3 interworking between ASs.
For configuration details, see Configuration Files in this section.
Step 3 Establish EBGP peer relationships between PEs and DC-GWs and IBGP peer
relationships between DC-GWs and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
Step 4 Configure SR-MPLS TE tunnels between PEs and L2GW/L3GWs and between
DC-GWs and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
Step 5 Configure EVPN instances on L2GW/L3GWs and bind the EVPN instances to BDs.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] evpn vpn-instance evrf1 bd-mode
[*L2GW/L3GW1-evpn-instance-evrf1] route-distinguisher 1:1
[*L2GW/L3GW1-evpn-instance-evrf1] vpn-target 1:1
[*L2GW/L3GW1-evpn-instance-evrf1] quit
[*L2GW/L3GW1] evpn vpn-instance evrf2 bd-mode
[*L2GW/L3GW1-evpn-instance-evrf2] route-distinguisher 2:2
[*L2GW/L3GW1-evpn-instance-evrf2] vpn-target 2:2
[*L2GW/L3GW1-evpn-instance-evrf2] quit
[*L2GW/L3GW1] evpn vpn-instance evrf3 bd-mode
[*L2GW/L3GW1-evpn-instance-evrf3] route-distinguisher 3:3
[*L2GW/L3GW1-evpn-instance-evrf3] vpn-target 3:3
[*L2GW/L3GW1-evpn-instance-evrf3] quit
[*L2GW/L3GW1] evpn vpn-instance evrf4 bd-mode
[*L2GW/L3GW1-evpn-instance-evrf4] route-distinguisher 4:4
[*L2GW/L3GW1-evpn-instance-evrf4] vpn-target 4:4
[*L2GW/L3GW1-evpn-instance-evrf4] quit
[*L2GW/L3GW1] bridge-domain 10
[*L2GW/L3GW1-bd10] evpn binding vpn-instance evrf1
[*L2GW/L3GW1-bd10] quit
[*L2GW/L3GW1] bridge-domain 20
[*L2GW/L3GW1-bd20] evpn binding vpn-instance evrf2
[*L2GW/L3GW1-bd20] quit
[*L2GW/L3GW1] bridge-domain 30
[*L2GW/L3GW1-bd30] evpn binding vpn-instance evrf3
[*L2GW/L3GW1-bd30] quit
[*L2GW/L3GW1] bridge-domain 40
[*L2GW/L3GW1-bd40] evpn binding vpn-instance evrf4
[*L2GW/L3GW1-bd40] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 6 Configure L3VPN instances on PEs, DC-GWs, and L2GW/L3GWs and bind the
L3VPN instances on L2GW/L3GWs to VBDIF interfaces.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 77:77
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 10:1
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] interface GigabitEthernet1/0/3
[*PE1-GigabitEthernet1/0/3] ip binding vpn-instance vpn1
[*PE1-GigabitEthernet1/0/3] ip address 10.7.1.1 255.255.255.0
[*PE1-GigabitEthernet1/0/3] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv4-family
[*DCGW1-vpn-instance-vpn1-af-ipv4] route-distinguisher 33:33
[*DCGW1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*DCGW1-vpn-instance-vpn1-af-ipv4] quit
[*DCGW1-vpn-instance-vpn1] evpn mpls routing-enable
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv4-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] vpn-target 10:1
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] quit
[*L2GW/L3GW1-vpn-instance-vpn1] evpn mpls routing-enable
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] interface vbdif10
[*L2GW/L3GW1-Vbdif10] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif10] ip address 10.1.1.1 24
[*L2GW/L3GW1-Vbdif10] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif10] anycast-gateway enable
[*L2GW/L3GW1-Vbdif10] mac-address 00e0-fc00-0003
[*L2GW/L3GW1-Vbdif10] arp collect host enable
[*L2GW/L3GW1-Vbdif10] quit
[*L2GW/L3GW1] interface vbdif20
[*L2GW/L3GW1-Vbdif20] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif20] ip address 10.2.1.1 24
[*L2GW/L3GW1-Vbdif20] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif20] anycast-gateway enable
[*L2GW/L3GW1-Vbdif20] mac-address 00e0-fc00-0004
[*L2GW/L3GW1-Vbdif20] arp collect host enable
[*L2GW/L3GW1-Vbdif20] quit
[*L2GW/L3GW1] interface vbdif30
[*L2GW/L3GW1-Vbdif30] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif30] ip address 10.3.1.1 24
[*L2GW/L3GW1-Vbdif30] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif30] anycast-gateway enable
[*L2GW/L3GW1-Vbdif30] mac-address 00e0-fc00-0001
[*L2GW/L3GW1-Vbdif30] arp collect host enable
[*L2GW/L3GW1-Vbdif30] quit
[*L2GW/L3GW1] interface vbdif40
[*L2GW/L3GW1-Vbdif40] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif40] ip address 10.4.1.1 24
[*L2GW/L3GW1-Vbdif40] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif40] anycast-gateway enable
[*L2GW/L3GW1-Vbdif40] mac-address 00e0-fc00-0005
[*L2GW/L3GW1-Vbdif40] arp collect host enable
[*L2GW/L3GW1-Vbdif40] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 7 Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs.
# Configure PE1.
[~PE1] tunnel-policy srte
[*PE1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*PE1-tunnel-policy-srte] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] tnl-policy srte
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] tunnel-policy srte
[*L2GW/L3GW1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*L2GW/L3GW1-tunnel-policy-srte] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv4-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] tnl-policy srte
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] tnl-policy srte evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
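The tunnel policy above controls which tunnel type carries VPN traffic and how many tunnels share the load. The following Python sketch illustrates the behavior of `tunnel select-seq sr-te load-balance-number 3` under simplified assumptions (candidate tunnels modeled as name/type pairs; this is an illustration, not the device's internal algorithm):

```python
# Illustrative model of "tunnel select-seq sr-te load-balance-number 3":
# from the candidate tunnels to a given next hop, keep only tunnels of
# the preferred type (SR-TE here) and use up to 3 for load balancing.
def select_tunnels(candidates, preferred_type="sr-te", load_balance_number=3):
    """candidates: list of (tunnel_name, tunnel_type) pairs."""
    matching = [name for name, ttype in candidates if ttype == preferred_type]
    return matching[:load_balance_number]

tunnels = [("Tunnel1", "sr-te"), ("Tunnel2", "sr-te"),
           ("Tunnel3", "sr-te"), ("Tunnel4", "ldp")]
print(select_tunnels(tunnels))  # → ['Tunnel1', 'Tunnel2', 'Tunnel3']
```

If more SR-TE tunnels exist than the configured load-balance number, only that many are used; tunnels of other types are ignored while any preferred-type tunnel is available.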
Step 8 Configure the BGP EVPN function on DC-GWs and L2GW/L3GWs and configure
DC-GWs as RRs.
# Configure DC-GW1.
[~DCGW1] ip ip-prefix uIP index 10 permit 10.10.10.10 32
[*DCGW1] route-policy stopuIP deny node 10
[*DCGW1-route-policy] if-match ip-prefix uIP
[*DCGW1-route-policy] quit
[*DCGW1] route-policy stopuIP permit node 20
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 enable
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 reflect-client
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 enable
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 reflect-client
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 enable
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 reflect-client
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 route-policy stopuIP export
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 import reoriginate
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 import reoriginate
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] evpn source-address 1.1.1.1
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
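The stopuIP route-policy applied on the RR export direction prevents the mobile phone route from being reflected to the other DC-GW. A minimal Python model of its two nodes (deny routes matching prefix list uIP, permit everything else) looks like this; the prefix value comes from the example, the function is illustrative:

```python
# Sketch of route-policy stopuIP: node 10 denies routes matching
# ip-prefix uIP (10.10.10.10/32), node 20 permits all other routes.
import ipaddress

UIP = ipaddress.ip_network("10.10.10.10/32")

def stopuIP_export(prefix):
    net = ipaddress.ip_network(prefix)
    if net == UIP:     # node 10: if-match ip-prefix uIP -> deny
        return False
    return True        # node 20: permit

advertised = [p for p in ["10.10.10.10/32", "33.33.33.33/32", "10.1.1.0/24"]
              if stopuIP_export(p)]
print(advertised)  # the mobile phone route is filtered toward the peer DC-GW
```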
Step 9 Configure the BGP VPNv4 function on L2GW/L3GWs and PEs.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] peer 7.7.7.7 as-number 200
[*L2GW/L3GW1-bgp] peer 7.7.7.7 ebgp-max-hop 255
[*L2GW/L3GW1-bgp] peer 7.7.7.7 connect-interface LoopBack1
[*L2GW/L3GW1-bgp] peer 8.8.8.8 as-number 200
[*L2GW/L3GW1-bgp] peer 8.8.8.8 ebgp-max-hop 255
[*L2GW/L3GW1-bgp] peer 8.8.8.8 connect-interface LoopBack1
[*L2GW/L3GW1-bgp] ipv4-family vpnv4
[*L2GW/L3GW1-bgp-af-vpnv4] peer 7.7.7.7 enable
[*L2GW/L3GW1-bgp-af-vpnv4] peer 7.7.7.7 advertise route-reoriginated evpn ip
[*L2GW/L3GW1-bgp-af-vpnv4] peer 8.8.8.8 enable
[*L2GW/L3GW1-bgp-af-vpnv4] peer 8.8.8.8 advertise route-reoriginated evpn ip
[*L2GW/L3GW1-bgp-af-vpnv4] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
# Configure PE1.
[~PE1] bgp 200
[*PE1-bgp] peer 1.1.1.1 as-number 200
[*PE1-bgp] peer 1.1.1.1 ebgp-max-hop 255
[*PE1-bgp] peer 1.1.1.1 connect-interface LoopBack1
[*PE1-bgp] peer 2.2.2.2 as-number 200
[*PE1-bgp] peer 2.2.2.2 ebgp-max-hop 255
[*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack1
[*PE1-bgp] ipv4-family vpnv4
[*PE1-bgp-af-vpnv4] peer 1.1.1.1 enable
[*PE1-bgp-af-vpnv4] peer 2.2.2.2 enable
[*PE1-bgp-af-vpnv4] quit
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] import-route direct
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
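In this step, `peer ... import reoriginate` (Step 8) and `peer ... advertise route-reoriginated evpn ip` work together: IP prefixes learned through BGP EVPN are regenerated and advertised to the VPNv4 peers (the PEs) with the local device as the next hop. The following sketch models that regeneration under stated assumptions (the route dictionaries and field names are illustrative, not the device's internal representation):

```python
# Hedged sketch of EVPN-to-VPNv4 route re-origination on an L2GW/L3GW:
# routes marked for re-origination when imported from EVPN peers are
# re-advertised as VPNv4 routes with the local router as next hop.
def reoriginate_evpn_to_vpnv4(evpn_routes, local_next_hop):
    vpnv4 = []
    for route in evpn_routes:
        if route.get("reoriginate"):
            vpnv4.append({"prefix": route["prefix"],
                          "next_hop": local_next_hop,
                          "origin": "evpn"})
    return vpnv4

evpn = [{"prefix": "10.10.10.10/32", "reoriginate": True},
        {"prefix": "192.0.2.0/24", "reoriginate": False}]
print(reoriginate_evpn_to_vpnv4(evpn, "1.1.1.1"))
```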
Step 10 Configure the Eth-Trunk Layer 2 sub-interfaces connecting L2GW/L3GWs to VNFs
and the static VPN routes destined for VNFs.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] lacp e-trunk system-id 00e0-fc00-0002
[*L2GW/L3GW1] lacp e-trunk priority 1
[*L2GW/L3GW1] e-trunk 1
[*L2GW/L3GW1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
[*L2GW/L3GW1-e-trunk-1] priority 5
[*L2GW/L3GW1-e-trunk-1] quit
[*L2GW/L3GW1] interface Eth-Trunk 10
[*L2GW/L3GW1-Eth-Trunk10] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk10] e-trunk 1
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 11 Configure L2GW/L3GWs to import static VPN routes through BGP EVPN. Configure
and apply a route-policy to L3VPN instances so that the static VPN routes can
retain the original next hops.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv4-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-vpn1] import-route static
[*L2GW/L3GW1-bgp-vpn1] import-route direct
[*L2GW/L3GW1-bgp-vpn1] advertise l2vpn evpn import-route-multipath
[*L2GW/L3GW1-bgp-vpn1] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] route-policy sp permit node 10
[*L2GW/L3GW1-route-policy] if-match tag 1000
[*L2GW/L3GW1-route-policy] apply gateway-ip origin-nexthop
[*L2GW/L3GW1-route-policy] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] export route-policy sp evpn
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
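The sp route-policy preserves the original next hop of tagged static routes by copying it into the EVPN gateway IP attribute. A simplified single-node Python model is shown below; the assumption that the static VNF routes carry tag 1000 follows the `if-match tag 1000` clause in this step, and routes matching no node are denied, as in the device's default behavior:

```python
# Sketch of route-policy sp (permit node 10): routes tagged 1000 get
# their original next hop recorded as the gateway IP
# (apply gateway-ip origin-nexthop); unmatched routes are denied.
def apply_sp(route):
    if route.get("tag") == 1000:                 # if-match tag 1000
        return dict(route, gateway_ip=route["next_hop"])
    return None                                  # implicit deny

static_vnf_route = {"prefix": "10.10.10.10/32",
                    "next_hop": "10.1.1.2", "tag": 1000}
print(apply_sp(static_vnf_route)["gateway_ip"])  # → 10.1.1.2
```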
Step 12 On DC-GWs, configure the loopback addresses used to establish BGP VPN
peer relationships with VNFs. Configure and apply route-policies to the L3VPN
instances so that the DC-GWs advertise only VPN loopback routes and mobile
phone routes (in this example, mobile phone routes destined for a VNF use the
address 10.10.10.10) through BGP EVPN, add the GW IP attribute to the mobile
phone routes, and delete the GW IP attribute from routes received from EVPN peers.
# Configure DC-GW1.
[~DCGW1] interface LoopBack2
[*DCGW1-LoopBack2] ip binding vpn-instance vpn1
[*DCGW1-LoopBack2] ip address 33.33.33.33 255.255.255.255
[*DCGW1-LoopBack2] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] advertise l2vpn evpn
[*DCGW1-bgp-vpn1] import-route direct
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] ip ip-prefix lp index 10 permit 33.33.33.33 32
[*DCGW1] route-policy dp permit node 5
[*DCGW1-route-policy] if-match ip-prefix uIP
[*DCGW1-route-policy] apply gateway-ip origin-nexthop
[*DCGW1-route-policy] quit
[*DCGW1] route-policy dp permit node 15
[*DCGW1-route-policy] if-match ip-prefix lp
[*DCGW1-route-policy] quit
[*DCGW1] route-policy delGWIP permit node 10
[*DCGW1-route-policy] apply gateway-ip none
[*DCGW1-route-policy] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] export route-policy dp evpn
[*DCGW1-vpn-instance-vpn1] import route-policy delGWIP evpn
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
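The dp export policy and delGWIP import policy on the DC-GW can be modeled as a pair of functions. The prefix values are taken from the example (10.10.10.10/32 for mobile phone routes, 33.33.33.33/32 for DC-GW1's VPN loopback); the policy semantics are simplified for illustration:

```python
# Sketch of the DC-GW route-policies from this step:
# export "dp": advertise only the mobile phone route (gateway IP set to
# its original next hop, node 5) and the VPN loopback (node 15);
# import "delGWIP": strip the gateway IP attribute (apply gateway-ip none).
import ipaddress

U_IP = ipaddress.ip_network("10.10.10.10/32")   # mobile phone route
LP = ipaddress.ip_network("33.33.33.33/32")     # DC-GW1 VPN loopback

def dp_export(route):
    net = ipaddress.ip_network(route["prefix"])
    if net == U_IP:                              # node 5
        return dict(route, gateway_ip=route["next_hop"])
    if net == LP:                                # node 15
        return dict(route)
    return None                                  # implicit deny

def delgwip_import(route):
    route = dict(route)
    route.pop("gateway_ip", None)                # apply gateway-ip none
    return route

r = dp_export({"prefix": "10.10.10.10/32", "next_hop": "5.5.5.5"})
print(r["gateway_ip"], "gateway_ip" in delgwip_import(r))
```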
Step 13 Establish BGP VPN peer relationships between DC-GWs and VNFs and configure
DC-GWs to function as RRs.
# Configure DC-GW1.
[~DCGW1] route-policy p1 deny node 10
[*DCGW1-route-policy] quit
# Configure DC-GW2.
[~DCGW2] route-policy p1 deny node 10
[*DCGW2-route-policy] quit
[*DCGW2] bgp 100
[*DCGW2-bgp] ipv4-family vpn-instance vpn1
[*DCGW2-bgp-vpn1] peer 5.5.5.5 as-number 100
[*DCGW2-bgp-vpn1] peer 5.5.5.5 connect-interface LoopBack2
[*DCGW2-bgp-vpn1] peer 5.5.5.5 route-policy p1 export
[*DCGW2-bgp-vpn1] peer 5.5.5.5 reflect-client
[*DCGW2-bgp-vpn1] peer 6.6.6.6 as-number 100
[*DCGW2-bgp-vpn1] peer 6.6.6.6 connect-interface LoopBack2
[*DCGW2-bgp-vpn1] peer 6.6.6.6 route-policy p1 export
[*DCGW2-bgp-vpn1] peer 6.6.6.6 reflect-client
[*DCGW2-bgp-vpn1] quit
[*DCGW2-bgp] quit
[*DCGW2] commit
Step 14 Configure the load balancing function on PEs, DC-GWs, and L2GW/L3GWs.
# Configure PE1.
[~PE1] bgp 200
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] maximum load-balancing 16
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] maximum load-balancing 16
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv4-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-vpn1] maximum load-balancing 16
[*L2GW/L3GW1-bgp-vpn1] quit
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] bestroute add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
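Two knobs interact in this step: `maximum load-balancing 16` lets the router install up to 16 equal-cost routes per prefix, and BGP Add-Path lets the RR advertise up to 16 paths per prefix instead of only the best one, so the downstream devices actually receive enough paths to balance over. A simplified equal-cost path selection can be sketched as follows (the cost values are illustrative):

```python
# Simplified view of "maximum load-balancing 16": among the available
# paths for a prefix, keep only those tied for the lowest cost, capped
# at the configured maximum.
def pick_ecmp_paths(paths, max_paths=16):
    """paths: list of (next_hop, cost); keep lowest-cost paths, capped."""
    best = min(cost for _, cost in paths)
    return [nh for nh, cost in paths if cost == best][:max_paths]

paths = [("1.1.1.1", 10), ("2.2.2.2", 10), ("9.9.9.9", 20)]
print(pick_ecmp_paths(paths))  # → ['1.1.1.1', '2.2.2.2']
```

Without Add-Path, the RR would advertise only one of the tied paths and the receiver could never reach the configured load-balancing width.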
Step 15 Verify the configuration.
After completing the configurations, run the display ip routing-table vpn-
instance vpn1 command on PEs to view the mobile phone route information and
VNF route information in VPN routing tables. The following example uses the
command output on PE1.
[~PE1] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 11 Routes : 16
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 77:77
apply-label per-instance
tnl-policy srte
vpn-target 10:1 export-extcommunity
vpn-target 10:1 import-extcommunity
#
mpls lsr-id 7.7.7.7
#
mpls
mpls te
#
explicit-path PtoL71
next sid label 48090 type adjacency
next sid label 1000 type binding-sid
#
explicit-path PtoL72
next sid label 1000 type binding-sid
next sid label 48091 type adjacency
next sid label 2000 type binding-sid
#
explicit-path PtoP12
next sid label 48092 type adjacency
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0007.00
traffic-eng level-2
segment-routing mpls
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.7.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.5.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpn1
ip address 10.7.1.1 255.255.255.0
#
interface LoopBack1
ip address 7.7.7.7 255.255.255.255
isis enable 1
isis prefix-sid absolute 16100
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path PtoL71
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path PtoL72
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 8.8.8.8
mpls te signal-protocol segment-routing
mpls te tunnel-id 300
mpls te path explicit-path PtoP12
mpls te binding-sid label 1000
#
bgp 200
#
explicit-path PtoL82
next sid label 48091 type adjacency
next sid label 2000 type binding-sid
#
explicit-path PtoP21
next sid label 48092 type adjacency
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0008.00
traffic-eng level-2
segment-routing mpls
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.7.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.6.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpn1
ip address 10.8.1.1 255.255.255.0
#
interface LoopBack1
ip address 8.8.8.8 255.255.255.255
isis enable 1
isis prefix-sid absolute 16200
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path PtoL81
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path PtoL82
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 300
mpls te path explicit-path PtoP21
mpls te binding-sid label 1000
#
bgp 200
peer 1.1.1.1 as-number 100
peer 1.1.1.1 ebgp-max-hop 255
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
tunnel-policy srte
tunnel select-seq sr-te load-balance-number 3
#
evpn source-address 3.3.3.3
#
return
● DC-GW2 configuration file
#
sysname DCGW2
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 44:44
apply-label per-instance
import route-policy delGWIP evpn
export route-policy dp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
#
explicit-path DtoL42
next sid label 48003 type adjacency
#
explicit-path DtoL41
next sid label 48002 type adjacency
next sid label 48003 type adjacency
#
segment-routing
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.1.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.3.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.6.6.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
ospf prefix-sid index 40
#
interface LoopBack2
ip binding vpn-instance vpn1
ip address 44.44.44.44 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path DtoL41
mpls te binding-sid label 1000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path DtoL42
mpls te binding-sid label 2000
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 7.7.7.7 as-number 200
peer 7.7.7.7 ebgp-max-hop 255
peer 7.7.7.7 connect-interface LoopBack1
peer 7.7.7.7 egress-engineering
peer 8.8.8.8 as-number 200
peer 8.8.8.8 ebgp-max-hop 255
peer 8.8.8.8 connect-interface LoopBack1
peer 8.8.8.8 egress-engineering
#
ipv4-family unicast
undo synchronization
network 10.6.1.0 255.255.255.0
network 10.6.6.0 255.255.255.0
import-route static
import-route ospf 100
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 3.3.3.3 enable
peer 7.7.7.7 enable
peer 8.8.8.8 enable
#
link-state-family unicast
#
ipv4-family vpn-instance vpn1
import-route direct
maximum load-balancing 16
advertise l2vpn evpn
peer 5.5.5.5 as-number 100
peer 5.5.5.5 connect-interface LoopBack2
peer 5.5.5.5 route-policy p1 export
peer 5.5.5.5 reflect-client
peer 6.6.6.6 as-number 100
peer 6.6.6.6 connect-interface LoopBack2
peer 6.6.6.6 route-policy p1 export
peer 6.6.6.6 reflect-client
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 capability-advertise add-path both
peer 1.1.1.1 advertise add-path path-number 16
peer 2.2.2.2 enable
peer 2.2.2.2 reflect-client
peer 2.2.2.2 capability-advertise add-path both
peer 2.2.2.2 advertise add-path path-number 16
peer 3.3.3.3 enable
peer 3.3.3.3 reflect-client
peer 3.3.3.3 route-policy stopuIP export
#
ospf 100
opaque-capability enable
segment-routing mpls
segment-routing global-block 160000 161000
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.6.1.0 0.0.0.255
network 10.6.3.0 0.0.0.255
mpls-te enable
#
route-policy delGWIP permit node 10
apply gateway-ip none
#
route-policy dp permit node 5
if-match ip-prefix uIP
apply gateway-ip origin-nexthop
#
route-policy dp permit node 15
if-match ip-prefix lp
#
route-policy p1 deny node 10
#
route-policy stopuIP deny node 10
if-match ip-prefix uIP
#
route-policy stopuIP permit node 20
#
ip ip-prefix lp index 10 permit 44.44.44.44 32
ip ip-prefix uIP index 10 permit 10.10.10.10 32
#
ip route-static 7.7.7.7 255.255.255.255 10.6.1.1
ip route-static 8.8.8.8 255.255.255.255 10.6.6.2
#
tunnel-policy srte
tunnel select-seq sr-te load-balance-number 3
#
evpn source-address 4.4.4.4
#
return
● L2GW/L3GW1 configuration file
#
sysname L2GW/L3GW1
#
lacp e-trunk system-id 00e0-fc00-0002
lacp e-trunk priority 1
#
evpn
vlan-extend private enable
vlan-extend redirect enable
local-remote frr enable
#
mac-duplication
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 1:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 2:2
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 3:3
vpn-target 3:3 export-extcommunity
vpn-target 3:3 import-extcommunity
#
evpn vpn-instance evrf4 bd-mode
route-distinguisher 4:4
vpn-target 4:4 export-extcommunity
anycast-gateway enable
arp collect host enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ip address 10.3.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0001
anycast-gateway enable
arp collect host enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ip address 10.4.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0005
anycast-gateway enable
arp collect host enable
#
interface Eth-Trunk10
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.1111.1111.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk20
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.2222.2222.2222.2222
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface Eth-Trunk30
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.3333.3333.3333.3333
#
interface Eth-Trunk30.1 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface Eth-Trunk40
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.4444.4444.4444.4444
#
interface Eth-Trunk40.1 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.4.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.2.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
eth-trunk 10
#
interface GigabitEthernet1/0/4
undo shutdown
eth-trunk 20
#
interface GigabitEthernet1/0/5
undo shutdown
eth-trunk 30
#
interface GigabitEthernet1/0/6
undo shutdown
eth-trunk 40
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
ospf prefix-sid index 10
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path LtoD13
mpls te binding-sid label 3000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path LtoD14
mpls te binding-sid label 4000
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 3
mpls te path explicit-path LtoL
#
interface Tunnel7
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path LtoP17
#
interface Tunnel8
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 8.8.8.8
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path LtoP18
#
bgp 100
#
return
● L2GW/L3GW2 configuration file
#
sysname L2GW/L3GW2
#
lacp e-trunk system-id 00e0-fc00-0002
lacp e-trunk priority 1
#
evpn
vlan-extend private enable
vlan-extend redirect enable
local-remote frr enable
#
mac-duplication
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 1:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 2:2
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 3:3
vpn-target 3:3 export-extcommunity
vpn-target 3:3 import-extcommunity
#
evpn vpn-instance evrf4 bd-mode
route-distinguisher 4:4
vpn-target 4:4 export-extcommunity
vpn-target 4:4 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 22:22
apply-label per-instance
tnl-policy srte
export route-policy sp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 10:1 export-extcommunity
vpn-target 11:1 import-extcommunity evpn
vpn-target 10:1 import-extcommunity
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
bridge-domain 30
evpn binding vpn-instance evrf3
#
bridge-domain 40
evpn binding vpn-instance evrf4
#
explicit-path LtoD23
next sid label 48004 type adjacency
next sid label 48002 type adjacency
#
explicit-path LtoD24
next sid label 48003 type adjacency
#
explicit-path LtoP27
next sid label 3000 type binding-sid
next sid label 48121 type adjacency
#
explicit-path LtoP28
next sid label 4000 type binding-sid
next sid label 48121 type adjacency
#
explicit-path LtoL
next sid label 48004 type adjacency
#
e-trunk 1
priority 5
peer-address 1.1.1.1 source-address 2.2.2.2
#
segment-routing
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0003
anycast-gateway enable
arp collect host enable
#
interface Vbdif20
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0004
anycast-gateway enable
arp collect host enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ip address 10.3.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0001
anycast-gateway enable
arp collect host enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ip address 10.4.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0005
anycast-gateway enable
arp collect host enable
#
interface Eth-Trunk10
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.1111.1111.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk20
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.2222.2222.2222.2222
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface Eth-Trunk30
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.3333.3333.3333.3333
#
interface Eth-Trunk30.1 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface Eth-Trunk40
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.4444.4444.4444.4444
#
interface Eth-Trunk40.1 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.4.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.3.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
eth-trunk 10
#
interface GigabitEthernet1/0/4
undo shutdown
eth-trunk 20
#
interface GigabitEthernet1/0/5
undo shutdown
eth-trunk 30
#
interface GigabitEthernet1/0/6
undo shutdown
eth-trunk 40
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
ospf prefix-sid index 20
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path LtoD23
mpls te binding-sid label 3000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path LtoD24
mpls te binding-sid label 4000
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 3
mpls te path explicit-path LtoL
#
interface Tunnel7
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path LtoP27
#
interface Tunnel8
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 8.8.8.8
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path LtoP28
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
peer 7.7.7.7 as-number 200
peer 7.7.7.7 ebgp-max-hop 255
peer 7.7.7.7 connect-interface LoopBack1
peer 8.8.8.8 as-number 200
peer 8.8.8.8 ebgp-max-hop 255
peer 8.8.8.8 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 3.3.3.3 enable
peer 4.4.4.4 enable
peer 7.7.7.7 enable
peer 8.8.8.8 enable
#
ipv4-family vpnv4
policy vpn-target
peer 7.7.7.7 enable
peer 7.7.7.7 advertise route-reoriginated evpn ip
peer 8.8.8.8 enable
peer 8.8.8.8 advertise route-reoriginated evpn ip
#
ipv4-family vpn-instance vpn1
import-route direct
import-route static
maximum load-balancing 16
advertise l2vpn evpn import-route-multipath
#
l2vpn-family evpn
Networking Requirements
The NFVI telco cloud solution uses DCI+DCN networking. A large amount of
IPv6 mobile phone traffic is sent to vUGWs and vMSEs on the DCN. After being
processed by the vUGWs and vMSEs, the traffic is forwarded over the DCN to
destination devices on the Internet. The destination devices send traffic back
to the mobile phones in a similar way. To implement these functions and ensure
traffic load balancing on the DCN, you need to deploy the NFVI distributed
gateway function.
Figure 3-188 shows the networking of an NFVI distributed gateway (BGP VPNv6
over E2E SR tunnels). DC-GWs, the border gateways of the DCN, exchange
Internet routes with external devices through PEs. L2GW/L3GW1 and
L2GW/L3GW2 are connected to VNFs. VNF1 and VNF2, which function as
virtualized NEs, are deployed to implement the vUGW and vMSE functions,
respectively. Each VNF is connected to both L2GW/L3GW1 and L2GW/L3GW2
through IPUs. E2E SR tunnels are established between the PEs and L2GW/L3GWs
to carry IPv6 service traffic.
(Interface address plan from Figure 3-188:
PE1: LoopBack1 7.7.7.7/32
PE2: LoopBack1 8.8.8.8/32
DC-GW1: LoopBack1 3.3.3.3/32, LoopBack2 2001:DB8:33::33/128
DC-GW2: LoopBack1 4.4.4.4/32, LoopBack2 2001:DB8:44::44/128
L2GW/L3GW1: GigabitEthernet1/0/3 to 1/0/6 (Eth-Trunk member interfaces, no IP address), LoopBack1 1.1.1.1/32
L2GW/L3GW2: GigabitEthernet1/0/3 to 1/0/6 (Eth-Trunk member interfaces, no IP address), LoopBack1 2.2.2.2/32)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a routing protocol between PEs, DC-GWs, and L2GW/L3GWs to
ensure Layer 3 interworking between ASs.
2. Establish EBGP peer relationships between PEs and DC-GWs and IBGP peer
relationships between DC-GWs and L2GW/L3GWs.
3. Configure SR-MPLS TE tunnels between PEs and L2GW/L3GWs and between
DC-GWs and L2GW/L3GWs.
4. Configure EVPN instances on L2GW/L3GWs and bind the EVPN instances to
BDs.
5. Configure L3VPN instances on PEs, DC-GWs, and L2GW/L3GWs and bind the
L3VPN instances on L2GW/L3GWs to VBDIF interfaces.
6. Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs.
7. Configure the BGP EVPN function on DC-GWs and L2GW/L3GWs and
configure DC-GWs as RRs.
8. Configure the BGP VPNv6 function on L2GW/L3GWs and PEs.
9. Configure the Eth-Trunk Layer 2 sub-interfaces connecting L2GW/L3GWs to
VNFs and the static VPN routes destined for VNFs.
10. Configure L2GW/L3GWs to import static VPN routes through BGP EVPN.
Configure and apply a route-policy to L3VPN instances so that the static VPN
routes can retain the original next hops.
11. On DC-GWs, configure the loopback addresses used to establish BGP VPN
peer relationships with VNFs. Configure and apply route-policies to the L3VPN
instances so that the DC-GWs advertise only the VPN loopback routes and
mobile phone routes through BGP EVPN, add the GWIP attribute to the mobile
phone routes, and delete the attribute from routes received from EVPN peers.
12. Establish BGP VPN peer relationships between DC-GWs and VNFs and
configure DC-GWs to function as RRs.
13. Configure the load balancing function on PEs, DC-GWs, and L2GW/L3GWs.
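The advertisement chain described in items 9 through 11 can be sketched as a minimal model. This is illustrative only: the function and field names below are hypothetical, and real devices exchange these routes through BGP EVPN and VPNv6, not Python.

```python
# Minimal model of the roadmap's advertisement chain (illustrative only):
# L2GW/L3GW: static VPN route -> EVPN IP prefix route, with the original
#            next hop preserved in the gateway-IP attribute (items 9-10)
# L2GW/L3GW -> PE: the EVPN route is re-originated as a BGP VPNv6 route

def originate_evpn(static_route):
    # The static route is imported into BGP EVPN; the export route-policy
    # copies the original next hop into the gateway-IP attribute.
    return {"type": "evpn-ip-prefix",
            "prefix": static_route["prefix"],
            "gateway_ip": static_route["next_hop"]}

def reoriginate_vpnv6(evpn_route, advertising_gw):
    # "advertise route-reoriginated evpn ipv6": the EVPN route is
    # re-advertised to the PE in the VPNv6 address family.
    return {"type": "vpnv6",
            "prefix": evpn_route["prefix"],
            "next_hop": advertising_gw}

static = {"prefix": "2001:db8:6::6/128", "next_hop": "2001:db8:4::2"}
evpn = originate_evpn(static)
vpnv6 = reoriginate_vpnv6(evpn, "1.1.1.1")
print(vpnv6["prefix"])  # -> 2001:db8:6::6/128
```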
Procedure
Step 1 Configure IP addresses for all interfaces, including loopback interfaces, on PEs, DC-
GWs, and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
Step 2 Configure a routing protocol between PEs, DC-GWs, and L2GW/L3GWs to ensure
Layer 3 interworking between ASs.
For configuration details, see Configuration Files in this section.
Step 3 Establish EBGP peer relationships between PEs and DC-GWs and IBGP peer
relationships between DC-GWs and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
Step 4 Configure SR-MPLS TE tunnels between PEs and DC-GWs and between DC-GWs
and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
Step 5 Configure EVPN instances on L2GW/L3GWs and bind the EVPN instances to BDs.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] evpn vpn-instance evrf1 bd-mode
[*L2GW/L3GW1-evpn-instance-evrf1] route-distinguisher 1:1
[*L2GW/L3GW1-evpn-instance-evrf1] vpn-target 1:1
[*L2GW/L3GW1-evpn-instance-evrf1] quit
[*L2GW/L3GW1] evpn vpn-instance evrf2 bd-mode
[*L2GW/L3GW1-evpn-instance-evrf2] route-distinguisher 2:2
[*L2GW/L3GW1-evpn-instance-evrf2] vpn-target 2:2
[*L2GW/L3GW1-evpn-instance-evrf2] quit
[*L2GW/L3GW1] evpn vpn-instance evrf3 bd-mode
[*L2GW/L3GW1-evpn-instance-evrf3] route-distinguisher 3:3
[*L2GW/L3GW1-evpn-instance-evrf3] vpn-target 3:3
[*L2GW/L3GW1-evpn-instance-evrf3] quit
[*L2GW/L3GW1] evpn vpn-instance evrf4 bd-mode
[*L2GW/L3GW1-evpn-instance-evrf4] route-distinguisher 4:4
[*L2GW/L3GW1-evpn-instance-evrf4] vpn-target 4:4
[*L2GW/L3GW1-evpn-instance-evrf4] quit
[*L2GW/L3GW1] bridge-domain 10
[*L2GW/L3GW1-bd10] evpn binding vpn-instance evrf1
[*L2GW/L3GW1-bd10] quit
[*L2GW/L3GW1] bridge-domain 20
[*L2GW/L3GW1-bd20] evpn binding vpn-instance evrf2
[*L2GW/L3GW1-bd20] quit
[*L2GW/L3GW1] bridge-domain 30
[*L2GW/L3GW1-bd30] evpn binding vpn-instance evrf3
[*L2GW/L3GW1-bd30] quit
[*L2GW/L3GW1] bridge-domain 40
[*L2GW/L3GW1-bd40] evpn binding vpn-instance evrf4
[*L2GW/L3GW1-bd40] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 6 Configure L3VPN instances on PEs, DC-GWs, and L2GW/L3GWs and bind the
L3VPN instances on L2GW/L3GWs to VBDIF interfaces.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv6-family
[*PE1-vpn-instance-vpn1-af-ipv6] route-distinguisher 77:77
[*PE1-vpn-instance-vpn1-af-ipv6] vpn-target 10:1
[*PE1-vpn-instance-vpn1-af-ipv6] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] interface GigabitEthernet1/0/3
[*PE1-GigabitEthernet1/0/3] ip binding vpn-instance vpn1
[*PE1-GigabitEthernet1/0/3] ipv6 enable
[*PE1-GigabitEthernet1/0/3] ipv6 address 2001:DB8:7::1 64
[*PE1-GigabitEthernet1/0/3] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv6-family
[*DCGW1-vpn-instance-vpn1-af-ipv6] route-distinguisher 33:33
[*DCGW1-vpn-instance-vpn1-af-ipv6] vpn-target 11:1 evpn
[*DCGW1-vpn-instance-vpn1-af-ipv6] evpn mpls routing-enable
[*DCGW1-vpn-instance-vpn1-af-ipv6] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv6-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] route-distinguisher 11:11
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] vpn-target 11:1 evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] vpn-target 10:1
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] evpn mpls routing-enable
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] interface vbdif10
[*L2GW/L3GW1-Vbdif10] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif10] ipv6 enable
[*L2GW/L3GW1-Vbdif10] ipv6 address 2001:db8:1::1 64
[*L2GW/L3GW1-Vbdif10] ipv6 nd generate-rd-table enable
[*L2GW/L3GW1-Vbdif10] anycast-gateway enable
[*L2GW/L3GW1-Vbdif10] mac-address 00e0-fc00-0003
[*L2GW/L3GW1-Vbdif10] ipv6 nd collect host enable
[*L2GW/L3GW1-Vbdif10] ipv6 nd dad attempts 0
[*L2GW/L3GW1-Vbdif10] quit
[*L2GW/L3GW1] interface vbdif20
[*L2GW/L3GW1-Vbdif20] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif20] ipv6 enable
[*L2GW/L3GW1-Vbdif20] ipv6 address 2001:db8:2::1 64
[*L2GW/L3GW1-Vbdif20] ipv6 nd generate-rd-table enable
[*L2GW/L3GW1-Vbdif20] anycast-gateway enable
[*L2GW/L3GW1-Vbdif20] mac-address 00e0-fc00-0004
[*L2GW/L3GW1-Vbdif20] ipv6 nd collect host enable
[*L2GW/L3GW1-Vbdif20] ipv6 nd dad attempts 0
[*L2GW/L3GW1-Vbdif20] quit
[*L2GW/L3GW1] interface vbdif30
[*L2GW/L3GW1-Vbdif30] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif30] ipv6 enable
[*L2GW/L3GW1-Vbdif30] ipv6 address 2001:db8:3::1 64
[*L2GW/L3GW1-Vbdif30] ipv6 nd generate-rd-table enable
[*L2GW/L3GW1-Vbdif30] anycast-gateway enable
[*L2GW/L3GW1-Vbdif30] mac-address 00e0-fc00-0001
[*L2GW/L3GW1-Vbdif30] ipv6 nd collect host enable
[*L2GW/L3GW1-Vbdif30] ipv6 nd dad attempts 0
[*L2GW/L3GW1-Vbdif30] quit
[*L2GW/L3GW1] interface vbdif40
[*L2GW/L3GW1-Vbdif40] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif40] ipv6 enable
[*L2GW/L3GW1-Vbdif40] ipv6 address 2001:db8:4::1 64
[*L2GW/L3GW1-Vbdif40] ipv6 nd generate-rd-table enable
[*L2GW/L3GW1-Vbdif40] anycast-gateway enable
[*L2GW/L3GW1-Vbdif40] mac-address 00e0-fc00-0005
[*L2GW/L3GW1-Vbdif40] ipv6 nd collect host enable
[*L2GW/L3GW1-Vbdif40] ipv6 nd dad attempts 0
[*L2GW/L3GW1-Vbdif40] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 7 Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs.
# Configure PE1.
[~PE1] tunnel-policy srte
[*PE1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*PE1-tunnel-policy-srte] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv6-family
[*PE1-vpn-instance-vpn1-af-ipv6] tnl-policy srte
[*PE1-vpn-instance-vpn1-af-ipv6] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] tunnel-policy srte
[*DCGW1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*DCGW1-tunnel-policy-srte] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv6-family
[*DCGW1-vpn-instance-vpn1-af-ipv6] tnl-policy srte evpn
[*DCGW1-vpn-instance-vpn1-af-ipv6] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] tunnel-policy srte
[*L2GW/L3GW1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*L2GW/L3GW1-tunnel-policy-srte] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv6-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] tnl-policy srte
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] tnl-policy srte evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 8 Configure the BGP EVPN function on DC-GWs and L2GW/L3GWs and configure
DC-GWs as RRs.
# Configure DC-GW1.
[~DCGW1] ip ipv6-prefix uIP index 10 permit 2001:DB8:10::10 128
[*DCGW1] route-policy stopuIP deny node 10
[*DCGW1-route-policy] if-match ipv6 address prefix-list uIP
[*DCGW1-route-policy] quit
[*DCGW1] route-policy stopuIP permit node 20
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 enable
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 reflect-client
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 enable
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 reflect-client
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 enable
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 reflect-client
[*DCGW1-bgp-af-evpn] peer 4.4.4.4 route-policy stopuIP export
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] l2vpn-family evpn
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
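The intent of the stopuIP export policy applied toward peer 4.4.4.4 can be illustrated with a minimal filter model. The Python below is purely illustrative (the function name is hypothetical); node fall-through on the device follows route-policy semantics, simplified here to two cases.

```python
# Illustrative model (not device code) of route-policy stopuIP on DC-GW1:
# node 10 denies routes matching prefix-list uIP (the mobile phone host
# route), and node 20 permits everything else, so the mobile phone route is
# not reflected toward the peer the policy is applied to.
UIP_PREFIX = "2001:db8:10::10/128"   # ip ipv6-prefix uIP

def stopuip_export(prefix):
    if prefix == UIP_PREFIX:   # route-policy stopuIP deny node 10
        return False
    return True                # route-policy stopuIP permit node 20

print(stopuip_export("2001:db8:10::10/128"))  # -> False
print(stopuip_export("2001:db8:33::33/128"))  # -> True
```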
Step 9 Configure the BGP VPNv6 function on L2GW/L3GWs and PEs.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] peer 7.7.7.7 as-number 200
[*L2GW/L3GW1-bgp] peer 7.7.7.7 ebgp-max-hop 255
[*L2GW/L3GW1-bgp] peer 7.7.7.7 connect-interface LoopBack1
[*L2GW/L3GW1-bgp] peer 8.8.8.8 as-number 200
[*L2GW/L3GW1-bgp] peer 8.8.8.8 ebgp-max-hop 255
[*L2GW/L3GW1-bgp] peer 8.8.8.8 connect-interface LoopBack1
[*L2GW/L3GW1-bgp] ipv6-family vpnv6
[*L2GW/L3GW1-bgp-af-vpnv6] peer 7.7.7.7 enable
[*L2GW/L3GW1-bgp-af-vpnv6] peer 7.7.7.7 advertise route-reoriginated evpn ipv6
[*L2GW/L3GW1-bgp-af-vpnv6] peer 8.8.8.8 enable
[*L2GW/L3GW1-bgp-af-vpnv6] peer 8.8.8.8 advertise route-reoriginated evpn ipv6
[*L2GW/L3GW1-bgp-af-vpnv6] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
# Configure PE1.
[~PE1] bgp 200
[*PE1-bgp] peer 1.1.1.1 as-number 200
[*PE1-bgp] peer 1.1.1.1 ebgp-max-hop 255
[*PE1-bgp] peer 1.1.1.1 connect-interface LoopBack1
[*PE1-bgp] peer 2.2.2.2 as-number 200
[*PE1-bgp] peer 2.2.2.2 ebgp-max-hop 255
[*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack1
[*PE1-bgp] ipv6-family vpnv6
[*PE1-bgp-af-vpnv6] peer 1.1.1.1 enable
[*PE1-bgp-af-vpnv6] peer 2.2.2.2 enable
[*PE1-bgp-af-vpnv6] quit
[*PE1-bgp] ipv6-family vpn-instance vpn1
[*PE1-bgp-6-vpn1] import-route static
[*PE1-bgp-6-vpn1] import-route direct
[*PE1-bgp-6-vpn1] advertise l2vpn evpn
[*PE1-bgp-6-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
Step 10 Configure the Eth-Trunk Layer 2 sub-interfaces connecting L2GW/L3GWs to VNFs
and the static VPN routes destined for VNFs.
# Configure L2GW/L3GW1.
[*L2GW/L3GW1] ipv6 route-static vpn-instance vpn1 2001:db8:6::6 128 2001:db8:4::2 preference 255
tag 1000 inter-protocol-ecmp
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 11 Configure L2GW/L3GWs to import static VPN routes through BGP EVPN. Configure
and apply a route-policy to L3VPN instances so that the static VPN routes can
retain the original next hops.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv6-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-6-vpn1] import-route static
[*L2GW/L3GW1-bgp-6-vpn1] import-route direct
[*L2GW/L3GW1-bgp-6-vpn1] advertise l2vpn evpn import-route-multipath
[*L2GW/L3GW1-bgp-6-vpn1] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] route-policy sp permit node 10
[*L2GW/L3GW1-route-policy] if-match tag 1000
[*L2GW/L3GW1-route-policy] apply gateway-ip origin-nexthop
[*L2GW/L3GW1-route-policy] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv6-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] export route-policy sp evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv6] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
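The effect of route-policy sp can be sketched as follows. This is a simplified, illustrative model (hypothetical names): it shows only the gateway-IP copy for tagged routes and does not model full route-policy node semantics.

```python
# Simplified model of route-policy sp on the L2GW/L3GWs: static VPN routes
# toward the VNFs carry tag 1000, and the policy copies each tagged route's
# original next hop into the EVPN gateway-IP attribute so the next hop
# survives re-advertisement through BGP EVPN.
def apply_sp(route):
    if route.get("tag") == 1000:                 # if-match tag 1000
        route["gateway_ip"] = route["next_hop"]  # apply gateway-ip origin-nexthop
    return route

vnf_route = {"prefix": "2001:db8:6::6/128",
             "next_hop": "2001:db8:4::2", "tag": 1000}
print(apply_sp(vnf_route)["gateway_ip"])  # -> 2001:db8:4::2
```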
Step 12 On DC-GWs, configure the loopback addresses used to establish BGP VPN peer
relationships with VNFs. Configure and apply route-policies to the L3VPN
instances so that the DC-GWs advertise only the VPN loopback routes and mobile
phone routes (in this example, the mobile phone routes destined for a VNF use
the destination address 2001:DB8:10::10) through BGP EVPN, add the GWIP
attribute to the mobile phone routes, and delete the attribute from routes
received from EVPN peers.
# Configure DC-GW1.
[~DCGW1] interface LoopBack2
[*DCGW1-LoopBack2] ip binding vpn-instance vpn1
[*DCGW1-LoopBack2] ipv6 enable
[*DCGW1-LoopBack2] ipv6 address 2001:db8:33::33 128
[*DCGW1-LoopBack2] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv6-family vpn-instance vpn1
[*DCGW1-bgp-vpn6] advertise l2vpn evpn
[*DCGW1-bgp-vpn6] import-route direct
[*DCGW1-bgp-vpn6] quit
[*DCGW1-bgp] quit
[*DCGW1] ip ipv6-prefix lp index 10 permit 2001:DB8:33::33 128
[*DCGW1] route-policy dp permit node 5
[*DCGW1-route-policy] if-match ipv6 address prefix-list uIP
[*DCGW1-route-policy] apply ipv6 gateway-ip origin-nexthop
[*DCGW1-route-policy] quit
[*DCGW1] route-policy dp permit node 15
[*DCGW1-route-policy] if-match ipv6 address prefix-list lp
[*DCGW1-route-policy] quit
[*DCGW1] route-policy delGWIP permit node 10
[*DCGW1-route-policy] apply ipv6 gateway-ip none
[*DCGW1-route-policy] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv6-family
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
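The combined effect of the dp and delGWIP policies can be modeled minimally. The Python below is illustrative only (hypothetical function names); on the device, routes matching no node of route-policy dp are denied by default.

```python
# Illustrative model of the DC-GW policies: "dp" exports only the mobile
# phone route (with the gateway IP set to the original next hop) and the VPN
# loopback route; "delGWIP" strips the gateway IP from routes received from
# EVPN peers.
UIP = "2001:db8:10::10/128"   # mobile phone route (prefix-list uIP)
LP = "2001:db8:33::33/128"    # VPN loopback route (prefix-list lp)

def export_dp(route):
    if route["prefix"] == UIP:                   # node 5
        route["gateway_ip"] = route["next_hop"]  # apply ipv6 gateway-ip origin-nexthop
        return route
    if route["prefix"] == LP:                    # node 15
        return route
    return None                                  # no matching node: denied

def import_delgwip(route):
    route["gateway_ip"] = None                   # apply ipv6 gateway-ip none
    return route

phone_route = {"prefix": UIP, "next_hop": "2001:db8:4::2"}
print(export_dp(phone_route)["gateway_ip"])  # -> 2001:db8:4::2
```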
Step 13 Establish BGP VPN peer relationships between DC-GWs and VNFs and configure
DC-GWs to function as RRs.
# Configure DC-GW1.
[~DCGW1] route-policy p1 deny node 10
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv6-family vpn-instance vpn1
[*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 as-number 100
[*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 connect-interface LoopBack2
[*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 route-policy p1 export
[*DCGW1-bgp-6-vpn1] peer 2001:db8:5::5 reflect-client
[*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 as-number 100
[*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 connect-interface LoopBack2
[*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 route-policy p1 export
[*DCGW1-bgp-6-vpn1] peer 2001:db8:6::6 reflect-client
[*DCGW1-bgp-6-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
# Configure DC-GW2.
[~DCGW2] route-policy p1 deny node 10
[*DCGW2-route-policy] quit
[*DCGW2] bgp 100
[*DCGW2-bgp] ipv6-family vpn-instance vpn1
[*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 as-number 100
[*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 connect-interface LoopBack2
[*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 route-policy p1 export
[*DCGW2-bgp-6-vpn1] peer 2001:db8:5::5 reflect-client
[*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 as-number 100
[*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 connect-interface LoopBack2
[*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 route-policy p1 export
[*DCGW2-bgp-6-vpn1] peer 2001:db8:6::6 reflect-client
[*DCGW2-bgp-6-vpn1] quit
[*DCGW2-bgp] quit
[*DCGW2] commit
Step 14 Configure the load balancing function on PEs, DC-GWs, and L2GW/L3GWs.
# Configure PE1.
[~PE1] bgp 200
[*PE1-bgp] ipv6-family vpn-instance vpn1
[*PE1-bgp-6-vpn1] maximum load-balancing 16
[*PE1-bgp-6-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] bgp 100
[*DCGW1-bgp] ipv6-family vpn-instance vpn1
[*DCGW1-bgp-6-vpn1] maximum load-balancing 16
[*DCGW1-bgp-6-vpn1] quit
[*DCGW1-bgp] l2vpn-family evpn
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv6-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-6-vpn1] maximum load-balancing 16
[*L2GW/L3GW1-bgp-6-vpn1] quit
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] bestroute add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
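The load-balancing behavior configured above can be sketched as a per-flow ECMP model. This is illustrative only (hypothetical names): forwarding hardware uses its own hash, not Python's.

```python
# Illustrative per-flow ECMP model for "maximum load-balancing 16": up to 16
# equal-cost BGP next hops are installed, and a flow hash picks one of them,
# so packets of a single flow stay on one path while different flows spread
# across the installed paths.
def pick_next_hop(flow_id, next_hops, max_paths=16):
    paths = next_hops[:max_paths]          # install at most 16 paths
    return paths[hash(flow_id) % len(paths)]

installed = ["3.3.3.3", "4.4.4.4"]         # e.g. routes via DC-GW1 and DC-GW2
flow = ("2001:db8:7::2", "2001:db8:10::10", 443)  # (src, dst, port) example
print(pick_next_hop(flow, installed) in installed)  # -> True
```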
Step 15 Verify the configuration.
After completing the configurations, run the display ipv6 routing-table
vpn-instance vpn1 command on the PEs to view the mobile phone and VNF routes
in the VPN routing tables. The following example shows the command output on
PE1.
[~PE1] display ipv6 routing-table vpn-instance vpn1
Routing Table : vpn1
Destinations : 10 Routes : 14
Destination : :: PrefixLength : 0
NextHop : :: Preference : 60
Cost : 0 Protocol : Static
RelayNextHop : :: TunnelID : 0x0
Interface : NULL0 Flags : DB
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
ipv6-family
route-distinguisher 77:77
apply-label per-instance
tnl-policy srte
vpn-target 10:1 export-extcommunity
vpn-target 10:1 import-extcommunity
#
mpls lsr-id 7.7.7.7
#
mpls
mpls te
#
explicit-path PtoL71
next sid label 48090 type adjacency
next sid label 1000 type binding-sid
#
explicit-path PtoL72
next sid label 1000 type binding-sid
next sid label 48091 type adjacency
next sid label 2000 type binding-sid
#
explicit-path PtoP12
next sid label 48092 type adjacency
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0007.00
traffic-eng level-2
segment-routing mpls
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.7.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.5.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:7::1/64
#
interface LoopBack1
ip address 7.7.7.7 255.255.255.255
isis enable 1
isis prefix-sid absolute 16100
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path PtoL71
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
apply-label per-instance
tnl-policy srte
vpn-target 10:1 export-extcommunity
vpn-target 10:1 import-extcommunity
#
mpls lsr-id 8.8.8.8
#
mpls
mpls te
#
explicit-path PtoL81
next sid label 1000 type binding-sid
next sid label 48090 type adjacency
next sid label 1000 type binding-sid
#
explicit-path PtoL82
next sid label 48091 type adjacency
next sid label 2000 type binding-sid
#
explicit-path PtoP21
next sid label 48092 type adjacency
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0008.00
traffic-eng level-2
segment-routing mpls
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.7.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.6.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:8::1/64
#
interface LoopBack1
ip address 8.8.8.8 255.255.255.255
isis enable 1
isis prefix-sid absolute 16200
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path PtoL81
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path PtoL82
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 300
mpls te path explicit-path PtoP21
mpls te binding-sid label 1000
#
bgp 200
peer 1.1.1.1 as-number 100
peer 1.1.1.1 ebgp-max-hop 255
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 ebgp-max-hop 255
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 ebgp-max-hop 255
peer 3.3.3.3 connect-interface LoopBack1
peer 3.3.3.3 egress-engineering
peer 4.4.4.4 as-number 100
peer 4.4.4.4 ebgp-max-hop 255
peer 4.4.4.4 connect-interface LoopBack1
peer 4.4.4.4 egress-engineering
#
ipv4-family unicast
undo synchronization
network 8.8.8.8 255.255.255.255
network 10.6.6.0 255.255.255.0
network 10.6.7.0 255.255.255.0
import-route direct
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
link-state-family unicast
#
ipv6-family vpnv6
policy vpn-target
peer 1.1.1.1 enable
peer 2.2.2.2 enable
#
ipv6-family vpn-instance vpn1
import-route direct
import-route static
maximum load-balancing 16
advertise l2vpn evpn
#
ip route-static 3.3.3.3 255.255.255.255 10.6.7.1
ip route-static 4.4.4.4 255.255.255.255 10.6.6.1
ip route-static 7.7.7.7 255.255.255.255 10.6.7.1
#
ipv6 route-static vpn-instance vpn1 :: 0 NULL0 tag 2000
#
tunnel-policy srte
tunnel select-seq sr-te load-balance-number 3
#
return
● DC-GW1 configuration file
#
sysname DCGW1
#
ip vpn-instance vpn1
ipv6-family
route-distinguisher 33:33
apply-label per-instance
import route-policy delGWIP evpn
#
route-policy dp permit node 15
if-match ipv6 address prefix-list lp
#
route-policy p1 deny node 10
#
route-policy stopuIP deny node 10
if-match ipv6 address prefix-list uIP
#
route-policy stopuIP permit node 20
#
ip route-static 7.7.7.7 255.255.255.255 10.6.5.2
ip route-static 8.8.8.8 255.255.255.255 10.6.1.2
#
ip ipv6-prefix lp index 10 permit 2001:DB8:33::33 128
ip ipv6-prefix uIP index 10 permit 2001:DB8:10::10 128
#
tunnel-policy srte
tunnel select-seq sr-te load-balance-number 3
#
evpn source-address 3.3.3.3
#
return
● DC-GW2 configuration file
#
sysname DCGW2
#
ip vpn-instance vpn1
ipv6-family
route-distinguisher 44:44
apply-label per-instance
import route-policy delGWIP evpn
export route-policy dp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
#
explicit-path DtoL42
next sid label 48003 type adjacency
#
explicit-path DtoL41
next sid label 48002 type adjacency
next sid label 48003 type adjacency
#
segment-routing
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.1.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.3.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.6.6.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
ospf prefix-sid index 40
#
interface LoopBack2
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:44::44/128
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path DtoL41
mpls te binding-sid label 1000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path DtoL42
mpls te binding-sid label 2000
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 7.7.7.7 as-number 200
peer 7.7.7.7 ebgp-max-hop 255
peer 7.7.7.7 connect-interface LoopBack1
peer 7.7.7.7 egress-engineering
peer 8.8.8.8 as-number 200
peer 8.8.8.8 ebgp-max-hop 255
peer 8.8.8.8 connect-interface LoopBack1
peer 8.8.8.8 egress-engineering
#
ipv4-family unicast
undo synchronization
network 10.6.1.0 255.255.255.0
network 10.6.6.0 255.255.255.0
import-route static
import-route ospf 100
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 3.3.3.3 enable
peer 7.7.7.7 enable
peer 8.8.8.8 enable
#
link-state-family unicast
#
ipv6-family vpn-instance vpn1
import-route direct
maximum load-balancing 16
advertise l2vpn evpn
peer 2001:DB8:5::5 as-number 100
peer 2001:DB8:5::5 connect-interface LoopBack2
peer 2001:DB8:5::5 route-policy p1 export
peer 2001:DB8:5::5 reflect-client
peer 2001:DB8:6::6 as-number 100
peer 2001:DB8:6::6 connect-interface LoopBack2
peer 2001:DB8:6::6 route-policy p1 export
#
e-trunk 1
priority 5
peer-address 2.2.2.2 source-address 1.1.1.1
#
segment-routing
#
interface Vbdif10
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:1::1/64
mac-address 00e0-fc00-0003
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif20
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:2::1/64
mac-address 00e0-fc00-0004
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:3::1/64
mac-address 00e0-fc00-0001
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:4::1/64
mac-address 00e0-fc00-0005
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Eth-Trunk10
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.1111.1111.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk20
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.2222.2222.2222.2222
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface Eth-Trunk30
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.3333.3333.3333.3333
#
interface Eth-Trunk30.1 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface Eth-Trunk40
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.4444.4444.4444.4444
#
interface Eth-Trunk40.1 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.4.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.2.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
eth-trunk 10
#
interface GigabitEthernet1/0/4
undo shutdown
eth-trunk 20
#
interface GigabitEthernet1/0/5
undo shutdown
eth-trunk 30
#
interface GigabitEthernet1/0/6
undo shutdown
eth-trunk 40
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
ospf prefix-sid index 10
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path LtoD13
mpls te binding-sid label 3000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path LtoD14
mpls te binding-sid label 4000
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 3
mpls te path explicit-path LtoL
#
interface Tunnel7
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path LtoP17
#
interface Tunnel8
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 8.8.8.8
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path LtoP18
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
peer 7.7.7.7 as-number 200
peer 7.7.7.7 ebgp-max-hop 255
peer 7.7.7.7 connect-interface LoopBack1
peer 8.8.8.8 as-number 200
peer 8.8.8.8 ebgp-max-hop 255
peer 8.8.8.8 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
peer 4.4.4.4 enable
peer 7.7.7.7 enable
peer 8.8.8.8 enable
#
ipv6-family vpnv6
policy vpn-target
peer 7.7.7.7 enable
peer 7.7.7.7 advertise route-reoriginated evpn ipv6
peer 8.8.8.8 enable
peer 8.8.8.8 advertise route-reoriginated evpn ipv6
#
ipv6-family vpn-instance vpn1
import-route direct
import-route static
maximum load-balancing 16
advertise l2vpn evpn import-route-multipath
#
l2vpn-family evpn
undo policy vpn-target
bestroute add-path path-number 16
peer 3.3.3.3 enable
peer 3.3.3.3 capability-advertise add-path both
peer 3.3.3.3 advertise add-path path-number 16
peer 3.3.3.3 import reoriginate
peer 4.4.4.4 enable
peer 4.4.4.4 capability-advertise add-path both
apply-label per-instance
tnl-policy srte
export route-policy sp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 10:1 export-extcommunity
vpn-target 11:1 import-extcommunity evpn
vpn-target 10:1 import-extcommunity
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 2.2.2.2
#
mpls
mpls te
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
bridge-domain 30
evpn binding vpn-instance evrf3
#
bridge-domain 40
evpn binding vpn-instance evrf4
#
explicit-path LtoD23
next sid label 48004 type adjacency
next sid label 48002 type adjacency
#
explicit-path LtoD24
next sid label 48003 type adjacency
#
explicit-path LtoP27
next sid label 3000 type binding-sid
next sid label 48121 type adjacency
#
explicit-path LtoP28
next sid label 4000 type binding-sid
next sid label 48121 type adjacency
#
explicit-path LtoL
next sid label 48004 type adjacency
#
e-trunk 1
priority 5
peer-address 1.1.1.1 source-address 2.2.2.2
#
segment-routing
#
interface Vbdif10
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:1::1/64
mac-address 00e0-fc00-0003
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif20
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:2::1/64
mac-address 00e0-fc00-0004
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:3::1/64
mac-address 00e0-fc00-0001
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ipv6 enable
ipv6 address 2001:DB8:4::1/64
mac-address 00e0-fc00-0005
ipv6 nd dad attempts 0
ipv6 nd collect host enable
ipv6 nd generate-rd-table enable
anycast-gateway enable
#
interface Eth-Trunk10
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.1111.1111.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk20
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.2222.2222.2222.2222
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface Eth-Trunk30
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.3333.3333.3333.3333
#
interface Eth-Trunk30.1 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface Eth-Trunk40
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.4444.4444.4444.4444
#
interface Eth-Trunk40.1 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.4.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.3.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
eth-trunk 10
#
interface GigabitEthernet1/0/4
undo shutdown
eth-trunk 20
#
interface GigabitEthernet1/0/5
undo shutdown
eth-trunk 30
#
interface GigabitEthernet1/0/6
undo shutdown
eth-trunk 40
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
ospf prefix-sid index 20
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path LtoD23
mpls te binding-sid label 3000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path LtoD24
mpls te binding-sid label 4000
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 3
mpls te path explicit-path LtoL
#
interface Tunnel7
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path LtoP27
#
interface Tunnel8
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 8.8.8.8
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path LtoP28
#
ipv6 route-static vpn-instance vpn1 2001:DB8:6::6 128 2001:DB8:4::2 preference 255 tag 1000 inter-protocol-ecmp
#
tunnel-policy srte
tunnel select-seq sr-te load-balance-number 3
#
evpn source-address 2.2.2.2
#
return
Networking Requirements
The NFVI telco cloud solution uses the DCI+DCN networking. A large amount of
IPv4 mobile phone traffic is sent to vUGWs and vMSEs on the DCN. After being
processed by the vUGWs and vMSEs, the mobile phone traffic is forwarded over
the DCN to destination devices on the Internet. The destination devices send
traffic back to the mobile phones in a similar manner. To achieve these functions and ensure
traffic load balancing on the DCN, you need to deploy the NFVI distributed
gateway function.
Figure 3-189 shows the networking of an NFVI distributed gateway (BGP EVPN
over E2E SR tunnels). DC-GWs, which are the border gateways of the DCN,
exchange Internet routes with external devices through PEs. L2GW/L3GW1 and
L2GW/L3GW2 are connected to VNFs. VNF1 and VNF2, deployed as virtualized
NEs, implement the vUGW and vMSE functions, respectively. Each VNF is
connected to both L2GW/L3GW1 and L2GW/L3GW2 through IPUs. BGP EVPN peer
relationships are established between PEs and L2GW/L3GWs and between
DC-GWs and L2GW/L3GWs, and L2GW/L3GWs are deployed as EVPN RRs. E2E SR
tunnels are established between PEs and L2GW/L3GWs to carry IPv4 service
traffic.
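The RR deployment matters because an IBGP speaker does not, by default, re-advertise a route learned from one IBGP peer to another IBGP peer; only a route reflector with configured reflect-clients does. The following toy Python model of that propagation rule uses purely illustrative peer names, not devices from this example:

```python
# Minimal model of the IBGP route-reflection rule this design relies on:
# by default an IBGP speaker does not pass routes between IBGP peers,
# so the RR must mark selected peers as reflect-clients.

def reflection_targets(learned_from, ibgp_peers, clients):
    """IBGP peers to which a route learned from `learned_from` is re-advertised."""
    if learned_from in clients:
        # Routes from a client are reflected to all other IBGP peers.
        return {p for p in ibgp_peers if p != learned_from}
    # Routes from a non-client go only to the reflect-clients.
    return {c for c in clients if c != learned_from}

ibgp_peers = {"gw-a", "gw-b", "peer-c"}  # hypothetical IBGP peers of the RR
clients = {"gw-a", "gw-b"}               # configured with `reflect-client`
print(sorted(reflection_targets("gw-a", ibgp_peers, clients)))    # ['gw-b', 'peer-c']
print(sorted(reflection_targets("peer-c", ibgp_peers, clients)))  # ['gw-a', 'gw-b']
```

Without the `clients` set, a route from `peer-c` would reach nobody, which is why plain IBGP full meshes are otherwise required.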
Interface and IP address assignments (table): PE1 LoopBack1 7.7.7.7/32; PE2
LoopBack1 8.8.8.8/32; DC-GW1 LoopBack1 3.3.3.3/32 and LoopBack2
33.33.33.33/32; DC-GW2 LoopBack1 4.4.4.4/32 and LoopBack2 44.44.44.44/32;
L2GW/L3GW1 LoopBack1 1.1.1.1/32 and GigabitEthernet1/0/3 through 1/0/6
(Eth-Trunk member interfaces, no IP addresses); L2GW/L3GW2 LoopBack1
2.2.2.2/32 and GigabitEthernet1/0/3 through 1/0/6 (Eth-Trunk member
interfaces, no IP addresses).
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a routing protocol between PEs, DC-GWs, and L2GW/L3GWs to
ensure Layer 3 interworking between ASs.
2. Establish EBGP peer relationships between PEs and both DC-GWs and
L2GW/L3GWs, and IBGP peer relationships between DC-GWs and L2GW/L3GWs.
3. Configure SR-MPLS TE tunnels between PEs and L2GW/L3GWs and between
DC-GWs and L2GW/L3GWs.
4. Configure EVPN instances on L2GW/L3GWs and bind the EVPN instances to
BDs.
5. Configure L3VPN instances on PEs, DC-GWs, and L2GW/L3GWs and bind the
L3VPN instances on L2GW/L3GWs to VBDIF interfaces.
6. Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs.
7. Configure the BGP EVPN function on PEs, DC-GWs and L2GW/L3GWs and
configure L2GW/L3GWs as RRs.
8. Configure the Eth-Trunk Layer 2 sub-interfaces connecting L2GW/L3GWs to
VNFs and the static VPN routes destined for VNFs.
9. Configure L2GW/L3GWs to import static VPN routes through BGP EVPN.
Configure and apply a route-policy to L3VPN instances so that the static VPN
routes can retain the original next hops.
10. Configure and apply a route-policy to the L3VPN instances on PEs so that
the PEs can delete the GWIP attribute carried in the VNF routes received
from EVPN peers.
11. On DC-GWs, configure the loopback addresses used to establish BGP VPN
peer relationships with VNFs. Configure and apply a route-policy to L3VPN
instances so that DC-GWs can only advertise VPN loopback routes through
BGP EVPN and delete the GWIP attribute from the routes received from EVPN
peers.
12. Establish BGP VPN peer relationships between DC-GWs and VNFs and
configure DC-GWs to function as RRs.
13. Configure the load balancing function on PEs, DC-GWs, and L2GW/L3GWs.
Procedure
Step 1 Configure IP addresses for all interfaces, including loopback interfaces, on PEs, DC-
GWs, and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
Step 2 Configure a routing protocol between PEs, DC-GWs, and L2GW/L3GWs to ensure
Layer 3 interworking between ASs.
For configuration details, see Configuration Files in this section.
Step 3 Establish EBGP peer relationships between PEs and both DC-GWs and
L2GW/L3GWs, and IBGP peer relationships between DC-GWs and L2GW/L3GWs.
For configuration details, see Configuration Files in this section.
Step 4 Configure SR-MPLS TE tunnels between PEs and L2GW/L3GWs and between
DC-GWs and L2GW/L3GWs.
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 6 Configure L3VPN instances on PEs, DC-GWs, and L2GW/L3GWs and bind the
L3VPN instances on L2GW/L3GWs to VBDIF interfaces.
# Configure PE1.
[~PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] route-distinguisher 77:77
[*PE1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] evpn mpls routing-enable
[*PE1-vpn-instance-vpn1] quit
[*PE1] interface GigabitEthernet1/0/3
[*PE1-GigabitEthernet1/0/3] ip binding vpn-instance vpn1
[*PE1-GigabitEthernet1/0/3] ip address 10.7.1.1 255.255.255.0
[*PE1-GigabitEthernet1/0/3] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv4-family
[*DCGW1-vpn-instance-vpn1-af-ipv4] route-distinguisher 33:33
[*DCGW1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*DCGW1-vpn-instance-vpn1-af-ipv4] quit
[*DCGW1-vpn-instance-vpn1] evpn mpls routing-enable
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv4-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] route-distinguisher 11:11
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] vpn-target 11:1 evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] quit
[*L2GW/L3GW1-vpn-instance-vpn1] evpn mpls routing-enable
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] interface vbdif10
[*L2GW/L3GW1-Vbdif10] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif10] ip address 10.1.1.1 24
[*L2GW/L3GW1-Vbdif10] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif10] anycast-gateway enable
[*L2GW/L3GW1-Vbdif10] mac-address 00e0-fc00-0003
[*L2GW/L3GW1-Vbdif10] arp collect host enable
[*L2GW/L3GW1-Vbdif10] quit
[*L2GW/L3GW1] interface vbdif20
[*L2GW/L3GW1-Vbdif20] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif20] ip address 10.2.1.1 24
[*L2GW/L3GW1-Vbdif20] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif20] anycast-gateway enable
[*L2GW/L3GW1-Vbdif20] mac-address 00e0-fc00-0004
[*L2GW/L3GW1-Vbdif20] arp collect host enable
[*L2GW/L3GW1-Vbdif20] quit
[*L2GW/L3GW1] interface vbdif30
[*L2GW/L3GW1-Vbdif30] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif30] ip address 10.3.1.1 24
[*L2GW/L3GW1-Vbdif30] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif30] anycast-gateway enable
[*L2GW/L3GW1-Vbdif30] mac-address 00e0-fc00-0001
[*L2GW/L3GW1-Vbdif30] arp collect host enable
[*L2GW/L3GW1-Vbdif30] quit
[*L2GW/L3GW1] interface vbdif40
[*L2GW/L3GW1-Vbdif40] ip binding vpn-instance vpn1
[*L2GW/L3GW1-Vbdif40] ip address 10.4.1.1 24
[*L2GW/L3GW1-Vbdif40] arp generate-rd-table enable
[*L2GW/L3GW1-Vbdif40] anycast-gateway enable
[*L2GW/L3GW1-Vbdif40] mac-address 00e0-fc00-0005
[*L2GW/L3GW1-Vbdif40] arp collect host enable
[*L2GW/L3GW1-Vbdif40] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
Step 7 Configure a tunnel policy on PEs, DC-GWs, and L2GW/L3GWs.
# Configure PE1.
[~PE1] tunnel-policy srte
[*PE1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*PE1-tunnel-policy-srte] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] ipv4-family
[*PE1-vpn-instance-vpn1-af-ipv4] tnl-policy srte evpn
[*PE1-vpn-instance-vpn1-af-ipv4] quit
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] tunnel-policy srte
[*DCGW1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*DCGW1-tunnel-policy-srte] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] ipv4-family
[*DCGW1-vpn-instance-vpn1-af-ipv4] tnl-policy srte evpn
[*DCGW1-vpn-instance-vpn1-af-ipv4] quit
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] tunnel-policy srte
[*L2GW/L3GW1-tunnel-policy-srte] tunnel select-seq sr-te load-balance-number 3
[*L2GW/L3GW1-tunnel-policy-srte] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] ipv4-family
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] tnl-policy srte evpn
[*L2GW/L3GW1-vpn-instance-vpn1-af-ipv4] quit
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
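The tunnel policy above makes VPN traffic select SR-TE tunnels and balance flows over at most three of them. A rough Python sketch of that selection logic follows; the hash function and data layout are illustrative assumptions, not the device's actual (proprietary) algorithm:

```python
import hashlib

# Rough model of `tunnel select-seq sr-te load-balance-number 3`: keep only
# SR-TE tunnels, cap the ECMP set at three, then pin each flow to one
# tunnel with a stable hash so a flow always follows the same path.

def select_tunnel(tunnels, flow, max_paths=3):
    candidates = [t for t in tunnels if t["type"] == "sr-te"][:max_paths]
    if not candidates:
        return None  # policy finds no eligible tunnel
    digest = hashlib.md5(repr(flow).encode()).digest()
    return candidates[int.from_bytes(digest[:4], "big") % len(candidates)]

tunnels = [
    {"name": "Tunnel1", "type": "sr-te"},
    {"name": "Tunnel2", "type": "sr-te"},
    {"name": "Tunnel9", "type": "ldp"},  # not selected by this policy
]
flow = ("10.1.1.10", "10.10.10.10", 6, 1024, 80)  # hypothetical 5-tuple
print(select_tunnel(tunnels, flow)["name"])
```

The key property the sketch preserves is determinism per flow: reordering within a flow is avoided while different flows spread across the tunnel set.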
Step 8 Configure the BGP EVPN function on PEs, DC-GWs, and L2GW/L3GWs and
configure L2GW/L3GWs as RRs.
# Configure PE1.
[~PE1] ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000
[*PE1] bgp 200
[*PE1-bgp] peer 1.1.1.1 as-number 100
[*PE1-bgp] peer 1.1.1.1 ebgp-max-hop 255
[*PE1-bgp] peer 1.1.1.1 connect-interface LoopBack1
[*PE1-bgp] peer 2.2.2.2 as-number 100
[*PE1-bgp] peer 2.2.2.2 ebgp-max-hop 255
[*PE1-bgp] peer 2.2.2.2 connect-interface LoopBack1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 1.1.1.1 enable
[*PE1-bgp-af-evpn] peer 2.2.2.2 enable
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] import-route static
[*PE1-bgp-vpn1] import-route direct
[*PE1-bgp-vpn1] advertise l2vpn evpn
[*PE1-bgp-vpn1] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] ip ip-prefix uIP index 10 permit 10.10.10.10 32
[*DCGW1] route-policy stopuIP deny node 10
[*DCGW1-route-policy] if-match ip-prefix uIP
[*DCGW1-route-policy] quit
[*DCGW1] route-policy stopuIP permit node 20
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] l2vpn-family evpn
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] peer 7.7.7.7 as-number 200
[*L2GW/L3GW1-bgp] peer 7.7.7.7 ebgp-max-hop 255
[*L2GW/L3GW1-bgp] peer 7.7.7.7 connect-interface LoopBack1
[*L2GW/L3GW1-bgp] peer 8.8.8.8 as-number 200
[*L2GW/L3GW1-bgp] peer 8.8.8.8 ebgp-max-hop 255
[*L2GW/L3GW1-bgp] peer 8.8.8.8 connect-interface LoopBack1
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 reflect-client
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 reflect-client
[*L2GW/L3GW1-bgp-af-evpn] peer 7.7.7.7 enable
[*L2GW/L3GW1-bgp-af-evpn] peer 8.8.8.8 enable
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] evpn source-address 1.1.1.1
[*L2GW/L3GW1] evpn
[*L2GW/L3GW1-evpn] vlan-extend private enable
[*L2GW/L3GW1-evpn] vlan-extend redirect enable
[*L2GW/L3GW1-evpn] local-remote frr enable
[*L2GW/L3GW1-evpn] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
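The peering commands above pair `connect-interface LoopBack1` with `peer ... ebgp-max-hop 255` because EBGP packets are sent with a TTL of 1 by default, so an EBGP session between loopbacks more than one hop apart never establishes unless the allowed hop count is raised. A toy model of that constraint:

```python
# Toy model of the EBGP multihop constraint: a session only comes up if
# the configured hop budget (`ebgp-max-hop`) covers the path length
# between the two peering addresses. Default EBGP TTL is 1.

def ebgp_session_up(path_hops, ebgp_max_hop=1):
    """True if the EBGP session's TTL budget covers the path length."""
    return path_hops <= ebgp_max_hop

print(ebgp_session_up(3))       # default TTL 1: loopback peering fails
print(ebgp_session_up(3, 255))  # with ebgp-max-hop 255: session can form
```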
Step 9 Configure the Eth-Trunk Layer 2 sub-interfaces connecting L2GW/L3GWs to VNFs
and the static VPN routes destined for VNFs.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] lacp e-trunk system-id 00e0-fc00-0002
[*L2GW/L3GW1] lacp e-trunk priority 1
[*L2GW/L3GW1] e-trunk 1
[*L2GW/L3GW1-e-trunk-1] peer-address 2.2.2.2 source-address 1.1.1.1
[*L2GW/L3GW1-e-trunk-1] priority 5
[*L2GW/L3GW1-e-trunk-1] quit
[*L2GW/L3GW1] interface Eth-Trunk 10
[*L2GW/L3GW1-Eth-Trunk10] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk10] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk10] e-trunk mode force-master
[*L2GW/L3GW1-Eth-Trunk10] esi 0000.1111.1111.1111.1111
[*L2GW/L3GW1-Eth-Trunk10] quit
[*L2GW/L3GW1] interface Eth-Trunk 20
[*L2GW/L3GW1-Eth-Trunk20] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk20] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk20] e-trunk mode force-master
[*L2GW/L3GW1-Eth-Trunk20] esi 0000.2222.2222.2222.2222
[*L2GW/L3GW1-Eth-Trunk20] quit
[*L2GW/L3GW1] interface Eth-Trunk 30
[*L2GW/L3GW1-Eth-Trunk30] mode lacp-static
[*L2GW/L3GW1-Eth-Trunk30] e-trunk 1
[*L2GW/L3GW1-Eth-Trunk30] e-trunk mode force-master
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
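Each Eth-Trunk above carries a 10-octet Ethernet Segment Identifier (ESI), configured identically on both gateways, so that EVPN treats the two member links as one multihomed Ethernet segment. The following small helper (a sketch, not part of VRP) parses the dotted ESI notation used in these configurations:

```python
def parse_esi(esi: str) -> bytes:
    """Parse an ESI in dotted notation, e.g. '0000.1111.1111.1111.1111',
    into its 10-byte value (EVPN ESIs are 10 octets per RFC 7432)."""
    groups = esi.split(".")
    if len(groups) != 5 or any(len(g) != 4 for g in groups):
        raise ValueError(f"malformed ESI: {esi}")
    return bytes.fromhex("".join(groups))

value = parse_esi("0000.1111.1111.1111.1111")
print(len(value), value.hex())  # 10 00001111111111111111
```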
Step 10 Configure L2GW/L3GWs to import static VPN routes through BGP EVPN. Configure
and apply a route-policy to L3VPN instances so that the static VPN routes can
retain the original next hops.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv4-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-vpn1] import-route static
[*L2GW/L3GW1-bgp-vpn1] import-route direct
[*L2GW/L3GW1-bgp-vpn1] advertise l2vpn evpn import-route-multipath
[*L2GW/L3GW1-bgp-vpn1] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] route-policy sp permit node 10
[*L2GW/L3GW1-route-policy] if-match tag 1000
[*L2GW/L3GW1-route-policy] apply gateway-ip origin-nexthop
[*L2GW/L3GW1-route-policy] quit
[*L2GW/L3GW1] ip vpn-instance vpn1
[*L2GW/L3GW1-vpn-instance-vpn1] export route-policy sp evpn
[*L2GW/L3GW1-vpn-instance-vpn1] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
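In effect, route-policy sp matches the static VNF routes by their tag (1000) and copies each route's original next hop into the EVPN gateway IP (GWIP) attribute before export, so the next hop survives re-advertisement. A minimal Python model of that behavior; the route field names are illustrative:

```python
# Sketch of route-policy sp: node 10 matches tag 1000 and applies
# `apply gateway-ip origin-nexthop`; other routes are exported unchanged.

def apply_policy_sp(route: dict) -> dict:
    exported = dict(route)  # do not mutate the original route
    if exported.get("tag") == 1000:
        # Preserve the original next hop in the gateway-ip attribute.
        exported["gateway_ip"] = exported["next_hop"]
    return exported

vnf_route = {"prefix": "10.10.10.10/32", "next_hop": "10.1.1.10", "tag": 1000}
print(apply_policy_sp(vnf_route)["gateway_ip"])  # 10.1.1.10
```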
Step 11 Configure and apply a route-policy to the L3VPN instances on PEs so
that the PEs can delete the GWIP attribute carried in the VNF routes received
from EVPN peers.
# Configure PE1.
[~PE1] route-policy delGWIP permit node 10
[*PE1-route-policy] apply gateway-ip none
[*PE1-route-policy] quit
[*PE1] ip vpn-instance vpn1
[*PE1-vpn-instance-vpn1] import route-policy delGWIP evpn
[*PE1-vpn-instance-vpn1] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
Step 12 On DC-GWs, configure the loopback addresses used to establish BGP VPN
peer relationships with VNFs. Configure and apply route-policies to the L3VPN
instances so that the DC-GWs advertise only VPN loopback routes and mobile
phone routes (in this example, the destination IP address of the mobile phone
routes destined for a VNF is 10.10.10.10) through BGP EVPN, add the GWIP
attribute to the mobile phone routes, and delete the GWIP attribute from the
routes received from EVPN peers.
# Configure DC-GW1.
[~DCGW1] interface LoopBack2
[*DCGW1-LoopBack2] ip binding vpn-instance vpn1
[*DCGW1-LoopBack2] ip address 33.33.33.33 255.255.255.255
[*DCGW1-LoopBack2] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] advertise l2vpn evpn
[*DCGW1-bgp-vpn1] import-route direct
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] ip ip-prefix lp index 10 permit 33.33.33.33 32
[*DCGW1] route-policy dp permit node 5
[*DCGW1-route-policy] if-match ip-prefix uIP
[*DCGW1-route-policy] apply gateway-ip origin-nexthop
[*DCGW1-route-policy] quit
[*DCGW1] route-policy dp permit node 15
[*DCGW1-route-policy] if-match ip-prefix lp
[*DCGW1-route-policy] quit
[*DCGW1] route-policy delGWIP permit node 10
[*DCGW1-route-policy] apply gateway-ip none
[*DCGW1-route-policy] quit
[*DCGW1] ip vpn-instance vpn1
[*DCGW1-vpn-instance-vpn1] export route-policy dp evpn
[*DCGW1-vpn-instance-vpn1] import route-policy delGWIP evpn
[*DCGW1-vpn-instance-vpn1] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
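Route-policy dp evaluates its nodes in order: node 5 permits routes matching prefix list uIP (10.10.10.10/32) and applies `apply gateway-ip origin-nexthop`, node 15 permits routes matching lp (33.33.33.33/32), and everything else falls through to the implicit deny. A Python sketch of that evaluation; this is a simplified model, not VRP's matching engine:

```python
import ipaddress

# Prefix lists from this example: uIP matches 10.10.10.10/32 and lp
# matches 33.33.33.33/32 (exact-length matches, so equality suffices).
PREFIX_LISTS = {
    "uIP": [ipaddress.ip_network("10.10.10.10/32")],
    "lp": [ipaddress.ip_network("33.33.33.33/32")],
}

def matches(prefix_list, prefix):
    return ipaddress.ip_network(prefix) in PREFIX_LISTS[prefix_list]

def export_dp(route):
    """Return the exported route, or None if the policy denies it."""
    if matches("uIP", route["prefix"]):               # node 5
        return {**route, "gateway_ip": route["next_hop"]}
    if matches("lp", route["prefix"]):                # node 15
        return dict(route)
    return None                                       # implicit deny

print(export_dp({"prefix": "10.10.10.10/32", "next_hop": "10.1.1.10"}))
print(export_dp({"prefix": "192.0.2.0/24", "next_hop": "10.6.5.2"}))  # None
```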
Step 13 Establish BGP VPN peer relationships between DC-GWs and VNFs and configure
DC-GWs to function as RRs.
# Configure DC-GW1.
[~DCGW1] route-policy p1 deny node 10
[*DCGW1-route-policy] quit
[*DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] peer 5.5.5.5 as-number 100
[*DCGW1-bgp-vpn1] peer 5.5.5.5 connect-interface LoopBack2
[*DCGW1-bgp-vpn1] peer 5.5.5.5 route-policy p1 export
[*DCGW1-bgp-vpn1] peer 5.5.5.5 reflect-client
[*DCGW1-bgp-vpn1] peer 6.6.6.6 as-number 100
[*DCGW1-bgp-vpn1] peer 6.6.6.6 connect-interface LoopBack2
[*DCGW1-bgp-vpn1] peer 6.6.6.6 route-policy p1 export
[*DCGW1-bgp-vpn1] peer 6.6.6.6 reflect-client
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
# Configure DC-GW2.
[~DCGW2] route-policy p1 deny node 10
[*DCGW2-route-policy] quit
[*DCGW2] bgp 100
[*DCGW2-bgp] ipv4-family vpn-instance vpn1
[*DCGW2-bgp-vpn1] peer 5.5.5.5 as-number 100
[*DCGW2-bgp-vpn1] peer 5.5.5.5 connect-interface LoopBack2
[*DCGW2-bgp-vpn1] peer 5.5.5.5 route-policy p1 export
[*DCGW2-bgp-vpn1] peer 5.5.5.5 reflect-client
[*DCGW2-bgp-vpn1] peer 6.6.6.6 as-number 100
[*DCGW2-bgp-vpn1] peer 6.6.6.6 connect-interface LoopBack2
[*DCGW2-bgp-vpn1] peer 6.6.6.6 route-policy p1 export
[*DCGW2-bgp-vpn1] peer 6.6.6.6 reflect-client
[*DCGW2-bgp-vpn1] quit
[*DCGW2-bgp] quit
[*DCGW2] commit
Step 14 Configure the load balancing function on PEs, DC-GWs, and L2GW/L3GWs.
# Configure PE1.
[~PE1] bgp 200
[*PE1-bgp] ipv4-family vpn-instance vpn1
[*PE1-bgp-vpn1] maximum load-balancing 16
[*PE1-bgp-vpn1] quit
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 1.1.1.1 capability-advertise add-path both
[*PE1-bgp-af-evpn] peer 2.2.2.2 capability-advertise add-path both
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] quit
[*PE1] commit
Repeat this step for PE2. For configuration details, see Configuration Files in this
section.
# Configure DC-GW1.
[~DCGW1] bgp 100
[*DCGW1-bgp] ipv4-family vpn-instance vpn1
[*DCGW1-bgp-vpn1] maximum load-balancing 16
[*DCGW1-bgp-vpn1] quit
[*DCGW1-bgp] l2vpn-family evpn
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 1.1.1.1 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 capability-advertise add-path both
[*DCGW1-bgp-af-evpn] peer 2.2.2.2 advertise add-path path-number 16
[*DCGW1-bgp-af-evpn] quit
[*DCGW1-bgp] quit
[*DCGW1] commit
Repeat this step for DC-GW2. For configuration details, see Configuration Files in
this section.
# Configure L2GW/L3GW1.
[~L2GW/L3GW1] bgp 100
[*L2GW/L3GW1-bgp] ipv4-family vpn-instance vpn1
[*L2GW/L3GW1-bgp-vpn1] maximum load-balancing 16
[*L2GW/L3GW1-bgp-vpn1] quit
[*L2GW/L3GW1-bgp] l2vpn-family evpn
[*L2GW/L3GW1-bgp-af-evpn] bestroute add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 3.3.3.3 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 4.4.4.4 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 7.7.7.7 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 7.7.7.7 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] peer 8.8.8.8 capability-advertise add-path both
[*L2GW/L3GW1-bgp-af-evpn] peer 8.8.8.8 advertise add-path path-number 16
[*L2GW/L3GW1-bgp-af-evpn] quit
[*L2GW/L3GW1-bgp] quit
[*L2GW/L3GW1] commit
Repeat this step for L2GW/L3GW2. For configuration details, see Configuration
Files in this section.
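The load-balancing step combines `maximum load-balancing 16` (install up to 16 equal routes) with Add-Path; without Add-Path, each BGP speaker advertises only its single best path per prefix, so the receiver would have nothing to balance across. A simplified sketch of what Add-Path changes; the path attributes and tie-breaking used here are assumptions for illustration:

```python
# Sketch of BGP Add-Path: by default only the one best path per prefix is
# advertised; `advertise add-path path-number 16` lets up to 16 through,
# so the receiver can install them as an ECMP group.

def paths_advertised(paths, add_path_number=1):
    # Rank by local preference (higher better), then IGP metric (lower better).
    ranked = sorted(paths, key=lambda p: (-p["pref"], p["metric"]))
    return ranked[:add_path_number]

paths = [
    {"nh": "1.1.1.1", "pref": 100, "metric": 10},
    {"nh": "2.2.2.2", "pref": 100, "metric": 10},
    {"nh": "9.9.9.9", "pref": 50, "metric": 5},
]
print(len(paths_advertised(paths)))      # 1: default, best path only
print(len(paths_advertised(paths, 16)))  # 3: with add-path, all paths
```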
Step 15 Verify the configuration.
After completing the configurations, run the display ip routing-table vpn-
instance vpn1 command on PEs to view the mobile phone route information and
VNF route information in VPN routing tables. The following example uses the
command output on PE1.
[~PE1] display ip routing-table vpn-instance vpn1
Route Flags: R - relay, D - download to fib, T - to vpn-instance, B - black hole route
------------------------------------------------------------------------------
Routing Table : vpn1
Destinations : 11 Routes : 16
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 77:77
apply-label per-instance
import route-policy delGWIP evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 7.7.7.7
#
mpls
mpls te
#
explicit-path PtoL71
next sid label 48090 type adjacency
next sid label 1000 type binding-sid
#
explicit-path PtoL72
next sid label 1000 type binding-sid
next sid label 48091 type adjacency
next sid label 2000 type binding-sid
#
explicit-path PtoP12
next sid label 48092 type adjacency
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0007.00
traffic-eng level-2
segment-routing mpls
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.7.1 255.255.255.0
isis enable 1
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.5.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpn1
ip address 10.7.1.1 255.255.255.0
#
interface LoopBack1
ip address 7.7.7.7 255.255.255.255
isis enable 1
isis prefix-sid absolute 16100
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path PtoL71
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path PtoL72
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 8.8.8.8
mpls te signal-protocol segment-routing
mpls te tunnel-id 300
mpls te path explicit-path PtoP12
mpls te binding-sid label 1000
#
bgp 200
peer 1.1.1.1 as-number 100
peer 1.1.1.1 ebgp-max-hop 255
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 ebgp-max-hop 255
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 ebgp-max-hop 255
peer 3.3.3.3 connect-interface LoopBack1
peer 3.3.3.3 egress-engineering
peer 4.4.4.4 as-number 100
peer 4.4.4.4 ebgp-max-hop 255
peer 4.4.4.4 connect-interface LoopBack1
peer 4.4.4.4 egress-engineering
#
ipv4-family unicast
undo synchronization
network 7.7.7.7 255.255.255.255
network 10.6.5.0 255.255.255.0
network 10.6.7.0 255.255.255.0
import-route direct
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
link-state-family unicast
#
ipv4-family vpn-instance vpn1
import-route direct
import-route static
maximum load-balancing 16
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 capability-advertise add-path both
peer 2.2.2.2 enable
peer 2.2.2.2 capability-advertise add-path both
#
route-policy delGWIP permit node 10
apply gateway-ip none
#
ip route-static 3.3.3.3 255.255.255.255 10.6.5.1
ip route-static 4.4.4.4 255.255.255.255 10.6.7.2
ip route-static 8.8.8.8 255.255.255.255 10.6.7.2
ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000
#
tunnel-policy srte
tunnel select-seq sr-te load-balance-number 3
#
return
● DC-GW1 configuration file
#
sysname DCGW1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 33:33
apply-label per-instance
import route-policy delGWIP evpn
export route-policy dp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 3.3.3.3
#
mpls
mpls te
#
explicit-path DtoL31
next sid label 48003 type adjacency
#
explicit-path DtoL32
next sid label 48002 type adjacency
next sid label 48003 type adjacency
#
segment-routing
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.1.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.2.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.6.5.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
ospf prefix-sid index 30
#
interface LoopBack2
ip binding vpn-instance vpn1
ip address 33.33.33.33 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path DtoL31
mpls te binding-sid label 1000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path DtoL32
mpls te binding-sid label 2000
#
ospf 100
opaque-capability enable
segment-routing mpls
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.6.1.0 0.0.0.255
network 10.6.2.0 0.0.0.255
mpls-te enable
#
route-policy delGWIP permit node 10
apply gateway-ip none
#
route-policy dp permit node 5
if-match ip-prefix uIP
apply gateway-ip origin-nexthop
#
route-policy dp permit node 15
if-match ip-prefix lp
#
route-policy p1 deny node 10
#
route-policy stopuIP deny node 10
if-match ip-prefix uIP
#
route-policy stopuIP permit node 20
#
ip ip-prefix lp index 10 permit 33.33.33.33 32
ip ip-prefix uIP index 10 permit 10.10.10.10 32
#
ip route-static 7.7.7.7 255.255.255.255 10.6.5.2
ip route-static 8.8.8.8 255.255.255.255 10.6.1.2
#
tunnel-policy srte
tunnel select-seq sr-te load-balance-number 3
#
evpn source-address 3.3.3.3
#
return
● DC-GW2 configuration file
#
sysname DCGW2
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 44:44
apply-label per-instance
import route-policy delGWIP evpn
export route-policy dp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 4.4.4.4
#
mpls
mpls te
#
explicit-path DtoL42
next sid label 48003 type adjacency
#
explicit-path DtoL41
next sid label 48002 type adjacency
next sid label 48003 type adjacency
#
segment-routing
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.1.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.3.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 10.6.6.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
ospf prefix-sid index 40
#
interface LoopBack2
ip binding vpn-instance vpn1
ip address 44.44.44.44 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path DtoL41
mpls te binding-sid label 1000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path DtoL42
mpls te binding-sid label 2000
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 7.7.7.7 as-number 200
peer 7.7.7.7 ebgp-max-hop 255
peer 7.7.7.7 connect-interface LoopBack1
peer 7.7.7.7 egress-engineering
peer 8.8.8.8 as-number 200
peer 8.8.8.8 ebgp-max-hop 255
peer 8.8.8.8 connect-interface LoopBack1
peer 8.8.8.8 egress-engineering
#
ipv4-family unicast
undo synchronization
network 10.6.1.0 255.255.255.0
network 10.6.6.0 255.255.255.0
import-route static
import-route ospf 100
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 3.3.3.3 enable
peer 7.7.7.7 enable
peer 8.8.8.8 enable
#
link-state-family unicast
#
return
● Leaf1 configuration file
#
sysname Leaf1
#
lacp e-trunk system-id 00e0-fc00-0002
lacp e-trunk priority 1
#
evpn
vlan-extend private enable
vlan-extend redirect enable
local-remote frr enable
#
mac-duplication
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 1:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 2:2
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 3:3
vpn-target 3:3 export-extcommunity
vpn-target 3:3 import-extcommunity
#
evpn vpn-instance evrf4 bd-mode
route-distinguisher 4:4
vpn-target 4:4 export-extcommunity
vpn-target 4:4 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
export route-policy sp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
tnl-policy srte evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
#
bridge-domain 10
evpn binding vpn-instance evrf1
#
bridge-domain 20
evpn binding vpn-instance evrf2
#
bridge-domain 30
evpn binding vpn-instance evrf3
#
bridge-domain 40
evpn binding vpn-instance evrf4
#
explicit-path LtoD13
next sid label 48002 type adjacency
#
explicit-path LtoD14
next sid label 48003 type adjacency
next sid label 48003 type adjacency
#
explicit-path LtoP17
next sid label 3000 type binding-sid
next sid label 48121 type adjacency
#
explicit-path LtoP18
next sid label 4000 type binding-sid
next sid label 48121 type adjacency
#
explicit-path LtoL
next sid label 48003 type adjacency
#
e-trunk 1
priority 5
peer-address 2.2.2.2 source-address 1.1.1.1
#
segment-routing
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0003
anycast-gateway enable
arp collect host enable
#
interface Vbdif20
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0004
anycast-gateway enable
arp collect host enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ip address 10.3.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0001
anycast-gateway enable
arp collect host enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ip address 10.4.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0005
anycast-gateway enable
arp collect host enable
#
interface Eth-Trunk10
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.1111.1111.1111.1111
#
interface Eth-Trunk10.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface Eth-Trunk20
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.2222.2222.2222.2222
#
interface Eth-Trunk20.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface Eth-Trunk30
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.3333.3333.3333.3333
#
interface Eth-Trunk30.1 mode l2
encapsulation dot1q vid 30
rewrite pop single
bridge-domain 30
#
interface Eth-Trunk40
mode lacp-static
e-trunk 1
e-trunk mode force-master
esi 0000.4444.4444.4444.4444
#
interface Eth-Trunk40.1 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.4.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.2.2 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet1/0/3
undo shutdown
eth-trunk 10
#
interface GigabitEthernet1/0/4
undo shutdown
eth-trunk 20
#
interface GigabitEthernet1/0/5
undo shutdown
eth-trunk 30
#
interface GigabitEthernet1/0/6
undo shutdown
eth-trunk 40
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
ospf prefix-sid index 10
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path LtoD13
mpls te binding-sid label 3000
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te signal-protocol segment-routing
mpls te tunnel-id 2
mpls te path explicit-path LtoD14
mpls te binding-sid label 4000
#
interface Tunnel3
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te signal-protocol segment-routing
mpls te tunnel-id 3
mpls te path explicit-path LtoL
#
interface Tunnel7
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 7.7.7.7
mpls te signal-protocol segment-routing
mpls te tunnel-id 100
mpls te path explicit-path LtoP17
#
interface Tunnel8
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 8.8.8.8
mpls te signal-protocol segment-routing
mpls te tunnel-id 200
mpls te path explicit-path LtoP18
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
peer 7.7.7.7 as-number 200
peer 7.7.7.7 ebgp-max-hop 255
peer 7.7.7.7 connect-interface LoopBack1
peer 8.8.8.8 as-number 200
peer 8.8.8.8 ebgp-max-hop 255
peer 8.8.8.8 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
peer 4.4.4.4 enable
peer 7.7.7.7 enable
peer 8.8.8.8 enable
#
ipv4-family vpn-instance vpn1
import-route direct
import-route static
maximum load-balancing 16
advertise l2vpn evpn import-route-multipath
#
l2vpn-family evpn
undo policy vpn-target
bestroute add-path path-number 16
peer 3.3.3.3 enable
peer 3.3.3.3 reflect-client
peer 3.3.3.3 capability-advertise add-path both
peer 3.3.3.3 advertise add-path path-number 16
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 capability-advertise add-path both
peer 4.4.4.4 advertise add-path path-number 16
peer 7.7.7.7 enable
peer 7.7.7.7 capability-advertise add-path both
peer 7.7.7.7 advertise add-path path-number 16
peer 8.8.8.8 enable
peer 8.8.8.8 capability-advertise add-path both