OG For PTN E2E Management - (V100R002C01 - 01) PDF
V100R002C01
Issue 01
Date 2010-08-16
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or representations
of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute the warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: support@huawei.com
Related Versions
The following table lists the product versions related to this document.
Intended Audience
The Manager U2000 Operation Guide for PTN End-to-End Management describes operations such as configuring the communication, clock, and services of PTN equipment on the U2000. This document also provides the acronyms and abbreviations, and guides the user through the basic operations of the U2000.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Command Conventions
The command conventions that may be found in this document are defined as follows.
Convention Description
GUI Conventions
Convention Description
Change History
Updates between document versions are cumulative. Therefore, the latest document version
contains all updates made to previous versions.
Contents
10 Configuring VRRP.................................................................................................................10-1
10.1 Overview of VRRP.....................................................................................................................................10-2
10.2 Configuration Flow for VRRP....................................................................................................................10-3
10.3 Operation Tasks of Configuring VRRP......................................................................................................10-3
10.3.1 Configuring and Deploying an L3VPN Service................................................................................10-3
10.3.2 Configuring VRRP VR Information..................................................................................................10-5
10.3.3 Configuring Information About Objects Under Tracking of a VRRP VR .......................................10-6
10.4 Testing VRRP.............................................................................................................................................10-7
10.5 Configuration Case of VRRP......................................................................................................................10-8
10.5.1 Example Description..........................................................................................................................10-8
10.5.2 Configuration Process......................................................................................................................10-10
12 Modifying Configurations...................................................................................................12-1
12.1 Modifying the Basic Information About Services in Batches.....................................................................12-2
12.2 Modifying Tunnel Attributes......................................................................................................................12-2
12.2.1 Modifying a Tunnel............................................................................................................................12-3
12.2.2 Deleting a Tunnel...............................................................................................................................12-3
12.2.3 Deleting a Tunnel from the Network Side...........................................................................12-4
12.2.4 Undeploying a Tunnel.........................................................................................................12-4
12.3 Modifying PWE3 Attributes.......................................................................................................................12-5
12.3.1 Modifying a PWE3 Service................................................................................................................12-5
12.3.2 Modifying the Tunnel Carrying PWE3 Services...............................................................................12-6
12.3.3 Deleting a PWE3 Service...................................................................................................................12-6
12.3.4 Deleting a PWE3 Service on the Network Side.................................................................................12-7
12.3.5 Undeploying a PWE3 Service............................................................................................................12-7
12.4 Modifying VPLS Attributes........................................................................................................................12-8
12.4.1 Modifying a VPLS Service................................................................................................................12-8
12.4.2 Modifying the Tunnel Carrying VPLS Services................................................................................12-9
12.4.3 Deleting a VPLS Service....................................................................................................................12-9
12.4.4 Deleting a VPLS Service from the U2000 Side...............................................................................12-10
12.4.5 Undeploying a VPLS Service..........................................................................................................12-10
12.5 Modifying the Attributes of an L3VPN Service.........................................................................12-11
12.5.1 Modifying an L3VPN Service............................................................................................12-11
12.5.2 Deleting an L3VPN Service.............................................................................................12-12
12.5.3 Deleting an L3VPN Service from the Network.................................................................12-12
12.5.4 Undeploying a L3VPN Service........................................................................................................12-13
Index.................................................................................................................................................i-1
Figures
Tables
Table 9-6 Parameter planning for NNI-side MC-PW APS (dual-homing protection with 1:1 MC-PW APS and
MC-LAG in the example)...................................................................................................................................9-20
Table 9-7 Parameter planning for MC synchronization communication (dual-homing protection with 1:1 MC-PW
APS and MC-LAG in the example)....................................................................................................................9-21
Table 9-8 Parameters for LAG1 on PE1 and LAG2 on PE2 (dual-homing protection with 1:1 MC-PW APS and
MC-LAG in the example)...................................................................................................................................9-21
Table 9-9 Parameters for the MC-LAG protection groups on PE1 and PE2 (dual-homing protection with 1:1 MC-
PW APS and MC-LAG in the example).............................................................................................................9-21
Table 10-1 Planning of VRRP VR information................................................................................................. 10-9
Table 10-2 Planning of Advanced VRRP VR Information................................................................................10-9
Table 10-3 Planning of Information About Objects Under Tracking of a VRRP VR ....................................10-10
Table 10-4 Planning of Advanced VRRP VR Information..............................................................................10-12
Table 10-5 Parameters for Tracking More BFD Sessions or Interfaces...........................................................10-14
Table 11-1 Configuration tasks of a composite service..................................................................................... 11-7
Table 11-2 NE parameters................................................................................................................................11-13
Table 11-3 Planning of parameters for configuring the PWE3 service............................................................11-13
Table 11-4 Planning of parameters for configuring the VPLS service............................................................11-14
Table 11-5 Planning of parameters for configuring the composite service......................................................11-15
Table 11-6 Planning of parameters for configuring the PWE3 service............................................................11-17
Table 11-7 Planning of parameters for configuring the VPLS service............................................................11-17
Table 11-8 Planning of parameters for configuring the LAG..........................................................................11-20
Table 11-9 Planning of parameters for configuring the PWE3 service............................................................11-21
Table 11-10 Planning of parameters for configuring the composite service....................................................11-21
Table 11-11 Planning of parameters for configuring the LAG........................................................................11-22
Table 11-12 Planning of parameters for configuring the PWE3 service..........................................................11-22
This topic describes the process of configuring PTN services in terms of network deployment,
service discovery, service deployment, and service assurance.
l Network deployment: the prerequisite for service deployment. It includes adding equipment to the NMS, uploading/synchronizing data, and configuring basic routes, the control plane, and tunnels.
l Service discovery: discovers the existing services on the NMS for unified management and
includes the discovery of tunnels, single services, and composite services.
l Service deployment
Viewing service resources: Before deploying services, you can view the service resources to check which resources are available.
Predeploying services: Predeploying a service means creating it on the NMS. After services are predeployed, the configuration data of the services is not yet deployed to the equipment. To create services, you can either manually enter the service parameters or use a template to create services in batches.
Service deployment: deploys the configuration data of services to equipment.
l Service assurance: includes service monitoring and fault location. Service monitoring covers service alarm monitoring and service performance monitoring.
Service monitoring: monitors the alarms and performance of services and displays the service topology. The service topology provides quick access to service operations. Based on the topology colors, you can discover alarms and view the related alarms in the topology view.
Fault location: By monitoring service alarms, you can view the affected services and then locate the failure point through the Test and Check function.
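The predeploy/deploy distinction in the flow above can be sketched as a minimal state machine. This is illustrative Python only (not U2000 code); the class and state names are assumptions made for the example.

```python
# Illustrative sketch (not U2000 code): a service is first predeployed
# (it exists only on the NMS) and then deployed (its configuration data
# is written to the equipment).

class Service:
    def __init__(self, name):
        self.name = name
        self.state = "predeployed"   # created on the NMS only

    def deploy(self):
        """Deploy the predeployed configuration data to the equipment."""
        if self.state != "predeployed":
            raise RuntimeError("only a predeployed service can be deployed")
        self.state = "deployed"

svc = Service("PWE3-1")
svc.deploy()
print(svc.state)  # deployed
```

The point of the sketch is simply that deployment is a separate, explicit step after creation on the NMS.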
This topic describes how to automatically search for IP services. With this function, you can recover the services existing on the current network to the end-to-end management module of the NMS for monitoring, thus helping ensure that these services run normally.
Prerequisite
Data synchronization must be complete on the related equipment.
Procedure
Step 1 Choose Service > Search for IP Service from the main menu.
Step 2 On the Discovery Policy tab page, set the discovery policy.
1. Specify the equipment range for automatically searching IP services.
l Click the All option button to discover all the NEs on the entire network.
l Click the Select NE option button, and then click Add. In the dialog box that is
displayed, select one or more NEs, and then click OK to discover the specified NEs.
2. In the Discover Service navigation tree, select the check box to the left of the related service
to specify the type of the services to be searched.
3. In the lower-right pane, click each service tab to configure the customer policy and discovery policy.
Customer association policies are classified into the following types:
l Set Customer: The searched services are automatically associated with the specified
customer.
l Do Not Set Customer: The automatically searched services are not associated with any
customer.
NOTE
For PTN equipment, L3VPN services can be discovered only by VRF ID or VRF connectivity.
4. After the configuration, click Start.
Step 3 Click the Discovery Result tab. A progress bar is displayed indicating the progress of
automatically discovering services.
You can view the automatically searched services on the Add Service, Modify Service, and
Discrete Service tab pages, as shown in the following figure. After selecting a record and
clicking Jump Service, you can access the service management user interface for this service.
----End
3 Managing Tunnels
By using the tunnel technology, you can create private data transmission channels on a PSN
network to transparently transmit packets.
MPLS Tunnel
As a transmission technology, multi-protocol label switching (MPLS) realizes transparent transmission of data packets between users. An MPLS tunnel is a tunnel defined by the MPLS protocol. Independent of the service, the MPLS tunnel realizes end-to-end transmission and carries the service-related PWs.
Figure 3-1 shows how the MPLS tunnel is used as the service transmission channel.
The MPLS tunnel provides only an end-to-end channel and is not concerned with which service is encapsulated in the PW it carries. Data packets are first encapsulated in the PW, which is then given an MPLS label and sent into the MPLS tunnel for transmission. At the sink end, the data packets are recovered and retain their original service features. The intermediate nodes in a tunnel are called transit nodes; hence, a tunnel contains an ingress node, an egress node, and transit nodes.
Based on signaling type, MPLS tunnels are classified into three types: the static CR tunnel, the RSVP TE tunnel, and the LDP tunnel. The differences among these tunnel types are as follows:
l Static CR tunnel: You need to specify the nodes that a static CR tunnel traverses. In addition,
you can also specify the bandwidth and QoS of the tunnel.
l RSVP TE tunnel: You need to specify only the ingress and egress nodes for an RSVP TE
tunnel. The MPLS protocol automatically calculates a route for the tunnel. In addition, you
can specify constraint nodes to plan a specific route for the tunnel. You can configure FRR
protection and the QoS function for an RSVP TE tunnel. Therefore, an RSVP tunnel is
more flexible and safer than a static CR tunnel.
l LDP tunnel: You need to specify only the ingress and egress nodes for an LDP tunnel. Then, the LDP protocol sets up a route for the tunnel. An LDP tunnel works on any network that supports the MPLS domain and is thus more flexible.
IP Tunnel
If an ATM or CES emulation service must travel through an IP network, the PTN equipment can use an IP tunnel to carry the service. Figure 3-2 shows the protocol stack model of the ATM service. In the case of the IP tunnel, the situation is similar to one where an "IP header" replaces the MPLS outer label (MPLS tunnel label) to establish a tunnel across the IP network. An ATM emulation service can then be provided between NE A and NE B even though the IP network between them does not support MPLS.
MPLS-RSVP Protocol
The multi-protocol label switching resource reservation protocol (MPLS-RSVP) supports the distribution of MPLS labels. In addition, when transmitting a label binding message, it carries resource reservation information, serving as a signaling protocol to create, delete, or modify tunnels in the MPLS network.
The MPLS-RSVP is a notification mechanism of the resource reservation in the network, which
realizes the bandwidth reservation on the control plane. As a label distribution protocol, it is
used to set up the LSP in the MPLS network.
An LSP set up by using MPLS-RSVP has a reservation style. When the RSVP session is set up, the receive end determines which reservation style, and thus which LSP, is used.
l Fixed-filter (FF) style: When this style is used, resources are reserved for each transmit end
individually. Thus, transmit ends in the same session cannot share the resources with each
other.
l Shared-explicit (SE) style: When this style is used, resources are reserved for all transmit
ends in the same session. Thus, transmit ends can share the resources.
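The difference between the two reservation styles above can be sketched in a few lines. This is an illustrative Python model only (not U2000 code); the function, sender names, and bandwidth figures are assumptions made for the example.

```python
# Illustrative sketch (not U2000 code): resources reserved for the
# senders of one RSVP session under the FF and SE reservation styles.

def reserve(style, session_senders, bandwidth_per_flow):
    """Return the total bandwidth reserved for one session.

    FF: a separate reservation per transmit end (no sharing).
    SE: one shared reservation covering all transmit ends.
    """
    if style == "FF":
        return bandwidth_per_flow * len(session_senders)
    elif style == "SE":
        return bandwidth_per_flow  # shared by all transmit ends
    raise ValueError(f"unknown reservation style: {style}")

senders = ["PE1", "PE2", "PE3"]
print(reserve("FF", senders, 10))  # 30: one reservation per sender
print(reserve("SE", senders, 10))  # 10: one reservation shared by all
```

With three senders at 10 units each, FF reserves 30 units in total while SE reserves only 10, which is the practical consequence of sharing.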
NOTE
The parameters of the MPLS-RSVP state timer include the refresh period of the Path or Resv message and the timeout multiples of the path state block (PSB) and reservation state block (RSB).
When an LSP is created, the transmit end adds the LABEL_REQUEST object to the Path message. When the receive end receives a Path message carrying the LABEL_REQUEST object, it distributes one label and adds the label to the LABEL object of the Resv message. The LABEL_REQUEST object is saved in the PSB of the upstream node, and the LABEL object is saved in the RSB of the downstream node. If no refresh message is received within the timeout period (the refresh period multiplied by the PSB or RSB timeout multiple), the corresponding state in the PSB or RSB is deleted.
Assume that a resource reservation request fails the admission control on some nodes. In some cases, this request should not be deleted immediately, but it must not prevent other requests from using the resources it reserved. In this case, the node enters the blockade state, and a blockade state block (BSB) is generated on the downstream node. Once refresh messages have again been received continuously for longer than the timeout period (the refresh period multiplied by the PSB or RSB timeout multiple), the corresponding state in the BSB is deleted.
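The soft-state behavior described above (state kept alive by periodic refreshes, deleted after the timeout) can be sketched as follows. This is an illustrative Python model only (not U2000 code); the timer values are assumptions chosen for the example.

```python
# Illustrative sketch (not U2000 code): RSVP soft state. A path or
# reservation state block is kept alive by periodic Path/Resv refresh
# messages; if no refresh arrives within (timeout multiple * refresh
# period) seconds, the state is deleted.

REFRESH_PERIOD = 30    # seconds between Path/Resv refreshes (assumed)
TIMEOUT_MULTIPLE = 3   # timeout as a multiple of the refresh period (assumed)

def state_expired(last_refresh_time, now):
    """True if the PSB/RSB should be deleted because refreshes stopped."""
    return (now - last_refresh_time) > REFRESH_PERIOD * TIMEOUT_MULTIPLE

print(state_expired(last_refresh_time=0, now=60))   # False: within 90 s
print(state_expired(last_refresh_time=0, now=120))  # True: 120 s > 90 s
```

The design choice behind soft state is that stale reservations clean themselves up without an explicit teardown message.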
MPLS-LDP Protocol
The multi-protocol label switching label distribution protocol (MPLS-LDP) is used by label switching routers (LSRs) to distribute labels in the network.
MPLS-LDP Peer Entities
MPLS-LDP peer entities are two NEs between which an LDP session exists and which use MPLS-LDP to exchange label mapping information.
MPLS-LDP Session
An MPLS-LDP session is used to exchange label mapping and release messages between different equipment. MPLS-LDP sessions are of the following two types:
l Local MPLS-LDP session, in which the two NEs that set up the session are directly connected.
l Remote MPLS-LDP session, in which the two NEs that set up the session are not directly connected.
MPLS-LDP Message Types
The MPLS-LDP protocol mainly uses the following four types of messages:
l Discovery message, which is used to notify and maintain the existence of the equipment
in the network.
l Session message, which is used to set up, maintain and end the session between MPLS-
LDP peer entities.
l Advertisement message, which is used to create, change, and delete label mappings.
l Notification message, which is used to provide advisory information and error notifications.
3.1.3 Principles
Multi-protocol label switching (MPLS) is a tunnel technology that provides a routing and switching platform integrating label switching and forwarding with network-layer routing. In the MPLS architecture, the control plane is connectionless and uses the powerful, flexible routing functions of the IP network to meet the requirements of new applications; the data plane is connection-oriented and uses short, fixed-length labels to encapsulate packets for fast forwarding.
FEC
Forwarding equivalence class (FEC) is a class of packets that are forwarded in the same way on
an MPLS network.
Label
A label is a short, fixed-length identifier. It identifies the FEC that a packet belongs to and is meaningful only within the MPLS domain. One FEC may be bound to multiple labels, but one label can indicate only one FEC.
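The label-to-FEC relationship above (many labels per FEC, exactly one FEC per label) can be sketched with a simple mapping. This is an illustrative Python model only (not U2000 code); the label values and FEC names are assumptions made for the example.

```python
# Illustrative sketch (not U2000 code): each label maps to exactly one
# FEC, while one FEC may be bound to several labels (for example, a
# different label on each link).

label_to_fec = {}

def bind(label, fec):
    """Bind a label to a FEC; a label may indicate only one FEC."""
    if label in label_to_fec and label_to_fec[label] != fec:
        raise ValueError(f"label {label} already bound to {label_to_fec[label]}")
    label_to_fec[label] = fec

bind(1001, "FEC-A")
bind(1002, "FEC-A")   # one FEC, several labels: allowed
bind(1001, "FEC-A")   # rebinding to the same FEC: harmless
print(sorted(l for l, f in label_to_fec.items() if f == "FEC-A"))  # [1001, 1002]
```

Attempting to bind label 1001 to a second FEC would raise an error, which captures the one-label-one-FEC rule.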
LDP
Label distribution protocol (LDP) is the control protocol for MPLS. Similar to the signaling
protocol of a traditional network, the LDP protocol is responsible for creation and maintenance
of LSPs and PWs, FEC classification, and label distribution. MPLS can use the following label
distribution protocols:
l Protocols exclusive for label distribution, such as LDP.
l Existing protocols extended to support label distribution, such as RSVP-TE.
LSP
On an MPLS network, the trail that an FEC traverses is a label switched path (LSP), that is, a unidirectional trail from the ingress to the egress. LSPs are classified into static LSPs and dynamic LSPs. Static LSPs must be configured manually; dynamic LSPs are generated dynamically by the LDP protocol.
LSR
Label switching router (LSR) is the basic element in an MPLS domain. All LSRs support the
MPLS protocol. Each node on an LSP is an LSR. An edge LSR (LER) is at the edge of an MPLS
domain and connects to other user networks. The core LSR is in the center of an MPLS domain.
When packets enter an MPLS domain and travel along an LSP, the incoming LER is the ingress, the outgoing LER is the egress, and the intermediate nodes are the transit nodes.
NHLFE
Next hop label forwarding entry (NHLFE) describes the operations that an LSR performs on
labels, including push, swap, and pop.
Working Principles
This topic describes how to create a tunnel and the working principles of a tunnel.
(Figure: working principles of a tunnel. Each node along the tunnel allocates the ingress label and sets up the label forwarding entry. The ingress node pushes a label onto the packet, the transit node swaps the label, and the egress node pops the label.)
At each LSR, the LDP protocol and the traditional routing protocol work together to set up the routing table and label mapping table for each FEC as required. On receiving packets, each LSR node performs the NHLFE operations on them:
l Push: The ingress node receives packets and determines the FEC that the packets belong to. Then, the ingress node adds labels to the packets and transmits the encapsulated MPLS packets to the next hop through the egress interface.
l Swap: A transit node uses the forwarding unit to forward packets only according to the packet labels and the label forwarding table. A transit node does not perform any Layer 3 operation on the packets.
l Pop: The egress node strips the labels from the packets and forwards the packets.
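The three NHLFE operations above can be sketched as functions acting on a packet's label stack. This is an illustrative Python model only (not U2000 code); the packet representation, table contents, and label values are assumptions made for the example.

```python
# Illustrative sketch (not U2000 code): NHLFE operations on a packet's
# label stack as it crosses an LSP.

def ingress_push(packet, fec_to_label):
    """Ingress LER: classify the packet into a FEC and push a label."""
    packet["labels"].append(fec_to_label[packet["fec"]])
    return packet

def transit_swap(packet, swap_table):
    """Transit LSR: swap the top label; no Layer 3 processing."""
    packet["labels"][-1] = swap_table[packet["labels"][-1]]
    return packet

def egress_pop(packet):
    """Egress LER: strip the label and forward the original packet."""
    packet["labels"].pop()
    return packet

pkt = {"fec": "FEC-A", "labels": [], "payload": "data"}
pkt = ingress_push(pkt, {"FEC-A": 1001})   # labels: [1001]
pkt = transit_swap(pkt, {1001: 2002})      # labels: [2002]
pkt = egress_pop(pkt)                      # labels: []
print(pkt["labels"])  # []: the packet leaves the tunnel unlabeled
```

Note that only the ingress node ever inspects the payload's FEC; transit nodes act purely on the label, which is what makes MPLS forwarding fast.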
Basic Information
APS (Automatic Protection Switching)
The automatic protection switching (APS) protocol is used to coordinate the actions of the source and the sink in bidirectional protection switching. Through the APS protocol, the source and the sink cooperate to perform functions such as protection switching, switching delay, and wait-to-restore (WTR).
According to ITU-T Y.1720, the source and the sink both need to select channels in the APS.
In this case, the APS protocol is required for coordination. In the case of bidirectional protection
switching, the APS protocol needs to be used regardless of the revertive mode.
The APS protocol is always transmitted through the protection tunnel. The equipment at either end therefore knows that the tunnel from which the APS protocol is received is the protection tunnel of the opposite end, and can thus determine whether the working-tunnel and protection-tunnel configurations at the two ends are consistent.
Switching Mode
MPLS APS provides two switching modes, that is, single-ended switching and dual-ended
switching.
In the case of single-ended switching, when one end detects a fault, it only performs switching
on the local end and does not instruct the opposite end to perform any switching.
In the case of dual-ended switching, when one end detects a fault, it performs switching on the
local end and also instructs the opposite end to perform switching.
Single-ended switching does not require the APS protocol for negotiation and it features rapid
and stable switching.
Dual-ended switching ensures that the services are transmitted in a consistent channel, which
facilitates service management.
Revertive Mode
The MPLS APS function supports two revertive modes: revertive mode and non-revertive mode.
In the non-revertive mode, services are not switched from the protection tunnel back to the working tunnel even after the working tunnel is restored to the normal state.
In the revertive mode, services are switched from the protection tunnel back to the original working tunnel after the working tunnel is restored to the normal state and the WTR time expires.
WTR Time
The WTR time refers to the period from the time when the original working tunnel is restored
to the time when the services are switched from the protection tunnel to the original working
tunnel.
In certain scenarios, the state of the working tunnel is unstable. In this case, setting the WTR
time can prevent frequent switching of services between the working tunnel and the protection
tunnel.
Hold-off Time
The hold-off time refers to the period from the time when the equipment detects a fault to the
time when the switching operation is performed.
When the equipment is configured with the MPLS APS protection and other protection, setting
the hold-off time can ensure that other protection switching operations are performed first.
By using the U2000, the user can configure 1+1 or 1:1 protection for MPLS tunnels that carry
important services.
An edge node (PE) on one network receives services from the Node B and transmits them to the RNC connected to another PE. In this case, a point-to-point MPLS tunnel can be used. The application scenarios of the different tunnel types are as follows:
l When an IP tunnel transmits services, the service can be transparently transmitted on a
third-party IP network. Therefore, IP tunnels are used mainly when the services that the
PTN equipment transmits need to be transparently transmitted on a third-party IP network.
l When a static CR tunnel transmits services, the service can be transparently transmitted on
an entire MPLS network. Therefore, static CR tunnels are used mainly when high QoS is
not required and the routes are specified.
l When an RSVP TE tunnel transmits services, the service can be transparently transmitted
on an entire RSVP TE network. RSVP TE tunnels are used when high QoS and resource
usage are required on a network.
l When an LDP tunnel transmits services, the service can be transparently transmitted on an
entire MPLS network. LDP tunnels are widely used on MPLS VPNs. To prevent traffic
congestion on a certain node of a VPN, you can configure the LDP over RSVP feature.
That is, the LSP of an LDP tunnel traverses the RSVP TE domain and thus the LDP tunnel
can transmit VPN services.
When all the preceding tunnels traverse the third-party equipment, you can set the third-party
equipment as a virtual node to ensure that the tunnels are created properly.
Configure and manage tunnels by following the configuration flow shown in Figure 3-7.
For the detailed configuration tasks shown in Figure 3-7, see Table 3-1.
Task Remarks
2. Configure the network-side interface: Set the general attributes and Layer 3 attributes (tunnel enable status and IP address) for the interfaces that carry the tunnel.
3. Configure the LSR ID: Specify the LSR ID for each NE that a service traverses and the start value of the global label space. Each LSR ID must be unique on the network.
4. Configure the control plane: Set the protocol parameters related to the control plane to create the tunnel.
l When you create a static CR tunnel to carry services,
you do not need to set the parameters relevant to the
control plane but you need to manually add labels.
l When you create an RSVP TE tunnel to carry services, the RSVP protocol automatically distributes labels. In this case, you need to set the parameters relevant to the control plane.
1. Set the IGP-ISIS protocol parameters.
2. Set the MPLS-RSVP protocol parameters.
l When you create an LDP Tunnel to carry services,
the LDP automatically distributes labels. In this case,
you need to set the parameters relevant to the control
plane.
1. Set the IGP-ISIS protocol parameters.
2. Create the MPLS-LDP.
l When you create an IP tunnel to carry services, the
label distribution protocol automatically allocates
the forwarding label value. In addition, you need to
configure parameters relevant to the control plane.
Create a static route table.
NOTE
To configure parameters relevant to the control plane, refer to
the descriptions of configuring the control plane in the
Operation Guide for PTN NE Management.
This topic describes how to create a tunnel protection group. After a tunnel protection group is created, the services carried over the working tunnel are switched to the protection tunnel when the working tunnel is faulty.
3.3.4 Automatic Search for Protection Groups
This topic describes the automatic search for protection groups. With this function, you can recover the protection groups existing on the current network to the end-to-end management module of the NMS for monitoring. In this manner, you can help ensure the normal running of these protection groups.
3.3.5 Deploying a Tunnel
This topic describes how to apply the settings of a tunnel to NEs.
3.3.6 Reoptimizing an RSVP TE Tunnel
When you reoptimize a tunnel, the trails of the tunnel are recalculated.
3.3.7 Viewing a Discrete Tunnel
Viewing discrete tunnels facilitates their management.
3.3.8 Checking the Correctness of the Tunnel Configuration
After configuring a tunnel, you can check the connectivity of the tunnel by using the Test and
Check function.
3.3.9 Performing Tunnel Protection Group Switching
On the U2000, you can perform tunnel protection switching.
Prerequisite
You must be an NM user with "network operator" authority or higher.
You must complete the correct configuration of the port attributes.
You must complete the correct setting of the LSR ID for each NE.
The control plane must be configured for RSVP-TE and IP tunnels.
Procedure
Step 1 Choose Service > Tunnel > Create Tunnel from the main menu.
l When you create a reverse tunnel, the U2000 automatically allocates different Tunnel Names to the
forward and reverse tunnels. If you manually set Tunnel Name for the forward tunnel, the U2000
automatically sets Tunnel Name to Forward Tunnel Name+_Reverse for the reverse tunnel.
l When Signaling Type is set to Static CR, if you select Create Reverse Tunnel, the U2000 creates
two unidirectional tunnels in opposite directions. If you select Create Bidirectional Tunnel, the
U2000 creates one bidirectional tunnel that carries both directions.
l When Signaling Type is set to Static CR, you can select Create Protection Group so that the tunnel
and its protection group are created at the same time.
l The "Template" parameter is available only when the "Signaling Type" parameter is set to RSVP
TE. You can configure the detailed information of a tunnel by using a template.
l When you create an RSVP-TE tunnel, you can select the Configure As Bypass Tunnel check box
to create a bypass protection tunnel.
l A static CR tunnel is created on the basis of certain constraints. The mechanism for creating and
managing those constraints is constraint-based routing (CR). Different from a static tunnel, the
establishment of a CR tunnel depends on the routing information and other conditions, for example,
the specified bandwidth, the fixed route, and QoS parameters. The PTN supports only the static CR
tunnel.
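The reverse-tunnel naming rule above can be sketched as follows. This is an illustrative model of the stated behavior, not U2000 code; the function name is hypothetical.

```python
def reverse_tunnel_name(forward_name=None):
    """Derive the reverse tunnel name per the rule described above.

    If Tunnel Name was set manually for the forward tunnel, the U2000
    appends "_Reverse"; otherwise it auto-allocates distinct names for
    both tunnels (modeled here as None).
    """
    if forward_name is None:
        return None  # U2000 auto-allocates different names for both tunnels
    return forward_name + "_Reverse"
```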
b. In the dialog box that is displayed, select the required NE and click OK.
NOTE
In the case of an RSVP-TE, LDP, or IP tunnel, you need to specify only the source and sink nodes
of the tunnel. In the case of a static CR tunnel, you need to specify the source node, sink node, and
transit nodes of the tunnel.
You can choose Add and select Virtual Node from the drop-down list to specify virtual nodes
through which a tunnel travels. A virtual node simulates an NE beyond the management range of the
U2000. The virtual node is used for creating a tunnel whose source NE is on the U2000 but the sink
NE is not on the U2000.
2. Optional: In the case of a static CR tunnel, set the route calculation for the U2000 as
follows:
a. Select Auto-Calculate route. Then, the U2000 automatically calculates the routes for
a tunnel after you finish steps 2 and 3.
b. Set Restriction Bandwidth(Kbit/s) and specify the source and sink nodes.
c. Specify route constraint. Specifically, you can click Route Restriction and specify
route constraint in the dialog box that is displayed. Alternatively, you can specify the
explicit and excluded restriction through shortcut menu items in the physical topology.
NOTE
A Layer 2 link must be configured before route calculation. To configure the Layer 2 link, refer
to the chapter on topology management.
By default, the shortest route is selected from the routes that are calculated according to Restriction
Bandwidth(Kbit/s) and route constraints.
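The selection rule in this note (filter candidate routes by the bandwidth restriction and the explicit/excluded constraints, then take the shortest) can be sketched as follows. The data model is illustrative and not how the U2000 represents routes internally.

```python
# Sketch of the default route selection described above: among candidate
# routes that satisfy Restriction Bandwidth(Kbit/s) and the route
# constraints, the shortest one is chosen.

def select_route(candidates, min_bandwidth_kbps, explicit=(), excluded=()):
    """candidates: list of (hops, available_bandwidth_kbps) tuples,
    where hops is a list of NE names along the route."""
    feasible = [
        (hops, bw) for hops, bw in candidates
        if bw >= min_bandwidth_kbps
        and all(ne in hops for ne in explicit)      # explicit nodes must be on the route
        and not any(ne in hops for ne in excluded)  # excluded nodes must not be
    ]
    if not feasible:
        return None  # no route satisfies the constraints
    return min(feasible, key=lambda r: len(r[0]))[0]  # shortest route wins
```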
Select Create Protection Group and click Configure Protection Group to configure
parameters relevant to the protection group.
Step 4 Click Details to configure details of the tunnel.
Step 5 Optional: In the case of a static CR tunnel, if you select Create Protection Group, you can click
Configure Protection Group to configure parameters relevant to the protection group, and click
Configure OAM to configure the OAM parameters of the protection group.
Step 6 Select the Deploy check box and click OK.
NOTE
l If you clear the Deploy check box, the configuration data information is stored only on the U2000. If
you select the Deploy check box, the configuration data information is stored on the U2000 and applied
to NEs. By default, the Deploy check box is selected.
l If you select the Deploy and Enable check boxes, the tunnel is deployed and enabled. A tunnel is
available on NEs only when it is enabled.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
Procedure
Step 1 Choose Service > Tunnel > Batch Create Tunnel from the main menu.
Step 2 Configure basic information. In the Basic Information field, set the Network Type, Protocol
Type and Signaling Type parameters.
NOTE
If you select the Deploy check box, the tunnel information is stored on the U2000 and applied to NEs. By
default, the Deploy check box is selected.
If you select the Deploy and Enable check boxes, the tunnel is deployed and enabled. A tunnel is available on NEs only when it is enabled.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
Context
Figure 3-8 shows the window for creating a tunnel protection group.
Precautions:
l The MPLS APS protection must not be used together with FRR, LMSP, LAG, or microwave
1+1 protection.
l The protection tunnel should not carry any extra services.
Procedure
Step 1 Choose Service > Tunnel > Create Protection Group from the main menu.
Step 2 Configure basic information of a tunnel protection group.
NOTE
If a tunnel protection group is of the 1+1 protection type, services are dually fed at the source and selectively
received at the sink. If a tunnel protection group is of the 1:1 protection type, services are singly fed and
singly received.
Single-ended switching means that, when a fault occurs at one end, only the local end switches and the peer
end is not notified. Single-ended switching does not rely on negotiation packets; therefore, it is fast and
reliable.
Dual-ended switching means that, when a fault occurs at one end, the local end switches and the peer end
is notified to switch as well. In the case of dual-ended switching, the forward and return paths of a service
are the same, which facilitates service management.
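The difference between the two switching modes can be captured in a toy model. This is an illustrative sketch of the behavior described above, not device code; the function names are hypothetical.

```python
# Toy model of single-ended vs. dual-ended tunnel protection switching.

def single_ended_switch(local_faulty: bool):
    """Only the end that detects the fault switches; no APS negotiation,
    so the peer end keeps using the working tunnel."""
    return {"local": "protection" if local_faulty else "working",
            "peer": "working"}  # peer is not notified

def dual_ended_switch(local_faulty: bool):
    """The detecting end switches and notifies the peer, so the forward
    and return paths of a service stay on the same tunnel."""
    target = "protection" if local_faulty else "working"
    return {"local": target, "peer": target}
```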
Step 3 Click Add. In the dialog box that is displayed, select the working tunnel and the protection tunnel
and click OK.
Step 4 Optional: Select a required tunnel, click Configure OAM, and then configure the OAM
information of the tunnel. An OAM packet is used to detect the connectivity of a link. When a
fault occurs on the working tunnel, services are switched to the protection tunnel.
NOTE
By default, OAM is enabled for the protection tunnel. To ensure that switching to the protection tunnel
takes less than 50 ms, set the detection packet type to FFD and the detection period to 3.3 ms.
Configuring OAM is optional. If you do not configure it, the U2000 by default enables OAM for the tunnel
protection group when you configure the tunnel protection group.
You can set other OAM parameters only when you set OAM Status to Enabled. You can set Detection
Packet Type and Detection Packet Period(ms) only when you set Detection Mode of the sink to
Manual. The value of SF Threshold must be equal to or greater than the value of SD Threshold.
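The parameter rules in this note amount to two simple checks, sketched below. The function and parameter names are illustrative, not U2000 identifiers.

```python
def validate_oam(oam_status, sf_threshold, sd_threshold):
    """Other OAM parameters may be set only when OAM Status is Enabled,
    and SF Threshold must be >= SD Threshold."""
    return oam_status == "Enabled" and sf_threshold >= sd_threshold

def meets_50ms_target(detect_type, period_ms):
    """Per the note above: FFD packets at a 3.3 ms period are needed to
    keep the protection-switching duration under 50 ms."""
    return detect_type == "FFD" and period_ms == 3.3
```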
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
Procedure
Step 1 Choose Service > Tunnel > Search for Protection Group from the main menu.
Step 2 In the dialog box that is displayed, click Add, select required equipment, and then click OK.
Step 3 Click OK. A dialog box is displayed indicating the number of protection groups.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a tunnel whose settings are not applied to NEs and choose Deploy from the shortcut
menu.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
Only deployed RSVP TE tunnels can be reoptimized.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click the required tunnel and choose Reoptimize from the shortcut menu. The
Reoptimization dialog box is displayed.
Step 4 Click Add or Delete to set the route constraints for the reoptimization of the tunnel.
Step 5 Click OK. In the dialog box that is displayed, click OK.
Step 6 Right-click the tunnel and choose View LSP Topology from the shortcut menu to view the
actual route of the tunnel after optimization.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
A discrete tunnel must exist on the U2000.
Procedure
Step 1 Choose Service > Tunnel > Manage Discrete Tunnel from the main menu.
Step 2 Click Filter Criteria. In the dialog box that is displayed, set filtering criteria and click Filter.
Step 3 In the discrete tunnel management window, select a discrete tunnel and click the corresponding
tab to view details.
Step 4 Optional: Select a discrete tunnel, click Delete, and click Yes in the dialog box that is displayed.
----End
Prerequisite
The tunnel must be deployed.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a service and choose Test and Check from the shortcut menu.
Set the diagnosis parameters based on the requirements of operation and maintenance. The
options are as follows:
1. Service Check: checks the connectivity of a static CR tunnel by verifying that the labels
of the NEs that the tunnel traverses are consistent.
2. OAM Tool: checks the connectivity by performing the ping operation on each layer.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
You must complete the creation of the tunnel protection group, and the protocol status must be
enabled.
Context
l 1+1 protection
Services are transmitted over the working tunnel and protection tunnel at the same time.
Then, the receive end selects a tunnel according to the status of the two tunnels and receives
the services from the tunnel. That is, the services are dually fed and selectively received.
When the receive end detects loss of signals over the working tunnel or when the working
tunnel is detected as faulty by the MPLS OAM, the receive end receives the signals from
the protection tunnel. In this manner, the services are switched.
l 1:1 protection
Normally, services are transmitted over the working tunnel. That is, the services are singly
fed and received. When the working tunnel is faulty, the equipment at the two ends
negotiates through the APS protocol. Then, the transmit end transmits the services over the
protection tunnel and the receive end receives the services from the protection tunnel. In
this manner, the services are switched.
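The two protection types above can be modeled in a few lines. This is an illustrative sketch of the described behavior, not device code; the function names are hypothetical.

```python
# Toy model of 1+1 (dual fed, selectively received) vs. 1:1 (single fed,
# APS-negotiated) tunnel protection.

def receive_1plus1(working_ok: bool):
    """1+1: services are fed on both tunnels at once; the receive end
    simply selects the healthy tunnel, with no negotiation needed."""
    return "working" if working_ok else "protection"

def transmit_1to1(working_ok: bool, aps_negotiated: bool):
    """1:1: services are fed only on the working tunnel; on a fault, the
    two ends must first negotiate via the APS protocol before the
    transmit end moves the services to the protection tunnel."""
    if working_ok:
        return "working"
    return "protection" if aps_negotiated else "working"
```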
CAUTION
When other switching operations, excluding the exercise switching, are performed, the services
may be interrupted.
Procedure
Step 1 Choose Service > Tunnel > Manage Protection Group from the main menu.
Step 2 Check the switching status of the tunnel protection group. Right-click the protection group under
test, and choose Query Switching Status from the shortcut menu to refresh the status of the
tunnel protection group.
Step 3 Optional: If Protocol Status is Disabled for the protection group, click the Hop
Information tab, set Protocol Status to Enabled for the devices of the protection group, and
click Apply.
NOTE
When the Protocol Status is Enabled, you can perform tunnel protection switching.
----End
Prerequisite
l You must be an NM user with "network operator" authority or higher.
l The running status of the RSVP TE tunnel must be UP.
l OAM cannot be configured for IP or LDP tunnels.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a required tunnel and choose OAM > Configure OAM from the shortcut menu.
Step 4 In the dialog box that is displayed, set OAM parameters of the tunnel.
Step 5 Select one or more tunnels, right-click, and choose OAM > Enable OAM to enable OAM for
the tunnels.
----End
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a required tunnel and choose View VPN from the shortcut menu.
NOTE
You can view the VPN service of only one tunnel at a time.
You can view the end-to-end services that are transmitted in a tunnel, but not the discrete services that are
transmitted in the tunnel.
Step 4 View information about the VPN services carried on the tunnel in the View VPN window.
Step 5 Optional: Select a required VPN service, click View Details. In the relevant service
management window, you can view or modify parameters of the VPN service.
----End
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Select a required tunnel and view the topology of the tunnel.
Perform the following operations as required.
l In the topology, right-click an NE and choose View Real-Time Performance or NE
Explorer from the shortcut menu.
l In the topology, right-click a link and choose Fast Diagnose, View VPN, View LSP
Topology, or Alarm from the shortcut menu.
NOTE
You can view the real-time performance of only the tunnels that are in the deployed state. You can view
the LSP topology of an RSVP TE tunnel only when its running status is UP.
Step 4 Optional: On the window for creating a tunnel, click the Service Topology tab to view topology
information of the new tunnel.
Step 5 Optional: In the Main Topology, click Current View, select Tunnel View from the drop-down
list, and then view the topology of the tunnel in the network-side tunnel topology view.
----End
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 View the real-time performance of a tunnel. In the topology view, right-click the NE and choose
View Real-Time Performance from the shortcut menu.
Step 4 Create a monitoring instance for a tunnel. For details, refer to the chapter of monitoring instance
management in Performance Management System (PMS).
Step 5 View the history performance of a tunnel. Right-click a required tunnel and choose
Performance > View History Data from the shortcut menu.
----End
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a required tunnel and choose Alarm > View Current Alarm from the shortcut menu
to view the current alarms of the tunnel.
Step 4 Right-click a required tunnel and choose Alarm > View History Alarm from the shortcut menu
to view the history alarms of the tunnel.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click the tunnel and choose Update Running Status. Then, view the Running Status
parameter of the tunnel.
----End
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
The tunnel must be an RSVP-TE or LDP tunnel that supports this operation, and its running
status must be UP.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a required tunnel and choose View LSP Topology from the shortcut menu. Click
the link in the dialog box that is displayed to view the actual routing information of the tunnel.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
In the case of a static CR tunnel, the IS-IS protocol must be enabled on the source and sink ports
of the MPLS tunnel. Alternatively, a diagnosis test must be initiated at the local NE and a static
route in the control plane must be configured on the opposite NE.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
2. Optional: Select the LSP Ping or ICMP Ping check box and click . In the dialog
box that is displayed, set parameters of the ping test and click OK.
3. Optional: Select the LSP Tracert or ICMP Tracert check box and click . In the
dialog box that is displayed, set parameters of the LSP tracert test and click OK.
4. Click Run and view the test result on the right pane.
----End
This topic describes a networking diagram and the corresponding example of configuring a static
RSVP TE tunnel on the network.
3.5.3 Configuration Example (IP and LDP Tunnels)
This topic describes a networking diagram and the corresponding example of configuring an IP
tunnel and an LDP tunnel on the network.
Example Description
This topic describes O&M scenarios and networking diagrams.
As shown in Figure 3-9, the service between NodeB and RNC is to be carried by static CR
tunnels. NE1 accesses the service from NodeB. Then, the service is transmitted to the 10GE ring
on the convergence layer through the GE ring on the access layer. Finally, the service is
converged at NE3 and transmitted to RNC.
If the service requires high network security, configure the MPLS APS protection to ensure
service transmission.
l Working tunnel: NE1-NE2-NE3. NE2 is a transit node.
l Protection tunnel: NE1-NE6-NE5-NE4-NE3. NE6, NE5, and NE4 are transit nodes. When
the working tunnel becomes faulty, the service on it is switched to the protection tunnel for
protection.
[Figure 3-9: NodeB accesses NE1 on the GE ring at the access layer; the service is converged at NE3 on the 10GE ring at the convergence layer (NE3, NE4, NE5) and transmitted to the RNC. The working tunnel and the protection tunnel are marked.]
NE1 and NE6 are OptiX PTN 1900 NEs. NE2, NE3, NE4 and NE5 are OptiX PTN 3900 NEs.
Figure 3-10 shows the planning details of boards on the NE and interfaces on the boards.
Service Planning
There are services between NodeB and RNC. Two static MPLS tunnels are to be created. One
is the working tunnel and the other is the protection tunnel. Then, the services can be securely
transmitted on the network.
Parameter | Working Tunnel | Protection Tunnel

Parameter | Value
WTR Time(min) | 5
Hold-off Time(100ms) | 0
Configuration Process
This topic describes how to configure the static CR tunnel.
Prerequisite
You must be an NM user with "network operator" authority or higher.
You must understand the networking, requirements and service planning of the example.
A network must be created, and IP addresses must be allocated to ports automatically. For details
on allocating IP addresses to ports automatically, refer to Allocating IP Addresses to Ports
Automatically.
Procedure
Step 1 Set LSR IDs.
1. In the NE Explorer, select NE1 and choose Configuration > MPLS Management > Basic
Configuration from the Function Tree.
2. Set LSR ID, Start of Global Label Space, and other parameters. Click Apply.
3. Display the NE Explorer of NE2, NE3, NE4, NE5, and NE6 separately and perform the
preceding two steps to set the parameters, such as LSR ID.
3. Configure the NE list. On the physical topology, double-click NE1, NE2, and NE3 to add
them to the NE list and set the corresponding NE roles.
Parameter | Example Value | Principle for Value Selection
4. Click Details to set the advanced parameters of the reverse tunnel. Click OK.
For route details, see the descriptions of route settings in Table 3-2.
Step 4 Create the protection group.
1. Choose Service > Tunnel > Create Protection Group from the main menu.
2. Configure basic information of a tunnel protection group.
Parameter | Example Value | Principle for Value Selection
3. Click Add. In the dialog box that is displayed, select the working tunnel and the protection
tunnel and click OK.
4. Configure the type of tunnel.
5. Configure attributes of the tunnel protection group, select Deploy, and click OK.
----End
Example Description
This topic describes O&M scenarios and networking diagrams.
As shown in Figure 3-11, Company A has branches in City 1 and City 2. Real-time service
transmission is required between the branches. In this case, an MPLS tunnel can be created to
carry the real-time services.
Real-time services require high network reliability. Hence, FRR protection should also be
configured for the MPLS tunnel between NE1 and NE3.
l The NE1-to-NE3 working tunnel is along the NE1-NE2-NE3 trail. NE2 is the transit node.
l The NE1-to-NE3 bypass tunnel 1 is along the NE1-NE4-NE3 trail. When the NE1-NE2
link fails or the NE2 has a fault, bypass tunnel 1 protects the working tunnel.
l The NE2-to-NE3 bypass tunnel 2 is along the NE2-NE4-NE3 trail. When the NE2-NE3
link fails, bypass tunnel 2 protects the working tunnel.
[Figure 3-11: Company A's branches in City 1 and City 2 connect to NE1 and NE3 respectively; NE2 and NE4 complete the ring. The working tunnel, bypass tunnel 1, and bypass tunnel 2 are marked.]
Figure 3-12 shows the NE planning. NE1 is an OptiX PTN 1900 NE. NE2, NE3 and NE4 are
OptiX PTN 3900 NEs.
[Figure 3-12: board and port planning on the NE1-NE2-NE3-NE4 ring, using EFG2 and EG16 boards with port IP addresses in the 10.1.1.x to 10.1.5.x range. The working tunnel, bypass tunnel 1, and bypass tunnel 2 are marked.]
Service Planning
The services between the branches of Company A are carried by the working tunnel. Bypass
tunnel 1 and bypass tunnel 2 provide FRR protection for the working tunnel.
On the NNI side of the NEs, the GE boards are used and a GE ring is built on the boards. Assume
that the IP addresses of the ports of NEs are the same as those listed in Table 3-4 after the U2000
automatically allocates the IP addresses of ports.
Since the service bandwidth is 10 Mbit/s, each bypass tunnel must reserve at least 10 Mbit/s of
bandwidth. In addition, the service travels through several NEs; hence, several bypass tunnels
are required to completely protect the tunnel for the service. According to the actual conditions,
two bypass tunnels are planned for the FRR.
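The bandwidth reasoning above amounts to a simple check (units in kbit/s; values taken from this example). The function name is illustrative.

```python
def bypass_bandwidth_ok(service_kbps: int, frr_bandwidth_kbps: int) -> bool:
    """A bypass tunnel must reserve at least the bandwidth of the
    service it protects."""
    return frr_bandwidth_kbps >= service_kbps

# Example values: a 10 Mbit/s service protected by an FRR bandwidth of
# 10000 kbit/s, as planned in this example.
assert bypass_bandwidth_ok(10 * 1000, 10000)
```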
Table 3-5 lists the planned parameters of the working tunnel and the two bypass tunnels.
The following settings apply alike to the working tunnel, bypass tunnel 1, and bypass tunnel 2
(forward and reverse tunnels in each case):
Enable FRR: Yes
FRR BW Type: facility
FRR Bandwidth(Kbit/s): 10000
LSP Type: E-LSP
Configuration Process
This topic describes how to configure the RSVP TE tunnel in the example.
Prerequisite
You must be an NM user with "network operator" authority or higher.
You must understand the networking, requirements and service planning of the example.
Procedure
Step 1 Set LSR IDs.
1. In the NE Explorer, select NE1 and choose Configuration > MPLS Management > Basic
Configuration from the Function Tree.
2. Set LSR ID, Start of Global Label Space, and other parameters. Click Apply.
3. Display the NE Explorer for NE2, NE3, and NE4 separately. Set the parameters such as
LSR ID of each NE by following the previous two steps.
Parameter | Example Value | Principle for Value Selection
3. Click the Port Configuration tab page. Click New. In the dialog box displayed, click
Add. Select 4-EFG2-1(Port-1) and 4-EFG2-2(Port-2), and click OK.
Set parameters as follows:
4. Display the NE Explorers of NE2, NE3, and NE4 separately and refer to Step 2.1 through
Step 2.3 to set control plane parameters for NE2, NE3, and NE4.
The parameters of NE2, NE3, and NE4 are the same as those of NE1, except that the ports
specified for NE2, NE3, and NE4 are different as follows:
3. Configure the NE list. On the physical topology, double-click NEs to add them to the NE
list. Then, specify the ingress and egress NEs.
Choose Trail Information > Route Restriction, right-click, and choose Insert
Instance.
FRR Protect Type: Node Protection (Forward and Reverse Tunnels). The bypass tunnel that a
PLR selects is required to protect the adjacent downstream node of the PLR and the link between
the PLR and that node.
2. Configure the NE list. On the physical topology, double-click NEs to add them to the NE
list. Then, specify the ingress and egress NEs.
Choose Trail Information > Affinity Information, right-click, and choose Insert
Instance.
Choose Trail Information > Route Restriction, right-click, and choose Insert
Instance.
Choose QoS Information.
4. In the tunnel management window, configure the protection interface for bypass tunnel 1
after it is successfully created and its running status is UP.
The parameters of bypass tunnel 2 are the same as those of bypass tunnel 1, except the
tunnel names, tunnel IDs, IP addresses, and protection interface.
----End
Example Description
This topic describes O&M scenarios and networking diagrams.
As shown in Figure 3-13, NE1 receives services transmitted from Node B. Then, the services
are carried in two tunnels, that is, an IP tunnel and an LDP tunnel, between Node B and the
RNC. Specifically, the IP tunnel traverses a third-party IP network and the LDP tunnel traverses
an MPLS network. The services are converged on NE3 and then transmitted to the RNC.
l IP tunnel: NE1-a third-party IP network-NE3.
l LDP tunnel: NE1-an MPLS network-NE3.
In Figure 3-13, NE1 is an OptiX PTN 950 NE and NE3 is an OptiX PTN 3900 NE. Figure
3-13 also shows the planning of boards and ports on each NE.
[Figure 3-13: Node B connects to NE1; NE1 reaches NE3 over both a third-party IP network (IP tunnel) and an MPLS network (LDP tunnel); NE3 connects to the RNC. A DSLAM also attaches to the third-party IP network. Port planning uses EG2, EG16, SHD4, and EX2 boards with port IP addresses in the 10.0.x.x range.]
Service Planning
To transmit services between Node B and the RNC, you need to create an IP tunnel and an LDP
tunnel.
Assume that the IP addresses of the ports of NEs are the same as those listed in Table 3-6 after
the U2000 automatically allocates the IP addresses of ports.
Parameter | Value | Value
Route List ID | 1 | 1
LSP Retransmission Interval(s) | 5 | 5
Minimum LSP Transmission Interval(ms) | 30 | 30
Protocol Type | IP | IP
Tunnel ID | 90 | 91
EXP | 2 | 2
Configuration Process
This topic describes how to configure the IP tunnel and LDP tunnel.
Prerequisite
You must be an NM user with "network operator" authority or higher.
You must understand the networking, requirements and service planning of the example.
Procedure
Step 1 Set LSR IDs.
1. In the NE Explorer, select NE1 and choose Configuration > MPLS Management > Basic
Configuration from the Function Tree.
2. Set LSR ID, Start of Global Label Space and Start of Multicast Label Space. Click
Apply.
3. In the NE Explorer of NE3, refer to the preceding two steps to set the parameters, such as
the LSR ID.
Parameter | Example Value | Principle for Value Selection
3. Click Apply. The Operation Result dialog box is displayed indicating that the operation
is successful.
4. Enable the IGP-ISIS protocol of the protection MPLS tunnel. In the NE Explorer, select
NE1 and choose Configuration > Control Plane Configuration > IGP-ISIS
Configuration from the Function Tree.
5. Click the Node Configuration tab page. Click New. Configure the related parameters in
the dialog box displayed.
6. Click the Port Configuration tab and then click New. Click Add in the dialog box
displayed. Then, select 2-EG2-1(Port-1) on the port tab page. Click OK.
7. Click Apply. The Operation Result dialog box is displayed indicating that the operation
is successful.
8. Choose Session Configuration and click Create. Set Opposite LSR ID to 1.0.0.3 in the
Create LDP Peer Entity dialog box. Click OK.
9. Configure the MPLS-LDP peer for the protection LDP tunnel. Choose Configuration >
Control Plane Configuration > MPLS-LDP Configuration from the Function Tree.
Click Port Configure and set Enable LDP of 2-EG2-1(Port-1) to Enabled.
10. Click Apply. The Operation Result dialog box is displayed indicating that the operation
is successful.
11. In the NE Explorer of NE3, refer to Step 2.1 through Step 2.3 to configure the static routes
for NE3.
Parameter | Example Value | Principle for Value Selection
12. In the NE Explorer of NE3, refer to Step 2.4 through Step 2.7 to enable the IGP-ISIS
protocol for NE3. The settings of the IS-IS protocol for NE3 are consistent with the settings
of the IS-IS protocol for NE1.
13. In the NE Explorer of NE3, refer to Step 2.8 through Step 2.10 to configure the peer of
NE3.
Parameter | Example Value | Principle for Value Selection
3. On the physical topology, double-click NE1 and NE3 and set relevant parameters in the
NE list.
5. Select Deploy and click Apply. In the dialog box displayed, click Close.
NOTE
If you select Deploy, the created tunnel is saved on the U2000 and applied to the corresponding NEs.
By default, Deploy is selected.
3. On the physical topology, double-click NE1 and NE3 and set relevant parameters in the
NE list.
4. Click Details and set EXP of the forward and reverse tunnels to 2.
5. Select Deploy and click Apply. In the dialog box displayed, click Close.
NOTE
If you select Deploy, the created tunnel is saved on the U2000 and applied to the corresponding NEs.
By default, Deploy is selected.
----End
By using a service template, you can create services more quickly and easily. You can customize
a service template according to actual O&M requirements.
Prerequisite
You must be an NM user with "NE operator" authority or higher.
Procedure
Step 1 Choose Service > Service Template from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Click Create and select the required type of the service template from the drop-down list.
Step 4 Set the parameters relevant to the service template.
NOTE
To set the new service template as the default template, select Set as Default Template.
----End
Prerequisite
You must be an NM user with "NE operator" authority or higher.
Context
The following example describes how to create a tunnel service by using an RSVP TE Tunnel
Template.
Procedure
Step 1 Choose Service > Tunnel > Create Tunnel from the main menu.
Step 2 Configure the general information about a tunnel.
1. Set Protocol Type to MPLS and set Signaling Type to RSVP TE.
2. Click . In the dialog box displayed, select the service template to be used.
3. Click OK. A dialog box is displayed, indicating that the parameters not contained in the
new template may be lost.
4. Click Confirm.
5. Select Create Reverse Tunnel or Configure As Bypass Tunnel as required.
Step 3 Configure the NE list. Select source and sink NEs. In NE List, set the location of an NE in a
tunnel as follows:
You can select an NE by using any of the following three methods:
l Method 1: On the physical topology in the upper right portion, select an NE, right-click, and
choose Add from the shortcut menu.
l Method 2: On the physical topology in the upper right portion, double-click an NE.
l Method 3:
1. Click Add and select NE from the drop-down list.
2. In the dialog box displayed, select an NE and click OK.
Step 4 Select Deploy and click OK.
NOTE
If Deploy is not selected, the tunnel is saved only on the U2000. If Deploy is selected, the tunnel is saved
on the U2000 and delivered to the corresponding NEs. By default, Deploy is selected.
When Deploy is selected, Enable is selected accordingly. A tunnel on the NE side can be used only when
the tunnel is enabled.
----End
Procedure
Step 1 Choose Service > Service Resource > Common Resource Management from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
NOTE
You can set the filter criteria, such as Resource Type and NE Name. In this manner, only the information
meeting the filter criteria is displayed in the query result area.
Step 3 In the query result area, you can view the Resource Type, Resource Value, and Service
Sum information about a resource.
Step 4 After selecting a resource that is already added to a service, you can click the Details tab to view
the details about the resource, such as Resource Value, NE Name, Service Name, Service
Type, Customer, and Service Deployment Status.
Step 5 On the Details tab page, right-click a resource and choose View Service from the shortcut menu.
The service management user interface for the service corresponding to the selected resource is
displayed.
Step 6 Optional: Click Print to set the print parameters and print the related data on the current user
interface.
Step 7 Optional: Click Save to export all the service resources in the query result area to a file of the
specified format.
----End
Procedure
Step 1 Choose Service > Service Resource > SAI from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
NOTE
You can set the filter criteria, such as Service Type and NE name. In this manner, only the information
meeting the filter criteria is displayed in the query result area.
Step 3 In the query result area, you can view the details about the interface.
Step 4 Right-click an interface that is already bound to a service and choose View Service from the
shortcut menu. The service management user interface for the service corresponding to the
selected interface is displayed.
Step 5 Optional: Click Print to set the print parameters and print the related data on the current user
interface.
Step 6 Optional: Click Save to export all the interface resources in the query result area to a file of the
specified format.
----End
6.1.1 Introduction
In a packet switched network (PSN), PWE3 is a Layer 2 service bearing technology that emulates
as faithfully as possible the basic behaviors and characteristics of ATM services, Ethernet
services, low-rate time division multiplexing (TDM) circuit services, and other services. Such
a technology can interconnect the traditional network with the PSN network to share resources
and expand the network.
6.1.2 Reference Standards and Protocols
This topic describes the compliant standards and protocols for various technologies used in the
PWE3.
6.1.3 Principle
This topic describes the basic principle and various technologies used to implement the PWE3.
6.1.4 Overview of IP Line
IP line services are private line services provided by the PTN equipment. In the case of IP line
services, IP packets are encapsulated into PWs for transmission.
6.1.5 Principle of IP Line
The PTN equipment supports UNI-NNI IP line services and transports the services in a point-
to-point manner. In addition, the PTN equipment supports dual-homing protection for IP line
services.
6.1.6 The Application of PWE3 Service
This topic describes a typical application of the PWE3.
6.1.1 Introduction
In a packet switched network (PSN), PWE3 is a Layer 2 service bearing technology that emulates
as faithfully as possible the basic behaviors and characteristics of ATM services, Ethernet
services, low-rate time division multiplexing (TDM) circuit services, and other services. Such
a technology can interconnect the traditional network with the PSN network to share resources
and expand the network.
Definition
PWE3 is a Layer 2 service bearing technology, mainly used to emulate essential behaviors and
characteristics of services such as ATM, frame relay, Ethernet, low-rate TDM circuit, and
synchronous optical network (SONET)/synchronous digital hierarchy (SDH) as faithfully as
possible in a PSN.
PWE3 is a point-to-point L2VPN (Layer 2 Virtual Private Network) technology. PWE3 has the
following features: it adds new signaling, reduces the cost of signaling, regulates the
auto-negotiation mode of multiple hops, and achieves flexible networking. The PWE3 protocol can
reduce packet exchange and avoid repeated PW creations and deletions caused by network instabilities.
Objectives
As the IP network develops, it offers great compatibility and strong capabilities for expansion,
upgrade, and interoperation. The traditional communication network, in contrast, has poor
capabilities for expansion, upgrade, and interoperation and is restricted by its transmission
mode and service types. In addition, newly built networks support few services and are unsuitable
for interoperation management. Hence, during the upgrade and expansion of traditional
communication networks, you should consider whether to build duplicate networks or to use
existing or common network resources. PWE3 is a solution that combines traditional
communication networks with the existing packet networks.
PWE3 has certain advantages of MPLS L2VPN. In addition, PWE3 can be used to interconnect
traditional networks with PSNs. Hence, resources can be shared and networks can be expanded.
6.1.3 Principle
This topic describes the basic principle and various technologies used to implement the PWE3.
[Figure: PWE3 network model. CE1 (VPN1, Site1) and CE2 (VPN2, Site1) connect through ACs to PE1. PE1 connects across the MPLS network (through P) to PE2 by a tunnel that carries PWs set up by PW signaling. PE2 connects through ACs to CE3 (VPN1, Site2) and CE4 (VPN2, Site2). A forwarder on each PE maps ACs to PWs.]
The VPN1 packet flow from CE1 to CE3 is taken as an example. The basic data flow is as
follows:
l Layer 2 packets are sent to CE1 first, and the packets gain access to PE1 through the link.
l After PE1 receives the packets, the forwarder selects the PWs for forwarding packets.
l PE1 generates two MPLS labels (a private network label and a public network label)
according to the PW forwarding table entries. The private network label is used to identify
the PW, and the public network label is used for a service to traverse over the tunnel to
PE2.
l The Layer 2 packets reach PE2 through the public network. The private network label is then
popped (on the P equipment, the public network label is popped at the penultimate
hop).
l The forwarder of PE2 selects the link for forwarding packets, and then forwards the Layer
2 packets to CE3.
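The double-label data flow above can be sketched as follows. This is a minimal illustration only: the function names, label values, and PW forwarding table entry are assumptions for this sketch, not U2000 or PE implementation details.

```python
# Sketch of the PWE3 double-label data flow from CE1 to CE3.
# All names and label values are illustrative.

def pe1_ingress(l2_packet, pw_label=100, tunnel_label=200):
    """PE1 pushes the private (PW) label, then the public (tunnel) label."""
    return [tunnel_label, pw_label, l2_packet]

def p_penultimate_hop(labeled):
    """The P node pops the public tunnel label at the penultimate hop."""
    return labeled[1:]  # [pw_label, l2_packet]

def pe2_egress(labeled):
    """PE2 pops the PW label and uses it to select the AC toward CE3."""
    pw_label, l2_packet = labeled
    ac = {100: "AC-to-CE3"}[pw_label]  # assumed PW forwarding table entry
    return ac, l2_packet

packet = "L2 frame from CE1"
ac, delivered = pe2_egress(p_penultimate_hop(pe1_ingress(packet)))
```

Note that the private network label survives end to end while the public label only carries the packet across the tunnel, matching the two roles described above.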
[Figure: Static and dynamic PWs. In both cases, CE1 and CE2 are connected through a PW between PE1 and PE2 across the MPLS network (through P).]
Static PW
The static PW does not use the signaling protocol for parameter negotiation. The information
required by the static PW is manually specified through commands, and the data is transmitted
between PEs through the tunnel.
Dynamic PW
The dynamic PW is a PW constructed through a signaling protocol. The U-PE exchanges PW
labels through LDP and binds the corresponding CE through the PW ID. After the tunnel that
connects the two PEs is successfully constructed and the label exchange and binding are complete,
if the link between the two PEs is up, a PW is constructed.
l Multi-hop extension
The multi-hop PW function is added, which extends the network mode.
The multi-hop PW lowers the requirement on the number of LDP sessions of the access
equipment, that is, it lowers the overheads of the LDP sessions on the access nodes.
Multi-hop access nodes meet the PW convergence requirement, which improves
network flexibility and is applicable to different network levels (access, convergence, and core).
l TDM interface extension
Supports more telecommunication low-speed TDM interfaces. The functions of TDM
packet sequencing, and clock extraction and synchronization are added through the control
word (CW) and the forwarding plane Real-time Transport Protocol (RTP).
The advantages of the low-speed TDM interfaces are as follows:
The encapsulation type is added to support the encapsulation of low-speed TDM services.
The PSTN, TV, and data networks can be integrated.
They provide a substitute for the traditional DDN service.
l Other extensions
Other extensions at the control plane are as follows:
The negotiation mechanism of the fragmentation capability is added to the control plane.
The PW connectivity check, such as the virtual circuit connectivity verification (VCCV)
and PW operation administration and maintenance (OAM), is added, which improves
the quick convergence capability and reliability of the network.
VCCV
Virtual circuit connectivity verification (VCCV) is a technology that is used to verify and
diagnose the connectivity of a PW forwarding trail.
VCCV is an end-to-end PW fault detection and diagnosis mechanism. That is, the VCCV is the
control channel on which connectivity verification messages are sent between the PW ingress
and egress nodes.
The objective of the VCCV is to verify and further diagnose the connectivity of the PW
forwarding trail.
The VCCV ping is a tool that helps you manually check the connection status of a virtual
circuit. VCCV ping is implemented through the extended LSP ping. VCCV defines a
series of messages exchanged between PEs to verify the connectivity of the PW. To ensure that
the VCCV packets and data packets in the PW pass through the same trail, the VCCV
packets and the PW packets must have the same encapsulation mode and pass through the same
tunnel.
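The same-trail requirement above can be illustrated with a minimal sketch. The field names and label values are assumptions for illustration, not the VCCV packet format defined in the standards.

```python
# Sketch: VCCV verification packets must use the same encapsulation and
# tunnel as PW data packets so that they traverse the same trail.
# Field names and label values are illustrative.

def encapsulate(payload, tunnel_label, pw_label):
    """Wrap a payload with the tunnel (public) and PW (private) labels."""
    return {"tunnel": tunnel_label, "pw": pw_label, "payload": payload}

def same_trail(pkt_a, pkt_b):
    """Two packets follow the same trail when both labels match."""
    return pkt_a["tunnel"] == pkt_b["tunnel"] and pkt_a["pw"] == pkt_b["pw"]

data_pkt = encapsulate("user data", tunnel_label=200, pw_label=100)
vccv_pkt = encapsulate("VCCV ping", tunnel_label=200, pw_label=100)
```

If the VCCV packet were sent with different labels, a successful check would say nothing about the trail the data actually takes, which is why identical encapsulation is mandatory.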
[Figure: Multi-hop PW. A dynamic PW from U-PE1 to the S-PE is stitched to a static PW from the S-PE to U-PE2 (across P1 and P2), connecting CE-A to CE-B.]
PW Protection
To implement quick data switching, the PW protection mechanism ensures that services can be
quickly switched to another PW when one PW fails.
PW Redundancy
As shown in Figure 6-5, CE1 is connected to PE1 through a single link. CE2 is connected to
PE2 and PE3 in dual-homing mode.
NOTE
PWs between PE equipment must be created by using the LDP signaling.
l Create a PW between PE1 and PE3. This PW is the working PW.
l Create a PW between PE1 and PE2. This PW is the protection PW.
l Detect faults between CE and PE.
l When the active trail CE2- PE3- PE1- CE1 is faulty, the service traffic can be quickly
switched to the standby trail CE2- PE2- PE1- CE1.
l After the fault on the active trail CE2- PE3- PE1- CE1 is rectified, the service traffic is
switched to the original trail.
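The redundancy behavior above can be modeled as a small state machine: traffic uses the working PW, switches to the protection PW on an OAM-detected fault, and reverts once the fault is rectified. The class and PW names are illustrative, not part of the U2000.

```python
# Sketch of PW redundancy: working/protection PW selection driven by PW OAM.
# Names are illustrative.

class PwRedundancy:
    def __init__(self):
        self.working_ok = True

    def active_pw(self):
        """Return the PW currently carrying service traffic."""
        if self.working_ok:
            return "PW(PE1-PE3, working)"
        return "PW(PE1-PE2, protection)"

    def oam_fault(self):
        """PW OAM reports a fault on the working trail: switch."""
        self.working_ok = False

    def oam_recovered(self):
        """Fault rectified: revert to the original (working) trail."""
        self.working_ok = True

group = PwRedundancy()
before = group.active_pw()
group.oam_fault()
during = group.active_pw()
group.oam_recovered()
after = group.active_pw()
```

This models the revertive behavior described above, where traffic returns to the original trail after the fault is cleared.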
[Figures: PW redundancy networking. CE1 connects to PE1; CE2 connects to PE2 and PE3 in dual-homing mode, with a working PW (PE1-PE3) and a protection PW (PE1-PE2).]
Backup Protection
As shown in Figure 6-7, CE1 is connected to PE1 and CE2 is connected to PE2.
[Figure 6-7: Backup protection. CE1 connects to PE1 and CE2 connects to PE2, with a working PW and a protection PW between PE1 and PE2.]
PW APS Protection
As shown in Figure 6-8, CE1 is connected to PE1 and CE2 is connected to PE2 and PE3.
l Between PE1 and PE2, create a PW.
l Between PE1 and PE3 and between PE2 and PE3, create PWs.
l When trail CE1- PE1- PE2- CE2 is faulty, the service traffic can be quickly switched to
the protection trail CE1- PE1- PE3- PE2- CE2.
[Figure 6-8: PW APS protection. CE1 connects to PE1; CE2 connects to PE2 and PE3. The working PW runs between PE1 and PE2, and the protection trail passes through PE3.]
Definition
ATM cell transparent transmission is a technology that is used to bear ATM cells on the PWE3
virtual circuit.
Objective
The ATM cell transparent transmission uses the PSN to connect traditional ATM
network resources and emulates traditional ATM services on the PSN. In this case,
traditional ATM network services are emulated as faithfully as possible when traversing the PSN,
so end users can hardly perceive any difference, and the existing investment of
customers and operators is fully utilized during network integration and construction.
The ATM cell transparent transmission in N-to-1 VCC mode can be classified into remote
ATM cell transparent transmission in N-to-1 VCC mode and local ATM cell transparent
transmission in N-to-1 VCC mode.
l ATM cell transparent transmission in 1-to-1 virtual path connection (VPC) mode
In this mode, a PW bears an ATM VPC cell. This mode supports all AAL types. Compared
with the ATM cell transparent transmission in 1-to-1 VCC mode, the tunnel packet of this
mode contains only the value of VCI. The output equipment then determines the destination
CE based on the value of VCI.
Because a PW bears only one ATM VPC cell, the PVCs for the PEs are mapped through
the PW, that is, the MPLS PW functions as the ATM switch to support the VPI switching
without configuring the switching relation on the PE.
The ATM cell transparent transmission in 1-to-1 VPC mode can be classified into remote
ATM cell transparent transmission in 1-to-1 VPC mode and local ATM cell transparent
transmission in 1-to-1 VPC mode.
l ATM cell transparent transmission in N-to-1 VPC mode
In this mode, a PW bears multiple ATM VPC cells. This mode supports all AAL types.
Because a PW bears multiple ATM VPC cells, the tunnel packet contains the value of VPI
and VCI. The encapsulation modes of the ATM cell transparent transmission in N-to-1
VPC and N-to-1 VCC modes are the same.
The ATM cell transparent transmission in N-to-1 VPC mode can be classified into remote
ATM cell transparent transmission in N-to-1 VPC mode and local ATM cell transparent
transmission in N-to-1 VPC mode.
The encapsulation modes of the ATM cell transparent transmission are as follows:
l 1-to-1
l N-to-1
The ATM cell transparent transmission has the following transparent transmission modes:
l Cell
l Frame
Table 6-1 describes the features of the ATM cell transparent transmission services of different
levels.
N-to-1 VCC: cell transparent transmission; supports all AAL types; VC level. Contains the VPI and VCI. The control word (CW) is optional. Supports VPI/VCI switching.
1-to-1 VCC: cell transparent transmission; supports all AAL types; VC level. Contains neither the VPI nor the VCI. The CW is mandatory. Supports VPI/VCI switching.
N-to-1 VPC: cell transparent transmission; supports all AAL types; VP level. Contains the VPI and VCI. The CW is optional.
1-to-1 VPC: cell transparent transmission; supports all AAL types; VP level. Contains the VCI but not the VPI. The CW is mandatory.
Interface transparent transmission: cell transparent transmission; supports all AAL types; interface level. Contains the VPI and VCI. The CW is optional.
VCC cell transparent transmission: Virtual channel connection (VCC), a basic unit on the ATM
network. Applicable to transmission of various ATM network services.
VPC cell transparent transmission: Virtual path connection (VPC), a group of VCCs with the same
destination. Applicable to transmission of various ATM network services, especially when multiple
services with the same destination exist in the transmission direction. VPC cell transparent
transmission is quicker and easier to manage and configure than VCC cell transparent transmission.
Whole port transparent transmission: Applicable to scenarios where the VP and VC do not need to
be processed and the equipment functions as an ATM transmission private line.
Table 6-3 describes the comparison between 1-to-1 and N-to-1 modes.
1-to-1: A VCC or VPC maps to one PW. Supports all AAL types. The VPI and VCI are not contained.
N-to-1: Multiple VCCs or VPCs map to one PW (N >= 1). Supports all AAL types. The VPI and VCI must be contained in the encapsulation regardless of whether N = 1 or N > 1.
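The mode comparison can be captured in a small lookup that keeps only the identifiers each mode carries. The dictionary and function below are an illustrative simplification, not the actual cell encapsulation defined in RFC 4717.

```python
# Sketch of which ATM identifiers are carried in the PW packet for each
# transparent-transmission mode, per the comparison above (simplified).

CARRIED_IDS = {
    "N-to-1 VCC": {"vpi", "vci"},  # CW optional
    "1-to-1 VCC": set(),           # CW mandatory; VPI/VCI implied by the PW
    "N-to-1 VPC": {"vpi", "vci"},  # CW optional
    "1-to-1 VPC": {"vci"},         # CW mandatory; VPI implied by the PW
}

def encapsulate_cell(mode, vpi, vci, payload):
    """Keep only the identifiers the given mode carries in the PW packet."""
    header = {k: v for k, v in {"vpi": vpi, "vci": vci}.items()
              if k in CARRIED_IDS[mode]}
    return {"header": header, "payload": payload}

pkt = encapsulate_cell("1-to-1 VPC", vpi=8, vci=35, payload=b"\x00" * 48)
```

In the 1-to-1 modes the PW itself identifies the connection, so the omitted identifiers are restored by the egress PE from its PW configuration rather than carried in every packet.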
Packet Encapsulation on an AC
Packet encapsulation mode on an AC is determined by the user access mode. User access modes
can be VLAN access and Ethernet access. Each user access mode is described as follows:
l VLAN access: In VLAN access mode, the header of each Ethernet frame sent between CEs
and PEs carries a VLAN tag. This tag is a service delimiter that is used to identify users in
an ISP network. It is called provider-tag (P-tag).
l Ethernet access: In Ethernet access mode, the header of each Ethernet frame sent between
CEs and PEs does not carry any P-tag. If the frame header carries a VLAN tag, the VLAN
tag is the internal VLAN tag of the user packet, and is called user-tag (U-tag). The U-tag
is carried in a packet before the packet is sent to a CE and is thus not added by the CE. The
U-tag is used by the CE to identify which VLAN the packet belongs to, and is meaningless
to PEs.
Packet Encapsulation on a PW
Packet encapsulation modes on a PW can be Raw mode and Tagged mode, as follows:
l Raw mode
The P-tag is not transmitted on the PW. If a PE receives the packet with a P-tag from a CE,
the PE strips the P-tag, adds double MPLS labels (outer label and inner label) to the packet,
and then forwards the packet. If a PE receives the packet without a P-tag from a CE, the
PE directly adds double MPLS labels to the packet, and then forwards the packet. If a PE
sends a packet to a CE, the PE adds or does not add the P-tag to the packet as required, and
then forwards the packet to the CE. Note that the PE is not allowed to rewrite or remove
any existing tag.
l Tagged mode
The frame sent to a PW must carry the P-tag. If a PE receives the packet with a P-tag from
a CE, the PE directly adds double MPLS labels to the packet without stripping the P-tag,
and then forwards the packet; if a PE receives the packet without a P-tag from a CE, the
PE adds a null tag and double MPLS labels to the packet, and then forwards the packet. If
a PE sends a packet to a CE, the PE rewrites, removes, or preserves the service delimiter
of the packet as required, and then forwards the packet to the CE.
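The ingress rules for the two modes can be sketched as follows. The frame model (a list of tags and fields) and the label names are simplified assumptions for illustration, not a PE implementation.

```python
# Sketch of ingress P-tag handling for raw and tagged PW modes, followed by
# the double MPLS label push. Frame model and names are illustrative.

def pe_ingress(frame_tags, mode, labels=("tunnel", "vc")):
    """Apply raw/tagged P-tag handling, then push the two MPLS labels."""
    tags = list(frame_tags)
    if mode == "raw":
        if "p-tag" in tags:
            tags.remove("p-tag")        # strip the service delimiter
    elif mode == "tagged":
        if "p-tag" not in tags:
            tags.insert(0, "null-tag")  # add a null tag so the PW frame is tagged
    return list(labels) + tags

raw_out = pe_ingress(["p-tag", "payload"], "raw")
tagged_out = pe_ingress(["payload"], "tagged")
```

The key asymmetry shown here matches the text: raw mode guarantees no P-tag inside the PW, while tagged mode guarantees exactly one (possibly null) P-tag.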
l User: Services gain access to the AC in Ethernet access mode. The outermost C-VLAN
tag or S-VLAN tag of a user packet functions as the user VLAN tag (U-TAG) for the
forwarding of the user packet.
l Service: Services gain access to the AC in VLAN access mode. The outermost C-VLAN
tag or S-VLAN tag of a user packet functions as the service VLAN tag (P-TAG) and is not
involved in the forwarding of the user packet.
[Figure 6-9: Ethernet access with a raw-mode PW. On the AC between CE1 and PE1 and on the AC between PE2 and CE2, the frame is (L2 Header | User VLAN Tag | IP Header | Data). On the PW between PE1 and PE2, the frame is (L2 Header | Tunnel Label | VC Label | L2 Header | User VLAN Tag | IP Header | Data).]
As shown in Figure 6-9, when you set the service demarcation tag to User, the AC adopts the
Ethernet encapsulation mode and the PW adopts the raw mode. Therefore, packets transmitted
from the CE to the PE contain the user VLAN tags (U-TAGs) but no service VLAN tags (P-
TAGs).
Interaction of packets with U-TAGs in the Ethernet raw mode is described as follows:
1. CE1 transmits packets with Layer 2 encapsulation to PE1. The packets contain U-TAGs
but no P-TAGs.
2. When PE1 receives the packets that contain U-TAGs but no P-TAGs, PE1 considers the
U-TAGs as user data without processing them because the U-TAGs are useless to PE1.
3. When PE1 receives the packets that contain P-TAGs but no U-TAGs, PE1 deletes the P-
TAGs from the packets because PWs require raw encapsulation and frames transmitted in
the PWs cannot contain P-TAGs.
4. According to the routing table, PE1 selects tunnels and PWs for the packets.
5. According to the selected tunnels and PWs, PE1 directly adds two types of MPLS tags
(outer tunnel tags and inner VC tags) to the packets, performs Layer 2 encapsulation, and
then forwards the packets.
6. PE2 receives the packets from PE1 and decapsulates the packets. Specifically, PE2 strips
the Layer 2 encapsulation and the two MPLS tags from the packets.
7. PE2 transmits the decapsulated Layer 2 packets from CE1 to CE2. The packets contain U-
TAGs but no P-TAGs.
[Figure 6-10: Ethernet access with a tagged-mode PW. On the AC between CE1 and PE1 and on the AC between PE2 and CE2, the frame is (L2 Header | User VLAN Tag | IP Header | Data).]
As shown in Figure 6-10, when you set the service demarcation tag to User, the AC adopts the
Ethernet encapsulation mode and the PW adopts the tagged mode. Therefore, packets transmitted
from the CE to the PE contain the user VLAN tags (U-TAGs) but no service VLAN tags (P-
TAGs).
Interaction of packets with U-TAGs in the Ethernet tagged mode is described as follows:
1. CE1 transmits packets with Layer 2 encapsulation to PE1. The packets contain U-TAGs
but no P-TAGs.
2. When PE1 receives the packets that contain U-TAGs but no P-TAGs, PE1 considers the
U-TAGs as user data without processing them because the U-TAGs are useless to PE1.
3. When PE1 receives the packets that contain no P-TAGs, PE1 adds a null P-TAG to the
packets because PWs require tagged encapsulation and frames transmitted in the PWs must
contain P-TAGs.
4. According to the routing table, PE1 selects tunnels and PWs for the packets.
5. According to the selected tunnels and PWs, PE1 directly adds two types of MPLS tags
(outer tunnel tags and inner VC tags) to the packets, performs Layer 2 encapsulation, and
then forwards the packets.
6. PE2 receives the packets from PE1 and decapsulates the packets. Specifically, PE2 strips
the Layer 2 encapsulation, the two MPLS tags, and the null P-TAG added by PE1 from the
packets.
7. PE2 transmits the decapsulated Layer 2 packets from CE1 to CE2. The packets contain U-
TAGs but no P-TAGs.
[Figure 6-11: VLAN access with a raw-mode PW. On the AC between CE1 and PE1 and on the AC between PE2 and CE2, the frame is (L2 Header | Service VLAN Tag | IP Header | Data). On the PW between PE1 and PE2, the frame is (L2 Header | Tunnel Label | VC Label | L2 Header | IP Header | Data); the P-TAG is stripped.]
As shown in Figure 6-11, when you set the service demarcation tag to Service, the AC adopts
the VLAN encapsulation mode and the PW adopts the raw mode. Therefore, packets transmitted
from the CE to the PE contain the service VLAN tags (P-TAGs) but no user VLAN tags (U-
TAGs).
Interaction of packets with P-TAGs in the VLAN raw mode is described as follows:
1. CE1 transmits packets with Layer 2 encapsulation to PE1. The packets contain P-TAGs
but no U-TAGs.
2. When PE1 receives the packets that contain P-TAGs but no U-TAGs, PE1 deletes the P-
TAGs from the packets because PWs require raw encapsulation and frames transmitted in
the PWs cannot contain P-TAGs.
3. According to the routing table, PE1 selects tunnels and PWs for the packets.
4. According to the selected tunnels and PWs, PE1 directly adds two types of MPLS tags
(outer tunnel tags and inner VC tags) to the packets, performs Layer 2 encapsulation, and
then forwards the packets.
5. PE2 receives the packets from PE1 and decapsulates the packets. Specifically, PE2 strips
the Layer 2 encapsulation and the two MPLS tags from the packets and then adds the P-
TAGs that were deleted by PE1 to the packets.
6. PE2 transmits the decapsulated Layer 2 packets from CE1 to CE2. The packets contain P-
TAGs but no U-TAGs.
[Figure 6-12: VLAN access with a tagged-mode PW. On the AC between CE1 and PE1 and on the AC between PE2 and CE2, the frame is (L2 Header | Service VLAN Tag | IP Header | Data). On the PW between PE1 and PE2, the frame is (L2 Header | Tunnel Label | VC Label | L2 Header | Service VLAN Tag | IP Header | Data).]
As shown in Figure 6-12, when you set the service demarcation tag to Service, the AC adopts
the VLAN encapsulation mode and the PW adopts the tagged mode. Therefore, packets
transmitted from the CE to the PE contain the service VLAN tags (P-TAGs) but no user VLAN
tags (U-TAGs).
Interaction of packets with P-TAGs in the VLAN tagged mode is described as follows:
1. CE1 transmits packets with Layer 2 encapsulation to PE1. The packets contain P-TAGs
but no U-TAGs.
2. When PE1 receives the packets that contain P-TAGs but no U-TAGs, PE1 does nothing with
the P-TAGs in the packets because PWs require tagged encapsulation and frames
transmitted in the PWs must contain P-TAGs.
3. According to the routing table, PE1 selects tunnels and PWs for the packets.
4. According to the selected tunnels and PWs, PE1 directly adds two types of MPLS tags
(outer tunnel tags and inner VC tags) to the packets, performs Layer 2 encapsulation, and
then forwards the packets.
5. PE2 receives the packets from PE1 and decapsulates the packets. Specifically, PE2 strips
the Layer 2 encapsulation and the two MPLS tags from the packets.
6. PE2 transmits the decapsulated Layer 2 packets from CE1 to CE2. The packets contain P-
TAGs but no U-TAGs.
Feature Overview
With the growth of wireless networks, the number of base stations that support IP interfaces is
greatly increased, and therefore mobile backhaul networks need to access base station services
through IP packets.
If services are accessed through a traditional L3VPN solution, the restrictions are as follows:
l The access equipment at the edge of a backhaul network must have strong routing
capability. This increases the cost of the access equipment.
l An L3VPN network relies on dynamic routing protocols, and therefore networking is
complex and the protection mechanism cannot satisfy network requirements.
On a mobile backhaul network, the trail between a base station and an RNC is fixed. Therefore,
if you create IP line services between the base station and RNC, the services can fully satisfy
service bearing requirements. In the case of IP line services, IP packets are encapsulated into
PWs. In this manner, IP services from base stations are accessed. In addition, features of private
line services such as simple networking, easy management, and complete protection are
maintained.
Networking
As shown in Figure 6-13, an IP line service is created between the OptiX PTN 910/950 and
OptiX PTN 1900/3900/3900-8 for each base station.
The OptiX PTN 910/950 encapsulates IP packets from base stations into a PW, and sends the
PW over an IP line to the OptiX PTN 1900/3900/3900-8. The OptiX PTN 1900/3900/3900-8
decapsulates the packets and sends the packets to an RNC. In this manner, UNI-NNI service
transmission is implemented.
[Figure 6-13: IP line services. An IP line runs from each base station's access equipment (OptiX PTN 910/950) to the OptiX PTN 1900/3900/3900-8 that connects to the RNC.]
NOTE
IP line services for PTN equipment support the DHCP relay function. That is, a base station can obtain its
IP address through DHCP.
A complete protection mechanism for IP line services on PTN equipment is available. For details,
see Dual-Homing Protection for IP Line Services.
Implementation Principle
The IP line feature is based on the MPLS technology. In the case of IP line, the accessed IP
packets are encapsulated into PWs, and then the packets are transported in a point-to-point manner.
The PTN equipment supports UNI-NNI IP line services. Figure 6-14 shows the service
encapsulation process.
[Figure 6-14: IP line service encapsulation. At UNI A, the IP packet is carried over Ethernet; on the network side, the IP packet is encapsulated with a PW label and an MPLS label and carried over Ethernet; at UNI B, the IP packet is carried over Ethernet again.]
Normal Running
As shown in Figure 6-15, nodes A and B are connected through PW1. Nodes A and C are
connected through PW2. PW OAM is enabled for PW1 and PW2 to detect PW faults.
In normal cases, packets are sent to node B over PW1 and then to the RNC.
[Figure 6-15: Normal running. The service route from node A runs over PW1 to node B and then to the RNC; PW2 connects node A to node C.]
Equipment Fault
Figure 6-16 shows the situation where switching occurs when node B is faulty.
Figure 6-16 Dual-homing protection switching for IP line services in case of an equipment fault
[The figure shows nodes A, B, and C, with PW1 (A-B) and PW2 (A-C), before and after the switching.]
l When node B is faulty, node A detects the fault through PW OAM, and then node A switches
to PW2.
l Node C detects the fault of node B through the routing protocol, and then node C updates
the route information and accepts the packets sent by node A.
l The route of services from NodeB changes to A-C-RNC.
Link Fault
Figure 6-17 shows the situation where switching occurs when the link between nodes A and B
is faulty.
Figure 6-17 Dual-homing protection switching for IP line services in case of a link fault
[The figure shows nodes A, B, and C, with PW1 (A-B) and PW2 (A-C), before and after the switching.]
l Node A detects that PW1 is faulty through PW OAM, and therefore node A switches
services to PW2.
l Through the routing protocol, node B updates route information and accepts the packets
sent by node C.
l The route of services from NodeB changes to A-C-B-RNC.
To prevent service interruption over the link between node B and the RNC or between node C
and the RNC, you can configure VRRP protection for the RNC. For details on VRRP, see VRRP.
[Figure 6-18: PWE3 single-hop mobile bearer network. A BTS accesses a PE through an E1 interface, and a Node B accesses PEs through IMA E1 and FE interfaces. Three PWs (PW1, PW2, and PW3) carry the services across the PSN to CEs that connect to the BSC, RNC, and NMS; a BITS clock source is also shown.]
Figure 6-18 shows a PWE3 single-hop mobile carrier network. On this network, the following
types of services are transmitted:
l BTS is connected to the PSN network through the E1 interface and TDM signals are
transmitted to the BSC by using CES services.
l Node B is connected to the PSN network through the IMA E1 interface and ATM cells are
transmitted to the RNC by using ATM services.
l Node B is connected to the PSN network through the FE interface and Ethernet packets are
transmitted to the NMS by using Ethernet services.
All the preceding services are emulated by using the PWE3 technology and transmitted on PSN
networks. By using the PWE3 technology, carriers can smoothly migrate original access schemes
to PSN networks. This helps reduce repeated network construction and lower the OPEX.
[Flowchart: configuration process for CES services, from creating a network through configuring tunnels to configuring the CES service.]
Operation and description:
1. Create a network: Create the NEs, configure the NE data, and create fibers.
2. Set the NE LSR ID: Specify the LSR ID for each NE that a service traverses and the start
value of the global label space. Each LSR ID must be unique on a network.
3. Configure the network-side interface: Set the basic attributes and Layer 3 attributes (such as
the tunnel enabling status and IP address) for the interface that bears tunnels.
4. Configure the control plane: Set the associated protocol parameters of the control plane for
creating tunnels.
l To create a static MPLS tunnel to bear the CES service, you do not need to set the associated
parameters of the control plane.
l To create a dynamic MPLS tunnel to bear the CES service, you need to set the IGP-ISIS
protocol parameters and the MPLS-RSVP protocol parameters. To create a dynamic PW to
bear services, you need to set the IGP-ISIS and MPLS-LDP protocol parameters.
l To create an IP tunnel or GRE tunnel to bear the CES service, you need to add a static route.
6. Configure the service interface: Use the CD1 board or tributary card to access the base station
services.
7. Configure the CES service:
1. Create the CES service, including setting the service ID and service name.
2. Set the source and sink information, including setting the board and channel.
3. Configure the PW, including setting the PW type, label, and tunnel type.
4. Configure the advanced attributes, including setting the jitter buffer time, packet loading
time, and clock mode.
[Flowchart: configuration process for ATM services, from creating a network through configuring the network-side interface and tunnels to configuring the ATM service.]
1. Create a network: Create the NEs, configure the NE data, and create fibers.
2. Configure the LSR ID: Specify the LSR ID for each NE that a service traverses and the start
value of the global label space. Each LSR ID must be unique on a network.
3. Configure the network-side interface: Set the basic attributes and Layer 3 attributes (such as
the tunnel enabling status and IP address) for the interface that bears tunnels.
4. Configure the control plane: Set the associated protocol parameters of the control plane for
creating tunnels.
l To create a static MPLS tunnel to bear the ATM service, you do not need to set the associated
parameters of the control plane.
l To create a dynamic MPLS tunnel to bear the ATM service, you need to set the IGP-ISIS
protocol parameters and the MPLS-RSVP protocol parameters. To create a dynamic PW to
bear the service, you need to set the IGP-ISIS and MPLS-LDP protocol parameters.
l To create an IP tunnel or GRE tunnel to bear the ATM service, you need to add a static route.
6. Configure the ATM policy: The ATM policy is used to perform traffic management on the
ATM service.
7. Configure the ATM interface: The ATM interface is used to access the base station services.
8. Configure the UNIs-NNI ATM service:
1. Create the ATM service, including setting the service ID and service name, and selecting
the service type and connection type.
2. Configure the connection, including setting the source information, PW ID, sink
information, and policy.
3. Configure the PW, including setting the PW type, label, and tunnel type.
4. Configure the CoS mapping and CoS policy of the PW.
[Flowchart: configuration process for E-Line services, from creating a network through configuring the network-side interfaces and tunnels to configuring the E-Line service.]
1. Create Network: Complete creating NEs, configuring NE data, creating fibers, and configuring clocks.
2. Configure the LSR ID: Specify the LSR ID for each NE that a service traverses and the start value of the global label space. Each LSR ID must be unique on the network.
3. Configure the network-side interface: Set the basic attributes and Layer 3 attributes (such as the tunnel enabling status and IP address) for the interface that bears tunnels.
4. Configure the control plane: Set the associated protocol parameters of the control plane for creating tunnels.
- To create a static MPLS tunnel to bear the E-Line service, you do not need to set the associated parameters of the control plane.
- To create a dynamic MPLS tunnel to bear the E-Line service, you need to set the following parameters:
  1. IGP-ISIS protocol parameters
  2. MPLS-RSVP protocol parameters
  To create a dynamic PW to bear services, you need to set the IGP-ISIS and MPLS-LDP protocol parameters.
- To create an IP tunnel or GRE tunnel to bear the E-Line service, you need to add a static route.
6. Configure the QoS policy: The QoS policy is used to perform traffic management on the E-Line service.
7. Configure the user-side interface: The user-side interface is used to access base station services.
8. Configure the UNI-NNI E-Line service carried by the PW in per-trail mode:
   1. Create the E-Line service, including setting the service ID and service name, and selecting the service type and bearer type.
   2. Configure the PW, including setting the PW type, label, and tunnel type.
   3. Configure the QoS, including setting the UNI and QoS of the PW.
(Flowchart: Start → Create a network → Configure interfaces → Configure a Layer 3 virtual interface → Configure an IP line service → Configure a QoS policy → End; mandatory and optional steps are marked.)
2. Configure interfaces:
- Configure a UNI port, which is used for service access from a base station.
- Configure an NNI port. That is, set the general attributes and Layer 3 attributes (such as Enable Tunnel and IP Address) for the port so that the port can carry tunnels.
3. Configure a Layer 3 virtual interface: Configure a Layer 3 virtual interface as the sink port for the IP line service.
4. Configure a static MPLS tunnel: An IP line service can be carried only by a static MPLS tunnel.
- You can create a static MPLS tunnel site by site or end to end. When creating a static MPLS tunnel, you need to set the signaling type to static and specify the service name, ingress node, egress node, and transit nodes.
5. Configure an IP line service:
   1. Create an IP line service. That is, set the service ID and specify the service name.
   2. Set the source and sink. That is, choose boards and a tunnel.
   3. Configure a PW. That is, set the PW type, PW label, and tunnel type.
   4. Set advanced attributes. That is, set parameters such as QoS for the UNI port.
PW Redundancy Protection
The PW redundancy protection can be implemented either in the single source and dual sink mode or in the dual source and single sink mode. To configure the single source and dual sink mode shown in Figure 6-23, you need to set PE1 as the source, PE3 as the working sink, and PE2 as the protection sink by using the NMS. To configure the dual source and single sink mode shown in Figure 6-24, you need to set PE3 as the sink, PE1 as the working source, and PE2 as the protection source by using the NMS.
(Figure 6-23: Single source and dual sink topology, with a working PW from source PE1 to working sink PE3 and a protection PW to protection sink PE2.)
(Figure 6-24: Dual source and single sink topology, with a working PW from working source PE1 and a protection PW from protection source PE2 to sink PE3.)
Figure 6-25 shows the process of configuring the PW redundancy protection. In the PWE3
service creation window, set Protection Type to PW redundancy. After the protection is
configured, proceed with the configuration of other parameters. A PWE3 service with the dual-
homing protection is created successfully.
(Figure: Dual-homing protection topology for CEs symmetric access, with CEs dual-homed to PE1/PE2 and PE3/PE4 and working and protection PWs between them.)
Figure 6-27 shows the process of configuring the dual-homing protection for CEs symmetric access. In the PWE3 service creation window, set Protection Type to Dual-Homing protection for CEs symmetric access. After the protection is configured, proceed with the configuration of other parameters. A PWE3 service with the dual-homing protection is created successfully.
Figure 6-27 Process of configuring the dual-homing protection for CEs symmetric access
(Flowchart: Start → Configure Working Source → Configure Protection Source → Configure Protection Sink → End.)
PW Backup Protection
To configure the PW backup protection shown in Figure 6-28, you need to set PE1 as the FRR source and PE2 as the FRR sink.
(Figure 6-28: PW backup protection topology, with a working PW and a protection PW from source PE1 to sink PE2.)
Figure 6-29 shows the process of configuring the PW backup protection. In the PWE3 service
creation window, set Protection Type to PW backup protection. After the protection is
configured, proceed with the configuration of other parameters. A PWE3 service with the PW
backup protection is created successfully.
(Figure 6-29 flowchart: Configure Source → Configure Sink → End.)
PW APS Protection
The PW APS protection can be implemented either in the single source and dual sink mode or in the dual source and single sink mode. To configure the single source and dual sink mode shown in Figure 6-30, you need to set PE1 as the source, PE2 as the working sink, and PE3 as the protection sink. To configure the dual source and single sink mode shown in Figure 6-31, you need to set PE1 as the working source, PE2 as the sink, and PE3 as the protection source.
(Figure 6-30: Single source and dual sink topology for PW APS, with a working PW from source PE1 to working sink PE2 and a protection PW to protection sink PE3.)
(Figure 6-31: Dual source and single sink topology for PW APS, with a working PW from working source PE1 and a protection PW from protection source PE3 to sink PE2.)
Figure 6-32 shows the process of configuring the PW APS protection. In the PWE3 service
creation window, set Protection Type to PW APS protection. After the protection is
configured, proceed with the configuration of other parameters. A PWE3 service with the PW
APS protection is created successfully.
This topic describes how to manage ATM connections, including the operations of adding and
deleting an ATM connection.
Prerequisite
- You must be an NM user with "network operator" authority or higher.
- The DCN function must be disabled for the port that carries the CES service.
- The CES service interface must be configured. Specifically, the interface mode must be set to Layer 1, and the frame format and frame mode of the interface must be configured.
- If the service needs to be carried by an MPLS tunnel, you must configure the tunnel first.
- If the service needs to be carried by an IP or GRE tunnel, you must configure the tunnel first.
- To create the dynamic PW to bear the service, you need to set the IGP-ISIS and MPLS-LDP protocol parameters.
Context
When the interface is used to carry the CES service, you need to set the frame format to ensure that it is the same as the service encapsulation format. When the emulation mode of a CES service is CESoPSN, it is recommended that you set the frame format at the interface to CRC-4 multiframe. When the emulation mode of a CES service is SATop, the frame format at the interface should be set to non-framing.
When the UNI interface is used to carry the CES service, you need to set the frame mode.
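The frame-format rule above can be sketched as a small lookup. This is an illustrative Python sketch only; the function and value names are assumptions, not U2000 interfaces.

```python
# Illustrative sketch of the guide's rule for matching the interface frame
# format to the CES emulation mode (CESoPSN -> CRC-4 multiframe,
# SATop -> non-framing). Names are hypothetical, not U2000 API names.

def recommended_frame_format(emulation_mode: str) -> str:
    """Return the frame format the guide recommends for a CES emulation mode."""
    formats = {
        "CESoPSN": "CRC-4 multiframe",  # structure-aware: framing is preserved
        "SATop": "non-framing",         # structure-agnostic: raw bit stream
    }
    try:
        return formats[emulation_mode]
    except KeyError:
        raise ValueError(f"unknown CES emulation mode: {emulation_mode}")

print(recommended_frame_format("CESoPSN"))  # CRC-4 multiframe
print(recommended_frame_format("SATop"))    # non-framing
```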
Procedure
Step 1 Choose Service > PWE3 Service > Create PWE3 Service from the main menu.
Step 2 Set the parameters on the General Attributes tab page.
NOTE
- You can use a template to configure a service. Specifically, you can select a template in the Service Template field. Alternatively, you can create another template.
- Set Service Type to CES.
- If you set Protection Type to PW redundancy or PW APS protection, select Single source and dual sink or Dual source and single sink in the Node List. You need to configure one source node and two sink nodes for Single source and dual sink, and two source nodes and one sink node for Dual source and single sink. One of the corresponding two PWs is the working trail and the other is the protection trail. PW APS protection can also be set to Single source and single sink.
- If Protection Type is CE Dual-homing protection for CEs of symmetric access, you need to configure two source nodes and two sink nodes. The corresponding two PWs protect each other.
- If Protection Type is PW backup protection, two dynamic PWs are automatically created between the source node and sink node. The two PWs protect each other.
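The node-count rules in the note above can be expressed as a simple consistency check. The following is an illustrative Python sketch; the names and the rule table are restatements of the note, not part of the U2000.

```python
# Hedged sketch: node-count rules for each protection type, as stated in the
# note above. Names are illustrative, not U2000 identifiers.

RULES = {
    # protection type: (number of source nodes, number of sink nodes)
    "Single source and dual sink": (1, 2),
    "Dual source and single sink": (2, 1),
    "CE dual-homing for CEs of symmetric access": (2, 2),
    "PW backup protection": (1, 1),
}

def node_list_is_valid(protection: str, sources: int, sinks: int) -> bool:
    """Check whether a node list matches the rule for the protection type."""
    expected = RULES.get(protection)
    return expected is not None and expected == (sources, sinks)

print(node_list_is_valid("Single source and dual sink", 1, 2))  # True
print(node_list_is_valid("Dual source and single sink", 1, 2))  # False
```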
2. Select a source NE from Physical Topology on the left. Then, the selected NE is displayed
in the upper-right pane.
3. In the right portion of NE Panel, all slots and available cards of the NE are displayed.
According to the service type to be created, select the appropriate card.
4. Select an interface.
5. Set the SAI attribute of the CES service in the SAI configuration. After you complete the settings, click Add Node. The new source and sink NEs are displayed in the lower portion of the window. Click OK.
6. Configure the sink NE, protection NE and transit NE with the same method and based on
different protection types.
NOTE
The configuration method is the same for the sink NE, transit NE, and source NE. Hence, only the example
for configuring a source NE is provided as follows.
In the dialog box for configuring the source and sink, you can select multiple lower-order timeslots and create CES services in batches.
Step 4 Optional: Click Configure Source And Sink, select Unterminated on the left, and specify the LSR ID of the unterminated node. Click Add Node. The unterminated source and sink NEs are displayed in the lower portion of the window. Click OK.
NOTE
On a network, if the equipment at one end of a service can be managed by the U2000, and the equipment
at the other end of the service is from another vendor and cannot be managed by the U2000, select
Unterminated to set the LSR ID of the opposite end of the service.
Currently, the PTN equipment in the same management domain can be used to configure unterminated
trails.
If Protection Type is PW backup protection or PW APS protection, the unterminated node cannot be
set.
Step 5 Optional: Click Configure PW Switch Node to add working and protection transit NEs between the source NE and sink NE.
Step 6 Set parameters for the source and sink NEs that are displayed in Node List. To view the topology
of a configured service, click the Service Topology tab in the upper-right area.
Step 7 In the PW pane in the lower left portion of the window, configure the general attributes of the PW.
NOTE
- You can set Signaling Type to Dynamic or Static. If you set Signaling Type to Dynamic, the Forward Label and Reverse Label are assigned automatically. If you set Signaling Type to Static, the Forward Label and Reverse Label can be assigned automatically or manually.
- You can set Forward Type and Reverse Type to Static Binding or Select Policy. If you set Forward Type to Static Binding, you need to manually specify a tunnel in the Forward Tunnel field. If you set Forward Type to Select Policy, you need to set the tunnel priority in the Forward Type field so that the system selects a tunnel according to the priority.
- You may also set the forward tunnel and reverse tunnel by clicking the Service Topology tab in the upper-right area. Select a tunnel between the source NE and sink NE, right-click, and then choose Select Forward Tunnel or Select Reverse Tunnel. In the dialog box that is displayed, select the tunnel for static binding.
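The two tunnel-binding modes described in the note can be sketched as follows. This is an illustrative Python sketch of the assumed semantics; the function, parameter names, and default priority order are hypothetical, not U2000 behavior guarantees.

```python
# Assumed semantics, not U2000 code: with Static Binding the operator names
# the forward tunnel explicitly; with Select Policy the system picks the
# first available tunnel type in the configured priority order.

def pick_forward_tunnel(mode, available, static_tunnel=None,
                        priority=("MPLS", "GRE", "IP")):
    """available maps tunnel type -> tunnel name, e.g. {"MPLS": "Tunnel-0001"}."""
    if mode == "Static Binding":
        if static_tunnel is None:
            raise ValueError("Static Binding requires an explicit tunnel")
        return static_tunnel
    if mode == "Select Policy":
        for tunnel_type in priority:          # highest priority first
            if tunnel_type in available:
                return available[tunnel_type]
        raise LookupError("no tunnel matches the selection policy")
    raise ValueError(f"unknown mode: {mode}")

print(pick_forward_tunnel("Static Binding", {}, static_tunnel="Tunnel-0007"))
print(pick_forward_tunnel("Select Policy", {"GRE": "Tunnel-0002"}))
```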
Step 9 Optional: Click the Advanced PW Attribute tab to set parameters for a PW and set the clock
mode of the source and sink NEs.
NOTE
Generally, Packet Loading Time (us) for packets that carry the CES service is 1 ms (1000 us).
The value of Jitter Compensation Buffering Time(us) must be greater than the value of Packet Loading
Time (us) at the peer end.
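The buffering constraint in the note can be written as a one-line check. The following is an illustrative Python sketch; the names are hypothetical.

```python
# Sketch of the constraint stated above: the local jitter compensation buffer
# must be strictly greater than the peer's packet loading time.
# Units are microseconds; function and parameter names are illustrative.

def jitter_buffer_ok(local_jitter_buffer_us, peer_packet_loading_us):
    """True if the local buffer can absorb the peer's packetization delay."""
    return local_jitter_buffer_us > peer_packet_loading_us

# Typical CES case: 1 ms (1000 us) packet loading time at the peer.
print(jitter_buffer_ok(8000, 1000))  # True: 8 ms buffer, 1 ms loading time
print(jitter_buffer_ok(1000, 1000))  # False: must be strictly greater
```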
Step 10 Optional: If the protection type of the service is PW redundancy, PW backup protection, or PW APS protection, click Protection Parameter to set the protection parameters.
- If the protection type is PW redundancy or PW backup protection: set Protection Mode to 1:1 or 1+1.
- If the protection type is PW APS protection: set the parameters as follows.
NOTE
Currently, the PTN supports PW APS protection with the dual-ended protection switching in 1:1
revertive mode.
Protection Type can also be set to Slave protection pair. If the working PWs, protection PWs, and DNI-PWs of multiple MC-PW APS groups to be created share the same source and sink as the working PW, protection PW, and DNI-PW of an existing MC-PW APS group, you can attach the MC-PW APS groups to be created to that group (the master MC-PW APS). These PWs are then considered as being in one MC-PW APS for synchronous detection and switching. In this manner, the switching time is reduced, and OAM resources and APS resources are saved. The entire MC-PW APS then performs protection switching according to the status of the PWs in the master MC-PW APS. The Protection Group ID of a slave protection pair refers to the ID of the protection group configured on PE3 as the master PW APS protection group.
NOTE
- If you clear the Deploy check box, the configuration data is stored only on the U2000. If you select the Deploy check box, the configuration data is stored on the U2000 and applied to NEs. By default, the Deploy check box is selected.
- Select both the Deploy and Enable check boxes to enable the service. A service is available on NEs only when it is enabled.
----End
Prerequisite
- You must be an NM user with "network operator" authority or higher.
- If you need to use the port exclusively, disable the DCN function of the UNI port.
- The MPLS tunnel for carrying services must be created if it is used.
- The IP/GRE tunnel for carrying services must be created if it is used.
- To create the dynamic PW to bear the service, you need to set the IGP-ISIS and MPLS-LDP protocol parameters.
Procedure
Step 1 Choose Service > PWE3 Service > Create PWE3 Service from the main menu.
Step 2 Set the parameters on the General Attributes tab page.
NOTE
- You can use a template to configure a service. Specifically, you can select a template in the Service Template field. Alternatively, you can create another template.
- Set Service Type to ETH.
- If you set Protection Type to PW redundancy or PW APS protection, select Single source and dual sink or Dual source and single sink in the Node List. You need to configure one source node and two sink nodes for Single source and dual sink, and two source nodes and one sink node for Dual source and single sink. One of the corresponding two PWs is the working trail and the other is the protection trail. PW APS protection can also be set to Single source and single sink.
- If Protection Type is CE Dual-homing protection for CEs of symmetric access, you need to configure two source nodes and two sink nodes. The corresponding two PWs protect each other.
- If Protection Type is PW backup protection, two dynamic PWs are automatically created between the source node and sink node. The two PWs protect each other.
2. Select a source NE from Physical Topology on the left. Then, the selected NE is displayed
in the upper-right pane.
3. In the right portion of NE Panel, all slots and available cards of the NE are displayed.
According to the service type to be created, select the appropriate card.
4. Select an interface.
5. Set the SAI attribute of the Ethernet service in the SAI configuration. After you complete the settings, click Add Node. The new source and sink NEs are displayed in the lower portion of the window. Click OK.
6. Configure the sink NE, protection NE and transit NE with the same method and based on
different protection types.
NOTE
The configuration method is the same for the sink NE, transit NE, and source NE. Hence, only the example
for configuring a source NE is provided as follows.
Step 4 Optional: Click Configure Source And Sink, select Unterminated on the left, and specify the LSR ID of the unterminated node. Click Add Node. The unterminated source and sink NEs are displayed in the lower portion of the window. Click OK.
NOTE
On a network, if the equipment at one end of a service can be managed by the U2000, and the equipment
at the other end of the service is from another vendor and cannot be managed by the U2000, select
Unterminated to set the LSR ID of the opposite end of the service.
Currently, the PTN equipment in the same management domain can be used to configure unterminated
trails.
If Protection Type is PW backup protection or PW APS protection, the unterminated node cannot be
set.
Step 5 Optional: Click Configure PW Switch Node to add working and protection transit NEs
between the source NE and sink NE.
Step 6 Set parameters for the source and sink NEs that are displayed in Node List. To view the topology
of a configured service, click the Service Topology tab in the upper-right area.
Step 7 In the PW pane in the lower left portion of the window, configure the general attributes of the PW.
NOTE
- You can set Signaling Type to Dynamic or Static. If you set Signaling Type to Dynamic, the Forward Label and Reverse Label are assigned automatically. If you set Signaling Type to Static, the Forward Label and Reverse Label can be assigned automatically or manually.
- You can set Forward Type and Reverse Type to Static Binding or Select Policy. If you set Forward Type to Static Binding, you need to manually specify a tunnel in the Forward Tunnel field. If you set Forward Type to Select Policy, you need to set the tunnel priority in the Forward Type field so that the system selects a tunnel according to the priority.
- You may also set the forward tunnel and reverse tunnel by clicking the Service Topology tab in the upper-right area. Select a tunnel between the source NE and sink NE, right-click, and then choose Select Forward Tunnel or Select Reverse Tunnel. In the dialog box that is displayed, select the tunnel for static binding.
Step 9 Optional: Click the SAI QoS tab to view the Local QoS Policy or configure the global
template and service bandwidth of SAI. Alternatively, you can select one of the policies that are
configured in the Global QoS Policy Template field. After you set Bandwidth Limited to
Enabled, the CIR (kbit/s) and PIR (kbit/s) can be set.
Step 10 Optional: Click the Service Parameter tab to configure the service parameters. If you set BPDU to Transparent Transmission, the MTU(byte) cannot be set.
Step 11 Optional: Click the PW QoS tab to configure the global template and service bandwidth of a
PW. Alternatively, you can click Global QoS Policy Template and select the global template
of QoS from the drop-down list. Then, set parameters. After you set Bandwidth Limited of a
PW to Enabled, the CIR (kbit/s) and PIR (kbit/s) can be set.
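As a sanity check on the bandwidth parameters named above, a two-rate profile is consistent only if the PIR is at least the CIR (and, where burst sizes apply, the PBS is at least the CBS). The following hedged Python sketch illustrates this check; the U2000 performs its own validation, and the function name is hypothetical.

```python
# Illustrative sketch: consistency check for a two-rate bandwidth profile.
# Field names mirror the GUI (CIR, PIR in kbit/s; CBS, PBS in bytes);
# the check itself is an assumption about well-formed profiles.

def bandwidth_profile_ok(cir_kbps, pir_kbps, cbs_bytes=None, pbs_bytes=None):
    if cir_kbps < 0 or pir_kbps < cir_kbps:
        return False                      # PIR must be at least CIR
    if cbs_bytes is not None and pbs_bytes is not None and pbs_bytes < cbs_bytes:
        return False                      # peak burst must cover committed burst
    return True

print(bandwidth_profile_ok(2000, 10000, 16384, 65536))  # True
print(bandwidth_profile_ok(10000, 2000))                # False: PIR < CIR
```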
Step 12 Optional: Click the Advanced PW Attribute tab to set parameters for a PW. When PW Type is set to Ethernet Tagged Mode, the TPID and Request VLAN parameters are available.
NOTE
Currently, the PTN supports PW APS protection with the dual-ended protection switching in 1:1
revertive mode.
Protection Type can also be set to Slave protection pair. If the working PWs, protection PWs, and DNI-PWs of multiple MC-PW APS groups to be created share the same source and sink as the working PW, protection PW, and DNI-PW of an existing MC-PW APS group, you can attach the MC-PW APS groups to be created to that group (the master MC-PW APS). These PWs are then considered as being in one MC-PW APS for synchronous detection and switching. In this manner, the switching time is reduced, and OAM resources and APS resources are saved. The entire MC-PW APS then performs protection switching according to the status of the PWs in the master MC-PW APS. The Protection Group ID of a slave protection pair refers to the ID of the protection group configured on PE3 as the master PW APS protection group.
- If you clear the Deploy check box, the configuration data is stored only on the U2000. If you select the Deploy check box, the configuration data is stored on the U2000 and applied to NEs. By default, the Deploy check box is selected.
- Select both the Deploy and Enable check boxes to enable the service. A service is available on NEs only when it is enabled.
----End
Prerequisite
- You must be an NM user with "network operator" authority or higher.
- The control plane must be configured.
- The interface must be configured. If IMA services are connected, the IMA group must be configured.
- The ATM policy must be configured.
- The MPLS tunnel for carrying services must be created if it is used.
- The IP/GRE tunnel for carrying services must be created if it is used.
- To create the dynamic PW to bear the service, you need to set the IGP-ISIS and MPLS-LDP protocol parameters.
Procedure
Step 1 Choose Service > PWE3 Service > Create PWE3 Service from the main menu.
Step 2 Set the parameters on the General Attributes tab page.
NOTE
- You can use a template to configure a service. Specifically, you can select a template in the Service Template field. Alternatively, you can create another template.
- Set Service Type to ATM.
- If you set Protection Type to PW redundancy or PW APS protection, select Single source and dual sink or Dual source and single sink in the Node List. You need to configure one source node and two sink nodes for Single source and dual sink, and two source nodes and one sink node for Dual source and single sink. One of the corresponding two PWs is the working trail and the other is the protection trail. PW APS protection can also be set to Single source and single sink.
- If Protection Type is CE Dual-homing protection for CEs of symmetric access, you need to configure two source nodes and two sink nodes. The corresponding two PWs protect each other.
- If Protection Type is PW backup protection, two dynamic PWs are automatically created between the source node and sink node. The two PWs protect each other.
2. Select a source NE from Physical Topology on the left. Then, the selected NE is displayed
in the upper-right pane.
3. In the right portion of NE Panel, all slots and available cards of the NE are displayed.
According to the service type to be created, select the appropriate card.
4. Select an interface.
5. Set the SAI attribute of the ATM service in the SAI configuration. After you complete the settings, click Add Node. The new source and sink NEs are displayed in the lower portion of the window. Click OK.
6. Configure the sink NE, protection NE and transit NE with the same method and based on
different protection types.
7. To configure multiple ATM connections for an ATM service at the same time, select
multiple ports for an NE by using the same method.
NOTE
The configuration method is the same for the sink NE, transit NE, and source NE. Hence, only the example
for configuring a source NE is provided as follows.
Step 4 Optional: Click Configure Source And Sink, select Unterminated on the left, and specify the LSR ID of the unterminated node. Click Add Node. The unterminated source and sink NEs are displayed in the lower portion of the window. Click OK.
NOTE
On a network, if the equipment at one end of a service can be managed by the U2000, and the equipment
at the other end of the service is from another vendor and cannot be managed by the U2000, select
Unterminated to set the LSR ID of the opposite end of the service.
Currently, the PTN equipment in the same management domain can be used to configure unterminated
trails.
If Protection Type is PW backup protection or PW APS protection, the unterminated node cannot be
set.
Step 5 Optional: Click Configure PW Switch Node to add working and protection transit NEs between the source NE and sink NE.
Step 6 Set parameters for the source and sink NEs that are displayed in Node List. To view the topology
of a configured service, click the Service Topology tab in the upper-right area.
Step 7 In the PW pane in the lower left portion of the window, configure the general attributes of the PW.
Step 8 Click ATM Link. In the dialog box that is displayed, add the ATM connection, and set relevant
parameters of the ATM connection.
NOTE
After you finish configuring the VPI/VCI of the source and sink, the U2000 assigns the transit VPI/VCI automatically. In the case of a network consisting of PTN equipment, the transit VPI/VCI can be set. Moreover, the transit VPI/VCI can be set to be different from the VPI/VCI of the source and sink.
Step 12 Optional: If the protection type of the service is PW redundancy, PW backup protection, or PW APS protection, click Protection Parameter to set the protection parameters.
- If the protection type is PW redundancy or PW backup protection: set Protection Mode to 1:1 or 1+1.
- If the protection type is PW APS protection: set the parameters as follows.
NOTE
Currently, the PTN supports PW APS protection with the dual-ended protection switching in 1:1
revertive mode.
Protection Type can also be set to Slave protection pair. If the working PWs, protection PWs, and DNI-PWs of multiple MC-PW APS groups to be created share the same source and sink as the working PW, protection PW, and DNI-PW of an existing MC-PW APS group, you can attach the MC-PW APS groups to be created to that group (the master MC-PW APS). These PWs are then considered as being in one MC-PW APS for synchronous detection and switching. In this manner, the switching time is reduced, and OAM resources and APS resources are saved. The entire MC-PW APS then performs protection switching according to the status of the PWs in the master MC-PW APS. The Protection Group ID of a slave protection pair refers to the ID of the protection group configured on PE3 as the master PW APS protection group.
- If you clear the Deploy check box, the configuration data is stored only on the U2000. If you select the Deploy check box, the configuration data is stored on the U2000 and applied to NEs. By default, the Deploy check box is selected.
- Select both the Deploy and Enable check boxes to enable the service. A service is available on NEs only when it is enabled.
----End
Postrequisite
After the service is created successfully, the service is displayed in the PWE3 service management window.
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
If you need to use a UNI port exclusively, disable the DCN function at the port.
NOTE
You must configure a VRF UNI port before configuring a UNI port for an IP line service.
Procedure
Step 1 Choose Service > PWE3 Service > Create PWE3 Service from the main menu.
2. Choose an NE from the Navigation Tree on the left and choose a corresponding port from
the pane on the right. Then, click Add Node. Set Location to Source or Sink. After the
settings are completed, click OK.
Step 4 Configure a PW switching node. Click Configure PW Switch Node, and then choose a PW
switching node between the source and sink NEs. Then, click OK.
NOTE
A PW switching node cannot be the source or sink NE.
Step 5 Configure a PW. Click the PW tab page, and configure basic attributes for the PW.
Forward Label and Reverse Label are attached to packet headers when Ethernet frames are encapsulated into PWs. These labels are used for label switching.
Step 6 Click Deploy to deploy the service to NEs. In this case, if you click Enable, the service is available. Otherwise, the service is only saved on the U2000 but not deployed to NEs. By default, the U2000 deploys and enables the service.
Step 7 Optional: Set QoS for the service access port. Click Advanced to display a pane on the lower
right side. Click the SAI QoS tab. Set Bandwidth Enabled to Enabled. Then, you can set
parameters such as CIR, PIR, CBS, and PBS. You can also select a configured QoS template by
Step 8 Optional: Set PW QoS. Click the PW QoS tab and set a PW QoS policy. Set Bandwidth
Enabled to Enabled. Then, you can set parameters such as CIR, PIR, CBS, and PBS. You can
----End
Prerequisite
- You must be an NM user with "network operator" authority or higher.
- PWE3 services that are created successfully must exist.
Context
To create a PWE3 service through duplication, you can specify the source, sink, and transit nodes again, or change only certain parameters.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 3 Create a PWE3 service by duplicating either a protected PWE3 service or an unprotected PWE3
service.
- Methods of duplicating different protected PWE3 services and the corresponding windows are similar. The following example describes how to duplicate a PWE3 service with the PW backup protection.
1. Select a PWE3 service with PW backup protection, right-click, and choose Copy from
the shortcut menu.
2. In the Copy PWE3 Service window, modify the attributes relevant to the new service
based on service planning, and click OK.
NOTE
In the Copy PWE3 Service window, all parameters of the original PWE3 service are retained,
including parameters of the access ports. You must change an original access port to another idle
access port before a duplicate service can be created successfully.
- If you want to change the access port to another port of the same NE, change the port directly in the node list on the left.
- If you want to change the access port to a port of another NE, change the port by either of the following two methods. Method 1: In the topology view, right-click the NE where the required port resides and choose the corresponding shortcut menu item (change source, sink, or transit node). In the dialog box that is displayed, change the service access port. Method 2: In the node list on the left, delete the corresponding NE. Then configure another NE for the access port.
- Methods of duplicating unprotected Ethernet services, IP E-line services, and CES services and the corresponding windows are similar. The following example describes how to duplicate a CES service.
1. Select one CES service to be duplicated, right-click, and choose Copy from the shortcut
menu.
2. In the Copy PWE3 Service dialog box, click Add.
3. In the Add Service dialog box, set the source and sink nodes, and then click OK.
NOTE
On a network, if the equipment at one end of a service can be managed by the U2000, and the
equipment at the other end of the service is from another vendor and cannot be managed by the
U2000, select Unterminated to set the LSR ID of the opposite end of the service.
Currently, the PTN equipment in the same management domain can be used to configure
unterminated trails.
NOTE
In the Add Service dialog box, you can set multiple source and sink nodes to create the
corresponding services through duplication. Those services can share intermediate NEs. The
following describes the details.
- The mapping between the source and sink NEs is 1:N or N:1. In this case, only one node exists at one end, and a service is created between that node and each node at the other end. In this way, N services are created.
- The mapping between the source and sink NEs is N:N. In this case, a service is created between each source node and the sink node with the same number. For example, a service is created between source node 1 and sink node 1, and a service is created between source node 2 and sink node 2. In this way, N services are created.
- The mapping between the source and sink NEs is N:M and M is greater than N. In this case, a service is created between each source node whose number is smaller than or equal to N and the sink node with the same number. For the remaining sink nodes, a service is created between source node N and each remaining sink node. In this way, M services are created.
Select an NE of the same type as the original NE when you select the source, sink, or transit node
for the duplication.
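The pairing rules in the note above amount to a small algorithm. The following sketch (illustrative Python with placeholder node names, not U2000 code) reproduces the 1:N, N:1, N:N, and N:M (M greater than N) cases:

```python
# Sketch of the source/sink pairing rules for batch service duplication,
# as described in the note above. Node names are placeholders.

def pair_services(sources, sinks):
    """Return (source, sink) pairs following the duplication mapping rules."""
    if len(sources) == 1:                       # 1:N -> N services
        return [(sources[0], k) for k in sinks]
    if len(sinks) == 1:                         # N:1 -> N services
        return [(s, sinks[0]) for s in sources]
    pairs = list(zip(sources, sinks))           # match nodes with equal numbers
    if len(sinks) > len(sources):               # N:M, M > N: reuse source node N
        pairs += [(sources[-1], k) for k in sinks[len(sources):]]
    return pairs

print(pair_services(["S1"], ["K1", "K2", "K3"]))
# [('S1', 'K1'), ('S1', 'K2'), ('S1', 'K3')]
print(pair_services(["S1", "S2"], ["K1", "K2", "K3"]))
# [('S1', 'K1'), ('S2', 'K2'), ('S2', 'K3')]
```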
4. Click Advanced and Modify SAI tabs respectively to modify relevant parameters of
the service.
5. Click OK.
- The following describes how to duplicate an unprotected ATM service.
1. Configure the general attributes of the service created through duplication. For details,
see 3.1 through 3.4.
2. In the Service Parameter area, modify the attributes relevant to the ATM connection
of the new service.
3. Click Add Link to add an ATM connection to the new service.
4. Click OK.
----End
Prerequisite
l You must be an NM user with "network operator" authority or higher.
l A PWE3 service must be created.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
Step 3 Select one PWE3 service that is configured, right-click, and then choose Deploy from the
shortcut menu.
NOTE
----End
Postrequisite
To delete the service, select the service, click Delete, and then click Yes in the dialog
box that is displayed.
NOTE
Deleting a service deletes both the service data on each NE and the end-to-end service. When you choose
Delete from Network Side, only the data about the end-to-end service is deleted.
Prerequisite
l You must be an NM user with "network operator" authority or higher.
l Discrete services must exist on the network.
Context
NOTE
You cannot convert a discrete service that has no LSR IDs for both the source and sink ends to an
unterminated service.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Discrete Service from the main menu.
Currently, only the router supports the function of filtering services by port name.
Step 3 Select one or more discrete services and click Convert to Unterminated. Alternatively, right-click
and choose Convert to Unterminated from the shortcut menu.
The adjusted discrete PWE3 service is displayed in the service list in the PWE3 Service
Management window.
----End
Prerequisite
l You must be an NM user with "network operator" authority or higher.
l At least one unprotected PWE3 service must exist.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
Step 3 Right-click a PWE3 service without protection and choose Protect > Configure Protection
from the shortcut menu.
Step 4 Click Protection Type and select the required service protection type from the drop-down list.
l If you set Protection Type to PW redundancy or PW APS protection, select Single source
and dual sink or Dual source and single sink in the Node List. You need to configure one
source node and two sink nodes for Single source and dual sink, and two source nodes and
one sink node for Dual source and single sink. One of the corresponding two PWs is the
working trail and the other is the protection trail. PW APS protection can also be set to
Single source and single sink.
l If Protection Type is CE Dual-homing protection and the CEs access the network
symmetrically, you need to configure two source nodes and two sink nodes. The
corresponding two PWs protect each other.
l If Protection Type is PW backup protection, two dynamic PWs are automatically created
between the source node and sink node. The two PWs protect each other.
Step 5 Click Configure Source And Sink. In the dialog box that is displayed, configure a protection
NE and click OK.
Step 6 Set parameters for the source, sink and protect NEs that are displayed in Node List. To view the
topology of a configured service, click the Service Topology tab in the upper-right area.
Step 7 In the PW pane in the lower left portion of the window, configure the general attributes
of the PW.
NOTE
l You can set Forward Type and Reverse Type to Static Binding or Select Policy. If you set Forward
Type to Static Binding, you need to manually specify a tunnel in the Forward Tunnel field. If you
set Forward Type to Select Policy, you need to set the tunnel priority so that the system selects a
tunnel according to the priority.
l You may also set the forward tunnel and reverse tunnel by clicking the Service Topology tab in the
upper-right area. Select a tunnel between the source NE and sink NE, right-click, and then choose
Select Forward Tunnel or Select Reverse Tunnel. In the dialog box that is displayed, select the tunnel
for static binding.
Step 9 Optional: If you configure protection for an Ethernet or ATM service, click the SAI QoS tab
to view the Local QoS Policy or to configure the global template and service bandwidth of the SAI.
Alternatively, you can select one of the policies that are configured in the Global QoS Policy
Template field. After you set Bandwidth Limited to Enabled, you can set CIR (kbit/s) and
PIR (kbit/s).
Step 10 Optional: Click the PW QoS tab to configure the global template and service bandwidth of a
PW. Alternatively, you can click Global QoS Policy Template and select the global template
of QoS from the drop-down list. Then, set parameters. After you set Bandwidth Limited of a
PW to Enabled, the CIR (kbit/s) and PIR (kbit/s) can be set.
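As context for the CIR/PIR settings above, the relationship between the two rates can be sketched as a simple classifier in the spirit of a two-rate policer. This is a hypothetical illustration of the concept only, not the equipment's actual QoS implementation; the function name and units are assumptions.

```python
def color_rate(rate_kbps, cir_kbps, pir_kbps):
    """Classify a flow's rate against CIR and PIR:
    within CIR -> green (committed), within PIR -> yellow (excess,
    may be dropped under congestion), above PIR -> red (policed)."""
    if rate_kbps <= cir_kbps:
        return "green"
    if rate_kbps <= pir_kbps:
        return "yellow"
    return "red"

# With CIR = 10 Mbit/s and PIR = 30 Mbit/s:
print(color_rate(8_000, 10_000, 30_000))   # within CIR
print(color_rate(20_000, 10_000, 30_000))  # between CIR and PIR
print(color_rate(40_000, 10_000, 30_000))  # above PIR
```

The CIR is the guaranteed rate, while the PIR caps the burst rate; traffic between the two is forwarded only when bandwidth is available.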
Step 11 Optional: Click the Advanced PW Attribute tab to set parameters for a PW.
Currently, the PTN supports PW APS protection with dual-ended protection switching in 1:1
revertive mode.
Protection Type can also be set to Slave protection pair. If the working PWs, protection PWs,
and DNI-PWs of multiple MC-PW APS groups to be created share the same source and sink as the
working PW, protection PW, and DNI-PW of an existing MC-PW APS (the master MC-PW APS),
you can attach the MC-PW APS groups to be created to the master MC-PW APS. These PWs are
then considered as being in one MC-PW APS for synchronous detection and switching. In this
manner, the switching time is reduced, and OAM and APS resources are saved. The entire MC-PW
APS then performs protection switching according to the status of the PWs in the master MC-PW
APS. The Protection Group ID of a slave protection pair refers to the ID of the protection group
configured on PE3 as the master PW APS protection group.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
Step 3 Right-click a service and choose Test and Check from the shortcut menu.
Step 4 In the dialog box that is displayed, select the trail to be checked.
Step 5 Set Diagnosis Option.
Set the diagnosis parameters based on the requirements of operation and maintenance. The meaning
of each option is as follows:
1. Service Check: checks all service configuration parameters.
2. OAM Tool: checks the connectivity by performing the ping operation on each layer.
3. Collect Information: views the information about the public route, LDP peer, LDP session,
and LSP.
4. Tracert: locates the fault position.
Step 6 Click Run.
Step 7 View the running results.
----End
Prerequisite
l You must be an NM user with "network operator" authority or higher.
l You must have created the PW APS protection service and enabled the APS
protocol.
Context
CAUTION
When switching operations other than the exercise switching are performed, the services
may be interrupted.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
Step 3 Select a PWE3 service with the PW APS protection. In the lower portion, information about
associated attributes is displayed.
Step 5 Click the Protection Parameter tab. You can query the current status of the PW APS protection
switching.
Step 6 Select a protection record and click Function in the lower right corner.
Step 7 Select a required switching operation from the drop-down list. For details of switching
operations, see PWE3 Service Management.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
Step 4 Click the Service Parameter tab. The ATM link list is displayed.
In the case of a PTN NE, you need to configure the ATM policy. Otherwise, an error message is
displayed.
4. Click OK.
----End
This topic describes how to monitor alarms of a PWE3 service. By creating a service monitoring
template, the maintenance personnel can monitor alarms of services that are important to customers
and learn the running status of the services in real time, thus ensuring that the services run normally.
6.5.6 Viewing the Alarms of a PWE3 Service
This topic describes how to view the alarms of a PWE3 service.
6.5.7 Diagnosing a PWE3 Service
This topic describes how to diagnose a PWE3 service by using ping and tracert.
Prerequisite
You must be an NM user with "network operator" authority or higher.
The equipment must communicate with the U2000 in the normal state.
A PWE3 service must be created.
Context
The Ethernet OAM defines the following concepts:
l Maintenance domain (MD): The MD is a network that requires the OAM. An important
attribute of the MD is the level, which restricts the range of OAM operations. MDs can be
nested but cannot overlap. The OAM packet processing principle of an MD is as
follows: block packets of a lower level, transparently transmit packets of a higher level,
and process packets of the same level.
l Maintenance association (MA): The MA can be considered as a service-related domain,
which consists of multiple maintenance end points (MEPs).
l Maintenance end point (MEP): The MEP transmits and terminates all OAM packets and
is relevant to services. A MEP has a unique MEP ID in its MA. In
a network, the combination of the MA and the MEP ID identifies a unique MEP.
l Maintenance intermediate point (MIP): The MIP is relevant to the MD but irrelevant to the
MA. The MIP cannot initiate OAM packets. The MIP can respond to and forward LB
and LT packets, but can only forward CC packets.
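The MD packet-processing principle above (block lower levels, transparently transmit higher levels, process the same level) can be sketched as a tiny decision function. This is an illustrative sketch only; the names are hypothetical and not equipment code.

```python
def handle_oam_packet(md_level, packet_level):
    """Apply the MD processing rule to an incoming OAM packet:
    lower-level packets are blocked, higher-level packets are
    transparently forwarded, same-level packets are processed."""
    if packet_level < md_level:
        return "block"
    if packet_level > md_level:
        return "forward"
    return "process"

# An MD at level 5 seen by packets at levels 3, 7, and 5:
print(handle_oam_packet(5, 3))  # lower level
print(handle_oam_packet(5, 7))  # higher level
print(handle_oam_packet(5, 5))  # same level
```

This rule is what allows MDs to be nested: an inner (lower-level) domain's OAM packets never leak out through an outer (higher-level) domain's boundary.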
The Ethernet OAM sends CC packets periodically to check the connectivity of services in real
time. The source MEP constructs and sends CC frames periodically, and the destination MEP
receives the CC frames and performs the CC check. If the destination MEP does not
receive any CC frame from the source within a period of time (for example, 3.5 times the
transmit period), the MEP automatically reports the CCLOS alarm.
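The CCLOS detection rule above reduces to a simple timeout check. The sketch below is a hypothetical illustration of that rule (names and units are assumptions, not U2000 code):

```python
def cclos_raised(last_cc_rx_time, now, transmit_period_s, multiplier=3.5):
    """A destination MEP raises CCLOS when no CC frame has arrived
    within `multiplier` times the transmit period (3.5x in the text)."""
    return (now - last_cc_rx_time) > multiplier * transmit_period_s

# With a 1 s CC period: 3 s of silence is still within 3.5 periods,
# 4 s of silence exceeds the threshold and raises CCLOS.
print(cclos_raised(last_cc_rx_time=0.0, now=3.0, transmit_period_s=1.0))
print(cclos_raised(last_cc_rx_time=0.0, now=4.0, transmit_period_s=1.0))
```

The 3.5x factor tolerates a few lost or delayed frames before declaring loss of continuity, avoiding spurious alarms on a lossy link.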
You can perform the LB test on Ethernet services without interrupting the services, to check the
connectivity of the services for locating and rectifying faults.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
Step 3 Optional: Perform a CC test.
1. In the service list, select a service where you want to configure the OAM, right-click, and
choose Ethernet OAM > Enable CC from the shortcut menu.
2. Select a link and click OK. The source MEP starts the CC check. If the link fails, the
destination MEP reports the CCLOS alarm.
Step 4 Optional: Perform an LB test.
1. In the service list, select a service where you want to configure the OAM, right-click, and
choose Ethernet OAM > LB Test from the shortcut menu.
2. Select a link, right-click, and choose Configure to set the LB check parameters.
3. Click Run to start an LB test. The Operation Result dialog box is displayed, indicating
that the operation is successful.
4. Click Close. View the test result in the LB Check Information tab and LB Statistic
Information tab.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
The equipment must communicate with the U2000 in the normal state.
A PWE3 service must be created and deployed.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
Step 3 In the service list, select a service to be configured with the PW OAM.
Step 4 Click the PW tab. Then, click the Basic tab.
Step 5 Select one PW and click PW OAM. A dialog box is displayed.
Step 6 After you configure the PW OAM, click OK. The configuration is applied to NEs and the current
dialog box is closed.
----End
Prerequisite
l You must be an NM user with "network operator" authority or higher.
l PWE3 services must have been created.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
Step 3 In the service list, select a service to be viewed.
Step 4 View the topology structure of a service.
In the service topology, you can learn PE information of the source and sink ends, and interface
information for connecting to CE.
Step 5 Check the alarm information of a service.
If a fault occurs, the corresponding interface and PW of the PE in the service topology are
displayed with a fault identifier.
Step 6 You can perform the following operations in the service topology.
l In the service topology, select a PE, right-click, and then choose the following menu items
from the shortcut menu respectively.
Choose NE Explorer to view the NE Explorer of the equipment.
Choose View Real-Time Performance to view the real-time performance of the PW.
l In the service topology, select one interface, right-click, and then choose View Real-Time
Performance to view the real-time performance of the interface.
l In the topology view, select a PW between PEs, right-click, and then choose the following
menu items from the shortcut menu respectively.
Choose Fast Diagnose. In the LSP Ping window that is displayed, diagnose the PW.
Choose View Real-Time Performance to view the real-time performance of the PW.
Choose View Tunnel. In the Tunnel Management dialog box that is displayed, view
the Tunnel information.
----End
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
Step 3 View the real-time performance of a service. In the topology view, right-click the NE and
choose View Real-Time Performance from the shortcut menu.
Step 4 Create a monitoring instance for a service. For details, refer to the chapter of monitoring instance
management in Performance Management System (PMS).
Step 5 View the history performance of a service. Right-click a required service and choose
Performance > View History Data from the shortcut menu.
----End
Procedure
Step 1 Choose Fault > Service Monitoring > Service Monitoring Template from the main menu.
Step 2 In the Centralized Monitoring dialog box, expand the All Service branch to view alarm
information of all services.
Step 6 Select the monitoring group that is added, right-click, and then choose Add Monitoring
Service from the shortcut menu.
Step 7 In the Add Monitoring Service dialog box, select the corresponding service tab and select the
service to be added. Then, click Add.
Step 8 Click Close.
----End
Context
When a service alarm is generated, certain phenomena occur, including but not limited to:
l The alarm panel blinks.
l The color of the alarm status column in the service list changes.
l The color of the NE, interface, or link in the service topology changes.
If you find a service alarm through the preceding phenomena, perform the following operations to
view the detailed alarm information.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
Step 3 Right-click the service with the alarm and choose Alarm > Current Alarm from the shortcut
menu.
You can also choose Alarm > History Alarm from the shortcut menu to view the history alarms
of the service.
Step 4 Select the service alarm in the alarm list and view the detailed alarm information in the details
area.
----End
Postrequisite
Preliminarily determine the possible cause of the alarm based on the detailed alarm information,
and then locate the fault by using the debugging tool.
Prerequisite
You must be an NM user with "network operator" authority or higher.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the PWE3 services that meet the filter criteria.
4. In the dialog box that is displayed, select an existing test suite and click OK to bind the test
strategy and test suite.
5. Click Run to implement the preset diagnosis strategy.
Step 6 Set the manual test.
1. Right-click a PWE3 service and choose Test And Check from the shortcut menu.
2. Optional: Select LSP Ping and click . In the dialog box that is displayed, set
parameters relevant to the LSP ping test and click OK.
3. Optional: Select LSP Tracert and click . In the dialog box that is displayed, set
parameters relevant to the LSP tracert test and click OK.
NOTE
If you select Reply mode, details of an error are displayed only when the error occurs in the reply
mode.
4. Click Run and then view the test result in the pane on the right.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
A user that requires rights allocation must exist.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 Click Filter. In the dialog box that is displayed, set the filter criteria, and click Filter.
Step 3 Select the required service, right-click, and then choose Confer Service Authority from the
shortcut menu.
Step 4 In Useable User, select the required user and click to add the user to Selected
User.
----End
Prerequisite
You must be an NM user with "network operator" authority or higher.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service Authority from the main menu.
Step 2 In the dialog box that is displayed, select the required user and view its manageable services in
the right pane.
NOTE
In the right pane, after selecting the required services, you can adjust its service authorization.
----End
Example Description
This topic describes O&M scenarios and networking diagrams.
[Figure: networking diagram - a 10 GE ring on the convergence layer (NE4, NE5) and a GE ring
on the access layer (NE1, NE2, NE3, NE6); the BSC accesses the network; the working and
protection tunnels are shown]
Figure 6-34 shows the planning of the boards and ports on each NE.
[Figure 6-34: planning of the boards and ports on each NE, with the allocated port IP addresses
(10.0.0.x through 10.0.6.x); the BSC is connected through board 6-L12; the working and
protection tunnels are shown]
Service Planning
This topic describes the planning of the parameters, such as IP addresses, interfaces, and protocol
types involved in this example in table format.
Assume that the IP addresses of the ports of NEs are the same as those listed in Table 6-8 after
the U2000 automatically allocates the IP addresses of ports.
Table 6-10 and Table 6-11 list the planning details of CES service parameters.
Table 6-10 CES service parameters: NE1-NE3 (E1 timeslots partially used)
Parameter Value
Service ID 4
Channelized YES
PW ID 8
PW Type CESoPSN
Forward Label 36
Reverse Label 36
EXP 4
Table 6-11 CES service parameters: NE1-NE3 (E1 timeslots fully used)
Parameter Value
Service ID 5
PW ID 9
PW Type SAToP
Forward Label 37
Reverse Label 37
EXP 4
NOTE
To create an MPLS APS, you can refer to the descriptions of how to create an MPLS tunnel protection
group.
Configuration Process
This topic describes how to configure a CES emulation service.
Prerequisite
You must be an NM user with "network operator" authority or higher.
You must learn about the networking requirements and service planning described in the
example.
Procedure
Step 1 Set LSR IDs.
1. In the NE Explorer, select NE1 and choose Configuration > MPLS Management > Basic
Configuration from the Function Tree.
2. Set LSR ID, Start of Global Label Space, and other parameters. Click Apply.
3. Display the NE Explorer of NE2, NE3, NE4, NE5, and NE6 separately and perform the
preceding two steps to set the parameters, such as LSR ID.
1. Choose Service > Tunnel > Create Tunnel from the main menu.
2. Set the basic information about the working tunnel.
3. Configure the NE list. On the physical topology, double-click NE1, NE2, and NE3 to add
them to the NE list and set the corresponding NE roles.
Parameter Example Value Principle for Value
Selection
4. Click Details to set the advanced parameters of the reverse tunnel. Click OK.
For route details, see the descriptions of route settings in Table 3-2.
NOTE
Before setting the port mode, ensure that the DCN of the port is disabled.
3. Click Apply. The Operation Result dialog box is displayed, indicating that the operation
is successful. Click Close.
4. Click the Advanced Attributes tab. Select 6-L12-2(Port-2) and set Frame Format to
CRC-4 Multiframe. Select 6-L12-3(Port-3) and set Frame Format to Unframe.
5. Click Apply. The Operation Result dialog box is displayed, indicating that the operation
is successful. Click Close.
3. Click Configure Source And Sink. A dialog box is displayed. On the Physical
Topology in the upper left portion of the window, set NE1 as the source NE, set NE3 as
the sink NE. Set relevant parameters and click OK.
PW ID 8 A PW ID uniquely
identifies a PW on the
entire network.
Packet Loading Time (us) 1000 Set the packet loading time.
6. Click OK.
Step 7 Create remote CES service 2. For details, refer to Step 6.1 through Step 6.6.
PW ID 9 A PW ID uniquely identifies
a PW on the entire network.
Packet Loading Time (us) 1000 Set the packet loading time.
Set this parameter according
to the network planning.
----End
Example Description
This topic describes O&M scenarios and networking diagrams.
Figure 6-35 shows the networking diagram of the ATM services. The 3G R99, signaling, and
HSDPA services are required between the two base stations and RNC. NE1 accesses the MPLS
network that consists of PTN equipment. NodeB1 is connected to NE1 through IMA1, and
NodeB2 is connected to NE1 through IMA2. The VPI/VCI switching is performed on NE1, and
the VPI/VCI transparent transmission is performed on NE2 and NE3. Between NE1 and NE3,
three PWs are used to carry the R99, signaling, and HSDPA services respectively. At the remote
end, to transparently transmit the ATM services on the MPLS network, NE2 is connected to the
RNC through STM-1. NE1 is the OptiX PTN 1900; NE2, NE3, NE4, and NE5 are the OptiX
PTN 3900; NE6 is the OptiX PTN 950. ATM services are carried in the active tunnel. In addition,
you can create a bypass tunnel to protect real-time services.
The active tunnel is as follows: NE1-NE2-NE3. The bypass tunnel is as follows:
NE1-NE6-NE5-NE4-NE3.
[Figure 6-35: networking diagram - NodeB 1 and NodeB 2 access NE1 through IMA1 and IMA2;
PW1, PW2, and PW3 are carried over the tunnel toward NE2, which connects to the RNC through
STM-1; a 10 GE ring on the convergence layer (NE4, NE5) and a GE ring on the access layer
(NE3, NE6); the protection tunnel is shown]
[Figure: planning of the boards and ports on each NE, with the allocated port IP addresses
(10.0.0.x through 10.0.6.x); the RNC is connected through 3-MP1-1-AD1-1Port-1, and NodeB 1
and NodeB 2 are connected through 1-CXP-MD1-3-L12; the working and protection tunnels are
shown]
Service Planning
This topic describes the planning of the parameters, such as IP addresses, interfaces, and protocol
types involved in this example in table format.
Between NE1 and NE3, PW1 transmits R99 services, PW2 transmits HSDPA services, and PW3
transmits signaling services. Therefore, you need to create three ATM services. The two base
stations converge R99 services and access signaling and HSDPA services. Therefore, you need
to create two ATM services connected to the N:1 VCC.
Assume that the IP addresses of the ports of NEs are the same as those listed in Table 6-24 after
the U2000 automatically allocates the IP addresses of ports.
PW ID 35 36 37 35 36 37
Sink 3-MP1-1-AD1-1(1-AD1.PORT-1)
Port
Configuration Process
This topic describes how to configure an ATM emulation service.
Prerequisite
You must be an NM user with "network operator" authority or higher.
You must learn about the networking requirements and service planning described in the
example.
Procedure
Step 1 Set LSR IDs.
1. In the NE Explorer, select NE1 and choose Configuration > MPLS Management > Basic
Configuration from the Function Tree.
2. Set LSR ID, Start of Global Label Space, and other parameters. Click Apply.
3. Display the NE Explorer of NE2, NE3, NE4, NE5, and NE6 separately and perform the
preceding two steps to set the parameters, such as LSR ID.
1. In the NE Explorer, select NE1 and choose Configuration > Control Plane
Configuration > IGP-ISIS Configuration from the Function Tree.
2. Click the Port Configuration tab and click New. In the dialog box that is displayed, click
Add. Select the 4-EFG2-1(Port-1) and 4-EFG2-2(Port-2) ports and click OK.
Set relevant parameters as follows:
l Link Level: level-1-2
l LSP Retransmission Interval(s): 5 (In the case of a point-to-point link, if the local
equipment fails to receive any response in a period after transmitting an LSP, the local
router considers that the LSP is lost or discarded. To ensure the transmission reliability,
the local equipment transmits the LSP again.)
l Minimum LSP Transmission(ms): 30
3. Choose Configuration > Control Plane Configuration > MPLS-LDP Configuration
from the Function Tree.
NOTE
When using a PW to carry services, you need to set the parameters relevant to the MPLS-LDP.
4. Click New. In the Create LDP Peer Entity dialog box, set the LSR ID of the peer end.
Click OK.
The parameters of the IS-IS protocol are set to the same values as those of NE1. For the
LDP parameters, set the LSR ID to 1.0.0.1.
3. Configure the NE list. On the physical topology, double-click NE1, NE2, and NE3 to add
them to the NE list and set the corresponding NE roles.
4. Click Details to set the advanced parameters of the reverse tunnel. Click OK.
For route details, see the descriptions of route settings in Table 3-2.
Before setting the frame format, ensure that the DCN of the port is disabled.
Set relevant parameters as follows:
l Port: ports from 3-L12-1(Port-1) to 3-L12-8(Port-8)
l Name: NodeB ATM (You can set port names to distinguish different service ports
for easy location and query.)
l Port Mode: Layer 2 (IMA signals are carried.)
l Encapsulation Type: ATM
c. On the Advanced tab page, set Frame Format and Frame Mode for the ports from
3-L12-1(Port-1) to 3-L12-8(Port-8). Click Apply.
Set relevant parameters as follows:
l Port: ports from 3-L12-1(Port-1) to 3-L12-8(Port-8)
l Frame Format: CRC-4 multiframe (The frame format must be same as the cell
format on Node B.)
l Frame Mode: 31
d. Choose Configuration > Interface Management > ATM IMA Management from
the Function Tree. Click the Binding tab.
e. On the Binding tab page, click Configuration. Then, set the bound ports for 1-CXP-1-
MD1-1(Trunk1) and 1-CXP-1-MD1-2(Trunk2). Click OK.
Set the parameters relevant to 1-CXP-1-MD1-1(Trunk1) as follows:
l Available Boards: 1-CXP
l Configurable Ports: 1-CXP-1-MD1-1(Trunk1)
l Level: E1
E1: For the E1 card, when the E1 level is selected, the entire E1 channel is used
to transmit ATM IMA signals.
Fractional E1: For the E1 card, when the fractional E1 level is selected, certain
64 kbit/s timeslots of an E1 channel are used to transmit ATM IMA signals.
For the ATM STM-1 card, when the fractional E1 level is selected, certain 64
kbit/s timeslots of a VC12 lower order path are used to transmit ATM IMA
signals. Before selecting the fractional E1 level, ensure that the serial port for
the 64 kbit/s timeslot is created.
VC12-xv: For the ATM STM-1 card, the VC4 path of an STM-1 contains 63
VC12 lower order paths. When the VC12-xv level is selected, certain VC12
lower order paths of a VC4 path are used to transmit ATM IMA signals.
l Direction: Bidirectional (default)
l Optical Interface: - (In the case of the E1 and fractional E1 levels, you need not
set this parameter. In the case of the VC12-xv level, you need to select the
corresponding optical port, that is, the E1 level in this example.)
l Available Resources: ports from 3-L12-1(Port-1) to 3-L12-4(Port-4)
l Available Timeslots: - (In the case of the E1 and fractional E1 levels, you need not
set this parameter. In the case of the VC12-xv level, you need to select the
corresponding timeslot.)
Set the parameters relevant to 1-CXP-1-MD1-2(Trunk2) as follows:
l Available Boards: 1-CXP
l Configurable Ports: 1-CXP-1-MD1-2(Trunk2)
l Level: E1
l Direction: Bidirectional
l Optical Interface: -
l Available Resources: ports from 3-L12-5(Port-5) to 3-L12-8(Port-8)
l Available Timeslots: -
f. On the IMA Group Management tab page, double-click the IMA Protocol Enable
Status field to enable the IMA protocol. Set other relevant parameters as required.
Click Apply.
The settings of parameters need to be the same as those on Node B.
g. On the ATM Interface Management tab page, set the parameters, such as Max.
VPI and Max. VCI. Click Apply.
Set relevant parameters as follows:
l Port Type: UNI (A UNI port is used to connect to the client-side equipment, and
an NNI port is used to connect the ATM equipment on a core network.)
l ATM Cell Payload Scrambling: Enabled
l Max. VPI: 8 (Set this parameter according to the networking planning. You can
determine the value range of VPIs by setting Max. VPI. The value of the VPI
ranges between 0 and (2^MaxVPIbits - 1).)
l Max. VCI: 7 (Set this parameter according to the networking planning. You can
determine the value range of VCIs by setting Max. VCI. The value of the VCI
ranges between 0 and (2^MaxVCIbits - 1).)
l VCC-Supported VPI Count: 32 (Set this parameter according to the networking
planning.)
l Loopback: No Loopback
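The value ranges implied by the Max. VPI/Max. VCI settings above follow the 2^bits - 1 formula in the note. The sketch below computes these ranges; it assumes the parameter counts identifier bits, as the formula suggests, and the function name is hypothetical.

```python
def atm_id_range(max_bits):
    """Value range of an ATM VPI or VCI given its bit width:
    0 .. 2**max_bits - 1, per the formula in the note."""
    return 0, 2 ** max_bits - 1

# 8 VPI bits -> VPIs 0..255; 7 VCI bits -> VCIs 0..127.
print(atm_id_range(8))
print(atm_id_range(7))
```

Cells whose VPI or VCI falls outside the configured range cannot be switched on the port, so these settings must match the networking plan on the NodeB side.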
2. Configure ATM ports on RNC.
a. In the NE Explorer, select NE3 and choose Configuration > Interface
Management > SDH Interface from the Function Tree to configure ports on RNC.
b. On the Layer 2 Attributes tab page, select 3-MP1-1-AD1-1(Port-1) and set the
parameters, such as Max. VPI and Max. VCI, for the port. Click Apply.
2. Click Configure Source And Sink. A dialog box is displayed. On the Physical
Topology in the upper left portion of the window, set NE1 as the source NE, set NE3 as
the sink NE. Set relevant parameters and click OK.
PW ID 35 A PW ID uniquely
identifies a PW on the
entire network.
4. Click ATM Link. In the dialog box that is displayed, set the parameters relevant to the
connection.
PW ID 36 A PW ID uniquely
identifies a PW on the
entire network.
8. Create the ATMService-Signaling service. For details, refer to the preceding steps.
PW ID 37 A PW ID uniquely
identifies a PW on the
entire network.
----End
Example Description
This topic describes O&M scenarios and networking diagrams.
As shown in Figure 6-37, both Company A and Company B have branches in city 1 and city 2.
The branches of each company need to communicate with each other, and services from the two
companies must be isolated. NE1 is connected to Company A and Company B in city 1, and NE3
is connected to Company A and Company B in city 2. NE1 accesses services from city 1, NE2
transparently transmits the services, and NE3 transmits the services to city 2. Similarly, NE3
accesses services from city 2, NE2 transparently transmits the services, and NE1 transmits the
services to city 1.
You can configure Ethernet private line services to meet the requirements of communication
between the branches of company A and between the branches of company B. Two PWs carry
the services of company A and company B respectively and share bandwidth of a same tunnel.
In the case of Company A, the branches require the common Internet access service, CIR=10
Mbit/s, PIR=30 Mbit/s, VLAN ID=100.
In the case of Company B, the branches require the data service, CIR=30 Mbit/s, PIR=50 Mbit/
s, VLAN ID=200.
NE1 is the OptiX PTN 1900; NE2 and NE3 are the OptiX PTN 3900.
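Because the two PWs share the bandwidth of a single tunnel, a common planning rule is that the committed rates (CIR) must fit within the tunnel bandwidth, while peak rates (PIR) may oversubscribe it. The following is an assumed planning check, not a U2000 API; the 100 Mbit/s tunnel bandwidth is a hypothetical value for illustration.

```python
# Sketch of a CIR admission check for PWs sharing one tunnel:
# the sum of committed rates must not exceed the tunnel bandwidth.

def cir_fits(tunnel_mbps: float, cirs_mbps: list) -> bool:
    """True if the committed rates of all PWs fit in the tunnel."""
    return sum(cirs_mbps) <= tunnel_mbps

# Company A: CIR = 10 Mbit/s; Company B: CIR = 30 Mbit/s.
assert cir_fits(100.0, [10.0, 30.0])      # a 100 Mbit/s tunnel suffices
assert not cir_fits(30.0, [10.0, 30.0])   # a 30 Mbit/s tunnel does not
```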
[Figure 6-37: networking diagram. NE1 (ports 3-EFF8-1/2/3 and 5-EX2-1) accesses Company A
and Company B at the access layer in city 1; NE2 (ports 20-EFF8-1/2 and 5-EX2-1) sits on the
10GE ring at the convergence layer together with NE4 and NE5; NE3 accesses Company A and
Company B in city 2. NNI link IP addresses: 10.0.0.1/10.0.0.2 and 10.0.1.1/10.0.1.2.]
Service Planning
This topic describes the planning of the parameters, such as IP addresses, interfaces, and protocol
types involved in this example in table format.
Table 6-48 lists the planning details of the tunnels that carry the PWs.
Tunnel ID: 1, 2
Service ID: 1, 2
Bearer Type: PW, PW
PW ID: 35, 45
PW Ingress Label: 20, 30
PW Egress Label: 20, 30
Configuration Process
This topic describes how to configure an Ethernet private line emulation service.
Prerequisite
You must be an NM user with "network operator" authority or higher.
You must learn about the networking requirements and service planning described in the
example.
Procedure
Step 1 Set LSR IDs for NEs.
1. In the NE Explorer, select NE1 and choose Configuration > MPLS Management > Basic
Configuration from the Function Tree.
2. Set the parameters, such as LSR ID and Start of Global Label Space, for the NE. Click
Apply.
3. Display the NE Explorers of NE2 and NE3 and perform the preceding two steps to set the
parameters, such as the LSR ID.
l IP Address: 10.0.0.1
l IP Mask: 255.255.255.252
4. Display the NE Explorers of NE2 and NE3 and set the parameters relevant to each port.
For details, refer to Step 2.1 through Step 2.3.
l NE2
General Attributes
Port: 20-EFF8-1(Port-1), 5-EX2-1(Port-1)
Enable Port: Enabled
Port Mode: Layer 3 (NNI port for carrying tunnels)
Working Mode: Auto-Negotiation (The working mode of this port must be set
to the same value as that of the interconnected port.)
Max Frame Length (byte): 1620 (Set this parameter according to the lengths of
data packets. All received data packets whose lengths are greater than the
parameter value are discarded.)
Layer 3 Attributes
Enable Tunnel: Enabled
TE Measurement: 10 (The link with a smaller TE measurement value is preferred
during route selection for a tunnel. You can intervene in route selection by
adjusting the TE measurement of a link: the smaller the TE measurement value,
the higher the priority of the link.)
Specify IP Address: Manually (You can set the IP address for a port when
Manually is selected.)
20-EFF8-1(Port-1) IP Address: 10.0.0.2
5-EX2-1(Port-1) IP Address: 10.0.1.1
IP Mask: 255.255.255.252
l NE3
General Attributes
Port: 20-EFF8-1(Port-1), 20-EFF8-2(Port-2)
Enable Port: Enabled
Port Mode: Layer 2 (UNI port for accessing services of company A and
company B.)
Encapsulation Type: 802.1Q
Working Mode: Auto-Negotiation
Max Frame Length: 1620
Port: 5-EX2-1(Port-1)
Enable Port: Enabled
Port Mode: Layer 3 (NNI port for carrying tunnels)
Working Mode: Auto-Negotiation
Max Frame Length(byte): 1620
Layer 3 Attributes
Port: 5-EX2-1(Port-1)
Enable Tunnel: Enabled
TE Measurement: 10
Specify IP Address: Manually
IP Address: 10.0.1.2
IP Mask: 255.255.255.252
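The route-selection effect of TE Measurement described above can be sketched as follows. This is an illustrative model only; the link names and the alternative cost value are hypothetical, not taken from the example plan.

```python
# Sketch: among candidate links, the one with the smallest TE measurement
# (link cost) is preferred when a tunnel selects its route.

def pick_link(links: dict) -> str:
    """Return the name of the link with the lowest TE measurement."""
    return min(links, key=links.get)

# Hypothetical costs: the direct NE1-NE2 link has TE measurement 10,
# an alternative path is given 20, so the direct link wins.
links = {"NE1-NE2": 10, "NE1-NE3-NE2": 20}
assert pick_link(links) == "NE1-NE2"
```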
3. Click Configure Source And Sink. A dialog box is displayed. On the Physical
Topology in the upper left portion of the window, set NE1-3-EFF8-1 as the source NE,
NE3-20-EFF8-1 as the sink NE. Set relevant parameters and click OK.
PW ID: 35 (A PW ID uniquely identifies a PW on the entire network.)
5. Click Advanced and configure SAI QoS, PW QoS, Advanced PW Attributes, and
service parameters. Use the default values for SAI QoS.
Default Forwarding Priority: BE
PW ID: 45 (A PW ID uniquely identifies a PW on the entire network.)
Default Forwarding Priority: BE
----End
Example Description
This section describes the function requirement, network diagram, and service planning of an
example.
IP packets from the NodeB travel through the IP line and NE2, and finally reach the RNC through
the VRF.
NOTE
A VRF instance synchronizes route information. NE2 does not store the IP address of the NodeB (the IP line
is static and no protocol synchronizes routes), and thus the destination IP address (DIP) of the packets sent
to the RNC is the IP address of the NodeB.
To ensure that the packets are sent from the RNC to NodeB, the IP address of the UNI port on NE2 and
the port IP address of the NodeB must be in the same network segment. Note that a NodeB may have two
IP addresses, that is, service IP address and port IP address.
The IP address of the Layer 3 virtual port and the IP address of NodeB must be in the same network segment.
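The same-subnet constraint stated above can be checked with the standard ipaddress module. This is a planning-aid sketch, not a U2000 function; the addresses below are hypothetical, chosen only to illustrate a /30 (mask 255.255.255.252) segment.

```python
# Sketch: verify that two addresses fall in the same subnet, as required
# for the Layer 3 virtual port / UNI port and the NodeB port IP address.
import ipaddress

def same_subnet(ip_a: str, ip_b: str, mask: str) -> bool:
    """True if ip_b lies in the subnet that ip_a/mask belongs to."""
    net = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    return ipaddress.ip_address(ip_b) in net

# Hypothetical addresses on a /30: hosts .1 and .2 share 10.2.1.0/30.
assert same_subnet("10.2.1.1", "10.2.1.2", "255.255.255.252")
assert not same_subnet("10.2.1.1", "10.2.1.5", "255.255.255.252")
```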
NOTE
Service configuration on the OptiX PTN 3900-8 is the same as that on the OptiX PTN 3900, except for the
slots for service boards. For details on service configuration on the OptiX PTN 3900-8, see this example
about service configuration on the OptiX PTN 3900.
Service Planning
Table 6-65 lists the planning of parameters for NEs.
Table 6-66 lists the planning of bearer tunnels for the PWs.
Tunnel ID: 01, 01
VRF ID: 1
RD: 100:1
RT: 100:1
Mask: 255.255.255.252
Priority: Default: 60
PW ID: Automatically Allocated
Forward Label: 20
Reverse Label: 30
Encapsulation: MPLS
Configuration Process
This section describes how to configure an end-to-end IP line service.
Prerequisite
You must be an NM user with "network operator" authority or higher.
If an MPLS tunnel is used to carry services, you need to create a static MPLS tunnel.
If an IP/GRE tunnel is used to carry services, you need to create an IP/GRE tunnel.
If you need to use a UNI port exclusively, disable the DCN function on the port.
Procedure
Step 1 Set LSR IDs for NEs.
1. Navigate to the NE Explorer of NE1, and choose Configuration > MPLS Management
> Basic Configuration from the Function Tree.
2. Set parameters such as LSR ID and Start of Global Label Space for NE1. Then, click
Apply.
3. Navigate to the NE Explorer of NE2 and repeat the preceding steps to set parameters
(including LSR ID) for NE2.
l NE2
Port: 1-EG16-1 (Port-1)
General attributes
Port: 1-EG16-1 (Port-1)
Enable Port: Enabled
Port Mode: Layer 3 (NNI, for carrying a tunnel)
Working Mode: Auto-Negotiation (The working modes of the local port and
opposite port must be the same.)
Max Frame Length (byte): 1620 (Set this parameter according to the length of
service data packets. All the received packets with a length exceeding the
maximum frame length are discarded.)
Layer 3 attributes
Enable Tunnel: Enabled
TE Measurement: 10 (This parameter indicates the link cost. A link with a lower
cost is preferred during tunnel route selection. You can intervene in route
selection by adjusting the TE measurement: a smaller TE measurement value
indicates a higher priority.)
Specify IP Address: Manually (When you set this parameter to Manually, you
can set an IP address for the port.)
IP Address: 10.1.1.1
IP Mask: 255.255.255.252
l For this example, set Protocol Type to MPLS. When you set Protocol Type to IP,
Signaling Type and Template are unavailable.
l For this example, set Signaling Type to Static CR.
l For this example, select only Create Reverse Tunnel. When you select Create Reverse
Tunnel, a forward tunnel and a reverse tunnel are created. Otherwise, only a forward
tunnel is created. When you select Create Bidirectional Tunnel, a bidirectional tunnel
is created. When you select Create Protection, a protection tunnel is also created.
l NOTE
The OptiX PTN equipment supports only static constraint-based routing (CR) tunnels. A static CR
tunnel is based on certain constraints, which are established and managed through the CR
mechanism. Unlike a static tunnel, a static CR tunnel can be created when the routing information
is available and certain constraints, such as specified bandwidth, selected path, and QoS
parameters, are met. When you set Signaling Type to Static CR, you can select Create Reverse
Tunnel. When you set Signaling Type to RSVP TE, you can set Template to copy tunnel details
from a template.
Reverse Tunnel:
l NE2: 30
l NE1: 30
l NE2: 10.1.1.2
3. Add NE2 where a service is to be created to NE List. You can also right-click NE2 in
Physical Topology and choose Add NE to Service.
4. In VRF Configuration, select General to set basic attributes of VRF.
6. In VRF Configuration, choose Router Configuration > Static Router > Static Router
Object, and set static router objects.
The sink port of an IP line service must be a virtual IP port, that is, a Layer 3 virtual port.
4. Configure a PW. Click the PW tab and set general attributes of the PW.
l PW ID can be Automatically Allocated. A PW ID is unique on the entire network;
that is, one PW ID identifies only one PW.
l Set Forward Type and Reverse Type to Static Binding.
l Select a created forward tunnel for Forward Tunnel.
l Select a created reverse tunnel for Reverse Tunnel.
l Set Signaling Type to Dynamic.
NOTE
Forward Label and Reverse Label are attached to packet headers when IP packets are
encapsulated into PWs. These labels are used for label switching.
l Set Encapsulation to MPLS.
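The label encapsulation described in the note can be sketched as a simple label stack. This is a conceptual model, not device code; the tunnel label value used below is hypothetical (only the PW labels 20/30 come from this example's plan).

```python
# Sketch: MPLS PW encapsulation pushes an inner (PW) label and an outer
# (tunnel) label onto the packet; transit nodes switch on the outer label.

def encapsulate(payload: bytes, pw_label: int, tunnel_label: int) -> list:
    """Model the label stack: outer (tunnel) label first, then PW label."""
    return [tunnel_label, pw_label, payload]

# Forward direction with PW label 20; tunnel label 17 is hypothetical.
frame = encapsulate(b"ip-packet", pw_label=20, tunnel_label=17)
assert frame[0] == 17 and frame[1] == 20 and frame[2] == b"ip-packet"
```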
5. Apply the service configuration to NEs. Click Deploy to apply the service configuration
to NEs and also select Enable to provision the service.
6. Click Advanced and then set SAI QoS, PW QoS, and Advanced PW Attribute.
Table 6-74 QoS parameter settings for the service access port
----End
Definition
The Virtual Private LAN Service (VPLS), also called the Transparent LAN Service (TLS) or
virtual private switched network service, is a Layer 2 VPN (L2VPN) technology that is based
on Multi-Protocol Label Switching (MPLS) and Ethernet technologies.
Purpose
The primary goal of VPLS is to interconnect multiple Ethernet LANs through the Packet
Switched Network (PSN). In this manner, these LANs can function as one LAN. VPLS can
implement the multipoint-to-multipoint VPN networking; therefore, by using the VPLS
technology, service providers (SPs) can provide the Ethernet-based multipoint services through
MPLS backbone networks. In addition, by utilizing the VPLS solution in which MPLS virtual
circuits (VCs) function as the Ethernet bridge links, SPs can transparently transmit LAN services
on the MPLS network.
[Figure: VPLS networking. CEs in VLAN 1 and VLAN 2 attach to three PEs; each PE hosts
VSI 1 and VSI 2, which keep the services of the two VLANs separate across the network.]
When forwarding packets between sites, PEs learn the source MAC addresses, create MAC
forwarding entries, and then map the MAC addresses to attachment circuits (ACs) and PWs.
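The MAC learning behavior described above can be sketched as follows. This is a minimal conceptual model of a VPLS forwarder, not device code; the MAC and port names are hypothetical.

```python
# Sketch: a PE binds each source MAC to the AC or PW it arrived on, so
# later frames destined to that MAC go out the learned port; unknown
# destinations are flooded.

fdb = {}   # forwarding database: MAC address -> AC/PW name

def learn_and_forward(src_mac: str, dst_mac: str, in_port: str) -> str:
    fdb[src_mac] = in_port             # learn the source binding
    return fdb.get(dst_mac, "flood")   # known unicast, or flood if unknown

assert learn_and_forward("aa:01", "bb:02", "AC1") == "flood"  # bb:02 unknown yet
assert learn_and_forward("bb:02", "aa:01", "PW1") == "AC1"    # aa:01 was learned
```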
The basic VPLS transport components include ACs, virtual circuits (VCs), forwarders, tunnels,
encapsulation, PW signaling protocol, and Quality of Service (QoS).
Figure 7-2 shows the location of each basic VPLS transport component in the VPLS network.
[Figure 7-2: basic VPLS transport components. CE1 (VPN1, Site1) and CE2 (VPN2, Site1)
attach to PE1 and PE2 over ACs; PWs and PW signaling run between PE1, PE2, and PE3
inside tunnels across the MPLS network; each PE contains a forwarder.]
The following takes the flow direction of VPN1 packets from CE1 to CE3 as an example to
show the basic direction of the data flow. CE1 forwards Layer 2 packets to PE1. After PE1
receives these packets, the forwarder selects a PW to forward these packets to PE2. Then the
forwarder of PE2 forwards these packets to CE3.
Note that STP can run in the private network of the L2VPN, and all the BPDUs of STP are
transparently transmitted in the ISP network.
Packet Encapsulation on an AC
Packet encapsulation mode on an AC is determined by the user access mode. User access modes
can be VLAN access and Ethernet access. Each user access mode is described as follows:
l VLAN access: In VLAN access mode, the header of each Ethernet frame sent between CEs
and PEs carries a VLAN tag. This tag is a service delimiter that is used to identify users in
an ISP network. It is called provider-tag (P-tag).
l Ethernet access: In Ethernet access mode, the header of each Ethernet frame sent between
CEs and PEs does not carry any P-tag. If the frame header carries a VLAN tag, the VLAN
tag is the internal VLAN tag of the user packet, and is called user-tag (U-tag). The U-tag
is carried in a packet before the packet is sent to a CE and is thus not added by the CE. The
U-tag is used by the CE to identify which VLAN the packet belongs to, and is meaningless
to PEs.
Packet Encapsulation on a PW
Packet encapsulation modes on a PW can be Raw mode and Tagged mode, as shown follows:
l Raw mode
The P-tag is not transmitted on the PW. If a PE receives the packet with a P-tag from a CE,
the PE strips the P-tag, adds double MPLS labels (outer label and inner label) to the packet,
and then forwards the packet. If a PE receives the packet without a P-tag from a CE, the
PE directly adds double MPLS labels to the packet, and then forwards the packet. If a PE
sends a packet to a CE, the PE adds or does not add the P-tag to the packet as required, and
then forwards the packet to the CE. Note that the PE is not allowed to rewrite or remove
any existing tag.
l Tagged mode
The frame sent to a PW must carry the P-tag. If a PE receives the packet with a P-tag from
a CE, the PE directly adds double MPLS labels to the packet without stripping the P-tag,
and then forwards the packet; if a PE receives the packet without a P-tag from a CE, the
PE adds a null tag and double MPLS labels to the packet, and then forwards the packet. If
a PE sends a packet to a CE, the PE rewrites, removes, or preserves the service delimiter
of the packet as required, and then forwards the packet to the CE.
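The Raw and Tagged PW encapsulation rules above can be sketched as follows. This is an assumed model for illustration, not device code: a frame is represented as a dictionary that tracks only the P-tag and the MPLS label stack.

```python
# Sketch of PE ingress behavior for the two PW encapsulation modes:
# Raw mode strips any P-tag; Tagged mode guarantees a P-tag (adding a
# null tag if absent); both then push double MPLS labels.

def encapsulate_pw(frame: dict, mode: str) -> dict:
    frame = dict(frame)                   # do not mutate the caller's frame
    if mode == "raw":
        frame.pop("p_tag", None)          # Raw: P-tag is not sent on the PW
    elif mode == "tagged" and "p_tag" not in frame:
        frame["p_tag"] = 0                # Tagged: add a null tag if missing
    frame["labels"] = ["outer", "inner"]  # push double MPLS labels
    return frame

assert "p_tag" not in encapsulate_pw({"p_tag": 100}, "raw")
assert encapsulate_pw({}, "tagged")["p_tag"] == 0
assert encapsulate_pw({"p_tag": 100}, "tagged")["p_tag"] == 100
```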
[Figure: hierarchical VPLS networking. CE-1 through CE-4, each carrying VPLS-A and
VPLS-B, attach to UPEs; the UPEs aggregate to an NPE inside the ISP network.]
[Flowchart: Create a network -> Configure interface -> Configure the control plane ->
Configure tunnel -> Configure VPLS service -> End]
1. Create the network: Create NEs, configure the NE data, create fibers, and configure
clocks.
2. Configure the LSR ID: Specify the LSR ID for each NE that a service traverses and the
start value of the global label space. Each LSR ID is unique on a network.
3. Configure the network-side interface: Set the basic attributes and Layer 3 attributes
(such as the tunnel enabling status and IP address) for the interface that bears tunnels.
4. Configure the control plane: Set the associated protocol parameters of the control plane
for creating tunnels.
l To create a static MPLS tunnel to bear the VPLS service, you do not need to set the
associated parameters of the control plane.
l To create a dynamic MPLS tunnel to bear the VPLS service, you need to set the
following parameters:
1. IGP-ISIS protocol parameters
2. MPLS-RSVP protocol parameters
To create a dynamic PW to bear services, you need to set the IGP-ISIS and
MPLS-LDP protocol parameters.
l To create an IP tunnel or GRE tunnel to bear the VPLS service, you need to add a
static route.
6. Configure the QoS policy: The QoS policy is used to perform traffic management on the
VPLS service.
7. Configure the VPLS service:
1. Create the VPLS service, including setting the service ID and service name, and
selecting the service type and bearer type.
2. Configure the user-side interface, which is used to access base station services.
3. Configure the PW, including setting the PW type, label, and tunnel type.
4. Configure the QoS, including setting the QoS of the UNI and the PW.
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
The DCN function of a port carrying services must be disabled if the port needs to be used
exclusively.
Context
NOTE
The marked parameters are mandatory.
Procedure
Step 1 Choose Service > VPLS Service > Create VPLS Service from the main menu.
NOTE
When setting parameters, pay attention to the following points:
l Service Template: When creating services, you can use an existing template to improve the efficiency
of applying service configuration. It is recommended that you create a service template for typical
services or services with same or similar parameters.
l Networking Mode: The scenario of typical networking involves common networking scenarios. In
special scenarios, you can customize a networking scenario.
l Service Type: The values Service VPLS and Management VPLS are the same for the PTN equipment.
l VSI Name: The PTN equipment does not support this parameter.
l VSI ID: By default, the U2000 automatically allocates VSI IDs. You can click Auto-Assign to re-
allocate VSI IDs.
If the typical scenario defined by the U2000 is selected, you can click the Add drop-down button to select
the defined PE type as required.
Step 4 In NE List, select an NE, and click Details. On the VSI Configuration tab page, set the relevant
VSI parameters.
NOTE
l You need to set the parameters for all the NEs in NE List.
l It is recommended that you set Split Horizon Group parameters to prevent multicast storms.
Specifically, add the PWs of NEs to split horizon groups.
l When Binding Type is set to Static Binding, you need to select the tunnel to bind.
l When Binding Type is set to Select Policy, the U2000 automatically selects the required tunnel
according to the policy.
l By default, the U2000 automatically allocates PW IDs.
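The split-horizon rule recommended in the note above can be sketched as follows. This is a conceptual model of the forwarding rule, not U2000 or device code; the port names are hypothetical.

```python
# Sketch: a frame received from a PW in a split horizon group is never
# forwarded back out any other PW in that group, which breaks forwarding
# loops among fully meshed PEs and prevents multicast storms.

def flood_targets(in_port: str, ports: list, group: set) -> list:
    """Ports to flood an unknown/multicast frame to, honoring split horizon."""
    if in_port in group:
        # Arrived on a group PW: skip every PW in the group.
        return [p for p in ports if p != in_port and p not in group]
    return [p for p in ports if p != in_port]

group = {"PW1", "PW2"}
assert flood_targets("PW1", ["PW1", "PW2", "AC1"], group) == ["AC1"]
assert flood_targets("AC1", ["PW1", "PW2", "AC1"], group) == ["PW1", "PW2"]
```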
3. Click the SAI QoS tab, select an SAI, click Configure, and then choose QoS Policy or
QoS CAR Template. In the dialog box that is displayed, set the relevant information for
the SAI QoS.
Step 7 Select the Deploy check box and click OK.
NOTE
l If you clear the Deploy check box, the configuration data information is stored only on the U2000. If
you select the Deploy check box, the configuration data information is stored on the U2000 and applied
to NEs. By default, the Deploy check box is selected.
l If you select both the Deploy and Enable check boxes, the service is deployed and
enabled. A service is available on NEs only when it is enabled.
----End
Postrequisite
After the service is created successfully, the service is displayed in the VPLS service
management window.
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
A VPLS service that is created but not deployed exists.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the VPLS services that meet the filter criteria.
Step 3 Select the required VPLS service to be deployed, right-click, and then choose Deploy from the
shortcut menu.
Step 4 Select the required VPLS service to be enabled, right-click, and then choose Enable from the
shortcut menu.
----End
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
VSI resources must exist in the network and be discoverable on the U2000.
Context
Discrete VSI resources are mainly classified into the following two types:
l Services created incompletely on NEs but discovered on the U2000
l Discrete services manually created on the U2000
NOTE
In the Manage VSI Resource list, the service whose Service Name is empty is a discrete service.
Purpose of creating a new service: When you need to modify a service running in the existing
network but the specific configuration of the service is not determined, you can create a new
service based on the current configuration. If the new service meets the requirement, you can
add the created new service to the service running in the existing network. This improves the
efficiency of service deployment.
Purpose of converting to services: After a VPLS VPN network runs for a period, certain discrete
VSIs may be generated. With the function of adjusting a discrete service, you can add the discrete
VSIs to existing services.
Procedure
Step 1 Choose Service > VPLS Service > Manage VSI Resource from the main menu.
Step 2 Click Filter. In the dialog box that is displayed, set the filter criteria and filter the VSI resource.
Step 3 Optional: Create new service.
1. Select one or more VSI resources, right-click, and then choose Create New Service from
the shortcut menu.
2. In the dialog box that is displayed, set basic information and general VSI information of
the service and click OK.
The new service is displayed in the service list of the Manage VPLS Service window.
Step 4 Optional: Convert to service.
1. Select one or more VSI resources, right-click, and then choose Convert to Service from
the shortcut menu.
2. In the dialog box that is displayed, click Filter and set the filter criteria.
3. Click OK. Then, select a required service in the query result, and then click OK.
Step 5 Optional: Delete the VSI resource.
1. Select one or more VSI resources, and click Delete.
2. In the dialog box that is displayed, click Yes.
----End
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
VPLS services must be created and deployed on NEs.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the VPLS services that meet the filter criteria.
Step 3 Right-click a service and choose Test and Check from the shortcut menu.
Step 4 In the dialog box that is displayed, select the trail to be checked.
Set diagnosis parameters based on the requirements of operation and maintenance. The meaning
of each option is as follows:
1. Service Check: checks whether the configuration data of the source is consistent with
that of the sink.
2. OAM Tool: checks connectivity by performing the ping operation at each layer.
3. Collect Information: collects information about the public route, LDP peer, LDP session,
and LSP.
4. Traceroute: locates the position of a fault.
----End
Prerequisite
NEs must communicate with the NMS in the normal state.
Context
Ethernet OAM defines the following concepts:
l MD: short for maintenance domain. It refers to the network that requires OAM. An
important attribute of MDs is the level, which defines the OAM scope. MDs can be nested
but cannot be overlapped. MDs process OAM packets by following the rule of blocking
low-level packets, transparent transmitting high-level packets, and processing same-level
packets.
l MA: short for maintenance association. It can be considered as a service-related domain
that is composed of several MEPs.
l MEP: short for maintenance end point. It is the originating and terminating points of all
OAM packets and is related to services. Each MEP has a unique MEP ID in the MA. On a
network, an MA and an MEP ID can uniquely identify an MEP.
l MIP: short for maintenance intermediate point. MIP is related to an MD but irrelevant to
an MA. An MIP cannot send OAM packets. An MIP can respond to and forward LB packets
and LT packets, and can only forward CC packets.
Ethernet OAM checks the service connectivity in real time by periodically sending CC packets.
The source MEP periodically constructs and sends CC packets. After receiving the CC packets
from the source MEP, the destination MEP directly starts the CC check. If the destination MEP
does not receive any CC packets from the source MEP within a certain period, such as 3.5 times
the sending period, the destination MEP reports the CCLOS alarm.
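The CC loss rule described above (3.5 times the sending period) can be sketched as a simple timer check. This is an illustrative sketch of the rule, not equipment code; the timestamps are hypothetical.

```python
# Sketch: the destination MEP raises CCLOS if no CC packet has arrived
# within 3.5 times the CC sending period.

def cclos(last_rx_s: float, now_s: float, period_s: float) -> bool:
    """True if the CC loss condition is met (CCLOS alarm should be reported)."""
    return (now_s - last_rx_s) > 3.5 * period_s

# With a 1 s sending period, 3 s of silence is still within the window,
# but 4 s of silence exceeds 3.5 s and triggers CCLOS.
assert not cclos(last_rx_s=0.0, now_s=3.0, period_s=1.0)
assert cclos(last_rx_s=0.0, now_s=4.0, period_s=1.0)
```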
Ethernet OAM checks the connectivity of a service through LB tests. The source MEP constructs
and transmits an LBM frame and starts a timer. If the destination MEP or MIP receives the
LBM frame, it constructs and transmits an LBR frame to the source MEP, and the LB detection
succeeds. If the timer of the source MEP times out, the LB detection fails.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the VPLS services that meet the filter criteria.
2. In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
3. Select a link and click OK. The source MEP starts the CC check. If the link fails, the
destination MEP reports the CCLOS alarm.
----End
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the VPLS services that meet the filter criteria.
Step 3 Select a service in the service list. The Topology tab page displays the topology of this service.
Step 4 You can perform the following operations in the topology view:
1. Right-click an NE and then perform one of the following operations:
a. Choose Current Alarm from the shortcut menu to browse the current alarms of the NE.
b. Choose History Alarm from the shortcut menu to browse the history alarms of the NE.
c. Choose NE Explorer from the shortcut menu to jump to the NE Explorer window of
the selected NE.
d. Choose Fast Diagnose from the shortcut menu to diagnose the selected VSI.
2. Right-click an interface and then perform one of the following operations:
a. Choose Current Alarm from the shortcut menu to browse the current alarms of the
interface.
b. Choose History Alarm from the shortcut menu to browse the history alarms of the
interface.
c. Choose View Real-Time Interface Performance from the shortcut menu. In the real-
time performance window that is displayed, set the related parameters to view the real-
time performance of the selected interface. If you view the real-time performance for
the first time, you need to select the real-time performance indicators to be viewed.
3. Right-click a PW and then perform one of the following operations:
a. Choose Current Alarm from the shortcut menu to browse the current alarms of the PW.
b. Choose History Alarm from the shortcut menu to browse the history alarms of the PW.
c. Choose View Tunnel from the shortcut menu to view the tunnel used by the selected
PW.
d. Choose View Real-Time PW Performance from the shortcut menu. In the real-time
performance window that is displayed, set the related parameters to view the real-time
performance of the selected PW. If you view the real-time performance for the first
time, you need to select the real-time performance indicators to be viewed.
e. Choose Fast Diagnose from the shortcut menu to diagnose the selected PW.
4. Perform one of the following operations without selecting any node or link:
l Right-click in the blank area and choose Legend from the shortcut menu. The legend
is displayed in the topology view.
l Right-click in the blank area and choose Toolbar from the shortcut menu. The toolbar
is displayed in the topology view.
l Right-click in the blank area and choose Synchronize the Main Topology to refresh
the current topology view according to the NE layout in the Main Topology.
l Right-click in the blank area and choose Save to save the current NE layout in the
topology.
l Right-click in the blank area and choose Hide Interface from the shortcut menu.
Interfaces are not displayed in the topology view.
l Right-click in the blank area and choose Hide CE from the shortcut menu. CEs are not
displayed in the topology view.
----End
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
Context
By viewing the performance data, you can know whether the VPLS service is normally running
in a certain period.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the VPLS services that meet the filter criteria.
Step 3 View the runtime performance of a service. Right-click the PW and choose View Runtime
Performance from the shortcut menu in the topology view.
Step 4 Create a monitoring instance for a service. For details, refer to the chapter of monitoring instance
management in Performance Management System (PMS).
Step 5 View the history performance of a service. Right-click a required service and choose
Performance > View History Data from the shortcut menu.
----End
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
Context
You can monitor the alarm status of the specified services by customizing the monitoring
template.
Procedure
Step 1 Choose Fault > Service Monitoring > Service Monitoring Template from the main menu.
Step 2 Right-click in the monitoring list and choose Select Monitoring Group from the shortcut menu.
Step 3 In the dialog box that is displayed, click Add. In the dialog box that is displayed, set the name
of the monitoring group and click OK.
Step 4 Right-click a monitoring group to be configured and choose Add Monitoring Service from the
shortcut menu. In the dialog box that is displayed, select a service to be monitored and click
Add to add the service to the monitoring group.
----End
Context
When a service alarm is generated, certain phenomena occur, including but not limited to:
l The alarm panel blinks.
l The color of the status column in the service list changes.
l The color of the NE, interface, or link in the service topology changes.
If you find a service alarm through preceding phenomena, perform the following operations to
view the detailed alarm information.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the VPLS services that meet the filter criteria.
Step 3 Right-click the service with the alarm and choose Alarm > Current Alarm from the shortcut
menu, view the current alarms of the service.
You can also choose Alarm > History Alarm from the shortcut menu to view the history alarms
of the service.
Step 4 Select the service alarm in the alarm list and view the detailed alarm information in the details
area.
----End
Postrequisite
Preliminarily determine the possible cause of the alarm based on the detailed alarm information,
and then locate the fault by using the debugging tools.
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
The services to be diagnosed must be deployed.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the VPLS services that meet the filter criteria.
Step 3 Right-click a service and choose Diagnose > Create Test Suit from the shortcut menu.
Step 4 In the wizard dialog box, select the link to be diagnosed and click Next.
l In the VPLS Service Management window, right-click in the blank area and choose Diagnose > View
Test Strategy from the shortcut menu to view the running policy of test cases.
l You can add multiple diagnosis times for a period type.
----End
Postrequisite
In daily operation and maintenance, you can do as follows to view the diagnosis result and check
the service connectivity:
1. Right-click a service in the VPLS Service Management window and choose Diagnose >
View Test Result from the shortcut menu.
2. In the dialog box that is displayed, view the history data of the service diagnosis result.
3. Determine the service connectivity based on the diagnosis result.
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box that is displayed, set the filter criteria, and click Filter.
The NMS displays the VPLS services that meet the filter criteria.
Step 3 Select the required service, right-click, and then choose Confer Service Authority from the
shortcut menu.
Step 4 In Useable User, select the required user and click to add the user to Selected
User.
----End
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service Authority from the main menu.
Step 2 In the dialog box that is displayed, select the required user and view its manageable services in
the right pane.
NOTE
l In the right pane, you can adjust the authorization of a service after selecting it. To be specific, the
selected user has the right to a service after you select the service.
l The selected user has the rights to all VPLS services after you select All Services.
----End
As shown in Figure 7-5, the three CE networks need to communicate with each other. The VPLS
service of each CE network uses the same VLAN value, that is, 100. MPLS Tunnel 1, MPLS Tunnel
2, and MPLS Tunnel 3 exist among the three PEs.
Among the CE networks, three types of services are available: the voice service, the data service,
and the common Internet access service. Complex traffic classification can be performed at the
access side, and different QoS policies for assured bandwidth can be configured. The network
can prevent multicast storms.
[Figure 7-5: NE 1, NE 2, and NE 3 on the MPLS PSN are interconnected by MPLS Tunnel 1,
MPLS Tunnel 2, and MPLS Tunnel 3; CE 2 and CE 3 (both VLAN=100) attach over FE links.
The figure plans, for each NE, a UNI of 1-EG16-19-ETFC-1 and NNIs of 1-EG16-20-POD41-1
and 1-EG16-20-POD41-2 toward the other CE networks.]
Attribute              NE 1                 NE 2                 NE 3
VSI ID                 1                    1                    1
ID                     1                    1                    1
Sub Interface Type     VLAN Sub Interface   VLAN Sub Interface   VLAN Sub Interface
Prerequisite
You must be an NM user with "NE operator" authority or higher.
You must learn the sample network and requirement, and the relevant service planning.
Procedure
Step 1 Choose Service > VPLS Service > Create VPLS Service from the main menu.
Step 3 Select a VPLS service node. To be specific, select NE1, NE2, and NE3 respectively in Physical
Topology at the upper right corner of the window, right-click, and then choose NPE from the
shortcut menu.
Step 4 Set parameters for a VPLS service node. To be specific, select NEs from the NE list in the left
pane, and click Details. Then, set the relevant parameters in VSI Configuration at the lower
right corner of the window.
Split Horizon Group: Indicates the PW on the NNI side of an NE. For example, you need to add
the PWs between NE1 and NE2 and between NE1 and NE3 to a split horizon group. After you
configure Split Horizon Group, ports and links can be isolated. This setting prevents multicast
storms.
Step 5 Set NE SAI parameters. Right-click the SAI Configuration tab in the lower right corner, select
the three NEs, and then click Create.
Sub Interface Type: VLAN Sub Interface. Set this value based on the service planning.
Step 6 Select a tunnel for carrying VPLS services manually. To be specific, click the PW
Configuration tab at the lower right corner of the window. Then, select the PWs of the NEs
respectively, and click Modify.
----End
On the U2000, you can quickly configure the L3VPN service by using the trail function.
L3VPN
On an L3VPN, the Border Gateway Protocol (BGP) advertises VPN routes, and multiprotocol
label switching (MPLS) forwards VPN packets on the backbone networks of service providers
(SPs).
[Figure: The service provider's backbone consists of P and PE devices; CEs at the VPN 1 and
VPN 2 sites connect to PEs at the edge of the backbone.]
l Customer edge (CE): is an edge device on a customer network. A CE has one or more
interfaces directly connected to an SP network. The CE can be a router, a switch, or a host.
Generally, the CE cannot "sense" VPNs, and need not support MPLS.
l Provider edge (PE): is an edge device on an SP network. A PE is directly connected to the
CE. On an MPLS network, VPN processing is performed on PEs; thus, an MPLS network
is PE-intensive.
l Provider (P): is a backbone device on an SP network. A P is not directly connected to CEs. A P
needs to support only the MPLS forwarding capability and need not maintain VPN information.
PEs and Ps are managed by SPs. CEs are managed by users unless the users trust SPs with the
management right.
A PE can provide the access service for multiple CEs. A CE can access multiple PEs of the same
SP or of different SPs.
BGP
Different from the Interior Gateway Protocol (IGP), BGP focuses on controlling route
transmission and selecting the optimal routes instead of discovering and calculating routes.
VPNs use public networks to transmit VPN data, and the public networks use IGP to discover
and calculate their routes. The key to constructing a VPN is how to control the transmission of
VPN routes and select the optimal route between two PEs.
BGP uses TCP with the port number 179 as the transport-layer protocol. The reliability of BGP
is thus enhanced. Therefore, VPN routes can be directly exchanged between two non-directly
connected PEs.
BGP can transmit any information appended to a route as optional BGP attributes; such
information is transparently forwarded by BGP devices that cannot identify those attributes.
VPN routes can thus be conveniently transmitted between PEs.
When routes are updated, BGP sends only updated routes rather than all the routes. This
decreases the bandwidth consumed by the route transmission. The transmission of a great
number of routes over a public network becomes possible.
base station controller) through the DHCP protocol. The PTN equipment on a mobile carrier
network can transmit DHCP packets between a base station and a base station controller.
8.2.10 Principle of DHCP Relay
This section describes how the PTN equipment implements relay of DHCP packets between a
mobile network base station (running the DHCP client) and a DHCP server (usually a component
of a base station controller) in two DHCP relay modes.
Site
The concept of site is frequently used in the VPN technology. The following describes a site
from different aspects:
l A site is a group of IP systems with IP connectivity. IP connectivity can be realized
independent from SP networks.
As shown in Figure 8-2, in the networks on the left side, the headquarters of X company
in city A is a site; the branch of X company in city B is another site. IP devices in the two
sites can communicate without traversing any carrier's network.
[Figure 8-2: On the left, the headquarters of X company in City A (Site A) and the branch in
City B (Site B) each connect to the carrier's network through CEs. On the right, the headquarters
and the branch are connected by private lines and together compose a single site.]
l Sites are classified according to the topology relationship between devices rather than the
geographic positions of the devices, although the devices in a site are generally geographically
adjacent to each other.
If two IP systems are geographically separated and connected through private lines, the
two systems compose a site if they can communicate without the help of carrier's networks.
As shown in Figure 8-2, in the networks on the right side, if the branch network of city B
is connected with the headquarters network of city A through private lines instead of
carrier's networks, the branch network and the headquarters network compose a site.
l The devices in a site may belong to multiple VPNs. In other words, a site may belong to
multiple VPNs.
As shown in Figure 8-3, the decision-making department of X company in city A (Site A)
is allowed to communicate with the research and development (R&D) department in city
B (Site B) and the financial department in city C (Site C). Site B and Site C are not allowed
to communicate. In this case, two VPNs, namely, VPN 1 and VPN 2 can be established.
Site A and Site B belong to VPN 1; Site A and Site C belong to VPN 2. Site A, thus, belongs
to multiple VPNs.
[Figure 8-3: The decision-making department of X company in City A (Site A), the R&D
department in City B (Site B), and the financial department in City C (Site C) connect to the
carrier's network through CEs. Site A and Site B belong to VPN 1; Site A and Site C belong to
VPN 2.]
l A site is connected to an SP network through CEs. A site may contain more than one CE,
but a CE belongs only to one site.
Depending on the type of site, the following devices are recommended as CEs:
If the site is a host, use the host as the CE.
If the site is a subnet, use switches as CEs.
If the site comprises multiple subnets, use routers as CEs.
Sites connected to the same carrier's network can be divided into different sets based on policies.
Only sites that belong to the same set can access each other. A set of sites is a VPN.
NOTE
l In this manual, if two PEs establish BGP sessions and exchange VPN routing information, for one PE,
the other PE is called the peer PE.
l The CE that a PE accesses is called the local CE of the PE.
l The CE that the peer PE accesses is called the remote CE.
l In this chapter, IP addresses of the sites are IPv4 addresses.
VPN Instances
A VPN instance is also called a VPN Routing and Forwarding table (VRF). A PE has multiple
forwarding tables, including a public routing and forwarding table and one or more VPN
instances. That is, a PE has multiple instances, including a public instance and one or more VPN
instances.
[Figure 8-4: A PE on the backbone maintains a public forwarding table and separate VPN
instances for VPN1 and VPN2; Site1 (VPN1) and Site2 (VPN2) attach to the PE through CEs.]
The differences between a public routing table and a VRF are as follows:
l A public routing table contains the IPv4 routes of all the PEs and Ps, which are generated
by routing protocols or static routes of backbone networks.
l A VRF contains the routes of all sites that belong to the VPN instance. The VRF is obtained
through configuring static routes or by exchanging the VPN route information between a
CE and a PE, and between two PEs.
l A public forwarding table contains the minimum forwarding information extracted from
the corresponding public routing table; a VPN forwarding table contains the minimum
forwarding information extracted from the corresponding VPN routing table according to
the route management policies.
VPN instances on a PE are independent of each other. They are also independent of the public
routing and forwarding table.
Each VPN instance can be perceived as a virtual device, which maintains an independent address
space and has one or more interfaces that connect the PE associated with the instance.
In RFC 2547 (L3VPNs), a VPN instance is called the per-site forwarding table. Every connection
between a CE and a PE is associated with a VPN instance (though not in a one-to-one mapping).
The VPN instance is bound, through manual configuration, to the PE interface that connects to
the CE.
The independent address space of a VPN instance is realized by using route distinguishers
(RDs). A VPN instance manages the VPN membership and routing principles of its directly
connected sites by using the VPN target attributes.
The following describes RDs and the VPN target in detail.
VPN-IPv4 Addresses
Traditional BGP cannot process routes of VPNs with address spaces overlapping. Suppose both
VPN1 and VPN2 use addresses on the segment 10.110.10.0/24, each of them advertises a route
to this network segment, and no load balancing is performed between routes of different VPNs.
BGP selects only one route from the two routes. The other route is thus lost.
The cause of the aforementioned problem is that BGP cannot distinguish between VPNs with
the same IP address prefix. To solve this problem, BGP/MPLS IP VPN uses the VPN-IPv4
address family.
A VPN-IPv4 address consists of 12 bytes. The first 8 bytes represent the RD; the last 4 bytes
stand for IPv4 address prefix, as shown in Figure 8-5.
When configuring an RD, you only need to specify the Administrator subfield and the Assigned Number
subfield. Two types of the configuration formats of an RD are as follows:
l The RD format is "16-bit AS number:32-bit user-defined number". For example, 100:1.
l The RD format is "32-bit IPv4 address:16-bit user-defined number". For example, 172.1.1.1:1.
In this chapter, an RD value does not contain the Type field.
IPv4 addresses with RDs are called the VPN-IPv4 addresses. After receiving IPv4 routes from
a CE, a PE converts the routes into globally unique VPN-IPv4 routes and advertises the routes
in the public network.
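The 12-byte layout above can be sketched in code. The following is a minimal illustration (not U2000 functionality) of encoding an 8-byte RD in either configuration format and prepending it to a 4-byte IPv4 prefix; the Type field is included only to show the on-wire layout:

```python
import socket
import struct

def encode_rd(admin, assigned):
    """Encode an 8-byte route distinguisher.

    Type 0: "16-bit AS number:32-bit user-defined number", e.g. 100:1.
    Type 1: "32-bit IPv4 address:16-bit user-defined number", e.g. 172.1.1.1:1.
    """
    if isinstance(admin, int):
        # Type 0: 2-byte type, 2-byte AS number, 4-byte assigned number
        return struct.pack("!HHI", 0, admin, assigned)
    # Type 1: 2-byte type, 4-byte IPv4 administrator, 2-byte assigned number
    return struct.pack("!H", 1) + socket.inet_aton(admin) + struct.pack("!H", assigned)

def vpn_ipv4(rd, ipv4_prefix):
    """A VPN-IPv4 address: the 8-byte RD followed by the 4-byte IPv4 prefix."""
    return rd + socket.inet_aton(ipv4_prefix)

addr = vpn_ipv4(encode_rd(100, 1), "10.110.10.0")
assert len(addr) == 12   # 8-byte RD + 4-byte IPv4 prefix
```

With different RDs, two VPNs advertising the same prefix 10.110.10.0/24 produce distinct VPN-IPv4 addresses, which is exactly how the overlap problem above is avoided.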
VPN Target
The VPN target, also called the route target (RT), is a BGP extended community attribute.
BGP/MPLS IP VPN uses the VPN target to control the advertisement of VPN routing
information.
A VPN is associated with one or more VPN target attributes, which have the following types:
l Export target: After learning the IPv4 routes from directly connected sites, a local PE
converts the routes to VPN-IPv4 routes and sets the export target attribute for those routes.
As the BGP extension community attribute, the export target attribute is advertised along
with the routes.
l Import target: After receiving the VPN-IPv4 routes from other PEs, a PE checks the export
target attribute of the routes. If the export target is identical with the import target of a VPN
instance on the PE, the PE adds the route to the VPN routing table.
That is, the VPN target attribute defines the sites that can receive a VPN route, and the sites from
which the PE can receive routes.
After receiving a route from the directly connected CEs, a PE associates the route with one or
more export target attributes. The process during which VPNv4 routes match the import targets
of local VPN instances is called the private network route cross. For details, see the following
sections. BGP advertises the attributes along with the VPN-IPv4 route to related PEs. After
receiving the route, the PEs compare the export target attributes with the import target attributes
of all the VPN instances on the PEs. If the export and import attributes are matched, the route
is installed to the VPN routing tables.
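The export/import matching described above can be sketched as a set-intersection check. The route and instance structures below are hypothetical and only illustrate the matching rule, not the U2000 data model:

```python
def route_cross(route_export_targets, vpn_import_targets):
    """A VPNv4 route is installed into a VPN instance's routing table if any
    of the route's export targets matches an import target of the instance."""
    return bool(set(route_export_targets) & set(vpn_import_targets))

# A route exported with RT 100:1 matches an instance importing 100:1 and 200:1.
assert route_cross(["100:1"], ["100:1", "200:1"])
assert not route_cross(["300:1"], ["100:1", "200:1"])
```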
Similar to RDs, a VPN target (8 bytes, as shown in Figure 8-6) has the following formats:
l Type 0: The Administrator subfield occupies 2 bytes and the Assigned Number subfield
occupies 4 bytes. That is, the Administrator subfield is a 16-bit AS number and the Assigned
Number subfield is a 32-bit user-defined number.
l Type 1: The Administrator subfield occupies 4 bytes and the Assigned Number subfield
occupies 2 bytes. That is, the Administrator subfield is a 32-bit IPv4 address and the Assigned
Number subfield is a 16-bit user-defined number.
NOTE
When configuring a VPN target, you only need to specify the Administrator subfield and the Assigned
Number subfield. Two types of the configuration format of a VPN target are as follows:
l The VPN-Target format is "16-bit AS number:32-bit user-defined number". For example, 100:1.
l The VPN-Target format is "32-bit IPv4 address:16-bit user-defined number". For example, 172.1.1.1:1.
In this chapter, a VPN target value does not contain the Type field.
The reasons for using VPN targets instead of RDs as the extension community attributes are as
follows:
l A VPN-IPv4 route has only one RD, but can be associated with multiple VPN targets. With
multiple extension community attributes, BGP can greatly improve the flexibility and
scalability of a network.
l VPN targets are used in controlling route advertisement between different VPNs on a PE.
That is, after being configured with the same VPN target, different VPNs on a PE can import
routes between each other.
l On a PE, different VPNs have different RDs; however, the BGP extension community
attributes are limited. Using RDs as the attributes to import routes confines the network
scalability.
In a BGP/MPLS IP VPN, VPN targets are used to control the advertisement and receipt of VPN
routing information between sites. VPN export targets are independent of import targets. An
export target and an import target can be configured with multiple values; thus, flexible VPN
access control and diversified VPN networking schemes can be implemented. For more
information, see L3VPN.
RDs and RTs are similar in structure, but RDs cannot be replaced by RTs. This is because the
RT is a BGP extended community attribute, and BGP route withdrawal packets do not carry
extended attributes. In this case, the received packets have no RT attribute, and the RD attribute
therefore needs to be defined separately.
8.2.2 MP-BGP
The PTN equipment uses the MP-BGP protocol to implement the L3VPN function. This topic
describes the concepts related to MP-BGP.
Introduction to MP-BGP
As previously mentioned, traditional BGP-4, described in RFC 1771, can manage only IPv4
routing information and cannot manage the routes of VPNs with overlapping address
spaces.
To correctly process VPN routes, VPNs use Multiprotocol Extensions for BGP-4 described in
RFC 2858. MP-BGP supports multiple network layer protocols. In an MP-BGP Update message,
information about the network layer protocol is described in the Network Layer Reachability
Information (NLRI) and the Next Hop fields.
MP-BGP uses the address family to differentiate network layer protocols. An address family
can be a traditional IPv4 address family or other address families such as VPN-IPv4 address
family. For the values of address families, refer to RFC 1700 (Assigned Numbers).
NOTE
The PTN supports multiple MP-BGP extension applications such as VPN extension, which are configured
in the corresponding views of the address families. By default, for an IPv4 address family, after the peer
address and the AS to which the peer belongs are specified, the local NE has the capability of setting up
sessions with its peer. For other address families, the capability of setting up sessions must be manually
enabled on the local NE.
The transmission of VPN member information and VPN-IPv4 routes between PEs is
implemented by introducing the following extension attributes into BGP:
l MP_REACH_NLRI
l MP_UNREACH_NLRI
The two attributes are optional non-transitive. BGP speakers without the multiprotocol capability
ignore the two attributes and do not pass them to peers. In a VPN, PEs with the multiprotocol
capability advertise the VPN routing information to the peer PEs or ASBR PEs supporting
multiprotocol through MP-BGP. BGP peers without the multiprotocol capability ignore the
attributes, and do not identify and store the VPN routing information.
NOTE
Optional non-transitive is a BGP attribute type. If a BGP NE does not support this attribute type, the Update
messages with the attributes of this type are ignored, and the messages are not advertised to other peers.
When BGP runs in the interior of the autonomous system, it is referred to as IBGP. When BGP
runs between different autonomous systems, it is referred to as EBGP.
[Figure: IBGP runs within an autonomous system; EBGP runs between different autonomous
systems, with the CEs connected across the Internet.]
MP_REACH_NLRI
Multiprotocol Reachable NLRI (MP_REACH_NLRI) is used to advertise reachable routes and
information about the next hop. The attribute consists of three parts: Address Family
Information, Next Hop Network Address Information, and Network Layer Reachable
Information.
l Address Family Information: consists of 2-byte Address Family Identifier (AFI) and 1-byte
Subsequent Address Family Identifier (SAFI).
l An AFI identifies a network layer protocol. The values of network layer protocols are
described in RFC 1700 (Address Family Number). For example, 1 indicates IPv4.
l An SAFI indicates the type of the NLRI field.
l If the AFI is 1 and the SAFI is 128, it indicates that the address in the NLRI field is an
MPLS-labeled VPN-IPv4 address.
l Next Hop Network Address Information: consists of the 1-byte length of the next-hop
network address and next-hop network address of variable length. A next-hop network
address refers to the network address of the next NE on the path to the destination. In MP-
BGP, before advertising MP_REACH_NLRI to EBGP peers, BGP speakers set the next-
hop network addresses as the addresses of the interface that connects the local NE and the
remote NE. The next-hop network address remains unchanged when MP_REACH_NLRI
is advertised to IBGP peers.
l NLRI: consists of three parts: length, label, and prefix. Figure 8-9 shows the format of the
NLRI field.
[Figure 8-9: NLRI format: Length (1 byte), Label, Prefix.]
VPNv4 update messages exchanged between PEs or ASBR PEs carry MP_REACH_NLRI. An
Update message can carry multiple reachable routes with the same routing attributes.
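A minimal sketch of parsing one MPLS-labeled VPN-IPv4 NLRI entry, assuming the common layout of a 1-byte length expressed in bits, a 3-byte label (the top 20 bits carry the label value), an 8-byte RD, and the IPv4 prefix bytes:

```python
def parse_labeled_nlri(data):
    """Parse one MPLS-labeled VPN-IPv4 NLRI entry (illustrative sketch).

    Layout assumed: Length (1 byte, in bits) | Label (3 bytes) |
    Prefix = RD (8 bytes) + IPv4 prefix bytes.
    """
    bit_len = data[0]
    nbytes = (bit_len + 7) // 8            # prefix portion, rounded up to whole bytes
    body = data[1:1 + nbytes]
    label = int.from_bytes(body[:3], "big") >> 4   # top 20 bits are the label value
    rd = body[3:11]                        # 8-byte route distinguisher
    prefix = body[11:]                     # remaining IPv4 prefix bytes
    return label, rd, prefix
```

For example, an entry with length 112 bits covers the 3-byte label, the 8-byte RD, and a 3-byte (/24) IPv4 prefix such as 10.110.10.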
MP_UNREACH_NLRI
Multiprotocol Unreachable NLRI (MP_UNREACH_NLRI) is used to inform a peer to delete
unreachable routes. Figure 8-10 shows the format of the attribute.
l AFI: Corresponding to the address family values defined in RFC 1700 (Address Family
Number), an AFI identifies a network layer protocol.
l SAFI: Similar to SAFI in MP_REACH_NLRI, an SAFI indicates the NLRI type.
l Withdrawn Routes: Indicates an unreachable route list, which consists of one or more NLRI
fields. In the Withdrawn Routes field, a BGP speaker fills in an NLRI identical to that of a
previously advertised reachable route in order to withdraw the route.
Update messages carrying MP_UNREACH_NLRI are sent to withdraw the VPN-IPv4 routes.
An Update message can carry information about multiple unreachable routes.
If the labels of routes to be withdrawn are specified in the messages, the routes with specified
labels are withdrawn. If the labels are not specified, only the routes without labels are withdrawn.
Update messages carrying MP_UNREACH_NLRI do not need to carry any other path attributes.
A peer can delete routes based on labels because different routes are assigned different labels.
The optional parameters of negotiation capability in an Open message consist of three parts:
Capability Code, Capability Length, and Capability Value. Figure 8-11 shows the format of the
capability parameters.
l Capability Code: uniquely identifies the capability type. The value 1 indicates that the BGP
speaker has the MP-BGP capability.
l Capability Length: indicates the length of the capability field. For MP-BGP, the length of
the capability field is 4.
l Capability Value: indicates the value of the capability field. The length is variable and
depends on the type specified in Capability Code. Figure 8-12 shows the format of the
Capability Value field in MP-BGP.
The meanings of 2-byte AFI and 1-byte SAFI are the same as those of
MP_REACH_NLRI.
Res. is a 1-byte reserved field. A sender sets the value to 0, and the receiver ignores the
field.
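As a sketch of the layout described above (assuming the standard multiprotocol capability encoding: Capability Code 1, Capability Length 4, then the 2-byte AFI, the reserved byte set to 0, and the 1-byte SAFI):

```python
import struct

def mp_capability(afi, safi):
    """Encode the multiprotocol capability parameter:
    Capability Code 1, Capability Length 4,
    Capability Value = AFI (2 bytes) + reserved (1 byte, 0) + SAFI (1 byte)."""
    return struct.pack("!BBHBB", 1, 4, afi, 0, safi)

# AFI 1 = IPv4, SAFI 128 = MPLS-labeled VPN-IPv4 address family
cap = mp_capability(1, 128)
assert cap == b"\x01\x04\x00\x01\x00\x80"
```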
At present, BGP does not support dynamic capability negotiation. After a BGP speaker
advertises an Open message with optional capability fields,
l If the speaker receives a Notification message from its peer, it indicates the peer does not
support the capability. Then the BGP speaker tears down the session with its peer, and
sends an Open message without optional capability field to the peer, attempting a new BGP
connection.
l If the peer supports capability advertisement but the capability fields are unknown or
unsupported, the negotiation fails. Then the BGP speaker tears down the session with its peer
and sends an Open message without the unsupported optional capability fields (but possibly
carrying other optional capability fields) to the peer, attempting a new BGP connection.
After any change of BGP capability, such as enabling or disabling label-routing capability,
enabling or disabling address family capability (IPv4, and VPNv4), and enabling GR capability,
the BGP speaker tears down the session with its peer, and then re-negotiates the capability with
its peer.
In these situations, the PE sends Route Refresh messages carrying AFI and SAFI to the peers,
which have successfully negotiated the capability with the PE. If the peers do not support the
Route Refresh messages, the PE resets the sessions of the peers. After receiving the messages,
the peers re-transmit all the routes that satisfy AFI and SAFI.
Then the PE matches the remaining routes with the import targets of VPN instances on the PE.
The matching process is called route-cross of private networks.
The PE matches the VPNv4 routes against local VPN instances without first selecting the optimal
routes or checking whether tunnels exist.
For a route received from a local CE of a different VPN, if the next hop is reachable or can be
iterated, the PE also matches the route against the import targets of local VPN instances. The
matching process is called local route cross.
NOTE
To correctly forward a packet, a BGP device must find out a directly reachable address, through which the
packet can be forwarded to the next hop in the routing table. The route to the directly reachable address is
called the dependent route because BGP guides the packet forwarding based on the route. The searching
for a dependent route based on the next-hop address is called route iteration.
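Route iteration as described in the note can be sketched as a repeated lookup until a directly reachable next hop is found. The routing-table representation below is hypothetical:

```python
def iterate_next_hop(routing_table, next_hop, max_depth=8):
    """Follow next-hop entries until one is directly reachable (route iteration).

    routing_table maps a destination to either ("direct", interface) or
    ("indirect", next_hop); the bound on depth guards against loops.
    """
    for _ in range(max_depth):
        entry = routing_table.get(next_hop)
        if entry is None:
            return None          # next hop unreachable: iteration fails
        kind, value = entry
        if kind == "direct":
            return value         # directly reachable address found: iteration succeeds
        next_hop = value         # keep resolving the dependent route
    return None

table = {"3.3.3.3": ("indirect", "10.0.0.2"), "10.0.0.2": ("direct", "ge-0/0/1")}
assert iterate_next_hop(table, "3.3.3.3") == "ge-0/0/1"
```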
Tunnel Iteration
To transmit the traffic of private networks across a public network, a tunnel is required. After
the private cross routes are generated, route iteration based on destination IPv4 prefixes is
performed to find the proper tunnels (except for the local cross routes). The routes are injected
into the VPN routing table only after the tunnel iteration succeeds. The process of iterating routes
to the corresponding tunnels is called tunnel iteration.
After the tunnel iteration succeeds, the tunnel IDs are reserved for subsequent packet forwarding.
A tunnel ID uniquely identifies a tunnel. In VPN packet forwarding, the transmission tunnel is
searched out according to the tunnel ID.
For multiple routes to the same destination, choose one route based on the following rules if
load balancing is not carried out:
l If a route from the local CE and a crossed route to the same destination exist at the same
time, choose the route received from the local CE.
l If a local crossed route and a crossed route from other PEs to the same destination exist,
choose the local crossed route.
For multiple routes to the same destination, choose one route based on the following rules if
load balancing is carried out:
l Preferentially choose the route from the local CE. When one route from the local CE and
multiple crossed routes exist, choose the route from the local CE.
l Load balancing is performed between the routes from the local CE or between the crossed
routes instead of between the routes from the local CE and the crossed routes.
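The selection rules without load balancing reduce to a simple preference ranking. The route representation below is hypothetical and only illustrates the ordering:

```python
def select_route(routes):
    """Pick one route to a destination when load balancing is disabled.

    Each route is (source, payload). Preference: a route from the local CE
    beats any crossed route; a local crossed route beats one from other PEs.
    """
    rank = {"local_ce": 0, "local_cross": 1, "remote_cross": 2}
    return min(routes, key=lambda r: rank[r[0]])

routes = [("remote_cross", "via peer PE"),
          ("local_ce", "via CE1"),
          ("local_cross", "via VPN2 instance")]
assert select_route(routes) == ("local_ce", "via CE1")
```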
The PE advertises the IPv4 routes received from the local CE to the peer PE as VPNv4 routes
through MP-BGP.
The rules for advertising VPN-IPv4 routes through MP-BGP are the same as those of BGP:
l When multiple valid routes exist, a BGP speaker advertises only the best route to its peer.
l A BGP speaker advertises only the routes used by itself to its peer.
l A BGP speaker advertises the routes obtained through EBGP to all the BGP peers, both
EBGP peers and IBGP peers.
l A BGP speaker does not advertise the IBGP routes to its IBGP peers.
l A BGP speaker advertises the IBGP routes to its EBGP peers when the synchronization
between BGP and IGP is not enabled.
l After a connection is set up, a BGP speaker advertises all the BGP routes to its new peer.
A basic L3VPN refers to a VPN in which only one carrier exists, the MPLS backbone network is located
within an AS, LSPs serve as tunnels, and PEs, Ps, and CEs do not assume multi-roles. (No device assumes
the role of both a PE and a CE.)
Introduction
In a basic BGP/MPLS VPN, advertisement of VPN routing information involves CEs and PEs.
Ps need to maintain the routes of only the backbone network, and they need not know VPN
routing information. Generally, PEs maintain the routing information about the VPNs that the
PEs access, and they need not maintain all VPN routes.
After the whole process of route advertisement, the local CE and the remote CE can set up
reachable routes, and VPN routing information can be advertised in the backbone network.
VPN routing and forwarding tables on a PE are isolated from each other and independent of
public routing and forwarding tables. After learning routes from a CE, a PE decides to which
table the routes should be installed. Static routes and routing protocols cannot enable the PE to
make the decision. The decision capability can be realized only through the configuration
described as follows.
l If static routes are used between CEs and PEs, you need to specify VPN instances when
you configure the static routes.
l Generally, static routes are used when CEs are located within a stub VPN, or when CEs
are hosts or switches. If CEs are hosts or switches, generally, static routes to the sites to
which the CEs belong are configured on the connected PEs, and routing protocols are not
required.
NOTE
l If a VPN receives the routes outside the VPN or the routes advertised by non-PEs, and then
advertises the routes to a PE, the VPN is called a transit VPN.
l A VPN that receives only the routes within the VPN and the routes advertised by PEs is called
a stub VPN.
Using static routes between PEs and CEs features simple configurations, and can prevent
route flapping of CEs from affecting the stability of BGP VPNv4 routes of PEs in the
backbone network.
l If IGP is used between CEs and PEs, each VPN uses a process, and different VPNs use different
processes. Hence, you need to specify VPN instances when you configure the IGP processes.
l If a site contains backdoor links, the configuration is complicated. For the detailed
configuration, see Extension. In addition, there are some restrictions on the usage of IGP
between CEs and PEs.
l If EBGP is run between CEs and PEs, MP-EBGP peers must be configured in the
corresponding BGP VPN instance views.
When EBGP is run between PEs and CEs, to ensure that routing information is correctly
transmitted, nodes located in different places must be assigned with different AS numbers
because BGP detects route loops based on AS numbers. However, different VPN sites may
use the same AS number because VPN sites use private AS numbers. The AS number of
a transit VPN is globally unique.
[Figure: The BGP Update message, carrying the label, RD, and export RT, is advertised between
PEs; after route cross and tunnel iteration, the route is installed into the routing table.]
1. IGP routes are imported into the BGP IPv4 unicast address family of CE2.
2. CE2 advertises an EBGP Update message containing the route to the egress PE. After
receiving the message, the egress PE converts the route to a VPN-IPv4 route, and then
installs the route to the VPN routing table. If the egress PE has a VPN routing table of
another VPN instance, and the import RT of the instance and the export RT of the route
are the same, the route is added to the VPN routing table of the instance.
3. At the same time, the egress PE allocates an MPLS label to the route. Then the egress PE
adds the label and VPN-IPv4 routing information to the NLRI field and the export target
to the extension community attribute field of the MP-IBGP Update message. After that,
the egress PE sends the Update message to the ingress PE.
4. After receiving the message, the ingress PE filters the route based on BGP routing policies.
If the route fails to pass the filtering, the ingress PE discards the route. If the route passes
the filtering, the ingress PE performs the route cross. After the route cross succeeds, the
ingress PE performs tunnel iteration based on the destination IPv4 address to find the proper
tunnel. If the iteration succeeds, the ingress PE stores the tunnel ID and label, and then adds
the route to the VPN routing table of the VPN instance.
5. The ingress PE advertises a BGP Update message containing the route to CE1. The
advertised route is a common IPv4 route.
6. After receiving the route, CE1 installs the route to the BGP routing table. CE1 can import
the route to the IGP routing table by importing BGP routes to IGP.
The preceding process describes the advertisement of a route from CE2 to CE1. To ensure
that CE1 and CE2 can communicate, routes must also be advertised from CE1 to CE2. The
advertisement of a route from CE1 to CE2 is similar to the preceding process and is not
described here.
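Step 4 above (filtering, route cross, and tunnel iteration on the ingress PE) can be sketched as follows. This is a minimal illustration with assumed data shapes (plain dictionaries for routes, VPN instances, and the tunnel table); none of the names correspond to an actual U2000 or device API.

```python
def process_vpn_route(route, vpn_instances, tunnel_table, import_policy):
    """Filter, cross, and iterate a received VPN-IPv4 route on the ingress PE."""
    # 1. Filter the route based on the BGP routing policy.
    if not import_policy(route):
        return None  # route fails the filtering and is discarded

    # 2. Route cross: match the route's export RTs against the import RTs
    #    of each local VPN instance.
    for vrf in vpn_instances:
        if set(route["export_rts"]) & set(vrf["import_rts"]):
            # 3. Tunnel iteration: find a tunnel to the BGP next hop.
            tunnel_id = tunnel_table.get(route["next_hop"])
            if tunnel_id is None:
                continue  # iteration failed; route not installed here
            # 4. Store the tunnel ID and label, and add the route to the
            #    VPN routing table of the matching VPN instance.
            vrf["routes"][route["prefix"]] = {
                "label": route["label"],
                "tunnel_id": tunnel_id,
            }
            return vrf["name"]
    return None

# Illustrative use: a route tagged 100:1 crosses into a VRF importing 100:1.
vrf = {"name": "vpn1", "import_rts": ["100:1"], "routes": {}}
result = process_vpn_route(
    {"export_rts": ["100:1"], "prefix": "10.1.1.0/24",
     "label": 1024, "next_hop": "3.3.3.3"},
    [vrf], {"3.3.3.3": 7}, lambda route: True)
```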
NOTE
A basic L3VPN refers to a VPN in which only one carrier exists, the MPLS backbone network is located
within an AS, LSPs serve as tunnels, and PEs, Ps, and CEs do not assume multiple roles (no device is a PE
and a CE at the same time).
In an L3VPN backbone network, a P does not know VPN routing information because VPN
packets are transmitted between PEs through tunnels. The following takes Figure 8-14 as an
example to describe the forwarding of a packet from CE1 to CE2 in the L3VPN. As shown in
Figure 8-14, I-L indicates an inner label; O-L indicates an outer label.
To perform simple flow classification on IP packets in an IP network, you can use the DSCP
values in the ToS fields of IP packet headers, as shown in Figure 8-15.
(Figure 8-15: bit layout of the ToS byte, bits 7 to 0, as defined in RFC 2474.)
If you use the first six bits, that is, the DSCP field, in the type of service (ToS) byte of an IP packet
header to identify the packet, you can classify all packets into 64 types. After packets are classified,
other QoS features can be applied to the different classes. In this way, class-based congestion
management and traffic shaping are implemented.
When packets are classified at the edge of a network, DSCP labels are normally added to the
packets. Then, the packets can be classified inside the network according to the DSCP labels.
On the basis of the priority, queuing technologies, such as WFQ and CBWFQ, process the
packets in different ways. A downstream network can either use the classification of an upstream
network or re-classify data packets according to its own standards.
After packets are classified and labeled at the edge of a network, differentiated services are
provided according to labels on the intermediate nodes of the network.
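The classification described above operates on the ToS byte. A minimal sketch of extracting the 6-bit DSCP follows; the code point names are the standard DiffServ PHBs, not U2000 parameters.

```python
def dscp_from_tos(tos: int) -> int:
    """Return the 6-bit DSCP carried in the high bits of the ToS byte."""
    return (tos >> 2) & 0x3F

# A few well-known DSCP code points (RFC 2474, RFC 2597, RFC 3246).
WELL_KNOWN = {0: "BE (default)", 46: "EF", 10: "AF11", 26: "AF31"}

tos = 0xB8                 # ToS byte 1011 1000
dscp = dscp_from_tos(tos)  # high six bits 101110 = 46, the EF code point
```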
(Figure 8-16: NodeB 1 and NodeB 2 of carrier A connect over FE/GE links through the PSN to DHCP server A.)
NOTE
As shown in Figure 8-16, carrier A and carrier B share the same bearer network, but networks of different
carriers must be isolated. The DHCP relay functions on networks of two carriers are performed
independently but the processes are the same.
(Figure: DHCP relay over an L2VPN. NodeB 1 and NodeB 2 (DHCP clients) connect to PTN A over FE links; PTN A and PTN B transmit the DHCP packets over the L2VPN; the DHCP server (behind the DHCP relay) connects over an FE/GE link.)
The PTN equipment transmits DHCP packets through L2VPN services. The equipment
only attaches labels to the client request packets or server reply packets and then forwards the
packets in MPLS mode; it does not identify DHCP packets.
l As shown in Figure 8-18 and Figure 8-19, the bearer network between the PTN equipment is
a Layer 3 network.
(Figure 8-18: DHCP relay over an L3VPN. NodeB 1 and NodeB 2 (DHCP clients) connect to PTN A (DHCP relay) over E1/FE links; PTN A and PTN B carry the traffic over the L3VPN; the DHCP server connects to PTN B over an FE/GE link.)
(Figure 8-19: DHCP relay over an L3VPN, identical to Figure 8-18 except that NodeB 1 and NodeB 2 connect to PTN A (DHCP relay) over FE links.)
If a NodeB must communicate with a specific DHCP server, you can adopt the latter mode, DHCP
relay based on interfaces.
DHCP relay can implement relay of DHCP packets through an L2VPN or L3VPN network.
Before learning the two modes of DHCP relay, you must understand the DHCP packet format,
which helps you understand the DHCP relay principle.
NOTE
As shown in Figure 8-20, numbers in the brackets indicate the length of each field. The unit is byte.
Hardware Length: 1 byte. Indicates the length of the hardware address, in bytes. For Ethernet, the value of this field is 6.
Hops: 1 byte. Indicates the number of DHCP relays that the current DHCP packet has traversed. This field is set to 0 on the client. Each time the packet traverses a DHCP relay, the value of this field is increased by 1. This field is used to restrict the number of DHCP relays that the DHCP packet traverses.
Transaction ID: 4 bytes. Set to a random value so that the response packets of the server match the request packets of the client.
Seconds: 2 bytes. Indicates the time that elapses after the client starts the DHCP request. The unit is second.
Flags: 2 bytes. Only the most significant bit of this field is meaningful; the other bits are set to 0. The leftmost bit is the broadcast response flag bit, and its values are as follows:
l 0: The client requires that the server unicast response packets.
l 1: The client requires that the server broadcast response packets.
Client IP Address (ciaddr): 4 bytes. Indicates the IP address of the client, which can be an IP address assigned by the server to the client or an existing IP address of the client. In the initialization state, the client does not have an IP address, and the value of this field is 0.0.0.0.
Your (Client) IP Address (yiaddr): 4 bytes. Indicates the IP address assigned by the server to the client. When sending a DHCP response, the server fills this field with the IP address assigned to the client.
Relay Agent IP Address (giaddr): 4 bytes. Indicates the IP address of the first DHCP relay. When the client sends a DHCP request and the server and client are not on the same network, the first DHCP relay fills its own IP address into this field while forwarding the request packet. The server determines the network segment address according to this field and then selects the address pool for assigning addresses to users. The server also uses this field to send the response packet to this DHCP relay, which forwards the packet to the client.
NOTE
If the packet traverses more than one DHCP relay before reaching the DHCP server, the relays behind the first DHCP relay do not change this field; only the number of hops is increased by 1.
Client Hardware Address (chaddr): 16 bytes. Indicates the MAC address of the client. This field must be consistent with the hardware type and hardware length fields. When sending a DHCP request, the client fills its hardware address into this field. For example, in the case of Ethernet, if the hardware type and hardware length are 1 and 6 respectively, this field must be filled with a 6-byte Ethernet MAC address.
Server Host Name: 64 bytes. Indicates the name of the server whose configuration information is obtained by the client. This optional field is filled in by the DHCP server. If it is filled in, it must be a character string ended with 0.
File Name: 128 bytes. Indicates the name of the start configuration file of the client. This optional field is filled in by the DHCP server. If it is filled in, it must be a character string ended with 0.
Options: Variable, at least 312 bytes. Indicates the DHCP option field. It contains the configuration information assigned by the server to the client, such as the IP address of a gateway NE, the IP address of a DNS server, and the valid lease period during which the client can use the IP address.
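The fixed fields described above can be parsed directly from a packet. A minimal sketch using Python's struct module, following the RFC 2131 layout (parsing of the variable-length Options field is omitted):

```python
import struct
import socket

# Fixed DHCP header: op, htype, hlen, hops, xid, secs, flags,
# ciaddr, yiaddr, siaddr, giaddr, chaddr, sname, file (236 bytes).
DHCP_FIXED = struct.Struct("!4BIHH4s4s4s4s16s64s128s")

def parse_dhcp_fixed(data: bytes) -> dict:
    (op, htype, hlen, hops, xid, secs, flags,
     ciaddr, yiaddr, siaddr, giaddr, chaddr,
     sname, file_) = DHCP_FIXED.unpack_from(data)
    return {
        "hops": hops,                        # incremented by each relay
        "xid": xid,                          # transaction ID
        "broadcast": bool(flags & 0x8000),   # most significant Flags bit
        "ciaddr": socket.inet_ntoa(ciaddr),
        "yiaddr": socket.inet_ntoa(yiaddr),
        "giaddr": socket.inet_ntoa(giaddr),  # first relay's address
        "chaddr": chaddr[:hlen].hex(":"),    # client MAC (hlen bytes)
    }

# Build a sample request: broadcast flag set, giaddr filled by a relay.
pkt = DHCP_FIXED.pack(1, 1, 6, 0, 0x1234, 0, 0x8000,
                      bytes(4), bytes(4), bytes(4),
                      socket.inet_aton("10.0.0.1"),
                      bytes.fromhex("aabbccddeeff") + bytes(10),
                      bytes(64), bytes(128))
info = parse_dhcp_fixed(pkt)
```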
(Figure: L2VPN transmission scenario. The NodeB connects to PTN 1 over an FE (ETH) link; PTN 1 and PTN 2 transmit IP packets over the L2VPN; the DHCP server connects to PTN 2 over an FE/GE (ETH) link.)
(Figure 8-22: L3VPN transmission scenario. The NodeB connects to PTN 1 over an E1 (ML-PPP) link; PTN 1 and PTN 2 transmit IP packets over the L3VPN; the DHCP server connects to PTN 2 over an FE/GE (ETH) link.)
(Figure 8-23: L3VPN transmission scenario. The NodeB connects to PTN 1 over an FE (ETH) link; the rest is the same as Figure 8-22.)
The transmission scenarios shown in Figure 8-22 and Figure 8-23 are considered as examples.
The processing flow for L3VPN DHCP relay on the equipment is as follows:
l The processing procedure of DHCP relay based on VPN routing and forwarding tables
(VRFs) is as follows:
1. PTN A, which is enabled with DHCP relay, receives DHCP request packets from a
logical port of NodeB.
2. PTN A determines whether the number of relays that the current DHCP packets have
traversed exceeds the limit. If yes, the packets are discarded. Otherwise, the number of
relays is increased by 1.
3. PTN A selects the IP address of the server as the destination IP address, and sets the
IP address of the packet egress port as the source IP address.
NOTE
When the IP address of the server is selected as the destination IP address, the following modes
are available:
l Sharing mode: The server is selected according to the sharing algorithm.
l Broadcast mode: The packets are sent to each server in the VRF.
After the DHCP server receives the request packets, the remaining processing procedure is the same
as that in the case of DHCP relay based on VRFs.
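Steps 2 and 3 above can be sketched as follows. The data shapes, the MAX_HOPS constant, and the hash-based sharing function are illustrative assumptions; the document does not specify the actual sharing algorithm.

```python
MAX_HOPS = 4  # assumed relay-hop limit (the default Relay Hops value)

def relay_request(packet, servers, policy="share"):
    """Return the list of server IPs the relayed request is sent to,
    or None if the packet is discarded."""
    # Step 2: discard if the packet has already traversed too many relays.
    if packet["hops"] >= MAX_HOPS:
        return None
    packet["hops"] += 1  # otherwise, increase the relay count by 1

    # Step 3: choose the destination server(s) per the selection policy.
    if policy == "broadcast":
        return list(servers)        # send to every server in the VRF
    index = hash(packet["xid"]) % len(servers)  # illustrative sharing
    return [servers[index]]
```

For example, a packet that has already traversed four relays is dropped, while a fresh packet in broadcast mode is relayed to every configured server.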
Intranet VPN
In the simplest intranet, all the users in a VPN form a closed user group. The users within the
group can transmit packets between each other; however, the users cannot communicate with
users outside the VPN. This networking mode is called an intranet VPN. The sites within a VPN
generally belong to the same organization.
In this networking mode, each VPN must be allocated a VPN target as the export target and
import target. In addition, the VPN target cannot be used by other VPNs.
(Figure 8-24: intranet VPN networking. CEs at Site1 and Site2 of VPN2 and at Site3 and Site4 of VPN1 connect to PEs on the backbone (PE-P-PE). VPN1 uses import/export target 100:1; VPN2 uses import/export target 200:1.)
As shown in Figure 8-24, PEs allocate the VPN target of 100:1 to VPN1 and the target of 200:1
to VPN2. The two sites in VPN1 can access each other. The two sites in VPN2 can also access
each other. The sites in VPN1 and those in VPN2 cannot communicate.
Extranet VPN
If a VPN user needs to access some sites of another VPN, the extranet networking mode can be
used.
In extranet mode, if a VPN needs to access a shared site, the export target of the VPN must be
contained in the import target of the VPN instance on the shared site; the import target of the
VPN must be contained in the export target of the VPN instance on the shared site.
(Figure 8-25: extranet VPN networking. Site1 of VPN1 (CE at PE1, import/export target 100:1) and Site2 of VPN2 (CE at PE2, import/export target 200:1) both access the shared Site3 of VPN1 (CE at PE3, import/export targets 100:1 and 200:1).)
As shown in Figure 8-25, VPN1 and VPN2 can access Site3 of VPN1.
l PE3 can receive the VPN-IPv4 routes advertised by PE1 and PE2.
l PE1 and PE2 can receive the VPN-IPv4 routes advertised by PE3.
l Thus, Site1 and Site3 of VPN1 can access each other; Site2 of VPN2 and Site3 of VPN1
can access each other.
l PE3 does not advertise the VPN-IPv4 routes from PE1 to PE2 and does not advertise the
VPN-IPv4 routes from PE2 to PE1. Therefore, Site1 of VPN1 and Site2 of VPN2 cannot
access each other.
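The import/export rules above can be checked mechanically: two sites can communicate only when each one's import targets intersect the other's export targets. A minimal sketch using the RT values of Figure 8-25 (the helper function is illustrative, not a U2000 API):

```python
def can_communicate(a, b):
    """Two sites communicate when each imports what the other exports."""
    return bool(set(a["import"]) & set(b["export"])) and \
           bool(set(b["import"]) & set(a["export"]))

site1 = {"import": {"100:1"}, "export": {"100:1"}}  # VPN1
site2 = {"import": {"200:1"}, "export": {"200:1"}}  # VPN2
site3 = {"import": {"100:1", "200:1"},              # shared site of VPN1
         "export": {"100:1", "200:1"}}
```

With these values, Site1 and Site3 can communicate, Site2 and Site3 can communicate, but Site1 and Site2 cannot, matching the behavior described for Figure 8-25.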
A Spoke site advertises routes to the Hub site; then the Hub site advertises the routes to other
Spoke sites. No direct route exists between the Spoke sites. Communications between all Spoke
sites are controlled by the Hub site.
In the Hub&Spoke networking model, two VPN targets are configured to represent Hub and Spoke
respectively. The configuration of a VPN target on a PE must comply with the following rules:
l The export target and the import target of a Spoke-PE at a Spoke site are Spoke and
Hub respectively.
l A Hub-PE requires two interfaces or sub-interfaces. One interface or sub-interface receives
the routes from Spoke-PEs, and the import target of the VPN instance on the interface is
Spoke. The other interface or sub-interface advertises the routes to Spoke-PEs, and the
export target of the VPN instance on the interface is Hub.
Figure 8-26 Route advertisement from Site2 to Site1 in Hub&Spoke networking model
(The figure shows the CEs of Site1 and Site2 of VPN1 attached to Spoke-PEs, Site3 attached to the Hub-PE through the Hub-CE and Spoke-CE, and numbered arrows 1 to 7 tracing the advertisement of a route from Site2 through the Hub site to Site1.)
As shown in Figure 8-26, communications between Spoke sites are controlled by the Hub site.
The lines with arrowheads show the process of advertising a route from Site2 to Site1.
l The Hub-PE can receive the VPN-IPv4 routes advertised by all the Spoke-PEs.
l All the Spoke-PEs can receive the VPN-IPv4 routes advertised by the Hub-PE.
l The Hub-PE advertises the routes from the Spoke-PEs to the Hub-CE, and advertises the
routes from the Hub-CE to all the Spoke-PEs. The Spoke sites, therefore, can access each
other through the Hub site.
l The import target of any Spoke-PE is not the same as the export targets of other Spoke-
PEs. Therefore, any two Spoke-PEs do not directly advertise VPN-IPv4 routes to each
other. The Spoke sites cannot directly access each other.
Figure 8-27 shows the transmission path for data communication between Site1 and Site2 in
Figure 8-26. (The direction of data transmission is indicated by the arrowheads in the figure.)
(The figure shows the same Hub&Spoke topology as Figure 8-26, with numbered arrows 1 to 7 tracing the data path from Site1 through the Hub-PE and Hub-CE to Site2.)
1. Create the network: To create a network, you need to create NEs, configure NE data, and
create fibers.
2. Configure the LSR ID: Specify the LSR ID for each NE that a service traverses and the start
value of the global label space. Each LSR ID must be unique on the network.
3. Configure the NNI interface: Set the general attributes and Layer 3 attributes (tunnel enable
status and interface IP address) for the interfaces that carry the tunnel.
4. Configure the UNI interface:
The OptiX PTN 910 supports the following UNI interfaces: Ethernet interface, ML-PPP,
xDSL interface, microwave interface, LAG, and VLAN sub-interface.
The OptiX PTN 950 supports the following UNI interfaces: Ethernet interface, ML-PPP,
xDSL interface, microwave interface, LAG, and VLAN sub-interface.
The OptiX PTN 1900 supports the following UNI interfaces: Ethernet interface, ML-PPP,
SDH interface, LAG, and VLAN sub-interface.
The OptiX PTN 3900 supports the following UNI interfaces: Ethernet interface, SDH
interface, LAG, and VLAN sub-interface.
NOTE
The equipment can access the IP-over-E1 L3VPN service through ML-PPP on the
UNI side.
5. Configure the control plane: Set the protocol parameters relevant to the control plane for
tunnel creation.
l When you create a static MPLS tunnel to carry L3VPN services, you need not
configure the parameters relevant to the control plane.
l When you create a dynamic MPLS tunnel to carry BGP/MPLS services, you need to
configure IGP-ISIS protocol parameters.
Configure the protocol relevant to the control plane to implement route advertisement
between PEs.
l Create an MP-BGP instance and the MP-BGP peer.
Prerequisite
You must be an NM user with "network operator" authority or higher.
If a dynamic tunnel is used to carry the L3VPN service, the IS-IS protocol must be enabled.
Procedure
Step 1 Choose Service > L3VPN Service > Create L3VPN Service from the main menu.
Step 2 In the Service Information area, set the basic information of the L3VPN service.
l Specify Network Type. Then, the U2000 automatically generates the VRF for each
equipment according to the specified network type. By default, the network type is Full-
Mesh.
l By selecting the Service Template check box, you can create a service quickly and
conveniently. Here, only the general procedure for creating a service is described. For details
about how to create a template and use the template to create a service, see 4 Configuring
a Service Template.
NOTE
You can create a service template according to the requirement of service deployment. For example,
you can select the concerned parameters in the template and set the default values of certain parameters.
By applying the template in service creation, you can quickly and efficiently create a service. The
parameter list contains only the selected parameters and their values.
l Set VRF Name, RD, and RT. After you add the equipment, RD and RT are displayed in
the parameter list for the equipment on the right.
NOTE
l You can enter a value for the VRF ID. Otherwise, the U2000 automatically allocates an ID. In
addition, you can enter a value for the VRF ID only on the PTN equipment.
l The Service Name Auto Relate Description and Description Auto Relate VRF
Description check boxes are selected by default on the U2000.
Step 3 In the NE List area, add the equipment for creating a service.
To select the equipment, you can also right-click in the physical topology and choose Add Node
to Service from the shortcut menu.
Step 4 Click the Service Topology tab to view the change of the configuration in real time.
Now, you can view the topology that is displayed based on the network type and VRF
information.
Step 5 Set the VRF parameters for each equipment in the parameter list.
1. Configure General.
Double-click to expand General. The values of the general attributes RD and RT are
automatically set to the values that you set in Step 2 (if those values are set). In addition,
you can also change the values of those parameters.
Set the IP DSCP, VRF Description, Routing Policy, Label Distribution Policy, Tunnel
Binding, and Max.Route Count parameters.
NOTE
You can also click the expand and collapse buttons to expand and collapse all VRF parameters respectively.
In the case of a bound static tunnel, you can press the Delete key to unbind the tunnel.
When you set IP DSCP to Yes, the PTN transparently transmits the DSCP of IP packets. When you
set IP DSCP to No, the PTN modifies the DSCP of IP packets.
You must configure the bandwidth of the tunnel when dynamically binding a tunnel.
2. Configure DHCP Relay.
Double-click to expand DHCP Relay. Configure the parameters of Enabled, Server IP
Address, Relay Hops, and Selection Policy.
If you configure and enable a DHCP relay based on VRFs, you can recognize and process
the DHCP request packets that are transmitted from client-side ports.
3. Configure SAI.
Right-click and select Insert Instance to add the service access interface.
You can bind multiple interfaces and set the parameters relevant to the interfaces.
Double-click to expand DHCP Relay. Configure the parameters of Enabled and Server
IP Address.
NOTE
You can also click the SAI Configuration tab to add, modify, or delete an SAI or configure the SAI QoS.
If you configure and enable a DHCP relay based on ports, you can accurately control the interaction
between the equipment connected to each port and the DHCP server.
4. Configure Route Configuration.
Set the basic information, such as the BGP peer. In addition, the Route Aggregation and
Route Import parameters are optional.
You can select the routing protocol and set relevant parameters according to actual O&M
requirements.
NOTE
Right-click and select Insert Instance to add the ARP list.
l If you clear the Deploy check box, the configuration data information is stored only on the U2000. If
you select the Deploy check box, the configuration data information is stored on the U2000 and applied
to NEs. By default, the Deploy check box is selected.
l When you select both the Deploy and Enable check boxes, the service is enabled when it is
deployed. A service is available on NEs only when it is enabled.
----End
Postrequisite
After the service is created successfully, the service is displayed in the L3VPN service management
window.
Prerequisite
l You must be an NM user with "network operator" authority or higher.
l Data must be synchronized between the equipment relevant to the service.
l The L3VPN service must be created but not deployed.
Context
After you create the L3VPN service, the service configuration data is saved in the database of
the U2000, instead of being applied to NEs, before deployment. In this case, the service is in the
Undeployment state and you can deploy such a service to apply the service configuration data
to NEs.
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Select the service to be deployed, right-click, and choose Deploy from the shortcut menu.
----End
Prerequisite
l You must be an NM user with "network operator" authority or higher.
l Data must be synchronized between the equipment relevant to the service.
Context
After the L3VPN network runs for a period, certain discrete VRFs may exist on the network.
By using the function of adjusting discrete services, you can add those VRFs to the existing
services or directly delete those VRFs.
In the Manage VRF Resource list, if the value of the Service Name field is empty, it indicates
that the VRF is a discrete VRF.
Procedure
Step 1 Choose Service > L3VPN Service > Manage VRF Resource from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Optional: Convert to service.
1. Select one or more discrete services, right-click, and then choose Convert to Service from
the shortcut menu.
2. In the dialog box that is displayed, click Filter and set the filter criteria.
3. Click OK. Then, select a required service in the query result, and then click OK.
Step 4 Optional: Delete the VRF resource.
1. Select one or more VRF resource, and click Delete.
2. In the dialog box that is displayed, click OK.
----End
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a service and choose Test and Check from the shortcut menu.
Step 4 In the dialog box that is displayed, select the trail to be checked.
----End
Prerequisite
l You must be an NM user with "NE operator" authority or higher.
l The MP-BGP protocol must be configured on the public network.
l If L3VPN services are carried over dynamic tunnels, the IS-IS protocol must be enabled.
l The DCN function must be disabled for the ports with the L3VPN services.
l The service-related equipment must synchronize data.
Precautions
NOTE
Configuring the DHCP relay is optional when configuring an L3VPN service. The parameters related to DHCP
Relay are available only when you configure an L3VPN service.
Procedure
Step 1 Choose Service > L3VPN Service > Create L3VPN Service from the Main Menu.
Step 2 Configure the DHCP Relay function.
You can set the parameters related to DHCP Relay either in a template or in a VRF.
l Configure the DHCP Relay function for equipment in the template.
1. Choose Service Template in Service Information.
NOTE
You can create a service template according to the service deployment requirement. For
example, you can define (by selecting items) the related parameters and set the default values
for certain parameters. When creating a service, you can use the template. In this case, the
parameter table lists only the selected parameters and the default values of the parameters. This
ensures quick and effective service creation.
3. In VRF Configuration, select DHCP Relay and set the parameters related to DHCP
Relay.
Set the following parameters about DHCP Relay:
Enable: Enable or disable the DHCP Relay function. To enable the DHCP Relay
function, select Yes.
Server IP Address: Set the IP address of the DHCP server.
Relay Hops: Set the relay hops for the DHCP relay server within a range of 1 to 16. The
default value is 4.
Selection Policy: When the PTN relay equipment selects the server IP address as the
DIP (destination IP address), there are two selection policies, that is, Share and
Broadcast.
Share: The PTN equipment selects a server by running a sharing algorithm.
Broadcast: The PTN equipment broadcasts packets to each server in the VPN
routing and forwarding table (VRF).
l Deploy the DHCP function for interfaces.
1. Add the equipment where a service is to be created to NE List,
or right-click the equipment in Physical and choose Add NE to Service.
2. Click Details, VRF Configuration is displayed.
3. In VRF Configuration, select SAI > Interface > DHCP Relay and set the parameters
related to DHCP Relay.
Set the following parameters about DHCP Relay:
Enable: Enable or disable the DHCP Relay function. To enable the DHCP Relay
function, select Yes.
Server IP Address: Set the IP address of the DHCP server.
----End
While the L3VPN service is running, abnormal status may occur. By viewing the performance
data of the L3VPN service, you may learn the abnormal status in time. In this manner, the
maintenance personnel can take timely measures to avoid faults.
8.6.3 Monitoring Alarms of the L3VPN Service
By creating a service monitoring template, the maintenance personnel can monitor alarms of
services that are important to customers, and learn the running status of services in real time, thus
ensuring the normal running of the services.
8.6.4 Viewing the Alarms of an L3VPN Service
This topic describes how to view the alarms of an L3VPN service.
8.6.5 Diagnosing an L3VPN Service
Through the service diagnosis function, the NMS can periodically perform the ping operation.
This helps users to learn the connectivity of service links.
Prerequisite
l You must be an NM user with "NM monitor" authority or higher.
l The L3VPN service must be created successfully.
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
In the service topology, you can learn PE information about the source and sink, and interface
information about the connection to the CE.
Step 6 You can perform the following operations in the service topology.
l In the service topology, select a PE, right-click, and then choose the following menu items
from the shortcut menu respectively.
Choose Open NE Explorer, then, the NE Explorer of the equipment is displayed.
Choose VRF Details to view the detailed information of the VRF.
Choose View Real-Time VRF Performance to view the real-time VRF performance of
the service.
Choose Alarm > Current Alarm to view the current alarm of the PE.
Choose Alarm > History Alarm to view the history alarm of the PE.
l In the service topology, select one interface, right-click, and then choose the following menu
items from the shortcut menu respectively.
Choose Configure SAI to view or modify the configurations of the service access
interface.
Choose View Real-Time SAI Performance to view or modify the real-time performance
of the service access interface.
Choose Fast Diagnosis to diagnose the connectivity of the selected VRF. You can use
the VRF Ping or VRF Trace tool in fast diagnosis.
Choose Alarm > Current Alarm to view the current alarm of the service access interface.
Choose Alarm > History Alarm to view the history alarm of the service access interface.
----End
Context
By viewing the performance data, the maintenance personnel can determine whether a service
runs in the normal state within a period of time.
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 View the real-time VRF performance of a service. Right-click the NE and choose View Real-
Time VRF Performance from the shortcut menu in the topology view.
Step 4 Create a monitoring instance for a service. For details, refer to the chapter of monitoring instance
management in Performance Management System (PMS).
Step 5 View the history performance of a service. Right-click a required service and choose
Performance > View History Data from the shortcut menu.
----End
Procedure
Step 1 Choose Fault > Service Monitoring > Service Monitoring Template from the main menu.
Step 2 In the Centralized Monitoring dialog box, expand the All Service branch to view alarm
information of all services.
----End
Context
When a service alarm is generated, certain phenomena occur, including but not limited to:
l The alarm panel blinks.
l The color of the status column in the service list changes.
l The color of the NE, interface, or link in the service topology changes.
If you find a service alarm through preceding phenomena, perform the following operations to
view the detailed alarm information.
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click the service with the alarm and choose Alarm > Current Alarm from the shortcut
menu to view the current alarms of the service.
You can also choose Alarm > History Alarm from the shortcut menu to view the history alarms
of the service.
Step 4 Select the service alarm in the alarm list and view the detailed alarm information in the details
area.
----End
Postrequisite
Preliminarily determine the possible cause of the alarm based on the detailed alarm information,
and then locate the fault by using the debugging tool.
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
The services to be diagnosed must be deployed.
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a service and choose Diagnose > Create Test Suit from the shortcut menu.
Step 4 In the wizard dialog box, select the link to be diagnosed and click Next.
Step 5 Select the test case type.
Step 6 Set Test Time
1. Set Period Type and Run Time.
2. Click Add.
NOTE
l In the L3VPN Service Management window, right-click in the blank area and choose Diagnose >
View Test Strategy from the shortcut menu to view the running policy of test cases.
l You can add multiple diagnosis times for a period type.
----End
Postrequisite
In daily operation and maintenance, you can do as follows to view the diagnosis result and know
the service connectivity:
1. Right-click a service in the L3VPN Service Management window and choose
Diagnose > View Test Result from the shortcut menu.
2. In the dialog box that is displayed, view the history data of the service diagnosis result.
3. Determine the service connectivity based on the diagnosis result.
Prerequisite
You must be an NM user with "network operator" authority or higher.
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Select the required service, right-click, and then choose Confer Service Authority from the
shortcut menu.
Step 4 In Useable User, select the required user and click the add button to add the user to Selected
User.
----End
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service Authority from the main menu.
Step 2 In the dialog box that is displayed, select the required user and view its manageable services in
the right pane.
NOTE
l In the right pane, you can adjust the authorization of a service after selecting it. To be specific, the
selected user has the right to a service after you select the service.
l The selected user has the rights to all L3VPN services after you select All Services.
----End
(Figure: example networking. The CEs of Site1, Site2, and Site3 of VPN1 and of Site4, Site5, and Site6 of VPN2 access the PEs.)
Service Planning
In the case of an intranet, all CE sites in the same VPN can communicate with each other. Site1,
Site2, and Site3 belong to VPN1, and Site4, Site5, and Site6 belong to VPN2. Therefore, you
need to create two BGP/MPLS VPN services.
The planned parameters for VPN1 are as follows: VRF ID: 1; RD: 100:1; RT: 100:1; BGP Instance ID: 3; AS No.: 100.
The planned parameters for VPN2 are as follows: VRF ID: 2; RD: 200:1; RT: 200:1; BGP Instance ID: 4; AS No.: 100.
Configuration Process
This topic describes how to configure the intranet VPN services described in the configuration
example.
Prerequisite
You must be an NM user with "network operator" authority or higher.
You must learn about the networking requirements and service planning described in the
example.
The network must be created.
Procedure
Step 1 Set LSR IDs for NEs.
1. In the NE Explorer, select PE1 and choose Configuration > MPLS Management > Basic
Configuration from the Function Tree.
2. Set the parameters, such as LSR ID and Start of Global Label Space, for the NE. Click
Apply.
3. In the NE Explorer, select PE2 and PE3. To set the parameters, such as LSR ID, for PE2
and PE3, see the preceding two steps.
3. On the Layer 3 Attributes tab page, select 3-EG16-1(Port-1) and 3-EG16-2(Port-2), and
set Enable Tunnel to Enabled and Specify IP Address to Manually. Set IP Address and
IP Mask. Then, click Apply.
4. In the NE Explorer, select PE2. To set the attributes of the 3-EG16-1(Port-1) and 3-EG16-2
(Port-2) interfaces for PE2, see Step 2.1 to Step 2.3.
Set relevant parameters as follows:
The settings of the PE2-3-EG16-1(Port-1) port are the same as those of the PE1-3-EG16-1
(Port-1) port. The IP address is 192.168.2.2.
The settings of the PE2-3-EG16-2(Port-2) port are the same as those of the PE1-3-EG16-1
(Port-1) port. The IP address is 192.168.4.2.
5. In the NE Explorer, select PE3. To set the attributes of the 3-EG16-1(Port-1) and 3-EG16-2
(Port-2) interfaces for PE3, see Step 2.1 to Step 2.3.
Set relevant parameters as follows:
The settings of the PE3-3-EG16-1(Port-1) port are the same as those of the PE1-3-EG16-1
(Port-1) port. The IP address is 192.168.4.1.
The settings of the PE3-3-EG16-2(Port-2) port are the same as those of the PE1-3-EG16-1
(Port-1) port. The IP address is 192.168.3.2.
The basic attributes of the PE1-1-EG16-1(Port-1) port are the same as those of the PE1-3-
EG16-1(Port-1) port. Specify IP Address of the layer 3 attribute is Unspecified and Enable
Tunnel is Disabled.
The basic attributes of the PE1-1-EG16-2(Port-2) port are the same as those of the PE1-3-
EG16-1(Port-1) port. Specify IP Address of the layer 3 attribute is Unspecified and Enable
Tunnel is Disabled.
2. In the NE Explorer, select PE2. To set the attributes of the 1-EG16-1(Port-1) and 1-EG16-2
(Port-2) interfaces for PE2, see Step 2.1 to Step 2.3.
Set relevant parameters as follows:
The basic attributes of the PE2-1-EG16-1(Port-1) port are the same as those of the PE1-3-
EG16-1(Port-1) port. Specify IP Address of the layer 3 attribute is Unspecified and Enable
Tunnel is Disabled.
The basic attributes of the PE2-1-EG16-2(Port-2) port are the same as those of the PE1-3-
EG16-1(Port-1) port. Specify IP Address of the layer 3 attribute is Unspecified and Enable
Tunnel is Disabled.
3. In the NE Explorer, select PE3. To set the attributes of the 1-EG16-1(Port-1) and 1-EG16-2
(Port-2) interfaces for PE3, see Step 2.1 to Step 2.3.
Set relevant parameters as follows:
The basic attributes of the PE3-1-EG16-1(Port-1) port are the same as those of the PE1-3-
EG16-1(Port-1) port. Specify IP Address of the layer 3 attribute is Unspecified and Enable
Tunnel is Disabled.
The basic attributes of the PE3-1-EG16-2(Port-2) port are the same as those of the PE1-3-
EG16-1(Port-1) port. Specify IP Address of the layer 3 attribute is Unspecified and Enable
Tunnel is Disabled.
3. Choose Configuration > Control Plane Configuration > MP-BGP Configuration from
the Function Tree. Click the MP-BGP Configuration tab.
4. Click New. In the Create MP-BGP Protocol Instance dialog box, set MP-BGP Instance
ID to 1 and AS No. to 100. Click Apply.
5. Click the Peer Configuration tab. Click New. In the Create Peer dialog box, set the
parameters. For example, set MP-BGP Instance ID to 1 and AS Number to 100.
6. In the NE Explorer, select PE2. To set the parameters of the control plane for PE2, see the
preceding steps.
The IS-IS protocol parameters of the 3-EG16-1(Port-1) and 3-EG16-2(Port-2) ports are the
same as those of PE1.
The MP-BGP protocol parameters are the same as those of PE1.
Set the relevant parameters to configure PE1 as an MP-BGP peer.
7. In the NE Explorer, select PE3. To set the parameters of the control plane for PE3, see the
preceding steps.
The IS-IS protocol parameters of the 3-EG16-1(Port-1) and 3-EG16-2(Port-2) ports are the
same as those of PE1.
The MP-BGP protocol parameters are the same as those of PE1.
Set the relevant parameters to configure PE1 as an MP-BGP peer.
3. Configure the equipment list: double-click the equipment in the physical topology and
select the source and sink equipment.
FRR Protect Type — Forward and reverse tunnels: Node Protection. The bypass tunnel that a
PLR selects is required to protect the adjacent downstream node of the PLR and the link between
the adjacent downstream node and the PLR.
7. To configure the dynamic tunnel between PE2 and PE3, see the preceding steps.
FRR Protect Type — Forward and reverse tunnels: Node Protection. The bypass tunnel that a
PLR selects is required to protect the adjacent downstream node of the PLR and the link between
the adjacent downstream node and the PLR.
3. Set the required parameters of PE1, PE2, and PE3 on the VRF configuration tab page at
the lower right corner.
----End
exist in this network. Each set of the PE equipment is connected to a CE site. Spoke-PE1, Spoke-
PE2, and Hub-PE are OptiX PTN 3900 NEs. The following shows the connectivity between any
two sites.
l Site Spoke-CE1 and site Hub-CE can communicate with each other.
l Site Spoke-CE2 and site Hub-CE can communicate with each other.
l Site Spoke-CE1 and site Spoke-CE2 cannot communicate with each other directly; the
traffic between the Spoke-CE sites is forwarded through the Hub-PE and the central site
Hub-CE.
Service Planning
Site1 and Site2 are Spoke-CE sites and Site3 is a Hub-CE site.
In the case of the Hub&Spoke networking, the communication between the Spoke-CE sites in
the same VPN is controlled by the central site Hub-CE. Specifically, the traffic between the
Spoke-CE sites is forwarded through the Hub-PE and the central site Hub-CE.
Table 8-37 shows the VPN parameter planning.
Parameter  Value
VRF ID     Auto-Assign
RD         100:1
Hub RT     100:1
Spoke RT   200:1
AS No.     100
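The split between Hub RT and Spoke RT in the planning above is what forces inter-spoke traffic through the hub: spokes export only the Spoke RT and import only the Hub RT, so they never learn each other's routes directly. A minimal sketch of this matching (the dictionaries and helper function are illustrative, not a real BGP implementation):

```python
# Sketch: why the Hub RT / Spoke RT split forces spoke-to-spoke traffic
# through the hub. RT values are taken from Table 8-37; the model itself
# is illustrative only.

def learns(receiver_imports, sender_exports):
    """A VRF learns another VRF's routes only if an exported RT matches an imported RT."""
    return bool(set(receiver_imports) & set(sender_exports))

hub   = {"export": ["100:1"], "import": ["200:1"]}  # exports Hub RT, imports Spoke RT
spoke = {"export": ["200:1"], "import": ["100:1"]}  # exports Spoke RT, imports Hub RT

print(learns(hub["import"], spoke["export"]))    # True: the hub learns spoke routes
print(learns(spoke["import"], hub["export"]))    # True: spokes learn hub routes
print(learns(spoke["import"], spoke["export"]))  # False: no direct spoke-to-spoke routes
```

Since the spokes only ever learn routes re-advertised by the hub, all Spoke-CE traffic necessarily transits Hub-PE and Hub-CE.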
Configuration Process
This topic describes how to configure the Hub&Spoke VPN service described in the example.
Prerequisite
You must be an NM user with "network operator" authority or higher.
You must learn about the networking requirements and service planning described in the
example.
The network must be created.
Procedure
Step 1 Specify LSR IDs for NEs.
1. In the NE Explorer, select Spoke-PE1 and choose Configuration > MPLS
Management > Basic Management from the Function Tree.
2. Set the parameters, such as LSR ID and Start of Global Label Space, for the NE. Click
Apply.
3. In the NE Explorer, select Spoke-PE2 and Hub-PE. To set the parameters, such as LSR ID,
for Spoke-PE2 and Hub-PE, see Step a and Step b.
3. On the Layer 3 Attributes tab page, select 3-EG16-1(Port-1), set Enable Tunnel to
Enabled and Specify IP Address to Manually, and set IP Address and IP Mask. Click
Apply.
The basic attributes of the Spoke-PE1-1-EG16-1(Port-1) port are the same as the basic
attributes of the Spoke-PE1-3-EG16-1(Port-1) port, and the Specify IP Address parameter
in Layer 3 attributes is set to Unspecified and Enable Tunnel is Enabled.
2. In the NE Explorer, select Spoke-PE2. To configure the attributes of the 1-EG16-1(Port-1)
port, see Step 2.1 through Step 2.3.
Set the required parameters as follows:
The basic attributes of the Spoke-PE2-1-EG16-1(Port-1) port are the same as the basic
attributes of the Spoke-PE1-3-EG16-1(Port-1) port, and the Specify IP Address parameter
in Layer 3 attributes is set to Unspecified and Enable Tunnel is Enabled.
3. In the NE Explorer, select Hub-PE. To configure the attributes of the 1-EG16-1(Port-1)
port, see Step 2.1 through Step 2.3.
Set the required parameters as follows:
The basic attributes of the Hub-PE-1-EG16-1(Port-1) port are the same as the basic
attributes of the Spoke-PE1-3-EG16-1(Port-1) port, and the Specify IP Address parameter
in Layer 3 attributes is set to Unspecified and Enable Tunnel is Enabled.
3. Choose Configuration > Control Plane Configuration > MP-BGP Configuration from
the Function Tree. Click the MP-BGP Configuration tab.
4. Click New. In the Create MP-BGP Protocol Instance dialog box, set MP-BGP Instance
ID to 1 and AS No. to 100. Click Apply.
5. Click the Peer Configuration tab. Click New. In the Create Peer dialog box, set the
parameters.
6. In the NE Explorer, select Spoke-PE2. To set the parameters of the control plane for Spoke-
PE2, see the preceding steps.
The IS-IS protocol parameters of the 3-EG16-1(Port-1) port are the same as the IS-IS
protocol parameters of Spoke-PE1.
The MP-BGP protocol parameters are the same as the MP-BGP protocol parameters of
Spoke-PE1.
7. In the NE Explorer, select Hub-PE. To set the parameters of the control plane for Hub-PE,
see the preceding steps.
The IS-IS protocol parameters of the 3-EG16-1(Port-1) and 3-EG16-2(Port-2) ports are the
same as the IS-IS protocol parameters of Spoke-PE1.
The MP-BGP protocol parameters are the same as the MP-BGP protocol parameters of
Spoke-PE1.
3. Configure the equipment list: double-click the equipment in the physical topology and
select the source and sink equipment.
Parameter settings for the tunnel (example value and principle for value selection):
l Hop Type — Forward and reverse tunnels: Strictly Include. When you set Hop Type to
Strictly Include, the tunnel strictly follows the sequence of the set IP addresses during
establishment.
l FRR Protect Type — Forward and reverse tunnels: Node Protection. The bypass tunnel
selected for a PLR must protect the downstream neighboring nodes of the PLR and the links
between the PLR and its downstream neighboring nodes.
l Enable FRR BW Protect — Forward and reverse tunnels: Yes. Select this check box to
enable FRR bandwidth protection.
l LSP Type — Forward and reverse tunnels: E-LSP. Currently, you can set LSP Type to
only E-LSP.
Parameter settings (example value and principle for value selection):
l Hop Type — Forward and reverse tunnels: Strictly Include. When you set Hop Type to
Strictly Include, the tunnel strictly follows the sequence of the set IP addresses during
establishment.
l FRR Protect Type — Forward and reverse tunnels: Node Protection. The bypass tunnel
selected for a PLR must protect the downstream neighboring nodes of the PLR and the links
between the PLR and its downstream neighboring nodes.
l Enable FRR BW Protect — Forward and reverse tunnels: Yes. Select this check box to
enable FRR bandwidth protection.
l LSP Type — Forward and reverse tunnels: E-LSP. Currently, you can set LSP Type to
only E-LSP.
3. Set the required parameters on the VRF Configure tab page in the lower right corner.
----End
The PTN can quickly implement dual-homing protection for an E-Line service when a dual-homing
node, the AC link of a dual-homing node, or the PW of a network-side service is faulty. This
topic describes the concept, application, and configuration method of dual-homing protection.
Prerequisites
You must configure the services to be protected by dual-homing protection.
Configuration Flow
Figure 9-1 shows the configuration flow for dual-homing protection.
NOTE
l In the figure, attachment circuit (AC) indicates the access side. In the following description, AC is used to
describe the access side.
l In the figure, MC represents multi-chassis. In the following description, MC is used to describe multi-chassis.
Configuration Flow
Table 9-1 describes each task in the configuration flow for dual-homing protection.
Table 9-1 Description of tasks in the configuration flow for dual-homing protection
Configuration Task Remarks
MC-PW APS protection in the dual-homing protection scenario. Specifically, how to configure
the MC-PW APS and bind the slave MC-PW APS.
Definition
Link aggregation indicates that a group of physical Ethernet interfaces are bound together to
form a logical interface (that is, a LAG). Link aggregation increases bandwidth and provides
link protection. As shown in Figure 9-2, a LAG works between adjacent sets of equipment and
is irrelevant to the entire network structure. On an Ethernet, a link corresponds to a port, so the
link aggregation is also called the port aggregation.
[Figure 9-2: A LAG formed by aggregating Link 1, Link 2, and Link 3 between two sets of equipment exchanging Ethernet frames]
Equipment supports two aggregation modes, that is, manual aggregation and static aggregation.
There are two service sharing modes for each aggregation mode, that is, load sharing and
non-load sharing.
Manual aggregation: In this mode, you need to manually create a LAG and add member links
to the LAG. In addition, the Link Aggregation Control Protocol (LACP) is not required in this
mode. Therefore, when equipment is interconnected with equipment that does not support
LACP, the link aggregation still works. However, if a unidirectional fault occurs on a member
link (for example, a fiber cut occurs in one direction of an Ethernet optical interface), the transmit
end of the cut fiber cannot detect the fault, and the service is affected (in the load sharing mode)
or interrupted (in the non-load sharing mode).
Static aggregation: In this mode, you need to manually create a LAG and add member links to
the LAG. The LACP protocol is required in this mode. The LACP protocol does not change the
configuration information. Exchanging LACP packets allows the systems at the two ends of a
LAG to negotiate the aggregation instead of fully depending on the configuration of a single
end. As a result, the aggregation is controlled in a more accurate and effective manner.
Load sharing mode: In this mode, service traffic is available on each member link of the LAG,
and the member links share service transmission. To ensure that packets on member links are
in order and that the service traffic is evenly distributed on each member link, on the receive
side, the LAG algorithm is used to re-arrange the disordered packets, and the sharing algorithm
is used to distribute packets to each link of the LAG based on a certain feature value of the
packets (for example, the source MAC address or the sink MAC address). When LAG members change,
or certain links fail, the system automatically reallocates traffic. This retains the benefits of
link aggregation, such as bandwidth that increases linearly with the number of member links.
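The sharing algorithm described above can be sketched as a hash over a per-flow feature value: packets of one flow always hash to the same member link (which preserves packet order), while different flows spread across the links. The hash function and MAC-pair key below are illustrative only; real equipment uses its own hardware hash.

```python
# Sketch of LAG load sharing: a stable hash over a packet feature (here the
# source/sink MAC pair) selects one member link per flow, so packets of the
# same flow stay in order on one link. Illustrative only.
import zlib

def pick_member_link(src_mac: str, sink_mac: str, member_links: list) -> str:
    key = (src_mac + sink_mac).encode()
    index = zlib.crc32(key) % len(member_links)  # same flow -> same link
    return member_links[index]

links = ["link1", "link2", "link3"]
# The same flow always maps to the same link, keeping its packets in order:
a = pick_member_link("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee", links)
b = pick_member_link("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee", links)
print(a == b)  # True
# If a member link fails, traffic is redistributed over the remaining links:
print(pick_member_link("00:1a:2b:3c:4d:5e", "00:aa:bb:cc:dd:ee", links[:2]) in links[:2])  # True
```

The modulo over the current member count also illustrates the automatic reallocation: when the member set changes, flows are simply rehashed onto the links that remain.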
Non-load sharing: There are a maximum of two members in an LAG. One member, which is in
the active state, is used as an active link to carry the service traffic. The other member is in the
standby state. When the active link is faulty, the system activates the link in the standby state to
carry the service traffic.
Networking Application
The equipment supports the LAG application on the UNI side. As shown in Figure 9-3, a LAG
is created. In addition, intra-card LAGs and inter-card LAGs are supported. The bandwidth
for Ethernet services between adjacent equipment is increased in a linear manner, and
link reliability is improved.
[Figure 9-3: Intra-card and inter-card LAGs between the Ethernet cards of adjacent equipment]
MC-LAG
Multi-chassis link aggregation group (MC-LAG) is an extension of LAG defined in IEEE 802.3.
In the case of MC-LAG, the links on multiple NEs are aggregated as one group to increase
bandwidth. When one link or one NE in the group is faulty, MC-LAG functions to switch the
data flow to other available links in the MC-LAG. This section describes the MC-LAG in aspects
of the working principle, application for dual-homing protection, and support of the PTN
equipment.
[Figure: MC-LAG dual-homing networking — BTS/NodeB is connected to BSC/RNC through dual-homing nodes PE1 (LAG1) and PE2 (LAG2); standby links do not carry services]
Note: LAG1 and LAG2 may have one member link.
The MC-LAG consists of single-chassis (SC) LAGs (LAG1 and LAG2) on PE1 and PE2, MC-
LAG between PE1 and PE2, and LAG (LAG3) on BSC/RNC. By means of MC synchronization
communication of MC-LAG, PE1 and PE2 periodically notify the status of LAG1 and LAG2
to each other, and coordinate actions in response to faults. In addition, when the working status
of the AC-side link changes, PE1 and PE2 notify the status change to the NNI-side protection
protocol.
MC synchronization communication can be shared by all MC-LAGs and MC-LMSPs between
PE1 and PE2. Hence, you need to configure the MC synchronization communication tunnel only
once. To ensure quick switching and to improve the reliability of MC-LAG or MC-LMSP, you
must set up a direct MC synchronization communication tunnel between PE1 and PE2 and
configure protection for the tunnel.
The PTN equipment supports non-load-sharing MC-LAG. That is, only either LAG1 or LAG2
carries services and is active. The PTN equipment supports static SC-LAG and manual SC-LAG
in the MC-LAG. The aggregation modes of the SC-LAGs on the two dual-homing nodes and
BSC/RNC must be the same. In addition, if the MC-LAG contains more than two member links,
the SC-LAG on BSC/RNC must work in load-sharing mode.
1. Static Aggregation
In static aggregation mode, the equipment exchanges the LACP protocol packets to select LAG1
or LAG2 to carry services. The selection process is as follows:
l LAG1, LAG2, and LAG3 exchange protocol packets with each other. Then, one LAG is
selected to determine which link (non-load-sharing MC-LAG) or links (load-sharing
MC-LAG) in the MC-LAG carry services, according to the LAG system priority or
system MAC address.
The LAG with the highest system priority is preferred. When the system priorities of the
LAGs are the same, the LAG with the smallest MAC address is preferred. The system MAC
address indicates the system MAC address of the equipment with the LAG.
l When an LAG is selected, the LAG chooses one (non-load-sharing LAG) or more (load-
sharing LAG) member links to carry services according to the priorities and status of its
member ports, and then the LAG negotiates with the opposite end to reach an agreement.
Generally, configure a higher system priority for the SC-LAG on a dual-homing node than that
for the SC-LAG on BSC/RNC so that LAG1 or LAG2 with higher bandwidth carries services.
PE1 and PE2 notify their available bandwidth to each other by means of MC communication.
MC-LAG selects LAG1 or LAG2 with higher available bandwidth to carry services. When the
available bandwidth of LAG1 is the same as that of LAG2, LAG1 or LAG2 is selected in the
preceding process.
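The selection rule described above (higher available bandwidth first, then system priority, then the smaller system MAC address) can be sketched as a single comparison key. Here a larger number models a higher system priority; real LACP encodes system priority differently, so this is an illustration of the ordering only, not the protocol.

```python
# Sketch of the LAG selection rule: prefer higher available bandwidth, then
# higher system priority, then the smaller system MAC address. Illustrative
# model only; not an LACP implementation.

def select_lag(lags):
    """lags: list of dicts with keys 'name', 'bandwidth', 'priority', 'mac'."""
    # min() over (-bandwidth, -priority, mac): negation makes larger bandwidth
    # and priority win, while the MAC string (fixed-length hex) compares
    # lexicographically so the smaller address wins ties.
    best = min(lags, key=lambda l: (-l["bandwidth"], -l["priority"], l["mac"]))
    return best["name"]

lag1 = {"name": "LAG1", "bandwidth": 2000, "priority": 100, "mac": "00:00:00:00:00:01"}
lag2 = {"name": "LAG2", "bandwidth": 1000, "priority": 100, "mac": "00:00:00:00:00:02"}
print(select_lag([lag1, lag2]))  # LAG1: higher available bandwidth
lag2["bandwidth"] = 2000
print(select_lag([lag1, lag2]))  # LAG1: bandwidths equal, smaller MAC wins
```

The same key can be reused by both dual-homing nodes, which is why exchanging bandwidth and priority over MC communication is enough for them to reach the same decision.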
2. Manual Aggregation
In manual aggregation mode, LAG1 or LAG2 contains only one member link, and the following
cases apply:
A. BSC/RNC supports manual LAG and an SC LAG is configured for interconnection
with dual-homing nodes.
When configuring system priorities for LAG1 and LAG2, make sure that the dual-homing nodes
and BSC/RNC carry services over the same link. To ensure normal switching of MC-LAG in
case of a unidirectional fiber cut, configure Ethernet port OAM so that it monitors the working
status of LAG member links. In this case, you need to enable Ethernet port OAM (IEEE 802.3ah)
for the member links of LAG1 and LAG2, and set Link Trace Protocol to 802.3ah for LAG1
and LAG2.
The two dual-homing nodes exchange information by means of MC synchronization
communication, and select LAG1 or LAG2 to carry services according to the system priority or
MAC address of equipment. The dual-homing nodes select the LAG with a higher priority with
preference. When the two LAGs are of the same system priority, the dual-homing nodes select
the LAG on the equipment with a smaller MAC address. Then, BSC/RNC selects a link in LAG3
to carry services according to a certain rule.
B. BSC/RNC does not support LAG but supports extension of IEEE 802.3ah.
In this case, you need to enable Ethernet port OAM (IEEE 802.3ah) and extension of IEEE
802.3ah for the two links in the MC-LAG. For LAG1 and LAG2, set Link Trace Protocol to
IEEE 802.3ah, Switch Protocol to extension of IEEE 802.3ah, and Switch Mode to Passive
(passive only for an LAG). For BSC/RNC, set the switch mode to active.
In this case, BSC/RNC periodically transmits IEEE 802.3ah extension packets over the selected
active link and standby link. The packets contain information about the working status of the
links (active or standby). When receiving the IEEE 802.3ah extension packets, the dual-homing
nodes select LAG1 or LAG2 to carry service packets.
3. MC-LAG Switching Rule
An MC-LAG, whether static or manual, must comply with the following switching principles:
l When LAG1 and LAG2 are in non-load-sharing mode, protection of LAG1 or LAG2 takes
place first in case of a link fault. If the member ports of LAG1 or LAG2 are faulty, the
services are switched to the LAG on the opposite equipment.
l When LAG1 and LAG2 are in load-sharing mode, the NEs notify the available bandwidth
of LAG1 and LAG2 to each other. Then, either LAG1 or LAG2 with higher available
bandwidth is selected to carry services.
l When the working status of the AC-side LAG changes, PE1 and PE2 notify the status
change to the NNI-side protection protocol.
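The first switching principle above can be sketched as follows: a fault is handled inside the local SC-LAG first, and services move to the LAG on the opposite equipment only when every member of the preferred LAG is down. The function and data layout are illustrative only, not equipment behavior.

```python
# Sketch of non-load-sharing MC-LAG switching: protect within the local
# SC-LAG first; switch to the other dual-homing node's LAG only when all
# members of the preferred LAG are faulty. Illustrative model only.

def resolve_active(lag_members):
    """lag_members: {'LAG1': [link-up flags], 'LAG2': [link-up flags]}.
    Returns (carrying_lag, active_link_index), or (None, None) if all are down."""
    for lag in ("LAG1", "LAG2"):              # LAG1 preferred when both are usable
        for i, up in enumerate(lag_members[lag]):
            if up:
                return lag, i                 # first healthy member carries traffic
    return None, None

# One LAG1 member fails: protection happens inside LAG1 first.
print(resolve_active({"LAG1": [False, True], "LAG2": [True]}))   # ('LAG1', 1)
# All LAG1 members fail: services switch to LAG2 on the opposite node.
print(resolve_active({"LAG1": [False, False], "LAG2": [True]}))  # ('LAG2', 0)
```

The third principle then corresponds to pushing the result of this decision to the NNI-side protection protocol whenever it changes.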
E-Line service — protection scheme: 1:1 MC-PW APS and MC-LAG; protected objects:
dual-homing nodes, AC-side links of dual-homing nodes, and service PWs.
NOTE
In case of a discrepancy between the load-sharing modes on the dual-homing nodes (PE1 and PE2), the available
bandwidth of the SC LAGs of the nodes is different. When the services are switched from one SC LAG to the
other, service packets may be lost.
Table 9-3 Support for MC-LAG application scenario I (SC LAGs on dual-homing nodes in non-
load-sharing mode)
(Columns: Position of LAG, Load Sharing Mode, Revertive Mode, Aggregation Mode, Remarks)
Table 9-4 Support for MC-LAG application scenario II (SC LAGs on dual-homing nodes in
load-sharing mode)
(Columns: Position of LAG, Load Sharing Mode, Revertive Mode, Aggregation Mode, Remarks)
services. This topic describes how to configure MC-LAG protection when the AC-side links are
Ethernet links in a dual-homing scenario.
Prerequisite
l You must be an NM user with "network operator" authority or higher.
[Figure: MC-LAG dual-homing networking — BTS/NodeB is connected to BSC/RNC through dual-homing nodes PE1 (LAG1) and PE2 (LAG2); standby links do not carry services]
Note: LAG1 and LAG2 may have one member link.
Configuration Guide
l For details on how the PTN equipment supports MC-LAG, see Table 9-3 and Table 9-4
in MC-LAG.
l If a service on a dual-homing node is configured with protection, the UNI port accessing
the service must be configured as the master port in the SC-LAG on the dual-homing node.
l It is recommended that you set the load-sharing modes of the SC-LAGs on dual-homing
nodes PE1 and PE2 to the same mode.
NOTE
If the load-sharing modes are different, the available bandwidths of the two SC-LAGs are different. When
services are switched from an SC-LAG to another, packet loss may occur.
l You must configure SC-LAGs on the two dual-homing nodes and then configure MC-LAG
protection groups on the two dual-homing nodes.
l The aggregation modes of the SC-LAGs on the two dual-homing nodes and BSC/RNC
must be the same.
l Restoration Mode of the MC-LAG protection groups on the two dual-homing nodes must
be the same.
l Reliability of LAGs in static aggregation mode is higher than that of LAGs in manual
aggregation mode and thus the static aggregation mode is usually recommended. If MC-
LAG is based on interconnection with the equipment that does not support the LACP
protocol, only the manual aggregation mode is applicable.
l If the SC-LAG on BSC/RNC contains more than two member links, the load-sharing mode
of the SC-LAG on BSC/RNC must be set to load sharing.
l For convenient management, maintenance, and fault identification, it is recommended to
configure AC-side MC-LAG as follows:
Set related parameters for all AC-side MC-LAGs so that all the active links in MC-LAG
are on the same dual-homing node.
Set related parameters for AC-side MC-LAGs and configure the working PW (that is,
Service PW) of NNI-side MC-PW APS so that the active AC-side links and the NNI-
side working PW are on the same dual-homing node.
Procedure
Step 1 Display the interface where you can create the services to be protected.
l In the case of a PWE3 service, choose Service > PWE3 Service > Create PWE3 Service
from the main menu.
l In the case of a VPLS service, choose Service > VPLS Service > Create VPLS Service
from the main menu.
Step 2 Create a PWE3 service and configure the basic information, source NE, and sink NE of the
service. Click the Service Topology tab page.
Step 3 In the Service Topology view, select two NEs, right-click, and choose ETrunk from the
shortcut menu. The Create Cross-Equipment Link Aggregation Management Group dialog
box is displayed.
Step 4 On the left NE, configure LAG1, the intra-NE link aggregation group. Click .... The Link
Aggregation Group Management window is displayed.
NOTE
l After you select the Automatically Assign check box, the U2000 automatically assigns the LAG
No. Otherwise, you need to manually enter the LAG No.
l When LAG Type is Static, the link aggregation control protocol (LACP) is running. When LAG
Type is Manual, the LACP is not running.
l Sharing means that each member link of the LAG carries the services at the same time and shares
the load together. Non-Sharing indicates that only one member link of the LAG has traffic.
l After creating a LAG of the static aggregation mode, you can query the Link Aggregation Group
Details and Link LACP Packet Statistics of this LAG.
Step 5 On the right NE, configure LAG2, the intra-NE LAG. For details, see descriptions in the
preceding step.
Step 6 On the left NE, configure the inter-NE synchronization communication between the two NEs.
Click .... The Synchronization Protocol Management window is displayed.
1. Select an existing inter-NE protocol channel and click OK.
2. Optional: Click New. In the Create Cross-Equipment Synchronization Protocol dialog
box, set relevant attributes and click OK.
Step 7 On the right NE, configure the inter-NE synchronization communication between the two NEs.
For details, see descriptions in the preceding step.
Step 8 Set relevant attributes and click OK. A dialog box is displayed indicating that the operation is
successful.
----End
Prerequisite
l You must be an NM user with "network operator" authority or higher.
l The MPLS tunnel that carries the PW must be created. For how to create a tunnel, see 3.3.1
Creating a Tunnel.
l All equipment resources, including logical ports, QoS, and PW templates, must be
available.
Networking Diagram
As shown in Figure 9-6, the services from BTS/NodeB are transported to BSC/RNC through
the PTN network. The MC-PW APS consists of the PW APS protection group on PE3 and MC-
PW APS protection groups on PE1 and PE2.
[Figure 9-6: MC-PW APS networking — services from BTS/NodeB are dual-homed to PE1 and PE2 on the AC side; 1:1 PW APS (W = working PW, P = protection PW) runs toward PE3, which connects to BSC/RNC; a DNI-PW between PE1 and PE2 provides MC protection]
MC-PW APS
MC-PW APS protection involves the working PW, protection PW, and DNI-PW. In the case of
PW APS, PW OAM functions to detect the status of the working PW, protection PW and DNI-
PW. When PE equipment detects a fault on the working PW, the PE equipment at both ends
performs PW APS protection switching by exchanging APS protocol packets. Then, the services on
the working PW are switched to the protection PW. In this manner, the services are protected.
The APS protocol is transported over the protection PW. After dual-homing protection switching
occurs in case of certain faults, the DNI-PW in MC-PW APS carries service packets. In addition,
the DNI-PW is also used for MC communication of status information between dual-homing
nodes. MC-PW APS achieves MC status communication over the DNI-PW so that PE1 and PE2
perform coordinated switching.
If the working PWs, protection PWs, and DNI-PWs of multiple MC-PW APS to be created share
the same source and sink with the working PW, protection PW, and DNI-PW of an MC-PW
APS, you can bind these multiple MC-PW APS to be created to the MC-PW APS (master MC-
PW APS). Then, the protection switching is performed for all the slave MC-PW APS according
to the PW status of the master MC-PW APS. These PWs are considered as being in one MC-
PW APS for synchronous detection and switching. In this manner, the switching time is reduced,
and the OAM resources and APS resources are saved.
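The master/slave binding described above can be modeled with a small sketch: slave protection groups perform no OAM detection of their own and simply follow the master's switching state, which is why the switching time is reduced and the OAM and APS resources are saved. The class below is an illustrative model, not equipment behavior.

```python
# Sketch of master/slave MC-PW APS binding: only the master runs OAM and
# APS; bound slave groups switch whenever the master switches. Illustrative
# model only.

class McPwAps:
    def __init__(self, name):
        self.name = name
        self.active = "working"   # which PW currently carries services
        self.slaves = []

    def bind_slave(self, slave):
        # Valid only when the slave's PWs share the master's sources,
        # sinks, and physical trails (see the NOTE in the procedure).
        self.slaves.append(slave)

    def working_pw_fault(self):
        self.active = "protection"
        for s in self.slaves:     # slaves follow the master's switch
            s.active = "protection"

master = McPwAps("master")
slave = McPwAps("slave")
master.bind_slave(slave)
master.working_pw_fault()
print(master.active, slave.active)  # protection protection
```

The model also makes the NOTE's caveat visible: if a slave's PWs took a different physical trail, it would still be switched by the master's fault even though its own PWs were healthy.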
Currently, the PTN supports the revertive and non-revertive dual-ended 1:1 PW APS protection.
Procedure
Step 1 Choose Service > PWE3 Service > Create PWE3 Service from the main menu.
Step 2 Create an E-Line service and configure information relevant to the service. For details,
see 6.4.2 Creating an ETH Service.
Step 3 Configure the MC-PW APS protection and slave MC-PW APS protection.
1. In the case of the general attributes of the service, set Protection Type to PW APS
protection.
2. On the Node List tab page, select Single source and dual sink or Dual source and single
sink and configure the corresponding source and sink NEs.
3. In the PW pane, configure the working PW, protection PW, and DNI-PW.
l If you set Protection Type to Protection group, the master MC-PW APS protection
is created.
l If you set Protection Type to Slave protection pair, the slave MC-PW APS protection
is created. You need to set ID of the master MC-PW APS protection group that the slave
MC-PW APS protection is bound with.
NOTE
You must configure the protection types for the NEs that are involved in the dual-homing protection.
You must configure the master MC-PW APS protection group before binding a slave MC-PW APS
protection group. The working PW, protection PW, and DNI-PW of a slave MC-PW APS protection group
and those of the master MC-PW APS protection group must share the same sources, sinks, and physical
trails. If the physical trails are different, switching may be performed on a normal PW in the slave MC-PW
APS protection group because of a faulty PW in the master MC-PW APS protection group.
Step 4 After a service is successfully created, you need to configure the PW OAM for the service. For
details, see 6.5.2 Configuring PW OAM.
----End
Figure 9-7 Networking diagram for the dual-homing protection with 1:1 MC-PW APS and MC-
LAG
[Figure 9-7 content: BTS/NodeB is dual-homed through LAG1 on PE1 and LAG2 on PE2; 1:1 PW APS runs toward PE3, which connects to BSC/RNC through LAG3. Legend: W = working, P = protection, A = active (carrying services), S = service flow; a DNI-PW and MC synchronization communication run between PE1 and PE2; standby links do not carry services.]
Parameter Planning
Table 9-5 lists the parameter planning for the PWs of NNI-side MC-PW APS.
Table 9-5 Parameter planning for the PWs of NNI-side MC-PW APS (dual-homing protection
with 1:1 MC-PW APS and MC-LAG in the example)
Parameter                  PW 1       PW 2       DNI-PW 3
PW ID                      10         20         30
PW Type                    Ethernet   Ethernet   Ethernet
PW ingress label on PE1    10         -          50
PW egress label on PE1     20         -          60
PW ingress label on PE2    -          30         60
PW egress label on PE2     -          40         50
PW ingress label on PE3    20         40         -
PW egress label on PE3     10         30         -
Tunnel (tunnel ID)         1          2          3
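A quick way to sanity-check the label plan in Table 9-5 is to verify that, for each PW, the ingress label configured on one endpoint equals the egress label configured on its peer (a PW label is chosen by the receiving end and used by the sending end). The script below simply encodes the table's values; it is a planning check, not a U2000 feature.

```python
# Consistency check of the label plan in Table 9-5: for each PW, the ingress
# label on one endpoint must equal the egress label on the peer endpoint.

plan = {
    # pw: (end_a, ingress_a, egress_a, end_b, ingress_b, egress_b)
    "PW 1":     ("PE1", 10, 20, "PE3", 20, 10),
    "PW 2":     ("PE2", 30, 40, "PE3", 40, 30),
    "DNI-PW 3": ("PE1", 50, 60, "PE2", 60, 50),
}

for pw, (a, in_a, out_a, b, in_b, out_b) in plan.items():
    ok = (in_a == out_b) and (out_a == in_b)
    print(f"{pw}: labels consistent between {a} and {b}: {ok}")
```

Running a check like this before committing the configuration catches a mistyped label (for example, swapping 50 and 60 on the DNI-PW) that would otherwise surface only as a PW that fails to come up.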
Table 9-6 lists the parameter planning for the NNI-side MC-PW APS.
Table 9-6 Parameter planning for NNI-side MC-PW APS (dual-homing protection with 1:1
MC-PW APS and MC-LAG in the example)
Parameter                   PE3   PE2   PE1
Protection Group ID         30    20    10
Peer Protection Group ID    -     10    20
Switching Restoration Time  1     1     1
Switching Delay Time        0     0     0
Table 9-8 lists the parameter planning for the AC-side (RNC-side) MC-LAG.
Table 9-8 Parameters for LAG1 on PE1 and LAG2 on PE2 (dual-homing protection with 1:1
MC-PW APS and MC-LAG in the example)
LAG No. 1 2
Load Sharing
Automatic
Hash Algorithm
Table 9-9 lists the parameters for the MC-LAG protection groups on PE1 and PE2.
Table 9-9 Parameters for the MC-LAG protection groups on PE1 and PE2 (dual-homing
protection with 1:1 MC-PW APS and MC-LAG in the example)
NE                       PE1   PE2
Cooperative Channel ID   10    10
Prerequisite
l You must be an NM user with "network operator" authority or higher.
Procedure
Step 1 Choose Service > PWE3 Service > Create PWE3 Service from the main menu.
Step 2 Create an E-Line service and configure information relevant to the service.
1. Click the Service Topology tab. In the service topology, select PE1 and PE2, right-click,
and choose E-Trunk from the shortcut menu. The Create Cross-Equipment Link
Aggregation Management Group window is displayed.
2. Configure the peer ends for inter-NE synchronization communication on both PE1 and
PE2. On PE1, set Cooperative Channel ID and click .... The Synchronization Protocol
Management window is displayed.
3. Click New. In the Create Cross-Equipment Synchronization Protocol dialog box, set
relevant attributes and click OK.
4. Click OK. A dialog box is displayed indicating that the operation is successful. Click
Close.
5. On PE2, configure the inter-NE synchronization communication between PE1 and PE2.
For details, see Step 4.2 to Step 4.4.
6. Configure intra-NE LAG1 for PE1 and intra-NE LAG2 for PE2. On PE1, set Link
Aggregation Group ID and click .... The Link Aggregation Group Management
window is displayed.
7. Click New. In the Create Link Aggregation Group dialog box, set relevant attributes and
click OK.
NOTE
l After you select the Automatically Assign check box, the U2000 automatically assigns the LAG
No. Otherwise, you need to manually enter the LAG No.
l When LAG Type is Static, the link aggregation control protocol (LACP) is running. When LAG
Type is Manual, the LACP is not running.
l Sharing means that each member link of the LAG carries the services at the same time and shares
the load together. Non-Sharing indicates that only one member link of the LAG has traffic.
l After creating a LAG of the static aggregation mode, you can query the Link Aggregation Group
Details and Link LACP Packet Statistics of this LAG.
8. Click OK. A dialog box is displayed indicating that the operation is successful. Click
Close.
9. On PE2, configure LAG2, the intra-NE LAG. For details, see Step 4.6 to Step 4.8.
10. After configuring the inter-NE synchronization communication and intra-NE LAGs for
PE1 and PE2, configure other parameters.
11. Click OK. A dialog box is displayed indicating that the operation is successful. Click
Close.
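The Sharing and Non-Sharing behaviors described in the note above can be sketched as follows. This is an illustrative model only: `select_member`, the CRC-based hash, and the frame fields are hypothetical and do not represent the equipment's actual hash algorithm:

```python
# Illustrative sketch of LAG load sharing vs. non-sharing (hypothetical model,
# not the actual hash algorithm used by the equipment).
import zlib

def select_member(links, frame, sharing=True):
    """Pick the member link that carries this frame."""
    active = [l for l in links if l["up"]]
    if not sharing:
        return active[0]                      # one member carries all traffic
    # Sharing: hash the flow so frames of one flow stay on one link,
    # while different flows spread across all members.
    key = (frame["src_mac"] + frame["dst_mac"]).encode()
    return active[zlib.crc32(key) % len(active)]

links = [{"name": "port1", "up": True}, {"name": "port2", "up": True}]
f1 = {"src_mac": "00:aa", "dst_mac": "00:bb"}
f2 = {"src_mac": "00:cc", "dst_mac": "00:dd"}
# Non-sharing: every frame uses the same member link.
assert select_member(links, f1, sharing=False) == select_member(links, f2, sharing=False)
# Sharing: the same flow always maps to the same member link.
assert select_member(links, f1) == select_member(links, f1)
```

Hashing per flow rather than per frame keeps frames of one conversation in order, which is why sharing mode distributes flows, not individual packets.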
Step 5 Click OK to complete the creation of the E-Line service and apply the configuration of dual-
homing protection.
Step 6 Configure the PW OAM detection mechanism for a service.
1. Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
2. Click Filter. In the dialog box that is displayed, set the filter criteria, and click OK.
3. The NMS displays the PWE3 services that meet the filter criteria. Select the service to be
configured with PW OAM.
4. Click the PW tab. Then, click the Basic tab.
5. Select one PW and click PW OAM. A dialog box is displayed.
6. Configure the PW OAM parameters and set OAM Status to Enabled.
7. Click OK. The configuration is applied to the NEs and the dialog box closes. The PW OAM
configuration is complete.
----End
10 Configuring VRRP
PTN equipment can achieve dual-homing protection for Layer 3 services by using VRRP. This
chapter describes the concepts, application, and configuration method of VRRP.
10.1 Overview of VRRP
VRRP is a fault-tolerance protocol that groups several routing devices into one virtual router.
When the next-hop routing device of an NE becomes faulty, VRRP switches services to another
routing device. In this manner, continuous and reliable communication is guaranteed.
10.2 Configuration Flow for VRRP
This section describes the configuration flow for VRRP, focusing on the configuration tasks
involved and the details of each task.
10.3 Operation Tasks of Configuring VRRP
This section describes the operation tasks of configuring VRRP, which include configuring
VRRP VR information and configuring VRRP VR tracking.
10.4 Testing VRRP
After configuring VRRP, you need to test whether the VRRP is working normally. This section
describes how to test VRRP.
10.5 Configuration Case of VRRP
This section describes a configuration case of VRRP, involving a configuration network
diagram, service planning, and configuration process.
With the development of the Internet, users require more reliable networks. A LAN user expects
to be able to reach external networks at any time. In normal cases, all NEs on a LAN are configured
with the same default route, which points to an egress gateway NE. In this manner, the NEs can
communicate with external networks. When the egress gateway NE is faulty, communication
between the NEs and external networks is interrupted.
The VRRP protocol, put forward by the Internet Engineering Task Force (IETF), aims to ensure
reliability in the situation where NEs on a LAN communicate with external networks.
As shown in Figure 10-1, two OptiX PTN 1900/3900 NEs are configured as a VRRP backup group,
that is, a virtual router containing a master device and a backup device. An RNC needs to
know only the IP address of the virtual router to communicate with external
networks. In normal cases, the master device forwards services. When the master
device is faulty, services on the master device are switched to the backup device. In this manner,
continuous and reliable services are guaranteed.
[Figure 10-1: A VRRP backup group between E-Line/E-LAN services and AR devices. Packets travel over the active link of the master device; the standby link of the backup device carries no traffic. Peer BFD runs between the master and the backup device, and link BFD monitors each link.]
The VRRP protocol enables communication between a master device and a backup device
through an independent channel between them. When the master device is working normally, it
sends a VRRP multicast packet to the backup device at certain intervals
(Advertisement_Interval) to notify the backup device of its normal state. If the backup device
does not receive the VRRP packet from the master device after a period of time
(Master_Down_Interval), the backup device becomes the master device. Then, the new master
device sends an ARP packet to an RNC to update ARP table entries. Therefore, services are
switched to the new master device.
In addition, the VRRP protocol can be bundled with the BFD detection mechanism. Faults can
be detected through BFD sessions, and therefore VRRP quick switching is implemented.
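The advertisement and timeout mechanism described above can be sketched in a few lines. The skew-time formula follows RFC 3768 (Master_Down_Interval = 3 x Advertisement_Interval + Skew_Time, with Skew_Time = (256 - Priority)/256); the `BackupRouter` class itself is a simplified illustration, not the equipment's implementation:

```python
# Sketch of the VRRP backup timeout described above (simplified model;
# the interval formulas follow RFC 3768).
ADVERTISEMENT_INTERVAL = 1.0          # seconds between master advertisements

def master_down_interval(priority, adv_interval=ADVERTISEMENT_INTERVAL):
    # RFC 3768: Master_Down_Interval = 3 * Advertisement_Interval + Skew_Time,
    # where Skew_Time = (256 - priority) / 256.
    return 3 * adv_interval + (256 - priority) / 256.0

class BackupRouter:
    def __init__(self, priority):
        self.state = "backup"
        self.priority = priority
        self.timer = master_down_interval(priority)

    def on_advertisement(self):
        # The master is alive: restart the Master_Down timer.
        self.timer = master_down_interval(self.priority)

    def on_tick(self, elapsed):
        self.timer -= elapsed
        if self.timer <= 0 and self.state == "backup":
            # Timeout: take over as master and send a gratuitous ARP
            # so the RNC updates its ARP entries.
            self.state = "master"
        return self.state

r = BackupRouter(priority=100)
assert r.on_tick(1.0) == "backup"     # still within Master_Down_Interval
r.on_advertisement()
assert r.on_tick(master_down_interval(100) + 0.1) == "master"   # failover
```

The skew term makes higher-priority backups time out slightly earlier, so the best candidate takes over first when several backups exist.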
Prerequisite
l You must be an NM user with "NE operator" authority or higher.
Context
NOTE
When configuring basic VRRP information, you must take the following precautions:
l VRRP can be configured only at a Layer 3 interface and the IP address of the interface must be available.
l A maximum of 512 VRs can be configured for one set of equipment.
Procedure
Step 1 Configure an L3VPN service. If no L3VPN service is configured, perform Step 1.1 to create an
L3VPN service. If L3VPN services are configured, perform Step 1.2 to select an L3VPN service.
1. Choose Service > L3VPN Service > Manage L3VPN Service from the Main Menu. Then,
the Manage L3VPN Service tab page is displayed. In this tab page, click Create to display
the Create L3VPN Service tab page. Then, set Service Information and select a node
from Node List. Then, click Details to set General and SAI for VRF.
NOTE
2. If L3VPN services have already been created, quickly select a created L3VPN service by
setting Set Filter Criteria.
Step 2 Deploy an L3VPN service. Right-click a configured L3VPN service, and then choose Deploy.
In this case, you can configure VRRP only after successfully deploying the L3VPN service.
----End
Prerequisite
l You must be an NM user with "NE operator" authority or higher.
l You must configure an L3VPN service.
Procedure
Step 1 Configure VRRP VR information. Choose Service > L3VPN Service > Manage L3VPN
Service from the Main Menu. Right-click a created and deployed L3VPN service, and then
choose Configure VRRP to display the VRRP-Based Detection Configuration
Management pane. In this pane, click Create to display the Create VRRP dialog box. Then,
configure the associated parameters in the dialog box. For details on the parameters for basic
VRRP VR information, see Table 1.
Step 2 Configure advanced VRRP VR information. Click Advanced to display the Advanced VRRP
Configuration dialog box. For details on the parameters for advanced VRRP VR information,
see Table 2.
----End
Prerequisite
l You must be an NM user with "NE operator" authority or higher.
l The configuration of VRRP VR information must be complete.
Procedure
Step 1 In the Create VRRP dialog box, click Next to go to Step 2: Configure the information about
the VRRP VR monitoring.
Step 2 Optional: Configure tracked peer BFD. Select Tracked Peer BFD and then click Configure
to configure a tracked peer BFD session for testing the link between the master and backup.
Step 3 Configure tracked link BFD. In Tracked Link BFD, configure Link BFD Path and Link
BFD.
Step 4 Optional: Configure objects under tracking of the VR source and VR sink.
NOTE
l The objects under tracking of the VR source and VR sink are as follows:
l BFD Session: You can specify a BFD session under tracking of the VR source and VR sink.
l Interface: You can specify an interface under tracking of the VR source and VR sink.
l OAM: You can specify an Ethernet OAM interface under tracking of the VR source and VR sink.
l PRI Change: You can increase or reduce the equipment priority by setting this parameter.
l Value: specifies the amount by which the equipment priority changes. The value ranges from 1 to 255.
----End
Prerequisite
l You must be an NM user with "NE operator" authority or higher.
l You must complete the configuration of basic VRRP VR information.
Context
When testing whether VRRP is working normally by using the VIP ping function, you need to
test the following items:
l Working status of the master on the VR.
l Whether the VIP address can be used as a default gateway IP address for communication
with external networks.
NOTE
VIP ping may expose a VR to ICMP attacks. Therefore, disable the VIP ping function each time
after testing VRRP. This prevents the VR from being attacked with ICMP packets.
Procedure
Step 1 Enable the VIP ping function. Choose Service > L3VPN Service > Manage L3VPN Service
from the Main Menu. In the displayed Create VRRP dialog box, select Step 1:Configure
VRRP VR Information. Then, click Advanced to display the Advanced VRRP
Configuration dialog box. Set VIP ping to None, Master or Both.
NOTE
l Master: indicates that VIP ping can be performed when the VRRP status machine is in master state.
l Both: indicates that VIP ping can be performed when the VRRP status machine is in any state.
----End
[Figure: Networking diagram for the VRRP configuration case. The NodeB accesses the network through OptiX PTN 910/950 and OptiX PTN 1900 equipment and reaches the RNC through a VRRP VR; NE2 (OptiX PTN 3900) acts as the backup. Peer BFD runs between the VR members, and link BFD monitors the links over ports 5-EG16-1 and 5-EG16-2.]
NOTE
Service configuration on the OptiX PTN 3900-8 is the same as that on the OptiX PTN 3900, except for the
slots for service boards. For details on service configuration on the OptiX PTN 3900-8, see this example
about service configuration on the OptiX PTN 3900.
Service Planning
To implement VRRP, you must configure VRRP VR information and information about objects
under tracking of a VRRP VR. Table 10-1, Table 10-2, and Table 10-3 show the planning.
Parameter                NE1             NE2
VR Type                  Management VR   Management VR
VR ID                    10              10
VR IP address            10.1.1.1        10.1.1.1
Delay                    5s              5s
Advertisement Interval   1s              1s
Management VR ID         10              10
Authen Code              1               1
Value                    20              10
Prerequisite
l You must be an NM user with "NE operator" authority or higher.
l You must configure an L3VPN service.
Procedure
Step 1 Configure basic VRRP VR information. Choose Service > L3VPN Service > Manage L3VPN
Service from the Main Menu. In the displayed Create VRRP dialog box, select Step 1:
Configure VRRP VR information to configure the basic VRRP VR information.
Step 2 Configure advanced VRRP VR information. Click Advanced to display the Advanced VRRP
Configuration dialog box. In the dialog box, configure advanced VRRP VR information.

Parameter                NE1 (master)   NE2 (backup)   Guideline
Configuration Priority   120            100            A greater value indicates a higher priority.
                                                       l The value 0 indicates that the current
                                                         master on a VR disables VRRP.
                                                       l The value 255 is reserved for the
                                                         equipment whose VR IP address is the
                                                         same as the IP address of an interface.
Management VR ID         10             10             -
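The Configuration Priority semantics can be illustrated with a small election sketch. Higher priority wins and, per RFC 3768, a tie is broken by the higher primary IP address; the IP addresses below are hypothetical, and the lexical string comparison is for illustration only:

```python
# Sketch of VRRP Configuration Priority semantics (simplified; the IP
# addresses are hypothetical, not from the configuration case).
def elect_master(routers):
    """routers: list of (name, priority, primary_ip). Higher priority wins;
    per RFC 3768 a tie is broken by the higher primary IP address
    (lexical string comparison here, for illustration only)."""
    candidates = [r for r in routers if r[1] != 0]   # priority 0: master resigns
    return max(candidates, key=lambda r: (r[1], r[2]))[0]

vr = [("NE1", 120, "10.1.1.2"), ("NE2", 100, "10.1.1.3")]
assert elect_master(vr) == "NE1"       # 120 > 100: NE1 becomes the master
# If NE1 advertises priority 0 (resigning VRRP), NE2 takes over.
assert elect_master([("NE1", 0, "10.1.1.2"), ("NE2", 100, "10.1.1.3")]) == "NE2"
```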
Step 3 Configure a tracked BFD session. In the Create VRRP dialog box, click Next to enter Step 2:
Configure the information about the VRRP VR monitoring. Select Track more BFD
session or interface. Tracking a BFD session ensures quick VRRP switching.

Value (change of equipment priority): 20 on NE1, 10 on NE2
----End
This topic describes composite service management. A composite service refers to a service
composed of two or more associated services. With composite service management, you can
flexibly combine PWE3, VPLS, and L3VPN services, automatically calculate service
connection points, and manage different services in a centralized manner. Composite service
management applies to the scenarios not supported by single services and meets the requirements
of the Metro Ethernet and bearer network solutions.
Composite Service
The composite service is a combination of associated services. Composite service management
is used to support the scenarios that single services cannot support, such as PWE3+PWE3 and
PWE3+VPLS, so as to implement complicated service combinations. The services in the
composite service are associated with each other through service components and connection
points.
On the NMS, the management of composite services complies with the following principles:
l A composite service may contain only basic service information, without any service
components or connection points.
l A service belongs to only one composite service.
l A connection point embodies the association between two services.
l When you delete a service component, the related connection points are also deleted.
Service Component
Service components refer to the services to be associated with the composite service. The types
of service components include PWE3 and VPLS.
Connection Point
Connection points represent the association relations between service components. Two or more
services can be associated with each other through connection points. There are two types of
connection points: PW connection points and interface connection points.
The details are as follows:
l Interface connection point: connects the interfaces of service components. Interface
connection points are used to support the PWE3+PWE3 composite services.
l PW connection point: connects the PWs of service components. PW connection points are
used to support the PWE3+VPLS composite service.
In practical networks, such as MAN access networks, if a UPE does not support the dynamic
VLL, the UPE needs to access SPEs through the static VLL. A UPE and an SPE generally
set up an SVC between each other to create a VLL.
[Figure 11-1: Networking diagram of the static VLL+VPLS composite service. UPE1 and UPE2 access SPE1 and SPE2 through static VLLs; SPE1 and SPE2 are connected by PWs within the VPLS network.]
As shown in Figure 11-1, the UPEs add double MPLS labels to the packets sent by the
CEs. The outer layer is the static LSP label and is switched when a packet passes through
the equipment on the access network. The inner label is the VC label that identifies the VC.
The inner label remains unchanged when a packet is transmitted along the LSP.
The packets received by the SPEs contain double labels. The outer label, which is a
statically configured public network label, is popped. The inner label determines which
VSI the SVC accesses.
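The push-swap-pop behavior described above can be sketched as follows (illustrative Python; the label values and function names are hypothetical):

```python
# Sketch of the double-label forwarding described above: the outer (LSP)
# label is swapped hop by hop, the inner (VC) label is untouched until the
# egress SPE pops the outer label and uses the inner label to pick the VSI.
def p_node_forward(packet, swap_table):
    outer = packet["labels"][0]
    packet["labels"][0] = swap_table[outer]    # swap the outer static LSP label
    return packet                              # inner VC label unchanged

def spe_receive(packet, vsi_by_vc_label):
    packet["labels"].pop(0)                    # pop the public-network label
    vc = packet["labels"].pop(0)               # inner VC label selects the VSI
    return vsi_by_vc_label[vc]

pkt = {"labels": [100, 19], "payload": b"frame"}   # hypothetical label values
pkt = p_node_forward(pkt, swap_table={100: 200})
assert pkt["labels"] == [200, 19]                  # outer swapped, inner intact
assert spe_receive(pkt, vsi_by_vc_label={19: "vsi-100"}) == "vsi-100"
```

Keeping the inner VC label opaque to the transit equipment is what lets the access network forward the traffic without any knowledge of the VPLS service.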
l Dual-homed static VLL+VPLS composite service
To ensure reliable VLL access, the UPE accessing the SPE in dual-homed mode is
introduced. In dual-homed mode, if a PW fails, the data traffic is immediately switched to
another PW, as shown in Figure 11-2.
In VPLS, the bidirectional transmission paths are consistent because the routing
information about Layer 2 forwarding is automatically learned through the MAC addresses
of the data traffic. If a fault occurs, the VPLS traffic of a UPE is switched to another LSP.
The SPE equipment belonging to the VSI deletes the MAC entries of this VSI. After the
switchover or the deletion, the MAC entries need to be learned afresh.
Figure 11-2 Networking diagram of the dual-homed static VLL+VPLS composite service
[Figure: CE1 connects to UPE1, which is dual-homed to SPE1 and SPE2; CE2 connects to UPE2 through SPE3 and SPE4. When the LSP between UPE1 and SPE1 fails, SPE1 sends LDP messages to the other SPEs.]
As shown in Figure 11-2, if a fault occurs on the LSP between UPE 1 and SPE 1, SPE 1
detects the fault and asks the other SPEs to delete the related MAC addresses by sending
LDP messages.
The UPEs detect the LSP status through MPLS OAM. If a fault is detected, the traffic
switchover is performed. After the switchover, the related VSIs on the SPEs learn the MAC
addresses afresh; thus, the traffic can return through the new SPEs. Before other SPEs learn
the MAC addresses, traffic must be broadcast.
After the fault is removed, the UPE receives duplicate VLL broadcast traffic: one copy from the
SPEs used before the switchover and one from the SPEs used after the switchover. The UPE
decides which copy of the broadcast traffic to discard. After the fault is rectified, the traffic of
the UPE is not switched back to the original LSP, because the SPE is not triggered to send LDP
packets asking the other SPEs to delete MAC addresses unless it detects an LSP failure.
PWE3+PWE3
In this application scenario, protection for the services between rings is enhanced. Fibers in each
section of a service are protected, so that the service is well protected.
For example, a PWE3 service between PE1 and PE4 can be divided into three sections, as shown
in Figure 11-3. PW APS protection is configured for the sections from PE1 to PE2 and from
PE3 to PE4 and LAG protection is configured for the section from PE2 to PE3. In this way, each
fiber has its protection link in each section of the service and thus the protection capability of
the PWE3 service is enhanced.
11.3.2 Creating a Composite Service (Optional)
Besides automatically discovering a composite service, you can also create a composite service
as required. Do as follows:
1. Configure the basic information about the composite service, such as the name and
customer of the composite service.
2. Configure service components. Add the services to be managed, such as PWE3 and VPLS
services, to the composite service. You can either select existing services or create services
as required.
3. Configure connection points between services to combine these services. You can either
create connection points or use the NMS to automatically calculate connection points.

11.3.3 Deploying a Composite Service
If you need to deploy the service components associated with a composite service to an NE,
you can perform this operation.

11.4.1 Viewing the Status of a Composite Service
View the deployment status and alarm status of a composite service.

11.4.2 Viewing the Topology of a Composite Service
The topology view displays the topology of services in a visual manner. By viewing the
topology of a composite service, you can learn the topology of the composite service and its
associated services and the running status of its associated services.
Prerequisite
IP services must be automatically discovered. For details, see Automatically Discovering IP
Services.
Procedure
Step 1 Choose Service > Composite Service > Search for Composite Service from the main menu.
Step 2 On the Discovery Policy tab page, set the discovery policy.
1. Specify the equipment range for discovering composite services.
l Click the All option button to discover all the NEs on the entire network.
l Click the Select NE option button, and then click Add. In the dialog box that is
displayed, select one or more NEs, and then click OK to discover the specified NEs.
2. Optional: Specify the customer of the services to be discovered. Only the services of this
customer are discovered, which increases the efficiency of automatic discovery.
Click the ... button to the right of the Customer Name field. In the dialog box that is
displayed, query customers and select one. Then, click OK.
3. Set the type of the composite services to be discovered, and then click Start.
Step 3 Click the Discovery Result tab. A progress bar is displayed indicating the progress of
automatically discovering services.
You can view the automatically discovered composite services on the Add Service tab page.
After selecting a record and clicking Jump Service, you can access the composite service
management user interface for this service.
----End
single services. In this manner, you can better satisfy the requirements of the Metro Ethernet
and bearer network solutions.
Procedure
Step 1 Choose Service > Composite Service > Create Composite Service from the main menu.
Step 2 In the General area, set Service Name, Customer Name, and Remarks.
Step 3 In the Service Component area, click Select to select the related type of service. In the window
that is displayed, select one or more services, and then click Select. The selected services are
displayed in both the service component list area and the service topology.
The selected services must meet the following conditions:
l PWE3+VPLS
The PWE3 service and VPLS service both have unterminated PWs.
The PW IDs of the two PWs are the same. The peer IP address of the unterminated PW
of the VPLS service is the local IP address of the unterminated PW of the PWE3 service.
The local IP address of the unterminated PW of the VPLS service is the peer IP address
of the unterminated PW of the PWE3 service.
If the unterminated PWs are static, the outgoing label of the PW for one of the two services
is the incoming label of the PW for the other service.
If no eligible services are displayed, you can click Create to create a service.
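The eligibility conditions above amount to a symmetric cross-match of PW IDs, IP addresses, and (for static PWs) labels. A minimal sketch, with field names that are illustrative rather than the U2000 data model:

```python
# Sketch of the eligibility conditions for joining a PWE3 PW and a VPLS PW
# at a PW connection point (field names are illustrative, not the U2000
# data model).
def can_connect(pwe3_pw, vpls_pw):
    # Both unterminated PWs must carry the same PW ID.
    if pwe3_pw["pw_id"] != vpls_pw["pw_id"]:
        return False
    # Peer/local IP addresses must cross-match between the two services.
    if (vpls_pw["peer_ip"] != pwe3_pw["local_ip"] or
            vpls_pw["local_ip"] != pwe3_pw["peer_ip"]):
        return False
    # For static PWs, the outgoing label of one service must be the
    # incoming label of the other.
    if pwe3_pw.get("static") and vpls_pw.get("static"):
        return (pwe3_pw["out_label"] == vpls_pw["in_label"] and
                vpls_pw["out_label"] == pwe3_pw["in_label"])
    return True

a = {"pw_id": 100, "local_ip": "1.1.1.1", "peer_ip": "2.2.2.2",
     "static": True, "in_label": 30, "out_label": 40}
b = {"pw_id": 100, "local_ip": "2.2.2.2", "peer_ip": "1.1.1.1",
     "static": True, "in_label": 40, "out_label": 30}
assert can_connect(a, b)
assert not can_connect(a, dict(b, pw_id=101))
```

This is essentially what Auto-Calculate does when it proposes connection points: it searches for unterminated PW pairs that satisfy these cross-match rules.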
Step 4 In the Connection Point area, configure the connection point for the composite service. The
configured connection point is displayed in both the connection point list area and the service
topology.
The PW connection point is used for the PWE3+VPLS composite service. The interface
connection point is used for the PWE3+PWE3 composite service.
l Click Auto-Calculate to obtain the connection points automatically calculated by the NMS
for the composite service.
NOTE
----End
Prerequisite
The composite service to be deployed must exist.
Context
Before a created service is deployed, the configurations of the service are stored in the database
of the U2000 instead of being deployed to equipment. The service is in the Undeployed state.
After the service is deployed, the configurations of the service can be deployed to equipment.
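The deployment states described above can be modeled as a two-state machine (a simplified sketch, not the U2000 implementation):

```python
# Simplified sketch of the composite-service deployment states described above.
class CompositeService:
    def __init__(self):
        # A newly created service exists only in the U2000 database.
        self.state = "Undeployed"

    def deploy(self):
        # Deployment pushes the stored configuration to the equipment.
        self.state = "Deployed"

    def undeploy(self):
        # Undeployment removes the configuration from the equipment;
        # it is kept in the database for redeployment.
        self.state = "Undeployed"

s = CompositeService()
assert s.state == "Undeployed"
s.deploy()
assert s.state == "Deployed"
```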
Procedure
Step 1 Choose Service > Composite Service > Manage Composite Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click the composite service to be deployed and choose Deploy from the shortcut menu.
After the composite service is deployed, the deployment status of this composite service changes
to Deployed.
----End
Procedure
Step 1 Choose Service > Composite Service > Manage Composite Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 In the service list, you can view the deployment status of each composite service.
Step 4 Select a composite service and click the Service Component tab. Then, you can view the
deployment and alarm status of this composite service in the Deployment Status and Alarm
Status columns.
When a service alarm is generated, certain phenomena occur, including but not limited to:
Step 5 Right-click the service component with the alarm and choose Current Alarm from the shortcut
menu. You can view the detailed alarm information of the service in the details area.
Postrequisite
Preliminarily determine the possible cause of the alarm based on the detailed alarm information,
and then locate the fault position by referencing the handling suggestions.
Procedure
Step 1 Choose Service > Composite Service > Manage Composite Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 In the service list, select the composite service to be viewed, and then click the Topology tab.
In the topology view, you can view the topology of the composite service. The service
components are connected to each other through connection points. Each service component is
displayed as a submap. By clicking on the toolbar of the Main Topology, you can view the
legend information.
Step 4 You can perform the following operations in the topology view.
l Right-click a service component and then you can perform the following operations:
Choose Current Alarm from the shortcut menu to check whether the composite service
is normal.
Choose Details from the shortcut menu to access the service management user interface.
Then, you can view the details about this composite service and modify this composite
service as required.
Choose Open from the shortcut menu to view the topology of this service component.
Choose Close to collapse the topology structure of the service component.
l Right-click a connection point and choose Details from the shortcut menu to view the details
about this connection point.
----End
Example Description
This topic describes O&M scenarios and networking diagrams.
When an Ethernet service is connected to a VPLS service, the two services both affect the VLAN
service transmitted across them. Therefore, the two services need to be combined into a
composite service for management.
[Figure: Networking diagram of the PWE3+VPLS composite service. UPE 1 and UPE 2 are connected across the network through PWs; FE interfaces provide access.]
Service Planning
This topic describes the service planning of the PWE3+VPLS networking.
Interface IP addresses: 1-EG16-1: 100.1.1.3/24 and 1-EG16-1: 100.1.1.5/24

VPLS service planning:
Service Attribute   Value
Node List
PW ID               100, 100
Service Name        vpls
Network Type        Full-Mesh-VPLS
VSI ID              100

Composite service planning:
Service Attribute       Value
Service Name            PWE3+VPLS
Customer Name           customer 1
PW Connection Point 1   pwe3_upe1+vpls
                        l Name: connection1
                        l Selected PW 1: PW ID 100, Equipment Name UPE 1, Service Name pwe3_upe1, Service Type PWE3
                        l Selected PW 2: PW ID 100, Equipment Name NPE 1, Service Name vpls, Service Type VPLS
PW Connection Point 2   pwe3_upe2+vpls
                        l Name: connection2
                        l Selected PW 1: PW ID 100, Equipment Name UPE 2, Service Name pwe3_upe2, Service Type PWE3
                        l Selected PW 2: PW ID 100, Equipment Name NPE 2, Service Name vpls, Service Type VPLS
Configuration Process
This topic describes the configuration process of the PWE3+VPLS composite service. The
configuration process of the PWE3+VPLS composite service includes configuring PWE3
services, configuring VPLS services, and configuring the PWE3+VPLS composite service.
Prerequisite
l You must be an NM user with "NE operator" authority or higher.
l IP addresses of all interfaces must be set.
l The parameters of control planes must be set.
l The dynamic tunnel that carries the service must be created.
Procedure
Step 1 Configure PWE3 services.
Configure static PWE3 service 1 on UPE 1 and configure UPE 1 to access NPE 1 through PWE3.
Configure static PWE3 service 2 on UPE 2 and configure UPE 2 to access NPE 2 through static
PWE3.
1. Choose Service > PWE3 Service > Create PWE3 Service from the main menu.
2. Configure PWE3 services according to the following data planning. After the configuration,
click OK to make the parameter settings take effect.
Service Attribute   Value
Node List
PW ID               100, 100
Service Name        vpls
Network Type        Full-Mesh-VPLS
VSI ID              100
Service Attribute       Value
PW connection point 1   pwe3_upe1+vpls
                        l Name: connection1
                        l Selected PW 1: PW ID 100, Equipment Name UPE 1, Service Name pwe3_upe1, Service Type PWE3
                        l Selected PW 2: PW ID 100, Equipment Name NPE 1, Service Name vpls, Service Type VPLS
PW connection point 2   pwe3_upe2+vpls
                        l Name: connection2
                        l Selected PW 1: PW ID 100, Equipment Name UPE 2, Service Name pwe3_upe2, Service Type PWE3
                        l Selected PW 2: PW ID 100, Equipment Name NPE 2, Service Name vpls, Service Type VPLS
5. After the preceding configurations are complete, click OK to complete the creation of the
composite service.
----End
Postrequisite
Monitor the composite service in real time on the NMS.
In the Composite Service Management service list, select the created composite service. Click
the Topology tab to view the topology of the composite service and obtain the alarms in real
time.
Example Description
This topic describes O&M scenarios and networking diagrams.
In this application scenario, protection for the services between rings is enhanced. Fibers in each
section of a service are protected, so that the service is well protected.
For example, a PWE3 service between PE1 and PE4 can be divided into three sections, as shown
in Figure 11-6. PW APS protection is configured for the sections from PE1 to PE2 and from
PE3 to PE4 and LAG protection is configured for the section from PE2 to PE3. In this way, each
fiber has its protection link in each section of the service and thus the protection capability of
the PWE3 service is enhanced.
[Figure: Networking diagram of the PWE3+PWE3 composite service. The Node B accesses PE1; a working PW and a protection PW (PW APS) run between PE1 and PE2 and between PE3 and PE4 over ports 1-EG16-1 and 1-EG16-2; a LAG over ports 19-ETFC-1, 19-ETFC-2, and 19-ETFC-3 connects PE2 and PE3; the RNC attaches to PE4.]
Service Planning
This topic describes the service planning of the PWE3+PWE3 networking.
LAG planning:
System Priority: 0 (PE2), 0 (PE3)
Node List

Composite service planning:
Service Attribute   Value
Service Name        PWE3+PWE3
Customer Name       customer1
Configuration Process
This topic describes how to configure the PWE3+PWE3 composite service.
Prerequisite
You must be an NM user with "NE operator" authority or higher.
Procedure
Step 1 Configure the LAG.
Configure parameters relevant to the LAG on both PE2 and PE3.
1. In the NE Explorer, select the NE and choose Configuration > Interface Management >
Link Aggregation Group Management from the Function Tree.
2. Click New. Configure the relevant parameters and click OK.
System Priority: 0, 0
Node List

Service Attribute            Value
Interface Connection Point   pwe3+pwe3
                             l Name: connection1
                             l Type: PWE3+PWE3
                             l Interface Name: 19-ETFC-1
                             l Equipment Name: PE2, PE3
5. After the preceding configurations are complete, click OK to complete the creation of the
composite service.
----End
Postrequisite
Monitor the composite service in real time on the NMS.
In the Composite Service Management service list, select the created composite service. Click
the Topology tab to view the topology of the composite service and obtain the alarms in real
time.
12 Modifying Configurations
This topic describes how to modify service configurations, which includes modifying and
deleting service configurations.
Prerequisite
This section uses the modification of the basic information about VPLS services as an
example.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click the VPLS services to be modified and choose Details from the shortcut menu.
Modify the basic information about the selected services. Parameters that can be modified are
Service Name, Customer, Customized Attribute 1, Customized Attribute 2, and
Remarks.
NOTE
----End
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
Context
CAUTION
Modifying configurations of a service may interrupt the service running. Exercise caution with
this operation.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Select a tunnel, and then click the related tabs to modify the related parameters.
NOTE
If you need to modify only the basic information about a tunnel, right-click the tunnel and choose
Details from the shortcut menu. In the dialog box that is displayed, modify basic information about the
tunnel.
----End
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
Context
Deleting a tunnel is to delete a configured tunnel from the NMS and equipment at the same time.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click one or more services and choose Delete from the shortcut menu.
----End
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
Context
Deleting a tunnel from the NMS is to delete a tunnel from only the NMS. In this case, the tunnel
data configured on the equipment still exists. The deleted tunnel is displayed as a discrete tunnel
on the NMS.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click one or more services and choose Delete from Network Side from the shortcut menu.
----End
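The two deletion modes described above differ in scope: Delete removes a tunnel from both the NMS and the equipment, whereas Delete from Network Side removes only the NMS record and leaves the equipment-side data behind as a discrete tunnel. The following Python sketch models that difference; the class and method names are purely illustrative and are not part of the U2000 API.

```python
# Hypothetical model of the two deletion modes; names are illustrative
# and do NOT reflect the real U2000 interface.

class NmsModel:
    def __init__(self):
        self.nms_tunnels = {}          # end-to-end tunnel records on the NMS
        self.ne_configs = {}           # tunnel data configured on the equipment
        self.discrete_tunnels = set()  # NE-side leftovers shown as discrete

    def create_tunnel(self, name, ne_data):
        self.nms_tunnels[name] = ne_data
        self.ne_configs[name] = ne_data

    def delete(self, name):
        # "Delete": remove the tunnel from the NMS and equipment at the same time.
        self.nms_tunnels.pop(name, None)
        self.ne_configs.pop(name, None)

    def delete_from_network_side(self, name):
        # "Delete from Network Side": remove only the NMS record; the data
        # configured on the equipment still exists and becomes discrete.
        self.nms_tunnels.pop(name, None)
        if name in self.ne_configs:
            self.discrete_tunnels.add(name)

nms = NmsModel()
nms.create_tunnel("Tunnel-1", {"ingress": "PE1", "egress": "PE2"})
nms.create_tunnel("Tunnel-2", {"ingress": "PE2", "egress": "PE3"})

nms.delete("Tunnel-1")                    # gone from NMS and equipment
nms.delete_from_network_side("Tunnel-2")  # NMS record gone, NE data remains
print(sorted(nms.ne_configs))             # ['Tunnel-2']
print(sorted(nms.discrete_tunnels))       # ['Tunnel-2']
```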
Prerequisite
You must be an NM user with "NM monitor" authority or higher.
Procedure
Step 1 Choose Service > Tunnel > Manage Tunnel from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a service with Deployment Status being Deployed and choose Undeploy from the
shortcut menu.
----End
Postrequisite
After the service is undeployed, you can redeploy the service. If the service fails to be
undeployed, you can modify the service according to the error message, and then undeploy the
service again.
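The deployment life cycle described above can be summarized as a small state machine: undeploying moves a service from Deployed to Undeployed, after which it can be redeployed. The sketch below models this with hypothetical names; it is not the U2000 API.

```python
# Minimal state-machine sketch of Deployment Status (hypothetical names).

class Service:
    def __init__(self, name):
        self.name = name
        self.deployment_status = "Deployed"

    def undeploy(self):
        # Only a service with Deployment Status "Deployed" can be undeployed.
        if self.deployment_status != "Deployed":
            raise RuntimeError("only a Deployed service can be undeployed")
        self.deployment_status = "Undeployed"

    def redeploy(self):
        # After a successful undeploy, the service can be redeployed.
        if self.deployment_status != "Undeployed":
            raise RuntimeError("only an Undeployed service can be redeployed")
        self.deployment_status = "Deployed"

svc = Service("Tunnel-1")
svc.undeploy()
print(svc.deployment_status)  # Undeployed
svc.redeploy()
print(svc.deployment_status)  # Deployed
```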
Context
CAUTION
Modifying configurations of a service may interrupt the service running. Exercise caution with
this operation.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Select a PWE3, and then click the related tabs to modify the related parameters.
NOTE
If you need to modify only the basic information about a PWE3, right-click the PWE3 and choose
Details from the shortcut menu.
----End
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
Context
CAUTION
Modifying the tunnel that carries a PW may interrupt services.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Select a required service and click the PW tab in the lower portion of the window.
Step 4 On the PW tab page, select the required PW and modify Forward Tunnel or Reverse
Tunnel.
NOTE
l When you set the tunnel policy to Static Binding, you can manually select the MPLS/IP tunnel or GRE
tunnel to be bound.
l When you set the tunnel policy to Select Policy, you can manually adjust the policy selection priority.
The only tunnel policy supported by routers is Select policy.
----End
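The note above distinguishes two tunnel policies: with Static Binding the operator manually picks the MPLS/IP or GRE tunnel, while with Select Policy the NMS chooses a tunnel according to an adjustable priority order. The following Python sketch illustrates that distinction under assumed names; it is not the actual U2000 logic.

```python
# Illustrative sketch of the two tunnel policies; function and parameter
# names are hypothetical, not part of the U2000 API.

def bind_pw_tunnel(policy, candidate_tunnels, static_choice=None,
                   priority=("MPLS", "GRE")):
    """Pick the tunnel that carries a PW under the given policy."""
    if policy == "Static Binding":
        # The operator manually selects the MPLS/IP or GRE tunnel to bind.
        if static_choice not in candidate_tunnels:
            raise ValueError("the statically bound tunnel must exist")
        return static_choice
    if policy == "Select Policy":
        # The NMS picks a tunnel by the operator-ordered type priority.
        for tunnel_type in priority:
            for name, ttype in candidate_tunnels.items():
                if ttype == tunnel_type:
                    return name
        raise LookupError("no candidate tunnel matches the policy")
    raise ValueError("unknown tunnel policy")

tunnels = {"Tunnel-A": "GRE", "Tunnel-B": "MPLS"}
print(bind_pw_tunnel("Static Binding", tunnels, static_choice="Tunnel-A"))
print(bind_pw_tunnel("Select Policy", tunnels))  # MPLS preferred: Tunnel-B
```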
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click one or more services and choose Delete from the shortcut menu.
Step 4 In the Confirm dialog box, click Yes.
----End
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click one or more services and choose Delete from Network Side from the shortcut menu.
Step 4 In the Confirm dialog box, click Yes.
----End
Postrequisite
After a PWE3 service is deleted from the network side, the information about the PWE3 service
is deleted from the NMS and cannot be viewed in PWE3 service management. The PW
information related to the PWE3 service, however, can be viewed in discrete service
management.
Procedure
Step 1 Choose Service > PWE3 Service > Manage PWE3 Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a service with Deployment Status being Deployed and choose Undeploy from the
shortcut menu.
Step 4 In the Confirm dialog box, click Yes.
After the service is undeployed, the value of Deployment Status changes from Deployed to
Undeployed.
----End
Postrequisite
After the service is undeployed, you can redeploy the service. If the service fails to be
undeployed, you can modify the service according to the error message, and then undeploy the
service again.
Context
CAUTION
Modifying configurations of a service may interrupt the service running. Exercise caution with
this operation.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a service and choose Modify from the shortcut menu.
----End
Prerequisite
You must be an NM user with "NE administrator" authority or higher.
Context
CAUTION
Modifying the tunnel that carries a PW may interrupt services.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Select a required service and click the PW tab in the lower portion of the window. Select the
PW to be modified. Then, click Details.
Step 4 Optional: For an undeployed VPLS service, in the dialog box that is displayed, select a binding
type from the Tunnel Binding Type drop-down list.
NOTE
l When you set the Tunnel Binding Type to Static binding, you can manually select the MPLS/IP
tunnel or GRE tunnel to be bound.
l When you set the Tunnel Binding Type to Select policy, you can manually adjust the policy selection
priority.
The only tunnel policy supported by routers is Select policy.
Step 5 Click the ... button to the right of the Tunnel field, and then select the required tunnel.
----End
Context
This operation is used to delete the VPLS service configurations from the NMS and NEs.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click one or more services and choose Delete from the shortcut menu.
----End
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click one or more services and choose Delete from Network Side from the shortcut menu.
----End
Postrequisite
After a VPLS service is deleted from the network side, the information about the VPLS service
is deleted from the U2000 and cannot be viewed in the VPLS service management window. The
VSI information related to the VPLS service, however, can be viewed in the VSI resource
management window.
Context
Only the VPLS services are undeployed; the tunnels that bear the services are not
undeployed.
Procedure
Step 1 Choose Service > VPLS Service > Manage VPLS Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a service with Deployment Status being Deployed and choose Undeploy from the
shortcut menu.
Step 4 In the Confirm dialog box, click Yes.
After the service is undeployed, the value of Deployment Status changes from Deployed to
Undeployed.
----End
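As the Context above notes, undeploying a VPLS service leaves the tunnel that bears it deployed. The following rough Python model (hypothetical names, not the U2000 API) shows this layering: the service changes state while its carrier tunnel is untouched.

```python
# Rough model of the service/tunnel layering; class names are hypothetical.

class Tunnel:
    def __init__(self, name):
        self.name = name
        self.deployment_status = "Deployed"

class VplsService:
    def __init__(self, name, carrier_tunnel):
        self.name = name
        self.carrier_tunnel = carrier_tunnel
        self.deployment_status = "Deployed"

    def undeploy(self):
        # Only the VPLS service changes state; the bearer tunnel stays deployed.
        self.deployment_status = "Undeployed"

tunnel = Tunnel("Tunnel-1")
vpls = VplsService("VPLS-1", tunnel)
vpls.undeploy()
print(vpls.deployment_status)                  # Undeployed
print(vpls.carrier_tunnel.deployment_status)   # Deployed
```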
Postrequisite
After the service is undeployed, you can redeploy the service. If the service fails to be
undeployed, you can modify the service according to the error message, and then undeploy the
service again.
Context
CAUTION
Modifying configurations of a service may interrupt the service running. Exercise caution with
this operation.
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a service and choose Modify from the shortcut menu.
----End
Prerequisite
The user must be an NMS user with NE operator rights or higher.
Context
This operation will delete service configuration data from both the NMS and NEs.
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click one or more services and choose Delete from the shortcut menu.
----End
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click one or more services and choose Delete from Network Side from the shortcut menu.
----End
Postrequisite
After the L3VPN service is deleted from the network, the service information is deleted from
the NMS. Therefore, you cannot view the record in L3VPN service management list. The VRF
information to which the service corresponds, however, can be viewed in the discrete service
management list.
Procedure
Step 1 Choose Service > L3VPN Service > Manage L3VPN Service from the main menu.
Step 2 In the Set Filter Criteria dialog box, set the filter criteria. Then, click Filter. The services
meeting the filter criteria are displayed in the query result area.
Step 3 Right-click a service with Deployment Status being Deployed and choose Undeploy from the
shortcut menu.
Step 4 In the Confirm dialog box, click Yes.
After the service is undeployed, the value of Deployment Status changes from Deployed to
Undeployed.
----End
Postrequisite
After the service is undeployed, you can redeploy the service. If the service fails to be
undeployed, you can modify the service according to the error message, and then undeploy the
service again.
Index
A
Advertisement of VPNv4 Routes, 8-16
L
Label Allocation of MP-BGP, 8-15
M
MP-BGP, 8-10
P
Packet Forwarding in a Basic L3VPN, 8-19
PW APS
protection switching, 6-60
R
Route Advertisement of a Basic BGP/MPLS VPN, 8-17
V
VPN Route Selection on PEs, 8-15