Issue V2.0
Date 2015-7-15
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute the warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: support@huawei.com
Product Version
NE40E/80E, CX600: V600R001, V600R002, V600R003, V600R005, V600R006, V600R007, V600R008, V600R009, V800R005, V800R006, V800R007, V800R008
ME60: V600R002, V600R003, V600R005, V600R006, V600R007, V600R008, V600R009
NE5000E: V800R002, V800R003, V800R005, V800R006, V800R007, V800R008
NOTE
MTU fragmentation is not supported by NE5000E V300R007 and earlier versions.
Change History
Version | Release Date | Update Description
V1.0 | 2013-6-19 | Initial release.
V2.0 | 2015-7-15 | Added the troubleshooting case "VPN site cannot ping with jumbo frame of DF=1". Added the MTU fragmentation mechanisms of the NE5000E. Added a description of the force-fragment function in the chapter "IP MTU Fragmentation".
Contents
4 Appendix 4-1
4.1 Quick Search Table 4-1
4.1.1 Packet Structures 4-1
4.1.2 Number of labels carried in an MPLS packet in various scenarios 4-3
4.2 Standards and RFCs 4-4
4.2.1 MIBs 4-5
4.3 Abbreviations 4-6
Tables
Table 2-3 Fragmentation implementation for IP packets entering MPLS tunnels 2-10
Table 3-1 Interface default MTUs 3-1
Table 4-2 Number of VLAN tags carried in L2VPN packets 4-2
1 MTU Overview
2 MTU Fragmentation
NOTE
Generally, only the source and destination nodes need to analyze IPv6 extension headers. Therefore, unlike IPv4, fragmentation occurs only on the source node.
If the size (including the IP header and payload) of a non-MPLS packet sent from the control plane is greater than the MTU configured on the outbound interface:
If the DF field in the packet is set to 0, the packet is fragmented. The size of each fragment is less than or equal to the interface MTU.
If the DF field in the packet is set to 1, the packet is discarded.
If the DF field in the packet is set to 1 and forcible fragmentation is enabled on the outbound interface, the packet is fragmented. Each fragment is forwarded with DF=0. (By default, forcible fragmentation is not enabled for the control plane. To enable it, run the clear ip df command on the outbound interface.)
For information about fragmentation of MPLS packets, see chapter 2.2 MPLS MTU Fragmentation.
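The control-plane decision logic above can be sketched as follows (a minimal illustration, not actual device code; the force_fragment flag models the effect of the clear ip df command):

```python
def handle_control_plane_packet(packet_len, df, mtu, force_fragment=False):
    """Decide the fate of a non-MPLS control-plane packet at the
    outbound interface, following the rules described above."""
    if packet_len <= mtu:
        return "forward"        # packet fits; no fragmentation needed
    if df == 0:
        return "fragment"       # each fragment <= interface MTU
    if force_fragment:          # 'clear ip df' enabled on the outbound interface
        return "fragment_df0"   # fragments are forwarded with DF=0
    return "discard"            # DF=1 and forcible fragmentation disabled
```

For example, a 1600-byte packet with DF=1 on a 1500-byte-MTU interface is discarded unless forcible fragmentation is enabled.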
Protocol packets are usually allowed to be fragmented (DF=0); that is, they are usually not discarded on the originating device even when they exceed the MTU. Protocol packets are not allowed to be fragmented (DF=1) only when:
the device is performing PMTU discovery, such as IPv6 PMTU discovery or LDP/RSVP-TE PMTU negotiation.
the ping -f command is run on the local device.
The difference between fragmentation on motherboards/integrated boards and fragmentation on subcards is as follows:
Subcards do not differentiate between Layer 2 and Layer 3 traffic. A VLANIF interface transmits both Layer 2 and Layer 3 traffic, and subcards implement MTU fragmentation on both, with the exception of MPLS L2VPN traffic. MPLS L2VPN traffic can be fragmented only if a PE UNI is configured as a VLANIF interface. When the size of the IP header and payload in a Layer 2 packet exceeds the interface MTU, the VLANIF interface fragments the packet. In each fragment, the size of the IP header and payload is less than or equal to the interface MTU.
Versions earlier than V600R006 do not support fragmentation for IP packets that enter MPLS tunnels; V600R006 and later versions do. For details, see 2.2 MPLS MTU Fragmentation.
Scenario | Parameters that may affect MPLS MTU value selection ("Y" indicates the parameter affects the selection, "N" indicates it does not; the smallest value among the affecting parameters is selected as the MPLS MTU)
LDP LSP | Y, Y, Y, N, N
MPLS-TE | Y, Y, N, (V600R001: Y; later versions: N), Y
LDP over TE | Y, Y, Y, N, Y
NOTE
In the LDP over TE scenario, the interface MTU on the tunnel interface affects MPLS MTU value selection because the LDP LSP runs over the TE tunnel and the TE tunnel interface is an outbound interface of the LDP LSP.
If DF=0, the packet is fragmented. The size of each fragment (including the IP header and labels) is less than or equal to the MPLS MTU.
If DF=1, the packet is discarded and an ICMP Datagram Too Big message is sent to the source.
The NE40E/80E/CX600/ME60/NE5000E supports various board types. On the forwarding plane, different board types may use different MPLS MTU fragmentation rules, as shown in Table 2-4.
According to RFC 2328, if an OSPF node receives a DD packet whose interface MTU value is greater than the MTU of the local interface, it rejects the packet. As a result, the OSPF neighbor relationship remains in the ExStart state and fails to transition to the Full state.
Devices manufactured by different vendors may use different rules to process DD packets:
Some devices check the MTU values carried in DD packets by default, while allowing
users to disable the MTU check.
Some devices do not check the MTU values carried in DD packets by default, while
allowing users to enable the MTU check.
Other devices forcibly check the MTU values carried in DD packets.
Implementation inconsistencies between vendor-specific devices are a common cause of
OSPF adjacency problems.
NE40E/80E/CX600/ME60/NE5000E devices, by default, do not check the MTU values carried in DD packets and set the MTU value to 0 before sending DD packets.
NE40E/80E/CX600/ME60/NE5000E devices allow users to set the MTU value in DD packets to be sent over a specified interface. After DD packets arrive at an NE40E/80E/CX600/ME60/NE5000E device, the device checks the interface MTU field and allows the OSPF neighbor relationship to reach the Full state only if the interface MTU field in the packets is less than or equal to the local MTU.
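The DD packet MTU check described above can be sketched as follows (an illustration of the stated rule, not actual device code):

```python
def accept_dd_packet(dd_mtu, local_mtu, check_enabled=True):
    """Sketch of the DD interface-MTU check: the OSPF neighbor
    relationship can reach the Full state only if the MTU advertised
    in the DD packet is <= the local interface MTU, or if the
    check is disabled (the default on the devices described above)."""
    if not check_enabled:
        return True              # MTU values in DD packets are not checked
    return dd_mtu <= local_mtu   # otherwise, reject a larger advertised MTU
```

For example, a DD packet advertising 1600 bytes is rejected on a 1500-byte-MTU interface when the check is enabled, which leaves the adjacency stuck before Full.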
If an MTU value changes (such as when the local outbound interface or its configuration is
changed), an LSR recalculates an MTU value and sends a Label Mapping message carrying
the new MTU value upstream. The comparison process repeats to update MTUs along the
LSP.
If an LSR receives a Label Mapping message that carries an unknown MTU TLV, the LSR
forwards this message to upstream LDP peers.
NE40E/80E/CX600/ME60/NE5000E devices exchange Label Mapping messages to negotiate
MPLS MTU values before they establish LDP LSPs. Each message carries either of the
following two MTU TLVs:
Huawei proprietary MTU TLV: sent by Huawei routers by default. If an LDP peer cannot
recognize this Huawei proprietary MTU TLV, the LDP peer forwards the message with
this TLV so that an LDP peer relationship can still be established between the Huawei
router and its peer.
RFC 3988-compliant MTU TLV: specified by commands on
NE40E/80E/CX600/ME60/NE5000E. NE40E/80E/CX600/ME60/NE5000E uses this
MTU TLV to negotiate with non-Huawei devices.
1. The ingress sends a Path message with the ADSPEC object that carries an MTU value.
The smaller MTU value between the MTU configured on the physical outbound
interface and the configured MPLS MTU is selected.
2. Upon receipt of the Path message, a transit LSR selects the smallest MTU among the
received MTU value, the MTU configured on the physical outbound interface, and the
configured MPLS MTU. The transit LSR then sends a Path message with the ADSPEC
object that carries the smallest MTU value to the downstream LSR. This process repeats
until a Path message reaches the egress.
3. The egress uses the MTU value carried in the received Path message as the PMTU. The
egress then sends a Resv message that carries the PMTU value upstream to the ingress.
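The three steps above amount to taking a running minimum along the LSP. A minimal sketch (hypothetical data layout, not actual device code):

```python
def rsvp_pmtu(ingress_mtus, transit_hops):
    """Sketch of the Path/Resv PMTU negotiation described above.

    ingress_mtus: (physical outbound interface MTU, configured MPLS MTU)
                  on the ingress (step 1).
    transit_hops: list of (interface MTU, MPLS MTU) pairs, one per
                  transit LSR along the path (step 2).
    Returns the PMTU that the egress echoes back in the Resv message (step 3).
    """
    pmtu = min(ingress_mtus)                # step 1: ingress picks the smaller value
    for if_mtu, mpls_mtu in transit_hops:   # step 2: each transit LSR compares
        pmtu = min(pmtu, if_mtu, mpls_mtu)  # received value vs. its own two MTUs
    return pmtu                             # step 3: carried upstream in the Resv
```

For example, with an ingress of (1500, 1500) and one transit LSR of (1500, 1480), the negotiated PMTU is 1480 bytes.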
By default, Huawei routers implement MTU negotiation for VCs or PWs. Two nodes must
use the same MTU to ensure that a VC or PW is established successfully. L2VPN MTUs are
only used to establish VCs and PWs and do not affect packet forwarding.
To communicate with non-Huawei devices that do not verify L2VPN MTU consistency,
L2VPN MTU consistency verification can be disabled on
NE40E/80E/CX600/ME60/NE5000E. This allows NE40E/80E/CX600/ME60/NE5000E to
establish VCs and PWs with the non-Huawei devices.
Interface | Default MTU (bytes) | Range (bytes) | Description
Eth-Trunk interface and sub-interface | 1500 | 46 to 9600 | Interface MTUs take effect only on Layer 3 traffic.
NOTE
An Eth-Trunk interface and its sub-interfaces work at Layer 3 by default and send only Layer 3 traffic. After the portswitch command is run in the Eth-Trunk interface view, the Eth-Trunk interface and its sub-interfaces work at Layer 2 and send only Layer 2 traffic.
The interface MTU must be changed in the Eth-Trunk interface view or sub-interface view, not in the member interface view. After a member interface is added to an Eth-Trunk interface, the Eth-Trunk interface MTU automatically takes effect on the member interface. When the Eth-Trunk interface MTU changes, member interfaces automatically synchronize their MTU values with the Eth-Trunk interface MTU.
The MTU values on both ends of an Eth-Trunk link must be the same. An MTU inconsistency may cause a service interruption.
If IPv6 runs on an Eth-Trunk interface and its sub-interfaces, the interface MTU must be greater than or equal to 1280 bytes. A smaller MTU value causes IPv6 errors.
GE interface and sub-interface | 1500 | LPUI-41 and LPUF-100 boards: 161 to 9600; other boards: 46 to 9600 | Interface MTUs take effect only on Layer 3 traffic.
NOTE
A GE interface and its sub-interfaces work at Layer 3 by default and send only Layer 3 traffic. After the portswitch command is run in the GE interface view, the GE interface and its sub-interfaces work at Layer 2 and send only Layer 2 traffic.
POS interface | 4470 | 46 to 9600 | Retaining the default value is recommended. Using an MTU value greater than 160 bytes on POS interfaces is recommended on LPUI-41, LPUF-100, and LPUI-100 boards. MTUs on POS interfaces do not take effect on IP fragmentation subcards, because those boards do not fragment packets.
Tunnel interface | 1500 | 46 to 9600 | Mainly used in LDP over TE scenarios.
VLANIF interface | 1500 | 46 to 9600 | On fragmentation-capable subcards, MTUs on VLANIF interfaces take effect on both Layer 2 and Layer 3 traffic. On fragmentation-capable motherboards/integrated boards, MTUs on VLANIF interfaces take effect only on Layer 3 IP traffic.
Virtual-Ethernet interface | 1500 | 960 to 1518 | Interface MTUs take effect only on Layer 3 traffic.
Virtual-Template interface | 1500 | 328 to 9600 | Interface MTUs configured on VT interfaces are used in the Link Control Protocol (LCP) phase. Two Point-to-Point Protocol (PPP) nodes exchange packets to negotiate and select the smaller of the two configured MTUs as the effective MTU.
An MPLS MTU change on a specific interface does not affect the interface MTU, while an interface
MTU change causes the effective MPLS MTU to update.
The number of labels carried in an MPLS packet differs in various scenarios (see 4.1.2 Number of labels carried in an MPLS packet in various scenarios for details).
The MPLS MTU value must be less than or equal to the interface MTU value. To enable core routers to support more types of labeled packets, increase interface MTU values, not MPLS MTU values.
When penultimate hop popping (PHP) is enabled on an LSP within an MPLS L3VPN, the penultimate NE40E/80E/CX600/ME60/NE5000E removes the outermost label from an MPLS packet and forwards the packet to the egress based on the MPLS MTU, not the interface MTU.
Trap Information
N/A
Cause Analysis
Perform the following steps:
1. Analyze the path through which data packets pass.
The network topology shows that L3VPN user packets pass through two routers named
PE1 and PE2. PE1 is a Huawei router, and PE2 is a non-Huawei router.
2. View MTU values configured on interfaces along the path.
The interface MTUs on PE1 and PE2 are 1500 bytes.
3. Enable PE1 to send ping packets within a specific VPN to PE2.
The ping within the VPN succeeds when the ping packet size is less than or equal to 1500 bytes, but fails when the size is greater than 1500 bytes. The incorrect MTU setting causes the L3VPN user access failure.
4. Analyze L3VPN user packet headers.
Each L3VPN user sends a request to a web server using the Hypertext Transfer Protocol
(HTTP) over a TCP connection.
The data packets returned by the web server are large. If a 1500-byte response IP packet is sent to PE2, PE2 adds two labels (4 bytes + 4 bytes) to the packet, whose DF field is set to 1, before forwarding it. The packet becomes 1508 bytes long. PE2 finds that the packet size is greater than the interface MTU and discards the packet. Then, PE2 replies with an ICMP Datagram Too Big message to the web server.
If the web server reduces the packet size to a value less than 1500 bytes, the response
from the web server can reach the L3VPN user, and the L3VPN user can access the web
server.
If the Datagram Too Big message cannot reach the web server, or the web server receives the message but does not reduce the packet size, the web server still sends 1500-byte packets. Upon receipt, PE2 discards the packets. As a result, the L3VPN user cannot access the website.
The preceding analysis shows that the incorrect MTU setting causes the L3VPN user access failures.
Troubleshooting Procedure
1. Increase the MTU on the NNI on each PE to 1508 bytes.
After the modification, the L3VPN user still cannot access websites.
2. Check the path through which packets pass. A transmission device resides between PE1
and PE2.
3. Check the MTU fragmentation mechanism on the transmission device.
The transmission device calculates the packet size as the IP MTU plus 18 bytes (DMAC + SMAC + Length/Type + CRC). An L3VPN packet (1508 bytes) plus 18 bytes is 1526 bytes, while the MTU value on the transmission device is 1524 bytes. The transmission device discards packets larger than 1524 bytes. As a result, the L3VPN user device attempts to resend HTTP packets over a TCP connection but fails to access any website.
4. Change the MTU on the transmission device to a value greater than or equal to 1526
bytes so that the transmission device does not discard user packets.
5. Initiate a ping.
The ping packets that are 1508 bytes long can reach the destination. The L3VPN user can
access all websites, and the problem is resolved.
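The packet-size arithmetic behind this case can be summarized in a short sketch (an illustration of the numbers above, not device code):

```python
MPLS_LABEL = 4      # bytes per MPLS label
L2_OVERHEAD = 18    # DMAC(6) + SMAC(6) + Length/Type(2) + CRC(4)

def labeled_packet_len(ip_len, labels=2):
    """Packet size after the PE pushes the MPLS labels."""
    return ip_len + labels * MPLS_LABEL

def frame_len_on_transport(mpls_len):
    """Size as counted by the transmission device, which adds
    the Ethernet overhead to the IP/MPLS packet length."""
    return mpls_len + L2_OVERHEAD

# The failing case: a 1500-byte IP packet plus two labels is 1508 bytes,
# which the transmission device counts as 1508 + 18 = 1526 bytes,
# exceeding its original 1524-byte limit.
```

This is why both the PE NNI MTUs (1508 bytes) and the transmission-device MTU (at least 1526 bytes) had to be raised before the ping succeeded.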
Suggestion
If an MTU fault occurs, check the MTU settings on both network devices and transmission devices.
Take the label size into account when setting the MTU, because an MPLS VPN packet with labels is larger than the corresponding IP packet.
Trap Information
None
Cause Analysis
The sites attached to the L2 network can access each other and can access websites on the Internet. Therefore, the fault may lie between PE2 and site2.
Perform the following steps to locate the fault:
1. Check whether site1 can ping site2 by using the ping destination-address command.
If not, there is a routing problem; troubleshoot the route between site1 and site2.
If yes, go to step 2.
2. Check whether PE3 can ping PE4 by using the ping -f -s packetsize -vpn-instance vpn-name destination-ip-address command. (Note: packetsize = 9000 bytes - 20-byte IP header - 8-byte ICMP header = 8972 bytes, because packetsize indicates the ICMP payload length, excluding the ICMP header and IP header. The packet originates from the CPU, so fragmentation is calculated based only on the IP header and IP payload, excluding labels. If the L3VPN uses a TE tunnel and the L3VPN packet originates from the CPU, the packet is fragmented based on the MTU set on the tunnel interface.)
If yes, check the MTU values of the CE-facing interfaces on PE3 and PE4 and the PE-facing interfaces of the edge device at site2, and set all of them to 9000 bytes.
If not, go to step 3.
3. Check the MTU and MPLS MTU values of the outbound interfaces along the path through which the ping -vpn-instance packets pass, including the MTU values of transmission devices and L2 switches between nodes.
If the L3VPN uses a TE tunnel or LDP over TE, the ping -vpn-instance packets on the PE are sent through the tunnel interface, so the MTU value on the tunnel interface also takes effect on these packets.
If the L3VPN uses an LDP LSP (not LDP over TE), the ping -vpn-instance packets on the PE are not sent through the tunnel interface, so the MTU value on the tunnel interface does not take effect on these packets.
If there is a transmission device or L2 switch between routers, ensure that it allows 9000-byte IP packets to pass through.
Troubleshooting Procedure
After the preceding steps are complete, do the following to allow 9000-byte IP packets to pass through the L3 network:
1. Set the interface MTU on each UNI (User-to-Network Interface) to 9000 bytes.
2. Set the MTU and MPLS MTU on each NNI (Network-to-Network Interface) to 9000 + 4*N bytes (N indicates the number of MPLS labels in the MPLS packet, which depends on the L3VPN tunnel type; for details, see Number of labels carried in an MPLS packet in various scenarios and Packet Structures).
3. If the L3VPN uses a TE tunnel or LDP over TE, also set the interface MTU and MPLS MTU on the tunnel interface to 9000 + 4*N bytes (N indicates the number of MPLS labels in the MPLS packet).
If the PE is an NE40E running V600R001, run the shutdown and then undo shutdown commands on the MPLS tunnel interface and reset the tunnel LSP (using the reset mpls te tunnel-interface command) for the configurations to take effect. For other versions, you do not need to reset the tunnel interface or the tunnel LSP.
For calculation methods of MPLS L3VPN packet length during packet fragmentation, see "MPLS MTU
Fragmentation".
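The MTU arithmetic used in the steps above can be illustrated with a small sketch (a restatement of the formulas in this case, not device code):

```python
MPLS_LABEL = 4  # bytes per MPLS label

def ping_payload_for_ip_mtu(ip_mtu):
    """ICMP payload ('packetsize') that produces an IP packet of
    ip_mtu bytes: subtract the 20-byte IP header and 8-byte ICMP header."""
    return ip_mtu - 20 - 8

def nni_mtu_for_jumbo(uni_mtu, labels):
    """MTU/MPLS MTU needed on NNIs (and on tunnel interfaces, if used)
    so that jumbo packets carrying MPLS labels are not dropped."""
    return uni_mtu + labels * MPLS_LABEL
```

For example, a 9000-byte IP packet corresponds to a ping packetsize of 8972 bytes, and with N = 2 labels the NNIs need an MTU of at least 9008 bytes.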
Suggestion
In an MPLS L3VPN network, the interface MTU and MPLS MTU values on core routers' NNIs should be greater than those on the UNIs so that the NNIs can forward labeled packets received from the UNIs.
To enable core routers to support more types of labeled packets, increase both the interface MTU values and the MPLS MTU values.
During a service cutover, the optical fiber connecting Router C to the switch is removed from Router C and installed on Router A. Before the cutover, Router C and Router B established an OSPF neighbor relationship. After the cutover, Router A and Router B cannot establish an OSPF neighbor relationship; the relationship remains in the Exchange state. The interface configurations on Router C and Router A are correct.
Router C's interface configuration is as follows:
#
interface Vlanif351
description XXXX
ip address x.x.x.158 255.255.255.252
ospf cost 30
mpls
mpls ldp
#
Trap Information
N/A
Cause Analysis
If OSPF packets are dropped, the OSPF neighbor relationship remains in the Exchange state.
Perform the following steps to analyze the cause:
1. Configure Router A to send ping packets to Router B.
The ping is successful. The route between Router A and Router B is reachable.
2. Check whether MTU negotiation is successful.
Run the display ospf error command on Router A. The MTU option mismatch field is 0,
which indicates that MTU negotiation is successful.
3. Check whether interface MTU values match.
The interface MTU values configured on Router A and Router C are both 1560 bytes.
Enable Router A to send 1560-byte ping packets to Router B. The ping fails.
Check the JumboOctets count. The outbound statistics increase, but the inbound statistics do not, which indicates that ICMP messages are sent successfully but no reply is received.
4. Change the interface MTUs on Router A and Router B to 1500 bytes.
An OSPF neighbor relationship is successfully established between the two routers.
5. The preceding analysis shows that the switch may have a configuration error.
6. Check the MTU setting on the switch.
The MTU value on the switch is set to 1546 bytes, which is different from the MTU
values on the routers.
The preceding analysis shows that the incorrect MTU setting on the switch causes the
problem.
Troubleshooting Procedure
1. Check the MTU definition on the switch.
Revert the MTU value to 1560 bytes. Use the bisection method to enable the switch to send ping packets with sizes ranging from 1500 to 1560 bytes. The ping results show that a maximum of 1518 bytes can be sent.
When the ICMP payload size specified in a ping packet is 1518 bytes, the IP packet size is 1546 bytes, which is equal to the switch MTU. The IP packet size includes the 1518-byte ICMP payload, the 8-byte ICMP header, and the 20-byte IP header.
In conclusion, the switch's MTU is an IP MTU and has the same meaning as the Huawei router's interface MTU.
2. Change the MTU on the switch to 1560 bytes. The OSPF neighbor relationship between
Router A and Router B is successfully established, and the problem is resolved.
Suggestion
To determine how a vendor-specific device defines the packet size, use the bisection method to send ping packets of various sizes to find the maximum number of bytes that can be sent, and then analyze the ping packet structure.
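The bisection method recommended above can be sketched as a binary search (an illustration; ping_ok stands in for actually sending one ping of the given size):

```python
def max_ping_size(ping_ok, low, high):
    """Binary-search ('bisection') for the largest ping payload that
    succeeds, assuming ping_ok(low) is True and sizes above some
    threshold fail. ping_ok(size) returns True if a ping of that
    payload size gets a reply."""
    while low < high:
        mid = (low + high + 1) // 2  # round up so the search terminates
        if ping_ok(mid):
            low = mid                # mid works; try a larger size
        else:
            high = mid - 1           # mid fails; try a smaller size
    return low
```

For a device that (hypothetically) passes payloads up to 1518 bytes, searching the 1500 to 1560 range converges on 1518 in about six pings instead of sixty.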
4 Appendix
Type: Format
Untagged Ethernet frame
Single-tagged Ethernet frame
Double-tagged Ethernet frame (also called QinQ frame)
PPPoE frame
L2TP packet
MPLS L3VPN packet
For information about N's values, see 4.1.2 Number of labels carried in an MPLS packet in various scenarios.
MPLS L2VPN packet
For information about M's values, see 4.1.2 Number of labels carried in an MPLS packet in various scenarios.
The number of VLAN tags varies according to user-side interface types and PW encapsulation types. For information about N's values, see Table 4-2.
4.2.1 MIBs
OID MIB file MIB table MIB node Description of MIB
node
4.3 Abbreviations
Abbreviation Full Spelling
B
BFD Bidirectional Forwarding Detection
BGP Border Gateway Protocol
BRAS Broadband Remote Access Server
C
CE Customer Edge
CPU Central Processing Unit
CRC Cyclic Redundancy Check
D
DD Database Description
F
FEC Forwarding Equivalence Class
FDDI Fiber Distributed Data Interface
G
GE Gigabit Ethernet
GRE Generic Routing Encapsulation
H
HTML Hypertext Markup Language
HTTP Hypertext Transfer Protocol
I
ICMP Internet Control Message Protocol
IEEE Institute of Electrical and Electronics Engineers
IETF Internet Engineering Task Force
IGP Interior Gateway Protocol
IP Internet Protocol
IPSec Internet Protocol Security
IPoE Internet Protocol over Ethernet
IPTV Internet Protocol Television
IPv4 Internet Protocol version 4
IPv6 Internet Protocol version 6
IS-IS Intermediate System to Intermediate System
ISO International Organization for Standardization
ISP Internet Service Provider
L
L2TP Layer 2 Tunneling Protocol
L3VPN Layer 3 Virtual Private Network
LCP Link Control Protocol
LDP Label Distribution Protocol
M
MAC Media Access Control
MPLS Multiprotocol Label Switching
MSS Maximum Segment Size
MTU Maximum Transmission Unit
N
NNI Network to Network Interface
O
OSPF Open Shortest Path First
P
P2P Point to Point
PC Personal Computer
PE Provider Edge
PHP Penultimate Hop Popping
PIC Physical Interface Card
PMTU Path Maximum Transmission Unit
PPP Point-to-Point Protocol
PPPoE PPP over Ethernet
PW Pseudo Wire
PWE3 Pseudo-Wire Emulation Edge to Edge
Q
QinQ 802.1Q in 802.1Q
R
RFC Request For Comments
RSVP-TE Resource Reservation Protocol - Traffic Engineering
T
TCP Transmission Control Protocol
TE Traffic Engineering