Feature Parameter Description: Transmission Resource Management SRAN5.0
SRAN5.0
Feature Parameter Description
Issue 03
Date 2011-09-30
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute the warranty of any kind, express or implied.
Contents
1 Introduction ................................................................................................................................1-1
1.1 Scope ............................................................................................................................................ 1-1
1.2 Intended Audience ........................................................................................................................ 1-1
1.3 Change History.............................................................................................................................. 1-1
6.6.1 Overview of the Uplink Iub Congestion Control Algorithm ................................................. 6-12
6.6.2 NodeB Backpressure-Based Uplink Congestion Control Algorithm (R99 and HSUPA)..... 6-13
6.6.3 NodeB Uplink Bandwidth Adaptive Adjustment Algorithm .................................................. 6-14
6.6.4 RNC R99 Single Service Uplink Congestion Control Algorithm ......................................... 6-15
6.6.5 NodeB Uplink Congestion Control Algorithm for Cross-Iur Single HSUPA Service ........... 6-15
6.7 Dynamic Bandwidth Adjustment Based on IP PM ...................................................................... 6-16
7 Engineering Guidelines...........................................................................................................7-1
7.1 Configuring Co-TRM (with GSM BSC and UMTS RNC Combined) ............................................. 7-1
7.2 Using Default TRMLOADTH Table ................................................................................................ 7-1
8 Parameters .................................................................................................................................8-1
9 Counters ......................................................................................................................................9-1
10 Glossary ..................................................................................................................................10-1
11 Reference Documents .........................................................................................................11-1
1 Introduction
1.1 Scope
This document mainly describes the management of transmission resources at the base station
controller. The transmission resources refer to those carried on the Abis interface of the 2G system and
on the Iub interface of the 3G system, and those shared by the Abis and Iub interfaces of the common
transmission (co-transmission) system.
This document merges the Transmission Resource Management (TRM) feature descriptions of the 2G,
3G, and co-transmission systems. It describes transmission resources, Quality of Service (QoS), load
control, user plane processing, and associated parameters. It is applicable to R99, HSDPA, and HSUPA.
In this document, HSDPA transport resource management (WRFD-01061014 HSDPA Transport
Resource Management) and HSUPA transport resource management (WRFD-01061207 HSUPA
Transport Resource Management) mainly refer to the transmission resource mapping and load control.
The base station controllers of the 2G, 3G, and co-transmission systems are BSC, RNC, and Multi-Mode Base Station
Controller (MBSC) respectively.
MBSC is the GSM+UMTS multi-mode base station controller introduced in Huawei SRAN3.0 solution.
SRAN3.0 supports the co-transmission resource management (Co-TRM) feature (corresponding to MRFD-211503
Co-Transmission Resources Management on MBSC) only in the co-transmission scenario where the MBSC is
deployed on the base station controller side, and the MBTS is deployed on the base station side. In this scenario,
Co-TRM refers to the common management of IP logical port (LP) transmission resources when the 2G system and
the 3G system implement IP-based co-transmission on the Abis and Iub interfaces. Co-TRM improves the
utilization of transmission resources and guarantees QoS. In the Co-TRM feature, Abis and Iub share IP LPs, and IP LPs
share IP physical transmission resources. The 2G IP paths are independent of the 3G IP paths. Co-TRM implements
the common load control and traffic shaping within the shared LPs.
SRAN5.0 also supports the Co-TRM feature in the scenario where the GSM BSC and the UMTS RNC are deployed
separately, and IP-based co-transmission is implemented on the base station side. In this scenario, Abis and Iub do not
share LPs and physical ports. Co-TRM improves the transmission bandwidth utilization in the GSM and UMTS
co-transmission scenario. For details, see Bandwidth Sharing of MBTS Multi-Mode Co-Transmission Feature
Parameter Description.
Document Issues
The document issues are as follows:
03 (2011-09-30)
02 (2011-03-30)
01 (2010-05-15)
Draft (2010-03-30)
03 (2011-09-30)
This is the document for the third commercial release of SRAN5.0.
Compared with 02 (2011-03-30) of SRAN5.0, this issue incorporates the following changes:
02 (2011-03-30)
This is the document for the second commercial release of SRAN5.0.
Compared with 01 (2010-05-15) of SRAN5.0, this issue incorporates the following changes:
01 (2010-05-15)
This is the document for the first commercial release of SRAN5.0.
Compared with Draft (2010-03-30) of SRAN5.0, this issue optimizes the description.
Draft (2010-03-30)
This is the draft of the document for SRAN5.0.
Compared with 03 (2010-01-20) of SRAN3.0, this issue incorporates the following changes:
2 Overview of TRM
2.1 Definition of TRM
TRM is the management of transmission resources on the interfaces in various networking modes. The
transmission interfaces of the 2G system include Abis, Ater, and A; the transmission interfaces of the 3G
system include Iub, Iur, Iu-CS, and Iu-PS. Compared with the transmission on the other interfaces, the
transmission on the Abis and Iub interfaces has higher costs, more complicated networking modes, and
greater impact on system performance. Therefore, this document mainly describes TRM for the Iub
and Abis interfaces. In the co-transmission system, TRM implements common management of the
transmission resources shared by the Abis and Iub interfaces, so the focus remains on these two
interfaces. TRM in the co-transmission system is called Co-TRM.
Transmission resources are one type of resource that the radio network access provides. Closely related
to TRM algorithms are Radio Resource Management (RRM) algorithms, such as the scheduling
algorithm and load control algorithm for the Uu interface. The TRM algorithm policies should be
consistent with the RRM algorithm policies.
As shown in Figure 2-1, the TRM feature covers the following aspects:
Transmission resources involved in TRM include physical and logical resources. For details, see
section 3 "Transmission Resources."
Load control is applied to the control plane in TRM. It includes admission control, load reshuffling
(LDR), and overload control (OLC). For details, see section 5 "Load Control."
QoS priority mapping, shaping, and scheduling, dynamic bandwidth adjustment based on IP
Performance Monitor (PM), and congestion control are applied to the user plane in TRM. For details,
see section 4 "Quality of Service" and 6 "User Plane Processing."
Characteristics of 2G TRM
The 2G system supports the TDM and HDLC transmission modes. For details about available
transmission resources, see section 3.2.2 "Physical Layer Resources for TDM " and 3.2.3 "Physical and
Data Link Layer Resources for HDLC Transmission."
Characteristics of 3G TRM
The 3G system supports the ATM transmission mode. Transmission resources of the 3G system are
classified into physical transmission resources, LPs, resource groups, and path resources. For details,
see section 3.2.1 "Physical Layer Resources for ATM ", 3.3.2 "ATM LPs at the RNC", 3.3.6 "Resource
Groups at the BSC/RNC", and 3.4.1 "AAL2 Paths."
The LPs of the 3G system can also be used for transmission resource admission control in the RAN
sharing scenario. For details, see section 3.3.1 "Introduction to LP."
The 3G system also supports configuration of NodeB LPs. For details, see section 3.3.4 "LPs at the
NodeB."
For the Iub hybrid IP transmission mode, non-QoS paths can be further classified into high-quality
paths and low-quality paths. For details, see section 3.4.2 "IP Paths."
The Iub interface of the 3G system supports the ATM&IP dual stack networking and hybrid IP
networking. For details, see section 3.5.1 "2G and 3G Networking."
− Both the 2G system and the 3G system request admission of services according to the bandwidth
reserved for those services, and both calculate the reserved bandwidth based on activity factors.
The reserved bandwidth differs for different services of the 2G and 3G systems. For details, see
section 5.3.2 "Calculation of Bandwidth Reserved for Traffic."
− In the process of transmission resource admission control, the 2G and 3G systems have the same
admission processes, admission strategies, and principles of preemption and queuing. However,
the switches and actions of preemption and queuing differ between the two systems. For details,
see section 5.5 "Admission Control."
− In the processes of LDR and OLC, the principles of congestion and overload detection are the
same for the 2G and 3G systems, but the procedures for handling congestion and overload are
different. For details, see section 5.6 "Load Reshuffling and Overload Control."
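The activity-factor calculation shared by both systems can be sketched as follows. This is an illustrative sketch only; the rates and the activity factor shown are hypothetical examples, not defaults from the TRMFACTOR table:

```python
def reserved_bw(bit_rate_kbps, activity_factor_percent):
    """Bandwidth reserved at admission: the service bit rate scaled by
    the configured activity factor (expressed as a percentage)."""
    return bit_rate_kbps * activity_factor_percent / 100

# A hypothetical 384 kbit/s PS service with a 50% activity factor
# reserves 192 kbit/s at admission.
bw = reserved_bw(384, 50)
```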
In the SRAN3.0 solution:
− The 2G and 3G systems use the same load threshold table template and the same command to
configure the table. For details, see section 5.4 "Load Thresholds."
− The 2G and 3G systems use the same activity factor table template and the same command to
configure the table. For details, see section 5.3.2 "Calculation of Bandwidth Reserved for Traffic."
Characteristics of 2G TRM
The 2G Abis signaling needs to calculate the reserved bandwidth. For details, see section 5.3.1
"Calculation of Bandwidth Reserved for 2G Signaling."
Characteristics of 3G TRM
The GBR of BE services of the 3G system is configurable. For details, see section 5.3 "Calculation of
Reserved Bandwidth."
In Iub hybrid transmission mode, the admission of primary and secondary paths is supported in the
process of transmission resource admission. For details, see section 5.5.4 "Load Balancing."
Characteristics of 2G TRM
In HDLC transmission mode, shaping and scheduling are also supported. For details, see section
6.2.1 "RNC/BSC Scheduling and Shaping."
The mapping from 2G Abis signaling services to transmission resources is not oriented to adjacent
nodes and therefore needs to be configured separately. For details, see "Mapping from Abis Signaling
Traffic to Transmission Resources."
Characteristics of 3G TRM
The NodeB of the 3G system also supports shaping and scheduling functions. For details, see section
6.2.2 "NodeB Scheduling and Shaping."
The Iub interface of the 3G system implements a series of congestion control algorithms in the user
plane. For details, see section 6.3 "Iub Overbooking."
When the mapping from services to transmission resources is configured, the 3G services are
differentiated by user priority, traffic priority, and type of radio bearer. The 3G system also supports
configuration of primary and secondary paths. For details, see sections 4.3 "Service QoS" and 4.4
"Transmission Resource Mapping."
− Through the transmission resource mapping, RT services can be mapped to high-priority paths and
thus be transmitted preferentially when congestion occurs. This reduces packet loss and
transmission delay. For details, see section 4 "Quality of Service."
− RT services are admitted at the Maximum Bit Rate (MBR). With appropriate activity factors
configured, the access of more users is allowed while the QoS is guaranteed. Overload control
and preemption can achieve differentiated services. For details, see section 5 "Load Control."
Non-real-time (NRT) services, such as interactive and background services
NRT services do not have strict requirements for bandwidth. When transmission resources are
insufficient, the data can be buffered to reduce the traffic throughput. The activity of NRT services does
not follow an obvious rule. When multiple services access the network, the total actual traffic volume
fluctuates significantly.
− Through transmission resource mapping, NRT services can be mapped to low-priority paths and thus
the QoS of RT services can be guaranteed preferentially. For details, see section 4 "Quality of
Service."
− The TRM feature provides the Guaranteed Bit Rate (GBR) and a user plane congestion control
algorithm, which allow the access of more users while the QoS is guaranteed. For details, see
section 6 "User Plane Processing."
− Through the Scheduling Priority Indicator (SPI) weighting, bandwidth allocation for NRT services can
be differentiated. For details, see section 6 "User Plane Processing."
SPI is used to indicate the scheduling priorities of services, and SPI weighting is used to adjust the
queuing priorities of scheduling services or to proportionally allocate bandwidth to services in Iub
congestion control. A larger SPI weight indicates a higher queuing priority or a higher bandwidth
allocated to the Iub interface.
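The proportional bandwidth allocation by SPI weight described above can be illustrated with a short sketch. The queue names and weights are hypothetical examples; the actual SPI weights are operator-configured:

```python
def allocate_by_spi_weight(total_bw_kbps, spi_weights):
    """Split the available Iub bandwidth among queues in proportion
    to their SPI weights (illustrative only)."""
    total_weight = sum(spi_weights.values())
    return {spi: total_bw_kbps * w / total_weight
            for spi, w in spi_weights.items()}

# Example: three NRT queues with hypothetical SPI weights 4, 2, and 1
# share 7000 kbit/s in a 4:2:1 ratio.
shares = allocate_by_spi_weight(7000, {"spi10": 4, "spi5": 2, "spi1": 1})
```

A larger weight yields a proportionally larger share, matching the rule that a larger SPI weight means more Iub bandwidth.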
Signaling, such as Signaling Radio Bearer (SRB), Session Initiation Protocol (SIP), Network Control
Protocol (NCP), Communication Control Port (CCP), and Abis interface signaling
The traffic volume of signaling is low, but its performance is closely related to the Key Performance
Indicators (KPIs) of the network. Therefore, through transmission resource mapping, signaling can be
mapped to high-priority paths so that its transmission takes precedence, preventing packet loss and
transmission delay.
3 Transmission Resources
3.1 Overview of Transmission Resources
The 2G, 3G, and co-transmission systems can use the transmission resources described in Table 3-1.
Table 3-1 Transmission resources used by the 2G, 3G, and co-transmission systems
Transmission Resource 2G System 3G System Co-Transmission System
TDM √ - -
HDLC √ - -
IP √ √ √
ATM - √ -
ATM transmission resources and IP transmission resources can be further classified into physical
resources, logical ports, resource groups, and paths.
In TDM and HDLC transmission, the user plane data is carried on the timeslots of physical ports.
Figure 3-1, Figure 3-2, Figure 3-3 and Figure 3-4 show examples of different transmission resources.
Figure 3-1 ATM transmission resources
E1/T1 electrical port √ √ √ √ √ √
FE/GE electrical port - - √ - √ √
GE optical port - - √ - √ √
Unchannelized STM-1/OC-3c optical port - - - √ √ -
Channelized STM-1/OC-3 optical port √ √ √ √ √ √
Flex Abis resource pool √ - - - - -
NB: NodeB or BTS; BW: bandwidth; BW0: bandwidth of physical ports on the RNC, BSC, or MBSC
The leaf LPs, that is, LP1, LP2, LP3, and LP4, have a one-to-one relationship with the NodeBs. The
bandwidth of each leaf LP is equal to the Iub bandwidth of each corresponding NodeB.
The hub LP, that is, LP125, corresponds to the hub NodeB. The bandwidth of the hub LP is equal to
the Iub bandwidth of the hub NodeB.
The actual rate at a leaf LP is limited by the bandwidth of the leaf LP and the scheduling rate at the hub
LP and physical port.
In the transmission resource admission algorithm, the reserved bandwidth of a leaf LP is limited by not
only the bandwidth of the leaf LP but also the bandwidth of the hub LP and the bandwidth of the
physical port. That is, the total reserved bandwidth of all the LPs under a hub LP cannot exceed the
bandwidth of the hub LP.
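The hierarchical limit can be expressed as a simple check. This is a sketch with hypothetical arguments, not the actual RNC implementation:

```python
def admit(request_kbps, leaf_reserved, leaf_bw,
          hub_reserved, hub_bw, port_reserved, port_bw):
    """Admit a request on a leaf LP only if the new reserved total stays
    within the leaf LP, its hub LP, and the physical port bandwidth."""
    return (leaf_reserved + request_kbps <= leaf_bw and
            hub_reserved + request_kbps <= hub_bw and
            port_reserved + request_kbps <= port_bw)

# The leaf LP has room, but the hub LP is nearly full, so the
# request is rejected.
ok = admit(500, leaf_reserved=1000, leaf_bw=2000,
           hub_reserved=9800, hub_bw=10000,
           port_reserved=9800, port_bw=100000)
```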
The RNC supports multi-level shaping (a maximum of five levels), which involves leaf LPs and hub LPs.
When the ADD IPPATH command is executed to specify the bearer type of IP path as IPLGCPORT, or
when the RNC and MBSC bind the IP LPs through the ADD SCTPLNK command, the path or link can
be set to join an LP.
IP LPs are similar to ATM LPs in terms of principles and application. The current version supports a
maximum of five levels of IP LPs.
The parameters associated with IP LPs are as follows:
LPNTYPE
RSCMNGMODE
CIR
OAMFLOWBW
On the RNC or BSC side, LPs cannot contain transmission resource groups, and transmission resource groups cannot
contain LPs either.
3.4.2 IP Paths
IP paths can be classified into QoS paths and non-QoS paths.
On QoS paths, different services share the bandwidth of paths. The Per Hop Behavior (PHB) of IP
paths is determined by transmission resource mapping. For details about transmission resource
mapping, see section 4.4 "Transmission Resource Mapping."
PHB is the next-hop behavior of the IP path. Services can be prioritized based on the mapping from
PHB to DSCP.
On non-QoS paths, different services do not share the bandwidth of IP paths. The PHB of IP paths is
determined by the path type. Non-QoS paths can be further classified into high-quality paths and
low-quality paths. The low-quality path, denoted as LQ_xxx, is applicable only to hybrid IP
transmission on the Iub interface. In hybrid IP transmission mode, high-quality paths are configured
if the physical port is a PPP or MLPPP port, and low-quality paths are configured if the physical
port is an Ethernet port.
For details about the hybrid IP transmission on the Iub interface, see section 3.5.1 "2G and 3G
Networking."
The IP path can be configured through the ADD IPPATH command.
For details about the classification of non-QoS paths, see Table 3-6.
Table 3-6 Classification of non-QoS paths
High-Quality Path Low-Quality Path
BE LQ_BE
AF11 LQ_AF11
AF12 LQ_AF12
AF13 LQ_AF13
AF21 LQ_AF21
AF22 LQ_AF22
AF23 LQ_AF23
AF31 LQ_AF31
AF32 LQ_AF32
AF33 LQ_AF33
AF41 LQ_AF41
AF42 LQ_AF42
AF43 LQ_AF43
EF LQ_EF
NOTE
On the Iu-PS interface, even if IPoA transmission is used, IP paths still need to be configured.
High Speed Downlink Packet Access (HSDPA) and High Speed Uplink Packet Access (HSUPA) services can be carried
on the same IP path, with HSDPA services carried in the downlink and HSUPA services carried in the uplink.
The typical networking scenarios for the Abis interface are similar to those for the Iub interface, except that networking
scenarios such as dual stack, hybrid IP, and RAN sharing do not apply to the Abis interface.
For details about the 2G and 3G networking, see the IP BSS Feature Parameter Description of the GBSS and the IP
RAN Feature Parameter Description of the RAN.
The IP transmission mode of the Ater interface supports only TDM networking on IP over E1.
Figure 3-8 Co-transmission scenario where the GSM BSC and the UMTS RNC are deployed separately
For details about the co-transmission networking, see the Common Transmission Feature Parameter
Description.
4 Quality of Service
4.1 Overview
The purpose of TRM algorithms is to guarantee the Quality of Service (QoS). Different types of service
have different QoS requirements.
The Iub or Abis control plane and the Uu signaling require reliable transmission. Packet loss rate and
delay may affect KPIs such as connection delay, handover success rate, access success rate, and call
drop rate.
CS services have requirements for delay and packet loss rate. For example, speech services are
sensitive to end-to-end latency, and data services are sensitive to packet loss.
NRT services are relatively insensitive to delay, although they are delay-sensitive in some scenarios.
When the load is light, the delay requirement should be fulfilled, whereas when the load is heavy,
the delay requirement can be relaxed to a certain extent to guarantee the throughput.
The transport layer provides various transport bearers and transport priorities. The appropriate type of
transport bearer and transport priority should be selected according to the traffic classes, user priorities,
traffic priorities, and radio bearer type of service. High-priority services take precedence in transmission
when congestion occurs. This reduces packet loss and transmission delay.
Transmission resource mapping maps services of different QoS requirements to different transport
bearers. Transmission resource mapping (WRFD-050424 Traffic Priority Mapping onto Transmission
Resources) is an important method to guarantee the QoS and differentiate the users and services. It
mainly involves data in the user plane.
This section describes transmission resource mapping and associated concepts such as transport
priorities and service QoS. For the differences in implementing QoS-related services in the 2G TRM, 3G
TRM, and Co-TRM, see the following sections.
4.2.1 DSCP
The DSCP is carried in the header of each IP packet to inform the nodes on the network of the QoS
requirement. Through the DSCP, each router on the propagation path knows which type of service is
required. DSCP provides differentiated services (DiffServ) for layer 3 (L3).
When entering the network, services are differentiated and subject to flow control according to the QoS
requirement. In addition, the DSCP fields of the packets are set. The DSCP field is in the header of each
IP packet. On the network, DiffServ is applied to different types of traffic according to the DSCP values
and services for the traffic are provided. The services include resource allocation, queue scheduling, and
packet discard policies, which are collectively called PHB. All nodes within the DiffServ domain
implement PHB according to the DSCP field in each packet.
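The DSCP occupies the upper six bits of the DS field (the former IPv4 ToS byte), with the remaining two bits used for ECN. A small sketch of how a node reads and writes the field, using the standard DiffServ bit layout from RFC 2474:

```python
def get_dscp(ds_byte):
    """DSCP is the upper 6 bits of the DS field."""
    return ds_byte >> 2

def set_dscp(ds_byte, dscp):
    """Replace the DSCP while preserving the 2 low-order ECN bits."""
    return (dscp << 2) | (ds_byte & 0x03)

# EF (DSCP 46) corresponds to the DS byte 0xB8.
dscp = get_dscp(0xB8)
```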
Policies for using DSCP are as follows:
The traffic carried on QoS paths uses the DSCPs mapped from services. For details, see "Mapping
from TC to PHB or PVC" and "Mapping from PHB to DSCP."
The traffic carried on the non-QoS path uses the DSCP that the PHB of the IP path corresponds to.
For details, see "Mapping from PHB to DSCP."
It is recommended that you set the path type to QoS path when configuring the IP path. This ensures
simple configuration, better multiplexing, and higher QoS.
Red line: private network; Blue line: public network; Black line: connection between routers
Each NodeB or RNC provides an Ethernet port that connects to the MSTP network. The MSTP transmits
the Ethernet data of different QoS to either of the VC trunks according to the VLAN priority in the frame
header of Ethernet data. On the same VC trunk, different NodeB data is distinguished by VLANID.
Figure 4-2 shows an example of using VLAN priorities.
The RNC, NodeB1, and NodeB2 are connected to the same L2 network. Data of NodeB1 (VLAN 10) and
NodeB2 (VLAN 20) is isolated according to different VLANIDs. VLANIDs are attached to data of different
traffic classes sent from the Ethernet port.
Data of different traffic classes use VLAN priorities mapped from DSCP. Then, the L2 network provides
differentiated services based on the VLAN priorities. When IP paths are configured, the VLANFLAG
parameter specifies whether a VLAN is available.
Table 4-1 describes the default mapping from DSCP to VLANPRI.
Table 4-1 Default mapping from DSCP to VLANPRI
DSCP VLANPRI
0-7 0
8-15 1
16-23 2
24-31 3
32-39 4
40-47 5
48-55 6
56-63 7
You can run the SET DSCPMAP command to dynamically configure the mapping from DSCP to
VLANPRI.
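The default mapping in Table 4-1 is simply the upper three bits of the 6-bit DSCP, which can be sketched as:

```python
def default_vlan_pri(dscp):
    """Default DSCP-to-VLANPRI mapping: DSCP 0-7 -> 0, 8-15 -> 1, ...,
    56-63 -> 7, i.e. the top three bits of the 6-bit DSCP."""
    return dscp >> 3

# EF (DSCP 46) falls in the 40-47 band and maps to VLAN priority 5.
pri = default_vlan_pri(46)
```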
At each IP port (such as a PPP/MLPPP or Ethernet port) or leaf LP of the RNC, BSC, or MBSC, there
are six priority queues, as shown in Figure 4-4. The default scheduling order is as follows: Queue1 >
Queue2 > WRR (Queue3, Queue4, Queue5, and Queue6), where WRR refers to Weighted Round
Robin.
Figure 4-4 Queues at each IP port or leaf LP of the RNC
Different types of services enter queues of different priorities for transmission. In this way, services are
differentiated. For details, see section 4.4.3 "Mapping from Traffic Bearers to Transport Bearers."
At each ATM port (such as IMA, UNI, fractional ATM, or NCOPT port) or LP of the NodeB, there are four
types of priorities, as shown in Figure 4-5. The scheduling order is as follows: CBR or UBR+ (MCR) >
RT-VBR > NRT-VBR > UBR or UBR+ (non-MCR).
At each IP port (such as a PPP/MLPPP or Ethernet port) or LP of the NodeB, there are six priority
queues, as shown in Figure 4-6. The default scheduling order is as follows: Queue1 > WFQ (Queue2,
Queue3, Queue4, Queue5, and Queue6), where WFQ refers to Weighted Fair Queuing.
Figure 4-6 Queues at each IP port or LP of the NodeB
Priority queues are used for RNC backpressure-based downlink congestion control. For details, see
section 6.5.3 "RNC Backpressure-Based Downlink Congestion Control Algorithm."
PS conversational service
PS streaming service
PS interactive service
PS background service
The BSC provides the following traffic classes that can be used in transmission resource mapping
configuration:
Abis OML
Abis RSL
Abis ESL
Abis EML
CS speech service
CS data service
PS data service
2G Abis signaling traffic classes have higher QoS requirements than the other traffic classes, except
Abis EML.
Only the mapping of Abis signaling services in the 2G system is non-adjacent-node-oriented configuration. For details,
see "Mapping from Abis Signaling Traffic to Transmission Resources."
The transmission resource mapping of the RNC also supports configuration of primary and secondary paths. For details,
see section 5.5 "Admission Control."
To provide better differentiated services, the RNC and BSC support dynamic configuration of the
transmission resource mapping and thus traffic bearers can be mapped to transport bearers freely. The
RNC also supports separate configuration of transmission resource mapping under an Iub adjacent
node for a certain operator or a certain user priority.
To dynamically configure transmission resource mapping, do as follows:
Step 1 Run the ADD TRMMAP command to specify the mapping from the TCs of a specific interface
type and transport type to a transport bearer.
Step 2 Run the ADD ADJMAP command to use the configured TRMMAP table. When the RNC
ADJMAP is configured, the TRMMAP tables need to be specified for gold, silver, and copper
users respectively.
In the RAN sharing scenario, if the resource management mode is set to EXCLUSIVE, the operator index needs to be set
to specify the transmission resource mapping of the operator under the adjacent node.
When the transmission mode on the Iub interface is ATM&IP dual stack or hybrid IP, the load balance index of primary
and secondary paths needs to be configured.
----End
TRANST
CNMNGMODE
CNOPINDEX
TMIGLD
TMISLV
TMIBRZ
LEIGLD
LEISLV
LEIBRZ
You can run the SET PHBMAP command to dynamically configure the mapping from PHB to DSCP
(PHBMAP).
If the traffic is carried on a non-QoS path, the PHB of the path is determined by the path type. Run the
SET PHBMAP command to configure PHBMAP.
If the traffic is carried on a QoS path, the PHB of the path is determined by the TRMMAP. Run the ADD
TRMMAP command to determine the PHB of the path, and then run the SET PHBMAP command to
configure PHBMAP.
You can run the SET QUEUEMAP command to dynamically configure the minimum DSCP value that
each queue at the IP port corresponds to.
The associated parameters are as follows:
Q0MINDSCP
Q1MINDSCP
Q2MINDSCP
Q3MINDSCP
Q4MINDSCP
The minimum DSCP value of queue 5 need not be set. The IP packet that meets the condition (0 <= DSCP value <
minimum DSCP value for queue 4) enters queue 5 for transmission.
Table 4-7 Mapping from traffic to transmission resources in HDLC transmission mode
TC Queue Priority
ESL 0
OML 0
RSL 0
EML 5
For IP transmission on the Abis interface, the associated parameters are as follows:
OMLDSCP
RSLDSCP
EMLDSCP
ESLDSCP
For HDLC transmission on the Abis interface, the associated parameters are as follows:
OMLPRI
RSLPRI
EMLPRI
ESLPRI
4.5 Summary
Table 4-8 describes the difference between traffic bearers in the 2G, 3G, and co-transmission systems.
Table 4-8 Difference between traffic bearers in the 2G, 3G, and co-transmission systems
Traffic Bearer 2G System 3G System Co-Transmission System
TC √ √ √
ARP √ √ √
THP √ √ √
Radio bearer type × √ √
The 2G system uses the ARP for admission, and there is no mapping from user priority to ARP.
The transport layer of the 2G system does not differentiate THPs.
Table 4-9 describes the adjacent-node-oriented transmission resource mapping of the 2G TRM, 3G TRM,
and Co-TRM.
Table 4-9 Adjacent-node-oriented transmission resource mapping of the 2G TRM, 3G TRM, and Co-TRM
Transmission Mode Adjacent-Node-Oriented Transmission Resource Mapping
3G ATM transmission From TC + ARP + THP + radio bearer type to PVC
3G IP transmission From TC + ARP + THP + radio bearer type to PHB, from PHB to DSCP to queue
priority
The mapping from signaling traffic of the Abis interface of the 2G system to transmission resources is not oriented to
adjacent nodes. It needs to be configured independently.
In TDM transmission mode of the 2G system, traffic is directly carried on the timeslot at the port. Thus, transmission
resource mapping is not required.
In IP transmission mode of the 3G system, configuration of primary and secondary paths is also supported.
5 Load Control
5.1 Overview of Load Control
Load control at the transport layer is used to manage transmission bandwidth and control transmission
load, for the purpose of allowing more users to access the network and increasing the system capacity
with the QoS guaranteed. Load control is responsible for management of data in the control plane.
Load control methods include admission control, LDR, and OLC.
Admission control is the basic method of load control. In the process of transmission resource
admission, admission control is used to determine whether the transmission resources are sufficient to
accept the admission request from a user. Admission control prevents excessive admission of users
and guarantees the quality of admitted services.
LDR is used to prevent congestion, reduce transmission load, and increase admission success rate
and system capacity.
OLC is used to quickly eliminate overload when congestion occurs, and to reduce the impact of
overload on high-priority users.
Differentiated services are implemented as follows:
Admission strategies: Different admission strategies are used for different types of users. During
admission based on transmission resources, differentiated services for user priorities are
implemented.
Preemption: High-priority users preempt bandwidth of low-priority users. Thus, differentiated services
for different service types and user priorities are implemented.
LDR: Different LDR actions are used for different services. During congestion, differentiated services
for different service types are implemented.
OLC: Bandwidth of low-priority users is released, which reduces the impact of overload on high-priority
users. In the case of overload, differentiated services for different service types and user priorities are
implemented.
Table 5-1 describes load control applied in the 2G TRM, 3G TRM, and Co-TRM.
Table 5-1 Load control applied in the 2G TRM, 3G TRM, and Co-TRM
Load Control                                     2G TRM  3G TRM  Co-TRM
Admission control  Reserved bandwidth admission  √       √       √
                   Load balancing                -       √       √
                   Preemption                    √       √       √
                   Queuing                       √       √       √
LDR                                              √       √       √
OLC                                              √       √       √
This section describes the definition and calculation of transmission load, calculation of reserved
bandwidth, and load thresholds in addition to admission control, LDR, and OLC. For differences of
implementing load control in the 2G TRM, 3G TRM, and Co-TRM, see the following sections.
In IP over E1 mode, the bandwidth reserved for LAPD signaling takes effect on LPs. To ensure the accuracy of admission
based on bandwidth for PPP and MLPPP links, you are advised to take one of the following measures:
Configure LPs on the PPP or MLPPP links with the same bandwidth as the PPP or MLPPP links
Configure IP paths of the QoS type: Bandwidth of IP path = Bandwidth of PPP or MLPPP - Max (Bandwidth reserved for
uplink signaling, Bandwidth reserved for downlink signaling)
Activity factors can be configured for different types of services and adjacent nodes:
Both default configuration and dynamic configuration are available for activity factors for different types
of service. The default configuration can be queried through the LST TRMFACTOR command. You
can run the ADD TRMFACTOR command to dynamically configure activity factors for different types
of service.
You can run the ADD ADJMAP command to configure the same activity factor table for an adjacent
node by specifying the FTI parameter.
For 3G BE services, the GBR can be set by running the SET UUSERGBR command, according to traffic
classes, traffic priorities, user priorities, and types of radio bearers. The associated parameters are as
follows:
TrafficClass
THPClass
BearType
UserPriority
UlGBR
DlGBR
The congestion threshold and congestion clear threshold, and the overload threshold and overload clear
threshold, are used to prevent the ping-pong effect. It is recommended that they be set to different
values.
By running the ADD TRMLOADTH command, you can configure a load threshold table (TRMLOADTH
table) for paths, LPs, resource groups, or physical ports. The TRMLOADTH table can then be referenced
by specifying the TRMLOADTHINDEX parameter.
In ATM transmission, you can run the ADD AAL2PATH or ADD ATMLOGICPORT command to
reference the TRMLOADTH table.
In IP transmission, you can run the ADD IPPATH or ADD IPLOGICPORT command to reference the
TRMLOADTH table.
In TDM/HDLC transmission, you can run the SET BSCABISPRIMAP command to reference the
TRMLOADTH table.
For details about the preceding thresholds, see sections 5.5 "Admission Control" and 5.6 "Load
Reshuffling and Overload Control."
As shown in Figure 5-1, when a user requests transmission resources, the admission control process
is as follows:
1. The admission based on transmission resources is decided according to the admission strategy. If
the admission is successful, a user can obtain transmission resources. If the admission fails, go to
step 2. For details about the admission strategy, see section 5.5.2 "Admission Strategy."
2. The attempt to preempt resources is made. If the preemption is successful, a user can obtain
transmission resources. If the preemption fails or the preemption function is not supported, go to step
3. For details about preemption, see section 5.5.5 "Preemption."
3. The attempt for queuing is made. If the queuing is successful, a user can obtain transmission
resources. If the queuing fails or the queuing function is not supported, the admission based on
transmission resources fails. For details about queuing, see section 5.5.6 "Queuing."
After transmission resources are successfully admitted, bandwidth needs to be reserved on the
corresponding paths and ports. In addition, the reserved bandwidth needs to be counted in the load.
Admission is performed at multiple levels. After a user initiates a request for transmission resources,
admission based on transmission resources is decided in the sequence of paths -> LPs -> physical ports.
If a certain level of admission is not supported, the admission decision proceeds directly to the next level. If no LP is
configured, the admission is performed in the sequence of paths -> physical ports.
In multiple levels of admission, users can obtain transmission resources only when the admission based on all
resources is successful.
In TDM Flex Abis transmission, transmission resource admission is performed step by step, starting from the Flex Abis
resources of the lowest-level base station and proceeding in ascending order. In HDLC transmission, admission is based
on HDLC links.
The service priorities need to be taken into consideration. New users, handover users, and users
requesting a rate increase use different admission strategies.
The admission based on transmission resources is determined according to the current load, bandwidth
requested by users, and admission thresholds. The admission strategy varies according to the types of
users.
For a new user
− Admission based on paths
Path load + Bandwidth required by the user < Total configured bandwidth for the path - Path
bandwidth reserved for handover.
− Admission based on LPs
The admission based on LPs is performed level by level. For each level of admission, the strategy is
as follows: LP load + Bandwidth required by the user < Total bandwidth for the LP - LP bandwidth
reserved for handover.
For a handover user
− Admission based on paths
Path load + Bandwidth required by the user < Total bandwidth for the path.
− Admission based on LPs
The admission based on LP resources is performed level by level. For each level of admission, the
strategy is as follows: LP load + Bandwidth required by the user < Total bandwidth for the LP.
For a user requesting a rate increase
− Admission based on paths
Path load + Bandwidth required by the user < Total bandwidth for the path - Path congestion
threshold.
− Admission based on LPs
The admission based on LPs is performed level by level. For each level of admission, the strategy is
as follows: LP load + Bandwidth required by the user < Total bandwidth for the LP - LP congestion
threshold.
NOTE
If no admission threshold is configured for the user, the admission strategy can be simplified as: Load + Bandwidth
required by the user < Total bandwidth configured.
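The three admission strategies above can be combined into a single decision function. The following is an illustrative Python sketch; the function name, parameter names, and bandwidth units are assumptions for the example, not actual BSC/RNC code.

```python
# Illustrative sketch of the per-user-type admission strategies described
# above. All names and units are hypothetical.

def admit(load, required_bw, total_bw, user_type,
          handover_reserved=0, congestion_threshold=0):
    """Return True if the request passes admission at one path or LP level."""
    if user_type == "new":
        # New users must leave the handover-reserved headroom untouched.
        return load + required_bw < total_bw - handover_reserved
    if user_type == "handover":
        # Handover users may use the full configured bandwidth.
        return load + required_bw < total_bw
    if user_type == "rate_increase":
        # Rate increases must not push the link into congestion.
        return load + required_bw < total_bw - congestion_threshold
    raise ValueError(user_type)

# Example: 10 Mbit/s path, 8 Mbit/s load, 1 Mbit/s reserved for handover.
print(admit(8.0, 1.5, 10.0, "new", handover_reserved=1.0))   # False
print(admit(8.0, 1.5, 10.0, "handover"))                     # True
```

With no admission thresholds configured, all three branches collapse to the simplified strategy in the NOTE above.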
Step 1 Paths are selected according to transmission resource mapping. For details about transmission
resource mapping, see section 4.4 "Transmission Resource Mapping."
If no paths are available for use, for example, when the mapped path type does not exist, the
admission fails.
Step 2 The admission sequence for all paths is determined. For details, see the section "Sequence of
the Admission Based on Paths."
Step 3 According to the sequence, a path is selected to undergo the admission decision.
If the admission succeeds, the admission based on paths is complete.
If the admission fails, go to Step 4.
Step 4 The next available path is checked.
If there is no available path, the admission fails and the admission based on paths is complete.
If there are still available paths, go to Step 3.
For example,
One type of service is mapped to five paths of the same type that are numbered path 1 to path 5. The
five paths form a circular chain: 1→2→3→4→5→1.
Assume that the type of service needs to be admitted for 100 times in response to 100 requests. The
times are respectively marked T1, T2, T3, …
Assume that the admission of T1 succeeds on path 1.
Then the admission of T2 is performed in the sequence of 2→3→4→5→1. Assume that the admission
succeeds on path 4.
Then the admission of T3 is performed in the sequence of 5→1→2→3→4. Assume that the admission
fails on all paths. In this case, the admission of T3 is rejected.
Then the admission of T4 is performed in the sequence of 5→1→2→3→4. …
If the admission of all the 100 times succeeds on the first path selected for the admission decision, then
the 100 service requests are distributed across the five paths in turn.
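The circular-chain selection in the example above can be sketched as follows. The `select_path` function and the bandwidth-based admission predicate are hypothetical simplifications of the per-path admission decision.

```python
# Minimal sketch of circular-chain path selection: each admission attempt
# starts at the path following the last successful one.

def select_path(paths, start_index, admit):
    """Try paths in circular order from start_index; return the index of
    the first path that admits the request, or None if all reject it."""
    n = len(paths)
    for offset in range(n):
        i = (start_index + offset) % n
        if admit(paths[i]):
            return i
    return None  # admission rejected on all paths

# Paths 1..5 modelled by remaining bandwidth; each request needs 2 units.
remaining = [1, 1, 1, 5, 1]   # only path 4 (index 3) has room
start = 1                      # T1 succeeded on path 1, so T2 starts at path 2
print(select_path(remaining, start, lambda bw: bw >= 2))  # 3 (path 4)
```

Starting each attempt after the previously successful path spreads the load evenly across the five paths instead of filling the first path before touching the others.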
When the primary path for a type of service exists at more than one physical port, PortUsed and
PortAvailable refer to the sum of used bandwidth and the sum of available bandwidth at these ports
respectively.
Figure 5-4 Admission bandwidth for the primary and secondary paths of a new user
Available bandwidth 1 = Total bandwidth of the primary path - Used bandwidth - Bandwidth
reserved for handover
Available bandwidth 2 = Total bandwidth of the secondary path - Used bandwidth - Bandwidth
reserved for handover
5.5.5 Preemption
In the case of preemption, a high-priority user preempts the bandwidth from a low-priority access user for
admission based on transmission resources. This improves satisfaction of high-priority users. In the
Co-TRM, preemption is performed only within the 2G or 3G system. A high-priority 2G user preempts the
bandwidth of a low-priority 2G user, and a high-priority 3G user preempts the bandwidth of a low-priority
3G user.
If the admission based on transmission resources fails, the preemption function is triggered when the
following conditions are met:
The transmission channel (path, LP, resource group, or physical port) supports preemption.
The user who requests transmission resources supports preemption as defined in the user request.
The preemption switch is enabled.
− In the 2G system, the preemption switch is enabled through the ENPREEMPTTRANSADMT
parameter.
− In the 3G system, the preemption switch is enabled through the PreemptAlgoSwitch parameter.
− In the Co-TRM, the preemption switches for 2G and 3G services are set separately. Both
ENPREEMPTTRANSADMT and PreemptAlgoSwitch need to be set.
Intelligent Access Control (IAC) is aimed at improving the access success rate. Preemption is one of the
IAC procedures. For details about the principles of preemption at the RNC, see the Load Control Feature
Parameter Description of the RAN.
The principles of preemption at the BSC are as follows:
In IP and HDLC transmission modes
− If transmission resources are insufficient, preemption for bandwidth of different types of service is
performed. That is, preemption for bandwidth is performed between CS services and PS services.
− If conditions for preemption between different types of service are not met, preemption is performed
on bandwidth of the same traffic class. That is, a high-priority CS service preempts the bandwidth of a
low-priority CS service, and a high-priority PS service preempts the bandwidth of a low-priority PS
service.
In Flex Abis mode, a CS service preempts the bandwidth of a low-priority PS service.
The principles of bandwidth preemption between CS services and PS services are as follows:
During transmission resource admission, a CS service can preempt the bandwidth of a low-priority PS
service only when the ratio of the bandwidth occupied by CS services to the total bandwidth is lower
than the value of the GSMCSBWRATE parameter.
During transmission resource admission, a PS service can preempt the bandwidth of a low-priority CS
service only when the ratio of the bandwidth occupied by CS services to the total bandwidth is
higher than the value of the GSMCSBWRATE parameter.
Whether a CS service is of high priority can be determined by configuring the
GSMCSUSERHIGHPRILEV parameter. If the priority of the CS service indicated in the user request is
lower than or equal to the value of this parameter, the CS service is considered high priority.
Otherwise, it is considered low priority.
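The CS/PS cross-preemption conditions above can be sketched as follows. The GSMCSBWRATE and GSMCSUSERHIGHPRILEV names come from the text; the functions themselves are illustrative simplifications, not product code.

```python
# Sketch of the CS/PS cross-preemption rules at the BSC described above.

def may_preempt(requesting, cs_used_bw, total_bw, gsm_cs_bw_rate):
    """Decide whether a CS or PS request may preempt the other traffic class."""
    cs_ratio = cs_used_bw / total_bw
    if requesting == "CS":
        # CS may take PS bandwidth only while CS stays under its share.
        return cs_ratio < gsm_cs_bw_rate
    if requesting == "PS":
        # PS may take CS bandwidth only once CS exceeds its share.
        return cs_ratio > gsm_cs_bw_rate
    raise ValueError(requesting)

def cs_is_high_priority(cs_priority, gsm_cs_user_high_pri_lev):
    # A lower numeric value in the user request means a higher priority.
    return cs_priority <= gsm_cs_user_high_pri_lev

print(may_preempt("CS", cs_used_bw=3.0, total_bw=10.0, gsm_cs_bw_rate=0.5))  # True
print(may_preempt("PS", cs_used_bw=3.0, total_bw=10.0, gsm_cs_bw_rate=0.5))  # False
```

The single GSMCSBWRATE threshold keeps the two directions mutually exclusive: at any moment, either CS may preempt PS or PS may preempt CS, never both.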
5.5.6 Queuing
In the queuing function, the user that requests transmission resources is put in a queue to wait for free
transmission resources.
If the admission based on transmission resources fails, or the user that requests transmission resources
does not support the preemption function, or the preemption function fails, the queuing function is
triggered when the following conditions are met:
The transmission channel (path, LP, resource group, or physical port) supports queuing.
The user that requests transmission resources supports queuing as defined in the user request. The
2G user for queuing must be a non-handover CS user.
The queuing switch is enabled.
− In the 2G system, the queuing switch is enabled through the ENQUETRANSADMT parameter.
− In the 3G system, the queuing switch is enabled through the QueueAlgoSwitch parameter.
− In the Co-TRM, the queuing switches for 2G and 3G services are set separately. Both
ENQUETRANSADMT and QueueAlgoSwitch need to be set.
Queuing is also one of the IAC procedures. For details about the principles of queuing at the RNC, see
the Load Control Feature Parameter Description of the RAN.
The principle of queuing at the BSC is that the user entering a queue captures transmission resources
according to the First in First Out (FIFO) strategy when transmission resources are released.
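The FIFO capture of released resources can be sketched as follows; `AdmissionQueue` and `Request` are hypothetical names for the example, not BSC data structures.

```python
# Sketch of BSC queuing: queued requests capture released transmission
# resources strictly first-in-first-out.
from collections import deque, namedtuple

Request = namedtuple("Request", "user bw")

class AdmissionQueue:
    def __init__(self):
        self._q = deque()

    def enqueue(self, request):
        self._q.append(request)

    def on_resources_released(self, free_bw):
        """Serve waiting requests strictly FIFO while the freed bandwidth lasts."""
        served = []
        while self._q and self._q[0].bw <= free_bw:
            req = self._q.popleft()
            free_bw -= req.bw
            served.append(req)
        return served

q = AdmissionQueue()
q.enqueue(Request("A", 2))
q.enqueue(Request("B", 3))
print([r.user for r in q.on_resources_released(4)])  # ['A']
```

Note the head-of-line behaviour implied by strict FIFO: "B" stays queued even though 2 units of bandwidth remain after "A" is served, because later requests may not overtake the queue head.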
Congestion threshold: When the usage of transmission resources increases and the remaining
bandwidth falls below the congestion threshold, the system considers that congestion occurs.
Congestion clear threshold: When the usage of transmission resources decreases and the remaining
bandwidth exceeds the congestion clear threshold, the system considers that congestion is cleared.
For parameters associated with the congestion, see section 5.4 "Load Thresholds."
Congestion detection can be triggered in any of the following conditions:
Bandwidth adjustment because of resource allocation, modification, or release
Change in the configured bandwidth or the congestion threshold
Fault in the physical link
Congestion detection for a path is similar to that for a port. Assume that the forward parameters of a port
for congestion detection are defined as follows:
Configured bandwidth: AVE
Forward congestion threshold: CON
Forward congestion clear threshold: CLEAR
Used bandwidth: USED
Then, the policies of congestion detection for the port are as follows:
Congestion occurs on the port when AVE - USED < CON.
Congestion is cleared from the port when AVE - USED > CLEAR.
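The congestion detection policy above, with its separate occurrence and clearance thresholds, can be sketched as a small hysteresis state machine. The function name and values are illustrative.

```python
# Sketch of port congestion detection with hysteresis: CON and CLEAR are
# deliberately different thresholds to avoid ping-pong state changes.

def detect_congestion(congested, ave, used, con, clear):
    """Return the new congestion state given remaining bandwidth AVE - USED."""
    remaining = ave - used
    if not congested and remaining < con:
        return True        # congestion occurs
    if congested and remaining > clear:
        return False       # congestion cleared
    return congested       # unchanged inside the hysteresis band

state = False
state = detect_congestion(state, ave=100, used=95, con=10, clear=20)
print(state)  # True: remaining 5 < CON 10
state = detect_congestion(state, ave=100, used=85, con=10, clear=20)
print(state)  # True still: remaining 15 is inside the 10..20 band
state = detect_congestion(state, ave=100, used=75, con=10, clear=20)
print(state)  # False: remaining 25 > CLEAR 20
```

The middle call shows why the thresholds should differ: while the remaining bandwidth sits between CON and CLEAR, the state does not flap.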
Generally, congestion thresholds need to be set only for physical ports or resource groups. If different
types of paths require different congestion thresholds, adjust the TRM load threshold tables by running
the ADD TRMLOADTH command, and then reference them by specifying the TRMLOADTHINDEX
parameter when the paths are configured.
If ATM LPs or IP LPs are configured, LDR is also applicable to ATM LPs or IP LPs. LDR for LPs is similar
to that for resource groups.
Overload detection for a path is similar to that for a port. Assume that the forward parameters of an LP
for overload detection are defined as follows:
Total bandwidth: AVE
Forward overload threshold for remaining bandwidth: OVERLOD
Forward overload clear threshold for remaining bandwidth: CLEAR
Used bandwidth: USED
Then, the policies of overload detection for the LP are as follows:
Overload occurs on the LP when AVE - USED < OVERLOD.
Overload is cleared on the LP when AVE - USED > CLEAR.
If a path or port is not configured with overload thresholds, the overload detection policy simplifies
to USED > AVE.
After receiving an overload message, the RNC triggers OLC actions. OLC releases
resources used by users in order of comprehensive priority.
For details about the LDR actions for various types of services and the comprehensive priorities, see
the Load Control Feature Parameter Description of the RAN.
5.7 Summary
Figure 5-5 shows the load control process in the increase of transmission bandwidth usage during the
admission of transmission resources.
As shown in Figure 5-5, the load control process in the increase of transmission bandwidth usage is as
follows:
Admission control
− All users are admitted when Remaining bandwidth > Congestion threshold.
− New users and handover users are admitted when Handover reserved threshold < Remaining
bandwidth < Congestion threshold.
− Handover users are admitted when Overload threshold < Remaining bandwidth < Handover
reserved threshold.
− New users are allowed to preempt the bandwidth of admitted users, but cannot be allocated new
bandwidth, when Remaining bandwidth < Overload threshold.
LDR: LDR starts when Remaining bandwidth < Congestion threshold.
OLC: OLC starts when Remaining bandwidth < Overload threshold.
When the usage of transmission bandwidth decreases and Remaining bandwidth > Overload clear
threshold, OLC is cleared. When the usage of transmission bandwidth decreases and Remaining
bandwidth > Congestion clear threshold, LDR is cleared.
Table 5-2 summarizes the difference of load control between the 2G TRM, 3G TRM, and Co-TRM.
Table 5-2 Difference of load control between the 2G TRM, 3G TRM, and Co-TRM
Networking Scenario   Object                 Congestion    Overload      Handover   Preemption  Queuing  Calculation of  Calculation of
                                             Threshold &   Threshold &   Reserved                        Bandwidth       Bandwidth
                                             Congestion    Overload      Threshold                       Reserved for    Reserved for
                                             Clear         Clear                                         Signaling       Traffic
                                             Threshold     Threshold
2G IP transmission    Path                   ×             ×             √          ×           ×        ×               √
                      LP                     √             √             √          √           √        √
                      Resource group         ×             ×             √          ×           ×
                      PPPLNK/MPGRP           √             √             √          √           √
2G HDLC transmission  -                      √             √             √          √           √        ×               √
2G Flex Abis          -                      √             ×             ×          √           ×        ×               ×
3G IP transmission    Path                   √             √             √          √           √        ×               √
                      LP                     √             √             √          √           √
                      Resource group         √             √             √          √           √
                      PPPLNK/MPGRP           √             √             √          √           √
3G ATM transmission   Path                   √             √             √          √           √        √               √
                      LP                     √             √             √          √           √
                      Resource group         √             √             √          √           √
                      IMAGRP/UNILNK/FRALNK   √             √             √          √           √
Co-transmission       IP LP                  √             √             √          √           √        √               √
In the 3G Iub hybrid transmission scenario, including ATM&IP dual stack and hybrid IP transmission, load balancing is
supported in the process of transmission resource admission.
Bandwidth sharing in MBTS multi-mode co-transmission applies to the scenario where the MBTS is in co-transmission
mode and the BSC and RNC are deployed separately.
This section describes user plane processing in terms of scheduling, shaping, Iub overbooking, Iub user
plane congestion control, and dynamic bandwidth adjustment based on IP PM.
The recommended configurations for the downlink congestion control algorithms are as follows:
The RLC retransmission rate-based congestion control algorithm switch is disabled. Other algorithm
switches are enabled.
In the convergence scenario, the multiple-level LPs are configured if the configuration of multiple-level
LPs is supported.
In the IP transmission scenario, the IP PM is enabled if it is supported.
The relations between the four downlink congestion control algorithms are as follows:
Relation between the RNC backpressure-based congestion control algorithm and the RNC RLC
retransmission rate-based congestion control algorithm
Both the algorithms are implemented in the RNC. Therefore, they may take effect simultaneously.
Relation between the NodeB HSDPA flow control algorithm and the RNC backpressure-based
congestion control algorithm
The NodeB flow control algorithm switch is set to BW_SHAPING_ONOFF_TOGGLE by default.
In the default configuration:
− If the RNC backpressure switch is set to OFF, the NodeB flow control policy is automatically adjusted
to DYNAMIC_BW_SHAPING and can independently solve the congestion problem of HSDPA users.
− If the RNC backpressure switch is set to ON and direct connection networking is applied, the NodeB
flow control policy is automatically adjusted to NO_BW_SHAPING and the RNC backpressure
algorithm takes effect.
− If the RNC backpressure switch is set to ON and transmission convergence networking is applied,
the NodeB flow control policy is automatically adjusted to DYNAMIC_BW_SHAPING, and both the
NodeB flow control algorithm and the RNC backpressure algorithm take effect. The NodeB flow control
algorithm solves the congestion problem on the transmission network, whereas the RNC
backpressure algorithm solves the congestion problem on the Iub interface of the RNC side.
Relation between the NodeB HSDPA flow control algorithm and the RNC RLC retransmission
rate-based congestion control algorithm
− The NodeB HSDPA flow control algorithm performs well. Therefore, the RLC retransmission rate-based
congestion control algorithm is not used for HSDPA services.
− When both algorithms take effect simultaneously, one is applied to R99 services and the other to
HSDPA services, so they do not conflict with each other. Generally, the priority of R99
services is higher than that of HSDPA services. Therefore, the rate of HSDPA services is reduced until
it reaches the minimum value. In this case, the RLC retransmission rate-based congestion
control algorithm takes effect to limit the rate of R99 services.
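The default-configuration policy selection described in the relations above can be sketched as a small decision function. The switch values are taken from the text; the function name and boolean parameters are assumptions for the example.

```python
# Sketch of the effective NodeB flow control policy when the NodeB switch is
# BW_SHAPING_ONOFF_TOGGLE, as a function of the RNC backpressure switch and
# the networking type.

def nodeb_policy(rnc_backpressure_on, direct_connection):
    if not rnc_backpressure_on:
        return "DYNAMIC_BW_SHAPING"   # NodeB handles HSDPA congestion alone
    if direct_connection:
        return "NO_BW_SHAPING"        # RNC backpressure algorithm takes over
    return "DYNAMIC_BW_SHAPING"       # convergence: both algorithms active

print(nodeb_policy(rnc_backpressure_on=True, direct_connection=True))
# NO_BW_SHAPING
```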
Figure 6-1 Procedure for the RLC retransmission rate-based flow control of the BE service
Through this algorithm, the transmission rate of the RNC matches the bandwidth on the Iub interface, as
shown in Figure 6-2.
Figure 6-2 BE service flow control in the case of Iub congestion
----End
The congestion thresholds are CONGTHD0, CONGTHD1, CONGTHD2, CONGTHD3, CONGTHD4, and CONGTHD5.
Step 3 When the buffer length of the queue is greater than the packet discarding threshold, the RNC
starts discarding data packets from the buffer.
Step 4 When the buffer length of the queue is smaller than the congestion recovery threshold, the
queue leaves the congestion state. The port is recovered if all the queues on the port leave the
congestion state. The interface boards send congestion resolving signals to the associated
DPUb boards, and the DPUb boards restore the transmission rate of BE users on the port.
Step 5 After the BE users leave the congestion state, the RNC increases the transmission rate every 10
ms according to the increasing step until the BE users reach the Maximum Bit Rate (MBR). For
details about the MBR, see the Load Control Feature Parameter Description of the RAN.
----End
The result of this algorithm for the BE service is shown in Figure 6-3.
Figure 6-3 Result of the backpressure-based flow control algorithm for the BE service
This algorithm enables the service scheduling and retransmission functions on the NodeB and
reduces data transmission latency.
This algorithm minimizes the buffer size and buffer time of the NodeB to prevent data loss caused by
data buffering timeout.
This algorithm prevents packet loss and maximizes the power and code resource efficiency.
This algorithm solves the Iub congestion problems of HSDPA users in various scenarios.
The prerequisites for implementing the algorithm are as follows:
The HSDPA MBR reporting switch is set as follows:
− When the switch is set to ON, the RNC sends the user MBR to the NodeB. When the NodeB MAC-hs
flow control entity distributes flow to the users, the rate does not exceed the MBR.
− When the switch is set to OFF, the Iub MBR reporting function is disabled.
NOTE
This switch is not configurable. It is set to ON by default.
The NodeB Iub flow control algorithm switch SWITCH is set as follows:
− When the switch is set to DYNAMIC_BW_SHAPING, the NodeB adjusts the available bandwidth for
HSDPA users based on the delay and packet loss condition on the Iub interface. Then, considering
the rate on the air interface, the NodeB performs Iub shaping and distributes flow to HSDPA users.
− When the switch is set to NO_BW_SHAPING, the NodeB does not adjust the bandwidth based on
the delay and packet loss condition on the Iub interface. The NodeB reports the conditions on the air
interface to the RNC, and then the RNC performs bandwidth allocation.
− When the switch is set to BW_SHAPING_ONOFF_TOGGLE, the flow control policy for the ports of
the NodeB is either DYNAMIC_BW_SHAPING or NO_BW_SHAPING in accordance with the
congestion detection mechanism of the NodeB.
When SWITCH is set to DYNAMIC_BW_SHAPING or BW_SHAPING_ONOFF_TOGGLE, the cell
throughput decreases in the case of severe packet loss in the transport network.
This section describes the flow control policy used when SWITCH is set to
BW_SHAPING_ONOFF_TOGGLE. The algorithm architecture is shown in Figure 6-4.
----End
The recommended configurations for the uplink congestion control algorithms are as follows:
All the algorithm switches are enabled.
In the IP transmission scenario, the IP PM is enabled if it is supported.
The relations between the four uplink congestion control algorithms are as follows:
The NodeB backpressure-based uplink congestion control algorithm and the NodeB uplink bandwidth
adaptive adjustment algorithm are implemented in the NodeB. The RNC R99 single service uplink
congestion control algorithm is implemented in the RNC. These three algorithms may take effect
simultaneously.
The result (available bandwidth for LPs) of the NodeB uplink bandwidth adaptive adjustment algorithm
is the input for the NodeB backpressure-based uplink congestion control algorithm. If the NodeB
boards support the NodeB uplink bandwidth adaptive adjustment algorithm and the NodeB
backpressure-based uplink congestion control algorithm, both the algorithms can be used together to
solve the uplink Iub congestion problems (in direct connection and convergence scenarios). This is the
main scheme of the uplink flow control algorithm.
If the NodeB supports the NodeB backpressure-based uplink congestion control algorithm and the
NodeB uplink bandwidth adaptive adjustment algorithm, the RNC R99 single service uplink congestion
control algorithm can control the transmission rate of UEs based on the backpressure flow control and
rate limiting results. They do not conflict with each other. Otherwise, the RNC R99 single service uplink
congestion control algorithm independently controls the transmission rate of UEs based on the FP
congestion detection results.
If the NodeB supports the NodeB backpressure-based uplink congestion control algorithm and the
NodeB uplink bandwidth adaptive adjustment algorithm, the NodeB uplink congestion control
algorithm for cross-Iur single HSUPA service can solve the packet loss problem due to Iur interface
congestion for HSUPA users.
NOTE
The switch for this algorithm is not configurable. It is set to ON by default.
Figure 6-5 shows the principle of the NodeB backpressure-based congestion control algorithm.
Figure 6-5 Principle of the NodeB backpressure-based uplink congestion control algorithm
Step 4 When detecting that the transmission buffering duration falls below the congestion recovery
threshold, the NodeB determines that transmission congestion is eliminated. The congestion
recovery threshold is not configurable and is fixed at 20 ms.
After Iub congestion is eliminated, the NodeB increases the transmission rates for all BE users
carried on the Iub LPs up to their MBRs.
− The NodeB increases the transmission rates for BE users in certain steps every 10 ms.
− The step varies with the user priority. The transmission rate for a BE user with a higher priority is
increased in larger steps.
----End
NOTE
The switch of this algorithm is not configurable. It is set to ON by default.
The RNC monitors congestion due to delay and frame loss based on the packet transmission time
specified in the Spare Extension field in the FP frame and the number of FP packets sent by the NodeB.
Then, the RNC returns the congestion indication according to the congestion detection result. The frame
structure of the congestion indication is shown in Figure 6-6. At the same time, the cross-Iur indication is
added to the congestion indication, which is used for the NodeB to perform cross-Iur flow control for
HSUPA users.
Figure 6-6 Frame structure of the congestion indication on the transport network
Congestion Status indicates the congestion status of the transport network. Its values are as follows:
0: no TNL congestion
1: reserved for future use
2: TNL congestion detected by delay build-up
3: TNL congestion detected by frame loss
After receiving the non-cross-Iur congestion indication periodically measured on each LP, the NodeB
adjusts the exit bandwidth on the NodeB side according to the following principles:
If the NodeB receives the congestion indication in which the value of Congestion Status is 2 or 3 in a
measurement period, it reduces the exit bandwidth of the LP by a certain step.
Otherwise, the NodeB increases the exit bandwidth of the LP by a certain step, and the changed exit
bandwidth does not exceed the configured bandwidth.
This algorithm enables monitoring Iub transmission resources to dynamically adjust the exit bandwidth
on the NodeB side, which greatly increases resource efficiency. If a large number of packets are lost in
the transport network, the HSUPA throughput decreases.
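The per-measurement-period exit-bandwidth adjustment described above can be sketched as follows. The Congestion Status codes (2 = delay build-up, 3 = frame loss) follow the text; the step sizes and numbers are illustrative assumptions.

```python
# Sketch of the NodeB uplink bandwidth adaptive adjustment: per measurement
# period, reduce the LP exit bandwidth on congestion, otherwise ramp up
# toward (but never beyond) the configured bandwidth.

def adjust_exit_bandwidth(exit_bw, configured_bw, congestion_status,
                          down_step, up_step):
    if congestion_status in (2, 3):          # TNL congestion detected
        return max(exit_bw - down_step, 0)
    # No congestion: increase, capped at the configured bandwidth.
    return min(exit_bw + up_step, configured_bw)

bw = 8.0
bw = adjust_exit_bandwidth(bw, configured_bw=10.0, congestion_status=2,
                           down_step=1.0, up_step=0.5)
print(bw)  # 7.0 after a congested period
bw = adjust_exit_bandwidth(bw, configured_bw=10.0, congestion_status=0,
                           down_step=1.0, up_step=0.5)
print(bw)  # 7.5 after a clear period
```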
NOTE
The switch of this algorithm is not configurable. It is set to ON by default.
The spare field in the uplink DCH data frame is extended to implement FP-based uplink congestion
detection. The algorithm is implemented as follows:
Step 1 The NodeB sends the DCH FP frame that carries the total number of FP packets.
Step 2 The RNC performs R99 single service uplink congestion detection due to frame loss.
Step 3 If a frame loss is detected, the RNC reduces the rate of the uplink service (not lower than the
GBR) and notifies the UE through the TFC Control signaling.
Step 4 If there is no frame loss and the current rate of the user does not reach the MBR, the RNC
increases the rate and notifies the UE through the TFC Control signaling.
----End
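The Step 1 to Step 4 loop above amounts to moving the uplink R99 rate between the GBR and the MBR based on FP frame-loss detection, with the result signaled to the UE via TFC Control. A minimal sketch, with illustrative rate values and step size:

```python
# Sketch of RNC R99 single service uplink congestion control: reduce the
# rate (never below the GBR) on frame loss, otherwise increase it (never
# above the MBR).

def next_rate(rate, frame_loss, gbr, mbr, step):
    if frame_loss:
        return max(rate - step, gbr)   # Step 3: reduce, floor at GBR
    return min(rate + step, mbr)       # Step 4: increase, cap at MBR

rate = 128.0
rate = next_rate(rate, frame_loss=True, gbr=64.0, mbr=384.0, step=32.0)
print(rate)  # 96.0
rate = next_rate(rate, frame_loss=False, gbr=64.0, mbr=384.0, step=32.0)
print(rate)  # 128.0
```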
Step 3 In a certain period, the NodeB increases the transmission rate for the uplink cross-Iur HSUPA
user until the rate of the BE user reaches the MBR.
Step 4 After obtaining the transmission rate, the decoding DSP sends data by using the leaky bucket
algorithm.
If the NodeB supports uplink backpressure, the transmission rate is the minimum value between the
rate limited by the backpressure algorithm and the rate specified by this algorithm.
----End
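Step 4 above can be sketched as a simplified token-per-tick leaky bucket in which the effective rate is the minimum of the backpressure-limited rate and the rate set by this algorithm. The function and its units are illustrative assumptions, not decoding DSP code.

```python
# Sketch of leaky-bucket draining at the minimum of two rate limits.

def leaky_bucket_drain(buffered_bytes, backpressure_rate, algo_rate, ticks):
    """Drain the buffer for `ticks` intervals at the effective rate."""
    rate = min(backpressure_rate, algo_rate)   # bytes per tick
    sent = []
    for _ in range(ticks):
        chunk = min(rate, buffered_bytes)
        buffered_bytes -= chunk
        sent.append(chunk)
    return sent, buffered_bytes

sent, left = leaky_bucket_drain(buffered_bytes=250, backpressure_rate=100,
                                algo_rate=80, ticks=3)
print(sent, left)  # [80, 80, 80] 10
```

Taking the minimum of the two limits is what lets the backpressure algorithm and this algorithm coexist without conflicting, as noted in the text.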
The IP PM for the Abis interface is similar to that for the Iub interface.
7 Engineering Guidelines
This section provides engineering guidelines regarding the configuration of the TRM feature.
You can run the MML command LST TRMLOADTH to query the default TRMLOADTH table. The default value of
TRMLOADTHINDEX is 3. If you want to set the parameter to a different value, perform this step. However, using the
default TRMLOADTH table is strongly recommended.
Step 2 Run the MML command ADD IPLOGICPORT to set the LPNTYPE parameter to Leaf and
change TRMLOADTHINDEX to the value set in Step 1. For example,
ADD IPLOGICPORT: SRN=0, SN=0, BT=FG2c, LPNTYPE=Leaf, LPN=0, CARRYT=ETHER, PN=0,
RSCMNGMODE=SHARE, BWADJ=OFF, CIR=200, FLOWCTRLSWITCH=ON, TRMLOADTHINDEX=3;
8 Parameters
Table 8-1 Parameter description for 2G TRM, 3G TRM, and Co-TRM
Parameter ID NE MML Description
BWDCONGBW    BSC6900    ADD TRMLOADTH(Optional)
Meaning: If the available backward bandwidth is less than or equal to this value, the backward
congestion alarm is emitted and backward congestion control is triggered.
DROPPKTTHD0    BSC6900    ADD PORTFLOWCTRLPARA(Optional)
Meaning: If the duration for buffering the data in queue 0 is greater than or equal to the value of this
parameter, the subsequent packets added to queue 0 are discarded. For flow control in ATM
transmission, this parameter indicates the threshold for discarding packets in the CBR queue.
UlGBR    BSC6900    SET UUSERGBR(Optional)
Meaning: Uplink guaranteed bit rate (GBR) of the BE service. The GBR is the minimum bit rate that the
system can guarantee for the service. When BearType is set to R99, the value of UlGBR cannot be
greater than D384.
BearType    BSC6900    SET UUSERGBR(Mandatory)
Meaning: Bearer type of the service. R99 indicates that the service is carried on a non-HSPA channel.
HSPA indicates that the service is carried on an HSPA channel.
LEIBRZ    BSC6900    ADD ADJMAP(Mandatory), MOD ADJMAP(Optional)
Meaning: Bronze user load EQ index used by the current adjacent node.
GUI Value Range: 0~63
Actual Value Range: 0~63
Unit: None
Default Value: None
RXTRFX    BSC6900    ADD AAL2PATH(Mandatory), MOD AAL2PATH(Optional)
Meaning: RX traffic record index of the AAL2 path on the outgoing RNC port (ATM layer PVC traffic).
The traffic index is configured in the ATM traffic table (see LST ATMTRF).
DlLdrFourthAction    BSC6900    ADD UNODEBLDR(Optional), MOD UNODEBLDR(Optional)
Meaning: This parameter has the same content as DlLdrFirstAction. The selected actions, however,
should be unique.
GUI Value Range: NoAct(no action), BERateRed(BE traff rate reduction), QoSRenego(uncontrolled
real-time traff Qos re-negotiation), CSInterRatShouldBeLDHO(CS domain inter-rat should be load
handover), PSInterRatShouldBeLDHO(PS domain inter-rat should be load handover),
CSInterRatShouldNotLDHO(CS domain inter-rat should not be load handover),
PSInterRatShouldNotLDHO(PS domain inter-rat should not be load handover)
Actual Value Range: NoAct, BERateRed, QoSRenego, CSInterRatShouldBeLDHO,
PSInterRatShouldBeLDHO, CSInterRatShouldNotLDHO, PSInterRatShouldNotLDHO
Unit: None
Default Value: NoAct
UlLdrFourthActi BSC6900 ADD Meaning: This parameter has the same content as
on UNODEBLDR(Optional) UlLdrFirstAction. The selected actions, however,
MOD should be unique.
UNODEBLDR(Optional)
GUI Value Range: NoAct(no action),
BERateRed(BE traff rate reduction),
QoSRenego(uncontrolled real-time traff Qos
re-negotiation), CSInterRatShouldBeLDHO(CS
domain inter-rat should be load handover),
PSInterRatShouldBeLDHO(PS domain inter-rat
should be load handover),
CSInterRatShouldNotLDHO(CS domain inter-rat
should not be load handover),
PSInterRatShouldNotLDHO(PS domain inter-rat
should not be load handover)
Actual Value Range: NoAct, BERateRed,
QoSRenego, CSInterRatShouldBeLDHO,
PSInterRatShouldBeLDHO,
CSInterRatShouldNotLDHO,
PSInterRatShouldNotLDHO
Unit: None
Default Value: NoAct
The Default Value field applies only to optional parameters; the "-" symbol indicates that the parameter has no default value.
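The two congestion-related thresholds in Table 8-1 can be sketched together. This is a minimal illustration under stated assumptions, not the BSC6900 implementation: the class and function names (`Queue0`, `backward_congested`) and the time units are hypothetical, and the sketch only mirrors the comparisons described in the parameter meanings.

```python
from collections import deque

def backward_congested(avail_bw_kbps: float, bwdcongbw_kbps: float) -> bool:
    """BWDCONGBW semantics: backward congestion is triggered when the
    available backward bandwidth is less than or equal to the threshold."""
    return avail_bw_kbps <= bwdcongbw_kbps

class Queue0:
    """DROPPKTTHD0 semantics: new packets are discarded once the oldest
    buffered packet has waited at least the configured duration."""
    def __init__(self, drop_thd_s: float):
        self.drop_thd_s = drop_thd_s
        self.buf = deque()  # entries of (enqueue_time, packet)

    def enqueue(self, packet, now: float) -> bool:
        # Drop the arriving packet if the head of the queue has already
        # been buffered for the threshold duration or longer.
        if self.buf and now - self.buf[0][0] >= self.drop_thd_s:
            return False  # packet discarded
        self.buf.append((now, packet))
        return True

q = Queue0(drop_thd_s=0.05)               # hypothetical 50 ms threshold
print(q.enqueue("p1", now=0.00))          # True: queue empty, accepted
print(q.enqueue("p2", now=0.02))          # True: head buffered 20 ms < 50 ms
print(q.enqueue("p3", now=0.06))          # False: head buffered 60 ms >= 50 ms
print(backward_congested(50, 64))         # True: 50 kbit/s <= 64 kbit/s
```

Note the "at or below"/"at or above" boundary behavior in both checks: reaching the threshold exactly is enough to trigger the alarm or the packet discard, matching the "less than or equal to" and "greater than or equal to" wording in the table.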
9 Counters
There are no specific counters associated with this feature.
10 Glossary
For the acronyms, abbreviations, terms, and definitions, see the Glossary.
11 Reference Documents
[1] Load Control Feature Parameter Description of the RAN
[2] IP BSS Feature Parameter Description of the GBSS
[3] IP RAN Feature Parameter Description of the RAN
[4] Common Transmission Feature Parameter Description of the SingleRAN
[5] SRNS Relocation and DSCR Feature Parameter Description of the RAN
[6] HSDPA Feature Parameter Description of the RAN
[7] HSUPA Feature Parameter Description of the RAN
[8] Flex Abis Feature Parameter Description of the GBSS
[9] Bandwidth Sharing of MBTS Multi-Mode Co-Transmission Feature Parameter Description of the
SingleRAN