Wcdma Optimization PDF
Soft Handover Overhead was higher than 45% in the RNC; this could not meet the KPI target, so the customer asked us to optimize the SHO overhead.
Using iNastar, we checked cell coverage to find overshooting and reduce SHO overhead. We found some cells whose coverage was too large, and asked the customer to increase the antenna downtilt of those cells.
Some parameter values differed from Huawei's recommended values; in particular, TrigTime1A (1A time to trigger) was still using NSN's setting two years after the swap of the NSN network.
SET UPSINACTTIMER
PsInactTmrForCon
PsInactTmrForStr
PsInactTmrForInt
PsInactTmrForBac
Meaning: When the PDCP layer detects that the PS user has had no data to transfer for longer than this timer, it requests the RRC layer to release the radio access bearer. The number of normal releases therefore increases, which decreases the PS CDR = Abnormal Releases / (Abnormal Releases + Normal Releases).
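The release-type arithmetic above can be sketched in a few lines; the figures are hypothetical, chosen only to show how converting idle sessions into normal releases shrinks the drop ratio:

```python
def ps_cdr(abnormal_releases: int, normal_releases: int) -> float:
    """PS call drop rate as defined above: abnormal / (abnormal + normal)."""
    total = abnormal_releases + normal_releases
    return abnormal_releases / total if total else 0.0

# Hypothetical figures: a suitable inactivity timer turns idle sessions into
# normal releases, shrinking the ratio without changing the drop count.
before = ps_cdr(abnormal_releases=50, normal_releases=950)   # 5.0%
after = ps_cdr(abnormal_releases=50, normal_releases=1450)   # ~3.3%
assert after < before
```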
1) External Interference
We found that the KPIs for our site were not good, and the RTWP for all cells was very high.
We checked the RTWP for the new site GHB968:
We ran an RF frequency scan trace, which confirmed that there was external interference.
After this we confirmed that there was external interference in our network, so we informed our customer to clear it.
Always check the results for surrounding sites if you suspect interference.
2. We replaced all the WRFU and WBBP boards; the high RTWP did not disappear.
3. We blocked all GSM TRXs in the morning during the idle hour, but there was no improvement.
4. After monitoring KPIs for several days, we found that the RTWP sometimes returned to the normal level, so we suspected interference was causing the high RTWP. We checked the installation and saw one antenna very near the Huawei antenna.
We negotiated with the other operator about reducing their microwave power; after they reduced it, the RTWP returned to the normal value.
1) DL power congestion solved by admission
control and CPICH power optimization
Cells suffer from high DL power congestion affecting accessibility KPIs (RRC, CS RAB & PS RAB %)
We took two actions:
Optimize the CPICH power by decreasing it on both carriers:
MOD UCELL:CellId=40483, PCPICHPower=340;
MOD UCELL:CellId=40488, PCPICHPower=340;
Optimize the DL load thresholds used by call admission control (CAC) for conversational AMR services, conversational non-AMR services, other services, and handover. A call is admitted only if the load after admission is below the threshold for its type (default values: 80%, 80%, 85%, and 75%).
MML Commands
MOD UCELLCAC: CellId=40483, DlConvAMRThd=92, DlConvNonAMRThd=92, DlOtherThd=90, DlHOThd=93, DlCellTotalThd=95;
MOD UCELLCAC: CellId=40488, DlConvAMRThd=92, DlConvNonAMRThd=92, DlOtherThd=90, DlHOThd=93, DlCellTotalThd=95;
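The admission rule described above can be sketched as follows; the threshold values mirror the MML settings after the change, while the load figures and function names are hypothetical:

```python
# Hypothetical sketch of the per-service DL admission decision: a request is
# admitted only if the predicted DL load after admission stays below the
# threshold for its service type (values from the MOD UCELLCAC change above).
THRESHOLDS = {
    "conv_amr": 92,      # DlConvAMRThd
    "conv_non_amr": 92,  # DlConvNonAMRThd
    "other": 90,         # DlOtherThd
    "handover": 93,      # DlHOThd
}

def admit(service_type: str, current_load: float, load_increment: float) -> bool:
    """Return True if the call is admitted by DL power CAC."""
    predicted = current_load + load_increment
    return predicted < THRESHOLDS[service_type]

assert admit("conv_amr", 85.0, 5.0)        # predicted 90 < 92: admitted
assert not admit("other", 88.0, 4.0)       # predicted 92 >= 90: rejected
```

Raising the thresholds, as in this case, simply widens the admitted region at the cost of higher post-admission load.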
Cause Analysis: CS IRAT and PS IRAT were bad because of a high physical channel failure rate at the worst cells (which points to RF problems), plus failures due to congestion (found only in CS, as PS has no preparation phase).
After finding the two major reasons for the CS and PS IRAT failures, we investigated further and reached the conclusions below.
Handling Process: We now know that the root cause of the poor IRAT performance was congestion at the target 2G cells and poor 2G coverage at the time of the IRAT handovers. Capacity augmentation was done by the 2G team on request for the congested 2G cells, and CS and PS IRAT performance improved after this.
We also performed the parameter optimization below to further improve IRAT performance, as it was still below baseline:
1)
3A event:
The estimated quality of the currently used UTRAN frequency is below a certain
threshold and the estimated quality of the other system is above a certain threshold
QOtherRAT + CIOOtherRAT ≥ TOtherRAT + H3a/2
QUsed ≤ TUsed - H3a/2
Recommended values of TOtherRAT: we changed TargetRatHThd from 16 to 26.
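A minimal sketch of the two event 3A inequalities above; the measurement values in the example are hypothetical:

```python
def event_3a_triggered(q_used: float, t_used: float,
                       q_other: float, cio_other: float,
                       t_other: float, h3a: float) -> bool:
    """Event 3A per the two inequalities above: the used UTRAN frequency is
    below its threshold AND the other system (plus its CIO) is above its
    threshold, each shifted by half the hysteresis H3a."""
    utran_poor = q_used <= t_used - h3a / 2
    other_good = q_other + cio_other >= t_other + h3a / 2
    return utran_poor and other_good

# Hypothetical values: CPICH Ec/No -16 dB vs TUsed -14 dB, GSM RSSI -80 dBm
# vs TOtherRAT -85 dBm, hysteresis 4 dB: both conditions hold, 3A fires.
assert event_3a_triggered(q_used=-16, t_used=-14,
                          q_other=-80, cio_other=0, t_other=-85, h3a=4)
```

Raising TOtherRAT (as with the TargetRatHThd change above) tightens the second inequality, so the 2G cell must be stronger before 3A triggers.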
2)
3)
In event 3A:
QOtherRAT + CIOOtherRAT ≥ TOtherRAT + H3a/2
The CIO here is a composite of CIO(2G) + CIOoffset(3G2G), so we decreased CIOoffset to give 2G a lower priority as a handover target.
Here F2 is G31377.
Here F1 is G31373 and F2 is G31377.
1) HSDPA low throughput analysis
During the DT of a cluster we found that the throughput was low in specific areas, as per the snap below.
Radio conditions were good; the CQI on that road was very good (average above 23), as verified in the snap below.
Iub utilization was normal and there was no congestion, nor was there power congestion; below are snaps of the Iub utilization at the test time.
We went deeper to check the number of codes assigned to the UE during the test, and found that the number of codes was very low, as per the snap.
Reason: we found that the NodeB license for the number of codes was normal and the dynamic code allocation feature was activated on the NodeB, but when we checked the average number of users per hour in a day we found that the cell was serving a lot of HSDPA users; the snap below shows the number of users hourly.
8) HSDPA Rate was LOW due to 16QAM not
activated
The operator was swapping vendors, and after we swapped the first cluster we found the HSDPA rate was low compared to the value we had before the swap.
1- We sent a DT engineer to run a test.
2- We also checked the Iub bandwidth, the number of HSDPA users configured on the sites, and the number of codes configured for each site.
3- From point 2 we found everything was OK.
4- But from the DT log files we found the following:
5- All the samples were under QPSK, with zero samples at 16QAM.
We checked the NodeB configuration and found the 16QAM switch enabled on all the sites from LST MACHSPAR.
We found one item missing from our NodeB license: the HSDPA RRM license. After activating it, 16QAM worked and the throughput for the same HSDPA traffic increased.
1) Idle Mode 2G-3G optimization to stay more on
3G
To offload traffic from 2G and keep users on 3G coverage longer, we changed the parameter.
3G coverage and traffic increased, which can be seen from the increase in HSDPA throughput (more users on 3G for a longer duration). We also faced power and CE blocking due to the increase in 3G users on those sites, which was fixed.
HSDPA UE Mean Cell increased after the change, but dropped again from 20 Oct, probably due to increased power blocking.
Huawei Confidential
The reasons for this degradation were the following two issues; after setting them right, things returned to normal as seen in the two figures above.
1. The blind HO flag for the multi-carrier cells' inter-frequency relation was wrongly set.
Checking from the IOS trace, it is found that after the RNC sends the
below.
The problem was that the GSM cell was created and configured in co-BCCH mode, where the main BCCH is on 850 MHz while the other carrier is on 1900 MHz, as shown below from ADD GCELL.
But when the GSM cell was defined as an external neighbor to UMTS, it was defined in a band different from the actual one:
TYPE Freq Band | Meaning: This parameter specifies the frequency band of new cells. Each new cell can be allocated frequencies of only one frequency band. Once the frequency band is selected, it cannot be changed.
GSM900: The cell supports the GSM900 frequency band.
DCS1800: The cell supports the DCS1800 frequency band.
GSM900_DCS1800: The cell supports the GSM900 and DCS1800 frequency bands.
GSM850: The cell supports the GSM850 frequency band.
GSM850_DCS1800: The cell supports the GSM850 and DCS1800 frequency bands.
PCS1900: The cell supports the PCS1900 frequency band.
GSM850_PCS1900: The cell supports the GSM850 and PCS1900 frequency bands.
TGSM810: The cell supports the TGSM810 frequency band.
Unit: None
(From ADD UEXT2GCELL):
BandInd (Inter-RAT Cell Frequency Band Indicator) | Meaning: When the inter-RAT cell frequency number is within the range 512-810, this parameter indicates whether the frequency number belongs to the DCS1800 or PCS1900 frequency band.
Unit: None
So when the UE tried to hand over to the GSM PCS1900 band, the RNC instructed the UE to search the DCS1800 band instead, which caused the failure.
After the implementation, the CS IRAT Handover Success Rate improved markedly, as shown below:
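The ARFCN ambiguity behind this failure can be illustrated with a small sketch; the function names are illustrative, not part of any vendor API:

```python
def needs_band_indicator(arfcn: int) -> bool:
    """ARFCNs 512-810 are shared by DCS1800 and PCS1900, so the band
    indicator is required to tell the UE which band to search."""
    return 512 <= arfcn <= 810

def target_band(arfcn: int, band_indicator: str) -> str:
    """Resolve which 2G band the UE will search, as described above."""
    if needs_band_indicator(arfcn):
        return band_indicator          # "DCS1800" or "PCS1900"
    return "unambiguous"

# If the cell is really PCS1900 but the external-cell definition carries
# DCS1800, the UE searches the wrong band and the IRAT handover fails.
assert target_band(610, "DCS1800") == "DCS1800"
```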
12) Abnormal high RTWP due to improper setting
on NodeB
During cluster acceptance in the operator O swap project, it was found that cells W6374B3 and W6229B3 were always the top worst cells in AMR drops.
AMR drops for the 7 days:
PS DCR also had relatively poor KPIs, at 5-30% in these 2 cells.
Scanning for possible reasons for the drops, it was found that both cells had abnormally high RTWP.
1. After the revert, the RTWP of both cells returned to normal, at a level of -105 dBm as shown below.
2. The PS DCR of these 2 cells (W6229B3 & W6374B3) showed significant improvement, to a level of 1% as shown below.
T591C:
T6425B:
T6574A:
T5565C:
2nd Action: Request to change the UARFCN from Freq 1, Band 1 (UL 9613, DL 10563) to Freq Band 6 (UL 9738, DL 10688), which is 25 MHz apart from the 1st frequency, on site "120031_A_Dahlan_3G" for trial purposes.
After changing the frequency, the RTWP was normal.
So now we know there is interference on the 1st frequency, and we will continue using this 2nd trial frequency until the interference on the first one is solved. The problem with the 2nd frequency was that the KPIs were not good, as seen below:
CSSR decreased: RRC.FailConnEstab.NoReply bad.
DCR increased: VS.RAB.AbnormRel.PS.RF.SRBReset / VS.RAB.AbnormRel.PS.RF.ULSync / VS.RAB.AbnormRel.PS.RF.UuNoReply bad.
Traffic increased.
So we wanted to find the problem.
3rd Action: The first thing found wrong on the 2nd frequency from the parameter audit was that inter-frequency HO was not activated, unlike on the 1st frequency, per the parameter below:
We found HOSWITCH_HO_INTER_FREQ_HARD_HO_SWITCH=FALSE, which means no IFHO is performed.
Note that there is another switch, HO_ALGO_LDR_ALLOW_SHO_SWITCH: it activates only the inter-frequency HO triggered by LDR, i.e. whether the LDR action "inter-freq" can trigger an inter-frequency HO. The previous switch controls whether inter-frequency HO is activated at all, and is a prerequisite: if it is not activated, the LDR switch has no meaning.
Previously, on the 1st frequency, some UEs performed inter-frequency HO when there was no good intra-frequency cell. Without IFHO, the UE keeps working on the current frequency, which increases the traffic on it and also raises the call drop probability.
After fixing the switch, IFHO returned to normal; below is the IFHO success rate KPI:
There was improvement in all KPIs, but they were still not good, so we needed to improve further.
4th Action: We wanted to enhance the KPIs for the 2nd frequency even more. Checking the propagation delay distribution for site 120031_A_Dahlan_3G before and after changing the frequency, we found the site overshooting after the frequency change:
ID | Counter | Description
73423486 | VS.TP.UE.0 | Number of RRC Connection Establishment Requests with Propagation Delay of 0
73423488 | VS.TP.UE.1 | Number of RRC Connection Establishment Requests with Propagation Delay of 1
73423490 | VS.TP.UE.2 | Number of RRC Connection Establishment Requests with Propagation Delay of 2
73423492 | VS.TP.UE.3 | Number of RRC Connection Establishment Requests with Propagation Delay of 3
73423494 | VS.TP.UE.4 | Number of RRC Connection Establishment Requests with Propagation Delay of 4
73423496 | VS.TP.UE.5 | Number of RRC Connection Establishment Requests with Propagation Delay of 5
73423498 | VS.TP.UE.6.9 | Number of RRC Connection Establishment Requests with Propagation Delay of 6~9
73423510 | VS.TP.UE.10.15 | Number of RRC Connection Establishment Requests with Propagation Delay of 10~15
73423502 | VS.TP.UE.16.25 | Number of RRC Connection Establishment Requests with Propagation Delay of 16~25
73423504 | VS.TP.UE.26.35 | Number of RRC Connection Establishment Requests with Propagation Delay of 26~35
73423506 | VS.TP.UE.36.55 | Number of RRC Connection Establishment Requests with Propagation Delay of 36~55
73423508 | VS.TP.UE.More55 | Number of RRC Connection Establishment Requests with Propagation Delay Greater than 55
Each propagation delay represents three chips. The propagation distance of one chip is 78 m.
Therefore, one propagation delay corresponds to 234 m.
When the propagation delay is 0, it indicates that the UE is 0-234 m away from the base station.
When the propagation delay is 1, it indicates that the UE is 234-468 m away from the base
station.
When the propagation delay is 2, it indicates that the UE is 468-702 m away from the base
station.
......
When the propagation delay is 55, it indicates that the UE is 12870-13104 m away from the base
station
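The delay-to-distance mapping above is a direct multiplication, sketched here:

```python
CHIPS_PER_TP = 3      # one propagation-delay step spans three chips
METERS_PER_CHIP = 78  # one chip of propagation corresponds to 78 m

def tp_distance_range(tp: int) -> tuple[int, int]:
    """Distance band (min, max) in metres for a propagation delay value."""
    step = CHIPS_PER_TP * METERS_PER_CHIP  # 234 m per TP step
    return tp * step, (tp + 1) * step

assert tp_distance_range(0) == (0, 234)
assert tp_distance_range(2) == (468, 702)
assert tp_distance_range(55) == (12870, 13104)
```

The overshooting seen on the 2nd frequency corresponds to a long tail of samples in the high-TP counters.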
So as you can see, the 2nd frequency has more coverage. This comes from the fact that the 2nd frequency does not have continuous coverage like the 1st, since it is not commonly used by the neighboring sites; this resulted in fewer handovers, which extended its coverage.
1) Bad quality (Ec/No) due to high users/RTWP
There was bad Ec/No, as seen below in the DT.
This is not a permanent issue; it was found mainly in the busy hour, as seen below.
The problem was mainly due to high traffic: as seen below, when the number of users increases, the RTWP rises up to -92 dBm, which degrades the quality (Ec/No) in UL, and the same happens in DL.
So the problem was not external interference but high traffic.
There are a number of solutions to address high traffic.
VS.SHO.FailRLRecfgIur.OM.Tx | Number of failed radio link synchronous reconfigurations by DRNC on the Iur interface because of OM intervention (cause value: OM Intervention)
VS.SHO.FailRLRecfgIur.CongTx | Number of failed radio link synchronous reconfigurations by DRNC on the Iur interface because of insufficient RNC capability (cause value: Cell not Available, UL Scrambling Code Already in Use, DL Radio Resources not Available, UL Radio Resources not Available, Combining Resources not Available, Measurement Temporarily not Available, Cell Reserved for Operator Use, Control Processing Overload, or Not enough User Plane Processing Resources)
VS.SHO.FailRLRecfgIur.CfgUTx | Number of failed radio link synchronous reconfigurations by DRNC on the Iur interface because of improper configurations (cause value: UL SF not supported, DL SF not supported, Downlink Shared Channel Type not supported, Uplink Shared Channel Type not supported, CM not supported, Number of DL codes not supported, or Number of UL codes not supported)
VS.SHO.FailRLRecfgIur.HW.Tx | Number of failed radio link synchronous reconfigurations by DRNC on the Iur interface because of hardware failure (cause value: Hardware Failure)
VS.SHO.FailRLRecfgIur.TransCongRx | Number of failed radio link synchronous reconfigurations by DRNC on the Iur interface because of insufficient RNC transmission capability (cause value: Transport Resource Unavailable)
Note: a counter ending in Tx refers to the DRNC, while Rx refers to the SRNC.
According to the RNC statistics, the DRNC (ZTE) shows a much larger number of failures (VS.SHO.FailRLRecfgIur.CongTx, VS.SHO.FailRLAddIur.CongTx and VS.SHO.FailRLSetupIur.CongTx) than the SRNC (Huawei). Please find the respective pictures below.
After investigating the traces, the problem detected was heavy code congestion at the ZTE RNC; below are the counters for some cells in the ZTE RNC.
RNCId | CellId | CellName | Time(As day) | VS.RAC.DCCC.Fail.Code.Cong | VS.RAB.SFOccupy.Ratio | VS.RAB.SFOccupy.MAX | VS.RAB.SFOccupy
79 25656 256C5_6 2012-07-18 3.0000 0.9136 251.0000 233.8861
79 25652 256C5_2 2012-07-18 754.0000 0.9121 256.0000 233.5064
79 25655 256C5_5 2012-07-18 0 0.9107 246.0000 233.1368
79 14242 142U4_2 2012-07-21 822.0000 0.9097 255.0000 232.8829
79 28095 280C9_5 2012-07-18 0 0.9085 240.0000 232.5664
79 28891 288C9_1 2012-07-18 77.0000 0.9080 248.0000 232.4595
79 28896 288C9_6 2012-07-18 0 0.9080 243.0000 232.4520
79 45053 450C5_3 2012-07-18 85.0000 0.9080 253.0000 232.4490
79 27894 278C9_4 2012-07-22 63.0000 0.9072 255.0000 232.2551
79 62342 623U4_2 2012-07-25 808.0000 0.9068 254.0000 232.1405
79 24351 243C5_1 2012-07-18 89.0000 0.9067 255.0000 232.1035
79 62341 623U4_1 2012-07-18 223.0000 0.9066 254.0000 232.1025
79 14245 142U4_5 2012-07-18 0 0.9062 254.0000 231.9770
79 62343 623U4_3 2012-07-25 173.0000 0.9060 255.0000 231.9387
79 25651 256C5_1 2012-07-26 1562.0000 0.9059 256.0000 231.9010
79 53245 532U4_5 2012-07-18 0 0.9056 240.0000 231.8272
79 3754 037C5_4 2012-07-18 0 0.9051 255.0000 231.7155
79 25656 256C5_6 2012-07-20 0 0.9051 247.0000 231.6953
79 43752 437C5_2 2012-07-31 1025.0000 0.9051 255.0000 231.6940
79 3855 038C5_5 2012-07-18 34.0000 0.9049 256.0000 231.6653
79 25652 256C5_2 2012-07-27 109.0000 0.9049 256.0000 231.6500
79 28094 280C9_4 2012-07-18 18.0000 0.9049 256.0000 231.6447
79 28092 280C9_2 2012-07-18 874.0000 0.9049 248.0000 231.6443
79 43752 437C5_2 2012-07-29 906.0000 0.9048 256.0000 231.6314
79 24352 243C5_2 2012-07-18 30.0000 0.9047 248.0000 231.6035
79 17993 179C9_3 2012-07-23 585.0000 0.9047 255.0000 231.5929
79 43752 437C5_2 2012-07-30 871.0000 0.9045 256.0000 231.5526
79 25656 256C5_6 2012-07-19 1.0000 0.9045 246.0000 231.5394
79 25652 256C5_2 2012-07-23 31.0000 0.9044 255.0000 231.5190
79 25652 256C5_2 2012-07-19 200.0000 0.9043 253.0000 231.4931
79 62342 623U4_2 2012-07-26 219.0000 0.9041 256.0000 231.4475
79 62343 623U4_3 2012-07-26 1157.0000 0.9041 256.0000 231.4468
79 3653 036C5_3 2012-07-31 560.0000 0.9040 256.0000 231.4336
79 27896 278C9_6 2012-07-29 1247.0000 0.9040 256.0000 231.4212
So ZTE activated some algorithms on its side and changed some parameters to solve the problem, which was indeed solved, as seen below.
15) DCR KPI degraded after NodeB rehoming from
one RNC to another
Phenomenon Description: 29 NodeBs were rehomed to a new RNC on 24 May. The following showed that abnormal releases (the DCR numerator) increased significantly after 24 Jul, while normal releases (the DCR denominator) remained at almost the same level.
Then we went into the details to check the raw counters of every KPI, and found that the CS IRAT HO attempts had decreased to almost zero, and the same went for PS attempts. This explained why the DCR increased and CS traffic increased abnormally: CS calls were kept and dragged on 3G until the call dropped.
3. Based on this assumption, we compared the configurations of RNC Depok and RNC Depok2. There was no difference in parameters or switch configuration.
4. We then continued the verification on the RNC license and found a missing item, "Coverage Based Inter-RAT Handover Between UMTS and GSM/GPRS=ON", in RNC Depok2.
16) External Interference
Interference was found in the cells below.
• Amar_Taru (2286), 3rd sector.
• Panneri (2149), 1st sector.
Interference test analysis of Amar_Taru 3rd sector / Panneri 1st sector:
Field test observation: we changed the azimuth of the Panneri 1st sector from 40° to 160°, and at that moment the RTWP suddenly decreased. This means some unknown frequency is being generated by an unknown source near Andheri Station, at or very close to the RCOM UL centre frequency (1961.5 MHz).
Phenomenon Description
1. It is found that AMR call drops happen after compressed mode is triggered, as seen from NASTAR.
Analysis
Solution
There is improvement in the AMR call drop rate after the changes made to the IRAT 2D/2F parameter settings.
Cause Analysis
1. Resource Congestion;
2. Improper configuration;
3. RF issue;
4. CN issue;
5. Others
Handling Process:
1. Checked the traffic of sector B; the site has high traffic.
3. Analyzed the coverage on Nastar. The analysis result shows that the site can reach a distant area (TP=20, distance=4.6 km).
4. With the Nastar result, we then checked the site on Google Earth. It is clear that the site has overshooting and overlapping issues; adjusting the azimuth or downtilt is suggested.
After adjusting the downtilt and azimuth as the red arrow shows, the issue was resolved and the traffic was reduced.
Then we can monitor the counters as follows to check the effect of LDR action:
VS.LCC.LDR.InterFreq
VS.LCC.LDR.BERateDL
VS.LCC.LDR.BERateUL
Note: power congestion usually does not happen in a dual-carrier cell. For a single-carrier site, if power congestion is serious, carrier expansion is recommended.
Analysis:
• Uplink power congestion was found on site 102373_SEKELOA_3G even though the parameter ULTOTALEQUSERNUM had been set to 200 (the maximum value).
Action:
Disable UL power CAC for cells with high UL power congestion. For any cell where UL power congestion still appears even though ULTOTALEQUSERNUM has been set to 200 (the maximum value), we decided to disable UL power CAC by setting NBMUlCacAlgoSelSwitch in UCELLALGOSWITCH to ALGORITHM_OFF.
Uplink Power Congestion
Result:
• After changing the NBMUlCacAlgoSelSwitch setting, the uplink power congestion improved.
CS Domain
28 kbps -2 -17 64
32 kbps -2 -17 64
56 kbps 0 -15 32
PS Domain
32 kbps -4 -19 64
64 kbps -2 -17 32
Solution:
If the HS-PDSCH reserved code value is excessively high, HSDPA code resources are wasted and the admission rejection rate of R99 services increases due to code shortage.
We checked the site parameter configuration and found the HS-PDSCH code number was 12, so we changed it to 5 as per the baseline.
After reducing the HS-PDSCH codes, the problem was solved.
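A rough code-budget sketch of why a reservation of 12 starves R99, assuming the commonly cited 15 usable SF16 codes (the exact budget depends on the common-channel configuration, so treat the constant as an assumption):

```python
# Illustrative code-tree budget: assume 15 SF16 codes are usable for traffic
# (one SF16 branch is consumed by common channels and signalling).
TOTAL_SF16_CODES = 15

def r99_codes_left(hs_pdsch_reserved: int) -> int:
    """SF16-equivalent codes left for R99 after the HS-PDSCH reservation."""
    return TOTAL_SF16_CODES - hs_pdsch_reserved

assert r99_codes_left(12) == 3   # reserving 12 leaves little room for R99
assert r99_codes_left(5) == 10   # the baseline of 5 leaves far more headroom
```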
20) CS IRAT HO Problem due to LAC misconfiguration [HO]
When we implemented the work order for the RNC in one region, we got an IRAT HO success rate of 24%.
After we executed one work order on 69 sites of one RNC in that region, we got a great many IRAT failures.
BSC6900UCell IRATHO.FailRelocPrepOutCS.UKnowRNC
1. Checked the 3G-to-GSM handover neighbor data in the RNC and each NGSM cell's information; there was no problem there.
2. Traced the signaling in the RNC using the LMT and found many Prepare Handover failures with the reason "unknown target RNC". This was backed up by the counters from M2000:
IRATHO.FailOutCS.PhyChFail
IRATHO.FailRelocPrepOutCS.UKnowRNC
3. Based on that, we checked the configured LAC in the MSC, checked the MSC data, and found that the LAI was wrong.
After the LAI modifications in the RNC and MSC, we got an IRAT HO success rate of 97%.
21) How to improve PS IRAT Success rate
3G to 3G and 3G to 2G neighbor list review and optimization
- Adjust the parameter INTERRATPHYCHFAILNUM from 3 to 1 to apply the penalty right after the first physical channel failure.
Parameter ID: InterRatPhyChFailNum
Parameter Name: Inter-RAT HO Physical Channel Failure THD
Meaning: Maximum number of inter-RAT handover failures allowed due to physical channel failure. When the number of inter-RAT handover failures due to physical channel failure exceeds this threshold, a penalty is given to the UE. During the time specified by "PenaltyTimeForInterRatPhyChFail", the UE is not allowed to make inter-RAT handover attempts. For details about the physical channel failure, see 3GPP TS 25.331.
- For GSM cells that contribute high failures affecting the IRAT success rate, you can decrease their priority by adjusting the target-cell parameters (NPRIOFLAG, NPRIO, RATCELLTYPE).
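The penalty mechanism described for InterRatPhyChFailNum can be sketched as follows; the class name and the timing values are illustrative, not part of any RNC software:

```python
# Minimal sketch: after fail_threshold physical-channel failures, the UE is
# barred from inter-RAT handover attempts for penalty_time_s seconds, as the
# PenaltyTimeForInterRatPhyChFail description above states.
class IratPenalty:
    def __init__(self, fail_threshold: int, penalty_time_s: int):
        self.fail_threshold = fail_threshold
        self.penalty_time_s = penalty_time_s
        self.failures = 0
        self.penalty_until = 0.0

    def record_phych_failure(self, now: float) -> None:
        self.failures += 1
        if self.failures >= self.fail_threshold:
            self.penalty_until = now + self.penalty_time_s
            self.failures = 0

    def handover_allowed(self, now: float) -> bool:
        return now >= self.penalty_until

# With the threshold lowered from 3 to 1, a single failure starts the penalty,
# so repeated doomed attempts toward a failing 2G cell stop sooner.
p = IratPenalty(fail_threshold=1, penalty_time_s=30)
p.record_phych_failure(now=0.0)
assert not p.handover_allowed(now=10.0)
assert p.handover_allowed(now=31.0)
```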
After implementing the actions according to the KPI improvement plan (page 3), the target KPI, PS IRAT HO Success Rate, improved significantly from about 85.6% to 94.8%.
The top cell had nearly 30 R99 PS drops, while the other cells had only a few:
At the same time, the H2D time began to increase after 64QAM was activated:
Analyzing the RNC configuration, we found that HSPA+ services were not allowed to start CM:
With this configuration, a 64QAM user in bad coverage must first fall back from HSDPA to DCH before starting CM, which makes a drop more likely.
In the IOS, some users dropped after the 64QAM UE returned to DCH because of bad coverage:
Solution
According to the analysis above, HSPA+ services could not support CM, so HSPA+ users in bad coverage returned to DCH, which increased the R99 PS drop ratio. The fix is to permit CM for HSPA+ users:
SET UCMCF: EHSPACMPermissionInd=TRUE;
2) SHO OVERHEAD PROBLEM solved by optimizing
event 1B
While working on project B, I found that the SHO overhead in the RNCs was high.
In the trial optimisation I proposed two batches:
1st Batch
1. Select cells where SHO overhead is high and which have high traffic/congestion.
2. Adjust the antenna e-tilt to control coverage. If the e-tilt is already at maximum, go to (3).
3. Adjust the SHO parameters IntraRelThdFor1BCSNVP and IntraRelThdFor1BPS from 12 (meaning 6 dB) to 10 (meaning 5 dB) to increase the probability of triggering event 1B and improve SHO overhead.
2nd Batch
1. Select cells where SHO overhead is still high.
2. Change TRIGTIME1B from 640 ms to 320 ms for further improvement.
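The 0.5 dB encoding of the 1B threshold, and its effect on when event 1B fires, can be sketched as follows; the Ec/No figures are hypothetical:

```python
def internal_to_db(value: int) -> float:
    """IntraRelThdFor1B* is stored in 0.5 dB steps, so 12 means 6 dB."""
    return value / 2.0

def event_1b_triggered(best_ecno_db: float, cell_ecno_db: float,
                       rel_thd_internal: int) -> bool:
    """Hedged sketch: event 1B removes a cell from the active set when its
    Ec/No falls below the best cell by more than the relative threshold."""
    return best_ecno_db - cell_ecno_db > internal_to_db(rel_thd_internal)

# Lowering the threshold from 12 (6 dB) to 10 (5 dB) makes 1B fire earlier,
# trimming marginal legs from the active set and cutting SHO overhead.
assert not event_1b_triggered(-8.0, -13.5, 12)   # 5.5 dB gap, under 6 dB
assert event_1b_triggered(-8.0, -13.5, 10)       # 5.5 dB gap, over 5 dB
```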
ID | Counter | Description
73439970 | VS.FACH.DCCH.CONG.TIME | Congestion Duration of DCCHs Carried over FACHs for Cell
73439971 | VS.FACH.DTCH.CONG.TIME | Congestion Duration of DTCHs Carried over FACHs for Cell
These counters provide the duration for which the DCCHs/DTCHs carried over the FACHs in a cell are congested. Unit: s
Step 1: Increase the SF of the 2nd SCCPCH from SF64 to SF32.
It was found that the main contributor to the SPU load is soft handover. Most of the NodeBs are six-sector NodeBs, so more RLs are established per UE.
From the network audit analysis, 27% of the SPU load is caused by soft handover.
Solution:
The event 1A triggering threshold is reduced to make the event less likely to occur. Below is the command; it was changed from the default value of 6. Below is the result after the change:
The soft handover overhead and SPU load were reduced after the change; SPU load usage dropped by more than 10%.
In addition, the call drop rate did not change after the changes.
Degradation in Paging Success Rate after Iu-Flex implementation
The customer in country M, at office M, reported a degradation in the Paging Success Rate for one RNC, IPRN5. The Paging Success Rate (PSR) for idle UEs on RNC IPRN5 had been degraded since 14 Sep 2012.
CAUSE ANALYSIS
o The problem is shown in Figure 1, where the Iu Paging Success Ratio is degraded.
o As shown in Figure 2, the RRC connection success rate stayed almost the same. This indicates that there is nothing wrong with the common path that RRC connection and paging share, including the Uu interface, NodeB, Iub, and some internal modules of the RNC.
Figure 2 RRC connection success rate
Figure 3 VS.PAGING.FC.Disc.Num for all CPUs
o In addition, from the performance file, no PCH congestion was found at all, as shown in Figure 4, and no paging was discarded either.
o This shows that paging messages were delivered successfully from the Iu interface to the Uu interface. Together with point 1, this indicates the PSR deterioration was not caused by the UTRAN.
Figure 4 VS.RRC.Paging1.Loss.PCHCong for all cells
The PSR for idle UEs on the RNC is calculated by the formula PSR = VS.RANAP.Paging.Succ.IdleUE / VS.RANAP.Paging.Att.IdleUE. The numerator and denominator are shown in Figure 5.
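As a quick sketch, assuming PSR is successes over attempts as the counter names suggest (the figures below are hypothetical):

```python
def paging_success_rate(attempts: int, successes: int) -> float:
    """Idle-UE paging success rate: successful pagings over attempts."""
    return successes / attempts if attempts else 0.0

# Repeated useless paging attempts after Iu-Flex inflate the denominator and
# drag the ratio down even when delivery on Uu is unchanged.
assert paging_success_rate(10000, 9200) == 0.92
assert paging_success_rate(12000, 9200) < 0.92
```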
From a one-hour Iu trace, there were 286 location update failures out of 4042 location update requests in total, with the reason shown in Figure 7. All the failures were received from the CN.
FINDINGS:
From the analysis, we can say that after Iu-Flex the repeated-paging mechanism may be altered, which can bring in more useless paging attempts. As a result, the PSR on the RNC is degraded.
We found that when the UL power was congested, the traffic was only a little high, so we reduced the CPICH power by 1 dB to decrease the coverage. However, the UL power was still congested after the revision, so we doubted that a lack of resources was the root cause.
We checked the current network parameters and found the uplink CAC algorithm switch of the affected cell set to ALGORITHM_SECOND (the equivalent-user-number algorithm).
Algorithm Content
ALGORITHM_OFF Uplink power admission control algorithm disabled.
ALGORITHM_FIRST Power-based increment prediction algorithm for uplink admission control.
ALGORITHM_SECOND ENU-based admission algorithm for uplink admission control.
ALGORITHM_THIRD Power-based non-increment prediction algorithm for uplink admission control.
If we use ALGORITHM_SECOND, the network performs admission control based on the uplink equivalent number of users (ENU) of the cell and the predicted ENU caused by admitting new users.
That is, different service types count as different numbers of equivalent users. When the cell's equivalent number of users exceeds the set value (here 95), the cell denies user access.
Following this algorithm principle, we used ALGORITHM_OFF to disable the uplink call admission control algorithm.
After monitoring the KPIs for several days, we found that they reached normal levels, with no abnormal fluctuations in other KPIs:
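A hypothetical sketch of ENU-based admission (ALGORITHM_SECOND); the per-service weights are illustrative, not Huawei's actual equivalence factors:

```python
# Each service type counts as some number of equivalent users; a new user is
# admitted only if the predicted ENU stays within the limit (here 95, as in
# the case above). Weights are illustrative assumptions.
ENU_WEIGHTS = {"amr": 1.0, "ps64": 2.0, "ps384": 6.0}
ENU_LIMIT = 95.0

def admit_ul(current_enu: float, service: str) -> bool:
    """Return True if UL admission control accepts the new user."""
    return current_enu + ENU_WEIGHTS[service] <= ENU_LIMIT

assert admit_ul(90.0, "amr")        # predicted 91 <= 95: admitted
assert not admit_ul(90.0, "ps384")  # predicted 96 > 95: rejected
```

Setting the switch to ALGORITHM_OFF, as done in this case, removes this check entirely, trading protection against overload for higher admission.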
For uplink power congestion, we can analyze from the following two aspects:
1. Lack of resources.
a: Check whether CE resources are adequate.
b: Adjust the coverage by modifying the pilot power and the maximum transmit power, or by RF adjustment.
2. Parameter settings causing the issue.
Adjust cell parameters such as the access control algorithm.
Cell locations at LAC borders
The plot below shows the locations of cells with less than 98% RRC registration success rate together with the LAC borders; most of these cells are located on LAC borders or cover open areas.
Shown below is the overall TP distribution for the X-area cells. As shown in the map, these cells face open areas with no overlapping 3G coverage.
CS traffic has increased after the swap, hence there is no loss of coverage compared with the legacy network.
The cause of the problem was that the attenuation was not set and the TMA was not configured, although both were physically present on the site.
On investigation we found that the cells with high RSSI had a TMA before the swap, but it was not configured in the Huawei system afterwards. The attenuation also needed to be set accordingly.
Detailed Analysis:
In the Moran RNC on the Mosaic project, UL power congestion was noticed, and PS RAB success was affected by it. To improve it, the cell load reshuffling parameters in UCELL_UU_LDR were changed, after which PS RAB success recovered.
[Table: Cluster Name | Start time | RAB Setup Success Rate (PS) | Call Setup Success Rate (PS) | Number of Failed PS RAB Establishments for Cell (UL Power Congestion)]
But even after changing this parameter, the UL power congestion problem was not fully resolved; there was some improvement, but congestion remained.
1.4 The UL power congestion problem was resolved after changing this parameter
Detailed Analysis:
In the Meteor RNC on the Mosaic project, PS RAB was degraded on one site, so the UCELL CAC UL equivalent-user-number parameter was changed, after which PS RAB recovered.
Failure Reason Analysis
Analyzing the counters related to CS/PS RABs, it was found that many calls were failing on the counter "Number of Failed PS RAB Establishments for Cell (UL Power Congestion)" on one particular cell, Ashford_MMC_2.
[Table: Cluster Name | Start time | RAB Setup Success Rate (CS) | Call Setup Success Rate (CS) | RAB Setup Success Rate (PS) | Call Setup Success Rate (PS) | Number of Failed PS RAB Establishments for Cell (UL Power Congestion)]
DESCRIPTION OF PARAMETER:
Impact on network performance: If the value is too high, the system load after admission may be too large, which impacts system stability and leads to system congestion. If the value is too low, the probability of user rejection increases, resulting in wasted idle resources.
[Table: Cluster Name | Start time | RAB Setup Success Rate (CS) | Call Setup Success Rate (CS) | RAB Setup Success Rate (PS) | Call Setup Success Rate (PS) | Number of Failed PS RAB Establishments for Cell (UL Power Congestion)]
1.4 The UL power congestion problem was resolved after changing this parameter
Report for CS RAB failure due to DL power congestion, improved by switching the DLALGOSWITCH parameter OFF
Detailed Analysis:
In the Meteor RNC on the Mosaic project, CS RAB was bad on one site, so the DL CAC algorithm switch was changed to OFF, after which CS RAB recovered.
[Table: Cluster Name | Start time | Call Setup Success Rate (CS) | Number of Failed CS RAB Establishments for Cell (DL Power Congestion)]
Change the DL CAC algorithm switch from ALGORITHM_FIRST to OFF.
DESCRIPTION OF PARAMETER:
1. In the OFF state, the DL CAC algorithm is disabled.
With ALGORITHM_FIRST applied, new calls are rejected once the load factor threshold is reached; with the algorithm disabled, new calls can still be admitted. We set it to OFF mostly when there is high load on the site, while ALGORITHM_FIRST is used when there are more sites nearby: once a certain load threshold is reached, calls can be carried by a nearby BTS.
1.3 KPI analysis: KPIs were analysed after the change and improvement was found.
KPIs attached for reference:
Cluster Name | Start time | Call Setup Success Rate (CS) | Number of Failed CS RAB Establishments for Cell (DL Power Congestion)
BallyguileHill_MMC_F1_1 | 11/30/2012 18:00 | 99.42% | 0
BallyguileHill_MMC_F1_2 | 11/30/2012 18:00 | 100.00% | 0
BallyguileHill_MMC_F1_1 | 11/30/2012 19:00 | 99.65% | 0
BallyguileHill_MMC_F1_2 | 11/30/2012 19:00 | 100.00% | 0
BallyguileHill_MMC_F1_1 | 11/30/2012 20:00 | 99.75% | 0
BallyguileHill_MMC_F1_2 | 11/30/2012 20:00 | 100.00% | 0
BallyguileHill_MMC_F1_1 | 11/30/2012 21:00 | 99.04% | 0
BallyguileHill_MMC_F1_2 | 11/30/2012 21:00 | 100.00% | 0
BallyguileHill_MMC_F1_1 | 11/30/2012 22:00 | 99.60% | 0
BallyguileHill_MMC_F1_2 | 11/30/2012 22:00 | 100.00% | 0
BallyguileHill_MMC_F1_1 | 11/30/2012 23:00 | 99.58% | 0
BallyguileHill_MMC_F1_2 | 11/30/2012 23:00 | 100.00% | 0
1.4 The DL power congestion problem was resolved after changing this parameter
Phenomenon Description
Alarm Information
none
Cause Analysis
Check the behavior of all counters in the HSUPA call drop formula.
Check the expected behavior of the system when CM for HSUPA is permitted.
Phenomenon Description: In country R, during a WCDMA optimization project, at the RRC CSSR optimization step the RNO team found an abnormal distribution of RRC attempts with the registration cause: around 50% of total RRC attempts. The hardware version is BSC6810V200R011C00SPC100.
Analysis Procedure: From the statistics for RNC 4016, VS.RRC.AttConnEstab.Reg takes around 50% of the total RRC connection establishment attempts. The attempts are distributed normally among the cells.
RNCName Time(As day) VS.RRC.AttConnEstab VS.RRC.AttConnEstab.Reg
RNC:4016 2011-08-10 791541 414010
RNC:4016 2011-08-11 811675 462559
RNC:4016 2011-08-12 796428 424042
RNC:4016 2011-08-13 815134 446783
RNC:4016 2011-08-14 835164 450958
[Chart: VS.RRC.AttConnEstab vs VS.RRC.AttConnEstab.Reg per day for RNC:4016, 2011-08-10 to 2011-08-14]
At the same time, the other two RNCs showed no such situation; their RRC attempts with the registration cause were no more than 15%.
These results exclude a CN problem, because all three RNCs in this region share the same CN.
For the second possible reason, the RNO team decided to perform a drive test to check coverage and UE behavior. As a result, it was found that the UE repeatedly performed a combined RA/LA update, and the location update failed every time with the cause "MSC temporarily not reachable", while the RA update was performed successfully.
[Chart: daily VS.RRC.AttConnEstab vs VS.RRC.AttConnEstab.Reg]
Suggestion: RAN performance optimization needs to pay attention to the whole network structure, including the transmission and core network. A wrong setting of a global parameter such as NMO brings additional UE power consumption, extra radio resource consumption, and additional RNC SPU and CN signalling load.