System Capacity Optimization of UMTS FDD Networks
DISSERTATION
submitted in fulfilment of the requirements for the academic degree of
Doktor der technischen Wissenschaften (Doctor of Technical Sciences)
at the Technische Universität Wien,
Faculty of Electrical Engineering and Information Technology,
by
Alexander GERDENITSCH
Matriculation number 9656381
A-7022 Loipersbach, Berggasse 1
Abstract
In this thesis I investigate the problem of capacity optimization in UMTS FDD
networks. The goal is to improve the capacity of the network, measured as served users, only by changing the base station parameters. The focus is on the
optimization of antenna tilt and common pilot channel (CPICH) power of the
base stations. These parameter adjustments improve the UMTS radio network
capacity by reducing inter-cell interference, achieving cell load sharing,
and optimizing base station power resources.
Altogether five different algorithms for finding the best settings of antenna tilt
and CPICH power are presented. The first three optimization algorithms, Rule
Based Approach, Simulated Annealing and Adaptive Rule Based Approach, are
local techniques. Furthermore, a global technique, the Genetic Algorithm, is
presented, and finally an Analytic Optimization Algorithm is discussed.
The fitness function used for the algorithms considers the number of served users
as the main optimization goal. For the Genetic Algorithm I use a fitness function
that additionally also considers coverage and soft handover.
First, the Rule Based Approach is addressed. The optimization process is characterized by reducing the CPICH power and increasing the antenna downtilt in the
individual cells according to a configurable rule set. Subsequently, this algorithm
is extended by incorporating Simulated Annealing. Here, the decision whether to
accept a worse result is, in contrast to the first method, independent of the rule
set. The third local algorithm is also a further development of the Rule Based
Approach. The main difference between the Adaptive Rule Based Approach and
the other two local approaches is that CPICH power and antenna tilt are changed
together, and that an increase of CPICH power and antenna up-tilting are also
possible during the optimization process.
Further, a Genetic Algorithm is introduced, which I improved by using operators
adapted to the UMTS capacity optimization problem, taking into account the
quality of the network. In addition, a local optimization is included
to improve the performance.
Finally, I address an Analytical Optimization Algorithm. Beside antenna tilt and
CPICH power settings, this algorithm also optimizes the antenna azimuth.
The performance of the algorithms is evaluated using a static UMTS FDD network simulator on two virtual scenarios of a typical European city. In the first
scenario the network covers the whole area of the city. The second scenario
spans only the downtown area.
With the different algorithms, I show improvements in capacity of up to 105 %
compared to the initial settings. The Genetic Algorithm performs best, but with
the drawback of a high computation time. If we compare the three local optimization techniques, Rule Based Approach, Simulated Annealing and Adaptive Rule
Based Approach, we see that the Adaptive Rule Based Approach achieves the
highest improvement. The computation effort for all three algorithms is approximately the same. The Analytic Optimization Algorithm shows, with only five
network evaluations, almost the same optimization result as the local algorithms.
Zusammenfassung
This dissertation deals with capacity optimization in UMTS mobile radio networks. The goal is to increase the network capacity, measured as the number of served users, solely by optimizing base station parameters. The antenna tilt and the transmit power of the common pilot channel (CPICH) are used for the optimization. A correct setting of these parameters increases the capacity of the UMTS network by reducing inter-cell interference. Furthermore, the load is distributed evenly across the individual cells, and the power resources of the base stations are optimized.
Altogether, five different algorithms for finding the optimal settings of antenna tilt and CPICH power are presented. The first three optimization algorithms (a rule-based algorithm, an adaptive algorithm based on it, and a Simulated Annealing algorithm) are local techniques. In addition, a global technique, a Genetic Algorithm, is investigated. The last algorithm discussed is an analytic optimization algorithm.
The fitness function used expresses the optimization goal (capacity increase) through the number of served users. For the Genetic Algorithm, the fitness function is extended by the degree of network coverage and the number of users connected to more than one base station simultaneously.
First, I present the rule-based algorithm. Its optimization process is characterized by increasing the antenna downtilt and reducing the CPICH power in the individual cells according to a configurable rule set. Subsequently, the algorithm is extended by incorporating Simulated Annealing. In contrast to the previous algorithm, the decision whether to accept a worse result is independent of the rule set. The third algorithm is likewise an extension of the first one. The fundamental difference to the previous algorithms is that antenna tilt and CPICH power are now adapted together, and that an increase of the CPICH power as well as a reduction of the antenna downtilt are also permitted during the optimization process.
Further, I treat a Genetic Algorithm adapted to my problem. By means of adapted operators, the quality of the mobile radio network is also taken into account in the optimization. The Genetic Algorithm also includes a local optimization, with the goal of further increasing the performance of the algorithm.
Finally, my dissertation treats an analytic optimization algorithm. Beside antenna tilt and CPICH power, this algorithm also optimizes the antenna azimuth.
The performance of the individual algorithms is evaluated with a static UMTS FDD network simulator on two virtual scenarios of a typical European city. The first scenario covers the complete city area, whereas the second covers only the city center.
With the individual algorithms I show, on both scenarios, a capacity increase of up to 105 % compared to the initial parameter settings. The Genetic Algorithm delivers the best result, but with the drawback of a long runtime. Among the local optimization techniques, the adaptive rule-based algorithm performs best; the runtime, however, is approximately the same for all three algorithms. The analytic optimization algorithm shows a capacity increase similar to the local techniques, with the advantage that it requires only five instead of more than a hundred iterations.
Acknowledgment
I am deeply grateful to Prof. Ernst Bonek for his guidance and invaluable support
during my work. I thank him for his encouragement during the course of this
work and for various suggestions improving the quality of this thesis.
I am also very grateful to Martin Toeltsch and Thomas Neubauer of SYMENA,
Software & Consulting GmbH, for the numerous discussions and their fruitful
collaboration. Further, I thank them for providing the static UMTS FDD network
simulator CAPESSO™.
Special thanks go to Thomas Baumgartner and Werner Weichselberger for many
critical and useful suggestions and discussions.
My very great appreciation goes to all my colleagues, Plamen Dintchev, Klaus
Contents

1 Introduction

2 Radio System Aspects
   2.1 Introduction
   2.3.1 Synchronization Channel
      2.3.1.1 Primary SCH
      2.3.1.2 Secondary SCH
   2.3.3 Broadcast Channel
   2.3.6 Paging Channel
   2.3.7 Dedicated Channel
   2.4.1 Cell Search
   2.4.2 Power Control
   2.4.3 Handover

3.1 General Issues
3.2 Local Search
3.3 Simulated Annealing
3.4 Tabu Search
3.5 Evolutionary Algorithms
   3.5.1 Genetic Algorithms
      3.5.1.1 Selection
      3.5.1.2 Recombination
      3.5.1.3 Mutation

4.1 Introduction
4.5 Summary

5.1 Introduction
5.2 Antenna Parameters
   5.2.1 Antenna Azimuth
   5.2.2 Antenna Tilt
5.5 Summary

6.1 Introduction
6.2 Fitness Function
6.3 Grade of Service
6.4 Performance Indicators
   6.4.1 Outaged Mobiles
   6.4.2 Quality Factor

7 Simulation Environment
   7.1 Introduction
   7.2.2.1 Mode 1
   7.2.2.2 Mode 2
   7.3 Simulator Interface
   7.4 Network Scenarios
      7.4.1 Big Scenario
      7.4.2 Small Scenario

8 Optimization Algorithms
   8.1 Introduction
   8.2.2 Simulated Annealing
   8.2.3.2 Algorithm Description
   8.3 Genetic Algorithm
      8.3.1 Representation
      8.3.2 Algorithm
         8.3.2.2 Selection
         8.3.2.3 Recombination
         8.3.2.4 Mutation

9.1 Introduction

11 Appendix
Simulation Parameters
L Curriculum Vitae
Bibliography
List of Tables

9.3 Results for the Rule Based Approach with different parameter ranges.
9.4 Results for the Rule Based Approach with adapted antenna tilt in the start scenario.
9.5 Results for the Rule Based Approach with adapted CPICH power in the start scenario.
9.6 Results for Simulated Annealing with Slow Cooling, different values for …
9.7 Results for Simulated Annealing with Slow Cooling, different values for TC.
9.10 Results for the Adaptive Rule Based Approach with CPICH verification mode 1.
9.11 GA settings for the best optimization run on the small network scenario with CPICH verification mode 1.
9.12 Evaluation of the best result with 100 different snapshots on the small network scenario with CPICH verification mode 1.
9.13 GA settings for the best optimization run on the big network scenario with CPICH verification mode 1.
9.14 Evaluation of the best result with 100 different snapshots on the big network scenario with CPICH verification mode 1.
9.15 Evaluation of the best result with 100 different snapshots on the small network scenario with CPICH verification mode 2.
9.16 GA settings for the best optimization run on the small network scenario with CPICH verification mode 2.
9.17 Evaluation of the best result with 100 different snapshots on the big network scenario with CPICH verification mode 2.
9.18 GA settings for the best optimization run on the big network scenario with CPICH verification mode 2.
9.19 Results for the Analytic Optimization Algorithm with CPICH verification mode 1 (40 snapshots).
9.20 Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots), CPICH Ec/I0 threshold -12 dB, and required worst-case coverage probability of 0.5/0.75.
9.21 Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots), CPICH Ec/I0 threshold -12 dB, and required worst-case coverage probability of 0.8/0.98.
9.22 Comparison of the different algorithms with 50 different snapshots on the big network scenario with CPICH verification mode 1.
B.1 Preliminary environments identified by COST 259.
B.2 Propagation properties proposed by COST 259 and considered by 3GPP.
D.1 Standard rule set used for the Rule Based Approach.
E.1 Rule set 1 for Simulated Annealing.
E.2 Rule set 2 for Simulated Annealing.
F.1 Rule set 1 for the Adaptive Rule Based Approach.
F.2 Rule set 2 for the Adaptive Rule Based Approach.
F.3 Rule set 3 for the Adaptive Rule Based Approach.
F.4 Rule set 4 for the Adaptive Rule Based Approach.
List of Figures

2.12 Outer and inner TPC loop. The green-colored blocks reside in the physical (PHY) layer and the yellow-colored blocks in the radio resource control (RRC) layer (source: [74]).
2.16 Packet transmission over the UMTS air interface (source: [100]).
2.17 3GPP traffic classes classification.
3.5 Genetic Algorithm.
3.7 1-point crossover.
3.8 Mutation.
5.5 Capacity for different CPICH power and antenna tilt settings (capacity is measured as served users).
7.3 Base station locations and one user distribution of the big network scenario.
7.4 Base station locations and one user distribution of the small network scenario.
9.7 3D matrix of parameter range limits for the Rule Based Approach.
9.8 Block diagram for the simulation of the four rule sets over 50 snapshots.
9.9 Comparison of cdf curves for the four rule sets of the Adaptive Rule Based Approach (50 snapshots).
9.10 Results for the Genetic Algorithm on the small network scenario with CPICH verification mode 1.
9.11 Optimization run for the best result on the small network scenario with CPICH verification mode 1.
9.12 Results for the Genetic Algorithm on the big network scenario with CPICH verification mode 1.
9.13 Optimization run for the best result on the big network scenario with CPICH verification mode 1.
9.14 Mean results over 100 snapshots for the Genetic Algorithm on the small network scenario with CPICH verification mode 2.
9.15 Optimization run for the best result on the small network scenario with CPICH verification mode 2.
9.16 Mean results over 100 snapshots for the Genetic Algorithm on the big network scenario with CPICH verification mode 2.
9.17 Optimization run for the best result on the big network scenario with CPICH verification mode 2.
9.18 Results for the Analytic Optimization Algorithm with CPICH verification mode 1 (40 snapshots).
9.19 Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots) with CPICH Ec/I0 threshold -12 dB.
9.20 Comparison of the different algorithms on the big network scenario with CPICH verification mode 1.
A.1 Overview and basic entities of the UMTS network structure.
B.1 Channel shape (power delay profile) with multiple clusters (source: [7]).
B.2 Reduced complexity channel model parameters (source: [7]).
C.1 The principle of maximum ratio combining within the CDMA RAKE receiver (source: [54]).
Chapter 1
Introduction
Third generation (3G) mobile communication systems in Europe are known as
the Universal Mobile Telecommunication System (UMTS). UMTS is expected to
play a key role in creating the future mass market for high-quality multimedia
communications that will approach 2 billion users worldwide by the year 2010
[102]. Enabling anytime, anywhere connectivity to the Internet is just one of
the opportunities for UMTS networks. UMTS will bring more than just mobility
to the Internet. The major market opportunity will build on new possibilities
for mobile users like multi-media-messaging, location-based services, personalized
information, and entertainment experiences. Due to the number of new applications and services, a significant redistribution of operators' revenue will take place
in the coming years. Nobody knows the killer application per se, but all market
studies agree on one point: packet data will increasingly dominate the traffic
flows. Predictions from the heyday of the UMTS euphoria state that by 2007 more
data than voice will flow over mobile networks [102, 35]. This is an astonishing
forecast considering that mobile cellular networks today carry almost exclusively voice.
In 2nd generation systems, coverage planning and frequency planning, including
their optimization, are the most important, and also sufficient, tasks for operating
the network; coverage prediction and capacity estimation are largely separable.
In UMTS networks, where all users operate on the same frequency carrier,
the number of simultaneous connections directly influences the system capacity.
Multiple services such as speech, Internet access, and high-data-rate interactive
services will co-exist. Since higher bit-rate services require higher capacity, the
base station density will have to be increased.
In this thesis I focus on the problem of capacity optimization for UMTS FDD
networks. The goal is to improve the capacity of the network only by changing
the base station parameters and not by using more sites.
1.1
In June 1987 the RACE I project (Research of Advanced Communication Technologies in Europe) was initiated by the European Union. This was the official
start of the research and development activities towards a third generation mobile communication system in Europe. After RACE I several European R&D
programs e.g. RACE II and ACTS (Advanced Communications Technologies
and Services) [10] followed in order to support 3rd generation mobile communications system development. Within ACTS, the FRAMES project (Future Radio
Wideband Multiple Access System) was initiated with the objective of defining
a proposal for a UMTS radio access system.
From a global point of view, the work for the development of 3rd generation
mobile systems started in 1992, when the WARC (World Administrative Radio
Conference) of ITU (International Telecommunications Union) identified the frequencies around 2 GHz for use by future third generation mobile systems. The
ITU calls these systems IMT-2000.¹ The frequency bands and the geographical
areas in which these different bands are defined are shown in Figure 1.1. For
IMT-2000, altogether 230 MHz in two frequency bands, 1885-2025 MHz and
2110-2200 MHz, was reserved.
The proposals for the UMTS Terrestrial Radio Access (UTRA) air interface,
submitted and presented during 1996 and early 1997, were grouped into five
concept groups within ETSI in June 1997. The following groups were formed:
Alpha concept: Wideband CDMA (W-CDMA) [37]
Beta concept: OFDMA [38]
Gamma concept: Wideband TDMA (W-TDMA) [39]
Delta concept: Wideband TDMA/CDMA [40]
Epsilon concept: ODMA [41]
ETSI decided between the technologies in January 1998 [42], selecting W-CDMA
as the standard for the UTRAN air interface on the paired frequency bands,
i.e. for FDD (Frequency Division Duplexing) operation, and TDMA/CDMA for
the operation within unpaired spectrum allocation, i.e. for TDD (Time Division
Duplexing) operation. These combined modes formed the basis for the ETSI
proposal to the ITU as a candidate IMT-2000 radio transmission technology. It
took 10 years from the initiation of the European research programs (RACE I+II,
ACTS) to reach a decision on the UTRA technology.

¹ International Mobile Telecommunications 2000. "2000" can be interpreted either
as referring to the year 2000 or to the frequency band around 2000 MHz.

Figure 1.1: Spectrum allocation for Europe, China, Japan, Korea and North
America [100].

The detailed standardization
of UTRA proceeded within ETSI until the work was handed over to the 3rd
Generation Partnership Project (3GPP). The technical work was transferred to
3GPP with the contribution of UTRA in early 1999.
Meanwhile the standardization bodies of Japan (TTC, ARIB), Korea (TTA),
China (CWTS) and USA (T1P1) were independently choosing their own 3G radio access technologies. Even though the original goal of the standardization
process was a single common global IMT-2000 radio interface, achieving a single
worldwide standard proved extremely difficult from the beginning. In Europe
and Asia, including Japan and Korea, W-CDMA is to be used utilizing 5 MHz
blocks in the frequency bands at around 2 GHz. In North America that spectrum had already been auctioned for operators using second generation systems
in blocks of 1.25 MHz (see Figure 1.1). Hence, 3rd generation services have to
be implemented not only within the existing frequency bands, but also with a
different radio technology, since there were no cohesive 5 MHz frequency blocks
available in the US. Since it became evident that it would be very difficult to
achieve identical specifications, initiatives were started to create a single forum
for a common UTRA standardization. The 3rd Generation Partnership Project
(3GPP - http://www.3gpp.org) was set up in 1998 with this objective, including
TTC/ARIB for Japan, ETSI for Europe, TTA for Korea, T1P1 for the US and
CWTS for China as partners. The detailed technical work of this group started
in early 1999, with the aim of having the first version of the common specification, called Release-99, ready by the end of 1999. Within 3GPP, four different
technical specification groups (TSG) were set up as follows:
Radio Access Network TSG
Core Network TSG
Service and System Aspects TSG
Terminals TSG
Within these groups the one most relevant to the W-CDMA technology is the
Radio Access Network TSG (RAN TSG), which produced the Release-99 of the
UTRA air interface.
The development of the ITU recommendations for 3rd generation mobile communication systems proceeded as follows: in the first phase of the IMT-2000 candidate
submission process, the ITU received a number of different proposals. In the
second phase of the process, evaluation results were received from proponent organizations as well as from other evaluation groups. The ITU IMT-2000 process
was finalized at the end of 1999, when the detailed specification was created and
the radio interface specifications were approved by ITU-R [54, 95]. The members
of the ITU-R IMT-2000 family are shown in Figure 1.2. The TDMA subgroup consists
of a TDMA Single-Carrier (UWC-136) and a TDMA Multi-Carrier² (DECT) concept. The CDMA interface consists of the Direct-Spread (UTRA FDD) and the
Multi-Carrier (cdma2000) part. The TDD part of the CDMA concept consists
of the 3GPP proposal, UTRA TDD (3.84 Mchip/s), and the Chinese narrowband
version TD-SCDMA (1.28 Mchip/s). Good overviews of the 3GPP proposals
for IMT-2000 are given in [22, 29, 54, 84].
1.2
The cellular structure of the network should accomplish the condition for
a complete personal communication network.
Open architecture, to facilitate an easy introduction of new technologies
and services.
Support of different types of terminals (e.g. mobile terminals, PDAs, notebooks,...)
A small, cheap, light, and simple-to-use terminal should be established on the
mass market for standard applications. Beside these terminals, there will be a
set of products which are suitable for applications with higher requirements.
1.3
The UMTS radio interface can carry voice and data services with various data
rates, traffic requirements and quality of service (QoS) targets. Furthermore, the
operating environments vary considerably from outdoor macro cells to indoor
micro cells. Careful configuration of the many network and cell parameters is
required and crucial to the network operator, because they determine the capability to provide services, influence the QoS, and account for a major portion of
the total network deployment and maintenance costs.
Optimization is needed both in the planning stage, to optimize the network configuration and save investment, and after the deployment of the network,
to satisfy growing service demand (see [73, 106]). However, there are numerous configurable parameters which are multi-dimensional and interdependent,
and their influence on the network is highly non-linear. Hence, finding the optimum
network configuration is a very complex and time-consuming task. Automated
optimization algorithms are needed to perform the optimization process quickly
and efficiently, at minimal operational expenditure.
Different optimization techniques are known which are suitable for the problem
of UMTS base station parameter optimization. Each of these techniques has
advantages as well as disadvantages. During the work for this thesis, several
automated optimization algorithms (local, global, and analytic) were developed;
they are presented and compared in the present work.
Base station parameters such as antenna azimuth, antenna tilt, and common pilot
channel (CPICH) power are the three most common optimization parameters with
significant influence on network capacity. By optimizing these three key
parameters, the network capacity can be boosted without additional investment.
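The concrete algorithms are the subject of Chapter 8. Purely as an illustration of the kind of automated search meant here, the following Python sketch greedily tunes per-cell antenna tilt and CPICH power against a stand-in scoring function; the function, its optimum, and all parameter values are invented for this example and take the place of a real network simulator:

```python
import random

def served_users(tilts, cpich_powers):
    """Stand-in for a network simulation: returns a score for a given
    parameter setting. Here we fake a smooth optimum at 6 degrees of
    downtilt and 30 dBm CPICH power per cell (hypothetical values)."""
    score = 0.0
    for tilt, p in zip(tilts, cpich_powers):
        score += 100 - (tilt - 6.0) ** 2 - (p - 30.0) ** 2
    return score

def local_search(n_cells, steps=200, seed=0):
    """Greedy local search: try a small per-cell change of tilt and/or
    CPICH power, keep it only if the score improves."""
    rng = random.Random(seed)
    tilts = [0.0] * n_cells      # degrees of downtilt per cell
    powers = [33.0] * n_cells    # CPICH power per cell, in dBm
    best = served_users(tilts, powers)
    for _ in range(steps):
        cell = rng.randrange(n_cells)
        d_tilt = rng.choice([-1.0, 0.0, 1.0])
        d_pow = rng.choice([-1.0, 0.0, 1.0])
        tilts[cell] += d_tilt
        powers[cell] += d_pow
        score = served_users(tilts, powers)
        if score > best:
            best = score           # keep improving moves
        else:
            tilts[cell] -= d_tilt  # undo worsening (or neutral) moves
            powers[cell] -= d_pow
    return tilts, powers, best
```

The algorithms developed in this thesis differ precisely in how such candidate moves are generated (rule sets, annealing schedules, genetic operators) and in when a temporarily worse configuration is accepted.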
The work on this thesis led to four papers at international conferences and
one submission to an Asian journal. Some of the results obtained during the
course of this thesis have also been discussed at working group 3 of COST 273
(www.lx.it.pt/cost273).
1.4
Chapter 4 presents the UMTS radio network coverage and capacity limiting
factors. Possible solutions are given for the several limiting factors.
In Chapter 5, the parameters used in this thesis for optimizing the capacity
of a UMTS FDD network are described in more detail. Further, the influence
of these parameters on the network capacity is explained.
Chapter 6 introduces the fitness functions used for the optimization as well
as the performance indicators taken into account during the optimization
process.
Chapter 7 describes the simulation environment. The main characteristics of the simulator are specified, as well as the simulation scenarios and the user distributions within them.
In Chapter 8 of the thesis, all the developed optimization algorithms are
presented. Altogether, five different strategies are introduced. First, optimization
with local algorithms is studied. Further, a global optimization approach and an
analytic algorithm are presented in this chapter.
Chapter 9 presents the results achieved with the different algorithms.
Finally, Chapter 10 summarizes and concludes the thesis.
Chapter 2
Radio System Aspects
2.1
Introduction
UMTS Terrestrial Radio Access (UTRA) denotes the air interface of the UMTS
system. The basis for UTRA is spread spectrum technology; very good and
elaborate descriptions of this technique are given in [54, 97]. An artificial
spectral spreading of the transmitted signal increases its resistance against
interference. With user-dependent spreading codes, this code multiplex technique
can be used as a multiple access scheme, called Code Division Multiple Access
(CDMA). CDMA has been claimed to deliver high capacity [105], i.e. a large
number of users can be served. The frequency reuse factor is one, so all cells
can use the whole frequency band. Furthermore, CDMA offers the possibility to
efficiently combat interference and to exploit multipath propagation by means of
simple RAKE receivers. Details of the RAKE receiver are explained in Appendix C.
The standardized UTRA solution from ETSI includes two different concepts: the UTRA FDD mode, using W-CDMA (Wideband CDMA), and the UTRA TDD mode, using TD-CDMA (Time Division/Code Division Multiple Access). For the FDD mode two 60 MHz bands (paired bands, 1920 - 1980 MHz for UL and 2110 - 2170 MHz for DL) are scheduled, whereas 20 MHz and 15 MHz (unpaired bands, 1900 - 1920 MHz and 2010 - 2025 MHz) are reserved for the TDD mode. The planned application areas for the FDD mode are public macro and micro cells with data rates up to 384 kbit/s, whereas the TDD mode is dedicated to small public cells (micro and pico cells) as well as to unlicensed wireless applications and wireless local loops (WLL) with data rates up to 2 Mbit/s. In the following the UTRA FDD mode is explained in more detail.
2.2
A short overview of the technical details of the FDD mode is given in Table 2.1.

Multiple access scheme          DS-CDMA
Duplex technique                FDD
Modulation scheme               QPSK
Chip rate                       3.84 Mchip/s
Pulse shaping                   Root-raised cosine, roll-off 0.22
Bearer spacing                  5 MHz
Frame length                    10 ms
Base station synchronization    not necessary
2.2.1
As in the GSM system, the flow of information in the UMTS system is organized in channels, whereby we distinguish between transport channels and physical channels. The transport channels are mapped on the physical channels by multiplexing (see Figure 2.1).
The transport channels are divided into common and dedicated transport channels, whereby the former are used by more than one mobile in the cell. The same distinction is made for the physical channels. In the case of the dedicated physical channels we distinguish between the dedicated physical data channel (DPDCH) and the dedicated physical control channel (DPCCH). The DPCCH includes pilot bits, data for the transmit power control (TPC) and information about the transmitted data (TFI, transport format indicator). Figure 2.2 shows the frame structure of the dedicated transport channel for the downlink. Each frame has a length of 10 ms and is divided into 15 slots. The first part of a slot contains the DPCCH, followed by the DPDCH in the second part. Seven different bit rates are possible, from 12.2 kbit/s up to 2 Mbit/s. Since the chip rate is constant (3.84 Mchip/s), different spreading factors are necessary. To increase the data rate beyond 1 Mbit/s, several DPDCHs are assigned to one mobile, but only one DPCCH. The frame structure of the DPCH for the uplink is shown in Figure 2.3. In contrast to the downlink, data and control information are transmitted in parallel, separated by the I- and Q-branch (see Section 2.2.2). A description of the individual transport and physical channels and their mapping is given in Section 2.3.
2.2.2
Figures 2.4 and 2.5 show the arrangement of modulator, spreading, scrambling, pulse shaping and conversion to IF and RF frequencies for uplink and downlink.
Figure 2.6: Code tree for OVSF (orthogonal variable spreading factor) codes.
In the downlink, only one scrambling code is assigned to each cell (i.e. for each mobile in the cell the same code is applied). The purpose of the complex valued scrambling codes is therefore to distinguish, at the mobile, signals originating from different base stations.
The scrambling in the uplink is done in order to distinguish the signals of different mobiles at the base stations. There are two different possibilities for the scrambling operation: either a short code from a Kasami set [19] of length 256, or a long code consisting of a 38400 chip segment of a Gold code [19] of length 2^41 - 1. The possibility of utilizing the short code was created to facilitate the Joint Detection technique [19] with feasible effort. In contrast to the downlink, the separation of the users (i.e. the orthogonality) in the uplink is guaranteed by the scrambling code and not by the spreading code. This means that a mobile, when choosing its spreading code, does not have to take into account the spreading codes of the other users in the cell.
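The recursive construction behind the OVSF code tree of Figure 2.6 can be sketched as follows; `ovsf_codes` is an illustrative helper name, not part of any standard:

```python
def ovsf_codes(sf):
    """Generate all OVSF channelization codes of spreading factor sf.

    Starting from the root code (1), each code c spawns the two children
    (c, c) and (c, -c), as in the code tree of Figure 2.6."""
    assert sf >= 1 and sf & (sf - 1) == 0, "sf must be a power of two"
    codes = [[1]]
    while len(codes[0]) < sf:
        codes = [child for c in codes
                 for child in (c + c, c + [-x for x in c])]
    return codes

# codes of the same spreading factor are mutually orthogonal
codes = ovsf_codes(8)
dot = sum(a * b for a, b in zip(codes[0], codes[5]))
print(len(codes), dot)  # 8 0
```

The zero inner product between any two distinct codes of the same length is exactly the orthogonality property exploited for user separation in the downlink.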
2.3
In UTRA FDD the data generated at higher layers is carried over the air interface
with transport channels, which are mapped in the physical layer to different
physical channels. The basic organization of this concept for the information
flow is described in Section 2.2.1 of this thesis. The various tasks (e.g. cell search, paging, ...) that the UMTS network has to fulfill are handled by different transport channels. These transport channels are the services that the physical layer offers to higher layers. Figure 2.8 illustrates the mapping of
transport channels to physical channels. There are also physical channels that are
not transparent to higher layers and have no corresponding transport channels.
The synchronization channel (SCH), the common pilot channel (CPICH) and the
acquisition indication channel (AICH) are not directly visible to higher layers,
but are essential from the system function point of view. The dedicated channel
(DCH) is mapped onto two physical channels, the dedicated physical data channel
(DPDCH) and the dedicated physical control channel (DPCCH).
In the following a brief description of the different channels is given.
2.3.1 Synchronization Channel
The synchronization channel (SCH) is a pure physical channel used for the cell
search procedure by the mobile. Hence, the SCH has to be transmitted into
the entire cell. The SCH consists of two sub-channels transmitted in parallel,
the primary and secondary SCH. The 10 ms radio frames of the primary and
secondary SCH are divided into 15 slots of length 2560 chips each. The SCH is
only transmitted during the first 256 chips of each slot. Figure 2.9 shows the
structure of the SCH.
2.3.1.1 Primary SCH
The primary SCH consists of a modulated sequence of length 256 chips, the
primary synchronization code, denoted cp in Figure 2.9. The sequence cp is transmitted once every slot and is the same for every cell in the system, so that the
mobiles can detect it easily with a matched filter.
2.3.1.2 Secondary SCH
2.3.2 Common Pilot Channel
The common pilot channel (CPICH) is a fixed rate downlink physical channel (15 kbit/s) that carries a continuous pre-defined bit/symbol sequence. The
spreading factor of the CPICH is 256. Figure 2.10 shows the frame structure of
the CPICH. The function of the CPICH is to aid the channel estimation at the
mobile for the DCH (see Section 2.3.7) and to provide the channel estimation
reference for the common channels. The CPICH does not carry any higher layer
information, neither is there any transport channel mapped to it.
There are two types of CPICHs, the primary and secondary CPICH (P-CPICH
and S-CPICH). They differ in their use and the limitations placed on their physical features [3]. In the following the P-CPICH and S-CPICH are explained in
more detail.
2.3.2.1 Primary CPICH
The P-CPICH can be used by the mobile to determine the scrambling code used for scrambling the downlink channels of the cell (see Section 2.4.1). There is always exactly one P-CPICH per cell, which is broadcast over the entire cell. The P-CPICH is the phase reference for the SCH, P-CCPCH, AICH and PICH, and the default phase reference for all other downlink physical channels [3].
The P-CPICH is also used for the handover and cell selection/reselection measurements. The use of the CPICH reception level at the mobile for handover measurements has the consequence that by adjusting the CPICH power the cell load can be balanced between adjacent cells. Reducing the CPICH power in one cell causes mobiles at the cell boundary to hand over to the neighboring cells, while increasing it invites more mobiles to hand over to the cell [54].
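This load-sharing effect can be illustrated with a minimal sketch, assuming a one-dimensional layout and a simple log-distance pathloss model; all names, positions and power values below are illustrative, not taken from the simulator used in this thesis:

```python
import math

def received_cpich_dbm(cpich_tx_dbm, distance_m, exponent=3.5):
    """Received pilot level under a log-distance pathloss model
    (the exponent 3.5 is an illustrative value, not a calibrated one)."""
    return cpich_tx_dbm - 10 * exponent * math.log10(max(distance_m, 1.0))

def serving_cell(mobile_x, cells):
    """cells: list of (position in m, CPICH power in dBm); the mobile camps
    on the cell whose pilot it receives strongest."""
    return max(range(len(cells)),
               key=lambda i: received_cpich_dbm(cells[i][1],
                                                abs(mobile_x - cells[i][0])))

cells = [(0.0, 33.0), (1000.0, 33.0)]   # two cells, equal pilot power
print(serving_cell(450.0, cells))       # 0: the nearer cell wins
cells[0] = (0.0, 27.0)                  # reduce cell 0's CPICH by 6 dB
print(serving_cell(450.0, cells))       # 1: the boundary moved towards cell 0
```

Lowering one cell's pilot by 6 dB shifts mobiles near the boundary to the neighbor, which is precisely the cell load sharing mechanism exploited later in the optimization.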
2.3.2.2 Secondary CPICH
The secondary common pilot channel (S-CPICH) can be spread by any channelization code of length 256 and can be scrambled by either the primary or a secondary scrambling code. There may be zero, one, or several S-CPICHs per cell. An S-CPICH may be transmitted over the entire cell or only into a part of the cell. The S-CPICH can be the phase reference for secondary common control physical channels (S-CCPCH) and dedicated physical channels (DPCH) [3].
2.3.3 Broadcast Channel
2.3.4 Random Access Channel
The random access channel (RACH) is an uplink transport channel which carries control information from the terminal, such as a request to set up an RRC connection. It can further be used to send small amounts of uplink packet data to the network [54]. The random access channel must be receivable from the whole desired cell coverage area, especially for the initial system access and other control procedures.
The RACH is mapped on the physical random access channel (PRACH). The
PRACH has specific preambles, which are sent prior to data transmission. These
use a spreading factor of 256 and contain a signature sequence of 16 symbols.
Once the preamble has been detected by the base station and acknowledged with
the acquisition indicator channel (AICH) the 10 or 20 ms long message part is
transmitted with a spreading factor from 256 down to 32. If the sent preamble
is not acknowledged within a certain time, the preamble is retransmitted with
increased power until the reception of the preamble is acknowledged by the base
station. Besides this power adjustment prior to transmission, there is no power control for the PRACH.
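The preamble power ramping described above can be sketched as follows; `prach_power_ramp` and the acknowledgement model are hypothetical illustrations, not the full standardized procedure:

```python
def prach_power_ramp(initial_dbm, step_db, max_dbm, acked):
    """Preamble ramping sketch: retransmit the preamble with increased power
    until `acked` (the AICH acknowledgement, modeled here as a callable)
    returns True, or the maximum allowed power is exceeded."""
    power, attempts = initial_dbm, 0
    while power <= max_dbm:
        attempts += 1
        if acked(power):
            return power, attempts   # preamble detected by the base station
        power += step_db             # ramp up and retransmit
    return None, attempts            # random access failed

# toy detector: the base station hears preambles sent with at least -10 dBm
power, attempts = prach_power_ramp(initial_dbm=-20.0, step_db=2.0,
                                   max_dbm=0.0, acked=lambda p: p >= -10.0)
print(power, attempts)  # -10.0 6
```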
2.3.5 Forward Access Channel
The forward access channel (FACH) is a downlink transport channel that carries
control data to terminals in the given cell, for example, after a random access message has been received by the base station. It is also possible to transmit packet
data on the FACH. There can be more than one FACH per cell with different
data rates, but there must be at least one FACH with such a low data rate that
all terminals in the cell can decode it. The FACH does not use fast power control,
and the messages transmitted need to include in-band identification information.
The FACH is mapped on the secondary common control physical channel (S-CCPCH). If there exists more than one FACH in the cell, they are all
multiplexed on one S-CCPCH. The S-CCPCH may use different offsets between
the control and data fields at different symbol rates and may support slow power
control. More details and a table of all possible slot formats are given in [3].
2.3.6 Paging Channel
The paging channel (PCH) is a downlink transport channel that carries data
relevant to the paging procedure, that is, when the network wants to initiate
communication with the terminal [54]. An example is a speech call: the network
transmits the paging message to the terminal on the paging channel of those cells
belonging to the location area that the terminal is expected to be in. The signal
has to be heard in the entire cell area and can be transmitted in up to a few hundred cells, depending on the system configuration. The PCH is transported
like the FACH by the S-CCPCH.
The configuration of the paging channel affects the terminal's power consumption in standby mode. The less often the mobile has to tune in its receiver to listen for a possible paging message, the longer the terminal's battery will last in standby mode. Therefore, all mobiles are assigned to a paging group. The presence of a
paging message in the S-CCPCH is signaled by the network on a separate physical
channel, the paging indication channel (PICH). So, a terminal in stand-by mode
only has to check the PICH and does not need to decode the S-CCPCH all the
time.
2.3.7 Dedicated Channel
The dedicated channel (DCH) is the only dedicated transport channel specified in
3GPP. The DCH carries all higher layer information intended for a given user including data for the actual services (speech frames, data,...) as well as higher layer
control information (handover commands, measurement control commands,...).
In the physical layer the DCH is mapped on the dedicated physical channel (DPCH). The DPCH uses closed-loop power control and fast data rate adaptation on a frame-by-frame basis. It can be transmitted to a part of the cell and supports
soft/softer handover. The DPCH consists of two sub-channels, the dedicated
physical data channel (DPDCH) that carries the actual user data and the dedicated physical control channel (DPCCH) that carries physical layer information.
As the modulation scheme is different in up- and downlink, the structure of the
DPCH is also different for the two directions. The downlink and uplink slot
structure of the DPCH are shown in Figure 2.2 and Figure 2.3 in Section 2.2.1.
2.3.8
2.3.9 Common Packet Channel
The uplink common packet channel (CPCH) is an extension to the RACH that is intended to carry packet-based user data in the uplink direction. In contrast to the RACH, the CPCH uses fast power control and collision detection. In the physical layer the CPCH is transported by the physical common packet channel (PCPCH). An uplink CPCH transmission may last several frames, in contrast to one or two frames for the RACH message. A detailed description of the CPCH can be found in [54].
2.3.10
For the basic network operation the following channels are required: the SCH, the P-CPICH, the P-CCPCH for carrying the BCH, an S-CCPCH for carrying the FACH and the PCH, the PICH, and a PRACH together with the AICH for random access.
The use of the CPCH and DSCH is optional for the network, which makes all
physical channels necessary for signaling and transport of CPCH and DSCH
optional.
2.4
In the physical layer of the UMTS system there are many procedures essential for
system operation. Power control, cell search and handover are briefly described
in this section. An exhaustive description can be found in [54, 68].
2.4.1 Cell Search
code groups is used for the downlink scrambling. In the example in Figure 2.11 the cell uses code group number 4.
Step 3: Scrambling-code identification
During the third and last step of the cell search procedure, the mobile
determines the exact primary scrambling code used by the found cell. This
is done by trying to detect the P-CPICH that is scrambled with one of
the eight primary scrambling codes of the code group. After the primary
scrambling code has been identified, the primary CCPCH can be detected
and the system- and cell specific BCH information can be read.
2.4.2 Power Control
Stringent uplink and downlink transmit power control (TPC) is required to combat the near-far problem. In the uplink, the near-far problem is created by interfering users located near the base station, whose signals corrupt those of more distant mobiles. The uplink TPC must be fast enough to track rapid channel
variations (e.g. caused by small-scale fading). The same properties are desired
for the downlink TPC even though the near-far problem is less important. In
UTRA FDD mode power control is done in two control loops, the outer loop and the inner loop. The outer loop is handled in the RNC and controls the QoS
(e.g. BER or BLER) by adjusting the target signal to interference ratio (SIR)
of the link according to the service. The SIR target is adjusted at a slow rate
(typically 10-100 Hz) and signaled via higher layers. The required SIR, BER and
BLER measurements are standardized in [4].
The inner loop of the power control, also called fast power control or closed loop power control, adjusts the transmit power after every slot (666.7 μs) according to the received SIR of the previous slot, resulting in a 1500 Hz command rate. If
the received SIR is below the target SIR, then a power up command is fed back
to the transmitter using the power control bits that are reserved for this purpose
in every slot. If the received SIR is equal to or above the target SIR, then a
power down command is sent to the transmitter. The transmitter adjusts the
transmit power according to the received power control command in steps of 1 dB.
Additionally, multiples of that step size can be used. The specifications define the relative accuracy for a 1 dB power control step to be ±0.5 dB. Figure 2.12
shows the interaction between the outer and inner loop.
Figure 2.12: Outer and inner TPC loop. The green-colored blocks reside in the
physical (PHY) layer and the yellow-colored blocks reside in the radio resource
control (RRC) layer (source: [74]).
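The per-slot up/down decision of the inner loop can be sketched as follows; the channel gains and power values are toy numbers, not taken from the specifications:

```python
def tpc_command(received_sir_db, target_sir_db):
    """One inner-loop decision per slot (1500 Hz): +1 ("up") if the received
    SIR is below the target, otherwise -1 ("down")."""
    return 1 if received_sir_db < target_sir_db else -1

def run_inner_loop(tx_dbm, gains_db, target_sir_db, interference_dbm,
                   step_db=1.0):
    """Apply one TPC command per slot for a list of per-slot channel gains;
    returns the transmit power trajectory (toy model, no real channel)."""
    trace = [tx_dbm]
    for g in gains_db:
        sir = trace[-1] + g - interference_dbm
        trace.append(trace[-1] + step_db * tpc_command(sir, target_sir_db))
    return trace

print(run_inner_loop(20.0, [-3.0, -6.0, -6.0, -1.0],
                     target_sir_db=6.0, interference_dbm=10.0))
# [20.0, 19.0, 20.0, 21.0, 20.0]
```

The transmit power follows the inverse of the channel: when the channel fades, the SIR drops below the target and the power is ramped up.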
In UTRA FDD there is also an open loop power control, which is applied for initiating the transmission on the RACH or CPCH. Here the transmitter estimates the pathloss in the downlink, prior to transmission, from the received P-CPICH power (the transmitted P-CPICH power is known at the receiver). The mobile then sets the transmit power using the noise level at the receiver, signaled on higher layers, such that the target SIR is reached at the receiver. Due to measurement inaccuracies the open loop power control is very inaccurate: in normal conditions its tolerance is ±9 dB [6].
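The open-loop estimate can be written as a simple link-budget sketch, assuming the uplink pathloss equals the measured downlink pathloss (which FDD, with its separated bands, only approximates; this is one reason for the large tolerance):

```python
def open_loop_tx_power(cpich_tx_dbm, cpich_rx_dbm, noise_dbm, target_sir_db):
    """Open-loop setting sketch: estimate the downlink pathloss from the
    pilot (its transmit power is known at the mobile) and set the initial
    power so the target SIR would be met at the receiver."""
    pathloss_db = cpich_tx_dbm - cpich_rx_dbm
    return pathloss_db + noise_dbm + target_sir_db

# P-CPICH sent with 33 dBm, received with -92 dBm -> 125 dB pathloss
print(open_loop_tx_power(33.0, -92.0, noise_dbm=-103.0, target_sir_db=5.0))
# 27.0
```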
2.4.3 Handover
Maximum ratio combining (MRC) is one of the most common linear combining techniques
in receive diversity systems. In MRC, combiner weights are chosen to maximize the output
signal to noise ratio (SNR). Details on MRC are well described in [59, 90].
A hard handover results in the radio connection being broken between the network and the
mobile, before a new radio connection is established with the network in the target cell. Hard
handovers usually require a change of frequency.
Examples of this are high capacity base stations, so-called hot-spot cells, with several carriers. Another application is the handover from a macro-cell to a micro-cell, which uses different frequencies.
Inter-system hard handover: Takes place between the UTRA FDD mode
and the UTRA TDD mode, or between the UTRA FDD mode and the
GSM system.
The support of seamless inter-frequency hard handover is a key feature of WCDMA, not previously implemented in cellular CDMA systems. Hard handover is
necessary for the support of a hierarchical cell structure (HCS): a cellular system
can provide very high capacity through micro-cells, offering at the same time
full coverage via the macro cells. Therefore, hard handover is a very important
feature to perform handover between the different cells. A second scenario, where
hard handover is necessary, is the hot-spot one. In this case, a certain cell that
serves a high traffic area uses carriers in addition to those used by the neighboring cells. If the deployment of extra carriers is to be limited to the actual hot-spot area, the possibility of hard handover is essential.
2.5
Hot-spot cells can have a larger number of carriers than the surrounding ones; therefore a different handover mechanism, namely hard handover, is necessary between the different frequencies.
In this section two classifications of services from different points of view are
presented. The first classification is based on market forecasts by the UMTS
Forum. The second one is based on QoS requests. Further, the importance of
traffic forecasts for network optimization is explained.
2.5.1
Based on market forecasts performed by the UMTS Forum (see Section 1.2), the services for UMTS can be divided into several classes. Figure 2.15 shows this classification.
Speech (S) is a symmetric service with the same amount of information in the
UL as in the DL and with an activity factor of 0.5. This implies that the system
should be able to handle the discontinuous transmission mode. The simple messaging service (SM) is the evolution of the GSM short message service (SMS).
The typical size of a message is about 40 KByte and an acceptable delay for this service is about 30 s. The switched data service (SD) is a 14.4 kbit/s CS service type similar to the existing data services in GSM. Services like downloads from the WWW belong to the multimedia service class (MM). The typical amount of data
that needs to be transmitted for a medium MM service is about 0.5 MByte during
14 s, while for a high MM service a data size of 10 MByte and a call duration of
53 s is typical. While these MM services are asymmetrical, the interactive MM
service is based on a 128 kbit/s symmetrical connection.
Service  User nominal bit  Effective call  User net bit    Coding  Asymmetry  Switch  Service bandwidth
         rate [kbit/s]     duration [s]    rate [kbit/s]   factor  factor     mode    [kbit/s]
HIMM     128               144             128             -       1/1        CS      256/256
HMM      2000              53              1509            -       0.005/1    PS      15/3200
MMM      384               14              286             -       0.026/1    PS      15/3200
SD       14                156             14.4            -       1/1        CS      43/43
SM       14                30              10.67           -       1/1        PS      22/22
S        16                60              16              1.75    1/1        CS      28/28
The various classes have different characteristics. HIMM services, e.g. video telephony, require isochronous transmission, as do SD and S. Therefore, they are handled as CS services. The average call duration of these services corresponds to the actual connection set-up time, and the effective call duration depends on the activity factor, which is 0.5 for speech and 0.8 for video telephony. For PS
services, the call duration is the sum of the time intervals in which data is actually transferred via the air interface. Thus, the activity factor in this scenario is
equal to one. In Figure 2.16 the structure of a PS transmission is shown and the
effective call duration per service according to the activity factor and the average
call duration is given in Table 2.3.
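The relation used in Table 2.3 is simply a product of activity factor and average call duration; a minimal sketch (the activity factor of one for the PS services follows the text above):

```python
# activity factors and average call durations per service (Table 2.3; an
# activity factor of one is assumed for the PS services, as stated in the text)
services = {
    "HIMM": (0.8, 180.0),
    "HMM":  (1.0, 53.3),
    "MMM":  (1.0, 13.9),
    "SD":   (1.0, 156.0),
    "SM":   (1.0, 30.0),
    "S":    (0.5, 120.0),
}

def effective_call_duration(activity_factor, average_duration_s):
    """Effective call duration = activity factor x average call duration."""
    return activity_factor * average_duration_s

for name, (af, dur) in services.items():
    print(name, effective_call_duration(af, dur))
# e.g. HIMM -> 144.0 and S -> 60.0, matching Table 2.3
```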
The call duration and the activity factor alone are, however, not sufficient to characterize PS services. An estimation of the effective call duration, and of the equivalent offered bit quantity that packet services will generate, can be based on calculations that consider busy hour calls and an acceptable throughput and delay for packet services [96].
The activity factor describes which percentage of a call a user is, on average, active. In other words, it describes if and how much, on average, the activity of the service varies.
Figure 2.16: Packet transmission over the UMTS air interface (source: [100]).
Services  Activity factor  Average call duration [s]  Effective call duration [s]
HIMM      0.8              180                        144
HMM       1                53.3                       53.3
MMM       1                13.9                       13.9
SD        1                156                        156
SM        1                30                         30
S         0.5              120                        60
2.5.2
Besides the classification into the six groups S, SM, SD, MMM, HMM and HIMM from the UMTS Forum described above, there exists a second service classification. 3GPP divides the applications and services into four different groups according to their QoS requirements. The four traffic classes are: conversational, streaming, interactive, and background. The main distinguishing factor between these classes is the delay sensitivity of the traffic: the conversational class is meant for very delay-sensitive traffic, while the background class is the most delay-insensitive. The characteristics of the UMTS QoS classes are shown in Figure 2.17.
2.5.3
Environment           Traffic density   Cell type
                      180 000           Micro/pico
                      7 200             Macro
                      380               Pico
Urban (pedestrian)    108 000           Macro/micro
Urban (vehicular)     2 780             Macro/micro
                      36                Macro
Chapter 3
Overview of Optimization Techniques
3.1 General Issues
This chapter gives a short overview of commonly used and well known optimization techniques. Different techniques are suitable for different optimization problems. The types of optimization problems are classified according to [92] in the following list:
Continuous functions
Combinatorial problems
Nonlinear structures, e.g.:
Neural networks
Fuzzy systems
Computer programs
Integrated circuits
The problem of optimizing the base station parameters of a UMTS network, which is covered by this work, is a highly nonlinear problem and belongs to the first item in the previous list. For an optimization problem a quantitative value (fitness function) is defined, which depends on different parameters. The goal of an optimization algorithm is to find parameter settings that maximize or minimize this fitness function.
The optimization algorithms can be classified according to their tasks in the
following way [83]:
Examples of conventional optimization methods are Random Search, the Simplex method, enumeration methods and the gradient method. Random Search is a global method and very generally applicable, but usually inefficient. The Simplex method is an approach involving mathematical programming formulations, suitable for linear fitness functions with linear boundary conditions. The third conventional method, the enumeration method, is a primitive enumeration of the solutions. It is very inefficient, especially in the case of a large solution space; for this reason advanced enumeration methods were developed, like dynamic programming or branch-and-bound. The gradient method is a very popular approach for several types of problems. It is a purely local method, and it must be possible to calculate the derivatives. For an overview of this area see [43].
For solving complex problems, like nonlinear structures, such conventional mathematical methods are very inefficient and time-consuming. Some problems are intractable due to the following properties, which can occur:
The solution space is large and complex.
A solution is called Pareto-optimal (or efficient) if there is no other solution for which at least one criterion has a better value while the values of the remaining criteria are the same or better. In other words, one cannot improve any criterion without deteriorating the value of at least one other criterion.
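The dominance test implied by this definition can be sketched as follows; maximization of all criteria is assumed for illustration:

```python
def dominates(a, b):
    """a dominates b if a is at least as good in every criterion and strictly
    better in at least one (all criteria are maximized here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the solutions that no other solution dominates."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# the two criteria could be, e.g., (served users, coverage)
print(pareto_front([(3, 5), (4, 4), (2, 6), (3, 4), (1, 1)]))
# [(3, 5), (4, 4), (2, 6)]
```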
3.2 Local Search
The Local Search algorithm is a local iterative search technique that generates a sequence of solutions for the optimization problem, say x_1, x_2, ..., x_k. At each iteration i (1 <= i <= k-1), solution x_i is perturbed a number of times (by rules which are described as a move) to produce a neighborhood N(x_i) of candidate solutions. The best solution in the neighborhood is then taken as solution x_{i+1} if it is better than the solution of the previous iteration. This algorithm follows the so-called greedy concept, because it always takes the best neighbor. In Figure 3.1 a simple program structure for a local search algorithm is shown.
procedure Local Search
begin
    x ← initial solution;
    repeat
        x' ← N(x);        // derive neighborhood solution
        if x' is better than x then
            x ← x';
    until termination condition is fulfilled;
end

Figure 3.1: Local Search algorithm.
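The greedy iteration of Figure 3.1 can be sketched as follows, on a toy one-dimensional maximization problem (function and neighborhood are illustrative choices):

```python
def local_search(initial, neighbors, fitness, max_iter=1000):
    """Greedy local search: replace x by its best neighbor while this
    improves the fitness; stop in a local optimum."""
    x = initial
    for _ in range(max_iter):
        best = max(neighbors(x), key=fitness)
        if fitness(best) <= fitness(x):
            break                   # no improving neighbor: local optimum
        x = best
    return x

# toy problem: maximize f(x) = -(x - 7)^2 over the integers
f = lambda x: -(x - 7) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(local_search(0, nbrs, f))  # 7
```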
One big disadvantage of the algorithm is that in general it only finds a local optimum. An improvement is the so-called Multi-start Local Search, which starts from several different initial solutions. A full description of the Local Search
method and its extensions is given in [8, 85]. For the optimization of combinatorial problems it is a very popular approach; the best known example is the Traveling Salesman Problem. Such methods are also utilized for network design, where this approach is well suited to generate an initial network for further development [106]. A crucial issue concerns the way in which the candidate sites for a network and their configurations are ordered. Such methods have, for example, been investigated for related graph based problems and frequency assignment for GSM in [14, 15]. In [99], this approach has been used for cell planning.
3.3 Simulated Annealing
In Simulated Annealing a worse neighboring solution x' is accepted with the probability

P(accept x') = exp(-(E(x') - E(x))/T),          (3.1)

where E denotes the energy (cost) of a solution and T the control parameter (temperature).
The traveling salesman problem, or TSP for short, is defined as follows: given a finite number of cities along with the cost of travel between each pair of them, find the cheapest way of visiting all the cities and returning to the starting point.
A candidate site is a possible site for a base station.
physical system           optimization
system state              valid solution x
energy E                  fitness (cost) of a solution
transition of state       move to a neighboring solution
solidification (crystal)  found solution
temperature T             control parameter T
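The Metropolis-type acceptance rule of Eq. (3.1), together with a geometric cooling schedule, can be sketched as follows; this is a generic sketch, not the exact variant developed later in this thesis, and the toy landscape is illustrative:

```python
import math
import random

def simulated_annealing(initial, neighbor, energy, t0=10.0, alpha=0.95,
                        t_min=1e-3, seed=0):
    """Simulated Annealing sketch: Metropolis acceptance with geometric
    cooling. `neighbor(x, rng)` draws a random candidate from N(x)."""
    rng = random.Random(seed)
    x, best, t = initial, initial, t0
    while t > t_min:
        cand = neighbor(x, rng)
        delta = energy(cand) - energy(x)
        # improvements are always accepted; deteriorations with exp(-dE/T)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if energy(x) < energy(best):
            best = x
        t *= alpha                  # cooling schedule: T <- alpha * T
    return best

# toy landscape: local minimum at x = 3 (E = 1), global minimum at x = 10
E = lambda x: min((x - 3) ** 2 + 1, (x - 10) ** 2)
step = lambda x, rng: x + rng.choice([-1, 1])
best = simulated_annealing(0, step, E)
print(best, E(best))
```

At high temperature almost every move is accepted (random walk); as T decreases, the search gradually freezes into a good solution, which is how deteriorations early on allow escaping local minima.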
3.4 Tabu Search
The Tabu Search (TS) meta-heuristic operates using the neighborhood principle of the Local Search technique from Section 3.2. However, in order to prevent cycling and to provide a mechanism for escaping from locally optimal but not globally optimal solutions, some moves at one particular iteration may be classified as tabu. After a valid move is executed, it is stored in the so-called tabu list TL. It is then not allowed to perform this move again for the next k iterations. This tabu list is an
essential component of the algorithm, because it stores the history of the visited candidate solutions. There can also be aspiration criteria, which override the tabu moves if particular circumstances apply. A detailed description of tabu list management and aspiration criteria can be found in [67].
procedure Tabu Search
begin
    TL ← ∅;               // tabu list
    x ← initial solution;
    repeat
        X' ← subset of N(x) under consideration of TL;
        x' ← best solution of X';
        add move from x to x' to TL;
        delete moves from TL which are older than k iterations;
        x ← x';
        if x is better than best solution until now then
            store x;
    until termination condition is fulfilled;
end

Figure 3.3: Tabu Search algorithm.
In Figure 3.3 a simple program structure for a Tabu Search algorithm is shown.
The Tabu Search technique, as an extension of the Local Search algorithm, is very efficient for the optimization of combinatorial problems like the TSP or the Bin Packing Problem. In [18] a short overview of different combinatorial problems is given. For the TSP, Tabu Search is nowadays the most efficient optimization technique. This method is also used for cell planning issues in CDMA networks [12, 71]. For further details of Tabu Search see [49, 67, 93].
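The tabu mechanism of Figure 3.3 can be sketched as follows; for simplicity, the tabu list here stores recently visited solutions rather than moves, and the toy landscape is an illustrative choice:

```python
from collections import deque

def tabu_search(initial, neighbors, fitness, tenure=5, max_iter=100):
    """Tabu Search sketch: always move to the best non-tabu neighbor; the
    last `tenure` visited solutions are tabu (a simplification of the
    move-based tabu list TL of Figure 3.3)."""
    x = best = initial
    tabu = deque([initial], maxlen=tenure)   # fixed-length tabu list
    for _ in range(max_iter):
        candidates = [n for n in neighbors(x) if n not in tabu]
        if not candidates:
            break                            # all neighbors are tabu
        x = max(candidates, key=fitness)     # may be worse than the current x
        tabu.append(x)
        if fitness(x) > fitness(best):
            best = x
    return best

# toy landscape: local optimum at x = 3 (f = 4), global optimum at x = 10 (f = 9)
f = lambda x: max(4 - (x - 3) ** 2, 9 - (x - 10) ** 2)
nbrs = lambda x: [x - 1, x + 1]
print(tabu_search(0, nbrs, f))  # 10 (escapes the local optimum at 3)
```

A plain greedy local search started at 0 stops at the local optimum 3; because Tabu Search always moves, and the tabu list forbids going straight back, it walks across the valley and finds the global optimum.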
3.5 Evolutionary Algorithms
Bin Packing Problem: determine how to put the most objects into a given number of fixed-size bins. More formally, find a partition and assignment of a set of objects such that a constraint is satisfied.
Darwin in the year 1859. In his famous book The Origin of Species he explained inheritance with variances (mutation) and described natural selection.
The principal structure of an evolutionary optimization algorithm is shown in Figure 3.4. In this figure, P denotes a population of solutions. In each iteration a new generation Q is produced, and from the union of Q and P the new population P is selected.
procedure EA
begin
    P ← set of initial solutions;
    Evaluate(P);
    repeat
        Q ← GenerateNewSolutionByVariation(P);
        Evaluate(Q);
        P ← SelectBetterSolutions(P, Q);
    until termination condition is fulfilled;
end

Figure 3.4: The structure of an Evolutionary Algorithm.
Evolutionary Algorithms have the following properties:
They do not require certain properties of the optimized function like continuity, differentiability or dimensionality.
The algorithms are not restricted to numeric optimization.
No special information about the solution space, like derivatives, is needed.
They are suitable for problems with a big solution space, where other techniques, like enumeration methods, need too much time.
The technique has a global view: the algorithm searches for the global optimum, not merely for the nearest local one.
Finding the optimal solution cannot be guaranteed.
It is possible to combine Evolutionary Algorithms with other optimization techniques or problem specific heuristics.
Charles Robert Darwin (1809-1882): Darwin was the British naturalist who became famous for his theories of evolution and natural selection. Like several scientists before him, Darwin believed all life on earth evolved (developed gradually) over millions of years from a few common ancestors.
3.5.1 Genetic Algorithms
The idea of using genetic approaches for optimization originated from J. H. Holland about thirty years ago, who wrote the seminal work on Genetic Algorithms (GA) [53]. De Jong [31] extended this work to functional optimization, involving the use of optimization search strategies based on the Darwinian notion of natural selection and evolution. In the last two decades, first D. E. Goldberg [50] and then Z. Michalewicz [13, 76, 77] enhanced the methods and conducted research on the theoretical foundations of Genetic Algorithms.
Genetic Algorithms are particularly effective when the goal is to find an approximate global maximum of a high-dimensional, multi-modal function in a near-optimum manner [61]. They differ from other conventional techniques by operating on a group (or population) of trial solutions in parallel. Normally a Genetic Algorithm operates on a coding of the function parameters (a chromosome) rather than on the parameters themselves. For the coding of the problem a suitable structure has to be chosen; the most common coding schemes are binary coding and Gray coding. Simple stochastic operators (selection, crossover and mutation) are used to explore the solution domain in search of an optimal solution. The basic block diagram with the three operators is depicted in Figure 3.5.
procedure GA
begin
  i ← 0;
  initialize(P(i));
  evaluate(P(i));
  while (not termination-condition) do
    i ← i + 1;
    Qs(i) ← select(P(i − 1));
    Qr(i) ← recombine(Qs(i));
    P(i) ← mutate(Qr(i));
    evaluate(P(i));
  done
end
Figure 3.6: Canonical Genetic Algorithm.
Figure 3.6 shows the algorithm of the block diagram from Figure 3.5. This simple type of GA is known as the Canonical GA. In Figure 3.6, P(i) denotes the population of the current iteration, Qs(i) indicates the population after the selection process and Qr(i) is the population after recombination. In keeping with the natural selection analogy, successive populations of trial solutions are called generations. Subsequent generations are made up of children, produced through the selective reproduction of pairs of parents taken from the current generation. A list of some of the commonly encountered GA terms relating to the optimization problem is presented below:
Population: Set of chromosomes (individuals) processed in one iteration of the algorithm.
Parent: Individual selected for reproduction.
Child: New individual produced from the parents by recombination and mutation.
Generation: The population of one iteration of the algorithm.
Chromosome: Coded form of a trial solution vector (string) consisting of genes made of alleles. A chromosome is also referred to as an individual.
Gene: Single coded parameter on the chromosome.
Allele: Value a gene can take.
Fitness: Measure of the quality of an individual with respect to the optimization goal.
At the beginning of the optimization process the first population has to be initialized (see Figure 3.6). Normally this is done by a random setting. For the evaluation of all individuals of the population, a fitness function f(i) is needed. A higher fitness value indicates a better solution and a lower fitness value a worse one. If the fitness function reaches its highest value, the sought optimal solution is found.
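The complete loop of Figure 3.6 can be sketched in Python. This is a sketch assuming binary coding, roulette-wheel selection, 1-point crossover and bit-flip mutation; the rates `p_cross` and `p_mut` are illustrative values, not taken from this thesis:

```python
import random

def canonical_ga(fitness, n_bits=8, pop_size=20, generations=60,
                 p_cross=0.9, p_mut=0.02):
    """Canonical GA after Figure 3.6: select, recombine, mutate, evaluate."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(c) for c in pop]
        # fitness-proportional (roulette-wheel) selection;
        # the small epsilon guards against an all-zero fitness sum
        parents = random.choices(pop, weights=[f + 1e-9 for f in fits], k=pop_size)
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            if random.random() < p_cross:                 # 1-point crossover
                cut = random.randint(1, n_bits - 1)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            children += [a, b]
        # bit-flip mutation with probability p_mut per gene
        pop = [[g ^ (random.random() < p_mut) for g in c] for c in children]
    return max(pop, key=fitness)

# Toy usage: maximize the number of ones in an 8-bit chromosome ("OneMax").
random.seed(1)
best = canonical_ga(fitness=sum)
```

Note that the canonical GA is not elitist: the best individual of one generation may be lost through crossover or mutation, which is why practical variants often copy the best chromosome unchanged into the next generation.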
In the following the three operators are shortly explained.
3.5.1.1 Selection
The selection of the parents for the next generation is mostly controlled by randomness, but according to natural selection: better individuals are selected more often than worse individuals. The selection process drives the population of the GA in the direction of better solutions. In the majority of cases a fitness-proportional selection scheme is used. This method is referred to as roulette-wheel selection, because it works like the roulette game with different probabilities. For each individual the fitness value is scaled by the sum of the fitness of all individuals. The probability for an individual i to be selected in the selection process is shown in Equ. (3.2).
p_s(i) = \frac{f(i)}{\sum_{j=1}^{n} f(j)}, \quad \text{with } f(j) \ge 0 \text{ and } \sum_{j=1}^{n} f(j) > 0    (3.2)
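Equ. (3.2) translates directly into a sampling routine. This is a minimal sketch, assuming non-negative fitness values with a positive sum, as required by the equation:

```python
import random

def roulette_select(population, fitness):
    """Pick one individual with probability ps(i) = f(i) / sum_j f(j), Equ. (3.2)."""
    total = sum(fitness(ind) for ind in population)   # requires f(j) >= 0, sum > 0
    r = random.uniform(0.0, total)                    # spin the wheel
    acc = 0.0
    for ind in population:
        acc += fitness(ind)
        if acc >= r:
            return ind
    return population[-1]                             # guard against rounding

# An individual with fitness 3 is selected about three times as often as one with fitness 1.
random.seed(0)
draws = [roulette_select([1, 3], fitness=lambda x: x) for _ in range(1000)]
```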
3.5.1.2 Recombination
Recombination is the primary operator for generating new individuals. In Figure 3.7 the 1-point crossover is shown, as one possible example for a recombination operator. In this example 8 parameters are binary coded on one chromosome.
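For binary-coded chromosomes, the 1-point crossover of Figure 3.7 swaps the tails of the two parents at a chosen cut position (a minimal sketch):

```python
def one_point_crossover(parent_a, parent_b, cut):
    """1-point crossover: exchange the gene tails of two chromosomes at index `cut`."""
    child_a = parent_a[:cut] + parent_b[cut:]
    child_b = parent_b[:cut] + parent_a[cut:]
    return child_a, child_b

# Cutting two 4-bit parents after the second gene:
children = one_point_crossover([0, 0, 0, 0], [1, 1, 1, 1], cut=2)
# → ([0, 0, 1, 1], [1, 1, 0, 0])
```

In a GA the cut position is usually drawn at random for every pair of parents.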
3.5.1.3 Mutation

3.5.1.4 Applications
Researchers and developers use Genetic Algorithms in many areas where a global optimum is searched for. In particular, they are used for practical optimization applications with big and complex solution spaces. In the field of electromagnetics engineering, developers use Genetic Algorithms, for example, for the design of lightweight broadband microwave absorbers, the reduction of array sidelobes in thinned arrays, shaped-beam antenna arrays or the design of broadband patch antennas [61].
In the area of network planning for 2G and 3G systems, Genetic Algorithms
are very popular. In [52, 78] these algorithms are used for the base station
location problem. The papers [21, 55, 70] demonstrate approaches for coverage
and capacity optimization in wireless networks using Genetic Algorithms.
3.6 Ant Colony Optimization
The idea of imitating the behavior of ants to find solutions for combinatorial optimization problems was initiated by Colorni, Dorigo and Maniezzo in 1991 [24, 25, 26]. The metaphor comes from the way ants search for food and find their way back to the nest. Initially, ants explore the area surrounding their nest in a random manner. As soon as an ant finds a food source, it evaluates the interest of the source (quality and quantity) and carries some of the food to the
nest. During the return trip, the ant leaves a chemical pheromone trail on the
ground, whose quantity depends on the quality of the source. The role of this
pheromone trail is to guide other ants towards the source. After a while, the
path to a good source of food will be indicated by a strong pheromone trail, as
the trail grows with the number of ants that reach the source. Since sources that
are close to the nest are visited more frequently than those that are far away,
pheromone trails leading to the nearest sources grow faster. The final result of
this process is that ants are able to optimize their work.
The transfer of this food-searching behavior into an algorithmic framework for combinatorial optimization problems led to the class of Ant Colony Optimization algorithms.
Chapter 4
Coverage- and Capacity-limiting Factors
4.1 Introduction
Unlike in GSM, coverage and capacity improvement methods cannot be treated separately in the UMTS system; there is always a tradeoff between coverage and capacity. Some of the improvement methods enhance the coverage at the cost of capacity, while others improve capacity but decrease the coverage at the same time.
In UMTS the network coverage and capacity can be either uplink or downlink
limited. It is generally accepted that service coverage is uplink limited. However, system capacity may be either uplink or downlink limited depending upon
the system configuration and the traffic profile. In rural environments, where
the network is normally planned with relatively low uplink load, the scenario is
typically capacity limited in the uplink. A downlink capacity-limited scenario is
more likely in an urban scenario, where the network is planned for higher uplink
load to increase the system capacity [68].
When a cell's capacity limit is reached, additional users cannot be admitted to the system and are therefore put to outage. Outaged users are within the coverage of the cell, but not able to access the network services. Thus, as the number of users in outage increases, the network capacity decreases. The outage problems can be managed by radio resource management (RRM) and by optimization of the base station parameters. Therefore, understanding and identifying
the limitations is important for the development of optimization strategies for
increasing coverage and capacity effectively.
This chapter provides the basis for understanding the reasons for coverage- and capacity-limited scenarios both in the up- and downlink. Further, the corresponding solutions for the enhancement of network coverage and capacity are presented.
4.2 Service Coverage
The majority of the existing literature makes the assumption that service coverage is uplink limited [68]. In general this is true, though it is fairly easy to identify scenarios where service coverage is downlink limited, for example when the data rate is asymmetric with more data in the downlink, combined with a limited base station transmit power capability. The simplest method for studying service coverage performance is a link budget. The link budget is also very useful for identifying which parameters need to be improved to enhance service coverage performance.
Techniques that require additional investment for improving the service coverage are active antennas, mast head amplifiers, higher-order receive diversity, increased sectorization, repeaters and smart antennas. Some of these techniques improve coverage, but at the cost of capacity. Other techniques, like smart antennas, simultaneously improve both coverage and capacity. A detailed description of the individual techniques can be found in [68]. A good overview of smart antennas is given in [16, 45].
Link budgets for a W-CDMA system follow the same principles as those for GSM. The main differences are the inclusion of the processing gain, the Eb/N0 requirement, the soft handover gain, the target uplink cell loading and a headroom to accommodate the fast power control loop. In the link budget the target loading is the main capacity-related parameter. A low value for the target loading corresponds to a larger cell range, but a lower cell capacity.
The link budget for a data service supporting 384 kbit/s on the downlink and 64 kbit/s on the uplink is presented in Table 4.1. The lower allowed propagation loss value indicates that the service coverage is uplink limited. If we assume a power of 37 dBm for the power amplifier of the base station, the service coverage of the scenario becomes downlink limited. A maximum of half of the total transmit power is generally allocated to any one single link, i.e. 34 dBm (similar to the example in Table 4.1, where 40 dBm represents half of the total transmit power capability of a 43 dBm power amplifier). Therefore, the downlink allowed propagation loss decreases by 6 dB and results in downlink-limited service coverage.
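The arithmetic of such a link budget can be reproduced in a few lines. This is a sketch in which the interference margin is taken as -10 log10(1 - loading) and the receive-side terms follow the structure of Tables 4.1 and 4.2; the parameter defaults correspond to the uplink columns of those tables:

```python
import math

def allowed_propagation_loss(rate_bps, eb_n0_db, tx_pwr_dbm=21.0, tx_gain_db=0.0,
                             tx_loss_db=0.0, loading=0.5, noise_figure_db=3.0,
                             rx_gain_db=18.5, rx_loss_db=2.0, fading_margin_db=3.0,
                             sho_gain_db=2.0, chip_rate=3.84e6):
    """Uplink link budget: allowed propagation loss in dB."""
    pg = 10 * math.log10(chip_rate / rate_bps)          # processing gain
    margin = -10 * math.log10(1.0 - loading)            # interference margin (noise rise)
    floor = -174.0 + noise_figure_db + margin           # interference floor, dBm/Hz
    sensitivity = floor + 10 * math.log10(chip_rate) - pg + eb_n0_db
    eirp = tx_pwr_dbm + tx_gain_db - tx_loss_db         # transmit EIRP, dBm
    eirp_req = sensitivity - rx_gain_db + rx_loss_db + fading_margin_db - sho_gain_db
    return eirp - eirp_req

# 64 kbit/s data service (uplink column of Table 4.1):
print(round(allowed_propagation_loss(64e3, eb_n0_db=2.0), 1))   # → 154.4
```

The same routine reproduces the speech column of Table 4.2 with `allowed_propagation_loss(12.2e3, eb_n0_db=4.0, tx_loss_db=3.0)`, about 156.6 dB.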
A series of typical uplink link budgets is presented in Table 4.2 for a range of
service data rates. The difference in the allowed propagation loss value may
be used to estimate the difference in site count requirements for various service
Parameter                         Uplink    Downlink
Data rate                           64        384      kbit/s
Max. transmit power                21.0       40.0 a   dBm
Transmit antenna gain b             0.0       18.5     dBi
Body/cable loss b                   0.0        2.0     dB
Transmit EIRP                      21.0       56.5     dBm
Processing gain                    17.8       10.0     dB
Required Eb/N0                      2.0        4.5     dB
MDC gain                            0.0        1.2     dB
Target loading                       50         80 d   %
Interference margin                 3.0        7.5     dB
Thermal noise density e          -174.0     -174.0     dBm/Hz
Receiver noise figure               3.0        8.0     dB
Interference floor               -168.0     -159.0     dBm/Hz
Receiver sensitivity             -117.9      -99.9     dBm
Receive antenna gain               18.5        0.0     dBi
Body/cable loss                     2.0        0.0 c   dB
Fast fading margin                  3.0        0.0     dB
Soft handover gain                  2.0        2.0 f   dB
EIRP required                    -133.4     -101.9     dBm
Allowed propagation loss          154.4      158.4     dB

Table 4.1: Example link budget for a data service (source: [68]).
a  40 dBm is a typical limit placed upon a downlink traffic channel for a 43 dBm power amplifier module, to prevent an excessive share of the base station power being allocated to a single user.
b  The values for the antenna gain, body loss and cable loss are very optimistic. A detailed study of this topic is presented in [87].
c  It has been assumed that data services do not incur a body loss.
d  The downlink target loading is a function of the traffic mix in the cell. 80 % is a typical value.
e  Measurements of the background noise floor can be found in [82].
f  The value used here for the soft handover gain in the downlink is rather coarse. Typical values, based on numerous simulations, can be found in [17].
coverage objectives [68]. Table 4.2 shows that the highest data rate service defines
the cell range in terms of the allowed propagation loss. Planning the network
for a 384 kbit/s service coverage will be sufficient to ensure acceptable coverage
performance for lower data rate services and speech.
Parameter                        Speech     Data     Data     Data
Data rate                          12.2       64      144      384    kbit/s
Max. transmit power                21.0     21.0     21.0     21.0    dBm
Antenna gain a                      0.0      0.0      2.0 b    2.0 b  dBi
Body loss                           3.0      0.0 c    0.0 c    0.0 c  dB
Transmit EIRP                      18.0     21.0     23.0     23.0    dBm
Processing gain                    25.0     17.8     14.3     10.0    dB
Required Eb/N0                      4.0      2.0      1.5      1.0    dB
Target loading                       50       50       50       50    %
Interference margin                 3.0      3.0      3.0      3.0    dB
Thermal noise density            -174.0   -174.0   -174.0   -174.0    dBm/Hz
Receiver noise figure               3.0      3.0      3.0      3.0    dB
Interference floor               -168.0   -168.0   -168.0   -168.0    dBm/Hz
Receiver sensitivity             -123.1   -117.9   -114.9   -111.1    dBm
Receive antenna gain               18.5     18.5     18.5     18.5    dBi
Cable loss                          2.0      2.0      2.0      2.0    dB
Fast fading margin                  3.0      3.0      3.0      3.0    dB
Soft handover gain                  2.0      2.0      2.0      2.0    dB
EIRP required                    -138.6   -133.4   -130.4   -126.6    dBm
Allowed propagation loss          156.6    154.4    153.4    149.6    dB

Table 4.2: Example uplink link budgets for illustrating the impact of service data rate (source: [68]).
a  The values for the antenna gain, body loss and cable loss are very optimistic. A detailed study of this topic is presented in [87].
b  It has been assumed that terminals supporting higher data rates are superior in terms of antenna configuration.
c  It has been assumed that data services do not incur a body loss.
Improving any of the parameters in the link budget will lead to an improvement
in service coverage performance. However, improving service coverage leads to a
greater average base station transmit power requirement per downlink connection.
4.3 Uplink Capacity Limitation
In the uplink, there are two possible limiting factors for uplink capacity-limited systems. One reason can be that the mobile does not have enough transmit power to achieve the required bit energy to interference plus noise density ratio (Eb/I0) to access the network services. An uplink capacity-limited scenario can also occur when the maximum uplink load is reached and therefore no additional users can be accepted in the system. The traffic associated with an uplink capacity-limited scenario is generally relatively symmetric.
4.3.1 Mobile Transmit Power
The maximum allowed transmit power of a mobile must be enough to fulfill the
Eb /I0 requirement at the base station in order to access the network services.
The transmit power PT X,M S needed for the mobile is calculated using Equ. (4.1)
and compared to the maximum allowed.
P_{TX,MS} = \frac{N_0 \, L_p}{(1 - \eta_{UL}) \left( 1 + \frac{W}{\varepsilon \, \nu \, R} \right)}    (4.1)
where N_0 is the background noise power, L_p is the propagation loss between the mobile and the base station, R, \nu and \varepsilon are the bit rate, service activity factor and uplink E_b/N_0 requirement of the chosen service, respectively, W is the W-CDMA chip rate and \eta_{UL} is the uplink loading.
Hence, if the mobile fails to fulfill the required Eb/I0, the RNC commands the mobile to increase its transmit power through the closed-loop power control algorithm, which is based on the received power measured at the base station. If this is not possible, because the maximum transmit power of the mobile has been reached, the mobile is put to outage.
From Equ. (4.1) we can see that the required transmit power of a mobile is
directly proportional to the path loss. Consequently, this power level could be
reduced by decreasing the path loss, e.g. by adjusting the antenna downtilt or
the antenna azimuth.
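Equ. (4.1) can be evaluated numerically as follows. This is a sketch; the example numbers in the usage line are illustrative, not taken from this thesis:

```python
import math

def mobile_tx_power_dbm(n0_dbm, path_loss_db, rate_bps, activity, eb_n0_db,
                        ul_load, chip_rate=3.84e6):
    """Required mobile transmit power P_TX,MS after Equ. (4.1)."""
    n0 = 10 ** (n0_dbm / 10.0)        # background noise power, linear (mW)
    lp = 10 ** (path_loss_db / 10.0)  # propagation loss, linear
    eps = 10 ** (eb_n0_db / 10.0)     # uplink Eb/N0 requirement, linear
    p = n0 * lp / ((1.0 - ul_load) * (1.0 + chip_rate / (eps * activity * rate_bps)))
    return 10 * math.log10(p)

# Illustrative speech user: -99 dBm noise power, 140 dB path loss, 50 % uplink load
p = mobile_tx_power_dbm(-99.0, 140.0, 12.2e3, activity=0.5, eb_n0_db=5.0, ul_load=0.5)
```

As the uplink load approaches 1, the denominator vanishes and the required power grows without bound, which is the noise-rise behavior discussed above.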
4.3.2 Uplink Load

\eta_{UL} = (1 + i) \sum_{k=1}^{K_N} \frac{1}{1 + \frac{W}{\varepsilon_k \, \nu_k \, R_k}}    (4.2)

where K_N is the number of users in the cell, R_k, \nu_k and \varepsilon_k are the bit rate, service activity factor and uplink E_b/N_0 requirement of user k, and i is the other-to-own-cell interference ratio. The corresponding uplink noise rise is

\text{Noise rise} = \frac{1}{1 - \eta_{UL}}    (4.3)
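The uplink load factor of Equ. (4.2) sums a small per-user contribution over all users in the cell. A sketch; the user parameters in the usage line are illustrative:

```python
def uplink_load(users, other_to_own_ratio, chip_rate=3.84e6):
    """Uplink load factor after Equ. (4.2).

    `users` is a list of (bit_rate, activity, eb_n0_linear) tuples."""
    own_cell = sum(1.0 / (1.0 + chip_rate / (eps * nu * rate))
                   for rate, nu, eps in users)
    return (1.0 + other_to_own_ratio) * own_cell

# 30 speech users (12.2 kbit/s, 50 % activity, Eb/N0 of 5 dB, i.e. 3.16 linear), i = 0.65
load = uplink_load([(12.2e3, 0.5, 3.16)] * 30, other_to_own_ratio=0.65)
```

Admission control can reject a new user whenever adding its contribution would push the load beyond the planned target loading.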
4.4 Downlink Capacity Limitation

4.4.1 Base Station Transmit Power

P_T = \sum_{n=1}^{N} P_{TX,n} + P_{common}    (4.4)
In Equ. (4.4), P_{TX,n} is the required code power for connected user n, N is the number of users served by the cell, and P_{common} is the overall transmit power of the common channels. In general, approximately 20 % of the maximum cell power P_{T,max} is assigned to the pilot and common control channels. The remaining 80 % is available to support traffic channel capacity.
When the base station reaches its maximum transmit power level, PT = PT,max
(where PT,max is the maximum base station transmit power capability), it cannot
allocate extra power to an additional user even if the cell is not highly loaded. In
this case, additional users cannot be added without modifying the base station
configuration.
All active users belonging to a cell, including the mobiles connected in soft handover, share the total transmit power P_T. Hence, a lower average code power requirement \bar{P}_{TX} = \frac{1}{N} \sum_{n=1}^{N} P_{TX,n} results in a higher cell capacity. Furthermore, it is possible to increase the number of served users by reducing the soft handover overhead: soft handover links only occur at the cell border, where mobiles experience maximum path loss and therefore require higher code power.
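A minimal admission check against the power budget of Equ. (4.4) can be sketched as simple watt-level bookkeeping (a sketch, not the admission control logic of the thesis' simulator):

```python
def can_admit(code_powers_w, new_code_power_w, common_power_w, p_max_w):
    """Admit a new user only if the total of Equ. (4.4) stays within P_T,max."""
    p_total = sum(code_powers_w) + new_code_power_w + common_power_w
    return p_total <= p_max_w

# 20 W base station, 4 W on common channels (20 % of P_T,max), ten 1 W links:
ok = can_admit([1.0] * 10, new_code_power_w=1.0, common_power_w=4.0, p_max_w=20.0)
```

Reducing either the common-channel power or the average code power per link directly increases the number of users that pass this check.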
Table 4.3 shows a range of typical W-CDMA base station transmit power configurations P_{T,max}. The capacity offered by each transmit power configuration P_{T,max} is a function of the traffic profile as well as of the maximum propagation loss defining the cell range. The greater the propagation loss, the greater the average code power \bar{P}_{TX} and the lower the cell capacity. In other words, the smaller the cell size, the lower \bar{P}_{TX} and the higher the cell capacity.

Base station Tx power
40 dBm (10 W)
43 dBm (20 W)
46 dBm (40 W)

Table 4.3: Typical base station transmit power configurations (source: [68]).
As we can see from Equ. (4.4), a part of the total transmit power of the base
station is assigned to the common pilot channel (CPICH) and the other common channels. Consequently, by reducing the CPICH power and the powers of
the other common channels, more power will be available to support the traffic
channel capacity.
In the following list, the reasons why the maximum transmit power P_{T,max} of a base station may be reached are summarized:
4.4.2
4.4.3 Downlink Code Power
In the downlink, each dedicated link needs a certain transmit power to reach the Eb/N0 requirement for a sufficient connection to the mobile. Equ. (4.5) gives the required transmit power for a certain link.
P_{TX} = \frac{\varepsilon \, \nu \, R}{W} \sum_{k} \omega_k \, L_{p_k} \left( I_{tot} - \alpha_k I_k + N_{MS} \right)    (4.5)
where N_{MS} is the background noise level at the mobile station, I_{tot} is the total wideband interference power received at the mobile station, I_k is the total wideband power received at the mobile station from base station k, L_{p_k} is the link loss from base station k to the mobile station, \alpha_k is the orthogonality factor of cell k and \omega_k is the scaling factor (relative maximum link power) for the different base stations in the active set.
This code power P_{TX} is limited per single traffic link. The limitation is defined as the maximum transmitted power P_{TX,max} on one channelization code on a given carrier. If the code power P_{TX} requested by a mobile is higher than the permitted level (P_{TX} > P_{TX,max}), the mobile will not be admitted.
4.5 Summary
Understanding the limitation mechanisms for service coverage and system capacity forms an essential part of being able to develop effective capacity optimization
strategies for UMTS radio networks.
Service coverage and system capacity can be either uplink or downlink limited. Coverage is generally uplink limited, although a low base station transmit
power capability combined with asymmetric data services may lead to a downlink coverage-limited scenario. Capacity may be either uplink or downlink limited
depending upon the planned level of uplink loading, the base station transmit
power capability, the traffic loading of the network and the performance of the
network.
There are various techniques available to increase the network capacity. According to [68], the simplest and most effective way to increase system capacity is to add one or more carriers. When all available carriers have been used, other methods such as additional scrambling codes, mast head amplifiers and active antennas, remote HF head amplifiers, higher-order receive diversity, downlink transmit diversity, beamforming, sectorization, repeaters, micro cells or smart antennas [16, 45] can be applied. Most of these techniques are described in more detail in [68].
There are also several possible ways to increase the network capacity without additional infrastructure and cost investment, for instance:

- Minimizing the intra- and inter-cell interference
- Shrinking the service coverage area
- Optimizing the CPICH power allocation
Chapter 5
Key Optimization Parameters
5.1 Introduction
This chapter describes in more detail the parameters that are used in this thesis for optimizing the capacity of a UMTS FDD network.
Careful configuration of the many network and cell parameters is required and crucial to the network operator, because these parameters determine the capability to provide services, influence the quality of service (QoS), and account for a major portion of the total network deployment and maintenance costs. However, the numerous configurable base station parameters are multi-dimensional and interdependent, and their influence on the network is highly non-linear.
The following list shows the most important parameters:

- Antenna settings
  - Antenna azimuth
  - Antenna tilt
  - Height
  - Antenna pattern
- Primary common pilot channel (P-CPICH) power level
- Soft handover parameters
  - Active set size
  - Active set window
All these parameters have a strong influence on the interference in the system and therefore on the number of served mobile terminals (the capacity of the network). The optimization algorithms described in this thesis focus on optimizing the P-CPICH power as well as the antenna tilt and the antenna azimuth. In the following, a description of these three parameters is presented. Furthermore, the influence of these key optimization parameters on the network, especially on system capacity and coverage, is explained.
5.2 Antenna Parameters
The antenna parameters are the most important ones with respect to the interference situation in the network. Besides the height of the antenna and the used pattern, the azimuth angle and the elevation angle can be tuned. The height of the antenna as well as the antenna azimuth can only be changed at considerable operating expense, whereas changing the antenna tilt and the antenna pattern is associated with less effort.
5.2.1 Antenna Azimuth
This thesis focuses on base stations with 3 sectors (cells) and a fixed spacing of 120° between the three antennas. When adjusting the antenna azimuth, all three antennas are turned in the same direction at the same time, so that the spacing between them is kept constant at 120°, as shown in Figure 5.1. The arrows symbolize the directions of the main beams of the antennas.
For finding the optimum azimuth settings in a network, the interference has to be taken into account. The goal of the azimuth optimization in this work is to reduce the intra- and inter-cell interference; as a result, the capacity of the network is increased. In Figure 5.2, the horizontal pattern of the used KATHREIN 739707 antenna [64] is shown. The pattern shows a difference in antenna gain of about 6 dB between the main direction of the antenna (0°) and an angle of 60° (at this angle the adjacent sectors of this base station begin, and there the mobile stations will initiate a handover to the neighboring cell). Due to this difference of 6 dB, the direction of the main beam of the antenna is quite significant, and it is therefore important to adjust the azimuth of the antennas in order to achieve the highest antenna gain for the users in the own cell, as well as the lowest gain (or highest attenuation) for the mobile stations located in neighboring cells. This way, less power is needed for covering the area, and therefore less interference is generated.
5.2.2 Antenna Tilt
The antenna tilt is defined as the elevation angle of the main beam of the antenna relative to the azimuth plane. Since the tilt is usually set in the direction down to the ground, the term downtilt is often used. A positive downtilt is defined as the negative elevation angle of the main beam of the antenna relative to the horizontal plane (see Figure 5.3). The service area in Figure 5.3 is the own cell, and the far-end interference area is the area of the adjacent cells.
The antenna downtilt can be implemented mechanically as well as by electrical tilting. These two tilting mechanisms have different effects: when using mechanical tilting, the antenna pattern itself stays constant and is only tilted, while with electrical tilting the antenna pattern changes when the tilt is adjusted. Due to the complexity of analyzing the system with a changing antenna pattern for every tilt value, only mechanical tilting is applied in this thesis. Hence, the optimization algorithms in this work are evaluated on networks with one fixed antenna pattern (KATHREIN 739707 antenna [64]) with a fixed predefined electrical tilt of 3° included in the vertical antenna pattern. Figure 5.4 shows the used vertical antenna pattern. A very detailed examination of the effects of electrical and mechanical antenna downtilting in UMTS networks can be found in [44].
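Mechanical downtilt, as used here, simply shifts the fixed vertical pattern by the tilt angle. A sketch; `vertical_pattern_db` is a hypothetical per-degree gain lookup table, not the KATHREIN data:

```python
def effective_gain_db(vertical_pattern_db, elevation_deg, downtilt_deg):
    """Mechanical tilt: the pattern itself is unchanged, only rotated by the tilt."""
    angle = round(elevation_deg + downtilt_deg) % 360   # angle seen in the fixed pattern
    return vertical_pattern_db[angle]

# Hypothetical pattern: 0 dB on the main beam, -3 dB at 10 degrees off it.
vertical_pattern_db = [0.0] * 360
vertical_pattern_db[10] = -3.0
gain = effective_gain_db(vertical_pattern_db, elevation_deg=4.0, downtilt_deg=6.0)
```

Electrical tilting cannot be modeled by such a shift, because a different pattern table would be needed for every tilt value, which is exactly the complexity avoided in this thesis.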
i = \frac{I_{oth}}{I_{own}}    (5.1)
In Equ. (5.1) Ioth denotes the inter-cell interference (interference from the other
cells) and Iown is the intra-cell interference (interference from the own cell).
By down-tilting the antennas, the other-to-own-cell interference ratio i can be
reduced: The antenna main beam delivers less power towards the neighboring
base stations, and therefore most of the radiated power goes to the area that is
intended to be served by this particular base station [20]. Due to the fact that
the interference in the system is decreasing, the capacity increases and more users
can be served in the network. However, down-tilting the antenna will also reduce
the sectorization efficiency, which will decrease the cell capacity.
Both the other-to-own-cell interference ratio i and the sectorization efficiency affect the overall network capacity. According to [20], for small and moderate antenna downtilt angles the improvement provided through inter-cell interference rejection dominates, and a net increase in capacity can be achieved. For larger downtilt values the reduction of the sectorization efficiency dominates, and the result is a net decrease in capacity.
Additionally, the antenna tilt adjustment affects the cell coverage area. Too much downtilting may cause the service area to become too small, and holes in the coverage of the network can occur. Furthermore, if the downtilt exceeds a certain value, the interference in the neighboring cells increases again due to the side lobes of the vertical antenna pattern. Figure 5.5 in Section 5.4 shows this effect. In [88] it is shown that, for smaller inter-site separation, a higher downtilt is required to mitigate the inter-cell interference. As the inter-site separation increases, a smaller downtilt is advantageous, offering higher gains to distant users. Hence, the impact on the cell coverage area limits the tilt to reasonable values.
The simulation analysis of [69] shows that the optimum value for the antenna tilt depends on the propagation environment, the cell site, the user locations, and the antenna radiation pattern. Furthermore, in order to achieve the highest number of served users, it is crucial to effectively control the inter-cell interference and the soft handover overhead. It is also stated that, due to the side lobes and nulls of the antenna radiation pattern, variations of i and of the coverage probability can occur as a function of the antenna tilt angle.
In [72] it is demonstrated that antenna tilt tuning can also help to relieve congestion in hot-spot sectors and maintain the blocking probability at an acceptable
level.
Detailed descriptions of the effect of antenna tilt on the system capacity are
presented in [20, 60, 69, 72, 88].
5.3 CPICH Power
The common pilot channel in UMTS consists of two subchannels, the primary
CPICH (P-CPICH) and the secondary CPICH (S-CPICH). A detailed description
of the CPICH channels is given in Section 2.3.2. The algorithms in this thesis
focus on the optimization of the P-CPICH power, and therefore the term CPICH
will be used for P-CPICH in the following.
The CPICH is very important for handover, cell selection and cell reselection. After the mobile station is switched on, and while roaming in the network, the mobile measures and reports the received level of chip energy to interference plus noise density ratio (Ec/I0) on the CPICH to the base station for the cell selection procedures. Ec is the average energy per pseudo-noise (PN) chip, and I0 denotes the total received power density, including signal and interference, as measured at the mobile station antenna connector². This Ec/I0 ratio is given by Equ. (5.2),
\frac{E_c}{I_0} = \frac{RSCP_{CPICH}}{RSSI}    (5.2)
where the received signal code power (RSCPCP ICH ) is the received power of the
CPICH measured at the mobile station. It can be used to estimate the path loss,
since the transmission power of the CPICH is either known or can be read from
the system information. The received signal strength indicator (RSSI) is the
wideband received power within the relevant channel bandwidth in the downlink.
The cell with the highest received CPICH level at the mobile station is selected
as the serving cell. As a consequence, by adjusting the CPICH power level, the
cell load can be balanced between neighboring cells, which reduces the inter-cell
interference, stabilizes network operation and facilitates radio resource management [103]. Reducing the CPICH power of one cell causes part of the terminals
to hand over to adjacent cells, while increasing it invites more terminals to hand
over to the own cell, as well as to make their initial access to the network in that
cell.
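The cell (re)selection rule described above can be sketched directly; the cell names, power levels and link losses below are illustrative:

```python
def serving_cell(cpich_power_dbm, path_loss_db):
    """Pick the serving cell: highest received CPICH level (RSCP) at the mobile."""
    rscp = {cell: cpich_power_dbm[cell] - path_loss_db[cell]
            for cell in cpich_power_dbm}
    return max(rscp, key=rscp.get)

# Lowering cell A's CPICH by 6 dB hands this mobile over to cell B:
cell = serving_cell({"A": 33.0, "B": 33.0}, {"A": 128.0, "B": 130.0})   # "A" serves
cell2 = serving_cell({"A": 27.0, "B": 33.0}, {"A": 128.0, "B": 130.0})  # now "B"
```

This is exactly the load-sharing mechanism exploited by the optimization algorithms: shifting CPICH power moves the serving-cell boundary without touching the users.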
During the radio network planning process, the CPICH transmit power of the
base stations should be set as low as possible, while ensuring that the serving cells
and neighboring cells can be measured and synchronized to and the CPICH can be
2 There exist quantities that are a ratio of an energy per chip to a PSD. This is the common practice of relating energy magnitudes in communication systems. If both energy magnitudes in the ratio are divided by time, the ratio is converted from an energy ratio to a power ratio, which is more useful from a measurement point of view. It follows that an energy per chip of X dBm/3.84 MHz can be expressed as a mean power per chip of X dBm. Similarly, a signal PSD of Y dBm/3.84 MHz can be expressed as a signal power of Y dBm.
used as a phase reference for all other downlink physical channels. Too high values
of CPICH power will cause the cells to overlap and therefore create interference
to the neighboring cells, called pilot pollution, which will decrease the network
capacity. Furthermore, the CPICH power is part of the total transmit power
of the base station, which is generally limited. Thus, less CPICH power would
provide more power for the traffic channels, and therefore increase the capacity.
On the other hand, the mobile stations are only able to receive the CPICH down
to a certain threshold level of Ec /I0 , which determines the coverage area. Due to
that fact, setting the CPICH power too low will cause uncovered areas between
the cells. In an uncovered area, CPICH power is too weak for the mobile to
decode the signal, and call setup is impossible. According to the specifications of
the Third Generation Partnership Project (3GPP), the mobile must be able to
decode the pilot from a signal with Ec /I0 of -20 dB [2].
To make Equ. (5.2) more transparent, the Ec/I0 ratio can also be written as in Equ. (5.3):

CPICH_{E_c/I_0} = \frac{P_{CPICH} / L_p}{\sum_{i=1}^{numBSs} \frac{P_{TX,i}}{L_{p_i}} + I_{ACI} + N_0}    (5.3)

where P_{CPICH} is the CPICH power of the best server, L_p is the link loss to the best server, P_{TX,i} is the total transmit power of base station i, L_{p_i} is the link loss to base station i, I_{ACI} is the adjacent channel interference, N_0 is the thermal noise of the mobile and numBSs is the number of base stations in the network.
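Equ. (5.3) can be evaluated in linear units as follows. A sketch; the example powers and losses are illustrative, and the thermal-noise default is an assumed value of roughly -108 dBm over the 3.84 MHz band:

```python
import math

def cpich_ec_io_db(p_cpich_w, lp_best, p_tx_w, link_losses, i_aci_w=0.0,
                   n0_w=1.5e-14):
    """CPICH Ec/I0 after Equ. (5.3); powers in watts, link losses linear."""
    signal = p_cpich_w / lp_best
    interference = sum(p / l for p, l in zip(p_tx_w, link_losses))
    return 10 * math.log10(signal / (interference + i_aci_w + n0_w))

# 2 W CPICH out of 10 W total cell power, 120 dB link loss to the only cell:
ec_io = cpich_ec_io_db(2.0, 1e12, p_tx_w=[10.0], link_losses=[1e12])
```

Comparing the result against the -20 dB decode threshold cited above gives a simple per-pixel CPICH coverage test.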
The soft handover (SHO) area can also be controlled by the strength of the CPICH power: by reducing the CPICH power, the SHO areas decrease. However, a certain amount of overlapping cell boundaries is necessary for mobiles near the cell border to perform SHO and to counteract fluctuations of the received signal power.
In conclusion, the level of the CPICH power is very important for reaching a high capacity in the system. Therefore, the CPICH power level is a key optimization parameter and is included in the optimization strategies for capacity increase in this thesis. The power levels of the other common channels (PCH, SCH, ...) are typically set relative to the CPICH power level [68]. Therefore, the optimization algorithms in this thesis change the power of the PCH and SCH in the same way as the CPICH power; for example, if the CPICH power is decreased by 1 dB, the power of the PCH and SCH is also reduced by 1 dB.
Further information regarding the influence of the CPICH power adjustment on the system capacity can be found in [60, 65, 68, 72, 91, 103].
5.4
Influence of CPICH Power and Antenna Tilt
To show the influence of CPICH power and antenna tilt, different settings for
all base stations are evaluated on the small scenario, which is introduced in
Section 7.4.2. It consists of 9 base stations equipped with 3-sector antennas,
thus comprising 27 cells. In this scenario only speech users are used instead
thus comprising 27 cells. In this scenario only speech users are used instead
of two-way 64 kbit/s data users, which are normally used for the small network
scenario. Figure 7.4 shows the distribution of the base stations as well as the user
distribution.
For this investigation the antenna downtilt was varied from 0° up to 16°, in steps
of 1°, with the same value for the antenna tilt in each cell. Further, the CPICH
power value was changed from 33 dBm in 1 dB steps down to 10 dBm, again with
the same value in each cell. All possible combinations for these two parameters
were evaluated on the small scenario with the UMTS FDD network simulator in
CPICH coverage verification mode 1 (see Section 7.2.2.1). A 3D-plot in Figure 5.5
shows the results of this investigation.
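The exhaustive evaluation described above can be sketched as a simple grid search. Here `evaluate` is an assumed stand-in for a full simulator run returning the number of served users; it is not part of the actual simulator interface.

```python
def sweep(evaluate):
    """Exhaustive grid over the settings of Section 5.4: antenna downtilt
    0..16 degrees in 1-degree steps, CPICH power 33 dBm down to 10 dBm in
    1-dB steps, the same value applied in every cell.  Returns the best
    (served, tilt, cpich) triple found."""
    best = None
    for tilt in range(0, 17):              # 0..16 degrees
        for cpich in range(33, 9, -1):     # 33 dBm down to 10 dBm
            served = evaluate(tilt, cpich)
            if best is None or served > best[0]:
                best = (served, tilt, cpich)
    return best
```

Applied to the small scenario this enumerates all 17 x 24 = 408 parameter combinations.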
Figure 5.5: Capacity for different CPICH power and antenna tilt settings. (Capacity is measured as served users.)
In Figure 5.5 the x- and y-axis show the key optimization parameters CPICH
power and antenna tilt. The z-axis shows the number of served users in the
network. From the plot we see that if the CPICH power is varied by the same
value in each cell, there is no strong influence on the number of served users.
This means that the absolute level of the CPICH power is not very important
for the capacity, because the serving cell areas remain the same. However, if
the cells had different CPICH power values, this would influence the serving
cell areas in the network and thus cause an increase or decrease in capacity,
depending on the setting.
The antenna tilt affects the capacity in a different way. From Figure 5.5 we can
divide the antenna tilt range into 4 areas:

area1: 0°-6°
area2: 6°-11°
area3: 11°-13°
area4: 13°-16°
In area1 the capacity increases, if the antenna downtilt value increases. This is
due to the reduction of inter-cell interference. In area2 the capacity decreases
due to the antenna radiation pattern (see Figure 5.4 in Section 5.2.2). The first
side lobe in the antenna pattern (compare with Figure 5.4) increases the capacity
in the network in area3. From a downtilt of 13° (area4) up to higher values the
capacity decreases again. Note that these results are only valid for a scenario
with a plane terrain. In a scenario with hilly terrain the situation can be totally
different.
5.5
Summary
Adjusting the antenna parameters antenna tilt and antenna azimuth as well as
the CPICH power makes it possible to increase the network capacity by:
1. Reducing inter-cell interference and pilot pollution.
2. Optimizing base station transmit power resources.
3. Load sharing and balancing between cells.
4. Optimizing SHO areas.
The amount of transmit power for the CPICH is not specified in the 3GPP
standard. So, it is up to the network operator to assign appropriate power levels
to the CPICH. In [68] it is suggested that the CPICH power be set to about 5-10 %
of the total cell transmit power capability.
Despite the challenges associated with adjusting the CPICH power, antenna tilt
and antenna azimuth settings, it is clear that these parameters provide low-cost
techniques for optimizing expensive UMTS networks, because no additional expenditure in infrastructure is necessary.
Chapter 6
Fitness Function and
Performance Indicators
6.1
Introduction
6.2
Fitness Function
For the evaluation of the network, a fitness function has to be defined. The fitness
function represents the optimization goal. This thesis focuses on the capacity
optimization in a UMTS FDD network. So, the main goal of the optimization
process is to increase the number of served users.
In this section two fitness functions, which are used during this work, are presented. First, a basic version is introduced, which is used for the local optimization algorithms (Section 8.2). During the development of the genetic algorithm, this basic fitness function was extended (Section 6.2.2).

6.2.1
Basic Fitness Function
As mentioned before, the goal of the optimization is to increase the capacity. So,
the basic fitness function considers the number of served users in the network
as the goal of optimization. Equ. (6.1) shows the fitness function for one test
solution i¹.

$$g(i) = \sum_{k=0}^{cells} served_k \qquad (6.1)$$
In Equ. (6.1), servedk is the number of served users of cell k, and cells is the
number of total cells in the network.
6.2.2
Extended Fitness Function
The basic fitness function (see Equ. (6.1)) was extended for the use of the genetic
algorithm. As before, the number of served users is used again in the fitness
function. In addition to the capacity, also the coverage area and the number of
mobiles in soft handover are taken into account for the fitness calculation. This
is done to increase the accuracy of the fitness value and to allow the algorithm
to differentiate between two otherwise equal solutions: If two solutions have the
same number of served users, the solution with the larger coverage area and
fewer mobiles in SHO has a higher fitness value. In Equ. (6.2) the extended
fitness function g(i) for one test solution i is shown.
$$g(i) = \sum_{k=0}^{cells} served_k + g_{cov} + g_{SHO} \qquad (6.2)$$
In Equ. (6.2), servedk is the number of served users of cell k, and cells is the
number of total cells in the network. The term gcov represents the coverage probability of the pixels in the simulation area (covered pixels over existing pixels),
scaled between 0 (no coverage) and 1 (all pixels are covered). The SHO proportion is taken into account by the term gSHO in Equ. (6.2). In order to serve more
mobiles in a cell, the idea is to reduce the number of SHO links in this cell. In
Equ. (6.3) the calculation of gSHO is shown.

¹ In this thesis one test solution means one evaluated network state with one particular
parameter setting (antenna azimuth, antenna tilt and CPICH power).

$$g_{SHO} = 1 - \frac{1}{cells} \sum_{k=0}^{cells} \frac{g_{SHO,cell}(k)}{g_{SHO,max}} \qquad (6.3)$$
The term gSHO,cell (k) in Equ. (6.3) denotes the number of SHO links in cell k
and gSHO,max is the maximum over all gSHO,cell (k). The range of gSHO is between
0 and 1. A high value of gSHO means a low SHO rate and a low value of gSHO
characterizes a high amount of SHO connections.
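Equ. (6.3) and the extended fitness evaluation can be sketched as follows. The additive combination of the three terms in `extended_fitness` is an assumption based on the tie-breaking role described above; only the individual terms are taken from the text.

```python
def g_sho(sho_links_per_cell):
    """Equ. (6.3): 1 minus the mean of the per-cell SHO link counts,
    each normalised by the maximum over all cells."""
    g_max = max(sho_links_per_cell)
    if g_max == 0:
        return 1.0                       # no SHO links at all -> best value
    cells = len(sho_links_per_cell)
    return 1.0 - sum(g / g_max for g in sho_links_per_cell) / cells

def extended_fitness(served_per_cell, g_cov, sho_links_per_cell):
    """Extended fitness: network-wide served users plus the coverage
    term g_cov (0..1) and the SHO term g_SHO (0..1); since both extra
    terms stay below 1 each, they only break ties between solutions
    with equal served-user counts."""
    return sum(served_per_cell) + g_cov + g_sho(sho_links_per_cell)
```

Because the two correction terms lie between 0 and 1, a solution serving even one user more always wins, exactly as the text requires.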
There is always a tradeoff between capacity and coverage, QoS and costs. The
goal in this thesis is to increase network capacity in a cost effective way, while
maintaining the coverage and QoS. Figure 6.1 shows the tradeoff between these
factors.
6.2.3
Several optimization algorithms are presented in this thesis, using either the
basic fitness function presented in Section 6.2.1 or the fitness function from Section 6.2.2. Table 6.1 summarizes which fitness function is used for which algorithm.
Algorithm                         Section   Fitness function
Rule Based Approach               8.2.1     basic fitness function, Equ. (6.1)
Simulated Annealing               8.2.2     basic fitness function, Equ. (6.1)
Adaptive Rule Based Approach      8.2.3     basic fitness function, Equ. (6.1)
Genetic Algorithm                 8.3       extended fitness function, Equ. (6.2)
Analytic Optimization Algorithm   8.4       basic fitness function, Equ. (6.1)*

Table 6.1: Fitness function used by each optimization algorithm.
The fitness function value is not considered by the Analytic Optimization Algorithm, but it
was used during the development of the algorithm. Further details are described in Section 8.4.
6.3
Grade of Service
Besides the fitness value, a second indicator is very important during the optimization process: the proportion of the users that can be provided with a service.
This value is called Grade of Service (GoS) and describes the ratio of served
users over all existing users². In this thesis the GoS is defined as
$$GoS = \frac{served}{existing} \qquad (6.4)$$
In Equ. (6.4), served denotes the total number of served users in a defined area
(e.g. the whole simulation area), and existing is the total number of simulated
users in the same area. During the optimization process, GoS increases from its
initial value of 95 % until it has reached 100 %. Then all users are served and the
optimization algorithm cannot proceed any further. However, the network could
accept more users. Thus, the different optimization algorithms developed in this
thesis apply the following approach: When GoS reaches a value of 96 %, new
users are added to the network until the initially defined GoS of 95 % is reached
again.
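The Add Users mechanism can be sketched as follows. The simplifying assumption that all currently served users remain served after new users are admitted belongs to this sketch, not to the thesis; in the real process the simulator re-evaluates the network after each addition.

```python
def add_users_step(served, existing, target_gos=0.95, trigger_gos=0.96):
    """Sketch of the Add Users mechanism from Section 6.3: once the GoS
    of the current network state reaches the 96 % trigger, enough new
    users are admitted to push the GoS back down to the initial 95 %
    target.  Returns the number of users to add."""
    gos = served / existing
    if gos < trigger_gos:
        return 0                                   # nothing to do yet
    # choose n so that served / (existing + n) is about target_gos,
    # assuming all currently served users stay served
    n = round(served / target_gos) - existing
    return max(n, 0)
```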
The calculation of the number of additional users is done within the network
evaluation with the UMTS FDD network simulator described in Section 7.2. The
² Note that some literature defines GoS as the probability of a call being blocked or delayed
for more than a specified interval.
service mix (40 % speech users and 60 % 64 kbit/s data users) after adding the additional users remains the same. For the big network scenario (see Section 7.4.1)
the new users are only added in the optimization area (usergroups optarea1 and
optarea2 ), and not in the whole network scenario. However, in the small scenario
the additional users are added in the whole network. Remember, the optimization
area in the small scenario covers the whole network scenario (see Section 7.4.2).
In this thesis, the function for adding the additional users is referred to as Add
Users.
6.4
Performance Indicators

6.4.1
Outaged Mobiles
The first performance indicator that is used for the algorithms is the number of
outaged users of a cell. A user is put to outage for several reasons, which are
called the outage reasons. In Chapter 4, a general description of the coverage- and capacity-limiting factors, which reflect the outage reasons, is given.
The evaluation of a network scenario with the static UMTS FDD network simulator delivers the number of outaged mobiles per cell. The following list shows
the outage reasons used by the simulator:
DL outage:

DL cell power: If the total base station power is too high, the connection with the highest code power is closed. This is repeated until the
base station transmit power is below the predefined maximum value.
The outage priority could also be on the existing SHO connections
(not implemented in the used simulator). The latter criterion usually
equals the former, since SHO connections in most cases have to be
transmitted with higher power levels than active links. This is obvious, since SHO connections only occur at the cell border and they
require higher DL code power levels than active connections.

DL code power: If the required code power for a mobile is too high, this
mobile is put to outage.

OVSF code limitation: In that case the number of connections
has to be reduced. Similar to the procedure regarding the maximum
base station power, the priority is on the connection with the highest
power, but could also be on the SHO links.
6.4.2
Quality Factor

For the adaptive adjustment decisions a quality factor QF is defined per cell as
the minimum of three utilization factors. The first value, cell_load_factor, is a
measure of the uplink cell load utilization and is defined as

$$cell\_load\_factor = \frac{load\_threshold - cell\_load}{load\_threshold} \qquad (6.5)$$

In Equ. (6.5), load_threshold denotes the planned uplink cell load and cell_load
is the actual cell loading. The second value, cell_pwr_factor, is a measure of base
station transmit power utilization in the downlink and is defined as
$$cell\_pwr\_factor = \frac{max\_cell\_pwr - cell\_tx\_pwr}{max\_cell\_pwr} \qquad (6.6)$$
In Equ. (6.6), max cell pwr denotes the maximum transmit power of a cell and
cell tx pwr represents the current total transmit power of that cell. The third
value, ovsf f actor, is a measure of the OVSF code utilization in the downlink
and is defined as
$$ovsf\_factor = \frac{ovsf\_limit - ovsf\_utilization}{ovsf\_limit} \qquad (6.7)$$
In Equ. (6.7), ovsf limit is the maximum available number of OVSF codes, which
is 512; ovsf utilization is the number of used OVSF codes. For example, a voice
call occupies 2 codes, so 256 voice users can be served. If there are only 64 kbit/s
data users, 32 users can be served, because one 64 kbit/s data user occupies 16
codes.
The range of each of these three factors is between zero and one. For the QF
the minimum of these three values is taken and therefore the range of the QF
is between zero and one. A low value for QF describes a heavily loaded cell,
and a high value for QF describes a weakly loaded cell. Therefore, by using
the QF as the performance indicator, CPICH power and antenna tilt settings
can be adjusted adaptively according to the loading condition in the uplink and
downlink of a cell.
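The QF computation can be sketched as follows. The per-factor formulas (utilization headroom relative to the respective limit) are reconstructions from the descriptions of Equs. (6.5)-(6.7); treat them as assumptions.

```python
def quality_factor(cell_load, load_threshold,
                   cell_tx_pwr, max_cell_pwr,
                   ovsf_utilization, ovsf_limit=512):
    """Quality factor QF: the minimum of the three utilization factors of
    Equs. (6.5)-(6.7).  Each factor is 1 for an empty cell and 0 at the
    respective limit, so a low QF marks a heavily loaded cell."""
    load_factor = (load_threshold - cell_load) / load_threshold
    pwr_factor = (max_cell_pwr - cell_tx_pwr) / max_cell_pwr
    ovsf_factor = (ovsf_limit - ovsf_utilization) / ovsf_limit
    return min(load_factor, pwr_factor, ovsf_factor)
```

With 256 voice users the 512 available OVSF codes are exhausted, so the OVSF term drives QF to zero regardless of load and power headroom.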
6.4.3
Summary

In this chapter two performance indicators, the number of outaged mobiles as well
as the quality factor QF, were introduced. Not all algorithms use both indicators.
Table 6.2 gives an overview of which indicator is used for which algorithm.
Algorithm                       Outaged mobiles   Quality factor QF
Rule Based Approach             x
Simulated Annealing             x
Adaptive Rule Based Approach                      x
Genetic Algorithm               x

Table 6.2: Performance indicator used by each algorithm.
The Analytic Optimization Algorithm uses different parameters as the basis for the
parameter adjustment, which are described in Section 8.4.
Chapter 7
Simulation Environment
7.1
Introduction
To assess the different optimization techniques, the algorithms have
to be evaluated on a network scenario. Furthermore, a network simulator is
necessary to calculate the coverage- and capacity-relevant information.
In this chapter the static UMTS network simulator used and its interaction
with the optimization algorithms are described in Section 7.2 and Section 7.3.
Further, the network scenarios, on which the different algorithms are evaluated,
are presented in Section 7.4.
7.2
UMTS FDD Network Simulator
For the evaluation of the network configuration in this thesis, a static UMTS
FDD network simulator based on the Monte Carlo approach is used. This approach utilizes a sufficient number of independent snapshots of potential user
distributions with one fixed network configuration, which allows the compilation of
significant statistics of the system performance parameters. In
the static approach, each single snapshot corresponds to a network situation in
equilibrium, which means that the physical layer procedures like power control
are applied iteratively for each user in the system. This way the near-far problem
of the real system can be combated and a stable system condition found, as it
would arise with fast power control operating in real time. Due to the static
scenario there is no need for a real-time fast power control implementation.
The Monte Carlo approach takes not only propagation conditions into account,
but also the changes of the individual services, data rates, requirements, user
positions and some other time-invariant parameters like SHO-gain. In the simplest case the different snapshots (realizations) in a run represent various user
positions in the network area. For each realization (which is initialized at the
beginning of each run) an iteration is done until a certain convergence criterion
is fulfilled. This criterion can, as an example, be based on the variation of the
TX power per iteration for all mobiles in the system. If the change request for
the transmit power of a terminal between two consecutive iterations is below a
certain threshold for, e.g. 95 % of all mobiles, convergence is attained. When this
stable situation has been reached, the number of mobiles able to achieve their
performance targets can be determined and a new realization can be computed.
For a representative statistical evaluation it is important to perform a sufficient
number of realizations of the investigated configuration. The principles of the Monte Carlo
approach as well as a schematic flowchart of a static simulator are shown in
Figure 7.1.
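The per-snapshot convergence criterion described above can be sketched as follows. The 0.1 dB threshold is an illustrative value; only the 95 % fraction is taken from the text.

```python
def snapshot_converged(tx_power_changes_db, threshold_db=0.1, fraction=0.95):
    """Convergence criterion from Section 7.2: a snapshot has reached
    equilibrium when the transmit-power change request between two
    consecutive power-control iterations is below a threshold for at
    least the given fraction (e.g. 95 %) of all mobiles."""
    ok = sum(1 for d in tx_power_changes_db if abs(d) < threshold_db)
    return ok >= fraction * len(tx_power_changes_db)
```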
7.2.1
7.2.2
CPICH Coverage Verification
For the optimization of a UMTS system the CPICH power level is very important
(see Section 5.3 and Section 5.4). The level should be optimized very accurately
to use the power resources of the base station efficiently, to reduce the interference
level, and nevertheless provide the required CPICH coverage in the system.
Due to the importance of the CPICH coverage and to achieve the minimum possible CPICH power level, it is mandatory to know how the simulator defines
CPICH coverage when deriving CPICH optimization strategies.
The UMTS network simulator CAPESSO™ from SYMENA provides two modes
for CPICH coverage verification: mode 1 with a fixed threshold for CPICH coverage and mode 2, including the interference by an Ec/I0 threshold (for details see
[68]). In the following both modes are explained in detail.
7.2.2.1
Mode 1

In CPICH coverage verification mode 1, a mobile is covered by the CPICH if the
received pilot power exceeds a fixed threshold:

$$RSCP_{CPICH} > S \qquad (7.1)$$

In Equ. (7.1) RSCP_CPICH is the received signal code power of the CPICH and S is
the receiver sensitivity with a default value of -120 dBm.
It is crucial to mention that the CPICH coverage verification mode 1 does not take
any interference into account for the calculation of coverage. In this mode the
CPICH coverage of a single cell would be the same with or without the presence
of the rest of the system.
7.2.2.2
Mode 2
The CPICH coverage verification mode 2 includes the interference level of the system in the calculation of CPICH coverage. In this mode, a mobile in the simulated
scenario is covered by the CPICH if the Ec/I0 of the received CPICH at the mobile station exceeds a certain threshold. This threshold is called CPICH_Ec/I0_thres
according to the 3GPP specification [2] and is set to -12 dB for most of the
simulations performed in this work. The mobiles which do not fulfill the Ec/I0
requirement are put to outage (see Section 6.4.1). In CAPESSO™ the Ec/I0
threshold is set by the receiver sensitivity parameter. A receiver sensitivity of
-120 dBm is equivalent to an Ec/I0 threshold of -12 dB. Equ. (7.2) shows the
relationship between the Ec/I0 threshold and the receiver sensitivity.
$$CPICH\ E_c/I_0\ thres = S - P_{noise} \qquad (7.2)$$
In Equ. (7.2) S denotes the receiver sensitivity and Pnoise is the noise power at
20 °C, which is -108.09 dBm.
The quantity computed in mode 2 is also called CPICH Ec/I0, although it is
actually a signal-to-noise ratio (SNR); the important difference will be explained
in the following. In Equ. (7.3) the calculation in CAPESSO™ for mode 2 is presented.
$$CPICH\ E_c/I_0 = \frac{RSCP_{CPICH}}{RSSI - RSCP_{CPICH}} > CPICH\ E_c/I_0\ thres \qquad (7.3)$$
In Equ. (7.3) RSCP_CPICH denotes the received signal code power of the CPICH
as measured by the mobile, and RSSI (received signal strength indicator) is the
wideband received power within the relevant channel bandwidth in the downlink.
The mentioned difference is that in the 3GPP specification the CPICH Ec/I0 is
the ratio of the received energy per PN chip for the CPICH to the total received
power spectral density at the UE antenna connector as shown in Equ. (5.2) and
Equ. (5.3), whereas in CAPESSO™ the CPICH Ec/I0 is the real SNR.
7.3
Simulator Interface
The static UMTS network simulator CAPESSO™ from SYMENA, Software &
Consulting GmbH consists of two parts: the simulation engine and the graphical
user interface (GUI). The GUI helps the operator to feed the simulation with the
relevant data (user distribution, network configuration and parameters). Further,
the GUI prepares the simulation results in tables, diagrams and plots for the
operator. As interface between the simulation engine and the GUI, XML¹ files
are used, as shown in Figure 7.2.
Two files are used as interface: One, which includes all the input information
for the simulator, in Figure 7.2 named as XML Input File. The second one, the
XML Output File, is generated after the simulation is finished and comprises all
the results for the GUI.
The XML files are also used as interface for the optimization process. The optimization algorithm prepares the input file for the simulator and starts the simulation. After the simulation is finished, the output file of the network analysis is
read by the optimization algorithm for the calculation of the fitness function and
GoS. Figure 7.2 shows this interface marked in blue.
7.4
Network Scenarios
For the evaluation of the several optimization algorithms two virtual network
scenarios of a typical European city are used. In the first (bigger) scenario
the network covers the whole area of the city. The second scenario contains only
the downtown area. In the following both scenarios are described. The more interesting
one is the bigger one; for the results presented in Chapter 9 it is normally
used, and the small scenario is used only where explicitly mentioned.
¹ XML: Extensible Markup Language. The next generation of HTML, it is now viewed as the
standard way to exchange information between systems that do not share common platforms (www.xml.org).
7.4.1
Big Scenario
In the big network scenario 25 base stations equipped with 3-sector antennas,
thus comprising 75 cells, are used. Figure 7.3 shows the distribution of the base
stations as well as the distribution of the users. The arrows in the figure represent
the main antenna direction of each cell, and the black dots symbolize the users
in the system.
Figure 7.3: Base station location and one user distribution of the big network
scenario.
In this simulation scenario the area inside the rectangle is defined as the optimization area. This means that this region is the area of interest, and the fitness
function as well as the GoS is evaluated over this part.
In the total simulation area the users are distributed equally in each cell according
to the best server plot. The best server plot shows the regions of the dominance
areas of the different cells due to the highest received CPICH power level. That
means, all the pixels (one pixel is equivalent to 100 m × 100 m) in the best server
plot of the scenario with the highest received CPICH power level from one cell
(so-called serving cell) are dedicated to that cell and describe the best server area
of that certain cell. The same number of mobiles are distributed in each of these
areas. This kind of distribution is called Best Server Equal distribution in this
thesis. It is also possible to use an equally distributed user model. In this model
the users are equally distributed over a defined rectangular area. In this work
this distribution model is called Equal distribution.
For the simulation all the mobiles are assigned to four different user groups. It is
possible to use Best Server Equal and Equal distribution in each user group,
as well as a certain service (voice or data). The big scenario uses four usergroups:
usergroup1, usergroup2, optarea1 and optarea2. The first two usergroups cover
the whole scenario and so define the service mix in the whole scenario. Usergroups
optarea1 and optarea2 only cover the optimization area in the network and so
define the service mix in this area. For the results presented in Chapter 9 normally
the Best Server Equal distribution is used in all usergroups. In the scenario a
service mix of 40 % 12.2 kbit/s speech users and 60 % two-way 64 kbit/s data
users is assumed with an activity factor of 50 % when using speech service and
100 % when using PS data service. The initial number of users in the whole
network is 1057. Additional users during the optimization process are admitted
in optarea1 and optarea2 by the Add Users function (see Section 6.3).
For the variation of the positions of the mobiles in the scenario (generate different
snapshots, see Figure 7.1 in Section 7.2), the initialization value of the random
generator for the different usergroups can be changed by a parameter, which is
called rand init.
In the simulator also some parameters of the mobile stations can be defined and
accordingly varied. The maximum transmit power of the mobile station is set
to 21 dBm. The mobiles have an antenna gain of 0 dB, and the body loss as
well as the receiver noise figure are set to 0 dB. The threshold down to which
the received CPICH power level is considered for the interference calculations is
-126 dBm, and the receiver sensitivity S of the mobiles is -120 dBm. Since the
receiver sensitivity defines the CPICH coverage, S can also take other values
(see Section 7.2.2).
7.4.2
Small Scenario
The second scenario, which is used for some investigations in this work, represents
the downtown of the European city used for the big network scenario. This
scenario is equivalent to the optimization area of the scenario described in the
former section.
In the small network scenario 9 base stations equipped with 3-sector antennas,
thus comprising 27 cells, are used. Figure 7.4 shows the distribution of the base
stations as well as the user distribution.
Figure 7.4: Base station location and one user distribution of the small network
scenario.
In this scenario the optimization area comprises the whole network. Two usergroups are used: usergroup1 and optarea1. These usergroups cover the whole
area. The Best Server Equal distribution with only two-way 64 kbit/s data
users is used in both usergroups. The initial number of users in the network is
376. The other parameters are the same as in the big scenario. Note that
additional users are only admitted in optarea1 by the Add Users function (see
Section 6.3).
Chapter 8
Optimization Algorithms
8.1
Introduction
In this main chapter of the thesis the developed optimization algorithms are presented. Altogether five different strategies for the optimization of antenna tilt
(see Section 5.2.2) and CPICH power (see Section 5.3) were developed. First,
the optimization with local algorithms was studied, resulting in a Rule
Based Approach algorithm, a Simulated Annealing algorithm and an Adaptive
Rule Based Approach algorithm. Further, a Genetic Algorithm approach was implemented and studied in more detail. The disadvantage of the
local approaches and especially the genetic approach is that they are very time-consuming.
So, an analytic algorithm was also developed with the objective to use as few
steps as possible in contrast to the other strategies. Besides antenna tilt and
CPICH power, this algorithm also optimizes the antenna azimuth (see Section 5.2.1).
The following list summarizes all developed approaches:

- Local Optimization Algorithms:
  - Rule Based Approach
  - Simulated Annealing
  - Adaptive Rule Based Approach
- Genetic Algorithm
- Analytic Optimization Algorithm
The Adaptive Rule Based Approach and the Analytic Optimization Algorithm
were developed and studied within the scope of the diploma theses of Yee Yang
Chong and Wolfgang Karner under supervision of the author. In this PhD thesis
only a brief summary of the approaches as well as some optimization results for
comparison with the other algorithms are given. For a detailed description as
well as further simulation results see [23, 63].
8.2
Local Optimization Algorithms
In the case of the local algorithms, the optimization of the base station parameters
CPICH power and antenna tilt begins with an initial evaluation of the network.
After analyzing the results of the first evaluation, the iterative optimization process is started. Each optimization loop includes two steps. After changing the
parameters in the first step, the network is evaluated in the second step. Then,
the next iteration of the optimization loop is started and the parameters are
changed again. The optimization runs until a specific termination condition is
fulfilled. Figure 8.1 shows the flow chart of the optimization process.
For the three different local approaches this optimization process is always the
same. The difference lies in the block Change parameters. Further, the decision
which results (test solutions) are accepted and which are not differs between the
strategies. In the three Sections 8.2.1, 8.2.2 and 8.2.3 the individual approaches
are explained and the differences are worked out.
8.2.1
Rule Based Approach
The crucial part of the algorithm is the function change parameters(x) (see Figure 8.2). This function processes a rule set like the one depicted in Table 8.1 and
is executed for each cell with outaged mobiles (see Section 6.4.1). The columns of the table describe the following: param specifies the modified parameter;
delta denotes the amount of change; limit describes the lower or upper limit of
the parameter, and iter specifies how often the rule is applied at most. When
the optimization process is launched, the algorithm starts with the first rule of
the rule set. In each iteration only one rule is applied. According to Table 8.1,
we can see that after the first 5 optimization loops the algorithm continues with
rule number 2 and so on. While a rule is active, worse results are also accepted. When
advancing to the next rule, however, the best result of the previous one is taken.
The algorithm terminates, when all rules of the rule set have been processed. In
Appendix D the standard rule set, which is used in Chapter 9, is presented.
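The rule processing can be sketched as follows. The CPICH rows follow Table 8.1, while the tilt rule's delta and limit values are assumptions for illustration only.

```python
# A rule set in the style of Table 8.1; the tilt entries are illustrative
# assumptions, only the CPICH row is taken from the table.
rules = [
    {"param": "cpich", "delta": -5, "limit": 28, "iter": 5},
    {"param": "tilt",  "delta": 1,  "limit": 10, "iter": 10},  # assumed values
]

def apply_rule(cells, rule):
    """One optimization loop of the Rule Based Approach: the active rule
    is applied in every cell that still has outaged mobiles, clipped to
    the rule's parameter limit (a lower limit for negative deltas, an
    upper limit for positive ones)."""
    for cell in cells:
        if cell["outaged"] > 0:
            new_value = cell[rule["param"]] + rule["delta"]
            if rule["delta"] < 0:
                new_value = max(new_value, rule["limit"])
            else:
                new_value = min(new_value, rule["limit"])
            cell[rule["param"]] = new_value
    return cells
```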
param   delta    limit     iter
CPICH   -5 dB    28 dBm    5
tilt                       10
CPICH   -3 dB    28 dBm    10
tilt                       10
CPICH   -1 dB    28 dBm    10
tilt                       10
Table 8.1: Example rule set for Rule Based Optimization Algorithm.
8.2.2
Simulated Annealing
Subsequently the Rule Based Approach was extended and improved by incorporating Simulated Annealing [9]. In contrast to the Rule Based Approach, the
decision to accept a bad result is independent of the rule set. After an optimization
loop worse results can be taken, but only with a certain probability. This probability corresponds to a cooling function (CF). The exponential CF is analogous
to the physical process of heating and then slowly cooling steel to obtain a crystalline structure with minimum entropy. In the beginning of the optimization the
probability to take a bad result is higher. During the optimization the probability
to accept a worse result than the previous one decreases according to the CF.
Section 3.3 gives a detailed overview of Simulated Annealing.
Concerning the implementation, the main difference to the Rule Based Approach
is shown as bold text in Figure 8.3. The variable rand denotes a uniformly
distributed random value between 0 and 1. CF is the cooling function of the
Simulated Annealing algorithm. In this algorithm the same decisions for changing
CPICH power and antenna tilt settings are used. So, a rule is executed in each
cell with outaged mobiles. However, the number of outaged mobiles as well as
the outage reason do not influence the decision.
The same type of rule set (see Table 8.1) as in the Rule Based Approach is
used for Simulated Annealing in the function change parameters(x). For the Simulated Annealing algorithm the rule set was improved by including additional
rules with smaller values for delta. Appendix E presents two developed rule sets
(see Table E.1 and Table E.2), which were applied for the optimization results in
Chapter 9. Again, the basic fitness function from Equ. (6.1) is used.
The implementation of Simulated Annealing in this thesis uses the rule set only
for finding a new parameter setting, not the Simulated Annealing mechanism. So,
as in the Rule Based Approach, it is only possible to decrease the CPICH power
Procedure RuleBased()
begin
    X := initial parameters
    fit := evaluation(X)
    while not terminated
        X' := change_parameters(X)
        fit' := evaluation(X')
        if (fit' > fit) or (rand < CF)
            X := X'
            fit := fit'
        end
    end
end
Figure 8.3: Extension of Rule Based Optimization algorithm to Simulated Annealing.
and increase the antenna downtilt. The idea was to use Simulated Annealing for
also accepting bad results with a certain probability.
In the algorithm two different functions for the cooling temperature TC are implemented. On the one hand, Geometric Cooling according to [9] is implemented
with the following function:

$$T_{C,NEW} = \alpha \cdot T_C \qquad (8.1)$$

In Equ. (8.1), $\alpha$ denotes a parameter of the function. On the other hand, Slow
Cooling is also implemented according to the following function:

$$T_{C,NEW} = \frac{T_C}{1 + \beta \cdot T_C} \qquad (8.2)$$

The cooling function CF itself is calculated as

$$CF = \exp\left(\frac{GoS - GoS_{OLD}}{T_C}\right) \qquad (8.3)$$
In Equ. (8.3), GoS denotes the actual GoS and GoSOLD represents the GoS of
the previous iteration. On the one hand, CF depends on the cooling temperature
TC . On the other hand, it also depends on the change of the GoS. Thus, for
big differences in GoS (if the result is worse than before), the value of CF, and
thus the probability to take the new result, is lower. If the random value rand is
smaller than CF, the optimization result is taken (see Figure 8.3).
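The acceptance step of Figure 8.3 can be sketched as follows; the exponential form of CF is assumed from the description of Equ. (8.3).

```python
import math
import random

def accept(fit_new, fit_old, gos_new, gos_old, t_c):
    """Simulated Annealing acceptance: better results are always taken;
    worse results only with probability CF, which shrinks with the GoS
    degradation and with the cooling temperature t_c (exponential form
    of Equ. (8.3) assumed)."""
    if fit_new > fit_old:
        return True
    cf = math.exp((gos_new - gos_old) / t_c)   # CF <= 1 when GoS got worse
    return random.random() < cf
```

When the GoS is unchanged, CF equals 1 and the worse result is always accepted; a large GoS drop at low temperature makes acceptance practically impossible.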
If a result is not accepted, the previous result is modified again according to the
same rule. However, the parameter (CPICH power or antenna tilt) is changed in
only half of the cells. For this, the cells are sorted by the number of outaged
mobiles, with the most outaged mobiles first. If in the last iteration the parameter
was changed in only one cell, the algorithm proceeds with the next rule
and changes the parameter again in all cells with outaged mobiles.
8.2.3
In this section the developed Adaptive Rule Based Approach is described. This
optimization algorithm is an extension of the Rule Based Approach introduced
in Section 8.2.1.
The parameters are changed according to an adaptive optimization technique,
described in the following. Figure 8.1 shows the basic optimization process. This
algorithm differs from the two previously described algorithms (Section 8.2.1 and
Section 8.2.2) in that CPICH power and antenna tilt are now changed together.
Further, an increase of CPICH power and antenna uptilting are also possible.
Remember, the Rule Based Approach and Simulated Annealing were only able
to change the parameters in one direction (reduce CPICH power and increase
antenna downtilt).
8.2.3.1
In this section the idea of why CPICH power and antenna tilt are adjusted together
is explained by means of an example, shown in Figure 8.4. In the first
picture (Figure 8.4 (a)) two mobiles are shown. Both are served by base station
BS 1. The goal is to achieve load balancing such that one mobile is served
by BS 1 and the other by BS 2. This aim can be reached with three different
strategies.
The first strategy decreases the CPICH power of BS 1. Figure 8.4 (b) shows this
situation. Now, each base station serves one mobile, but with the disadvantage
that BS 1 causes inter-cell interference to the cell of BS 2.
In the second strategy the antenna downtilt of BS 1 is increased to shrink the
coverage area of the cell (Figure 8.4 (c)). However, the CPICH power is too high
for the smaller cell and thus causes pilot pollution in the adjacent cell. Further, the
Figure 8.4: Why adjust CPICH power and antenna tilt together?
excessive CPICH power causes a waste of base station power resources, because
the total transmit power is limited.
The last strategy, which is the basis for the Adaptive Rule Based Approach,
combines the previous ones and changes CPICH power and antenna tilt together.
So, no additional inter-cell interference and pilot pollution are produced, as in
strategies one and two. This combined strategy is shown in Figure 8.4 (d).
8.2.3.2 Algorithm Description
In each iteration of the optimization loop (see Figure 8.1) the quality factor QF
(see Section 6.4.2) is computed for each cell. CPICH power and antenna tilt are
changed according to the developed rules, as shown in Table 8.2.
QF                 Action
QF < 0.5           Reduce CPICH power, increase antenna downtilt
0.5 ≤ QF ≤ 0.7     No change
QF > 0.7           Increase CPICH power, increase antenna uptilt

Table 8.2: Rules for changing CPICH power and antenna tilt according to the quality factor QF.

Numerous simulations with different values for the limits of QF from Table 8.2 show that
0.5 and 0.7 achieve the best results.
rule   param   stepsize   limit    iter
1      CPICH   3 dB       25 dBm   50
       tilt    1.5°
2      CPICH   0.5 dB     10 dBm   50
       tilt    0.25°

Table 8.3: Example of CPICH power and antenna tilt limitation & stepsize settings.
When the QF of a cell is greater than 0.7, both its CPICH power and antenna
uptilt will be increased by

stepsize · (QF − 0.7) / 0.3    (8.4)

On the other hand, if the cell's QF is less than 0.5, the CPICH power will be
reduced and the antenna downtilt will be increased by

stepsize · (1 − QF)    (8.5)

Consequently, the adjustments of CPICH power and antenna tilt depend on the
cell's current QF. With this strategy, CPICH power and antenna tilt are adjusted
adaptively according to the loading condition of a cell.
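The adaptive step of Equ. (8.4) and (8.5) can be sketched as follows. This is an illustrative sketch; the function name and the sign convention (positive = enlarge the cell) are my own:

```python
def adaptive_step(qf, stepsize):
    """Adaptive parameter change for one cell (Equ. 8.4 and 8.5).
    Returns a signed step: positive means enlarge the cell (increase
    CPICH power and uptilt), negative means shrink it (reduce CPICH
    power and increase downtilt), zero means the cell is balanced."""
    if qf > 0.7:                              # lightly loaded cell
        return stepsize * (qf - 0.7) / 0.3    # Equ. (8.4)
    if qf < 0.5:                              # overloaded cell
        return -stepsize * (1.0 - qf)         # Equ. (8.5)
    return 0.0                                # 0.5 <= QF <= 0.7: no change
```

The step thus grows linearly with the distance of QF from the balanced region, which is exactly the adaptive behavior described above.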
The modification of antenna downtilt and CPICH power has to be limited (lower
limit for CPICH power and upper limit for antenna downtilt) in each rule by
the parameter limit, in order to avoid excessively large changes in a particular cell (see
Table 8.3). There are no limits set for the maximum CPICH power and
maximum antenna uptilt, so that a larger service coverage area is allowed in regions
where the user density is low. Furthermore, the developed algorithm is biased
toward reducing the initial CPICH power and increasing the antenna downtilt.
When the optimization process is launched, the algorithm starts with the first
rule of the used rule set (e.g. from Table 8.3). For each rule several iterations are
performed. According to Table 8.3, the algorithm in this case continues with
rule number 2 after the first 50 iterations.
If the GoS after one iteration within one rule is lower than 95 percent and lower
than the GoS from the previous iteration, the new result is not accepted. In this
case, the same iteration is processed again, but the CPICH power and tilt settings
are changed in only two thirds of the previously modified cells, prioritized
according to QF (Cell Reduction). The algorithm terminates when either all rules of the
rule set have been processed, or the QF in all cells is between 0.5 and 0.7
(see Table 8.2). This means the network is balanced, or the algorithm cannot
proceed further due to the limits for CPICH power and tilt. A detailed flowchart
of the optimization loop is shown in Figure 8.5.
Only the number of served users is considered, as in the basic fitness function in Equ. 6.1. Since the GoS (Equ. 6.4)
includes the number of served users in the numerator, this performance
indicator can also be used as fitness function.
Appendix F presents four developed rule sets for the Adaptive Rule Based Approach (see Table F.1, Table F.2, Table F.3 and Table F.4), which were applied
for the optimization results in Chapter 9.
8.3 Genetic Algorithm

8.3.1 Representation
tilt. The search space for the algorithm is set in the following way: The range for
the CPICH values is from 15 to 38 dBm with a resolution of 0.5 dB; the values
for the antenna downtilt are limited to between 2° and 10°. The resolution of the tilt is
set to 0.5°. The element <limit> specifies the search space in the initialization
file (Appendix G).
8.3.2 Algorithm
In this subsection the optimization process as well as the used genetic operators
are described in more detail. Newly developed operators are used to incorporate
knowledge about the quality of the cells in the network. In Figure 8.7, the
flowchart of the implemented algorithm is shown.
The algorithm starts with the initialization of all individuals of the population.
Section 8.3.2.1 describes this initialization of the population. After the initial
phase, the whole population is evaluated with the static UMTS FDD network
simulator. The fitness function g(i) as well as the GoS are calculated for all
individuals (i = 0, 1, ..., n). For the fitness evaluation the extended fitness function from Equ. 6.2 in Section 6.2.2 is used. In the next step the GoS of the best
individual, i.e. the individual with the highest fitness value, is compared to the
limit of 96 %. If the GoS is higher than this threshold, additional users are added
in the simulation (Add Users), and the whole population has to be reevaluated
to get the new values for g(i) and GoS.
After this preprocessing of the population, the optimization process is started. In
my genetic approach I have implemented selection, recombination and mutation.
With the selection operator, the individuals for the new population are selected.
Multiple copies of some individuals can be selected, while others may not be selected at all, according to the implemented selection method.
After the selection process, the individuals (parents) are recombined to create
new individuals (children). The last genetic operator randomly mutates genes
of the individuals with a certain probability. When the evolution process of one
population is finished, the whole population has to be evaluated again to get the
new values for g(i) and GoS of the individuals.
With the best individuals of the population a local optimization step is performed
to improve the performance of the Genetic Algorithm. After the local step the
GoS of the best individual is compared to the threshold of 96 %. If the GoS is
higher, then additional users are added and the population is reevaluated. In
the next iteration, the evolution process is repeated. The Genetic Algorithm
stops when a certain termination condition is fulfilled, e.g. after a certain number
of iterations.
8.3.2.1 Initial Population
For all individuals of the population the initial values for CPICH power and
antenna tilt are chosen randomly, but with the same CPICH power and antenna
tilt values for all cells. The values for the CPICH power are chosen between 15
and 38 dBm with a resolution of 1 dB. The initial values for the antenna tilt are
selected between 2° and 10° with a resolution of 1°.
However, not all individuals are initialized randomly. The first 12 individuals
of the population are set to fixed predefined values. The element <init> of
the initialization file (Appendix G) defines the setting. For each of these 12
individuals one value for CPICH power and one value for the antenna tilt is
defined. So, each cell of the network has the same CPICH power and antenna tilt
value. The rationale behind this initialization is to cover the search space with
the start population as well as possible.
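The initialization described above can be sketched as follows (an illustrative sketch; the function and parameter names are my own):

```python
import random

def init_population(n_individuals, n_cells, predefined):
    """Build the initial population: the first individuals get fixed,
    predefined uniform (CPICH, tilt) settings (as via <init>); the rest
    are random, CPICH in 15..38 dBm (1 dB steps), tilt in 2..10 degrees
    (1 degree steps). Every cell of an individual shares the same values."""
    population = []
    for i in range(n_individuals):
        if i < len(predefined):
            cpich, tilt = predefined[i]
        else:
            cpich = random.randint(15, 38)
            tilt = random.randint(2, 10)
        population.append([(cpich, tilt)] * n_cells)  # same setting in all cells
    return population
```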
8.3.2.2 Selection
f(i) = a · g(i) − b    (8.6)

with

a = (Cm · ḡ − ḡ) / (gmax − ḡ)   and   b = a · ḡ − ḡ    (8.7)

⁵ In [76, 77, 92] all the selection methods are explained in detail.
In Equ. (8.7), Cm denotes the selection pressure⁶, which describes how much the
algorithm favors good individuals compared to bad individuals. The mean value
over all g(i) is denoted as ḡ in Equ. (8.7), and gmax is the highest fitness that
occurs in the population.
My implementation of the fitness proportional selection works as follows: First,
the best individual since the last function call of Add Users is selected, to guarantee that the best solution cannot get lost. This method is called elitism⁷ in
the literature [76, 92]. Next, the expected number of descendants for each individual
is calculated with the following equation:

e(i) = n · f(i) / Σj=1..n f(j)    (8.8)

In Equ. (8.8), n denotes the size of the population. From each individual, ⌊e(i)⌋⁸
descendants are produced. To complete the population, the best individual of
the last population is repeatedly selected until n individuals are in the new
population.
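The linear scaling of Equ. (8.6)–(8.8) can be sketched as follows (an illustrative sketch; function names are my own):

```python
import math

def linear_scaling(g, cm):
    """Linear fitness scaling (Equ. 8.6/8.7). Afterwards the mean of the
    scaled fitness equals the raw mean, and the scaled maximum equals
    cm times the mean, so cm directly controls the selection pressure."""
    g_mean = sum(g) / len(g)
    g_max = max(g)
    a = (cm * g_mean - g_mean) / (g_max - g_mean)
    b = a * g_mean - g_mean
    return [a * gi - b for gi in g]

def expected_descendants(f):
    """Equ. (8.8): e(i) = n * f(i) / sum_j f(j); floor(e(i)) copies of
    individual i are put into the new population."""
    n, total = len(f), sum(f)
    return [math.floor(n * fi / total) for fi in f]
```

For example, g = [1, 2, 3] with Cm = 2 scales to [0, 2, 4]: the mean stays 2, and the best individual's scaled fitness becomes Cm times the mean.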
Tournament Selection
As mentioned before, other selection methods were also tested. For the sake of
completeness, tournament selection is explained in the following. Good results
were also achieved with this selection method during the algorithm development process.
First, the implemented tournament selection method selects the best individual of
the old population (elitism). Afterwards, k randomly chosen (uniformly distributed)
individuals are selected. The individual with the highest fitness value of this
selected sub-population is taken over into the new population. This selection procedure is repeated until n individuals have been admitted to the new population. The
factor k controls the selection pressure. A higher value of k corresponds to a higher
selection pressure and vice versa. The value for k depends on the optimization
problem. For the problem considered in this thesis, it has been shown
that good values for k lie between 2 % and 10 % of the population size.
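A minimal sketch of this tournament selection (function names are my own; the tournament here draws without repetition):

```python
import random

def tournament_selection(population, fitness, k):
    """Tournament selection with elitism: the best individual is always
    kept; every further slot is filled with the fittest of k uniformly
    drawn individuals of the old population."""
    n = len(population)
    best = max(range(n), key=lambda i: fitness[i])
    new_pop = [population[best]]                    # elitism
    while len(new_pop) < n:
        contenders = random.sample(range(n), k)     # tournament of size k
        winner = max(contenders, key=lambda i: fitness[i])
        new_pop.append(population[winner])
    return new_pop
```

With k equal to the population size, every tournament is won by the globally best individual, which illustrates the high-selection-pressure extreme.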
⁶ The selection pressure is specified in the initialization file by the attribute cm in the element <ga> (Appendix G). Good values of Cm for the base station parameter optimization problem are between 3.5 and 6.
⁷ Elitism: Independent of the selection function, the best k individuals are taken into the new population to guarantee a monotonic increase of the fitness.
⁸ The ⌊·⌋ operator denotes the floor function.
8.3.2.3 Recombination

Recombinate outage
For each cell in the network, the number of outaged mobiles and the QF of
the two individuals are compared. If the number of outaged mobiles of parent 2
is smaller than that of parent 1 and the QF of parent 2 is better for this cell,
then the corresponding genes for this cell are taken from parent 2 for the new
child; otherwise the two genes for CPICH power and antenna tilt are taken from
parent 1. If the algorithm decides not to produce a child by recombination (with
a probability of (1 − pc)), then the first selected parent is taken unchanged. After
Recombinate outage, the population again has a size of n individuals, consisting
only of the children produced by the recombination.
During the development of the Genetic Algorithm, this recombination operator
showed the best performance. Therefore, only the Recombinate outage operator is used for the results presented in Chapter 9.
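Assuming that a higher QF counts as better, the per-cell gene selection of Recombinate outage can be sketched as follows (an illustrative sketch; the data layout and function name are my own):

```python
def recombinate_outage(parent1, parent2, outage1, outage2, qf1, qf2):
    """Recombinate_outage for a single child: per cell, the genes
    (CPICH power, tilt) come from parent 2 only where parent 2 has
    fewer outaged mobiles AND a better QF; otherwise from parent 1."""
    child = []
    for cell in range(len(parent1)):
        if outage2[cell] < outage1[cell] and qf2[cell] > qf1[cell]:
            child.append(parent2[cell])
        else:
            child.append(parent1[cell])
    return child
```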
Recombinate basic

In contrast to the previous recombination operator, the Recombinate basic operator randomly selects two individuals from the population (parent 1
and parent 2) n/2 times and produces two new children (referred to as child 1 and child 2)
with a recombination probability pc.
First, parent 1 bequeaths all its genes to child 1 and parent 2 bequeaths all its
genes to child 2. So, child 1 and child 2 have the same parameter settings for
CPICH power and tilt as parent 1 and parent 2, respectively. Then, for each
cell in the network, the number of outaged mobiles and the QF of the two
children are compared. If the number of outaged mobiles of child 2 is smaller than
that of child 1 and the QF of child 2 is better for this cell, then the corresponding
genes for this cell are exchanged between the children; otherwise the genes are
not exchanged.
If the algorithm decides not to produce children by recombination (with a probability of (1 − pc)), then the selected parents are taken unchanged into the population. After Recombinate basic, the population again has a size of n individuals,
consisting only of the children produced by the recombination.
Recombinate average

The last version of the recombination operator does not take any performance
indicator into account. Hence, this operator performs worst of all three procedures.
For the sake of completeness, this recombination version is also explained in this
thesis.
The Recombinate average algorithm randomly selects two individuals
(referred to as parent 1 and parent 2) from the population n times and produces a new
child with a recombination probability pc. For each cell in the network, the average
of the parents' CPICH power and antenna tilt settings is calculated and taken as
the setting for the new child. Equ. 8.9 and Equ. 8.10 show the calculation of the
average values.

CPICHchild = (CPICHparent 1 + CPICHparent 2) / 2    (8.9)

tiltchild = (tiltparent 1 + tiltparent 2) / 2    (8.10)

If the algorithm decides not to produce a child by recombination (with a probability of (1 − pc)), then the first selected parent is taken over unchanged into the
new population. After Recombinate average, the population again has a size of
n individuals.
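The averaging of Equ. (8.9) and (8.10) can be sketched as follows (an illustrative sketch; the function name is my own):

```python
def recombinate_average(parent1, parent2):
    """Recombinate_average (Equ. 8.9 and 8.10): per cell the child's
    CPICH power and tilt are the arithmetic means of the parents'."""
    return [((c1 + c2) / 2.0, (t1 + t2) / 2.0)
            for (c1, t1), (c2, t2) in zip(parent1, parent2)]
```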
8.3.2.4 Mutation
The mutation operator is applied to each individual of the population. The
algorithm decides for each gene of the individual with a mutation probability pm
whether the value of the gene will be mutated or not (for CPICH power and
antenna tilt separately). In Figure 8.9, the mutation for one cell is shown.
Parameters are changed from their old value xold to their new value xnew according
to the following rule:

xold − Δx ≤ xnew ≤ xold + Δx    (8.11)

The value for Δx in Equ. (8.11) is set to 0.5 for CPICH power and antenna
tilt, respectively. From this interval the value xnew is randomly chosen with a
resolution of 0.5. So, three solutions are possible: xnew = xold − Δx, xnew = xold,
or xnew = xold + Δx. If xnew lies outside the search space, the corresponding
limiting value is taken.
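The mutation of one gene can be sketched as follows (an illustrative sketch; the function name and parameter names are my own):

```python
import random

def mutate_gene(x_old, delta=0.5, lo=None, hi=None):
    """Mutation of one gene (Equ. 8.11): x_new is drawn from
    {x_old - delta, x_old, x_old + delta}; values outside the search
    space are clipped to the corresponding limit (lo or hi)."""
    x_new = x_old + random.choice([-delta, 0.0, delta])
    if lo is not None and x_new < lo:
        x_new = lo
    if hi is not None and x_new > hi:
        x_new = hi
    return x_new
```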
8.3.2.5 Local Optimization
After the evolution process, a local optimization with the best local num individuals is carried out. The flowchart of the local optimization is shown in
Figure 8.10.
First, the best local num individuals are selected. For each of these individuals,
local iter local optimization iterations are performed. These parameters are
specified in the initialization file by the attributes local num and local iter
in the element <ga> (Appendix G). Each iteration includes two steps. In the
first step, the parameters are changed according to the quality of the cells in the
network. The rules for changing the parameters are based on the Rule Based
Approach from Section 8.2.1, with the extension that the parameters can be
changed in both directions. In the second step, the individuals are evaluated. If
the fitness value for the new parameter setting of the individual is better than
the old one, then this setting is taken, otherwise the old parameter setting is
retained.
I use two rules for the local optimization: the first rule (rule 1) to shrink a cell in
the network and the second rule (rule 2) to enlarge a cell. In each local iteration,
for each individual, a random value decides whether rule 1 or rule 2 is used. If rule 1
is selected, then for each cell it is checked whether there are outaged mobiles and whether the QF is
bad (QF < 0.1). If this is the case, the CPICH power is decreased by 0.5 dB and
the antenna downtilt is increased by 0.5° in this cell, both with a probability of
0.7. In the case of rule 2, in cells without outaged mobiles and with a good QF
(QF > 0.1), the CPICH power is increased by 0.5 dB and the antenna downtilt
is decreased by 0.5°, both with a probability of 0.5.
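The two rules can be sketched as follows. In the thesis a random value picks rule 1 or rule 2 per iteration; here the rule is passed explicitly for clarity, and the per-cell data layout is my own assumption:

```python
import random

def local_step(cells, rule):
    """One local-optimization pass. Each cell is a dict with keys
    'cpich', 'tilt', 'outaged', 'qf'. Rule 1 shrinks cells with outage
    and a bad QF (< 0.1) with probability 0.7; rule 2 enlarges cells
    without outage and with a good QF (> 0.1) with probability 0.5."""
    for cell in cells:
        if rule == 1 and cell['outaged'] > 0 and cell['qf'] < 0.1:
            if random.random() < 0.7:
                cell['cpich'] -= 0.5   # shrink: less pilot power
                cell['tilt'] += 0.5    # shrink: more downtilt
        elif rule == 2 and cell['outaged'] == 0 and cell['qf'] > 0.1:
            if random.random() < 0.5:
                cell['cpich'] += 0.5   # enlarge: more pilot power
                cell['tilt'] -= 0.5    # enlarge: less downtilt
    return cells
```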
8.3.2.6
During the optimization process with the Genetic Algorithm, sometimes no better results are found for several generations. Mathematically, this means that the
algorithm is stuck in a local optimum. This occurs especially near
the end of the optimization. To counteract this phenomenon, the idea is to
give the algorithm a new impulse to find a better result.
This new impulse is given by admitting new users via the Add Users function.
Normally, the function is called when the GoS of the best solution in the population
reaches a value of 96 %. Then, so many additional users are admitted to the
system that the best solution has a GoS of approximately 95 %. In the initialization file it is possible to configure the algorithm such that the Add Users function
is called before the GoS reaches a value of 96 %. The attribute
add ever of the element <ga> (Appendix G) specifies the number of generations
after which the function is called at the latest. This means that if add ever is set to
a value of 20, new users are admitted to the system at the latest after 20 generations,
even if the GoS of the best individual of the population is lower than 96 %.
8.3.2.7
A second idea for giving a new impulse during the optimization with the Genetic
Algorithm is to increase the selection pressure Cm after a certain number of
populations. This can be done either by increasing the value of Cm, or by reducing
the size of the population.
The idea of this strategy is that the algorithm first searches more broadly
and also tracks solutions with lower fitness values. After the selection
pressure is increased, the algorithm focuses more on the better solutions, which
improves the convergence of the algorithm.
In my algorithm I implemented the following approach: In the initialization file,
two attributes (reduce iter and min pop size) of the element <ga> configure
the increase of the selection pressure (Appendix G). After reduce iter generations, the size of the population is halved. This means that the population is
sorted by the individuals' fitness values, and only the first half is considered further. The lower limit of individuals in one population is specified by the
attribute min pop size. So, the selection pressure Cm is increased⁹ after every
reduce iter generations. This means, for example, that if the Genetic Algorithm
starts with a population size of 200 individuals and reduce iter is set to 100,
after 100 generations the size of the population is 100, after 200 generations there
are only 50 individuals in the population, and so on. If the size of the population
reaches the lower limit min pop size, the number of individuals is not reduced
any further.
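The population-halving step can be sketched as follows (an illustrative sketch; the function name is my own):

```python
def halve_population(population, fitness, min_pop_size):
    """Population reduction step: sort by fitness (best first) and keep
    only the better half, but never fewer than min_pop_size individuals."""
    order = sorted(range(len(population)),
                   key=lambda i: fitness[i], reverse=True)
    keep = max(len(population) // 2, min_pop_size)
    keep = min(keep, len(population))   # cannot keep more than exist
    return [population[i] for i in order[:keep]]
```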
Simulations have shown that this approach brings only a very small increase in
the performance of the Genetic Algorithm. Therefore, this feature is not used
for the simulation results presented in Chapter 9.
8.3.2.8 Parallel Implementation
Genetic Algorithms are well suited for a parallel implementation. The big advantage of a parallel implementation is that several computers share the work
and thus save computation time. If we look at the flowchart of the genetic approach (see Figure 8.7), we see that in the block Evaluation one network evaluation
is performed for each individual. This means that altogether the
simulation engine (see Figure 7.2 in Section 7.3) will be started n times. Normally, the
network evaluations are done serially on one computer.
For Genetic Algorithms, different methods are known for an implementation with
several processors. The following list gives an overview of the methods. A description of the individual approaches can be found in [79, 92].

Synchronous master/slave model: A master is responsible for population management, selection, recombination, mutation, etc. For the evaluation of the
fitness function, the new individuals are sent to several slaves. When all
slaves have done their work, the master starts with the next generation.
Disadvantages:
- The master has to wait until all processors have finished their network evaluations
- Total breakdown if the master has a malfunction
- High communication effort
⁹ Note that a reduction of the number of individuals by 50 % does not mean an increase in the selection pressure of 50 %.
A steady state GA produces only one new individual per generation, and this new individual replaces one individual of the current population.
Section 7.3) for the individuals of the current population and initiates the scheduler to distribute the files to the clients. Figure 8.12 shows a screenshot of the
Distributed Client Scheduler.
So, the clients evaluate the whole population. After finishing the evaluation of
the last individual, the Distributed Client Scheduler finishes its job by preparing
all the XML output files for the Genetic Algorithm.
With this approach, a speedup in execution time can be achieved with a minimum
of 3 clients. If only 2 clients are used, the TCP/IP overhead of the network cancels out the advantage of more processors. If 8 clients are used at the same time,
a speedup of about 600 % is achieved. This is very important for the evaluation of
big network scenarios, where the evaluation of one single parameter setting takes
a long time. Note that this parallel implementation only speeds up the runtime
of the algorithm; it influences neither the process of the optimization nor the
result.
8.3.3 Parameter Settings
During the development process several settings were tried for the parameters of
the Genetic Algorithm. Table 8.4 shows the best setting. With this setting the
highest increase was achieved for both scenarios (see Section 7.4).
I use a population size of 400 individuals with a selection pressure Cm of 5. With
this high value for Cm , I facilitate the production of so-called super-individuals to
shorten the runtime of the algorithm, thus accepting a reduced solution diversity
in the population. The algorithm stops after 350 iterations. This is equivalent to
150 000 network evaluations. On a Pentium IV with a clock rate of 2.0 GHz and
512 MB main memory, my Genetic Algorithm requires about 64 hours runtime
without distributed computing. If the parallel implementation with 8 clients is
used, the same algorithm takes only about 10 hours runtime.
Normally, the settings from Table 8.4 are used for the simulation results presented
in Chapter 9. In cases where a different setting is used for the algorithm, the
alternative setting is explicitly stated.
8.4
In this section, ad hoc strategies for the adjustment of the three key optimization
parameters antenna azimuth, antenna downtilt and CPICH power are described,
which were developed during a diploma thesis by Wolfgang Karner. Ad hoc
in this context means reaching the result only by considering the structure of
the UMTS network, for example the positions of the base stations relative to each other,
the height of the antennas, the height profile of the terrain, or the maximum
transmit power of the base stations. The objective of these strategies is to use as
few steps as possible, in contrast to the step-by-step optimization strategies from
Population size              n = 400
Selection pressure           Cm = 5
Probability Recombination    pc = 0.8
Probability Mutation         pm = 0.1
Local Optimization:
  Number of individuals      local num = 20
  Number of iterations       local iter = 2
CPICH power                  15 dBm – 38 dBm
Antenna downtilt             2° – 10°
Resolution                   0.5 dB / 0.5°
Termination condition        350 iterations

Table 8.4: Best parameter setting for the Genetic Algorithm.
8.4.1 Azimuth Adjustment
The azimuth adjustment strategy is based on studying the optimum solution for
the regular hexagonal layout. The optimum azimuth setting for the regular case
was discussed in several papers (e.g. [80]). Figure 8.14 shows the best and the
worst case for a scenario with 19 base stations equipped with 3-sector antennas.
According to [80], an improper direction of sector antennas in a regular hexagonal
layout can cause a capacity degradation exceeding even 20 % and requires an
increase of base station transmit power in the range from 3 to 6 dB.
The effect of these directions in a regular hexagonal grid can be seen in Figure 8.15a for the worst case and in Figure 8.15b for the best case, where the
pathgain is presented in an area of 5000 by 5000 meters with 19 base stations equipped with 3-sector antennas. The plots in Figure 8.15 are produced by
a downlink static UMTS FDD simulator, described in [16]. If the transmitted
CPICH power level is added to the pathgain in the diagram, the received CPICH
power level at each point of the area can be obtained. In the worst case scenario
Figure 8.14: Best and worst case of antenna directions in a regular hexagonal
grid. Source: [80].
(Figure 8.15a), there are zones in the network with bad coverage. These critical
zones would need more transmit power to be covered and therefore would cause
more interference in the system. In the best case (Figure 8.15b), there are no
such holes of bad coverage, and the total area is covered more regularly.
Figure 8.15: Worst case and best case of base station azimuth in a regular grid,
pathgain in dB.
Algorithm Description
The azimuth adjustment strategy is based on the knowledge of the regular case.
The initial setting for the azimuth values can be chosen arbitrarily. Before optimization, all base stations are marked as unchanged. The algorithm consists
of two steps. First, the main lobes of a subset of the antennas are turned to
so-called critical spots. In the second step, the remaining antennas are interleaved
so that the interference in the network is minimized.
Step 1: Turn Base Stations to Critical Spots
For a given UMTS network, first the number and coordinates of the critical spots
have to be defined, as well as the number of base stations nr bs which should be
turned to these critical spots.
For the azimuth adjustment routine, one or more critical spots have to be defined.
We define a critical spot as a place in the network with a higher minimum distance
to all base stations than all other places; in other words, a critical spot is
a place with a large distance to all surrounding base stations. Therefore, it is
difficult to cover that area and to serve mobile stations located there.
For each critical spot, the nearest nr bs base stations are determined. The algorithm then
adjusts the azimuth values of these nearest base stations so that one of the main
lobes of the three antennas points directly to the critical spot. The routine
determines the rotation angles such that the base stations have to be turned
between 0° and 120° clockwise starting from north (0°). After a base station
azimuth has been adjusted, the site is marked as changed and cannot be adjusted
any more. Figure H.1 in Appendix H shows the flowchart of this step.
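The critical-spot determination can be sketched as follows. This is an illustrative sketch: the representation of candidate locations as a list of points is my own assumption, since the thesis does not specify how candidate places are enumerated:

```python
import math

def critical_spot(candidates, base_stations):
    """Find a critical spot: among candidate points, return the one
    whose distance to its nearest base station is largest
    (the max-min-distance definition used in the text)."""
    def min_dist(point):
        return min(math.dist(point, bs) for bs in base_stations)
    return max(candidates, key=min_dist)

def nearest_base_stations(spot, base_stations, nr_bs):
    """Return the nr_bs base stations closest to the critical spot;
    these are the ones whose azimuth is turned toward it."""
    return sorted(base_stations, key=lambda bs: math.dist(spot, bs))[:nr_bs]
```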
Step 2: Interleaving of the Remaining Base Stations
In the second step, the remaining base stations are interleaved. During the azimuth adjustment there are two types of base stations: changed and unchanged
ones. For each unchanged base station, the distances to all already adjusted base
stations are calculated. The base station with the minimum distance to an already changed base station is interleaved first and is called bs to interleave
in the following.
In the next step, the rotation angle has to be calculated. For this purpose, the closest 5 already adjusted base stations (in the following denoted as bs reference(i), i =
1..5) are taken into account. However, a base station is only taken as bs reference(i)
if its distance to bs to interleave is smaller than 1.5 times the distance of the
closest base station. Consequently, the maximum value of i can be smaller than 5.
To calculate the rotation angle for bs to interleave, we choose the window which
contains the most angles (Figure 8.17). If the number of angles is equal in all
windows, we take the window with the minimum difference between the angles
contained in it and calculate the mean angle. If there is only one angle in every
window, the angle of the nearest base station is taken as the value for turning
bs to interleave. After the azimuth value of base station bs to interleave has
been adjusted, it is marked as changed.
The algorithm repeats step 2 of the azimuth adjustment routine until no unchanged base stations remain. Figure H.2 in Appendix H shows the flowchart of
step 2.
8.4.2 Antenna Tilt Adjustment
The antenna downtilt is not set directly equal to the mean elevation angle¹¹, but according to certain rules obtained from results of the Rule Based Approach from
Section 8.2.1. For CPICH coverage verification mode 1, these rules are presented

¹¹ The mean elevation angle is defined as the mean value over all elevation angles of the mobiles
in one cell. Single mobiles which are covered in the cell but are situated e.g. just below the
antenna or in a remote area of coverage have very large or very small elevation angles and would
therefore falsify the mean elevation; they are not considered in the calculation.
in Table 8.5 and in Figure 8.19 as a linearized function over the mean elevation
angle. For CPICH coverage verification mode 2 these rules are shown in Table 8.6
and Figure 8.20.
Mean elevation angle ε    Antenna downtilt
ε ≤ −1.5                  −4
−1.5 < ε ≤ −1             −3
−1 < ε ≤ −0.5             −2
−0.5 < ε ≤ 0              −1
0 < ε ≤ 0.5
0.5 < ε ≤ 1
1 < ε ≤ 1.5
1.5 < ε ≤ 2
2 < ε ≤ 2.5
2.5 < ε ≤ 3
3 < ε ≤ 4
4 < ε ≤ 5
5 < ε ≤ 6
6 < ε ≤ 7

Table 8.5: Rules for adjusting the antenna downtilt according to the mean elevation angle in CPICH coverage verification mode 1.
These rules are only valid for the antenna pattern of the used KATHREIN 739707
antenna [64]. For a different antenna the rules will be different. Note that the
difference between CPICH coverage verification mode 1 and mode 2 is exactly
2°.
Figure H.3 in Appendix H shows the flowchart for the implementation of the
antenna tilt adjustment.
8.4.3 CPICH Power Adjustment
In Section 5.3 it has been shown that the optimum CPICH power level is the
lowest CPICH value that can be received correctly by the mobile in the serving
area. With this minimum power level, too much overlap of the CPICH
coverage areas is avoided, and therefore the minimum pilot pollution and the
Figure 8.19: Function for adjustment of the antenna downtilt according to the
mean elevation angle in CPICH coverage verification mode 1.
Mean elevation angle ε    Antenna downtilt
ε ≤ −1.5                  −6
−1.5 < ε ≤ −1             −5
−1 < ε ≤ −0.5             −4
−0.5 < ε ≤ 0              −3
0 < ε ≤ 0.5               −2
0.5 < ε ≤ 1               −1
1 < ε ≤ 1.5
1.5 < ε ≤ 2
2 < ε ≤ 2.5
2.5 < ε ≤ 3
3 < ε ≤ 4
4 < ε ≤ 5
5 < ε ≤ 6
6 < ε ≤ 7

Table 8.6: Rules for adjusting the antenna downtilt according to the mean elevation angle in CPICH coverage verification mode 2.
Figure 8.20: Function for adjustment of the antenna downtilt according to the
mean elevation angle in CPICH coverage verification mode 2.
The same result can be seen in Figure 8.21, where the total transmit power of
one cell in the optimization area of the big network scenario (see Figure 7.3 in
Section 7.4.1) is plotted over the CPICH power level. This curve is a result of
simulations with CAPESSO in CPICH coverage verification mode 1. During the
simulation the CPICH power levels of all cells in the scenario have been
decreased. To obtain a valid result, the cell served 19 mobiles at all
measurement points. We can see that by decreasing the CPICH power level of the
cells, the total transmit power of the cells decreases as well, not only by the
amount of the reduced CPICH power, but also because of a certain down-swinging
effect caused by the reduced interference level in the system.
Thus, the main task in optimizing the CPICH power level is to avoid an excessive
degree of cell overlap while still providing a certain minimum CPICH power
for the required CPICH coverage in the defined area. The overall problem is
therefore to find the minimum CPICH level that maintains CPICH coverage. Due to
the significance of a correct calculation of CPICH coverage, and because
coverage verification mode 1 and mode 2 differ considerably, two strategies,
one for each mode, have been developed.
Figure 8.21: Transmit power of one cell in the optimization area of the big network
scenario.
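The search for this minimum level can be sketched as a simple stepwise descent. The `coverage_ok` callback stands in for the mode-dependent CPICH coverage verification; both the callback and the function name are assumptions of this sketch, not the thesis implementation.

```python
# Hedged sketch of the basic idea: lower the CPICH level stepwise and keep the
# last value for which the coverage check still passes. The coverage_ok
# callback is a placeholder for the (mode-dependent) CPICH coverage
# verification.
def minimum_cpich(coverage_ok, start_dbm=33.0, floor_dbm=15.0, step_db=1.0):
    """Find the lowest CPICH power [dBm] that still satisfies coverage."""
    level = start_dbm
    while level - step_db >= floor_dbm and coverage_ok(level - step_db):
        level -= step_db
    return level
```

With a toy coverage predicate that passes for levels of 20 dBm and above, the search settles at 20 dBm; if coverage never fails, it stops at the 15 dBm floor used throughout this thesis.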
8.4.3.1
Strategy for CPICH Coverage Verification Mode 1
8.4.3.2
Strategy for CPICH Coverage Verification Mode 2
serving cell is adjusted in order to reach the received Ec/I0 requirement12. If
the requirement is already fulfilled, the CPICH power remains unchanged.
Figure H.4 and Figure H.5 in Appendix H show the flowcharts for the
implementation of the CPICH power adjustment for CPICH coverage verification
mode 2.
12 For the simulations in Chapter 9 the Ec/I0 requirements are usually set to -12 dB (Section 7.2.2.2).
Chapter 9
Algorithm Performance Analysis
9.1
Introduction
In this second main chapter, the performance of the various algorithms
introduced in Chapter 8 is presented. Usually the big network scenario (see
Section 7.4.1) is used for the analysis of the algorithms. If the small network
scenario (Section 7.4.2) is used, this is explicitly mentioned.
The results for the local algorithms Rule Based Approach, Simulated Annealing
and Adaptive Rule Based Approach are presented in Sections 9.2, 9.3 and 9.4,
respectively. Section 9.5 presents the results for my Genetic Algorithm. The
analysis of the Analytic Optimization Algorithm is presented in Section 9.6.
Section 9.7 compares all the different algorithms.
Note that in general the results of the different sections cannot be compared
with each other, because the network simulator was improved during the
development of the various algorithms. However, all results presented within
one section are evaluated with the same version of CAPESSO.
9.2
Rule Based Approach
In this section the results for the Rule Based Approach (Section 8.2.1) are
presented. The network simulations are performed with CPICH coverage
verification mode 1, and the rule set from Appendix D is used. The results of
the optimization are shown in Table 9.1. Before optimization the number of
served users in the optimization area is 483. After optimization with the Rule
Based Approach, 817 users are served. The GoS in both cases is about 95 %. This
increase in served users leads to a capacity gain of 69.5 %.
                                     Before optimization   After optimization
Served users in optimization area    483                   817
Covered users in optimization area   512                   856
GoS [%]                              94.3                  94.5
Capacity gain [%]                                          69.5
Computational effort [iterations]                          70
Table 9.1: Results for the Rule Based Approach with CPICH verification mode 1.
Further, the algorithm was executed with various numbers of iterations (see
iter in Table D.1 in Appendix D). The default setting for the rule set is from
6 to 10. Table 9.2 shows the results. The table shows that there are not enough
iterations in the first case (4..8 iterations), while the best result (822
served users) was obtained with considerably more iterations (9..13).
Iterations        Served users in optimization area
4..8              765
5..9              807
6..10 (default)   817
7..11             807
8..12             810
9..13             822
10..14            819
Table 9.2: Results for the Rule Based Approach with CPICH verification mode 1.
9.2.1
Further Investigations
Based on the default rule set from Table D.1 in Appendix D, several investigations
have been carried out. In each investigation, one parameter was changed, while
all others were kept constant during the optimization process.
9.2.1.1
Different User Distributions
In this investigation the Rule Based Approach was performed on several
snapshots with different user distributions. This means that before the
optimization a new user distribution was created, and the algorithm then worked
with this specific distribution. During the optimization process the
distribution of the users remained the same.
Figure 9.1 shows the results for 20 different snapshots. In the legend of the
figure, Start serv and End serv denote the served users in the optimization
area before and after the optimization; Start cov and End cov are the covered
users. In snapshot 1 the default user distribution is used (the result from the
previous section).
Figure 9.1: Results for the Rule Based Approach on different snapshots.
The numbers of served and covered users are within a relatively close range.
The average number of served users at algorithm start is 459. After
optimization there are on average 791 served users in the optimization area
(compared to 483 and 817 for the result of the previous section).
9.2.1.2
Changing the User Distribution After Optimization
A very important question is how stable an optimized network is towards
changing user distributions. In this section, results are presented in which
the user distribution is changed after the optimization process. For 5
different snapshots from the previous section (snapshots 1, 6, 11, 14 and 18;
all shaded in grey in Figure 9.1), 20 further snapshots are simulated with
CAPESSO after the optimization.
For the optimized scenarios in which the algorithm performed well (snapshots 1, 11 and 18), the results are shown in Figures 9.2, 9.3 and 9.4.
Figure 9.2: Results for the Rule Based Approach on different snapshots after
optimization for snapshot 1.
As Figures 9.2, 9.3 and 9.4 show, the network is clearly optimized for the
scenario generated by the respective snapshot (if the network is originally
optimized for snapshot 18, the results are best if the same user distribution
is used after optimization; see the shaded areas in the figures). However,
apart from these maxima, the results are relatively stable. This is important,
since in a real scenario the users move all the time, so it would make no sense
to optimize the network for a static distribution of users that is never
allowed to change.
When changing the user distribution of badly optimized scenarios (snapshots 6
and 14), the results are not as good. Figures 9.5 and 9.6 show the results.
In these cases there is no such clear maximum in the results if the user
distribution is left unchanged; in some cases even better results are obtained
by changing it! This indicates that certain distributions of users exist that
the algorithm cannot optimize efficiently. On the other hand, this can also be
seen as an advantage: in those cases the algorithm did not narrowly optimize
for a very specific user distribution, so the results are worse in absolute
numbers, but more generally valid.
Figure 9.3: Results for the Rule Based Approach on different snapshots after
optimization for snapshot 11.
Figure 9.4: Results for the Rule Based Approach on different snapshots after
optimization for snapshot 18.
Figure 9.5: Results for the Rule Based Approach on different snapshots after
optimization for snapshot 6.
Figure 9.6: Results for the Rule Based Approach on different snapshots after
optimization for snapshot 14.
CPICH power   Antenna tilt   Served users in     Covered users in    GoS [%]
[dBm]         [°]            optimization area   optimization area
15                           816                 855                 93.9
20                           765                 805                 94.7
20                           756                 794                 95.1
25                           730                 767                 95.2
15                           820                 863                 93.4
15                           718                 754                 95.2
25                           694                 730                 95.1
24..20        5..8           776                 816                 95.0
20..10        5..9           774                 814                 92.1
10            6..10          716                 754                 94.9
Table 9.3: Results for the Rule Based Approach with different parameter ranges.
Applying constraints to the range of parameters mostly degrades the results. If
such constraints need to be in place for external reasons, the algorithm can
still deliver good results; it is, however, preferable not to have them.
Figure 9.7 shows a 3D visualization of the optimization gain for combinations
of limited CPICH power and antenna tilt ranges.
Obviously, for the used network scenario, CPICH power modifications should be
allowed down to 15 dBm, and antenna downtilts should be increasable up to 8°.
If the range is too large, however, the results deteriorate. This is mostly due
to the algorithm's linear design: in each step it only moves in one direction
and keeps the results if they are slightly better than the results obtained in
the previous step, even if a smaller step would have been more beneficial. The
gains obtained by a true bi-directional algorithm saturate at a higher level,
even if the allowed parameter ranges are too large (see the results for the
Adaptive Rule Based Approach in Section 9.4).
Figure 9.7: 3D matrix of parameter range limits for the Rule Based Approach.
Antenna tilt   Initial values                 After optimization
[°]            Served users in     GoS [%]    Served users in     GoS [%]
               optimization area              optimization area
               483                 94.4       817                 94.6
               485                 94.7       795                 94.6
               484                 94.5       790                 93.7
               521                 95.3       736                 95.3
               555                 95.4       732                 95.2
               572                 95.0       784                 93.6
               616                 95.2       824                 94.1
               694                 95.3       805                 94.4
Table 9.4: Results for the Rule Based Approach with adapted antenna tilt in the
start scenario.
The column Served users in optimization area (Initial values) indicates the
usefulness of setting the downtilt to the indicated value for all cells (in
this particular scenario).
Similar simulations have been carried out, modifying the CPICH power. The
results are presented in Table 9.5.
These results show that the developed Rule Based Approach depends on the
initial settings of the CPICH power and antenna tilt. The main reason is that
the algorithm can adjust the parameters in one direction only.
9.2.2
Computational Effort
For the evaluation, a Pentium IV with a clock rate of 2 GHz and 512 MB main
memory was used. The algorithm with the rule set from Appendix D and with the
first (not runtime-optimized) version of the network simulator requires about
1 h 30 min for 80 iterations.
CPICH power   Initial values                 After optimization
[dBm]         Served users in     GoS [%]    Served users in     GoS [%]
              optimization area              optimization area
33            483                 94.4       817                 94.6
32            485                 94.7       815                 94.6
31            486                 94.9       807                 93.8
30            494                 95.2       833                 94.0
29            510                 95.0       814                 94.1
27            521                 95.3       776                 94.1
25            524                 95.1       767                 94.5
Table 9.5: Results for the Rule Based Approach with adapted CPICH power in
the start scenario.
9.3
Simulated Annealing
For the results with Simulated Annealing (Section 8.2.2), usually the rule set
from Table E.1 is used. The network simulations are performed with CPICH
coverage verification mode 1 on the big network scenario (Section 7.4.1).
Before optimization the number of served users in the optimization area is 483,
as in Section 9.2.
In this section, results with both introduced cooling functions, Slow Cooling
(Equ. 8.2) and Geometric Cooling (Equ. 8.1), are presented (Section 9.3.1 and
Section 9.3.2). Further, results with the rule set from Table E.2 are presented
(Section 9.3.3).
9.3.1
Slow Cooling
For Slow Cooling, an initial cooling temperature TC of 0.07 was used. The
cooling parameter was varied from 0.9 to 1.4, and for each parameter value
three trials were carried out. The results are shown in Table 9.6.
They show that the cooling parameter has an important influence on the cooling
function. The best result (average of 3 runs), 858 served users in the
optimization area, was obtained with a parameter value of 1.1. This corresponds
to a capacity increase of 77.6 %.
Table 9.7 shows results with different initial values of TC. With this
parameter the shape of the cooling function can be changed as well.

Cooling     Served users in     Number of    Mean value
parameter   optimization area   iterations   served users
0.9         827                 102
0.9         836                 119
0.9         868                 114          844
1.0         868                 121
1.0         803                 128
1.0         839                 123          836
1.1         870                 104
1.1         859                 112
1.1         846                 144          858
1.2         846                 135
1.2         846                 126
1.2         805                 184          832
1.3         849                 102
1.3         774                 159
1.3         805                 122          809
1.4         729                 131
1.4         824                 141
1.4         821                 101          791
Table 9.6: Results for Simulated Annealing with Slow Cooling, different values of the cooling parameter.
Initial value   Served users in     Number of    Mean value
for TC          optimization area   iterations   served users
0.03            718                 163
0.03            801                 131
0.03            852                 140          790
0.04            710                 155
0.04            767                 157
0.04            804                 190          760
0.05            851                 122
0.05            860                 117
0.05            828                 127
0.05            834                 123          843
0.07            870                 104
0.07            859                 112
0.07            846                 144          858
0.1             829                 132
0.1             821                 126
0.1             773                 157          808
0.5             854                 104
0.5             829                 133
0.5             828                 131          837
                866                 103
                860                 87
                829                 111          852
Table 9.7: Results for Simulated Annealing with Slow Cooling, different values
for TC .
The best result was achieved with an initial value for TC of 0.07. I therefore
conclude that an initial TC of 0.07 combined with a cooling parameter of 1.1 is
the best choice of the Slow Cooling parameters for the UMTS base station
parameter optimization problem.
9.3.2
Geometric Cooling
For the simulations with the Geometric Cooling function (Equ. 8.1), the initial
value of TC was set to 0.07. The cooling parameter was varied from 0.95 to
0.99, and for each parameter value three trials were carried out. The rule set
was the same as in Section 9.3.1. The results are shown in Table 9.8.
From the table we see that the best results are obtained with a cooling
parameter value of 0.990. The algorithm then always found the same solution,
though with a different number of iterations each time.
If we compare Slow Cooling and Geometric Cooling, we can conclude that Geometric Cooling provides slightly better results.
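The acceptance step behind these results can be sketched as follows. A worse parameter setting is accepted with a probability that shrinks as the temperature falls; the schedule below assumes the classical geometric form T_{k+1} = alpha * T_k for Equ. (8.1), while the exact Slow Cooling function (Equ. 8.2) is not reproduced here. Function names are illustrative, not taken from the thesis.

```python
# Sketch of the annealing acceptance step with Geometric Cooling. The fitness
# difference is positive when the new setting serves at least as many users.
import math
import random

def accept(delta_fitness, temperature, rng=random.random):
    """Accept better solutions always; worse ones with probability exp(d/T)."""
    if delta_fitness >= 0:
        return True
    if temperature <= 0:
        return False
    return rng() < math.exp(delta_fitness / temperature)  # delta_fitness < 0

def geometric_cooling(t_initial=0.07, alpha=0.99):
    """Yield the temperature sequence T_{k+1} = alpha * T_k."""
    t = t_initial
    while True:
        yield t
        t *= alpha
```

With the values used in this section (initial TC of 0.07 and a cooling parameter near 0.99), the acceptance probability for worse results decays slowly enough for the search to escape local optima early on.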
9.3.3
Improved Rule Set
I extended the rule set from Table E.1 by two additional rules (Table E.2). The
algorithm is now able to adjust the CPICH power and antenna tilt parameters
with a higher resolution.
Both cooling functions have been tested, each with the best parameter settings
found in the previous sections.
With each cooling function, three simulations have been carried out to see
whether the results are independent of the Simulated Annealing random value
(the probability of accepting worse results). Table 9.9 shows the results of
the simulations.
From the table we see again that Geometric Cooling outperforms Slow Cooling:
although the single best result was found with Slow Cooling, Geometric Cooling
found the same good result in each trial.
Cooling     Served users in     Number of    Mean value
parameter   optimization area   iterations   served users
0.950       717                 145
0.950       740                 140
0.950       731                 106          729
0.960       727                 107
0.960       736                 149
0.960       750                 128          738
0.970       846                 136
0.970       821                 135
0.970       849                 96           839
0.975       832                 121
0.975       846                 126
0.975       863                 100          847
0.980       855                 116
0.980       866                 102
0.980       866                 108          862
0.985       860                 89
0.985       830                 113
0.985       867                 95           852
0.990       866                 105
0.990       866                 96
0.990       866                 123          866
Table 9.8: Results for Simulated Annealing with Geometric Cooling, different values of the cooling parameter.
                    Trial   Served users in     Number of    Mean value
                            optimization area   iterations   served users
Slow Cooling        1       886                 188
                    2       831                 131
                    3       834                 153          850
Geometric Cooling   1       879                 143
                    2       879                 162
                    3       879                 159          879
Table 9.9: Results for Simulated Annealing with improved rule set.
9.3.4
Computational Effort
For the evaluation, a Pentium IV with a clock rate of 2 GHz and 512 MB main
memory was used. The two rule sets have slightly different runtimes. Rule set 1
(see Table E.1) requires on average 124 iterations, resulting in an average
runtime of about 2 h 10 min. Rule set 2 (see Table E.2) needs on average 156
iterations, resulting in an average runtime of about 2 h 50 min.
9.4
Adaptive Rule Based Approach
Figure 9.8: Block diagram for the simulation of the four rule sets over 50 snapshots.
Figure 9.9: Comparison of cdf curves for the four rule sets of the Adaptive
Rule Based Approach (50 snapshots).
The fluctuation of the cdf curves suggests that different rule sets favor
different realizations of the user distribution. There is no particular rule
set that achieves the highest number of served users in all 50 snapshots.
Nevertheless, the gradient of the cdf curves illustrates that rule set #4 gives
the most stable performance among the four rule sets.
Additionally, the mean achieved number of served users for each rule set, shown
in Table 9.10, indicates that rule set #4 has the highest mean number of served
users and therefore yields a capacity gain of 66.9 %.
Simulation            Mean # of served users   Standard deviation   Capacity gain [%]
                                               (# served users)
Before optimization   511                      7.54 (1.48 %)
Rule set #1           831                      31 (3.7 %)           62.6
Rule set #2           839                      29 (3.5 %)           64.2
Rule set #3           842                      34 (3.9 %)           64.8
Rule set #4           853                      25 (2.9 %)           66.9
Table 9.10: Results for Adaptive Rule Based Approach with CPICH verification
mode 1.
It is noticeable that the standard deviation of served users with respect to
different user distributions is much higher after the optimization than before.
This means that the optimized network is less stable regarding the user
distribution.
In this thesis only a small part of the simulations is shown; in [23] more
results and a stability analysis of the algorithm are presented.
The runtime of the Adaptive Rule Based Algorithm depends on the used rule set.
On average the algorithm requires about 150 iterations. On a Pentium IV with a
clock rate of 2 GHz and 512 MB main memory the runtime is approximately 2 h
45 min.
9.5
Genetic Algorithm
The Genetic Algorithm has been tested on both scenarios (small and big network
scenario) as well as with both CPICH coverage verification modes. Several
different settings of the parameters for the Genetic Algorithm, such as the
size of the population, the selection pressure and the parameters for the local
optimization, have been tested. Table 8.4 in Section 8.3.3 shows the best
settings; however, good results have also been obtained with other settings. As
fitness function, the extended fitness function from Equ. (6.2) was used.
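The structure of such a run, with population size n, selection pressure Cm and the local optimization of a few individuals, can be sketched as follows. Everything problem-specific (fitness evaluation, crossover, mutation, the rule-based local step) is passed in as a callback, and the way Cm restricts the parent pool here is one plausible reading, not the thesis' exact operator.

```python
# Skeleton of the Genetic Algorithm loop, assuming an individual encodes the
# (CPICH power, antenna downtilt) settings of all cells. All operators are
# placeholders supplied by the caller.
import random

def run_ga(fitness, make_individual, mutate, crossover, local_opt,
           n=100, c_m=5, local_num=20, local_iter=2, generations=200):
    population = [make_individual() for _ in range(n)]
    best = max(population, key=fitness)
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        # Selection pressure Cm: parents drawn from the fittest n / Cm part.
        parents = scored[:max(2, n // c_m)]
        offspring = []
        while len(offspring) < n:
            mother, father = random.sample(parents, 2)
            offspring.append(mutate(crossover(mother, father)))
        # Local optimization: refine the best local_num offspring a few times.
        offspring.sort(key=fitness, reverse=True)
        for i in range(min(local_num, len(offspring))):
            for _ in range(local_iter):
                offspring[i] = local_opt(offspring[i])
        population = offspring
        best = max([best] + population, key=fitness)
    return best
```

On a toy one-dimensional problem (maximize -|x - 3|), this skeleton converges toward x = 3 within a few dozen generations, which is enough to check that the selection, variation and local-refinement steps interact as intended.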
9.5.1
CPICH Coverage Verification Mode 1
9.5.1.1
Small Network Scenario
For the small network scenario, altogether 60 optimization runs have been
carried out, always with the same user distribution. Before the optimization,
347 users are served in the network. Figure 9.10 shows the results.
In Figure 9.10 the results are shown in chronological order of execution. From
the figure we see that the results improved during the development process. The
mean number of served users over all 60 runs is 488. This corresponds to a
capacity increase of 40.6 %. The standard deviation is 17.6 (3.6 %).
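The capacity-increase percentages quoted throughout this chapter follow directly from the served-user counts; a one-line helper makes the convention explicit (the function name is mine):

```python
def capacity_gain(served_before, served_after):
    """Capacity gain in percent, relative to the number of served users before."""
    return (served_after - served_before) / served_before * 100.0

# Mean result for the small network scenario: 347 -> 488 served users.
print(round(capacity_gain(347, 488), 1))  # 40.6
```

The same formula reproduces, for example, the 126.9 % gain of the best run on the big network scenario (506 to 1148 served users) reported in Section 9.5.1.2.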
The best result was achieved with the optimization run at index 53. After this
optimization run, 518 users were served; this corresponds to a capacity
increase of 49.3 %. Figure 9.11 shows the growth of the fitness value (fitness
value of the best individual and average fitness of the population) during the
optimization for this run, and Table 9.11 presents the used parameters for the
Genetic Algorithm.
Population size           n = 100
Selection pressure        Cm = 5
Local optimization:
  Number of individuals   local num = 20
  Number of iterations    local iter = 2
CPICH power               15 dBm to 33 dBm
Antenna downtilt          0° to 8°
Table 9.11: GA settings for the best optimization run on the small network
scenario with CPICH verification mode 1.
For the best optimization run the algorithm terminates after 172 populations.
However, Figure 9.11 shows that the final result is already found after 75
populations; afterwards, no better results were found. To evaluate the
stability of the result, the CPICH power and antenna tilt settings before and
after optimization were evaluated with 100 different user distribution
snapshots. Table 9.12 shows the results of the analysis.
Figure 9.10: Results for the Genetic Algorithm on the small network scenario
with CPICH verification mode 1.
Figure 9.11: Optimization run for the best result on the small network scenario
with CPICH verification mode 1.
Number of served users                   Before optimization   After optimization
Min                                      323                   460
Max                                      363                   518
Mean                                     344                   483
Standard deviation                       7.2 (2.1 %)           9.1 (1.9 %)
Capacity gain in optimization area [%]                         40.4
Table 9.12: Evaluation of the best result with 100 different snapshots on the
small network scenario with CPICH verification mode 1.
The capacity increase for the mean number of served users is only 40.4 %.
However, Table 9.12 shows that the result after optimization is as stable as
before optimization, because the standard deviation does not increase.
Furthermore, since the optimization was carried out for one single user
distribution, no result higher than 518 served users was found during the
evaluation with different user distributions.
9.5.1.2
Big Network Scenario
For the big network scenario, altogether 17 optimization runs have been carried
out, again always with the same user distribution. Before the optimization, 506
users are served in the optimization area. Figure 9.12 shows the results, again
in chronological order of execution.
From Figure 9.12 we see again that optimization runs carried out later in the
development process deliver better results. The mean number of served users
over all 17 runs is 1064. This corresponds to a capacity increase of 110.3 %.
The standard deviation is 77.8 (7.3 %).
The best result was achieved with the optimization run at index 17. After this
optimization run, 1148 users were served in the optimization area; this
corresponds to a capacity increase of 126.9 %. Figure 9.13 shows the growth of
the fitness value during the optimization for this run, and Table 9.13 presents
the used GA parameters.
Population size           n = 100
Selection pressure        Cm = 5
Local optimization:
  Number of individuals   local num = 20
  Number of iterations    local iter = 2
CPICH power               15 dBm to 33 dBm
Antenna downtilt          0° to 8°
Table 9.13: GA settings for the best optimization run on the big network scenario
with CPICH verification mode 1.
The algorithm ran altogether 335 populations. However, as for the small network
scenario, the best result was already found earlier, after 198 populations.
From Figure 9.13 we again see (as in Figure 9.11) the biggest increase of the fitness
Figure 9.12: Results for the Genetic Algorithm on the big network scenario with
CPICH verification mode 1.
Figure 9.13: Optimization run for the best result on the big network scenario
with CPICH verification mode 1.
in the first part of the optimization process (up to population 100).
Afterwards, only a small increase of the fitness values per population is
observed. To evaluate the stability of the result, the CPICH power and antenna
tilt settings before and after optimization were again evaluated with 100
different user distribution snapshots. Table 9.14 shows the results.
Number of served users                   Before optimization   After optimization
Min                                      485                   1044
Max                                      529                   1148
Mean                                     511                   1128
Standard deviation                       7.5 (1.5 %)           13.0 (1.2 %)
Capacity gain in optimization area [%]                         120.7
Table 9.14: Evaluation of the best result with 100 different snapshots on the big
network scenario with CPICH verification mode 1.
The capacity increase for the mean number of served users is 120.7 %, and the
result is again stable, as the relative standard deviation does not increase.
9.5.2
CPICH Coverage Verification Mode 2
All results obtained with CPICH coverage verification mode 2 were analyzed
before and after the optimization with 100 different user distribution
snapshots. However, the optimization itself has been carried out with one fixed
user distribution. For the CPICH Ec/I0 threshold, a value of -12 dB is used.
9.5.2.1
Small Network Scenario
Figure 9.14 shows the results for the small network scenario. Altogether, 7
different optimization runs have been carried out. Before the optimization, on
average 352 users are served in the network. The mean number of served users
after the optimization over all 7 runs is 374. This corresponds to a capacity
increase of 6.3 %. The standard deviation is 7.6 (2.0 %).
Index 5 in Figure 9.14 shows the best result. After this optimization run, 382
users are served on average; this corresponds to a capacity increase of 8.5 %.
If we compare the results of CPICH coverage verification mode 2 with mode 1, we
see that the capacity increase is now much lower. This is due to the fact that
for mode 2 the interference is included in the calculation, and so the network
simulator delivers more pessimistic results.
Table 9.15 summarizes the optimization results and Table 9.16 presents the used
parameters. Again, as for CPICH coverage verification mode 1, the growth of the
fitness value during the optimization is shown for the best run (index 5 in
Figure 9.14); see Figure 9.15 for the diagram.
Figure 9.15 shows that the optimization run terminates after 452 populations.
Only one user distribution has been used during the optimization process. The
starting value is 351 served users and the final value is 397 served users. The
best fitness value is reached after 416 populations. If we compare the curve
from Figure 9.15 with the fitness growth of CPICH coverage verification mode 1
(Figure 9.11), we see that the increase in fitness is now slower. Remember, in
Figure 9.11 the best result was found after just 75 populations.
9.5.2.2
Big Network Scenario
For the big network scenario, altogether 21 optimization runs have been carried
out. Before the optimization, on average 512 users are served in the
optimization area. Figure 9.16 shows the results.
Figure 9.14: Mean results over 100 snapshots for the Genetic Algorithm on the
small network scenario with CPICH verification mode 2.
Number of served users                   Before optimization   After optimization
Min                                      334                   366
Max                                      369                   397
Mean                                     352                   382
Standard deviation                       6.8 (1.9 %)           6.5 (1.7 %)
Capacity gain in optimization area [%]                         8.5
Table 9.15: Evaluation of the best result with 100 different snapshots on the
small network scenario with CPICH verification mode 2.
Population size           n = 100
Selection pressure        Cm = 5
Local optimization:
  Number of individuals   local num = 20
  Number of iterations    local iter = 2
CPICH power               15 dBm to 33 dBm
Antenna downtilt          0° to 8°
Table 9.16: GA settings for the best optimization run on the small network
scenario with CPICH verification mode 2.
Figure 9.15: Optimization run for the best result on the small network scenario
with CPICH verification mode 2.
In Figure 9.16 the results are shown in chronological order of execution. From
the figure we see that the results improve during the development process. The
mean number of served users over all 21 runs is 656. This corresponds to a
capacity increase of 28.1 %. The standard deviation is 22.4 (3.4 %).
The best result was achieved with the optimization run at index 16. After
optimization, 693 users are served on average. This corresponds to a capacity
increase of 35.4 %. Again, as for the small network scenario, we see that the
network simulator is more pessimistic, and so the capacity gain is much lower
than with CPICH coverage verification mode 1. Remember, with CPICH verification
mode 1 we had a capacity increase of 120 %! However, we have to keep in mind
that the results of CPICH coverage verification mode 2 approximate the real
situation better.
The optimization results for the best run are summarized in Table 9.17 and the
GA parameters in Table 9.18. Figure 9.17 shows the growth of the fitness value
during the optimization run.
In Figure 9.17 the starting value of the optimization is 498 served users and
the termination value is 733 served users in the optimization area. The
optimization runs for 338 populations. From the figure we see that there is
first a very sharp rise in fitness. Then, from population 150 to 280, no better
result is found. Afterwards,
Figure 9.16: Mean results over 100 snapshots for the Genetic Algorithm on the
big network scenario with CPICH verification mode 2.
Number of served users                   Before optimization   After optimization
Min                                      498                   672
Max                                      530                   733
Mean                                     512                   693
Standard deviation                       7.5 (1.5 %)           9.6 (1.4 %)
Capacity gain in optimization area [%]                         35.4
Table 9.17: Evaluation of best result with 100 different snapshots on the big
network scenario with CPICH verification mode 2.
Population size           n = 400
Selection pressure        Cm = 5
Local optimization:
  Number of individuals   local num = 20
  Number of iterations    local iter = 2
CPICH power               15 dBm to 38 dBm
Antenna downtilt          2° to 10°
Table 9.18: GA settings for the best optimization run on the big network scenario
with CPICH verification mode 2.
Figure 9.17: Optimization run for the best result on the big network scenario
with CPICH verification mode 2.
the algorithm finds a new solution and thereby gets a new impulse. The best
result is then found after 320 populations. The figure also shows that, if we
are only interested in a good result and do not want to wait until the
algorithm has converged, the algorithm can be stopped after 150 populations,
which saves a lot of computation time.
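This observation suggests a simple stopping rule: abort the run once the best fitness has not improved for a fixed number of consecutive populations. The sketch below is my own illustration of that idea; the patience value is an assumption, not a setting from the thesis.

```python
# Sketch of a no-improvement stopping rule: stop once the best fitness seen in
# the most recent `patience` populations does not exceed the best fitness seen
# before them. The patience value is an assumption for illustration only.
def should_stop(fitness_history, patience=150):
    """Return True if the best fitness has been flat for `patience` entries."""
    if len(fitness_history) <= patience:
        return False
    best_before = max(fitness_history[:-patience])
    return max(fitness_history[-patience:]) <= best_before
```

Applied to a run like the one in Figure 9.17, this rule would terminate the search once the long flat phase after the last improvement exceeds the chosen patience, trading a small risk of missing a late jump for a large saving in simulation time.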
9.5.3
Computational Effort
The runtime of the Genetic Algorithm depends on the network scenario and on the
size of the population. For the presented results, two different population
sizes were used: 100 and 400 individuals. On a Pentium IV with a clock rate of
2 GHz and 512 MB main memory, the Genetic Algorithm runs on the big network
scenario for about 15 h with 100 individuals and for about 65 h with 400
individuals. The small network scenario requires approximately half the time of
the big network scenario. The version of the network simulator used for the
Genetic Algorithm was runtime-optimized and therefore much faster than the
first version used for the local algorithms.
9.6
Analytic Optimization Algorithm
For obtaining the results with the Analytic Optimization Algorithm, both CPICH
coverage verification mode 1 and mode 2 have been used. The two main results in
the following Sections 9.6.1 and 9.6.2 are the number of served users in the
optimization area (nr served target) and the mean number of served users in the
optimization area over 40 different user distribution snapshots (mean cap target).
The reference scenario is characterized by antenna downtilts of 0° (+3°
electrical tilt in the antenna pattern) for all antennas, and CPICH power
levels of 33 dBm in all cells of the scenario. In Tables 9.19, 9.20 and 9.21,
four columns of results are presented. The first column shows the results of
the reference scenario. The other three columns, titled Adj azimuth, Adj tilt
and Adj CPICH, contain the results after running the strategy for the
adjustment of the antenna azimuth, after the additional strategy for the
antenna downtilt, and after the additional CPICH adjustment strategy,
respectively.
In the adjustment routine for the antenna azimuth, three critical spots inside
the optimization area were used. In the strategy for the adjustment of the
antenna downtilt, the initial tilt value was set to 3° (+3° electrical tilt) in
all cells, while the initial CPICH power value (start CPICH) was varied for the
various simulations. An initial CPICH power level (start CPICH) is also
necessary for the CPICH adjustment strategy. The actually used initial CPICH
power level for the particular simulation is given in the additional notes in
Tables 9.19, 9.20 and 9.21.
In the simulations, the required coverage probability for the worst case was
set either to 50 % in the total scenario and 75 % in the optimization area, or
to 80 % in the total scenario and 98 % in the optimization area, as indicated
in the additional notes of the various tables. In CPICH coverage verification
mode 2 the coverage probability is denoted as CPICH Ec/I0 cov prob total or
CPICH Ec/I0 cov prob target, while in CPICH coverage verification mode 1 it is
denoted simply as cov prob total or cov prob target. For the overcrowded
scenario in the antenna tilt and CPICH power strategies, 5000 mobiles were
used.
In Figures 9.18 and 9.19, the mean numbers of served users in the optimization
area from Tables 9.19, 9.20 and 9.21 are presented as bar charts. The bar chart
for CPICH coverage verification mode 1 (Figure 9.18) shows the mean capacity in
the optimization area in the reference scenario (initial parameter setting) in
blue, as well as after the adjustment of base station azimuth (green bar),
antenna downtilt (yellow bar) and CPICH power level (red bar). The bar chart
for CPICH coverage verification mode 2 (Figure 9.19) presents an additional bar
for the variation of
the required coverage probability for the worst case in the optimization area.
Thus, in CPICH coverage verification mode 2 there are two different bars for
the mean number of served users in the optimization area after the CPICH
adjustment routine: one for a required coverage probability of 0.8 in the total
scenario and 0.98 in the optimization area, and another for a required coverage
probability of 0.5 in the total scenario and 0.75 in the optimization area. It
is important to remember that these required coverage probabilities apply only
to the worst case. The effective coverage probabilities in the normal
interference situation are presented in Tables 9.19, 9.20 and 9.21 and lie
between 0.91 and 1, both in the total scenario and in the optimization area.
9.6.1
CPICH Coverage Verification Mode 1
As Figure 9.18 shows, the mean capacity in the optimization area can be
increased from 512 mean served users in the reference scenario (blue bar) up to
829 mean served users (red bar). This equals a gain of 62 % in mean capacity
compared to the reference scenario. It is important to mention that after the
CPICH adjustment the CPICH power level in all cells is 15 dBm, equal to the
minimum CPICH power level. Table 9.19 also shows that the reached coverage
probability after the CPICH strategy is more than 91 % in the total scenario
(cov prob total) and 100 % in the optimization area (cov prob target), even
though the required coverage probability was set to 0.8 and 0.98 in the total
scenario and in the optimization area, respectively. However, when using a
parameter setting with a CPICH power level of 33 dBm in all cells, thus
utilizing the routines without the CPICH procedure, there is also an increase
in mean capacity in the optimization area from 512 to 703 mean served users
(yellow bar in Figure 9.18). This is an improvement of 37 % compared to the
reference scenario.
                          Reference    Adj        Adj       Adj
                          scenario     azimuth    tilt      CPICH
nr served target            508         540        691       821
nr served target (mean)     512         557        703       829
cov prob total                        0.9963     0.9904    0.9193

additional notes: start CPICH in adj tilt: 15 dBm, start CPICH in adj CPICH: 15 dBm, final CPICH values = 15 dBm in all cells, required coverage probability: 0.8 total and 0.98 in target area, best server equal dist.

Table 9.19: Results for the Analytic Optimization Algorithm with CPICH verification mode 1 (40 snapshots).
Figure 9.18: Results for the Analytic Optimization Algorithm with CPICH verification mode 1 (40 snapshots).
9.6.2 CPICH Coverage Verification Mode 2
For this mode the results are shown in Table 9.20 for a required worst-case coverage probability of 0.5 in the total scenario and 0.75 in the optimization area. Before parameter adjustment the mean number of served users in the reference scenario is 520. The capacity improvement in the optimization area that can be reached with CPICH coverage verification mode 2 is 15 % (to 595 served users) after adjusting the antenna azimuth and 21 % (to 630 served users) after the antenna tilt routine. After the procedure for adjusting the CPICH power levels, however, Table 9.20 shows a decrease in capacity. The reason is that, to reach the worst-case coverage requirements with the rather strict CPICH Ec/I0 threshold of -12 dB, the CPICH power levels have to be increased from the initial value of 33 dBm in the reference scenario. The final CPICH power levels after CPICH adjustment lie between 32 dBm and 35 dBm. With the stricter coverage probability requirements of 0.8 in the total scenario and 0.98 in the optimization area, the final result drops even further, to 553 served users in the optimization area (see Table 9.21). The final CPICH power levels in this case lie between 35 dBm and 37 dBm. Figure 9.19 shows the results in a diagram.
Comparing the results from CPICH coverage verification mode 1 and mode 2, we can see that the possible improvement by optimization is smaller in the more realistic case (mode 2), because the CPICH coverage verification includes the interference calculation.
                          Reference    Adj        Adj       Adj
                          scenario     azimuth    tilt      CPICH
nr served target            480         569        624       614
nr served target (mean)     520         595        630       612
cov prob total            0.9339      0.9288     0.9433    0.9598
cov prob target           0.9545      0.9533     0.9635    0.9860

Table 9.20: Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots) with CPICH Ec/I0 threshold -12 dB and required coverage probability in worst case of 0.5/0.75.
                          Reference    Adj        Adj       Adj
                          scenario     azimuth    tilt      CPICH
nr served target            480         569        624       522
nr served target (mean)     520         595        630       553
cov prob total            0.9339      0.9288     0.9433    0.9823
cov prob target           0.9545      0.9533     0.9635

Table 9.21: Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots) with CPICH Ec/I0 threshold -12 dB and required coverage probability in worst case of 0.8/0.98.
Figure 9.19: Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots) with CPICH Ec/I0 threshold -12 dB.
9.6.3 Computation Time
The Analytic Optimization Algorithm requires only 5 network evaluations; its runtime is therefore very short. On a Pentium IV with a clock rate of 2 GHz and 512 MB main memory, the algorithm requires only about 1 min with the fast version of the network simulator.
9.7 Comparison of the Algorithms
In this section a comparison of the individual algorithms is given. The big network scenario is evaluated with CPICH coverage verification mode 1. Table 9.22 shows the results for 50 user distribution snapshots, and Figure 9.20 presents the corresponding diagram.
From Table 9.22 we see that the Genetic Algorithm performs best; it clearly outperforms all other algorithms. Besides yielding the best optimization results, we can also conclude that the GA delivers stable results. This means that the resulting
Algorithm                          Number of served     Capacity    Number of
                                   users in             gain [%]    network
                                   optimization area                evaluations
Before optimization                   511                   -             -
Rule Based Approach                   806                 57.7            80
Simulated Annealing (a)               815                 59.5           150
Adaptive Rule Based Approach          853                 66.9           150
Genetic Algorithm (b)                1046                104.7        150000
Analytic Optimization Algorithm       821                 60.7             5

Table 9.22: Comparison of the different algorithms with 50 different snapshots on the big network scenario with CPICH verification mode 1.

(a) The presented value is the result of only one run. However, for a fair comparison the mean value of several runs should be calculated.
(b) The presented value is the mean value over 17 different runs.
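The capacity gains in Table 9.22 follow directly from the served-user counts; as a quick check (a small Python sketch, with the values copied from the table):

```python
# Capacity gain relative to the 511 mean served users before optimization:
# gain [%] = (served - baseline) / baseline * 100, as in Table 9.22.
baseline = 511

results = {
    "Rule Based Approach": 806,
    "Simulated Annealing": 815,
    "Adaptive Rule Based Approach": 853,
    "Genetic Algorithm": 1046,
    "Analytic Optimization Algorithm": 821,
}

for name, served in results.items():
    gain = (served - baseline) / baseline * 100
    print(f"{name}: {gain:.1f} %")
```

Running this reproduces the gain column of the table (57.7, 59.5, 66.9, 104.7 and 60.7 %).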
Figure 9.20: Comparison of the different algorithms on the big network scenario
with CPICH verification mode 1.
parameter settings also work well for different user distribution snapshots, which cannot be claimed for the other algorithms. However, the Genetic Algorithm has one significant drawback: its runtime. The computational effort is much higher than for all other algorithms.
If we compare the local optimization techniques (Rule Based Approach, Simulated Annealing and Adaptive Rule Based Approach), we see that the Adaptive Rule Based Approach shows the best result. The computational effort is approximately the same for all three algorithms.
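For reference, the decision whether to accept a worse parameter setting in Simulated Annealing is typically the Metropolis criterion; a minimal sketch (function and parameter names are illustrative and not taken from the thesis implementation):

```python
import math
import random

def accept(delta_fitness, temperature, rng=random.random):
    """Metropolis acceptance rule: always keep improvements; accept a
    worse parameter setting with probability exp(delta_fitness / T)."""
    if delta_fitness >= 0:      # improvement, e.g. more served users
        return True
    if temperature <= 0:        # cooled down: behave greedily
        return False
    return rng() < math.exp(delta_fitness / temperature)
```

At high temperature almost any setting is accepted; as the cooling function lowers T, the search becomes increasingly greedy.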
Comparing the result of the Analytic Optimization Algorithm with the local techniques, we see that almost the same improvement is reached. It is important to mention that its computational effort, with only 5 network evaluations, is much smaller than for all other optimization algorithms.
Finally, I conclude from this chapter that if the computational effort is irrelevant, the Genetic Algorithm is the best choice for optimizing the base station parameters in a UMTS network. However, if the runtime of the algorithm is very important, the Analytic Optimization Algorithm shows good performance, because a good result is achieved in only 5 evaluations. If the computational effort plays only a minor role, the Adaptive Rule Based Approach is a good choice, because with about 150 network evaluations the second best result is obtained.
Chapter 10
Summary and Conclusion
In this thesis I addressed the problem of capacity optimization in UMTS FDD
networks. The goal was to improve the capacity of the network, measured as
served users, without any additional expenses: The capacity should be improved
only by changing the parameters of the base stations.
For the operation of a network, several base station parameters influence the capacity and therefore affect the performance of the network. Each sector of a base station can be configured by selecting antenna type, antenna tilt, antenna pattern or CPICH power. All these parameters have a strong influence on the interference in the system and therefore on the number of served users.
In this thesis I focused on the optimization of CPICH power and antenna tilt,
because these parameters have the most influence [54, 88]. By optimizing the
antenna tilt settings, the other-to-own-cell interference ratio can be reduced: The
antenna main beam delivers less power towards the neighboring base stations,
and therefore most of the radiated power goes to the area that is intended to be
served by this particular base station. Also the CPICH power settings are very
important: The CPICH power has to be set such that the coverage is ensured with
minimum interference to neighboring cells in order to reduce the pilot pollution.
Altogether five different algorithms have been developed. The first three optimization algorithms, Rule Based Approach, Simulated Annealing and Adaptive Rule Based Approach, are local techniques. Also a global technique, the Genetic Algorithm, has been developed. The last algorithm is an analytic approach.
The Rule Based Approach starts from a scenario where all cells of the network
are set to identical values for CPICH power and antenna tilt. The optimization
process is characterized by reducing the CPICH power and increasing the antenna
downtilt in the individual cells according to a configurable rule set.
Based on the Rule Based Approach the algorithm was subsequently extended and
worst performance. The capacity increase was about 60 % for CPICH coverage
verification mode 1.
So, I conclude for the developed algorithms that if the computational effort is irrelevant, the Genetic Algorithm is the best choice for optimizing the base station parameters in a UMTS network. However, if the runtime of the algorithm is very important, the Analytic Optimization Algorithm should be used, because this algorithm shows good results in only 5 network evaluations.
Chapter 11
Appendix
Appendix A
UMTS - Network Structure
In this appendix an overview of the basic entities of the UMTS network is given.
The overview is based on the standard presented in [1]. The basic configuration
of a Public Land Mobile Network (PLMN) is shown in Figure A.1.
In the basic configuration presented in Figure A.1, all functions are considered to be implemented in different pieces of equipment. Therefore, all interfaces within the PLMN are external. The interfaces A and Abis are defined in the GSM 08-series of the Technical Specifications (TS). The interfaces Iu and Iur are defined in the UMTS 25.4xx-series of Technical Specifications. The interfaces B, C, D, E, F and G need the support of the Mobile Application Part of the signalling system No. 7 to exchange the data necessary to provide the mobile service. No protocols are standardized for the H-interface and the I-interface. All the GPRS-specific interfaces (G-series) are defined in the UMTS 23-series and 24-series of Technical Specifications. The interfaces Mc, Nb and Nc are defined in UMTS 23.205 and in the UMTS 29-series of Technical Specifications.
From this configuration, all possible PLMN organizations can be deduced. If some functions are contained in the same equipment, the relevant interfaces become internal to that equipment. The individual blocks and functionalities of the Core Network (CN) are presented in detail in [1]. The CN can use two different types of access networks: the base station system (BSS) and the radio network system (RNS). The MSC (respectively SGSN) can connect to one of these Access Network (AN) types, or to both of them.
For the network optimization the functionalities in, and the interfaces between
RNS and MS are of interest. In the network simulator from SYMENA Software & Consulting GmbH, which is used throughout this thesis, the functionalities of the radio network controller (i.e. SHO, RRM, ...) are modeled under the assumption that there is no difference between two different RNCs. This means that the Iur interface is not considered.
The interface between the MS and the RNS is specified in the 24- and 25-series of UMTS
Technical Specifications (www.3gpp.org).
Figure A.1: Overview and basic entities of the UMTS network structure.
Appendix B
3GPP COST 259 Channel
Models
This appendix is based on the deployment aspects for channel models in [7].
Within 3GPP only a small portion of the COST 259 models [27] is included.
Nevertheless, in the following description the 3GPP point of view is presented.
COST 259 [27] is a research forum founded by the EU, with participants from manufacturers, operators and universities. This forum is the successor of COST 207 [36] and COST 231 [28], which did the work on which the channel models used in GSM standardization were based. One of the work items identified in COST 259 was to propose a new set of channel models that overcome the limitations of the GSM channel models, while aiming at the same general acceptance. The models are aimed at UMTS and HIPERLAN, with particular emphasis on adaptive antennas and directional channels.
B.1
Model Descriptions
The main difference between the COST 259 model and previous models is that it tries to describe the complex range of conditions found in the real world by distributions of channels rather than a few typical cases. The probability densities for the occurrence of different channels are functions of mainly two parameters:
1. Environment
2. Distance
Given a certain environment (e.g. Urban Macrocell) and a certain distance (or
distance range/cell radius), the parameters describing the distribution functions
for this particular case can be extracted. Performing a sufficient number of channel realizations gives a distribution of channels, which provides a much better representation of reality than would be possible using only one channel.
The environments identified in COST 259 and included in 3GPP so far are given
in Table B.1. The macrocellular environments have the same names as the GSM
models. Further parameters and a much more detailed description of the model
can be found in [27, 98].
Macrocell         Microcell            Picocell
Typical Urban     (Street Canyons)     (Tunnel/Corridor)
Bad Urban         (Open Places)        (Factory)
Rural Area        (Tunnels)            (Office/Residential Home)
Hilly Terrain     (Street Crossings)   (Open Lounge)

Table B.1: Environments identified in COST 259 and included in 3GPP.
B.2
3GPP Considerations
The propagation properties considered in the COST 259 model and considered
by 3GPP are shown in Table B.2.
The shape of the channel is given by one or several clusters, where each cluster
is exponentially decreasing in delay and Laplacian (double-sided exponential) in
azimuth. Each cluster consists of a number of Rayleigh-fading paths, plus a
possible non-fading path to get Rice fading.
Of interest here are mainly the properties 4 and 7 shown in Table B.2. For this case, a full description of the channel is given by specifying the parameter set (see Figure B.1). The i-th cluster is described by its total power P_i, the delay of the first path τ_i and the cluster delay spread σ_τ,i. The last parameter describes the slope of the exponentially decaying power in the cluster. The number of clusters present is given by N_C.
Path Loss
Shadow Fading
Fast Fading
Time Dispersion
Polarization
Multiple Clusters

Table B.2: Propagation properties considered in the COST 259 model and by 3GPP.
Figure B.1: Channel shape (power delay profile) with multiple clusters. Source:
[7].
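The exponentially decaying cluster powers of Figure B.1 can be sketched as follows (an illustrative construction; the tap spacing, the number of taps per cluster and the normalization of the taps to the cluster power P_i are my assumptions, not COST 259 prescriptions):

```python
import math

def cluster_pdp(clusters, tap_spacing=0.26, taps_per_cluster=6):
    """Build a power delay profile from clusters (P_i, tau_i, sigma_i):
    within each cluster the tap powers decay exponentially with the
    cluster delay spread sigma_i and are scaled to sum to P_i."""
    pdp = []
    for P_i, tau_i, sigma_i in clusters:
        weights = [math.exp(-k * tap_spacing / sigma_i)
                   for k in range(taps_per_cluster)]
        total = sum(weights)
        for k, w in enumerate(weights):
            pdp.append((tau_i + k * tap_spacing, P_i * w / total))
    return sorted(pdp)

# Two clusters: a strong one at delay 0 and a weaker, later one (delays in us).
profile = cluster_pdp([(1.0, 0.0, 0.4), (0.5, 2.0, 0.4)])
```

Each cluster contributes a block of taps whose powers fall off exponentially, which reproduces the shape of the multi-cluster power delay profile in the figure.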
From the 3GPP point of view it is possible to reduce the complexity of the COST 259 model by approximating the continuous distributions with a small number of cases, selected to be typical representations of the channel in common environments. 3GPP proposes a set of models with fixed parameters as shown in Figure B.2. The selected parameters correspond to the COST 207/GSM models, with one important difference concerning the delay spread value of the Typical Urban channel: it has been reduced to better correspond to typical measurement results.
Appendix C
RAKE Reception
In this appendix the details about the RAKE implementation for UMTS are
explained. The basic operations for CDMA signal reception can be described in
three steps:
Time delay identification:
First of all, the different time delays at which significant energy arrives have to be identified. With this, the correlation receivers, i.e. the RAKE fingers, have to be allocated to the individual peaks. Quoted from [54], the measurement grid for acquiring the multipath delays is in the order of one chip duration (typically within the range of 1/4 - 1/2 chip duration) with an update rate in the order of some tens of milliseconds.
Note that the chip duration in UMTS is Tc = 0.26 µs. With this, multipath components with a delay difference of at least 0.26 µs can be separated and combined coherently. This difference corresponds to a path-length difference of about 78 m, which can be obtained even in small cells. For IS-95 systems, with a chip duration of about 1 µs (i.e. about 300 m), this multipath diversity is not possible in small cells.
Tracking of phase and amplitude:
Within each RAKE finger, both phase and amplitude changes (caused by small-scale fading) have to be tracked and removed (see Figure C.1).
Signal Combination:
The different signal contributions (the individual RAKE fingers) have to be
combined coherently. The resulting symbols can then be presented to the
decoder for further processing.
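The chip-duration and path-resolution figures quoted under time delay identification can be checked directly:

```python
# W-CDMA chip rate and the resulting multipath resolution.
chip_rate = 3.84e6        # chips per second
c = 3.0e8                 # speed of light in m/s

T_c = 1.0 / chip_rate     # chip duration, about 0.26 microseconds
resolution = c * T_c      # separable path-length difference, about 78 m

print(f"T_c = {T_c * 1e6:.2f} us, path resolution = {resolution:.0f} m")
# prints: T_c = 0.26 us, path resolution = 78 m
```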
In order to facilitate the tracking of both signal phase and amplitude, UMTS uses known pilot symbols to sound the channel state (i.e. the weight
Figure C.1: The principle of maximum ratio combining within the CDMA RAKE
receiver. Source: [54].
vectors) for a particular finger. With this weight vector the received symbol is rotated back in order to cancel the phase rotation caused by the radio propagation channel. The channel-compensated symbols can then be summed up to recover the energy across all delays.
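A minimal sketch of this maximum ratio combining step (illustrative only; a real receiver works on despread symbol streams with noisy channel estimates):

```python
def mrc_combine(finger_symbols, channel_estimates):
    """Maximum ratio combining: de-rotate each finger's symbol with the
    conjugate of its channel estimate and sum the compensated symbols."""
    total = 0j
    for y, h in zip(finger_symbols, channel_estimates):
        total += y * h.conjugate()   # cancels phase, weights by amplitude
    return total

# Symbol 1+0j seen through three fingers with different complex gains
# (noise-free and with perfect channel estimates, for illustration).
h = [0.8 + 0.1j, 0.3 - 0.4j, 0.1 + 0.2j]
received = [1.0 * hi for hi in h]
z = mrc_combine(received, h)     # real-valued: sum of |h_i|^2 = 0.95
```

After combining, the imaginary part vanishes and the finger energies add up coherently, which is exactly the effect described above.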
In Figure C.2 the block diagram of a W-CDMA RAKE receiver is shown. Code generators and correlators perform the despreading and the integration of the received I- and Q-branches from the HF front-end into user data symbols. The channel estimation uses the pilot symbols for estimating the channel state, which is then removed from the received symbols by the phase rotator. In the delay equalizer the different delays of the individual taps are compensated. Since the individual taps are uncorrelated (they have different fading statistics), the delay equalization provides a multipath diversity gain.
In order to perform a successful despreading, code and data timing must be known. They can be estimated by a so-called matched filter, which works as follows (see Figure C.3): We assume an incoming serial data stream. When the samples of the incoming serial data match the predefined data bits (i.e. pilot symbols), the filter output shows a maximum.
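The matched-filter principle can be sketched as a sliding correlation (an illustrative example with ±1 samples, not the receiver implementation):

```python
def matched_filter(stream, pilot):
    """Slide the known pilot over the incoming stream and correlate;
    the output peaks where stream and pilot line up."""
    return [sum(s * p for s, p in zip(stream[i:], pilot))
            for i in range(len(stream) - len(pilot) + 1)]

pilot = [1, -1, 1, 1]
stream = [-1, 1, 1, -1, 1, 1, -1]       # pilot starts at index 2
corr = matched_filter(stream, pilot)    # [-2, 0, 4, -2]
peak = corr.index(max(corr))            # 2: the timing estimate
```

The maximum equals the pilot length when all samples match, which gives the code and data timing needed for despreading.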
Although there are several differences between the UMTS RAKE receivers in the mobile and in the base station [104], all the basic principles presented in this appendix are the same. To learn more about RAKE reception in UMTS, see [54, 97].
Note: The maximum ratio combining (MRC) algorithm performs optimally in the case that the interference is uncorrelated [108].
Appendix D
Rule Set for Rule Based
Approach
The standard rule set for the Rule Based Approach (Section 8.2.1) is shown in
Table D.1.
rule   param    delta      limit     iter
 0     CPICH    -5 dB      24 dBm
 1     tilt
 2     CPICH    -4 dB      22 dBm
 3     tilt
 4     CPICH    -3 dB      20 dBm
 5     tilt
 6     CPICH    -2 dB      18 dBm
 7     tilt
 8     CPICH    -1 dB      15 dBm     10
 9     tilt                           10

Table D.1: Standard rule set used for the Rule Based Approach.
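The effect of one CPICH rule can be sketched as a bounded power reduction (a simplification: in the actual algorithm each rule is applied per cell, for a number of iterations, and only while the fitness improves):

```python
def apply_cpich_rule(cpich_dbm, delta_db, limit_dbm):
    """One CPICH rule: lower the pilot power by |delta_db| dB,
    but never below the rule's limit."""
    return max(cpich_dbm + delta_db, limit_dbm)

# Walk one cell from 33 dBm through the coarse-to-fine CPICH rules of
# Table D.1 (deltas -5 ... -1 dB with limits 24 ... 15 dBm).
rules = [(-5, 24), (-4, 22), (-3, 20), (-2, 18), (-1, 15)]
cpich = 33.0
for delta, limit in rules:
    cpich = apply_cpich_rule(cpich, delta, limit)
# cpich is now 18.0 dBm after one pass
```

The decreasing step sizes illustrate the coarse-to-fine structure of the rule set: large reductions first, small refinements near the lower limits.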
Appendix E
Rule Sets for Simulated
Annealing
The two rule sets used for the Simulated Annealing algorithm (Section 8.2.2) are listed in the following tables.
rule   param    delta      limit     iter
 0     CPICH    -5 dB      24 dBm     20
 1     tilt                           20
 2     CPICH    -4 dB      22 dBm     20
 3     tilt                           20
 4     CPICH    -3 dB      20 dBm     20
 5     tilt                           20
 6     CPICH    -2 dB      18 dBm     20
 7     tilt                           20
 8     CPICH    -1 dB      15 dBm     20
 9     tilt                           20
10     CPICH    -0.5 dB    15 dBm     20
11     tilt      0.5                  20
rule   param    delta       limit     iter
 0     CPICH    -5 dB       24 dBm     20
 1     tilt                            20
 2     CPICH    -4 dB       22 dBm     20
 3     tilt                            20
 4     CPICH    -3 dB       20 dBm     20
 5     tilt                            20
 6     CPICH    -2 dB       18 dBm     20
 7     tilt                            20
 8     CPICH    -1 dB       15 dBm     20
 9     tilt                            20
10     CPICH    -0.5 dB     15 dBm     20
11     tilt      0.5                   20
12     CPICH    -0.25 dB    14 dBm     20
13     tilt      0.25        8.5       20
Appendix F
Rule Sets for Adaptive Rule
Based Approach
The four rule sets used for the Adaptive Rule Based Approach (Section 8.2.3) are listed in the following four tables.
rule   param    delta     limit     iter
 0     CPICH    3 dB      25 dBm     50
       tilt     1.5                  50
 1     CPICH    2 dB      15 dBm     50
       tilt                          50
 2     CPICH    1 dB      10 dBm     50
       tilt     0.5                  50
rule   param    delta     limit     iter
 0     CPICH    3 dB      25 dBm     50
       tilt     1.5                  50
 1     CPICH    2 dB      20 dBm     50
       tilt                          50
 2     CPICH    1 dB      15 dBm     50
       tilt     0.5                  50
 3     CPICH    1 dB      10 dBm     50
       tilt     0.5                  50
rule   param    delta      limit     iter
 0     CPICH    0.5 dB     25 dBm     50
       tilt     0.25                  50
 1     CPICH    3 dB       25 dBm     50
       tilt     1.5                   50
 2     CPICH    1 dB       20 dBm     50
       tilt     0.5                   50
 3     CPICH    3 dB       20 dBm     50
       tilt     1.5                   50
 4     CPICH    2 dB       15 dBm     50
       tilt                           50
 5     CPICH    3 dB       15 dBm     50
       tilt     1.5                   50
 6     CPICH    1 dB       10 dBm     50
       tilt     0.5                   50
 7     CPICH    3 dB       10 dBm     50
       tilt     1.5                   50
rule   param    delta      limit     iter
 0     CPICH    3 dB       25 dBm     50
       tilt     1.5                   50
 1     CPICH    0.5 dB     25 dBm     50
       tilt     0.25                  50
 2     CPICH    3 dB       20 dBm     50
       tilt     1.5                   50
 3     CPICH    0.5 dB     20 dBm     50
       tilt     0.25                  50
 4     CPICH    3 dB       15 dBm     50
       tilt     1.5                   50
 5     CPICH    0.5 dB     15 dBm     50
       tilt     0.25                  50
 6     CPICH    3 dB       10 dBm     50
       tilt     1.5                   50
 7     CPICH    0.5 dB     10 dBm     50
       tilt     0.25                  50
Appendix G
Parameter File for Genetic
Algorithm
For the genetic algorithm an XML file is used to specify the most important
parameters. The file contains 3 elements: <ga>, <limit> and <init>. The
element <ga> specifies the main parameters of the algorithm. The search space is
defined by the element <limit>. The element <init> defines the start parameter
settings of the first 12 individuals.
In the element <ga> the attributes define the following:
pop: Number of individuals in the population.
cm: Selection pressure Cm .
local num and local iter: Parameters of the local optimization (Section 8.3.2.5).
reduce iter and min pop size: Parameters of Reduced Population Size
(Section 8.3.2.7).
add gos: GoS, where new users are admitted.
add ever: Parameter of Adding Users as New Impulse (Section 8.3.2.6).
<parameter_file>
  <ga
    pop="100"
    cm="5"
    local_num="20"
    local_iter="2"
    reduce_iter="999"
    min_pop_size="10"
    add_gos="0.96"
    add_ever="20" >
  </ga>
  <limit
    cpich_upper="38"
    cpich_lower="15"
    tilt_upper="-2"
    tilt_lower="10" >
  </limit>
  <init>
    <param cpich="33" tilt="0" />
    <param cpich="33" tilt="6" />
    <param cpich="15" tilt="0" />
    <param cpich="15" tilt="6" />
    <param cpich="24" tilt="0" />
    <param cpich="24" tilt="6" />
    <param cpich="24" tilt="4" />
    <param cpich="33" tilt="4" />
    <param cpich="15" tilt="4" />
    <param cpich="33" tilt="2" />
    <param cpich="24" tilt="2" />
    <param cpich="15" tilt="2" />
  </init>
</parameter_file>
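Such a parameter file can be read with a standard XML parser; a sketch using Python's xml.etree.ElementTree (a trimmed copy of the file content is embedded as a string for illustration):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the parameter file (only three <init> entries shown).
XML = """<parameter_file>
  <ga pop="100" cm="5" local_num="20" local_iter="2"
      reduce_iter="999" min_pop_size="10"
      add_gos="0.96" add_ever="20"></ga>
  <limit cpich_upper="38" cpich_lower="15"
         tilt_upper="-2" tilt_lower="10"></limit>
  <init>
    <param cpich="33" tilt="0" />
    <param cpich="33" tilt="6" />
    <param cpich="15" tilt="0" />
  </init>
</parameter_file>"""

root = ET.fromstring(XML)
pop_size = int(root.find("ga").get("pop"))               # population size
cpich_lo = float(root.find("limit").get("cpich_lower"))  # search-space bound
starts = [(float(p.get("cpich")), float(p.get("tilt")))
          for p in root.find("init").findall("param")]   # start individuals
```

For a file on disk, ET.parse() would be used instead of ET.fromstring().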
Appendix H
Flowcharts for Analytic
Optimization Algorithm
This appendix presents the flowcharts for the Analytic Optimization Algorithm
presented in Section 8.4.
Figure H.1 and Figure H.2 show the implementation of the azimuth adjustment
described in Section 8.4.1. The introduced optimization of the antenna downtilt
in Section 8.4.2 is presented by Figure H.3. Figure H.4 and Figure H.5 show the
implementation of the CPICH power adjustment from Section 8.4.3.
The full description of the implementation can be found in Chapter 6 of Wolfgang Karner's diploma thesis [63].
Figure H.1: Automatic azimuth adjustment routine for turning base stations to
critical spots.
Figure H.2: Automatic azimuth adjustment routine for interleaving base stations.
Figure H.3: Automatic antenna downtilt adjustment routine.
Figure H.4: Automatic CPICH adjustment routine for CPICH coverage verification mode 2, function: set CPICH coverage in total scenario.
Figure H.5: Automatic CPICH adjustment routine for CPICH coverage verification mode 2, function: set CPICH coverage in optimization area.
Appendix I
Simulation Parameters
The following tables summarize the relevant network parameters which are used for CAPESSO™.
Antenna type       Conventional
Antenna height     30 m
Antenna sector     65°
Antenna gain       16 dBi
Antenna pattern    Kathrein 739707 (Figures 5.2 and 5.4)
Number of carriers
43 dBm
40 dBm
15 dBm
33 dBm
5 dBm
5 dBm
3 dB
Transmitter loss
0 dB
5 dB
MRC efficiency
65 dB
DL TPC headroom
1 dB
25 dB
UL TPC headroom
2 dB
Changed together with CPICH power. See Section 5.3 for details.
Background noise floor
2 s
azimuth of taps
0.4
2 dB
0.5
ACIR
5%
ISIR
5%
21 dBm
0 dB
Body loss
0 dB
0 dB
Receiver sensitivity
-120 dBm
-126 dBm
The receiver sensitivity is used in CAPESSO™ for setting the Ec/I0 threshold in CPICH coverage verification mode 2 (see Section 7.2.2.2). A receiver sensitivity of -120 dBm is equivalent to an Ec/I0 threshold of -12 dB.
See Section 7.4 for a detailed description of the possible user distributions. Throughout this thesis only the Best Server Equal distribution is used for the evaluation of the different optimization algorithms.
Appendix J
Frequently Used Acronyms
2G            Second Generation
3G            Third Generation
3GPP          Third Generation Partnership Project
ACIR          Adjacent Channel Interference Ratio
ACTS          Advanced Communications Technologies and Services
AICH          Acquisition Indication Channel
AN            Access Network
AP-AICH       Access Preamble Acquisition Indication Channel
ARIB          Association of Radio Industries and Businesses
AuC           Authentication Center
AS            Active Set
BCH           Broadcast Channel
BER           Bit Error Rate
BLER          Block Error Rate
BSC           Base Station Controller
BSS           Base Station System
BTS           Base Transceiver Station
cdf           cumulative distribution function
CDMA          Code Division Multiple Access
cdma2000      3G CDMA standard in US
CD/CA-ICH     Collision-Detection/Channel-Assignment Indicator Channel
CF            Cooling Function
CN            Core Network
COST          European Cooperation in the field of Scientific and Technical research
CPCH          Common Packet Channel
CPICH         Common Pilot Channel
CS            Circuit Switched
CSICH         CPCH Status Indicator Channel
CWTS          China Wireless Communication Standard
DCH
DECT
DL
DPCCH
DPCH
DPDCH
DSCH
DS-CDMA
DTX
EA
EIR
EIRP
ETSI
FACH
FDD
FDMA
FRAMES
GA
GGSN
GoS
GPRS
GSM
GUI
HCS
HF
HIMM
HLR
HMM
HO
HIPERLAN
HTML
IF
IMT-2000
IP
ISIR
ITU
ITU-R
LAN
MDC
ME
MMM
MRC
MS
MSC
OFDMA
ODMA
QAP
QoS
QPSK
OVSF
PCH
PCPCH
PDSCH
PHY
PICH
PLMN
PN
PRACH
PS
PSD
PSTN
P-CCPCH
P-CPICH
QF
RACE
RACH
RAN
RNC
RNS
RRC
RRM
R&D
S
SA
SCH
SD
SGSN
SHO
SIM
SIR
SM
SMS
SNR
S-CCPCH
S-CPICH
T1P1
TCP/IP
TDD
TDMA
TD-SCDMA
TFI
TL
TPC
TS
TSG
TSP
TTA
TTC
TX
UE
UL
UMTS
USIM
UTRA
UTRAN
VLR
WARC
WLAN
WLL
W-CDMA
W-TDMA
WWW
XML
Appendix K
Frequently Used Symbols
I_ACI
I_k
I_oth
I_own
I_tot
k
K_N
L
L_p
L_p,i
L_p,k
n
N_0
N_C
N_MS
numBSs
p_c
P_common
P_CPICH
P_i
p_m
P_noise
p_s(i)
P_T
P_T,max
P_TX
P_TX,i
P_TX,max
P_TX,MS
P_TX,n
QF
R
R_k
RSCP_CPICH
RSSI
S            Receiver sensitivity
served       Total number of served users
served_k     Number of served users of cell k
σ_τ,i        Cluster delay spread
T            Temperature
T_c          Chip duration
T_C          Cooling temperature
τ_i          Delay of the first path
             Service activity
             W-CDMA chip rate
Appendix L
Curriculum Vitae
Personal Data
Name:
Alexander Gerdenitsch
Address:
Birthday:
21.10.1976, Eisenstadt
Family status:
Unmarried
Education
1983-1987
1987-1991
1991-1996
1996-2001
2002-2004
Military service
04/2001 - 11/2001
Career
12/2001 - 6/2002
Since 7/2002
Publications
S. Jakl, A. Gerdenitsch, W. Karner, M. Toeltsch, An Approach for the
Initial Adjustment of Antenna Azimuth and Other Parameters in UMTS
Networks, Proc. 13th IST Mobile & Wireless Communications Summit
2004, June 2004, Lyon, France.
A. Gerdenitsch, S. Jakl, W. Karner, M. Toeltsch, Influence of Antenna
Azimuth in Non-Regular UMTS Networks, Proc. 5th World Wireless
Congress, San Francisco, US, 2004.
A. Gerdenitsch, S. Jakl, M. Toeltsch, The Use of Genetic Algorithms for
Capacity Optimization in UMTS FDD Networks, Proc. 3rd International
Conference on Networking ICN04, vol. 1, pp. 293-298, ISBN: 0-86341-326-9, Guadeloupe, French Caribbean, 2004.
A. Gerdenitsch, S. Jakl, Y.Y. Chong, M. Toeltsch, An Adaptive Algorithm
for CPICH and Antenna Tilt Optimization in UMTS FDD Networks, Proc.
8th International Conference on Cellular and Intelligent Communications
(CIC), p. 378, ISBN: 89-5519-118-9-98560, October 2003, Seoul, Korea.
A. Gerdenitsch, S. Jakl, M. Toeltsch, T. Neubauer, Intelligent Algorithms
for System Capacity Optimization of UMTS FDD Networks, Proc. IEE
4th International Conference on 3G Mobile Communication Technologies,
pp. 222-226, ISBN: 0-85296-756-X, June 2003, London.
A. Springer, A. Gerdenitsch, Z. Li, A. Stelzer, R. Weigel, Adaptive Predistortion for Amplifier Linearization for UMTS Terminals, Proc. IEEE 7th
International Symposium on Spread-Spectrum Techniques and Applications,
pp. 78-82, ISBN: 0-7803-7627-7, Sept. 2002, Prague.
A. Springer, A. Gerdenitsch, R. Weigel, Digital Predistortion-Based Power
Amplifier Linearization for UMTS, Proc. European Conference on Wireless Technology (ECWT2001), pp. 185-189, ISBN: 0 86213 163 4, Sept.
2001, London.
Alexander Gerdenitsch, Digitale Vorverzerrung zur Linearisierung von Leistungsverstärkern für UMTS, Master Thesis, June 2001.
Bibliography
[1] 3GPP, Network architecture, TS23.002, v3.6.0, September 2002,
http://www.3gpp.org.
[2] 3GPP, Requirements for support of radio resource management (FDD),
TS25.133, v6.0.0, September 2002, http://www.3gpp.org.
[3] 3GPP, Physical channels and mapping of transport channels onto
physical channels (FDD), TS25.211, v3.12.0, September 2002,
http://www.3gpp.org.
[4] 3GPP, Physical layer - Measurements (FDD), TS25.215, v4.7.0, June 2003, http://www.3gpp.org.
[5] 3GPP, RF system scenarios, TS25.942, v3.3.0, June 2002, http://www.3gpp.org.
[6] 3GPP, Terminal conformance specification; radio transmission and reception (FDD), TS34.121, v3.10.0, September 2002, http://www.3gpp.org.
[7] 3GPP, Deployment aspects, TR25.943, v5.1.0, June 2002, http://www.3gpp.org.
[38] ETSI SMG 24, Summary of concept description of the beta concept,
1997, http://etsi.org.
[39] ETSI SMG 24, Concept group W-TDMA: System description summary,
1997, http://etsi.org.
[40] ETSI SMG 24, Concept group delta W-TD/CDMA: System description
summary, 1997, http://etsi.org.
[41] ETSI SMG 24, Concept group epsilon ODMA: System description summary, 1997, http://etsi.org.
[42] ETSI Press Release, SMG Tdoc 40/98, Agreement Reached on Radio Interface for Third Generation Mobile System, UMTS, Paris, France, January 1998.
[43] C. A. Floudas and P. M. Pardalos, eds., State of the art in global optimization, Kluwer, Dordrecht, 1996.
[44] I. Forkel, A. Kemper, R. Pabst, R. Hermans, The Effect of Electrical
and Mechanical Antenna Down-Tilting in UMTS Networks, Proceedings
of 3rd International Conference on 3G Mobile Communication Technologies,
pp. 86-90, London, Great Britain, May 8-10, 2002.
[45] J. Fuhl, Smart Antennas for Second and Third Generation Mobile Communications Systems, PhD Thesis, Technische Universität Wien, Austria, 1997.
[46] L. M. Gambardella and M. Dorigo, Solving symmetric and asymmetric
TSPs by ant colonies, Proceedings IEEE Conference on Evolutionary Computation (ICEC96), pp. 622-627, 1996.
[47] M. A. C. Garcia, Analysis of Multi-Service Traffic in UMTS FDD mode
Networks, IST, Technical University of Lisbon, Portugal, May 2000.
[48] A. Gerdenitsch, Digitale Vorverzerrung zur Linearisierung von Leistungsverstärkern für UMTS, Master Thesis (in German), University Linz, Austria, June 2001.
[49] F. Glover and M. Laguna, Tabu Search, Kluwer Academic Publishers, Dordrecht, 1997.
[50] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine
Learning, Addison-Wesley, MA, 1989.
[62] P. Jones and R. Owen, Sensitivity of UMTS FDD system capacity and
coverage to model parameters, Proc. of 1st IEE 3G Mobile Communications Technologies Conference, pp. 224-229, London, March 2000.
[63] W. Karner, Optimum Default Base Station Parameter Settings for UMTS Networks, Master Thesis, Technische Universität Wien, September 2003.
[64] Kathrein, 790-2200 MHz Base Station Antennas for Mobile Communications, 2001, Catalogue.
[65] D. Kim, Y. Chang and J. W. Lee, Pilot Power Control and Service Coverage Support in CDMA Mobile Systems, Proceedings of 49th IEEE Vehicular Technology Conference, VTC 1999-Spring, vol. 4, pp. 2238-2242,
Houston, TX, May, 1999.
[66] A. Klose, Simulated Annealing, Lecture notes (in German), 2002,
http://troubadix.unisg.ch/klose.
[67] A. Klose, Tabu-Suche, Lecture notes (in German), 2002,
http://troubadix.unisg.ch/klose.
[68] J. Laiho, A. Wacker and T. Novosad, eds., Radio Network Planning and
Optimization for UMTS, John Wiley & Sons, Ltd., 2002.
[69] J. Laiho-Steffens, A. Wacker and P. Aikio, The impact of the radio network planning and site configuration on the WCDMA network capacity and
quality of service, Proceedings of 51st IEEE Vehicular Technology Conference, VTC 2000-Spring, vol. 2, pp. 1006-1010, Tokyo, Japan, May 15-18,
2000.
[70] I. Laki, L. Farkas and L. Nagy, Cell planning in mobile communication systems using SGA optimization, Proceedings of the International Conference on Trends in Communications, vol. 1, pp. 124-127, 2001.
[71] C. Y. Lee and H. G. Kang, Cell planning with capacity expansion in mobile communications: A tabu search approach, IEEE Transactions on Vehicular Technology, vol. 49, pp. 1678-1691, March 2000.
[72] R. T. Love, K. A. Beshir, D. Schaeffer and R. S. Nikides, A Pilot Optimization Technique for CDMA Cellular System, Proceedings of the 50th IEEE Vehicular Technology Conference, VTC 1999-Fall, vol. 4, pp. 2238-2242, 1999.
[73] R. M. Mathar and T. Niessen, Optimum positioning of base stations for cellular radio networks, Wireless Networks, vol. 6, pp. 421-428, 2000.
[100] UMTS Forum, Minimum spectrum demand per public terrestrial UMTS operator in the initial phase, UMTS Report No. 5, September 1998, http://www.umts-forum.org.
[101] UMTS Forum, UMTS/IMT-2000 Spectrum, UMTS Report No. 6, December 1998, http://www.umts-forum.org.
[102] UMTS Forum, The future mobile market, UMTS Report No. 8, March
1999, http://www.umts-forum.org.
[103] K. Valkealahti, A. Höglund, J. Parkkinen and A. Hämäläinen, WCDMA Common Pilot Power Control for Load and Coverage Balancing, Proceedings of the 13th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, vol. 3, pp. 1412-1416, 2002.
[104] B. N. Vejlgaard, Data Receiver for the Universal Mobile Telecommunications System (UMTS), PhD Thesis, Aalborg University, Denmark, 2001.
[105] A. M. Viterbi and A. J. Viterbi, Erlang capacity of a power controlled CDMA system, IEEE Journal on Selected Areas in Communications, vol. 11, pp. 892-900, August 1993.
[106] R. M. Whitaker and S. Hurley, Evolution of Planning for Wireless Communication Systems, Proceedings of the 36th Hawaii International Conference on System Sciences (HICSS'03), pp. 295-305, 2003.
[107] D. Whitley, GENITOR II: A Distributed Genetic Algorithm, Journal of Experimental and Theoretical Artificial Intelligence, vol. 2, pp. 189-214, 1990.
[108] J. H. Winters, Optimum combining in digital mobile radio with cochannel interference, IEEE Journal on Selected Areas in Communications, vol. SAC-2, pp. 528-539, 1984.