
Technische Universität Wien

DISSERTATION

System Capacity Optimization


of UMTS FDD Networks

ausgeführt zum Zwecke der Erlangung des akademischen Grades eines Doktors
der technischen Wissenschaften
eingereicht an der Technischen Universität Wien
Fakultät für Elektrotechnik und Informationstechnik
von

Alexander GERDENITSCH
Matrikelnummer 9656381
A-7022 Loipersbach, Berggasse 1

geboren in Eisenstadt, Österreich


am 21. Oktober 1976

Wien, im Juni 2004

First Supervisor: o. Univ.-Prof. i.R. Dr. Ernst BONEK


Institut für Nachrichtentechnik und Hochfrequenztechnik
Fakultät für Elektrotechnik und Informationstechnik
Technische Universität Wien, Austria

Second Supervisor: a.o. Univ.-Prof. Dr. Günther RAIDL
Institut für Computergraphik und Algorithmen
Fakultät für Informatik
Technische Universität Wien, Austria

Abstract
In this thesis I investigate the problem of capacity optimization in UMTS FDD
networks. The goal is to improve the capacity of the network, measured as served users, only by changing the base station parameters. The focus is on the
optimization of antenna tilt and common pilot channel (CPICH) power of the
base stations. These parameter adjustments improve the UMTS radio network
capacity by reducing inter-cell interference, balancing the load between cells, and making better use of the base station power resources.
Altogether five different algorithms for finding the best settings of antenna tilt
and CPICH power are presented. The first three optimization algorithms, Rule
Based Approach, Simulated Annealing and Adaptive Rule Based Approach, are
local techniques. Furthermore, a global technique, the Genetic Algorithm, is presented, and an Analytic Optimization Algorithm is discussed.
The fitness function used for the algorithms considers the number of served users
as the main optimization goal. For the Genetic Algorithm I use a fitness function that additionally considers coverage and soft handover.
First, the Rule Based Approach is addressed. The optimization process is characterized by reducing the CPICH power and increasing the antenna downtilt in the individual cells according to a configurable rule set. Subsequently, this algorithm is extended by incorporating Simulated Annealing. Here, in contrast to the first method, the decision whether to accept a worse result is independent of the rule set. The third local algorithm is also a further development of the Rule Based Approach. The main difference between the Adaptive Rule Based Approach and the other two local approaches is that CPICH power and antenna tilt are changed together, and that an increase of CPICH power and antenna up-tilting are also possible during the optimization process.
Further, a Genetic Algorithm is introduced, which I improved by using operators that are adapted to the UMTS capacity optimization problem and take the quality of the network into account. In addition, a local optimization is included
to improve the performance.
Finally, I address an Analytic Optimization Algorithm. Besides antenna tilt and CPICH power, this algorithm also optimizes the antenna azimuth.
The performance of the algorithms is evaluated using a static UMTS FDD network simulator on two virtual scenarios of a typical European city. In the first
scenario the network covers the whole area of the city. The second scenario spans only the downtown area.
With the different algorithms, I show improvements in capacity of up to 105 %
compared to the initial settings. The Genetic Algorithm performs best, but with
the drawback of a high computation time. If we compare the three local optimization techniques, Rule Based Approach, Simulated Annealing and Adaptive Rule
Based Approach, we see that the Adaptive Rule Based Approach achieves the
highest improvement. The computation effort for all three algorithms is approximately the same. The Analytic Optimization Algorithm shows, with only five
network evaluations, almost the same optimization result as the local algorithms.

Zusammenfassung
Diese Dissertation beschäftigt sich mit der Kapazitätsoptimierung in UMTS-Mobilfunknetzen. Das Ziel ist, die Kapazität im Netz, gemessen an der Anzahl der bedienten Teilnehmer, nur durch Optimierung der Basisstationsparameter zu erhöhen. Zur Optimierung werden die Antennenneigung sowie die Sendeleistung des Common Pilot Channel (CPICH) herangezogen. Eine korrekte Einstellung dieser Parameter bewirkt eine Kapazitätssteigerung des UMTS-Mobilfunknetzes durch Reduzierung der Interferenz der Nachbarzellen. Weiters kommt es zu einer gleichmäßigen Aufteilung der Last auf die einzelnen Zellen und zur Optimierung der Leistungsressourcen der Basisstationen.

In dieser Arbeit werden insgesamt fünf verschiedene Algorithmen zur Suche nach der optimalen Einstellung der Antennenneigung sowie der CPICH-Leistung vorgestellt. Die ersten drei Optimierungsalgorithmen, ein auf einem Regelwerk basierender Algorithmus, ein auf diesem basierender adaptiver Algorithmus und ein Simulated-Annealing-Algorithmus, sind lokale Techniken. Weiters wird auch eine globale Technik, ein genetischer Algorithmus, untersucht. Der letzte diskutierte Algorithmus ist ein analytischer Optimierungsalgorithmus.

Die verwendete Fitnessfunktion beschreibt das Optimierungsziel (Kapazitätssteigerung) durch die Anzahl der bedienten Mobilfunkteilnehmer. Für den genetischen Algorithmus wird die Fitnessfunktion um den Grad der Netzabdeckung und die Anzahl der Teilnehmer, die mit mehr als einer Basisstation gleichzeitig verbunden sind, erweitert.

Zu Beginn stelle ich in meiner Arbeit den auf einem Regelwerk basierenden Algorithmus vor. Der Optimierungsprozess bei diesem Algorithmus zeichnet sich durch eine Erhöhung der Antennenneigung und Reduzierung der CPICH-Leistung in den einzelnen Zellen mittels eines konfigurierbaren Regelwerkes aus. In weiterer Folge wird der Algorithmus durch das Einbinden von Simulated Annealing erweitert. Die Entscheidung, ob ein schlechteres Ergebnis akzeptiert wird, ist im Gegensatz zum vorigen Algorithmus unabhängig vom Regelwerk. Der dritte Algorithmus ist ebenfalls eine Erweiterung des ersten Algorithmus. Der grundlegende Unterschied zu den vorigen Algorithmen ist, dass nun die Antennenneigung und die CPICH-Leistung gemeinsam adaptiert werden, und dass eine Erhöhung der CPICH-Leistung sowie eine Reduzierung der Antennenneigung während des Optimierungsprozesses ebenfalls zulässig sind.

Weiters behandle ich einen genetischen Algorithmus, der an meine Problemstellung angepasst ist. Mittels angepasster Operatoren wird auch die Qualität des Mobilfunknetzes zur Optimierung herangezogen. Ein Bestandteil des genetischen Algorithmus ist auch eine lokale Optimierung mit dem Ziel, die Leistung des Algorithmus weiter zu steigern.

Schließlich behandle ich in meiner Dissertation einen analytischen Optimierungsalgorithmus. Dieser Algorithmus optimiert neben der Antennenneigung und der CPICH-Leistung auch den Azimutwinkel der Antenne.

Die Leistungsfähigkeit der einzelnen Algorithmen wird mit Hilfe eines statischen UMTS-FDD-Netzwerksimulators auf zwei virtuellen Szenarien einer typischen europäischen Großstadt bewertet. Das erste Szenario umfasst das komplette Stadtgebiet, wogegen das zweite nur das Zentrum der Stadt abdeckt.

Mit den einzelnen Algorithmen zeige ich auf beiden Szenarien eine Kapazitätssteigerung von bis zu 105 % verglichen mit der anfänglichen Parametereinstellung. Der genetische Algorithmus liefert das beste Ergebnis, jedoch mit dem Nachteil der langen Laufzeit. Unter den lokalen Optimierungsverfahren schneidet der adaptive regelwerkbasierte Algorithmus am besten ab. Die Laufzeit ist jedoch für alle drei Algorithmen ungefähr gleich. Der analytische Optimierungsalgorithmus zeigt eine ähnliche Kapazitätssteigerung wie die lokalen Verfahren, jedoch mit dem Vorteil, dass dieser Algorithmus nur fünf statt mehr als hundert Iterationen benötigt.

Acknowledgment
I am deeply grateful to Prof. Ernst Bonek for his guidance and invaluable support
during my work. I thank him for his encouragement during the course of this
work and for various suggestions improving the quality of this thesis.
I am also very grateful to Martin Toeltsch and Thomas Neubauer of SYMENA,
Software & Consulting GmbH, for the numerous discussions and their fruitful
collaboration. Further I thank them for providing the static UMTS FDD network
simulator CAPESSO.
Special thanks go to Thomas Baumgartner and Werner Weichselberger for many
critical and useful suggestions and discussions.
My very great appreciation goes to all my colleagues, Plamen Dintchev, Klaus Kopsa, Christian Pommer, Hüseyin Özcelik, Elmar Trojer, Markus Herdin, Gerhard Gritsch and Biljana Badic for their fruitful collaboration.
Very special thanks to Stefan Jakl, who shared the office with me and worked
together with me on the same project, for the numerous discussions and the
excellent office atmosphere. Further I thank him for proof-reading this thesis. It
was a pleasure to share the office with you.
Thanks also go to Wolfgang Karner and Yee Yang Chong, who did their master's theses in our project under the guidance of Stefan and me.
I also owe thanks to Michael Feher of Dipl.-Ing. Dr. Hermann Bühler GmbH, Prof. Günther Raidl of the Algorithms and Data Structures Group at the Vienna University of Technology, and Hendrik Rogier of the Electromagnetics Group at the Department of Information Technology (INTEC), Ghent University, Belgium.
Last, but not least, I would like to thank my parents and family for their tremendous support during my studies. Without their assistance I wouldn't have reached
my goals.


Contents

1 Introduction
  1.1 Background and Standardization of W-CDMA
  1.2 Objectives of the UMTS System
  1.3 Motivation for UMTS Optimization
  1.4 Structure of the Thesis

2 Radio System Aspects
  2.1 Introduction
  2.2 UTRA FDD System Description
    2.2.1 Channel and Frame Structure
    2.2.2 Modulation, Spreading and Scrambling
  2.3 Transport Channels and Physical Channels
    2.3.1 Synchronization Channel
      2.3.1.1 Primary SCH
      2.3.1.2 Secondary SCH
    2.3.2 Common Pilot Channel
      2.3.2.1 Primary Common Pilot Channel
      2.3.2.2 Secondary Common Pilot Channel
    2.3.3 Broadcast Channel
    2.3.4 Random Access Channel
    2.3.5 Forward Access Channel
    2.3.6 Paging Channel
    2.3.7 Dedicated Channel
    2.3.8 Downlink Shared Channel
    2.3.9 Uplink Common Packet Channel
    2.3.10 Necessary Common Channels in a UMTS Network
  2.4 Physical Layer Procedures
    2.4.1 Cell Search
    2.4.2 Power Control
    2.4.3 Handover
  2.5 Services and Applications
    2.5.1 UMTS Forum Traffic Classes
    2.5.2 3GPP UMTS QoS Classes
    2.5.3 Traffic Forecast for UMTS Network Optimization

3 Overview of Optimization Techniques
  3.1 General Issues
  3.2 Local Search
  3.3 Simulated Annealing
  3.4 Tabu Search
  3.5 Evolutionary Algorithms
    3.5.1 Genetic Algorithms
      3.5.1.1 Selection
      3.5.1.2 Recombination
      3.5.1.3 Mutation
      3.5.1.4 Applications for Genetic Algorithms
  3.6 Ant Colony Optimization

4 Coverage- and Capacity-limiting Factors
  4.1 Introduction
  4.2 Uplink and Downlink Coverage-limited Scenarios
  4.3 Uplink Capacity-limited Scenarios
    4.3.1 Insufficient Uplink Power
    4.3.2 Uplink Cell Load Limitation
  4.4 Downlink Capacity-limited Scenarios
    4.4.1 Cell Power Limitation
    4.4.2 Orthogonal Variable Spreading Factor (OVSF) Code Limitation
    4.4.3 Code Power Limitation
  4.5 Summary

5 Key Optimization Parameters
  5.1 Introduction
  5.2 Antenna Parameters
    5.2.1 Antenna Azimuth
    5.2.2 Antenna Tilt
  5.3 Common Pilot Channel (CPICH) Power
  5.4 A Short Investigation of CPICH and Antenna Tilt Changing
  5.5 Summary

6 Fitness Function and Performance Indicators
  6.1 Introduction
  6.2 Fitness Function
    6.2.1 Basic Fitness Function
    6.2.2 Extended Fitness Function
    6.2.3 Used Fitness Function
  6.3 Grade of Service
  6.4 Performance Indicators
    6.4.1 Outaged Mobiles
    6.4.2 Quality Factor
    6.4.3 Summary of Performance Indicators

7 Simulation Environment
  7.1 Introduction
  7.2 The Used UMTS FDD Simulator
    7.2.1 Parameters of the Network Simulator
    7.2.2 CPICH Coverage Verification Modes
      7.2.2.1 Mode 1
      7.2.2.2 Mode 2
  7.3 Simulator Interface
  7.4 Network Scenarios
    7.4.1 Big Scenario
    7.4.2 Small Scenario

8 Optimization Algorithms
  8.1 Introduction
  8.2 Local Optimization Algorithms
    8.2.1 Rule Based Approach
    8.2.2 Simulated Annealing
    8.2.3 Adaptive Rule Based Approach
      8.2.3.1 Why Adjust CPICH Power and Antenna Tilt Together?
      8.2.3.2 Algorithm Description
  8.3 Genetic Algorithm
    8.3.1 Representation
    8.3.2 Algorithm
      8.3.2.1 Initial Population
      8.3.2.2 Selection
      8.3.2.3 Recombination
      8.3.2.4 Mutation
      8.3.2.5 Local Optimization
      8.3.2.6 Adding Users as New Impulse
      8.3.2.7 Reduced Population Size
      8.3.2.8 Implementation for Distributed Computing
    8.3.3 Summary of Genetic Algorithm
  8.4 Analytic Optimization Algorithm
    8.4.1 Azimuth Adjustment Strategy
      8.4.1.1 Algorithm Description
    8.4.2 Strategy for Antenna Downtilt
    8.4.3 CPICH Power Level Adjustment Strategy
      8.4.3.1 Strategy for CPICH Coverage Verification Mode 1
      8.4.3.2 Strategy for CPICH Coverage Verification Mode 2

9 Algorithm Performance Analysis
  9.1 Introduction
  9.2 Results for the Rule Based Approach
    9.2.1 Various Investigations with the Rule Based Approach
      9.2.1.1 Optimization on Different Snapshots
      9.2.1.2 Analyzing Optimization Results with Different Snapshots
      9.2.1.3 Optimization with Modified Parameter Ranges
      9.2.1.4 Optimization on Adapted Start Scenarios
    9.2.2 Runtime of the Rule Based Approach
  9.3 Results for Simulated Annealing
    9.3.1 Results with Slow Cooling
    9.3.2 Results with Geometric Cooling
    9.3.3 Results with Improved Rule Set
    9.3.4 Runtime of Simulated Annealing
  9.4 Results for the Adaptive Rule Based Approach
  9.5 Results with the Genetic Algorithm
    9.5.1 CPICH Coverage Verification Mode 1
      9.5.1.1 Results on the Small Network Scenario
      9.5.1.2 Results on the Big Network Scenario
    9.5.2 CPICH Coverage Verification Mode 2
      9.5.2.1 Results on the Small Network Scenario
      9.5.2.2 Results on the Big Network Scenario
    9.5.3 Runtime of the Genetic Algorithm
  9.6 Results for the Analytic Optimization Algorithm
    9.6.1 CPICH Coverage Verification Mode 1
    9.6.2 CPICH Coverage Verification Mode 2
    9.6.3 Runtime of the Analytic Optimization Algorithm
  9.7 Comparison of the Various Approaches

10 Summary and Conclusion

11 Appendix
  A UMTS Network Structure
  B 3GPP COST 259 Channel Models
    B.1 Model Descriptions
    B.2 3GPP Considerations
  C RAKE Reception
  D Rule Set for Rule Based Approach
  E Rule Sets for Simulated Annealing
  F Rule Sets for Adaptive Rule Based Approach
  G Parameter File for Genetic Algorithm
  H Flowcharts for Analytic Optimization Algorithm
  I Simulation Parameters
  J Frequently Used Acronyms
  K Frequently Used Symbols
  L Curriculum Vitae

Bibliography

List of Tables

2.1 Technical overview of FDD mode.
2.2 Service characteristics (source: [101]).
2.3 Effective call duration (source: [101]).
2.4 Operational environment and cell types (source: [101]).
3.1 Analogy between the thermodynamic simulation of annealing and optimization.
4.1 Example link budget for a data service (source: [68]).
4.2 Example uplink link budgets for illustrating the impact of service data rate (source: [68]).
4.3 Typical base station transmit power configurations (source: [68]).
6.1 Assignment fitness function - optimization algorithm.
6.2 Used performance indicators for the several algorithms.
8.1 Example rule set for Rule Based Optimization Algorithm.
8.2 CPICH power and antenna tilt adjustment.
8.3 Example of CPICH power and antenna tilt limitation & stepsize settings.
8.4 Best setting of optimization parameters for GA.
8.5 Rules for adjusting the antenna downtilt according to the mean elevation angle in CPICH coverage verification mode 1.
8.6 Rules for adjusting the antenna downtilt according to the mean elevation angle in CPICH coverage verification mode 2.
9.1 Results for the Rule Based Approach with CPICH verification mode 1.
9.2 Results for the Rule Based Approach with CPICH verification mode 1.
9.3 Results for the Rule Based Approach with different parameter ranges.
9.4 Results for the Rule Based Approach with adapted antenna tilt in the start scenario.
9.5 Results for the Rule Based Approach with adapted CPICH power in the start scenario.
9.6 Results for Simulated Annealing with Slow Cooling, different values for .
9.7 Results for Simulated Annealing with Slow Cooling, different values for TC.
9.8 Results for Simulated Annealing with Geometric Cooling.
9.9 Results for Simulated Annealing with improved rule set.
9.10 Results for Adaptive Rule Based Approach with CPICH verification mode 1.
9.11 GA settings for the best optimization run on the small network scenario with CPICH verification mode 1.
9.12 Evaluation of the best result with 100 different snapshots on the small network scenario with CPICH verification mode 1.
9.13 GA settings for the best optimization run on the big network scenario with CPICH verification mode 1.
9.14 Evaluation of the best result with 100 different snapshots on the big network scenario with CPICH verification mode 1.
9.15 Evaluation of the best result with 100 different snapshots on the small network scenario with CPICH verification mode 2.
9.16 GA settings for the best optimization run on the small network scenario with CPICH verification mode 2.
9.17 Evaluation of the best result with 100 different snapshots on the big network scenario with CPICH verification mode 2.
9.18 GA settings for the best optimization run on the big network scenario with CPICH verification mode 2.
9.19 Results for the Analytic Optimization Algorithm with CPICH verification mode 1 (40 snapshots).
9.20 Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots) with CPICH Ec/I0 threshold -12 dB and required coverage probability in worst case of 0.5/0.75.
9.21 Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots) with CPICH Ec/I0 threshold -12 dB and required coverage probability in worst case of 0.8/0.98.
9.22 Comparison of the different algorithms with 50 different snapshots on the big network scenario with CPICH verification mode 1.
B.1 Preliminary environments identified by COST 259.
B.2 Propagation properties proposed by COST 259 and considered by 3GPP.
D.1 Standard rule set used for Rule Based Approach.
E.1 Rule set 1 for Simulated Annealing.
E.2 Rule set 2 for Simulated Annealing.
F.1 Rule set 1 for Adaptive Rule Based Approach.
F.2 Rule set 2 for Adaptive Rule Based Approach.
F.3 Rule set 3 for Adaptive Rule Based Approach.
F.4 Rule set 4 for Adaptive Rule Based Approach.
I.1 Base station antenna parameters in simulator CAPESSO.
I.2 System parameters in simulator CAPESSO.
I.3 Channel parameters in simulator CAPESSO.
I.4 Mobile station parameters in simulator CAPESSO.
I.5 User parameters in simulator CAPESSO.

List of Figures

1.1 Spectrum allocation for Europe, China, Japan, Korea and North America [100].
1.2 ITU-R IMT-2000 grouping [54].
1.3 UMTS network environment [48].
2.1 Channel structure for W-CDMA [97].
2.2 Frame structure for the downlink DPCH.
2.3 Frame structure for the uplink DPCH.
2.4 Transmit path for DPCH in the downlink direction.
2.5 Transmit path for DPCH in the uplink direction.
2.6 Code tree for OVSF (orthogonal variable spreading factor) codes.
2.7 Relation between spreading and scrambling.
2.8 Transport-channel to physical-channel mapping.
2.9 Synchronization Channel (SCH) structure.
2.10 Common Pilot Channel (CPICH) structure.
2.11 Code groups.
2.12 Outer and inner TPC loop. The green-colored blocks reside in the physical (PHY) layer and the yellow-colored blocks reside in the radio resource control (RRC) layer (source: [74]).
2.13 Softer handover.
2.14 Soft handover.
2.15 Types of services (source: [47]).
2.16 Packet transmission over the UMTS air interface (source: [100]).
2.17 3GPP traffic classes classification.
3.1 Local Search algorithm.
3.2 Simulated Annealing algorithm.
3.3 Tabu Search algorithm.
3.4 The structure of an Evolutionary Algorithm.
3.5 Genetic Algorithm.
3.6 Canonical Genetic Algorithm.
3.7 1-point crossover.
3.8 Mutation.
5.1 Adjustment of base station azimuth.
5.2 Horizontal pattern of base station antenna (in dB).
5.3 Adjustment of base station downtilt.
5.4 Vertical pattern of base station antenna (in dB).
5.5 Capacity for different CPICH power and antenna tilt settings. (Capacity is measured as served users.)
6.1 The three tradeoffs of UMTS optimization (source: [75]).
7.1 Schematic principle of a static Monte Carlo simulator.
7.2 Interface between network simulator and optimization algorithm.
7.3 Base station location and one user distribution of the big network scenario.
7.4 Base station location and one user distribution of the small network scenario.
8.1 Structure of local optimization process.
8.2 Rule Based Optimization Algorithm.
8.3 Extension of Rule Based Optimization algorithm to Simulated Annealing.
8.4 Why adjust CPICH power and antenna tilt together?
8.5 Detailed flowchart of Adaptive Rule Based Approach.
8.6 Representation of individuals for capacity optimization.
8.7 Flowchart of Genetic Algorithm approach.
8.8 Recombination operator.
8.9 Mutation operator.
8.10 Detailed flowchart of local optimization.
8.11 Distributed Computing.
8.12 Distributed Client Scheduler.
8.13 Client.
8.14 Best and worst case of antenna directions in a regular hexagonal grid. Source: [80].
8.15 Worst case and best case of base station azimuth in a regular grid, pathgain in dB.
8.16 Check if bs_reference(i) looks to bs_to_interleave.
8.17 Window 1 contains the most angles.
8.18 Elevation angle of the cell area.
8.19 Function for adjustment of the antenna downtilt according to the mean elevation angle in CPICH coverage verification mode 1.
8.20 Function for adjustment of the antenna downtilt according to the mean elevation angle in CPICH coverage verification mode 2.
8.21 Transmit power of one cell in the optimization area of the big network scenario.
9.1 Results for the Rule Based Approach on different snapshots.
9.2 Results for the Rule Based Approach on different snapshots after optimization for snapshot 1.
9.3 Results for the Rule Based Approach on different snapshots after optimization for snapshot 11.
9.4 Results for the Rule Based Approach on different snapshots after optimization for snapshot 18.
9.5 Results for the Rule Based Approach on different snapshots after optimization for snapshot 6.
9.6 Results for the Rule Based Approach on different snapshots after optimization for snapshot 14.
9.7 3D matrix of parameter range limits for the Rule Based Approach.
9.8 Block diagram for the simulation of the four rule sets over 50 snapshots.
9.9 Comparison of cdf curves for the four rule sets of the Adaptive Rule Based Approach (50 snapshots).
9.10 Results for the Genetic Algorithm on the small network scenario with CPICH verification mode 1.
9.11 Optimization run for the best result on the small network scenario with CPICH verification mode 1.
9.12 Results for the Genetic Algorithm on the big network scenario with CPICH verification mode 1.
9.13 Optimization run for the best result on the big network scenario with CPICH verification mode 1.
9.14 Mean results over 100 snapshots for the Genetic Algorithm on the small network scenario with CPICH verification mode 2.
9.15 Optimization run for the best result on the small network scenario with CPICH verification mode 2.
9.16 Mean results over 100 snapshots for the Genetic Algorithm on the big network scenario with CPICH verification mode 2.
9.17 Optimization run for the best result on the big network scenario with CPICH verification mode 2.
9.18 Results for the Analytic Optimization Algorithm with CPICH verification mode 1 (40 snapshots).
9.19 Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots) with CPICH Ec/I0 threshold -12 dB.
9.20 Comparison of the different algorithms on the big network scenario with CPICH verification mode 1.
A.1 Overview and basic entities of the UMTS network structure.
B.1 Channel shape (power delay profile) with multiple clusters. Source: [7].
B.2 Reduced complexity channel model parameters. Source: [7].
C.1 The principle of maximum ratio combining within the CDMA RAKE receiver. Source: [54].
C.2 Block diagram of a W-CDMA RAKE receiver. Source: [54].
C.3 Schematic block diagram of matched-filter.
H.1 Automatic azimuth adjustment routine for turning base stations to critical spots.
H.2 Automatic azimuth adjustment routine for interleaving base stations.
H.3 Automatic tilt adjustment routine.
H.4 Automatic CPICH adjustment routine for CPICH coverage verification mode 2, function: set CPICH coverage in total scenario.
H.5 Automatic CPICH adjustment routine for CPICH coverage verification mode 2, function: set CPICH coverage in optimization area.

Chapter 1
Introduction
Third generation (3G) mobile communication systems in Europe are known as
the Universal Mobile Telecommunication System (UMTS). UMTS is expected to
play a key role in creating the future mass market for high-quality multimedia
communications that will approach 2 billion users worldwide by the year 2010
[102]. Enabling anytime, anywhere connectivity to the Internet is just one of
the opportunities for UMTS networks. UMTS will bring more than just mobility
to the Internet. The major market opportunity will build on new possibilities
for mobile users such as multimedia messaging, location-based services, personalized information, and entertainment experiences. Due to the number of new applications and services, a significant redistribution of operators' revenue will take place within the next few years. Nobody knows the killer application per se, but all market studies are in complete agreement on one point: packet data will increasingly dominate the traffic flows. Predictions from the heyday of the UMTS euphoria state that by 2007 more data than voice will flow over mobile networks [102, 35]. This is an amazing statistic considering that mobile cellular networks today carry almost exclusively voice traffic.
In 2nd generation systems, coverage planning and frequency planning, both involving optimization, are the most important, and also sufficient, tasks for operating the network. Coverage prediction and capacity estimation are largely separable. In UMTS networks, where all users operate on the same frequency carrier,
the number of simultaneous connections directly influences the system capacity. Multiple services like speech, Internet and high data rate interactive services
will co-exist. Since higher bit-rate services will require higher capacity, the base
station density will have to be increased.
In this thesis I focus on the problem of capacity optimization for UMTS FDD
networks. The goal is to improve the capacity of the network only by changing
the base station parameters and not by using more sites.

1.1 Background and Standardization of W-CDMA

In June 1987 the RACE I project (Research of Advanced Communication Technologies in Europe) was initiated by the European Union. This was the official
start of the research and development activities towards a third generation mobile communication system in Europe. After RACE I, several European R&D programs, e.g. RACE II and ACTS (Advanced Communications Technologies and Services) [10], followed in order to support the development of 3rd generation mobile communication systems. Within ACTS, the FRAMES project (Future Radio
Wideband Multiple Access System) was initiated with the objective of defining
a proposal for a UMTS radio access system.
From a global point of view, the work for the development of 3rd generation
mobile systems started in 1992, when the WARC (World Administrative Radio
Conference) of ITU (International Telecommunications Union) identified the frequencies around 2 GHz for use by future third generation mobile systems. The
ITU calls these IMT-2000 (International Mobile Telecommunications 2000; "2000" can be interpreted either as referring to the year 2000 or to the frequency band around 2000 MHz). The frequency bands and the geographical areas where these different bands are defined are shown in Figure 1.1. For IMT-2000, altogether 230 MHz in two frequency bands, 1885-2025 MHz and 2110-2200 MHz, were reserved.
The proposals for the UMTS Terrestrial Radio Access (UTRA) air interface received by the milestone date were grouped into five concept groups in ETSI in June
1997, after their submission and presentation during 1996 and early 1997. The
following groups were formed:
- Alpha concept: Wideband CDMA (W-CDMA) [37]
- Beta concept: OFDMA [38]
- Gamma concept: Wideband TDMA (W-TDMA) [39]
- Delta concept: Wideband TDMA/CDMA [40]
- Epsilon concept: ODMA [41]
ETSI decided between the technologies in January 1998 [42], selecting W-CDMA
as the standard for the UTRAN air interface on the paired frequency bands,
i.e. for FDD (Frequency Division Duplexing) operation, and TDMA/CDMA for
the operation within unpaired spectrum allocation, i.e. for TDD (Time Division
Duplexing) operation. These combined modes formed the basis for the ETSI
proposal to the ITU as a candidate IMT-2000 radio transmission technology. It

Figure 1.1: Spectrum allocation for Europe, China, Japan, Korea and North
America [100].

took 10 years from the initiation of the European research programs (RACE I+II,
ACTS) to reach a decision on the UTRA technology. The detailed standardization
of UTRA proceeded within ETSI until the work was handed over to the 3rd
Generation Partnership Project (3GPP). The technical work was transferred to
3GPP with the contribution of UTRA in early 1999.
Meanwhile the standardization bodies of Japan (TTC, ARIB), Korea (TTA),
China (CWTS) and USA (T1P1) were independently choosing their own 3G radio access technologies. Even though the original goal of the standardization
process was a single common global IMT-2000 radio interface, achieving a single worldwide standard proved extremely difficult from the beginning. In Europe
and Asia, including Japan and Korea, W-CDMA is to be used utilizing 5 MHz
blocks in the frequency bands at around 2 GHz. In North America that spectrum had already been auctioned for operators using second generation systems
in blocks of 1.25 MHz (see Figure 1.1). Hence, 3rd generation services have to
be implemented not only within the existing frequency bands, but also with a
different radio technology, since there were no contiguous 5 MHz frequency blocks
available in the US. Since it became evident that it would be very difficult to
achieve identical specifications, initiatives were started to create a single forum
for a common UTRA standardization. The 3rd Generation Partnership Project
(3GPP - http://www.3gpp.org) was set up in 1998 with this objective, including


TTC/ARIB for Japan, ETSI for Europe, TTA for Korea, T1P1 for the US and
CWTS for China as partners. The detailed technical work of this group started
in early 1999, with the aim of having the first version of the common specification, called Release-99, ready by the end of 1999. Within 3GPP, four different
technical specification groups (TSG) were set up as follows:
- Radio Access Network TSG
- Core Network TSG
- Service and System Aspects TSG
- Terminals TSG
Within these groups the one most relevant to the W-CDMA technology is the
Radio Access Network TSG (RAN TSG), which produced the Release-99 of the
UTRA air interface.
The development of the ITU recommendations for 3rd generation mobile communication systems proceeded as follows: In the first phase of the IMT-2000 candidate
submission process, the ITU received a number of different proposals. In the
second phase of the process, evaluation results were received from proponent organizations as well as from other evaluation groups. The ITU IMT-2000 process
was finalized at the end of 1999, when the detailed specification was created and
the radio interface specifications were approved by ITU-R [54, 95]. The members
of the ITU-R IMT-2000 family are shown in Figure 1.2. The TDMA subgroup consists of a TDMA single-carrier (UWC-136) and a TDMA Multi-Carrier (DECT) concept (Multi-Carrier here means FDMA). The CDMA interface consists of the Direct-Spread (UTRA FDD) and the Multi-Carrier (cdma2000) part. The TDD part of the CDMA concept consists of the 3GPP proposal, UTRA TDD (3.84 Mchip/s), and the Chinese narrowband version TD-SCDMA (1.2288 Mchip/s). Good overviews on the 3GPP proposals for IMT-2000 are given in [22, 29, 54, 84].

1.2 Objectives of the UMTS System

The UMTS Forum (www.umts-forum.org) is an open and independent panel, which was founded by a large number of companies and telecom authorities with the goal of advancing the development and implementation of the UMTS system.
Figure 1.2: ITU-R IMT-2000 grouping [54].

The UMTS Forum defines the UMTS system in the following way: "UMTS will be a mobile communication system that can offer significant user benefits including high-quality wireless multimedia services to a convergent network of fixed, cellular and satellite components. It will deliver information directly to users and provide them with access to new and innovative services and applications. It will offer mobile personalized communications to the mass market regardless of location, network or terminal used." The idea was to harmonize the present different systems into a single wireless infrastructure with a multiplicity of possible services
and applications. It should be possible to implement communication services from any person to any other person, at any time, at any place, without disturbing delay, at low cost, and with selectable quality and security. Some of the goals that were important during the definition phase of the UMTS system are presented in the following list:
- To create an integrated system, in which the user can use services in a simple and consistent manner.
- To offer the possibility to choose between services of different networks and operators.
- To deliver a broad spectrum of mobile telecommunication services. These services should also include data services with flexible data rates from 32 kbit/s up to 2 Mbit/s, which could also be used in fixed networks, and special services for mobile communication. The quality should be the same as in fixed networks.
- The services should be offered by mobile terminals (mobile phones, terminals fixed in cars, ...) and by fixed terminals in all environments (flats, offices, means of transportation, public buildings), i.e. from small picocells in buildings up to big macrocells with coverage by satellite. A possible environment is shown in Figure 1.3.
- Worldwide roaming.
- The cellular structure of the network should fulfill the conditions for a complete personal communication network.
- Open architecture, to facilitate an easy introduction of new technologies and services.
- Support of different types of terminals (e.g. mobile terminals, PDAs, notebooks, ...).

A small, cheap, light and easy-to-use terminal should be established on the mass market for standard applications. Beside these terminals, there will be a set of products which are suited to applications with higher requirements.

Figure 1.3: UMTS network environment [48].

1.3 Motivation for UMTS Optimization

The UMTS radio interface can carry voice and data services with various data
rates, traffic requirements and quality of service (QoS) targets. Furthermore, the
operating environments vary considerably from outdoor macro cells to indoor
micro cells. Careful configuration of the many network and cell parameters is required and crucial to the network operator, because these parameters determine the capability to provide services, influence the QoS, and account for a major portion of the total network deployment and maintenance costs.
Optimization is needed both in the planning stage, to optimize the network configuration and save investment, and after the deployment of the network, to satisfy growing service demand (see [73, 106]). However, there are numerous configurable parameters, which are multi-dimensional and interdependent, and their influence on the network is highly non-linear. Hence, finding the optimum network configuration is a very complex and time-consuming task. Automated optimization algorithms are needed to perform the optimization process quickly and efficiently, with minimal contribution to the operational expenditure.
Different optimization techniques are known that are suitable for the problem of UMTS base station parameter optimization. Each of these techniques has its advantages and disadvantages. During the work for this thesis, several automated optimization algorithms (local, global and analytic algorithms) were developed, which are presented and compared in the present work.
Base station parameters such as antenna azimuth, antenna tilt and common pilot channel (CPICH) power are the three most common optimization parameters that
have significant influence on network capacity. By optimizing these three key
parameters, the network capacity can be boosted without additional investment
needed.
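As a rough illustration of what such an automated procedure looks like, the following Python sketch tries one parameter change per cell and keeps it only if a network evaluation reports more served users. It is a hypothetical, minimal example: the function evaluate_network, the parameter names and the value ranges are placeholders, not the algorithms developed in this thesis (those are introduced in Chapter 8).

import copy

# Minimal sketch of an automated parameter-tuning loop (illustrative only).
# evaluate_network() stands in for a static network simulation that returns
# the number of served users for a given parameter setting.

def evaluate_network(settings):
    raise NotImplementedError   # placeholder for a network-simulator call

def greedy_parameter_search(settings):
    """Try one tilt/CPICH change per cell and keep it only if it helps."""
    settings = copy.deepcopy(settings)
    best = evaluate_network(settings)
    for cell in settings:
        for key, step, lo, hi in (("tilt_deg",  +1.0,  0.0, 12.0),
                                  ("cpich_dbm", -1.0, 20.0, 33.0)):
            old = cell[key]
            cell[key] = min(hi, max(lo, old + step))   # stay in allowed range
            score = evaluate_network(settings)
            if score > best:
                best = score            # keep the improvement
            else:
                cell[key] = old         # revert the change
    return settings, best

# example start configuration: one dictionary of parameters per cell
initial = [{"tilt_deg": 4.0, "cpich_dbm": 30.0} for _ in range(3)]
# greedy_parameter_search(initial) would require a real simulator backend.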
The work on this thesis led to four papers at international conferences and
one submission to an Asian journal. Some of the results obtained during the
course of this thesis have also been discussed at working group 3 of COST 273
(www.lx.it.pt/cost273).

1.4 Structure of the Thesis

The thesis is organized as follows:


- Chapter 2 gives a short overview of the UMTS FDD system. Beside the theoretical basics of W-CDMA, transport channels, physical channels and the physical layer procedures, a short introduction to possible services and applications is presented.
- Chapter 3 gives an overview of common and well-known optimization techniques. This chapter discusses local as well as global optimization algorithms.
- Chapter 4 presents the UMTS radio network coverage- and capacity-limiting factors. Possible solutions are given for each of the limiting factors.
- Chapter 5 describes in more detail the parameters that are used in this thesis for optimizing the capacity of a UMTS FDD network. Further, the influence of these parameters on the capacity of the network is explained.
- Chapter 6 introduces the fitness functions used for the optimization as well as the performance indicators that are taken into account during the optimization process.
- Chapter 7 describes the simulation environment. The main characteristics of the used simulator are specified, as well as the simulation scenarios and the user distribution in the scenarios.
- Chapter 8 presents all the developed optimization algorithms. Altogether five different strategies are introduced. First, the optimization with local algorithms is studied. Further, a global optimization approach and an analytic algorithm are presented in this chapter.
- Chapter 9 presents results achieved with the several algorithms.
- Finally, Chapter 10 summarizes and concludes the thesis.

Chapter 2
Radio System Aspects
2.1 Introduction

UMTS Terrestrial Radio Access (UTRA) denotes the air interface of the UMTS
system. The basis for UTRA is spread spectrum technology. Very good and
elaborate descriptions of this technique are given in [54, 97]. Here, an artificial spectral spreading increases the transmitted signal's resistance against interference. With user-dependent spreading codes, this code multiplexing technique can be used as a multiple access scheme. This technology is called Code Division Multiple Access (CDMA). CDMA has been claimed to deliver high capacity [105], i.e., a large number of users can be served. The frequency reuse factor is one, therefore all cells can use the whole frequency band. Furthermore, CDMA offers the possibility to combat interference efficiently and to exploit multipath propagation by the use of simple RAKE receivers. Details of the RAKE receiver are explained
in Appendix C.
The standardized UTRA solution from ETSI includes two different concepts.
These two are the UTRA FDD mode with the use of W-CDMA (Wideband
CDMA) and the UTRA TDD mode with TD-CDMA (Time Division/Code Division Multiple Access). For the FDD mode, two 60 MHz bands (paired bands,
1920-1980 MHz for UL and 2110-2170 MHz for DL) are scheduled, whereas 20 MHz and 15 MHz (unpaired bands, 1900-1920 MHz and 2010-2025 MHz) are reserved
for the TDD mode. The planned application areas for the FDD mode are public
macro and micro cells with data rates up to 384 kbit/s, whereas the TDD mode
should be dedicated for small public cells (micro and pico cells) as well as for
unlicensed wireless applications and wireless local loops (WLL) with data rates
up to 2 Mbit/s. In the following the UTRA FDD mode is explained in more
detail.

2.2 UTRA FDD System Description

A short overview of the technical details of the FDD mode is given in Table 2.1.

  Multiple access scheme                       DS-CDMA
  Duplex technique                             FDD
  Modulation scheme                            QPSK
  Chip rate                                    3.84 Mchip/s
  Pulse shaping                                Root raised cosine with a roll-off factor of 0.22
  Bearer spacing                               5 MHz
  Frame length                                 10 ms
  Synchronization between the base stations    not necessary

Table 2.1: Technical overview of FDD mode.


The most important characteristics of the W-CDMA technology, which are used
within the FDD mode, are presented in the following list:
- Flexible support of different services. W-CDMA offers the possibility to allocate the required data rates by a judicious choice of different spreading sequences and by adaptation of the transmit power. The management of the resources takes place in a decentralized way in each cell.
- Increased capacity and coverage due to the high bandwidth (which also provides interference resistance against the other users in the same cell) and fast power control.
- Support of adaptive antenna arrays, so-called smart antennas. The application of these antennas improves the interference resistance and therefore boosts the capacity.
- The base stations operate asynchronously, so the expense for the installation of the network is significantly reduced. It is also possible to use UMTS as a local network inside a building.

2.2.1 Channel and Frame Structure

As in the GSM system, the flow of information in the UMTS system is organized
in channels. Here, we distinguish between transport channels and physical channels.


The transport channels are mapped on the physical channels by multiplexing (see
Figure 2.1).

Figure 2.1: Channel structure for W-CDMA [97].

The transport channels are divided into common and dedicated transport channels, whereby the former are used by more than one mobile in the cell. The same distinction is made for the physical channels. In the case of the dedicated physical channels we can distinguish between the dedicated physical data channel (DPDCH) and the dedicated physical control channel (DPCCH). The DPCCH includes pilot bits, data for the transmit power control (TPC) and information about the transmitted data (TFI, transport format indicator). Figure 2.2 shows
the frame structure of the dedicated transport channel for the downlink. Each
frame has a length of 10 ms and is divided into 15 slots. The first part of a slot
includes the DPCCH, followed by the DPDCH in the second part. Seven different bit rates are possible, from 12.2 kbit/s up to 2 Mbit/s. Since the chip rate is constant (3.84 Mchip/s), different spreading factors are necessary. To increase the data rate beyond 1 Mbit/s, several DPDCHs, but only one DPCCH, are assigned to one mobile. The frame structure of the DPCH for the uplink
is shown in Figure 2.3. In contrast to downlink, data and control information are
transmitted in parallel, separated by the I- and Q-branch (see Section 2.2.2). A
description of the individual transport and physical channels and their mapping
is given in Section 2.3.
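As a rough numerical illustration of why different spreading factors are needed (channel coding and control overhead are neglected here, so the numbers only indicate orders of magnitude): the spreading factor SF follows from the fixed chip rate R_c and the symbol rate R_s, and with it the processing gain G_p relative to the user bit rate R_b,

\[
  SF = \frac{R_c}{R_s}, \qquad
  G_p \approx 10 \log_{10}\!\left(\frac{R_c}{R_b}\right)\ \mathrm{dB}.
\]

For R_c = 3.84 Mchip/s and a 12.2 kbit/s voice service this gives G_p ≈ 25 dB, whereas a 2 Mbit/s service retains only about 2.8 dB of processing gain, which is why high data rates consume far more of a cell's resources.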


Figure 2.2: Frame structure for the downlink DPCH.

Figure 2.3: Frame structure for the uplink DPCH.

2.2.2 Modulation, Spreading and Scrambling

Figures 2.4 and 2.5 show the arrangement of modulator, spreading, scrambling,
pulse shaping and conversion to IF and RF for uplink and downlink.

Figure 2.4: Transmit path for DPCH in the downlink direction.

Figure 2.5: Transmit path for DPCH in the uplink direction.

The spreading of the spectrum is accomplished by the channelization code c_ch. The channelization codes are the same for up- and downlink. Their task is to orthogonalize the different data streams of the transmitter and to increase the bandwidth. Due to the different data rates and the fixed chip rate of 3.84 Mchip/s there are various codes with different lengths. These codes have to be orthogonal, because in the downlink the separation of the users is done by the channelization codes. Therefore, so-called OVSF (orthogonal variable spreading factor) codes are used. The codes are chosen from the code tree shown in Figure 2.6. The codes of all branches of one layer of the tree are orthogonal to each other. Furthermore, codes of different lengths are orthogonal to each other if the branch of the longer code does not originate from the branch of the shorter code.
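The OVSF construction can be written down compactly. The short Python sketch below is only an illustration (it is not part of the thesis' tool chain): it builds one layer of the code tree of Figure 2.6 and verifies that all codes of that layer are mutually orthogonal.

import numpy as np

def ovsf_codes(sf):
    """Build the layer of the OVSF code tree with spreading factor sf.
    Each code c of length n spawns two children of length 2n,
    (c, c) and (c, -c), as in Figure 2.6; chips are written as +1/-1."""
    codes = [[1]]
    while len(codes[0]) < sf:
        codes = [child
                 for c in codes
                 for child in (c + c, c + [-x for x in c])]
    return np.array(codes)

for sf in (4, 8, 16):
    layer = ovsf_codes(sf)
    # codes of one layer are mutually orthogonal: their Gram matrix is sf * I
    assert np.array_equal(layer @ layer.T, sf * np.eye(sf, dtype=int))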
The scrambling in the downlink takes place with a 38400-chip (10 ms) long segment of a Gold code of length 2^18 - 1. This code is repeated in each frame. Overall, 512 different scrambling codes are used. Scrambling introduces no additional spreading of the spectrum. Figure 2.7 shows the relation of the chip rate in the channel to spreading and scrambling in UTRA. In the downlink, for


Figure 2.6: Code tree for OVSF (orthogonal variable spreading factor) codes.

In the downlink only one scrambling code is assigned per cell (i.e. the same code is applied for each mobile in the cell). Therefore, the purpose of the complex-valued scrambling codes is to distinguish, in the mobile, signals originating from different base stations.
The scrambling in the uplink is done in order to distinguish the signals of different mobiles at the base stations. There are two possibilities for the scrambling operation: either a short code from a Kasami set [19] of length 256, or a long code consisting of a 38400 chip segment of a Gold code [19] of length 2^41 - 1. The short code option was introduced to facilitate the joint detection technique [19] with feasible effort. In contrast to the downlink, the separation of the users in the uplink is guaranteed by the scrambling code and not by the spreading code. This means that a mobile, when choosing its spreading code, does not have to take into account the spreading codes of the other users in the cell.

Figure 2.7: Relation between spreading and scrambling.

2.3 Transport Channels and Physical Channels

In UTRA FDD the data generated at higher layers is carried over the air interface
with transport channels, which are mapped in the physical layer to different
physical channels. The basic organization of this concept for the information
flow is described in Section 2.2.1 of this thesis. The various tasks (e.g. cell search, paging, ...) that the UMTS network has to fulfill are handled by different transport channels. These transport channels are the services
that the physical layer offers to higher layers. Figure 2.8 illustrates the mapping of
transport channels to physical channels. There are also physical channels that are
not transparent to higher layers and have no corresponding transport channels.
The synchronization channel (SCH), the common pilot channel (CPICH) and the
acquisition indication channel (AICH) are not directly visible to higher layers,
but are essential from the system function point of view. The dedicated channel
(DCH) is mapped onto two physical channels, the dedicated physical data channel
(DPDCH) and the dedicated physical control channel (DPCCH).
In the following a brief description of the different channels is given.

2.3.1 Synchronization Channel

The synchronization channel (SCH) is a pure physical channel used for the cell
search procedure by the mobile. Hence, the SCH has to be transmitted into
the entire cell. The SCH consists of two sub-channels transmitted in parallel,
the primary and secondary SCH. The 10 ms radio frames of the primary and
secondary SCH are divided into 15 slots of length 2560 chips each. The SCH is
only transmitted during the first 256 chips of each slot. Figure 2.9 shows the
structure of the SCH.

2.3.1.1 Primary SCH

The primary SCH consists of a modulated sequence of length 256 chips, the primary synchronization code, denoted c_p in Figure 2.9. The sequence c_p is transmitted once every slot and is the same for every cell in the system, so that the mobiles can detect it easily with a matched filter.

2.3.1.2 Secondary SCH

The secondary SCH consists of a repeatedly transmitted sequence of modulated codes with a length of 256 chips. The secondary synchronization codes, which are


Figure 2.8: Transport-channel to physical-channel mapping.

Figure 2.9: Synchronization Channel (SCH) structure.


transmitted in parallel to c_p, are denoted c_s^{i,k} in Figure 2.9, where i = 0, 1, ..., 63 is the number of the scrambling code group, and k = 0, 1, ..., 14 is the slot number. Each secondary synchronization code is taken from a set of 16 different codes. The sequence of secondary synchronization codes transmitted on the secondary SCH indicates which of the 64 possible scrambling code groups is used in the cell. As the sequences of 15 secondary synchronization codes are rotationally variant, the secondary SCH also determines the beginning of the radio frame. Figure 2.11 in Section 2.4.1 shows the 64 code groups.

2.3.2 Common Pilot Channel

The common pilot channel (CPICH) is a fixed rate downlink physical channel (15 kbit/s) that carries a continuous pre-defined bit/symbol sequence. The
spreading factor of the CPICH is 256. Figure 2.10 shows the frame structure of
the CPICH. The function of the CPICH is to aid the channel estimation at the
mobile for the DCH (see Section 2.3.7) and to provide the channel estimation
reference for the common channels. The CPICH does not carry any higher layer information, nor is there any transport channel mapped to it.

Figure 2.10: Common Pilot Channel (CPICH) structure.

There are two types of CPICHs, the primary and secondary CPICH (P-CPICH
and S-CPICH). They differ in their use and the limitations placed on their physical features [3]. In the following the P-CPICH and S-CPICH are explained in
more detail.

2.3.2.1 Primary Common Pilot Channel

The primary common pilot channel (P-CPICH) is always spread with the channelization code C_ch,256,0 and scrambled with the scrambling code of the cell. Hence the


P-CPICH can be used in the mobile to determine the scrambling code used for
scrambling the downlink channels of the cell (see Section 2.4.1). There is always
only one P-CPICH per cell, which is broadcast over the entire cell. The P-CPICH
is the phase reference for the SCH, P-CCPCH, AICH and PICH, and the default
phase reference for all other downlink physical channels [3].
The P-CPICH is the reference for handover and cell selection/reselection measurements. Since the mobile uses the CPICH reception level for handover measurements, adjusting the CPICH power allows the cell load to be balanced between adjacent cells. Reducing the CPICH power in one cell causes mobiles at the cell boundary to hand over to the neighboring cells, while increasing it invites more mobiles to hand over to the cell [54].

2.3.2.2 Secondary Common Pilot Channel

The secondary common pilot channel (S-CPICH) can be spread with any channelization code of length 256 and can be scrambled with either the primary or a secondary scrambling code. There may be zero, one, or several S-CPICHs per cell. An S-CPICH may be transmitted over the entire cell or only into a part of the cell. The S-CPICH can be the phase reference for secondary common control physical channels (S-CCPCH) and dedicated physical channels (DPCH) [3].

2.3.3 Broadcast Channel

The broadcast channel (BCH) is a transport channel that is used to broadcast network- or cell-specific information for a given cell. The most typical data needed
in every network are the available random access codes and access slots in the
cell, or the types of transmit diversity methods used with other channels for that
cell. Physically the BCH is transported by the primary common control physical
channel (P-CCPCH), which is a downlink data channel only. It is crucial that
all mobiles in the cell can decode the P-CCPCH. Therefore, the P-CCPCH is
transmitted with high power and fixed data rate (spreading factor SF = 256) in
order to reach all the users within the intended coverage area.

2.3.4 Random Access Channel

The random access channel (RACH) is an uplink transport channel, which carries control information from the terminal, such as a request to set up an RRC
connection. It can further be used to send small amounts of uplink packet data to


the network [54]. The random access channel must be heard from the whole desired cell coverage area, especially for the initial system access and other control
procedures.
The RACH is mapped on the physical random access channel (PRACH). The
PRACH has specific preambles, which are sent prior to data transmission. These
use a spreading factor of 256 and contain a signature sequence of 16 symbols.
Once the preamble has been detected by the base station and acknowledged with
the acquisition indicator channel (AICH) the 10 or 20 ms long message part is
transmitted with a spreading factor from 256 down to 32. If the sent preamble
is not acknowledged within a certain time, the preamble is retransmitted with
increased power until the reception of the preamble is acknowledged by the base
station. Beside this power adjustment prior to transmission, there is no power
control for the PRACH.

2.3.5 Forward Access Channel

The forward access channel (FACH) is a downlink transport channel that carries
control data to terminals in the given cell, for example, after a random access message has been received by the base station. It is also possible to transmit packet
data on the FACH. There can be more than one FACH per cell with different
data rates, but there must be at least one FACH with such a low data rate that
all terminals in the cell can decode it. The FACH does not use fast power control,
and the messages transmitted need to include in-band identification information.
The FACH is mapped on the secondary common control physical channel (S-CCPCH). If there exists more than one FACH in the cell, then all of them are
multiplexed on one S-CCPCH. The S-CCPCH may use different offsets between
the control and data fields at different symbol rates and may support slow power
control. More details and a table of all possible slot formats are given in [3].

2.3.6 Paging Channel

The paging channel (PCH) is a downlink transport channel that carries data
relevant to the paging procedure, that is, when the network wants to initiate
communication with the terminal [54]. An example is a speech call: the network
transmits the paging message to the terminal on the paging channel of those cells
belonging to the location area that the terminal is expected to be in. The signal
has to be heard in the entire cell area and can be transmitted in up to a few hundred cells, depending on the system configuration. The PCH is transported
like the FACH by the S-CCPCH.


The configuration of the paging channel affects the terminal's power consumption in standby mode. The less often the mobile has to tune in its receiver to listen for a possible paging message, the longer the terminal's battery will last in standby mode. Therefore, all mobiles are assigned to a paging group. The presence of a
paging message in the S-CCPCH is signaled by the network on a separate physical
channel, the paging indication channel (PICH). So, a terminal in stand-by mode
only has to check the PICH and does not need to decode the S-CCPCH all the
time.

2.3.7 Dedicated Channel

The dedicated channel (DCH) is the only dedicated transport channel specified in
3GPP. The DCH carries all higher layer information intended for a given user including data for the actual services (speech frames, data,...) as well as higher layer
control information (handover commands, measurement control commands,...).
In the physical layer the DCH is mapped on the dedicated physical channel
(DPCH). The DPCH uses closed-loop power control and fast data rate adaptation on a frame-by-frame basis. It can be transmitted to only a part of the cell and supports soft/softer handover. The DPCH consists of two sub-channels, the dedicated
physical data channel (DPDCH) that carries the actual user data and the dedicated physical control channel (DPCCH) that carries physical layer information.
As the modulation scheme is different in up- and downlink, the structure of the
DPCH is also different for the two directions. The downlink and uplink slot
structure of the DPCH are shown in Figure 2.2 and Figure 2.3 in Section 2.2.1.

2.3.8 Downlink Shared Channel

The downlink shared channel (DSCH) is a transport channel intended to carry dedicated user data and/or control information in the downlink. The DSCH can
be shared by several users [54]. As a pure data channel, the DSCH is always
associated with a downlink DCH for power control and signaling purposes. In
the physical layer the DSCH is mapped on the physical downlink shared channel
(PDSCH). The PDSCH uses fast power control and the data rate can be changed
on a frame-by-frame basis.

2.3.9 Uplink Common Packet Channel

The uplink common packet channel (CPCH) is an extension to the RACH that
is intended to carry packet-based user data in the uplink direction. In contrast


to the RACH the CPCH uses fast power control and collision detection. In the
physical layer the CPCH is transported by the physical common packet channel
(PCPCH). The uplink CPCH transmission may last several frames, in contrast to one or two frames for the RACH message. A detailed description of the
CPCH can be found in [54].

2.3.10 Necessary Common Channels in a UMTS Network

For basic network operation the following channels are required: the SCH, the P-CPICH, the P-CCPCH carrying the BCH, an S-CCPCH carrying the FACH and the PCH, the PICH, and a PRACH together with the AICH for random access. The use of the CPCH and DSCH is optional for the network, which makes all physical channels needed for signaling and transport of the CPCH and DSCH optional as well.

2.4 Physical Layer Procedures

In the physical layer of the UMTS system there are many procedures essential for
system operation. Power control, cell search and handover are briefly described
in this section. An exhaustive description can be found in [54, 68].

2.4.1 Cell Search

The cell search procedure, or synchronization procedure, in the UTRA FDD mode of the UMTS system differs greatly from the procedure in a synchronous system like cdma2000. Since the cells use 512 different scrambling codes, and not just different phase shifts of one code, terminals cannot simply search all the 10 ms long codes without any prior knowledge.
The cell search procedure using the synchronization channel has basically three
steps:
Step 1: Slot synchronization
After power-on the mobile searches, with a matched filter, for the 256-chip primary SCH code that is equal in all cells. As the primary SCH code is the same in every slot, the detected correlation peak corresponds to the slot boundary.
Step 2: Frame synchronization and code-group identification
In the second step the mobile detects the secondary SCH in order to determine the boundaries of the frame structure and which of the 64 possible code groups is used for the downlink scrambling. In the example in Figure 2.11 the cell uses the code group with the number 4.

Figure 2.11: Code groups.
Step 3: Scrambling-code identification
During the third and last step of the cell search procedure, the mobile
determines the exact primary scrambling code used by the found cell. This
is done by trying to detect the P-CPICH that is scrambled with one of
the eight primary scrambling codes of the code group. After the primary
scrambling code has been identified, the primary CCPCH can be detected
and the system- and cell specific BCH information can be read.
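The first of these steps (slot synchronization) can be illustrated by a small matched-filter sketch in Python. The primary synchronization code and the received chip stream below are random placeholders, not the actual 3GPP sequences; the point is only that the correlation peak reveals the slot boundary.

    import numpy as np

    rng = np.random.default_rng(0)
    psc = rng.choice([-1, 1], size=256)            # placeholder for the 256-chip primary SCH code c_p

    # received chips: one slot (2560 chips) plus margin, PSC hidden at an unknown offset, plus noise
    offset = 1700
    rx = rng.normal(scale=0.5, size=2560 + 256)
    rx[offset:offset + 256] += psc

    corr = np.correlate(rx, psc, mode="valid")     # matched filter against the known PSC
    slot_boundary = int(np.argmax(np.abs(corr)))   # peak position estimates the slot boundary
    print(slot_boundary)                           # close to 1700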

2.4.2 Power Control

Stringent uplink and downlink transmit power control (TPC) is required to combat the near-far problem. The near-far problem in the uplink is created by interfering users located near the base stations, which corrupt the signals from more
distant mobiles. The uplink TPC must be fast enough to track rapid channel
variations (e.g. caused by small-scale fading). The same properties are desired
for the downlink TPC, even though the near-far problem is less important there. In UTRA FDD mode the power control is organized in two control loops, the outer loop and the inner loop. The outer loop is handled in the RNC and controls the QoS
(e.g. BER or BLER) by adjusting the target signal to interference ratio (SIR)
of the link according to the service. The SIR target is adjusted at a slow rate
(typically 10-100 Hz) and signaled via higher layers. The required SIR, BER and
BLER measurements are standardized in [4].
The inner loop of the power control, also called fast power control or closed loop power control, adjusts the transmit power after every slot (666.7 µs, i.e. 10 ms / 15) according to the received SIR of the previous slot, resulting in a 1500 Hz command rate. If the received SIR is below the target SIR, then a power-up command is fed back


to the transmitter using the power control bits that are reserved for this purpose
in every slot. If the received SIR is equal to or above the target SIR, then a power-down command is sent to the transmitter. The transmitter adjusts the transmit power according to the received power control command in steps of 1 dB. Additionally, multiples of that step size can be used. The specifications define the relative accuracy of a 1 dB power control step to be ±0.5 dB. Figure 2.12
shows the interaction between the outer and inner loop.

Figure 2.12: Outer and inner TPC loop. The green-colored blocks reside in the
physical (PHY) layer and the yellow-colored blocks reside in the radio resource
control (RRC) layer (source: [74]).
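The behavior of the inner loop can be illustrated with a deliberately simplified simulation (my own sketch, not the thesis simulator): one TPC command per slot, a fixed 1 dB step, error-free command reception and a static channel.

    def inner_loop(sir_target_db, slots=15, tx_power_dbm=21.0,
                   pathgain_db=-120.0, interference_dbm=-100.0, step_db=1.0):
        history = []
        for _ in range(slots):
            sir_db = tx_power_dbm + pathgain_db - interference_dbm   # received SIR of this slot
            cmd = +1 if sir_db < sir_target_db else -1               # power-up or power-down command
            tx_power_dbm += cmd * step_db                            # transmitter applies one step
            history.append((sir_db, tx_power_dbm))
        return history

    for sir, p in inner_loop(sir_target_db=6.0):
        print("SIR = %5.1f dB -> new Tx power = %5.1f dBm" % (sir, p))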

In UTRA FDD there is also an open loop power control, which is applied when initiating a transmission on the RACH or CPCH. Here the transmitter estimates the pathloss in the downlink from the received P-CPICH power (the P-CPICH transmit power is known at the receiver) prior to transmission. The mobile then sets its transmit power, using the uplink noise level signaled on higher layers, in such a way that the target SIR is reached at the base station receiver. Due to measurement inaccuracies the open loop power control is very inaccurate. In normal conditions the tolerance for the open loop power control is ±9 dB [6].
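The open-loop estimate can be condensed into one formula: the mobile adds the estimated pathloss to the signaled uplink interference level and the target SIR. The sketch below uses my own parameter names and example numbers purely for illustration.

    def open_loop_tx_power(cpich_tx_dbm, cpich_rx_dbm, ul_interference_dbm, sir_target_db):
        pathloss_db = cpich_tx_dbm - cpich_rx_dbm            # downlink pathloss estimate from the CPICH
        return ul_interference_dbm + sir_target_db + pathloss_db

    # example: 33 dBm CPICH, -85 dBm received, -103 dBm uplink interference, -20 dB target
    print(open_loop_tx_power(33.0, -85.0, -103.0, -20.0))    # -> -5.0 dBm initial preamble power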

2.4.3 Handover

Handover is one of the most important mechanisms in wireless networks, since it maintains seamless communication when a mobile moves from one site to another. Within the UTRA FDD mode the possible handovers are as follows:

24

CHAPTER 2. RADIO SYSTEM ASPECTS


Softer handover
Soft handover
Hard handover

During softer handover, a mobile is simultaneously connected to two adjacent cells of a site (base station). The communication between the mobile and the base station takes place concurrently via two air interface channels, one for each cell separately. This requires the use of two separate codes in the downlink. In the uplink a similar process takes place at the base station: the signal of the mobile is received in each cell and then combined by maximum ratio combining (Footnote 1). Softer handover typically occurs in about 5 - 15 % of the connections [54]. Figure 2.13 shows a softer handover scenario.

Figure 2.13: Softer handover.

During soft handover, a mobile is connected to two cells belonging to different base stations. As in softer handover, the communication between the mobile and the base stations takes place concurrently via two air interface channels, one from each base station separately. As in softer handover, both signals are combined at the mobile by maximum ratio combining in the Rake receiver. Seen from the mobile, there are very few differences between softer and soft handover. However, in the uplink soft handover differs significantly from softer handover: the signals of the mobile are received by both base stations, but the received data is then routed to the RNC for combining. Soft handover occurs in about 20 - 40 % of the connections [54]. Figure 2.14 shows a soft handover scenario.
In addition to soft/softer handover, the UTRA FDD mode provides two other
handover types (hard handover, Footnote 2):
Inter-frequency hard handover: Occurs, for example, when a mobile hands over from one W-CDMA frequency carrier to another. One application for this are high-capacity base stations, so-called hot-spot cells (Footnote 3), with several carriers. Another application is the handover from a macro-cell to a micro-cell, which uses a different frequency.
Inter-system hard handover: Takes place between the UTRA FDD mode and the UTRA TDD mode, or between the UTRA FDD mode and the GSM system.

Figure 2.14: Soft handover.

Footnote 1: Maximum ratio combining (MRC) is one of the most common linear combining techniques in receive diversity systems. In MRC, the combiner weights are chosen to maximize the output signal to noise ratio (SNR). Details on MRC are well described in [59, 90].
Footnote 2: A hard handover results in the radio connection between the network and the mobile being broken before a new radio connection is established with the network in the target cell. Hard handovers usually require a change of frequency.
The support of seamless inter-frequency hard handover is a key feature of W-CDMA, not previously implemented in cellular CDMA systems. Hard handover is
necessary for the support of a hierarchical cell structure (HCS): a cellular system
can provide very high capacity through micro-cells, offering at the same time
full coverage via the macro cells. Therefore, hard handover is a very important
feature to perform handover between the different cells. A second scenario, where
hard handover is necessary, is the hot-spot one. In this case, a certain cell that
serves a high traffic area uses carriers in addition to those used by the neighboring
cells. If the deployment of extra carriers is to be limited to the actual hot-spot
area, the possibility of hard handover is essential.

2.5 Services and Applications

In second generation mobile communication systems like GSM there is basically only one major service, speech (circuit switched), so a network cannot offer many services and applications to a demanding user. UMTS will be able to supply a wide range of services with different bit rates and flexible traffic asymmetry.
Footnote 3: Hot-spot cells can have a larger number of carriers than the surrounding ones; therefore, a handover mechanism between different frequencies, i.e. hard handover, is necessary.


In this section two classifications of services from different points of view are
presented. The first classification is based on market forecasts by the UMTS
Forum. The second one is based on QoS requests. Further, the importance of
traffic forecasts for network optimization is explained.

2.5.1 UMTS Forum Traffic Classes

Based on market forecasts performed by the UMTS Forum (see Section 1.2), the services for UMTS can be divided into several classes. Figure 2.15 shows this classification.
Speech (S) is a symmetric service with the same amount of information in the UL as in the DL and with an activity factor of 0.5. This implies that the system should be able to handle the discontinuous transmission mode. The simple messaging service (SM) is the evolution of the GSM short message service (SMS). The typical size of a message is about 40 KByte, and an acceptable delay for this service is about 30 s. The switched data (SD) service is a 14.4 kbit/s CS service type similar to the existing data service in GSM. Services like downloads from the WWW belong to the multimedia service class (MM). The typical amount of data that needs to be transmitted for a medium MM service is about 0.5 MByte during 14 s, while for a high MM service a data size of 10 MByte and a call duration of 53 s is typical. While these MM services are asymmetrical, the interactive MM service is based on a 128 kbit/s symmetrical connection.
Services | User nominal bit rate [kbit/s] | Effective call duration [s] | User net bit rate [kbit/s] | Coding factor | Asymmetry factor | Switch mode | Service bandwidth [kbit/s]
HIMM | 128 | 144 | 128 | - | 1/1 | CS | 256/256
HMM | 2000 | 53 | 1509 | - | 0.005/1 | PS | 15/3200
MMM | 384 | 14 | 286 | - | 0.026/1 | PS | 15/3200
SD | 14 | 156 | 14.4 | - | 1/1 | CS | 43/43
SM | 14 | 30 | 10.67 | - | 1/1 | PS | 22/22
S | 16 | 60 | 16 | 1.75 | 1/1 | CS | 28/28

Table 2.2: Service Characteristics (source: [101]).


Table 2.2 shows UMTS service characteristics, where some major service parameters for a possible traffic and capacity estimation are presented. These parameters
can be explained as follows [47]:
User nominal bit rate corresponds to the output bit rate from the source
without error protection.

Figure 2.15: Types of services (source: [47]).


Effective call duration of a service corresponds to how long, on average, the service is connected. It is the average call duration multiplied by the activity factor (Footnote 4, see Table 2.3); an activity factor below one implies that the system should be able to handle the discontinuous transmission (DTX) mode.
User net bit rate is a measure of the bit rate taking into account the packet efficiency factor, which is based on considerations of practical packet networks and includes the effect of retransmission of unsuccessful packets.
Coding factor is a generalized measure of the degree of coding required to transport the service at the required quality.
Asymmetry factor is used to show that some services have a different load (bit rate and bandwidth) in the UL and DL.
Switch mode defines whether the service is CS or PS. Since the call duration and the activity factor are not suitable to characterize PS services, an estimation of the effective call duration is generated.
Service bandwidth is the product of user nominal bit rate, coding factor and asymmetry factor.
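As a quick cross-check of these definitions against Tables 2.2 and 2.3, the following lines recompute two entries for the speech (S) service from the tabulated values (a minimal sketch; the variable names are mine).

    nominal_bit_rate  = 16      # kbit/s (user nominal bit rate of speech)
    coding_factor     = 1.75
    asymmetry_ul_dl   = (1, 1)  # symmetric service
    activity_factor   = 0.5
    avg_call_duration = 120     # s

    service_bandwidth = nominal_bit_rate * coding_factor            # 28 kbit/s per link, i.e. 28/28
    effective_call_duration = avg_call_duration * activity_factor   # 60 s

    print(service_bandwidth, effective_call_duration)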

The various classes have different characteristics. HIMM services, e.g. video telephony, require isochronous transmission, as do SD and S. Therefore, they are handled as CS services. The average call duration of these services corresponds to the actual time during which the connection is set up, and the effective call duration depends on the activity factor, which is 0.5 for speech and 0.8 for video telephony. For PS services, the call duration is the sum of the time intervals in which data is actually transferred via the air interface. Thus, the activity factor in this scenario is equal to one. In Figure 2.16 the structure of a PS transmission is shown, and the effective call duration per service, according to the activity factor and the average call duration, is given in Table 2.3.
The call duration and the activity factor are not suitable to characterize PS services. However, an estimation of the effective call duration, and the equivalent
offered bit quantity that packet services will generate, can be based on calculations that consider busy hour calls and an acceptable throughput and delay for
packet services [96].
Footnote 4: The activity factor describes for what percentage of a call the user is, on average, active. In other words, it describes whether and by how much, on average, the activity of the service varies.


Figure 2.16: Packet transmission over the UMTS air interface (source: [100]).

Services | Activity factor | Average call duration [s] | Effective call duration [s]
HIMM | 0.8 | 180 | 144
HMM | 1 | 53.3 | 53.3
MMM | 1 | 13.9 | 13.9
SD | 1 | 156 | 156
SM | 1 | 30 | 30
S | 0.5 | 120 | 60

Table 2.3: Effective call duration (source: [101]).


2.5.2 3GPP UMTS QoS Classes

Beside the classification into the six groups S, SM, SD, MMM, HMM and HIMM from the UMTS Forum described above, there exists a second service classification. 3GPP divides the applications and services into four groups according to their QoS requirements. The four traffic classes are: conversational, streaming, interactive, and background. The main distinguishing factor between these classes is the delay sensitivity of the traffic: the conversational class is meant for very delay-sensitive traffic, while the background class is the most delay-insensitive. The characteristics of the UMTS QoS classes are shown in Figure 2.17.

Figure 2.17: 3GPP traffic classes classification.

The conversational and streaming classes are typically transmitted as real-time connections over the air interface, while the interactive and background classes are transmitted as non-real-time packet data using packet scheduling. The applications belonging to the four classes are shown in the following list [54]:
Conversational: Real time applications (speech services, voice over IP, video telephony); a strictly low end-to-end delay is required.
Streaming: Streaming data transfer applications (web broadcast, video streaming on demand), with high symmetric traffic.
Interactive: Client-server applications (web browsing, database access, games, tele-machines); a low round trip delay is required.
Background: Applications that tolerate long delays (SMS, email, downloading databases, ...).
A detailed description of the four UMTS QoS classes can be found in [54].

2.5.3 Traffic Forecast for UMTS Network Optimization

For radio network optimization, traffic is a very important input parameter. Therefore, it is necessary to perform a traffic forecast based on user and service statistics. Table 2.4 shows statistical information that is fundamental for the network optimization process, since most of the traffic estimation depends on the user density. From it, the corresponding network capacity estimate for each operational environment may be obtained. Only three of the operational environments (marked in bold in Table 2.4) contribute to the maximum total amount of capacity required, because they coexist in the same geographical area [96].
Operational environments | Density of potential users/km² | Cell type
CBD/urban (in building) | 180 000 | Micro/pico
Suburban (in building or on street) | 7 200 | Macro
Home (in building) | 380 | Pico
Urban (pedestrian) | 108 000 | Macro/micro
Urban (vehicular) | 2 780 | Macro/micro
Rural in- and outdoor | 36 | Macro

Table 2.4: Operational environment and cell types (source: [101]).


The following conclusions are based upon market forecast data for the years up
to 2005 [101]. For example, it is assumed that 90 % of the total speech and
low data traffic will be carried over existing 2G networks within this period.
It is also assumed that 60 % of the indoor traffic will be carried over license-exempt networks (like WLAN), and that high (2 Mbit/s) and medium (384 kbit/s)
multimedia services are PS, which are tolerant to delay. Although the majority of
users will be speech users, most of the capacity is needed for multimedia services.


Chapter 3
Overview of Optimization Techniques

3.1 General Issues

This chapter gives a short overview of commonly used and well-known optimization techniques. Different techniques are suitable for different optimization problems. The types of optimization problems are classified according to [92] in the following list:
Continuous functions
Combinatorial problems
Nonlinear structures, e.g.:
Neural networks
Fuzzy systems
Computer programs
Integrated circuits
The problem of optimizing the base station parameters of a UMTS network, which is covered by this work, is a highly nonlinear problem and belongs to the first item in the previous list. For an optimization problem a quantitative value (fitness function) is defined, for which we are interested in the maximum or minimum value. This value depends on different parameters, and the goal of an optimization algorithm is to find the parameter settings that optimize this fitness function.
The optimization algorithms can be classified according to their tasks in the
following way [83]:


Restricted and unrestricted optimization:
In the case of a restricted optimization problem, limits are set for the variables, e.g. only positive values are allowed for a variable.
Global and local optimization:
Global optimization always looks for the absolute maximum or minimum of a function. In contrast, local optimization only considers the maximum or minimum value within a limited scope.
With or without calculation of derivatives:
If it is possible to calculate the derivative, it can be used in the optimization process. Often, the calculation of the derivative is very time consuming, especially when the solution space is very large.
One-dimensional and multidimensional problems:
In the case of one-dimensional problems the optimization algorithm only has to optimize one parameter. Multidimensional optimization, on the other hand, has to optimize more than one parameter; this is the case in most practical tasks.
Continuous and discrete optimization:
Discrete optimization deals with parameters taking discrete values (e.g. zero and one), so the solution space is limited. In the case of continuous optimization the parameters are real values, so there exists an infinite number of solutions.

Examples of conventional optimization methods are Random Search, the Simplex method, the enumeration method and the gradient method. Random Search is a global method and very generally applicable, but usually inefficient. The Simplex method is an approach involving mathematical programming formulations, which is suitable for linear fitness functions with linear boundary conditions. The third conventional method, the enumeration method, is a primitive enumeration of the solutions. It is very inefficient, especially in the case of a large solution space. For this reason advanced enumeration methods were developed, like dynamic programming or branch-and-bound. The gradient method is a very popular approach for several types of problems. It is a purely local method, and it requires that the derivative can be calculated. For an overview of this area see [43].
For complex problems, like nonlinear structures, it is very inefficient and time-consuming to use such conventional mathematical methods. Some problems are intractable due to the following properties, which can occur:
The solution space is large and complex.


The function to be optimized has many local optima. Such a function is called multi-modal [76].
Boundary conditions complicate the search for the optima.
The function to be optimized is time-variant or superposed with noise.
The problem has more than one fitness function. Therefore, a set of Pareto-optimal (Footnote 1) [76] solutions is searched for.
In the last decades other, now commonly applied, optimization methods have been developed, which do not necessarily find the absolute optimum but hopefully give a good approximate solution. These algorithms can be summarized under the term heuristic methods. One class of these heuristic methods, the so-called meta-heuristics, is very popular for solving practical problems. The following list shows the most commonly used meta-heuristic techniques:
Local Search
Tabu Search
Simulated Annealing
Genetic Algorithms
Evolutionary Strategies
Evolutionary Programming
Estimation of Distribution Algorithms
Ant Colony Optimization
While business and operations research practitioners have frequently focused on
conventional optimization approaches involving mathematical programming formulations, computer scientists and electronic engineers have focused on applying optimization techniques subject to multi-criteria objectives and constraints
[106]. Under mathematical programming formulations, discrete decision variables
are used to represent components of the model and/or network. Subsequently,
maximization objectives and constraints are stated in terms of (frequently linear) equations. Such problems are often tackled using commercial mathematical
Footnote 1: A solution is called Pareto-optimal (or efficient) if there is no other solution for which at least one criterion has a better value while the values of the remaining criteria are the same or better. In other words, one cannot improve any criterion without deteriorating at least one other criterion.


programming solvers (such as CPLEX, which is used with a Simplex method in [73] for base station positioning in cellular radio networks) or using problem-specific heuristics and techniques. An important difference between these approaches concerns the required effort. Formulating a problem so that mathematical programming can be applied may be problematic for engineering problems such as network design, particularly when non-linearities and dependencies need to be considered. It potentially takes much less effort to tailor a suitable meta-heuristic approach such as Simulated Annealing. However, given the difficulty of the network design problem and the tensions and conflicts involved, the way in which meta-heuristic techniques are targeted and tailored is crucial.
In this chapter a short outline of some of the most effective and frequently used meta-heuristic solution techniques is given. A comprehensive overview of these and other relevant techniques can be found in [86].

3.2 Local Search

The Local Search algorithm is a local iterative search technique that generates a sequence of solutions for the optimization problem, say x_1, x_2, ..., x_k. At each iteration i (1 ≤ i ≤ k−1), solution x_i is perturbed a number of times (by rules which are described as a move) to produce a neighborhood N(x_i) of candidate solutions. The best solution in the neighborhood is then taken as solution x_{i+1} if it is better than the solution of the previous iteration. This algorithm follows the so-called greedy concept, because it always takes the best neighbor. In Figure 3.1 a simple program structure for a Local Search algorithm is shown.
procedure Local Search
begin
    x ← initial solution;
    repeat
        x' ← N(x);    // derive neighborhood solution
        if x' is better than x then
            x ← x';
    until termination condition is fulfilled;
end

Figure 3.1: Local Search algorithm.
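The greedy procedure of Figure 3.1 translates into a few lines of Python; the neighborhood and fitness functions below are placeholders that an application would supply (e.g. perturbing one antenna tilt or CPICH value), so this is only an illustrative sketch.

    def local_search(x0, neighbors, fitness, max_iter=1000):
        # greedy Local Search: move to the best neighbor as long as it improves the solution
        x, fx = x0, fitness(x0)
        for _ in range(max_iter):
            best = max(neighbors(x), key=fitness)     # best candidate of the neighborhood N(x)
            if fitness(best) > fx:
                x, fx = best, fitness(best)
            else:
                break                                 # local optimum reached
        return x, fx

    # toy usage: maximize f(x) = -(x - 3)^2 over the integers
    f = lambda x: -(x - 3) ** 2
    print(local_search(5, lambda x: [x - 1, x + 1], f))   # -> (3, 0)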
One big disadvantage of the algorithm is that in general it only finds a local optimum. An improvement is the so-called Multi-start Local Search, which starts from several different initial solutions. A full description of the Local Search method and its extensions is given in [8, 85]. It is a very popular approach for the optimization of combinatorial problems; the best known example is the Traveling Salesman Problem (Footnote 2). Such methods are also utilized for network design, where this approach is best suited to generate an initial network for further development [106]. A crucial issue concerns the way in which the candidate sites (Footnote 3) and configurations for a network are ordered. Such methods have been investigated, for example, for related graph-based problems and frequency assignment for GSM in [14, 15]. In [99], this approach has been used for cell planning.

Footnote 2: The traveling salesman problem, or TSP for short, is defined as follows: given a finite number of cities along with the cost of travel between each pair of them, find the cheapest way of visiting all the cities and returning to your starting point.
Footnote 3: A candidate site is a possible site for a base station.

3.3 Simulated Annealing

Simulated Annealing (SA) is a meta-heuristic derived from statistical mechanics. It is a technique that has attracted significant attention as suitable for large-scale optimization problems, especially ones where a desired global extremum is hidden among many, poorer, local extrema.
At the heart of the method of Simulated Annealing is an analogy with thermodynamics, specifically with the way that liquids freeze and crystallize, or metals cool and anneal. At high temperatures, the molecules of a liquid move freely with respect to one another. If the liquid is cooled slowly, thermal mobility is lost. The atoms are often able to line themselves up and form a pure crystal that is completely ordered over a distance up to billions of times the size of an individual atom in all directions. This crystal is the state of minimum energy for this system. The amazing fact is that for slowly cooled systems, nature is able to find this minimum energy state. In fact, if a liquid metal is cooled quickly, it does not reach this state. So the essence of the process is slow cooling, allowing ample time for redistribution of the atoms as they lose mobility. This is the technical definition of annealing, as defined in [89].
The so-called Boltzmann probability distribution,

    Prob(E) ∝ e^{-E/kT}                                (3.1)

expresses the idea that a system in thermal equilibrium at temperature T has its energy probabilistically distributed among all different energy states E. Even
at low temperature, there is a chance, albeit very small, of a system being in
a high energy state. Therefore, there is a corresponding chance for the system
to get out of a local energy minimum in favor of finding a better, more global
one. In Equ. (3.1) the quantity k (Boltzmann's constant) is a constant of nature that relates temperature to energy. In other words, the system sometimes goes uphill as well as downhill; but the lower the temperature, the less likely is any significant uphill excursion. For simulating the annealing process, i.e. finding a state of minimum energy, the following instructions apply:
Start with high temperature and cool down slowly.
For each temperature, accept transitions to a new molecular structure if its energy E is lower.
Accept transitions to molecular structures with higher energy only with a probability of e^{-ΔE/kT}.
The instructions in the list above can be used as an optimization algorithm. The
analogy between the thermodynamic simulation of annealing and optimization is
shown in Table 3.1.
Thermodynamic simulation | Optimization
system state | valid solution x
energy E | value of the fitness function f(x)
transition of state | derive neighborhood solution N(x)
solidification (crystal) | found solution
temperature T | control parameter T

Table 3.1: Analogy between the thermodynamic simulation of annealing and optimization.
optimization.
A simple program structure of a Simulated Annealing optimization algorithm is shown in Figure 3.2. The variable Z is a uniformly distributed random value between zero and one. The critical point of the algorithm is how to choose the initial value of the temperature T and the cooling function g(T, i). This function describes the cooling plan of the algorithm. In the literature, among others, the following functions are known:
Geometric cooling: g(T) = α·T with α ∈ [0.8, 1).
Slow cooling: g(T) = T/(1 + β·T) with β > 0 and small.
Theoretically good scheme: g(T) = γ/(1 + i) with γ proportional to the increase in the fitness function, and i the iteration counter.


Reannealing: If a new solution is accepted, decrease T according to g(T) = T/(1 + β·T). If the new solution is discarded, increase T according to g(T) = T/(1 − β·T/l) with l ∈ N.
For more details on the different cooling functions see [66]. In contrast to the Local Search described in Section 3.2, the Simulated Annealing algorithm also accepts worse results in order to permit escape from local optima which are not global. The chance of accepting a worse solution is controlled by the cooling temperature T. For a detailed treatment of Simulated Annealing see [9]. This approach is very popular for cell planning issues [11, 56, 73].
procedure Simulated Annealing
begin
    i ← 0;
    T ← initial temperature;
    x ← initial solution;
    repeat
        x' ← N(x);    // derive neighborhood solution
        if x' is better than x then
            x ← x';
        else
            if Z < e^{-|f(x') - f(x)|/T} then
                x ← x';
        end
        T ← g(T, i);
        i ← i + 1;
    until termination condition is fulfilled;
end

Figure 3.2: Simulated Annealing algorithm.
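The pseudocode of Figure 3.2 can be turned into a compact Python sketch. The geometric cooling schedule and the toy fitness function below are illustrative choices only, not the settings used later in this thesis.

    import math, random

    def simulated_annealing(x0, neighbor, fitness, t0=10.0, alpha=0.95, max_iter=2000):
        x, fx = x0, fitness(x0)
        t = t0
        for _ in range(max_iter):
            x_new = neighbor(x)                                # derive neighborhood solution
            f_new = fitness(x_new)
            # accept improvements always, deteriorations only with the temperature-dependent probability
            if f_new > fx or random.random() < math.exp(-abs(f_new - fx) / t):
                x, fx = x_new, f_new
            t *= alpha                                         # geometric cooling g(T) = alpha * T
        return x, fx

    # toy usage: maximize a multi-modal function on [0, 10]
    f = lambda x: math.sin(3 * x) + 0.1 * x
    nb = lambda x: min(10.0, max(0.0, x + random.uniform(-0.5, 0.5)))
    print(simulated_annealing(0.0, nb, f))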

3.4 Tabu Search

The Tabu Search (TS) meta-heuristic technique operates using the neighborhood principle of the Local Search technique from Section 3.2. However, in order to prevent cycling and to provide a mechanism for escaping locally but not globally optimal solutions, some moves (Footnote 4) at one particular iteration may be classified as tabu. After a valid move is executed, it is stored in the so-called tabu list TL. It is then not allowed to perform this move again for the next k iterations. This tabu list is an


essential component of the algorithm, because it stores the history of the visited candidate solutions. There can also be aspiration criteria, which override the tabu moves if particular circumstances apply. A detailed description of tabu list management and aspiration criteria can be found in [67].

Footnote 4: A move denotes the transition from one solution to another.
procedure Tabu Search
begin
    TL ← ∅;    // tabu list
    x ← initial solution;
    repeat
        X' ← subset of N(x) under consideration of TL;
        x' ← best solution of X';
        add move from x to x' to TL;
        delete moves from TL which are older than k iterations;
        x ← x';
        if x is better than the best solution so far then
            store x;
        end
    until termination condition is fulfilled;
end

Figure 3.3: Tabu Search algorithm.
In Figure 3.3 a simple program structure for a Tabu Search algorithm is shown. The Tabu Search technique, as an extension of the Local Search algorithm, is a very efficient technique for the optimization of combinatorial problems like the TSP or the Bin Packing Problem (Footnote 5). In [18] a short overview of different combinatorial problems is given. For the TSP, the Tabu Search algorithm is nowadays the most efficient optimization technique. This method is also used for cell planning issues in CDMA networks [12, 71]. For further details on Tabu Search see [49, 67, 93].
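A minimal Python version of the Tabu Search of Figure 3.3 is sketched below; the short-term memory simply forbids the inverse of the last few moves, and the toy problem at the end is only an illustration.

    from collections import deque

    def tabu_search(x0, moves, apply_move, fitness, tenure=7, max_iter=100):
        x, best, best_f = x0, x0, fitness(x0)
        tabu = deque(maxlen=tenure)                          # tabu list TL keeps the last `tenure` moves
        for _ in range(max_iter):
            allowed = [m for m in moves(x) if m not in tabu]
            if not allowed:
                break
            m = max(allowed, key=lambda m: fitness(apply_move(x, m)))
            x = apply_move(x, m)
            tabu.append(-m)                                  # forbid the reverse move for a while
            if fitness(x) > best_f:
                best, best_f = x, fitness(x)
        return best, best_f

    # toy usage: maximize -(x - 4)^2 with moves +1 / -1 on the integers
    f = lambda x: -(x - 4) ** 2
    print(tabu_search(0, lambda x: [+1, -1], lambda x, m: x + m, f))   # -> (4, 0)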

3.5 Evolutionary Algorithms

Evolutionary Algorithms (EA) are a category of computer-based problem solving systems which use calculable models of natural selection and evolution processes as their key elements. The theory of evolution was founded by Charles Darwin (Footnote 6) in the year 1859. In his famous book The Origin of Species he explained inheritance with variations (mutation) and discussed natural selection.
The principal structure of an evolutionary optimization algorithm is shown in Figure 3.4. In this figure, P denotes a population of solutions. In each iteration a new generation Q is produced, and from the union of Q and P the new population P is selected.

Footnote 5: Bin Packing Problem: Determine how to put the most objects into a given number of fixed-size bins. More formally, find a partition and assignment of a set of objects such that a constraint is satisfied.
procedure EA
begin
    P ← set of initial solutions;
    Evaluate(P);
    repeat
        Q ← GenerateNewSolutionsByVariation(P);
        Evaluate(Q);
        P ← SelectBetterSolutions(P, Q);
    until termination condition is fulfilled;
end

Figure 3.4: The structure of an Evolutionary Algorithm.
Evolutionary Algorithms have the following properties:
They do not require certain properties of the optimized function like continuity, differentiability or dimensionality.
The algorithms are not restricted to numeric optimization.
No special information about the solution space, like derivatives, is needed.
They are suitable for problems with a big solution space, where other techniques, like enumeration methods, need too much time.
Global view of the technique: the algorithm searches for the global optimum, and not for the nearest local one.
Finding the optimal solution cannot be guaranteed.
It is possible to combine Evolutionary Algorithms with other optimization techniques or problem-specific heuristics.
Footnote 6: Charles Robert Darwin (1809-1882) was the British naturalist who became famous for his theories of evolution and natural selection. Like several scientists before him, Darwin believed that all life on earth evolved (developed gradually) over millions of years from a few common ancestors.



Robustness: Evolutionary Algorithms are suitable for many different types
of problems (see Section 3.1). Specialized methods often achieve better
results only on certain problems.

The most important types of Evolutionary Algorithms are enumerated in the list below. This classification is not unique, because there are a lot of hybrid approaches.
Genetic Algorithms
Evolutionary Strategies
Evolutionary Programming
Classifier Systems
Genetic Systems
Genetic Programming
In the following the most important and most popular representatives, the Genetic Algorithms, which are used in this thesis for optimizing the base station
parameters, are described in more detail.

3.5.1 Genetic Algorithms

The idea of using genetic approaches for optimization originated with J. H. Holland about thirty years ago; he wrote the seminal work on Genetic Algorithms (GA) [53]. De Jong [31] extended his work to functional optimization, involving the use of optimization search strategies modeled after the Darwinian notion of natural selection and evolution. In the last two decades, first D. E. Goldberg [50] and then Z. Michalewicz [13, 76, 77] enhanced the methods and did research on the theoretical foundations of Genetic Algorithms.
Genetic Algorithms are particularly effective, when the goal is to find an approximate global maximum in a high dimensional, multi-modal function domain in
a near-optimum manner [61]. They differ from other conventional techniques by
operating on a group (or population) of trial solutions in parallel. Normally a
Genetic Algorithm operates on a coding of the function parameters (a chromosome) rather than on the parameters themselves. For the coding of the problem a
suitable structure should be used; the most common coding schemes are binary coding and Gray coding. Simple stochastic operators (selection, crossover and mutation) are used to explore the solution domain in search of an optimal solution. The basic block diagram with the three operators is depicted in Figure 3.5.


Figure 3.5: Genetic Algorithm.

procedure GA
begin
    i ← 0;
    initialize(P(i));
    evaluate(P(i));
    while (not termination condition) do
        i ← i + 1;
        Q_s(i) ← select(P(i-1));
        Q_r(i) ← recombine(Q_s(i));
        P(i) ← mutate(Q_r(i));
        evaluate(P(i));
    done
end

Figure 3.6: Canonical Genetic Algorithm.


Figure 3.6 shows the algorithm corresponding to the block diagram of Figure 3.5. This simple type of GA is known as the Canonical GA. In Figure 3.6, P(i) denotes the population of the current iteration, Q_s(i) indicates the population after the selection process and Q_r(i) is the population after recombination. In keeping with the natural selection analogy, successive populations of trial solutions are called generations. Subsequent generations are made up of children, produced through the selective reproduction of pairs of parents taken from the current generation. A list of some of the commonly encountered GA terms relating to the optimization problem is presented below:
Population: Set of trial solutions.
Parent: Member of the current generation.
Child: Member of the next generation.
Generation: Successively created populations (GA iterations).
Chromosome: Coded form of a trial solution vector (string) consisting of genes made of alleles. A chromosome is also referred to as an individual.
Gene: Each gene can have a value (allele) from a certain value set (e.g. one bit).
Allele: Concrete value of a gene (e.g. for a bit representation: zero or one).
Fitness: Positive number assigned to an individual representing a measure of quality.

At the beginning of the optimization process the first population has to be initialized (see Figure 3.6). Normally this is done by a random setting. For the evaluation of all individuals of the population, a fitness function f(i) is needed. A higher fitness value means a better solution and a lower fitness value indicates a worse solution. If the fitness function reaches its highest value, the optimal solution sought has been found.
In the following the three operators are briefly explained.
3.5.1.1 Selection

The selection of the parents for the next generation is mostly controlled by randomness, but in accordance with natural selection: better individuals are selected more often than worse individuals. The selection process drives the population of the GA in the direction of better solutions. In the majority of cases a fitness-proportional selection scheme is used. This method is referred to as roulette-wheel


selection, because it works like the roulette game with different probabilities. For each individual the fitness value is scaled according to the sum of the fitness values of all individuals. The probability for an individual i to be selected in the selection process is shown in Equ. (3.2):

    p_s(i) = f(i) / Σ_{j=1}^{n} f(j),   with f(j) ≥ 0 and Σ_{j=1}^{n} f(j) > 0        (3.2)

In Equ. (3.2), n denotes the size of the population.
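Equation (3.2) can be implemented directly. The following sketch (illustrative only, assuming non-negative fitness values) draws one parent index with a probability proportional to its fitness.

    import random

    def roulette_select(fitnesses):
        # returns index i with probability f(i) / sum_j f(j); all f(j) must be >= 0
        total = sum(fitnesses)
        r = random.uniform(0.0, total)
        acc = 0.0
        for i, f in enumerate(fitnesses):
            acc += f
            if r <= acc:
                return i
        return len(fitnesses) - 1

    print(roulette_select([1.0, 3.0, 6.0]))   # index 2 is returned in about 60 % of the draws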


3.5.1.2 Recombination

Recombination is the primary operator for generating new individuals. In Figure 3.7 the 1-point crossover is shown, as one possible example for a recombination operator. In this example 8 parameters are binary coded on one chromosome.

Figure 3.7: 1-point crossover.


The crossover is either performed for all individuals of the population, or for the majority of the population, controlled by a certain probability (e.g. p_c = 0.8). The pairing of parents as well as the choice of the crossover point are also performed randomly.
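A 1-point crossover on bit-string chromosomes, applied with probability p_c, can be sketched as follows; the chromosome length of eight matches the example of Figure 3.7, everything else is an illustrative assumption.

    import random

    def one_point_crossover(parent1, parent2, pc=0.8):
        # with probability 1 - pc the parents are copied unchanged
        if random.random() > pc:
            return parent1[:], parent2[:]
        point = random.randint(1, len(parent1) - 1)      # crossover point chosen at random
        return (parent1[:point] + parent2[point:],
                parent2[:point] + parent1[point:])

    print(one_point_crossover([0, 1, 1, 0, 1, 0, 0, 1],
                              [1, 1, 0, 0, 0, 1, 1, 0]))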
3.5.1.3 Mutation

The mutation operator is used as a secondary operator in Genetic Algorithms. This operator makes small, random changes: only one gene of a chromosome obtains a new value. Mutation is responsible for introducing new and reintroducing lost gene material (alleles) into the population. In Figure 3.8 an example of mutation for a chromosome with eight binary coded parameters is shown. Parameter no. 4 mutates from 1 to 0. The mutation is usually performed with a certain probability only (e.g. p_m = 0.5).


Figure 3.8: Mutation.
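A corresponding bit-flip mutation of a single, randomly chosen gene, applied with probability p_m, could look like this (again only an illustrative sketch).

    import random

    def mutate(chromosome, pm=0.5):
        # with probability pm, flip exactly one randomly chosen bit
        child = chromosome[:]
        if random.random() < pm:
            gene = random.randrange(len(child))
            child[gene] = 1 - child[gene]        # allele flips from 0 to 1 or from 1 to 0
        return child

    print(mutate([0, 1, 1, 1, 1, 0, 0, 1]))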

3.5.1.4 Applications for Genetic Algorithms

Researchers and developers use Genetic Algorithms in many areas where a global optimum is searched for. In particular they are used for practical optimization applications with big and complex solution spaces. In the field of electromagnetics engineering, developers use Genetic Algorithms, for example, for the design of lightweight broadband microwave absorbers, the reduction of array sidelobes in thinned arrays, the design of shaped-beam antenna arrays, or the design of broadband patch antennas [61].
In the area of network planning for 2G and 3G systems, Genetic Algorithms
are very popular. In [52, 78] these algorithms are used for the base station
location problem. The papers [21, 55, 70] demonstrate approaches for coverage
and capacity optimization in wireless networks using Genetic Algorithms.

3.6 Ant Colony Optimization

The idea of imitating the behavior of ants to find solutions for combinatorial optimization problems was initiated by Colorni, Dorigo and Maniezzo in 1991 [24, 25, 26]. The metaphor comes from the way ants search for food and find their way back to the nest. Initially, ants explore the area surrounding their nest in a random manner. As soon as an ant finds a food source, it evaluates the interest of the source (quality and quantity) and carries some of the food to the
nest. During the return trip, the ant leaves a chemical pheromone trail on the
ground, whose quantity depends on the quality of the source. The role of this
pheromone trail is to guide other ants towards the source. After a while, the
path to a good source of food will be indicated by a strong pheromone trail, as
the trail grows with the number of ants that reach the source. Since sources that
are close to the nest are visited more frequently than those that are far away,
pheromone trails leading to the nearest sources grow faster. The final result of
this process is that ants are able to optimize their work.
The transfer of this food searching behavior into an algorithmic framework for solving combinatorial optimization problems is obtained through an analogy between:


the search area of the real ants and the set of feasible solutions to the
combinatorial problem;
the amount of food associated with a source and the fitness function;
the pheromone trail and an adaptive memory.
A standard ant system is schematically described as follows:
1. Initialize the pheromone trail.
2. While a stopping criterion is not met do:
(a) for each ant, construct a new solution using the current pheromone
trail and an evaluation of the partial solution being constructed;
(b) update the pheromone trail.
The most important component of an ant system is the management of pheromone
trails. In a standard ant system, pheromone trails are used in conjunction with
the fitness function to guide the construction of new solutions. Once a solution has been produced, a standard ant system updates the pheromone trails as
follows: first all trails are weakened to simulate the evaporation of pheromone;
then, pheromone trails that correspond to components that were used to construct the resulting solution are reinforced, taking into consideration the quality
of this solution.
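The loop described above can be sketched as follows (a minimal Python illustration; the construction of solutions is left abstract, and this is not the formulation of any algorithm used later in this thesis):

def ant_system(construct_solution, fitness, n_components,
               n_ants=10, n_iterations=100, evaporation=0.1):
    """Generic ant-system loop: construct solutions, evaporate, reinforce."""
    pheromone = [1.0] * n_components            # 1. initialize the pheromone trail
    best, best_fit = None, float("-inf")
    for _ in range(n_iterations):               # 2. stopping criterion: iteration budget
        solutions = []
        for _ in range(n_ants):                 # 2a. each ant constructs a solution
            s = construct_solution(pheromone)   #     guided by the current trail
            f = fitness(s)
            solutions.append((s, f))
            if f > best_fit:
                best, best_fit = s, f
        pheromone = [(1.0 - evaporation) * p    # 2b. evaporation weakens all trails
                     for p in pheromone]
        for s, f in solutions:                  #     reinforcement of the components
            for c in s:                         #     used, weighted by solution quality
                pheromone[c] += f
    return best, best_fit

Here a solution is assumed to be a list of component indices; construct_solution and fitness are problem-specific and supplied by the caller.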
Ant systems have been applied to different combinatorial optimization problems like TSPs [25, 34, 46] and Quadratic Assignment Problems (QAP, see Footnote 7) [34] with comparable or even better performance than other nature-inspired systems like Genetic Algorithms. These rather new optimization techniques are also in use for more practical combinatorial problems, like transport logistics problems or project and process planning problems [33]. Ant systems are used for routing and load balancing in circuit-switched telecommunications networks [94] and for routing in packet-switched telecommunications networks like the Internet [32]. For continuous optimization problems, like the base station parameter optimization of a wireless network, the ant colony technique is not yet developed far enough to achieve good results in acceptable time [57].
Footnote 7: Quadratic Assignment Problem: the problem of assigning a set of facilities to a set of locations with given distances between the locations and given flows between the facilities. The goal then is to place the facilities on the locations in such a way that the sum of the products of flows and distances is minimal.


Chapter 4
Coverage- and Capacity-limiting Factors

4.1 Introduction

Unlike in GSM, coverage and capacity improvement methods cannot be treated separately in the UMTS system. There is always a tradeoff between coverage and capacity. Some of the improvement methods enhance the coverage at the cost of capacity, while others improve capacity but decrease the coverage at the same time.
In UMTS the network coverage and capacity can be either uplink or downlink
limited. It is generally accepted that service coverage is uplink limited. However, system capacity may be either uplink or downlink limited depending upon
the system configuration and the traffic profile. In rural environments, where
the network is normally planned with relatively low uplink load, the scenario is
typically capacity limited in the uplink. A downlink capacity-limited scenario is
more likely in an urban scenario, where the network is planned for higher uplink
load to increase the system capacity [68].
When a cell's capacity limitation is reached, additional users cannot be admitted
to the system and, therefore, they are put to outage. Outaged users are within
the coverage of the cell, but not able to access the network services. Thus, as the
number of users at outage increases, the network capacity decreases. The outage
problems can be managed by radio resource management (RRM) and optimization of the base station parameters. Therefore, understanding and identifying
the limitations is important for the development of optimization strategies for
increasing coverage and capacity effectively.
This chapter provides the basis for understanding the reasons for coverage- and capacity-limited scenarios both in the uplink and the downlink. Further, the corresponding solutions for the enhancement of network coverage and capacity are presented.

4.2 Uplink and Downlink Coverage-limited Scenarios

The majority of existing literature makes the assumption that service coverage
is uplink limited [68]. In general this is true, though it is fairly easy to identify scenarios where service coverage is downlink limited, for example when the
data rate is asymmetric with more data in the downlink combined with a limited
base station transmit power capability. The simplest method for studying service coverage performance is using a link budget. The link budget is also very useful for identifying which parameters need to be improved to enhance service coverage performance.
Techniques that require additional investment for improving the service coverage are active antennas, mast head amplifiers, higher-order receive diversity, increased sectorization, repeaters and smart antennas. Some of these techniques
improve coverage, but at the cost of capacity. However, other techniques like
smart antennas simultaneously improve both coverage and capacity. A detailed
description of the individual techniques can be found in [68]. A good overview
on smart antennas is given in [16, 45].
Link budgets for a W-CDMA system follow the same principles as those for GSM.
The main differences are the inclusion of processing gain, Eb /N0 requirement, soft
handover gain, target uplink cell loading and a headroom to accommodate the
fast power control loop. In the link budget the target loading is the main capacityrelated parameter. A low value for the target loading corresponds to a larger cell
range, but a lower cell capacity.
The link budget for a data service supporting 384 kbit/s on the downlink and
64 kbit/s on the uplink is presented in Table 4.1. The lower allowed propagation
loss value indicates that the service coverage is uplink limited. If we assume a
power of 37 dBm for the power amplifier of the base station, the service coverage
of the scenario becomes downlink limited. A maximum of half of the total transmit power is generally allocated to any single link, i.e. 34 dBm (similar to the example from Table 4.1, where 40 dBm represents half of the total transmit power capability of a 43 dBm power amplifier). Therefore, the downlink allowed propagation loss decreases by 6 dB and results in a downlink-limited service coverage.
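As an illustration of how the entries of such a link budget combine, the uplink column of Table 4.1 can be reproduced as follows (a Python sketch; the variable names are mine, not the simulator's):

import math

W = 3.84e6                      # W-CDMA chip rate [chip/s]
R = 64e3                        # uplink service bit rate [bit/s]

tx_power      = 21.0            # mobile maximum transmit power [dBm]
eb_n0         = 2.0             # required Eb/N0 [dB]
noise_density = -174.0          # thermal noise density [dBm/Hz]
noise_figure  = 3.0             # base station receiver noise figure [dB]
loading       = 0.5             # target uplink cell loading (50 %)
rx_ant_gain   = 18.5            # base station antenna gain [dBi]
cable_loss    = 2.0             # cable / body loss at the receiver [dB]
fast_fade     = 3.0             # fast fading margin [dB]
sho_gain      = 2.0             # soft handover gain [dB]

processing_gain    = 10 * math.log10(W / R)                       # 17.8 dB
noise_rise         = -10 * math.log10(1 - loading)                # 3.0 dB, Equ. (4.3)
interference_floor = noise_density + noise_figure + noise_rise    # -168.0 dBm/Hz
sensitivity = (interference_floor + 10 * math.log10(W)
               + eb_n0 - processing_gain)                         # -117.9 dBm
eirp_required = sensitivity - rx_ant_gain + cable_loss + fast_fade - sho_gain
allowed_path_loss = tx_power - eirp_required
print(round(allowed_path_loss, 1))                                # -> 154.4 dB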
A series of typical uplink link budgets is presented in Table 4.2 for a range of service data rates.


Parameter                         Uplink     Downlink
Service bit rate                    64         384        kbit/s
Maximum transmit power              21.0        40.0 (a)   dBm
Antenna gain (b)                     0.0        18.5       dBi
Body loss / cable loss               0.0         2.0       dB
Transmit EIRP                       21.0        56.5       dBm
Processing gain                     17.8        10.0       dB
Required Eb/N0                       2.0         4.5       dB
MDC gain                             0.0         1.2       dB
Target loading                      50          80 (d)     %
Rise over thermal noise              3.0         7.5       dB
Thermal noise density             -174.0      -174.0       dBm/Hz
Receiver noise figure                3.0         8.0       dB
Interference floor                -168.0      -159.0       dBm/Hz
Receiver sensitivity              -117.9       -99.9       dBm
Receiver antenna gain               18.5        18.5       dBi
Cable loss / body loss (b)           2.0         0.0 (c)    dB
Fast fading margin                   3.0         0.0       dB
Soft handover gain                   2.0         2.0 (f)    dB
EIRP required                     -133.4      -101.9       dBm
Allowed propagation loss           154.4       158.4       dB

Table 4.1: Example link budget for a data service (source: [68]).
(a) 40 dBm is a typical limit placed upon a downlink traffic channel for a 43 dBm power amplifier module, to prevent an excessive share of base station power being allocated to a single user.
(b) The values for the antenna gain, body loss and cable loss are very optimistic. A detailed study on this topic is presented in [87].
(c) It has been assumed that data services do not incur a body loss.
(d) The downlink target loading is a function of the traffic mix loading in the cell. 80 % is a typical value.
(e) Measurements of the background noise floor can be found in [82].
(f) The value for soft handover gain in the downlink is too hard. Typical values, based on numerous simulations, can be found in [17].


The difference in the allowed propagation loss values may be used to estimate the difference in site count requirements for various service coverage objectives [68]. Table 4.2 shows that the highest data rate service defines the cell range in terms of the allowed propagation loss. Planning the network for 384 kbit/s service coverage will be sufficient to ensure acceptable coverage performance for lower data rate services and speech.
Service type                      Speech    Data      Data      Data
Uplink bit rate                    12.2      64        144       384      kbit/s
Maximum transmit power             21.0      21.0      21.0      21.0     dBm
Antenna gain (a)                    0.0       0.0       2.0 (b)   2.0 (b)  dBi
Body loss / cable loss (a)          3.0       0.0 (c)   0.0 (c)   0.0 (c)  dB
Transmit EIRP                      18.0      21.0      23.0      23.0     dBm
Processing gain                    25.0      17.8      14.3      10.0     dB
Required Eb/N0                      4.0       2.0       1.5       1.0     dB
Target loading                     50        50        50        50       %
Rise over thermal noise             3.0       3.0       3.0       3.0     dB
Thermal noise density            -174.0    -174.0    -174.0    -174.0     dBm/Hz
Receiver noise figure               3.0       3.0       3.0       3.0     dB
Interference floor               -168.0    -168.0    -168.0    -168.0     dBm/Hz
Receiver sensitivity             -123.1    -117.9    -114.9    -111.1     dBm
Receiver antenna gain              18.5      18.5      18.5      18.5     dBi
Cable loss / body loss              2.0       2.0       2.0       2.0     dB
Fast fading margin                  3.0       3.0       3.0       3.0     dB
Soft handover gain                  2.0       2.0       2.0       2.0     dB
EIRP required                    -138.6    -133.4    -130.4    -126.6     dBm
Allowed propagation loss          156.6     154.4     153.4     149.6     dB

Table 4.2: Example uplink link budgets for illustrating the impact of service data rate (source: [68]).
(a) The values for the antenna gain, body loss and cable loss are very optimistic. A detailed study on this topic is presented in [87].
(b) It has been assumed that terminals supporting higher data rates are superior in terms of antenna configuration.
(c) It has been assumed that data services do not incur a body loss.
Improving any of the parameters in the link budget will lead to an improvement
in service coverage performance. However, improving service coverage leads to a
greater average base station transmit power requirement per downlink connection.


If the system capacity is uplink limited, then this is of no consequence. However, if the system capacity is downlink limited, then improving service coverage will
lead to a loss in system capacity. A different approach would be to improve the
Eb /N0 performance. Then, it is possible to simultaneously enhance both service
coverage and system capacity.

4.3 Uplink Capacity-limited Scenarios

In the uplink, there are two possible limiting factors for uplink capacity-limited
systems. One reason could be that the mobile does not have enough transmit
power to achieve the required bit energy to interference plus noise density ratio (Eb /I0 ) to access the network services. An uplink capacity-limited scenario
can also occur when the maximum uplink load is reached and therefore no additional users can be accepted in the system. The traffic associated with an uplink
capacity-limited scenario is generally relatively symmetric.

4.3.1 Insufficient Uplink Power

The maximum allowed transmit power of a mobile must be enough to fulfill the
Eb /I0 requirement at the base station in order to access the network services.
The transmit power P_TX,MS needed by the mobile is calculated using Equ. (4.1) and compared to the maximum allowed value.

P_{TX,MS} = \frac{N_0 \, L_p}{(1 - \eta_{UL}) \left(1 + \frac{W}{\rho \nu R}\right)} \qquad (4.1)

Here N_0 is the background noise, L_p is the propagation loss between the mobile and the base station, R, \nu and \rho are the bit rate, service activity factor and uplink E_b/N_0 requirement of the chosen service, respectively, W is the W-CDMA chip rate and \eta_{UL} is the uplink loading.
Hence, if the mobile fails to fulfill the required Eb /I0 , the RNC commands the
mobile to increase its transmit power through the closed loop power control algorithm, which is based on the received power measured at the base station. If this
is not possible, because the maximum transmit power of the mobile is already reached,
the mobile is put to outage.
From Equ. (4.1) we can see that the required transmit power of a mobile is
directly proportional to the path loss. Consequently, this power level could be
reduced by decreasing the path loss, e.g. by adjusting the antenna downtilt or
the antenna azimuth.
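For illustration, Equ. (4.1) can be evaluated directly (a Python sketch with assumed example values; the effective noise power at the base station receiver and all names are mine):

import math

def required_ms_tx_power(path_loss_db, eta_ul, eb_n0_db, bit_rate,
                         activity=1.0, chip_rate=3.84e6, noise_dbm=-103.0):
    """Required mobile TX power in dBm according to Equ. (4.1)."""
    n0  = 10 ** (noise_dbm / 10)            # assumed background noise at the BS [mW]
    lp  = 10 ** (path_loss_db / 10)         # propagation loss (linear factor)
    rho = 10 ** (eb_n0_db / 10)             # Eb/N0 requirement (linear)
    p_tx = n0 * lp / ((1 - eta_ul) * (1 + chip_rate / (rho * activity * bit_rate)))
    return 10 * math.log10(p_tx)

# e.g. a 64 kbit/s user at 140 dB path loss in a 50 % loaded cell needs about
# 24 dBm, which exceeds the 21 dBm terminal limit and would lead to outage:
print(round(required_ms_tx_power(140.0, 0.5, 2.0, 64e3), 1))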

4.3.2 Uplink Cell Load Limitation

An uplink capacity-limited scenario is likely to occur in environments where the capacity requirements are relatively low and the network has been planned with a low uplink cell load to maximize the cell range and thus reduce the required number of sites. The maximum permissible level of uplink cell load η_UL,threshold determines the interference margin that appears in any link budget calculation (see
Table 4.1). The greater the cell loading, the greater the required number of sites,
as well as the higher potential capacity per site. The traffic in an uplink capacitylimited scenario is generally relatively symmetric. The uplink load equation is
defined as Equ. (4.2) [68].

\eta_{UL} = (1 + i) \sum_{k=1}^{K_N} \frac{1}{1 + \frac{W}{\rho_k \nu_k R_k}} \qquad (4.2)

Here K_N is the number of mobiles connected to base station N, R_k, \nu_k and \rho_k are the bit rate, activity factor and E_b/N_0 requirement of user k for the chosen service, respectively, W is the W-CDMA chip rate, and i is the other-to-own cell received power ratio. This ratio is defined in Equ. (5.1) in Section 5.2.2.
In an uplink capacity-limited system, the capacity per cell is directly proportional
to the maximum permissible level of uplink cell load [68]. Each mobile which
establishes a connection with the same Eb /N0 requirement and activity factor,
increases the cell load by the same amount. Doubling the maximum cell load
η_UL,threshold results in doubling the cell capacity for an uplink-limited scenario.
The impact upon cell range is dependent upon the absolute levels of the cell load.
The relationship between the cell load and the maximum allowed propagation loss
is exponential [68]. Equ. (4.3) shows the relationship between the uplink cell load \eta_{UL} and the resulting increase \Delta L of the receiver interference floor.

\Delta L = -10 \log_{10}(1 - \eta_{UL}) \qquad (4.3)

As η_UL approaches 100 %, the receiver interference floor increases without limit. However, this condition can never occur in practice, because the mobiles have a finite transmit power capability.
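A small numeric sketch of Equ. (4.2) and Equ. (4.3) (Python; the user mix and the other-to-own cell ratio are invented example values, not one of the simulation scenarios):

import math

def load_factor(eb_n0_db, bit_rate, activity=1.0, chip_rate=3.84e6):
    """Load contribution of a single uplink connection, as in Equ. (4.2)."""
    rho = 10 ** (eb_n0_db / 10)
    return 1.0 / (1.0 + chip_rate / (rho * activity * bit_rate))

def uplink_load(users, other_to_own=0.55):
    """Cell load eta_UL for a list of (Eb/N0 [dB], bit rate) users."""
    return (1 + other_to_own) * sum(load_factor(e, r) for e, r in users)

users = [(4.0, 12.2e3)] * 30 + [(2.0, 64e3)] * 5      # 30 speech + 5 data users
eta = uplink_load(users)
noise_rise = -10 * math.log10(1 - eta)                 # Equ. (4.3)
print(round(eta, 2), round(noise_rise, 1), "dB")       # about 0.57 and 3.6 dB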
When the maximum uplink load of a cell is reached, η_UL ≥ η_UL,threshold (where η_UL,threshold is the planned maximum permissible level of uplink cell load), any
additional users will be set to outage even though the users would have enough
transmit power to access the network services.
The reasons why the maximum uplink cell load is reached are summarized in the following list:


- The network is planned with a too low uplink cell load η_UL.
- High base station transmit power capability.
- Relatively symmetric traffic (e.g. speech users).

For an uplink capacity-limited scenario, more users can be admitted into the system by shrinking the planned service coverage area of the cells (by down-tilting the antennas), so that the mobiles at the cell boundaries hand over to adjacent cells with lower traffic density, thereby achieving load balancing within the network. As a result, the overall network capacity will be improved.

4.4 Downlink Capacity-limited Scenarios

A downlink capacity-limited scenario occurs for several reasons: the maximum transmit power of the base station is reached, the OVSF code utilization reaches its limit, or the requested code power for a mobile is higher than the permitted level.
Downlink capacity-limited scenarios are likely to occur in suburban or urban environments, where the network has been planned for a relatively high uplink
cell loading [68]. The traffic associated with a downlink capacity-limited scenario
is generally asymmetric, with a greater amount of traffic in the downlink.

4.4.1 Cell Power Limitation

Downlink capacity-limited scenarios due to the maximum cell power are likely to occur where the network has been configured with a low base station transmit
power capability, which may have been done in some circumstances to reduce
the requirement for power amplifier modules. In Equ. (4.4) the total required
transmit power from the serving base station is shown.

P_T = \sum_{n=1}^{N} P_{TX,n} + P_{common} \qquad (4.4)

In Equ. (4.4), P_{TX,n} is the required code power for the connected user n, N is the number of connected users, and P_{common} is the overall transmit power of the common channels. In general, approximately 20 % of the maximum cell power P_{T,max} is assigned to the pilot and common control channels. The remaining 80 % is available to support traffic channel capacity.
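The power bookkeeping of Equ. (4.4) is a simple sum in the linear domain (a Python sketch; the 36 dBm common-channel power and the per-link code powers are assumed example values, roughly 20 % of a 20 W cell):

import math

def total_cell_power_dbm(code_powers_dbm, common_dbm=36.0):
    """Total DL transmit power P_T of a cell, Equ. (4.4), summed in mW."""
    total_mw = (sum(10 ** (p / 10) for p in code_powers_dbm)
                + 10 ** (common_dbm / 10))
    return 10 * math.log10(total_mw)

# 20 users with 28 dBm code power each plus 36 dBm of common channels:
print(round(total_cell_power_dbm([28.0] * 20), 1), "dBm")   # -> 42.2 dBm
# This is just below a typical P_T,max of 43 dBm; a few more users would
# exceed the cell power limit.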


When the base station reaches its maximum transmit power level, PT = PT,max
(where PT,max is the maximum base station transmit power capability), it cannot
allocate extra power to an additional user even if the cell is not highly loaded. In
this case, additional users cannot be added without modifying the base station
configuration.
All active users belonging to a cell, including the mobiles connected in soft handover, share the total transmit power P_T. Hence, a lower average code power requirement \bar{P}_{TX} (\bar{P}_{TX} = \frac{1}{N}\sum_{n=1}^{N} P_{TX,n}) results in a higher cell capacity. Furthermore, it is possible to increase the number of served users by reducing the soft handover overhead. Soft handover links only occur at the cell border, where the mobiles experience maximum path loss and therefore require higher code power.
Table 4.3 shows a range of typical W-CDMA base station transmit power configurations P_{T,max}. The capacity offered by each transmit power configuration P_{T,max} is a function of the traffic profile as well as of the maximum propagation loss defining the cell range. The greater the propagation loss, the greater the average code power \bar{P}_{TX} and the lower the cell capacity. In other words, the smaller the cell size, the lower \bar{P}_{TX} and the higher the cell capacity.
Base station Tx power per cell per carrier    Application
37 dBm (5 W)     Provides low capacity for scenarios where the main objective is providing service coverage.
40 dBm (10 W)    Provides medium capacity per carrier. Two carriers each configured with 10 W provide greater capacity than one carrier configured with 20 W.
43 dBm (20 W)    Provides high capacity per carrier.
46 dBm (40 W)    Provides increased capacity per carrier when the maximum allowed propagation loss is relatively high.

Table 4.3: Typical base station transmit power configurations (source: [68]).
As we can see from Equ. (4.4), a part of the total transmit power of the base
station is assigned to the common pilot channel (CPICH) and the other common channels. Consequently, by reducing the CPICH power and the powers of
the other common channels, more power will be available to support the traffic
channel capacity.
The reasons why the maximum transmit power P_{T,max} of a base station is reached are summarized in the following list:


- The network is planned with a too high uplink cell load η_UL.
- Low base station transmit power capability.
- Greater traffic on the downlink (asymmetric traffic, e.g. data users).

4.4.2 Orthogonal Variable Spreading Factor (OVSF) Code Limitation

In W-CDMA, spreading and scrambling operations are performed in two steps. The user signal is first spread by the channelization code, called orthogonal variable spreading factor (OVSF) code, and then randomized by the scrambling code. In the downlink, scrambling codes are used for the separation of the individual cells, with a maximum of 512 available codes. The separation of the individual downlink connections for different users within one cell is done by the OVSF codes. A detailed description of the scrambling and spreading operations is presented in Section 2.2.2.
Channelization codes become the limiting factor in relatively high-throughput scenarios [68]. This is likely to occur in either microcell or indoor scenarios, where the cell range is limited and code orthogonality is high.
One possible solution to avoid OVSF code limitation is to reduce the number of SHO links. Hence, the number of OVSF codes available for serving links increases. This is because SHO connections require independent channels from several base stations at the same time and therefore reduce the available OVSF codes.
Another possibility when the channelization codes become the limiting factor would be to use a second scrambling code to introduce a second channelization code tree. The two code trees would not be orthogonal to one another and would therefore cause higher intra-cell interference.

4.4.3 Code Power Limitation

In the downlink, each dedicated link needs a certain transmit power to reach the
Eb /N0 requirement for a sufficient connection to the mobile. In Equ. (4.5) the
required transmit power for a certain link is presented.

P_{TX} \geq \frac{\rho R}{W} \cdot \frac{\sum_k \theta_k \, L_{p_k} \left(I_{tot} - \alpha_k I_k + N_{MS}\right)}{\theta_k} \qquad (4.5)


Here N_MS is the background noise level at the mobile station, I_tot is the total wideband interference power received at the mobile station, I_k is the total wideband power received at the mobile station from base station k, L_{p_k} is the link loss from base station k to the mobile station, α_k is the orthogonality factor of cell k, and θ_k is the scaling factor (relative maximum link power) for the different base stations in the active set.
The code power P_TX is limited per single traffic link. This limitation is defined as the maximum transmitted power P_TX,max on one channelization code on a given carrier. If the code power P_TX requested by a mobile is higher than the permitted level (P_TX > P_TX,max), the mobile will not be admitted.

4.5 Summary

Understanding the limitation mechanisms for service coverage and system capacity forms an essential part of being able to develop effective capacity optimization
strategies for UMTS radio networks.
Service coverage and system capacity can be either uplink or downlink limited. Coverage is generally uplink limited, although a low base station transmit
power capability combined with asymmetric data services may lead to a downlink coverage-limited scenario. Capacity may be either uplink or downlink limited
depending upon the planned level of uplink loading, the base station transmit
power capability, the traffic loading of the network and the performance of the
network.
There are various available techniques to increase the network capacity. According to [68], the simplest and most effective way to increase system capacity is
to add one or more carriers. When all the available carriers have been used,
then other methods such as additional scrambling codes, mast head amplifiers
and active antennas, remote HF head amplifiers, high-order receive diversity,
downlink transmit diversity, beamforming, sectorization, repeaters, micro cells
or smart antennas [16, 45] can be applied. In [68] most of these techniques are described in more detail.
There are also several possible ways to increase the network capacity without additional infrastructure and cost investment, for instance:
- Minimizing the intra- and inter-cell interference
- Shrinking the service coverage area
- Optimizing the CPICH power allocation

- Optimizing SHO links and overhead


This thesis focuses on increasing the network capacity by system-level optimization of the base station configuration. Hence, the considered capacity-limited scenarios are:
- Cell load limitation
- Cell power and OVSF code limitation

Uplink cell load, base station transmit power and OVSF code utilization are powerful measures for radio network optimization.


Chapter 5
Key Optimization Parameters

5.1 Introduction

In this chapter the parameters, which are used in this thesis for optimizing the
capacity in a UMTS FDD network, are described in more detail.
Careful configuration of the many network and cell parameters is required and
crucial to the network operator, because they determine the capability to provide
services, influence the quality of service (QoS), and account for a major portion
of the total network deployment and maintenance costs. However, there are
numerous configurable base station parameters which are multi-dimensional and
interdependent, and their influence on the network is highly non-linear.
The following list shows the most important parameters:
- Antenna settings
  - Antenna azimuth
  - Antenna tilt
  - Height
  - Antenna pattern
- Primary common pilot channel (P-CPICH) power level
- Soft handover parameters
  - Active set size
  - Active set window

All these parameters have a strong influence on the interference in the system
and therefore on the number of served mobile terminals (capacity of the network). The optimization algorithms described in this thesis focus on optimizing
P-CPICH power as well as antenna tilt and antenna azimuth. So, in the following
a description of these three parameters is presented. Furthermore, the influence of
these key optimization parameters on the network, especially on system capacity
and coverage will be explained.

5.2 Antenna Parameters

The antenna parameters are the most important ones related to the interference
situation in the network. Besides the height of the antenna and the used pattern,
the azimuth angle and the elevation angle can be tuned. The height of the antenna
as well as the antenna azimuth can only be changed with a higher operating expense.
However, changing antenna tilt and antenna pattern is associated with less effort.

5.2.1 Antenna Azimuth

This thesis focuses on base stations with 3 sectors (cells) and a fixed spacing of
120° between the three antennas. When adjusting the antenna azimuth, all three antennas are turned in the same direction at the same time, so that the spacing between them is kept constant at 120°, as shown in Figure 5.1. The arrows
symbolize the directions of the main beams of the antennas.

Figure 5.1: Adjustment of base station azimuth.

For finding the optimum azimuth settings in a network, the interference has to be
taken into account. The goal of the azimuth optimization in this work is to reduce
the intra- and inter-cell interference. As a result the capacity of the network will
be increased. In Figure 5.2, the horizontal pattern of the used KATHREIN 739707
1

A mobile terminal is defined as served, if it is able to access a network service.

The pattern shows a difference in antenna gain of about
6 dB between the main direction of the antenna (0°) and an angle of 60°
(at this angle the adjacent sectors of this base station begin, and there the mobile
stations will initiate a handover to the neighboring cell). Due to that difference
of 6 dB, the direction of the main beam of the antenna is quite significant and
thus it is important to adjust the azimuth of the antennas in order to reach the
highest antenna gain for the users in the own cell, as well as the lowest gain (or
highest attenuation) for the mobile stations located in neighboring cells. This
way, less power is needed for covering the area, and therefore less interference is
generated.

Figure 5.2: Horizontal pattern of base station antenna (in dB).

5.2.2 Antenna Tilt

The antenna tilt is defined as the elevation angle of the main beam of the antenna
relative to the azimuth plane. Since the tilt is usually set in the direction down
to the ground, the term downtilt is often used. A positive downtilt is defined
as the negative elevation angle of the main beam of the antenna relative to the
horizontal plane (see Figure 5.3). The service area in Figure 5.3 is the own cell
and the far-end interference area is the area of the adjacent cells.
Figure 5.3: Adjustment of base station downtilt.

The antenna downtilt can be implemented mechanically as well as by electrical tilting. These two tilting mechanisms have different effects: when using mechanical tilting, the antenna pattern itself stays constant and is only tilted,
while with electrical tilting the antenna pattern changes when adjusting the tilt.
Due to the complexity of analyzing the system with a changing antenna pattern for every tilt value, in this thesis only mechanical tilting is applied. So, the optimization algorithms in this work are evaluated on networks with one fixed antenna pattern (KATHREIN 739707 antenna [64]) with a fixed predefined electrical tilt of 3° included in the vertical antenna pattern. Figure 5.4 shows the used vertical antenna pattern. A very detailed examination of the effect of electrical
and mechanical antenna down-tilting in UMTS networks can be found in [44].
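Mechanical tilting can be modeled by shifting the elevation angle at which the fixed vertical pattern is evaluated. A minimal sketch of this idea (Python; the function and the table layout are assumptions, not the simulator's interface):

def antenna_gain_db(h_pattern, v_pattern, azimuth_deg, elevation_deg,
                    mech_tilt_deg, max_gain_dbi=16.0):
    """Gain towards a user for a mechanically tilted antenna (simplified).

    The pattern tables stay fixed; the mechanical tilt only shifts the
    elevation angle used for the vertical-pattern lookup. The horizontal
    pattern is assumed unaffected, which is a simplification.
    """
    h_att = h_pattern[int(round(azimuth_deg)) % 360]                 # attenuation [dB]
    v_att = v_pattern[int(round(elevation_deg + mech_tilt_deg)) % 360]
    return max_gain_dbi - h_att - v_att

# usage: h_pattern / v_pattern are 360-entry attenuation tables; a user seen
# 2 deg below the horizon with 6 deg mechanical downtilt is looked up at 8 deg.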

Figure 5.4: Vertical pattern of base station antenna (in dB).


Antenna downtilt is often used in mobile wireless systems, particularly in the
UMTS network, where traffic in all cells is simultaneously supported using the
same carrier frequency. The desired effect is a reduction of the other-to-own cell
interference ratio i, which is defined according to [68] in Equ. (5.1).

i = \frac{I_{oth}}{I_{own}} \qquad (5.1)

In Equ. (5.1) Ioth denotes the inter-cell interference (interference from the other
cells) and Iown is the intra-cell interference (interference from the own cell).
By down-tilting the antennas, the other-to-own-cell interference ratio i can be
reduced: The antenna main beam delivers less power towards the neighboring
base stations, and therefore most of the radiated power goes to the area that is
intended to be served by this particular base station [20]. Due to the fact that
the interference in the system is decreasing, the capacity increases and more users
can be served in the network. However, down-tilting the antenna will also reduce
the sectorization efficiency, which will decrease the cell capacity.
Both the other-to-own-cell interference ratio i and the sectorization efficiency affect the
overall network capacity. According to [20], for small and moderate antenna
downtilt angles, the improvement provided through inter-cell interference rejection dominates, and a net increase in capacity can be achieved. For larger downtilt
values, the reduction of sectorization efficiency dominates, and the result is a net
decrease in capacity.
Additionally, antenna tilt adjustment also affects the cell coverage area. Too much down-tilting may cause the service area to become too small, and holes in the coverage of the network can occur. Furthermore, if the down-tilting
reaches a certain value, the interference in the neighboring cells increases again
due to the side lobes of the vertical antenna pattern. Figure 5.5 in Section 5.4
shows this effect. In [88] it is shown that, for smaller inter-cell site separation,
higher downtilt is required to mitigate the inter-cell interference. As the inter-cell
site separation increases, smaller downtilt is advantageous, offering higher gains
to distant users. Hence, the impact on the cell coverage area limits the tilt to
reasonable values.
The simulation analysis of [69] shows that the optimum value for the antenna tilting depends on the propagation environment, the cell site, user locations, and the
antenna radiation pattern. Furthermore, in order to achieve the highest number
of served users, it is very crucial to effectively control the inter-cell interference
and soft handover overhead. It also stated that due to the antenna radiation
pattern side lobes and nulls, there could be some variations of i and coverage
probability can occur as a function of the antenna tilting angle.
In [72] it is demonstrated that antenna tilt tuning can also help to relieve congestion in hot-spot sectors and maintain the blocking probability at an acceptable
level.
Detailed descriptions of the effect of antenna tilt on the system capacity are
presented in [20, 60, 69, 72, 88].


5.3 Common Pilot Channel (CPICH) Power

The common pilot channel in UMTS consists of two subchannels, the primary
CPICH (P-CPICH) and the secondary CPICH (S-CPICH). A detailed description
of the CPICH channels is given in Section 2.3.2. The algorithms in this thesis
focus on the optimization of the P-CPICH power, and therefore the term CPICH
will be used for P-CPICH in the following.
The CPICH is very important for handover, cell selection and cell reselection. After turning on the power of the mobile station and while roaming in the network,
the mobile measures and reports the received level of chip energy to interference
plus noise density ratio (Ec /I0 ) on the CPICH to the base station for the cell
selection procedures. Ec is the average energy per pseudo noise (PN) chip, and
I0 denotes the total received power density, including signal and interference, as
measured at the mobile station antenna connector (see Footnote 2). This E_c/I_0 ratio is given by Equ. (5.2),

\frac{E_c}{I_0} = \frac{RSCP_{CPICH}}{RSSI} \qquad (5.2)

where the received signal code power (RSCPCP ICH ) is the received power of the
CPICH measured at the mobile station. It can be used to estimate the path loss,
since the transmission power of the CPICH is either known or can be read from
the system information. The received signal strength indicator (RSSI) is the
wideband received power within the relevant channel bandwidth in the downlink.
The cell with the highest received CPICH level at the mobile station is selected
as the serving cell. As a consequence, by adjusting the CPICH power level, the
cell load can be balanced between neighboring cells, which reduces the inter-cell
interference, stabilizes network operation and facilitates radio resource management [103]. Reducing the CPICH power of one cell causes part of the terminals
to hand over to adjacent cells, while increasing it invites more terminals to hand
over to the own cell, as well as to make their initial access to the network in that
cell.
During the radio network planning process, the CPICH transmit power of the
base stations should be set as low as possible, while ensuring that the serving cells
and neighboring cells can be measured and synchronized to, and the CPICH can be used as a phase reference for all other downlink physical channels.

Footnote 2: There exist quantities that are a ratio of energy per chip to PSD. This is the common practice of relating energy magnitudes in communication systems. If both energy magnitudes in the ratio are divided by time, the ratio is converted from an energy ratio to a power ratio, which is more useful from a measurement point of view. It follows that an energy per chip of X dBm/3.84 MHz can be expressed as a mean power per chip of X dBm. Similarly, a signal PSD of Y dBm/3.84 MHz can be expressed as a signal power of Y dBm.

Too high values
of CPICH power will cause the cells to overlap and therefore create interference
to the neighboring cells, called pilot pollution, which will decrease the network
capacity. Furthermore, the CPICH power is part of the total transmit power
of the base station, which is generally limited. Thus, less CPICH power would
provide more power for the traffic channels, and therefore increase the capacity.
On the other hand, the mobile stations are only able to receive the CPICH down
to a certain threshold level of Ec /I0 , which determines the coverage area. Due to
that fact, setting the CPICH power too low will cause uncovered areas between
the cells. In an uncovered area, CPICH power is too weak for the mobile to
decode the signal, and call setup is impossible. According to the specifications of
the Third Generation Partnership Project (3GPP), the mobile must be able to
decode the pilot from a signal with Ec /I0 of -20 dB [2].
To make Equ. (5.2) easier to understand, the E_c/I_0 ratio can also be written as Equ. (5.3).

CPICH_{E_c/I_0} = \frac{\frac{P_{CPICH}}{L_p}}{\sum_{i=1}^{numBSs} \frac{P_{TX,i}}{L_{p_i}} + I_{ACI} + N_0} \qquad (5.3)

Here P_CPICH is the CPICH power of the best server, L_p is the link loss to the best server, P_TX,i is the total transmit power of BS i, L_{p_i} is the link loss to base station i, I_ACI is the adjacent channel interference, N_0 is the thermal noise of the mobile and numBSs is the number of base stations in the network.
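Equ. (5.3) translates directly into a small helper function (a Python sketch; powers are linear values in mW and all names are mine):

import math

def cpich_ec_i0_db(p_cpich, l_p, p_tx, l_pi, i_aci=0.0, n0=2e-11):
    """CPICH Ec/I0 in dB according to Equ. (5.3).

    p_cpich : CPICH power of the best server [mW]
    l_p     : link loss to the best server (linear factor)
    p_tx    : list of total transmit powers of all base stations [mW]
    l_pi    : list of link losses to these base stations (linear factors)
    i_aci   : adjacent channel interference [mW]
    n0      : thermal noise power of the mobile [mW] (2e-11 mW = -107 dBm)
    """
    signal = p_cpich / l_p
    interference = sum(p / l for p, l in zip(p_tx, l_pi)) + i_aci + n0
    return 10 * math.log10(signal / interference)

# e.g. a 2 W CPICH over 120 dB path loss with two cells transmitting 10 W each:
# cpich_ec_i0_db(2000.0, 1e12, [10000.0, 10000.0], [1e12, 1e13]) -> about -7.4 dB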
The soft handover (SHO) area can also be controlled by the strength of the
CPICH power. By reducing the CPICH power, the SHO areas will decrease.
However, a certain amount of overlapping cell boundaries is necessary for mobiles
near the cell border to perform SHO and to counteract fluctuations of receiving
signal power.
In conclusion, the level of the CPICH power is very important for reaching a high
capacity in the system. Therefore, the CPICH power level is a key optimization
parameter and is included in the optimization strategies for the increase of the
capacity in this thesis. The power levels of the other common channels (PCH,
SCH,...) are typically set with respect to the CPICH power level [68]. Therefore,
the optimization algorithms in this thesis change the power of the PCH and SCH
in the same way as the CPICH power. For example, if the CPICH power is decreased by 1 dB, the power of PCH and SCH is also reduced by 1 dB.
Further information regarding the influence of the adjustment of the CPICH power on the system capacity can be found in [60, 65, 68, 72, 91, 103].

5.4 A Short Investigation of CPICH and Antenna Tilt Changing

To show the influence of CPICH power and antenna tilt, different settings for
all base stations are evaluated on the small scenario. In Section 7.4.2 this scenario
is introduced. It consists of 9 base stations equipped with 3-sector antennas,
thus comprising 27 cells. In this scenario only speech users are used instead
of two-way 64 kbit/s data users, which are normally used for the small network
scenario. Figure 7.4 shows the distribution of the base stations as well as the user
distribution.
For this investigation the antenna downtilt was varied from 0° up to 16°, in steps of 1°, with the same value for the antenna tilt in each cell. Further, the CPICH
power value was changed from 33 dBm in 1 dB steps down to 10 dBm, again with
the same value in each cell. All possible combinations for these two parameters
were evaluated on the small scenario with the UMTS FDD network simulator in
CPICH coverage verification mode 1 (see Section 7.2.2.1). A 3D-plot in Figure 5.5
shows the results of this investigation.
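The exhaustive sweep itself is straightforward (a Python sketch; evaluate_network stands for one run of the static simulator and is a placeholder, not its real interface):

# Grid sweep over common CPICH power and common downtilt settings.
# evaluate_network(tilt_deg, cpich_dbm) is assumed to return the number of
# served users for one network evaluation.
def sweep(evaluate_network):
    results = {}
    for tilt in range(0, 17):                 # 0 ... 16 degrees, 1 degree steps
        for cpich in range(33, 9, -1):        # 33 ... 10 dBm, 1 dB steps
            results[(tilt, cpich)] = evaluate_network(tilt, cpich)
    return results                            # data behind the 3D plot in Figure 5.5

This amounts to 17 x 24 = 408 network evaluations.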

Figure 5.5: Capacity for different CPICH power and antenna tilt settings. (Capacity is measured as served users.)

In Figure 5.5 the x- and y-axis show the key optimization parameters CPICH
power and antenna tilt. The z-axis shows the number of served users in the

5.5. SUMMARY

69

network. From the plot we see that if the CPICH power is varied by the same
value in each cell, there is no strong influence on the number of served users.
This means that the absolute level of the CPICH power is not so important
for the capacity, because the serving cell areas remain the same. However, if
the cells had different power values for the CPICH, this would influence the serving cell areas in the network and thus cause an increase or decrease in capacity,
depending on the setting.
The antenna tilt affects the capacity in a different way. From Figure 5.5 we can
divide the antenna tilt range into 4 areas:
- area1: 0°-6°
- area2: 6°-11°
- area3: 11°-13°
- area4: 13°-16°
In area1 the capacity increases, if the antenna downtilt value increases. This is
due to the reduction of inter-cell interference. In area2 the capacity decreases
due to the antenna radiation pattern (see Figure 5.4 in Section 5.2.2). The first
side lobe in the antenna pattern (compare with Figure 5.4) increases the capacity
in the network in area3. From a downtilt of 13° (area4) up to higher values the
capacity decreases again. Note that these results are only valid for a scenario
with a plane terrain. In a scenario with hilly terrain the situation can be totally
different.

5.5 Summary

Adjusting the antenna parameters antenna tilt and antenna azimuth as well as
the CPICH power makes it possible to increase the network capacity by:
1. Reducing inter-cell interference and pilot pollution.
2. Optimizing base station transmit power resources.
3. Load sharing and balancing between cells.
4. Optimizing SHO areas.


The amount of transmit power for the CPICH is not specified in the 3GPP
standard. So, it is up to the network operator to assign appropriate power levels
to the CPICH. In [68] it is suggested that the CPICH power be set to about 5-10 % of the total cell transmit power capability.
Despite the challenges associated with adjusting the CPICH power, antenna tilt and antenna azimuth settings, it is clear that these parameters provide low-cost techniques for optimizing expensive UMTS networks, because no additional expenditure in infrastructure is necessary.

Chapter 6
Fitness Function and Performance Indicators

6.1 Introduction

In this chapter the performance indicators of the various optimization algorithms are presented. The indicators are the basis of the decisions for the CPICH power and
antenna tilt adjustments. The algorithms presented in Section 8.2 and Section 8.3
use the number of outaged mobiles as well as the quality factor QF, introduced
by SYMENA, Software & Consulting GmbH, for these decisions.
First, the used fitness functions for UMTS network optimization in this work are
introduced in Section 6.2. Section 6.3 describes the definition of the Grade of Service (GoS), which is an important side constraint for the optimization throughout
this thesis. The two important performance indicators, the number of outaged
mobiles and the quality factor, respectively, will be described in Section 6.4.1
and 6.4.2.

6.2 Fitness Function

For the evaluation of the network, a fitness function has to be defined. The fitness
function represents the optimization goal. This thesis focuses on the capacity
optimization in a UMTS FDD network. So, the main goal of the optimization
process is to increase the number of served users.
In this section two fitness functions, which are used during this work, are presented. First, a basic version is introduced, which is used for the local optimization algorithms (Section 8.2). During the development of the genetic algorithm
(global optimization technique) in Section 8.3, the fitness function was improved
by including the coverage area as well as the soft handover.

6.2.1 Basic Fitness Function

As mentioned before, the goal of the optimization is to increase the capacity. So,
the basic fitness function considers the number of served users in the network
as the goal of optimization. Equ. (6.1) shows the fitness function for one test
solution i1 .

g(i) =

cells
X

servedk

(6.1)

k=0

In Equ. (6.1), served_k is the number of served users of cell k, and cells is the
number of total cells in the network.

6.2.2 Extended Fitness Function

The basic fitness function (see Equ. (6.1)) was extended for the use of the genetic
algorithm. As before, the number of served users is used again in the fitness
function. In addition to the capacity, also the coverage area and the number of
mobiles in soft handover are taken into account for the fitness calculation. This
is done to increase the accuracy of the fitness value and to allow the algorithm
to differentiate between two otherwise equal solutions: If two solutions have the
same number of served users, the solution with the larger coverage area and
fewer mobiles in SHO has a higher fitness value. In Equ. (6.2) the extended
fitness function g(i) for one test solution i is shown.

g(i) = \sum_{k=0}^{cells} served_k + g_{cov} + g_{SHO} \qquad (6.2)

In Equ. (6.2), servedk is the number of served users of cell k, and cells is the
number of total cells in the network. The term gcov represents the coverage probability of the pixels in the simulation area (covered pixels over existing pixels),
scaled between 0 (no coverage) and 1 (all pixels are covered). The SHO proportion is taken into account by the term gSHO in Equ. (6.2). In order to serve more
mobiles in a cell, the idea is to reduce the number of SHO links in this cell. In
Equ. (6.3) the calculation of gSHO is shown.
Footnote 1: In this thesis, one test solution means one evaluated network state with one particular parameter setting (antenna azimuth, antenna tilt and CPICH power).

g_{SHO} = 1 - \frac{1}{cells} \sum_{k=0}^{cells} \frac{g_{SHO,cell}(k)}{g_{SHO,max}} \qquad (6.3)

The term gSHO,cell (k) in Equ. (6.3) denotes the number of SHO links in cell k
and gSHO,max is the maximum over all gSHO,cell (k). The range of gSHO is between
0 and 1. A high value of gSHO means a low SHO rate and a low value of gSHO
characterizes a high amount of SHO connections.
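The extended fitness calculation of Equ. (6.2) and Equ. (6.3) can be sketched as follows (Python; the per-cell statistics are assumed to come from one simulator evaluation, and the argument names are mine):

def fitness(served, sho_links, covered_pixels, total_pixels):
    """Extended fitness g(i) = sum(served_k) + g_cov + g_SHO, Equ. (6.2)/(6.3).

    served    : list with the number of served users per cell
    sho_links : list with the number of SHO links per cell
    """
    g_cov = covered_pixels / total_pixels                  # coverage term in [0, 1]
    sho_max = max(sho_links) or 1                          # avoid division by zero
    g_sho = 1.0 - sum(s / sho_max for s in sho_links) / len(sho_links)
    return sum(served) + g_cov + g_sho

# e.g. fitness([40, 35, 50], [5, 2, 10], 9500, 10000) -> about 126.4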
There is always a tradeoff between capacity and coverage, QoS and costs. The
goal in this thesis is to increase network capacity in a cost effective way, while
maintaining the coverage and QoS. Figure 6.1 shows the tradeoff between these
factors.

Figure 6.1: The three tradeoffs of UMTS optimization (source: [75]).

6.2.3 Used Fitness Function

Several optimization algorithms are presented in this thesis, using either the
basic fitness function presented in Section 6.2.1 or the fitness function from Section 6.2.2. Table 6.1 summarizes which fitness function is used for which algorithm.



Algorithm                          Section   Used fitness function
Rule Based Approach                8.2.1     g(i) = \sum_{k=0}^{cells} served_k
Simulated Annealing                8.2.2     g(i) = \sum_{k=0}^{cells} served_k
Adaptive Rule Based Approach       8.2.3     g(i) = \sum_{k=0}^{cells} served_k
Genetic Algorithm                  8.3       g(i) = \sum_{k=0}^{cells} served_k + g_{cov} + g_{SHO}
Analytic Optimization Algorithm    8.4       g(i) = \sum_{k=0}^{cells} served_k (*)

(*) The fitness function value is not considered by the Analytic Optimization Algorithm, but it was used during the development of the algorithm. Further details are described in Section 8.4.

Table 6.1: Assignment of fitness function to optimization algorithm.

6.3 Grade of Service

Besides the fitness value, a second indicator is very important during the optimization process, which is used to characterize the proportion of the users that
can be provided with a service. This value is called Grade of Service (GoS) and
describes the ratio of served users over all existing users (see Footnote 2). In this thesis the GoS is defined as

GoS = \frac{served}{existing} \qquad (6.4)

In Equ. (6.4), served denotes the total number of served users in a defined area
(e.g. the whole simulation area), and existing is the total number of simulated
users in the same area. During the optimization process, GoS increases from its
initial value of 95 % until it has reached 100 %. Then all users are served and the
optimization algorithm cannot proceed any further. However, the network could
accept more users. Thus, the different optimization algorithms developed in this
thesis apply the following approach: When GoS reaches a value of 96 %, new
users are added to the network until the initially defined GoS of 95 % is reached
again.
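This Add Users mechanism amounts to a simple control loop around the network evaluation (a Python sketch; add_users and evaluate are placeholders for simulator functionality, not its real interface):

def maintain_gos(network, evaluate, add_users, gos_target=0.95, gos_trigger=0.96):
    """Keep the GoS near its planned value by adding users when it rises too high."""
    served, existing = evaluate(network)                 # one network evaluation
    if served / existing >= gos_trigger:                 # GoS has reached 96 %:
        while served / existing > gos_target:            # add users (same service
            add_users(network)                           # mix) until the initial
            served, existing = evaluate(network)         # 95 % GoS is reached again
    return served / existing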
The calculation of the number of additional users is done within the network
evaluation with the UMTS FDD network simulator described in Section 7.2.

Footnote 2: Note that some literature defines GoS as the probability of a call being blocked or delayed for more than a specified interval.

The

service mix (40 % speech users and 60 % 64 kbit/s data users) after adding the additional users remains the same. For the big network scenario (see Section 7.4.1)
the new users are only added in the optimization area (usergroups optarea1 and
optarea2 ), and not in the whole network scenario. However, in the small scenario
the additional users are added in the whole network. Remember, the optimization
area in the small scenario covers the whole network scenario (see Section 7.4.2).
In this thesis, the function for adding the additional users is referred to as Add
Users.

6.4 Performance Indicators

6.4.1 Outaged Mobiles

The first performance indicator that is used for the algorithms is the number of outaged users of a cell. A user is put to outage for several reasons, the so-called outage reasons. Chapter 4 gives a general description of the coverage- and capacity-limiting factors, which reflect the outage reasons.
The evaluation of a network scenario with the static UMTS FDD network simulator delivers the number of outaged mobiles per cell. The following list shows the outage reasons used by the simulator:
- DL outage:
  - DL cell power: If the total base station power is too high, the connection with the highest code power is closed. This is repeated until the base station transmit power is below the predefined maximum value. The outage priority could also be on the existing SHO connections (not implemented in the used simulator). The latter criterion usually equals the former, since SHO connections in most cases have to be transmitted with higher power levels than active links. This is obvious, since SHO connections only occur at the cell border and they require higher DL code power levels than active connections.
  - DL code power: If the required code power for a mobile is too high, this mobile is put to outage.
  - OVSF code limitation: In that case the number of connections has to be reduced. Similar to the procedure regarding the maximum base station power, the priority is on the connection with the highest power, but could also be on the SHO links.
  - Ec/I0 requirement: Mobiles which do not fulfill the Ec/I0 requirement are put to outage (only in CPICH coverage verification mode 2, see Section 7.2.2.2).
- UL outage:
  - UL power: If the required transmit power of the mobile exceeds the maximum transmit power, the connection will be closed.
  - UL load: If the actual load exceeds the planning threshold (also called system design load), the user with the highest transmit power will be put to outage. This procedure is repeated until the system load is below the predefined maximum system load.

6.4.2 Quality Factor

To describe the quality of a cell in the network, a performance indicator denoted as quality factor (QF) is introduced. The QF indicates whether a cell is heavily
loaded or not, and it is the minimum of the following three values:
1. cell load factor
2. cell pwr factor
3. ovsf factor
The cell load factor is a measure of the uplink cell loading condition and is defined as

\text{cell load factor} = \frac{\text{load threshold} - \text{cell load}}{\text{load threshold}} \qquad (6.5)

In Equ. (6.5), load threshold denotes the planned uplink cell load and cell load
is the actual cell loading. The second value, cell pwr factor, is a measure of the base station transmit power utilization in the downlink and is defined as

\text{cell pwr factor} = \frac{\text{max cell pwr} - \text{cell tx pwr}}{\text{max cell pwr}} \qquad (6.6)

In Equ. (6.6), max cell pwr denotes the maximum transmit power of a cell and
cell tx pwr represents the current total transmit power of that cell. The third
value, ovsf factor, is a measure of the OVSF code utilization in the downlink and is defined as

\text{ovsf factor} = \frac{\text{ovsf limit} - \text{ovsf utilization}}{\text{ovsf limit}} \qquad (6.7)

In Equ. (6.7), ovsf limit is the maximum available number of OVSF codes, which
is 512; ovsf utilization is the number of used OVSF codes. For example, a voice
call occupies 2 codes, so 256 voice users can be served. If there are only 64 kbit/s
data users, 32 users can be served, because one 64 kbit/s data user occupies 16
codes.
The range of each of these three factors is between zero and one. For the QF
the minimum of these three values is taken and therefore the range of the QF
is between zero and one. A low value for QF describes a heavily loaded cell,
and a high value for QF describes a weakly loaded cell. Therefore, by using
the QF as the performance indicator, CPICH power and antenna tilt settings
can be adjusted adaptively according to the loading condition in the uplink and
downlink of a cell.
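A direct transcription of Equ. (6.5)-(6.7) (a Python sketch; the argument names are mine, powers are linear values):

def quality_factor(cell_load, load_threshold, cell_tx_pwr, max_cell_pwr,
                   ovsf_used, ovsf_limit=512):
    """Quality factor QF of a cell: the minimum of the three utilization factors."""
    cell_load_factor = (load_threshold - cell_load) / load_threshold    # Equ. (6.5)
    cell_pwr_factor  = (max_cell_pwr - cell_tx_pwr) / max_cell_pwr      # Equ. (6.6)
    ovsf_factor      = (ovsf_limit - ovsf_used) / ovsf_limit            # Equ. (6.7)
    return min(cell_load_factor, cell_pwr_factor, ovsf_factor)

# e.g. quality_factor(0.45, 0.5, 15.0, 20.0, 400) -> 0.1 (a heavily loaded uplink)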

6.4.3 Summary of Performance Indicators

In this chapter two performance indicators, the number of outaged mobiles as well
as the quality factor QF, were introduced. Not all algorithms use both indicators. Table 6.2 gives an overview of which indicator is used for which algorithm.
Algorithm                           Outaged mobiles    Quality factor QF
Rule Based Approach
Simulated Annealing
Adaptive Rule Based Approach
Genetic Algorithm
Analytic Optimization Algorithm (*)

(*) The Analytic Optimization Algorithm uses different parameters as basis for the adjustment of the parameters; these are described in Section 8.4.

Table 6.2: Used performance indicators for the different algorithms.


Chapter 7
Simulation Environment

7.1 Introduction

For the evaluation of the different optimization techniques, the algorithms have
to be evaluated on a network scenario. Furthermore, a network simulator is
necessary to calculate the coverage and capacity relevant information.
In this chapter the used static UMTS network simulator and the interaction
with the optimization algorithms are described in Section 7.2 and Section 7.3.
Further, the network scenarios, on which the different algorithms are evaluated,
are presented in Section 7.4.

7.2 The Used UMTS FDD Simulator

For the evaluation of the network configuration in this thesis, a static UMTS
FDD network simulator based on the Monte Carlo approach is used. This approach utilizes a sufficient number of independent snapshots of potential user distributions with one fixed network configuration, which allows the compilation of meaningful statistics of the system performance parameters. In the static approach, each single snapshot corresponds to a network situation in equilibrium, which means that the physical layer procedures like power control are applied iteratively for each user in the system. In this way the near-far problem of the real system can be combated and a stable system condition is found, as it would result from fast power control in real time. Due to the static scenario there is no need for a real-time fast power control implementation.
The Monte Carlo approach takes not only propagation conditions into account,
but also the changes of the individual services, data rates, requirements, user

positions and some other time-invariant parameters like SHO-gain. In the simplest case the different snapshots (realizations) in a run represent various user
positions in the network area. For each realization (the realizations are initialized at the
beginning of each run) an iteration is done until a certain convergence criterion
is fulfilled. This criterion can, as an example, be based on the variation of the
TX power per iteration for all mobiles in the system. If the change request for
the transmit power of a terminal between two consecutive iterations is below a
certain threshold for, e.g. 95 % of all mobiles, convergence is attained. When this
stable situation has been reached, the number of mobiles able to achieve their
performance targets can be determined and a new realization can be computed.
For a representative statistical evaluation it is important to perform sufficient
realizations of the investigated configuration. The principles of the Monte Carlo
approach as well as a schematic flowchart of a static simulator are shown in
Figure 7.1.

Figure 7.1: Schematic principle of a static Monte Carlo simulator.
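The snapshot iteration with its power-control convergence check can be sketched as follows (Python; the 0.5 dB threshold and all names are assumed examples, only the 95 % criterion follows the description above):

def run_snapshot(mobiles, update_tx_powers, threshold_db=0.5, quota=0.95,
                 max_iterations=1000):
    """Iterate power control for one snapshot until convergence is attained."""
    tx = {m: 0.0 for m in mobiles}                    # initial TX powers [dBm]
    for _ in range(max_iterations):
        new_tx = update_tx_powers(tx)                 # one power-control iteration
        stable = sum(1 for m in mobiles
                     if abs(new_tx[m] - tx[m]) < threshold_db)
        tx = new_tx
        if stable >= quota * len(mobiles):            # e.g. 95 % of all mobiles
            break                                     # converged: evaluate snapshot
    return tx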


Examples for the UMTS system simulations using the Monte Carlo method can
be found in [30, 58, 62, 68, 81]. Proposals for the number of snapshots that have
to be done are given in [5].
The simulator used in this work is provided by SYMENA, Software & Consulting


GmbH, Austria (http://www.symena.com). Coverage, capacity, and quality of service related issues can be analyzed with this tool. Input to the simulator is the
network configuration and the user distribution. Uplink and downlink are jointly
analyzed, and the simulation results comprise coverage and capacity information,
traffic statistics, outage reasons for the unserved users, and soft handover (SHO)
statistics.

7.2.1 Parameters of the Network Simulator

For the network simulator a lot of different parameters have to be defined. In the following the values of the main parameters are introduced. Appendix I
summarizes all the network parameters.
The simulator uses one carrier and one OVSF code tree. The maximum power of
all cells in the network is set to 43 dBm, the maximum code power to 40 dBm and
the minimum to 15 dBm. The P-CPICH power level (CPICH) is set to 33 dBm
(10 % of the maximum cell power) in the reference scenario (initial value). Additionally, there are parameters for SCH and PCH power, which are both 5 dBm.
However, these two parameters are only included in the interference calculation
and have no other impact. Note that during the optimization process for the
CPICH power, SCH and PCH are changed in the same way (see Section 5.3).
The active set window is 3 dB and the active set size is set to 2. The receiver of the base station utilizes 3 rake fingers and uses a maximum ratio combining efficiency for uplink softer handover operations of 100 %. Appendix C presents a detailed description of the rake receiver. The transmitter loss is 0 dB and the base station noise figure is 5 dB. Furthermore, the transmission power control (TPC) dynamic range and headroom can be set for the downlink as well as for the uplink; they are 25 dB and 1 dB for the downlink and 65 dB and 2 dB for the uplink.
In the network conventional antennas are used, with a height of 30 m in all cells. The base stations have 3 sectors using antenna type 739707 from Kathrein with an antenna gain of 16 dBi [64]. The horizontal and vertical antenna patterns are shown in Figure 5.2 and Figure 5.4. The vertical antenna pattern includes an electrical tilt of 3°. For the optimization of the antenna tilt only the mechanical tilt is changed. The initial mechanical tilt is set to 0° in the reference scenario.
As parameters of the channel, the following values can be specified in the simulator: The background noise floor is set to -107 dBm, and as path loss model an improved COST 259 model for macro cells is used. A detailed description of the COST 259 model can be found in Appendix B. The large scale fading has zero mean and a standard deviation of 2 dB. The maximum delay of the channel is 2 µs, and the total number of multipath components of the channel is set to 5.

The inter-symbol interference of the channel destroys the orthogonality of the codes. This is described by the downlink orthogonality factor, which is 0 if the codes are perfectly orthogonal, and 1 if the codes are totally non-orthogonal. The downlink orthogonality factor is chosen to be 0.4.
The standard deviation of the Gaussian distribution of the azimuth angles of the channel taps (sigma azimuth of taps) is set to 5°. The adjacent cell interference ratio (ACIR) is 5 % and the inter system interference ratio is 5 % as well. These two parameters only describe an increase of the existing interference level in the system due to other carriers or other systems like GSM.
Further information about the individual network parameters can be found in [68].

7.2.2 CPICH Coverage Verification Modes

For the optimization of a UMTS system the CPICH power level is very important
(see Section 5.3 and Section 5.4). The level should be optimized very accurately
to use the power resources of the base station efficiently, to reduce the interference
level, and nevertheless provide the required CPICH coverage in the system.
Due to the importance of the CPICH coverage and in order to achieve the lowest possible CPICH power level, it is mandatory to know how the simulator defines CPICH coverage when deriving CPICH optimization strategies.
The UMTS network simulator CAPESSO™ from SYMENA provides two modes for CPICH coverage verification: mode 1 with a fixed threshold for CPICH coverage and mode 2, which includes the interference by an Ec/I0 threshold (for details see [68]). In the following both modes are explained in detail.
7.2.2.1 Mode 1
In the CPICH coverage verification mode 1, the CPICH coverage is defined by comparing the received power of the CPICH measured at the mobile station against a certain threshold for the lower limit, the receiver sensitivity. Expressed in Equ. (7.1) it is:
$$\mathrm{RSCP}_{CPICH} \geq S \qquad\qquad (7.1)$$

In Equ. (7.1) RSCP_CPICH is the received signal code power of the CPICH and S is the receiver sensitivity with a default value of -120 dBm.
It is crucial to mention that the CPICH coverage verification mode 1 does not take
any interference into account for the calculation of coverage. In this mode the
CPICH coverage of a single cell would be the same with or without the presence
of the total system.

7.2.2.2 Mode 2

The CPICH coverage verification mode 2 includes the interference level of the system in the calculation of CPICH coverage. In this mode, a mobile in the simulated scenario is covered by the CPICH if the Ec/I0 of the received CPICH at the mobile station exceeds a certain threshold. This threshold is called CPICH_Ec/I0_thres according to the 3GPP specification [2] and is set to -12 dB for most of the simulations performed in this work. The mobiles which don't fulfill the Ec/I0 requirement are put into outage (see Section 6.4.1). In CAPESSO™ the Ec/I0 threshold is set by the receiver sensitivity parameter. A receiver sensitivity of -120 dBm is equivalent to an Ec/I0 threshold of -12 dB. Equ. (7.2) shows the relationship between the Ec/I0 threshold and the receiver sensitivity.
$$\mathrm{CPICH}\,E_c/I_{0\,thres} = S - P_{noise} \qquad\qquad (7.2)$$

In Equ. (7.2) S denotes the receiver sensitivity and P_noise is the noise power at 20 °C, which is -108.09 dBm.
In CAPESSO™ the signal to noise ratio (SNR) is also called CPICH Ec/I0, although there is an important difference, which will be explained in the following. Equ. (7.3) presents the calculation in CAPESSO™ for mode 2.

$$\mathrm{CPICH}\,E_c/I_0 = \frac{\mathrm{RSCP}_{CPICH}}{\mathrm{RSSI} - \mathrm{RSCP}_{CPICH}} \geq \mathrm{CPICH}\,E_c/I_{0\,thres} \qquad\qquad (7.3)$$

In Equ. (7.3) RSCP_CPICH denotes the received signal code power of the CPICH as measured by the mobile, and RSSI (received signal strength indicator) is the wideband received power within the relevant channel bandwidth in the downlink. The mentioned difference is that in the 3GPP specification the CPICH Ec/I0 is the ratio of the received energy per PN chip of the CPICH to the total received power spectral density at the UE antenna connector, as shown in Equ. (5.2) and Equ. (5.3), whereas in CAPESSO™ the CPICH Ec/I0 is the actual SNR, with the CPICH power excluded from the interference term.
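The two coverage checks can be summarized in a short Python sketch. The conversion between dBm and mW and the default values follow the description above; the function names are chosen freely for illustration.

    import math

    def dbm_to_mw(p_dbm):
        return 10.0 ** (p_dbm / 10.0)

    def covered_mode1(rscp_cpich_dbm, sensitivity_dbm=-120.0):
        # Mode 1: fixed RSCP threshold, interference is ignored (Equ. (7.1))
        return rscp_cpich_dbm >= sensitivity_dbm

    def covered_mode2(rscp_cpich_dbm, rssi_dbm, threshold_db=-12.0):
        # Mode 2: ratio of CPICH power to total received power minus the CPICH itself (Equ. (7.3))
        ratio_db = 10.0 * math.log10(dbm_to_mw(rscp_cpich_dbm) /
                                     (dbm_to_mw(rssi_dbm) - dbm_to_mw(rscp_cpich_dbm)))
        return ratio_db >= threshold_db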

7.3 Simulator Interface
The static UMTS network simulator CAPESSO™ from SYMENA, Software & Consulting GmbH consists of two parts: the simulation engine and the graphical
user interface (GUI). The GUI helps the operator to feed the simulation with the
relevant data (user distribution, network configuration and parameters). Further,
the GUI prepares the simulation results in tables, diagrams and plots for the operator. As interface between the simulation engine and the GUI, XML¹ files
are used, as shown in Figure 7.2.

Figure 7.2: Interface between network simulator and optimization algorithm.

Two files are used as interface: one, which includes all the input information for the simulator, named XML Input File in Figure 7.2. The second one, the XML Output File, is generated after the simulation is finished and comprises all the results for the GUI.
The XML files are also used as interface for the optimization process. The optimization algorithm prepares the input file for the simulator and starts the simulation. After the simulation is finished, the output file of the network analysis is read by the optimization algorithm for the calculation of the fitness function and the GoS. Figure 7.2 shows this interface marked in blue.
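The interplay between optimization algorithm and simulation engine can be sketched as follows in Python. The file names, XML tags and the command line of the engine are purely illustrative; the real CAPESSO™ file format is not reproduced here.

    import subprocess
    import xml.etree.ElementTree as ET

    def evaluate(settings, engine="simulation_engine", in_file="input.xml", out_file="output.xml"):
        # Write the cell settings (CPICH power in dBm, tilt in degrees) to the XML input file.
        root = ET.Element("network")
        for cell_id, (cpich_dbm, tilt_deg) in settings.items():
            cell = ET.SubElement(root, "cell", id=str(cell_id))
            ET.SubElement(cell, "cpich").text = str(cpich_dbm)
            ET.SubElement(cell, "tilt").text = str(tilt_deg)
        ET.ElementTree(root).write(in_file)

        # Start the simulation engine and wait until the network analysis is finished.
        subprocess.run([engine, in_file, out_file], check=True)

        # Read the result needed for the fitness function and the GoS from the XML output file.
        result = ET.parse(out_file).getroot()
        return int(result.findtext("served_users"))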

7.4 Network Scenarios
For the evaluation of the several optimization algorithms two virtual network scenarios of a typical European city are used. In the first (bigger) scenario the network covers the whole area of the city. The second scenario contains only the downtown area. In the following both scenarios are described. The more interesting one is the bigger one; the results presented in Chapter 9 are normally based on it, and the small scenario is used only where explicitly mentioned.
¹ XML: Extensible Markup Language. The next generation of HTML, now viewed as the standard way information will be exchanged between environments that don't share common platforms (www.xml.org).

7.4.1 Big Scenario

In the big network scenario 25 base stations equipped with 3-sector antennas,
thus comprising 75 cells, are used. Figure 7.3 shows the distribution of the base
stations as well as the distribution of the users. The arrows in the figure represent
the main antenna direction of each cell, and the black dots symbolize the users
in the system.

Figure 7.3: Base station location and one user distribution of the big network
scenario.
In this simulation scenario the area inside the rectangle is defined as the optimization area. This means that this region is the area of interest, and the fitness function as well as the GoS are evaluated over this part.
In the total simulation area the users are distributed equally in each cell according to the best server plot. The best server plot shows the dominance areas of the different cells according to the highest received CPICH power level. That means, all the pixels (one pixel is equivalent to 100 m × 100 m) in the best server plot of the scenario with the highest received CPICH power level from one cell

(the so-called serving cell) are assigned to that cell and describe the best server area of that cell. The same number of mobiles is distributed in each of these areas. This kind of distribution is called "Best Server Equal" distribution in this thesis. It is also possible to use an equally distributed user model, in which the users are equally distributed over a defined rectangular area. In this work this distribution model is called "Equal" distribution.
For the simulation all the mobiles are assigned to four different user groups. It is possible to use the "Best Server Equal" and the "Equal" distribution in each user group, as well as a certain service (voice or data). The big scenario uses four usergroups: usergroup1, usergroup2, optarea1 and optarea2. The first two usergroups cover the whole scenario and so define the service mix in the whole scenario. Usergroups optarea1 and optarea2 only cover the optimization area in the network and so define the service mix in this area. For the results presented in Chapter 9 the "Best Server Equal" distribution is normally used in all usergroups. In the scenario a service mix of 40 % 12.2 kbit/s speech users and 60 % two-way 64 kbit/s data users is assumed, with an activity factor of 50 % for the speech service and 100 % for the PS data service. The initial number of users in the whole network is 1057. Additional users during the optimization process are admitted in optarea1 and optarea2 by the Add Users function (see Section 6.3).
For the variation of the positions of the mobiles in the scenario (to generate different snapshots, see Figure 7.1 in Section 7.2), the initialization value of the random generator for the different usergroups can be changed by a parameter called rand_init.
In the simulator also some parameters of the mobile stations can be defined and
accordingly varied. The maximum transmit power of the mobile station is set
to 21 dBm. The mobiles have an antenna gain of 0 dB, and the body loss as
well as the receiver noise figure are set to 0 dB. The threshold down to which
the received CPICH power level is considered for the interference calculations is
-126 dBm, and the receiver sensitivity S of the mobiles is -120 dBm. Since the
receiver sensitivity defines the CPICH coverage, S can also reach other values
(see Section 7.2.2).

7.4.2 Small Scenario

The second scenario, which is used for some investigations in this work, represents the downtown of the European city used for the big network scenario. This scenario is equivalent to the optimization area of the scenario described in the previous section.
In the small network scenario 9 base stations equipped with 3-sector antennas,
thus comprising 27 cells, are used. Figure 7.4 shows the distribution of the base

stations as well as one distribution of the users.

Figure 7.4: Base station location and one user distribution of the small network
scenario.

In this scenario the optimization area comprises the whole network. Two usergroups are used: usergroup1 and optarea1. These usergroups cover the whole area. The "Best Server Equal" distribution with only two-way 64 kbit/s data users is used in both usergroups. The initial number of users in the network is 376. The other parameters are the same as in the big scenario. Note that additional users are only admitted in optarea1 by the Add Users function (see Section 6.3).

Chapter 8
Optimization Algorithms
8.1 Introduction

In this main chapter of the thesis the developed optimization algorithms are presented. Altogether five different strategies for the optimization of antenna tilt (see Section 5.2.2) and CPICH power (see Section 5.3) were developed. First, the optimization with local algorithms was studied, and consequently a Rule Based Approach algorithm, a Simulated Annealing algorithm and an Adaptive Rule Based Approach algorithm originated. Further, a Genetic Algorithm approach was implemented and studied in more detail. The disadvantage of the local approaches and especially of the genetic approach is that they are very time consuming. Therefore, also an analytic algorithm was developed with the objective of using as few steps as possible in contrast to the other strategies. This algorithm also optimizes antenna azimuth besides antenna tilt and CPICH power (see Section 5.2.1).
The following list summarizes all developed approaches:

- Local Optimization Algorithms:
  - Rule Based Approach
  - Simulated Annealing
  - Adaptive Rule Based Approach
- Genetic Algorithm
- Analytic Optimization Algorithm
The Adaptive Rule Based Approach and the Analytic Optimization Algorithm were developed and studied within the scope of the diploma theses of Yee Yang Chong and Wolfgang Karner under supervision of the author. In this PhD thesis only a brief summary of the approaches as well as some optimization results for comparison with the other algorithms are given. For a detailed description as well as further simulation results see [23, 63].

8.2 Local Optimization Algorithms

In the case of the local algorithms, the optimization of the base station parameters
CPICH power and antenna tilt begins with an initial evaluation of the network.
After analyzing the results of the first evaluation, the iterative optimization process is started. Each optimization loop includes two steps. After changing the
parameters in the first step, the network is evaluated in the second step. Then,
the next iteration of the optimization loop is started and the parameters are
changed again. The optimization runs until a specific termination condition is
fulfilled. Figure 8.1 shows the flow chart of the optimization process.

Figure 8.1: Structure of local optimization process.

For the three different local approaches this optimization process is always the same. The difference lies in the block "Change parameters". Further, the decision which results (test solutions) are accepted and which are not differs between the strategies. In Sections 8.2.1, 8.2.2 and 8.2.3 the individual approaches are explained and the differences are worked out.

8.2.1 Rule Based Approach

The algorithm is a rule based optimization approach. It starts from a scenario where all cells of the network are set to identical initial values for CPICH power and tilt. After a first evaluation, the optimization starts by reducing the CPICH power and increasing the antenna downtilt in individual cells according to a configurable rule set. After each optimization step, a new evaluation of the fitness function is performed. The algorithm of the Rule Based Approach is illustrated in Figure 8.2, which reflects the optimization process from Figure 8.1. Break denotes the termination condition.
Procedure RuleBased()
begin
    X   := initial parameters
    fit := evaluation(X)
    while not break
        X'   := change_parameters(X)
        fit' := evaluation(X')
        if (fit' > fit)
            X   := X'
            fit := fit'
        end
    end
end
Figure 8.2: Rule Based Optimization Algorithm.

The crucial part of the algorithm is the function change_parameters(X) (see Figure 8.2). This function processes a rule set like the one depicted in Table 8.1 and is executed for each cell with outaged mobiles (see Section 6.4.1). The parameters of the table describe the following: param specifies the modified parameter; delta denotes the amount of change; limit describes the lower or upper limit of the parameter, and iter specifies how often the rule is applied at most. When the optimization process is launched, the algorithm starts with the first rule of the rule set. In each iteration only one rule is applied. From Table 8.1 we can see that after the first 5 optimization loops the algorithm continues with rule number 2, and so on. While one rule is active, also worse results are accepted. When advancing to the next rule, however, the best result of the previous one is taken. The algorithm terminates when all rules of the rule set have been processed. Appendix D presents the standard rule set, which is used in Chapter 9.


rule   param   delta    limit     iter
1      CPICH   -5 dB    28 dBm    5
2      tilt
3      CPICH   -3 dB    28 dBm    10
4      tilt             10°
5      CPICH   -1 dB    28 dBm    10
6      tilt             10°       10

Table 8.1: Example rule set for Rule Based Optimization Algorithm.
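A compact Python sketch of the function change_parameters for one rule is given below. The rule values are illustrative examples in the spirit of Table 8.1 and are not the standard rule set from Appendix D.

    # One rule modifies a single parameter in every cell that has mobiles in outage.
    # Each rule: (param, delta, limit, iterations); the numbers below are illustrative only.
    EXAMPLE_RULES = [
        ("cpich", -5.0, 28.0, 5),     # reduce CPICH power, lower limit 28 dBm, 5 iterations
        ("tilt",  +1.0, 10.0, 5),     # increase downtilt, upper limit 10 deg (illustrative)
        ("cpich", -3.0, 28.0, 10),
        ("cpich", -1.0, 28.0, 10),
    ]

    def change_parameters(cells, rule):
        param, delta, limit, _ = rule
        for cell in cells:
            if cell["outaged_mobiles"] == 0:
                continue                                           # only cells with outage are touched
            if param == "cpich":
                cell["cpich"] = max(cell["cpich"] + delta, limit)  # delta is negative, limit is a floor
            else:
                cell["tilt"] = min(cell["tilt"] + delta, limit)    # delta is positive, limit is a ceiling
        return cells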

8.2.2 Simulated Annealing

Subsequently the Rule Based Approach was extended and improved by incorporating Simulated Annealing [9]. In contrast to the Rule Based Approach, the decision to accept a worse result is independent of the rule set. After an optimization loop worse results can be taken, but only with a certain probability. This probability corresponds to a cooling function (CF). The exponential CF is analogous to the physical process of heating and then slowly cooling steel to obtain a crystalline structure with minimum energy. In the beginning of the optimization the probability to take a worse result is higher. During the optimization the probability to accept a worse result than the previous one decreases according to the CF. Section 3.3 gives a detailed overview of Simulated Annealing.
Concerning the implementation, the main difference to the Rule Based Approach
is shown as bold text in Figure 8.3. The variable rand denotes a uniformly
distributed random value between 0 and 1. CF is the cooling function of the
Simulated Annealing algorithm. In this algorithm the same decisions for changing
CPICH power and antenna tilt settings are used. So, a rule is executed in each
cell with outaged mobiles. However, the number of outaged mobiles as well as
the outage reason do not influence the decision.
The same type of rule set (see Table 8.1) as in the Rule Based Approach is used for Simulated Annealing in the function change_parameters(X). For the Simulated Annealing algorithm the rule set was improved by including additional rules with smaller values for delta. Appendix E presents two developed rule sets (see Table E.1 and Table E.2), which were applied for the optimization results in Chapter 9. Again, the basic fitness function from Equ. (6.1) is used.
The implementation of Simulated Annealing in this thesis uses the rule set only for finding a new parameter setting; the Simulated Annealing mechanism itself is not used to generate new settings. So, as in the Rule Based Approach, it is only possible to decrease the CPICH power and increase the antenna downtilt. The idea was to use Simulated Annealing for also accepting worse results with a certain probability.
Procedure RuleBased()
begin
    X   := initial parameters
    fit := evaluation(X)
    while not break
        X'   := change_parameters(X)
        fit' := evaluation(X')
        if (fit' > fit) or (rand < CF)
            X   := X'
            fit := fit'
        end
    end
end
Figure 8.3: Extension of Rule Based Optimization algorithm to Simulated Annealing.
In the algorithm two different functions for the cooling temperature T_C are implemented. On the one hand, Geometric Cooling according to [9] is implemented with the following function:
$$T_{C,new} = \alpha \cdot T_C \qquad\qquad (8.1)$$

In Equ. (8.1) α denotes a parameter of the function. On the other hand, also Slow Cooling is implemented according to the following function:
$$T_{C,new} = \frac{T_C}{1 + \beta \cdot T_C} \qquad\qquad (8.2)$$

In Equ. (8.2) β denotes a parameter of the function. In Chapter 9 both cooling functions for the cooling temperature T_C are used for optimization. For the cooling function the following equation is used:
$$CF = e^{\frac{GoS - GoS_{old}}{T_C}} \qquad\qquad (8.3)$$

In Equ. (8.3), GoS denotes the current GoS and GoS_old represents the GoS of the previous iteration. On the one hand, CF depends on the cooling temperature T_C. On the other hand, it also depends on the change of the GoS. Thus, for

big differences in GoS (if the result is worse than before), the value of CF, and thus the probability to take the new result, is lower. If the random value rand is smaller than CF, the optimization result is taken (see Figure 8.3).
If a result is not accepted, the previous result is modified again according to the same rule. However, the parameter (CPICH power or antenna tilt) is changed only in half of the cells. For this, the cells are sorted by the number of outaged mobiles, with the most outaged mobiles first. If in the last iteration the parameter was changed in only one cell, then the algorithm proceeds with the next rule and changes the parameter again in all cells with outaged mobiles.
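The acceptance decision together with the two cooling schedules can be summarized in a few lines of Python. The parameter values for α and β are placeholders and are not the values used for the results in Chapter 9.

    import math
    import random

    def geometric_cooling(t_c, alpha=0.95):        # Equ. (8.1); alpha is a placeholder value
        return alpha * t_c

    def slow_cooling(t_c, beta=0.05):              # Equ. (8.2); beta is a placeholder value
        return t_c / (1.0 + beta * t_c)

    def accept(gos_new, gos_old, fit_new, fit_old, t_c):
        # Improvements are always accepted; worse results only with probability CF (Equ. (8.3)).
        if fit_new > fit_old:
            return True
        cf = math.exp((gos_new - gos_old) / t_c)   # a worse GoS gives a smaller CF
        return random.random() < cf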

8.2.3 Adaptive Rule Based Approach

In this section the developed Adaptive Rule Based Approach is described. This optimization algorithm is an extension of the Rule Based Approach introduced in Section 8.2.1.
The parameters are changed according to an adaptive optimization technique, described in the following. Figure 8.1 shows the basic optimization process. This algorithm differs from the two previously described algorithms (Section 8.2.1 and Section 8.2.2) in that now CPICH power and antenna tilt are changed together. Further, an increase of CPICH power and antenna uptilting is also possible. Remember, the Rule Based Approach and Simulated Annealing were only able to change the parameters in one direction (reduce CPICH power and increase antenna downtilt).

8.2.3.1 Why Adjust CPICH Power and Antenna Tilt Together?

In this section the idea why CPICH power and antenna tilt are adjusted together is explained by means of an example. Figure 8.4 shows this example. In the first picture (Figure 8.4 (a)) two mobiles are shown. Both are served by base station BS 1. The goal is to achieve load balancing in such a way that one mobile is served by BS 1 and the other by BS 2. This aim can be reached with three different strategies.
The first strategy decreases the CPICH power of BS 1. Figure 8.4 (b) shows this situation. Now, each base station serves one mobile, but with the disadvantage that BS 1 causes inter-cell interference to the cell of BS 2.
In the second strategy the antenna downtilt of BS 1 is increased to shrink the coverage area of the cell (Figure 8.4 (c)). However, the CPICH power is too high for the smaller cell and so causes pilot pollution in the adjacent cell. Further, the
excessive CPICH power causes a waste of base station power resources, because the total transmit power is limited.
The last strategy, which is the basis for the Adaptive Rule Based Approach, combines the previous ones and changes CPICH power and antenna tilt together. Thus, no additional inter-cell interference and pilot pollution is produced as in strategies one and two. This combined strategy is shown in Figure 8.4 (d).

Figure 8.4: Why adjust CPICH power and antenna tilt together?

8.2.3.2 Algorithm Description

In each iteration of the optimization loop (see Figure 8.1) the quality factor QF
(see Section 6.4.2) is computed for each cell. CPICH power and antenna tilt are
changed according to the developed rules, as shown in Table 8.2.
QF                CPICH power and antenna tilt change
QF < 0.5          Decrease CPICH power and tilt antenna down
0.5 ≤ QF ≤ 0.7    No change
QF > 0.7          Increase CPICH power and tilt antenna up

Table 8.2: CPICH power and antenna tilt adjustment.


The rules are designed in such a way that a highly loaded cell (QF < 0.5) with users in outage is required to shrink its coverage area by decreasing the CPICH power and increasing the antenna downtilt. Conversely, if the cell has a low user density (QF > 0.7), it has to expand its coverage area to cover more users by increasing the CPICH power and the antenna uptilt¹. With this approach, load balancing among the cells of a network can be achieved, resulting in higher network capacity.
CPICH power and antenna tilt modifications are controlled by a set of rules with
different stepsize and limitation settings, as shown in Table 8.3. In principle the
same type of rule set is used as for the Rule Based Approach, however with
the important difference that now one rule consists of two instructions: one
instruction for the CPICH power and one instruction for the antenna tilt.
In Table 8.3 param specifies the modified parameter; stepsize denotes the maximum allowed adjustment of CPICH power and antenna tilt settings per iteration;
limit describes the lower or upper limit of the parameter, and iter specifies how
often the rule can be applied at most.
¹ Numerous simulations with different values for the limits of QF from Table 8.2 show that 0.5 and 0.7 achieve the best results.

rule   param   stepsize   limit     iter
1      CPICH   3 dB       25 dBm    50
       tilt    1.5°
2      CPICH   0.5 dB     10 dBm    50
       tilt    0.25°
Table 8.3: Example of CPICH power and antenna tilt limitation & stepsize settings.
When the QF of a cell is greater than 0.7, both its CPICH power and antenna
up tilt will be increased by


$$\mathrm{stepsize} \cdot \frac{QF - 0.7}{0.3} \qquad\qquad (8.4)$$

On the other hand, if the cell's QF is less than 0.5, the CPICH power will be reduced and the antenna downtilt will be increased by

$$\mathrm{stepsize} \cdot (1 - QF) \qquad\qquad (8.5)$$

Consequently, the adjustments of CPICH power and antenna tilt depend on the cell's actual QF. With this strategy, CPICH power and antenna tilt are adjusted adaptively according to the loading condition of a cell.
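The adjustment of a single cell according to Table 8.2, Equ. (8.4) and Equ. (8.5) can be sketched as follows in Python; the stepsize and limit values are those of the rule currently being processed, and the limit handling corresponds to the description in the next paragraph.

    def adapt_cell(cell, stepsize_cpich, stepsize_tilt, cpich_limit, tilt_limit):
        # Adjust CPICH power (dBm) and antenna downtilt (deg) of one cell according to its QF.
        qf = cell["qf"]
        if qf > 0.7:                                      # lightly loaded: enlarge the cell
            factor = (qf - 0.7) / 0.3                     # Equ. (8.4)
            cell["cpich"] += stepsize_cpich * factor      # no upper CPICH limit
            cell["tilt"]  -= stepsize_tilt * factor       # tilt the antenna up, no limit
        elif qf < 0.5:                                    # highly loaded: shrink the cell
            factor = 1.0 - qf                             # Equ. (8.5)
            cell["cpich"] = max(cell["cpich"] - stepsize_cpich * factor, cpich_limit)
            cell["tilt"]  = min(cell["tilt"]  + stepsize_tilt * factor, tilt_limit)
        return cell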
The modification of antenna downtilt and CPICH power has to be limited (lower limit for CPICH power and upper limit for antenna downtilt) in each rule by the parameter limit in order to avoid too big changes in a particular cell (see Table 8.3). There are no limits set for the maximum CPICH power and the maximum antenna uptilt, so that a larger service coverage area is allowed in regions where the user density is low. Furthermore, the developed algorithm is biased toward reducing the initial CPICH power and increasing the antenna downtilt.
When the optimization process is launched, the algorithm starts with the first rule of the used rule set (e.g. from Table 8.3). For each rule several iterations are performed. From Table 8.3 we can see that in this case the algorithm continues with rule number 2 after the first 50 iterations.
If the GoS after one iteration within one rule is lower than 95 percent and lower than the GoS from the previous iteration, the new result is not accepted. In this case, the same iteration is processed again, but only in two thirds of the previously modified cells (prioritized according to QF) the CPICH power and tilt settings are

changed (Cell Reduction). The algorithm terminates when either all rules of the rule set have been processed, or the QF in all cells is between 0.5 and 0.7 (see Table 8.2). This means the network is balanced or the algorithm cannot proceed further due to the limits for CPICH power and tilt. A detailed flowchart of the optimization loop is shown in Figure 8.5.

Figure 8.5: Detailed flowchart of Adaptive Rule Based Approach.


The flowchart in Figure 8.5 shows that after the network evaluation with CAPESSO™ the GoS is used as fitness function. The goal of the optimization is to maximize the capacity of the network, and so the number of served users has to be considered, as in the basic fitness function in Equ. (6.1). Since the GoS (Equ. (6.4)) includes the number of served users in the numerator, this performance indicator can also be used as fitness function.
Appendix F presents four developed rule sets for the Adaptive Rule Based Approach (see Table F.1, Table F.2, Table F.3 and Table F.4), which were applied
for the optimization results in Chapter 9.

8.3 Genetic Algorithm

After the development of different local optimization algorithms, also a global approach was implemented. The theoretical basics of global optimization are described in Section 3.5. The most popular representatives of this family of optimization techniques are Genetic Algorithms. Section 3.5.1 gives a general overview of this technique. In this section, my developed genetic approach, which is especially adapted for the optimization of network parameters of the UMTS system, is described. For the algorithm I use a deterministic fitness proportional selection, a problem specific recombination operator (altogether 3 different versions were developed) and an improved mutation operator. In addition, a simple local optimization based on the Rule Based Approach from Section 8.2.1 is implemented, which is applied to the best individuals to improve their fitness. Finally, a parallel implementation is introduced to distribute the individual network evaluations of the population on several computers with a master/slave concept.
For the implementation of the Genetic Algorithm an XML file is used as initialization file for the specification of the most important parameters. Appendix G
shows this file.

8.3.1 Representation

For the Genetic Algorithm a suitable representation of the parameters CPICH power and antenna tilt is needed. In Figure 8.6 the used coding is shown.

Figure 8.6: Representation of individuals for capacity optimization.


Each individual of the population consists of 2n genes, where n is the number of cells. For one cell, one gene is used for the CPICH power and one for the antenna tilt. The search space for the algorithm is set in the following way: the range for the CPICH values is from 15 to 38 dBm with a resolution of 0.5 dB; the values for the antenna downtilt are limited to between 2° and 10°. The resolution of the tilt is set to 0.5°. The element <limit> specifies the search space in the initialization file (Appendix G).
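Expressed in Python, the representation of Figure 8.6 corresponds to a simple list of 2n genes; the snapping to the 0.5 grid reflects the resolution given above, and the data layout itself is an illustrative choice.

    import random

    CPICH_RANGE = (15.0, 38.0)    # dBm, resolution 0.5 dB
    TILT_RANGE  = (2.0, 10.0)     # degrees downtilt, resolution 0.5 deg

    def random_individual(n_cells):
        # One individual = 2*n_cells genes: [cpich_1, tilt_1, cpich_2, tilt_2, ...]
        genes = []
        for _ in range(n_cells):
            genes.append(round(random.uniform(*CPICH_RANGE) * 2) / 2)   # snap to the 0.5 dB grid
            genes.append(round(random.uniform(*TILT_RANGE) * 2) / 2)    # snap to the 0.5 deg grid
        return genes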

8.3.2 Algorithm

In this subsection the optimization process as well as the used genetic operators are described in more detail. Newly developed operators are used to incorporate knowledge about the quality of the cells in the network. In Figure 8.7, the flowchart of the implemented algorithm is shown.
The algorithm starts with the initialization of all individuals of the population. Section 8.3.2.1 describes this initialization of the population. After the initial phase, the whole population is evaluated with the static UMTS FDD network simulator. The fitness function g(i) as well as the GoS are calculated for all the individuals (i = 0, 1, ..., n)². For the fitness evaluation the extended fitness function from Equ. (6.2) in Section 6.2.2 is used. In the next step the GoS of the best individual, i.e. the individual with the highest fitness value, is compared to the limit³ of 96 %. If the GoS is higher than this threshold, additional users are added in the simulation (Add Users)⁴, and the whole population has to be reevaluated to get the new values for g(i) and GoS.
After this preprocessing of the population, the optimization process is started. In my genetic approach I have implemented selection, recombination and mutation. With the selection operator, the individuals for the new population are selected. Of some individuals multiple copies can be selected, and some other individuals will not be selected at all, according to the implemented selection method. After the selection process, the individuals (parents) are recombined to create new individuals (children). The last genetic operator randomly mutates genes of the individuals with a certain probability. When the evolution process of one population is finished, the whole population has to be evaluated again to get the new values for g(i) and GoS of the individuals.
With the best individuals of the population a local optimization step is performed to improve the performance of the Genetic Algorithm. After the local step the GoS of the best individual is compared to the threshold of 96 %. If the GoS is higher, then additional users are added and the population is reevaluated.
² The variable n denotes the size of the population.
³ This threshold is specified in the initialization file by the attribute add_gos in the element <ga> (Appendix G).
⁴ See Section 6.3 for details.

Figure 8.7: Flowchart of Genetic Algorithm approach.
In the next iteration, the evolution process is repeated. The Genetic Algorithm stops if a certain termination condition is fulfilled, e.g. after a certain number of iterations.

8.3.2.1 Initial Population

For all the individuals of the population the initial values for CPICH power and antenna tilt are chosen randomly, but with the same CPICH power and antenna tilt values for all cells. The values for the CPICH power are chosen between 15 and 38 dBm with a resolution of 1 dB. The initial values for the antenna tilt are selected between 2° and 10° with a resolution of 1°.
However, not all individuals are initialized randomly. The first 12 individuals of the population are set to fixed predefined values. The element <init> of the initialization file (Appendix G) defines the setting. For each of these 12 individuals one value for CPICH power and one value for the antenna tilt is defined. So, each cell of the network has the same CPICH power and antenna tilt value. The purpose of this initialization is that the search space is covered by the start population as well as possible.

8.3.2.2 Selection

Deterministic Fitness Proportional Selection

The selection of the individuals for the new population is implemented as a deterministic fitness proportional selection, because trials of several other methods (roulette wheel selection, tournament selection, ...)⁵ have indicated that this method fits best for the problem of UMTS base station parameter optimization. The fitness values g(i) of the individuals are scaled to get suitable fitness values f(i) for the selection process. I use a linear scaling function, presented in Equ. (8.6).

$$f(i) = a \cdot g(i) - b \qquad\qquad (8.6)$$

with

$$a = \frac{C_m \cdot \bar{g} - \bar{g}}{g_{max} - \bar{g}} \quad \textrm{and} \quad b = a \cdot \bar{g} - \bar{g} \qquad\qquad (8.7)$$

⁵ In [76, 77, 92] all the selection methods are explained in detail.

In Equ. (8.7), C_m denotes the selection pressure⁶, which describes how much the algorithm favors good individuals compared to bad individuals. The mean value over all g(i) is denoted as ḡ in Equ. (8.7), and g_max is the highest fitness that occurs in the population.
My implementation of the fitness proportional selection works as follows: First, the best individual since the last function call of Add Users is selected to guarantee that the best solution cannot get lost. This method is called elitism⁷ in the literature [76, 92]. Next, the expected number of descendants for each individual is calculated with the following equation:

$$e(i) = \frac{f(i)}{\sum_{j=1}^{n} f(j)} \cdot n \qquad\qquad (8.8)$$

In Equ. (8.8), n denotes the size of the population. From each individual ⌊e(i)⌋⁸ descendants are produced. To complete the population, the best individual of the last population is repeatedly selected until n individuals are in the new population.

Tournament Selection

As mentioned before, also other selection methods were tested. For the sake of completeness, tournament selection is explained in the following as well. With this selection method also good results were achieved during the algorithm development process.
First, the implemented tournament selection method selects the best individual of the old population (elitism). Afterwards, k randomly chosen (equally distributed) individuals are selected. The individual with the highest fitness value of this selected sub-population is taken over into the new population. This selection procedure is repeated until n individuals are admitted into the new population. The factor k controls the selection pressure. A higher value of k corresponds to a high selection pressure and vice versa. The value for k depends on the optimization problem. In the case of the problem considered in this thesis, it has been shown that good values for k lie between 2 and 10 % of the population size.
⁶ The selection pressure is specified in the initialization file by the attribute cm in the element <ga> (Appendix G). Good values of C_m for the base station parameter optimization problem are between 3.5 and 6.
⁷ Elitism: Independent of the selection function, the best k individuals are taken into the new population to guarantee a monotonic increase of the fitness.
⁸ The ⌊·⌋ operator denotes the floor function.
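The complete selection step, i.e. linear scaling, elitism and the deterministic assignment of descendants, can be sketched in Python as follows. Degenerate cases (e.g. all individuals having the same fitness) are not handled, and the elitism is simplified to the best individual of the current population.

    import math

    def select_population(population, fitness, c_m=5.0):
        n = len(population)
        g_mean, g_max = sum(fitness) / n, max(fitness)
        a = (c_m * g_mean - g_mean) / (g_max - g_mean)             # Equ. (8.7)
        b = a * g_mean - g_mean
        f = [a * g - b for g in fitness]                           # Equ. (8.6)

        best = population[fitness.index(g_max)]
        new_pop = [best]                                           # elitism: the best solution cannot get lost
        total_f = sum(f)
        for ind, fi in zip(population, f):
            new_pop.extend([ind] * math.floor(fi * n / total_f))   # floor(e(i)) copies, Equ. (8.8)
        while len(new_pop) < n:
            new_pop.append(best)                                   # fill up with the best individual
        return new_pop[:n]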

8.3.2.3 Recombination

Altogether 3 different recombination operators were developed. These 3 operators are called:

- Recombinate outage
- Recombinate basic
- Recombinate average
In the following all 3 recombination operators are described.
Recombinate outage
My first implemented recombination operator takes the quality of the cells into
account. To describe the quality of a cell the quality factor (QF ), introduced in
Section 6.4.2, is used as performance indicator.
The algorithm randomly selects two individuals (referred to as parent 1 and parent 2) from the population n times and produces a new child with a recombination probability p_c. The recombination operator is shown in Figure 8.8.

Figure 8.8: Recombination operator.

For each cell in the network the number of outaged mobiles and the QF of the two individuals are compared. If the number of outaged mobiles of parent 2 is smaller than that of parent 1 and the QF of parent 2 is better for this cell, then the corresponding genes for this cell are taken from parent 2 for the new child; otherwise the two genes for CPICH power and antenna tilt are taken from parent 1. If the algorithm decides not to produce a child by recombination (with a probability of (1 − p_c)), then the first selected parent is taken over unchanged. After Recombinate outage, the population again has a size of n individuals, consisting only of the children produced by the recombination.
During the development of the Genetic Algorithm this recombination operator showed the best performance. So, for the results presented in Chapter 9, only the Recombinate outage operator is used.
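A Python sketch of the Recombinate outage operator is given below. The per-cell statistics (number of outaged mobiles and QF) are assumed to be available from the last network evaluation; the data layout is illustrative.

    import random

    def recombinate_outage(population, cell_stats, p_c=0.8):
        # cell_stats[individual_index][cell_index] -> (outaged_mobiles, qf)
        n = len(population)
        children = []
        for _ in range(n):
            i1, i2 = random.randrange(n), random.randrange(n)
            parent1, parent2 = population[i1], population[i2]
            if random.random() >= p_c:
                children.append(list(parent1))                 # no recombination: parent 1 is taken unchanged
                continue
            child = list(parent1)
            for cell in range(len(parent1) // 2):              # two genes (CPICH, tilt) per cell
                out1, qf1 = cell_stats[i1][cell]
                out2, qf2 = cell_stats[i2][cell]
                if out2 < out1 and qf2 > qf1:                  # parent 2 is better in this cell
                    child[2 * cell]     = parent2[2 * cell]
                    child[2 * cell + 1] = parent2[2 * cell + 1]
            children.append(child)
        return children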

Recombinate basic

In contrast to the previous recombination operator, the Recombinate basic operator randomly selects two individuals from the population (parent 1 and parent 2) n/2 times and produces two new children (referred to as child 1 and child 2) with a recombination probability p_c.
First, parent 1 bequeaths all its genes to child 1 and parent 2 bequeaths all its genes to child 2. So, child 1 and child 2 have the same parameter settings for CPICH power and tilt as parent 1 and parent 2, respectively. Then, for each cell in the network the number of outaged mobiles and the QF of the two children are compared. If the number of outaged mobiles of child 2 is smaller than that of child 1 and the QF of child 2 is better for this cell, then the corresponding genes for this cell are exchanged between the children; otherwise the genes are not exchanged.
If the algorithm decides not to produce children by recombination (with a probability of (1 − p_c)), then the selected parents are taken over unchanged into the population. After Recombinate basic, the population again has a size of n individuals, consisting only of the children produced by the recombination.

Recombinate average

The last version of the recombination operator doesn't take any performance indicator into account. Hence, this operator performs worst of all 3 procedures. For the sake of completeness this recombination version is also explained in this thesis.
The Recombinate average algorithm randomly selects two individuals (referred to as parent 1 and parent 2) from the population n times and produces a new child with a recombination probability p_c. For each cell in the network the average of the parents' CPICH power and antenna tilt settings is calculated and taken as the setting for the new child. Equ. (8.9) and Equ. (8.10) show the calculation of the average values.

$$\mathrm{CPICH}_{child} = \frac{\mathrm{CPICH}_{parent\,1} + \mathrm{CPICH}_{parent\,2}}{2} \qquad\qquad (8.9)$$

$$\mathrm{tilt}_{child} = \frac{\mathrm{tilt}_{parent\,1} + \mathrm{tilt}_{parent\,2}}{2} \qquad\qquad (8.10)$$

If the algorithm decides not to produce a child by recombination (with a probability of (1 − p_c)), then the first selected parent is taken over unchanged into the new population. After Recombinate average, the population again has a size of n individuals.

8.3.2.4 Mutation

The mutation operator is performed with each individual of the population. The
algorithm decides for each gene of the individual with a mutation probability pm
whether the value of the gene will be mutated or not (for CPICH power and
antenna tilt separately). In Figure 8.9, the mutation for one cell is shown.

Figure 8.9: Mutation operator.

Parameters are changed from their old value x_old to their new value x_new according to the following rule:

$$x_{old} - \Delta x \leq x_{new} \leq x_{old} + \Delta x \qquad\qquad (8.11)$$

The value for Δx in Equ. (8.11) is set to 0.5 for CPICH power and antenna tilt, respectively. From this interval the value x_new is randomly chosen with a resolution of 0.5. So, three solutions are possible: x_new = x_old − Δx, x_new = x_old, or x_new = x_old + Δx. If x_new lies outside the search space, then the corresponding limiting value is taken.
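The mutation operator translates into a few lines of Python; the search-space limits are those of Section 8.3.1, and the gene layout follows the sketch in that section.

    import random

    def mutate(individual, p_m=0.1, delta=0.5,
               cpich_range=(15.0, 38.0), tilt_range=(2.0, 10.0)):
        # Each gene is mutated with probability p_m by -delta, 0 or +delta (Equ. (8.11)).
        mutated = list(individual)
        for idx, x_old in enumerate(mutated):
            if random.random() >= p_m:
                continue
            x_new = x_old + random.choice((-delta, 0.0, +delta))
            lo, hi = cpich_range if idx % 2 == 0 else tilt_range   # even genes: CPICH, odd genes: tilt
            mutated[idx] = min(max(x_new, lo), hi)                 # clip to the search space
        return mutated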

8.3.2.5 Local Optimization

After the evolution process, a local optimization with the best local_num individuals is carried out. The flowchart of the local optimization is shown in Figure 8.10.
First, the best local_num individuals are selected. For each of these individuals, local_iter local optimization iterations are performed. These parameters are specified in the initialization file by the attributes local_num and local_iter in the element <ga> (Appendix G). Each iteration includes two steps. In the first step, the parameters are changed according to the quality of the cells in the network. The rules for changing the parameters are based on the Rule Based Approach from Section 8.2.1, with the extension that the parameters can be changed in both directions. In the second step, the individuals are evaluated. If the fitness value for the new parameter setting of the individual is better than the old one, then this setting is taken; otherwise the old parameter setting is retained.

Figure 8.10: Detailed flowchart of local optimization.

I use two rules for the local optimization, the first rule (rule 1) to shrink a cell in the network and the second rule (rule 2) to enlarge a cell. In each local iteration for each individual, a random value decides whether rule 1 or 2 is used. If rule 1 is selected, then it is checked for each cell whether there are outaged mobiles and whether the QF is bad (QF < 0.1). If this is the case, the CPICH power is decreased by 0.5 dB and the antenna downtilt is increased by 0.5° in this cell, both with a probability of 0.7. In the case of rule 2, in cells without outaged mobiles and with a good QF (QF > 0.1), the CPICH power is increased by 0.5 dB and the antenna downtilt is decreased by 0.5°, both with a probability of 0.5.
8.3.2.6 Adding Users as New Impulse

During the optimization process with the Genetic Algorithm, sometimes no better results are found for several generations. The mathematical meaning of this is that the algorithm gets stuck in a local optimum. This especially occurs near the end of the optimization. So, to counteract this phenomenon, the idea is to give the algorithm a new impulse to find a better result.
This new impulse is given by admitting new users with the Add Users function. Normally, the function is called if the GoS of the best solution in the population reaches a value of 96 %. Then, so many additional users are admitted to the system that the best solution has a GoS of approximately 95 %. In the initialization file it is possible to configure the algorithm in such a way that the Add Users function is called before the GoS reaches a value of 96 %. The attribute add_ever of the element <ga> (Appendix G) specifies the number of generations after which the function is called at the latest. This means, if add_ever is set to a value of 20, new users are admitted to the system at the latest after 20 generations, even if the GoS of the best individual of the population is lower than 96 %.
With this additional impulse the algorithm is led in a new direction and shows
altogether better results.
8.3.2.7 Reduced Population Size

A second idea to give a new impulse during the optimization with the Genetic Algorithm is to increase the selection pressure C_m after a certain number of generations. This can be done either by increasing the value of C_m, or by reducing the size of the population.
The idea of this strategy is that at first the algorithm searches more broadly and also tracks solutions with a not so good fitness value. After the selection pressure is increased, the algorithm focuses more on the better solutions, which improves the convergence behavior of the algorithm.
In my algorithm I implemented the following approach: In the initialization file two attributes (reduce_iter and min_pop_size) of the element <ga> configure the increase of the selection pressure (Appendix G). After reduce_iter generations the size of the population is halved. This means that the population is sorted by the individuals' fitness values, and only the first half is considered further. The lower limit of individuals in one population is specified by the attribute min_pop_size. So, the selection pressure C_m is increased⁹ after every reduce_iter generations. This means, for example, that if the Genetic Algorithm starts with a population size of 200 individuals and reduce_iter is set to 100, after 100 generations the size of the population is 100, after 200 generations there are only 50 individuals in the population, and so on. If the size of the population reaches the lower limit min_pop_size, then the number of individuals is not reduced any further.
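The halving step can be sketched as follows in Python; whether a reduction that would undershoot min_pop_size is skipped or clipped is a design detail not fixed by the description above, here it is simply skipped.

    def reduce_population(population, fitness, min_pop_size):
        # Keep only the fitter half of the population, down to the lower limit min_pop_size.
        if len(population) // 2 < min_pop_size:
            return population, fitness                     # lower limit reached: no further reduction
        order = sorted(range(len(population)), key=lambda i: fitness[i], reverse=True)
        keep = order[:len(population) // 2]
        return [population[i] for i in keep], [fitness[i] for i in keep]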
Simulations have shown that this approach only brings a very small increase of the performance of the Genetic Algorithm. So, this feature is not used for the simulation results presented in Chapter 9.

⁹ Note that a reduction of 50 % of the individuals does not mean an increase of the selection pressure by 50 %.
8.3.2.8 Implementation for Distributed Computing

Genetic Algorithms are well suited for a parallel implementation. The big advantage of a parallel implementation is that several computers share the work and so save computation time. If we look at the flow chart of the genetic approach (see Figure 8.7), we see that in the block "Evaluation" one network evaluation is performed for each individual. This means that altogether the simulation engine (see Figure 7.2 in Section 7.3) is started n times. Normally, the network evaluations are done in serial processing on one computer.
For Genetic Algorithms different methods are known for an implementation with several processors. The following list gives an overview of the methods. A description of the individual approaches can be found in [79, 92].
- Synchronous master/slave model: A master is responsible for population management, selection, recombination, mutation, etc. For the evaluation of the fitness function, the new individuals are sent to several slaves. When all slaves have done their work, the master starts with the next generation. Disadvantages:
  - The master has to wait until all processors have finished their network evaluations
  - Total breakdown if the master has a malfunction
  - High communication effort
- Semi-synchronous master/slave model: Works like a steady state algorithm¹⁰. The master feeds a slave with a new individual promptly after finishing an evaluation.
- Asynchronous master/slave model: Each processor has its own Genetic Algorithm, but they use one population together.
- Parallel island model: Such algorithms assume that several sub-populations evolve in parallel. The model includes a concept of migration (movement of an individual from one sub-population to another) and recombination between individuals from different sub-populations. This concept is discussed in detail in [21, 51].
In my Genetic Algorithm I use a synchronous master/slave model for the parallel implementation. This concept has some disadvantages and other concepts are better. However, it is used because a server/client program with this concept was provided by SYMENA, Software & Consulting GmbH. Figure 8.11 shows the topology of the system.

Figure 8.11: Distributed Computing.

On one processor the Genetic Algorithm is running together with a Distributed Client Scheduler. This scheduler is responsible for the communication with the clients (slaves). The Genetic Algorithm prepares the n XML input files (see
¹⁰ A steady state GA produces only one new individual per generation, and this new individual replaces one of the current population.

Section 7.3) for the individuals of the current population and initiates the scheduler to distribute the files to the clients. Figure 8.12 shows a screen-shot of the
Distributed Client Scheduler.

Figure 8.12: Distributed Client Scheduler.

The clients are running on other processors in the background. It is possible to connect up to 8 processors at the same time with the scheduler. When a client receives an input file, it starts the network evaluation, and when it is finished, the XML output file is sent back to the scheduler. Figure 8.13 shows a screen-shot of a client.

Figure 8.13: Client.

So, the clients evaluate the whole population. After finishing the evaluation of the last individual, the Distributed Client Scheduler finishes its job by preparing all the XML output files for the Genetic Algorithm.
With this approach a speedup in execution time can be achieved with a minimum of 3 clients. If only 2 clients are used, the TCP/IP overhead of the network compensates the advantage of more processors. If 8 clients are used at the same time, a speedup of about 600 % is achieved. This is very important for the evaluation of big network scenarios, where the evaluation of one single parameter setting takes a long time. Note that this parallel implementation only speeds up the runtime of the algorithm; it influences neither the course of the optimization nor the result.

8.3.3 Summary of Genetic Algorithm

During the development process several settings were tried for the parameters of
the Genetic Algorithm. Table 8.4 shows the best setting. With this setting the
highest increase was achieved for both scenarios (see Section 7.4).
I use a population size of 400 individuals with a selection pressure Cm of 5. With
this high value for Cm , I facilitate the production of so-called super-individuals to
shorten the runtime of the algorithm, thus accepting a reduced solution diversity
in the population. The algorithm stops after 350 iterations. This is equivalent to
150 000 network evaluations. On a Pentium IV with a clock rate of 2.0 GHz and
512 MB main memory, my Genetic Algorithm requires about 64 hours runtime
without distributed computing. If the parallel implementation with 8 clients is
used, the same algorithm takes only about 10 hours runtime.
Normally, the settings from Table 8.4 are used for the simulation results presented in Chapter 9. In cases where a different setting is used for the algorithm, the alternative setting is explicitly stated.

8.4 Analytic Optimization Algorithm

In this section ad hoc strategies for the adjustment of the three key optimization parameters antenna azimuth, antenna downtilt and CPICH power are described, which were developed during a diploma thesis by Wolfgang Karner. Ad hoc in this context means reaching the result only by considering the structure of the UMTS network, for example the positions of the base stations relative to each other, the height of the antennas, the height profile of the terrain, or the maximum transmit power of the base stations. The objective of these strategies is to use as few steps as possible in contrast to the step by step optimization strategies from Section 8.2 and Section 8.3, therefore needing as few network evaluations with CAPESSO™ as possible.
Population size               n = 400
Selection pressure            C_m = 5
Probability Recombination     p_c = 0.8
Probability Mutation          p_m = 0.1
Local Optimization:
  Number of individuals       local_num = 20
  Number of iterations        local_iter = 2
Area of search space:
  CPICH power                 15 dBm – 38 dBm
  Antenna downtilt            2° – 10°
  Resolution                  0.5 dB and 0.5°
Termination condition         after 350 iterations

Table 8.4: Best setting of optimization parameters for GA.


The following sections give a brief description of the developed strategies for the
three parameters. The full description is presented in [63].

8.4.1 Azimuth Adjustment Strategy

The azimuth adjustment strategy is based on studying the optimum solution for
the regular hexagonal layout. The optimum azimuth setting for the regular case
was discussed in several papers (e.g. [80]). Figure 8.14 shows the best and the
worst case for a scenario with 19 base stations equipped with 3-sector antennas.
According to [80], an improper direction of sector antennas in a regular hexagonal
layout can cause a capacity degradation exceeding even 20 % and requires an
increase of base station transmit power in the range from 3 to 6 dB.
The effect of these directions in a regular hexagonal grid can be seen in Figure 8.15a for the worst case and in Figure 8.15b for the best case, where the pathgain is presented in an area of 5000 m × 5000 m with 19 base stations equipped with 3-sector antennas. The plots in Figure 8.15 are produced by a downlink static UMTS FDD simulator, described in [16]. If the transmitted CPICH power level is added to the pathgain in the diagram, the received CPICH
power level at each point of the area can be obtained.

Figure 8.14: Best and worst case of antenna directions in a regular hexagonal grid. Source: [80].

In the worst case scenario
(Figure 8.15a), there are zones in the network with bad coverage. These critical
zones would need more transmit power to be covered and therefore would cause
more interference in the system. In the best case (Figure 8.15b), there are no
such holes of bad coverage, and the total area is covered more regularly.

Figure 8.15: Worst case (a) and best case (b) of base station azimuth in a regular grid, pathgain in dB.

8.4.1.1 Algorithm Description

The azimuth adjustment strategy is based on the knowledge of the regular case. The initial setting for the azimuth values can be chosen arbitrarily. Before optimization, all the base stations are marked as unchanged. The algorithm consists of two steps. First, the main lobes of a part of the antennas are turned towards so-called critical spots. In the second step the remaining antennas are interleaved, so that the interference in the network is minimized.

Step 1: Turn Base Stations to Critical Spots

For a given UMTS network, first the number and coordinates of the critical spots have to be defined, as well as the number of base stations nr_bs that should be turned towards these critical spots.
For the azimuth adjustment routine one or more critical spots have to be defined. We define a critical spot as a place in the network with a higher minimum distance to all the base stations than any other place; or in other words: a critical spot is a place with a large distance to all the surrounding base stations. Therefore, it is difficult to cover that area and to serve mobile stations located there.
For each critical spot the nearest nr_bs base stations are determined. The algorithm then adjusts the azimuth values of these nearest base stations, so that one of the main lobes of the three antennas points directly to that critical spot. The routine determines the rotation angles in such a way that the base stations have to be turned between 0° and 120° clockwise starting from north (0°). After a base station azimuth has been adjusted, this site is marked as changed and cannot be adjusted any more. Figure H.1 in Appendix H shows the flowchart of this step.
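The determination of a critical spot, i.e. the point with the largest minimum distance to all base stations, can be sketched with a simple grid search in Python; grid resolution and data layout are illustrative only.

    import math

    def find_critical_spot(base_stations, area, grid=100.0):
        # base_stations: list of (x, y) coordinates; area: (x_min, y_min, x_max, y_max)
        x_min, y_min, x_max, y_max = area
        best_point, best_dist = None, -1.0
        y = y_min
        while y <= y_max:
            x = x_min
            while x <= x_max:
                d = min(math.hypot(x - bx, y - by) for bx, by in base_stations)
                if d > best_dist:                 # larger minimum distance -> harder to cover
                    best_point, best_dist = (x, y), d
                x += grid
            y += grid
        return best_point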
Step 2: Interleaving of the Remaining Base Stations
In the second step, the remaining base stations are interleaved. During the azimuth adjustment there are two types of base stations: changed and unchanged ones. For each unchanged base station the distances to all already adjusted base stations are calculated. The base station with the minimum distance to an already changed base station is interleaved first and will be called bs_to_interleave in the following.
In the next step the rotation angle has to be calculated. For this purpose, the closest 5 already adjusted base stations (in the following denoted as bs_reference(i), i = 1..5) are taken into account. However, a base station is only taken as bs_reference(i) if it has a distance to bs_to_interleave smaller than 1.5 times the distance of the closest base station. Consequently, the maximum value of i can be smaller than 5.


The rotation angles by which bs_to_interleave has to be turned from 0° (north) in order to be interleaved with each bs_reference(i) are calculated. These angles are calculated such that the main lobe of one of the three antennas of bs_to_interleave points directly at bs_reference(i). If bs_reference(i) looks directly at bs_to_interleave, however, the angle is calculated so that no main beam of an antenna of bs_to_interleave points directly back at bs_reference(i). Bs_reference(i) is defined as directly looking at bs_to_interleave if bs_to_interleave is within an angle of 30° around the main lobe of one antenna of bs_reference(i), as can be seen in Figure 8.16.

Figure 8.16: Check if bs_reference(i) looks at bs_to_interleave.

Now we have up to 5 possible angles for turning bs_to_interleave. If there is only one base station as reference, the case is trivial: we only have one angle, and thus we can directly take that angle for bs_to_interleave. If there is more than one reference base station (bs_reference(i)), however, the angle for adjusting bs_to_interleave is calculated by using a sliding window over all the angle values. The sliding window size was selected to be 20°, so that the maximum deviation resulting from calculating the adjusting angle does not exceed 10°. The adjusting angle is the mean of the angles in the window. Due to the period of the angles of 120°, the center of the sliding window runs from 10° to 130° to close the calculation over the period. Of course, the angles between 0° and 20° have to be copied to the equivalent locations between 120° and 140° first.

Figure 8.17: Window 1 contains the most angles.


To calculate the rotation angle for bs_to_interleave, we choose the window which contains the most angles (Figure 8.17). If the number of angles is equal in all windows, we take the window with the minimum difference between the angles contained in it and calculate the mean angle. If there is only one angle in every window, the angle of the nearest base station is taken as the value for turning bs_to_interleave. After the azimuth value of base station bs_to_interleave has been adjusted, it is marked as changed.
The algorithm repeats step 2 of the azimuth adjustment routine until no unchanged base stations remain. Figure H.2 in Appendix H shows the flowchart of
step 2.
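The heart of this step is the sliding-window averaging of the candidate rotation angles (one per reference site, all reduced to the 120° sector period). A minimal sketch under these assumptions, which also omits the nearest-site tie-break described above:

```python
def interleaving_angle(candidate_angles, window=20.0, period=120.0):
    """Rotation angle for bs_to_interleave, chosen from the candidate angles
    towards its reference sites (all reduced to [0, period) degrees).

    A window of `window` degrees is slid over the angle axis; the window
    holding the most candidates wins (ties: smallest spread) and the mean of
    its members is returned, so the deviation from any member stays below
    window/2, i.e. 10 degrees here."""
    if len(candidate_angles) == 1:
        return candidate_angles[0]
    # Copy angles in [0, window) to [period, period + window) so that windows
    # straddling the wrap-around at 120 degrees are handled as well.
    extended = list(candidate_angles) + [a + period for a in candidate_angles if a < window]
    best_key, best_angle = None, None
    centre = window / 2.0                      # centres run from 10 to 130 degrees
    while centre <= period + window / 2.0:
        members = [a for a in extended
                   if centre - window / 2.0 <= a <= centre + window / 2.0]
        if members:
            key = (len(members), -(max(members) - min(members)))
            if best_key is None or key > best_key:
                best_key, best_angle = key, sum(members) / len(members)
        centre += 1.0
    return best_angle % period
```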

8.4.2 Strategy for Antenna Downtilt

The optimum adjustment of the antenna downtilt would be to achieve maximum antenna gain for the mobile stations in the own cell while at the same time having maximum loss in the far-end interference area, that is, in the neighboring cells. A trade-off has to be found between the requirements of the own users and the advantages for the neighboring cells.
In the analytic optimization strategy the antenna downtilt is adjusted according to the mean elevation angle of the users in the coverage area of the cell with respect to the base station antenna, as shown in Figure 8.18.

Figure 8.18: Elevation angle of the cell area.

The antenna downtilt is not set directly equal to the mean elevation angle, but according to certain rules obtained from results of the Rule Based Approach from Section 8.2.1. The mean elevation angle is defined as the mean value over all elevation angles of the mobiles in one cell. Single mobiles which are covered in the cell but are situated, e.g., just below the antenna or in a remote area of coverage, and therefore have very large or very small elevation angles that would falsify the mean elevation, are not considered for the calculation. For CPICH coverage verification mode 1 these rules are presented in Table 8.5 and in Figure 8.19 as a linearized function over the mean elevation angle. For CPICH coverage verification mode 2 these rules are shown in Table 8.6 and Figure 8.20.
Mean elevation angle [°]     Antenna downtilt [°]
  up to -1.5                        -4
  -1.5 to -1                        -3
  -1 to -0.5                        -2
  -0.5 to 0                         -1
  0 to 0.5
  0.5 to 1
  1 to 1.5
  1.5 to 2
  2 to 2.5
  2.5 to 3
  3 to 4
  4 to 5
  5 to 6
  6 to 7

Table 8.5: Rules for adjusting the antenna downtilt according to the mean elevation angle in CPICH coverage verification mode 1.

These rules are only valid for the antenna pattern of the used KATHREIN 739707
antenna [64]. For a different antenna the rules will be different. Note that the difference between CPICH coverage verification mode 1 and mode 2 is exactly 2°.
Figure H.3 in Appendix H shows the flowchart for the implementation of the
antenna tilt adjustment.
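A compact sketch of this adjustment, assuming flat terrain, a simple percentile-based trimming of the outlier mobiles (the exact exclusion rule is not specified here) and only the rule-table entries recoverable from Table 8.5:

```python
import math

# First entries of the mode-1 rule table recoverable from Table 8.5
# (upper bound of the mean elevation angle -> antenna downtilt), in degrees.
RULES_MODE1 = [(-1.5, -4), (-1.0, -3), (-0.5, -2), (0.0, -1)]

def elevation_angle(antenna_height_m, mobile_height_m, distance_m):
    """Elevation of a mobile as seen from the antenna (positive = below it)."""
    return math.degrees(math.atan2(antenna_height_m - mobile_height_m, distance_m))

def mean_elevation(angles, trim=0.05):
    """Mean elevation over the mobiles of one cell; the extreme angles
    (mobiles just below the antenna or at the far edge of coverage) are
    discarded so that they do not falsify the mean."""
    angles = sorted(angles)
    k = int(len(angles) * trim)
    kept = angles[k:len(angles) - k] or angles
    return sum(kept) / len(kept)

def downtilt_from_rules(mean_elev, rules=RULES_MODE1):
    """Piecewise-constant lookup: the first rule whose upper bound is not
    exceeded determines the downtilt; beyond the listed rules the last
    downtilt value of this sketch is simply kept."""
    for upper_bound, tilt in rules:
        if mean_elev <= upper_bound:
            return tilt
    return rules[-1][1]
```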

8.4.3 CPICH Power Level Adjustment Strategy

In Chapter 5.3 it has been shown that the optimum CPICH power level is the
lowest CPICH value that can be received correctly by the mobile in the serving
area. With this minimum power level, too much overlapping in the CPICH
coverage areas is avoided and therefore the minimum pilot pollution and the minimum interference are created in the system.


Figure 8.19: Function for adjustment of the antenna downtilt according to the
mean elevation angle in CPICH coverage verification mode 1.

Mean elevation angle [°]     Antenna downtilt [°]
  up to -1.5                        -6
  -1.5 to -1                        -5
  -1 to -0.5                        -4
  -0.5 to 0                         -3
  0 to 0.5                          -2
  0.5 to 1                          -1
  1 to 1.5
  1.5 to 2
  2 to 2.5
  2.5 to 3
  3 to 4
  4 to 5
  5 to 6
  6 to 7

Table 8.6: Rules for adjusting the antenna downtilt according to the mean elevation angle in CPICH coverage verification mode 2.


Figure 8.20: Function for adjustment of the antenna downtilt according to the
mean elevation angle in CPICH coverage verification mode 2.

Furthermore, due to the transmit
power limitation of the base stations, it is also an advantage to have the minimum
CPICH level, in order to provide more power resources to the traffic channels and
therefore achieve a higher capacity.

The same result can be seen in Figure 8.21, where the total transmit power of
one cell in the optimization area of the big network scenario (see Figure 7.3 in
Section 7.4.1) is presented over the CPICH power level. This curve is a result of
simulations with CAP ESSOT M in CPICH coverage verification mode 1. During the simulation the CPICH power levels of all cells in the scenario have been
decreased. To achieve a valid result, the cell had 19 mobiles served for all measurement points. What we can see is that by decreasing the CPICH power level
of the cells, the total transmit power of the cells decreases as well, not only due
to the amount of reduced CPICH power, but also because of a certain down
swinging effect due to a reduced interference level in the system.

Thus, the main task in optimizing the CPICH power level is to avoid a too high
degree of cell overlapping while still having a certain minimum CPICH power
for providing the required CPICH coverage in the defined area. Therefore, the
overall problem is to find that minimum CPICH level for maintaining CPICH
coverage. Due to the significance of the correct calculation of CPICH coverage
and because of the quite different coverage verification modes 1 and 2, two
strategies for the two modes have been developed.


Figure 8.21: Transmit power of one cell in the optimization area of the big network
scenario.

8.4.3.1 Strategy for CPICH Coverage Verification Mode 1

In CPICH coverage verification mode 1, the coverage is defined by the mobiles' receiver sensitivity, but without consideration of any interference. Therefore, the
required coverage can be maintained down to an unrealistic CPICH power level
of less than 0 dBm at a receiver sensitivity of -120 dBm, as set in the simulator. To
have a more realistic lower bound of CPICH power level, a minimum of 15 dBm
was set for the CPICH in the optimization strategy.
Starting from that minimum CPICH power level of 15 dBm, it is quite easy to
adjust the CPICH power levels in all cells for the required CPICH coverage in
the defined area, since no interference is included in the coverage verification.
What we have to do is to look at all the pixels in the defined area which are not covered and adjust the CPICH power level of the cell with the smallest path loss to that pixel. The amount by which the CPICH power of the base station has to be changed is the difference between the received CPICH power level at the mobile and the threshold of -120 dBm. In this way we try to cover a predefined part of the
system area. One UMTS network operator for example could aim for a coverage
of 98% in urban and 80% in rural areas.
In the XML output file of the simulator no direct information about the received
CPICH power levels at every pixel of the scenario is available, so an overcrowded
scenario with about 100 users per cell is used. This is no problem in CPICH
coverage verification mode 1, as for coverage verification the interference is not
considered. These 100 mobile stations per cell, which are measuring the received
CPICH level, are almost covering the whole cell area and therefore provide good statistics for the CPICH levels in the cell.
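Since mode 1 ignores interference, the adjustment reduces to a single pass over the measurement points; a minimal sketch with a hypothetical data layout (path gain in dB from every cell to every point):

```python
RX_SENSITIVITY_DBM = -120.0   # receiver sensitivity threshold set in the simulator
CPICH_MIN_DBM = 15.0          # lower bound used in the optimization strategy
CPICH_MAX_DBM = 33.0          # upper bound assumed here for illustration

def adjust_cpich_mode1(cpich_dbm, pathgain_db, points):
    """cpich_dbm:   dict cell -> CPICH power [dBm], initialised to CPICH_MIN_DBM.
    pathgain_db:    dict (point, cell) -> path gain [dB] (negative values).
    points:         measurement points of the area that has to be covered."""
    for p in points:
        # Cell with the smallest path loss (largest path gain) towards p.
        best = max(cpich_dbm, key=lambda c: pathgain_db[(p, c)])
        received = cpich_dbm[best] + pathgain_db[(p, best)]
        if received < RX_SENSITIVITY_DBM:
            # Raise the pilot by exactly the missing amount.
            needed = cpich_dbm[best] + (RX_SENSITIVITY_DBM - received)
            cpich_dbm[best] = min(needed, CPICH_MAX_DBM)
    return cpich_dbm
```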

8.4.3.2 Strategy for CPICH Coverage Verification Mode 2

In CPICH coverage verification mode 2 the CPICH coverage is defined by a certain CPICH Ec/I0 threshold. In this mode the simulator considers the interference level in the system for the coverage verification. Due to the fact that
when changing the CPICH power levels, the interference is changing as well, it is
very difficult or impossible to adjust the minimum required CPICH power level
in one step.
The worst case for coverage is the case with the highest possible interference level
in the system which happens when all the base stations are transmitting with the
maximum power. At the same time this is the ideal case for capacity, because
the system will accept users until the power limit is reached if there is no other
capacity limitation factor reached before (see Chapter 4 for capacity limitation
factors).
Therefore, we can conclude that the CPICH coverage should always be adjusted according to the worst case, which is the highest possible interference level in the system. This is the ideal case if all the cells are transmitting with maximum power. However, it is not ideal if all the base stations have the same maximum transmit power level: small cells would then create more inter-cell interference for the adjacent cells because of their high total transmit power. Therefore, the transmit powers of the cells have to be adapted such that the interference levels do not exceed a certain threshold at the cell border. With this strategy (smaller cells, for example, have a lower total power limit than big cells) the worst case (= ideal case) in the network will be reached.
In the strategy for CPICH coverage verification mode 2 the worst case is determined by creating the highest possible interference level in the system with the
maximum transmit powers of the cells and then adjusting the CPICH power levels of the cells in order to achieve the required coverage probability in the defined
area. The maximum transmit powers of the base stations are achieved by setting
the PCH value in the simulator to the maximum (40 dBm) and the SCH value such that the total transmit power of the cell (43 dBm) is reached.
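This worst-case setup is plain power bookkeeping in the linear domain; a small sketch, assuming for illustration that only CPICH, PCH and SCH contribute to the common-channel budget:

```python
import math

def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10.0)

def mw_to_dbm(p_mw):
    return 10.0 * math.log10(p_mw)

def sch_for_full_load(total_dbm=43.0, pch_dbm=40.0, cpich_dbm=33.0):
    """SCH power that fills the cell up to its total transmit power limit."""
    remaining_mw = dbm_to_mw(total_dbm) - dbm_to_mw(pch_dbm) - dbm_to_mw(cpich_dbm)
    return mw_to_dbm(remaining_mw)

# 43 dBm total with the PCH at 40 dBm and the CPICH at 33 dBm leaves
# roughly 39 dBm (about 8 W) for the SCH.
print(round(sch_for_full_load(), 1))
```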
As in CPICH coverage verification mode 1, an overcrowded scenario is used
again for adjusting the CPICH power levels. However, when setting the total
transmit powers of the cells to the maximum by increasing the SCH and the
PCH, there will not be any mobile served. Therefore, for example 100 mobiles
in each cell are investigated. For each mobile the CPICH power level of the

serving cell is adjusted in order to reach the received Ec/I0 requirement; for the simulations in Chapter 9 the Ec/I0 requirement is usually set to -12 dB (Section 7.2.2.2). If the
requirement is already fulfilled, the CPICH power remains unchanged.
Figure H.4 and Figure H.5 in Appendix H show the flowcharts for the implementation of the CPICH power adjustment for CPICH coverage verification mode 2.
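The per-mobile correction itself can be sketched as follows (hypothetical inputs: the received CPICH level and the total received power I0 per mobile under the worst-case interference; the feedback of the pilot change on I0 is neglected in this single pass, which is exactly why a one-step adjustment is difficult in this mode):

```python
EC_I0_TARGET_DB = -12.0   # CPICH Ec/I0 requirement used for the Chapter 9 simulations

def adjust_cpich_mode2(initial_cpich_dbm, mobiles):
    """mobiles: iterable of dicts with
         'cell'    -- serving cell id
         'ec_dbm'  -- received CPICH level at the mobile [dBm], measured with
                      initial_cpich_dbm under the worst-case interference
         'i0_dbm'  -- total received wideband power at the mobile [dBm]
    Returns the adjusted CPICH power per cell."""
    adjusted = dict(initial_cpich_dbm)
    for m in mobiles:
        shortfall = EC_I0_TARGET_DB - (m['ec_dbm'] - m['i0_dbm'])
        if shortfall > 0:
            required = initial_cpich_dbm[m['cell']] + shortfall
            # Keep the largest pilot level that any mobile of this cell needs.
            adjusted[m['cell']] = max(adjusted[m['cell']], required)
        # Mobiles that already meet the requirement leave the pilot unchanged.
    return adjusted
```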


Chapter 9

Algorithm Performance Analysis

9.1 Introduction

In this second main chapter, the performance of the various algorithms introduced in Chapter 8 is presented. Usually the big network scenario (see Section 7.4.1) is used for the analysis of the algorithms. If the small network scenario
(7.4.2) is used, it is explicitly mentioned.
The results for the local algorithms Rule Based Approach, Simulated Annealing
and Adaptive Rule Based Approach, are presented in Section 9.2, Section 9.3
and Section 9.4, respectively. Section 9.5 presents the results for my Genetic
Algorithm. The analysis for the Analytic Optimization Algorithm is presented
in Section 9.6. Section 9.7 compares all the different algorithms.
Note that in general the results of the different sections cannot be compared with
each other, because the network simulator was also improved during the development of the individual algorithms. However, all results presented within one section are evaluated with the same version of CAPESSO™.

9.2 Results for the Rule Based Approach

In this section the results for the Rule Based Approach (Section 8.2.1) are presented. The network simulations are performed with CPICH coverage verification
mode 1 and the rule set from Appendix D is used. The results of the optimization
are shown in Table 9.1. Before optimization the number of served users is 483
in the optimization area. After optimization with the Rule Based Approach, 817
users are served. The GoS in both cases is about 95 %. This increase in served
users leads to a capacity gain of 69.5 %.


                                       Before optimization   After optimization
Served users in optimization area              483                  817
Covered users in optimization area             512                  856
GoS [%]                                        94.3                 94.5
Capacity gain [%]                                       69.5
Computational effort [iterations]                        70

Table 9.1: Results for the Rule Based Approach with CPICH verification mode 1.
Further, the algorithm was executed with various numbers of iterations (see iter in Table D.1 in Appendix D). The default setting of the rule set is 6 to 10 iterations. Table 9.2 shows the results. The table shows that there are not enough iterations in the first case (4..8 iterations), while the best result (822 served users) has been obtained by using considerably more iterations.
Iterations          Served users in optimization area after optimization
4..8                765
5..9                807
6..10 (default)     817
7..11               807
8..12               810
9..13               822
10..14              819

Table 9.2: Results for the Rule Based Approach with CPICH verification mode 1.

9.2.1 Various Investigations with the Rule Based Approach

Based on the default rule set from Table D.1 in Appendix D, several investigations
have been carried out. In each investigation, one parameter was changed, while
all others were kept constant during the optimization process.

9.2.1.1 Optimization on Different Snapshots

In this investigation the Rule Based Approach was performed on several snapshots with different user distributions. This means that before the optimization
a new user distribution was created and then the algorithm worked with this specific distribution. During the optimization process the distribution of the users
remained the same.
Figure 9.1 shows the results for 20 different snapshots. In the legend of the figure
Start serv and End serv denote the served users in the optimization area before
and after the optimization. Start cov and End cov are the covered users. In
snapshot 1 the default user distribution is used (result from previous section).

Figure 9.1: Results for the Rule Based Approach on different snapshots.

The numbers of served and covered users lie within a relatively close range. The average number of served users at algorithm start is 459. After optimization there are 791 served users in the optimization area on average (compared to
483 and 817 for the result of the previous section).
9.2.1.2 Analyzing Optimization Results with Different Snapshots

A very important question is how stable an optimized network is towards changing user distributions. In this section, results are presented in which the user distribution is changed after the optimization process. For 5 different snapshots from the previous section (snapshots 1, 6, 11, 14 and 18; all shaded in grey in Figure 9.1), 20 further snapshots are simulated with CAPESSO™ after the optimization.
For the optimized scenarios where the algorithm performed well (snapshots 1, 11 and 18), the results are shown in Figures 9.2, 9.3 and 9.4.

Figure 9.2: Results for the Rule Based Approach on different snapshots after
optimization for snapshot 1.
As Figures 9.2, 9.3 and 9.4 show, the network is clearly optimized for the scenario generated by the respective snapshot (if the network is originally optimized
for snapshot 18, then the results are best if the same user distribution is used
after optimization - see shaded areas in the figures). However, apart from these
maxima, the results are relatively stable. This is important, since in a real scenario the users will move all the time, so it would make no sense to optimize the
network for a static distribution of users that is never allowed to change.
When changing the user distribution of badly optimized scenarios (snapshots 6 and 14), the results are not as good. Figures 9.5 and 9.6 show the results.
In these cases, there is no such clear maximum in the results if the user distribution is left unchanged - in some cases even better results are obtained by
changing it! This indicates that certain distributions of users exist that the algorithm cannot optimize efficiently. On the other hand, this can also be seen as
an advantage: in those cases, the algorithm did not narrowly optimize for a very
specific user distribution, and thus the results are worse in absolute numbers, but
more generally valid.


Figure 9.3: Results for the Rule Based Approach on different snapshots after
optimization for snapshot 11.

Figure 9.4: Results for the Rule Based Approach on different snapshots after
optimization for snapshot 18.


Figure 9.5: Results for the Rule Based Approach on different snapshots after
optimization for snapshot 6.

Figure 9.6: Results for the Rule Based Approach on different snapshots after
optimization for snapshot 14.

9.2.1.3 Optimization with Modified Parameter Ranges

In this evaluation, the allowed range for modifications of the parameters (CPICH power and antenna tilt) was adapted. Unless otherwise noted - and unlike in the
rule set from Appendix D - the indicated limits were valid throughout the whole
simulation (compared to different limits in each rule, see Table D.1). Table 9.3
shows the individual results (bold limits indicate restrained parameter ranges,
compare Table D.1).
CPICH power    Antenna tilt    Served users in       Covered users in      GoS [%]
[dBm]          [°]             optimization area     optimization area
15                             816                   855                   93.9
20                             765                   805                   94.7
20                             756                   794                   95.1
25                             730                   767                   95.2
15                             820                   863                   93.4
15                             718                   754                   95.2
25                             694                   730                   95.1
24..20         5..8            776                   816                   95.0
20..10         5..9            774                   814                   92.1
10             6..10           716                   754                   94.9

Table 9.3: Results for the Rule Based Approach with different parameter ranges.
Applying constraints to the range of parameters mostly degrades the results. If they need to be in place for some external reason, the algorithm can still deliver good results; it is, however, preferable not to have such constraints.
Figure 9.7 shows a 3D-visualization of the optimization gain at combinations of
limited CPICH power and antenna tilt ranges.
Obviously, for the used network scenario, CPICH power modifications should be allowed as far down as 15 dBm, and antenna downtilts should be increasable up to 8°. If the range is too large, however, the results deteriorate. This is mostly due to the algorithm's linear design - in each step it only moves into one direction and keeps the results if they are slightly better than the results obtained in the previous step - even if a smaller step would have been more beneficial. The gains obtained by a true bi-directional algorithm saturate at a higher level, even if the allowed parameter ranges are too large (see results for the Adaptive Rule
Based Approach in Section 9.4).


Figure 9.7: 3D matrix of parameter range limits for the Rule Based Approach.

9.2.1.4 Optimization on Adapted Start Scenarios

For a given scenario, it might be advantageous to modify certain attributes of all cells (e.g. a downtilt of 3° instead of 0° for each antenna in the whole network).
The purpose of these investigations was to run the algorithm on such an a priori
improved scenario. Table 9.4 shows the initial values and end results for scenarios
with a fixed tilt on all antennas.
                 Initial values                         Results
Antenna tilt     Served users in        GoS [%]         Served users in        GoS [%]
[°]              optimization area                      optimization area
                 483                    94.4            817                    94.6
                 485                    94.7            795                    94.6
                 484                    94.5            790                    93.7
                 521                    95.3            736                    95.3
                 555                    95.4            732                    95.2
                 572                    95.0            784                    93.6
                 616                    95.2            824                    94.1
                 694                    95.3            805                    94.4

Table 9.4: Results for the Rule Based Approach with adapted antenna tilt in the start scenario.
The column Served users in optimization area (Initial values) indicates the usefulness of setting the downtilt to the indicated value for all cells (for this particular scenario).
Similar simulations have been carried out, modifying the CPICH power. The
results are presented in Table 9.5.
These results show that the developed Rule Based Approach depends on the
initial setting of the CPICH power and antenna tilt. The main reason for this is
that the algorithm is only able to go in one direction.

9.2.2 Runtime of the Rule Based Approach

For the evaluation a Pentium IV with a clock rate of 2 GHz and 512 MB main
memory was used. The algorithm with the rule set from Appendix D and with a
first version of the network simulator (not runtime optimized) requires about 1 h
30 min for the 80 iterations.



                 Initial values                         Results
CPICH power      Served users in        GoS [%]         Served users in        GoS [%]
[dBm]            optimization area                      optimization area
33               483                    94.4            817                    94.6
32               485                    94.7            815                    94.6
31               486                    94.9            807                    93.8
30               494                    95.2            833                    94.0
29               510                    95.0            814                    94.1
27               521                    95.3            776                    94.1
25               524                    95.1            767                    94.5

Table 9.5: Results for the Rule Based Approach with adapted CPICH power in the start scenario.

9.3 Results for Simulated Annealing

For the results with Simulated Annealing (Section 8.2.2) usually the rule set from
Table E.1 is used. The network simulations are performed with CPICH coverage
verification mode 1 on the big network scenario (7.4.1). Before optimization the
number of served users is 483 in the optimization area, as in Section 9.2.
In this section results with both introduced cooling functions, Slow Cooling
(Equ. 8.2) and Geometric Cooling (Equ. 8.1) are presented (Section 9.3.1 and
Section 9.3.2). Further, results with the rule set from Table E.2 are presented
(Section 9.3.3).

9.3.1 Results with Slow Cooling

For Slow Cooling an initial cooling temperature of 0.07 was used for TC. The value of the cooling parameter was changed from 0.9 to 1.4. For each parameter value three trials were carried out. The results are shown in Table 9.6.
The results show that the cooling parameter has an important influence on the cooling function. The best result (average of 3 runs), 858 served users in the optimization area, has been obtained with a cooling parameter of 1.1. This corresponds to a capacity increase of 77.6 %.
Table 9.7 shows results with different initial values of TC. With this parameter the shape of the cooling function can be changed as well, similar to the cooling parameter.


Trial   Cooling parameter   Served users in       Number of     Mean value of
                            optimization area     iterations    served users
1       0.9                 827                   102
2       0.9                 836                   119           844
3       0.9                 868                   114
1       1.0                 868                   121
2       1.0                 803                   128           836
3       1.0                 839                   123
1       1.1                 870                   104
2       1.1                 859                   112           858
3       1.1                 846                   144
1       1.2                 846                   135
2       1.2                 846                   126           832
3       1.2                 805                   184
1       1.3                 849                   102
2       1.3                 774                   159           809
3       1.3                 805                   122
1       1.4                 729                   131
2       1.4                 824                   141           791
3       1.4                 821                   101

Table 9.6: Results for Simulated Annealing with Slow Cooling, different values for the cooling parameter.

The shape of the cooling function is a very important factor, because the probability of taking a bad result heavily depends on TC. The parameter TC was varied from 0.03 to 1. For the cooling parameter a value of 1.1 was used.
Trial   Initial temperature   Served users in       Number of     Mean value of
        for TC                optimization area     iterations    served users
1       0.03                  718                   163
2       0.03                  801                   131           790
3       0.03                  852                   140
1       0.04                  710                   155
2       0.04                  767                   157           760
3       0.04                  804                   190
1       0.05                  851                   122
2       0.05                  860                   117           843
3       0.05                  828                   127
4       0.05                  834                   123
1       0.07                  870                   104
2       0.07                  859                   112           858
3       0.07                  846                   144
1       0.1                   829                   132
2       0.1                   821                   126           808
3       0.1                   773                   157
1       0.5                   854                   104
2       0.5                   829                   133           837
3       0.5                   828                   131
1       1                     866                   103
2       1                     860                   87            852
3       1                     829                   111

Table 9.7: Results for Simulated Annealing with Slow Cooling, different values for TC.
The best result was achieved with an initial value for TC of 0.07. So, I conclude that the best choice of the Slow Cooling parameters for the UMTS base station parameter optimization problem is a cooling parameter of 1.1 and an initial value of TC of 0.07. However, I have to mention that considerably more trials should be carried out for each cooling parameter setting. Due to the long runtime of the algorithm it was only possible to make a few trials per setting. With only 3 runs, statistical significance cannot be established.

9.3.2 Results with Geometric Cooling

For the simulations with the Geometric Cooling function (Equ. 8.1), the initial
value of TC was set to 0.07. The cooling parameter was changed from 0.95 to 0.99. For each parameter value three trials have been carried out. The rule set was the same
as in Section 9.3.1. The results are shown in Table 9.8.
From the table we see that the best results are obtained with a value of 0.990 for the cooling parameter. The algorithm always found the same solution, but each time with a different number of iterations. If we compare Slow Cooling and Geometric Cooling, we can conclude that Geometric Cooling provides slightly better results.
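For reference, the two cooling schedules and the acceptance rule can be sketched as follows; the exact forms are given by Equations 8.1 and 8.2 in Chapter 8, so the geometric schedule and a Lundy-Mees-type slow-cooling schedule are only assumed here as stand-ins, together with a Metropolis-style acceptance:

```python
import math
import random

def geometric_cooling(t, alpha=0.990):
    """Geometric cooling: the temperature shrinks by a constant factor."""
    return alpha * t

def slow_cooling(t, beta=1.1):
    """Slow cooling of the Lundy-Mees type, used here only as a stand-in
    for the exact form of Equation 8.2."""
    return t / (1.0 + beta * t)

def accept(delta_fitness, t):
    """Metropolis-style acceptance: a worse solution (delta_fitness < 0) is
    accepted with a probability that drops as the temperature decreases."""
    return delta_fitness >= 0 or random.random() < math.exp(delta_fitness / t)

# Cooling from the initial temperature TC = 0.07 used throughout Section 9.3:
t = 0.07
for _ in range(5):
    t = geometric_cooling(t)
print(round(t, 4))   # 0.0666 after five iterations with alpha = 0.990
```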

9.3.3 Results with Improved Rule Set

I extended the rule set from Table E.1 by two additional rules (Table E.2). Now,
the algorithm is able to adjust the CPICH power and antenna tilt parameters with
a higher resolution.
Both cooling functions have been tested with the following settings:

- Slow Cooling: initial value for TC: 0.07, cooling parameter 1.1
- Geometric Cooling: initial value for TC: 0.07, cooling parameter 0.990

With each cooling function three simulations have been carried out, to see if the
results are independent of the Simulated Annealing random value (probability
for the choice of bad results). Table 9.9 shows the results of the simulations.
From the table we see again that Geometric Cooling outperforms Slow Cooling on average. The single best result was found with Slow Cooling; however, Geometric Cooling found the same good result in each trial.


Trial   Cooling parameter   Served users in       Number of     Mean value of
                            optimization area     iterations    served users
1       0.950               717                   145
2       0.950               740                   140           729
3       0.950               731                   106
1       0.960               727                   107
2       0.960               736                   149           738
3       0.960               750                   128
1       0.970               846                   136
2       0.970               821                   135           839
3       0.970               849                   96
1       0.975               832                   121
2       0.975               846                   126           847
3       0.975               863                   100
1       0.980               855                   116
2       0.980               866                   102           862
3       0.980               866                   108
1       0.985               860                   89
2       0.985               830                   113           852
3       0.985               867                   95
1       0.990               866                   105
2       0.990               866                   96            866
3       0.990               866                   123

Table 9.8: Results for Simulated Annealing with Geometric Cooling.

Cooling function      Trial   Served users in       Number of     Mean value of
                              optimization area     iterations    served users
Slow Cooling          1       886                   188
                      2       831                   131           850
                      3       834                   153
Geometric Cooling     1       879                   143
                      2       879                   162           879
                      3       879                   159

Table 9.9: Results for Simulated Annealing with improved rule set.

9.3.4 Runtime of Simulated Annealing

For the evaluation a Pentium IV with a clock rate of 2 GHz and 512 MB main
memory was used. The two rule sets have slightly different runtimes. Rule
set 1 (see Table E.1) requires on average 124 iterations, and therefore the average
runtime is about 2 h 10 min. Rule set 2 (see Table E.2) needs on average 156
iterations, and therefore the average runtime is about 2 h 50 min.

9.4 Results for the Adaptive Rule Based Approach

As described in Chapter 8 (Section 8.2.3), the implemented Adaptive Rule Based Algorithm is limited by a set of rules which restrict the changes of CPICH power and antenna tilt per adjustment. In this section the results are presented for the four developed rule sets (Appendix F). The differences between the four rule sets are the step size and parameter (CPICH power and antenna tilt) limitation settings.
The results in this section are obtained with CPICH coverage verification mode 1.
Before and after optimization 50 snapshots with different user distributions are
simulated. The optimization process itself is performed with one fixed user distribution. Before optimization, the mean number of served users in the optimization
area is 511. Figure 9.8 shows the block diagram for the simulation.
Figure 9.9 gives the cumulative distribution function (cdf) of the achievable number of served users in the optimization area, at a GoS of 95 % over the 50 snapshots. It shows that different rule sets have significant influence on the optimization result.


Figure 9.8: Block diagram for the simulation of the four rule sets over 50 snapshots.

Figure 9.9: Comparison of cdf curves for the four rule sets of the Adaptive Rule
Based Approach (50 snapshots).
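The cdf curves of Figure 9.9 follow directly from the 50 per-snapshot counts of served users; a minimal sketch with hypothetical example values:

```python
def empirical_cdf(served_users_per_snapshot):
    """(x, F(x)) pairs: fraction of snapshots with at most x served users."""
    values = sorted(served_users_per_snapshot)
    n = len(values)
    return [(x, (i + 1) / n) for i, x in enumerate(values)]

# A handful of hypothetical snapshot results:
print(empirical_cdf([842, 831, 853, 839, 847]))
```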


The fluctuation of the cdf curves suggests that different rule sets favor different
realizations of the user distribution. There is no particular rule set that is able
to achieve the highest number of served users in all 50 snapshots. Nevertheless,
the gradient of the cdf curves illustrates that rule set #4 gives the most stable
performance among the four different rule sets.
Additionally, the mean achieved number of served users for each rule set, shown
in Table 9.10, indicates that rule set #4 has the highest mean number of served users and therefore yields a capacity gain of 66.9 %.
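This gain is simply the relative increase of the mean number of served users: (853 - 511)/511 = 0.669, i.e. 66.9 %.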
Simulation              Mean # of served       Standard deviation     Capacity
                        users in target area   (# served users)       gain [%]
Before optimization     511                    7.54 (1.48 %)
Rule set #1             831                    31 (3.7 %)             62.6
Rule set #2             839                    29 (3.5 %)             64.2
Rule set #3             842                    34 (3.9 %)             64.8
Rule set #4             853                    25 (2.9 %)             66.9

Table 9.10: Results for Adaptive Rule Based Approach with CPICH verification mode 1.
It is noticeable that the standard deviation of served users with respect to different
user distributions after the optimization is much higher than before. This means
that the optimized network is less stable regarding the user distribution.
In this thesis only a small part of the simulations is shown. In [23] more results
and a stability analysis of the algorithm are presented.
The runtime of the Adaptive Rule Based Algorithm depends on the used rule set.
On average the algorithm requires about 150 iterations. On a Pentium IV with
a clock rate of 2 GHz and 512 MB main memory the runtime is approximately 2 h 45 min.

9.5 Results with the Genetic Algorithm

The Genetic Algorithm has been tested on both scenarios (small and big network scenario) as well as with both CPICH coverage verification modes. Several
different settings of the parameters for the Genetic Algorithm, like size of the population, selection pressure and parameters for the local optimization have been
tested. Table 8.4 in Section 8.3.3 shows the best settings. However, good results


have also been obtained with other settings. As fitness function, the extended
fitness function from Equ. (6.2) was used.

9.5.1 CPICH Coverage Verification Mode 1

9.5.1.1 Results on the Small Network Scenario

For the small network scenario altogether 60 optimization runs have been carried
out, always with the same user distribution. Before the optimization 347 served
users are in the network. Figure 9.10 shows the results.
In Figure 9.10 the results are in chronological order of execution. From the figure
we see that during the development process the results got better. The mean number of served users over all 60 runs is 488. This corresponds to a capacity
increase of 40.6 %. The standard deviation is 17.6 (3.6 %).
The best result was achieved with the optimization run at index 53. After this
optimization run 518 users were served; this corresponds to a capacity increase
of 49.3 %. Figure 9.11 shows the growth of the fitness value (fitness value of the
best individual and average fitness of the population) during the optimization for
this run and Table 9.11 presents the used parameters for the Genetic Algorithm.
Population size                 n = 100
Selection pressure              Cm = 5
Local optimization:
  Number of individuals         local_num = 20
  Number of iterations          local_iter = 2
Area of search space:
  CPICH power                   15 dBm to 33 dBm
  Antenna downtilt              0° to 8°

Table 9.11: GA settings for the best optimization run on the small network scenario with CPICH verification mode 1.
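With these settings, one GA individual is a vector of one (CPICH power, downtilt) pair per cell, constrained to the listed search space; a minimal sketch of the encoding and initialisation (hypothetical names, not the CAPESSO™ interface):

```python
import random

CPICH_RANGE_DBM = (15.0, 33.0)   # search space from Table 9.11
TILT_RANGE_DEG = (0.0, 8.0)

def random_individual(n_cells, rng=random):
    """One candidate solution: a (CPICH power, downtilt) pair per cell."""
    return [(rng.uniform(*CPICH_RANGE_DBM), rng.uniform(*TILT_RANGE_DEG))
            for _ in range(n_cells)]

def initial_population(n_cells, size=100):
    """Population of n = 100 individuals, as used for this optimization run."""
    return [random_individual(n_cells) for _ in range(size)]
```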
For the best optimization run the algorithm terminates after 172 populations.
However, Figure 9.11 shows that the final result is already found after 75 populations. Afterwards, no better results were found. To evaluate the stability of the
result, the CPICH power and antenna tilt settings before and after optimization
were evaluated with 100 different user distribution snapshots. Table 9.12 shows
the results of the analysis.


Figure 9.10: Results for the Genetic Algorithm on the small network scenario
with CPICH verification mode 1.


Figure 9.11: Optimization run for the best result on the small network scenario
with CPICH verification mode 1.

Number of served users          Before optimization     After optimization
in optimization area
Min                             323                     460
Max                             363                     518
Mean                            344                     483
Standard deviation              7.2 (2.1 %)             9.1 (1.9 %)
Capacity gain [%]                          40.4

Table 9.12: Evaluation of the best result with 100 different snapshots on the small network scenario with CPICH verification mode 1.


The capacity increase for the mean number of served users is only 40.4 %. However, from Table 9.12 we see that the result after optimization is as stable as before optimization, because the relative standard deviation does not increase. Furthermore, since the optimization has been carried out for one single user distribution, no higher result than 518 served users was found during the evaluation with different user distributions.

9.5.1.2 Results on the Big Network Scenario

For the big network scenario altogether 17 optimization runs have been carried
out, again always with the same user distribution. Before the optimization 506
served users are in the optimization area. Figure 9.12 shows the results, again in
chronological order of execution.
From Figure 9.12 we see again that optimization runs, which were carried out
later in the development process, deliver better results. The mean number of served users over all 17 runs is 1064. This corresponds to a capacity increase of
110.3 %. The standard deviation is 77.8 (7.3 %).
The best result was achieved with the optimization run with index 17. After
this optimization run, 1148 users were served in the optimization area; this corresponds to a capacity increase of 126.9 %. Figure 9.13 shows the growth of the
fitness value during the optimization for this run and Table 9.13 presents the
used GA parameters.
Population size                 n = 100
Selection pressure              Cm = 5
Local optimization:
  Number of individuals         local_num = 20
  Number of iterations          local_iter = 2
Area of search space:
  CPICH power                   15 dBm to 33 dBm
  Antenna downtilt              0° to 8°

Table 9.13: GA settings for the best optimization run on the big network scenario with CPICH verification mode 1.
The algorithm runs altogether 335 populations. However, like for the small network scenario the best result was already found after 198 populations. From
Figure 9.13 we again see (like in Figure 9.11) the biggest increase of the fitness


Figure 9.12: Results for the Genetic Algorithm on the big network scenario with
CPICH verification mode 1.


Figure 9.13: Optimization run for the best result on the big network scenario
with CPICH verification mode 1.

in the first part of the optimization process (up to population 100). Afterwards,
only a small increase of the fitness values per population is observed. To evaluate the stability of the result, again the CPICH power and antenna tilt settings
before and after optimization were evaluated with 100 different user distribution
snapshots. Table 9.14 shows the results.
Number of served users          Before optimization     After optimization
in optimization area
Min                             485                     1044
Max                             529                     1148
Mean                            511                     1128
Standard deviation              7.5 (1.5 %)             13.0 (1.2 %)
Capacity gain [%]                         120.7

Table 9.14: Evaluation of the best result with 100 different snapshots on the big network scenario with CPICH verification mode 1.
The capacity increase for the mean number of served users is 120.7 % and the


result after optimization is again as stable as before optimization.

9.5.2 CPICH Coverage Verification Mode 2

All results obtained with CPICH coverage verification mode 2 were analyzed before and after the optimization with 100 different user distribution snapshots. However, the optimization itself has been carried out with one fixed user distribution. For the CPICH Ec/I0 threshold a value of -12 dB is used.
9.5.2.1 Results on the Small Network Scenario

Figure 9.14 shows the results on the small network scenario. Altogether, 7 different optimization runs have been carried out. Before the optimization, on average 352 users are served in the network. The mean number of served users after the optimization over all 7 runs is 374. This corresponds to a capacity increase of 6.3 %. The standard deviation is 7.6 (2.0 %).
Index 5 in Figure 9.14 shows the best result. After this optimization run 382 users are served on average; this corresponds to a capacity increase of 8.5 %. If
we compare the results from CPICH coverage verification mode 2 with mode 1,
we see that now the capacity increase is much lower. This is due to the fact
that for mode 2 the interference is included in the calculation and so the network
simulator delivers more pessimistic results.
Table 9.15 summarizes the optimization results and Table 9.16 presents the used
parameters. Again, as for CPICH coverage verification mode 1, the growth
of the fitness value during the optimization run is shown for the best run (index
5 in Figure 9.14). See Figure 9.15 for the diagram.
Figure 9.15 shows that the optimization run terminates after 452 populations.
Only one user distribution has been used during the optimization process. The
starting value is 351 served users and the final value is 397 served users. The
best fitness value is reached after 416 populations. If we compare the curve
from Figure 9.15 with the fitness growth of CPICH coverage verification mode 1
(Figure 9.11), we see that now the increase in fitness is slower. Remember, in
Figure 9.11 the best result was found after 75 populations.
9.5.2.2 Results on the Big Network Scenario

For the big network scenario altogether 21 optimization runs have been carried
out. Before the optimization, on average 512 users are served in the optimization
area. Figure 9.16 shows the results.


Figure 9.14: Mean results over 100 snapshots for the Genetic Algorithm on the
small network scenario with CPICH verification mode 2.


Number of served users          Before optimization     After optimization
in optimization area
Min                             334                     366
Max                             369                     397
Mean                            352                     382
Standard deviation              6.8 (1.9 %)             6.5 (1.7 %)
Capacity gain [%]                           8.5

Table 9.15: Evaluation of the best result with 100 different snapshots on the small network scenario with CPICH verification mode 2.

Population size                 n = 100
Selection pressure              Cm = 5
Local optimization:
  Number of individuals         local_num = 20
  Number of iterations          local_iter = 2
Area of search space:
  CPICH power                   15 dBm to 33 dBm
  Antenna downtilt              0° to 8°

Table 9.16: GA settings for the best optimization run on the small network scenario with CPICH verification mode 2.


Figure 9.15: Optimization run for the best result on the small network scenario
with CPICH verification mode 2.

In Figure 9.16 the results are in chronological order of execution. From the figure
we see that during the development process the results get better. The mean number of served users over all 21 runs is 656. This corresponds to a capacity
increase of 28.1 %. The standard deviation is 22.4 (3.4 %).
The best result was achieved with the optimization run with index 16. After optimization 693 users are served on average. This corresponds to a capacity increase of 35.4 %. Again, as for the small network scenario, we see that the network simulator is more pessimistic and so the capacity gain is much lower than with CPICH coverage verification mode 1. Remember, with CPICH verification mode 1 we had a capacity increase of 120 %! However, we have to keep in mind that the results of CPICH coverage verification mode 2 approximate the real situation better.
The optimization results for the best run are summarized in Table 9.17 and the
GA parameters in Table 9.18. Figure 9.17 shows the growth of the fitness
value during the optimization run.
In Figure 9.17 the starting value of the optimization is 498 served users and the termination value is 733 served users in the optimization area. The optimization
runs 338 populations. From the figure we see that first there is a very sharp rise in
fitness. Then, from population 150 to 280 no better result is found. Afterwards,


Figure 9.16: Mean results over 100 snapshots for the Genetic Algorithm on the
big network scenario with CPICH verification mode 2.


Number of served users          Before optimization     After optimization
in optimization area
Min                             498                     672
Max                             530                     733
Mean                            512                     693
Standard deviation              7.5 (1.5 %)             9.6 (1.4 %)
Capacity gain [%]                          35.4

Table 9.17: Evaluation of the best result with 100 different snapshots on the big network scenario with CPICH verification mode 2.

Population size                 n = 400
Selection pressure              Cm = 5
Local optimization:
  Number of individuals         local_num = 20
  Number of iterations          local_iter = 2
Area of search space:
  CPICH power                   15 dBm to 38 dBm
  Antenna downtilt              2° to 10°

Table 9.18: GA settings for the best optimization run on the big network scenario with CPICH verification mode 2.


Figure 9.17: Optimization run for the best result on the big network scenario
with CPICH verification mode 2.

the algorithm finds a new solution and so gets a new impulse. The best result
is then found after 320 populations. From the figure we see also that, if we are
only interested in a good result and do not want to wait until the algorithm has converged, the algorithm can be stopped after 150 populations. In this way a lot of computation time is saved.

9.5.3 Runtime of the Genetic Algorithm

The runtime of the Genetic Algorithm depends on the network scenario and on
the size of the population. For the presented results two different population
sizes were used: 100 and 400 individuals. On a Pentium IV with a clock rate of
2 GHz and 512 MB main memory the Genetic Algorithm runs on the big network
scenario with 100 individuals about 15 h and with 400 individuals about 65 h.
The small network scenario requires approximately half of the time of the big
network scenario. The version of the network simulator which was used for the Genetic Algorithm was runtime-optimized and therefore much faster than the first version used for the local algorithms.

9.6 Results for the Analytic Optimization Algorithm

For obtaining the results with the Analytic Optimization Algorithm, both CPICH coverage verification mode 1 and mode 2 have been used. The two main results in the following Sections 9.6.1 and 9.6.2 are the number of served users in the optimization area (nr_served_target) and the mean number of served users in the optimization area over 40 different user distribution snapshots (mean_cap_target). The reference scenario is characterized by antenna downtilts of 0° (+3° electrical tilt in the antenna pattern) for all the antennas, and CPICH power levels of 33 dBm in all the cells in the scenario. In Tables 9.19, 9.20 and 9.21, four columns of results are presented. The first column shows the results of the reference scenario. The other three columns, titled Adj azimuth, Adj tilt and Adj CPICH, then contain the results after running the strategy for the adjustment of the antenna azimuth, after the additional strategy for the antenna downtilt and after the additional CPICH adjustment strategy, respectively.
In the adjustment routine for the antenna azimuth, three critical spots (for the definition of a critical spot see Section 8.4.1) inside the optimization area were used. In the strategy for the adjustment of the antenna downtilt, the initial tilt value was set to 3° (+3° electrical tilt) in all cells, while the initial CPICH power value (start_CPICH) was varied for the different simulations. An initial CPICH power level (start_CPICH) is also necessary for the CPICH adjustment strategy. The actually used initial CPICH power level for each particular simulation is given in the additional notes in Tables 9.19, 9.20 and 9.21.
In the simulations, the required coverage probability for the worst case in the total
scenario was set to 50 % and in the optimization area to 75 %, or 80 % in the total
scenario and 98 % in the optimization area, as again indicated in the additional
notes in the various tables. In CPICH coverage verification mode 2 the coverage
probability is indicated as CPICH Ec/I0 cov_prob_total or CPICH Ec/I0 cov_prob_target, while in CPICH coverage verification mode 1 it is indicated simply as cov_prob_total or cov_prob_target.
For the overcrowded scenario in antenna tilt and CPICH power strategies, 5000
mobiles were used.
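The four result columns correspond to one network evaluation after each stage of the analytic procedure; a sketch of this evaluation pipeline, with hypothetical function names standing in for the simulator call and for the three adjustment strategies of Section 8.4:

```python
def analytic_optimization(network, evaluate, adjust_azimuth, adjust_tilt,
                          adjust_cpich, n_snapshots=40):
    """Apply the three adjustment strategies in sequence and evaluate the
    network after each stage; `evaluate` is assumed to return the mean
    number of served users in the optimization area over n_snapshots
    user-distribution snapshots (mean_cap_target)."""
    results = {'reference': evaluate(network, n_snapshots)}

    adjust_azimuth(network)               # strategy of Section 8.4.1
    results['adj_azimuth'] = evaluate(network, n_snapshots)

    adjust_tilt(network)                  # strategy of Section 8.4.2
    results['adj_tilt'] = evaluate(network, n_snapshots)

    adjust_cpich(network)                 # strategy of Section 8.4.3
    results['adj_cpich'] = evaluate(network, n_snapshots)
    return results
```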
In Figures 9.18 and 9.19, the mean numbers of served users in the optimization area
from the Tables 9.19, 9.20 and 9.21 are presented in bar charts. The bar chart for
CPICH coverage verification mode 1 (Figure 9.18) shows the mean capacity in the
optimization area in the reference scenario (initial parameter setting) in blue, as
well as after the adjustment of base station azimuth (green bar), antenna downtilt
(yellow bar) and CPICH power level (red bar). The bar chart for CPICH coverage
verification mode 2 (Figure 9.19) presents an additional bar for the variation of

the required coverage probability for the worst case in the optimization area.
Thus, in CPICH coverage verification mode 2 there are two different bars for the mean number of served users in the optimization area after the CPICH adjustment routine: one for a required coverage probability of 0.8 in the total scenario and 0.98 in the optimization area, and another for a required coverage probability of 0.5 in the total scenario and 0.75 in the optimization area. It is important to remember that these required coverage probabilities apply only to the worst case. The effective coverage probabilities in a normal interference situation are presented in Tables 9.19, 9.20 and 9.21 and are between 0.91 and 1, both in the total scenario and in the optimization area.

9.6.1 CPICH Coverage Verification Mode 1

As Figure 9.18 shows, the mean capacity in the optimization area can be increased
from 512 mean served users in the reference scenario (blue bar) up to 829 mean
served users (red bar). This equals a gain of 62 % in mean capacity compared
to the reference scenario. It is important to mention that after the CPICH
adjustment the CPICH power level in all cells is 15 dBm, equal to the minimum
CPICH power level. It is also shown in Table 9.19 that the reached coverage
probability after the CPICH strategy in the total scenario (cov_prob_total) is more than 91 % and in the optimization area (cov_prob_target) is 100 %, even though the required coverage probability was set to 0.8 and 0.98 in the total scenario and in the optimization area, respectively. However, if using a parameter setting
with a CPICH power level of 33 dBm in all cells, thus utilizing the routines
without the CPICH procedure, there is also an increase in mean capacity in the
optimization area from 512 to 703 mean served users (yellow bar in Figure 9.18).
This is an improvement of 37 % compared to the reference scenario.
                     Reference scenario   Adj azimuth   Adj tilt   Adj CPICH
nr_served_target           508               540           691        821
mean_cap_target            512               557           703        829
cov_prob_total                               0.9963        0.9904     0.9193
cov_prob_target

Additional notes: start_CPICH in adj tilt: 15 dBm, start_CPICH in adj CPICH: 15 dBm, final CPICH values = 15 dBm in all cells, required coverage probability: 0.8 total and 0.98 in target area, best server equal dist.

Table 9.19: Results for the Analytic Optimization Algorithm with CPICH verification mode 1 (40 snapshots).


Figure 9.18: Results for the Analytic Optimization Algorithm with CPICH verification mode 1 (40 snapshots).

9.6.2 CPICH Coverage Verification Mode 2

For this mode the results are shown in Table 9.20 for a required coverage probability in worst case of 0.5 in the total scenario and 0.75 in the optimization
area. Before parameter adjustment the mean number of served users is 520 in
the reference scenario. The improvement of capacity in the optimization area,
which can be reached by using CPICH coverage verification mode 2 is 15 % (to
595 served users) compared to the reference scenario after adjusting the antenna
azimuth and of 21 % (to 630 served users) after the antenna tilt routine. After
the procedure for adjustment of CPICH power levels, however, Table 9.20 shows
a decrease in capacity. This is due to the fact that for reaching the coverage requirements in the worst case with the quite severe CPICH Ec/I0 threshold of -12 dB,
the CPICH power levels have to be increased from the initial value of 33 dBm in
the reference scenario. The final CPICH power levels after CPICH adjustment
are between 32 dBm and 35 dBm. If using the more severe requirements in coverage probability of 0.8 in the total scenario and 0.98 in the optimization area, the
final result goes down even further, to 553 served users in the optimization area
(see Table 9.21). The final CPICH power levels in this case are between 35 dBm
and 37 dBm. Figure 9.19 shows the results in a diagram.
Comparing the results from CPICH coverage verification mode 1 and mode 2,
we can see that the possible improvement by optimization is smaller in the more
realistic case (mode 2), because the CPICH coverage verification includes the interference calculation.


                               Reference scenario   Adj azimuth   Adj tilt   Adj CPICH
nr_served_target                     480               569           624        614
mean_cap_target                      520               595           630        612
CPICH Ec/I0 cov_prob_total           0.9339            0.9288        0.9433     0.9598
CPICH Ec/I0 cov_prob_target          0.9545            0.9533        0.9635     0.9860

Additional notes: start_CPICH in adj tilt: 36 dBm, start_CPICH in adj CPICH: 32 dBm, final CPICH values: approx. 32-35 dBm, required coverage probability in worst case: 0.5 total and 0.75 in target area, best server equal dist., CPICH Ec/I0 threshold = -12 dB.

Table 9.20: Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots) with CPICH Ec/I0 threshold -12 dB and required coverage probability in worst case of 0.5/0.75.

                               Reference scenario   Adj azimuth   Adj tilt   Adj CPICH
nr_served_target                     480               569           624        522
mean_cap_target                      520               595           630        553
CPICH Ec/I0 cov_prob_total           0.9339            0.9288        0.9433     0.9823
CPICH Ec/I0 cov_prob_target          0.9545            0.9533        0.9635

Additional notes: start_CPICH in adj tilt: 36 dBm, start_CPICH in adj CPICH: 32 dBm, final CPICH values: approx. 35-37 dBm, required coverage probability in worst case: 0.8 total and 0.98 in target area, best server equal dist., CPICH Ec/I0 threshold = -12 dB.

Table 9.21: Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots) with CPICH Ec/I0 threshold -12 dB and required coverage probability in worst case of 0.8/0.98.


Figure 9.19: Results for the Analytic Optimization Algorithm with CPICH verification mode 2 (40 snapshots) with CPICH Ec/I0 threshold -12 dB.

9.6.3 Runtime of the Analytic Optimization Algorithm

The Analytic Optimization Algorithm requires only 5 network evaluations. Therefore the runtime is very short. On a Pentium IV with a clock rate of 2 GHz and 512 MB main memory the algorithm requires, with the fast version of the network simulator, only about 1 min.

9.7 Comparison of the Various Approaches

In this section a comparison of the individual algorithms is given. The big network scenario is evaluated with CPICH coverage verification mode 1. Table 9.22 shows the results for 50 user distribution snapshots, and in Figure 9.20
the corresponding diagram is presented.
From Table 9.22 we see that the Genetic Algorithm performs best. It outperforms
all other algorithms easily. Besides the best optimization results, we can also conclude that the GA delivers stable results. This means that the resulting


Simulation /                      Number of served users    Capacity    Number of network
Algorithm                         in optimization area      gain [%]    evaluations
Before optimization                       511
Rule Based Approach                       806                 57.7            80
Simulated Annealing (a)                   815                 59.5           150
Adaptive Rule Based Approach              853                 66.9           150
Genetic Algorithm (b)                    1046                104.7        150000
Analytic Optimization Algorithm           821                 60.7             5

(a) The presented value is the result of only one run. However, for a fair comparison the mean value of several runs should be calculated.
(b) The presented value is the mean value over 17 different runs.

Table 9.22: Comparison of the different algorithms with 50 different snapshots on the big network scenario with CPICH verification mode 1.


Figure 9.20: Comparison of the different algorithms on the big network scenario
with CPICH verification mode 1.


parameter settings also work well with different user distribution snapshots. For the other algorithms I cannot claim this. However, the Genetic Algorithm has one significant drawback: its runtime. The computational effort is much higher
than for all other algorithms.
If we compare the local optimization techniques, Rule Based Approach, Simulated Annealing (for a fair comparison of Simulated Annealing, the mean value of several runs should be calculated) and Adaptive Rule Based Approach, we see that the Adaptive Rule Based Approach shows the best result. The computational effort is approximately the same for all three algorithms.
Comparing the result of the Analytic Optimization Algorithm with the local
techniques, we see that almost the same improvement is reached. It is important
to mention that the computational effort with only 5 network evaluations is much
smaller than for all other optimization algorithms.
Finally, in this chapter I conclude that if the computational effort is irrelevant,
then the Genetic Algorithm is the best choice for optimizing the base station
parameters in a UMTS network. However, if the runtime of the algorithm is
very important, the Analytic Optimization Algorithm shows a good performance,
because in only 5 evaluations a good result is achieved. If the computational effort
plays only an underpart, then the Adaptive Rule Based Approach is a good choice,
because with about 150 network evaluations the second best result is obtained.

² For a fair comparison, the mean value of several runs should be calculated.

Chapter 10
Summary and Conclusion
In this thesis I addressed the problem of capacity optimization in UMTS FDD
networks. The goal was to improve the capacity of the network, measured as
served users, without any additional expenses: The capacity should be improved
only by changing the parameters of the base stations.
For the operation of a network, a number of base station parameters influence the capacity and therefore affect the performance of the network. Each sector of a base station can be configured by selecting antenna type, antenna tilt, antenna pattern, and CPICH power. All these parameters have a strong influence on the interference in the system and therefore on the number of served users.
In this thesis I focused on the optimization of CPICH power and antenna tilt,
because these parameters have the most influence [54, 88]. By optimizing the
antenna tilt settings, the other-to-own-cell interference ratio can be reduced: The
antenna main beam delivers less power towards the neighboring base stations,
and therefore most of the radiated power goes to the area that is intended to be
served by this particular base station. Also the CPICH power settings are very
important: The CPICH power has to be set such that the coverage is ensured with
minimum interference to neighboring cells in order to reduce the pilot pollution.
Altogether, five different algorithms have been developed. The first three optimization algorithms, Rule Based Approach, Simulated Annealing and Adaptive Rule Based Approach, are local techniques. Also a global technique, the Genetic Algorithm, has been developed. The last algorithm is an analytic approach.
The Rule Based Approach starts from a scenario where all cells of the network
are set to identical values for CPICH power and antenna tilt. The optimization
process is characterized by reducing the CPICH power and increasing the antenna
downtilt in the individual cells according to a configurable rule set.
Based on the Rule Based Approach the algorithm was subsequently extended and

improved by incorporating Simulated Annealing. In contrast to the first method, the decision whether to accept a worse result is independent of the rule set.
The third algorithm was developed within the scope of a master thesis and is also
a further development of the Rule Based Approach. The main difference between
the Adaptive Rule Based Approach and the other two local approaches is that
CPICH power and antenna tilt are changed together, and that also an increase of
CPICH power and antenna up-tilting is possible during the optimization process.
In the next step a Genetic Algorithm was investigated and adapted for the UMTS
capacity optimization problem. I implemented a representation scheme, which
can also be utilized for optimizing other base station parameters. The genetic
operators were adapted for the UMTS capacity optimization problem by considering the quality of the network. In addition, a simple local optimization was
implemented, which is applied to the best individuals to improve their fitness.
The last discussed optimization algorithm is the Analytical Optimization Algorithm. This adjustment routine was developed within the scope of a master
thesis. Besides antenna tilt and CPICH power settings, the ad hoc algorithm
also optimizes antenna azimuth. Ad hoc in this context means that the result is reached only by considering the structure of the UMTS network. Thus, in contrast to the other step-by-step optimization strategies, only a few optimization steps
are necessary.
A fitness function is needed to represent the optimization goal. In this thesis, I
considered the number of served users as the goal of the optimization. For the
Genetic Algorithm I used a fitness function, which also includes coverage and
soft handover in addition to the capacity, in order to represent the quality of the
solutions more accurately.
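As a minimal illustration of how such a combined fitness function can be structured, the following sketch weights a capacity term with coverage and soft-handover terms. The weights, the SHO target, and the example numbers are assumptions made only for this illustration; this is not the implementation used in the thesis.

def fitness(num_served, total_users, coverage_prob, sho_proportion,
            w_capacity=1.0, w_coverage=0.3, w_sho=0.1, sho_target=0.3):
    """Illustrative fitness: the served-user ratio is the main term; coverage
    and a soft-handover proportion close to a target act as secondary terms.
    All weights and the SHO target are assumptions of this sketch."""
    capacity_term = num_served / total_users
    coverage_term = coverage_prob
    sho_term = 1.0 - abs(sho_proportion - sho_target)
    return w_capacity * capacity_term + w_coverage * coverage_term + w_sho * sho_term

# Example with assumed numbers: 850 of 1000 users served, 95 % coverage, 28 % SHO links
print(round(fitness(850, 1000, 0.95, 0.28), 3))

The idea is that the number of served users remains the dominant term, while coverage and soft handover only distinguish between solutions of similar capacity.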
For the evaluation of the network configuration, and consequently of the fitness function, a network simulator is necessary. I used the static UMTS FDD network simulator CAPESSO™ from SYMENA, Software & Consulting GmbH, for these evaluations. The simulator allows two different modes: CPICH coverage verification mode 1 with a fixed threshold for CPICH coverage, and CPICH coverage verification mode 2, which also takes the interference into account via an Ec/I0 threshold.
The analysis of the individual algorithms was done on two virtual scenarios of a
typical European city. In the first scenario the network covers the whole area of
the city. The second scenario contains only downtown.
With the different algorithms I achieved improvements in capacity of up to 105 % for CPICH coverage verification mode 1 and up to 35 % for CPICH coverage verification mode 2. In both modes the Genetic Algorithm performs best, but with
the drawback of a high computation time. The Rule Based Approach showed the

worst performance. The capacity increase was about 60 % for CPICH coverage
verification mode 1.
For the developed algorithms I therefore conclude: if the computational effort is irrelevant, the Genetic Algorithm is the best choice for optimizing the base station parameters in a UMTS network. However, if the runtime of the algorithm is very important, the Analytic Optimization Algorithm should be used, because this algorithm achieves good results in only 5 network evaluations.


Chapter 11
Appendix


Appendix A
UMTS - Network Structure
In this appendix an overview of the basic entities of the UMTS network is given. The overview is based on the standard presented in [1]. The basic configuration of a Public Land Mobile Network (PLMN) is shown in Figure A.1.
In the basic configuration presented in Figure A.1, all functions are assumed to be implemented in separate pieces of equipment. Therefore, all interfaces within the PLMN are external. Interfaces A and Abis are defined in the GSM 08-series of the Technical Specifications (TS). Interfaces Iu and Iur are defined in the UMTS 25.4xx-series of Technical Specifications. Interfaces B, C, D, E, F and G need the support of the Mobile Application Part of the signalling system No. 7 to exchange the data necessary to provide the mobile service. No protocols for the H-interface and for the I-interface are standardized. All GPRS-specific interfaces (G-series) are defined in the UMTS 23-series and 24-series of Technical Specifications. Interfaces Mc, Nb and Nc are defined in UMTS 23.205 and in the UMTS 29-series of Technical Specifications.¹
From this configuration, all possible PLMN organizations can be deduced. In the case when some functions are contained in the same equipment, the relevant interfaces become internal to that equipment. The individual blocks and functionalities of the Core Network (CN) are well presented in [1]. The CN can use two different types of access networks: the base station system (BSS) and the radio network system (RNS). The MSC (respectively the SGSN) can connect to one of these Access Network (AN) types or to both of them.²
For the network optimization the functionalities in, and the interfaces between, RNS and MS are of interest.³ In the network simulator from SYMENA, Software & Consulting GmbH, which is used throughout this thesis, the functionalities of the radio network controller (i.e. SHO, RRM, ...) are modeled under the assumption that there is no difference between two different RNCs. This means that the Iur-interface is not considered.

¹ All specifications can be found on the 3GPP server, http://www.3gpp.org.
² The access technologies offered by the BSS are described in the 45-series of 3GPP specifications. The access technologies offered by the RNS (FDD, TDD) are described in the 25-series of 3GPP specifications (www.3gpp.org).
³ The interface between the MS and the RNS is specified in the 24- and 25-series of UMTS Technical Specifications (www.3gpp.org).


Figure A.1: Overview and basic entities of the UMTS network structure.


Appendix B
3GPP COST 259 Channel
Models
This appendix is based on the deployment aspects for channel models in [7].
Within 3GPP only a small portion of the COST 259 models [27] is included. Nevertheless, in the following description the 3GPP point of view is presented.
COST 259 [27] is a research forum founded by the EU, with participants from manufacturers, operators and universities. This forum is a successor
of COST 207 [36] and COST 231 [28], which did the work on which the channel
models used in GSM standardization were based. One of the work items identified in COST 259 was to propose a new set of channel models that overcome
the limitations in the GSM channel models, while aiming at the same general
acceptance. The models are aimed at UMTS and HIPERLAN, with particular
emphasis on adaptive antennas and directional channels.

B.1

Model Descriptions

The main difference between the COST 259 model and previous models is that it
tries to describe the complex range of conditions found in the real world by distributions of channels rather than a few "typical" cases. The probability densities
for the occurrence of different channels are functions of mainly two parameters:
1. Environment
2. Distance
Given a certain environment (e.g. Urban Macrocell) and a certain distance (or
distance range/cell radius), the parameters describing the distribution functions

for this particular case can be extracted. Performing a sufficient number of channel realizations will give a distribution of channels, which gives a much better
representation of reality than what would be possible using only one channel.
The environments identified in COST 259 and included in 3GPP so far are given
in Table B.1. The macrocellular environments have the same names as the GSM
models. Further parameters and a much more detailed description of the model
can be found in [27, 98].
Macrocell         Microcell             Picocell
Typical Urban     (Street Canyons)      (Tunnel/Corridor)
Bad Urban         (Open Places)         (Factory)
Rural Area        (Tunnels)             (Office/Residential Home)
Hilly Terrain     (Street Crossings)    (Open Lounge)

Table B.1: Preliminary environments identified by COST 259.


In COST 259, a number of properties of the propagation channel have been considered in the model work. The full proposal includes all of these properties,
but it is quite simple and straightforward to implement the model in a modular
structure, so that each of the properties can be switched on or off individually
depending on the application. Inherent in the model are also correlations between the properties, e.g. time dispersion and shadow fading are modeled as
being partially correlated.

B.2

3GPP Considerations

The propagation properties considered in the COST 259 model and considered
by 3GPP are shown in Table B.2.
The shape of the channel is given by one or several clusters, where each cluster
is exponentially decreasing in delay and Laplacian (double-sided exponential) in
azimuth. Each cluster consists of a number of Rayleigh-fading paths, plus a
possible non-fading path to get Rice fading.
Of interest here are mainly the properties 4 and 7 shown in Table B.2. For this
case, a full description of the channel is given by specifying the parameter set (see
Figure B.1). The i-th cluster is described by its total power P_i, the delay of its first path τ_i, and the cluster delay spread σ_τ,i. The last parameter describes the slope of the exponentially decaying power in the cluster. The number of clusters present is given by N_C.


1  Path Loss
2  Shadow Fading
3  Fast Fading
4  Time Dispersion
5  Angular dispersion (azimuth and/or elevation at BS)
6  Polarization
7  Multiple Clusters
8  Dynamic channel variations (variations of 1-7)

Table B.2: Propagation properties proposed by COST 259 and considered by 3GPP.

Figure B.1: Channel shape (power delay profile) with multiple clusters. Source:
[7].


From the 3GPP point of view it is possible to reduce the complexity of the COST
259 model by approximating the continuous distributions with a small number of
cases, selected to be typical representations of the channel in common environments. 3GPP proposes a set of models with fixed parameters as shown in Figure
B.2. The selected parameters correspond to the COST 207/GSM models with
one important difference concerning the delay spread value for the Typical Urban
channel. This has been reduced to better correspond to typical measurement
results.

Figure B.2: Reduced complexity channel model parameters. Source: [7].

A cluster in the models outlined here is represented by a number N_P of independent Rayleigh-fading paths with classical Doppler spectrum, randomly distributed in the interval [τ_i, τ_i + k·σ_τ,i]. Preliminary assignments are N_P = 20 and k = 4.
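To make this construction concrete, the following minimal sketch draws the path delays and Rayleigh amplitudes for a single cluster. The exponential power decay and the normalization to the cluster power are assumptions of this illustration, not the simulator code.

import numpy as np

def cluster_taps(tau_i, sigma_tau_i, P_i, n_paths=20, k=4, rng=None):
    """One cluster of the reduced-complexity model: n_paths Rayleigh-fading
    paths, delays drawn uniformly in [tau_i, tau_i + k*sigma_tau_i], mean
    powers decaying exponentially with the cluster delay spread.
    The normalization to the cluster power P_i is an assumption."""
    rng = rng or np.random.default_rng(0)
    delays = np.sort(rng.uniform(tau_i, tau_i + k * sigma_tau_i, n_paths))
    mean_powers = np.exp(-(delays - tau_i) / sigma_tau_i)     # exponential power decay
    mean_powers *= P_i / mean_powers.sum()                    # scale to total cluster power
    amplitudes = rng.rayleigh(scale=np.sqrt(mean_powers / 2)) # one Rayleigh draw per path
    return delays, amplitudes

delays, amps = cluster_taps(tau_i=0.0, sigma_tau_i=0.5e-6, P_i=1.0)
print(len(delays), delays.min() >= 0.0, delays.max() <= 4 * 0.5e-6)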
For the evaluation of the optimization algorithms in this thesis, an improved version of this COST 259 model is used. A data set with all the path-loss information was kindly provided by Dipl.-Ing. Dr. Hermann Bühler GmbH, Mödling, Austria.

Appendix C
RAKE Reception
In this appendix the details about the RAKE implementation for UMTS are
explained. The basic operations for CDMA signal reception can be described in
three steps:
Time delay identification:
First of all the different time delays at which significant energy arrives
have to be identified. With this, the correlation receivers, i.e. the RAKE
fingers, have to be allocated to the individual peaks. Quoted from [54], the measurement grid for acquiring the multipath delays is in the order of one chip duration (typically within the range of 1/4 - 1/2 chip duration) with an update rate in the order of some tens of milliseconds.
Note that the chip duration in UMTS is Tc = 0.26 µs. With this, multipath components with a difference in delay of at least 0.26 µs can be separated and combined coherently. This difference in delay corresponds to a difference in path length of about 78 m, which can be obtained even in small cells. For IS-95 systems, with a chip duration of about 1 µs (i.e. about 300 m), this multipath diversity in small cells is not possible.
Tracking of phase and amplitude:
Within each RAKE finger, both phase and amplitude changes (caused by
small scale fading) have to be tracked and removed (see Figure C.1).
Signal Combination:
The different signal contributions (the individual RAKE fingers) have to be
combined coherently. The resulting symbols can then be presented to the
decoder for further processing.
In order to facilitate the tracking of both signal phase and amplitude, UMTS uses known pilot symbols that are used to sound the channel state

Figure C.1: The principle of maximum ratio combining within the CDMA RAKE
receiver. Source: [54].

(i.e. the weight vectors) for a particular finger. With this weight vector the received symbol is rotated back in order to cancel the phase rotation caused by the radio propagation channel. The channel-compensated symbols can then be summed up to recover the energy across all delays.¹
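A minimal sketch of this combining step, assuming the per-finger channel estimates have already been obtained from the pilot symbols (the toy channel values and names below are arbitrary):

import numpy as np

def mrc_combine(finger_symbols, channel_estimates):
    """Weight each finger's despread symbols with the conjugate of its channel
    estimate (this removes the phase rotation) and sum over the fingers."""
    weights = np.conj(channel_estimates)
    return np.sum(weights[:, None] * finger_symbols, axis=0)

# Toy example: 3 fingers, 4 QPSK symbols, arbitrary complex per-finger gains
rng = np.random.default_rng(1)
tx = (rng.choice([-1, 1], 4) + 1j * rng.choice([-1, 1], 4)) / np.sqrt(2)
h = np.array([0.8 * np.exp(1j * 0.4), 0.5 * np.exp(-1j * 1.1), 0.3 * np.exp(1j * 2.0)])
rx_fingers = h[:, None] * tx                  # each finger sees a rotated, scaled copy
combined = mrc_combine(rx_fingers, h)
print(np.allclose(np.angle(combined), np.angle(tx)))   # phase rotation cancelled -> True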
In Figure C.2 the block diagram of a W-CDMA RAKE receiver is shown. Code
generators and correlators perform the despreading and integration to user data
symbols of the received I- and Q-branches from the HF front-end. The channel estimation uses the pilot symbols to estimate the channel state, which is then removed from the received symbols by the phase rotator. In the Delay
Equalizer the different delays for the individual taps are compensated. Since the
individual taps are uncorrelated (they have different fading statistics), the delay
equalization provides a multi-path diversity gain.
In order to perform a successful despreading, code and data timing must be
known. These can be estimated by a so-called matched filter. A matched filter works the following way (see Figure C.3): We assume an incoming serial data stream. When the samples of the incoming serial data are equal to the bits of predefined data (i.e. pilot symbols), there is a maximum at the filter output.
Although there are several differences between the UMTS RAKE receivers in
the mobile and the base station [104], all the basic principles presented in this
appendix are the same. To learn more about RAKE reception in UMTS see
[54, 97].
¹ This maximum ratio combining (MRC) algorithm performs optimally if the interference is uncorrelated [108].

Figure C.2: Block diagram of a W-CDMA RAKE receiver. Source: [54].

Figure C.3: Schematic block diagram of matched-filter.


Appendix D
Rule Set for Rule Based
Approach
The standard rule set for the Rule Based Approach (Section 8.2.1) is shown in
Table D.1.
rule    param    delta     limit     iter
0       CPICH    -5 dB     24 dBm
1       tilt
2       CPICH    -4 dB     22 dBm
3       tilt
4       CPICH    -3 dB     20 dBm
5       tilt
6       CPICH    -2 dB     18 dBm
7       tilt
8       CPICH    -1 dB     15 dBm    10
9       tilt                         10

Table D.1: Standard rule set used for Rule Based Approach.
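To illustrate how such a rule is meant to be read, the following sketch applies a single CPICH rule to one cell. It is a simplified interpretation only: the iteration count in the example is assumed, and the real Rule Based Approach additionally re-evaluates the network after every change.

def apply_cpich_rule(cpich_dbm, delta_db, limit_dbm, iterations):
    """Apply one CPICH rule: change the CPICH power by delta_db per step, never
    crossing limit_dbm, for at most `iterations` steps."""
    history = [cpich_dbm]
    for _ in range(iterations):
        nxt = cpich_dbm + delta_db
        nxt = max(nxt, limit_dbm) if delta_db < 0 else min(nxt, limit_dbm)
        if nxt == cpich_dbm:
            break                                   # limit reached, rule exhausted
        cpich_dbm = nxt
        history.append(cpich_dbm)
    return history

# First CPICH rule of Table D.1: -5 dB steps down to 24 dBm (iteration count assumed)
print(apply_cpich_rule(33, -5, 24, 10))             # [33, 28, 24]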


Appendix E
Rule Sets for Simulated
Annealing
The two rule sets, which are used for the Simulated Annealing algorithm (Section 8.2.2) are listed in the following tables.
rule    param    delta      limit     iter
0       CPICH    -5 dB      24 dBm    20
1       tilt                          20
2       CPICH    -4 dB      22 dBm    20
3       tilt                          20
4       CPICH    -3 dB      20 dBm    20
5       tilt                          20
6       CPICH    -2 dB      18 dBm    20
7       tilt                          20
8       CPICH    -1 dB      15 dBm    20
9       tilt                          20
10      CPICH    -0.5 dB    15 dBm    20
11      tilt     0.5                  20

Table E.1: Rule set 1 for Simulated Annealing.


rule    param    delta       limit     iter
0       CPICH    -5 dB       24 dBm    20
1       tilt                           20
2       CPICH    -4 dB       22 dBm    20
3       tilt                           20
4       CPICH    -3 dB       20 dBm    20
5       tilt                           20
6       CPICH    -2 dB       18 dBm    20
7       tilt                           20
8       CPICH    -1 dB       15 dBm    20
9       tilt                           20
10      CPICH    -0.5 dB     15 dBm    20
11      tilt     0.5                   20
12      CPICH    -0.25 dB    14 dBm    20
13      tilt     0.25        8.5       20

Table E.2: Rule set 2 for Simulated Annealing.

Appendix F
Rule Sets for Adaptive Rule
Based Approach
The four rule sets, which are used for the Adaptive Rule Based Approach (Section 8.2.3) are listed in the following four tables.
rule    param    delta     limit     iter
0       CPICH    3 dB      25 dBm    50
        tilt     1.5
1       CPICH    2 dB      15 dBm    50
        tilt
2       CPICH    1 dB      10 dBm    50
        tilt     0.5

Table F.1: Rule set 1 for Adaptive Rule Based Approach.


rule    param    delta     limit     iter
0       CPICH    3 dB      25 dBm    50
        tilt     1.5
1       CPICH    2 dB      20 dBm    50
        tilt
2       CPICH    1 dB      15 dBm    50
        tilt     0.5
3       CPICH    1 dB      10 dBm    50
        tilt     0.5

Table F.2: Rule set 2 for Adaptive Rule Based Approach.

rule    param    delta      limit     iter
0       CPICH    0.5 dB     25 dBm    50
        tilt     0.25
1       CPICH    3 dB       25 dBm    50
        tilt     1.5
2       CPICH    1 dB       20 dBm    50
        tilt     0.5
3       CPICH    3 dB       20 dBm    50
        tilt     1.5
4       CPICH    2 dB       15 dBm    50
        tilt
5       CPICH    3 dB       15 dBm    50
        tilt     1.5
6       CPICH    1 dB       10 dBm    50
        tilt     0.5
7       CPICH    3 dB       10 dBm    50
        tilt     1.5

Table F.3: Rule set 3 for Adaptive Rule Based Approach.


rule    param    delta      limit     iter
0       CPICH    3 dB       25 dBm    50
        tilt     1.5
1       CPICH    0.5 dB     25 dBm    50
        tilt     0.25
2       CPICH    3 dB       20 dBm    50
        tilt     1.5
3       CPICH    0.5 dB     20 dBm    50
        tilt     0.25
4       CPICH    3 dB       15 dBm    50
        tilt     1.5
5       CPICH    0.5 dB     15 dBm    50
        tilt     0.25
6       CPICH    3 dB       10 dBm    50
        tilt     1.5
7       CPICH    0.5 dB     10 dBm    50
        tilt     0.25

Table F.4: Rule set 4 for Adaptive Rule Based Approach.


Appendix G
Parameter File for Genetic
Algorithm
For the genetic algorithm an XML file is used to specify the most important
parameters. The file contains 3 elements: <ga>, <limit> and <init>. The
element <ga> specifies the main parameters of the algorithm. The search space is
defined by the element <limit>. The element <init> defines the start parameter
settings of the first 12 individuals.
In the element <ga> the attributes define the following:
pop: Number of individuals in the population.
cm: Selection pressure Cm.
local_num and local_iter: Parameters of the local optimization (Section 8.3.2.5).
reduce_iter and min_pop_size: Parameters of Reduced Population Size (Section 8.3.2.7).
add_gos: GoS where new users are admitted.
add_ever: Parameter of Adding Users as New Impulse (Section 8.3.2.6).
<parameter_file>
<ga
pop="100"
cm="5"
local_num="20"
local_iter="2"
reduce_iter="999"
min_pop_size="10"
add_gos="0.96"
add_ever="20" >
</ga>
<limit
cpich_upper="38"
cpich_lower="15"
tilt_upper="-2"
tilt_lower="10" >
</limit>
<init>
<param cpich="33" tilt="0" />
<param cpich="33" tilt="6" />
<param cpich="15" tilt="0" />
<param cpich="15" tilt="6" />
<param cpich="24" tilt="0" />
<param cpich="24" tilt="6" />
<param cpich="24" tilt="4" />
<param cpich="33" tilt="4" />
<param cpich="15" tilt="4" />
<param cpich="33" tilt="2" />
<param cpich="24" tilt="2" />
<param cpich="15" tilt="2" />
</init>
</parameter_file>
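A minimal sketch of reading such a parameter file with Python's standard library; the file name is an assumption made for this example.

import xml.etree.ElementTree as ET

tree = ET.parse("ga_parameters.xml")        # file name assumed
root = tree.getroot()

ga = root.find("ga").attrib                 # main GA parameters
limits = root.find("limit").attrib          # search-space bounds
init = [(p.get("cpich"), p.get("tilt"))     # start settings of the first individuals
        for p in root.find("init").findall("param")]

print(ga["pop"], limits["cpich_upper"], init[0])    # e.g. 100 38 ('33', '0')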

Appendix H
Flowcharts for Analytic
Optimization Algorithm
This appendix presents the flowcharts for the Analytic Optimization Algorithm
presented in Section 8.4.
Figure H.1 and Figure H.2 show the implementation of the azimuth adjustment
described in Section 8.4.1. The introduced optimization of the antenna downtilt
in Section 8.4.2 is presented by Figure H.3. Figure H.4 and Figure H.5 show the
implementation of the CPICH power adjustment from Section 8.4.3.
The full description of the implementation can be found in Chapter 6 of Wolfgang Karner's diploma thesis [63].


Figure H.1: Automatic azimuth adjustment routine for turning base stations to
critical spots.


Figure H.2: Automatic azimuth adjustment routine for interleaving base stations.


Figure H.3: Automatic tilt adjustment routine.


Figure H.4: Automatic CPICH adjustment routine for CPICH coverage verification mode 2, function: set CPICH coverage in total scenario.


Figure H.5: Automatic CPICH adjustment routine for CPICH coverage verification mode 2, function: set CPICH coverage in optimization area.

Appendix I
Simulation Parameters
The following tables summarize the relevant network parameters used in the simulator CAPESSO™.
Antenna type: Conventional
Antenna height: 30 m
Antenna sector: 65°
Antenna gain: 16 dBi
Antenna pattern: Kathrein 739707 (Figures 5.2 and 5.4)
Initial antenna downtilt: 0° mechanical (varied) + 3° predefined electrical (fixed)
Initial antenna azimuth: 0° (north) / 120° / 240°

Table I.1: Base station antenna parameters in simulator CAPESSO™.


Number of carriers:
Number of OVSF code trees:
Maximum base station transmit power: 43 dBm
Maximum code power: 40 dBm
Minimum code power: 15 dBm
Initial CPICH power: 33 dBm
Initial SCH power: 5 dBm
Initial PCH power: 5 dBm
Active set window: 3 dB
Active set size:
Transmitter loss: 0 dB
Base station noise figure: 5 dB
Number of RAKE fingers:
MRC efficiency:
DL TPC dynamic range: 65 dB
DL TPC headroom: 1 dB
UL TPC dynamic range: 25 dB
UL TPC headroom: 2 dB

Changed together with CPICH power. See Section 5.3 for details.

Table I.2: System parameters in simulator CAPESSO™.

Background noise floor: -107 dBm [82]
Path loss model: Macro cell [5], improved COST 259 channel model
Maximum delay of channel: 2 µs
Number of channel taps:
Azimuth of taps:
Downlink orthogonality factor: 0.4
σ of log normal large scale fading: 2 dB
of log normal large scale fading:
Path loss correlation: 0.5
ACIR: 5%
ISIR: 5%

A detailed description of the COST 259 model is given in Appendix B.

Table I.3: Channel parameters in simulator CAPESSO™.


Maximum mobile station transmit power: 21 dBm
Mobile station antenna gain: 0 dB
Body loss: 0 dB
Receiver noise figure: 0 dB
Receiver sensitivity: -120 dBm
Receiver CPICH threshold: -126 dBm

The receiver sensitivity is used in CAPESSO™ for setting the Ec/I0 threshold in CPICH coverage verification mode 2 (see Section 7.2.2.2). A receiver sensitivity of -120 dBm is equivalent to an Ec/I0 threshold of -12 dB.

Table I.4: Mobile station parameters in simulator CAPESSO™.


User distribution: Best Server Equal / Equal
Service mix: 40 % 12.2 kbit/s speech, 60 % 64 kbit/s data
Activity factor: 50 % speech, 100 % data

See Section 7.4 for a detailed description of the possible user distributions. During this thesis only the Best Server Equal distribution is used for the evaluation of the different optimization algorithms.

Table I.5: User parameters in simulator CAPESSO™.


Appendix J
Frequently Used Acronyms
2G          Second Generation
3G          Third Generation
3GPP        Third Generation Partnership Project
ACIR        Adjacent Channel Interference Ratio
ACTS        Advanced Communications Technologies and Services
AICH        Acquisition Indication Channel
AN          Access Network
AP-AICH     Access Preamble Acquisition Indication Channel
ARIB        Association of Radio Industries and Businesses
AuC         Authentication Center
AS          Active Set
BCH         Broadcast Channel
BER         Bit Error Rate
BLER        Block Error Rate
BSC         Base Station Controller
BSS         Base Station System
BTS         Base Transceiver Station
cdf         cumulative distribution function
CDMA        Code Division Multiple Access
cdma2000    3G CDMA standard in US
CD/CA-ICH   Collision-Detection/Channel-Assignment Indicator Channel
CF          Cooling Function
CN          Core Network
COST        European Cooperation in the field of Scientific and Technical research
CPCH        Common Packet Channel
CPICH       Common Pilot Channel
CS          Circuit Switched
CSICH       CPCH Status Indicator Channel
CWTS        China Wireless Communication Standard
DCH         Dedicated Channel
DECT        Digital Enhanced Cordless Telecommunications
DL          Downlink
DPCCH       Dedicated Physical Control Channel
DPCH        Dedicated Physical Channel
DPDCH       Dedicated Physical Data Channel
DSCH        Downlink Shared Channel
DS-CDMA     Direct Sequence CDMA
DTX         Discontinuous transmission mode
EA          Evolutionary Algorithms
EIR         Equipment Identity Register
EIRP        Equivalent Isotropic Radiated Power
ETSI        European Telecommunications Standardization Institute
FACH        Forward Access Channel
FDD         Frequency Division Duplex
FDMA        Frequency Division Multiple Access
FRAMES      Future Radio Wideband Multiple Access System
GA          Genetic Algorithm
GGSN        Gateway GPRS Support Node
GoS         Grade of Service
GPRS        General Packet Radio Service
GSM         Global System for Mobile communications
GUI         Graphical User Interface
HCS         Hierarchical Cell Structure
HF          Radio Frequency
HIMM        High Interactive Multimedia
HLR         Home Location Register
HMM         High Multimedia
HO          Handover
HIPERLAN    High Performance Local Area Network
HTML        Hypertext Markup Language
IF          Intermediate Frequency
IMT-2000    International Mobile Communications 2000
IP          Internet Protocol
ISIR        Inter System Interference Ratio
ITU         International Telecommunications Union
ITU-R       ITU Radiocommunication sector
LAN         Local Area Network
MDC         Macro Diversity Combining
ME          Mobile Equipment
MMM         Medium Multimedia
MRC         Maximum Ratio Combining
MS          Mobile Station

MSC         Mobile Switching Center
OFDMA       Orthogonal Frequency Division Multiplex Access
ODMA        Opportunity Driven Multiple Access
QAP         Quadratic Assignment Problem
QoS         Quality of Service
QPSK        Quadrature Phase Shift Keying
OVSF        Orthogonal Variable Spreading Factor
PCH         Paging Channel
PCPCH       Physical Common Packet Channel
PDSCH       Physical Downlink Shared Channel
PHY         Physical
PICH        Paging Indicator Channel
PLMN        Public Land Mobile Network
PN          Pseudo Noise
PRACH       Physical Random Access Channel
PS          Packet Switched
PSD         Power Spectrum Density
PSTN        Public Switched Telephone Network
P-CCPCH     Primary Common Control Physical Channel
P-CPICH     Primary CPICH
QF          Quality Factor
RACE        Research of Advanced Communications Technologies in Europe
RACH        Random Access Channel
RAN         Radio Access Network
RNC         Radio Network Controller
RNS         Radio Network System
RRC         Radio Resource Control
RRM         Radio Resource Management
R&D         Research and Development
S           Speech
SA          Simulated Annealing
SCH         Synchronization Channel
SD          Switched Data
SGSN        Serving GPRS Support Node
SHO         Soft Handover
SIM         Subscriber Identity Module
SIR         Signal to Interference Ratio
SM          Simple Messaging
SMS         Short Message Service
SNR         Signal to Noise Ratio
S-CCPCH     Secondary Common Control Physical Channel
S-CPICH     Secondary CPICH
T1P1        Technical Subcommittee in US: Wireless/Mobile Services and Systems

TCP/IP      Transmission Control Protocol / Internet Protocol
TDD         Time Division Duplex
TDMA        Time Division Multiple Access
TD-SCDMA    Time Division - Synchronized CDMA
TFI         Transport Format Indicator
TL          Tabu List
TPC         Transmit Power Control
TS          Tabu Search
TSG         Technical Specification Group
TSP         Traveling Salesman Problem
TTA         Telecommunications Technology Association
TTC         Telecommunication Technology Committee
TX          Transmit
UE          User Equipment
UL          Uplink
UMTS        Universal Mobile Telecommunications System
USIM        User Service Identity Module
UTRA        UMTS Terrestrial Radio Access
UTRAN       UTRA Network
VLR         Visitor Location Register
WARC        World Administrative Radio Conference
WLAN        Wireless LAN
WLL         Wireless Local Loop
W-CDMA      Wideband CDMA
W-TDMA      Wideband TDMA
WWW         World Wide Web
XML         Extensible Markup Language

Appendix K
Frequently Used Symbols

Parameter of Geometric Cooling function


Orthogonality factor of cell k
Parameter of Slow Cooling function
Scaling factor (relative maximum link powers) for different
base stations in the active set
Cm
Selection Pressure
cells
Number of cells in the network
Ec /I0 for the CPICH
CP ICHEc /I0
CP ICH EC /I0 thres Threshold for CP ICHEc /I0
E
Energy state
Eb /I0
Bit energy to interference plus noise density ratio
Eb /N0
Bit energy to noise density ratio
Ec
Average energy per pseudo noise chip
Ec /I0
Chip energy to interference plus noise density ratio
e(i)
Number of descendants for individual i
existing
Total number of simulated users
U L
Uplink loading
U L,threshold
Maximum cell load
f (i)
Scaled fitness value of individual i
g(i)
Fitness value of individual i
g
Mean value over all g(i)
gmax
Highest fitness that occurs in the population
gcov
Coverage probability of the pixels in the simulation area
GoS
Grade of Service
gSHO
SHO proportion
gSHO,cell (k)
Number of SHO links in cell k
gSHO,max
Maximum over all gSHO,cell (k)
i
Other-to-own cell interference ratio
I0
Total received power density, including signal and interference,

IACI
Ik
Ioth
Iown
Itot
k
KN
L
Lp
Lpi
Lpk
n
N0
NC
NM S
numBSs
pc
Pcommon
PCP ICH
Pi
pm
Pnoise
ps (i)
PT
PT,max
P TX
PT X
PT X,i
PT X,max
PT X,M S
PT X,n
QF
R
Rk
RSCPCP ICH
RSSI

APPENDIX K. FREQUENTLY USED SYMBOLS


as measured at the mobile station antenna connector
Adjacent channel interference
Total wideband power received at the mobile station from
base station k
Inter-cell interference
Intra-cell interference
Total wideband interference power received at the mobile
station
Boltzmanns constant
Number of mobiles connected to base station N
Interference floor
Propagation loss between the mobile and the base station
Link loss to base station i
Link loss from base station k to the mobile station
Population size
Background noise
Number of clusters
Background noise level at the mobile station
Number of base stations in network
Recombination probability
Overall transmit power of the common channels
CPICH power of best server
Total power of ith cluster
Mutation probability
Noise power at 20 C, which is -108,09 dBm
Selection probability for individual i
Total required cells transmit power
Maximum cell power
Average code power
Required transmit power for a certain link
Total transmit power of base station i
Maximum base station transmit power capability
Mobiles transmit power
Required code power for the connected user n
Quality Factor
Bit rate
Bit rate of user k
Received power of the CPICH measured at the mobile station
Wideband received power within the relevant channel
bandwidth in the downlink
Uplink Eb /N0 requirement
Uplink Eb /N0 requirement of user k

S
served
servedk
,i
T
Tc
TC
i

Receiver Sensitivity
Total number of served users
Number of served users of cell k
Cluster delay spread
Temperature
Chip duration
Cooling temperature
Delay of the first path
Service activity
W-CDMA chip rate


Appendix L
Curriculum Vitae

Personal Data
Name:

Alexander Gerdenitsch

Address:

Berggasse 1, 7022 Loipersbach

Birthday:

21.10.1976, Eisenstadt

Family status:

Unmarried

Education
1983-1987

Primary school, 2500 Baden.

1987-1991

Comprehensive secondary school, 2500 Baden.

1991-1996

Federal Secondary College of Engineering Wiener Neustadt,


Department of Mechanical Engineering and Automation,
School leaving examination in June 1996.

1996-2001

Studies of Mechatronics with focus on Communications and




Information Engineering at Johannes Kepler University
Linz to achieve the Diplom-Ingenieur (Master degree).

2002-2004

PhD studies of Electrical and Electronical Engineering at


the Institute of Communications and Radio-Frequency
Engineering.

Military service
04/2001 - 11/2001

Completion of the military service in the Austrian army.

Career
12/2001 - 6/2002

Technical employee in the software department of the


company Landis & Gyr, 7000 Eisenstadt.

Since 7/2002

Research engineer at the Institute of Communications


and Radio-Frequency Engineering, Gußhausstraße 25/389,
1040 Wien.

Publications
S. Jakl, A. Gerdenitsch, W. Karner, M. Toeltsch, An Approach for the
Initial Adjustment of Antenna Azimuth and Other Parameters in UMTS
Networks, Proc. 13th IST Mobile & Wireless Communications Summit
2004, June 2004, Lyon, France.
A. Gerdenitsch, S. Jakl, W. Karner, M. Toeltsch, Influence of Antenna
Azimuth in Non-Regular UMTS Networks, Proc. 5th World Wireless
Congress, San Francisco, US, 2004.
A. Gerdenitsch, S. Jakl, M. Toeltsch, The Use of Genetic Algorithms for
Capacity Optimization in UMTS FDD Networks, Proc. 3rd International
Conference on Networking ICN04, vol. 1, pp. 293-298, ISBN: 0-86341326-9, Guadeloupe, French Caribbean, 2004.

A. Gerdenitsch, S. Jakl, Y.Y. Chong, M. Toeltsch, An Adaptive Algorithm
for CPICH and Antenna Tilt Optimization in UMTS FDD Networks, Proc.
8th International Conference on Cellular and Intelligent Communications
(CIC), p. 378, ISBN: 89-5519-118-9-98560, October 2003, Seoul, Korea.
A. Gerdenitsch, S. Jakl, M. Toeltsch, T. Neubauer, Intelligent Algorithms
for System Capacity Optimization of UMTS FDD Networks, Proc. IEE
4th International Conference on 3G Mobile Communication Technologies,
pp. 222-226, ISBN: 0-85296 756-X, June 2003, London.
A. Springer, A. Gerdenitsch, Z. Li, A. Stelzer, R. Weigel, Adaptive Predistortion for Amplifier Linearization for UMTS Terminals, Proc. IEEE 7th
International Symposium on Spread-Spectrum Techniques and Applications,
pp. 78-82, ISBN: 0-7803-7627-7, Sept. 2002, Prague.
A. Springer, A. Gerdenitsch, R. Weigel, Digital Predistortion-Based Power
Amplifier Linearization for UMTS, Proc. European Conference on Wireless Technology (ECWT2001), pp. 185-189, ISBN: 0 86213 163 4, Sept.
2001, London.
Alexander Gerdenitsch, Digitale Vorverzerrung zur Linearisierung von Leistungsverstärkern für UMTS, Master Thesis, June 2001.


Bibliography
[1] 3GPP, Network architecture, TS23.002, v3.6.0, September 2002,
http://www.3gpp.org.
[2] 3GPP, Requirements for support of radio resource management (FDD),
TS25.133, v6.0.0, September 2002, http://www.3gpp.org.
[3] 3GPP, Physical channels and mapping of transport channels onto
physical channels (FDD), TS25.211, v3.12.0, September 2002,
http://www.3gpp.org.
[4] 3GPP, Physical layer - Measurements (FDD), TS25.215, v4.7.0, June 2003, http://www.3gpp.org.
[5] 3GPP, RF system scenarios, TS25.942, v3.3.0, June 2002, http://www.3gpp.org.
[6] 3GPP, Terminal conformance specification; radio transmission and reception (FDD), TS34.121, v3.10.0, September 2002, http://www.3gpp.org.
[7] 3GPP, Deployment aspects, TR25.943, v5.1.0, June 2002, http://www.3gpp.org.

[8] E. Aarts and J. K. Lenstra, eds., Local search in combinatorial optimization,


John Wiley & Sons, Ltd., 1997.
[9] E. Aarts and J. H. M. Korst, Simulated Annealing and Boltzmann Machines, John Wiley & Sons, Ltd., 1989.
[10] ACTS, Advanced Communication Technologies and Services, 2000,
http://www.infowin.org/ACTS.
[11] S. M. Allen, S. Hurley, R. K. Taplin and R.M. Whitaker, Automatic cell
planning of broadband fixed wireless networks, Proceedings of the IEEE
VTC Conference, pp. 2808-2812, Rhodes, Greece, May 2001.

[12] E. Amaldi, A. Capone and F. Malucelli, Improved models and algorithms


for UMTS radio planning, Proceedings of 54th IEEE Vehicular Technology
Conference, VTC 2001-Fall, IEEE VTS 54th , vol. 2 , pp. 920 -924, October
2001.
[13] T. Back, D. Fogel and Z. Michalewicz, Handbook of Evolutionary Computation, Oxford University Press, 1997.
[14] R. Battiti and A. Bertossi, Greedy, prohibition, and reactive heuristics for
graph-partitioning, Proceedings of IEEE Trancactions on Computers, vol.
48, pp. 361-385, 1999.
[15] R. Battiti, A. Bertossi and D. Cavallaro, A randomized saturation degree
heuristic for channel assignment in cellular radio networks, Proceedings
IEEE Trancactions on Vehicular Technology, vol. 50, pp. 364-374, 2001.
[16] T. Baumgartner, Smart Antenna Strategies for the UMTS FDD Downlink,
PhD Thesis, Technische Universitat Wien, Austria, August 2003.
[17] N. Binucci, K. Hiltunen and M. Caselli, Soft handover gain in WCDMA,
Proceedings IEEE 52nd Conference on Vehicular Technology, vol 3., pp.
1467-1472, Sept. 2000.
[18] P. E. Black, Dictionary of Algorithms and Data Structures, 2003,
http://www.nist.gov/dads.
[19] E. Bonek, Mobilkommunikation, Lecture notes (in German), Technische
Universitat Wien , Austria, 2001.
[20] S. C. Bundy, Antenna Downtilt Effects on CDMA Cell-Site Capacity,
Proceedings of the Radio and Wireless Conference, RAWCON 99,
pp. 99-102, August 1-4, 1999.
[21] P. Calegari, F. Guidec, P. Kuonen and D. Wagner, Genetic approach to
radio network optimization for mobile systems, Proceedings IEEE 47th
Conference on Vehicular Technology, vol 2., pp. 755-759, 1997.
[22] P. Chaudhury, W. Mohr and S. Onoe, The 3GPP proposal for IMT-2000,
IEEE Communications Magazine, pp. 72-81, December 1999.
[23] Y. Y. Chong, Local Algorithm for UMTS Radio Network Capacity Optimization, Master Thesis, Helsinki University of Technology, June 2003.
[24] A. Colorni, M. Dorigo and V. Maniezzo, Positive feedback as a search
strategy, Working paper 91-16, Department of Electronics, Politecnico di
Milano, Italy, 1991.


[25] A. Colorni, M. Dorigo and V. Maniezzo, Distributed optimization by ant


colonies, Proceedings of the First European Conference on Artificial Life
(ECAL-91), pp. 134-142, 1991.
[26] A. Colorni, M. Dorigo and V. Maniezzo, An investigation of some properties of an ant algorithm, Parallel Problem Solving from Nature 2, R.
Manner and B. Manderick, eds., pp. 509-520, North-Holland: Amsterdam,
1992.
[27] L. M. Correia, Wireless Flexible Personalized Communications - COST
259: European Co-Operation in Mobile Radio Research, John Wiley & Sons,
Ltd., 2001.
[28] L. M. Correia, Digital Mobile Radio towards Future Generation Systems,
COST 231 Final Report, 1999, http://www.lx.it.pt/cost231.
[29] E. Dahlmann, B. Gudmundson, M. Nilsson and J. Skold, UMTS/IMT2000 based on Wideband CDMA, IEEE Communications Magazine, vol.
1, pp. 70-80, September 1998.
[30] S. Dehghan, D. Lister, R. Owen and P. Jones, W-CDMA capacity and
planning issues, IEE Electronics and Communication Engineering Journal, pp. 108-118, June 2000.
[31] K. A. Jong, An Analysis of Behavior of a Class of Genetic Adaptive Systems, PhD Thesis, University of Michigan, 1975.
[32] G. Di Caro and M. Dorigo, AntNet: Distributed Stigmergetic Control
for Communications Networks, Journal of Artificial Intelligence Research
(JAIR), vol. 9, pp. 317-365, 1998.
[33] D. Dorner, Ant Colony Optimization, Lecture notes
http://ads.tuwien.ac.at/teaching/ws02/EvolAlg/ACO-Teil1.pdf, 2002.
[34] M. Dorigo, V. Maniezzo and A. Colorni, The ant system: optimization by
a colony of cooperating agents, IEEE Transactions on Systems, Man and
Cybernetics, 26B, pp. 29-41, 1996.
[35] Durlacher, UMTS report - an investment perspective, March 2001,
www.durlacher.com.
[36] M. Failli, COST 207 - Digital Land Mobile Radio Communications, Final
Report, Office for Official Publications of the European Communities,
Brussels, Luxembourg, 1989.
[37] ETSI SMG 24, Concept group alpha W-CDMA: System description summary, 1997, http://etsi.org.


[38] ETSI SMG 24, Summary of concept description of the beta concept,
1997, http://etsi.org.
[39] ETSI SMG 24, Concept group W-TDMA: System description summary,
1997, http://etsi.org.
[40] ETSI SMG 24, Concept group delta W-TD/CDMA: System description
summary, 1997, http://etsi.org.
[41] ETSI SMG 24, Concept group epsilon ODMA: System description summary, 1997, http://etsi.org.
[42] ETSI Press Release, SMG Tdoc 40/98, Agreement Reached on Radio Interface for Third Generation Mobile System, UMTS, Paris, France, January 1998.
[43] C. A. Floudas and P. M. Pardalos, eds., State of the art in global optimization, Kluwer, Dordrecht, 1996.
[44] I. Forkel, A. Kemper, R. Pabst, R. Hermans, The Effect of Electrical
and Mechanical Antenna Down-Tilting in UMTS Networks, Proceedings
of 3rd International Conference on 3G Mobile Communication Technologies,
pp. 86-90, London, Great Britain, May 8-10, 2002.
[45] J. Fuhl, Smart Antennas for Second and Third Generation Mobile Communications Systems, PhD Thesis, Technische Universitat Wien, Austria,
1997.
[46] L. M. Gambardella and M. Dorigo, Solving symmetric and asymmetric
TSPs by ant colonies, Proceedings IEEE Conference on Evolutionary Computation (ICEC96), pp. 622-627, 1996.
[47] M. A. C. Garcia, Analysis of Multi-Service Traffic in UMTS FDD mode
Networks, IST, Technical University of Lisbon, Portugal, May 2000.
[48] A. Gerdenitsch, Digitale Vorverzerrung zur Linearisierung von Leistungsverstärkern für UMTS, Master Thesis (in German), University Linz,
Austria, June 2001.
[49] F. Glover and M. Laguna, Tabu Search, Kluwer Academic Publishers, Dordrecht, 1997.
[50] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine
Learning, Addison-Wesley, MA, 1989.


[51] M. Gorges-Schleuter, ASPARAGOS: An Asynchronous Parallel Genetic


Optimization Strategy, Proceedings of 3rd International Conference on Genetic Algorithms, pp. 422-427, Morgan Kaufmann Publishers, San Mateo,
CA, 1989.
[52] J. K. Han, B S. Park, Y S. Choi and K. Park, Genetic Approach with
a New Representation for Base Station Placement in Mobile Comunnications, Proceedings of 54th IEEE Vehicular Technology Conference, VTC
2001-Fall, vol. 4 , pp. 2703 -2707, October 2001.
[53] J. H. Holland, Adaption in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, 1975.
[54] H. Holma and A. Toskala, eds., WCDMA for UMTS Radio Access for Third
Generation Mobile Communications, John Wiley & Sons, Ltd., 2nd ed.,
2002.
[55] X. Huang, U. Behr and W. Wiesbeck, Automatic cell planning for lowcost and spectrum efficient wireless networks, Proceedings IEEE Gobal
Telecommunications Conference, GLOBECOM 00, vol 1., pp. 276-282,
2000.
[56] S. Hurley, Planning effective cellular mobile radio networks, Proceedings
of IEEE Transactions on Vehicular Technology, vol. 51, pp. 243-253, March
2002.
[57] Ant Colony Optimization Home Page:
http://iridia.ulb.ac.be/mdorigo/ACO.
[58] S. Irons, C. Johnson, A. King and D. McFarlane, Supporting the successful deployment of third generation public cellular technologies - system
dimensioning and network planning, Proc. of 1st IEE 3G Mobile Communications Technologies Conference, pp. 146-160, London, March 2000.
[59] W. C. Jakes, Microwave Mobile Communications, John Wiley & Sons, Ltd.,
1st ed., 1974.
[60] Jin Yang and Jinsong Lin, Optimization of power management in a CDMA
radio network, Proceedings of 52th IEEE Vehicular Technology Conference,
VTC 2000-Fall, vol. 6, pp. 2642-2647, September 24-28, 2000.
[61] J. M. Johnson and Y. Rahmat-Samii, Genetic Algorithms in Engineering
Electromagnetics, IEEE Antenna and Propagation Magazine, vol. 39, no.
4, pp. 7-21, August 1997.


[62] P. Jones, and R. Owen, Sensitivity of UMTS FDD system capacity and
coverage to model parameters, Proc. of 1st IEE 3G Mobile Communications Technologies Conference, pp. 224-229, London, March 2000.
[63] W. Karner, Optimum Default Base Station Parameter Settings for UMTS
Networks, Master Thesis, Technische Universitat Wien, September 2003.
[64] Kathrein, 790-2200 MHz Base Station Antennas for Mobile Communications, 2001, Catalogue.
[65] D. Kim, Y. Chang and J. W. Lee, Pilot Power Control and Service Coverage Support in CDMA Mobile Systems, Proceedings of 49th IEEE Vehicular Technology Conference, VTC 1999-Spring, vol. 4, pp. 2238-2242,
Houston, TX, May, 1999.
[66] A. Klose, Simulated Annealing, Lecture notes (in German), 2002,
http://troubadix.unisg.ch/klose.
[67] A. Klose, Tabu-Suche, Lecture notes (in German), 2002,
http://troubadix.unisg.ch/klose.
[68] J. Laiho, A. Wacker and T. Novosad, eds., Radio Network Planning and
Optimization for UMTS, John Wiley & Sons, Ltd., 2002.
[69] J. Laiho-Steffens, A. Wacker and P. Aikio, The impact of the radio network planning and site configuration on the WCDMA network capacity and
quality of service, Proceedings of 51th IEEE Vehicular Technology Conference, VTC 2000-Spring, vol. 2, pp. 1006-1010, Tokyo, Japan, May 15-18,
2000.
[70] I. Laki, L. Farkas and L. Nagy, Cell planning in mobile communication
systems using sga optimization, Proceedings of International Conference
on Trends in Communications, vol. 1, pp. 124-127, 2001.
[71] C. Y. Lee and H. G. Kang, Cell planning with capacity expansion in
mobile communications: A tabu search approach, IEEE Transactions on
Vehicular Technology, vol. 49, pp. 1678-1691, March 2000.
[72] R. T. Love, K. A. Beshir, D. Schaeffer and R. S. Nikides, A Pilot Optimization Technique for CDMA Cellular System, Proceedings of 50th IEEE
Vehicular Technology Conference, VTC 1999-Fall, vol. 4, pp. 2238-2242,
1999.
[73] R. M. Mathar and T. Niessen. Optimum positioning of base stations for
cellular radio networks, Wireless Networks, vol. 6, pp. 421-428, 2000.


[74] C. Mecklenbrauker, Transmission Power Control, Lecture notes,


http://userver.ftw.at, 2002.
[75] CDMA solutions seminar series, Smart Cell Site Optimization,
Metawave, http://metawave.com, 2003.
[76] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, Berlin, Heidelberg, New York, 3 ed., 1999.
[77] Z. Michalewicz and D. B. Fogel, How to Solve It: Modern Heuristics,
Springer-Verlag, Berlin, Heidelberg, New York, 2000.
[78] A. Molina, G. E. Athanasiadou and A. R. Nix, The automatic location
of base-stations for optimized cellular coverage: a new combinatorial approach, Proceedings of 49th IEEE Vehicular Technology Conference, vol. 1,
pp. 606-610, May 1999.
[79] H. Mühlenbein, Parallel Genetic Algorithms, Population Genetics and
Combinatorial Optimization, Proceedings of 3rd International Conference
on Genetic Algorithms, pp. 416-421, Morgan Kaufmann Publishers, San
Mateo, CA, 1989.
[80] M. J. Nawrocki and T. W. Wieckowski, Optimal Site and Antenna Location for UMTS Output Results of 3G Network Simulation Software,
Journal of Telecommunications and Information Technology, 1/2003, National Institute of Telecommunications, Warsaw, 2003.
[81] T. Neubauer, T. Baumgartner and E. Bonek, Necessary and sufficient
network size for pole capacity estimation in UMTS FDD, Proc. ECWT
2000, pp. 131-134, October 2000.
[82] T. Neubauer, H. Jager, J. Fuhl and E. Bonek, Measurement of background
noise floor in the UMTS FDD uplink band, Proc. 4th European Personal
Mobile Communications Conference, EPMCC, February 2001.
[83] J. Nocedal and S. J. Wright, Numerical Optimization, Springer-Verlag,
Berlin, Heidelberg, New York, 1999.
[84] T. Ojanpera and R. Prasad, An overview of air interface multiple access for
IMT-2000/UMTS, IEEE Communications Magazine, Sept. 1998, vol. 1,
pp. 82-95, 1998.
[85] C. H. Papadimitriou, K. Steiglitz, Combinatorial Optimization: Algorithms
and Complexity, Prentice-Hall, Englewood Cliffs, New York, 1982.
[86] P. M. Pardalos and M. G. C. Resende, Handbook of Applied Optimization,
Oxford University Press, 2002.


[87] K. I. Pederson, Antenna Arrays in Mobile Communications, PhD Thesis,


Center for PersonKommunikation, Aalborg University, Danmark, 2000.
[88] S. Plimmer, M. Feeney, D. Barker and T. Normann, Adjusting antenna
downtilt boosts UMTS optimization, Wireless Europe, pp. 34-35, November 2002.
[89] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Numerical Recipes in C++: The Art of Scientific Comupting, Cambridge University Press, 2nd ed., 2002.
[90] J. G. Proakis, Digital communications, McGraw Hill Book Comp. Inc., 3rd
ed., 1995.
[91] J. X. Qiu and J. W. Mark, A Dynamic Load Sharing Algorithm Through
Power Control in Cellular CDMA, Proceedings of 9th IEEE International
Symposium on Personal, Indoor and Mobile Radio Communications, vol. 3,
pp. 1280-1284, September 8-11, 1998.
[92] G. Raidl, Evolutionare Algorithmen, Lecture notes (in German),
http://ads.tuwien.ac.at/teaching/ws02/EvolAlg/folien2002.pdf, 2002.
[93] C. B. Reeves, ed., Modern Heuristic Techniques for Combinatorial Problems, Blackwell, Cambridge MA, 1993.
[94] R. Schoonderwoerd, O. Holland and J. Bruten, Ant-like Agents for Load
Balancing in Telecommunications Networks, Proceedings of Agents97,
Marina del Rey, CA, ACM Press, pp. 209-216, 1997.
[95] P. Sehier, J. M. Gabriagues and A. Urie, Standardization of 3G mobile
systems, Alcatel Telecommunications Review 2001, vol. 1, pp. 11-18, 2001.
[96] A. J. Serrador, Optimisation of Cell Radius in UMTS-FDD Networks,
Master Thesis, Technical University of Lisbon, Portugal, December 2002.
[97] A. Springer, Mobilfunktechnik, Lecture notes (in German), Johannes
Kepler Universitat Linz, Austria, 2000.
[98] M. Steinbauer, The Radio Propagation Channel - A Non-Directional, directional and Double-Directional Point-of-View, PhD Thesis, Technische
Universitat Wien, Nov. 2001.
[99] K. Tutschku, Interference minimization using automatic design of cellular
communication networks, Proceedings of the IEEE VTC 98 Conference,
pp. 634-638, 1998.


[100] UMTS Forum, Minimum spectrum demand per public terrestrial UMTS
operator in the initial phase, UMTS Report No. 5, September 1998,
http://www.umts-forum.org.
[101] UMTS Forum, UMTS/IMT-2000 Spectrum, UMTS Report No. 6, December 1998, http://www.umts-forum.org.
[102] UMTS Forum, The future mobile market, UMTS Report No. 8, March
1999, http://www.umts-forum.org.
[103] K. Valkealathi, A. Hoglund, J. Parkkinen and A. Hamalainen, WCDMA
Common Pilot Power Control for Load and Coverage Balancing, Proceedings of 13th IEEE International Symposium on Personal, Indoor and Mobile
Radio Communications, vol. 3, pp. 1412-1416, 2002.
[104] B. N. Vejlgaard, Data Receiver for the Universal Mobile Telecommunications System (UMTS), PhD Thesis, Aalborg University Danmark, 2001.
[105] A. M. Viterbi and A. J. Viterbi, Erlang capacity of a power controlled
CDMA system, IEEE Journal on Selected Areas in Communications,
vol. 11, pp. 892-900, Aug. 1993.
[106] R. M. Whitaker and S. Hurley, Evolution of Planning for Wireless Communication Systems, Proceedings of the 36th Hawaii International Conference
on System Sciences (HICSS03), pp. 295-305, 2003.
[107] D. Whitley, GENITOR II: A Distributed Genetic Algorithm, Journal of
Experimental and Theoretical Artificial Intelligence, vol. 2, pp. 189-214.
[108] J. H. Winters, Optimum combining in digital mobile radio with cochannel interference, IEEE Journal on Selected Areas in Communication,
vol. SAC-2, pp. 528-539, 1984.
